Office of Planning, Research & Evaluation (OPRE)

CHAPTER 5 - EXTENT OF IMPLEMENTATION

In this chapter:

  • Description of Rating Scales
  • Extent of Implementation (Ratings)
    - Family Involvement
    - Education
    - Social Services
    - Health
  • Comparison of NRCT and Self-Assessment Ratings

This chapter summarizes the overall extent of implementation for each of the components of the Transition Demonstration Program -- Social Services, Family Involvement, Education, and Health -- from two distinct but complementary perspectives: the ratings of program implementation completed by the National Research Coordinating Team (NRCT) and the self-assessment ratings provided by the local sites via the Program Implementation Profile. Ratings from each source are presented separately and then compared for each of the four components.

DESCRIPTION OF RATING SCALES

The level of implementation is important because (1) it reflects the extent to which programs were successful in achieving program goals and (2) it may affect ultimate outcomes. National Research Coordinating Team ratings are available for each of the 31 sites as a whole and in each of the component areas. These ratings use a 6-point scale for Family Involvement, Social Services, and Health and a 5-point scale for Education (for which the highest possible rating is 5), with values arrayed as follows:

1. Minimal or no evidence of activity
2. Program supports traditional school-based activity but does little to extend or expand on school programs
3. Program provides enhanced services in some areas
4. Program provides multiple, innovative activities but with some unevenness in availability
5-6. Program provides multiple, innovative activities with consistent availability to all participant groups

The ratings for each site are presented in Table 13.

Table 13.
Level of Program Implementation by Local Sites,
Based on National Research Coordinating Team Ratings*

Site   Social Services   Family Involvement   Health   Education   Overall Score
N-1          3                   1               2         1             7
N-2          2                   2               2         2             8
N-3          2                   3               2         1             8
N-4          3                   2               2         2             9
N-5          3                   2               2         2             9
N-6          2                   2               2         4            10
N-7          3                   3               2         3            11
N-8          3                   4               4         3            11
N-9          2                   4               3         3            12
N-10         4                   2               3         4            13
N-11         4                   3               4         3            14
N-12         3                   4               5         3            15
N-13         5                   4               3         3            15
N-14         4                   3               4         4            15
N-15         4                   4               4         3            15
N-16         4                   3               5         3            15
N-17         3                   4               4         4            15
N-18         4                   4               4         3            15
N-19         4                   5               4         3            16
N-20         4                   4               4         4            16
N-21         5                   4               5         2            16
N-22         4                   5               4         4            17
N-23         5                   4               5         3            17
N-24         6                   3               5         4            18
N-25         5                   5               5         4            19
N-26         6                   6               4         3            19
N-27         5                   6               5         4            20
N-28         4                   6               6         4            20
N-29         6                   4               6         4            20
N-30         6                   5               5         5            21
N-31         5                   6               6         5            22

 

* Scale of 1 (low) to 6 (high) for Social Services, Family Involvement, and Health; 1 (low) to 5 (high) for Education
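For nearly every site in Table 13, the Overall Score is simply the sum of the four component ratings (the one exception, N-8, has components that sum to 14 against a printed overall score of 11). A minimal sketch of that check, with values transcribed from the first three rows of the table:

```python
# Check that the Overall Score in Table 13 equals the sum of the four
# component ratings (Social Services, Family Involvement, Health, Education).
# Values are transcribed from the first three rows of the table.
table_13 = {
    "N-1": {"components": (3, 1, 2, 1), "overall": 7},
    "N-2": {"components": (2, 2, 2, 2), "overall": 8},
    "N-3": {"components": (2, 3, 2, 1), "overall": 8},
}
for site, row in table_13.items():
    assert sum(row["components"]) == row["overall"], site
print("Overall scores match component sums for the sampled rows.")
```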

In the final year of the program, self-assessments were completed at each site by committees broadly representing the participants in and constituents of the Transition Demonstration Programs. Project directors facilitated the process of completion but were not to directly influence committee ratings. In nearly every site, the Governing Board reviewed the full document and approved it for submission to the National Research Coordinating Team.

The self-assessment ratings within the Program Implementation Profile use a 10-point scale, with possible values ranging from 1 (low) to 10 (high). The scale is anchored by only three descriptors:

  • 1 = Limited implementation
  • 5 = Moderate implementation
  • 10 = High degree of implementation

The sites' self-assessment ratings are available for 25 sites (the remaining sites did not submit Program Implementation Profiles) and for each of the four component areas, as well as for three additional areas of interest: Leadership, Continuity with Head Start, and Readiness to Change. Ratings were made of (1) the degree of implementation at the beginning of the project (1992-93), (2) the degree of implementation at the end of the project (1996-97), and (3) the amount of variation in implementation across the schools participating within each local Transition program. Self-assessment ratings for each of the four components (beginning and end of implementation) are presented in Table 14.

It is interesting to note that only two sites rated themselves as being uniformly low in implementation in the first year, and only two sites rated themselves as being uniformly high. The two sites indicating low implementation in the first year showed substantial gains in implementation by the ending ratings. Generally, sites indicated uniformly moderate implementation across the four components, a finding that is consistent with information obtained during early site visits. Three sites indicated generally low implementation with one component showing some strength in the first year, while eight other sites indicated generally moderate to above average implementation with one component showing significant weakness in the first year. In these sites, there were often deliberate decisions made to concentrate program resources and emphasis on a year-by-year basis, rather than attempt an extensive implementation of all four components at once. Thus, it appears clear that site self-ratings reflect the varying approaches taken to implementation as well as the variation in level of self-perceived success in implementation.

Table 14.
Local Sites' Self-Reports of Level of Program Implementation*

       Social Services   Family Involvement      Health         Education
Site    Begin    End       Begin     End      Begin    End    Begin    End
P-1      1.0     9.0        1.0      9.0       1.0     9.0     1.0     7.0
P-2      2.0     9.0        2.0      9.5       2.0     9.0     2.0     9.0
P-3      2.0     9.0        1.0      9.0       2.0     9.0     7.0     9.0
P-4      2.0     8.0        1.0      4.5       **      **      6.0     8.0
P-5      2.5     8.5        3.5      9.5       8.0     9.5     3.5     7.5
P-6      4.5     9.5        4.0      7.5       3.0     6.0     2.0     6.0
P-7      5.0     7.0        6.0      8.0       5.0     7.0     6.0     6.0
P-8      5.0     9.5        2.0      7.0       5.0     7.0     5.0     7.0
P-9      5.0     9.0        9.0      9.0       5.0     9.0     4.0     9.0
P-10     5.0    10.0        6.5     10.0       1.0     8.0     3.0    10.0
P-11     5.0     9.0        0.5      7.0       2.0     8.0     8.0     8.5
P-12     5.0     9.0       10.0      8.0       2.5     9.0     8.0    10.0
P-13     5.0     9.0        3.0      7.0       8.0     9.0     6.0     9.0
P-14     5.5     8.5        1.0      8.0       3.5     8.0     1.5     8.0
P-15     5.5     5.5        6.0      9.0       2.5     8.5     8.5     7.5
P-16     5.5     7.0        8.0      7.5       **      7.5     1.5     6.0
P-17     6.5     9.5        3.5      7.5       9.5     9.5     9.5     9.5
P-18     7.0     9.5        7.5      3.5       **      6.5     4.0     9.0
P-19     7.0     9.0        3.5      7.0       3.5     7.5     1.0     6.5
P-20     7.0    10.0        6.0      9.0       5.0     2.0     0.0     5.5
P-21     8.0     8.0        5.0      9.0       5.5     8.0     5.5     9.0
P-22     8.0     8.0        8.0      9.0       8.0     8.0     4.0     5.0
P-23     9.0     2.0        8.5      9.0       7.5     5.0     8.0     9.0
P-24     9.0     9.0        4.0      8.0       9.0     9.0     7.5     6.5
P-25     9.5     9.5        9.5      8.5       7.5     7.5     6.5     6.0
P-26     9.5     9.5        9.5     10.0       9.5     9.5    10.0     5.0

* Scale of 1 (low) to 10 (high) at the beginning (1992-93) and end (1996-97) of the Transition Demonstration Program
** No rating provided

FAMILY INVOLVEMENT

NRCT Ratings of Family Involvement

 

Figure 5.1 Implementation of Family Involvement Activities

 

Figure 5.1 presents the ratings of implementation of the family involvement component that were completed by the National Research Coordinating Team. Eleven sites achieved ratings of 5 or 6, indicating evidence of multiple and innovative activities to stimulate and encourage family involvement in education. Activities were offered at diverse times, and parents were included in the planning. Sites achieving the highest ratings provided highly visible and individualized activities that were consistently available to all participant groups within the site, were offered frequently, and included parents in all aspects of the planning, modification, and implementation of activities. More than half of the sites (54%) achieved moderate ratings of 3 or 4. These sites were found to have accomplished, with moderate to vigorous effort, tasks such as: (1) establishing parent resource centers; (2) providing educational activities for use at home and supporting families in the completion of those activities; (3) reducing barriers to family participation in school-based activities; and (4) planning activities based on surveys of family interests and needs. Six sites were given ratings of 1 or 2, indicating minimal implementation in the area of family involvement. The implementation efforts in these sites were limited to the support of existing efforts on the part of the schools and did not add substantial intensity to the efforts to involve families in education.

Self-Assessment Ratings of Family Involvement

 

Figure 5.2 Change in Degree of Implementation of Family Involvement Component on Program Implementation Profile (Self-Ratings)

 

Figure 5.2 summarizes the self-ratings provided by sites on the Program Implementation Profile concerning the degree of implementation of the family involvement component at the beginning of the implementation and at the end. Overall, the average rating of this component at the beginning (1992-93) was 4.8, indicating a moderate degree of implementation. There was, however, a significant amount of variation in the ratings that sites gave themselves. Seven sites (29%) gave themselves a rating of 2 or less, indicating very limited implementation in the initial year, and seven sites gave themselves high ratings for initial implementation. In contrast, there was little variability among sites in ratings of implementation at the end of the project period. The average rating was 8.0, indicating a very high degree of implementation. Only two sites gave themselves a rating of less than 7 at the end.

Comparing the ratings given by individual sites of beginning and ending implementation levels, it is evident that some sites perceived large amounts of change in implementation over time. Six sites had differences between beginning and ending ratings of 6 points or more. Three sites showed a decline in ratings of 2 to 4 points. Only one site did not indicate any notable change over time in the implementation of family involvement, in part because the site rated itself at the very highest levels of implementation at both times (9.5 and 10.0).
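The beginning-to-end comparisons above are simple differences on the 10-point Program Implementation Profile scale. A minimal sketch, using family involvement ratings transcribed from Table 14:

```python
# Change score = end-of-project rating minus beginning rating (10-point scale).
# Beginning/end family involvement ratings transcribed from Table 14.
family_involvement = {
    "P-1": (1.0, 9.0),    # (beginning, end)
    "P-12": (10.0, 8.0),
    "P-26": (9.5, 10.0),
}
changes = {site: end - begin for site, (begin, end) in family_involvement.items()}
assert changes["P-1"] == 8.0    # a difference of 6+ points counts as a large change
assert changes["P-12"] == -2.0  # a decline in the self-rating
assert changes["P-26"] == 0.5   # essentially no change
```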

Asked to rate the degree of variability within their site, half of the sites indicated that there was at least moderate variability across schools within their programs in the level of implementation of the family involvement component. Of those 16 sites, nine indicated that there was a substantial amount of intra-site variability by giving a rating of 8 or higher. These ratings most likely reflect a deliberate decision to individualize services at the district or building level, but they may also reflect differences in the school populations and neighborhoods as well as the differences in staff within the various schools. Eight sites (26%) showed very low variability ratings (2 or less), indicating a perception of consistency in implementation across the schools.

EDUCATION

NRCT Ratings of Education

 

Figure 5.3 Implementation of Education Activities

 

The ratings of the implementation of the education component by the National Research Coordinating Team are summarized in Figure 5.3. The large majority of sites (23 of 31, or 74%) obtained a rating of 3 or 4, indicating moderate to moderately high implementation of the education component. These sites were found to: work with teachers to identify staff development needs and provide training based on that assessment; reduce barriers for teachers by providing materials, substitute teachers, and reimbursement for conference expenses; provide programs specifically designed to enhance the academic and social development of students (e.g., tutoring programs, summer enrichment programs, social skills development activities, reading clubs); monitor changes in classrooms and teacher practices and modify classroom supports and training based on observations; and include teachers and principals in the design of curriculum and staff development activities.

Six sites were given lower ratings, indicating that activities to enhance the educational component were primarily restricted to encouraging discussion among teachers and limited dissemination of basic information about developmentally appropriate practices. Only two sites achieved the highest implementation rating of 5 for their efforts in the educational component. These sites showed consistent and broad evidence of multiple, innovative programs to enhance student performance; enrichment programs designed to support classroom-based instruction; efforts to integrate home-based and classroom-based instructional efforts; specific activities related to continuity of curriculum throughout the early childhood years; individualized support for teachers; extensive involvement of teachers and principals in the design and implementation of programs for students, families, and school staff; and efforts to coordinate educational support activities with other programs within the school (e.g., Title I, special education, library, bilingual education).

It should be noted that while a number of sites showed evidence of one or more of the activities that characterized the highest rating, unevenness in implementation resulted in lower overall ratings for that area. Two sites, for example, had strong indications of efforts to create and maintain continuity in curriculum and teaching practices starting in Head Start and continuing through the early elementary grades. Several sites specifically designed enrichment and home educational activities to support classroom-based instructional activities. A number of sites provided individualized teacher supports in the form of mentors or peer coaches. However, only two sites showed evidence that was comprehensive and consistent enough to warrant the highest rating.

Self-Assessment Ratings of Education

 

Figure 5.4 Change in Degree of Implementation of Education Component on Program Implementation Profile

 

Figure 5.4 summarizes the self-ratings provided by sites on the Program Implementation Profile concerning the degree of implementation of the education component at the beginning and at the end of the five-year implementation. Using the 10-point scale, the average rating of implementation at the beginning of the project (1992-93) was 4.8, indicating moderate implementation. Seven sites (29%) gave themselves a rating of 2 or less, indicating very limited implementation in the initial year. In contrast, however, the average rating at the end of the five years was 7.8, indicating a moderately high degree of implementation. Six sites (25%) gave themselves a rating of less than 7, indicating only moderate implementation.

As with other components reviewed earlier in the chapter, some sites reported a large amount of change in the education component over time. Of the 24 sites providing ratings, five sites had beginning and ending ratings that differed by at least 6 points on the 10-point scale. Another nine sites (37%) had change scores of between 3 and 5, while three sites had change scores of 0, indicating no difference in their rating of degree of implementation at the beginning and end of the project. One site indicated that the degree of implementation in the education component decreased substantially over the life of the project. The beginning rating given by that site was 5 points higher than the ending rating. No explanation was given by the site for this perceived decrease in implementation.

Asked to rate the variability within the site, six sites (25%) indicated that there was very little variability (ratings of less than 3) across schools within their programs in the implementation of the education component. Eight sites (33%) indicated a moderate amount of variability (ratings of between 3 and 7), and 10 sites (42%) indicated a large amount of variability (ratings of greater than 7). This degree of perceived variability within a site is the greatest among the four components. It is consistent with reports from site visits and from project directors that there were substantial differences among schools and teachers in the degree of acceptance and implementation of developmentally appropriate practices and other efforts to improve the educational practices in schools.

SOCIAL SERVICES

NRCT Ratings of Social Services

Figure 5.5 Implementation of Social Services Activities

 

Figure 5.5 presents the ratings of social service implementation completed by the National Research Coordinating Team. A third of the sites achieved ratings of 5 or 6, indicating multiple innovative contacts with families, outreach to hard-to-reach families to bring them into participation in program activities, vigorous efforts to minimize duplication of services, evidence that support plans guided service provision for individual families and that plans were modified based on outcomes, use of a strengths-based model of family support, and evidence of specific efforts to promote the independence of families in goal setting and service access. Sites achieving the highest ratings also demonstrated extensive and broad-reaching efforts to serve hard-to-reach families, extensive participation in community efforts to modify or create new services, and considerable evidence of incorporation of strength-based models.

Another 17 sites were given moderate ratings. These ratings reflected evidence of active efforts to reduce barriers to access to service, multiple individualized contacts with families, evidence of efforts to address specific cultural needs of families, and family participation in the development and enactment of family support plans. Only 4 sites were given ratings of 2, indicating limited evidence of supportive social services for families. These sites tended to be those in which there was a great deal of turnover in program staff, limiting the sites’ ability to mount or maintain a consistent effort.

Self-Assessment Ratings of Social Services

Figure 5.6 Change in Degree of Implementation of Social Services Component on Program Implementation Profile

 

Figure 5.6 summarizes the self-ratings provided by sites on the Program Implementation Profile concerning the degree of implementation of the social services component at the beginning of the implementation and at the end. Overall, the average rating of implementation at the beginning (1992-93) was 5.2, indicating moderate implementation. By the final year, this increased to 8.7, with only 3 sites rating themselves less than 8.

The distribution of ratings across the 25 sites indicates substantial variation at the beginning of the project. Ratings ranged from 1 to 10, indicating that some sites began with very limited social services in the schools for children and families, while other sites started the project with much higher degrees of support already available. By the end of the project, however, there is little variation in the ratings. Twenty-two of the 25 sites submitted ratings of 8 or higher, indicating their perception of strong implementation in the final year of the project.

Looking at the degree of change indicated for each site, it is noted that some sites reported a large amount of change in the social services component over time. Twenty percent indicated that a great deal of change had taken place; that is, their beginning and ending ratings were at least 6 points apart. Another 40 percent of the sites had moderate change scores of between 3 and 5 points, while 20 percent indicated no appreciable change over five years. Note, however, that four of these "no-change" sites had high self-ratings at the beginning of the project period, while one site had a moderate rating at both periods.

Asked to rate the degree of variability within the site, nearly a third of the sites indicated that there was very little variation (ratings of less than 3) across schools in implementing the social services component. Another third of the sites indicated a moderate amount of difference (ratings of between 3 and 7), and the remaining third of the sites indicated a great deal of difference in how well social services were provided (ratings of greater than 7). As with the family involvement component, these ratings likely reflect a decision by the Transition Demonstration Program to individualize services at the district or school level, contributing to differences in the level of implementation within a site. The ratings may also reflect differences in enthusiasm or cooperation across school administrators and teachers or in continuity of staff at the different schools.

HEALTH

NRCT Ratings of Health

Figure 5.7 Implementation of Health, Mental Health, & Nutrition Services

 

Figure 5.7 summarizes the NRCT ratings of implementation of the health component. Seven sites were given ratings of 2, indicating that health care needs of children and families were primarily met by referral and that existing screening programs were encouraged but not enhanced by project efforts. The large majority of sites, however, achieved ratings indicating moderate to high implementation. Moderate ratings (3 or 4), achieved by 13 sites, indicated that these sites provided resources and programs to enhance existing activities in the areas of health and nutrition, made efforts to reduce barriers to access for children and families, provided funds to meet emergency needs, provided systematic follow-through on results of screening programs, and developed proactive programs designed to promote health, fitness, and nutrition for children and families (e.g., special topic classes, health fairs, classroom instruction, social skills development programs).

Eleven sites achieved higher NRCT ratings in health, indicating that they provided a wider variety of innovative health-related programs and activities and placed specific emphasis on prevention, health promotion, and family wellness. Of these 11 sites, three were given the highest rating of 6, based on evidence of highly unique and broadly available programming, fundamental emphasis on wellness and prevention as central themes, comprehensive programming encompassing all areas of health, and strong evidence of efforts to facilitate system change in the community and the schools, thus building capacity to meet the ongoing health needs of children and families.

Self-Assessment Ratings of Health

Figure 5.8 Change in Degree of Implementation of Health Component on Program Implementation Profile

 

Figure 5.8 summarizes the self-ratings by sites concerning implementation of the health component at the beginning and end of the five-year implementation. Using the 10-point scale, the average implementation rating at the project’s beginning (1992-93) was 4.7, indicating moderate implementation. Seven sites (29%) gave themselves a rating of 2 or less, indicating very limited implementation in the first year. In contrast, however, the average rating at the end of the five years was 8.0, indicating a high degree of implementation. At this time, only four sites gave themselves a rating of less than 7.

Looking at the degree of change indicated for each site, it is noted that some sites (29%) reported a large amount of change in the health component over time. Of the 24 sites providing ratings, seven sites had beginning and ending ratings that differed by at least 6 points. Another 5 sites (21%) had change scores of between 3 and 5, while 3 sites had change scores of 0, indicating no difference in their rating of degree of implementation at the beginning of the project and at the end.

Asked to rate the variability within the site, 8 sites (33%) indicated that there was very little variability (ratings of less than 3) across schools within their programs in the implementation of the health component. Twelve sites (50%) indicated a moderate amount of variability (ratings of between 3 and 7), and only 4 sites indicated a large amount of variability (ratings of greater than 7).

COMPARISON OF NRCT AND SELF-ASSESSMENT RATINGS

Figure 5.9 Comparisons of External and Self-Assessment Ratings

 

Ratings of implementation for each of the four components were compared, and Figure 5.9 presents the results. (Note: both sets of ratings were converted to a single scale of 0, 2, 4, 6, 8, and 10 before comparison; the ratings for the education component were converted to 0, 2.5, 5.0, 7.5, and 10.) Only self-assessment ratings of implementation at the end of the project were compared to the NRCT ratings.
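The note above gives only the target values, which are consistent with a simple linear rescaling of each NRCT scale onto 0-10. A minimal sketch, assuming that linear rule (the function name is illustrative, not from the report):

```python
def rescale(rating, scale_max):
    """Map a rating on a 1..scale_max scale linearly onto 0..10."""
    return (rating - 1) * 10 / (scale_max - 1)

# The 6-point NRCT scale maps onto 0, 2, 4, 6, 8, 10 ...
assert [rescale(r, 6) for r in range(1, 7)] == [0, 2, 4, 6, 8, 10]
# ... and the 5-point education scale onto 0, 2.5, 5.0, 7.5, 10
assert [rescale(r, 5) for r in range(1, 6)] == [0, 2.5, 5.0, 7.5, 10]
```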

There were noticeable differences between the ratings of component implementation made by the National Research Coordinating Team and those made by the sites themselves. Ratings made by the National Research Coordinating Team, which rated programs against pre-specified criteria, tended to be lower for the majority of sites in every component area. The differences between NRCT and self-assessment ratings are not unexpected and are most likely related to several factors, including (1) differences in the two rating scales; (2) differences in the rating methods; and (3) real discrepancies in perceptions or evaluations of the programs.

First, as described earlier in this chapter and in Chapter 3, the two scales and the processes by which ratings were made were quite different. The NRCT rating scale was designed to capture and, to the extent possible, discriminate differences among sites, while the self-assessment (Program Implementation Profile) rating scale was not designed to discriminate in this fashion. The NRCT rating scale was more global in its approach, while the Program Implementation Profile was designed to reflect implementation in some detail.

Second, the NRCT ratings were created by a single individual using extensive written documents, site visit reports, and interviews (although ratings were validated by other reviewers). The self-assessment ratings were created by committees of individuals within each site, via a consensus process and using a broad range of information obtained from written documents, interviews, observation, and other sources. To some extent, differences in perceptions at the local site level may have been obscured through this consensus process.

Third, the NRCT ratings were completed with a perspective that encompassed all 31 sites, while the self-assessment ratings were focused on a single site. Thus, both the purposes and the processes of producing ratings differed in the two endeavors and would therefore be expected to yield somewhat different results.

SUMMARY FINDINGS

This chapter presents ratings of implementation produced through two processes: ratings made by the National Research Coordinating Team (NRCT) and self-assessment ratings given by each site through a consensus process. Different scales and different rating processes yielded, not unexpectedly, somewhat different results. Key findings are as follows:

  1. Sites tended to indicate that the levels of implementation were low to moderate in the first year of the program. By the end of the five-year implementation period, sites indicated consistently higher degrees of implementation in all program areas. This is consistent with the a priori expectation that the implementation of the comprehensive Transition Demonstration Programs would take some time to accomplish.
  2. Sites tended to indicate moderate to high degrees of variability within sites. The variation seen in implementation within a site is most likely related to conscious decisions to individualize program offerings to meet the unique needs of specific schools and neighborhoods. Variation in implementation within a site may also reflect differences in level of acceptance among school personnel or differences in continuity in staff.
  3. There is relatively little variability across sites at the end of the implementation period as reflected in the self-assessment ratings. The self-assessment ratings from the Program Implementation Profile generally reflect a perception on the part of the sites that they achieved the goals they set for their projects. The appreciation of accomplishments reflects the views of a variety of stakeholders within the site, because of the broad representation on the committees completing ratings.
  4. The NRCT ratings, however, do indicate variability across sites. Distributions of NRCT ratings evidence a range from limited to extensive implementation for each of the four components, with the majority of sites showing moderate implementation. A few sites showed relatively limited implementation across all components, and, similarly, a few sites showed consistently extensive implementation. Most sites achieved at least moderate implementation of all components and many of them showed evidence of extensive implementation in one or more areas. Even sites with lower levels of implementation achieved moderate ratings in at least one component.

Taken together, the self-assessment and NRCT ratings of the program implementation efforts in the 31 sites indicate that the large majority of sites did, in fact, implement innovative, comprehensive, creative programs to build on the strengths and meet the needs of the children, families, schools, and communities within which they operated. Variation in implementation, both in type and extent of program offered, reflected, at least in part, the inherent variation in the communities, neighborhoods, organizations, and cultures participating in the National Transition Demonstration Project across the 31 sites.

Chapter 7 identifies a group of six highly successful local sites and the eight least successful sites in terms of program implementation, and presents exploratory analyses of the factors that contributed to the large differences between these two groups.



 

 
