Head Start's National Research Conference

Methods and Measures Development

Parents’ Voices: Assessment Validation Procedure with Parent Partners
Yumiko Sekino, Marlo Perry, Rachel Fusco, John Fantuzzo

Presenters: Yumiko Sekino, Marlo Perry, Rachel Fusco

The purpose of the present study is to examine the content validity of two assessments measuring children’s social-emotional development. The Child Behavior Checklist (CBCL; Achenbach, 1991) is a frequently used behavior rating scale in Head Start research. Developed by the psychiatric community, the CBCL asks parents to rate the frequencies of internalizing and externalizing behaviors as a measure of social-emotional competence. The Penn Interactive Peer Play Scale – Parent Version (PIPPS-P; Fantuzzo, Mendez, & Tighe, 1998) focuses on assessing children’s social-emotional development in the natural context of peer play. This behavior rating scale was developed by Head Start parents and teachers in partnership with university researchers and has been used in several studies involving Head Start children and families. This study seeks to examine the content validity of behavior items across these two measures by using a mixed methods approach to focus on the following questions:

  • What kinds of questions are parents comfortable and not comfortable answering honestly?
  • Do demographic characteristics relate differentially to parent responses?
  • What kinds of reasons do parents give for not wanting to answer particular questions honestly?

Ninety-two Head Start parent leaders recruited through the Policy Council consented to participate in this study. Parents were invited to sort 144 CBCL and PIPPS-P items into three piles (Not Comfortable, Comfortable, Very Comfortable). Sorting was in response to the question, “To what degree would you feel comfortable answering this question honestly?” Across the combined 144 items, 30 had their highest response frequency in the “Not Comfortable” category. Twenty-nine of these were from the CBCL and one was from the PIPPS-P, indicating that parents were more comfortable answering items on the PIPPS-P than on the CBCL. Chi-square analyses were used to determine whether there were significant differences in responses based on demographic variables. No significant differences were found in parent responses for “not comfortable” items between groups based on sex, marital status, or education level.
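
The chi-square analyses described above test whether comfort-category frequencies differ across demographic groups; a minimal sketch follows, using illustrative placeholder counts rather than the study's data.

```python
# Chi-square test of independence for a 2x3 table of
# demographic group (rows) by comfort category (columns).
# The counts below are hypothetical, not the study's data.

def chi_square_statistic(table):
    """Compute the chi-square statistic for a contingency table."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    grand_total = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            expected = row_totals[i] * col_totals[j] / grand_total
            stat += (observed - expected) ** 2 / expected
    return stat

# Rows: two parent groups; columns: Not Comfortable,
# Comfortable, Very Comfortable (hypothetical counts).
table = [[12, 20, 18],
         [10, 22, 16]]
stat = chi_square_statistic(table)
df = (len(table) - 1) * (len(table[0]) - 1)
# A statistic below the .05 critical value (5.991 for df = 2)
# would indicate no significant group difference.
print(round(stat, 3), df)
```

In practice a library routine such as `scipy.stats.chi2_contingency` performs the same computation and also returns a p-value.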

Additionally, closer examination of items falling in the Not Comfortable category indicated that parents primarily found these questions to be “offensive” or “threatening.” Three independent raters coded parents’ responses explaining why they found a question offensive or threatening. Codes for items found to be offensive were as follows: Not Age Appropriate; Blaming Parents; Uncomfortable with Subject Matter; Labeling Child; Unthinkable Behavior for My Child; Personal – Private/Family Matter; Personal – Professional Matter; Personal – Context-Relevant; No Code. Items found to be threatening used the same codes, with one addition: Fear of Consequences to Answering Question. Overall interrater agreement was 89.4%.
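
The abstract does not specify how overall agreement was computed; one simple convention is percent agreement, the proportion of items on which all three raters assigned the same code, sketched here with hypothetical codes.

```python
# Percent agreement across three independent raters: the share of
# items on which all raters assigned the same code. The codes
# below are hypothetical examples, not the study's data.

def percent_agreement(ratings):
    """ratings: one tuple per item, one code per rater."""
    unanimous = sum(1 for item in ratings if len(set(item)) == 1)
    return 100.0 * unanimous / len(ratings)

ratings = [
    ("Blaming Parents", "Blaming Parents", "Blaming Parents"),
    ("Labeling Child", "Labeling Child", "No Code"),
    ("Not Age Appropriate",) * 3,
    ("Fear of Consequences to Answering Question",) * 3,
]
print(percent_agreement(ratings))  # 3 of 4 items unanimous -> 75.0
```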

Overall, results from this study highlight the need for a partnership approach that involves parents in the development of assessment tools to be used in our Head Start programs. As we know from collaborative studies with parents (Fantuzzo & McWayne, 2002; Fantuzzo, McWayne, & Childs, 2006), quality assessments can only be developed when parents are involved as partners in the development process.

References

Achenbach, T. M. (1991). Manual for the Child Behavior Checklist/4-18 and 1991 Profile. Burlington, VT: University of Vermont, Department of Psychiatry.

Fantuzzo, J. & McWayne, C. (2002). The relationship between peer-play interactions in the family context and dimensions of school readiness for low-income preschool children. Journal of Educational Psychology, 94, 79-87.

Fantuzzo, J., McWayne, C., & Childs, S. (2006). Scientist-Community Collaborations: Dynamic tensions between rights and responsibilities. In J.E. Trimble & C.B. Fisher (Eds.), The Handbook of Ethical Research with Ethnocultural Populations and Communities (pp. 27-50). Thousand Oaks, CA: Sage Publications.

Fantuzzo, J., Mendez, J., & Tighe, E. (1998). Parental assessment of peer play: Development and validation of the parent version of the Penn Interactive Peer Play Scale. Early Childhood Research Quarterly, 13 (4), 659-676.


 

Findings from the National School Readiness Indicators Initiative: A 17-State Partnership
Catherine B. Walsh, Kathy Thornburg

Presenters: Catherine B. Walsh, Kathy Thornburg

The National School Readiness Indicators Initiative involved teams from 17 states: Arizona, Arkansas, California, Colorado, Connecticut, Kansas, Kentucky, Maine, Massachusetts, Missouri, New Hampshire, New Jersey, Ohio, Rhode Island, Vermont, Virginia, and Wisconsin. Each state team was composed of policymakers, community leaders, and researchers.

States identified and developed indicators based on the research and science of early childhood development, advice and resources from experts across the country and peer-to-peer learning. The core indicators and emerging indicators that will be presented in the poster session are the result of a synthesis of the individual work of the 17 states. It is hoped that this rich list of critical measures – based on hard research and state experiences – will serve as a framework to focus more attention on the needs of the youngest children and their families.

The indicators selected by the 17 states point to a core set of common school readiness indicators. Highlighted in this poster session are core indicators in the areas of ready children, ready families, ready communities, ready services (including health care and early education), and ready schools.

Policymakers and community leaders can use the core set of indicators to measure progress toward improved outcomes for young children and families. The set of core indicators was selected based on several criteria: (1) each of the core indicators had been selected as a high-priority school readiness indicator by multiple states; (2) the core indicators reflect conditions that can be altered through state policy actions; (3) a change in one or more of the core indicators will influence children’s school readiness; and (4) each of the core indicators is currently measurable using state and local data.

Annual monitoring of key school readiness indicators can signal if things are moving in the right direction—and if they are not. Measuring progress over time can lead to more informed decisions about programs, policies, and investments. A major goal of the 17-state initiative was achieved when states produced state-level reports on the set of school readiness indicators selected by their state team and released the reports to highlight key issues affecting young children in their states. The poster session will highlight the work of two states – Missouri and Rhode Island – to annually report on the school readiness of young children using a comprehensive set of school readiness indicators.

In addition, lessons learned in using indicators to move an early childhood policy agenda will be highlighted, including the following: (1) Indicators must be linked with policy and communications; (2) School readiness is both/and (i.e., literacy as well as social-emotional development); (3) Kindergarten assessment is necessary but not sufficient; (4) It is essential to include policymakers; (5) Build cross-sector teams; (6) What gets measured gets done; (7) Celebrate progress; and (8) Strategically pursue emerging indicators.

The lessons learned from the 17 states are a starting point for other states as they develop state and local school readiness indicator systems. For a more complete understanding of the findings that will be shared, please visit www.gettingready.org.

References

Early childhood in a social context: A chartbook. (2004). New York, NY: Commonwealth Fund and Child Trends.

Getting ready: Findings from the national school readiness indicators initiative: A 17 state partnership. (2005). Providence, RI: Rhode Island Kids Count.

Pathways mapping initiative: school readiness pathway. (2004). Washington, DC: Project on Effective Interventions at Harvard University. www.pathwaystooutcomes.org

Rouse, C., Brooks-Gunn, J., & McLanahan, S. (Eds.). (2005). School readiness: Closing racial and ethnic gaps. The Future of Children, 15(1).


 

Preliminary Validity of the Preschool Self-Regulation Assessment (PSRA)
Radiah Smith-Donald, Kirsten Carroll, Paul Goyette, Molly Woods Metzger, Ta-Tanisha Young, C. Cybele Raver

Presenters: Radiah Smith-Donald, Molly Woods Metzger

Head Start has traditionally focused on supporting children’s social-emotional and behavioral adjustment as well as their cognitive development. Developmental research linking self-regulation with school readiness provides support for this approach (Blair, 2002; Eisenberg et al., 2003; Fantuzzo et al., 2004; Olson & Hoza, 1993; Raver, 2002). As researchers and service providers try to determine how best to support children’s development, they require a means of reliably evaluating children’s strengths and weaknesses in these different areas before and after intervention. This research explores the validity of a new, portable direct assessment of preschoolers’ self-regulatory skills.

Methods

The pilot run of the Chicago School Readiness Project (CSRP) yielded data for 63 preschoolers (ages 3.5-6 years; 67% Hispanic, 25% Black) from two Head Start programs.

The Preschool Self-Regulation Assessment (PSRA) consists of a battery of tasks administered one-on-one to assess children’s self-regulatory skills. PSRA tasks were adapted from lab-based assessments of effortful control (Murray & Kochanska, 2002), compliance (Brumfield & Roberts, 1998), and executive control (Blair, 2002), and the battery takes 20 minutes to administer. In addition, assessors completed a 28-item report describing children’s emotions and behavior during the assessment. The framework and descriptors of the assessor report were adapted from the Leiter-R social-emotional rating scale and the Disruptive Behavior Diagnostic Observation Schedule coding system (DB-DOS; Wakschlag et al., 2005).

Children’s early academic skills were assessed using the Head Start National Reporting System (NRS) battery. Teacher report of preschoolers’ behavior problems and competencies was also collected via the Social Competence and Behavior Evaluation (SCBE-30 short form; LaFreniere & Dumas, 1996) and the Behavior Problems Index (BPI; Zill & Peterson, 1986).

Results

The descriptive results suggest that children generally remained emotionally positive and followed directions, and rarely exhibited defiance, anger, or aggression. Performance on the PSRA tasks varied widely across children. Factor analyses using principal component extraction suggest multiple self-regulation components.

Correlation analyses demonstrate modest to moderate significant associations between PSRA and assessor report constructs and teacher-reported behavior problems and competencies. Preschoolers’ attention, planning, compliance, and impulse control were negatively correlated with teacher report of both internalizing and externalizing behavior problems (r = -.3). Preschoolers’ attention/planning and compliance were positively correlated (r = .5) with teacher report of social competence (children’s ability to negotiate conflicts and work well with other children). Preschoolers’ positive emotion during the assessment (positive affect, social engagement with the examiner, and sense of mastery and confidence) was not associated with behavior problems or competencies teachers reported seeing in the classroom.

PSRA constructs were significantly correlated with children’s performance on the NRS battery of early academic skills. Children’s impulse control, attention, and positive emotion during the assessment were positively associated with their early math and verbal skills (r = .2-.4). Children’s impulse control was also associated with their performance on the letter naming subscale of the NRS.
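
The associations above are Pearson correlations; a minimal sketch of the computation follows, using hypothetical scores rather than CSRP data.

```python
import math

# Pearson correlation between two score lists, as used for the
# PSRA/teacher-report and PSRA/NRS associations. The scores below
# are hypothetical, not data from the study.

def pearson_r(x, y):
    """Pearson product-moment correlation between x and y."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

impulse_control = [3, 5, 2, 4, 6, 1]      # hypothetical PSRA task scores
math_skills = [10, 14, 9, 12, 15, 8]      # hypothetical NRS math scores
print(round(pearson_r(impulse_control, math_skills), 2))
```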

Conclusion

Findings provide support for the PSRA as a short, multidimensional measure of preschoolers’ self-regulation that can be taken to scale. In addition, they reiterate the significance of self-regulation for student success.

References

Blair, C. (2002). School readiness: Integrating cognition and emotion in a neurobiological conceptualization of children’s functioning at school entry. American Psychologist, 57, 111-127.

Brumfield, B. D., & Roberts, M. W. (1998). A comparison of two measurements of child compliance with normal preschool children. Journal of Clinical Child Psychology, 27, 109-116.

Eisenberg, N., Valiente, C., Fabes, R. A., Smith, C. L., Reiser, M., Shepard, S. A., et al. (2003). The relations of effortful control and ego control to children’s resiliency and social functioning. Developmental Psychology, 39, 761-776.

Fantuzzo, J., Perry, M. A., & McDermott, P. (2004). Preschool approaches to learning and their relationship to other relevant classroom competencies for low-income children. School Psychology Quarterly, 19, 212-230.

LaFreniere, P. J., & Dumas, J. E. (1996). Social competence and behavior evaluation in children ages 3 to 6 years: The short form (SCBE-30). Psychological Assessment, 8, 369-377.

Murray, K., & Kochanska, G. (2002). Effortful control: Factor structure and relation to externalizing and internalizing behaviors. Journal of Abnormal Child Psychology, 30, 503-514.

Olson, S. L., & Hoza, B. (1993). Preschool developmental antecedents of conduct problems in children beginning school. Journal of Clinical Child Psychology, 22, 60-67.

Raver, C. C. (2002). Emotions matter: Making the case for the role of young children’s emotional development for early school readiness. Social Policy Report, 16(3), 3-24.

Wakschlag, L. S., Leventhal, B. L., Briggs-Gowan, M. J., Danis, B., Keenan, K., Hill, C., et al. (2005). Defining the “disruptive” in preschool behavior: What diagnostic observation can teach us. Clinical Child and Family Psychology Review, 8, 183-201.

Zill, N., & Peterson, J.L. (1986). Behavior problems index. Washington, DC: Child Trends.


 

The EARLI Mathematics Probes: Initial Reliability and Validity Evidence for Children in Head Start
James DiPerna, Paul Morgan, Puiwa Lei, Erin Reid, Qiong Wu

Presenters: James DiPerna, Paul Morgan, Erin Reid

Despite increased interest in promoting the development of early skills to prevent academic difficulty at school entry, few measures exist for monitoring growth in specific early academic skills during the preschool years. In addition, many published measures with adequate reliability and validity evidence for diagnostic purposes often lack sufficient skill coverage at the preschool level (U.S. Department of Health and Human Services, 2002). Progress monitoring is one way to regularly assess children’s growth in early academic skills and enhance instructional planning. Through the use of progress monitoring measures, teachers are better able to adjust their instruction to promote children’s mastery of key academic skills.

Curriculum-Based Measurement (CBM; Deno, 1985) is a well-established approach for tracking children’s growth in academic skills during the elementary years. CBM has strong reliability and validity evidence to support its use for progress-monitoring in mathematics. The measures or “probes” used in CBM are brief (typically 1-2 minutes), standardized, direct, and can be used by teachers to guide a child’s instruction over time. The purpose of the Early Arithmetic, Reading and Learning Indicators (EARLI) Project is to develop a set of progress-monitoring measures that assess key early mathematics and literacy skills for children in Head Start. This poster presentation reports the initial design, development, field-testing, and outcomes for the EARLI Mathematics measures.

The EARLI Mathematics measures consist of six probes. Counting Aloud asks children to count aloud, starting with 1, in the correct sequence. Number Naming has children read individual numbers in isolation. Counting Objects requires children to count the number of squares within a set. For the Grouping task, children are expected to rapidly identify the number of dots within a small group (6 or fewer). Pattern Recognition has children identify patterns using short sequences of basic shapes (i.e., circle, square, and triangle). Measurement requires children to identify fundamental measurement concepts (e.g., taller, shorter, higher, lower) using basic shapes.

We administered these probes to approximately 300 Head Start children on three occasions during the 2005-06 academic year. To assess concurrent validity of the EARLI probes, selected mathematics subtests from the Woodcock-Johnson III Tests of Achievement (WJ-III; Woodcock, McGrew, & Mather, 2001) were administered to a subsample of participating children. Results of initial analyses suggested appropriate item and scale qualities for the pilot mathematics measures. Internal consistency for the measures ranged from adequate to excellent, and scores appeared to be sensitive to developmental differences. Though additional research is necessary, the EARLI Mathematics measures may eventually help Head Start teachers adjust their instruction in ways that further increase the number of children who successfully transition into kindergarten.
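
Internal consistency for measures like these is commonly summarized with Cronbach's alpha; a minimal sketch follows, using hypothetical item scores rather than EARLI field-test data.

```python
# Cronbach's alpha for one probe's items: the ratio of shared to
# total score variance, scaled by the number of items. The scores
# below are hypothetical, not EARLI field-test data.

def cronbach_alpha(items):
    """items: list of item-score lists of equal length
    (one score per child per item)."""
    k = len(items)

    def variance(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    item_variance = sum(variance(item) for item in items)
    totals = [sum(child) for child in zip(*items)]  # per-child total score
    return (k / (k - 1)) * (1 - item_variance / variance(totals))

items = [
    [2, 3, 3, 1, 4, 2],  # item 1 scores for six children
    [1, 3, 2, 1, 4, 2],  # item 2
    [2, 4, 3, 1, 3, 2],  # item 3
]
alpha = cronbach_alpha(items)
print(round(alpha, 2))
```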

References

Deno, S. L. (1985). Curriculum-based measurement: The emerging alternative. Exceptional Children, 52, 219-232.

United States Department of Health and Human Services. (2002). Early childhood education and school readiness: Conceptual models, constructs, and measures. Retrieved April 5 from http://www.nichd.nih.gov/crmc/cdb/Kyle-workshop.pdf.

Woodcock, R. W., McGrew, K. S., & Mather, N. (2001). Woodcock-Johnson III Tests of Cognitive Abilities and Achievement. Itasca, IL: Riverside Publishing.


 

Predictive Validity of Parent and Teacher Reports of Language Ability in an Early Intervention Project
Stefanie Footer, Katherine E. Bono, Nurit Sheinberg

Presenters: Stefanie Footer, Katherine E. Bono, Nurit Sheinberg

One of the common goals of early intervention programs is to improve child language ability (e.g., Peterson et al., 2005). However, it is unclear how this ability should be measured. Standardized assessments of language administered in the laboratory are advantageous because behavior can be directly observed by researchers who may have less bias than parents or teachers. The disadvantages of these types of assessments are that they often take a long time to administer, they are costly, and they are difficult to use with very young children who have limited language skills. Parent or teacher report provides an efficient, inexpensive alternative to standardized assessments.

The question then becomes: who should provide this report, parents or teachers? The purpose of the current study was to examine the predictive validity of parent and teacher reports of early language ability in the context of an early intervention program for children who were prenatally exposed to cocaine. Predictive validity is important in this context so that children who are language delayed can be identified and services can begin as early as possible, preventing potential long-term negative outcomes (Lester et al., 2000).

Four hundred thirteen children (48.7% male) who were prenatally exposed to cocaine and enrolled in an early intervention project participated. Participants were primarily identified as African American and low SES. Parents or other primary caregivers (e.g., foster parents) and teachers completed the Receptive-Expressive Emergent Language Test (REEL; Bzoch & League, 1991) when the children were 18 and 24 months old. A revised version of the Reynell Developmental Language Scales, a standardized assessment (Reynell & Gruber, 1990), was used to measure language development when the children were 36 months old.

All of the early language scores as reported by teachers and primary caregivers at 18 and 24 months were significantly correlated with the overall language score from the standardized assessment at 36 months. The strongest predictors of the 36-month language scores were teacher report at 18 months and maternal report at 24 months. In the next set of analyses, chi-square tests were computed to compare delay status (i.e., delayed vs. not delayed) at the early assessments to delay status at the later assessment. Delay status at all of the earlier assessments, including maternal report at 18 and 24 months and teacher report at 18 and 24 months, was related to delay status at the 36-month assessment; however, early teacher reports of delay classified more children correctly at 36 months than early parent reports of delay.

These results suggest that although both teacher and parent report of language ability may be valid measures of language and therefore useful for evaluating the impact of early intervention on language outcomes, teacher report may be more valid for the early identification of language delay.

References

Bzoch, K. R., & League, R. (1991). Receptive-Expressive Emergent Language Test (2nd ed.). Austin, TX: Pro-Ed.

Lester, B. M., Boukydis, C. F. Z., & Twomey, J. E. (2000). Maternal substance abuse and child outcome. In C. H. Zeanah (Ed.), Handbook of infant mental health (2nd ed., pp. 161-175). New York, NY: The Guilford Press.

Peterson, P., Carta, J.J., & Greenwood, C. (2005). Teaching enhanced milieu language teaching skills to parents in multiple risk families. Journal of Early Intervention, 27, 94-109.

Reynell, J.K., & Gruber, C.P. (1990). Reynell Developmental Language Scales. Los Angeles, CA: Western Psychological Services.


 

The Parenting Stress Index Short Form: Alternate Scales for Low-Income Parents of Preschoolers
Leanne Whiteside-Mansell, Lorraine McKelvey, Catherine Ayoub, Andrea Hart, Richard Faldowski

Presenters: Leanne Whiteside-Mansell, Lorraine McKelvey, Catherine Ayoub, Andrea Hart, Richard Faldowski

This study examines the psychometric properties of the Parenting Stress Index-Short Form (PSI-SF; Abidin, 1995) for parents of preschool children using two large, ethnically diverse U.S. studies. Parenting stress is a complex construct that involves behavioral, cognitive, and affective components and is believed to be associated with poor developmental outcomes for children (Deater-Deckard, 1998). The PSI-SF items are conceptualized by the developer as representing three broadly defined latent constructs with 12 items each: Parental Distress (PD; items 1-12), Parent-Child Dysfunctional Interaction (P-CDI; items 13-24), and Difficult Child (DC; items 25-36). Most items are scored from 1 (Strongly Agree) to 5 (Strongly Disagree), with 3 (Not Sure) as the midpoint.

Data for the study come from subsamples of two multi-site evaluations of early intervention programs: the Starting Early, Starting Smart (SESS; N = 1,063; children ages 10, 41, and 54 months) evaluation and the Early Head Start (EHS; N = 1,608; children ages 15, 25, and 37 months) evaluation. SESS was designed to integrate behavioral health services for families of young, low-income children into early childhood education settings. EHS is modeled after Head Start but focuses on low-income pregnant women and families with infants and toddlers. The EHS study included the PD and P-CDI scales and modified the administration; the SESS study used the full PSI-SF with typical administration.

This study examines the factor structure, reliability, and validity of an alternative decomposition of the PSI-SF scales. Parental Distress was modeled as General Distress (i.e., distress not directly related to parenting; 5 items) and Parenting Demands Distress (7 items). Parent-Child Dysfunctional Interaction was modeled as Dyadic Interaction (6 items) and Perception of the Child (5 items), the latter representing parental perceptions of characteristics of the child.

The proposed factor structure was supported for parents in both studies, with good fit of data to model based on fit indices. Reliability coefficients were adequate to high (ranging from .6 to .9) for all scales in both samples. Scales were correlated in expected ways with measures of the home environment, maternal depression, and child problem behavior. No impact of the modified implementation of the PSI-SF was observed.

In sum, this study demonstrated that, whereas the PSI-SF scales PD and P-CDI were reliable and valid for this sample of low-income parents of preschool children, the decomposition of the scales was equally useful and might prove to have enhanced utility. The benefits include shorter, more focused scales that may be used separately yet, based on the validity coefficients, are similar to the longer versions in their ability to predict key indicators. However, just as the frequently used total score of the PSI captures a broad array of aspects of parenting stress, these subscales together capture multiple aspects of parenting distress and parent-child interaction.

References

Abidin, R. R. (1995). Parenting Stress Index (3rd ed.): Professional Manual. Lutz, FL: Psychological Assessment Resources, Inc.

Deater-Deckard, K., & Scarr, S. (1996). Parenting stress among dual-earner mothers and fathers: Are there gender differences? Journal of Family Psychology, 10(1), 45-59.


 

Psychometric-Based Refinements to the Center for Epidemiological Studies Depression (CES-D) Scale for Low-Income Mothers
Richard A. Faldowski, Leanne Whiteside-Mansell, Lorraine McKelvey, Gui-Young Hong, Andrea D. Hart, Catherine M. Ayoub

Presenter: Richard A. Faldowski

This presentation describes a research project designed to understand psychometric characteristics of the Center for Epidemiological Studies Depression (CES-D) scale (Radloff, 1977) among low-income mothers, with suggestions for refinement. The CES-D consists of 20 self-report items on symptoms associated with depression. Responses to all items are made on 4-point scales (0 to 3) anchored to the frequency with which a symptom was experienced during the past week: 0 = ‘rarely or none of the time (less than one day)’ to 3 = ‘most or all of the time (5-7 days).’ Total scores on the CES-D range from 0 to 60, with higher scores indicating more depressive symptoms. A cutoff score of 16, representing individuals who either endorse a small number of items as present frequently or most of the 20 items as present less frequently during the past week, is commonly used to identify individuals afflicted with problematic levels of depressive symptoms (Eaton, Smith, Ybarra, Muntaner, & Tien, 2004).
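
Standard CES-D scoring sums the 20 items after reverse-scoring the four positively worded items (items 4, 8, 12, and 16 on the standard form) and applies the cutoff of 16; a minimal sketch with hypothetical responses:

```python
# CES-D total score with reverse-scoring of the four positively
# worded items, plus the conventional cutoff of 16. The responses
# below are hypothetical.

POSITIVE_ITEMS = {4, 8, 12, 16}  # 1-based positions on the standard form

def cesd_total(responses):
    """responses: 20 raw item scores, each 0-3."""
    assert len(responses) == 20 and all(0 <= r <= 3 for r in responses)
    return sum(3 - r if i in POSITIVE_ITEMS else r
               for i, r in enumerate(responses, start=1))

def above_cutoff(responses, cutoff=16):
    """True if the total score meets the common screening cutoff."""
    return cesd_total(responses) >= cutoff

responses = [1, 0, 2, 3, 1, 0, 1, 2, 0, 1, 1, 3, 0, 1, 0, 2, 1, 0, 1, 0]
print(cesd_total(responses), above_cutoff(responses))
```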

With a Flesch-Kincaid grade level of 4.1 (METRIC, 2005), the CES-D has been one of the most widely used and heavily researched instruments in community-based studies of depression. Previous psychometric studies have yielded total score internal consistencies ranging from .8 to .9, while test-retest reliabilities over intervals from 2 weeks to 1 year tend to fall between .4 and .7 (Eaton, Smith, Ybarra, Muntaner, & Tien, 2004). A recent analysis by Shumway and colleagues (2004) suggests that, despite its relatively low reading level, the “cognitive complexity” of questions and response options on the CES-D is moderate (overall rank: 6th most complex of 15 self-administered depression measures examined).

Although the CES-D scale is also widely used for assessing maternal depression in evaluations of child development programs, few large-scale studies have systematically explored its psychometric properties or functional operating characteristics for low-income respondents. This is especially problematic in light of survey research results suggesting that around two-thirds of general population respondents interpret some questions differently than intended by the researcher (Belson, 1981; Oksenberg, Cannell, & Kalton, 1991). In populations with less access to educational opportunities, the problem of idiosyncratic question and response option interpretation is likely exacerbated. For such respondents, the quality of information elicited by a measure may be enhanced by providing fewer rather than greater numbers of response categories.

The current presentation reports results from a large-scale psychometric study of CES-D properties among low-income mothers in the Early Head Start Research and Evaluation Project. Using a combination of exploratory and confirmatory factor analysis, it is shown that four positively worded items induce a method artifact into estimates of the measure’s factorial structure. Using item response theory, it is shown that, for many of the scale items, 4-point response options are unnecessary and a simplified set of response options, combined with an associated simplified scoring scheme, can be employed. Implications of the findings for program practice as well as future research directions are discussed.
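
The abstract does not spell out the simplified scoring scheme; one plausible form of the idea is collapsing each 4-point frequency rating into a dichotomous symptom indicator, sketched here with hypothetical responses and an assumed threshold.

```python
# Collapse 0-3 frequency ratings into 0/1 symptom-present scores.
# The threshold (>= 1, i.e., any endorsement) is an assumption for
# illustration; the abstract does not specify the actual scheme.

def collapse(responses, threshold=1):
    """Map 0-3 item responses to dichotomous 0/1 scores."""
    return [1 if r >= threshold else 0 for r in responses]

raw = [0, 1, 3, 2, 0, 0, 2, 1]   # hypothetical item responses
print(collapse(raw))             # [0, 1, 1, 1, 0, 0, 1, 1]
print(sum(collapse(raw)))        # simplified symptom count: 5
```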

References

Belson, W. A. (1981). The design and understanding of survey questions. Aldershot, England: Gower.

Eaton, W.W., Smith, C., Ybarra, M., Muntaner, C. & Tien, A. (2004). Center for Epidemiologic Studies Depression Scale: Review and Revision (CESD and CESD–R). In M.E. Maruish (Ed.), The Use of Psychological Testing for Treatment Planning and Outcomes Assessment. Volume 3: Instruments for Adults (3rd ed.), (pp. 363-377). Mahwah, NJ: LEA.

Measurement Excellence and Training Resource Information Center (METRIC). (2005). Critical review of Center for Epidemiologic Studies Depression Scale (CES-D). Available from URL [accessed Thursday, June 30, 2005]: http://www.measurementexperts.org//instrument/instrument_reviews.asp?detail=12

Oksenberg, L., Cannell, C. F., & Kalton, G. (1991). New strategies for pretesting survey questions. Journal of Official Statistics, 7, 349-365.

Radloff, L.S. (1977). The CES-D scale: a self-report depression scale for research in the general population. Applied Psychological Measurement, 1, 385-401.

Shumway, M., Sentell, T., Unick, G. & Bamberg, W. (2004). Cognitive complexity of self-administered depression measures. Journal of Affective Disorders, 83, 191–198.


 

Learning Links: An Approaches to Learning Curriculum for Head Start Children
John W. Fantuzzo, Paul McDermott, Lauren Angelo, Yumiko Sekino

Presenters: John W. Fantuzzo, Yumiko Sekino

In recent years, the National Education Goals Panel (1997) has underscored the importance of approaches to learning. Approaches to learning is listed as one of the five essential components of children’s school readiness, making it a national strategic focus for early intervention with children at risk for poor academic outcomes. One series of studies in this area has focused on children’s learning behaviors. Learning behaviors are characteristics and behavior patterns children demonstrate when they approach and undertake learning tasks. These behaviors, such as task persistence, competence motivation, flexibility, and attentiveness, are recognized as keystone elements in successful preschool and school performance (Barnett, Bauer, Ehrhardt, Lentz, & Stollar, 1996; Fantuzzo, Perry, & McDermott, 2004). Learning behaviors are distinct from cognitive ability and contribute to school success above and beyond intelligence (Schaefer & McDermott, 1999). Additionally, these skills are considered to be observable and teachable. Researchers have focused on the capacity of these empirically produced learning behavior dimensions to inform educational planning during elementary school (McDermott & Watkins, 1987; Stott et al., 1988).

An increased focus on preschool as a critical period for developing skills highlights the need to study learning behaviors as they impact early childhood development. Currently, there is no empirically based, comprehensive, program- or school-wide method intended to teach learning behaviors in preschool. The purpose of this study was to develop an empirically based preschool curriculum targeting learning behaviors.

Empirically-Based Behavioral Objectives. The Preschool Learning Behaviors Scale (PLBS; McDermott, Francis, Green, & Stott) was completed by teachers during the 2000-2001 school year for 2,329 Philadelphia Head Start children. To identify a meaningful collection of behavior sets, the PLBS data were analyzed using both exploratory factor analysis and cluster analysis. These complementary procedures were used to establish a maximally robust, stable, and educationally meaningful collection of behavioral sets reflecting the same phenotypic skills. The distinct sets produced by each method were reviewed for meaningful content, and the nine areas resulting from factor analysis were judged the most meaningful and cohesive. Named for their component items, these areas were: attention, task initiative, confident risk taking, task approach, enthusiasm for learning, frustration tolerance, group learning, acceptance of assistance, and consequential behavior.
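The analytic strategy above — exploratory factor analysis complemented by a cluster analysis of the same rating-scale items — can be sketched as follows. This is an illustrative sketch only: the simulated ratings, the number of latent dimensions, and the simple assignment of items to their dominant factor are assumptions standing in for the actual PLBS data and procedures, which are not reproduced here.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis
from sklearn.cluster import KMeans

# Simulate teacher ratings: two hypothetical latent learning-behavior
# dimensions drive responses to 12 items for 500 children.
rng = np.random.default_rng(0)
n_children, n_items = 500, 12
latent = rng.normal(size=(n_children, 2))
loadings = rng.normal(size=(2, n_items))
ratings = latent @ loadings + rng.normal(scale=0.5, size=(n_children, n_items))

# Exploratory factor analysis: estimate latent dimensions from item scores.
fa = FactorAnalysis(n_components=2, random_state=0)
scores = fa.fit_transform(ratings)        # per-child factor scores

# Group items into behavior "sets" by their dominant factor loading.
item_sets = np.abs(fa.components_).argmax(axis=0)

# Complementary cluster analysis, treating items (columns) as the cases.
km = KMeans(n_clusters=2, n_init=10, random_state=0)
item_clusters = km.fit_predict(ratings.T)

print("Factor-based item sets:", item_sets)
print("Cluster-based item sets:", item_clusters)
```

In practice the two groupings would be compared and reviewed for interpretable content, as the abstract describes for the nine PLBS areas.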

Learning Links Curriculum. A team of early childhood educators and researchers translated the empirically derived behavioral sets into nine distinct curriculum modules, each focused on teaching children an important learning behavior skill. The overall purpose of the Learning Links Curriculum is to teach children these fundamental skills, which connect them to important learning opportunities in the classroom. Each module is structured to maximize children’s learning through diverse methods of instruction across multiple classroom contexts, giving children repeated opportunities to practice and reinforce positive learning behaviors. In addition to classroom instruction, the curriculum emphasizes home-school collaboration to promote positive learning behaviors in both school and home contexts.

References

Barnett, D. W., Bauer, A. M., Ehrhardt, K. E., Lentz, F. E., & Stollar, S. A. (1996). Keystone targets for changes: Planning for widespread positive consequences. School Psychology Quarterly, 11, 95-117.

Fantuzzo, J.W., Perry, M.A., & McDermott, P.A. (2004). Preschool approaches to learning and their relationship to other relevant classroom characteristics for low-income children. School Psychology Quarterly, 19, 212-230.

Kagan, S. L., Moore, E., & Bredekamp, S. (1995). Reconsidering children’s early development and learning: Toward common views and vocabulary. Washington, DC: National Education Goals Panel.

McDermott, P. A., & Watkins, M. W. (1987). Microcomputer systems manual for Multidimensional Assessment of Children. San Antonio, TX: The Psychological Corporation.

National Education Goals Panel. (1997). Getting a good start in school. Washington, DC: Author.

Schaefer, B. A., & McDermott, P. A. (1999). Learning behavior and intelligence as explanations for children’s scholastic achievement. Journal of School Psychology, 37, 299-313.

Stott, D. H., McDermott, P. A., Green, L. F., & Francis, J. M. (1988). Learning Behaviors Scale and Study of Children’s Learning Behaviors research edition manual. San Antonio, TX: The Psychological Corporation.


 

The Early Communication Indicator for Infants and Toddlers: Systematic Replication and Translation from Research
Charles Greenwood, Judith Carta, Dale Walker

Presenters: Judith Carta, Dale Walker

Early interventionists are increasingly held accountable for the progress of children receiving their services. The groundswell for accountability has moved beyond the public schools to include programs that serve society’s youngest children. The number of children served by Early Head Start (EHS) is large and growing, as are the national resources dedicated to these services: EHS serves over 61,500 children at a cost of more than $600 million. Although increasing numbers of young children are being served in EHS, the early educators responsible for providing those services have limited means of knowing whether their intervention practice is helping to move children toward important outcomes.

Sensitive and technically adequate measures are needed that can be used as indicators of the progress of individual children and programs. Such measures need to be practical for use with diverse children and able to be scaled up at a national level. Currently, few such measures exist for this youngest population of children, and federal, state, and local programs have limited capacity to produce and use child outcome information for program improvement. Existing assessments are not conducted often enough to give early educators ongoing information on how an individual child is doing or, when results are aggregated, whether a program of intervention is working.

Individual Growth and Development Indicators (IGDIs) are recently developed tools that give early educators effective and efficient means of monitoring growth toward important outcomes. The Early Communication Indicator (ECI) for infants and toddlers is one of a set of IGDIs developed to measure infants’ and toddlers’ progress toward expressing their wants and needs. For the ECI, rates of children’s communication are determined by scoring the number of gestures, vocalizations, words, and multiple words to portray children’s growth in communication over time. The ECI is a play-based, standardized assessment in which a familiar adult play partner assesses the child’s communication over a 6-minute play session. The ECI is administered quarterly, or more frequently if necessary, and rates of communication growth are calculated and made available through a website that supports data entry by interventionists and provides immediate feedback on rates of growth for both individual children and groups of children in programs. Until recently, however, most of what was known about the ECI was based on a single report.
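The rate scoring described above — counting gestures, vocalizations, words, and multiple words across a 6-minute play session — can be sketched as below. The field names, example tallies, and the simple unweighted per-minute rate are illustrative assumptions for this sketch, not the published ECI scoring rules.

```python
from dataclasses import dataclass

@dataclass
class ECISession:
    """Hypothetical tally of communicative acts from one 6-minute ECI session."""
    gestures: int
    vocalizations: int
    single_words: int
    multiple_words: int
    minutes: float = 6.0

    def total_rate(self) -> float:
        """Total communicative acts per minute across the session."""
        total = (self.gestures + self.vocalizations
                 + self.single_words + self.multiple_words)
        return total / self.minutes

# Example: 33 communicative acts over 6 minutes -> 5.5 acts per minute.
session = ECISession(gestures=10, vocalizations=14,
                     single_words=6, multiple_words=3)
print(session.total_rate())  # 5.5
```

Plotting such per-minute rates from quarterly administrations over time is what yields the individual growth trajectories the abstract refers to.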

In this poster, findings are presented from three expanded samples of children: two from research applications with researchers acting as assessors, and one from a statewide evaluation of Early Head Start programs conducted by practitioners trained by researchers to be ECI assessors. Comparisons are made across samples. The original growth parameters were generally replicated, and differences were associated with differences in sample and assessor characteristics. More representative growth parameters are reported based on the larger aggregate sample. Implications are discussed.


 

Early Childhood Family Involvement Practices Among Latino Families
Patricia H. Manz, Marika Ginsburg-Block, Christine McWayne

Presenter: Patricia H. Manz

Reflecting the changing demographics of our nation, the proportion of Latino children and families in the Head Start population is growing and now constitutes 31%, making Latinos among the largest ethnic minority populations served by Head Start (ACF, 2005). Quality measurement of family involvement in the diverse Latino community is necessary for facilitating the development and evaluation of family involvement programs in Head Start. The construction of culturally responsive and beneficial measures, however, is challenged by the range of cultural and linguistic diversity among Latinos (Lopez, 2004).

To address some of the current limitations in our knowledge of Latino family involvement during the early childhood period, three pilot studies were conducted in different regions of the U.S. with diverse Latino subgroups, using the Family Involvement Questionnaire – Early Childhood version (FIQ-EC; Fantuzzo, Tighe, & Childs, 2000). Each study undertook an independent, partnership-directed approach to creating a Spanish translation of the FIQ-EC. In all, over 1,000 Latino family members participated in these studies.

Although the Spanish translations vary across the three studies, certain similarities are noted across the Latino groups. Foremost, the promise of the FIQ-EC for this population is evident: all three studies confirmed the construct validity of the FIQ-EC for diverse Latino families. Also noted was the consistency of relationships among FIQ-EC dimensions and developmental outcomes for Head Start children. School-Based Involvement was a salient predictor of child outcomes, and Home-School Conferencing was significantly related to child outcomes in the Northeastern studies. Most notable, however, was the lack of significant relationships between the Home-Based Involvement dimension and child outcomes. This finding stands in direct contrast to previous research with African American Head Start families, which found Home-Based Involvement to be the most significant correlate of child outcomes. This convergence of evidence across studies underscores the need to conduct culture-specific, multi-site investigation into the measurement of family involvement for important subpopulations.

This poster will present each of the three independent studies directed toward the development of a Spanish translation of the FIQ-EC. For each study, the adaptation process for creating a Spanish version for the specific subpopulation and the psychometric support attained will be detailed. Based on the findings across these three studies, directions for advancing our understanding and assessment of family involvement among diverse Latino cultures will be presented.

References

Fantuzzo, J., Tighe, E., & Childs, S. (2000). Family Involvement Questionnaire: A multivariate assessment of family participation in early childhood education. Journal of Educational Psychology, 92(2), 367-376.

Lopez, M. L. (2004). Multidimensional perspectives on readiness: The evolution of “readiness” within a rapidly changing cultural & ecological context. Invited address for the 4th Annual Cross-University Collaborative Mentoring Conference. New York University. New York, NY.