Assessing Young Children’s Learning: An Interview with Head Start Program Director Mary Lockhart-Findling
This interview is the first in a series, “Measuring Children’s Development: Voices from the Field,” offering perspectives on improving assessments of children’s skills in prekindergarten.
Head Start, a federal early childhood program, provides over a million children with learning, health, and family well-being services across every U.S. state and territory and in American Indian and Alaska Native communities. Fundamental to its success are systems of support that regularly capture information and apply data-driven insights to keep Head Start services high quality, responsive, and aligned with the strengths and needs of the diverse children and families the program serves.
The National Head Start Association’s Victoria L. Jones and MDRC’s Samuel Maves sat down with Dr. Mary Lockhart-Findling, director of a Head Start program at United Community Action Partnership in Minnesota, to learn how her program collects and uses information from pre-K assessments. During the interview, the group discussed the ins and outs of pre-K assessments from a director’s perspective, how they benefit programs, and how they can be improved to better support children’s development.
VLJ/SM: Who is served by the Head Start program at United Community Action Partnership?
MLF: The United Community Action Partnership Head Start program serves about 600 children in Head Start and Early Head Start through centers and home-based services across nine rural counties in central and southwestern Minnesota. The communities we serve are rich with diversity and home to many recent immigrants. Thirty-three percent of enrolled children speak a language other than English at home. United Community Action classrooms have a diverse student body, including strong representation from the area’s Hispanic, Somali, Karen, and Hmong communities.
VLJ/SM: How do pre-K assessments work at your centers?
MLF: To assess an individual child’s development, United Community Action Partnership uses a measurement tool called Teaching Strategies GOLD (TS GOLD). Data collection is an ongoing process, with checkpoints in the fall, winter, spring, and summer for classrooms that are part of the full-year program. Classrooms that are part of the 9-month program use only the first three checkpoints. All of our classroom staff have iPads with a TS GOLD app that they use to enter live notes from child observations. Staff can upload videos to the app as well. With the tool, you can generate reports on areas where you have plenty of documentation and also identify areas that may need more attention.
Because our program is part of Head Start, we have to make sure our assessments align with the Head Start Early Learning Outcomes Framework. There are also the Head Start standards, such as the requirement that programs conduct assessments at a minimum of three checkpoints each year. Head Start provides guidelines around formative assessments (ongoing assessments used to tailor instruction to support a child’s individual development). For instance, you’re not allowed to use the data to rank kids or teachers for reward or punishment. Comparisons should only be made to see how students and teachers are doing and what learning improvements can be made.
Currently, our program uses the Early Screening Inventory-Revised (ESI-R) as a developmental screener. However, the state of Minnesota has recently decided to discontinue use of the ESI-R as a screening tool, so our school districts are switching to the Minneapolis Preschool Screening Instrument. We are getting started with using that screening tool to identify children who may have developmental delays or disabilities.
To assess classroom quality in our Head Start program, we use the Classroom Assessment Scoring System (CLASS). All of our coaches and education managers are CLASS-certified, and we do what we can to keep their reliability as observers up. Our coaches use the CLASS as a coaching tool as well, and they conduct a formal CLASS observation twice a year.
VLJ/SM: How does your program incorporate data from assessments into classroom practices?
MLF: Since assessment is ongoing, data are entered into the online system continuously, and our program is structured so that coaches review the data with teachers, set goals, and make plans to improve teaching strategies. The coaches hold a formal data debrief with teachers after each checkpoint period. During the debrief, coaches sit down with teaching teams and go through all the data: where the strengths are, where there are challenges, and what teaching strategies we need to implement to make improvements. Coaches and teachers also go through the class profile of where kids are and where they should be, which is what the goal setting is based on. They look at where kids are by demographic characteristics as well, for instance, analyzing results for children with individualized education programs (IEPs) or individualized family service plans (IFSPs), or breaking the data out by gender or home language. Each teaching team sets individual goals, and then as an agency, we set goals when we look at the program-wide data as well.
For example, our child-level assessments showed some really interesting results this past fall. For the first time ever, we were finding some delays in children’s physical development. Typically, physical development has been a strength that we’ve identified in our students, with most children meeting or exceeding expectations. But when we collected data in the fall, only about 60 percent of our four-year-olds were on track to meet those specific school-readiness goals. Knowing that, we’ve tried to incorporate more active learning into our instruction. For example, today it’s too cold for children to go outside and play, so our coaches are working with our teachers to incorporate new instructional strategies to get our kids up and moving. There is a special focus on the skills the assessments flagged, such as fine motor skills and other areas where kids might not have had a lot of practice.
VLJ/SM: What are some of the strengths of your measurement tools and the way they operate within your assessment process?
MLF: One of the main strengths of the TS GOLD measurement tool is that we’ve been using it for a very long time and we’re familiar with it. We know our teachers are reliable because of the reliability testing and the training they go through. Another strength of our existing child assessment tool is that it gives you live information. If the data are good and you’re entering them regularly, you can see how you should plan for small groups with particular sets of kids and how you should individualize for a particular child, all in real time. With an ongoing tool you can make those instructional decisions quickly, on a weekly basis. There are so many reports accessible through the app. There’s a function called the class profile tool, which lets you see at a glance how to plan for specific groups of children who are at specific levels in their development. Another advantage that is really important is that our measurement tool aligns with the Head Start requirements in the Early Learning Outcomes Framework.
VLJ/SM: As a center director, which learning domains are most beneficial to measure as part of your work?
MLF: Because we’ve been using the same measurement tool for a long time, we have trend data showing that early math skills are highly predictive of later school success. For a long time, we noticed that math was our lowest area in the percentage of children meeting target indicators of school readiness. We’ve been concentrating on that for the past couple of years and we’ve been seeing progress. We had some really good results last year, so that’s exciting. Every year is a little different, so we try to look at the bigger picture along with the current picture. The measurement tool we use is pretty comprehensive, so almost all of the learning domains are there.
VLJ/SM: How could your measurement tools be improved to benefit administrators and teachers?
MLF: Since the assessment tool covers so much content, it may be a little too much for teachers. It is a huge tool, and it would probably be helpful to whittle it down to the key dimensions that predict later school success. Some learning domains overlap, and additional research is needed to determine whether they really reflect later school success, or even kindergarten outcomes. Since the tool is so lengthy, the question is how we can trim it down to remove what’s not necessary.
VLJ/SM: How do teachers in your program feel about the assessment process?
MLF: Teachers’ feelings about assessments largely depend on how much they’ve experienced the process: whether they are new teachers just surviving, or very experienced teachers who know the tool inside and out. What I mostly hear is that the length of the tool can be challenging; the capacity teachers have to devote to the tool is a big part of how useful it will be for them. But teachers find that it is an effective tool for planning and for understanding children’s progress, especially when they’re individualizing instruction and setting up small-group learning. It’s also helpful for following children’s interests when planning curriculum activities.
That said, the tool can be really cumbersome for new teachers. Hopefully some of the features embedded in the TS GOLD app reduce the burden of assessment, but they don’t always work as planned. However, when we do data reviews and teachers really see the growth that’s happening, they get very excited because they know all the work that they’ve put into that growth. And when they can see the larger picture and where their classroom falls within that aggregate-level data, they really do get excited. I think our teachers are becoming pretty data-savvy as a whole.
Some teachers have questions about the reliability of assessments. They’ll point out that they may have more children on IEPs, or other circumstances that aren’t picked up or acknowledged by the assessment tool. The data aren’t going to tell us how many children should be referred for mental health services, for example. And that’s important because something like social skills could really be affecting learning.
VLJ/SM: What conversations have your staff had about how your assessment process accounts for cultural differences among the populations you serve?
MLF: We’ve been trying to work through the question of cultural differences ourselves. About a third of our students come to us as non-English speakers or dual language learners (DLLs). What we’re finding is that those DLLs, no matter what their home language, seem to be behind their English-only speaking peers. We’re finding those results even in the fall, when we’re just assessing children to see where they are before we’ve had any impact on them. So we’re not sure if that’s reflective of the tool, if there is implicit bias in the assessing, or if children are simply coming to us at different skill levels. It’s a question we have but don’t yet have an answer to, and we continue to look for trends. We’ve tried to target training at the things we think might be causing the disparities, with a lot of implicit bias training and training on how to assess DLLs and other non-English speakers, but we’re still seeing this pattern in our data even at our first checkpoint period. It could be the tool itself, too; we just don’t know.
Mary Lockhart-Findling, PhD, is the Head Start director at United Community Action Partnership. She has worked at the program for 33 years in multiple roles, becoming director in 2016.
Victoria L. Jones is the National Head Start Association’s (NHSA) Senior Director of Data. Victoria supports efforts to strengthen child and family outcomes by helping Head Start programs move toward a culture of continuous quality improvement. Her portfolio includes leading the Data Design Initiative, working on the Parent Gauge team, and contributing to other projects at NHSA that directly support Head Start practitioners.