Early Childhood Research & Practice
Volume 10 Number 2
©The Author(s) 2008

Coaching as Part of a Pilot Quality Rating Scale Initiative: Challenges to—and Supports for—the Change-Making Process

Debra J. Ackerman
National Institute for Early Education Research
Rutgers, The State University of New Jersey


Several nonprofit agencies in a large Midwestern city provide assistance to early care and education programs participating in a pilot Quality Rating Scale (QRS) initiative by pairing them with itinerant consultants, who are known as coaches. Despite this assistance, not all programs improve their QRS score. Furthermore, while pilot stakeholders know how much time a coach has spent working with a particular program, little is known about the consulting process itself and the additional challenges that can impede coaches’ attempts to bring about change. As a result, QRS stakeholders lack a clear understanding of the assistance that coaches might need, as well as the additional programmatic interventions that could augment coaches’ work. This article reports on a study focusing on the QRS consulting process, as well as some of the factors that present challenges to that process. The article concludes with recommendations for addressing these issues and in turn enhancing the QRS coaching process.


Research suggests that the quality of center-based early care and education (ECE) and family child care homes in the United States is generally mediocre (Helburn, Culkin, Morris, Mocan, Howes, & Phillipsen, 1995; Kontos, Howes, Shinn, & Galinsky, 1994). In addition, parents often have little knowledge about how good a program actually is (Cryer & Burchinal, 1997). In an attempt to improve quality and help parents make informed choices about the most appropriate settings for their young children, at least 14 states give ECE programs the opportunity to participate in voluntary quality rating scale (QRS) initiatives (National Child Care Information Center, 2007). Also known as Quality Rating & Improvement Systems, these initiatives seek to provide an incentive for programs to improve their quality by paying higher subsidy rates upon attaining higher scores. Knowing where programs lie on a QRS can presumably also help parents judge the quality of their ECE options (Mitchell, 2005).

In 2004, several nonprofit agencies in a large Midwestern city began implementing a pilot QRS initiative. The initiative awards ECE programs 1 to 5 stars based on their Rating Scale (Harms & Clifford, 1989; Harms, Clifford, & Cryer, 1998; Harms, Cryer, & Clifford, 2003) score(s), partnership with families, adult-to-child ratios, and the education and training received by staff (see Appendix A). To help programs successfully participate in the QRS initiative, these agencies also provide three main types of assistance. First, center-based programs and family child care providers may submit an application for mini-grants of up to $5,000 and $2,000, respectively, to purchase materials or fund facility improvements. Second, teachers and providers may apply for scholarships to attain their Child Development Associate (CDA) credential or associate’s degree. Finally, to help participants navigate the many programmatic and professional changes that often come with improving practice (Fullan, 2001), they are also provided with itinerant consultants, who are known as coaches.

Despite receiving coaching and financial aid, almost two-thirds of ECE programs participating in the first two years of the pilot did not improve their QRS score (personal communication with QRS stakeholder, November 22, 2006). Furthermore, while stakeholders knew the quantity of money and coaching time provided to programs, little was known about the consulting process itself and how it might be improved. In addition, stakeholders did not have concrete information about the site-related factors that may have impeded coaches’ efforts. As a result, QRS stakeholders lacked a clear understanding of the assistance that coaches might need, as well as the additional programmatic interventions that could augment coaches’ work. With the aim of shedding light on these issues, this article reports on a study focusing on the QRS consulting process, as well as some of the contextual factors that present challenges to that process. The article also outlines some recommendations for mitigating these issues and for future research. To begin, the theoretical framework guiding the study is discussed.

The Challenges of Bringing About Change in ECE Settings

Itinerant consultants are used in early childhood settings in a variety of change-focused ways. Consultants help teachers integrate new knowledge and curricula into their practice (Freidus, Grose, & McNamara, 2001; Ryan & Hornbeck, 2004; Ryan, Hornbeck, & Frede, 2004), serve children with special needs (Buysse, Schulte, Pierce, & Terry, 1994; Harris & Klein, 2004; Hendrickson, Gardner, Kaiser, & Riley, 1993), and improve program quality (Fiene, 2002; Palsha & Wesley, 1998; Sibley & Kelly, 2005; Wesley, 1994). They also link training initiatives with teachers’ everyday classroom experiences (Divine & Fountain, 2005; Lizakowski, 2005). In addition, consultants share information about training opportunities, lessening behavior problems, and structuring the daily schedule (Ackerman & Thomas, 2007; Head Start Bureau, 2001).

While itinerant consulting would also seem to be an effective means for helping ECE settings increase their QRS scores, bringing about such change may not be an automatic process. As is the case in many change-focused situations, adopting new beliefs and practices often involves unlearning old ways, rethinking new approaches, and “taking risks” (Bransford, Brown, & Cocking, 2000, p. 183). Individuals involved in a change process must therefore perceive any modification as a significant priority, or the short-term personal costs of a new activity or approach might appear to outweigh the long-term benefits. They must have a sense of clarity regarding both the goal of the effort and what they need to do to achieve the goal in their daily practice. They must also sense that the change is complex enough to be worth the effort but not so demanding that success seems impossible. In addition, program quality must not suffer because adoption of a change is placed ahead of quality and practicality (Fullan, 2001).

Bringing about change in ECE settings specifically can be constrained by additional challenges. While consultants may prefer a more collaborative approach to problem solving and goal setting (Buysse, Schulte, Pierce, & Terry, 1994), the lack of free time to meet can hamper such efforts (Bellm, Whitebook, & Hnatiuk, 1997; Harris & Klein, 2002; Pavia, Nissen, Hawkins, Monroe, & Filimon-Demyen, 2003). Even if time is available, almost three-fourths of states do not require teachers in center-based programs to undergo any preservice training (Ackerman, 2004). Teachers may therefore not have the educational background that facilitates a collaborative approach (File & Kontos, 1992). In turn, they may not be knowledgeable about or understand the “why” behind their everyday practice, much less be accustomed to reflecting on child development theory or their practice (Dickinson & Caswell, 2007).

Furthermore, minimal profits provide little incentive for family child care providers to improve their quality (Helburn, Morris, & Modigliani, 2002). Low wages also contribute to turnover problems and teachers’ ability to pay for training sessions that might improve their knowledge and practice (Ackerman, 2006). While directors are key for program quality (Bloom, 1999; Morgan, 2000; Neugebauer, 2000), the nominal credentials required for directors in most states (LeMoine & Azer, 2005) may indicate that they also lack the expertise or knowledge base to facilitate quality improvements. Programs may also have limited professional development funds or resources necessary to put change-focused training into action (Zeece, 2001).

Consultants themselves may lack the skill set necessary to be effective. While they usually come to the field already possessing a college degree and some type of early-childhood-related experience (Wesley & Buysse, 2004), as change agents, they need an additional specialized knowledge base (Rust & Freidus, 2001). They may begin their coaching believing that change will be easy, and therefore they may be unprepared when confronted with mistrust, a lack of leadership or engagement in the change process, or the personal problems of program staff (McCallister, 2001; Rust, Ely, Krasnow, & Miller, 2001). Contextual issues that are beyond their control can also constrain implementation (Poglinco, Bach, Hovde, Rosenblum, Saunders, & Supovitz, 2003). Training may be required not only in the content area of their work but also in encouraging buy-in, working with adults, implementing the kinds of strategies that can bring about change, and developing effective communication and conflict management skills (Buysse & Wesley, 2005; File & Kontos, 1992, 1998; Klein & Harris, 2004). Consultants must also understand that bringing about change is often not just about teaching technical skills but also about changing the culture and vision of the organizations they work with (Lieberman, 2001).

In sum, while the provision of itinerant consulting would intuitively seem to enhance ECE providers’ capacity to improve program quality, the process may not be the equivalent of a simple “1 + 1 = 2” equation. With this framework in mind, the purpose of this study was to examine the QRS consulting process and the contextual factors that present challenges to it.


Method

To learn more about the coaching process and the issues that affect efforts to raise QRS scores, a survey was conducted of the coaches participating in the pilot QRS initiative. The data collection and analysis procedures follow a description of the sample.


Sample

Seventeen of the 18 QRS coaches working for the nonprofit agencies affiliated with the pilot initiative participated in this study. The coaches were briefed about the study at a monthly meeting attended by the author and other QRS stakeholders. They were informed that they would receive more details about what their participation would entail through a follow-up electronic communication. The coaches were also told that their participation was voluntary and would not affect their continued employment.

As can be seen in Table 1, all participants were female. Three were African American, one was Latina, and the remainder were Caucasian. Coaches’ ages ranged from 25 to 64 years old, with the average being 46. Their experience as a QRS coach ranged from less than one year to 13 years. Six of the coaches were relatively “new to the job.” Seven coaches had at least five years of experience with their agency. The majority of coaches earned annual salaries between $30,000 and $40,000. Only two earned more than this amount. Their caseloads varied from 4 to 24 sites. Eleven coaches worked with both family child care homes and center-based programs. Five worked exclusively with center-based settings, and one worked with family child care providers only.

Table 1
Coach Demographics*

Coach | Education | Present Job Experience (years) | EC Experience (years) | Direct Care Experience (years)
Alayne | MA, Educational Administration | 2 | 8 | 5
Cady | BA, Family & Child Development; MA, Family Life Education | 1.3 | 26 | 2
Ellie | MA, Early Childhood Education Curriculum & Instruction | <1 | 17 | 9.5
Esther | BA, Child Development | <1 | 13 | 13
Freddie | 2 years coursework in Home Economics Education | 6 | 33 | 22
Gail | BS, Elementary Education | 2.5 | 15 | 15
Jaclyn | AA, Interior Design; BA, Elementary Education | 8 | 17 | 18
Giselle | BA, Psychology; MA, Curriculum & Instruction with EC emphasis | <1 | 9 | 7
Kari | BA, Early Childhood Education; MA, Early Childhood Education/Education | <1 | 24 | 4
Kristy | BA, Business & Psychology; MA, Early Childhood | 5 | 30 | 25
Lisa | BA, Child & Family Development | 2 | 14 | 11.5
Luisa | AA, Early Childhood Education; BA, Psychology & Social Work; MA, Education | 10 | 22 | 15
Maris | BA, Early Childhood Education | <1 | 18 | 8
Mia | MA, Theology | 5 | 20 | 0
Mona | BA, Human Development & General Studies | 6 | 6 | 0
Sharon | BA, Child & Family Development; Graduate credits in Adult/Continuing Education & Management of Nonprofit Organizations | 13 | 32 | 20
Trina | BA, History of Art & Architecture; MA, Educational Psychology (Special Education); MEd, Child Study | <1 | 24 | 21

*AA = associate’s degree; BA = bachelor’s degree; BS = bachelor of science; MA = master’s degree; MEd = master of education. Experience columns are reported in years.

The coaches came to their jobs with a relevant educational background. With the exception of one coach, all had attained a minimum of a bachelor’s degree. Almost all of their degrees focused on education more broadly or some aspect of early childhood. In addition, each had experience working in an early-childhood-focused program. Thirteen participants had previous experience working as a teacher trainer or mentor or providing technical assistance to early childhood initiatives. While two coaches did not have any experience being an ECE teacher or director, the average length of direct care experience was 13 years.

Data Collection

Data collection occurred through a self-administered survey (see Appendix B) that was sent electronically to the coaches after obtaining their informed consent. They were asked to download the survey to their home or work computer and then return it to the author via email. Pseudonyms were assigned to all coaches to protect their privacy.

The survey was written by the author and other QRS stakeholders, including representatives from one of the QRS agencies and the organization funding the study. The survey evolved after this small group met to discuss the lack of concrete information on the coaching process and how it might be improved. The group then met again and communicated via phone and email regarding what they wanted to learn from the coaches, some questions that might provide this information, and which data collection method was best given time and budget constraints. Once the group decided on using a self-administered instrument, the author wrote a draft survey, which was reviewed and revised by the group and piloted with several former coaches. Their feedback informed the last stage of the survey process, which was to refine the questions and choose which ones were most relevant.

The final survey had three sections and consisted of 78 closed- and open-ended questions. The coaches were asked to answer the questions in Section A with their most-challenging center-based program in mind. Section B’s focus was their most-challenging family child care home. For both sections, the term “most challenging” was left undefined in order to gather as much information as possible about the difficulties coaches faced. The 12 questions in Section C—which was for all coaches, no matter which auspice they worked in—asked them to reflect more broadly on their overall coaching and provide demographic information about themselves.

Five questions in Sections A and B focused on the characteristics of the center or family child care home that each coach considered to be the most challenging. About half of the remaining questions in both sections asked about the four key phases of the QRS coaching process: (1) providing an initial orientation to programs, (2) sharing the quality performance profile, (3) developing the quality improvement plan, and (4) implementing the plan. Reflecting the change process and the ECE literature, the additional questions in these two sections asked coaches about the importance of various criteria when making decisions within these phases, as well as the organizational and personal factors that may have made any phase more difficult to carry out. The questions themselves are outlined in more detail below.

Data Analysis

The author used a mixed methods approach to analyze coaches’ survey responses. The closed-ended responses were entered into an SPSS database to generate descriptive statistics. Coaches’ qualitative responses were sorted according to the study’s research questions and then coded by looking for patterns within individual survey questions, as well as across different questions (Miles & Huberman, 1994). These patterns were then grouped to generate emerging themes (Stake, 1995). Survey responses that summed up these emerging themes were highlighted for possible inclusion in this article.
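As an illustration of the descriptive-statistics step, the per-criterion means of the closed-ended Likert items (such as those reported in Table 2) can be computed directly from raw ratings. The criterion labels below come from the survey, but the individual ratings are hypothetical, invented only to show the calculation:

```python
from statistics import mean

# Hypothetical 1-5 Likert ratings (1 = not at all important,
# 5 = very important) from four coaches; values are illustrative only.
ratings = {
    "Health & safety issues": [5, 5, 5, 5],
    "Director/provider preference": [5, 4, 4, 5],
    "Issues that were related to accreditation": [2, 3, 1, 2],
}

# Mean importance per criterion, sorted from most to least important,
# mirroring the descriptive statistics generated for the closed-ended items.
for criterion, scores in sorted(ratings.items(), key=lambda kv: -mean(kv[1])):
    print(f"{criterion}: {mean(scores):.2f}")
```

With these invented ratings, the loop prints the criteria in descending order of mean importance (5.00, 4.50, and 2.00, respectively), which is the same summary form used in Table 2.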

While coaches’ self-reported responses about their activities and the challenges within programs were not verified, the author looked for convergence within an individual coach’s survey. All of the responses were triangulated by comparing them with what other coaches shared, as well. In addition, preliminary findings were shared with QRS stakeholders to make sure they “made sense” based on their greater knowledge of participating programs (Huberman & Miles, 1994; Mathison, 1988; Meijer, Verloop, & Beijaard, 2002; Patton, 2002).


Findings

As mentioned above, QRS coaching involves four sequential phases. First, coaches brief a program on the initiative. Next, after researchers from a local university assess program quality using the appropriate Rating Scale(s) (Harms & Clifford, 1989; Harms, Clifford, & Cryer, 1998; Harms, Cryer, & Clifford, 2003) and note other QRS criteria, coaches explain the resulting Quality Performance Profile (QPP) to participants. Coaches then use the QPP to develop a Quality Improvement Plan. In the final stage, the coach helps a program implement the plan, with the aim of attaining a higher QRS rating when reassessed in a year.

While these specific activities may be sequential, coaches’ survey responses often indicated that their efforts to help programs improve their QRS scores were not always so linear. In addition to keeping quality issues in mind, coaches must also pay attention to participants’ buy-in and their willingness and capacity to enact change. Other challenges can also make improving ECE program quality easier said than done.

“Getting to Know You”: Beginning the Coaching Process

Buysse and Wesley (2005) talk about the importance of gaining entry and building the relationship in any itinerant consultation initiative. Coaches were therefore asked through an open-ended question how they initially familiarized their most-challenging center or family child care home with the QRS initiative and their role as coach.

Several of the recently hired coaches could not answer this question because they had not been present during the first phase of the QRS process. However, those who coached a site from the beginning usually followed a two-pronged approach, particularly when working with centers. First, coaches explained the QRS initiative through a one-on-one conversation with the director. They also provided the director with QRS-related handouts. The second step of the familiarization process involved a presentation to the staff, often using PowerPoint slides and again providing handouts. Special care was taken to emphasize and answer questions about the key elements of the QRS initiative, as well as the potential benefits to participants, including scholarships for staff professional development and mini-grants for programmatic concerns.

Coaches were also asked through an open-ended question about their strategies for moving beyond merely “gaining entry” and getting their relationships “up and running” with directors, teachers, and family child care providers in these challenging sites. Their responses indicated that, despite their itinerant status, none of the coaches viewed her role as one of merely showing up, dispensing advice, and then leaving. Instead, great care was taken to build a professional relationship. Coaches wrote of immediately learning teachers’ names, spending time in individual classrooms, attending staff meetings and trainings, making weekly appointments with directors, and, in general, “getting to know them personally.” Furthermore, these activities seemed to reflect an intrinsic desire to build a coach-mentee relationship. In fact, when asked through an additional open-ended question about the special qualities, traits, or other skills they bring to coaching, almost every coach remarked on her ability to establish rapport or build relationships with people.

For some coaches, getting the relationship “up and running” also involved helping teachers out in their classrooms. Jaclyn shared her strategy:

I worked along side [sic] the staff getting to know them and the children they serve. I supported them with behavioral issues, such as biting, with techniques to use, and possible solutions. I found out what kinds of support they felt they needed. I brought in resources I purchased at thrift stores, garage sales, and the recycled materials shop.

Some coaches mentioned that another important aspect of getting their coach/mentee relationship “up and running” was consciously conveying respect for the difficult nature of bringing about program change. As Lisa shared:

I listened to their ideas and took their frustrations seriously, respecting the difficulties they faced and attempting to hold them accountable, while understanding their barriers to forward movement.

Coaches also reported that they tried to present themselves as true partners in the change process. Trina summed this up by saying she had a “firm belief that what we develop together will always be much more than what any of us can do on our own.” In addition, despite their years of experience as teachers and directors, no coach commented that she considered herself to be an early childhood expert. However, many remarked that such experience gave them a strong desire to be an ECE-focused change agent. As Alayne commented:

I presented myself not as the reigning authority in early childhood, but as a concerned individual who may be able to help them answer some of the questions that they had about the children or the way that they taught.

In a similar vein, Ellie commented, “I listened to all they had to say and shared stories about my experience as a teacher, creating the atmosphere of ‘we are all not perfect, but we can work to improve what we do.’”

Lastly, perhaps because the QRS process can entail pointing out the low overall quality of many programs and thus potentially alienating participants, coaches also indicated an emphasis on reminding participants of the positives in their programs. As Lisa also shared about her strategy for building a relationship with a challenging family child care provider, “I would just talk with her about how things were going, listen to her concerns, share stories, and affirm the things that she did well.”

Interpreting the Quality Performance Profile

Once programs have been oriented to the QRS initiative and also assessed according to the QRS criteria framework, coaches must share the Quality Performance Profile (QPP) with directors, teachers, and family child care providers. The survey asked coaches through an open-ended question to explain how they interpreted the QPP with their most-challenging center or family child care provider.

Center-based coaches generally met with the director or administrative team first to review each section of the QPP, how the scoring worked, and what the results meant. For many coaches, this process involved explaining in further detail the ECERS (Harms, Clifford, & Cryer, 1998) or ITERS (Harms, Cryer, & Clifford, 2003). A similar process was followed with family child care providers, but using the results of the FDCRS (Harms & Clifford, 1989). No matter what the setting, coaches also reframed the QPP into constructs that span the different elements of the ECERS, ITERS, or FDCRS. As Kristy explained, “[The director and I] looked at global things that needed to be done (handwashing and sanitation), things that needed money to achieve (furnishings and materials), and things that we could teach the staff (interactions, parent involvement).”  

In centers, coaches’ next step—unless asked by a director not to do so—was to review the QPP with the entire staff. This step might involve making a notebook explaining the various sections of the QPP and “how the components would be involved in their program” or reports for individual classroom teachers. Whether talking with directors, teachers, or home providers, coaches continued to make an effort to “focus on the program strengths and what [they] were already doing well.” Ellie shared that for her it was important to explain that “the QPP does not mean that the program is bad, but rather that [it] is where they are in their journey to provide excellence of care, so there is the need to identify areas of weakness and improve upon them.” Mona shared that in her most-challenging child care home she “framed [the QPP] as a baseline/snapshot from which we could discuss and consider and we decided upon goals and priorities for program improvement.”

It should be noted that this emphasis on program strengths does not mean that coaches are blind to a program’s faults. If anything, one gets the sense that in many programs the “areas of weakness” went beyond what the ECERS, ITERS, or FDCRS might measure. Rather, such an emphasis seemed to be a method for both easing the sting of disappointing results and also helping QRS participants not feel overwhelmed. This approach also seems to be a strategy for encouraging participant buy-in as coaches move into the next step in the process: the Quality Improvement Plan.

Goal Setting: Determining a Quality Improvement Plan

Once directors and teachers or family child care providers understand their QPP, the third phase of the QRS journey involves using the QPP to design a “plan of action” for improving quality. Formally known as a Quality Improvement Plan (QIP), the QIP serves as a blueprint for goals that a program will work toward achieving.

The survey asked coaches through an open-ended question about their process for building a QIP at their most-challenging center or family child care home. Coaches described meeting with directors and teachers to formulate overall program goals and specific classroom goals, and also to break down goals into individual “small steps.” They also prioritized the recommendations or suggested working on a few goals at a time. QIP goals do not seem to be static, either, in that they are periodically reviewed and updated by coaches. One coach mentioned that she placed her most-challenging center’s goals on a clipboard in each classroom to help facilitate the review process.

The survey also asked coaches to indicate on a 1-5 Likert scale—with 1 meaning the issue was not at all important and 5 meaning it was very important—the importance of various criteria when making decisions about their most-challenging site’s QIP. They were also given the opportunity to comment on why they ranked any criterion as more or less important. The criteria were drawn from the literature on change in ECE specifically and education settings more generally, the subscales of the quality assessment instruments used in the QRS initiative, and the overall QRS process.

As can be seen in Table 2, while the relative importance of most of these criteria in QIP decision making was not exactly the same for centers and family child care homes, the number one criterion in both settings was health and safety. Examples included “staff lifting children by their arms, children left unsupervised or staff leaving child unattended on a changing table.” Because such actions can also be state licensing violations, this factor also appeared to be non-negotiable in terms of receiving immediate attention.

Table 2
Importance of Various Criteria in QIP Decision Making

Criterion | Centers (n = 16) | FCC Homes (n = 12)
Health & safety issues | 5.00 | 4.92
Areas in which director/provider could experience early success | 4.75 | 4.42
Director/provider preference | 4.44 | 4.75
The financial cost of rectifying any issues | 4.06 | 4.00
Goals that required minimal effort on the part of the coach or provider | 4.00 | 3.58
Issues related to children’s learning environments | 3.94 | 3.67
Issues that would constrain the completion of additional tasks | 3.81 | 3.79
Issues that would result in greater QRS points | 3.56 | 3.83
Goals that required the least coaching to achieve | 3.38 | 2.83
Goals that required a significant amount of coaching to achieve | 3.25 | 2.50
Issues that would have taken a long time to address or accomplish | 2.75 | 2.58
Issues that were related to accreditation | 2.13 | 2.50

When not involving a health and safety violation, however, staff experiencing early success was listed as the second and third most important QIP criteria in centers and family child care homes, respectively. Ellie pointed out that “A little success always increases staff confidence, helping them to believe they can tackle bigger issues.” Lisa added that “anything that could allow [her most-challenging family child care provider] to take steps forward and see that ‘complete’ on a goal … was a great motivator.” Rounding out the top three in both settings was the personal preferences of the center director or family child care provider. Coaches are not merely being “nice,” either. Again, focusing on this criterion seems to be a purposeful strategy for enhancing the likelihood that change will occur. As Sharon noted, “If the director did not see the criteria as a priority, then it would be pretty difficult to accomplish anything!”

The fourth most important criterion in both settings was the financial cost of rectifying any quality issues. This factor was defined as issues being included or excluded in the QIP because the issues were either affordable or very expensive, respectively. However, most coaches indicated somewhere in their survey responses that a big issue for their most-challenging programs was insufficient funds to support quality improvements. As mentioned above, programs can apply for mini-grants, which average $3,100 in centers and $1,500 in family child care homes (personal communication with QRS stakeholder, November 9, 2006). Lisa noted that “The cost was a factor in setting goals because we knew funds were not available.” In Gail’s most-challenging center, “There was [sic] few funds for even basic materials like paper.” Sharon added that her challenging center “is really operating with bare bones funding, often having difficult times meeting overhead and salary expense, with little money available for program improvement.”

One theme that emerged in coaches’ comments on how they ranked the remaining criteria was that their decisions were often driven less by the time or effort required on their part, or even by improving children’s learning environments, and more by what needed to change first so that other goals might be reached in the future. For example, Kristy mentioned that “before we could get new materials, we needed shelves to put them on.” Gail also expressed that in her challenging center, “there were many underlying issues that needed to be addressed before work could begin on those areas that would increase QRS points.” This philosophy also reflects the incremental nature of scoring for the ECERS, ITERS, and FDCRS, which can guide decision making for the QIP. Kristy explained her approach to quality improvement:

We worked on the [ECERS/ITERS] scores that were the lowest first…. We used grant money to do physical improvements that would also help raise the lowest scores. I found it helpful to all involved to ignore items whose scores were too far above the overall average to be achievable. In other words, if the center scored 3.14, we needed to work on all the 1s, all the 3s, and at least half the 5s to raise their score to a 4.0 [and thus receive QRS points].
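Kristy’s arithmetic follows from the fact that an overall ECERS, ITERS, or FDCRS score is the mean of item scores on a 1-7 scale, so raising the lowest-scoring items yields the largest gains. A minimal sketch of this logic, using a small set of hypothetical item scores rather than an actual profile:

```python
from statistics import mean

# Hypothetical item scores on the 1-7 rating scale used by the
# ECERS/ITERS/FDCRS; a real profile has several dozen items.
items = [1, 2, 3, 3, 3, 4, 6]
baseline = mean(items)            # 22 / 7, roughly 3.14

# Work on the lowest-scoring items first: bring every item below 4 up to 4.
improved = [max(score, 4) for score in items]
target = mean(improved)           # 30 / 7, roughly 4.29

print(f"baseline {baseline:.2f} -> improved {target:.2f}")
```

In this invented profile, lifting only the items scoring below 4 moves the overall mean from about 3.14 to about 4.29, crossing the 4.0 threshold without touching the already-adequate items.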

Interestingly, while often touted as a proxy for program quality (Whitebook, 1996) and required to receive the highest ranking in any QRS system (Mitchell, 2005), the item receiving the lowest average Likert scale score for QIP decision making was accreditation. While this may be because accreditation results in only 2 points toward the total QRS score, the larger reason may be that most of the challenging programs cited by the coaches began with 1 star. Such an initial rating means that program quality is too low for accreditation to be a realistic consideration, giving it little relevance in terms of short-term QIP goals.

Improvement Focus: Choosing between Directors or Teachers

In addition to deciding which factors should take precedence when crafting a QIP, when working in a center, coaches must decide whether they should focus their efforts on the director or the center’s staff. The survey therefore also asked coaches which entity received their initial attention and, through an additional open-ended question, why this was the case. Nine of the 16 coaches who worked in centers chose the director. For two coaches, this decision was the expressed wish of the director. In a third case, high teacher turnover left the coach with no other alternative. In the remaining six cases, coaches intentionally chose the director because, as Ellie explained:

...it is important for [her] to see the big picture and to help in the plan of action that will help attain the goal. if [she] understands and she is willing, she will support the staff in their effort to work toward the goal.

Kari also reflected this viewpoint in her explanation. To her, “the director is the leader. All things must go through her initially.”

Yet, seven center-based coaches first directed their attention to teachers—especially when directors were not strong administrators. As Gail explained, “The fact that there were three directors with undefined roles meant that there was no one person who would take responsibility for making changes.” Similarly, Alayne recalled that “the director often did not follow through with the teachers, and therefore no tasks were being completed.” Kristy related the “lack of intentionality” on the part of the director at her most-challenging center, as well as the director’s inability to hold her teachers accountable. The director’s lack of follow-through and communication abilities also influenced Lisa’s initial focus. She mentioned that her “coaching seemed to be more direct and intensive if applied to the actual teachers…. In this way, they would be completely informed, there would be less miscommunication, and they could see their accomplishments more quickly as I related them to the classroom and their teaching.”

At the same time, almost all of the center-based coaches eventually included the “other” party—meaning teachers if their initial focus was the director, or vice versa—in their coaching. Coaches who subsequently focused on teachers talked about the need for staff buy-in and helping teachers develop individual classroom goals as a means for overall program improvement. Interestingly, for those coaches who first worked with teachers, their subsequent focus with a director was not solely on QIP items. Instead, they also tried to help the director become an accountability-focused administrator. Kristy explained the reasoning for such a focus:

I heard many times that teachers would do what they were supposed to when I was there, but not when I left. The director felt powerless in working with the teachers, and had little say in the goals being made. By working with the director, she became the one responsible for holding the teachers accountable and was able to begin bringing the consistency of performance up. She felt more like she was leading her team, and things began being done more consistently. 

The recognition that coaching needed to focus on helping a director improve her administrative skills as a means for bringing about change at the classroom level was mentioned by five of the seven coaches who initially focused their efforts on teachers. As is discussed in the next section, this is not the only contextual issue that has the potential to constrain coaches’ efforts to bring about change.

Issues Impacting Coaches’ Work in Their Most-Challenging Sites

Given the role that contextual factors can play in constraining change, the survey asked coaches to rate on a 1-5 Likert scale whether various issues contributed to making a center or family child care home worthy of the self-defined “most-challenging” designation (see Table 3). Reflecting the minimal educational requirements necessary to work as a director, center-based teacher, or family child care provider, four of the issues were related to accessing training, and two focused on literacy levels and English-language skills. The remaining three issues were drawn from discussions between the author and coaches at their pre-survey meeting, as well as conversations with the team who helped write the survey. Coaches were asked to rate each issue with a 1 if it was not an issue at their most-challenging center or child care home, a 3 if it was somewhat of an issue, and a 5 if it was a very large issue.

Table 3
Factors Contributing to a Site’s Coaching Challenges

Factor | Centers (n = 16) | FCC Homes (n = 12)
Participants* had financial, health, or other personal issues that took up most of their time or distracted them from the coaching goals. | 3.31 | 4.00
Participants did not have the opportunity to observe best practices in high-quality programs. | 3.44 | 3.08
The attitudes or motivation level of participants were not conducive toward—or even worked against—bringing about change. | 2.94 | 2.75
Participants had inadequate literacy levels. | 2.81 | 2.00
Transportation problems resulted in limited access to training. | 2.75 | 1.64
Participants couldn’t afford to pay for training. | 2.63 | 1.45
Location of training made access difficult. | 2.63 | 1.36
Trainings were held at an inconvenient time. | 2.44 | 1.55
Participants were not fluent in English. | 1.56 | 1.00
*Participants = director, teachers, or family child care provider.

As can also be seen in Table 3, only two factors received an average rating greater than 3. The factor with the highest average rating for center-based coaches was “participants (director, teachers, or child care provider) did not have the opportunity to observe best practices in high-quality programs.” The most salient issue for family child care home coaches—and also the only factor to receive an average rating of 4.0—was “participants had personal issues that took up most of their time or distracted them from the goals of the QRS initiative.” These issues included spouses’ health, behavioral problems of a teenage son, the death of a parent, and care of a disabled sibling. The coaches noted that in more-traditional jobs, employees might take time off to address such issues. However, in a family child care home, “taking time off” would also involve finding a substitute caregiver or closing down temporarily, neither of which would likely be a feasible option.

While not ordered exactly the same in both settings, the mean scores for issues related to accessing training were 2.44–2.75 in centers and 1.55–1.64 in family child care homes. Participants’ English fluency was not an issue. However, staff literacy levels received a mean score of 2.81 from center-based coaches and 2.0 from those working in child care homes, scoring higher than any of the training access factors.

Coaches were also given the opportunity through an open-ended question to note any additional issues that contributed to a program being particularly challenging to coach. Four of the issues related to QRS participants’ individual characteristics and were cited 11 separate times. These included the personal problems of a director or family child care provider, as well as a lack of motivation and of the capacity to follow through or make changes. Program-level issues included low teacher wages and the resulting low staff quality, the lack of time for professional development, a program’s financial instability as a result of low enrollment, and the negative organizational dynamics in play at a program.

Coaches’ Reflections on Failures and Successes

The QRS process is intended to help ECE centers and family child care homes improve their quality. Yet, despite a well-thought-out mix of evaluation, plan making, and technical assistance, not all participating sites actually improve. Of the 84 programs that participated in the pilot QRS initiative over a two-year period, 41 stayed at the same star level and 12 decreased their star rating (personal communication with QRS stakeholder, November 22, 2006).

To provide even more insight into why some programs failed to improve, two open-ended questions asked coaches to reflect more broadly on their experiences. Coaches were first asked to think about whether there were patterns or similar characteristics among all their sites that have failed to improve or had declining scores. The second question was similar but the inverse, asking for any noticed patterns or characteristics in programs that have experienced the most improvement.

Twelve of the 17 coaches indicated that they saw commonalities among programs at either end of this continuum. As can be seen in the left half of Table 4, a common characteristic cited for programs that failed to improve or experienced a declining QRS score related to a lack of motivation or follow-through on the part of a director or family child care provider. Other characteristics mirror the constraints in bringing about change in ECE settings, such as inadequate financial capacity and the education level of participants.  

Table 4
Common Characteristics of Sites

“Failed to Improve” Sites | “Experienced the Most Improvement” Sites

Director-level Characteristics

Lack of expectation, motivation, buy-in, or follow-through to make changes or succeed | Sense of personal commitment and motivation to do a better job or wanting to participate in QRS in order to increase quality
Personal problems of director or family child care provider | Higher educational level and degree of professionalism, as well as personnel management and leadership skills
Lack of leadership | Strong follow-up on part of director, with expectation of accountability

Program-level Characteristics

Inadequate financial capacity to implement change |
Low teacher wages/low quality of staff/high staff turnover |
Low education level of QRS participants; lack of knowledge about curriculum and developmentally appropriate practice | Majority of teachers with CDA or better; director sets professional development example

As can be seen in the right half of Table 4, programs that are perceived to have experienced the most improvement share attributes that are in some cases the direct opposite of these “failed” characteristics. For example, eight coaches mentioned the director or family child care provider having a sense of personal commitment and motivation to do a better job. Another common characteristic often cited for programs that improve their QRS scores is teachers having a CDA. All of the remaining comments related to the director. Bloom (1999) argues that “directors are the ‘gatekeepers to quality,’ [and] set the standards for teachers and other staff to follow” (p. 207). Coaches appear to perceive directors as the “make or break” gatekeepers for QRS success, as well.

Coaches were asked one final open-ended question about when they have felt most successful as a coach. Many coaches answered by recalling certain activities that they engaged in when helping programs, such as giving feedback, providing training, and helping teachers attain their CDA. In addition, almost all of the coaches’ responses had a theme of positive movement on the part of the program they were helping, such as “seeing the center steadily move” or “providing them with the support to move forward.”

Given their focus on learning and growth, it was surprising that no coach said she felt most successful when she learned something new about the coaching process—whether from a colleague, from ongoing training, or through her own experience—and was able to integrate it into her repertoire of strategies to help a challenging program. To be sure, the survey’s emphasis was on the coaching process at their most-challenging sites, rather than perceptions of their own growth as a coach. However, the absence of a more personally introspective answer about the value of personal learning may also reflect the fact that the initial and ongoing training provided to coaches varies widely. Furthermore, outside of their monthly meetings, coaches do not have a formal mechanism that enables them to be in “regular contact with each other” (Carter & Curtis, 1994, p. 209). The implications of this finding, as well as the others highlighted in this article, are discussed next.

Discussion
The purpose of this article was to share the results of a survey of the coaching provided as part of a pilot QRS initiative and the challenges that coaches face in helping programs improve their QRS score. As outlined above, coaches share information about the QRS initiative and work with programs to create a plan for improving quality. They also make a concerted effort to mitigate the itinerant nature of their work and create an intentional “change partnership” with programs. In order to nurture this partnership, coaches make decisions regarding which party—teachers or directors—can initially get the change process underway, as well as which improvement goals can help facilitate short- and long-term participant buy-in.

At the same time, various programmatic issues challenge coaches’ efforts to bring about positive change. Many of these issues mirror those plaguing the ECE field more generally, including minimal staff credentials, directors with insufficient administrative skills, inadequate funds for equipment and supplies, and high turnover rates. Given this context, it is not surprising that many coaches cite the inability of program staff to observe high-quality programs as a challenge. Moreover, it is probably not a stretch to argue that rectifying these challenges actually requires a major public policy intervention. In the meantime, however, as states move forward with QRS initiatives, this study has implications for enhancing coaches’ change-making efforts.

Implications for Enhancing Coaches’ Knowledge

Coaches come to their jobs with an educational background and experience level that would seem conducive to helping participants improve their quality. They also appear to be aware of the importance of establishing collaborative partnerships with QRS participants. However, coaches’ responses—particularly if they are new to the job—sometimes indicated that they did not have a sufficient “bag of tricks” for dealing with the challenges present in some programs. Furthermore, conversations with QRS stakeholders prior to the survey revealed that the training that coaches receive from their respective agencies is not identical. All coaches learn about the QRS initiative itself and the observation instruments used, but only some have access to preservice and inservice training that emphasizes how to be an effective coach.

This study therefore has two implications for enhancing the change-making process through enhancing the knowledge base of coaches themselves. First, the complexity of the coaching process and the challenges associated with helping QRS participants improve program quality suggest the need for initial and ongoing coach training that goes beyond the mechanics of the QRS initiative. While the specifics of what such training would look like were not the focus of this study, it seems obvious that an educational background in early childhood and experience in the field are not sufficient on their own to successfully perform the job of QRS coach.

Second, given that each coach seems to have her own unique strategies for gaining and enhancing participant buy-in throughout the process, this study also suggests that coaches may benefit from access to the very type of supportive feedback they strive to give QRS participants. This strategy might be accomplished by pairing coaches with each other, much like the relationships often established to help novice teachers become acclimated to their new role in the classroom (Brindley, Fleege, & Graves, 2002). What might be especially useful to all coaches, no matter what their level of experience, is some sort of online forum where they could ask each other questions and share advice, similar to what is available to the Training & Curriculum Specialists working in the U.S. Army’s child development centers (Ackerman, 2007). This type of support would also mirror the attitude many coaches voiced about their work, which was summed up by Trina earlier in the article as “what we develop together will always be much more than what any of us can do on our own.”

Enhancing Coaching by Simultaneously Improving QRS Participant Capacity

As also highlighted in this study’s findings, coaches must sometimes provide directors or programs with “sidestep” assistance before they can get back on track with the QRS. This study therefore also suggests that coaches’ efforts might be further enhanced by providing QRS participants with more than professional development scholarships or mini-grants to purchase equipment and supplies. Of course, no technical assistance model can fully address the lack of adequate funding and infrastructure in the field. However, it might be worthwhile to develop or invest in a leadership training program that focuses exclusively on building administrative skills and knowledge (e.g., Bloom & Sheerer, 1992). Attention also needs to be paid to program staff with low literacy levels so they can fully benefit from any training opportunities. Social workers could also be provided as a resource for staff who are overwhelmed by the day-to-day issues in their lives and are not aware of the other resources available to them.

Similarly, for those participants who have not had the opportunity to witness best practices on a consistent basis or learn about them in a teacher preparation program, videos could be acquired that are aligned with the type of quality assessed via the ITERS, ECERS, or FDCRS. In addition, rather than only awarding mini-grants to buy a limited amount of supplies and equipment, participating programs might be assessed right at the beginning of the QRS process for their materials needs. Perhaps in partnership with local philanthropic foundations, these items could then be directly provided to programs in order to immediately improve children’s learning environments. QRS-related training can also focus on how to best incorporate these items into teachers’ classroom practice.

Limitations and Future Questions

While this study suggests the need for more consistent preservice and inservice coach training, a method for supporting coach mentoring and networking, and expanded assistance for QRS participants, its limitations should also be noted. The survey was developed by the author and a team of QRS stakeholders and also piloted with several former coaches, but the results rely on self-report and were not verified. In addition, coaches were asked to think about their most-challenging center or family child care provider when answering the majority of survey questions. It is possible that their coaching process or challenges are different in easier-to-coach settings. The survey also did not ask about partnerships with families, which accounts for about 25% of the total QRS points available. The fact that a survey was used to gather these data also limits the study. A large number of questions were open-ended, but different information might have been obtained if coaches were interviewed using a semi-structured protocol.

Despite these limitations, this study also suggests that further research is needed to fully understand how to effectively support QRS-based change. For example, it might be worthwhile to ascertain the characteristics of programs that seem to have difficulty improving their quality to create a profile of which ones might benefit from an expanded technical assistance model. Programs could then be randomly assigned to receive one or more interventions or no intervention other than the traditional coaching and mini-grant approach. Results could then be compared to assess the effectiveness of various technical assistance models. Coaches’ perceptions regarding the value of their expanded training and networking could also be gathered through interviews or focus groups. This type of research is particularly crucial if the QRS initiative “goes to scale,” as then more-informed decisions can be made regarding the best use of limited public funding.

In summary, given the number of young children enrolled in ECE and the effects of such programs on children’s cognitive and social-emotional development (Ackerman & Barnett, 2006), it is crucial that quality remain a focus of policy makers and early childhood stakeholders. Parents also need a reliable source of information about the quality of any program. While QRS initiatives do not negate the need for better public support of ECE, they do have the potential to begin addressing ECE quality issues, particularly when programs are provided with coaches who can serve as effective change-making partners.

Acknowledgments
The study reported on in this article was made possible through an Early Learning Opportunities Act grant from the U.S. Department of Health and Human Services and is the result of a joint collaboration between the Mid-America Regional Council’s Metropolitan Council on Early Learning and the Francis Child Development Institute at Metropolitan Community College–Penn Valley in Kansas City, Missouri. The opinions expressed in this article are those of the author and do not necessarily reflect the views of the funder or sponsors. The author gratefully acknowledges the assistance of Nancy Mitchell, Jerry Kitzi, Rebecca Curtis, and Jim Caccamo in implementing the study, as well as Kathryn Fuger of the University of Missouri–Kansas City Institute for Human Development for sharing data about prior QRS evaluations.

References
Ackerman, Debra J. (2004). States’ efforts in improving the qualifications of early care and education teachers. Educational Policy, 18(2), 311-337.

Ackerman, Debra J. (2006). The costs of being a child care teacher: Revisiting the problem of low wages. Educational Policy, 20(1), 85-112.

Ackerman, Debra J. (2007). “The learning never stops”: Lessons from military child development centers for teacher professional development policy. Early Childhood Research & Practice, 9(1). Retrieved April 24, 2008, from http://ecrp.illinois.edu/v9n1/ackerman.html

Ackerman, Debra J., & Barnett, W. Steven. (2006). Increasing the effectiveness of preschool programs. New Brunswick, NJ: National Institute for Early Education Research. Retrieved April 24, 2008, from http://nieer.org/resources/research/IncreasingEffectiveness.pdf

Ackerman, Debra J., & Thomas, Jessica. (2007). Study of the impact of preschool program intensity (SIPPI): A randomized trial of half-day vs full-day Head Start Programs in the Chicago Public Schools, Year I status report. New Brunswick, NJ: National Institute for Early Education Research.

Bellm, Dan; Whitebook, Marcy; & Hnatiuk, Patty. (1997). The early childhood mentoring curriculum. Washington, DC: Center for the Child Care Workforce.

Bloom, Paula Jorde. (1999). Building director competence: Credentialing and education. Journal of Early Childhood Teacher Education, 20(2), 207-214.

Bloom, Paula Jorde, & Sheerer, Marilyn. (1992). The effect of leadership training on child care program quality. Early Childhood Research Quarterly, 7(4), 579-594.

Bransford, John D.; Brown, Ann L.; & Cocking, Rodney R. (Eds.). (2000). How people learn: Brain, mind, experience, and school. Washington, DC: National Academy Press.

Brindley, Roger; Fleege, Pam; & Graves, Stephen. (2002). Preparation never ends: Sustaining a mentorship model in early childhood teacher education. Journal of Early Childhood Teacher Education, 23(1), 33-38.

Buysse, Virginia; Schulte, Ann C.; Pierce, Patsy P.; & Terry, Delores. (1994). Models and styles of consultation: Preferences of professionals in early intervention. Journal of Early Intervention, 18(3), 302-310.

Buysse, Virginia, & Wesley, Patricia W. (2005). Consultation in early childhood settings. Baltimore, MD: Paul H. Brookes.

Carter, Margie, & Curtis, Deb. (1994). Training teachers: A harvest of theory and practice. St. Paul, MN: Redleaf Press.

Cryer, Debby, & Burchinal, Margaret. (1997). Parents as child care consumers. Early Childhood Research Quarterly, 12(1), 35-58.

Dickinson, David K., & Caswell, Linda. (2007). Building support for language and early literacy in preschool classrooms through in-service professional development: Effects of the Literacy Environment Enrichment Program (LEEP). Early Childhood Research Quarterly, 22(2), 243-260.

Divine, Katherine, & Fountain, Cheryl. (2005). The Duval County professional development consortium 2004-2005 report. Jacksonville: Florida Institute of Technology at the University of North Florida.

Fiene, Richard. (2002). Improving child care quality through an infant caregiver mentoring project. Child & Youth Care Forum, 31(2), 79-87.

File, Nancy, & Kontos, Susan. (1992). Indirect service delivery through consultation: Review and implications for early intervention. Journal of Early Intervention, 16(3), 221-233.

File, Nancy, & Kontos, Susan. (1998). Preparing personnel for integration: Needs and strengths of early intervention and early childhood education professionals. Journal of Early Childhood Teacher Education, 19(3), 181-191.

Freidus, Helen; Grose, Claudia; & McNamara, Margaret. (2001). Implementing curriculum change: Lessons from the field. In Frances O’Connell Rust & Helen Freidus (Eds.), Guiding school change: The role and work of change agents (pp. 57-77). New York: Teachers College Press.

Fullan, Michael. (2001). The new meaning of educational change (3rd ed.). New York: Teachers College Press.

Harms, Thelma, & Clifford, Richard M. (1989). Family day care rating scale. New York: Teachers College Press.

Harms, Thelma; Clifford, Richard M.; & Cryer, Debby. (1998). Early childhood environment rating scale (Rev. ed.). New York: Teachers College Press.

Harms, Thelma; Cryer, Debby; & Clifford, Richard M. (2003). Infant/toddler environment rating scale (Rev. ed.). New York: Teachers College Press.

Harris, Kathleen C., & Klein, M. Diane. (2002). Itinerant consultation in early childhood special education: Issues and challenges. Journal of Educational and Psychological Consultation, 13(3), 237-247.

Harris, Kathleen C., & Klein, M. Diane. (2004). An emergent discussion of itinerant consultation in early childhood special education. Journal of Educational and Psychological Consultation, 15(2), 123-126.

Head Start Bureau. (2001). Putting the PRO in protégé: A guide to mentoring in Head Start and Early Head Start. Washington, DC: Author.

Helburn, Suzanne W.; Culkin, Mary L.; Morris, John; Mocan, Naci; Howes, Carollee; & Phillipsen, Leslie. (1995). Cost, quality, and child outcomes in child care centers (Executive summary). Denver, CO: Economics Department, University of Colorado-Denver.

Helburn, Suzanne W.; Morris, John R.; & Modigliani, Kathy. (2002). Family child care finances and their effect on quality and incentives. Early Childhood Research Quarterly, 17(4), 512-538.

Hendrickson, Jo M.; Gardner, Nancy; Kaiser, Ann P.; & Riley, Ann. (1993). Evaluation of a social interaction coaching program in an integrated day-care setting. Journal of Applied Behavior Analysis, 26(3), 213-225.

Huberman, A. Michael, & Miles, Matthew B. (1994). Data management and analysis methods. In Norman K. Denzin & Yvonna S. Lincoln (Eds.), Handbook of qualitative research (pp. 428-444). Thousand Oaks, CA: Sage.

Klein, M. Diane, & Harris, Kathleen C. (2004). Considerations in the personnel preparation of itinerant early childhood special education consultants. Journal of Educational and Psychological Consultation, 15(2), 151-165.

Kontos, Susan; Howes, Carollee; Shinn, Marybeth; & Galinsky, Ellen. (1994). Quality in family and relative care. New York: Teachers College Press.

LeMoine, Sarah, & Azer, Sheri. (2005). Center child care licensing requirements (November 2005): Minimum early childhood education (ECE) preservice qualifications, administrative, and annual ongoing training hours for directors. Retrieved November 4, 2006, from http://nccic.org/pubs/cclicensingreq/cclr-directors.pdf

Lieberman, Ann. (2001). The professional lives of change agents: What they do and what they know. In Frances O’Connell Rust & Helen Freidus (Eds.), Guiding school change: The role and work of change agents (pp. 155-162). New York: Teachers College Press.

Lizakowski, Theresa. (2005). Minnesota Early Literacy Training Project: Final report highlights. Early Report, 31(1), 1-5.

Mathison, Sandra. (1988). Why triangulate? Educational Researcher, 17(2), 13-17.

McCallister, Cynthia. (2001). From ideal to real: Unlocking the doors of school reform. In Frances O’Connell Rust & Helen Freidus (Eds.), Guiding school change: The role and work of change agents (pp. 37-56). New York: Teachers College Press.

Meijer, Pauline C.; Verloop, Nico; & Beijaard, Douwe. (2002). Multi-method triangulation in a qualitative study on teachers' practical knowledge: An attempt to increase internal validity. Quality & Quantity, 36(2), 145-167.

Miles, Matthew B., & Huberman, A. Michael. (1994). Qualitative data analysis: An expanded sourcebook. Thousand Oaks, CA: Sage.

Mitchell, Anne W. (2005). Stair steps to quality: A guide for states and communities developing quality rating systems for early care and education. Alexandria, VA: United Way Success By 6.

Morgan, Gwen. (2000). The director as a key to quality. In Mary L. Culkin (Ed.), Managing quality in young children's programs: The leader's role (pp. 40-58). New York: Teachers College Press.

National Child Care Information Center (NCCIC). (2007). Quality rating systems: Quality standards. Retrieved July 25, 2008, from http://nccic.acf.hhs.gov/poptopics/qrs-criteria-websites.pdf

Neugebauer, Roger. (2000). What is management ability? In Mary L. Culkin (Ed.), Managing quality in young children's programs: The leader's role (pp. 97-111). New York: Teachers College Press.

Palsha, Sharon A., & Wesley, Patricia W. (1998). Improving quality in early childhood environments through on-site consultation. Topics in Early Childhood Special Education, 18(4), 243-253.

Patton, Michael Quinn. (2002). Qualitative research and evaluation methods (3rd ed.). Thousand Oaks, CA: Sage.

Pavia, Louise; Nissen, Hannah; Hawkins, Carol; Monroe, Mary Ellen; & Filimon-Demyen, Debra. (2003). Mentoring early childhood professionals. Journal of Research in Childhood Education, 17(2), 250-260.

Poglinco, Susan M.; Bach, Amy J.; Hovde, Kate; Rosenblum, Sheila; Saunders, Marisa; & Supovitz, Jonathan A. (2003, May). The heart of the matter: The coaching model in America’s Choice schools. Philadelphia: Consortium for Policy Research in Education, University of Pennsylvania Graduate School of Education.

Rust, Frances O’Connell; Ely, Margot; Krasnow, Maris H.; & Miller, LaMar P. (2001). Professional development of change agents: Swimming with and against the currents. In Frances O’Connell Rust & Helen Freidus (Eds.), Guiding school change: The role and work of change agents (pp. 16-36). New York: Teachers College Press.

Rust, Frances O’Connell, & Freidus, Helen. (Eds.). (2001). Guiding school change: The role and work of change agents. New York: Teachers College Press.

Ryan, Sharon, & Hornbeck, Amy. (2004). Mentoring for quality improvement: A case study of a mentor teacher in the reform process. Journal of Research in Childhood Education, 19(1), 78-95.

Ryan, Sharon; Hornbeck, Amy; & Frede, Ellen. (2004). Mentoring for change: A time use study of teacher consultants in preschool reform. Early Childhood Research & Practice, 6(1). Retrieved April 10, 2008, from http://ecrp.illinois.edu/v6n1/ryan.html

Sibley, Annette, & Kelly, Wendy. (2005). Technical assistance for early childhood programs: A review of systems and initiatives. Atlanta, GA: Quality Assist.

Stake, Robert E. (1995). The art of case study research. Thousand Oaks, CA: Sage.

Wesley, Patricia W. (1994). Providing on-site consultation to promote quality in integrated child care programs. Journal of Early Intervention, 18(4), 391-402.

Wesley, Patricia W., & Buysse, Virginia. (2004). Consultation as a framework for productive collaboration in early intervention. Journal of Educational and Psychological Consultation, 15(2), 127-150.

Whitebook, Marcy. (1996). NAEYC accreditation as an indicator of program quality: What research tells us. In Sue Bredekamp & Barbara A. Willer (Eds.), NAEYC accreditation: A decade of learning and the years ahead (pp. 31-46). Washington, DC: National Association for the Education of Young Children.

Zeece, Pauline Davey. (2001). All dressed up and no place to grow! In Bonnie Neugebauer & Roger Neugebauer (Eds.), Staff challenges: Articles from Child Care Information Exchange (pp. 113-117). Redmond, WA: Child Care Information Exchange.

Author Information

Debra J. Ackerman, Ph.D., is Associate Director for Research at the National Institute for Early Education Research (NIEER). Her work focuses on policy issues related to preschool education and the professional development of the early education workforce.

Debra J. Ackerman
National Institute for Early Education Research
120 Albany Street, Tower 1, Suite 500
New Brunswick, NJ 08901
Telephone: 732-932-4350
Email: dackerman@nieer.org

Appendix A

QRS Criteria and Corresponding Point Values

Learning Environment (Average ECERS, ITERS, or FDCRS Score)

  Score    Center    FCCH
  3.50*       2        2
  4.00        4        4
  4.70        6        6
  5.50        8        8
  6.00       10       10

Family Partnerships (total score on Family Partnership checklist & Parent Survey, 25 pts. each)

  Score     Center    FCCH
  < 30         0        0
  30-37        4        4
  38-45        8        8
  46-50       10       10

Direct Care Staff Training & Education

  Qualification                                                                 Center   FCCH
  45 hours in early childhood education over last 3 years                          1       1
  6 college credits in early childhood education or a CDA                          2       2
  24 college credits in early childhood education                                  3       3
  Associate’s degree with 24 credits in early childhood education                  5       5
  Bachelor’s with education focus and 24 credits in early childhood education      7       7
  Master’s degree in early childhood education or related field                    7       7

Center Administrative Staff (centers only; NA for family child care homes)

  Qualification                                                                 Center
  Associate’s degree with 24 credits in early childhood education                  1
  Bachelor’s with education focus and 24 credits in early childhood education      2
  Master’s degree in early childhood education or related field                    3
Staff-Child Ratios

  Center                                                                              FCCH
  0-17 mos.   18-23 mos.   24-35 mos.   30-36 mos.   36-47 mos.   48 mos.+   30-71 mos.   Points
  1:5         1:5          1:7          1:8          1:10         1:12       1:10           4
  1:4         1:4          1:6          1:7          1:9          1:10       1:9            6
  1:3         1:3          1:5          1:6          1:8          1:8        1:8            8
Group Size

  Center            FCCH
  Age 13 or under   Age 6 or under   School age   Age 2 or under   Points
8     Any number   1
12   At least 2 0   1
6     Any number   2
10   At least 2 0   2
6     3 or less   5
8   At least 2 2 or less   5
4     3 or 4   6
0 6 0 2 or less   6
7   At least 2 2 or less   7
8   At least 2 0   7
5     2 or less   8
6     0   8
Accreditation and Totals

  Indicator                          Center   FCCH
  Accreditation by NAEYC or NAFCC       2       2
  Total Available Points               42      39
*Programs must score at least a 3.50 (2 points) on their respective learning environment rating observation in order to achieve 2 stars.
Points Needed to Achieve QRS Stars

  QRS Stars   Centers   Family Child Care Homes
  1           1-9       1-9
  2           10        10
  3           18        17
  4           26        24
  5           34        31
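The points-to-stars rule above is a simple threshold lookup. As a minimal illustration only (not part of the pilot's actual tooling), the mapping can be sketched as follows; the function name and structure are the author's assumptions, and the sketch deliberately ignores the additional requirement that programs score at least 3.50 on the learning environment observation to earn 2 stars.

```python
# Hypothetical sketch of the QRS points-to-stars mapping, using the
# center and family child care home (FCCH) thresholds from the table above.
# Note: the real QRS also requires a learning environment score of at
# least 3.50 for 2 stars; that check is omitted here.

CENTER_THRESHOLDS = [(34, 5), (26, 4), (18, 3), (10, 2), (1, 1)]
FCCH_THRESHOLDS = [(31, 5), (24, 4), (17, 3), (10, 2), (1, 1)]

def star_level(points, program_type="center"):
    """Return the star level earned for a given point total (0 if below 1 point)."""
    thresholds = CENTER_THRESHOLDS if program_type == "center" else FCCH_THRESHOLDS
    for minimum, stars in thresholds:
        if points >= minimum:
            return stars
    return 0
```

For example, a center earning 26 points would receive 4 stars, while a family child care home needs only 24 points for the same rating.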

Appendix B

Survey Questions

  1. Is this program in a rural, suburban, or urban setting?
  2. How many children does the program serve?
  3. What was the initial QRS rating of the program?
  4. What was the QRS rating of the program 1 year later?
  5. What is the educational background of the center director?
  6. How many years has the director worked in this capacity?
  7. Briefly describe how you initially familiarized the program with the QRS initiative and with your role as coach. 
  8. Once the initial QRS assessment had been completed, did you initially focus your coaching on the director of the program or on the individual teachers? Why?
  9. Did the focus of your coaching eventually transition to or include the director/individual teachers (whichever was not your initial focus)? Why?
  10. How did you interpret the Quality Performance Profile (QPP) to the director/individual teachers?
  11. What kinds of things did you do to get your relationship with the people you were coaching “up and running”? 
  12. What was your process for building a Quality Improvement Plan? 

    We’d now like to know the importance of various criteria when you were coaching this most-challenging program to develop its Quality Improvement Plan (QIP). In answering these questions, “1” means that the issue was not at all important, “3” means that it was somewhat important, and “5” means that you considered it to be very important in developing the QIP. How important was it to give priority to…
  13. The preferences of the center director?
  14. Areas in which the program could experience early success? 
  15. Issues that would constrain the completion of additional tasks?
  16. Health and safety issues? 
  17. Issues related to children’s learning environments?
  18. Issues that would result in greater QRS points?
  19. Issues that were also related to accreditation?
  20. Goals that required minimal effort—on the part of the program or yourself—to complete?  
  21. Issues that would take a long time to address or accomplish?
  22. The financial cost of rectifying any issues? (either too expensive or very affordable)
  23. Goals that required the least coaching to achieve?
  24. Goals that required a significant amount of coaching to achieve? 
  25. If you and the director of the most-challenging program decided not to work on a particular quality goal, which of the following reasons best describes why?
    • They did not seem to grasp the nature of the goal.
    • They were not willing to address the goal.
    • They were not able to work on the goal for personal reasons.
    • They needed the goal to be divided into smaller steps.
    • We disagreed about the importance of the goal.
    • I was unable to provide the time necessary to assist the provider in meeting the goal.
    • It took too long to address the goal and we ran out of time. 
    • Other:  

Some factors outside of your control may contribute to making a site more challenging to coach. In thinking about the program that has been the most challenging for you, please rate the following factors from 1 to 5, with “1” meaning it was not an issue at this site, “3” meaning it was somewhat of an issue, and “5” meaning it was a very large issue.

  1. Director or teachers had limited access to CDA classes or training events because they couldn’t afford to pay for them.
  2. Director or teachers had limited access to CDA classes or training events because of transportation problems.
  3. Director or teachers had limited access to CDA classes or training events because of where they were located.
  4. Director or teachers had limited access to CDA classes or training events because they were held at an inconvenient time.
  5. Director or teachers had inadequate literacy levels.
  6. Director or teachers were not fluent in English.
  7. Director or teachers did not have the opportunity to observe best practices in high-quality programs.
  8. Director or teachers had financial, family, health, or other personal issues that took up most of their time or distracted them from the coaching goals.
  9. The attitudes and/or motivation level of the director or teachers were not conducive toward—or even worked against—bringing about a positive change in program quality.
  10. Please describe any other barriers or issues that contributed to this program being the “most-challenging” center-based program for you as a coach.
  11. In thinking about all of the sites you have coached—whether challenging or not, and whether in a center or family child care home—do you see “patterns” or find similar characteristics in those programs that have experienced the most improvement after being coached? If yes, please explain.
  12. Again, in thinking about all of the sites you have coached, do you see “patterns” or find similar characteristics in those programs that either failed to improve or declined in their scores after being coached? If yes, please explain.
  13. Once again, in thinking about all of the sites you have coached, please briefly describe the situation in which you have felt most successful as a coach. 
  14. Are there any additional comments or observations you would like to share about the sites that have been the most challenging to coach versus the least challenging to coach?
  15. What is your age? 
  16. What is your highest degree? 
  17. How many years of early childhood experience do you have?
  18. How many of these years were spent in direct care or education of children ages Birth-5?
  19. How long have you been a coach with your current organization? 
  20. Do you have previous early childhood coaching experience of any kind? If yes, how many years and what kind of coaching did you provide?
  21. What special personal qualities, traits, professional expertise, or other skills do you bring to coaching? 
  22. On a scale of 1 to 5, with “1” meaning you derive no or very little satisfaction, “3” meaning you are generally satisfied, and “5” meaning you derive significant satisfaction, how much satisfaction do you derive from your work as a coach?