Volume 7 Number 1
©The Author(s) 2005

Assessing the Quality of Early Years Learning Environments

Glenda Walsh
Stranmillis University College

John Gardner
Queen's University Belfast


This article describes a means of evaluating early years classrooms from the perspective of the child's experience. Nine key themes, such as motivation and independence, are identified as representing significant aspects of a high-quality environment for learning. The manner in which these manifest themselves in relation to the three elements of the interactional triangle (the children, the adults, and their physical environment) is assessed by means of an observation schedule called the Quality Learning Instrument (QLI). The paper illustrates the design and validation of the instrument with data from a project involving observations of classroom practice in Northern Ireland primary schools and Danish kindergartens. It describes how judgments made using the instrument can be triangulated or "calibrated" against the judgments of experts not connected with the data collection. The article concludes with the argument that the instrument may be successfully used to provide a basis for external quality assessments or as a means for early years teachers to reflect on the environment for learning that they generate in their own classrooms.


Few would disagree that the earliest years of a child's education are fundamentally formative, and throughout the world, governments and educators are investing their respective resources in the development and enhancement of learning opportunities for young children. In Northern Ireland, where the research reported here was undertaken, there has been a tradition of relatively conservative education (see, for example, Caul, 1990) characterized by early enrollment to formal schooling. Children in Northern Ireland are required to attend primary school in their fifth year, from 4 years 2 months of age onward. In their Year-1 classes, they are presented with the formal curriculum at Key Stage 1, the first of the four stages of curriculum that schools are required to adopt in England, Wales, and Northern Ireland (though it should be noted that children entering schools in England and Wales will be a year or more older). The relatively prescribed curriculum of reading, writing, listening, numeracy, and so forth, has given rise to a continuing concern that it detracts not just from children's enjoyment of their first experience of schooling (though some may have attended nursery and other forms of preschool) but also from their experience of childhood (Elkind, 2001). Should this initial formality in learning prove difficult for the children, some of whom may not have the requisite motor or social skills, their future development may be inhibited by an early sense of failure (Sharp, 2002).

One popular alternative to formal curricular approaches to early years education is the play-based learning environment favored by Scandinavian countries such as Denmark. The suggestion that Northern Ireland early years education adopt play-based principles and practice, even if only in conjunction with formal activities, generates considerable debate between those who espouse the so-called "3Rs" approach (Reading, wRiting and aRithmetic) and those who view the play-based model as more appropriate and beneficial to early years learning. Although much of this argument is discursive, evidence suggests that the prevalent "formal" approach is not appropriate. For example, a study conducted by Sheehy, Trew, Rafferty, McShane, Quiery, and Curran (2000) found that the more formal Year-1 curriculum was not meeting the needs of disadvantaged 4- to 5-year-old children in Northern Ireland. Furthermore, the Northern Ireland Council for Curriculum, Examinations, and Assessment (CCEA) has canvassed views widely in several major consultation exercises and has committed itself to a more constructivist approach for the young child, stating:

Children learn best when all areas of an integrated, carefully planned, curriculum are implemented informally using methodologies that are interactive, practical and enjoyable. Children should have opportunities to experience much of their learning though well planned and challenging play. (CCEA, 2003, p. 7)

It was in the context of this debate that we began our evaluation of the existing early years provision in Year-1 classes (4- to 5-year-olds) in 10 Northern Ireland primary schools. With no examples of play-based provision available, we informed our work with visits to and evaluations of the practices in 10 kindergarten settings in Denmark. However, at the outset of our study, we faced one central problem. How does one assess the quality of early years learning environments? In this paper, we present the details of the method and instrument we used to accomplish our assessments.

Measuring the Quality of Early Years Learning Environments

In any attempt to measure the quality of a learning environment, Statham and Brophy (1992) advise us that the provision of "an objective rating scale for measuring quality has to assume that there is an explicit model of what constitutes good provision" (p. 145). Furthermore, any method that adopts a simplistic tick-box approach may suffer, as Athey (1990) puts it, from being: "measurement without description and conceptual understanding [which] can capture only the organizational surface of trivial features of situations" (p. 8).

A generic tool for evaluating all types of environments is something of a holy grail in early years education, but a number of tailored approaches do exist. Some narrow the focus to the results of tests conducted with the students-implying, in essence, that the higher the student scores are, the higher the quality of the program. Such measures may include students' IQ scores and academic performance (see, for example, Sundell, 1992, and Tymms, Merrell, & Henderson, 2000). Other approaches tend to focus more on what Katz (1995) describes as a "top-down perspective of quality" (p. 120). Katz goes on to explain that the top-down perspective of quality incorporates "selected characteristics of the program, the setting, the equipment, and other features" (p. 120). In this way, the actual quality of the learning environment is derived from a selection of structural features, based on a particular model of early childhood practice. A prominent example of this would be the Early Childhood Environment Rating Scale (ECERS) (Harms & Clifford, 1980; Harms, Clifford, & Cryer, 1996). This assessment, based on notions of developmentally appropriate practice (Sylva, Siraj-Blatchford, & Taggart, 2003), includes consideration of the space and furnishings, personal care routines, type of activities available, and the program structure.

For any particular context, however, there is much to be gained from eschewing a generic approach and developing a contextualized or in-house instrument instead-a policy strongly argued by Balageur, Mestres, and Penn (1992):

… the process involved in defining quality – with the opportunity it provides to explore and discuss values, objectives and priorities – is of utmost importance, and can be lost where people simply adopt existing measures. (p. 11)

In this paper, we describe the development of one such instrument that is "in-house" in its provenance but that we recommend for more general usage across a variety of contexts. Its design owes much to what Katz terms a "bottom-up perspective of quality", that is, how a program is experienced by the participating children (Katz, 1995, p. 120). In this way, the notion that the quality of learning environments can only be assessed in terms of outcomes, context, and teaching style is challenged. Instead, we take the view that the quality of an early years setting is principally determined by the way in which the learning and developmental needs of the main stakeholders, the children, are met.

What Aspects of an Early Years Environment Warrant a Quality Assessment?

Taking Balageur, Mestres, and Penn's (1992) point, we opted to identify a means of assessing quality that best fitted our purposes-that is, an instrument that would not be perceived as being biased toward either formal or play-based early years practice. We required an instrument that enabled us to gain an insight into how the children responded to the two contrasting programs in an attempt to evaluate which was most suitable for the learning needs of 4- to 5-year-old children. The didactic teaching and passive learning models, prevalent in the practices of early years settings in Northern Ireland, entail relatively frequent instances of activities such as "copying from the board," alphabet practice, and "coloring in" shapes-activities that are generally oriented to rote learning or motor-skills development. However, the more widely supported constructivist model of learning would demand considerably more participation and choice from the children themselves. We focused our analysis, therefore, on an experiential model of how young children learn. This model draws heavily on the work of philosophers such as Dewey (1938)-"all genuine education comes through experience" (p. 25)-and on Piagetian ideas of children constructing their own knowledge through interaction with the environment. In addition, children are not perceived to learn in isolation but rather in the company of their peers and significant others who can support them as they learn. In this way, the experiential model of learning is also deeply rooted in the Vygotskian notion of social constructivism.

A number of key features of experiential learning may be identified and summarized as follows, along with the keyword identifiers that we use later in the paper:

  • Motivation
  • Concentration
  • Independence
  • Confidence
  • Well-being
  • Social interaction
  • Respect
  • Multiple skill acquisition
  • Higher-order thinking skills

We explain the importance attached to these features below.

Motivation and Concentration

The constructivist ideal holds sway in early years theoretical discourse and centers on the view that, as Watson (2000) states, "knowledge is not passively received and absorbed but actively built up by the individual" (p. 136). On this basis, young children must therefore engage actively in the learning process to ensure that effective learning takes place. Laevers (1993) considered that the intense involvement of children in such contexts facilitates their overall development. He defined involvement as "a quality of human activity, characterized not only by a high level of motivation, but also by concentration and persistence, intense perceptions and experience of meaning, a strong flow of energy and a high degree of satisfaction" (p. 61). The experiential learning model recognizes the importance, to which Laevers alludes, of the intrinsic motivation for young children's learning and educational achievement, and this view is widely stressed by others such as Deci and Ryan (1980, 1985), Dweck and Leggett (1988), and Ames (1992). Children armed with this internal drive become what Dweck (1986) referred to as "mastery" learners-that is, learners who are challenge seeking, who persist in the face of difficulty, and who enjoy "exerting effort in the pursuit of task mastery" (p. 1040).

The experiential model, therefore, also espouses the idea that fostering a positive disposition toward learning (i.e., developing an environment in which children are fully motivated and actively absorbed in the learning process) is as important as developing young children's knowledge and skill acquisition (Katz, 1995, 1999).


Independence

Embedded in the experiential learning literature is the belief that children should have some control over the learning activities in which they are engaged. Howe (1999), for example, has argued that children's independent actions and feelings of self-control are important to later development. He and like-minded researchers take the view that when children believe that the outcome of a situation depends on their own actions, they engage more effort in the process, and positive feelings of self-esteem and social competence are increased.

Confidence and Well-Being

A wealth of persuasive argument (e.g., Greenhalgh, 1994; Goleman, 1996; Laevers, 1996) has referred to the importance of children's emotional stability for learning and development. For example, Goleman (1996) indicated that people with a high level of confidence and self-esteem are more likely to be content and effective in their lives. There is also evidence to suggest that nutrition and physical exercise are crucial components of neurological growth and development (Leavitt, Tonniges, Thomas, & Rogers, 2003). Based on this premise is the belief that young children require a learning environment that is warm, secure, and positive, where they can feel happy, healthy, safe, and comfortable (Ball, 1994; Moss, 1996).

Social Interaction and Respect

The need for positive social relationships has been identified as another feature of an experiential learning environment. Vygotsky (1978; originally published in 1926) advocated the importance of rich interactive settings for profitable learning experiences. His work underlined the crucial role of significant others in children's learning and in helping the children to extend their learning beyond what they can do alone. Rogoff (1990) emphasized that the day-to-day engagement of children and adults in shared activities contributes to the rapid progress of children in becoming skilled participants in the intellectual and social lives of their society; social interaction and social arrangements are an essential aspect of child development, without which it would be impossible to conceive of a child developing. (p. 138)

Underpinning this aspect of a child's development is respect for others, peers, and adults. As Adams (1996) puts it, children need to be encouraged to "think of themselves as learners and to accept and appreciate those around them" (p. 52).

Multiple Skill Acquisition

It is also acknowledged within the experiential model that children's learning is not separated into distinct subject areas but is holistic in nature. Gardner (1993, 1999) expresses this view best, advocating the importance of a broad and balanced curriculum. At the heart of his much-quoted work on multiple intelligences, including linguistic, logical, musical, and kinesthetic, is the need to address all aspects of children's development in their early years and, of course, subsequently.

Higher-Order Thinking Skills

Proponents of an experiential learning model also argue that young children are capable of demonstrating sophisticated levels of complex thinking when provided with an appropriate learning environment. A good example of this is Aubrey's (1993) research on the mathematical competence of young children. For this reason, proponents advocate what Katz (1995) referred to as "educative" (p. 90) experiences, rather than "frivolous one shot activities" (p. 35).

A learning environment constructed on such a basis will clearly cause the learners to think about what they are doing. It encourages them to engage in a reflective process and to participate in much problem solving and logical reasoning that will contribute to the development of their higher-order thinking skills (Costello, 2000; McGuinness, 1999).

We have drawn from the literature nine key themes that we feel would be integral to any high-quality learning environment, and these are summarized by their keywords as follows: motivation, concentration, independence, confidence, well-being, social interaction, respect, multiple skill acquisition, and higher-order thinking skills.

Operationalizing the Assessment

Having decided what aspects of an early years learning environment would be appropriate for evaluation, the next stage in the work required us to find a means to operationalize the assessment itself. Aside from the paper-based quality evaluations, which might be conducted using student performance profiles, staff profiles, resource inventories, and so forth, the only realistic and valid means of assessing the environment in which children are learning is to conduct observation visits. This method was chosen for the project, and the aim of each observation visit was to evaluate the way in which the key features of a high-quality learning environment, outlined in the previous section, manifested themselves in real early years settings. With each visit lasting two days, sufficient data, including notes and video recordings, could be collected to identify examples of high- or low-quality experiences under each key theme. The only drawback to such an approach, of course, is the very large volume of data that is generated.

To cope with such large volumes of field notes and video data, a common method of data reduction is thematic analysis (Miles & Huberman, 1994). In our case, the key themes were preselected and facilitated the first analysis of the data. Collection and analysis were facilitated by means of a matrix in which the themes (motivation, independence, etc.) formed the vertical column headings and each setting formed the horizontal row headings. This matrix is illustrated in Table 1 using data from the project.

Table 1
Illustration of the Initial Analysis Matrix Using Data from Two of the Project Settings against Three of the Key Themes
Setting 1

  Motivation: Activities on offer are stimulating and practical. Children are very keen at story and playtime. Environment is cheerful and colorful.

  Concentration: Few of the children appear distracted. Children are mainly involved in what they do, showing some precision in the process. On occasions, the teacher appears to be challenging the children.

  Independence: On a few occasions, the children are encouraged to participate in classroom chores. Few signs of initiative are shown. Activities appear quite directed. Teacher decides what should be done and when. The furniture is child sized, but children are not free to use the materials unless the teacher tells them to do so.

Setting 2

  Motivation: Children appear bored. The majority have dull expressions. Little eagerness is shown. Activities appear quite boring and repetitive. There are few opportunities for hands-on work. The environment is dull.

  Concentration: Teacher is in control and maintains a level of concentration by walking around the classroom and reprimanding anyone who does not work. On occasions, the children are engaged in time-wasting activities, but in most instances they are very quickly brought back on task by the teacher.

  Independence: Children have no choice. They are told exactly what to do. Specific time is set aside to go to toilet. Some children are encouraged to deliver messages and to help at tidying up.

(Further theme columns continued in the original table.)
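To illustrate how such a matrix might be handled in practice, the following Python sketch files field notes under (setting, theme) cells. It is purely illustrative and not part of the original study; the setting labels, helper name, and note text are our own.

```python
# Hypothetical sketch of the first-level analysis matrix: each field note is
# filed under a cell bounded by a setting and a theme. Labels are illustrative.

THEMES = ["motivation", "concentration", "independence"]  # three of the nine

matrix = {}  # (setting, theme) -> list of observation notes

def record(setting, theme, note):
    """File one field note in the cell bounded by setting and theme."""
    if theme not in THEMES:
        raise ValueError(f"unknown theme: {theme}")
    matrix.setdefault((setting, theme), []).append(note)

record("Setting 1", "motivation",
       "Activities on offer are stimulating and practical.")
record("Setting 1", "independence", "Few signs of initiative are shown.")
```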
Further structuring of the data was then achieved by introducing the main elements of the interactional triangle: the children, the adults, and their physical environment. The use of these as row headings enabled the second level of analysis to be based on a new matrix in which each cell is bounded by the theme and by the interaction focus. For example, one cell would hold the children's actions in the context of their motivation, while another would hold the adults' actions in relation to the children's motivation. This second matrix became the final data collection grid, which we have termed the Quality Learning Instrument (QLI),1 and is represented schematically in Table 2.

Table 2
Illustration of the QLI Data Collection Grid: The Thematic Analysis Matrix
Setting    Interactional Element    Motivation    Concentration    Independence    Confidence    (continued →)
1          Children
           Adults
           Environment
2          Children
           Adults
           Environment
…

The next stage of the work required us to identify how the judgments of quality could be derived from the data captured on the QLI grid.
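The QLI grid described above lends itself to a simple nested data structure, one grid per setting. The sketch below is purely illustrative (the function name and note text are our own, not part of the instrument):

```python
# Hypothetical sketch of one setting's QLI grid: each cell is bounded by an
# interactional element (row) and a theme (column), and holds observation notes.

THEMES = ["motivation", "concentration", "independence", "confidence",
          "well-being", "social interaction", "respect",
          "multiple skill acquisition", "higher-order thinking skills"]
ELEMENTS = ["children", "adults", "environment"]

def empty_grid():
    """Return a fresh element -> theme -> notes grid for one setting."""
    return {el: {th: [] for th in THEMES} for el in ELEMENTS}

grid = empty_grid()
grid["children"]["motivation"].append("Eager to participate in the activities.")
grid["adults"]["motivation"].append("Offer stimulating, age-appropriate activities.")
```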

Making Judgments on Quality

Data collection and analysis instruments, whether quantitative or qualitative, can only take the researcher part of the way toward establishing credible research findings. Synthesis and interpretation are needed, and these are invariably judgment based. The next stage in assessing the quality of the learning environments that we had selected was therefore to judge whether each item of the data collected provided evidence of a low- or high-quality environment for learning. We were able to use the QLI to facilitate such judgments by building into it exemplars of high and low quality for each theme in relation to each element of the interactional triangle (i.e., the children, the adults, and the environment). However, any such exemplars could be open to criticism on the grounds that their analyses may be entirely subjective, in the sense of not being corroborated by other judges. In our study, it could be argued that we were able to validate each other's judgments. However, despite our expertise and the video and field note data that we used to support our judgments, the question would remain for many third parties: How reliable and valid are the judgments?-that is, would other judges record the same findings?

Qualitative instruments do not generally claim or seek measurement reliability, which Kirk and Miller (1986) describe as "the extent to which a measurement procedure yields the same answer however and whenever it is carried out" (p. 19). Kirk and Miller considered the calibration process that is needed to ensure that a new thermometer reads, say, 38 degrees at the same level of heat as other thermometers. To do this, its gradations must be set against standard gradations-that is, it must be calibrated against trustworthy standards-either another standardized thermometer or fixed temperature environments such as ice (0 degrees Celsius) and boiling water (100 degrees Celsius). Drawing the analogy into the qualitative domain, it is reasonable to expect, in most research contexts, that efforts are made to ensure that judgments made by one researcher will attract the confidence of those to whom they are presented. It is not accurate (reliable) measurements, in relation to the data from which they are inferred, that are needed so much as credible (valid) judgments. This credibility/validity is generally achieved by opening up the data collection, analysis, synthesis, and interpretation processes to the scrutiny of others-that is, to test the judgments made against the knowledge and expertise of others.

Instrument Validity

To validate the QLI, therefore, we invited a group of experts to consider the instrument and its themes overall. Specifically we asked them to consider the validity of the "indicators" of high- or low-quality activity, which we had chosen to link each theme with each element of the interactional triangle (children, adults, and environment). For example, under the theme of Motivation, is it valid to consider observations that the children are "eager to participate in the activities" or are "energetic, enthusiastic, and display a degree of curiosity and interest in the activities" to be indicative of a learning environment that is motivational? A selection of the indicators for Motivation are presented in Table 3.

Table 3
Indicators of High and Low Levels of Motivation
The observations suggested that a high level of motivation is in evidence when:

The children are

  • eager to participate in the activities;
  • energetic, enthusiastic, and display a degree of curiosity and interest in the activities.

The adults

  • offer stimulating, relevant, and age-appropriate activities;
  • show a degree of interest and interact appropriately, allowing the children some degree of freedom and choice;
  • are cheerful and enthusiastic.

The environment

  • is spacious, airy, and aesthetically pleasing;
  • resources are plentiful, attractive, and age-appropriate;
  • some exciting areas are available, e.g., an Aladdin's cave reading corner, a cellar;
  • children get the opportunity to use their environment, both inside and outside.

The observations suggested that a low level of motivation is in evidence when:

The children

  • appear apathetic and unenthusiastic, e.g., lying over the tables, wandering around the room, yawning, etc.;
  • seem to complete the activity out of obligation rather than interest.

The adults

  • show little interest in the children's activities or dominate them;
  • initiate activities that are uninteresting, not age-appropriate or relevant to young children;
  • offer little variety or choice.

The environment

  • is dull and lacking in character;
  • resources tend to be routine and uninspiring;
  • space is limited;
  • children have little opportunity to use the environment available.

The QLI was therefore sent to a group of early years experts, eight in Northern Ireland and six in Denmark, to comment on its face validity. The sample of experts, with an average experience of 23 years of service, included one government inspector, five university lecturers, two local authority advisors, two early years researchers, and four early years teachers in management positions. All of them agreed that the QLI addressed key indicators of high-quality practice in the context of early years education, endorsing the themes and indicators used as relevant and comprehensive. The Danish experts expressed their satisfaction with the way in which the schedule referred to skill areas other than reading, writing, and numeracy, and both groups agreed that the format was simple and straightforward to apply.

Calibration of the Instrument

Though a set of indicators may be validated in this manner, there remains the possibility that we as researchers could interpret them in some idiosyncratic way, perhaps, in extreme cases, judging a particular early years setting to be of high quality when others might analyze the same evidence and judge it to be pedestrian or worse. It will always be important for researchers to test whether the interpretations of what they observe, and the judgments at which they arrive, can stand up to the scrutiny of other judges. In our case, such triangulation or "calibration" of our judgments' validity was no less important. We therefore invited a separate expert group to review video footage of classroom activity that we had previously assessed-the aim being to test whether their interpretations of the quality of the activities and processes that they observed matched our own interpretations of the same data.

The calibration study was conducted with 10 early years teachers from Northern Ireland acting as judges-the majority of them holding a position of responsibility, two as vice-principals of primary schools and four as early years coordinators for their schools. The integrity of the process was consolidated by the fact that the teachers were unknown to each other and arrived at their judgments in isolation. They also brought different levels of Year-1 experience and, in some cases, training to the process.

The process itself involved each of the teachers being sent an extract of video, taken in a Danish kindergarten, accompanied by a set of instructions, a selection of photographs (of the physical environment), and a copy of the QLI. A sample record sheet, based on observations from an imaginary kindergarten, was included in the pack to provide the teachers with an illustration of what was to be expected. Table 4 provides examples of the observation data that we had recorded from both Northern Ireland and Danish settings.

Table 4
Examples of Observation Data Illustrating High- and Low-Quality Levels of Motivational Context

Examples of high-quality motivational contexts

  1. In Kindergarten 5, many children spent most of the afternoon outdoors. Some were pursuing each other through the bushes. Others were busy helping a pedagogue in the greenhouse, or watching in anticipation as a pedagogue lit a campfire. Another group was splashing about in the sandpit, which was saturated with water, getting themselves as wet and dirty as possible.
  2. In Classroom 4, during structured play, a group of children were eagerly looking at photographs of their trip to the zoo. After chatting about their experiences, they painted pictures of the animals they saw, to add to the zoo display.
  3. A group of five boys were playing on trucks in Kindergarten 2. They were making the sound of a fire siren and added a rope and some buckets to their trucks. A pedagogue provided them with a hose, and they pedaled hastily to the sand tray to put out "the bush fire."

Examples of low-quality motivational contexts

  1. The children in Classroom 8 had been asked to complete a worksheet, which involved them coloring a snake red, to allow the teacher to hear reading. Having completed the activity quickly, easily, and in many cases carelessly, a group of boys began to hide under tables and throw books across the table at one another.
  2. In Classroom 1, the entire class was involved in playing "the farmer wants a wife" in the assembly hall during a PE lesson. The children sang the song repeatedly (approximately five times), showing little signs of enthusiasm in the process. A group of boys started to pull the others in the circle and then ran to the toilets. Other children then left the circle and went to the toilet.

The instructions asked the teachers to view the video in its entirety first, before using the QLI. It was then to be re-watched, and examples of observed practice were to be noted in the grid provided (see Table 2). The teachers were asked to use the QLI as an observation schedule to rate the children's and adults' actions and the physical environment on offer. Each category (e.g., children's actions) in each of the theme areas was then to be scored on a 5-point scale, with 5 being at the highest end and 1 at the lowest of the range of high- and low-quality learning activities. The video was then to be viewed a third time to enable any additional comments to be added. The teachers were not privy to our "scores" (these having been prepared before sending the materials off), and the final calibration process involved comparing our scores with those of the teachers to establish whether there was convergence.

The Teachers' Scores

As illustrated in Table 5 for the themes of Concentration and Confidence, our scores and those of the teachers corresponded well for each of the themes (the scores are multiplied by 20 to give a score out of 100 for ease of reading).

Table 5
Teachers' and Researchers' Mean Scores for Concentration and Confidence
                                Concentration                        Confidence
                                Children   Adults   Environment     Children   Adults   Environment
Teachers' Mean Scores x 20         86        76         66             86        64         82
Researchers' Scores x 20           80        80         60             80        60         80

For each of the themes in the table, at least seven of the teachers were in consistent agreement. Where a teacher departed from the others, it was always by a single point (e.g., 5 instead of 4). Similar levels of agreement between the teachers were achieved for all of the themes except in relation to the environment element. Given the high level of agreement in the other areas, it seemed most likely that the disparity of opinion on the environment could be explained by the lack of adequate evidence in the video extracts and photographs. For example, it is possible that some of the teachers concentrated less than others on the background information and relevant details from the video, and this might have influenced their judgments. Perhaps more tellingly, some people do not take the physical environment and resources sufficiently into consideration when evaluating the quality of a learning context, and for this reason, some of the teachers may not have given it much thought.
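The arithmetic behind Table 5 is straightforward. The sketch below uses invented ratings for a single category to show the rescaling by 20 and the single-point spread between judges; none of these figures come from the study itself.

```python
# Hypothetical sketch of the calibration arithmetic: ten judges each rate a
# category on a 1-5 scale; the mean is multiplied by 20 to read out of 100,
# as in Table 5. The ratings below are invented for illustration.

def scaled_mean(ratings):
    """Mean of 1-5 ratings, rescaled to a score out of 100."""
    return sum(ratings) * 20 / len(ratings)

teacher_ratings = [5, 4, 4, 5, 4, 4, 4, 5, 4, 4]   # one category, ten judges
score = scaled_mean(teacher_ratings)                # 43 * 20 / 10 = 86.0

# Largest disagreement between any two judges: a single point here.
spread = max(teacher_ratings) - min(teacher_ratings)
```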

Concluding Remarks

In view of the fact that the expert teachers made their judgments in isolation from us and each other, and given the range of experiences the group had, the extent of agreement between their scores and ours allowed us to consider that our judgments arising from the use of the QLI were valid. A claim we feel we can make for the QLI, therefore, is that the instrument can act as a lens through which the quality of the learning experience of an early years setting can be assessed and recorded in narrative form. It facilitates a focus being placed on a significant number of the key ingredients (themes) that underpin a high-quality learning environment such as an early years classroom. The themes, and indicators of high and low quality for each of them, should attract reasonably widespread endorsement in the early years community, perhaps with extensions or amendments for local circumstances. With a complementary triangulation or "calibration" exercise to test their interpretations, researchers will also be able to consolidate the validity of the judgments that they make using the QLI. The selection of judges for the calibration can be from within the research or practitioner communities or indeed may be a combination of both. The judgment process essentially requires the evaluator to decide whether the collated evidence best fits into the high or low categories of quality or lies somewhere in between. In this way, the QLI provides a detailed picture of a setting's "performance" on each key theme against each aspect of the triangle of interaction. By this means, strengths and weaknesses can be easily identified and illustrated with evidential observations.

The ability of the QLI to reveal weaknesses was demonstrated when it was used to assess the quality of the selected Year-1 settings in our project. Clearly there is the possibility of an inaccurate judgment in cases where, for example, a "bad" environment is experienced in a "good" way by some children; that is, they may enjoy the bad learning environment. To address this potential problem, we included criteria under which low quality in relation to independence, for example, was the judgment associated with circumstances in which the teaching strategies tended to be authoritarian (little independence) or the children were allowed complete freedom (high independence) but no constructive planning for the development of independence was apparent. The comprehensive nature of the instrument, that is, the fact that the entire learning triangle is addressed through an array of key themes such as motivation, concentration, and so forth, helps to ensure that a true assessment of the quality of the learning experience is made.

Although the themes cannot be exhaustive, the QLI did provide an insight into the quality of the environment from a "whole child's" perspective: academically (motivation, concentration, higher-order thinking skills, and multiple skills), socially (social interaction and respect), and emotionally (confidence, well-being, and independence). The instrument also proved relatively easy to use, requiring no more than one morning's observation. To ensure that accurate judgments are made, however, users should be relatively experienced in the field of early childhood education.

Even though it has been developed with reference to many sources, the Quality Learning Instrument (QLI) possesses a degree of originality in the manner in which it was developed in the field, and it has been used successfully in two cultures, attracting the endorsement of a number of early years experts in both. Its nine themes can be argued to represent those aspects of the processes occurring in an early years setting that are most likely to contribute to children's learning. As such, they resonate with the outcome areas that Pascal and Bertram (1999) identified in their Accounting Early for Life Long Learning Project. While the themes act as process indicators, they could also be viewed as outcome "measures" if we envisage the measures needed as being more than merely numbers or "facts, subjects and disciplines of knowledge" (Pascal & Bertram, 1999, pp. 101-102). Laevers (2000) also argues this case, challenging the view that "narrowly defined academic achievements" are the only means of measuring educational outcomes (p. 20). As Claxton and Carr (2004) indicate:

While it is important to present students with valuable and engaging topics, this "content curriculum" ought to be accompanied by attention to the attitudes, values, and habits towards learning in general which are being strengthened (or weakened) in the process. (p. 87)

The QLI could certainly supplement frameworks such as those forming the basis of "Quality in Diversity" (ECEF/NCB, 1998) or Carr's "dispositional framework" (Carr, 1998), which emphasize the importance of positive dispositions as measures of learning outcomes. We would argue, therefore, that the QLI provides not only an easy-to-use and comprehensive assessment schedule for external quality evaluation purposes but also a means for early years teachers to assess the quality of their own practice and to inform and develop their understanding of children's learning. With respect to professional development, the QLI could provide early years teachers with a means to engage in self-evaluation and reflective dialogue, a process that has been highlighted as contributing significantly to effective teaching and learning (Moyles, Adams, & Musgrove, 2002).


Note

1. The QLI is currently being used as one of the main assessment instruments in Northern Ireland's Early Years Enriched Curriculum Evaluation Project. This longitudinal study (Sproule, Rafferty, Trew, Walsh, McGuiness, & Sheehy, 2001, 2002, 2003) aims to evaluate the quality of an innovative play-based curriculum being tested in a number of Year-1 classes in Northern Ireland. In due course, the instrument itself will be further evaluated and refined in the light of these extensive trials.


Acknowledgments

We wish to thank all of the pupils, teachers, and early childhood experts who contributed to the development of the Quality Learning Instrument (QLI). We also wish to acknowledge the anonymous reviewers of this paper for their insightful and constructive comments.


References

Adams, Leah. (1996). Quality curriculum for quality care and education. International Journal of Early Childhood, 28(2), 49-52.

Ames, Carole. (1992). Classrooms: Goals, structures, and student motivation. Journal of Educational Psychology, 84(3), 261-271. EJ 452 395.

Athey, Chris. (1990). Extending thought in young children. A parent-teacher partnership. London: Paul Chapman.

Aubrey, Carol. (1993). An investigation of the mathematical knowledge and competencies which young children bring into school. British Educational Research Journal, 19(1), 27-41. EJ 480 262.

Balageur, Irene; Mestres, Juan; & Penn, Helen. (1992). Quality services for young children [Online]. ED 346 957.

Ball, Christopher. (1994). Start right: The importance of early learning. London: Royal Society for the Encouragement of the Arts, Manufacturers and Commerce (RSA).

Carr, Margaret. (1998, September). Learning stories: A framework for assessing children's experiences in an early childhood setting. Paper presented at the 8th European Conference on Quality in Early Childhood Research, Santiago de Compostela, Spain.

Caul, Leslie. (1990). Schools under scrutiny: The case of Northern Ireland. Basingstoke, UK: Macmillan Education.

CCEA. (2003). The revised Northern Ireland primary curriculum foundation stage. Belfast: Author.

Claxton, Guy, & Carr, Margaret. (2004). A framework for teaching learning: The dynamics of disposition. Early Years, 24(1), 87-97.

Costello, Patrick J. M. (2000). Thinking skills and early childhood education. London: David Fulton.

Deci, Edward L., & Ryan, Richard M. (1980). The empirical exploration of intrinsic motivational processes. In Leonard Berkowitz (Ed.), Advances in experimental social psychology (Vol. 13, pp. 39-80). New York: Academic Press.

Deci, Edward L., & Ryan, Richard M. (1985). Intrinsic motivation and self-determination in human behavior. New York: Plenum Press.

Dewey, John. (1938). Experience and education. New York: Macmillan.

Dweck, Carol S. (1986). Motivational process affecting learning. American Psychologist, 41(10), 1040-1048. EJ 360 271.

Dweck, Carol S., & Leggett, Ellen L. (1988). A social-cognitive approach to motivation and personality. Psychological Review, 95(2), 256-273. EJ 386 538.

Early Childhood Education Forum/National Children's Bureau. (1998). Quality in diversity in early learning. A framework for early childhood practitioners. London: National Children's Bureau Enterprises.

Elkind, David. (2001, Summer). Early childhood education: Developmental or academic. Education Next [Online].

Gardner, Howard. (1993). Multiple intelligences: The theory and practice. New York: Basic Books. ED 446 124.

Gardner, Howard. (1999). Intelligence reframed: Multiple intelligences for the 21st century. New York: Basic Books. ED 435 610.

Goleman, Daniel. (1996). Emotional intelligence: Why it can matter more than IQ. London: Bloomsbury.

Greenhalgh, Paul. (1994). Emotional growth and learning. London: Routledge.

Harms, Thelma, & Clifford, Richard. (1980). The early childhood environment rating scale. New York: Teachers College Press.

Harms, Thelma; Clifford, Richard M.; & Cryer, Debby. (1996). Early childhood environment rating scale (Rev. ed.). London: Teachers College Press. ED 422 128.

Howe, Michael J. A. (1999). A teacher's guide to the psychology of learning. Oxford: Blackwell.

Katz, Lilian G. (1995). Talks with teachers of young children: A collection. Norwood, NJ: Ablex. ED 380 232.

Katz, Lilian G. (1999). Another look at what young children should be learning. ERIC Digest. Champaign, IL: ERIC Clearinghouse on Elementary and Early Childhood Education. ED 430 735.

Kirk, Jerome, & Miller, Marc L. (1986). Reliability and validity in qualitative research. London: Sage.

Laevers, Ferre. (1993). Deep level learning: An exemplary application on the area of physical knowledge. European Early Childhood Education Research Journal, 1(1), 53-68. EJ 467 556.

Laevers, Ferre. (1996, September). Social competence, self-organization and exploratory drive and creativity: Definition and assessment. Paper presented at the 6th European Early Childhood Education Research Association Conference on the Quality of Childhood Education, Lisbon.

Laevers, Ferre. (2000). Forward to basics! Deep-level learning and the experiential approach. Early Years, 20(2), 20-29.

Leavitt, Caroline H.; Tonniges, Thomas F.; & Rogers, Martha F. (2003). Good nutrition-The imperative for positive development. In Marc H. Bornstein, Lucy Davidson, Corey Keyes, & Kristin Moore (Eds.), Well-being: Positive development across the life course (pp. 39-51). London: Erlbaum. ED 481 045.

McGuinness, Carol. (1999). From thinking skills to thinking classrooms: A review and evaluation of approaches for developing pupils' thinking skills (Department for Education and Employment Research Brief No. 115). London: HMSO.

Miles, Matthew B., & Huberman, Michael A. (1994). Qualitative data analysis: An expanded sourcebook. London: Sage.

Moss, Peter. (1996). Defining objectives in early childhood services. European Early Childhood Education Research Journal, 4(1), 17-31. EJ 528 177.

Moyles, Janet; Adams, Sian; & Musgrove, Alison. (2002). SPEEL: Study of pedagogical effectiveness in early learning. London: Department for Education and Skills.

Pascal, Christine, & Bertram, Tony. (1999). Accounting early for lifelong learning. In Lesley Abbott & Helen Moylett (Eds.), Early education transformed (pp. 93-104). London: Falmer Press.

Rogoff, Barbara. (1990). Apprenticeship in thinking. Oxford: Oxford University Press.

Sharp, Caroline. (2002, November). School starting age: European policy and recent research. Paper presented at the LGA Seminar "When Should Our Children Start School?", London.

Sheehy, Noel; Trew, Karen; Rafferty, Harry; McShane, Deidre; Quiery, Nula; & Curran, Sandra. (2000). The Greater Shankill early years project evaluation report. Belfast: CCEA.

Sproule, Liz; Rafferty, Harry; Trew, Karen; Walsh, Glenda; McGuiness, Carol; & Sheehy, Noel. (2001). The early years enriched curriculum evaluation project: First year report. Belfast: School of Psychology, Queen's University Belfast.

Sproule, Liz; Rafferty, Harry; Trew, Karen; Walsh, Glenda; McGuiness, Carol; & Sheehy, Noel. (2002). The early years enriched curriculum evaluation project: Second year report. Belfast: School of Psychology, Queen's University Belfast.

Sproule, Liz; Rafferty, Harry; Trew, Karen; Walsh, Glenda; McGuiness, Carol; & Sheehy, Noel. (2003). The early years enriched curriculum evaluation project: Third year report. Belfast: School of Psychology, Queen's University Belfast.

Statham, June, & Brophy, Julia. (1992). Using the early childhood environment rating scale in playgroups. Educational Research, 34(2), 141-148.

Sundell, Knut. (1992). Instructional style, age span in childhood groups and speech, cognitive and socio-emotional status. In Ferre Laevers (Ed.), Defining and assessing quality in early childhood education (Studia Paedagogica, No. 16). Leuven, Belgium: Leuven University Press.

Sylva, Kathy; Siraj-Blatchford, Iram; & Taggart, Brenda. (2003). Assessing quality in the early years. Early Childhood Environment Rating Scale Extension (ECERS-E). Four curricular subscales. Stoke on Trent, UK: Trentham Books.

Tymms, Peter; Merrell, Christine; & Henderson, Brian. (2000). Baseline assessment and progress during the first three years at school. Educational Research and Evaluation, 6(2), 105-129. EJ 612 339.

Vygotsky, L. S. (1978). Mind in society. The development of higher psychological processes. Cambridge, MA: Harvard University Press.

Watson, Judith. (2000). Constructive instruction and learning difficulties. Support for Learning, 15(3), 134-141.

Author Information

Dr. Glenda Walsh is a senior lecturer at Stranmillis University College, Queen's University Belfast. She teaches curriculum and teaching studies, child development, and all aspects of early years education. Her research interests are principally embedded in the evaluation of early years practice and process measures of quality. Her doctoral thesis concentrated on the play versus formal debate, evaluating the quality of practice in Northern Ireland and Denmark for the 4- to 5-year-old child. This paper grew out of work begun in her doctoral thesis and will be of interest to all those involved in the process of evaluation in early years education.

Dr. Glenda Walsh
Senior Lecturer in Early Years Education
Stranmillis University College
Stranmillis Road
Northern Ireland
Telephone: 02890 384 432

John Gardner is a professor of education and the Dean of the Faculty of Legal, Social, and Educational Sciences at Queen's University, Belfast. He has published several books and over 50 journal articles. Current research interests include studies of the effectiveness of Reading Recovery and other reading intervention programs, quality assessments of preschool settings, the impact of assessment on pupil motivation, and the process of school evaluation and inspection. He is a member of the influential Assessment Reform Group, a UK lobbying group that promotes, inter alia, the importance of formative assessment. He is a member of the Council of the British Educational Research Association and is Deputy Chair of the Academy of Learned Societies for the Social Sciences.

Professor John Gardner, Dean
Faculty of Legal, Social, and Educational Sciences
The Queen's University of Belfast
Room 23.G01
23 University Square
Northern Ireland
Telephone: +44 (0) 28 9097 5372
Fax: +44 (0) 28 9023 8133
