Learning From Leadership: Investigating the Links to Improved Student Learning
Key Findings
- District data-use practices have a substantial influence on principals' data-use practices.
- Most principals have and use considerable amounts of evidence about the status of individual students and their student populations.
- Very few principals have systematically collected evidence about the school and classroom conditions that would need to change for achievement to improve.
- A slim majority of principals process their data in collaboration with their staffs and call on district staff members and others with special expertise to help them with data analysis and use.
- When schools are considered in the aggregate, typical approaches to data use by districts and principals have no measurable influence on student achievement. But variations in data use, specifically in elementary schools, explain a significant amount of variation in student achievement.
- Leaders in high data-use schools have clear purposes for analyzing data. They engage their staff collectively in data analysis, build internal capacity for this work, and use data to solve problems, not simply to identify them.
- Principals can play a key role in establishing the purposes and expectations for data use. They can provide structured opportunities (collegial groups and time for data use), sessions for data-use training and assistance, access to expertise, and follow-up actions. Where principals do not make data use a priority—where they do not mobilize expertise to support data use and create working conditions to facilitate data use in instructional decision making—teachers are not likely to do it on their own.
Introduction
A decade ago, it was disconcertingly easy to find education leaders who dismissed student-achievement data and systematic research as having only limited utility for improving schools or school systems. Today, we have come full circle. It is hard to attend an education conference or read an education magazine without encountering broad claims for data-based decision making.
Against a broad background of increased interest in educators' uses of data, we were motivated to pursue this strand of our research by five broad issues. First, we aimed to clarify state and district approaches to data use. Second, we wanted to better understand the relationship between districts' and principals' orientations to evidence-based decision making. Compelling evidence now suggests that this relationship is a central explanation for how data are used in schools.219 Third, while principals and teachers everywhere are being admonished to use more and different data in their decision making,220 we were curious to know what their typical response to data use is.
Our fourth purpose was to better understand patterns of data use in schools where evidence-based decision making had become a priority. Finally, we wanted to know whether typical approaches to data use by districts and principals have any discernible influence on student achievement. Almost all accountability-driven, large-scale reform efforts assume that greater attention by districts and schools to systematically collected data is a key lever for improving student performance. But evidence in support of this assumption is thin and mixed.221 Perhaps, we surmised, there are important conditions to be met or thresholds to be surpassed before such data use matters.
Current scholarship highlights educators' increasing reliance on data use at the school and district levels. These reports often are based on case studies of one or a few sites, chosen to exemplify positive stories of data use.222 Studies of this sort provide insights about uses of data, organizational conditions (e.g., leadership, resources, professional trust between teachers and between teachers and administrators) conducive to data use, and ways in which data use can evolve and become more comprehensive and institutionalized in ongoing work routines over time. The innovations and activity surrounding data use are, however, quite recent; and the brief track record to date makes it difficult to be confident about the effects of data use, particularly effects on student achievement.
Prior Research
We framed data collection and analysis for this section of our research according to five variables about which there is considerable prior evidence. In this framework, summarized in Figure 9, student achievement is the dependent variable, influenced most directly by the decisions and actions of school staffs, especially principals. Types of evidence available to the school (often from the district) and existing conditions influencing how data are interpreted and used are variables shaping the processes for interpreting evidence by principals and their colleagues in their decisions and actions. This framework acknowledges the reciprocity of relationships among these variables. For example, the outcome of data interpretation processes might not be actions or decisions aimed directly at student learning; instead, it might be a search for additional types of evidence considered crucial to decision making, or push-back on some external influences on data use considered unhelpful by principals and teachers.
Types of Data (Breadth, Nature and Patterns of Use)
Breadth of data. Our conception of variation in the breadth of data used by principals took, as its point of departure, the framework guiding our overall project. Principals' actions or practices are determined by their thoughts, values, and feelings. These internal states have antecedents: principals' own past experiences, knowledge, and beliefs, as well as their interpretations of the consequences of their current practices for the local and wider contexts in which they find themselves. Yeh (2006) has adopted a similar interpretive perspective in his research on teachers' response to data from state tests, with a focus on teacher attitudes, in particular.
Figure 9: Framework for understanding evidence-informed processes
The framework for our overall project also points to the mostly indirect influence of principals' actions on students and on student learning.223 Such actions are mediated, for example, by school conditions such as academic press,224 with significant consequences for teaching and learning and for powerful features of classroom practice such as teachers' uses of instructional time.225 Evidence-informed decision making by principals, guided by this understanding of principals' work, includes having and using a broad array of evidence about many things: key features of their school's external context; the status of school and classroom conditions mediating leaders' own leadership practices; and the status of their students' learning.
Nature of data (informal vs. formal). The admonition to be "more evidence-based" should not be taken literally. It is certainly not the case that teachers and administrators have been making evidence-free decisions for the past hundred years. But the evidence available to teachers and principals has often come from their impressions of "ordinary workplace practice"; these typically narrative accounts of experience "constitute a pervasive feature of workplace discourse and a resource for workplace learning" (Little, 2007, p. 220).
We can't say a priori whether shifting the weight of emphasis from informal to formal evidence for decision making will improve schools; it is an empirical question.226 The current emphasis on using student performance data to guide improvement efforts also calls for greater attention by those in schools to measurable patterns of student performance at the school level, or by student sub-groups, in addition to the conventional interest in individual student needs and progress.
Furthermore, the systematically collected evidence available to most schools today is almost entirely evidence about the current status of student achievement. In some schools this consists almost entirely of externally mandated test data gathered toward the end of the school year. While information about achievement is obviously critical for schools, it has almost nothing to say about the causes of such achievement or the strategies that might be useful for improving achievement levels. Moreover, for data of this sort, schools rely mainly on results from large-scale national or state testing programs.227 Most of these programs focus only on a narrow band of objectives in the formal curriculum; they have unknown levels of reliability at the school level; they are cross-sectional in nature; and the results they yield become available to schools only after lengthy time delays.228
Patterns of data use. Based on a study of data-driven decision making in 36 schools, Ikemoto and Marsh (2007) developed a conceptual framework of four models of school data use, varying by the complexity of the data used and the complexity of the analysis and decision making in question. They labeled these models basic (simple data, simple analysis, simple decision making), analysis-focused (simple data, complex analysis, complex decision making), data-focused (complex types of data, simple analysis, simple decision making), and inquiry-focused (complex types of data, complex analysis, complex decision making). We found these dimensions of data use, if not the archetypes, helpful in comparing data use across our site-visit districts and schools.
Conditions Influencing Data Use in Schools
Ikemoto and Marsh (2007) also have identified a set of school and district conditions likely to support data use in schools. Developing these conditions requires leadership, most obviously from principals,229 although others might certainly contribute. The conditions include accessibility and timeliness of data; perceived validity of data; staff capacity and support for considering data; time available to interpret and act on the data; partnerships with external organizations for analyzing and interpreting data; and tools for data collection and interpretation (procedures and instruments). Similar conditions fostering data use in schools have been identified by Wilson (2004), Heritage and Yeagley (2005), and Yeh (2006).
From a three-year case study of the uses of evidence related to instructional decision-making at the district level, Coburn, Touré, and Yamashita (2009) identified key factors influencing the uses of data. These factors include the congruence of sources of evidence with the prior beliefs of decision-makers, the content knowledge of individuals using data to advocate alternative views, organizational structures that inhibit or promote shared understanding of instructional matters, resource constraints, and the micropolitics of authority and power in decision-making processes. With the exception of micropolitical processes, these factors are similar to several of the conditions described by Ikemoto and Marsh (2007), including perceived validity of data, staff capacity, and organizational resources (e.g., time, contexts for collaborative work).
Certain forms of leadership and organizational culture also may foster data use, particularly when they reflect norms and values supporting careful use of systematically collected data (Ikemoto & Marsh, 2007), creating what Katz and his colleagues (2002) refer to as an "evaluation habit of mind" within schools. Justification for including this condition in our analytic framework can also be found in evidence reported by Louis, Febey, and Schroeder. They found that active efforts "by district-level administrators to mediate sense-making affected teachers' attitudes toward accountability policies and standards-driven reform" (2005, p. 177). Firestone and Gonzalez (2007) also demonstrate the quite different ways in which data are interpreted and used in schools and districts depending upon whether dominant norms in district culture are oriented to accountability or organizational learning.
Processes Used for Data Interpretation and Decision Making
Approaches to interpreting data vary. Two school leaders having access to the same data may use different approaches for making sense of it, and some approaches will be more productive than others. Ikemoto and Marsh (2007; see Patterns of data use, above) provide a compelling case for the hierarchical nature of four such approaches in terms of their value for school-improvement decisions. These approaches vary along five dimensions, in Ikemoto and Marsh's conceptualization; we summarize these below, along with a sixth dimension we have added.
- Number of data sources: Variation on this dimension ranges from a single source (e.g., an annual standardized reading assessment) to multiple sources (e.g., other standardized tests and teacher-created assessments). Justification for considering this dimension can be found in basic accounts of the limitations and biases inevitably associated with any single type or source of data.230 Knapp and his colleagues (2007) have described several mistakes schools can make if they rely on only one data source.
- Nature and extent of data analysis. While Ikemoto and Marsh (2007) acknowledge that, in some circumstances, simple forms of data analysis might be quite appropriate, less obvious but critical underlying explanations for results will sometimes require more complex analysis. Disaggregating data by student groups, for example, is a minimum requirement for pinpointing the potential sources of underperformance among students in many school contexts. External standards or criteria used in the interpretation of data may also add a valuable dimension of complexity.
- Who is involved in data interpretation and use? At the least productive end of this dimension, one person (usually the school administrator) does most of the analysis and interpretation and then reports the results to teachers. The most productive end of this dimension involves multiple participants in data analysis, interpretation, and decision making. Participants may come together in what Wayman and colleagues (2006) call collaborative data teams. These are professional learning communities with access to information about their students' learning. Collaborative structures for making sense of data have been recommended by many others, as well.231
- Engagement of special expertise. This dimension considers the nature and extent of engagement by people with expert knowledge from outside the school staff—for example, district staff with technical expertise in measurement or university faculty members with specialties relevant to the content of particular assessments. At the least productive end of this continuum, no specialists are used; at the most productive end, experts are selected to provide assistance for well-defined reasons. The presence or absence of expert knowledge may matter a great deal, regardless of its source. Coburn, Touré, and Yamashita (2009) found, for example, that district-level educators' use of evidence related to instruction was significantly influenced by their own content knowledge about the issues in question (e.g., explanations for low math scores, best approaches to reading instruction).
- Number of data points. This dimension focuses on data collected at one point in time or data collected at several points in time. School district officials and principals may consider, for example, evidence collected at one testing date or evidence collected at several points—e.g., data on student growth against expected learning standards throughout the year, and from year to year. Longitudinal evidence that displays trends and trajectories has greater potential than snapshot data for informing educators' school-improvement activity.
- Extent of use. In addition to the above five dimensions along which principals and schools may vary in their uses of data, we also inquired about extent of use, a broader indicator of the prevalence of data use in schools. Within this dimension we incorporate variability in the types and number of organizational contexts in which data are used (e.g., school-improvement planning meetings, grade team meetings, data retreats).
Data Use and Student Learning
Evidence about the impact of data use on student learning is still quite meager; it has to be cobbled together from different strands of research. The most compelling line of research focuses on teachers' use of formative or "just-in-time evidence"232 about students' learning to shape their own instruction. Black and Wiliam's (2004) review of more than 250 studies serves as the primary source for the claim that formative assessment, in Popham's words, "can fundamentally transform the way a teacher teaches" (2008, p. vii).
Evidence is mixed at best about the impact of large-scale state and district testing programs on student achievement. Koretz (2005), for example, claims that evidence about the effects of assessment-based accountability is both sparse and discouraging. Indeed, a vigorous critique of the effects of large-scale assessment has developed as the tests in question have become increasingly high-stakes for students, teachers, and administrators.233 On the other hand, in a comparison of high- and low-accountability states, Carnoy and Loeb (2002) found significantly greater achievement in eighth-grade mathematics for students in high-accountability states, with no difference in retention or high school completion rates.
Some evidence from research on effective schools and school districts making improvement shows that data-informed decision making, with an emphasis on data about student progress and outcomes, is characteristic of district-level leadership in these settings.234 Coburn, Touré, and Yamashita's (2009) case study of data use in one school district reveals, however, that educators and other interested parties may use assessment data and other forms of evidence symbolically rather than instrumentally, as different policy actors attempt to influence decisions to reflect their preferences. This finding challenges the simplistic view that data use for school improvement is a straightforward, objective process.
New Evidence
To better understand the five broad issues motivating this strand of our research, we undertook complementary sub-studies using qualitative (site-visit interviews) and quantitative (surveys, student achievement measures) data at the district and/or school levels.
- Sub-study one focused on the types and nature of data use by principals in their decision making; district influences on data-informed decision making by principals; and the relationship between school data use and variability in student achievement.
- Sub-study two focused on data use and support for data use in schools and at the district level, along with case studies of six site-visit schools identified from our surveys as high data-use schools.
While our research questions varied for each analysis, they all employed the Ikemoto and Marsh framework as a common organizer for analysis and discussion. The discussion that follows integrates findings from each sub-study where appropriate.
Method
Sub-study one. Interview data collected from 27 principals during the second round of site visits provided the qualitative evidence for this sub-study. While these interviews were relatively open-ended, our analysis of them was explicitly guided by the framework described above. Our quantitative evidence consisted of responses collected from 3,969 teachers and 107 principals during the first round of surveys (for a response rate of approximately 70%). The school was the unit of analysis. Data from each of the 107 schools included responses from the principal and seven or more teachers. Five questions on the principal survey asked about the extent of their districts' approach to data use; four questions inquired about principals' own approach to data use; and two questions on the teacher survey asked teachers about their principals' approach to data use.
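To make this aggregation concrete, the sketch below shows one way teacher survey responses can be rolled up into school-level scale scores when the school is the unit of analysis. The data frame, item names, and values are hypothetical, not the study's actual instruments.

```python
import pandas as pd

# Hypothetical teacher-level responses to two survey items about the
# principal's approach to data use (names and values are illustrative).
teachers = pd.DataFrame({
    "school": ["S1", "S1", "S1", "S2", "S2", "S2"],
    "item_1": [4, 3, 5, 2, 3, 2],
    "item_2": [5, 4, 4, 3, 2, 3],
})

# With the school as the unit of analysis, teacher responses are first
# averaged within each school; the item means are then combined into a
# single school-level scale score.
school_item_means = teachers.groupby("school").mean()
school_scale = school_item_means.mean(axis=1).rename("teacher_view_of_principal_data_use")
print(school_scale)
```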
Data about annual levels of achievement in literacy and mathematics provided the final source of evidence for this analysis. These data, obtained from each school's website, derived from state testing programs. We explored the relationship between variations in data use and student achievement using average annual achievement measures. Following Linn's (2003) advice for generating stable achievement measures, we represented each school's performance by the combined mathematics and language scores for all grades tested, averaged over three years. We also examined mathematics and language scores separately.
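A short sketch can illustrate the composite just described: mathematics and language results are combined within each year, and the annual composites are then averaged. All schools, years, and scores below are invented.

```python
import pandas as pd

# Hypothetical school-by-subject-by-year test results (e.g., percent proficient).
scores = pd.DataFrame({
    "school":  ["A"] * 6 + ["B"] * 6,
    "subject": ["math", "language"] * 6,
    "year":    [2005, 2005, 2006, 2006, 2007, 2007] * 2,
    "score":   [61.0, 58.0, 64.0, 60.0, 66.0, 63.0,
                72.0, 70.0, 75.0, 74.0, 76.0, 73.0],
})

# Combine mathematics and language within each year, then average the
# annual composites over the three years to damp year-to-year volatility.
composite = (
    scores.groupby(["school", "year"])["score"].mean()
          .groupby("school").mean()
          .rename("three_year_achievement_composite")
)
print(composite)
```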
We did not select schools for sub-study one on the basis of their data-use practices. Rather, we selected them to represent the normal distribution of schools on such variables as size, student SES, and school level, but weighted more heavily in favor of schools serving high-needs students. We assume that the data-use practices portrayed by our data are typical of many schools across the country.
Sub-study two. Here we examined what district administrators (e.g., superintendents, assistant superintendents, curriculum and assessment directors) from the 18 site-visit districts had to say about data use for decision making at the district and school levels. For this analysis all district administrator transcripts across the three site visits were reviewed. Comments related to evidence use and factors affecting data use were collected using the Ikemoto and Marsh (2007) schemas of data use conditions and processes as a framework for organizing the data prior to undertaking a more in-depth inductive analysis of findings within those dimensions.
We also used items about data use from Round One of the teacher and principal surveys to measure the extent of data use in schools. We sorted site-visit schools into high (one standard deviation or more above the mean), medium, and low (one standard deviation below the mean) data-use groups, and we selected six high data-use schools for case study analysis of the interview data from principals and from teachers. This sample comprised five elementary schools and one middle school from five districts located in four of the nine states. The analytical process adhered to that described above, except that case studies of data use were constructed for each school and then compared across the six schools to draw greater insight.
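A minimal sketch of this grouping rule follows (at least one standard deviation above the mean for the high group, at least one below for the low group); the schools and scores are invented.

```python
import pandas as pd

# Hypothetical school-level data-use scores derived from the survey items.
data_use = pd.Series(
    [3.1, 2.4, 4.6, 3.3, 1.8, 4.8, 2.9, 3.0],
    index=["S1", "S2", "S3", "S4", "S5", "S6", "S7", "S8"],
    name="data_use_score",
)

mean, sd = data_use.mean(), data_use.std()

def classify(score: float) -> str:
    """High if at least 1 SD above the mean, low if at least 1 SD below."""
    if score >= mean + sd:
        return "high"
    if score <= mean - sd:
        return "low"
    return "medium"

groups = data_use.apply(classify).rename("group")
print(pd.concat([data_use, groups], axis=1))
```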
Results
State Approaches to Data Use
To explore this issue we used data from sub-study two. The U.S. government and the states have created an accountability context in which data are a prominent feature. District leaders play a key role in determining how data are actually used in their districts. They model data use in district decision making; they set expectations for data use in school-improvement activities, and monitor the efforts that follow; they make use of supplementary tools to facilitate data use (e.g., data reports for schools, curriculum-embedded assessment instruments of student learning); and they mobilize expertise (locally developed or accessed externally) to help principals and teachers use data properly in decisions they make about improving student learning and school results. Very few principals are deeply and skillfully engaged in data use on their own, and isolated engagement is not sustainable in the face of staff turnover.
Superintendents acknowledge that federal and state standards and accountability systems have created a situation in which district and school personnel cannot ignore evidence about students who are struggling or failing to meet mandated standards for academic performance, as reflected in test results and other indicators of student success (e.g., attendance, graduation rates). With few exceptions, the district leaders we interviewed describe this as a positive turn of events, though they are not all equally well supported by their state education agencies in local efforts to make use of these and other kinds of performance data.
Respondents frequently identified the following issues associated with state expectations and support for data use:
- whether or not state assessment data are made available in a timely manner that enables local educators to make meaningful use of data
- whether or not state data reports provide sufficient detail to enable local educators to identify specific curriculum expectations that are and are not being met by individuals and sub-groups of students
- whether or not the state provides diagnostic and formative assessment tools aligned with state curriculum standards to help school personnel track student progress and provide assistance during the year
- whether or not the state education agency and/or state supported education service units have sufficient expertise to respond to local needs for effective data use
- the compatibility of state assessments and supplementary assessments that districts develop or adopt to compensate for gaps in the state system
Relationships between District and School Approaches to Data Use
Districts differed in their approaches to and support for data-based decision making. The differences reflect differences in state accountability systems; they also reflect differences in how district leaders use the data resources provided by the states, and in how they compensate for perceived deficiencies.
We examined data from interviews with district and school administrators concerning district data use. The fit of any district to Ikemoto and Marsh's typology of approaches to data use (basic, analysis-focused, data-focused, and inquiry-focused) is imperfect. However, the distinctions Ikemoto and Marsh draw are useful for describing how district leaders approach and support the use of data. We highlight salient similarities, differences, and trends in the complexity of data use from a district perspective.
In all districts, leaders were attentive to state test results and other required accountability measures (e.g., graduation rates, attendance)—for individual schools and for the district in relation to state proficiency standards and AYP targets. Some district leaders also gathered data from schools using district performance benchmarks and indicators. At a minimum, leaders used these data to identify concerns about the performance of students overall in selected curricular areas, or about specific schools and groups of students. Most districts supplemented state test data with other kinds of student assessments—e.g., norm-referenced tests, and diagnostic and formative assessments of individual student needs.
Diagnostic and formative assessments are meant to be used by school personnel to identify students requiring special program interventions (e.g., remedial programs, tutoring) or more differentiated instruction in the classroom. It is typically the district that mobilizes access to these assessment tools. We encountered variability in the extent to which districts and schools rely on state diagnostic and formative assessment instruments, commercial assessment instruments, or district-developed instruments.
Our evidence shows a trend toward increasing the array of data that district and school personnel consult in making decisions. Beyond the practical challenges of training people about how to interpret data and making time for them to do it, districts faced a major challenge in issues of compatibility and alignment among elements of assessment systems. To the extent that districts and schools are accountable for meeting state performance standards, any assessments that are not clearly linked to performance on those standards are problematic.
This problem is less evident in districts that have developed curricula well aligned to state standards, and that have succeeded in developing curriculum-embedded diagnostic and formative assessments of individual student progress. In these districts, data generated from regular assessments by classroom teachers are aligned with state standards and are likely to provide guidance for interventions that will foster improved performance according to those standards.
Districts also varied in their expectations of and support for the people assigned to lead, or participate, in the analysis of data. District size was clearly a factor here. Whereas large districts were likely to employ assessment and evaluation specialists (individuals or teams), small districts were more likely to rely on district administrators or curriculum directors with expertise in assessment matters. Small districts also were more likely to draw upon expert advice and assistance provided by curriculum and assessment specialists from state-supported education service centers.
District leaders recognized the need to develop capacity for data use among school personnel, particularly in decisions about school-improvement initiatives and instructional programs. We observed what seems to be a progression in district approaches to developing that capacity. In some settings district leaders reported a shift: initially, an emphasis on developing principals‘ expertise in data use; next, an emphasis on training selected teachers in each school as resident experts; and, more recently, an emphasis on encouraging and supporting data use by classroom teachers, working in teams.
Districts varied in the complexity of the data analyses they called for. In part, this variation reflects the level of detail provided in state data reports; it also reflects what district leaders do (or do not do) to compensate for perceived deficiencies in those reports. Some states do not provide test results in a form that makes it easy for principals and teachers to do an item analysis showing where students did not perform well, and which curriculum standards are linked to those test items. In these cases, school personnel were likely to make superficial use of state data—identifying broad areas of concern, but with little understanding of specific needs for improvement—unless the district were to provide special assistance with the task.
Even when states do provide data in a form that allows for item analysis, some districts stop short of providing schools with the strategies and tools needed to investigate underlying factors that might be causing identified problems. In the few districts that exemplified an inquiry-focused approach to data use (in Ikemoto and Marsh's terms), district leaders posed questions and then proceeded to explore them with existing and new data, as needed. In one district, the superintendent asked how many students were reaching Grade 5 without reading proficiently, and why. District leaders uncovered a pattern of low teacher expectations and social promotion in the primary grades. This led to a series of interventions: a standards-based report card, enforcement of promotion policies, and in-service training and communication with teachers about raising expectations for young children's learning.
We observed one other shift in the evolution of data use. In a few districts, district and school leaders reported that analysis of trend data by district and/or state assessment specialists had led to the identification of early indicators of students academically at risk, based on test scores or other factors (e.g., family circumstances), in lower grade levels. While state education agency specialists had made tools available for trend analysis in one of the states we sampled, the shift toward assembling and making trend data available to district and school personnel has been largely a district-level initiative. This has become possible thanks to the growing availability of software that enables educators to store and retrieve longitudinal data on students, individually or by groups. (While access to trend data is increasing, however, district and school personnel were more apt to talk about its availability and potential than its use).
Types of Data Used by Principals and Teachers
Principals across the sample of site-visit schools confirmed the extensive use of systematically collected evidence about student achievement. All but one principal referred to state-mandated assessment results. Sixteen of the 27 principals mentioned district-mandated measures of student achievement. A few talked about the development of diagnostic and formative assessments, aligned with state and district curriculum standards, used by teachers to track student performance. These data were often used to identify and provide targeted interventions for struggling students. High data-use schools, particularly, emphasized the development and systematic use of diagnostic and formative assessments of student learning.
Principals also referred to evidence about their students as a group, including student mobility rates, attendance rates, graduation rates, proportion of students eligible for free or reduced-price lunch, students "at risk," and students with handicaps of various sorts. At a minimum, they used this sort of data in compliance with policy requirements for reporting student test results and for allocating students and district resources to categorically prescribed programs, such as Title I. Less frequently, school and district personnel used background information for help in interpreting student and school performance data. This more complex use of data was more likely in high data-use contexts.
Principals and teachers in some districts reported the adoption of computerized data management systems, and the potential these systems suggested for displaying and using trend data on student performance. But they talked more about the added workload involved in entering data into the systems than about actual retrieval, analysis, and use of trend data for decision making.
When we asked about data use for decision making related to improvement in the quality of teaching and learning, principals across the site-visit schools spoke mostly about student assessment data, not about data on teacher performance or the need for professional support. Some principals, however, reported that student performance data (particularly formative data at the classroom level) related to targeted school-improvement goals (e.g., for reading, writing) did enter into their discussions with teachers during regular teacher supervision processes.
A few principals mentioned unobtrusive methods of learning about what was happening in classrooms through workplace discussions with teachers individually or in teacher teams (e.g., grade-level, subject teams, professional learning community groups). Several described observations they were able to make regarding teachers' instructional practices and students' responses during informal classroom walk-throughs (which appear to be an increasingly common administrative practice in schools). In high data-use schools, principals were more likely to connect teacher supervision processes and the more informal observations and conversations to specific instructional improvement goals and initiatives.
No one talked about aggregating information about individual teacher performance, from formal or informal supervision processes, for use in decisions about improvement goals and progress. Perhaps principals did not routinely think of the information they were assimilating through observation and talk about teaching practice as "data." From an outsider's perspective, however, observation and talk certainly could yield evidence relevant to administrative decisions.
In sum, we offer two general observations. First, principals and teachers had considerable amounts of evidence about the status of individual students and their student populations, and they used it in various ways. But they had little formal evidence about the organizational conditions that might need to change if classroom and school performance were to improve. Second, high and low data-use schools differed little in respect to the data available to them. Differences were more evident in the uses schools made of the available data.
Patterns of Data Use in High Data-Use Schools
Guided by Ikemoto and Marsh's (2007) framework, we used evidence from sub-study two to describe patterns of data use, especially in high data-use schools.
Complexity. The scope, frequency, and complexity of data use were greater in high data-use schools, as were the potential contributions of data use to improvement in teaching and learning. Principals in most schools, for example, cited state test results as a factor in setting school-improvement goals. The number of sites where principals and teachers were actively using data to monitor the outcomes of school-improvement plans, however, was more limited.
Teachers and principals in many schools reported using diagnostic assessment instruments as a basis for identifying struggling students and placing them in remedial programs at the beginning of the school year. School personnel in higher data-use schools were more likely to report using formative assessments of student progress at intervals across the school year; they were also more likely to rely on cyclical decisions about which students needed additional help through remedial or enrichment programs, afterschool tutoring, and differentiated instruction in the classroom. Less frequently, principals and teachers reported using data in making decisions about professional development plans or in the course of conversations with parents about student performance and programming.
Specific purposes. Teachers have always evaluated their students for the purpose of grading and marking report cards. Incorporating student performance data into decisions about instruction has been less common. That use of data, we found, was more likely to occur in settings where district and school leaders had linked data use to specific purposes. In some schools, for example, teachers used diagnostic and formative assessment data to make decisions about student placement in remedial reading or math programs, or in school-based tutoring programs. Principals arranged in-service training to increase teachers' repertoires of instructional strategies in order to foster differentiated instruction in subject areas targeted for improvement.
Participants. Use of data was largely a collective activity in schools. It happened in grade team meetings, subject groups, professional learning community groups, committees convened to assess and monitor needs for at-risk students, school leadership or improvement teams, or in whole-staff events, such as data retreats and faculty meetings.
In some schools, inquiry-oriented data use was being modeled by the principal, but had not yet evolved into a more collective activity involving teachers as well. The principal in one school, for example, did her own investigation of why so many Hispanic students entering the school at Grade 3 had not moved on to English-medium classrooms, as expected, by Grade 6, and she presented her findings and plans to her staff. In another school, the principal sought out comparison data on state test results from other schools in an effort to learn why his school's performance rating had slipped below the state's exemplary rating, and he took action based on his analysis.
Sources of expertise. Our interview data point to five potential sources of expertise in data use in schools: central office personnel (superintendents, curriculum or assessment specialists); state-supported regional education center specialists; principals; key teachers trained to serve as assessment and data experts; and classroom teachers in general. In lower data-use schools educators tended to depend on external expertise, or to rely on the principal or a key teacher (e.g., counselor, literacy coach) as the resident data expert. In higher data-use schools, expertise was more widely distributed. Principals and teachers reported increasing efforts to develop the capacity of teachers to engage collectively in data analysis for instructional decision making, supported by but not dependent on other experts. Data use was often the focus of professional learning community initiatives. Districts contributed by offering training in the use of curriculum-linked classroom assessments, school-wide data analysis events, coaching of teacher teams (grade or subject teams, professional learning community groups), and the purchase of data software and training in its use.
Key role of principal. Principals played a key leadership role in establishing purposes and expectations for data use. They also provided structured opportunities for data use (collegial groups and time), learning about data use through training and assistance, access to expertise, and follow-up actions. Principal leadership in this respect was crucial. Where principals did not make data use a priority—mobilizing expertise to support data use and creating conditions to facilitate data use in instructional decision making—teachers were not doing it on their own. We did see examples in some schools of principals providing leadership for data use in the absence of well-organized district-level leadership and support. Overall, however, the scope and complexity of data use in schools mirrored the data-use orientations, practices, expectations, and support shown by district office leaders.
Problem solving. In all the schools we studied, school personnel were using student performance data to comply with external accountability requirements and to identify problems at the school, student sub-group, or individual student levels. However, principals and teachers in only a few settings had progressed beyond using data for problem identification to using data for problem solving. Principals and teachers who had turned to problem solving were gathering and analyzing data in order to understand the causes or factors related to the problems in question and to monitor the effects of interventions implemented to ameliorate those problems.
In one elementary school, for example, the principal and teachers identified improvement in children's expository writing as a school goal. The principal mobilized teachers to develop mid-year writing prompts to supplement beginning- and end-of-year assessments developed by the district. She called on district consultants to provide in-service training for teachers, not only on the use and interpretation of assessments based on the district's standards-based writing rubric but also on teaching methods associated with identified goals for improvement in writing. She organized the teachers into professional learning communities dedicated to studying student progress and the effects of teacher interventions. And she and the teachers implemented a process whereby teachers interviewed students about their responses to the strategies for teaching writing that teachers were using.
Challenges. On the face of it, the push toward using increasingly complex types of data and increasingly complex analyses to inform decisions seems like a good idea. But we observed tensions in some schools between traditional norms of decision-making (reliance on established expertise) and the recent move toward decisions informed by evidence. The tension was especially notable in settings where districts mandated the use of computer-based data management systems to record (and potentially retrieve and use) many forms of assessment information, student characteristics, and program placement data (e.g., by grade, classroom, sub-group population) over time. Teachers talked about data overload, emphasizing the time required to enter information into these systems as well as the time and expertise required to retrieve and interpret it. It often remained unclear what specific purposes these systems were to serve. Tension also surfaced when school or district leaders called for data-informed decisions to be made in areas where those decisions had traditionally been made by teachers on the basis of their individual and collective expertise. This issue was most salient in schools where the vast majority of students were already performing at high levels.
Effects of Data Use on Student Achievement
We used quantitative and qualitative methods to examine the relationship between data use and student achievement. The quantitative analysis focused on responses to principal and teacher surveys and on our measures of student achievement in literacy and mathematics. First we entered three measures of data use (principals’ view of district data use, their own data use, and teachers’ perceptions of principal data use), as a block, into a regression equation. We entered the four demographic variables (student diversity, poverty, school level and school size) in the final equation. None of the measures of data use had a significant effect on student achievement when added to the equation on their own, nor did they have any unique explanatory value when combined with the four demographic measures in the final equation.
The demographic variables explained about 19% of the variance in student achievement, with school level and diversity each explaining about 5% of that variance. We used the same variables for another analysis that reversed the order of entry for the data-use and demographic variables. The results were essentially the same. We conducted a third analysis with these variables, using only the elementary schools (n = 52). In this analysis, data-use variables did have a significant effect on achievement, explaining 19% of the variance with the first equation [F(3,51) = 5.03, p<.05]. The explained variation increased to 24% in the second equation with the demographic measures, but only perceptions of district use had a significant effect. However, the reduction in the number of cases (to fewer than 10 per variable for the regression analysis) limits the reliability of this result.
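To make the procedure concrete, the sketch below runs a blockwise (hierarchical) OLS regression of the general form described above. The file name and all variable names are hypothetical; this is not the study's actual analysis code.

```python
import pandas as pd
import statsmodels.api as sm

# Hypothetical school-level file; one row per school.
df = pd.read_csv("schools.csv")

data_use_block = ["district_data_use", "principal_data_use",
                  "teacher_view_of_principal_data_use"]
demographic_block = ["diversity", "poverty", "school_level", "school_size"]

# Block 1: enter the three data-use measures together.
m1 = sm.OLS(df["achievement"], sm.add_constant(df[data_use_block])).fit()

# Block 2 (final equation): add the four demographic variables.
m2 = sm.OLS(df["achievement"],
            sm.add_constant(df[data_use_block + demographic_block])).fit()

# The R-squared increment is the unique variance explained by the block
# entered second; reversing the order of entry tests the converse.
print(f"R2, data-use block only:  {m1.rsquared:.3f}")
print(f"R2, final equation:       {m2.rsquared:.3f}")
print(f"R2 change (demographics): {m2.rsquared - m1.rsquared:.3f}")
```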
Given this weak statistical evidence of positive relationships between student achievement and district or school data use (as reflected in the principal and teacher survey items), we turned to our qualitative data, which provided the following insights:
- The availability of student assessment data in the context of current federal, state, and district accountability requirements is causing district and school personnel to justify their goals and plans for improvement, focusing in particular on students and schools that are not meeting standards-based performance expectations and targets.
- The potential for these focused improvement plans to make a difference in the quality of student learning is highly dependent on the degree to which local educators are able to align local curriculum, teaching, and assessment practices with the external measures against which they are being held to account.
- District and school efforts to improve student learning are more likely to have a positive effect when the data available and the analysis performed by local educators go beyond the mere identification of problem areas to an investigation of the specific nature of the problem, and factors contributing to it, for the students and settings where it is situated.
- Improving teaching and learning with the use of data is only as effective as the insights gained through data analysis and the consequent actions taken to address the problem.
Our quantitative and qualitative findings lead us to speculate that there may be both a lower and an upper threshold beyond which increased or improved use of data by school and/or district personnel simply will not make much difference. One of the large, low-SES urban districts in our sample, for example, had been classified under AYP regulations as in need of district-level intervention by the state, because so many of its schools were not meeting AYP targets. In this situation, it seems likely that there are fundamental social, resource, and perhaps leadership issues affecting student engagement and performance in schools, such that significant improvement without changes in those fundamental conditions is unlikely, even through curricular and instructional improvements informed by detailed analyses of assessment data.
On the other end of the spectrum, our sample included districts and schools that were performing at high levels relative to state performance standards. In such a setting, there may be a saturation point beyond which additional forms of data or expectations for data use simply do not add much value—only more work. In these situations the real imperative for improvement may have more to do with rethinking and redefining the goals for student learning than with increasingly complicated patterns of data use.
Implications for Policy and Practice
Four implications for policy and practice emerged from this section of our study.
- Districts are encouraged to spend less time ensuring that schools have large amounts of data and more time helping principals and teachers figure out how such data might help them do the job they are trying to do. In addition to multiple measures of student achievement, most principals had access to data about background characteristics of their student populations, including socioeconomic status, poverty, and diversity. No doubt these characteristics account for significant variation in achievement in typical schools. Indeed, in our sample of schools, these variables far outweighed the effects of principals' data use. So the challenge is not only to transform data into actionable evidence, but also to help principals understand the implications of such evidence for their improvement efforts.
- Districts and schools would benefit from collecting data about local family educational cultures – norms, beliefs, values, and practices reflecting families' dispositions toward schooling and their role in it. Many elements of such cultures (e.g., parental expectations for children's success at school) are malleable in response to school intervention and make quite significant contributions to student achievement (Hattie, 2009). But we saw little evidence of districts or schools collecting systematic evidence about these variables.
- Districts should work with school principals to help expand the range of high-quality data available to schools in order to more fully encompass the range of variables implicated in schools' problem-solving efforts. Very few principals had systematically collected evidence about the school and classroom conditions that would need to change for their students' achievement to improve. Many of these conditions are evident in other strands of our larger study, including, for example, teachers' dispositions toward collaboration, teacher efficacy, trust, academic press, and disciplinary climate.
- While districts do need to help all schools increase the sophistication of their data-use processes, priority should be given to helping secondary schools. A slim majority of principals processed their data in collaboration with their staffs and called on district staff members and others with special expertise to help them with data analysis and use, as normative theory on this matter recommends. But the typical approaches to data use by districts and principals had no measurable influence on student learning across school levels in the aggregate. In elementary schools, however, data use may account for a significant proportion of the variation in student achievement, over and above the effects of student diversity, poverty, and school size.
References
219. Wohlstetter, Datnow & Park (2008).
220. E.g., Linn (2003).
221. Koretz (2005).
222. See, e.g., school and district case study examples in Mandinach & Honey (2008).
223. E.g., Hallinger (1996).
224. Goddard et al. (2000a).
225. Resnick et al. (2007).
226. E.g., Heritage & Yeagley (2005).
227. Knapp et al. (2007); Leithwood & Levin (2005).
228. Computerized on-line data information systems are increasingly available for use by educators. These systems store and provide easy access to a wide range of standardized and classroom-based assessment data on students as individuals and in groups, as well as data about student attendance and demographic variables. Indeed, in several of our site-visit districts, systems of this type were introduced in the final years of our study. Beyond selected district or school case reports of exemplary use by developers and local implementation champions (e.g., Mandinach & Honey, 2008), however, we are not aware of any research that documents how widespread the adoption of these systems is, nor do we know of evidence about the effectiveness of their implementation or their impact on instructional decision-making and student learning on a large scale.
229. Firestone & Gonzalez (2007); Wayman et al. (2006).
230. Brewer & Hunter (1989); Yin (1984).
231. E.g., Earl & Katz (2002); Heritage & Yeagley (2005); and Knapp et al. (2007).
232. Erickson (2007).
233. E.g., McNeil (2000b); Mintrop (2004).
234. E.g., Cawelti & Protheroe (2001); Murphy & Hallinger (1988); and Togneri & Anderson (2003).