Learning From Leadership: Investigating the Links to Improved Student Learning
Key Findings
District policies and practices around instruction are sufficiently powerful that they can be felt, indirectly, by teachers as stronger and more directed leadership behaviors by principals. Higher-performing districts tend to be led by district staff who:
- Communicate a strong belief in the capacity of teachers and principals to improve the quality of teaching and learning, and in the district's capacity to develop the organizational conditions needed for that to happen (high collective efficacy).
- Build consensus about core expectations for professional practice (curriculum, teaching, leadership).
- Differentiate support to principals in relation to evidence of compliance and skill in implementing the expectations, with flexibility for school-based innovation.
- Set clear expectations for school leadership practices, and establish leadership-development systems to select, train, and assist principals and teacher leaders consistent with district expectations.
- Provide organized opportunities for teachers and principals to engage in school-to-school communication, focusing on the challenges of improving student learning and program implementation.
- Develop and model strategies and norms for local inquiry into challenges related to student learning and program implementation.
- Coordinate district support for school improvement across organizational units (e.g., supervision, curriculum and instruction, staff development, human resources) in relation to district priorities, expectations for professional practice, and a shared understanding of the goals and needs of specific schools.
Introduction
This chapter examines ways in which districts foster improvements in teaching and learning. We assumed at the outset (1) that successful districts focus on and support efforts to improve teaching and learning and (2) that districts are not all alike in the ways in which they embody this focus in policies and actions. Our analysis supports both of these assumptions.
Our findings also suggest that differences between districts, regarding efforts to improve teaching and learning, cannot be ascertained merely by asking administrators and specialists to articulate their priorities. All district leaders believe that they focus on instruction, but we found substantial variation from district to district in the levels of skill and understanding with which they address this focus. To describe and analyze interdistrict differences it is necessary to examine actual practices related to curriculum and instruction, and the interaction of those practices with other strands of district-level action and influence.
Prior Evidence
A number of studies in the 1970s and 1980s documented differences in district-level orientations and approaches to educational change. Berman and McLaughlin (1977) distinguished districts in terms of bureaucratic, opportunistic, or problem-solving motivations of district authorities. Not surprisingly, they found that teachers and principals implemented and developed new programs and practices more effectively in districts that approached change with a problem-solving orientation. Rosenholtz (1989) differentiated between "stuck" and "moving" districts in her investigation of teachers' workplace conditions and change. More effective schools were located in districts that gave a higher priority to improving teaching and learning. Berman et al. (1981) reached a similar conclusion. They distinguished among four district roles in the school improvement process: controlling (district regulates what is to be done, how, and by whom); directive (district sets goals, establishes a master plan, and controls funds, but leaves some discretion for schools to determine how to implement the plan and achieve the goals); facilitative (district gives schools autonomy and support to decide on their own needs, goals, and programs); and neglect (district provides no special guidance or support to schools). Schools in facilitative districts did the best job of identifying and addressing school needs and approaches to change.
Others have focused on the link between strategy and effect in district efforts to improve schools. Louis (1989), drawing from a survey and case-study investigation of initiatives in urban secondary schools, identified four district-level approaches to school improvement: innovation implementation (uniform processes and outcomes), evolutionary planning (uniform processes, variable outcomes), goal-based accountability (variable processes, uniform outcomes), and professional investment (variable processes and outcomes). Like Berman et al., Louis emphasized the importance of relationships between schools and districts, as evident in levels of bureaucratic control (rules and regulations) and organizational coupling (e.g., shared goals, community, joint planning and coordination).
The issue of top-down versus bottom-up approaches to improvement has a long history. Massell and Goertz (2002) described alternative, and reportedly successful, top-down and bottom-up district strategies for change and improvement, with the implication that no best way can be generalized to all settings. Spillane (2002) found that district leaders' approaches to facilitating implementation of state curriculum policy are shaped in part by their conceptions of teacher learning: quasi-behaviorist, situated, and quasi-cognitive. Other research has pointed to the possibility that top-down and bottom-up approaches need not be viewed as alternatives, but can be combined.235
Recent research on the district role in school-improvement activity has focused increasingly on the identification of specific district-level policies, actions, and conditions that are related to improvement in teachers' and students' performance. Much of this research converges on a common set of policies, actions, and conditions associated with district-wide improvement and effectiveness, as described in section 2.2, above.236 Findings from this research are consistent with investigations that have focused specifically on the actions of superintendents and other senior administrators.237
In sum, districts vary in how they understand and approach the task of improving teaching and learning. However, much of the research bearing on this point was undertaken prior to the era of standards and accountability-driven reform that began to take shape in the 1990s and was universalized in the United States under the federal No Child Left Behind Act. It remains to be seen whether districts will differ markedly from one another or converge on common approaches as they work to improve teaching and learning in this new policy context.
Historically, school districts have supported schools differentially according to differences in school types (e.g., elementary, middle, high schools) and compliance requirements specified by legislated categorical differences in students and programs (e.g., Title I, ELL). The latter categories of support are rationalized in terms of the perceived challenges schools face in serving certain categories of students. Contemporary accountability policies have created the added expectation that districts will differentiate support to schools on the basis of achievement results from state testing programs and other accountability measures, with particular attention to be given to schools where large numbers of students are not meeting standards of proficiency. Exactly how that expectation plays out in school districts has not been systematically studied. On the one hand, districts may simply be complying with specified interventions to schools that fail to meet Adequate Yearly Progress targets. On the other hand, school district leaders may be developing and implementing their own strategic responses to various school needs for improvement, in conjunction with NCLB and state-mandated interventions.
New Evidence
Method
We obtained data for this component of our study from the second round of principal and teacher surveys and from evidence collected in interviews during all three rounds of our site visits to 18 districts.
Survey analysis. The second principal survey contained six items intended to measure principals' perceptions of the districts' focus on and support for improvements in teaching and learning. We used these items to address two questions:
- How do principals assess the emphasis given to improving teaching and learning by their district administrators?
- Does the district's emphasis on teaching and learning affect the principal's instructional leadership behavior?
We analyzed the responses to these six items descriptively, and we developed a scale that combined them.
- District focus on instruction scale. We added standardized scores for the individual measures. The alpha for the scale is .89. To examine the question of how district policies and practices in the area of instructional improvement are reflected at the building level, we used teacher assessments of their principals' instructional leadership from the second survey.238
In addition, we used the scale measuring teachers' perceptions of their principal's instructional leadership behavior, which was described in detail in Chapter 1.2.
- Principal instructional leadership scale. Six items in the teacher survey measured the frequency of principal instructional leadership behaviors on a five-point scale ranging from "never" to "10 or more times." These included "discussed instructional issues with you," "observed your classroom instruction," and "provided or located resources to help staff improve their teaching." We added the standardized measures and produced a scale with an alpha of .94. (A sketch of this scale-construction procedure appears below.)
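Both scales were built by the same general procedure: standardize each survey item, sum the standardized scores into a composite, and check internal consistency with Cronbach's alpha. The Python sketch below illustrates that procedure under stated assumptions; it is not the study's analysis code, and the item names and simulated responses are hypothetical.

```python
import numpy as np
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Internal consistency: (k/(k-1)) * (1 - sum of item variances / variance of summed score)."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

def build_scale(items: pd.DataFrame) -> pd.Series:
    """Standardize each item (z-score) and sum the standardized scores into one composite."""
    z = (items - items.mean()) / items.std(ddof=1)
    return z.sum(axis=1)

# Hypothetical responses: six Likert-type items, one row per respondent.
rng = np.random.default_rng(0)
latent = rng.normal(size=500)  # shared signal so the items correlate, as survey items typically do
responses = pd.DataFrame({
    f"item_{i}": np.clip(np.round(3.5 + latent + rng.normal(scale=0.7, size=500)), 1, 6)
    for i in range(1, 7)
})

district_focus_scale = build_scale(responses)          # composite scale score per respondent
print(f"alpha = {cronbach_alpha(responses):.2f}")       # reliability check, analogous to the reported .89
```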
Site interview analysis. All three of the site-visit protocols used in the individual interviews probed for district priorities and strategies. We constructed case studies of 12 of the 18 districts, focusing on two strands of analysis:
- district improvement efforts and state policy influence
- district-wide goals and support systems for school improvement
Our selection of districts for case analysis was purposive; we sought to increase the variability of district characteristics, and we drew upon the research team's knowledge of the sites. For an analysis of how district administrators differentiate support for improvement to schools, for example, we focused on medium- to large-sized districts serving multiple schools at all levels, rather than small districts with only an elementary, middle, and high school. For our analysis of the relationship between district improvement efforts and state influences (see also section 3.3), we focused mainly on the small- to medium-sized districts, given that more than 90% of school districts in the United States serve fewer than 25,000 students, and given our impression that much research on the district role in educational reform is concentrated on the experiences of large, urban districts.
In order to understand the effects of administrator turnover at the district level, we concentrated on districts where there were changes in the superintendency during the course of our study. The sample of district office personnel interviewed in each district varied according to district size and organizational structure. We interviewed senior administrators and staff, including the superintendent; assistant superintendents or directors for curriculum, assessment, and staff development; and line superintendents responsible for supervision and support of designated schools. In small districts, we also interviewed school principals, who often took on system-level roles or functioned as the superintendent's leadership team for consultation and decisions on district-wide matters. In larger districts, we interviewed principals only in the site-visit schools.
This analysis is based on overall district approaches to improving and sustaining the quality of teaching and learning, with particular attention to how district leaders conceptualize and address variability in school performance and progress in implementing local improvement efforts.
Survey Analysis
Principals' assessments of district instructional focus. Six questions in the second principal survey tapped principals' assessments of the priority given by their district administrators to teaching and learning. As can be seen in Tables 2.5.1-2.5.5, principals generally believed that their districts were clearly focusing on this area. However, the responses also suggest some differences. For example, principals give the highest ratings to the district's ability to clearly communicate standards for instructional improvement: "Clearly communicate expected standards for high-priority areas of instruction" had a mean of 4.9 on a six-point scale. Also highly rated is "Have a detailed plan for improving instruction across the district" (mean of 4.8). Principals are slightly less generous in their general assessment of the degree to which their districts "Are active and effective in supporting excellent instruction" (mean of 4.67). When they rate specific actions, however, they are even more discriminating: the district's ability to "Clarify the steps needed to improve the quality of instruction" has a mean of 4.5, while the question of how frequently they "Communicate about best practice in high-priority areas of instruction" has a mean of 3.6, which falls between the categories of "occasionally" and "often" on a five-point scale.
An ANOVA indicates that responses to the six questions did not differ significantly by school level (elementary, middle, high school), school size, or characteristics of the student population (percent non-white and percent eligible for free and reduced-price lunch). In addition, there was no significant variation in the responses of principals and assistant principals.
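As a rough illustration (not the study's own code) of the kind of check reported here, a one-way ANOVA comparing one item's ratings across school levels might look like the sketch below; the groups and values are hypothetical.

```python
from scipy.stats import f_oneway

# Hypothetical ratings of a single survey item, grouped by school level.
elementary = [5, 4, 5, 6, 4, 5, 5]
middle     = [4, 5, 5, 4, 6, 5]
high       = [5, 5, 4, 6, 5, 4]

f_stat, p_value = f_oneway(elementary, middle, high)
print(f"F = {f_stat:.2f}, p = {p_value:.3f}")  # a large p-value indicates no significant difference by level
```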
In sum, while principals believe that districts prioritize improved instruction, variations appear in responses to particular questions about whether principals receive clear guidelines and support for making changes at the school level. This variation suggests that in some districts there may be a gap between the "vision" and strategic plan for improved instruction, on the one hand, and, on the other, the way in which specific support for improved instruction is delivered at the school level. As we saw in the case of professional development for principals, the gap between a set of high standards and tangible support for those standards may be critical in determining how well principals can respond within their school settings.
Figure 10: Principal Perceptions of District Actions Related to Improved Teaching and Learning
District focus on instruction and principals' instructional leadership. We assume that improving building-level leadership is one of the most promising approaches districts can take to fostering change. Current research suggests that districts must not only have a coherent leadership development program (as we have suggested in our investigation of professional development in Section 2.2); they must also consistently emphasize the improvement of instruction as a primary goal.
We conducted a regression of the Principal Instructional Leadership measure on the principals' responses to items in the District Focus on Instruction scale, including building characteristics (size and level) and student characteristics (% minority and % FRP) as control variables in the model. The results, presented in Table 2.6.1, show a significant prediction of principal instructional leadership behaviors, with the predictors explaining 36% of the variance in principal instructional leadership. While the characteristics of the school and its student population, taken together, have a strong association with principals' instructional leadership, the measure of District Focus on Instruction has a significant regression coefficient.
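The sketch below shows, under stated assumptions, how such a regression could be specified with statsmodels: the composite leadership scale regressed on the district-focus scale plus building and student characteristics as controls. The data frame, variable names, and simulated values are hypothetical; this is not the study's actual model code.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical data: one row per principal/school (all names and values are illustrative).
rng = np.random.default_rng(1)
n = 300
schools = pd.DataFrame({
    "district_focus": rng.normal(size=n),
    "school_size": rng.integers(200, 2000, size=n),
    "school_level": rng.choice(["elementary", "middle", "high"], size=n),
    "pct_minority": rng.uniform(0, 100, size=n),
    "pct_frp": rng.uniform(0, 100, size=n),
})
# Simulated outcome: instructional leadership partly driven by district focus, plus noise.
schools["instructional_leadership"] = (
    0.6 * schools["district_focus"] + rng.normal(scale=0.8, size=n)
)

model = smf.ols(
    "instructional_leadership ~ district_focus + school_size"
    " + C(school_level) + pct_minority + pct_frp",
    data=schools,
).fit()

print(f"R-squared: {model.rsquared:.2f}")   # share of variance explained by the predictors
print(model.params["district_focus"])       # regression coefficient for the district-focus scale
print(model.pvalues["district_focus"])      # its statistical significance
```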
This finding is quite remarkable: It suggests that district policies and practices focused on instruction are sufficiently powerful that they can be felt by teachers as an animating force behind strong, focused leadership by principals. While we do not, in this section, look for a relationship between district practices and student learning, we have already established that instructional leadership by principals has an impact on teachers' classroom practices, which, in turn, affect student learning. This is perhaps our most powerful finding regarding the indirect connection between the choices and priorities set by districts and the classroom experience of students.
Cross-Case Analysis
Our results are organized around the dimensions most frequently mentioned by superintendents as bases for providing strategic direction and support for improved teaching and learning in schools, including the following:
- student performance on standards and indicators;
- school progress in implementing district expectations (curriculum, instruction);
- principals' leadership expertise for school improvement;
- school-based factors that explain differences in student performance and program implementation (e.g., instructional expertise, curriculum implementation, learning gaps, staffing, leadership, material resources);
- school/student characteristics (size, staff, SES, ELL, mobility, facilities).
Student performance on standards and indicators. Not surprisingly, district administrators are highly sensitized to how well their schools are performing against state proficiency standards and Adequate Yearly Progress (AYP) targets. In the higher-performing districts, district staff corroborate the survey data that suggest the importance of developing local instructional foci and learning standards. Interviews suggest that higher-performing districts uniformly describe the district targets as aligned with—but exceeding—those of the state. Sometimes, as in two of our large urban and suburban districts, this was articulated in terms of broad goals, such as college readiness for all. More commonly, respondents claimed that district expectations for student learning were more rigorous than (yet compatible with) those mandated by the state. This was particularly so in settings where district leaders mobilized the development of district-level curriculum content and performance expectations across all areas of curriculum (not only in externally-tested subjects). In the two districts referred to above, for example, district personnel also told stories of multi-year, district-wide curriculum development projects resulting in production of curriculum frameworks and materials that satisfied both state and local goals for student learning.
We encountered similar findings in some small rural districts, notwithstanding the fact that they had fewer professional staff at the district level. One rural Nebraska district led by little more than a superintendent and a curriculum director volunteered to participate in the pilot phase of the state's decentralized curriculum and accountability system. Classroom teachers, led by the local curriculum director, developed a district curriculum consistent with state curriculum expectations. District and school personnel in these settings talked enthusiastically about implementing their curriculum, and they spoke positively about achievement results for their students as evidence of its quality. In contrast, in other districts, local educators talked mainly about implementing the state-mandated curriculum, and about implementing externally developed programs to satisfy state-level expectations. The benchmark for success was performance on state-mandated tests, and they communicated little sense of striving for more ambitious goals for student learning.
In sum, where district administrators believe that their local standards are aligned to and exceed external standards and accountability measures, and where results on state tests are well above average, administrators tend to emphasize their own benchmarks as a focus for school-improvement efforts. Districts in which students are performing less well on state tests tend, on the other hand, to see themselves as driven by external standards and assessments, and to view the district as less able to determine local priorities and needs. In addition, district administrators in higher-performing districts are more likely to be positive about state curriculum standards and the validity of accountability indicators than those in districts that perform less well.
In higher-performing settings, district leaders are more likely to set continual-improvement goals for students and schools already meeting the minimum standards; they are also more likely to specify targets for students and schools struggling to meet standards. In several of the higher-performing districts in our sample (including large urban/suburban as well as rural districts), for example, district leaders and school personnel described recent and ongoing district-wide efforts to support teacher implementation of differentiated instruction. In one rural Midwestern district the superintendent championed a three-year teacher-development initiative focused on differentiated instruction. Teams of teachers were sent each summer to external professional development programs focused on this aim; these teams then were expected to lead school-based in-service training activities throughout the following year. Interestingly, in this case and in others where district-wide differentiated instruction initiatives were underway, the explicit rationale provided by district personnel was to help teachers ensure that the needs of "high-ability learners" were not being ignored, given the predominant state emphasis on interventions to close the achievement gap between low- and high-achieving students. In these settings, local goals and related initiatives are often framed in terms of satisfying local community expectations—an argument that is most frequently heard in districts that serve large numbers of middle- and high-income families, and where there are few or no schools performing below state standards.239
In higher-performing districts, leaders did not expect improvement in low-performing schools to occur merely by means of inputs required under federal and state policies (e.g., school choice, tutoring, prescribed needs assessments and school-improvement planning, curriculum audits, advice from external consultants). They adopted additional, district-level intervention strategies. In one high-performing Midwestern urban district, for example, two schools became a focus for district intervention during the final year of our study because they failed to meet AYP targets (the first two schools to be designated in that status). In addition to taking advantage of additional funding from the state, and attending mandatory workshops offered by the state for all schools identified as not meeting AYP, district leaders (curriculum superintendent, curriculum directors, school improvement director) conducted their own investigations of the problems in student performance and followed up with district support tailored to each school's needs. In the middle school, for example, they determined that the principal needed help with his instructional leadership skills; that teachers were not setting and communicating clear expectations for student learning; and that Title I students were not getting adequate, specialized academic support. Throughout the year the superintendent and directors met and coached the principal on regular monthly and weekly schedules; district curriculum personnel worked with teachers on their instructional needs; and the district supported efforts to improve after-school programs for low-performing students.
In contrast, a middle school in a small, high-poverty district in one of our southern states also failed to meet AYP targets (the district had a history of adequate, albeit not high, performance across its schools on state proficiency tests). In compliance with state requirements, an external school improvement consultant was brought in. The school staff had little positive to say about that consultant's input, and district leaders did not report any district initiatives to deal with the situation other than supporting and relying on the principal and teachers to find a solution. We heard similar criticisms about the effectiveness of state support-system interventions for low-performing schools in one of our large, high-poverty, low-performing urban school districts—where (again) the district developed no plan for systematic intervention to ameliorate the problem.
In higher-performing settings, district leaders often proactively monitored trends in schools' academic performance and in their community contexts (e.g., demographic trends). Leaders did this in order to identify schools potentially at risk of not meeting AYP targets in future years; then they could target those schools and students for intervention. In one large, high-performing suburban district (i.e., 90% or more of students in most schools achieving at or above state proficiency standards), district leaders noticed demographic changes occurring in several elementary schools. The neighborhoods served by the schools were experiencing an influx of low-income families from the adjacent city. District leaders became concerned that school achievement results might decline unless something was done to support teachers and principals in efforts to respond effectively to the needs of students from low-income families. District leaders developed a set of indicators to track demographic changes and performance, and they used these indicators to designate certain schools as at risk of declining performance, thus qualifying for additional district support (e.g., staffing, program, funding). They did so in such a way, however, that the district could sustain the initiative on its regular budget (rather than seeking and depending on additional funding from the state or foundations, for example). This example, and the prior illustration of one district's intensive efforts to turn around a school failing to meet AYP targets, point to a critical issue for school district leaders. In their responses, they talked about the financial and human-resource challenges they faced in providing effective support for increasing numbers of schools requiring special interventions, as stipulated by government policies.
Educators from all districts talked about the need for (and utilization of) diagnostic and formative assessments of student progress throughout the school year, in addition to state achievement-test data. Leaders in higher-performing districts guided colleagues in the development of local assessment instruments. These instruments were aligned with state and local curriculum standards; teachers were expected to administer them at designated intervals and to use the results for instructional planning (see section 2.4 for examples). In some settings school personnel relied mainly on assessment tools developed or endorsed by their state education agencies, perhaps supplemented by formative assessments developed by classroom teachers in their own schools.
School progress implementing district expectations. School districts varied in the range and specificity of district-mandated expectations for professional practice—in particular, for curriculum and instruction. We are hesitant to claim that district leaders in higher-performing districts uniquely promoted more standardized, district-wide curriculum content and materials, because the trend everywhere is to increase standardization. Compared to others, however, district leaders in higher-performing districts appear to have invested in district-wide curriculum development over a longer period of time, using well-institutionalized district curriculum systems. As that development unfolded, efforts to align and coordinate other strands of district support (teacher development, school leadership development, school-improvement planning, performance monitoring) evolved. (This evolution in district support systems was more likely where continuity in district leadership, both administrators and professional staff, was evident.) Progressive alignment, refinement, and synergy among these dimensions of district support may account more for higher performance than curriculum standardization per se.
In addition to curriculum standardization, leaders in higher-performing districts were more likely than others to promote and support implementation of particular instructional strategies regarded as effective. Expectations for uniformity in instructional practices can focus on general or subject-specific teaching methods defined by district staff as "best practices" (e.g., cooperative learning, guided reading, technology use, methods of differentiating instruction) and/or on implementation of specific district, state, or commercial programs that prescribe teaching and learning activities and materials. In one of our high-performing districts, for example, all new elementary school teachers are required to participate in district-developed year-long courses on effective strategies for teaching beginning and more advanced readers. In another high-performing suburban district, sample lesson plans replete with suggested teaching strategies, learning activities, and curriculum resources are built into the district's online curriculum guide for teachers. Although teachers are not formally required to implement these lessons, they do have to adhere to a lesson-design format that requires them to target district curriculum objectives, to integrate computer-based learning activities into every lesson, and to engage students in small group and independent learning activities. Teachers reported that the district guide for curriculum and instruction exerts a strong influence on what they do.
In addition to providing or recommending teaching methods, leaders in higher-performing districts provided direction and support for the use of common methods of assessing and reporting student learning, aligned to curriculum expectations. Rather than complaining about loss of autonomy, many teachers we interviewed appeared to appreciate the greater clarity of expectations and access to instructional tools (e.g., course scope/sequence, lesson plans, materials, assessments) that often accompany district-wide curriculum development and support for implementation. Their receptivity to standard forms of instructional practice, however, was conditional upon the quality of district support for implementation (staff development, materials, supervision), perceived fit with state/district curriculum requirements, evidence of student impact, and opportunities for teacher discretion within the boundaries established by the district.
Leaders in higher-performing settings not only worked to establish and communicate clear expectations for curriculum and instruction; they developed and applied mechanisms for monitoring the implementation of district expectations through supervision systems and school-improvement plans. In the most fully elaborated support systems, district leaders initially ensured common training and resources across relevant sectors of the district; then they used monitoring systems to gather information about compliance and progress in school-level implementation. They also provided differentiated follow-up assistance—in some cases, to help school personnel master and comply with district expectations; in other cases, where compliance was no longer an issue, to help school personnel use the program in question more effectively and obtain better results.
All districts used internal and external expertise to help teachers implement district expectations for curriculum and instruction. For obvious reasons, larger districts made greater use of district curriculum and instruction staff than small districts did. Smaller districts relied more on state-supported regional education centers and local universities for in-service training and assistance, and for brokering contacts with other external consultants. Having district-level expectations for curriculum and instruction makes it easier for district leaders to monitor and respond to school-level implementation. In fact, as we will show in Section 3.3, principals in many districts pay more attention to meeting local standards than to meeting state standards, in part because of the systems we have described above.
Reliance on outside assistance for implementation can be challenging because of the costs, the potential problems of fit with local expectations for practice, and the absence of local expertise to provide timely follow-up assistance in response to school-specific needs. Having a central office curriculum and instruction unit does not, however, guarantee the coherence and effectiveness of district support for implementation of district-wide programs. Our evidence indicates that, compared to others, teachers in smaller districts did not feel less supported (Section 1.6). In fact the opposite is true: teachers from smaller districts rated district support higher than teachers from medium- or larger-sized districts. This suggests that size and district resources cannot account for the value-added effect of support for improved instruction. It is possible that larger districts pay less attention to the quality and utility of support for teachers because they assume that they have greater quality control over employees, while smaller districts are more attentive to the quality and utility of their "purchases."
We also observe that higher-performing districts make greater efforts than others to maximize communication and coordination among different central office units in their interaction with teachers and principals. In other words, district office units acted more interdependently than independently in relation to district-wide and school-specific needs. The interdependent action occurred partly through interdepartmental structures. These structures make it possible for district staff members to let one another know who is doing what at district and school levels. District unit interdependence may also involve a team approach to assessing and responding to school-specific needs for help with implementation, depending on the problem.
In addition, some district leaders actively facilitated networked communication, sharing, and joint problem solving among schools. This occurred through district-organized opportunities for principals to speak to one another in principals' meetings, leadership programs, or peer-coaching arrangements. Larger districts sometimes create systems of teacher leaders linked through district curriculum and instruction specialists. Networking between schools helps district leaders to identify differences in school needs and to enable school personnel to find solutions among themselves, rather than relying solely on the district for help.
Principals' expertise in guiding school improvement. While most central office administrators spoke about unevenness in the leadership strengths of their principals, leaders in higher-performing districts expressed greater confidence in their ability to improve the quality of school leadership through hiring practices, leadership-development programs, school placement, and supervision (see also Section 2.2 of this report on district contributions to principals' efficacy).
In a minority of the districts we studied, principal effectiveness was still attributed to innate rather than learned capacities, and low school performance was viewed as a consequence of external factors (state policies, school community characteristics) rather than district and principal leadership. District leaders faced with struggling schools were less rather than more likely to sponsor leadership-development initiatives or to provide strategic help to principals; they focused instead on recruiting a different sort of administrator. In one of the large, low-performing urban districts in our sample, district administrators expressed the belief that principals were essentially born, not made. They talked more about the need to replace principals in low-performing schools than about prospects for developing their leadership skills. Not surprisingly, in this setting, district leaders did not describe any local professional-development programs for principals.
In higher-performing districts, central office leaders not only believed in their capacity to develop principals; they set expectations for implementation of specific sets of leadership practices. This required focusing on specific areas of leadership practice separately (e.g., methods of clinical supervision, school-improvement planning, classroom walk-throughs, uses of student performance data), or within comprehensive guidelines or frameworks for leadership practice.240 In one of the higher-performing urban districts in our sample, district officials organized a three-year principal-development program based on Marzano's balanced leadership program. They supplemented this with additional training in clinical supervision. They designed district-wide in-service programs for principals, focused specifically on new curriculum initiatives (e.g., revision of the elementary mathematics program) or school-improvement initiatives (e.g., developing a professional learning communities effort, extending to all schools). In addition, the Associate Superintendent for Curriculum and Instruction dedicated portions of each monthly meeting with elementary and secondary school principals to collective leadership-development activities.
District leaders in higher-performing settings invested in the development of common professional learning experiences for principals, focused on district expectations for instructional leadership and administration. They did not rely chiefly on principals' participation in state certification programs or on support for individual principals' professional interests (addressed, e.g., in external workshops, conferences, and university programs; see also section 2.2 of this report).
Leaders in higher-performing districts communicate explicit expectations for principal leadership; they provide learning experiences in line with these expectations; they monitor principal follow-through and intervene with further support as needed. This kind of supervision is not limited to formal procedures for the appraisal of principals. The more likely scenario is that gaps in principals' leadership expertise are identified through ongoing monitoring and discussion about school performance and improvement plans. Where gaps in leadership skills are identified, district leaders are more likely to intervene personally—advising and coaching the principal—than to call on outside expertise. This pattern of interaction stems not only from the clear expectations for practice that are characteristic of high-performing districts, but also from district leaders' confidence in their capacity to help principals master those practices.
School factors related to differences in performance. In higher-performing settings, district leaders understood that the reasons for differences in student performance, or in implementation of district initiatives, were particular to the setting. Similar problems (e.g., declining test scores, weak follow-through with a district professional learning communities initiative) might result from different contributing conditions in different schools. Therefore, standard solutions were considered unlikely to apply in all situations.
Leaders in these districts engaged school staff members in collaborative inquiry about the unique circumstances affecting student learning or teacher performance in their schools. They then tailored district support for improvement to the analysis of school-specific needs, rather than relying primarily on centrally determined interventions based on categorical differences among schools and their students (e.g., size, SES, ELL, facilities) or set performance cut-off levels. They invested in external and locally created databases to inform inquiry and decision-making related to differences in student outcomes and degrees of program implementation (see section 2.4 for specific examples related to district support for data use in schools).
Challenges and Trends
Our efforts to attain greater precision in understanding "the district difference" were alternately frustrating and fascinating. Our quantitative data point to a strong district effect, noted particularly in the relationship between district policies and practices and teachers' reports of principals' instructional leadership. Frustration arose, however, from the multivariate and often indirect nature of what district personnel do to influence school improvement, and the difficulty of isolating the effects of any one variable on the actions and outcomes of the work of principals and teachers. Our overall conclusion is that there is no simple list of "to do" actions that will allow district leaders to create the conditions that promote improved instruction and student learning. Instead, district leaders' actions in relation to key policy conditions are highly interdependent and require "steady work" on multiple fronts. Most district policies and practices that can be linked to real improvements in teaching and learning evolve over relatively long periods of time; this finding points to the critical importance of patience and sustained, continual efforts aimed at improvement. That focus is present in the more successful districts (even where there have been leadership changes); it was distinctly lacking in districts with leadership turnover or inconsistent policy development.
Our evidence for district-wide approaches to improving and sustaining the quality of teaching and learning pointed to some key challenges and trends faced overall and, in particular, by higher-performing districts in our sample. Leaders in these settings were explicit about their commitment to ambitious learning goals for all students, not just for those not performing at acceptable proficiency levels. They spoke about the difficulty they face, however, in specifying and generating consensus for clear goals and plans for improvement in the learning of average and high-performing students and schools. It may be easier to focus improvement efforts on obvious problems than on successes, even when there are no guaranteed solutions to the obvious problems.
In higher-performing settings, district leaders are likely to be vigilant and strategic about sustaining good performance where it is happening. They engage in monitoring activities to enable early identification of student and school results and factors (e.g., demographic changes) that might jeopardize continuing high performance, and they take action. State accountability systems focus attention and resources on low performance and remediation, but in many school districts across the country district leaders are as much concerned, if not more, about sustaining good performance and about establishing agendas for student learning beyond proficiency scores on standardized tests. These concerns are rising as educators and policy makers continue to raise the AYP bar.
Increasing standardization of curriculum, instruction, and assessment appears to be a universal trend in the United States—at the district and state levels. Yet standardization does not yield the same performance results everywhere. Our evidence from higher-performing districts offers some insight into how standardization can contribute to high performance. In essence, standardization of expectations for curriculum and instruction (and even leadership practice) creates a platform for improving the quality of leadership, instruction, and learning. Using this platform, district leaders can develop support systems that promote quality implementation of the common expectations. The creation of such support systems takes time and skill, and it requires organizational learning to figure out what works well. Unfortunately not all districts benefit from the leadership continuity, skill, and resources needed to develop equally effective support systems in a context of standardized expectations.
From district leaders in our higher-performing settings, we have learned that once standard expectations for curriculum, instruction, and leadership are implemented and sustained with a reasonable degree of fidelity and quality, further improvement in the quality of teaching and learning is unlikely to be gained by doing more of the same. To reach the students not currently well served requires differentiated (not common) solutions grounded in local analysis of learning needs and circumstances of struggling students. In effect, in these districts, three levels of support for school improvement can be observed, in addition to bureaucratically prescribed inputs. Level One encompasses common inputs to all schools to develop the basic knowledge, skills, and resources necessary to understand and work towards district expectations. Level Two supports efforts to provide additional input and assistance to schools and school personnel that are at risk or struggling to meet expectations for professional practice and student achievement. Level Three supports are the most complex. At this level, district and school personnel may undertake collaborative inquiry into important problems, and engage in a search for solutions that go beyond current knowledge and expectations.
Implications for Policy and Practice
Six implications for policy and practice emerged from this section of our study.
- District leaders need to establish clear expectations across multiple dimensions of improvement activity as the bases for increasing coherence, coordination, and synergy in the effectiveness of district improvement efforts over time.
- District leaders should combine a common core of support for efforts to implement district expectations with differentiated support aligned to the needs of individual schools.
- District leaders are encouraged to embrace and discuss ways in which effective school-leadership practices can be acquired through intentional leadership-development efforts that include both formal professional development activities and collegial work.
- One of the most productive ways for districts to facilitate continual improvement is to develop teachers' capacity to use formative assessments of student progress aligned with district expectations for student learning, and to use formative data in devising and implementing interventions during the school year.
- Districts should strive for continuity in district leadership. Such continuity is integral to the development and implementation of a coherent and effective support system for improving and sustaining the quality of student and school performance.
- District leaders need to take steps to monitor and sustain high-level student performance wherever it is found, and to set ambitious goals for student learning that go beyond proficiency levels on standardized tests. Focusing improvement efforts solely on low-performing schools and students is not a productive strategy for continual improvement in a district.
References
235. Elmore & Burney (1998).
236. Anderson (2006); Campbell & Fullan (2006); Cawelti & Protheroe (2001); Hightower et al. (2002).
237. Murphy & Hallinger (1988); Waters & Marzano (2006).
238. We also investigated the relationship between district focus on instruction and principals' self-assessments of their expertise in providing instructional support to teachers. We argue, however, that a stronger test of the importance of the district's role is to look for the reflection of improved principal leadership on the part of those who experience it.
239. The phenomenon of schools targeted as "in need of improvement" because of failure to achieve state achievement targets under NCLB/AYP regulations began to surface in our district-level findings during the final year of data collection (2006-2007). The number of schools failing to meet AYP targets was nil or small in many of these districts (e.g., 2 of 60 schools in one large district), although in one state an entire district was designated as "in need of improvement."
240. E.g., Marzano et al. (2005) on balanced leadership; Dufour et al. (2005) on professional learning communities; and Fullan (2001a) on leading in a culture of change.