Improving State Evaluation of Principal Preparation Programs
Implementing a quality system for evaluating principal preparation programs is complex work. For many states, this work will be new and difficult and will require a significant commitment of time, expertise, and resources. A few states have paved the way, and their experience can provide a road map for others.
Illinois
By 2015, Illinois had been working for more than a decade to develop and implement policies focused on improving principal preparation. Formal conversations among stakeholders culminated in the passage of comprehensive legislation that replaced a general administrative credential with one specifically focused on preparation for the principalship, increased the rigor of both admission into programs and program content, required programs to collaborate with school districts, and required programs to include an internship giving candidates authentic leadership experiences. The law mandated that all preparation programs in operation be approved under the new requirements; once regulations were finalized in 2011, programs had three years to come into full compliance. The state convened and trained a review panel of teachers, principals, superintendents, university representatives, and members of the business community to provide constructive feedback in advance of consequential decisions about program approval by the Illinois State Educator Preparation and Licensure Board (ISEPLB). Twenty-six of the 31 programs that previously offered a general administrative credential received ISEPLB approval to prepare principals, and the policy requirements led them to view districts, rather than individuals, as their primary consumers. Further, "[t]hese changes require[d] programs to move beyond the focus on a single program outcome – graduates securing administrative positions – to the actual impact the principal candidate ultimately has on school improvement and student outcomes."12
Illinois' approach is notable not only for a substantial increase in the rigor of expectations for programs, but also for a consistent effort to engage a wide array of stakeholders. Over the last 15 years, formal committees have had a hand in developing policy ideas, monitoring the quality of implementation on an ongoing basis, and suggesting tweaks to the rules and regulations. According to a case study on Illinois' principal preparation legislation, the involvement of stakeholders from the outset and their continued collaboration allowed the group to "capitalize on specific windows of opportunity" to advance their collective agenda.13
Delaware
Delaware has also prioritized program evaluation as a means of improving principal preparation. Regulatory changes adopted by the Delaware State Board of Education in 2014 require the Delaware Department of Education to develop scorecards for teacher and leader preparation programs.14 The scorecards draw on data submitted to the state by each program and on data calculated from state data systems, and they supplement each program's accreditation by the Council for the Accreditation of Educator Preparation (CAEP). Taken together, the scorecards and the CAEP accreditation process will allow Delaware to assess the four critical dimensions of program evaluation described in design principle D1 above: inputs, processes, outputs, and graduate outcomes. Delaware is currently working with stakeholders to finalize the indicators that will be used for the first year of assessing leader preparation programs. The table below outlines how Delaware's draft indicators correspond to the four key dimensions of program evaluation.
Table 1. Crosswalk of Delaware Draft Indicators and Recommended Indicator Categories
| Recommended Indicator Categories15 | Delaware Draft Indicators |
| --- | --- |
| Program Inputs: Indicators that reflect the program's ability to recruit and select high-potential aspirants and to diversify the pool of aspiring principals | Selectivity in admissions; candidates' prior teaching performance; diversity of candidates |
| Program Processes: Indicators that reflect the quality of learning experiences for aspiring principals | |
| Program Outputs: Indicators that reflect the success of aspirants in completing a rigorous program and being hired into principal and assistant principal roles | Placement in administrative roles within one and three years; placement in administrative roles in high-need schools; retention in administrative roles |
| Graduate Outcomes: Indicators that reflect the impact that program graduates have, both on practices in the schools they lead and on student learning | Improvements in culture and climate in schools led by graduates; student growth in schools led by graduates; percentage of graduates deemed highly effective on the state's administrator evaluation instrument; perceptions of graduates' performance as measured by perceptual surveys of districts and program participants |
The scorecards are designed to provide districts and candidates with comparable information about programs. Because publishing such information is new, and because presenting both individual-program and cross-program comparative data is challenging, a key aspect of the state's approach is to publish the information in year one (expected to be 2016) without using it for decisions about program status. Once state leaders and stakeholders have had an opportunity to shape the content and formatting of the scorecards through first-year use, scorecard data will be used to determine whether programs remain in good standing until their next CAEP accreditation (which occurs on a seven-year cycle) or are placed on probationary status.16
Lessons for Other States
Both Delaware and Illinois offer useful models and lessons for other states looking to improve their evaluation of principal preparation programs. One key lesson is the importance of state context: each state's focus and pace will necessarily be influenced by current conditions in at least the following two areas.
A. Focus, alignment, and positioning of state leadership: The extent to which state leaders have prioritized school leadership (and specifically school leader preparation) in the state's educational improvement agenda, and the extent to which the state education agency (SEA) is positioned to be an effective resource for local education agencies and leadership preparation programs.
State education leaders (governors, legislators, state board of education members, chief state school officers, deans of schools of education, associations, and others) may be focused on a wide array of issues, ranging from the adequacy of state public education funding, to the content of student learning standards and assessments, to the quality of teacher evaluation systems. They may also differ on the relative priority of those issues, let alone on particular solutions. Ideally, these leaders and stakeholders share an understanding of how improved principal leadership would contribute to improved educational outcomes. If state political leaders and relevant agencies share a commitment to improving principal preparation programs, they can take a number of steps, such as modifying existing policies that support program review processes and targeting funds to support implementation of a program evaluation system informed by the guidance offered in this document. Finally, meaningful improvement is more likely if the SEA is seen as a supportive partner interested in the improvement and innovation of preparation programs, not just in compliance.
B. Technical capabilities of the state education agency: The extent to which the SEA has the crucial capabilities needed to support a new evaluation system, especially those related to data collection, data analysis, and substantive review of programs.
In order to implement a strong evaluation system, a state needs a robust, current data system that includes key information on individual educators (e.g., their role, licensure status, and evaluation ratings), enables tracking over time, and allows school-level data on leaders to be connected to their preparation programs. Without these capabilities, a state might start small, for example by limiting the evaluation to available data (e.g., program input data collected and submitted by programs) and incentivizing programs to collect and report on their own output and impact data. However, data collected in this way should be interpreted with caution and not made public, given concerns about verification. Meanwhile, the state could invest in building a more robust data system.
In order to implement a strong evaluation system, a state also needs substantial capacity to compile, clean, and analyze data. This capacity is both a resource consideration, in that the state needs to fund the analytical capability, and an expertise consideration: ideally, those conducting the analyses have experience evaluating preparation programs, particularly principal preparation programs. If the state lacks these resources, it might consider partnering with research institutions or consortia that have data analysis capabilities.
Finally, effective implementation requires investment in program review. In particular, the state may need to train and maintain a cadre of reviewers if it intends to conduct periodic in-depth reviews of individual programs. If resources are limited, the state might limit the number of programs reviewed in depth each year or partner with approved professional associations to conduct the in-depth reviews.
References
12. See p. 4 of Baron, D., & Haller, A. (2014). Redesigning principal preparation and development for the next generation: Lessons from Illinois. Normal, IL: Illinois State University, Center for the Study of Education Policy.
13. Baron & Haller (2014), p. 21.
14. The regulations can be found at http://regulations.delaware.gov/AdminCode/title14/200/290.shtml
15. For more detail on these indicator categories, see the companion guide developed by New Leaders and UCEA.
16. This is the approach Delaware is already taking with teacher preparation: scorecards have been published, along with a detailed description of the method used for choosing indicators (see http://www.doe.k12.de.us/Page/2573 for details).