Improving State Evaluation of Principal Preparation Programs

This report was commissioned by The Wallace Foundation. It is the product of a collaborative partnership between the University Council for Educational Administration (UCEA) and New Leaders. The four authors, Gina Ikemoto, Matthew Kelemen, Michelle Young, and Pamela Tucker, built consensus on all recommendations and contributed equally to the content of the report. The document itself was greatly improved by Amy Mazzariello's editing and Amy Norskog's design.

Recommended citation: UCEA and New Leaders. (2016). Improving state evaluation of principal preparation programs. Retrieved from www.sepkit.org

Far too many people completing state-approved principal preparation programs are not ready to assume assistant principal or principal positions.1 This concerning fact has motivated a number of programs to pioneer effective, evidence-based practices, such as increasing the rigor of their admissions or developing partnerships with schools and districts to provide candidates with authentic opportunities to practice leadership and receive feedback on their performance.2 Unfortunately, there is little evidence that these and other research-based practices are in broad use across programs. In fact, there has been a proliferation of programs, particularly less rigorous online programs, in response to consumer demand for cheaper and more flexible options.3 Ensuring that programs reflect best practices, such as providing opportunities for authentic practice, is thus a central opportunity for state policy-makers seeking to improve the quality of principal leadership.

State policy-makers are uniquely positioned to ensure that the quality of principal preparation improves because most states have statutory authority to do so. Specifically, states grant initial and ongoing approval for principal preparation programs to operate, and they issue licenses for individuals to serve as principals. Because of this authority, a recent comprehensive report on state policy identified the approval and oversight of principal preparation programs as one of six potentially powerful areas for state policy action to improve the effectiveness of school principals.4

At the heart of a state's authority to approve programs is the opportunity to evaluate those programs. What is more, high-quality program evaluation is a means to improve programs. Data collected through program evaluation provides critical evidence for identifying areas where programs could improve their design or execution. Thus, in addition to providing insight into the quality of programs through a set of agreed-upon metrics, the data collected and analyzed during the evaluation process can be used to inform program changes.5

Unfortunately, most states have not yet developed robust program evaluation systems.6 In 2015, the University Council for Educational Administration (UCEA) and New Leaders completed a project, with support from The Wallace Foundation, that aimed to produce ideas and resources that would enable states to design and conduct feasible, fair, and useful evaluation. The project resulted in a detailed model and a related set of tools for effective state evaluation of principal preparation programs.

This publication draws on findings from the UCEA/New Leaders project, highlighting key design principles that states should consider as they begin the work of improving the evaluation of their principal preparation programs. It also includes examples from two states that have done substantial work in this arena, as well as one of the tools from the UCEA/New Leaders project: a readiness assessment rubric for states.

Why design principles?

This paper outlines a set of design principles for state leaders to consider as they work to improve the quality of principal preparation program evaluation. While states will undoubtedly want and need to develop systems suited to their unique contexts, they can benefit from a set of guideposts to organize what can be complex work. The design principles draw on research on program evaluation generally and on the evaluation of principal preparation programs specifically, as well as on in-depth conversations with a diverse group of academics, policy-makers, and practitioners.7

The UCEA/New Leaders Partnership

The UCEA/New Leaders project combined the expertise of two organizations with deep collective knowledge of research and practice related to the preparation of school leaders. UCEA is a consortium of 99 higher education institutions with a 60-year track record of building the knowledge base on effective leadership preparation, designing and utilizing preparation program standards, and developing evaluation tools and practices to improve the preparation and professional development of educational leaders and professors. New Leaders operates cutting-edge principal preparation programs that produce highly effective leaders; it uses the knowledge gained from rigorously evaluating its own programs to inform federal, state, and local policy and practice, as well as to train other preparation programs in how to design and conduct program evaluation.8

Engaging Experts in Design

The New Leaders/UCEA project also involved deep and iterative collaboration with state leaders who have experience designing or implementing preparation evaluation systems; methodologists with experience evaluating principal preparation programs; representatives of national organizations focused on issues of leadership preparation; principal preparation program leaders with experience evaluating their own or other programs; district leaders with experience evaluating internal or external programs; and principals. Twenty-five academics, policy-makers, and practitioners participated as advisors.9 Their participation included a series of webinars addressing specific and challenging issues (i.e., state authority and leadership, data considerations, the rigor of outcomes and processes, and consumer needs and priorities); a two-day, design-focused convening; and a review of the tools developed. Five experts also conducted in-depth reviews of all of the documents before they were finalized.

Why examples?

In recent years, a small number of states have made efforts to use the evaluation of principal preparation programs as a strategy for improvement. Illinois, for example, has spent a decade developing and refining a new set of expectations for principal preparation programs, most notably requiring deeper partnerships between programs and school districts, and has required every program in the state to redesign its offerings to meet the new criteria. Other states, such as Delaware, Florida, and Tennessee, are at earlier stages of development.

Such examples demonstrate some of the possibilities of high-quality program evaluation, and they surface several tensions for states. Some of these are political in nature. For example, attention to principal leadership as a statewide focus necessarily competes with other priorities, especially a focus on teacher quality and support. In addition, while some programs are increasing selectivity in the admission of students in order to improve overall quality, others depend on open admissions to generate revenue for schools of education. Other tensions are more technical in nature. For example, evaluations are ideally driven by data about program quality and outcomes, but state systems for collecting and interpreting data are often too limited to support such an approach (i.e., they lack direct measures of program quality and outcomes, and the available data for indirect measures are insufficient or of low quality). Moreover, evaluations are ideally diagnostic in nature, using program quality and outcome data to drive inquiries into the sources of successes, needed improvements, and concerns that warrant further investigation. However, state systems are not always organized to use data in this way and often lack the necessary capacity for diagnosis and support. Managing these tensions is essential in developing systems of evaluation that contribute to better outcomes.

Why tools?

An exhaustive review of the existing literature on the evaluation of principal preparation programs makes plain that states lack good models, tools, and resources. To address this problem, UCEA and New Leaders worked with a diverse group of academics, policy-makers, and practitioners to develop a model approach to program evaluation and a suite of related tools and resources. A list of all the tools and resources developed by New Leaders and UCEA can be found on pages 11-12. To find the New Leaders/UCEA model, tools, and resources, see www.sepkit.org.

References

1. Hull, J. (2012). The principal perspective: Full report. Alexandria, VA: Center for Public Education; Young, M. D., & Brewer, C. (2008). Fear and the preparation of school leaders: The role of ambiguity, anxiety, and power in meaning making. Educational Policy, 22(1), 106-129.

2. Darling-Hammond, L., LaPointe, M., Meyerson, D., & Orr, M. T. (2009). Preparing principals for a changing world: Lessons from effective school leadership programs. San Francisco, CA: Jossey-Bass; Davis, S. H., & Darling-Hammond, L. (2012). Innovative principal preparation programs: What works and how we know. Planning and Changing, 43(1/2), 25-45.

3. Anderson, E., & Reynolds, A. L. (2015). A policymaker's guide: Research-based policy for principal preparation program approval and licensure. Charlottesville, VA: The University Council for Educational Administration.

4. Manna, P. (2015). Developing excellent school principals to advance teaching and learning: Considerations for state policy. New York, NY: The Wallace Foundation.

5. Patton, M. Q. (1997). Utilization-focused evaluation: The new century text. Thousand Oaks, CA: Sage Publications; Orr, M. T., Young, M. D., & Rorrer, A. K. (2010). Developing evaluation evidence: A formative and summative evaluation planner for educational leadership preparation programs. Charlottesville, VA: UCEA National Center for the Evaluation of Educational Leadership Preparation and Practice.

6. Anderson, E., & Reynolds, A. L. (2015).

7. Orr, M. T., Young, M. D., & Rorrer, A. K. (2010).

8. Neuman-Sheldon, B., Ikemoto, G. S., Bailey, M., Erdfarb, T., Nerenberg, L., Patterson, N., & Valdez, M. (2014). Principal preparation program self-evaluation: Lessons learned by New Leaders. New York, NY: New Leaders.

9. Mónica Byrne-Jiménez, Hofstra University (New York); Mary Canole, Council of Chief State School Officers; Stevie Chepko, Council for the Accreditation of Educator Preparation; Matthew Clifford, American Institutes for Research; Shelby Cosner, University of Illinois at Chicago; Brian Dassler, Florida Department of Education; Jacquelyn Davis, George W. Bush Institute; Benjamin Fenton, New Leaders; Susan Gates, RAND Corporation; Mark Gooden, University of Texas at Austin; Jackie Gran, New Leaders; Steven Gross, Temple University (Pennsylvania); Sara Heyburn, Tennessee State Board of Education; Susan Korach, Ritchie Program for School Leaders, University of Denver (Colorado); Paul Manna, College of William and Mary; Tricia McManus, Hillsborough County Public Schools (Florida); Glenn Pethel, Gwinnett County Public Schools (Georgia); Diana Pounder, University of Utah Education Policy Center; Frances Rabinowitz, Bridgeport Public Schools (Connecticut); Carol Riley, National Association of Elementary School Principals; Cortney Rowland, National Governors Association; Christopher Ruszkowski, Delaware Department of Education; Erin Swanson, Martin Millennium Academy (North Carolina); Brenda Turnbull, Policy Studies Associates; David Volrath, Maryland State Department of Education.