Photo by M.V. Jantzen on Flickr.

DC would likely close some successful schools while expanding failing schools if it relies upon a study released last week.  The much-anticipated study, which the Deputy Mayor for Education commissioned to help plan school closures and charter school policies, is highly flawed.

The goal of the study was to help DCPS balance out near-empty buildings in some locations with overcrowded ones in others, taking into account the quality of the schools.

For all its colorful charts and maps, the report uses a faulty measure of school quality and does not make any serious attempt to predict how families and schools might react to the changes it proposes.  With such important decisions at stake, the Deputy Mayor should insist upon more rigorous analysis.

The report authors crunched a lot of numbers in an admirably short period of time and produced some very interesting descriptive statistics, like the percentage of students below 185 percent of the poverty line in charters (75 percent) versus DCPS (67 percent).

The study counts, within each of 39 neighborhood clusters in the city, the number of “performance,” or high-quality, seats in schools and compares that to the number of school-age students living in that cluster. The difference is called a service gap.

It recommends schools for closure, or in some cases investment, to reduce these service gaps. But it doesn’t specify the type of investment. Is it facilities? More teachers? Better teachers?
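As a concrete illustration of that arithmetic, here is the service-gap calculation in miniature. The cluster names and counts below are entirely invented; the point is only the mechanics of the comparison:

```python
# Hypothetical sketch of the report's "service gap" arithmetic.
# Cluster names, seat counts, and student counts are invented.
clusters = {
    "Cluster A": {"performance_seats": 1200, "school_age_students": 2000},
    "Cluster B": {"performance_seats": 900, "school_age_students": 650},
}

def service_gap(cluster):
    """Students living in the cluster minus high-quality seats located there."""
    return cluster["school_age_students"] - cluster["performance_seats"]

for name, data in clusters.items():
    print(f"{name}: service gap = {service_gap(data):+d}")
```

A positive gap signals a shortage of “performance” seats; a negative gap, a surplus. Nothing in this tally says anything about which students actually attend those seats, which is where the trouble starts.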

The authors define a “performance seat” as a seat in a school in the top tier of a 4-tier rating system they devised. Each school’s tier comes from estimated percentages of its students who were judged “proficient” on the state assessment test in recent years, projected 4 years into the future assuming a straight line trend. 
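As I read it, the projection amounts to fitting an ordinary least-squares line through a few annual proficiency rates and extending it forward. A minimal sketch, with invented years and rates:

```python
# Sketch of the report's apparent method (my reading of it): fit a straight
# line to a school's recent proficiency rates and extend it 4 years out.
# The years and rates below are invented for illustration.

def linear_projection(years, rates, years_ahead):
    """Ordinary least-squares line through (year, rate), extrapolated forward."""
    n = len(years)
    mean_x = sum(years) / n
    mean_y = sum(rates) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(years, rates))
             / sum((x - mean_x) ** 2 for x in years))
    intercept = mean_y - slope * mean_x
    return intercept + slope * (max(years) + years_ahead)

# A school whose proficiency rose from 40% to 46% over three years:
projected = linear_projection([2009, 2010, 2011], [0.40, 0.43, 0.46], 4)
print(round(projected, 2))  # the trend, not the school's contribution, drives the tier
```

Note what the extrapolation assumes: that whatever moved the rate (a demographic shift, a feeder-pattern change, a one-time curriculum push) will continue in a straight line for four more years.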

This study raises a lot of questions for most researchers and even lay readers. Two big flaws stand out; they are so basic that they could do significant damage if city leaders overlook them.

It uses a flawed measure of school performance. At the heart of this paper is a 4-tier rating of school quality that relies on the percent of students who are proficient on the state test (called the DC-CAS). Never mind the fact that a proficiency rate throws away information by focusing only on whether a score was above or below a fixed cut point instead of how high or low it was.

Student proficiency rates have long been discredited as a school performance measure because proficiency rates capture student achievement at a point in time, but say little about how much the school or its teachers contributed to its current students’ performance.

For example, a middle school could have declining proficiency rates if a feeder school begins sending more at-risk students to it, even if the teachers are especially skilled at working with a challenging population.

At a bare minimum, a sensible measure accounts for what a student knew before enrolling in the school (for example, using the student’s score from the prior year). This is why more and more states, including DC, have adopted student achievement growth measures instead of proficiency rates for their teacher and school performance indicators.

Using a trend in proficiency rates doesn’t help, and only creates a false sense of “gains” which is more likely to measure demographic change and other differences between successive cohorts of students cycling through a school than the performance of the schools’ educators. That’s because it compares students in one year to different students, instead of students in one year to the same students in the prior year.
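A toy example makes the cohort problem concrete. All scores below are invented; the point is that a school’s proficiency rate can fall across cohorts even while every returning student improves:

```python
# Toy illustration (invented scores): a school's proficiency rate can decline
# across cohorts even when every returning student gains ground.
PROFICIENT = 50  # hypothetical cut score

# Students tested in year 1, and the SAME students retested in year 2:
year1 = [48, 52, 60]
year2_same_students = [55, 58, 66]  # every student improved

# A new, more at-risk cohort entering the school in year 2:
year2_new_cohort = [40, 44, 52]

def proficiency_rate(scores):
    """Share of scores at or above the cut point (discards how far above/below)."""
    return sum(s >= PROFICIENT for s in scores) / len(scores)

def mean_growth(before, after):
    """Average same-student gain from one year to the next."""
    return sum(b - a for a, b in zip(before, after)) / len(before)

print(proficiency_rate(year1))                  # two of three proficient
print(proficiency_rate(year2_new_cohort))       # one of three: a "decline"
print(mean_growth(year1, year2_same_students))  # positive: real gains
```

A growth measure based on the same students’ prior scores credits the school for those gains; the cohort-to-cohort proficiency comparison blames it for demographics.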

By relying on flawed measures of school performance, policymakers risk closing down schools that are best equipped to work with challenging populations and replacing them with ones that would fail miserably if they started working with a different student body.

It ignores human behavior. There is a big difference between bean-counting and behavioral analysis. The latter recognizes that families make choices (within budget constraints) about where they live and where they send their kids to school.

School leaders make decisions too — over what programs to offer and how to allocate scarce resources to produce successful educational outcomes or whatever else they may value. In the case of charter schools, administrators choose whether to open a charter, where to locate, and what to offer.

In modeling supply and demand, however, the report ignores all of these factors. It makes no attempt to model the behavior of these actors in order to predict the effect of different policies on outcomes. It is a bean-counting exercise.

For example, this study would say that a neighborhood has no service gap if it had a successful but highly specialized charter school, such as a Spanish immersion school. Obviously such a school could draw students from all over the city, and residents of the immediate neighborhood may either not want to attend such a program or not be able to count on being admitted, because the pool of students in the lottery is so large.

Acting on this flawed study could end up making service gaps worse. For example, an affluent neighborhood may have far too many seats for its own students, and yet its schools can be overcrowded because families from far-flung neighborhoods want affluent peers or a school in a neighborhood with better housing stock.

Building more schools in the less affluent neighborhoods will not necessarily solve that problem. It might just create more under-utilized space. Yet that’s exactly what this study recommends.

A smarter policy would strategically locate new schools partway between the current over-enrolled schools and the under-enrolled ones and design curricular offerings to induce the optimal mixing of students. Or better yet, the policy could rely more on information and transportation than simply construction and demolition.

In other words, knowing that a school is under-enrolled is less important than knowing why it is under-enrolled. It’s important to know why parents make the choices they make, not just to tally up their choices at a moment in time like an accountant.

It is possible to model the supply and demand of schooling without making naïve assumptions about schools and families.  For example, there is work in progress by economists at Carnegie Mellon University demonstrating how it can be done.

In my own research I have simulated parental choice outcomes using behavioral parameters estimated from school choice data. This analysis illustrated how family preferences over the racial composition of the student body, as well as commute distance and other factors such as school program offerings, can influence sorting outcomes.
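To make the idea concrete, here is a deliberately tiny sketch of such a behavioral model: families choose among schools by a logit rule over commute distance and program fit. Every school, parameter, and data point below is invented; a real analysis would estimate the coefficients from actual choice data.

```python
import math
import random

# Tiny sketch of a behavioral demand model. Schools, distances, and
# coefficients are all invented for illustration, not estimated.
random.seed(0)

schools = {
    "Neighborhood DCPS": {"distance_mi": 0.4, "program_match": 0.2},
    "Immersion charter": {"distance_mi": 2.5, "program_match": 1.0},
}

BETA_DISTANCE = -1.0   # assumed disutility per mile of commute
BETA_PROGRAM = 2.0     # assumed value of a well-matched program

def choice_probabilities(schools):
    """Multinomial logit: probability each school is chosen by a family."""
    utils = {name: BETA_DISTANCE * s["distance_mi"] + BETA_PROGRAM * s["program_match"]
             for name, s in schools.items()}
    denom = sum(math.exp(u) for u in utils.values())
    return {name: math.exp(u) / denom for name, u in utils.items()}

def simulate_enrollment(schools, n_families):
    """Draw each family's choice from the logit probabilities and tally seats."""
    probs = choice_probabilities(schools)
    names, weights = zip(*probs.items())
    tallies = {name: 0 for name in names}
    for _ in range(n_families):
        tallies[random.choices(names, weights=weights)[0]] += 1
    return tallies

print(choice_probabilities(schools))
print(simulate_enrollment(schools, 1000))
```

Even this crude version shows why bean-counting fails: the immersion school’s enrollment depends on preferences citywide, not on a head count of the children living nearby, and changing a school’s offerings or location changes who shows up everywhere else.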

Planners can also consider trends in demographics, housing construction, and transit. They can simulate the results of a wide range of charter school and DCPS policies including not only facilities siting and improvements but varied attendance zones and expanded access to information about and transportation to schools beyond the immediate neighborhood.

The District needs sophisticated guidance to begin comprehensive, city-wide planning of school closures and investments and to help coordinate land use policy with charter school expansion.  Unfortunately, this report doesn’t provide enough of this guidance.

Steven Glazerman is an economist who studies education policy and specializes in teacher labor markets. He has lived in the DC area off and on since 1987 and settled in the U Street neighborhood in 2001. He is a Senior Fellow at Mathematica Policy Research, but any of his views expressed here are his own and do not represent Mathematica.