
Every year DC announces, with much fanfare, the results of the standardized test that all DC public school students take, the DC CAS. Last year the scores were declared historic because they rose by 4 points. This year’s scores barely budged, but there was still a big press event and much discussion of whether they show that education reform is working in DC. But how reliable are the test scores?

Here are 6 things to bear in mind when considering the DC CAS results:

Proficiency rates are decided by policy-makers. The scores that DC has released aren’t actual test scores, but the percentage of students deemed proficient. Proficiency is not an absolute. Rather, DC education officials pick a certain “cut score”: students who score above that level are classified as proficient or advanced, and those below it as basic or below basic.

That means that proficiency rates are essentially a matter of policy. Last year, controversy erupted when DC officials rejected a recommendation from teachers to use a new grading scale that would have made it harder for students to be considered proficient.

DC officials have said they kept the old grading scale in order to ensure that scores from prior years would be comparable, but critics have charged manipulation and called for officials to release students’ underlying scores.
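To see how much the choice of cut score matters, here’s a toy calculation. The numbers are purely illustrative, not actual DC CAS scores or cut scores: the same set of student results yields very different proficiency rates depending on where officials draw the line.

```python
# Hypothetical scale scores for ten students (illustrative only,
# not actual DC CAS data).
scores = [28, 35, 41, 47, 52, 58, 63, 66, 71, 79]

def proficiency_rate(scores, cut_score):
    """Percent of students scoring at or above the cut score."""
    return 100 * sum(s >= cut_score for s in scores) / len(scores)

# The same students look very different under different cut scores.
print(proficiency_rate(scores, 50))  # more lenient cut: 60.0 percent proficient
print(proficiency_rate(scores, 60))  # tougher cut: 40.0 percent proficient
```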

Proficiency rates don’t tell you about growth. If all you’re hearing about is how many students are proficient, you don’t know how many have moved up from one category to another. Students who manage to move up from below basic to basic, for example, don’t register in the proficiency rate at all.

Given that many students in DC schools are in the below basic category and are unlikely to jump straight to proficient, a focus on proficiency could mean a lot of progress is taking place under the radar. It also ends up giving credit to schools that start with a lot of high-achieving students rather than to schools that start with low achievers and bring them up.
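Here’s a hypothetical illustration, with invented scores and a made-up cut score of 50: every student below the cut improves substantially, yet the proficiency rate doesn’t move at all.

```python
# Illustrative only: invented scores and an invented cut score.
CUT = 50

before = [20, 30, 38, 44, 55, 62]  # two of six students proficient
after = [35, 43, 48, 49, 55, 62]   # big gains below the cut, same two proficient

def rate(scores):
    """Percent of students at or above the cut score."""
    return 100 * sum(s >= CUT for s in scores) / len(scores)

# The proficiency rate is identical despite real progress.
print(round(rate(before), 1), round(rate(after), 1))  # 33.3 33.3
```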

DC does track student growth at individual schools as well as proficiency rates, and both measures are available as part of a school’s equity report. But for some reason the fanfare is all about proficiency rates.

Individual schools’ scores can fluctuate wildly from year to year. The focus last week was on overall proficiency rates: for DC as a whole, and for DC Public Schools as opposed to the charter sector. But DC officials also released school-by-school results. And, as Emma Brown pointed out in the Washington Post, some schools’ scores went way up while others went way down.

For example, Drew Elementary gained 34 points in math and 18 points in reading, while Tubman lost 24 and 14 points in those subjects. There may be reasons for these shifts: Tubman, one of DCPS’s success stories last year, had a new principal this year. And statisticians might argue that wild fluctuations at individual schools cancel each other out when the scores are aggregated.

But for me, at least, the ups and downs raise questions about the reliability of the tests. Do schools really change that much from year to year?
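Part of the answer may be simple sampling noise. Here’s a quick, admittedly crude simulation with invented numbers: a hypothetical school where 45 percent of students are “truly” proficient and about 60 take the test each year. Even if nothing about the school changes, the measured proficiency rate bounces around from year to year.

```python
# Illustrative only: a toy simulation, not a model of actual DC CAS results.
import random

random.seed(0)
TRUE_RATE = 0.45  # hypothetical underlying proficiency
N = 60            # hypothetical number of tested students

def observed_rate():
    """One year's measured proficiency rate for a random cohort."""
    return 100 * sum(random.random() < TRUE_RATE for _ in range(N)) / N

# Five simulated "years" at a school that never actually changes.
rates = [round(observed_rate(), 1) for _ in range(5)]
print(rates)
```

With only a few dozen students tested, year-to-year swings of five or even ten points can come from chance alone. Swings of 20 or 30 points, though, are harder to explain that way.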

Standardized tests only measure certain skills. Standardized tests can measure simple skills, like addition and subtraction, and they can measure how well students have absorbed and retained the facts they’ve been taught. But they’re not very good at measuring higher-order analytical abilities.

And in fact, MIT researchers have found that even students who do well on standardized tests often lack what they call fluid intelligence, including the ability to analyze abstract problems and think logically.

What are the implications of that finding for students who do badly on standardized tests? It’s possible they possess analytical abilities that aren’t reflected in their test results. But my hunch is that they’re at least as much in need of learning higher-order thinking skills as those who test well.

The larger point is that standardized testing has led teachers to focus on drilling rather than on fostering the kinds of skills that are crucial for success after high school. In theory, the Common Core will change that by focusing instruction on exactly those kinds of skills.

The test questions may be badly written. To avoid having to rewrite questions every year, DC doesn’t release the questions on the DC CAS. A few science and math questions from 2009 are available online, but nothing on the literacy side, and nothing recent.

But if the DC CAS questions are anything like the sample questions for the Common-Core-aligned tests DC and other school districts will give starting next year, they may be part of the problem.

When I took an online practice test for the PARCC tests that DC will use, I found questions that were unclear, pitched at far too high a level, or simply nonsensical. Apparently there are similar problems with the sample questions for the tests produced by the other Common Core test consortium, Smarter Balanced.

Changes in test scores can result from demographic change. More affluent students tend to do better on standardized tests. So it’s possible that increases in test scores in DC largely reflect an influx of relatively affluent kids into the school system. That possibility is bolstered by the fact that test scores haven’t increased much for at-risk subgroups of DCPS students in recent years.
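A simple composition effect can produce this, as a made-up example shows (the enrollment numbers and rates below are invented, not DCPS data). If each group’s proficiency rate stays flat but the mix of students shifts toward the more affluent group, the overall rate still rises.

```python
# Illustrative only: invented enrollment counts and proficiency rates.
# Each entry is (number of students, proficiency rate) for a hypothetical group.
year1 = {"at_risk": (800, 0.30), "more_affluent": (200, 0.80)}
year2 = {"at_risk": (700, 0.30), "more_affluent": (300, 0.80)}

def overall_rate(groups):
    """Enrollment-weighted overall proficiency rate, in percent."""
    total = sum(n for n, _ in groups.values())
    return 100 * sum(n * r for n, r in groups.values()) / total

# Neither group improves, but the overall rate climbs from 40 to 45.
print(overall_rate(year1), overall_rate(year2))  # 40.0 45.0
```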

None of this means we should abandon standardized tests. We need some way to assess, on a large scale, how schools are doing, and right now it’s not clear there’s any other good way to do that. And test scores have served an important function in pointing up the disparities in the achievement levels of higher- and lower-income students.

But we should supplement the scores with other measures when possible. And we should take them, especially those that only reflect proficiency rates, with a large grain of salt.

Natalie Wexler is a DC education journalist and blogger. She chairs the board of The Writing Revolution and serves on the Urban Teachers DC Regional Leadership Council, and she has been a volunteer reading and writing tutor in high-poverty DC Public Schools.