Photo by Juan Pablo Mejia on Flickr.

There’s been a lot of talk about the most recent DC CAS reading and math scores and what they mean. But another set of test scores, assessing students’ writing skills, hasn’t gotten much attention. What do they mean, if anything?

The DC CAS has included a composition section since 2011, but the 2012-13 school year is the first time the scores have been factored into a school’s rating for purposes of federal law. (The scores still don’t count for DCPS teacher evaluations.) Like the reading and math scores, writing scores went up this year. But measuring writing proficiency is even trickier than assessing skills in reading and math.

All DCPS and DC charter school students in grades 4, 7, and 10 took DC’s standardized writing test last spring. The overall proficiency rate for DCPS students was just under 50%. For charter school students the rate was slightly higher, just over 52%. In both cases that represents an increase over prior years.

Although the one-year increases were fairly modest, the gains from 2011 were dramatic for both sectors: DCPS writing scores increased by 16.9 points, and charter scores by 19.5 points.

Does that mean that DC students are now aces at writing? Based on what I know personally and what I’ve heard from teachers and parents, I doubt it. And the data show that more DCPS students fall into the “below basic” category in writing (20%) than in either reading (17%) or math (18%).

On the other hand, there are also more students in the “advanced” category: 21%, as compared to only 11% in reading and 16% in math. So the writing results are more polarized than those in the other subjects: there are more really good writers, as measured by the test, but also more really bad ones.

In fact, fewer DCPS students scored at the “proficient” level in writing than in reading or math: only 29%, compared with 36% in reading and 33% in math. Because the overall proficiency rate combines “proficient” and “advanced,” it’s the relatively large advanced group that pulls the writing proficiency rate up to about 50%.
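The arithmetic here — the reported proficiency rate being the sum of the “proficient” and “advanced” bands — can be sketched in a few lines, using the approximate DCPS percentages quoted above:

```python
# Approximate DCPS score distributions quoted in this article (percent of students).
# The reported "proficiency rate" combines the proficient and advanced bands.
bands = {
    "writing": {"below_basic": 20, "proficient": 29, "advanced": 21},
    "reading": {"below_basic": 17, "proficient": 36, "advanced": 11},
    "math":    {"below_basic": 18, "proficient": 33, "advanced": 16},
}

for subject, pct in bands.items():
    rate = pct["proficient"] + pct["advanced"]
    print(f"{subject}: about {rate}% proficient or above")
```

Run this way, writing comes out at 50% despite having the largest below-basic group — the advanced band does the pulling.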

Breakdown of DCPS scores

DCPS has released a breakdown of scores by ward, subgroup, and school, so it’s possible to try to draw some conclusions about which DCPS students are doing better or worse in writing. (OSSE has not yet released a school-by-school breakdown of composition scores for charter schools, although the schools themselves have received the information.)

For the most part, the DCPS writing score breakdown is what you would expect. Students in Wards 2 and 3 scored a lot higher than those in Wards 7 and 8, with over 75% proficient as compared to about 36% and 30%, respectively. White students were about 80% proficient, Hispanics about 50%, and blacks about 44%.

But there are some puzzling discrepancies. Common sense would indicate that reading and writing scores should track each other fairly closely, and in most cases that’s true. But in some cases the two scores diverge significantly.

Perhaps it’s not surprising that at 5 schools reading scores were much higher than writing scores, since writing is generally a harder skill to master. At Orr Elementary School, for example, reading proficiency was about 32%, while writing was only about 9%.

What’s harder to explain are the 7 schools where the writing score was substantially higher than reading. At Stanton Elementary School, for example, reading proficiency was just under 20%, but writing was at 40%. Even more dramatic was Garfield Elementary. There, only about 15% of students scored proficient in reading, but over 55% scored proficient in writing.

And remember that overall, only 11% of DCPS students scored advanced in reading, while almost twice as many did so in writing.

Subjectivity in the scoring

One possible explanation for these inconsistencies is that scoring a writing test is not as objective a process as scoring a reading or math test. Multiple-choice reading and math tests are scored by machines, but the DC CAS writing assessment is scored by human beings.

According to OSSE, these human beings are hired and trained by CTB/McGraw Hill and Kelly Services, a temporary personnel agency formerly known as Kelly Girl. Steps are taken to ensure quality and consistency: all raters must have a bachelor’s degree or higher, all are interviewed and screened, and all receive training in the scoring rubric. Raters review sample “exemplary responses,” so they know what they’re looking for. In addition, about 10% of answers are scored by a second person who doesn’t know the score given by the first reader.

Still, it’s inevitable that more subjectivity is involved in scoring a test for which there’s no one right answer. A criterion like “Fully addresses the demands of the question or prompt,” which is part of the scoring rubric, leaves a certain amount of wiggle room.

So it’s not really clear what these tests are telling us about the state of students’ writing skills. Nor is it clear that DC students will do as well on future standardized writing assessments.

Starting in 2011, DC revised its writing prompts to align them more closely with the rigorous Common Core standards, which emphasize analyzing and interpreting texts. For example, the old sample 10th-grade writing prompt for the DC CAS was: “Who is likely to accomplish more—the person who adjusts to society as it is, or the person who attempts to change it?” In their answers, students were invited to draw on their reading as well as their own experience and observations, but they didn’t need to interpret a text.

The new Common Core-aligned sample 10th-grade question for the DC CAS gives students a two-page story to read and then asks questions about how the author conveys a certain message through the characters. That may be more challenging than the previous DC CAS writing test, but it’s probably a lot easier than the writing questions students will begin to confront in school year 2014-15.

More complex questions

That’s when DC will replace its own test with a test devised by a consortium called PARCC, which DC and 19 states have joined. Like the current DC CAS, the PARCC test will ask students to read and respond to texts. But the texts for the sample 10th-grade PARCC questions are much more complex: an excerpt from a high-flown 1941 blank-verse translation of the Daedalus and Icarus myth from Ovid’s Metamorphoses, and a poem on the same subject by Anne Sexton. One of the writing prompts asks students to analyze how Icarus’s experience of flying is portrayed differently in the two texts. It’s an assignment many college students would no doubt find challenging.

So if writing scores drop precipitously once the PARCC test comes in, that won’t necessarily mean students’ writing skills have suddenly plummeted. And although we’ll be able to compare writing scores in DC to those in other jurisdictions that are using the same test, there may still be some subjectivity in the scoring, since the PARCC writing tests will also probably be graded by human beings.

Using a test to evaluate writing is inherently a tricky business. PARCC has said it is considering using computers to evaluate its writing tests, and the other Common Core testing consortium is actually trying to “train” computers to do the job. But that approach would inevitably bring its own problems.

And some students simply don’t write well under pressure. It might make more sense to assess writing skills by means of a student portfolio rather than a time-limited test, but it’s hard to see how that could be done on a mass scale.

On the other hand, if we don’t test writing, it probably won’t get taught. Surely one reason writing has been neglected in recent years is that the tests mandated by No Child Left Behind have focused exclusively on reading and math. And writing is too crucial a skill to be ignored. So the best we can do, it seems, is to administer writing tests and actually pay some attention to the results. But we also need to remind ourselves to take those results with a grain of salt.