Next year DC public school students will take a new standardized test that's supposed to assess their critical thinking skills. But a number of questions on a publicly available practice test are confusing, unrealistically difficult, or just plain wrong.
A recent post on Greater Greater Education criticized the design of the Common Core-aligned test that DC will begin using next year, which is being designed by a consortium of states called PARCC. But judging from sample questions, there are problems with the test that are even more basic than its design.
Like the author of the recent post, I took the PARCC English Language Arts practice test for 10th-graders. I’ve been a writing tutor for a number of 10th- and 11th-graders in a high-poverty DCPS school, so I have some idea of students’ literacy skills at that grade level.
The test consists of 23 questions, most of which have two parts that are based on a given text. I ran into problems with the very first question. The relevant passage, taken from a short story, read as follows:
> I was going to tell you that I thought I heard some cranes early this morning, before the sun came up. I tried to find them, but I wasn’t sure where their calls were coming from. They’re so loud and resonant, so it’s sometimes hard to tell.
Part A of the question asked for the meaning of the word “resonant” as used in this passage. The choices were:
- A. intense
- B. distant
- C. familiar
- D. annoying
I was stumped. None of these words matched the definition of resonant as I know it, which is more like “echoing.” But, being a dutiful test-taker, I knew I had to choose A, B, C, or D.
Maybe this particular author was using the word in an idiosyncratic way? I tried to forget what I knew about the meaning of resonant and instead looked for what teachers call “context clues.” From what I know, many DC 10th-graders would be in a similar position.
The passage said that the speaker wasn’t sure where the cranes’ calls were coming from, and the only word that seemed to square with that fact was “distant.” So I chose B.
Wrong. The answer was A, “intense.”
I suppose you could argue that “intense” makes sense here as a definition of resonant, but it’s not obvious what that argument would be. Why would intensity make it hard to identify the source of a sound?
That was the most egregious example I found of a badly written test question. But I found a number of other instances where the correct answers were far from clear.
The next set of questions, for example, concerned a descriptive passage about a firefly hunt. The passage described the fireflies as “sketching their uncertain lines of light down close to the surface of the water.”
One question asked what was implied by the phrase “uncertain lines of light.” I chose: “The lines made by the fireflies are difficult to trace.”
Wrong. The correct answer? “The lines made by the fireflies are a trick played upon the eye.”
Well, maybe. But there was nothing in the passage to indicate that one answer was any better than the other. Do we really want to make the results of high-stakes tests depend on such arbitrary distinctions?
Another literature-based set of questions was relatively clear, although not exactly easy. But one basic problem was that, right at the beginning, the author used the word “machine” to mean bicycle. If you put your cursor over the word “machine,” which was underlined, and clicked, that rather unlikely definition popped up. But if you didn’t happen to do that, it would be pretty hard to understand the passage.
Reading Supreme Court opinions
I did a lot better on a section where all the questions were based on excerpts from a majority and a dissenting opinion in a Supreme Court case about the First Amendment. But then again, I have a law degree, and, having spent a year as a law clerk to a Supreme Court Justice, I have a lot of experience interpreting Supreme Court opinions.
I suspect the average DC 10th-grader will have a much harder time with that section than I did. It’s not that the questions call for extensive background knowledge in constitutional jurisprudence. Rather, they call for reading a text closely and making inferences based on it, just as Common Core-aligned tests are supposed to.
But if a test-taker confronts a lot of unfamiliar concepts and vocabulary words, she’s unlikely to understand the text well enough to make any inferences. In just the first few paragraphs of the majority opinion, she’ll confront the words “nascent,” “undifferentiated,” and “apprehension.” Based on my experience, few DC high school students are familiar with these words. Nor are they familiar with what the Supreme Court does or what the First Amendment is.
I certainly hope that someday DC 10th-graders will be able to understand texts like these well enough to use them to demonstrate their analytical abilities, and I hope that day comes as soon as possible. But only those who have no idea what’s going on in DC’s public high schools can believe it will come by next year. Most likely, students will either guess at the answers or just give up.
Many DC students, especially those in high school, are performing far below grade level. Even if their teachers manage to bring up their performance substantially, questions like these won’t be able to measure that improvement because they’ll still be pitched at too high a level.
Real test questions are secret
Of course, students won’t actually be getting these practice questions when they take the tests next year. Nor did they get these questions this spring, when many of them participated in field tests. But because the real questions are closely guarded secrets, the practice questions are all we have to go on.
Recently the principal of a New York City elementary school complained that she wasn’t allowed to reveal “the content of passages or the questions that were asked” on the Common Core-aligned tests given at her school—although she did say the tests were “confusing, developmentally inappropriate, and not well aligned with Common Core standards.”
Those weren’t PARCC tests, but PARCC has an even more draconian rule about its field tests. According to its website, teachers aren’t even allowed to see the test questions: “Items may not be viewed by anyone other than the students who are participating in the Field Test.”
That may be why, in all the furor over the Common Core, we haven’t heard complaints about mistakes or lack of clarity in the questions.
I’m not opposed to the Common Core, nor am I opposed to testing. But the success of any initiative depends on its implementation. And from what we can tell, so far PARCC’s implementation leaves a lot to be desired.
No doubt it’s hard to design a good multiple-choice test, especially one that is trying to assess higher-order thinking skills. But at least some of the problems with the practice questions could easily be fixed. Given that PARCC has received at least $170 million to design these assessments, you would think it could have done a better job.
(The PARCC practice test I took also had three essay questions, which were pretty challenging. Scoring those will raise a whole other set of difficulties.)
Let’s hope that the results of the PARCC field tests will bring these problems to light, and that PARCC will manage to fix them before students take the real tests a year from now. Perhaps it would help if other members of the public tried taking the practice tests for other grade levels to see if they suffer from similar defects. If you’re able to do that, please let us know your impressions in the comments.
These tests are important. DCPS will use the results to evaluate teachers, and we’ll all be relying on them to determine if efforts to improve education are working. Not to mention that students who take them may suffer a good deal of stress, even though the tests won’t affect their grades.
We need to be sure that these are tests of what students are actually learning, not just guessing games.