Were Republican voters under-sampled? Would exclusion of cell phones skew results away from a candidate favored by younger voters? Do we actually expect 69% of registered voters to show up?
These are interesting questions and valid criticisms. But in the end, the poll turned out to be very accurate, almost eerily so.
Let’s first compare the election night results with the poll results:
First off, note that the poll gets Perry Redd’s and Paul Zuckerberg’s election results exactly right, and Michael Brown’s small showing justifies his exclusion from the poll. Essentially none of the undecided voters went for Redd, Zuckerberg, or Brown.
Another common criticism of the poll was that 43% of respondents were undecided: with that many undecided, any candidate would seem to have a chance. But the more likely outcome is that the undecided voters will, in the end, follow the pattern of the already-decided voters. For the four major candidates, we can check this by comparing each candidate's election night share to the share of the decided voters that candidate got in the poll:
Here we see that the results for both Patrick Mara and Anita Bonds match almost exactly. This tells us that the undecided voters, in the end, broke for Mara and Bonds in the same proportions as the decided voters in the poll had.
On the other hand, when compared with their shares of the decided voters, Matthew Frumin under-performed on election night and Elissa Silverman over-performed. They were, of course, the two most closely-matched candidates, so we can add their totals together to see how the polling predicted their combined performance:
|Candidates|Election night|Poll (share of decided)|
|---|---|---|
|Frumin + Silverman|38.97%|37%|
Their combined share of the decided voters in the polling was within two percentage points of their combined election night totals. The close matches for Mara, Bonds, and Frumin-Silverman show that it’s reasonable to presume that the undecideds, even at 43%, will not deviate too strongly from the decideds.
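The rescaling behind these comparisons is simple: divide a candidate's raw poll percentage by the decided fraction of the sample. As a minimal sketch (the raw combined figure below is back-derived from the 43% undecided rate and the 37% decided share reported above, not taken from the poll's published crosstabs):

```python
def decided_share(raw_pct: float, undecided_pct: float) -> float:
    """Rescale a raw poll percentage to a share of decided voters only."""
    return raw_pct / (100.0 - undecided_pct) * 100.0

# With 43% undecided, a combined Frumin + Silverman raw poll number of
# roughly 21.1% corresponds to the 37% share of decided voters cited above.
combined = decided_share(21.09, 43.0)
print(round(combined, 1))  # ~37.0
```

Comparing that rescaled figure to the 38.97% the pair received on election night is what shows the undecideds breaking roughly in proportion.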
Silverman did get more of the undecided voters than Frumin did, which is evidence of some degree of coalescence. Many would have been happy with either Frumin or Silverman, and perhaps were wavering between the two. When the poll (and other indicators) showed that Silverman was finishing stronger, they gave her their support. From the perspective of the Silverman campaign, though, this was too little and too late.
The first take-away from the numbers is that polling, even in a low-turnout special election in DC, can be very accurate. The second take-away is that polling data which shows one candidate to be stronger than another can lead to support consolidating behind the stronger candidate.
As Patrick Mara reminded us, Tuesday's election was the third in recent memory in which multiple reform-minded, self-styled progressive candidates split the vote, giving a win to the establishment candidate. (Though others dispute whether Mara can claim the label of "progressive.") Many have wished for a progressive coalition that would rally around a single candidate.
One other thing that this poll has shown is that polling itself does not need to be the exclusive province of the traditional media and the campaigns. If Dr. Bronner’s Magic Soaps can support a poll, anyone can. We should all thank Adam Eidinger—the longtime radical DC political activist and Dr. Bronner’s employee who organized the poll—for showing us that.
There's no reason a group of like-minded activists couldn't commission its own timely and transparent polls, and use the results to consolidate support behind the strongest favored candidate.