
[catastrophic technical glitches in original diary led to rapid deletion - hoping this re-post functions properly]

This is an exit poll story, about trying to get to the bottom of why the early exit polls in the 2004 US presidential election over-estimated Kerry's share of the vote. The overestimate was especially painful because, unlike an error of comparable magnitude in 1992, when the exit polls also over-stated the Democratic vote, in 2004 the difference was the difference between defeat and victory. It's a geeky story, because it's about a tiny mathematical piece of the puzzle - but a piece that may be crucial to interpreting the data. And it's a blog story, because that tiny piece emerged from the collective networked wetware that is the blogosphere.

Read on....

In November 2004, armies of lawyers were on standby in Florida in anticipation of a repeat of the protracted vote-count and legal battle that had eventually handed the presidency to Bush in 2000. In the event, it was Ohio that came under the spotlight. The inordinate length of the queues to vote, allegations of machine shortages in strongly Democratic precincts, and a general air of distrust of paper-less touch-screen voting machines supplied by a manufacturer whose chief executive, Walden O'Dell, had famously stated his commitment "to helping Ohio deliver its electoral votes to the president next year" all fuelled widespread speculation that the exit poll error was due not to errors in the exit polls, but to corruption of the vote count.

So which was it?  

Answering this question is complicated.  There is no doubt in anyone's mind that the early exit poll projections of Kerry's margin differed significantly from the result, which means that those projections were well outside their "margin of error". In other words, something other than chance caused the discrepancy.  Unfortunately, the sheer size of the discrepancy doesn't tell you which number is wrong.

In January 2005, Edison-Mitofsky (E-M), the pollsters responsible for the exit polls, issued a public evaluation of their exit-polling "system". In it they concluded that

While we cannot measure the completion rate by Democratic and Republican voters, hypothetical completion rates of 56% among Kerry voters and 50% among Bush voters overall would account for the entire Within Precinct Error that we observed in 2004.

In support of this claim, they noted that the overestimate of the Kerry vote was greater wherever factors were present that were likely to make selection of voters for interview less than random.  In other words, E-M's conclusion is that the Great 2004 Exit Poll discrepancy resulted from Bush voters being more reluctant than Kerry voters to be interviewed, exacerbated where their voter sampling protocol left greater opportunities for Bush voters to escape unpolled.
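To get a feel for the sizes involved (my arithmetic, applied to E-M's hypothetical rates): in a precinct that actually split 50-50, completion rates of 56% among Kerry voters and 50% among Bush voters would give the poll a Kerry share of .56/(.56 + .50), or about 53%, turning a dead heat into a nearly six-point Kerry lead in the polled margin - roughly the size of the discrepancy that needs explaining.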

So how did E-M come to this conclusion?

Exit poll methodology in US presidential elections normally involves two levels of sampling, both of which are vulnerable to error. At the first level, a selection of precincts is sampled from each state, which, it is hoped, will be representative of the totality of precincts within that state. How representative the sample actually was can be checked easily after the election, simply by comparing an estimate based on the actual results in the sampled precincts with the actual state results. The second level of sampling occurs within each of the sampled precincts: the sampling of the voters themselves. As this is done at the precincts on election day, and involves human interactions, it is potentially more vulnerable to bias.

In their evaluation, Edison-Mitofsky conclude that their precinct sampling was fine; if anything, it slightly over-estimated Bush's share of the vote. It was at the level of voter sampling within the precincts themselves that the error occurred. This source of error is referred to as the Within-Precinct Error (WPE), which represents the difference between the predicted and the actual percentage margin between the candidates for that precinct. The average WPE in 2004 overestimated Kerry's margin by a full 6.5 percentage points.

Two hypotheses have been offered.  One is the E-M hypothesis, which has been dubbed the "shy Bush voter" or "reluctant Bush responder" hypothesis.   An alternative hypothesis is the politely termed "vote-count corruption" hypothesis, which states that a greater proportion of Kerry votes than Bush votes went uncounted, and/or that extra Bush votes were somehow added to the total.  This second hypothesis is at the heart of allegations that the exit polls in 2004 are evidence of election fraud.

In March 2005, a group of academics working for the US Counts Votes (USCV) National Election Data Archive Project posted a paper entitled Analysis of the 2004 Presidential Election Exit Poll Discrepancies. A number of diaries about the paper appeared on Daily Kos, including a recommended diary by Jpol. USCV analyse the data given in the E-M evaluation, and conclude that because total response rates are not significantly lower in strongly Republican precincts than in strongly Democratic precincts, the "reluctant Bush responder" hypothesis is not supported. They then turn to what they consider a plausible alternative, which they entitle "reluctant Bush responder in mixed political company". In essence, this second hypothesis is that Bush voters may have been more reluctant to respond in precincts where voters for the Democratic candidate were in the majority.

They test this hypothesis by reference to a table in the E-M report that gives the mean and median WPEs for precincts with different degrees of partisanship, as indicated by the final vote count. Five categories are given: highly Democratic precincts; moderately Democratic precincts; even precincts; moderately Republican precincts; and highly Republican precincts. The relevant table is reproduced below:

[Table: Mean and median WPE, mean absolute WPE, and number of precincts (N) by precinct partisanship, from page 36 of the Edison/Mitofsky report.]

In the convention used by E-M, a negative WPE indicates an over-estimate of the proportion of votes for the Democratic candidate, and a positive WPE indicates an over-estimate of the proportion of votes for the Republican candidate. From the table it can be seen that the most negative mean and median WPEs are in the "high Republican" category of precincts. Thus, contrary to the "reluctant Bush responder in mixed political company" hypothesis, the greatest overestimates of the Kerry vote appear to have occurred in highly Republican precincts.
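(To make the convention concrete: if the poll in a precinct predicted Kerry ahead 55-45 but the count had him ahead only 52-48, the WPE would be (52 - 48) - (55 - 45) = -6 percentage points, a six-point over-estimate of Kerry's margin, hence negative.)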

From this table, USCV calculate the relative response rates of the two groups of voters, and conclude that in order to satisfy the "reluctant Bush responder" hypothesis, implausible patterns of non-response rates have to be inferred. The USCV authors claim that the E-M data are more consistent with a "Bush strongholds have more vote-count corruption" hypothesis.

Houston, we have a problem.

Figuring out the nature of the problem is what has been happening in the blogosphere over the past few days.

I had been aware for some time that the WPE was a "confounded variable". If, in a particular precinct, something was wrong with either the sampling of the voters or the counting of their votes, so that the proportion of Kerry voters polled was greater than the proportion of Bush voters polled (alternatively, that the proportion of Kerry votes left uncounted was greater than the proportion of Bush votes left uncounted, or even that the proportion of fictitious extra Kerry votes was less than the proportion of fictitious extra Bush votes), the effect on the WPE would be greater if voters in that precinct were evenly divided than if the precinct was highly partisan. I therefore devised an algebraic formula (what RonK calls my "fancy function") that would enable the "true" sampling bias to be retrieved from the partisanship and WPE data. I posted some interim conclusions in a diary on Daily Kos.
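For the technically inclined, the idea is easier to see in code than in algebra. Here is a minimal sketch in Python of the kind of calculation involved, assuming a two-candidate race and taking the bias index as the log of the ratio of the poll's odds on Kerry to the count's odds (the exact equations are in the paper linked at the end; the names here are purely illustrative):

```python
import math

def bias_index(kerry_share, wpe):
    """A partisanship-free measure of polling bias for one precinct.

    kerry_share: Kerry's counted share of the two-candidate vote (0-1)
    wpe: within-precinct error, E-M sign convention
         (negative = over-estimate of the Democratic margin)
    """
    p = kerry_share
    q = p - wpe / 2        # the Kerry share implied by the poll
    # log of the odds ratio between the polled and counted Kerry shares:
    # zero if the sample was unbiased, positive if Kerry voters were
    # over-sampled, and, unlike the WPE itself, unchanged by the
    # precinct's partisanship for a given degree of over-sampling
    return math.log((q / (1 - q)) * ((1 - p) / p))

# the same underlying over-sampling of Kerry voters yields different
# WPEs at different levels of partisanship, but the same bias index:
print(round(bias_index(0.30, -0.056), 2))   # 0.13
print(round(bias_index(0.50, -0.065), 2))   # 0.13
```

The point is that the index, unlike the raw WPE, stays put when you change a precinct's partisanship while holding the degree of over-sampling constant.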

I was aware, but not fully aware, that my fancy function was something of a fudge.  We do not have the partisanship data for each precinct sampled, nor do we have the WPEs.  All we have is the state results from the count, and the mean WPEs for each state. Similarly, in the table above, we only have mean and median WPEs for precincts in broad bands of partisanship.  However, using my formula on those values seemed to confirm the inference drawn by USCV, who in fact updated their report (using their own-brand fancy function), kindly acknowledging me for drawing attention to the possible confound, and reporting that after correcting for the problem the inferences remained valid.

I thought I was on the right lines, and Mark Blumenthal, who linked from a piece on his Mystery Pollster blog to my diary, thought so too. In his piece, he made a similar, though not identical, point regarding possible artefacts that might have produced the pattern in the E-M data, which he suggested arose from sampling error. This was very perceptive, although, as it turns out, wrong. Enter a couple more people from cyberspace, DKos's DemFromCT and Rick Brady, from Stones Cry Out, both of whom commented on my DKos piece. Other Kossacks, including RonK, commented on both blogs. We started exchanging emails. Rick was sure that sampling error couldn't produce Mark's artefact. I was running simulations that were producing it, but I wasn't sure why, or to what extent it was related to my confound. Mark became convinced that the artefact was not due to sampling error and issued a "mea culpa" update on his blog piece, but invited people to wait for further developments.... Meanwhile the algebra was getting hairier and hairier, and our kitchen table was lying inch-deep in back-of-the-envelope formulae, at least half of them in my husband's spidery writing, despite his continual nagging at me to forget exit polls and get on with my dissertation.

The problem, essentially, was that the WPE is a function of two variables - the partisanship of the precinct and the amount of bias in the poll (or in the vote-count) - and trying to visualise what happens to the resulting third variable when you manipulate the first two can be tricky. So, being trained as an architect, I did what architects do when they want to visualise things in three dimensions: I made a model.

A computational model is like any other kind of model, and it lets you peer at things from unusual angles.  What I wanted to know was, essentially this:

If some characteristic of a precinct means that you will tend to poll a bigger percentage of one kind of voter than another, what will that variation in sampling bias do to the WPEs at different levels of precinct partisanship? The mistake that Mark had made, and which Rick first spotted, is that this is a different question from the question "what effect will sampling error have on the WPE for precincts of different levels of partisanship?" Sampling error is what you get if you randomly sample a percentage of all voters, and sometimes, by chance, get a few extra Democrats, and sometimes, by chance, get a few extra Republicans. It is analogous to tossing a coin 100 times and getting roughly 50 heads and 50 tails each time. Sometimes you'll get a few extra heads, sometimes a few extra tails, but over time, if you've got nothing better to do, the extra heads will be balanced by the extra tails, and your average score will be 50 heads. When you didn't get 50, what you got was sampling error.

But what I am talking about here is analogous to throwing loaded dice. Imagine you have a set of dice, and paint each of them black on some sides and white on the others. You give some friends one each and get them to throw them 100 times, again several times over. Each time they have to record how many blacks they get and how many whites. If the dice are honest, the friends who have a die with four black faces will tend to throw a black four times out of six (67% black). Those whose dice have one black face will tend to throw a black only one time out of six (17% black). In fact, your friends could figure out how many black faces there were on their die by noting what percentage of blacks they threw (slow, and prone to sampling error, but given enough throws it would work). However, if you had loaded the dice, you could seriously mislead them. If you had a die with three black faces and three white faces, but loaded it so that it landed showing black more often than white, you could trick your friend into thinking that the die had more than three black faces.

Now, imagine two scenarios: in one, you were completely random in the way you loaded the dice.  Some are lightly loaded, some not at all, some favour the white sides, and some favour the blacks.  In a second scenario you load them so that more of the dice load for black than load for white.  These are the scenarios I modelled.

Instead of dice I modelled precincts, and instead of varying the number of black faces on a die, I varied the percentage of voters in each precinct who voted for Kerry or Bush. I took a million of each kind of precinct. For scenario one, I randomly loaded the dice - some were scarcely loaded at all, some were loaded for Bush and some were loaded for Kerry. You could imagine, perhaps, that in some precincts the interviewer appealed to Bush voters more than Kerry voters, so more Bush voters ended up being interviewed. In others, the interviewer seemed more friendly to Kerry voters. (Or, to be even-handed with my assumptions, imagine that in some precincts the vote tabulators deleted Kerry votes, and in others they deleted Bush votes - the math is the same.) But the point is that in this scenario the bias in each precinct could go either way.

In the second scenario, I loaded more of the dice in favour of Kerry.  In some precincts, the "interviewer" was still more attractive to Bush voters, in some the interviewer had no effect.  But the average "interviewer" effect was in favour of Kerry voters.  

What I wanted to know was what effect this would have on precincts with different percentages of real votes for each candidate.  In other words, for a given level of loading, and a given level of support for a candidate, what would be the effect on the guesstimate of the vote?  
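For anyone who wants to play along at home, a stripped-down version of the model can be re-created in a few lines of Python. This is a sketch rather than my actual code, and the loading distribution and poll size here are illustrative guesses:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_wpe(kerry_share, n_precincts, mean_loading, sd_loading,
                 voters_polled=100):
    """Simulate WPEs for precincts that all share one true Kerry share.

    Each precinct gets its own random 'loading' on the dice: a shift in
    the log odds that a polled voter turns out to be a Kerry voter.
    mean_loading = 0 is scenario one (no net bias); mean_loading > 0 is
    scenario two (the average loading favours Kerry).
    """
    p = kerry_share
    loading = rng.normal(mean_loading, sd_loading, n_precincts)
    odds = (p / (1 - p)) * np.exp(loading)    # biased sampling odds on Kerry
    q = odds / (1 + odds)                     # P(a polled voter is a Kerry voter)
    polled_share = rng.binomial(voters_polled, q) / voters_polled
    # E-M sign convention: actual margin minus polled margin,
    # so a negative WPE is an over-estimate of the Kerry vote
    return 2 * (p - polled_share)

for share in (0.1, 0.3, 0.5, 0.7, 0.9):
    wpe = simulate_wpe(share, 1_000_000, mean_loading=0.0, sd_loading=0.4)
    print(f"Kerry share {share:.0%}: mean WPE {wpe.mean():+.4f}, "
          f"median WPE {np.median(wpe):+.4f}")
```

With mean_loading at zero, this reproduces the pattern of Figure 1 below; raising it above zero reproduces Figure 2.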

Figure 1 shows my model output for the first scenario.

Figure 1

Model output for "no net bias" conditions.  Kerry's share of the vote is plotted on the x axis (with "high Republican" precincts on the left, where Kerry's vote share is small, and "high Democrat" precincts on the right, where Kerry's vote share is large). The Y axis represents the magnitude of the bias as determined by Mean WPE (white line); median WPE (red line) and bias index (green line).  Negative WPEs and positive bias indices represent an over-estimate of the Kerry vote.

Along the bottom of the plot is Kerry's share of the counted vote, ranging from highly Republican precincts on the left (only 10% Kerry) to highly Democratic precincts on the right (90% Kerry). The white line represents the mean WPE for each category of precinct. You can easily see that where the support for each candidate is even, the WPE is zero. Even though some of the precincts are very biased, some in Bush's favour, some in Kerry's, the average mis-estimate (WPE) is zero. However, where Bush's support is strong, the WPE goes negative, indicating an over-estimate of Kerry's vote. And where Kerry's vote is strong, the WPE goes positive, indicating an over-estimate of Bush's vote. Remember, in each of these precincts, even the extremely partisan precincts, the bias is as likely to be in Bush's favour as Kerry's - the dice are randomly loaded. But the effect on strongly Republican precincts is very different from the effect on strongly Democratic precincts.

The red line represents the median WPE for each category of precinct. The median is unaffected by partisanship. From this, math geeks can deduce that the distribution of WPEs is skewed. In English, this means that as many precincts over-estimated a particular candidate's vote as under-estimated it, but that in highly partisan precincts the over-estimates of the minority candidate tended to be much larger than the under-estimates of the majority candidate.

Computational modelling is fun.

The green line shows the bias estimate made by my fancy function. To my intense pleasure, it shows that the net bias is zero (which it is), and that the estimate is unaffected by precinct partisanship.

The model output for scenario 2 is illustrated below.  In this scenario I jacked up the "response rate" of the "Kerry voters" relative to the "Bush voters".  There were still plenty of precincts where the net bias was zero, and even plenty where the net bias was in "Bush's" favour (there are 9 million of them after all) but the average "bias" is towards Kerry voters, so that the average WPE is no longer zero, but negative, indicating a net over-estimate of the Kerry vote, as in 2004.

Figure 2

Model output for "Net Kerry bias" condition.  The axes and legends are as for Figure 1

As you can see, this plot looks very different. Again the white line represents the mean WPE for each category.  This time, all but the most strongly Democratic precincts have a negative WPE, and the WPEs are most negative for precincts with 30% Democratic support.  However, the red line representing the medians in this plot is also curved. It only touches the white line once, at around the 45% mark, so it is only here that the distribution of the WPEs is symmetrical, and both are strongly negative.  In the highly Republican "precincts" as in the E-M table, both mean and median are negative, the median being less negative than the mean.  For more Democratic precincts, the distribution is skewed in the opposite direction - the mean being closer to zero than the median.

Again the green line represents my fancy function, which for abstruse mathematical reasons has the opposite sign to the WPE - a positive value indicates an over-estimate of the Democratic vote. Once again to my intense pleasure, the green line correctly indicates an overestimate of the Kerry vote, and also correctly indicates that the bias it represents was not different in precincts of different levels of partisanship, whatever misleading tale is being told by the mean and median WPEs.

So what does all this mean?

Firstly, it means that the WPE is a terrible way to assess where either polling bias or electoral fraud occurred, as its value depends on the very variable we want to examine: the partisanship of the precinct or the state. The only way of finding out whether differential non-response rates (aka "shy Bush voters") are responsible for the WPE is to find out to what extent the degree to which the random sampling protocol was compromised (which, fortunately for us, tends to exacerbate any "shy voter" tendencies and thus bring them statistically out of the woodwork) is correlated with a measure of bias that is not confounded by precinct partisanship, such as that given by my "fancy function". I hope that the people at Edison-Mitofsky will read this great blogospheric effort, do the math, and tell us the answer. If a large proportion of the variance in "bias" remains unexplained, or if there are suspicious-looking outliers in the data, then maybe the "vote-corruption" hypothesis will retain some legs (though they are looking very wobbly to my eyes right now).

Secondly, the really unexpected (to me) finding of this computational experiment is that even where the net signed WPE is zero, it cannot be inferred that no bias occurred, as randomly distributed precinct-specific biases may nonetheless favour one candidate as frequently, and as greatly, as the other. So the mere fact that a polling firm gets the right answer in future will not tell us whether or not bias (or even fraud) occurred. However, bias (and fraud) will leave a tell-tale fingerprint when WPEs are plotted against precinct partisanship. If the mean and/or median WPEs start wandering off into that S shape, we will know that something is systematically skewing the numbers. Unfortunately we won't know whether the fingerprints belong to a pollster who looks more appealing to one kind of voter than the other, or to a hacker of the vote. The prints are the same.

Finally, the results of this experiment suggest that the pattern of mean and median WPEs by precinct partisanship reported by E-M may not be, as USCV claim, an indication of bias concentrated in Bush strongholds, but rather the pattern that might be expected if over-polling of Kerry voters (or under-polling of Bush voters) was fairly uniform and widespread. In Figure 3, below, I've plotted the Edison-Mitofsky data from the table above. If you compare Figure 2 with Figure 3, there is a bit of a family likeness, particularly in the way the median diverges from the mean in the high Republican category.

Figure 3

E-M median and mean WPE values for five categories of precinct. Trend-lines are the best quadratic fit.

If so, then it would seem that the conclusion drawn in the USCV report - that the observed pattern requires "implausible" patterns of non-response, and thus leaves the "Bush strongholds have more vote-count corruption" hypothesis as "more consistent with the data" - is not justified. The pattern is instead consistent with the E-M hypothesis of widespread "reluctant Bush responders", whether or not the "political company" was "mixed".

True geeks can download a paper giving all the hairy equations by clicking here.

Cross posted on New European Times

Update 27th April 2005: the link to the geeky paper now goes to an updated version; the old one is archived here

Originally posted to Febble on Tue Apr 19, 2005 at 05:10 AM PDT.


  •  Please someone tell me (4.00)
    if they can't see the figures!
  •  original and insightful work (none)
    worthy of publication.

    Skip the math for the moment in favor of the 'meta'. What's happened here is that inquiring minds (including USCV, to their credit) have published on the internets a critique of a US election. Their mathematical models and concerns were available for peer review, and Febble (and Rick Brady from Stones Cry Out and Mark Blumenthal from Mystery Pollster) have added their insight. Daily Kos' peer review had a major role to play as well, as this is where the work first appeared (and got its first critique).

    Congratulations, all!! Well done!

    What resulted is a collaboration across party lines, political viewpoints and an ocean or two. Febble's work has been reviewed by others (including the USCV crew) and hopefully will be further reviewed at the upcoming American Association for Public Opinion Research meeting in mid-May.

    It isn't a question of liking or disliking Febble's conclusions, it's a question of whether the work stands up to peer review. Further, since Febble (and the others linked) don't (yet) have a PhD, the idea that non-PhDs have nothing to contribute is sheer nonsense and blown away by this original contribution to public opinion research.

    Read it slowly and check out the other links if/when Brady and Blumenthal comment on this. It's worth understanding that there is more than one explanation for the exit poll discrepancy, and for future improvement of the exit poll tool, Febble's 'artefact' may prove very useful.

    "Politics is the art of looking for trouble, finding it everywhere, diagnosing it incorrectly and applying the wrong remedies." - Groucho Marx

    by Greg Dworkin on Tue Apr 19, 2005 at 05:26:41 AM PDT

    •  Funny, I seem (none)
      to recall a lot of questioning on this site, if not downright ridicule, of anyone with a mathematical or statistical background who offered up analyses of the exit polls but who wasn't an exit poll "professional" or didn't have a strong background in exit polling.

      I could be mistaken, but weren't you one of those posters who were dissing the analyzers who lacked a strong "pollster" background but were questioning the official interpretation of the final polls?

      And now you've breathlessly assessed this guy's work as "ready for publication" even though this guy, who apparently enjoys modelling, does not even have a PhD in statistics, much less experience in polling.

      Not that I'm challenging his work--I'm not.  Just what appears to be a double standard.  It's just odd that since he supports the conclusion you've been supporting all along, you tout his work so enthusiastically while previously dismissing non-pollster analysts [unless I've got you mixed up with some other poster, maybe DHinMI?].

      Bush's presidency is now inextricably yoked to the policies of aggression and subjugation. Mike Whitney

      by dfarrah on Tue Apr 19, 2005 at 06:11:23 AM PDT

      [ Parent ]

      •  you are mistaken (none)
        in fact, I've been rather consistent in my posting views, and these same posters have been cited by me all along.

        "Politics is the art of looking for trouble, finding it everywhere, diagnosing it incorrectly and applying the wrong remedies." - Groucho Marx

        by Greg Dworkin on Tue Apr 19, 2005 at 06:21:17 AM PDT

        [ Parent ]

      •  "This Guy"... (none)
        ...isn't a guy.
      •  here is an exchange that may (none)
        clarify: link.

        What I took exception to is USCV saying that only PhDs can criticize their paper.

        "Politics is the art of looking for trouble, finding it everywhere, diagnosing it incorrectly and applying the wrong remedies." - Groucho Marx

        by Greg Dworkin on Tue Apr 19, 2005 at 06:28:00 AM PDT

        [ Parent ]

        •  Man (none)
          Only people with a PhD in statistics can criticize their work?  Sheesh, talk about arrogance.  Oh, and a gross disregard for the scientific method as well.
          •  well, hang on... (none)
            In case anyone is still following this thread --

            I don't know whether Kathy Dopp ever said that "only people with a PhD in statistics can criticize [our] work," but if she did, I'm morally certain that she didn't mean it.  Kathy doesn't have a PhD herself, but that isn't the issue.

            Without revealing any confidences, I can say the obvious: USCV isn't convinced by Febble's arguments, and Febble isn't convinced by USCV's arguments.  And so it goes.  This is a tough issue with emotional resonance for most of us.

            •  here's the quote (none)
              A Polling Interlude

              Kathy Dopp, the President of US Count Votes (USCV), issued a public challenge "for any PhD level credentialled (sic) statistician who is affiliated with any university in America to find any statements in our 'Response to Edison/Mitofsky Report' that they believe are incorrect and publicly refute it."

              What are you morally certain of? The quote (via Mystery Pollster) was originally at Olbermann Watch, and I have seen it repeated elsewhere:


              USCountVotes would love to issue a challenge for any PhD level credentialled statistician who is affiliated with any university in America to find any statements in our "Response to Edison/Mitofsky Report" that they believe are incorrect and publicly refute it. We will be happy to respond when and if such a person can be found. Some in our group are planning to provide an addendum to our study that points up two additional substantial evidence that are provided by the E/M report that refutes their theory that "Kerry voters more likely to answer exit polls". If there is a statistician in America who will risk his academic reputation by publicly speaking out against our factual findings on the E/M report then we are very pleased to begin a public academic debate with that person."

              "Politics is the art of looking for trouble, finding it everywhere, diagnosing it incorrectly and applying the wrong remedies." - Groucho Marx

              by Greg Dworkin on Sun Apr 24, 2005 at 12:18:53 PM PDT

              [ Parent ]

            •  as to the second part (none)
              I can say the obvious: USCV isn't convinced by Febble's arguments, and Febble isn't convinced by USCV's arguments.  And so it goes.

              That's what peer review is for. Lay it out and let the review process run its course. But there are talented people without PhD's (including Kathy, as you point out) who should not be excluded from the discussion.

              "Politics is the art of looking for trouble, finding it everywhere, diagnosing it incorrectly and applying the wrong remedies." - Groucho Marx

              by Greg Dworkin on Sun Apr 24, 2005 at 12:23:37 PM PDT

              [ Parent ]

      •  BTW... (none)
        ...you seem to be overlooking the fact that Febble may not herself have that background in exit polling, but she worked in collaboration with professional pollsters (like Blumenthal) and at least one person (RonK) who's been involved in statewide recounts.  And in collaboration, they've detected flaws in the analysis of the statistics folks (working without anyone with past experience with election data) who've issued the other reports.

        Febble is, iirc, doing grad work in statistics.  Any solid analysis would have to have a statistical component.  But this particular analysis is, to my knowledge, the only one that had significant input from folks familiar with election polling and analysis of past election returns.  And significantly, this analysis, written by someone who has herself expressed the belief that there was probably a degree of fraud in the last election, discounts the exit polls as evidence to support the claim.

      •  Appeal to Authority (none)
        I believe the discussion was aimed at trying to avoid the fallacy argumentum ad verecundiam, or Appeal to Authority.  

        You should never point to someone's credentials as evidence for the veracity of their claims, unless they are an authority in that specific field.  Neither USCV nor febble are authorities in the field of exit polling (check their published work - easy enough).  

        To appeal to their authority in arguments for or against their claims is fallacious.  In fact, the ONLY true experts in this field have either spoken (Edison-Mitofsky) or remained relatively silent on the matter (Merkle and Edelman, for example).

        Therefore, it is not a logical fallacy to say, "I don't think the exit polls point to fraud, because Edison-Mitofsky, the experts in this field, say they don't suggest fraud."  It may not be wise (skepticism is healthy), but it is not fallacious.  

        It is a logical fallacy to say, "I think the exit polls point to fraud, because USCV's team of mathematicians and statisticians says they point to fraud."  

        And it is a logical fallacy to say, "I don't think the exit polls point to fraud, because a Daily KOS diarist febble, a former USCV contributor and PhD candidate, says they don't suggest fraud."  

        Stick to the merits of the argument and validity of the science because in this argument between non-exit poll experts, that is ALL that matters.

  •  Invisible figures (none)
    (Reminds me of that missing exit poll raw data)
  •  Question (none)
    In all of these analyses, who's considered whether absentee ballots may have skewed the results more than in the past?  Here's what I'm wondering:

    If the exit polls of people who vote on election day typically skew Democratic, some of that skew will be compensated for in the final results if the AV vote trends Democratic, depending on how the exit pollsters model the expected AV vote.  However, if the AV vote doesn't lean Dem, that makes the sample bias even less representative of the overall results.  Since seniors, for the first time in a close election since before the New Deal, appear to have leaned quite heavily toward Bush, that would screw up the exit polls if (as seems to be the case) they already tend to lean Democratic prior to being adjusted.  If, as I suspect is true, AV voters comprised a larger part of the overall electorate than in previous elections, that would further exacerbate the problem.

    Can anyone tell me how the AV problem is being addressed?

    •  I'm not sure... (none)
      The absentee question has been looked at before, but I'm not sure how much effect this would have.  I know that for some states, E-M randomly called absentee voters and included them in their poll.
      •  Well (none)
        That opens up a whole other mess of methodological issues.  

        I "know" anecdotally and from looking at past returns in specific jurisdictions that AV voting has increased greatly in Michigan over the last ten years, and I'm pretty sure I've read that, to varrying degrees, the same is true nationally.  If the "exit polls" of people who vote on election day are getting less reliable compared to the total results, that would seem to me a prime area of investigation in trying to figure out why.

        •  according to Richard Morin (WaPo polling director) (none)
          writing on 11/20:

          That final weighting also is central to the controversy over real or imagined electoral irregularities. It's true that exit poll results available on CNN and other network Web sites late into election night showed Kerry with that now-infamous three percentage point lead, an advantage based exclusively on exit polling and a pre-election survey of absentee voters. When those survey results were statistically adjusted in the wee hours of Wednesday to reflect the actual vote, Bush suddenly -- and seemingly mysteriously -- jumped into the lead nationally and in several key, closely contested states.

          "Politics is the art of looking for trouble, finding it everywhere, diagnosing it incorrectly and applying the wrong remedies." - Groucho Marx

          by Greg Dworkin on Tue May 03, 2005 at 05:42:15 PM PDT

          [ Parent ]

  •  problems (none)
    ok, here is my problem with your analysis:
    you cite 1996 as having equal error in the exit polls, but didn't a different group do the exit polling that year?

    the new consortium started doing exit polling in 2002 for democratic primaries (with accurate results)

    the other problem i have is

     -- we can delve into exit polls and stats to prove any point but what i want to know is--

    how come a consortium of news organization exit polls were wrong

     .. how come internal exit polls of John Kerry were wrong ..

    how come internal REPUBLICAN exit polls were wrong..

    and in fact the ONLY correct exit polling was done by Karl Rove.

    so why hasn't anyone asked about Karl Rove's methods, since he seems to be the only one that got it right?

    •  Dems and Repubs Didn't Have... (none)
      ...internal exit polls.  They tracked turnout in targeted pct's, but they didn't have exit polls.  Their reactions were to the release of the raw numbers that we're responding to.
      •  why did this not happen? (none)
        I can't fathom why the parties -- Dems in particular -- didn't invest in their own exit polls for the election.  They certainly had ample warning about the Diebold machines (and that funny-smelling outcome in GA '02), so I'd have thought they'd have wanted some backup method of testing the vote count.  I recognize that resources might have been better placed elsewhere, but I'd have put a little effort in this direction.

        Which is not to rank myself among the conspiracists.  I always thought Bush could score as high as about 51%, and, though it galls me to think he could perform so close to the top of his range, I have no reason to think fraud is an automatic suspicion.  (Though part of me will always wonder why Bush and Rove wasted time running up the score in Wyoming et al.; it doesn't seem like something you'd do with OH and FL truly unsettled.)

        •  Cost is Prohibitive (none)
          The problem with American media exit polls is that they're done on the cheap.  To do a good exit poll would probably cost about 10 times what's spent right now.  

          Besides, in the event of close finishes they'll do some fairly sophisticated sampling to determine whether it makes sense to pursue a recount.  The only place where the totals were close and a change in results would have helped Kerry win was in Ohio.  While there was clearly voter intimidation in Ohio, there's still little or no evidence that's been accepted by anyone with a background in dealing with election data to suggest there was the type of massive fraud necessary to pull off a 118,000 vote margin of victory.  

  •  Simulation is a fearsome tool (none)
    Very nice work!

    In the original USCV report, they said that exit polling correctly modeled the elections for senators and congressmen. Have you modelled any state elections and used the congressional race data for comparison?

    Furthermore, don't the curves look the same whether the error is caused by incorrect polling or by incorrect vote-tallying? I'm not sure that this analysis indicates why there is a skew. But it does appear that the skew is uniform across the precincts.

    •  My view is that the curves (none)
      will look the same regardless of whether the problem is in the poll or the count.  Someone is trying to convince me otherwise at present, and I'll keep you informed.

      I'm fairly sure I'm right though. ;)

    •  True - poll bias or the count (none)
      I don't think febble's work disproves fraud.  USCV argues that a uniform differential response rate cannot plausibly explain the pattern in WPE.  Febble demonstrates that a uniform level of bias across all precincts could explain the general pattern in the table.  

      Now, is it possible that fraud in Bush strongholds and sampling bias in Dem strongholds could explain the pattern?  I suppose, sure.  But the argument was that it was implausible that differential response could explain the pattern.  That argument is toast, unless of course they are now saying that fraud can somehow be distinguished from sampling bias, which I cannot see how that can be...  Bias is bias - either in the count or in the poll.  No?

      First, let USCV revise their paper to remove statements regarding the implausibility that a uniform level of sampling bias could account for the pattern in the table on page 36; then let them move the goalposts.

      --------------------------------------------------
      However, has anyone wondered why USCV puts so much stock in the accuracy of the response rates?

      Notice the logic of the USCV paper.  Just before introducing the rBrmpc hypothesis, they "refute" the rBr hypothesis by pointing to the response rates by precinct partisanship.  They say that the bivariate relationship "bears directly on the plausibility of the report's central hypothesis, and it goes in the wrong direction."  Given their "faith" in the validity of these response rates, they introduce the rBrmpc hypothesis and proceed to knock it down by looking at WPE by precinct partisanship.  Without having established the validity of the response rate data, they reject the rBrmpc hypothesis, saying that bias should be minimized in the strong Bush precincts.

      This is where the math and survey methodology folks diverge.  Survey methodologists see more than data; they also see the humans who collect and report those data.  Given what we know about the training of these interviewers, and the fact that they seem to be mostly young college kids, is it not reasonable to assume that in high-Bush precincts there could be a tendency to fudge the sampling interval and oversample Kerry supporters (college kids attract college kids, no?), yielding both a high response rate and high bias - suggesting support for rBr.

      At this point, I'm only speculating, but it seems that we will be drawn into an endless tit-for-tat argument with USCV if we cannot get them to concede that if rBr is true, the response rates themselves could be suspect.  Therefore they cannot logically refute rBr by relying on response rates which, on a fair reading of the rBr hypothesis, may themselves be compromised.

      --------------------------------------------------
      To the first point you raise:

      In the original USCV report, they said that exit polling correctly modeled the elections for senators and congressmen.
      I'm not sure that they really say that exactly, but there is a discussion about the Senate/Presidential disparity in WPE on page 16.  

      The USCV study concludes:

      There is no logic to account for non-responders or missed voters when discussing the difference in the accuracy of the results for the Senate versus the presidential races in the same exit poll.

      Ah...  But they are assuming that WPE is a correct measure of "difference in the accuracy", are they not?  Febble demonstrates that WPE is hopelessly confounded, not necessarily by the partisanship of the precinct, but by the partisanship of the vote in the specific polled race.

      I will collate important aspects of two comments I left over at Mystery Pollster.

      I know this is going to get jumbled, but try to follow the data from Survey USA.  The last number to the right of each state is the "swing" between the margins Bush-Kerry (Pres race) and Rep-Dem (Senate race).  

      State            Pres (B-K)   Senate (Rep-Dem)   Swing
      Colorado             3%           -4%               7%
      Arkansas             4%           -6%              10%
      Pennsylvania        -1%           18%              19%
      Washington          -4%            6%              10%
      California         -11%          -21%              10%
      Florida              1%            0%               1%
      Ohio                 2%           28%              26%
      Missouri             5%           19%              14%
      North Carolina       8%            5%               3%
      South Carolina      11%           11%               0%
      Kentucky            21%            9%              12%
      Oklahoma            30%            9%              21%
      Georgia             12%           16%               4%
      Indiana             19%          -27%              46%
      Illinois           -12%          -39%              27%

      These data clearly suggest that there was A LOT of ticket splitting going on in the 2004 elections.

      Now consider Febble's analysis (and now Appendix B of the USCV study), which demonstrates that WPE is not a pure measure of bias and is confounded by the partisanship of the vote. Since tickets really were split, as strongly suggested by the pre-election polling, it is not likely that the Senate WPE would match the presidential WPE. To know for sure, E-M would have to apply Febble's variable to the precinct-level WPEs for both the Senate race and the presidential race.

      But even then, I don't think it would be a useful measure for comparison. In states where ticket splitting was very common (I suggest all the states I list above except for Florida, the Carolinas, and Georgia), there is too much noise to draw any real conclusions.

      Assume that Kerry voters are oversampled at a ratio of 1.5:1. How can we know if oversampled Kerry voters split their tickets?  How do we know if a Democrat who voted for Bush did not split her/his ticket?  That is, "bias" is measured by oversampling of supporters of one candidate over another. Just because there was "bias" in the presidential exit poll, doesn't mean that there will also be bias in the Senate exit poll.

      My point is that it seems reasonable that for a consistent and geographically dispersed level of Kerry bias in the presidential exit poll, there would not be an equally consistent and dispersed Democratic bias in Senate exit polls, even if WPE was a perfect measure of bias.
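      To put rough numbers on that intuition, here is a toy simulation (a sketch with invented response weights and invented ticket-splitting rates, assuming two-candidate races; all names and parameters are illustrative):

      ```python
      import numpy as np

      rng = np.random.default_rng(1)

      def pres_and_senate_wpe(p_dem_sen_if_kerry, p_dem_sen_if_bush,
                              kerry_weight=1.5, n_voters=1_000_000):
          """Toy model: oversample Kerry voters in a 50-50 presidential
          race, and see how much of the presidential-poll bias leaks
          into the Senate WPE."""
          kerry = rng.random(n_voters) < 0.5            # each voter's pres vote
          p_dem = np.where(kerry, p_dem_sen_if_kerry, p_dem_sen_if_bush)
          dem_senate = rng.random(n_voters) < p_dem     # each voter's Senate vote
          weight = np.where(kerry, kerry_weight, 1.0)   # 1.5:1 oversampling of Kerry voters
          wpe_pres = 2 * (kerry.mean() - np.average(kerry, weights=weight))
          wpe_sen = 2 * (dem_senate.mean() - np.average(dem_senate, weights=weight))
          return wpe_pres, wpe_sen

      # straight tickets: the Senate vote tracks the presidential vote
      print(pres_and_senate_wpe(0.95, 0.05))   # Senate WPE nearly as big as presidential
      # heavy ticket-splitting: the Senate vote barely depends on the pres vote
      print(pres_and_senate_wpe(0.55, 0.45))   # Senate WPE shrinks toward zero
      ```

      On these made-up numbers, the same 1.5:1 over-sampling of Kerry voters produces a Senate WPE nearly as large as the presidential WPE where tickets are straight, and almost none where tickets are heavily split - which is all my point requires.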

      I've just started thinking about this issue as I suspect if USCV has anywhere left to turn, it is to this argument.  

      If anyone can add to this discussion, please do.  

      •  the congressional races are more neutral (none)
        I don't think the amount of "swing" matters - we'd expect congressional races to have their own properties.

        If there's a split ticket then the person polled will tell the pollster that they voted one way for president and another way for congress. It's reasonable to find some congressional races that were not controversial and divided along party lines. Then the data from this district can be used as a test case to determine whether the exit polls for the congressional race predicted the % of people voting for each side accurately.

        It's interesting, just wish I had time to play around with this data myself.

      •  A month later, and I critique (none)
        I agree that Febble's excellent analysis goes a long way toward explaining how the rBr hypothesis could produce the apparent anomalies in USCV's paper. However, I'm not sure your critique of USCV's "President v. Senate" argument holds water.

        In your table above, there were only three states where the Senate races were within the Presidential WPE: Colorado, N. Carolina, and Florida. Only in Colorado was there enough ticket-splitting to even remotely deserve the label "significant."

        Just because there was "bias" in the presidential exit poll, doesn't mean that there will also be bias in the Senate exit poll.

        True. But does it make sense that there would be near-nationwide bias in the Presidential exit poll, but no bias whatsoever in any Senate exit poll, even in the above states, where there was minimal ticket-splitting?

        Again, I agree that Febble's analysis explains the most significant question USCV raised - the apparently anomalous correlation between precinct partisanship and WPE. That correlation seems to defy common sense, but a lot of things in statistics defy common sense, and Febble does a wonderful job of explaining what's really going on: the anomaly is caused not so much by the rBr bias itself as by random variations in that bias.

        And frankly, she blows a big hole in USCV's case. The "President v. Senate" and the "paper ballots v. other voting technologies" anomalies are rather weaker claims to fraud. But I don't think they've quite been put to rest yet, and you're just cavalierly dismissing the former with handwaving.

        "Did I say 500 tons of sarin and 25,000 liters of anthrax? I meant 'weapons of mass destruction-related program activities.'"

        by Mathwiz on Thu May 19, 2005 at 03:23:34 PM PDT

        [ Parent ]
