don't trust the polls

In the middle of writing my final dissertation draft, I pause to present a few nitpicky points on polling.
"Margin of error". The margin of error on a poll is a rather tricky number. It rests on the assumption of Poisson statistics. Put very simply, if you run the poll over and over again (within a short period of time), 68% of the time the polls will present results that agree within the margin of error.

That means, of course, that 32% of the time the result will be off by more than the margin of error. Put another way, the chance that the poll is wrong by more than the margin of error is roughly one-in-three. And that's the best case, assuming the statistics really are Gaussian; chances are the true probability is even higher.
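To see where that one-in-three figure comes from, here is a minimal simulation sketch. The 1000-person poll and the 50-50 race are hypothetical, and it uses the textbook binomial standard error (a slightly tighter number than the sqrt(M)/N estimate described below):

```python
import random

N = 1000        # respondents per poll (hypothetical)
p_true = 0.5    # assumed true support: a 50-50 race
trials = 10_000

hits = 0
for _ in range(trials):
    yeses = sum(random.random() < p_true for _ in range(N))
    p_hat = yeses / N
    sigma = (p_hat * (1 - p_hat) / N) ** 0.5  # textbook binomial standard error
    if abs(p_hat - p_true) <= sigma:
        hits += 1

print(f"within one margin of error: {hits / trials:.0%}")  # ~68%
```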

If you want to calculate the margin of error yourself, it's pretty simple. If you have a survey with N respondents, and your candidate gets M "yeses", then the margin of error is sqrt(M)/N. For a question that people split about 50-50 on, in a 1000-person poll, that means the error is about ±2%. Pollsters usually report sqrt(N) divided by N, in a (rare) fit of overestimating their errors, which is where you get the famous 3% error you see a lot.
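Putting numbers to that recipe (the 1000-person, 50-50 poll here is hypothetical):

```python
def margin_of_error(yeses: int, n: int) -> float:
    """Poisson-style error on the reported fraction: sqrt(M) / N."""
    return yeses ** 0.5 / n

N, M = 1000, 500  # hypothetical poll: 1000 respondents, 500 "yeses"
print(f"sqrt(M)/N = {margin_of_error(M, N):.1%}")  # ~2.2%, the ~±2% above
print(f"sqrt(N)/N = {N ** 0.5 / N:.1%}")           # ~3.2%, the famous 3%
```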

Looking at a large sample of polls for Bush, you can see that ±3% is a pretty good estimate of the margin of error. Most of the points lie within that band.

But things get more complicated. Look at that plot again. You'll find that, e.g., Gallup systematically overestimates Bush's support, while Pew systematically underestimates it. Surveys have different methodologies: they may call at different times of day; they may ask different questions.

They also may attempt to "correct" their numbers. For example, a pollster may discover that their telephone poll reached a much higher number of Republicans than voter-identification figures would suggest, so they might try to "weight" their poll. This is an extremely dangerous thing to do, but I imagine it is a significant source of the systematic differences between polling organizations.
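As a sketch of what such weighting amounts to (every count and party share below is invented for illustration), the correction is just a ratio of the assumed-true party shares to the sampled ones:

```python
# Hypothetical raw poll counts, keyed by (party, candidate).
sample = {
    ("R", "Bush"): 320, ("R", "Kerry"): 60,
    ("D", "Bush"): 70,  ("D", "Kerry"): 280,
    ("I", "Bush"): 130, ("I", "Kerry"): 140,
}
# Party shares the pollster *believes* are correct (also invented here).
target_share = {"R": 0.35, "D": 0.35, "I": 0.30}

total = sum(sample.values())
sampled_share = {party: sum(n for (p, _), n in sample.items() if p == party) / total
                 for party in target_share}
# Each respondent is weighted by how under- or over-represented their party is.
weight = {party: target_share[party] / sampled_share[party] for party in target_share}

for cand in ("Bush", "Kerry"):
    raw = sum(n for (_, c), n in sample.items() if c == cand) / total
    weighted = sum(n * weight[p] for (p, c), n in sample.items() if c == cand) / total
    print(f"{cand}: raw {raw:.1%} -> weighted {weighted:.1%}")
```

Note how a percent-level shift in the headline number rides entirely on the assumed party shares; get those wrong and you have baked a systematic error directly into the poll.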

There are also systematic errors. As Matt mentioned, what about cellphone-only users? I googled and found estimates that 3%-7% of the population has only a cellphone. That is a much higher number than I would have expected.

Does it matter? Only if cellphone-only users are significantly different from the rest of the population. If, e.g., cellphone-only users go for Kerry 60-40 versus a nationwide 50-50, missing them leads to an additional systematic error of at most 0.7%. This is much less than the margin of error, but note that this error never goes away unless you change how you do the survey. Even if you survey 100,000 people, your result will always be between 0.3% and 0.7% off.
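Here is that back-of-the-envelope arithmetic, using the 3%-7% cellphone-only share and the assumed 60-40 split from above:

```python
def coverage_bias(cell_share: float, p_cell: float, p_overall: float) -> float:
    """Landline-only estimate minus the true value, when cell-only voters are missed."""
    p_landline = (p_overall - cell_share * p_cell) / (1 - cell_share)
    return p_landline - p_overall

for share in (0.03, 0.07):
    print(f"{share:.0%} cell-only: {coverage_bias(share, 0.60, 0.50):+.2%}")
# -> roughly -0.3% and -0.75%: the 0.3-0.7% bias that no sample size can fix
```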

There are many other possible sources of systematic error. In the end, you are not studying how the population feels. You are studying how the population with access to a telephone at 6 pm feels, and then attempting to "correct" the answer.

Taking both the statistical margin of error (3%) and the scatter seen between different pollsters on those Bush polls (3%), and adding them in quadrature, you get a total error of just over 4% on any given poll.
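"Adding in quadrature" just means summing the squares and taking the square root, which is how independent errors combine:

```python
statistical = 0.03  # per-poll margin of error
systematic = 0.03   # scatter between polling organizations
total = (statistical ** 2 + systematic ** 2) ** 0.5
print(f"{total:.1%}")  # 4.2%: the "just over 4%"
```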

But again, this assumes that pollsters' systematic errors average out; it's entirely possible that all the pollsters are doing the same silly things. It also assumes that there are enough polls that the distribution of the systematics is Gaussian; if that assumption is wrong, the error is probably even worse.

The take-home message: the three percent number is truly "best case". There is nothing better. You can increase your sample size in a particular poll, but you will then run into systematic errors of 3% that don't average out. The "true" error is more like 4% on a particular poll, which means, yes, that a formally "beyond the margin of error" split of 3.5% is still meaningless.

The second take-home message: a one or two percent difference between candidates -- even if you see it in a dozen polls from different groups -- is meaningless. Your candidate is not winning. Your candidate is not losing. All you know is that you can't tell who is winning or who is losing. THERE IS NO MEANING TO IT! ARGH! STOP IT ALL OF YOU NOW AASDDAWDEGEWAWQW INTERNET NERD EXPLOSION

Thank you. Don't forget to vote! It's the only poll that counts.

[*] Caveat on averaging different polls: if you truly believe the Central Limit Theorem applies, you can in principle average a whole bunch of polls -- from different pollsters (to average over the systematics) -- and get a smaller error. If you have ten polls from a range of polling organizations and average them, you can just about call a 2% difference "real" -- but I wouldn't.
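For the record, that averaging claim is just the 1/sqrt(n) rule, under the (big) assumption that the ten polls' errors really are independent:

```python
per_poll = 0.042  # total error on a single poll, from the quadrature above
n_polls = 10
averaged = per_poll / n_polls ** 0.5
print(f"{averaged:.1%}")  # ~1.3%, so a 2% gap is only barely resolvable
```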

The assumption that the systematics average out is probably wrong, since pollsters generally listen and talk to each other and make similar assumptions -- assumptions that shift over time, so don't think you can just tack on a clever "adjustment factor" of your own.

Finally, you may think you "know" which pollster got it right, and so can ignore the systematics and go with the plain ol' 3% error instead of the >4%. But bear in mind that pollsters are making incredibly fine adjustments here; I don't believe "big" things like political affiliation (in the case of roughly "neutral" Gallup or Pew) can explain those smaller deviations. (Pay-for-play polls commissioned by known wings of a party, for media consumption only, probably do fudge; you must always rely on the pollster acting in "good faith".)
