The difference among these polls (yes, even though they were within the margin of error) is in their voter turnout models. These models are proprietary.
If voter turnout is very high in 2004 (and those voters decide to participate very late), a number of opinion research firms will have mud on their faces.
Once an opinion research firm comes up with a model there are other challenges to getting a decent sample:
1. Refusal [hang up] rates are reportedly rising. Part of this is caused by commercial phone solicitations. An additional challenge is caller ID - the phone bank cannot in any way indicate who the poll is for; otherwise, respondents tend to answer the way they think you want them to. Interview scripts should contain stock phrases to deflect "who are you doing this for?" questions.
Converting "refusals" with a callback can be done, but it is very difficult.
Substitution [replacing refusals] in a sample injects additional error.
Weighting [done when the demographic targets of a sample have not been achieved] also injects additional error.
2. Commercial phone match databases are atrocious. I've seen match rates to voter files from 45% to 60%. I've also seen voter files that were heavily massaged to 75% match rates - this is labor intensive and still misses the unlisted numbers and cell phone only households.
Another technique is "plus-one dialing": the numbers in an exchange are randomly selected, then the final digit is increased by one. Unlisted numbers are then included in the sample, but you also get business numbers, fax lines, and people who are not registered to vote. This technique entails a different sampling model (and additional screening of respondents).
3. In many places "voter history" is part of the voter file, so you can at least check whether a respondent is a frequent or likely voter. People tend to be inaccurate when telling interviewers about their voting history (one of the methods used to screen for "likely voters") - they are reluctant to admit that they don't do something they feel they should be doing.
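The plus-one technique described in item 2 can be sketched in a few lines. This is a minimal illustration with made-up phone numbers, not production sampling code:

```python
import random

def plus_one_sample(listed_numbers, k, seed=None):
    """Draw k listed numbers at random, then bump the final digit
    by one (wrapping 9 -> 0) so unlisted numbers can enter the sample."""
    rng = random.Random(seed)
    sampled = []
    for num in rng.sample(listed_numbers, k):
        bumped = (int(num[-1]) + 1) % 10
        sampled.append(num[:-1] + str(bumped))
    return sampled

# Hypothetical directory-listed numbers in one exchange
listed = ["555-0142", "555-0199", "555-0173", "555-0160"]
print(plus_one_sample(listed, 2, seed=1))
```

The dialed numbers still need screening for business lines, fax machines, and unregistered respondents, as noted above.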
The pros are good at this, but in a tight race within the margin of error you're not going to be able to predict the winner.
The campaigns are looking at crosstabulations anyway - trying to find ways to target and communicate to different demographic groups.
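A crosstabulation of that sort is just a count of responses broken out by demographic group. A toy illustration, with hypothetical respondents and categories:

```python
from collections import Counter

# Hypothetical respondent records: (age_group, candidate_choice)
responses = [
    ("18-34", "A"), ("18-34", "B"), ("35-54", "A"),
    ("35-54", "A"), ("55+", "B"), ("55+", "B"),
]

# Cross-tabulate candidate preference by age group
crosstab = Counter(responses)
for (age, choice), n in sorted(crosstab.items()):
    print(f"{age:>6} {choice}: {n}")
```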
It refers to the number of votes Al Gore defeated dubya by in 2000.
I paid $40 to commemorate the same with a brick - installed in the sidewalk next to the criminal justice center near the town square in my community. The brick states "Al Gore won in 2000 by 543,895 votes." It was originally installed outside the front entrance of the library, but a local right wing republican busybody complained and the brick was removed. I objected to the removal and, after local press coverage escalated, the brick was reinstalled in the new location.
So, I have a fond place in my heart for that number.
Getting a usable sample is harder on some days of the week than others. Certain demographic groups are less available on certain days - if a poll conducts interviews on those days it might have to substitute in the sample (because of fewer available respondents or a higher number of refusals), injecting additional error into the poll.
The formula for the 95% confidence level [margin of error] is:
1.96 x sqrt((A% x B%) / sample size)
Where A% and B% are the percentages of responses to the question.
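Plugging in, for example, a 50/50 split with 600 completed interviews (illustrative numbers, not from any actual poll) gives roughly a 4-point margin:

```python
import math

def margin_of_error(p_a: float, p_b: float, sample_size: int) -> float:
    """95% confidence margin of error for a two-way poll question.

    p_a and p_b are the response proportions (e.g. 0.48 and 0.52).
    """
    return 1.96 * math.sqrt((p_a * p_b) / sample_size)

moe = margin_of_error(0.50, 0.50, 600)
print(f"+/- {moe * 100:.1f} points")  # prints "+/- 4.0 points"
```

Note that the margin shrinks only with the square root of the sample size: quadrupling the interviews merely halves the margin of error.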
In the Friday, May 7 Kansas City Star Opinion section ["Voices", Metropolitan Edition, B6] a reader states:
I'm having a difficult time understanding the furor over the pictures of the Iraqi prisoners. As distasteful as they appear, they certainly don't represent torture. To refer to this as torture belittles and makes light of the real torture, including the torture of countless Iraqis at the hands of Saddam Hussein. Murderers and terrorists don't give you information because you ask nicely.
One can wonder if this is the start of an "astroturf" campaign.
Next I suppose we'll all be hearing, "We were only hazing them - it was just a bonding ritual."
The opinion research I've seen in my geographic area shows a disconnect between respondents' views on issues and their self-identified ideology and party affiliation.
The practical effect is even worse. These same voters don't connect the concrete ideological acts (votes, support of legislation, etc.) of their elected representatives with those representatives. The voters don't pay attention - for a variety of reasons.
The solution is education. It's labor intensive and takes the commitment of a large number of people. I am not yet confident that we have enough individuals who can or will do just that.