- Long-term trend towards rise of government and public sector research
- Increased emphasis on evidence-based policymaking
- consultation
- policy options
- evaluating results
- Greater demand for user-focused services
- scoping needs
- customer experience
- tracking satisfaction
- segmenting customers
- Greater need for efficiency savings
- prioritisation
- understanding "what works"
- Public service reform/more for less
- Co-production and behaviour change
- Rising demands for accountability and transparency
- Works across all areas of public policy
- Only 0.16% of work is political polling for the media.
- Public opinion poll: views of a representative sample of a defined population.
- Polling measures perceptions rather than truth.
- Perceptions determine public opinion — not truth.
- Perceptions are also outcomes in their own right.
- Our experiences of services determine perceptions of their quality.
- Fear often has little relationship to risk
- Fear of crime has gone up, even though crime has actually gone down.
- Worries about litter and environmental factors always rate highly; when addressed, overall worries about crime decrease.
- Polls important, but need to be used appropriately.
- How they're reported is also very important.
- Only 2/3 of people can easily convert 20% to 1/5 (i.e., a percentage to a fraction).
- Four main poll types:
- Peacetime polls (where we are now)
- Snapshot indicator of how the public says it would vote at a given moment.
- Polls don't predict what would happen; if you ask about a hypothetical election, the answers are hypothetical too.
- Better seen as a barometer -- it doesn't predict the weather, but it measures something that is useful to know when predicting the weather.
- Regular Ipsos MORI opinion polls
- Monthly political monitor
- ICM, YouGov, GfK NOP, Populus, ComRes, et al.
- Campaign polls (during the campaign itself)
- Measures what the public thinks it is going to do, but not a perfect predictor.
- 14% of voters in 2010 said they made up their minds on whether/how to vote in the final 24 hours, and another 9% in the last week.
- Interesting angle -- key marginal seats. Collaborated with Reuters.
- The Ipsos MORI "Worm" -- how opinion changes during leaders' speeches.
- Final poll (eve-of-election poll) -- includes adjustments not applied to other polls; not "pure" polling.
- Exclude not registered to vote (5%)
- Check postal voting (15%)
- Further turnout adjustments (sketched in code after this block):
- Definitely decided or might change mind
- How important the result is to them
- How certain they would be to vote if it were raining
- Whether the respondent has been contacted by campaigners
- Plus: call-backs for late swing and imputed vote of refusers.
- "Armchair" Labour voters -- say they'll vote Labour but don't turn out.
- The final poll was pretty close to the outcome at the last election.
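A minimal sketch of how this kind of turnout filtering might work, assuming hypothetical respondent records (`registered`, `certainty`, `postal` and `vote` are illustrative field names, and the certainty threshold is made up; the real adjustments also include the call-backs and imputation noted above):

```python
# Toy final-poll turnout filter (hypothetical fields and threshold).
from collections import Counter

respondents = [
    # registered, certainty to vote (1-10), postal voter, stated vote
    {"registered": True,  "certainty": 10, "postal": False, "vote": "Labour"},
    {"registered": True,  "certainty": 4,  "postal": False, "vote": "Labour"},
    {"registered": False, "certainty": 9,  "postal": False, "vote": "Conservative"},
    {"registered": True,  "certainty": 9,  "postal": True,  "vote": "Conservative"},
    {"registered": True,  "certainty": 8,  "postal": False, "vote": "Lib Dem"},
]

def likely_voters(sample, threshold=9):
    """Keep registered respondents who are near-certain to vote.
    Postal voters have, in effect, already voted, so they stay in
    regardless of the certainty score."""
    return [r for r in sample
            if r["registered"] and (r["postal"] or r["certainty"] >= threshold)]

def shares(sample):
    """Percentage share of stated vote within the given sample."""
    counts = Counter(r["vote"] for r in sample)
    total = sum(counts.values())
    return {party: round(100 * n / total, 1) for party, n in counts.items()}

print("Unadjusted:      ", shares(respondents))
print("Turnout-filtered:", shares(likely_voters(respondents)))
```

The point of the sketch is only that the headline shares change once unlikely voters are screened out, which is why the final poll is not "pure" polling.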
- Exit poll (who you voted for instead of how you'll vote)
- Sit outside election booths and ask people.
- Careful selection of polling stations
- One person counts people leaving with a counter and sends the other to select every nth person (every 10th?).
- Selected people are given a mock ballot and asked to mark it; results are taken back and interpreted.
- Tends to use the same polling stations as last time; look at the "swing", or amount of change since then. Attempt to extrapolate to the entire constituency and see whether there is enough swing to unseat the incumbent.
- Add up all the swings to see how much change there is across the system (a worked sketch follows this block).
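A rough sketch of that swing arithmetic, using invented shares for a few matched polling stations and the conventional two-party swing (the average of one party's loss and the other's gain):

```python
# Exit-poll "swing" at matched polling stations (illustrative numbers only).

def swing(prev, curr, from_party="Con", to_party="Lab"):
    """Two-party swing in % points: average of the gainer's rise and the loser's fall."""
    return ((curr[to_party] - prev[to_party]) + (prev[from_party] - curr[from_party])) / 2

# % shares at the same polling stations: last election vs. today's exit poll
stations = [
    ({"Con": 45, "Lab": 38}, {"Con": 41, "Lab": 43}),
    ({"Con": 47, "Lab": 36}, {"Con": 44, "Lab": 41}),
    ({"Con": 43, "Lab": 40}, {"Con": 40, "Lab": 44}),
]

swings = [swing(prev, curr) for prev, curr in stations]
avg_swing = sum(swings) / len(swings)
print(f"Average Con-to-Lab swing: {avg_swing:.1f} points")

# Extrapolate: the seat changes hands if the swing exceeds half the
# incumbent's previous lead (a hypothetical 7-point majority here).
incumbent_majority = 7.0
print("Seat changes hands:", avg_swing > incumbent_majority / 2)
```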
- What makes a good poll?
- Question wording is crucial.
- Be relevant to respondent
- Be easily understood
- Unambiguous in meaning
- Mean the same to client, researcher and all respondents
- Relate to the survey objectives
- Not be overly influenced by the context of the question
- Having a good sample is crucial
- Purely 'random' sampling no longer used — too difficult, expensive.
- Well-conducted quota sampling produces weighted samples with a variance similar to random samples of the same size (see the effective-sample-size sketch after this block).
- Most people don't have strong views on a lot of things; you want their voice represented as well.
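To illustrate that claim about quota samples and variance, here is a sketch using made-up age-band quotas: cell-weight the achieved sample to census targets, then compute the Kish effective sample size implied by the weights.

```python
# Weight a quota sample to population targets and check how much the
# weights inflate the variance (Kish effective sample size).
# Single-variable (age band) cell weighting, with invented numbers.

sample_counts = {"18-34": 180, "35-54": 320, "55+": 500}   # achieved interviews
population    = {"18-34": 0.28, "35-54": 0.34, "55+": 0.38}  # census proportions

n = sum(sample_counts.values())
weights = []
for band, count in sample_counts.items():
    w = population[band] * n / count   # cell weight for this age band
    weights.extend([w] * count)

# Kish effective sample size: n_eff = (sum w)^2 / sum(w^2)
n_eff = sum(weights) ** 2 / sum(w * w for w in weights)
print(f"Nominal n = {n}, effective n = {n_eff:.0f}")
```

With quotas that land reasonably close to the population profile, the weights stay modest and the effective sample size remains close to the nominal one, which is the sense in which a well-conducted quota sample behaves like a random sample of the same size.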
- Reading and reporting the polls
- However static public opinion actually is, the polls provide the media with a basis for giving the impression of flux, change and excitement. The more polls there are ... the more true this is.
- However improbable a poll finding is, the media will publish or broadcast it. The more improbable a poll's finding is, the more likely the media will give it prominence.
- Good questions to ask:
- When were the fieldwork dates?
- Was the sample representative?
- Is it a panel study, face-to-face, telephone or an online poll?
- Are the questions unbiased?
- Are differences statistically significant?
- Watch the share, not the lead.
- Shares move within the margin of error; it is this variation that produces the apparent swings in the lead, since a one-point shift from one party to another changes the lead by two points (see the sketch after this list).
- Such differences will usually not be statistically significant.
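A quick illustration of why the lead moves more than the shares: for a simple random sample of 1,000 (a reasonable stand-in for a well-conducted quota sample, per the sampling notes above), the 95% margin of error on the lead is much wider than on either share. The shares and sample size here are hypothetical.

```python
# 95% margin of error: single party share vs. the lead between two parties.
import math

n = 1000
p_a, p_b = 0.40, 0.36   # hypothetical shares for the two leading parties

# Margin of error on one share: 1.96 * sqrt(p(1-p)/n)
moe_share = 1.96 * math.sqrt(p_a * (1 - p_a) / n)

# The lead is a difference of two negatively correlated shares:
# under a multinomial model cov(p_a_hat, p_b_hat) = -p_a*p_b/n, so
# var(lead) = [p_a(1-p_a) + p_b(1-p_b) + 2*p_a*p_b] / n
moe_lead = 1.96 * math.sqrt((p_a * (1 - p_a) + p_b * (1 - p_b) + 2 * p_a * p_b) / n)

print(f"Share of {p_a:.0%}: about +/- {100 * moe_share:.1f} points")
print(f"Lead of {100 * (p_a - p_b):.0f} points: about +/- {100 * moe_lead:.1f} points")
```

With these numbers the share carries roughly a 3-point margin while the 4-point lead carries roughly a 5-point margin, so a "changed" lead is often just noise.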
- Things to watch:
- Generalising beyond the sample
- Implying change when no trend has been measured
- Emphasising the unimportant
- Highlighting statistically insignificant findings
- Quoting out of context
- Curious and spurious: the Sweet FA Prediction Model
- Weird correlation between the colours of the FA Cup winner and the government taking power that year.
- Data can be found that fits whatever prediction (a quick simulation sketch follows).
- Polling isn't designed to predict the future.
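A tiny simulation of the same point: test enough arbitrary "predictors" against a handful of past elections and a few will fit the record perfectly by chance. All numbers here are invented.

```python
# Spurious prediction models: with 1,000 coin-flip predictors and only
# 8 past elections, some predictors will match every result by luck.
import random

random.seed(1)
elections = [random.choice(["left", "right"]) for _ in range(8)]      # past results
predictors = [[random.choice(["left", "right"]) for _ in range(8)]
              for _ in range(1000)]

perfect = sum(pred == elections for pred in predictors)
print(f"Predictors matching all 8 results purely by luck: {perfect}")
# Expected count is about 1000 / 2**8, i.e. roughly 4 "perfect" models.
```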