Many survey reports are based on such poor sampling procedures that
they do not deserve to be taken seriously. This is especially true of reports
based on "focus groups," which offer human interest but are subject
to vast amounts of error. Focus groups are designed to capture spontaneous
reactions and ideas and to let researchers observe group dynamics and
organizational issues; they were never meant to measure public opinion, and
their tiny, non-random samples cannot do so. Internet surveys likewise cannot
represent the general population adequately at present, though serious
attempts are being made to compensate for their inherent sampling bias.
A related problem can be seen in the "tracking polls" conducted by
many news organizations in the run-up to a presidential election. Every
night they interview a few hundred people and ask them whom they plan
to vote for. The problem with tracking polls is that each night's results are
extremely volatile, because on a given night only 300 or 400 people will be
interviewed. This gives a margin of error of around 5 points on each
candidate's number, and since the error applies to both numbers, the
uncertainty in the gap between them is roughly double. That means, for
example, that if Kerry and Bush are less than ten points apart, we
don't know who's really leading. Since a single night's polling is perfectly
likely to jump up and down within that large margin of error, interpreting its
movement is basically impossible. Tracking polls are good for identifying
long-term trends, but not good for finding out whether things are different
today than they were yesterday. And even though many
organizations use "moving averages" when they report tracking polls
(aggregating several days' results at a time to boost the sample
size), the combined margin of error is still large enough that anything
short of a sizable swing in the result is meaningless.
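The arithmetic behind these figures can be sketched in a few lines. This is a minimal illustration, assuming a simple random sample and the standard worst-case formula for a proportion (p = 0.5, 95% confidence); the function name is ours, not part of any polling methodology:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Worst-case 95% margin of error for a proportion estimated
    from a simple random sample of size n."""
    return z * math.sqrt(p * (1 - p) / n)

# One night of a tracking poll: roughly 400 respondents
print(f"n=400:  \u00b1{margin_of_error(400):.1%}")   # about 5 points

# A three-night moving average pools roughly 3 * 350 respondents
print(f"n=1050: \u00b1{margin_of_error(1050):.1%}")  # about 3 points
```

Pooling three nights shrinks the margin of error, but only from about 5 points to about 3, because the error falls with the square root of the sample size, not linearly. That is why even a moving average cannot reliably detect small day-to-day shifts.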