They’re fast, they’re cheap, and you can easily get larger sample sizes, but are they worth it? There is controversy over the accuracy of automated polls, also known as Interactive Voice Response (IVR) polls. Traditional pollsters are still skeptical, while IVR proponents claim they’re the wave of the future. More unsettling, however, is new research that suggests some IVR polls may have been biased to conform to traditional polls.

Whether or not those concerns are well founded, IVR polls do offer clear advantages. In addition to being cost-effective, they ensure that every respondent hears the questions asked the same way, with no variation in interviewer accent or delivery. And because no live person is on the line, respondents may feel less social pressure and answer more honestly.

A 2009 Pew Research Center study found that telephone polls “did very well in forecasting the outcome of the election in 2008.” The American Association for Public Opinion Research produced a paper in 2009 on presidential primary polling which concluded that the use of IVR polls “made no difference to the accuracy of estimates.”

Of course, automated polls aren’t without drawbacks of their own. The main problem is that auto-dialed calls have a bad reputation, largely because of the annoying commercial calls people receive in the evening; respondents may hang up before even hearing the purpose of the call. Questions must also be kept short, and only a limited number can be asked before respondents lose interest.

Perhaps a greater cause for concern was raised by Joshua Clinton and Steven Rogers, political scientists at Vanderbilt and Princeton, respectively. In a 2013 paper, they suggested that IVR polls from the 2012 GOP primary may have been calibrated to match traditional live-interview polls. Their research showed that IVR polls conducted before any human-interview poll were poor predictors, and were notably less accurate than IVR polls conducted after the results of a human poll had been published. In short, the accuracy of a robo-poll often depended on whether a live-operator poll was also in the field.

If pollsters are using other surveys as a benchmark for their own, it calls into question the genuine value of the IVR approach: these polls aren’t really giving us new information, and they aren’t methodologically independent. Campaigns should bear in mind that such polling data is, to a degree, a reflection of a “consensus” of other polls, and that consensus may itself be inaccurate.

To make their point, Clinton and Rogers charted the error rates of IVR polls in states that also had live-operator polls against those in states with IVR polls alone. As the chart below shows, without live-operator polls to serve as a guide, the IVR polls were significantly less accurate. Cheaper, yes. A replacement for traditional live-operator polling? Not yet.
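For readers curious about the mechanics, a comparison like this boils down to simple arithmetic: the average absolute gap between each poll’s predicted margin and the actual election result, computed separately for the two groups of states. Here is a minimal sketch in Python; all of the numbers are invented for illustration and do not come from the Clinton and Rogers paper.

```python
def avg_abs_error(polls):
    """Mean absolute difference between predicted and actual margins,
    in percentage points."""
    return sum(abs(pred - actual) for pred, actual in polls) / len(polls)

# (predicted margin, actual margin) pairs -- hypothetical figures only
ivr_with_live_polls = [(5.0, 4.0), (2.0, 3.5), (8.0, 7.0)]
ivr_only            = [(5.0, 1.0), (10.0, 4.0), (3.0, 8.0)]

print(round(avg_abs_error(ivr_with_live_polls), 2))  # smaller average error
print(round(avg_abs_error(ivr_only), 2))             # larger average error
```

A lower average for the first group would be consistent with the pattern the authors describe: IVR polls track reality more closely when a live-operator benchmark is available.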

Figure 1: Average error, human vs. robo polls