The polls were off more than usual in the last presidential election and the polling industry is suffering from low response rates, mode changes, and unrepresentative samples. But G. Elliott Morris finds that polling has long been vital for democracy and has mostly been improving over time. Pollsters have always had to adapt to new challenges and are doing so again. Given the benefits for prediction and for the knowledge of scholars and political leaders, we have to get it as close to right as we can. That starts with acknowledging the difficulties and lowering our expectations for precision.
Guest: G. Elliott Morris, The Economist
Study: Strength in Numbers
Transcript
Matt Grossmann: The past and future of polling, this week on the Science of Politics. For the Niskanen Center, I’m Matt Grossmann. The polls were off more than usual in the last presidential election, and the polling industry is under a lot of strain, but public opinion surveys are still the best way to understand the views of the American public and can still be useful for predicting elections and demonstrating public support for policy initiatives. What can we learn from the history of polling about how to get it right? And how is polling evolving to avoid degrading?
This week, I talk with G. Elliott Morris of The Economist about his new Norton book, Strength in Numbers. He finds that polling has mostly been on a long upward trajectory. Pollsters have always had to adapt to new challenges of their time, but public opinion is so fundamental to democracy that Morris says we have to get it as close to right as we can. That starts with acknowledging the difficulties and lowering our expectations for precision. Here’s our conversation, which started with the story behind Strength in Numbers. So give us the pitch. What are the big takeaways from Strength in Numbers?
G. Elliott Morris: Well, I’ll answer a slightly different question, which is why I wrote the book, and along the way it’ll kind of answer your question. Right, I’m a data journalist. I both cover polls and forecast elections, to the extent that understanding polls is part of my job. I wanted to write this book to help explain them to other people also, in just the sort of education journalism way. So I started with the 2016 election, and then while I was writing the book, the 2020 election happened, which are not exactly good examples for the polls. So it sounds kind of crazy to write a book about polls right now. But part of the thing I wanted to do was to explain to readers why the errors we saw in the polling for those presidential elections weren’t in fact so atypical, or at least not as atypical as the media made it seem, and to explain to readers how polls work under the hood, so that maybe they’ll be more accepting of larger errors in the future.
So while I’m doing research for that, reading the archives of American public opinion research and Public Opinion Quarterly, what have you, this other thread pops up in pollsters’ and academics’ writing, which is to me much more profound as the consumer of polls and of their writing. And that is this democracy angle: you don’t just want to do public opinion surveys before an election to predict elections, you want to do it for much more important reasons, figuring out what the public wants the government to do on certain issues, but even more basic than that, what issues they want the government to focus some attention on. It’s not just do you approve of welfare spending, but are you having trouble paying your rent or affording groceries, putting food on the table? That sort of thing. And then all that data can be used to increase the quality of the representation people get.
At least that’s the sort of main thread in some of the more democratic writing on the polls. And so that’s what the book becomes, right? I take my pollster and election forecaster hat off a little bit and put on the citizen hat. And I think that the book is serving that purpose in a new way. That’s also the progression of the book, set into thirds: there’s a bit on forecasting, there’s a bit on the history of polling, and a bit on methods and where the industry goes from here.
Matt Grossmann: So you were last on the day after the 2020 election; it was not very nice of me, after you had been up all night, to make you come back. And as you say, the polls were off for the second consecutive presidential election, but in the book you sort of come down on the side of they were off, but not by more than we should have expected. So, did anything change in your view from experiencing the 2020 election and the repeat performance of the polls?
G. Elliott Morris: The polls in 2020 were worse than in 2016. The error was I think about 1.5 times as large at the state level, and almost double at the national level. So the polls are worse. I think the book says that. But the claim I’m advancing is that if you look at these polls individually, if you study response rates for polls and you ask pollsters how their methods break down, what they’ll tell you is that when people aren’t answering the polls as often as they used to, you know, we have a response rate maybe around 1% to 5% depending on how you do a poll now, then you have a higher chance for error due to the people answering your poll not being representative of the population as a whole.
And so in 2020, this is worse because it seems like the error is from Republicans not answering the polls. And that’s what AAPOR, the American Association for Public Opinion Research, said after the 2020 election. And that’s a pretty huge problem for pollsters, because there’s no correct number to a decimal point, or what have you, of the number of Republicans in the electorate, and so you can’t weight your data, you can’t weight your poll after the fact to adjust for that. You have to increase the quality of your sample before you’re even doing those adjustments. So I would say the polls are definitely battered, they’re bruised. The pollsters have a lot to figure out, but they’re not broken. This tool is suffering more from non-response on a political dimension, and pollsters are encountering more problems than they’ve ever had before, but the same sorts of patterns have existed in the past. And so hopefully pollsters can innovate, and the book talks about some ways they’re trying to do that.
Matt Grossmann: So we expect too much of polling or at least too much precision. What should we expect from polling?
G. Elliott Morris: Yeah, I would say if you’re writing a headline about how the polls are catastrophically broken, or you should never trust the polls again, then you’re on the point of the spectrum where you’re expecting too much out of them. Equally, if you’re writing an election forecasting model and betting your entire life savings on an outcome, then maybe you’re expecting too much precision out of those polls, at least if you’re writing an election forecasting model with the polls. The position I take, as both a forecaster (and I’ll continue to do forecasting) and as someone who wants to use polls for this broader purpose, is that if you’re a reporter on the polls, or if you’re consuming the polls, then you should not expect them to have two percentage points of error on a candidate’s vote share or something.
You should expect that to be five or six percentage points for an individual poll on average. And then it can be even worse than that if non-response is higher, or if the question wording or likely voter filters or what have you, and I understand I’m being kind of wonky here, are wrong. And the more fundamental problem is that if people aren’t as likely to answer the phone, and you’re not able to predict that with their demographic traits, as pollsters currently don’t seem to be able to do with 100% accuracy, then the entire industry can be biased in the same direction. So it’s hard to put a number on it here, but it seems like the conversation is wrong.
What I tell people is just to take the margin of error and double it. That’s what the science tells us the true size of the margin of error is. And if the outcome you’re trying to rule out is within that margin of error, then don’t rule it out. Don’t write your article about how a candidate is going to win the election for sure. Don’t write an article about how this race has shifted three points with a single poll. Treat the tool as the product of a scientific research process that has lots of assumptions, one that’s pretty artful and not offering you laser-like predictive accuracy.
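A minimal sketch of the arithmetic Morris describes, assuming the textbook 95% margin of error for a proportion from a simple random sample, with his doubling rule applied on top; the poll numbers here are hypothetical:

```python
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """Textbook 95% margin of error for a proportion from a simple random sample."""
    return z * math.sqrt(p * (1 - p) / n)

# Hypothetical poll: candidate at 52% among 800 respondents.
p, n = 0.52, 800
moe = margin_of_error(p, n)   # roughly 3.5 points
doubled = 2 * moe             # Morris's rule of thumb: roughly 6.9 points

print(f"reported margin of error: +/-{moe:.1%}")
print(f"doubled (rule of thumb):  +/-{doubled:.1%}")
# A 52% reading is consistent with anything from about 45% to 59%,
# so don't rule out outcomes inside that wider interval.
```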
Matt Grossmann: So a lot of the book ended up being on the history of polling. What did you personally learn from that history that was new?
G. Elliott Morris: Well, the best story I found in the archives was of this character named Emil Hurja. Emil Hurja is, I guess, essentially the first election forecaster. He might be considered the first scientific pollster, depending on how you define scientific. And he’s not someone that you’re aware of in the way that we’re aware of George Gallup, who started the first polling firms, but may not have in fact been the first person producing polls for a president or for the public. So Emil Hurja, he’s the son of some Finnish immigrants in the upper peninsula of Michigan. He’s sort of an everyman in the way that men were in that era, in the late 1920s, early 1930s. He goes to work in gold mines in Alaska, owns a newspaper in Breckenridge, Texas, and then goes and trades oil futures on the New York Stock Exchange.
And then he gets involved in political polling after doing all of that. Oh, I should say he’s also an ambassador to World War I peace talks for the University of Washington. He’s always doing everything, and he settles on polls in part because of his Finnish immigrant background. In mining, he talks to mathematicians and statisticians from Minnesota and thinks, well, this sampling process for mining, where you hit a rock in a mine, and if you see ore then you mine more there for more ore, is sort of similar to how you mine public opinion, where if you take a sample, it could be relatively representative of the population as a whole. And that is the sort of fundamental basis for polling, and also, by the way, the fundamental problem, which I’m sure we’ll get into.
And so he conducts polls for the Democratic National Committee in 1932. He’s rather successful. He predicts that Franklin Roosevelt’s going to win by like seven million votes. He wins by 7.1 million. He’s immediately heralded in the press as sort of the Crystal Gazer of Crystal Falls or the Wizard of Washington, and then he becomes Franklin Roosevelt’s pollster. And he’s not conducting polls in the way that we would want someone to conduct polls now; he’s sending out members of the DNC to go canvass Works Progress sites and sending that data back to Washington. He’s taking some of the Literary Digest polling and re-weighting it based on how inaccurate it was in the previous election. And he takes some of George Gallup’s data too, and puts it into early versions of polling averages and that sort of thing. He invented lots of the tools that we use today, even though he doesn’t get fame or credit for that. So that was, I think, the most interesting story I stumbled upon.
Matt Grossmann: So histories that I’ve read of polling by historians are, I guess, unsurprisingly more negative. They come at it maybe from a less quantitative perspective, but they tend to emphasize that polling did have an effect on the American public’s view of itself, but it tended to kind of valorize what the average person believed, and social comparison across individuals and groups. And that from the beginning it implied unrealistic precision; that is, it was always snake oil, at least a little bit, in terms of being able to say, “This is the big new thing.” So what do you think? Is that history too negative?
G. Elliott Morris: I think those are valid criticisms of the polls at the time they were written. To the extent that polls are the result of a single scientific process, it’s not one that’s invented overnight or over the few-year lifespan of a polling firm, but at this point basically over a century. So those methodological criticisms I think ring true for early polling. And by the way, there are still pollsters who don’t really pass methodological muster today. So there are some bad eggs in the polling industry, or I’d say some bad pollsters around the edges, some of them in the market research industry who conduct political polls for attention, or political consultants who do the same thing for clients. And then we have ideologically biased pollsters today too. So certainly you can’t trust every poll, and that’s sort of the methodological critique.
I think the real advice here is actually some hard advice. You have to look at the poll that’s presented in front of you. You have to ask yourself questions about the methodology, whether or not it’s being conducted in a transparent way. If they’re sharing their information about how they’re conducting the poll, it’s a pretty good sign, and if they’re not, you tend to disregard that information. If they seem ideologically biased, et cetera. So you have to exercise some discretion, just like you would exercise discretion over the result of some other scientific process, like an academic journal article, or what have you, except at least there you have peer review. You don’t necessarily have that for polls. There is a criticism of the pollsters that I cover in the book that’s not so methodological, but more theoretical, about polls tending to lead politicians into areas they shouldn’t be led.
So on wars, maybe on questions of rights, and I would agree with those two. You definitely don’t want to use a poll for everything. And to the extent that the book is advancing an argument from the pollsters, it’s certainly not advancing what the political scientist Sidney Verba decried as government by survey. And I think that would be a sort of uniquely bad idea, for all these reasons of uncertainty, but also because you can think of an advocacy organization releasing a poll and pushing their issue that they have a pre-baked opinion on, and then government opinion being tied to that. That is also sort of a plainly bad idea. So there are definitely limits on what you should use polls for. The way I’m thinking about it is that imagining a counterfactual sort of American history, or a political environment today, where you don’t have polls also seems bad, or maybe even worse, I would say. So that’s the balance I’m trying to strike.
Matt Grossmann: What about the more social historical critique? You portray this as being about putting democracy in action. But other historians portray it as more of an outgrowth of consumer marketing research. It’s about figuring out how to manipulate the public to sell products and moving that into the political arena, or it’s about establishing what normal is and putting a positive spin on normal and a negative spin on outliers.
G. Elliott Morris: So I really like Sarah Igo and Jill Lepore’s histories of polling, and also of political targeting and whatnot, and their histories of some of the early, basically presidential, political consultants. One of the quotes I remember that’s in the book, from the research for the book, is George Gallup recalling why he went into polling. And right, it’s not for the democracy reasons, as you say; it’s because he literally wondered aloud to himself, “Well, if it works for toothpaste, why not politics?” So you’re right. I think that is a potentially nefarious original intent for the polls. But to the extent that we have something pretty powerful here, I think polling needs some people to argue for using it in good ways. Maybe I would say, even if it’s not used optimally all the time, then we in the press, who are already serving the people in a way, should also use this tool to be serving them as well.
Matt Grossmann: So you’re pretty optimistic on the trajectory as well, that we’ve learned new things and that’s helped us do polling better at least over the long time period. But there’s certainly been a lot of complaints about contemporary polling. So I guess let’s just start with the basics of low response rates and potentially unrepresentative samples. Most of the polls we read are either from 2% to 3% response rate phone polls, or from online polls that originate essentially in paid, opt-in survey takers who take a lot of surveys. Why should we believe either one of those?
G. Elliott Morris: The first response to why should we believe them is, well, they’re reasonably accurate. They’re basically what you would expect for the job they’re doing, or maybe even better, considering those technological problems. Those problems of low response rates, of a really polarized public, also mean you have to get a pretty good sample of all the individual groups in the public to have a decent poll.
Why should we trust them? Well, they do a pretty good job, even when they’re so-called wrong, even when they miss elections, if they’re off by two or three percentage points. Yeah, that’s a lot in an election, but in the broader purpose of polling … at least in the democratic sense, not in the market research sense … and how they’re read in Washington by elected leaders, bureaucrats, issue advocacy organizations and interest groups, a two- or three-percentage point error on a poll showing 60% or 65% of the public in favor of something, or 20% or 30% of the public saying they’re having trouble paying rent or food or whatever, is actually not that big of a deal.
In the latter chapters of the book, while the book is optimistic in this democracy sense, it’s certainly pretty, I think, realistic about the problems that pollsters face right now, which is that Republicans, at least right now, are not as likely as Democrats to take surveys. There’s literally no concrete methodological solution to that problem. There’s only really more guesswork, or a lot more money spent on higher response-rate polls with new methodologies, or in this case actually with the original methodology, which is fielding surveys by mail. There, pollsters have found higher response rates and better representation of conservatives and of religious conservatives, evangelicals, Mormons, traditionally underpolled groups. That is promising, but we won’t know if that works until we know it doesn’t break down in similar ways as in the past.
Really the optimistic case here is, well, there have been misfires before. The famous test of scientific polling in 1948 was a pretty big miss. The 1936 polls, which were the first-ever scientific polls by Gallup, were off by 12 percentage points on the presidential margin of victory. They were off by four percentage points last time. There is this sort of march toward progress for the pollsters, this long-term positive innovation that has helped them measure the public better, but right, it’s not perfect, and there’s no guarantee that’s going to continue in the future, certainly not in perpetuity.
Matt Grossmann: Is that just because we’ve been lucky about what is correlated with non-response and what we think might be? That is, we have ways of correcting for things we can anticipate might be correlated with non-response if it’s not strictly partisanship, as you mentioned. Just looking at my own online surveys, when we get to stuff that maybe wasn’t anticipated, like how many people are working from home or how many people have a long commute, you tend to get these bigger biases because of just the kinds of people who are available to take online surveys, and no one’s trying to correct for that kind of bias. Is it possible that we actually just are lucky and there’s a lot of ways to get to a poll that’s near 50/50 for elections, and so it’s looking better than it is?
G. Elliot Morris: Yeah. To some extent, closer, more polarized elections make the job of pollsters easier, in that they can just weight to the results of the last election and get reasonably close. The way we know that polls have some degree of quality is by matching them with government benchmarks or by doing simulation tests, where pollsters create different versions of the polls and see whether or not the averages for issue opinions change markedly by shifting the political distribution of the polls, in ways that their demographic models aren’t already taking into account.
There again, it’s a story of uncertainty, and an argument, I think therefore, for lowering our expectations for what we want out of this tool. Again, if you’re expecting one or two points of bias on average, then that might be too low. If you’re making decisions on polls that have pretty clear, robust methods, with findings that are the same across firms or methodologies or questions, then you’re doing it, quote, the right way, I guess.
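As a minimal sketch of the kind of adjustment Morris mentions above, weighting a sample toward a known political benchmark such as the last election’s result; all respondents, shares, and the benchmark here are hypothetical stand-ins:

```python
# Hypothetical example: weight respondents so the sample's recalled 2020 vote
# matches the election result, then re-estimate an issue question.
respondents = [
    # (recalled_2020_vote, supports_policy)
    ("D", True), ("D", True), ("D", False), ("D", True),
    ("R", False), ("R", True),
]

# Sample shares vs. a (hypothetical) population benchmark from the election.
sample_share = {"D": 4 / 6, "R": 2 / 6}
benchmark = {"D": 0.52, "R": 0.48}

# Classic post-stratification weight: population share / sample share.
weights = [benchmark[vote] / sample_share[vote] for vote, _ in respondents]

raw = sum(s for _, s in respondents) / len(respondents)
weighted = sum(w * s for w, (_, s) in zip(weights, respondents)) / sum(weights)
print(f"raw support: {raw:.0%}, weighted support: {weighted:.0%}")
# Down-weighting the overrepresented Democrats moves the estimate from 67% to 63%.
```

The catch Morris keeps returning to: this only works when the benchmark itself is knowable, which is exactly what partisan nonresponse undermines.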
Matt Grossmann: Talk about the interaction between what polls can do and the characteristics of public opinion in elections. We’ve been mostly talking about things like a presidential poll, where most people do have an opinion about which candidate they like better. We’re currently in the primary cycle. We have a bunch of low-salience races. It’s obviously harder to do both turnout and vote choice there, but also you might just have people whose opinions are not that strong at the time that you take the interview. How do the characteristics of the election interact with these problems in polling?
G. Elliott Morris: Well, one thing I don’t really get into in the book … partly because of space, but also the narrative structure of the book is a bit different than an academic book … is a few other iterations on top of the yes/no, do-you-support-this-thing-or-not, do-you-know-about-this-thing-or-not polls. There’s deliberative polling, where you would field a poll first, in a larger experiment or research study, and then you would have a conversation. You’d disseminate information, you would let them talk to each other, and then you’d poll them again. That’s a better match for the democratic process, or at least how people process information, to the extent that you’re looking at a group that’s processing information.
I think one of the undercovered biases … and again, something not totally mentioned in the book, but something I talk about elsewhere … is this bias towards high-political engagement people. We know that has downstream effects. I think maybe you’ve even talked about this recently on your podcast, about how high political engagement could be exaggerating polarization on issue opinions or presidents or what have you. I keep going back, right?
There are lots of arguments against certain use cases for the polls, and there are ways that they break down, and there are some ways that they’re pretty robust, and there are areas where pollsters definitely need to do more work, at least from my outside reading of how they work. I think this question over whether we’re more polarized, or much less polarized than we seem to be, based off of what looks to be a pretty strong correlation between the increase of polarization over the last 20 years and the decrease in response rates, is not one that the book ever gets into, but is one of the more interesting things that maybe we could have included in an epilogue if we had more time.
I guess the other thing I think about is that we have plenty of other indicators as well that we look at if we’re being quantitative about election forecasting, but also if we’re trying to assess whether or not the public supports something or whether or not it’s a good idea. In the book, I reference this decision matrix presented by political scientists that predicts government outcomes, or how legislators would vote for something, in which polls are not the only input into the process: legislators’ own convictions are inputs, the pressures that they get from interest groups and activists, and public opinion as revealed by other sources, from the media or from broader discussion as well. The argument I’m making in the book is that polls deserve a place in that matrix, but they definitely shouldn’t be the only input, because of all of these problems.
Matt Grossmann: If we’re trying to predict the outcome of the congressional elections this year, we have several problems with doing that now. One is we don’t know who’s going to turn out to vote, one is we might not be reaching the right people that we need to, and another is that people might be undecided or might be likely to change their mind in a particular direction. How would you rate the relative importance of those in us being able to predict the right answer?
G. Elliot Morris: To answer the question an entirely different way, the big thing I’m worried about is people using overall top-line presidential approval ratings to predict election outcomes now. Now that we are more polarized in our vote choice, presidential approval rating might be less predictive, in that a president like Joe Biden today could have a presidential approval rating of 37% because he’s not liked by lots of Democrats, who are loyal in the election regardless. I think that’s going to be the biggest factor in whether or not these past models hold, but that’s in terms of using polls as input into models, not the direct observations of the polls themselves.
I would still expect the generic ballot poll for the House to be pretty reliable. One problem there with the prediction is that, if there are fewer competitive congressional races than there were in the past, then swings in the national popular vote translate to smaller swings in seats, and so those models have to take polarization into account as well. That’s something we’ve tried to do at The Economist, but it’s not obvious how you would want to do that.
Some things we’ve tried are using in the models, as an independent variable, the share of people who say they voted for a different presidential candidate in the last election than the one before it. You could also think about maybe using, as an input, the difference in partisan approval ratings of a president. We’re in this environment now where it would look like polarization is lower, because there are lots of liberals and young people that are Democrats that disapprove of the president. Even that falls apart in certain ways. It’s not a 100% solution. In terms of polls predicting congressional races, that’s the big problem right now: how polarization changes all of our equations.
Matt Grossmann: I understand the point about how people are using presidential approval, but is part of that that we actually know something about voters that they don’t know, which is that they may be disappointed with their side, but that has not usually translated into them supporting the other side in an election?
G. Elliot Morris: Well, I would think that voters know that about themselves too. I mean, so I’m a young person. When I talk to other people in the under-30 age group on polls, they also say, if they’re disapproving of Joe Biden, “Oh, well, he’s too old.” Some of them say, “He’s too centrist, he’s neoliberal, he’s not buying into the things that I want him to buy into,” and so he’s a bad leader or what have you, but then they say, “I could never vote for a Republican.” That shows up directly, directly in the polling. I mean, if you take a cross-tab of Democratic vote intention in November by presidential approval and by party ID, you still see that the Democratic group is strongly, strongly pro-D for November.
Matt Grossmann: You work for a magazine, where most people spend more time talking to people as a form of reporting. Compare just the relative merits of taking a poll versus something like a focus group or just man on the street interviews for thinking about public opinion.
G. Elliott Morris: You know, if there are any focus group companies listening that want a media partner, The Economist doesn’t typically do that. We typically interview people who do focus groups for other outlets, but that’s something we would probably be interested in. We do have this partnership with YouGov where we do the weekly surveys, and part of the value there is in the absolute measures of public opinion that are revealed in any survey, but also in the trends. I mean, we have the data going back to 2009, and you don’t really get that in an objective way when you’re doing man-on-the-street canvassing or interviewing, or necessarily in the focus groups. That’s a signal that you get from polling that you don’t really get from another source.
It’s also fast and pretty cheap now. Even with reputable online outlets like YouGov, it’s a lot cheaper than a live-caller phone survey. There are certainly cheaper ways to conduct polling than to do the matched online non-probability samples, but as we were talking about earlier, it’s questionable whether or not those online outlets that don’t invest a lot of money into their methodologies are worth it, I would say, or are revealing good portraits of the public.
Matt Grossmann: Is there anything that’s better about the more qualitative ways of looking at it?
G. Elliott Morris: Yeah. We certainly do still talk to people. I mean, we use all the other sources of information in our reporting. Polling is just pretty accessible. At the end of the day, it is just another way of talking to people, and you don’t want to only rely on that one mode of interviewing them. I think the focus groups are also pretty interesting, in that they’re typically focused on people who are cross-pressured psychologically or who have changed their vote choice recently, and so you’re exposed to groups of people that you normally wouldn’t be exposed to in media narratives or reporting. I think partisans anecdotally just get the lion’s share of media attention for the sort of sky-is-falling, polarization, we’re-so-divided takes that seem to be around the corner every single week. I’m certainly not only ever looking at polling data.
Matt Grossmann: We’ve been mostly talking about election polls, but you make a case for the importance of issue polling. The responses I usually get on this are that polling is still good because most issue polls show large majorities for one side, and it doesn’t have to be as precise, in other words, and problems with partisan non-response don’t necessarily lead to huge problems with issue polling. As you know, studies of the connections between issue polling and ballot propositions in some ways invalidate both of those claims. They say that you really shouldn’t believe large majorities for one side of an issue area because when it is debated, it’s going to come down, and that there really might be problems when it comes to predicting that are just as big as the problems we have in election polling. What do you think?
G. Elliott Morris: My reading of the literature on predicting ballot referendums with polling is that the conclusion is not don’t ever trust polls on these issues, but that if you’re trying to predict the outcome of a ballot referendum, you have to take another angle, another dimension of public opinion, into account, which is the status quo bias acting on voter preferences downstream. Ballot referendums also have other forces that could push the outcome to deviate from the true public opinion of the whole population. Also, I guess before I say that, the population of people voting on the ballot initiatives is not the same as the population being polled.
Typically there’s lots of propaganda in all elections, and in ballot referendums you have fewer guardrails against that sort of propaganda, because there’s less indirect selection. There tends to be a little less deliberation, just because the timeline for these things is shorter. That’s, I think, decreasing now. It seems like with ballot initiatives, the campaigns take their sweet time these days, and there’s more of them, which I guess is good from the statistical perspective of trying to dissect these differences between true opinion and revealed opinion and the status quo bias. But the takeaway that I got is that the further away you are from 50/50, the more careful you want to be with whether or not you’re capturing something that’s valuable. Even if the true opinion on, say, universal background checks for guns is 80 or 90%, if it’s not predictive, you have to think about how you want to use that number. You probably shouldn’t just throw it away, right? You want to temper your expectations for if there’s going to be a ballot initiative. You want to wonder how you’re going to push your Congressman to support the thing that you think people around you support, if they think that a different way of revealing that opinion would give a different result.
Matt Grossmann: Let me re-ask that in a different way, because you refer to it as the true opinion might be different than the opinion as revealed in the ballot measure campaign.
G. Elliot Morris: Or it might be somewhere in between, right? The forces could be canceling each other out, I guess too.
Matt Grossmann: Okay. But I think there’s also reason to believe that rather than it being easier to do issue polling, it should be harder. There’s all kinds of wording effects. There’s non-opinions in the electorate compared to something like a presidential election. I guess is it really true that we should be less worried about issue polling than election polling, given some reasons to believe that it should actually be harder to find what the true opinion is?
G. Elliott Morris: I think it’s just an argument for being more careful with the data. I mean, that’s, I think, the point of view that someone who sort of knew about public opinion research and read the book would come away with. I mean, one thing we’re doing in this conversation is thinking about polls as a research process and as a statistical data generating process that has tons of different actors along the way. Like, human behavior is also stochastic. Our psychologies are all over the place. That’s not really the conversation. That’s not the narrative. That’s not the conventional wisdom about polling, right?
I mean, if we’re thinking about polls as political scientists here in this conversation, that’s kind of what I want everyone else to do when they’re thinking about polls as general election tools also. I’m not trying to skirt your question. I’m just sort of underscoring that if… I’m not saying that these problems don’t exist, but the broader purpose of this is to say let’s think about these tools as researchers. The direct studies in the book about the quality of the issue positions are from these Pew simulation studies, looking at differences in opinion if you’re weighting by different partisan groups.
Even they find differences after those re-weightings in their opinions, in their sort of pre-election polling and in their high-quality benchmark polls. I agree that these are problems, especially the non-attitude issue that polls run into, especially because you get higher response rates among politically engaged people. The solution there is probably to try to find those low-engagement responses, like Pew is trying to do with their mail surveys. SSRS is doing this also, and that’s the pollster for CNN. Those are areas that pollsters are looking into to try to combat some of these issues. The way I’m thinking of this is not…
The way I think about the data, when I’m doing my work, when I’m writing, is not that any single poll is giving you the ground truth measure for the opinion, but that you want to look at all of the polls’ distributions, both the distribution of all of the polling data you get, but also the distributions that the error in the data generating process, so to speak, could be generating. That is a pretty uncertain portrait. I mean, you do not want to make decisions only on polls in general, but especially if the opinion is like 52%, or 55% even. Now, these measures are pretty unbiased.
They tend to be stable across question wording. That’s one thing also to look for: polls with different question wording. It’s super expensive, but the real thing here that seems promising to me, that I keep coming back to, is the sort of deliberative polling stuff.
Matt Grossmann: Here’s your chance to get wonky and tout some of the recent improvements in polling. You point to some use of Mr. P for better geographic representativeness and to some of the online survey companies. Tell us what the cutting edge looks like.
G. Elliott Morris: The biggest developments that I tell the story of, with certain associated characters in the book, are two things. First is the development by some Obama campaign staffers, Erin Hartman and her colleagues, in the data cave there in 2012, of polls that take partisan and other types of response rates into account when they’re selecting people to poll off of the voter file. They’ll conduct a poll and they’ll see what variables are associated with nonresponse using statistical models, using some pretty advanced machine learning stuff these days, but back then not quite as sophisticated. Of course, we could have a whole debate about whether or not you want to use a machine learning model or a logit model or whatever.
And then once they get this predicted probability of how likely someone is to respond to a survey, they select them inversely proportional to it. You said we were going to get wonky, so let’s get wonky. And that allows pollsters, in theory, to correct for nonresponse in the sample selection stage. They rely less on weighting, and they can target the problem sort of before it happens and biases their data. Of course, it’s not perfect. The New York Times used a variant of this methodology in 2020, and they still had big errors. Partly that’s due to not having voter registration on the voter file… Sorry, party registration on the voter file in every state. You can’t apply those adjustments in every state.
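A stripped-down sketch of the selection idea described here: model each voter-file record’s probability of responding, then sample inversely proportional to it, so hard-to-reach people are over-drawn before any weighting happens. The voter file and propensities below are hypothetical stand-ins, not any campaign’s actual pipeline:

```python
import random

random.seed(42)

# Hypothetical voter-file records with a modeled response propensity
# (in practice this comes from a statistical or ML model fit to past polls).
voter_file = [
    {"id": i, "response_propensity": random.uniform(0.01, 0.20)}
    for i in range(10_000)
]

# Draw the sample with probability inversely proportional to propensity,
# so low-propensity (underrepresented) people are over-sampled up front.
inverse_weights = [1 / v["response_propensity"] for v in voter_file]
sample = random.choices(voter_file, weights=inverse_weights, k=1_000)

avg_all = sum(v["response_propensity"] for v in voter_file) / len(voter_file)
avg_sample = sum(v["response_propensity"] for v in sample) / len(sample)
print(f"mean propensity, full file: {avg_all:.3f}; selected sample: {avg_sample:.3f}")
# The selected sample skews toward low-propensity records, counteracting
# nonresponse before any post-hoc weighting.
```

As Morris notes, this only corrects for nonresponse along dimensions the propensity model can see, which is exactly where it failed in 2020.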
If we have an electoral system where states matter sort of increasingly, with the difference between them and the popular vote sort of determining outcomes, then that’s not going to be a solution forever either. And MRP, lovingly called Mr. P, or maybe we want to call it multilevel regression and poststratification, or maybe even Mrs. P, with shrinkage and synthetic poststratification: these are all different methodologies to try to transform sort of cheaper-to-obtain national polls into estimates of state opinion. That can be valuable to someone like me who doesn’t have access to high-quality estimates of state opinion. It can transform our national surveys into estimates of the electoral college vote and provide a better sense of the election outcome.
But also, there, it’s not perfect. One thing that Mr. P algorithms can let us do, if we’re using Bayesian statistics, is directly account for uncertainty that goes beyond the margin of sampling error, so we can be a little more honest, or a lot more honest, about how accurate or inaccurate our estimates are going to be.
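The poststratification half of MRP is the easy part to show. A minimal sketch, assuming you already have model-based opinion estimates for each demographic cell (the multilevel-regression half, not shown) and census counts of how many people fall in each cell in a given state; all numbers are hypothetical:

```python
# Hypothetical model estimates: P(support) by (age group, education) cell,
# produced by the multilevel-regression half of MRP (not shown here).
cell_estimates = {
    ("18-39", "college"): 0.64,
    ("18-39", "no_college"): 0.55,
    ("40+", "college"): 0.52,
    ("40+", "no_college"): 0.41,
}

# Hypothetical census counts for those same cells in one state.
state_cells = {
    ("18-39", "college"): 220_000,
    ("18-39", "no_college"): 410_000,
    ("40+", "college"): 380_000,
    ("40+", "no_college"): 690_000,
}

# Poststratification: population-weighted average of the cell estimates,
# turning national-model cell predictions into a state-level estimate.
total = sum(state_cells.values())
state_estimate = sum(
    cell_estimates[cell] * count for cell, count in state_cells.items()
) / total
print(f"poststratified state support: {state_estimate:.1%}")  # about 49.8%
```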
Matt Grossmann: What do you make of just differences in polling based on format? Even if we are reaching the same people, we often get different answers in the online format versus phone format. Lots of the academic surveys switched in the middle of COVID to some new format that we didn’t have previous experience with. We seem to be finding that a lot of trends break or have very different estimates in one form than the other. Is that a reason also to think that we’re not really getting very precise estimates whichever one we’re using?
G. Elliott Morris: The recommendation at the end of the book is that you don’t want to rely on only one mode for polling nowadays, because we have technologies where you can ask someone a poll with a live-caller interview or with a robocaller (although maybe you shouldn’t do that), over text message or online, or even face-to-face or by mail, and then you can combine those things. Pollsters are turning to mixed-mode samples to try to look at those differences between their methods. Campaign pollsters, private pollsters, have been doing this for some time now. Their reputation sort of relies on being accurate. I mean, they sort of lose clients and their livelihood if they’re not. That’s a signal that that’s sort of one direction to move into.
Now, I’m certainly not saying that all private polls are conducted to the highest methodological standard, but those mixed-mode surveys are sort of the development here, the way pollsters are trying to combat this issue. One thing I see in the history of polling, when I’m writing this book and when I’m talking about it, is that there are constant problems with surveying. Pollsters are almost always trying to fight their last battle. They observe issues, at least from the methodological side, and try to adjust for them after the fact. There’s no promise that these things are going to work, that mixed-mode sampling, for example, is going to work. Of course, you can have, as we have now, the political engagement problem across modes.
That’s the one to really look out for. I remember at The Economist, with YouGov, in the 2020 Democratic primary, for example, the polls from YouGov, I should say, were systematically more favorable to Elizabeth Warren after all the weighting, even after some political engagement weighting, because evidently Elizabeth Warren supporters were more likely to be politically engaged and therefore more likely to take online surveys after controlling for everything else. There’s a lot of uncertainty there too, and that gets back to this primary election problem.
I mean, that is just sort of another argument for not writing headline news about every single new survey that comes out, for taking the full shape of opinion, as revealed across multiple distributions, into account. I understand that’s a really statistical argument, like not everyone’s going to be able to perform those calculations or whatever, but maybe that’s an argument for every single person in America buying the book, or maybe that’s an argument for people who are producing that type of news just to try to listen to what pollsters are saying about their scientific, but also artful, process.
Matt Grossmann: That’s still mostly on the sampling side though. I guess I’m saying if the same person interviewed in different ways expresses different racial attitudes, expresses different views of trust in government or social trust in general, bigger differences than ones we find in like change over time, for example, in the same mode. Should that tell us something about the quality of opinions that we are getting?
G. Elliott Morris: Yeah, you should probably use the mode that is empirically more reliable, I guess. I mean, the thing I’m thinking of here is the increase in negative partisanship as revealed by face-to-face and online differences in the ANES. I didn’t realize that those were the same people being interviewed in different modes. That’s really interesting new knowledge to me.
Matt Grossmann: Not true of that particular example, but there are studies that do both for the same people.
G. Elliott Morris: Okay. Right? I mean, putting my Bayesian hat on, when we have priors from one survey, if we have statistical evidence or even qualitative evidence that those measures aren’t reliable or break down in certain circumstances, certainly we should take that into account. I think I’ve reported on some of this stuff in the past too. Again, something that doesn’t necessarily come up in the book, but there are sort of other explanations for that: the structure of the book, the narrative that trade nonfiction demands, et cetera. Again, we’re having a pretty high-level researcher conversation here. I think my argument is that the press should be doing that more often.
Matt Grossmann: You also cover this recent history moment when there was a move toward aggregation, that the answer was don’t believe any one poll, believe some kind of averaging or modeling of the polls together. Obviously you’ve been a part of that as well. What is the sort of case that that is a major move forward and the case that it might have been overblown?
G. Elliott Morris: I’ll give you the way that it works in the past and the way it might not work in the future. Aggregation of polls is advantageous for two reasons. First, because it controls, supposedly, for sampling variation between surveys, all else being equal, like differences in weighting schemes or questionnaires or what have you being accounted for in some sort of model. Combining polls should give you a better signal of public opinion. I think that’s empirically valid. Certainly, if you take just samples of five or 10 surveys and predict election outcomes, or use those as inputs into your election forecasting models, and then you do the same thing with hundreds of surveys, you do empirically get better predictions with hundreds of surveys than with fewer.
It’s not evident that that should be the case, by the way. Like in stock trading, for example, if you just take random samples of stocks, some of the literature says you can outperform the S&P in a couple of years, probably not over the long term. So that is a pretty important technological and methodological advancement. It’s also a huge improvement over how polls were being reported in the 90s or early 2000s. We get the first election poll aggregators and issue poll aggregators in 2006 with Pollster.com, with academics like Charles Franklin and Mark Blumenthal; then we get FiveThirtyEight. We get Huffington Post Pollster, which is the sort of former Pollster.com crew with some people in the media.
And now it seems like everyone has a polling aggregator. That’s a huge improvement over reporting every single poll, and it also gives more responsible consumers of the polls and polling journalists a tool that we can use to point out errors in other people’s stories, to say, “Well, actually, empirically, you want to use the average.” You cite X study and say, “Here’s what the average is right now,” and you link to the polling average, and that sort of workflow is only possible with aggregation. It also gives us the over-time comparisons that are more robust than campaign narratives would traditionally be, where someone would pick out a poll conducted recently and then pick out the more favorable poll to show movement in the campaign from a couple months ago.
In a sort of journalistic capacity also, it’s more advantageous to have aggregation. Now, what aggregation doesn’t solve for… If aggregation solves for noise in polls, in the sampling error of polls, it doesn’t account for bias. The only way to really account for that bias using some of the traditional tools is to just simulate the error of the aggregate by running the model multiple times, telling your computer that all the polls could be biased X percentage points in either direction. And that’s what election forecasting models do. There are some advancements here to the election forecasting models themselves.
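A minimal simulation of the point being made here: averaging many polls shrinks sampling noise but does nothing about a bias shared by all of them, which is why forecasting models simulate that bias explicitly. All parameters below are illustrative, not The Economist’s actual model settings:

```python
import random
import statistics

random.seed(0)

def simulate_average(n_polls: int, true_margin: float = 0.02,
                     sampling_sd: float = 0.03, bias_sd: float = 0.02) -> float:
    """One simulated election: every poll shares a common bias draw
    plus its own independent sampling noise."""
    shared_bias = random.gauss(0, bias_sd)  # hits all polls alike
    polls = [true_margin + shared_bias + random.gauss(0, sampling_sd)
             for _ in range(n_polls)]
    return statistics.mean(polls)

errors = [simulate_average(n_polls=50) - 0.02 for _ in range(5_000)]
print(f"sd of polling-average error: {statistics.pstdev(errors):.3f}")
# With 50 polls the sampling noise mostly cancels, but the error sd stays
# near bias_sd (~0.02): aggregation cannot average away systematic bias.
```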
In The Economist’s 2020 election forecasting model that we built with the help of Andrew Gelman at Columbia University and his TA, a good friend, Merlin Heidemanns, we included a term in the aggregation model that, as far as we can tell, hasn’t been used before, to account for systematic differences, all else being equal, between pollsters that are trying to account for non-response on their end, partisan nonresponse on their end, and all the other polls. And although that worked out pretty well for 2016 (the model gives, in back-testing, better predictions for Hillary Clinton’s odds of victory and margin of victory in each state), it didn’t give us nearly the type of performance gains we wanted in 2020, likely because non-response was also a problem within partisan groups, so the Republicans who were most likely to support Trump were least likely to answer polls. And if you don’t have probabilistic scores for likelihood of supporting Trump in your model, then you can’t [inaudible 00:52:01] away for that.
Now, some pollsters do have that sort of information, so maybe a polling aggregate in the future could rely on updating those polls, or just detecting systematic differences between those polling firms and other polling firms. But as long as you have a steady stream of data that’s biased, then your aggregates are going to be biased, too, so that’s where communicating uncertainty becomes even more important. And our election forecasting models are pretty good at this. I think on average, they’re pretty well-calibrated. But that statistical calibration, the goodness of fit of those models in being well-calibrated, does not necessarily translate to improving conversation about elections at the same rate. It’s still possible that a well-calibrated model could be used to sway opinion about public opinion in bad ways. I think that we saw that in 2016, certainly.
Matt Grossmann: So, you’re asking politicians and the public to believe public opinion polls and think about how widely their own opinions are shared by the American public, at a time when that sounds a tad utopian, because we can’t even get people to accept the clear election results of the previous elections. So, how can we expect polling to play an important role in democratic responsiveness amid the current resistance to expert information or data of any kind?
G. Elliott Morris: Yeah, I really wanted to write an epilogue to the book, because there’s no great answer to partisan disinformation or bad-faith elite opinion leadership. That is going to have downstream effects on public opinion. When I talked to the political science scholars of polls, but also to pollsters themselves, they’re pretty forthright that you want to put on your citizen hat here and think about whether or not these opinions that you’re revealing in the polls are all that informed, or whether they could be changed over time. So, if you have opinions that are informed by propaganda (we really observe that historically during wartime, but right now we might also observe it as the result of polarized information environments), then you don’t want to base all of your decisions on those polls.
But if you have an issue that’s being polled regularly over time that has a pretty steady stream of information that’s covered by a wide ideological variety of outlets and by smart journalists who are fair to the facts, then I think you can trust the opinion on those issues. But I also wanted to write in this epilogue that that’s not necessarily a problem of the polls themselves. That’s a product of our electoral institutions and of the other institutions that are acting on voter psychology. You don’t necessarily knock pollsters for that, but that is something else to take into account when you’re writing stories about people.
Matt Grossmann: So, you are a student of political science and keep close tabs on the research, but you also have enough distance to give a bit of a critique. Polling has historically been very important to how the public perceives social science in general, or the capacity for social science. So, what have you observed in the positives and negatives of, especially, the relationship between social science and the media or informing the public? And is there any lesson from polling that we should take into consideration there?
G. Elliott Morris: That’s a great question. In the earliest days of numbers being used in the newsroom, we got a book by the professor Philip Meyer about how to use numbers in the newsroom: how polls, not just polls, but other sources of information are generated, how to process that information and then include it in your stories in context. That, I think, was huge. I mean, that book is used in data journalism, but also normal journalism coursework, even to this day. And I think that data journalism on the whole, if we’re taking social scientists in the newsroom to represent social science, is a product of the increasing quantitative rigor of lots of social science, which also, I think, has been positive. You’ve written a book about how social science got better, so maybe you would be better placed to answer that question than I would.
And we rely at work now, on the data journalism team, on lots of social science techniques on a daily basis, so that’s one way that it’s shaped our work. And it helps us tell stories we wouldn’t otherwise be able to tell, using numbers we can process, the big data or even small data, what have you, and write stories on society and the people as revealed by social science, even when we’re not doing our own social science. That’s a way of saying it’s enabled us to tell new stories.
To the extent that quantitative social science and quantitative journalism have taken precision too far, their work has led people to discover stories in data that are the results of biases or fabrication, or what have you, and left some people blind to problems in the data generation process. That’s also a direct issue with the rise of quantitative journalism and social science. Again, you have to blame the bad things on the bad actors to some extent, but the corrective there is to have good training for journalists in how they’re processing quantitative social science, too. And when I talk to data journalism students, most of them have already had social science coursework as well, quantitative and qualitative, so I think the university campus education on these things has also improved. And so, that improves journalism by way of teaching people how to process that information, if nothing else.
Matt Grossmann: So, one of the reasons for skepticism on the right of both journalism and academics is that we’re disproportionately composed of Democrats and liberals. Polling, or any kind of quantitative data, should be a potential check on those biases, but there’s still skepticism. Is there reason to believe that the characteristics of the people who are running surveys and interpreting them matter for the results we get?
G. Elliott Morris: Certainly ideologically biased questions pop up a lot, even from public pollsters with, presumably, and I’m just trying to be careful here with my words, neutral partisanship. Or people who have worked in politics before are now conducting so-called nonpartisan public polls. And the good thing about that is, if the pollster’s releasing a questionnaire, you can check it, and if they’re not releasing a questionnaire, you can rationally trust that number less. But that’s really incumbent on journalists reporting on that information to make those judgments in their reporting of it. And usually, of course, there’s some selecting on the dependent variable here, but the journalists I talk to are pretty good at that. That’s not all journalists. There are certainly outlets outside the mainstream, I think, that can be even worse with this. That’s not necessarily their fault. There is obviously a pressure to produce content in journalism for the audience. That’s literally the only way to make money, I guess, in the grand scheme of things. And so, there’s going to be some incentives here other than to tell the truth of those numbers.
But hopefully, when people read the book, they might understand those things and act accordingly. I’m hoping the book can be educational, especially along these lines. There are some examples of ideologically biased polls in the book that serve as a cautionary tale. But it’s certainly a real danger along certain axes. Inside academia, the way that those biases might present themselves… The academics I talk to seem to be pretty straight shooters, even if they’re like 99% liberal. That’s not a new finding, certainly. So, the way that those biases might present themselves in the output is in the questions that get asked of the public. If you’re only asking about the things that are important to you as an elite on one side of the ideological spectrum, you might not be revealing a fair portrait of American public opinion as a whole. Maybe there could be some issues in the response options to those questions, as well. But I think there are probably more framing effects here than there are effects of actual biases in the survey questions themselves.
Matt Grossmann: So, what’s next for you? Is there a new book in the offing, or what will you be looking for in 2022 to write the paperback epilogue maybe?
G. Elliott Morris: The paperback epilogue will be decisively more negative, probably, about the polls, mostly as a function of a more negative environment for the government and the public, if one ever materializes. The really great thing about writing this book… And I really enjoyed writing it. It’s a departure from my daily job, which is to write, essentially, 1,000 words at most, typically closer to 500 words, about some story and then to move on. The great thing was being able to spend a lot of time with the same subject, especially something that I discovered was so important to everyone in our democracy.
And I also got to talk to lots of really smart people like you and your colleagues, and people I have known for a really long time, or a relatively long time in the scheme of my career, which is also relatively short so far, in polls and political science. So, hopefully, if there’s another book, it’s informed by the same themes, by political science, by trying to elevate lots of this information I’ve learned to the public consciousness, to try to improve journalism or maybe, even if we’re lucky, improve democracy. Yeah, I’m going to take a breather before I even think about that next one.
Matt Grossmann: And anything we didn’t get to that you wanted to include or any take home message you want to leave the listeners with?
G. Elliott Morris: The book is hopeful about surveys. It is a decisively pro-democracy book. I want people to use polls for democratic purposes, even though there are all these caveats that we’ve talked about here, in a pretty tough conversation, by the way, Matt. And aside from all of that, my message in the book to reporters is, if you’re going to report on a poll, you need to think about it like you would any other product of science. There’s an extensive data generating process behind every single top-line number you get from a poll. You have to think about these polls like a science reporter would think about an academic article. That’s really the only way to digest this information. So, I don’t have that direct message to the press in the book, so I’ll put that one here. Hopefully some people will listen and change their minds. And that would go even further in changing the conversation about polls and changing how we think about how we use them.
Matt Grossmann: There’s a lot more to learn. The Science of Politics is available biweekly from the Niskanen Center and is part of the Democracy Group Network. I’m your host, Matt Grossmann. If you liked this discussion, I recommend checking out these episodes: How Much Are Polls Misrepresenting Americans?, How Does the Public Move Right When Policy Moves Left?, The Hyper Involved Versus the Disengaged, The Role of Political Science in American Life, and Interpreting the Early Results of the 2020 Election. Please check out Strength in Numbers, and then listen in next time.