(Pew Research Center)

Pew Research Center conducted this report to understand how errors in correctly representing the level of support for Joe Biden and Donald Trump in preelection polling could affect the accuracy of questions in those polls (or other polls) that measure public opinion on issues. Specifically, if polls about issues underrepresented the Republican base the way that many 2020 preelection polls appeared to, how inaccurate would they be on measures of public opinion about issues? We investigated by taking a set of surveys that measured a wide range of issue attitudes and using a statistical procedure known as weighting to make them mirror two different scenarios. One scenario mirrored the true election outcome among voters (a 4.4-point Biden advantage), and another substantially overstated Biden's advantage (a 12-point lead). For this analysis, we used several surveys conducted in 2020 with more than 10,000 members of Pew Research Center's American Trends Panel (ATP), an online survey panel that is recruited through national, random sampling of residential addresses, which ensures that virtually all U.S. adults have a chance of selection. Questions in these surveys measured opinions on issues such as health care, the proper scope of government, immigration, race, and the nation's response to the coronavirus pandemic. These opinions were examined to see how they differed between the two scenarios.

Most preelection polls in 2020 overstated Joe Biden's lead over Donald Trump in the national vote for president, and in some states incorrectly indicated that Biden would likely win or that the race would be close when it was not. These problems led some commentators to argue that "polling is irrevocably broken," that pollsters should be ignored, or that "the polling industry is a wreck, and should be blown up."

A large overstatement of voter support for Joe Biden has a minimal effect on measures of opinion about issues

The true picture of preelection polling's performance is more nuanced than depicted by some of the early broad-brush postmortems, but it is clear that Trump's strength was not fully accounted for in many, if not most, polls. Election polling, however, is just one application of public opinion polling, though obviously a prominent one. Pollsters often point to successes in forecasting elections as a reason to trust polling as a whole. But what is the relevance of election polling's problems in 2020 for the rest of what public opinion polling attempts to do? Given the errors in 2016 and 2020, how much should we trust polls that attempt to measure opinions on issues?1

A new Pew Research Center analysis of survey questions from nearly a year's worth of its public opinion polling finds that errors of the magnitude seen in some of the 2020 election polls would alter measures of opinion on issues by an average of less than 1 percentage point. Using the national tally of votes for president as an anchor for what surveys of voters should look like, analysis across 48 issue questions on topics ranging from energy policy to social welfare to trust in the federal government found that the error associated with underrepresenting Trump voters and other Republicans by magnitudes seen in some 2020 election polling varied from less than 0.5 to 3 percentage points, with most estimates changing hardly at all. Errors of this magnitude would not alter any substantive interpretations of where the American public stands on important issues. This does not mean that pollsters should quit striving to have their surveys accurately represent Republican, Democratic and other viewpoints, but it does mean that errors in election polls don't necessarily lead to comparable errors in polling about issues.

How is it possible that underestimating GOP electoral support could have such a small impact on questions about issues?

Why did we choose to test a 12-point Biden lead as the alternative to an accurate poll?

We created a version of our surveys with an overstatement of Biden's advantage in the election (a "tilted version") to compare with a "balanced version" that had the correct Biden advantage of 4.4 percentage points. The 12 percentage point Biden lead used in the "tilted" version of the simulation is arbitrary, but it was chosen because it was the largest lead seen in a national poll released by a major news organization in the two weeks prior to Election Day, as documented by FiveThirtyEight. Several polls had Biden leads that were nearly as large during this time period. The simulation, including the manipulation of party affiliation among nonvoters, is described in greater detail below.

This finding may seem surprising. Wouldn't a poll that forecast something as large as a 12 percentage point Biden victory also mislead on what share of Americans support the Black Lives Matter movement, think that the growing number of immigrants in the U.S. threatens traditional American customs and values, or believe global climate change is mostly caused by human activity?

The accuracy of issue polling could be harmed by the same problems that affected election polling because support for Trump vs. Biden is highly correlated with party affiliation and opinions on many issues. Pew Research Center has documented the steadily increasing alignment of party affiliation with political values and opinions on issues, a type of political polarization. It stands to reason that measures of political values and opinions on issues could be harmed by whatever it is that led measures of candidate preference to be incorrect.

Only "highly correlated" does not mean "the same as." Fifty-fifty on problems where sizable majorities of Republicans and Democrats (or Trump and Biden supporters) line up on opposite sides, there remains more multifariousness in opinion among partisans about issues than in candidate preference. In recent elections, virtually nine-in-ten of those who place with a political party vote for the presidential candidate of that political party, a share that has grown over fourth dimension. But that loftier degree of consistency betwixt opinions on issues and candidate preference – or party affiliation – is rare. That fact limits the extent to which errors in estimates of candidate preference can impact the accuracy of issue polling.

Visualizing a closely divided electorate

Election polling in closely divided electorates like those in the U.S. right now demands a very high degree of precision. Sizable differences in the margin between the candidates can result from relatively small errors in the composition of the sample. Changing a small share of the sample can make a large difference in the margin between two candidates.

Changing the preferences of 38 voters out of 1,000 respondents shifts the poll's margin from a 12-point Biden advantage to about 4 points, the actual 2020 election result

To visualize how few voters need to change to affect the margin between the candidates, consider a hypothetical poll of 1,000 adults. One version shows Biden prevailing over Trump by 12 percentage points (left side of the figure), while the version on the right shows the accurate election results. Biden voters are shown as blue squares and Trump voters as red squares (votes for third-party candidates are shown in gray along the bottom), and the strip in the middle shows the voters who change from the left figure to the right one.

The version on the right shows the actual 2020 election results nationally – a Biden advantage of a little more than 4 percentage points. The poll on the right was created by slightly increasing the representation of Trump voters and decreasing the representation of Biden voters, so that overall, the poll changes from a 12-point Biden advantage to a 4-point Biden advantage. This adjustment, in effect, flips the vote preferences of some of the voters. How many voters must be "changed" to move the margin from 12 points to about 4 points?

The answer is not very many – just 38 of the 1,000, or about 4% of the total. The Biden voters who are replaced by Trump voters are shown as the dark blue vertical strip in the middle of the left-hand panel of the graphic (12-point victory) and dark red in the right panel (more modest 4-point victory).
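To make the arithmetic concrete, here is a minimal sketch in Python. The 545/425/30 starting split is an assumption chosen only so that the margin begins at 12 points; it is not taken from the report's data.

```python
# Hypothetical 1,000-respondent poll showing a 12-point Biden lead.
# The 545/425/30 split is assumed purely for illustration.
biden, trump, other = 545, 425, 30

def margin(b, t, n=1000):
    """Biden's lead over Trump, in percentage points of the full sample."""
    return 100 * (b - t) / n

print(margin(biden, trump))  # 12.0

# Flip 38 respondents from Biden to Trump. Each flip moves the margin
# by 2/1,000, i.e. 0.2 percentage points.
flipped = 38
print(margin(biden - flipped, trump + flipped))  # 4.4
```

Each flipped respondent subtracts one vote from Biden and adds one to Trump, so the margin moves by 0.2 points per flip; 38 flips move it by 7.6 points, from 12 down to 4.4.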

In addition to shifting the margin in the race, this change in the sample composition has implications for all the other questions answered by the Trump and Biden voters. The Trump voters, whose numbers have increased statistically, now have a larger voice in questions about immigration, climate change, the appropriate size and scope of the federal government, and everything else in the surveys. The Biden voters have a correspondingly smaller voice.

But as may be apparent by comparing the pictures on the left and right, the two pictures of the electorate are quite similar. They both show that the country is very divided politically. Neither party has a monopoly on the voting public. Nevertheless, while the division is fairly close to equal, it is not completely equal – Republicans do not outnumber Democrats among actual voters in either one. But the margin among voters is small. It is this closeness of the political division of the country, even under the scenarios of a sizable forecast error, that suggests that conclusions about the broad shape of public opinion on issues are not likely to be greatly affected by whether election polls are able to pinpoint the margin between the candidates.

Simulating two versions of political support among the public

To demonstrate the range of possible error in issue polling that could result from errors like those seen in 2020 election polling, we conducted a simulation that produced two versions of several of our opinion surveys from 2020, similar to the manipulation depicted in the hypothetical example shown above. One version included exactly the right share of Trump vs. Biden voters (a Biden advantage of 4.4 percentage points) – we will call it the "balanced version" – and a second version included too many Biden voters (a Biden advantage of 12 percentage points, which was the largest lead seen in a public poll of a major polling organization's national sample released in the last two weeks of the campaign, as documented by FiveThirtyEight). We'll call it the "tilted version."

Just about all of Pew Research Center's public opinion polling on issues is conducted among the general public and not just among voters. Nonvoters make up a sizable minority of general public survey samples. In our 2020 post-election survey, nonvoters were 37% of all respondents (8% were noncitizens who are ineligible to vote and the rest were eligible adults who reported not voting). It's entirely possible that the same forces that led polls to underrepresent Trump voters would lead to the underrepresentation of Republicans or conservatives among nonvoters. Thus, we need to produce two versions of the nonvoting public to go with our two versions of the voters.

Unlike the situation among voters, where we have the national vote margin as a target, we do not have an agreed-upon, objective target for the distribution of partisanship among nonvoters. Instead, for the purposes of demonstrating the sensitivity of opinion measures to changes in the partisan balance of the nonvoter sample, we created a sample with equal numbers of Republicans and Democrats among nonvoters to go with the more accurate election result (the Biden 4.4-point margin among voters), and a 10-point Democratic Party affiliation advantage among nonvoters to go with the larger (and inaccurate) 12-point Biden margin among voters.2 These adjustments, in effect, simulate different samples of the public. In addition to the weighting to generate the candidate preference and party affiliation scenarios, the surveys are weighted to be representative of the U.S. adult population by gender, race, ethnicity, education and many other characteristics.3 This kind of weighting, which is common practice among polling organizations, helps ensure that the sample matches the population on characteristics that may be related to the opinions people hold.
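A minimal sketch of how such a reweighting might be carried out, assuming a simple one-variable post-stratification on reported vote choice (the column names, the pandas framing and the 55/43/2 "tilted" split are illustrative assumptions; the actual procedure rakes on many variables at once, and nonvoters would be adjusted separately by party affiliation):

```python
import pandas as pd

def weight_to_vote_targets(voters: pd.DataFrame, targets: dict) -> pd.Series:
    """Scale base weights so the weighted vote shares match the targets.

    One-variable post-stratification, for illustration only.
    """
    base = voters["weight"]
    shares = voters.groupby("vote_choice")["weight"].sum() / base.sum()
    factors = {group: targets[group] / shares[group] for group in targets}
    return base * voters["vote_choice"].map(factors)

# Illustrative voter targets: the actual 2020 national result (Biden +4.4)
# for the "balanced" version and an assumed 12-point lead for the "tilted" one.
balanced_targets = {"Biden": 0.513, "Trump": 0.469, "Other": 0.018}
tilted_targets   = {"Biden": 0.550, "Trump": 0.430, "Other": 0.020}

# voters["w_balanced"] = weight_to_vote_targets(voters, balanced_targets)
# voters["w_tilted"]   = weight_to_vote_targets(voters, tilted_targets)
```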

American Trends Panel surveys used in this report

The simulation takes advantage of the fact that our main source of data on public opinion is the American Trends Panel, a set of more than 10,000 randomly selected U.S. adults who have agreed to take regular online surveys from us. We conducted surveys with these same individuals approximately twice per month in 2020, with questions ranging across politics, religion, news consumption, economic circumstances, technology use, lifestyles and many more topics. For this analysis, we chose a set of 48 survey questions representing a wide range of important topics on nine different surveys conducted during 2020.

After the November election, we asked our panelists if they voted, and if so, for whom. We also collect a measure of party affiliation for all panelists, regardless of their voter status. With this information, we can manipulate the share of Biden vs. Trump voters in each poll, and Democrats vs. Republicans among nonvoters, and look back at their responses to surveys earlier in the year to estimate how our reading of public opinion on issues differs in the two versions.

Before describing the results in more detail, it's important to be explicit about the assumptions underlying this exercise. We can manipulate the share of voters for each presidential candidate and the share of Democrats and Republicans among nonvoters, but the results may not tell the full story if the Trump and Biden voters in our surveys do not accurately represent those voters in the population. For example, if believers of the internet conspiracy theories known as QAnon are a much higher share of Trump voters in the population than in our panel, that could affect how well our simulation reflects the impact of changing the number of Trump voters. The same is true for our adjustments of the relative shares of Democrats and Republicans. If the partisans in our panel do not accurately reflect the partisans in the general public, we may not capture the full impact of over- or underrepresenting one party or the other.

How much can the difference between these two scenarios affect measures of opinion on issues?

The adjustment from the tilted version (a 12-point Biden advantage with a 10-point Democratic advantage in party affiliation among nonvoters) to the balanced version (a 4.4-point Biden advantage with equal numbers of Democrats and Republicans among nonvoters) makes very little difference in the balance of opinion on issue questions. Across a set of 48 opinion questions and 198 answer categories, most answer categories changed by less than 0.5 percentage points. The average change associated with the adjustment was less than 1 percentage point, and approximately twice that for the margin between alternative answers (e.g., favor minus oppose). The maximum change observed across the 48 questions was 3 points for a particular answer and 5 points for the margin between alternative answers.
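The comparison itself amounts to computing each answer's weighted share under the two sets of weights and differencing them; a sketch under the same illustrative assumptions as above (function and column names are hypothetical, not the Center's code):

```python
import numpy as np

def weighted_pct(answers, weights, category):
    """Weighted percentage of respondents giving a particular answer."""
    answers = np.asarray(answers)
    weights = np.asarray(weights, dtype=float)
    return 100 * weights[answers == category].sum() / weights.sum()

def scenario_shift(answers, w_balanced, w_tilted, category):
    """Percentage-point change for one answer category when moving
    from the tilted weights to the balanced weights."""
    return (weighted_pct(answers, w_balanced, category)
            - weighted_pct(answers, w_tilted, category))
```

Averaging the absolute value of this shift across all 198 answer categories gives the kind of summary figure reported above.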

Most survey answers did not change when comparing 'balanced' and 'tilted' versions

One 3-point difference was on presidential job approval, a measure very strongly associated with the vote. In the balanced version, 39 percent approved of Trump's job performance, while 58 percent disapproved. In the tilted version, 36 percent approved of Trump's performance and 60 percent disapproved. Two other items also showed a 3-point difference on one of the response options. In the balanced version, 54% said that it was a bigger problem for the country that people did not see racism that was occurring, compared with 57% in the tilted version. Similarly, in the balanced version, 38% said that the U.S. had controlled the coronavirus outbreak "as much as it could have," compared with 35% who said this in the tilted version. All other questions tested showed smaller differences.

Estimates differ little between a poll mirroring the election and one overstating Biden support

Opinion questions on issues that have been at the core of partisan divisions in U.S. politics tended to be the only ones that showed any difference between the balanced version and the tilted version. Preference for smaller versus bigger government, a fundamental dividing line between the parties, differed by 2 points between the versions. Perceptions of the impact of immigration on the country, a core issue for Donald Trump, also varied by 2 points between the two versions. The belief that human activity contributes "a great deal" to global climate change was 2 points higher in the tilted version. The share of Americans saying that government should do more to help the needy was 2 points higher in the tilted version than the balanced version.

Despite the fact that news audiences are quite polarized politically, there were typically only small differences between the two versions in how many people had been relying on particular sources for news in the aftermath of the presidential election. The share of people who said that CNN had been a major source of news about the presidential election in the period after Election Day was 2 points higher in the tilted version than the balanced version, while the share who cited Fox News as a major source was 1 point higher in the balanced version than the tilted version.

The complete set of comparisons among the 48 survey questions is shown in the topline at the end of this report.

Why don't big differences in candidate preference and party affiliation result in big differences in opinions on issues?

Opinions on issues and government policies are strongly, but not perfectly, correlated with partisanship and candidate preference. A minority of people who support each candidate do not hold views that are consistent with what their candidate or party favors. Among nonvoters, support among partisans for their party's traditional positions – especially among Republicans – is even weaker. This fact lessens the impact of changing the balance of candidate support and party affiliation in a poll.

There's almost never a one-to-one correspondence between the share of voters for a candidate and the share of people holding a particular opinion that aligns with the opinion of that candidate's party. Three examples from a summer 2020 survey illustrate the point.

Most of the candidates' voters divide as expected on issues, but some do not

Asked whether they favor a larger government providing more services or a smaller government providing fewer services, nearly one-fourth of Biden's supporters (23%) opted for smaller government, a position not usually associated with Democrats or Democratic candidates. On a question about whether the growing number of newcomers from other countries threatens American values or strengthens American society, nearly one-third of Trump's supporters (31%) take the pro-immigrant view, despite the fact that the Trump administration took a number of steps to limit both legal and illegal immigration. And about one-fourth of Trump's supporters (24%) say that it is the responsibility of the federal government to make sure all Americans have health care coverage, hardly a standard Republican Party position.

Shifting the focus to party affiliation among nonvoters, we see even less fidelity of partisans to issue positions typically associated with those parties. For example, nearly half of Republicans and independents who lean Republican but did not vote (47%) said that the growing number of immigrants from other countries strengthens American society. And 43% of them favor a larger government providing more services. A 55% majority of Republican nonvoters in this survey believe that it is the responsibility of the federal government to make sure that all Americans have health insurance coverage. This is still considerably smaller than the share of Democratic nonvoters who think the government is responsible for ensuring coverage (78%), but it is far more than we see among Republican voters.

More variation in opinion on issues among nonvoters

These "defectors" from the political party line, in both directions and among both voters and nonvoters, weaken the ability of changes in the partisan or voting limerick of the sample to affect the opinion questions. Adding more than Trump voters and Republicans also does add more skeptics well-nigh immigration, simply nearly a third of the additional Trump voters say immigrantsstrengthen American social club, a view shared by about half of Republican nonvoters. This ways that our survey question on immigration does non change in lockstep with changes in how many Trump supporters or Republicans are included in the poll. Similarly, the Biden voter grouping includes plenty of skeptics about a larger regime. Pump up his support and you go more supporters of bigger regime, but, on balance, non as many as you might wait.

We want different things from opinion polls and election polls

Not all applications of polling serve the same purpose. We expect and demand more precision from election polls because the circumstances demand it. In a closely divided electorate, a few percentage points matter a great deal. In a poll that gauges opinions on an issue, an error of a few percentage points typically will not matter for the conclusions we draw from the survey.

Those who follow election polls are rightly concerned about whether those polls are still able to produce estimates precise enough to gauge the balance of support for the candidates. Election polls in highly competitive elections must provide a level of accuracy that is hard to reach in a world of very low response rates. Only a small share of the survey sample must change to produce what we perceive as a dramatic shift in the vote margin and potentially an incorrect forecast. As was shown in the graphical simulation earlier, an error of 4 percentage points in a candidate's support can mean the difference between winning and losing a close election. In the context of the 2020 presidential election, a change of that small size could have shifted the result from a spot-on Biden lead of 4.4 points to a very inaccurate Biden lead of 12 points.
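Expressed as a worked formula (an illustration for a two-candidate race, ignoring third-party votes): flipping a share f of respondents from one candidate to the other moves the margin by 2f, so

\[
f \;=\; \frac{\Delta_{\text{margin}}}{2} \;=\; \frac{12 - 4.4}{2} \;=\; 3.8 \text{ percentage points,} \quad \text{i.e., roughly 38 of 1,000 respondents.}
\]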

Differences of a magnitude that could make an election forecast inaccurate are less consequential when looking at issue polling. A flip in the voter preferences of 3% or 4% of the sample can change which candidate is predicted to win an election, but it isn't enough to dramatically change judgments about opinion on most issue questions. Unlike the measurement of an intended vote choice in a close election, the measurement of opinions is more subjective and likely to be affected by how questions are framed and interpreted. Moreover, a full understanding of public opinion about a political issue rarely depends on a single question like the vote choice. Often, multiple questions probe different aspects of an issue, including its importance to the public.

Astute consumers of polls on issues usually understand this greater complexity and subjectivity and factor it into their expectations for what an issue poll can tell them. The goal in issue polling is often not to get a precise percentage of the public that chooses a position but rather to obtain a sense of where public opinion stands. For example, differences of 3 or 4 percentage points in the share of the public saying they would prefer a larger government providing more services matter less than whether that is a viewpoint endorsed by a large majority of the public or by a small minority, whether it is something that is increasing or decreasing over time, or whether it divides older and younger Americans.

How do we know that issue polling – even by the different or more lenient standards we might apply to it – is accurate?

The reality is that we don't know for sure how accurate issue polling is. But good pollsters take many steps to improve the accuracy of their polls. Good survey samples are usually weighted to accurately reflect the demographic composition of the U.S. public. The samples are adjusted to match parameters measured in high-quality, high response rate government surveys that can be used as benchmarks. Many opinions on issues are associated with demographic variables such as race, education, gender and age, just as they are with partisanship. At Pew Research Center, we also adjust our surveys to match the population on several other characteristics, including region, religious affiliation, frequency of internet usage, and participation in volunteer activities. And although the analysis presented here explicitly manipulated party affiliation among nonvoters as part of the experiment, our regular approach to weighting also includes a target for party affiliation that helps minimize the possibility that sample-to-sample fluctuations in who participates could introduce errors. Collectively, the methods used to align survey samples with the demographic, social and political profile of the public help ensure that opinions correlated with those characteristics are more accurate.

As a result of these efforts, several studies have shown that properly conducted public opinion polls produce estimates very similar to benchmarks obtained from federal surveys or administrative records. While not providing direct evidence of the accuracy of measures of opinion on issues, they suggest that polls can accurately capture a range of phenomena including lifestyle and health behaviors that may be related to public opinion.

But it's also possible that the topics of some opinion questions in polls – even if not partisan in nature – may be related to the reasons some people choose not to participate in surveys. A lack of trust in other people or in institutions such as government, universities, churches or science might be an example of a phenomenon that leads both to nonparticipation in surveys and to errors in measures of questions related to trust. Surveys may have a smaller share of distrusting people than is likely true in the population, and so measures of these attitudes and anything correlated with them would be at least somewhat inaccurate. Polling professionals should be mindful of this type of potential error. And we know that measures of political and civic engagement in polls are biased upward. Polls tend to overrepresent people interested and engaged in politics as well as those who take part in volunteering and other helping behaviors. Pew Research Center weights its samples to address both of these biases, but there is no guarantee that weighting completely solves the problem.

Does any of this suggest that under-counting Republican voters in polling is acceptable?

No. This analysis finds that polls about public opinion on issues can be useful and valid, even if the poll overstates or understates a presidential candidate's level of support by margins seen in the 2020 election. But this does not mean that pollsters should quit striving to have their surveys accurately represent Republican, Democratic and other viewpoints. Errors in the partisan composition of polls can go in both directions. As recently as 2012, election polls slightly underestimated Barack Obama's support.

Despite cautions from those within and outside the profession, polling will continue to be judged, fairly or not, on the performance of preelection polls. A continuation of the recent underestimation of GOP electoral support would certainly do further damage to the field's reputation. More fundamentally, the goal of the public opinion research community is to represent the public's views, and anything within the profession's control that threatens that goal should be remedied, even if the consequences for estimates on topics other than election outcomes are small. Pew Research Center is exploring ways to ensure we reach the correct share of Republicans and that they are comfortable taking our surveys. We are also trying to continuously evaluate whether Republicans and Trump voters – or indeed, Democrats and Biden voters – in our samples are fully representative of those in the population.

Limitations of this analysis

One strength of this analysis is that the election is over, and it's not necessary to guess at what Trump support ought to have been in these surveys. And by using respondents' self-reported vote choice measured after the election, we avoid complications from respondents who may have changed their minds between taking the survey and casting their ballot.

However, this study is not without its limitations. It's based on polls conducted by just one organization, Pew Research Center, and these polls are national in scope, unlike many election polls that focused on individual states. The underlying mechanism that weakens the association between levels of candidate support (or party affiliation) and opinions on issues should apply to polls conducted by any organization at any level of geography, but we examined it using only our surveys.

Another important assumption is that the Trump voters and Biden voters who agreed to be interviewed are representative of Trump voters and Biden voters nationwide with respect to their opinions on issues. We cannot know that for certain.