Data Demystified #2: Analyzing "Don't Knows" and "Neithers"
Data for Progress seeks to illuminate the world using the newest techniques in data science and to bring those techniques to the general public. To that end, we'll be featuring blog posts explaining these data science techniques and how they can improve our understanding of the world. Our second blog post comes from political scientists Anne Whitesell (@annewhitesell) and Kevin Reuning (@KevinReuning), who used our data to explore "don't know" and "neither" responses.
By Kevin Reuning (@KevinReuning) and Anne Marie Whitesell (@annewhitesell)
The Problem
Progressive candidates, and their supporters, are introducing a variety of new policy ideas to the public agenda. Policy alternatives such as public banking, universal basic income, and job guarantees are for the first time gaining widespread interest among advocacy groups and candidates. While these ideas may generate high levels of enthusiasm among the activists who have championed them for some time, they are unfamiliar to the mass public, which complicates attempts to “measure” public attitudes on these subjects. Voters may be less aware of the intricacies of these ideas, but they are nevertheless asked for their opinions.
Survey respondents who do not have a strong opinion on issues that are new to the progressive agenda, or whose interests are “uncrystallized,” face two options when answering survey questions: they can select the middle option (“neither agree nor disagree”) or they can say they “don’t know” how they feel about the issue. Respondents who are self-conscious about their knowledge may answer that they “neither agree nor disagree” with an issue rather than admit that they do not know enough about it to provide a response. Settling for an easy, acceptable answer in this way, rather than working out a genuine one, is often called satisficing.
In addition to the complications that arise from asking survey respondents about issues with which they are unfamiliar, the political science and psychology literatures have shown that some groups of people are more likely to take the middle stance on an issue or declare that they “don’t know.” In studies of political knowledge, for instance, women have been shown to have a lower propensity to guess when they are unsure; that is, women are more likely to say that they don’t know, while men are more likely to try to come up with an answer on the spot. Moreover, research has shown that lower levels of measured political knowledge among ethnic and racial minority groups are more likely the result of different political experiences that provide different kinds of knowledge. Analysis of public opinion surveys therefore needs to consider how responses may have racialized and gendered components.
A Solution
We account for the uncertainty that accompanies new issues by estimating what survey responses would look like if every respondent had a “true” position on the issue. For those who like math, a more complete discussion of how we do this is available here. In short, we use the data to estimate latent variables for each individual: their ideology, their propensity to select the middle category as a satisficing option, and their propensity to say “don’t know.” Identifying “don’t know” responses is easy, as that is an explicit survey option. In contrast, we cannot directly observe whether someone selected the middle category because they were satisficing rather than because they were truly ambivalent about the policy. Instead, we model this process with a middle-inflated logit, where we assume middle responses are a function of an individual’s probability of being a satisficer and of being ideologically in the middle, while all other responses are a function of an individual’s probability of not being a satisficer and their ideological position. In addition, we model the latent variables as functions of a participant’s characteristics, where not all characteristics are used to model each latent variable.
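To make the mixture concrete, here is a minimal sketch in Python of what the likelihood for a single survey item could look like under these assumptions. The variable names (`alpha`, `beta`, `cut`, `p_dk`, `p_sat`) are our own illustrative choices rather than the notation of the full write-up linked above, which estimates all of these quantities jointly; the sketch simply treats them as given.

```python
import numpy as np
from scipy.special import expit  # inverse logit

def middle_inflated_loglik(y, theta, p_dk, p_sat, alpha, beta, cut):
    """Log-likelihood of one item under a middle-inflated ordered logit
    with a separate "don't know" stage. All per-respondent inputs are arrays.

    y     : integer responses, 0=oppose, 1=neither, 2=support, 3=don't know
    theta : latent ideology for each respondent
    p_dk  : each respondent's probability of saying "don't know"
    p_sat : each respondent's probability of satisficing (defaulting to the middle)
    alpha, beta : item intercept and discrimination (hypothetical names)
    cut   : positive offset placing the ordered cutpoints at -cut and +cut
    """
    eta = alpha + beta * theta
    # Ordered-logit probabilities for a sincere (non-satisficing) answer
    p_low = expit(-cut - eta)        # P(oppose  | sincere)
    p_high = expit(eta - cut)        # P(support | sincere)
    p_mid = 1.0 - p_low - p_high     # P(neither | sincere)

    # Mixture: "don't know" first; satisficing then inflates the middle category
    probs = np.column_stack([
        (1 - p_dk) * (1 - p_sat) * p_low,            # oppose
        (1 - p_dk) * (p_sat + (1 - p_sat) * p_mid),  # neither (inflated)
        (1 - p_dk) * (1 - p_sat) * p_high,           # support
        p_dk,                                        # don't know
    ])
    return np.log(probs[np.arange(len(y)), y]).sum()
```

Note how the "neither" row mixes two stories: a respondent can land there by satisficing or by sincerely sitting in the ideological middle, which is exactly why the two cannot be separated without a model.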
After estimating the latent variables, we use the measure of ideology and its relationship to the survey questions to simulate what responses would look like if no one said “don’t know” and no one satisficed. As estimates they are not perfect, but they can give us a hypothetical picture of what a full-information world would look like.
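As an illustration of this counterfactual step, a sketch along the following lines could draw simulated responses with both the “don’t know” and satisficing propensities set to zero. The function name and parameter values are hypothetical, and in practice these draws would be averaged over posterior samples of the latent variables rather than computed from a single point estimate.

```python
import numpy as np
from scipy.special import expit

def simulate_full_information(theta, alpha, beta, cut, seed=None):
    """Draw responses as if no one said "don't know" and no one satisficed,
    i.e. with both inflation propensities set to zero."""
    rng = np.random.default_rng(seed)
    eta = alpha + beta * theta
    p_low = expit(-cut - eta)   # P(oppose)
    p_high = expit(eta - cut)   # P(support)
    u = rng.random(len(theta))
    # 0 = oppose, 1 = neither, 2 = support
    return np.where(u < p_low, 0, np.where(u < 1.0 - p_high, 1, 2))

# Hypothetical usage: stand-in ideology estimates and made-up item parameters
theta_hat = np.random.default_rng(1).normal(size=1_000)
sims = simulate_full_information(theta_hat, alpha=0.2, beta=1.5, cut=0.8, seed=2)
print((sims == 2).mean())  # share supporting in the full-information world
```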
Some Results
We can start by looking at #AbolishICE, a proposal that only recently became common in progressive circles. Not surprisingly, surveys asking opinions on abolishing ICE have a high proportion of “don’t know” responses. In the figure below, the bars report the unadjusted support for abolishing ICE among non-white women, non-white men, white women, and white men. Just as suspected above, these sorts of responses are most common among women: non-white women are the most likely to say that they don’t know, and the most likely to say they neither oppose nor support the idea of abolishing ICE. The lines show our estimates of responses if everyone provided a true position based on their ideology. What we see is that a majority of non-white women now support abolishing ICE, while the estimate for non-white men also increases substantially. In contrast, the increases in the proportions opposing and supporting are about equal for white women, and our estimate of opposition among white women now crosses the 50% threshold. White men continue to oppose abolishing ICE, although the gap between support and opposition narrows.
Support for public generic drugs shows a different pattern of change when responses are re-estimated as a function of ideology alone. In the raw data, white women are the most likely to support public generic drugs, while non-white men are the least likely to support it (and the most likely to oppose it). After adjusting for the differential likelihood of saying “don’t know” and of providing a false “neither,” we see that non-white women are the most likely to support public generic drugs, and non-white men support it at approximately the same rate as white women. Support among white men barely changes, while opposition among them increases, although only marginally.
Adjustments do not always lead to an increase in support; universal basic wealth, for example, loses ground. Opposition to it increases among almost all groups, while support increases only marginally, if at all.
So What?
The estimates here should not be taken as a new truth. We are not attempting to say that real support for these issues is drastically different from what surveys find. What we do want activists and others to understand is that surveying on new issues is tricky. Not all individuals respond the same way when presented with something they are unsure about. And these differences in responses matter, as it tends to be the more conservative voices (white men) that are surest of themselves and therefore loudest. Polling and surveying are valuable tools, but we need to think about how people respond to these questions as we ask them and what that means for the way we implement our strategies.
Anne Marie Whitesell (@annewhitesell) is an assistant professor of political science at Ohio Northern University.
Kevin Reuning (@KevinReuning) is an assistant professor of political science at Miami University.
Based on Data for Progress (@DataProgress) / YouGov Blue polling fielded between July 13 and 16 with 1,515 voting-eligible adults, weighted to be nationally representative. See question wording and more data analysis here.