
By Rainer Kocsis, Senior Analyst

It was Tom Bradley’s 1982 race for governor of California, which he lost to George Deukmejian despite leading in the public opinion polls, that gave the Bradley Effect its name.

In a recent Pivotal Research white paper, we discussed how poor sampling strategies create unreliable data. But sometimes, even an unbiased sample can provide biased responses.

In this paper, we highlight one of the most common errors in market research, social desirability bias, and offer remedies to avoid it.

You can download a PDF version of this piece here, or read on!

The “Bradley Effect” is a polling phenomenon in which a candidate’s strong support in opinion polls is not reflected in election results. The concept was named following Tom Bradley’s run for Governor of California in 1982.

Bradley, the Democratic mayor of Los Angeles, was ahead of his Republican opponent George Deukmejian entering the final days of the election. Bradley had a clear lead and numerous media outlets confidently projected that he would win. It was a different story on election night: much to the puzzlement of the Democratic party and many Californians, Deukmejian defeated Bradley.


Political observers posited that some Republican voters voiced their support for Bradley in phone polling to avoid appearing racist or politically incorrect. In other words, the expression of certain opinions and attitudes is less socially acceptable, which may lead some people to tell pollsters that they are more tolerant than they actually are.

The Silent Majority? 

Many additional examples of the Bradley Effect have appeared in the years since the namesake’s 1982 loss. The same phenomenon has been documented at various points in history under many different names.

      • In 1983, Chicago mayoral candidate Harold Washington, a Democrat, received a smaller vote share than the polls predicted.
      • In 1989, Virginia gubernatorial candidate Douglas Wilder (D) won a narrower victory over his Republican opponent than polling indicated, leading some observers to call it the “Wilder Effect.”
      • David Dinkins (D) lost to Rudy Giuliani (R) in the 1993 New York City mayoral election, prompting talk of a “Dinkins Effect.”
      • Polling in the 2016 US Presidential election infamously underestimated Donald Trump’s support and overestimated support for Hillary Clinton.

The Bradley Effect has also been observed outside of a US context: in Britain it is known as the “Shy Tory Factor,” and in Germany as the “Spiral of Silence.”

Nota bene: the Bradley Effect and racism

It is worth noting that Tom Bradley, Douglas Wilder, and David Dinkins are all Black, and that one of the primary explanations for the Bradley Effect is race. The effect does disproportionately strike Black politicians: in 2008, some political pundits believed that Obama needed to lead the national pre-election polls by 6 to 9 points before he could be assured of victory.

However, the Bradley Effect is not about racism per se. It is about misleading pollsters for fear of the interviewer’s judgement, and racism is sometimes, but not always, part of that fear.

Members of the religious right, for example, may deceive pollsters about their views on issues like abortion if they don’t think their positions are socially acceptable. In other words, the effect can arise around any cultural issue (as opposed to a policy issue) that is generally not considered a “valid” reason for voting against a particular candidate.

No one would deny that race matters in politics, and hidden racial motives certainly contribute to the Bradley Effect. However, many other factors can impact a candidate’s chances in the aggregate.

Ask no questions, be told no lies

In the field of survey research, the idea that respondents may misrepresent their true intentions or preferences because certain views are perceived as more socially acceptable than others is known as social desirability bias. It poses a serious limitation on interpreting survey findings: popular opinions are over-reported and unpopular opinions are under-reported.

Social desirability bias occurs when survey respondents answer according to society’s expectations, or according to how they think the researcher wants them to respond, rather than according to their actual feelings.

Social desirability bias does not describe individuals who are unashamed of holding controversial points of view; it applies to those who know their views are unpopular and attempt to obscure them. In other words, it occurs when survey respondents align their answers with what will be viewed favourably by others, regardless of how they really feel.

Social desirability bias is also of concern any time there is a societal taboo or expectation around the topic being investigated, such as sexual behaviour and illegal acts:

  • Under-reported – Respondents typically under-report the frequency of substance use, deny it outright, avoid the question, or attempt to rationalize the behaviour (e.g., “I only use drugs at parties”). When reporting numbers of sexual partners, men tend to inflate their count, while women tend to understate theirs.
  • Inflated – Achievements, such as income and earnings, are uncomfortable topics and often inflated. Indicators of charity or benevolence, as well as compliance with rules and regulations, are also often inflated.
  • Avoided or denied – Especially sensitive topics, such as family planning (including use of contraceptives and abortion), religion, and patriotism, are often avoided or met with denial for fear of the interviewer’s judgement.

In other words, respondents tend to present themselves in the most favourable way possible by giving “pseudo-opinions”: responses that they feel are more appropriate or socially acceptable to others, rather than a true reflection of their own position.


The takeaway and empirical solutions

Social desirability bias is an important problem, and ignoring it can compromise the reliability, validity, and fairness of whatever quantity is being measured or assessed. Over-estimation of scores on socially desirable measures also attenuates the predictive power of the research, since the data will contain more error.

To tackle this problem, several empirical approaches have been proposed and strategically deployed to detect and limit social desirability bias and uncover respondents’ latent views.


Survey administration that emphasises anonymity and confidentiality, such as Interactive Voice Response (IVR) surveys, can elicit more honest answers. Impersonal IVR polls (a.k.a. “robopolls”) involve no interaction with a human at all, so the respondent does not feel like they are talking to a “neighbour” to whom they must appear socially desirable, or that the interviewer is “pushing” them to support a particular candidate or position. In 2016, voters supported Hillary Clinton by a 21-point margin in live-interview polls but only by a 7-point margin in automated polls. In 2012, Barack Obama performed better in live-interview polls, while Mitt Romney performed better in automated polls.

An anonymous survey setting can also be established with paper surveys, returned by mail or deposited in a ballot box, or completed electronically online via computer, smartphone, or tablet. Even in live-interview surveys, assurances of data confidentiality can decrease suspicion and concern, and increase the subject’s trust that their responses to sensitive questions will not be linked to their personal information.

Specialized questioning techniques may also reduce bias when asking questions sensitive to social desirability. The unmatched-count technique asks respondents to indicate how many items from a list apply to them, without saying which. Respondents are randomized to receive either a list of non-sensitive items or that same list plus the sensitive item of interest. The difference in the average number of items endorsed between the two groups indicates how many respondents in the second group said yes to the sensitive item.
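To make the arithmetic concrete, here is a minimal simulation of a hypothetical list experiment in Python. The item probabilities, sample size, and true prevalence are invented for the illustration; the point is simply that the difference in mean counts between the two groups recovers the sensitive item’s prevalence.

```python
import random
import statistics

def simulate_list_experiment(n=20_000, true_prevalence=0.30):
    """Simulate an unmatched-count (list) experiment.

    Control respondents see four non-sensitive items; treatment
    respondents see the same four plus the sensitive item. Each
    person reports only HOW MANY items apply to them, never which.
    """
    base_item_probs = [0.5, 0.3, 0.6, 0.2]   # illustrative assumptions

    control, treatment = [], []
    for _ in range(n):
        count = sum(random.random() < p for p in base_item_probs)
        if random.random() < 0.5:
            control.append(count)            # non-sensitive list only
        else:
            count += random.random() < true_prevalence
            treatment.append(count)          # list plus sensitive item

    # The difference in mean counts estimates the sensitive item's prevalence.
    return statistics.mean(treatment) - statistics.mean(control)

print(f"Estimated prevalence: {simulate_list_experiment():.3f}")  # ~0.30
```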

The grouped-answer method (also known as the “two-card method,” “triangular method,” “crosswise method,” or “hidden sensitivity method”) is a type of randomized response technique in which participants choose among combinations of answer choices, such that one sensitive response option is paired with at least one non-sensitive response option. For example, a participant is asked whether their birth year is even and whether they have performed an illegal activity, and instructed to select A if the answers to both questions are the same (both yes or both no) and B otherwise. Because sensitive and non-sensitive questions are combined, the participant’s response to the sensitive item is masked.
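The arithmetic behind the crosswise variant can likewise be sketched in a few lines. One practical caveat: the estimator requires the known probability of a “yes” on the non-sensitive question to differ from 0.5, so a birth-year question (roughly 50/50) identifies prevalence poorly in practice; the hypothetical sketch below instead assumes a birthday-month question with probability 1/6.

```python
import random

def crosswise_estimate(share_choosing_a, p_nonsensitive):
    """Prevalence estimate for the crosswise method.

    P(choose A) = pi * p + (1 - pi) * (1 - p), where pi is the
    sensitive-trait prevalence and p = P(yes on non-sensitive item),
    which must be known and different from 0.5.
    """
    p = p_nonsensitive
    return (share_choosing_a + p - 1) / (2 * p - 1)

# Simulated check with invented numbers: true prevalence 20%, and a
# non-sensitive question with p = 1/6 (e.g. "Is your mother's
# birthday in January or February?").
random.seed(1)
pi_true, p, n = 0.20, 1 / 6, 100_000
chose_a = sum(
    (random.random() < pi_true) == (random.random() < p)  # same answers -> A
    for _ in range(n)
)
print(f"Estimated prevalence: {crosswise_estimate(chose_a / n, p):.3f}")  # ~0.20
```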

Another prominent approach to sensitive questions is the nominative technique, or “best friend technique,” which asks participants about the behaviour of their close friends rather than about their own behaviour. For example, participants are asked whether, to their knowledge, their best friend has engaged in a certain sensitive behaviour. This enables the researcher to estimate the prevalence of the behaviour in the study population without needing to know the true state of any one individual respondent.
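A deliberately naive version of this estimate is sketched below, under the strong assumptions that best friends are representative of the population and that respondents report accurately about them; published versions of the nominative technique add corrections (for example, for how many people know about each friend’s behaviour) that are omitted here.

```python
def nominative_estimate(best_friend_reports):
    """Naive prevalence estimate from best-friend reports.

    best_friend_reports: one boolean per respondent, True if they say
    their best friend has done the sensitive behaviour. Assuming best
    friends roughly mirror the population, the share of True reports
    estimates the behaviour's prevalence.
    """
    return sum(best_friend_reports) / len(best_friend_reports)

# Invented example data: 2 of 5 respondents report the behaviour.
print(nominative_estimate([True, False, False, True, False]))  # 0.4
```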

A further caveat applies to forced-choice questions in general. Forcing participants to choose a substantive answer when they honestly have no opinion (or don’t understand the question) leaves them no option but to give a response suggesting that they do. The standard solution to fence-sitting, forcing respondents who claim to have no opinion to choose an option, thus creates problems for those who genuinely have none. Allowing fence-sitting may be necessary if the researcher’s goal is to learn something about people who aren’t familiar with every topic in the survey; in that case, one of the other approaches described above may be more appropriate.

While these complex question techniques may reduce social desirability bias, they may also confuse respondents or be misunderstood. Beyond specific techniques, social desirability bias can be reduced by wording questions neutrally and by avoiding leading and loaded questions, a topic this white paper series has covered previously.

Wrap-up

As polling techniques improve and their precision increases, the Bradley Effect may continue to fade; perhaps one day it will disappear entirely.

Until then, though, market research professionals need to be aware of this source of error and understand how to avoid it. If you want confidence that the data you collect are free of it, select a partner with considerable and demonstrated experience.

If you are interested in more engagement opportunities with Pivotal Research, please contact the author at rkocsis@pivotalresearch.ca.

