This paper proceeds as follows. First, I present a brief history of survey research under repression. Survey research has been surprisingly widespread across autocracies since at least the 1960s, used for a variety of legitimate and illegitimate ends. Second, I offer a set of recommendations for assessing the accuracy of surveys conducted in repressive environments. This ‘checklist’ for evaluating such surveys can be employed even without specialized training in survey research or statistical methods. Finally, I conclude with a discussion of alternatives to mass surveys for assessing public attitudes in repressive polities. These alternatives can be used as informal checks on the validity of surveys or, when surveys are absent, as alternative proxies for public attitudes about government and other areas of interest.
An Overview of Survey Research under Repression
Only rarely has survey research broached a fourth category: independent surveys dealing with politically sensitive questions under genuinely repressive conditions. Rarer still, and perhaps of the greatest interest, are such independent, politically sensitive surveys conducted under conditions of growing repression.
Survey research by (and for) the state
State-sponsored surveys were also used widely for propaganda purposes (Welsh 1981). In the aftermath of Poland’s 1980-81 Solidarity Revolution, the Institute for Basic Problems of Marxism-Leninism at the Central Committee of the Communist Party publicized its survey of Polish workers in Spring 1982, reporting high levels of satisfaction with the government (Mason 1985, 205ff).
Building on the logic of these earlier efforts, formalized state-run polling emerged as threats to the Communist system intensified. In 1982, following the Solidarity Revolution, Poland’s General Wojciech Jaruzelski announced the planned formation of a state-run public opinion research institution. The result, the Public Opinion Research Center in Warsaw (CBOS), served “to carry out fast surveys on issues relevant to current political events. Politicians expected to be informed of changes in the social mood of the public, of the most important perceived problems, and of sources of possible protests and conflicts. ... the Center was conceived as a way of strengthening the Communist system” (Kwiatkowski 1992, 363; 1985). Soviet General Secretary Yuri Andropov made a similar announcement in 1983, and other Communist states began to follow suit. Toward the end of the Cold War the tables were turned, as state-sponsored research became a tool of reform-minded leaders eager to use public opinion to enact policy change (Slider 1985). Here again the purposes of the state, and not the accuracy of the surveys, were paramount.
Other politically sensitive survey research by authoritarian states appears to serve genuine information-gathering purposes. Sieger (1990) identifies information gathering as an essential motivation for opinion research in Marxist-Leninist societies, describing such research as “a policy tool employed to ensure the efficient control of society by the party elite. The information gathered may have been used in decision-making, in the evaluation of already implemented policies or in the manipulation and mobilization of the citizens” (ibid., 325). To this end East Germany had no fewer than four state-run institutions devoted to carrying out survey research (ibid., 327). In a similar vein, the Chinese government formed the China Social Survey System (CSSS) in 1984 to undertake polling on social, economic, and political issues of importance to the government (Mason 1989). Survey research to assess public mood, like polling to temper public mood, was common across the Soviet bloc (Welsh 1981).
It was never clear, however, to what extent these surveys provided the authorities useful information (Shlapentokh 1973). Sieger (1990, 328) notes how frequently East German surveys reported politically sensitive questions with no non-responses at all. Swafford (1992, 353) observes that, under the Soviet system, “survey researchers had no basis for expecting candor on the broad range of topics that might arouse the ire of authorities. There was no public opinion.” Non-Communist authoritarian systems exhibit similar patterns. Suleiman (1987, 63), for example, concludes that survey research in the Arab world is only possible if a survey question or theme is not “termed sensitive” by authorities.
Survey research in transitional states
Unlike the wave of polling in Western and Central Europe following World War II, however, this new wave of post-Soviet polling was accompanied by the same kinds of concerns over socially desirable response that typified polling under authoritarian rule. Grushin (1993) and Shlapentokh (1994) regard respondent anxiety as the chief, if not the only, reason for the gross failures to predict the outcome of Russia’s 1993 Duma elections. Petrenko and Olson (1994) cite the effect on voters of the government’s harsh repression of the opposition during the constitutional crisis of October 1993, only two months before the elections, concluding that up to one-third of Zhirinovsky voters surveyed prior to voting answered pollsters insincerely. Fear-biased response was also an issue during democratic transitions in Latin America (Anderson 1994; Bischoping and Schuman 1992; 1994).
Fortunately, many of the early concerns about respondent fear of politically sensitive questions during periods of political transition appear overstated. Using data unavailable in the earlier analyses of Russia’s 1993 elections, Miller, et al. (1996) find more support for the “late swing” hypothesis to explain Zhirinovsky’s unexpected success, casting doubt on explanations relying on ‘hidden,’ ‘fearful,’ or ‘ashamed’ voters (Shlapentokh 1994). In Latin America, response bias identified in pre-election polls was corrected with modified collection methods (Beltran and Valdivia 1999).
In other settings the prospects for coercion, and hence concern over response bias, are even more remote. In his analysis of a 1997 survey of attitude constraint among elites and the public in Beijing, Chen (1999) is content to ignore the possibility of response-desirability issues altogether, though most of the survey questions used in the analysis invite respondents to criticize the government. The same omission characterizes Raghubir and Johar’s (1999) contemporaneous study of opinion in Hong Kong following the 1997 handover from Britain to China, even though their survey, focusing on respondents’ attitudes about the return of Chinese rule, was conducted only one month after the July 1997 transition. Notably, before the handover a majority of Hong Kong residents were optimistic about a return to Chinese sovereignty (ibid.). Chen (1999, 194, 196) identifies China as a “transitional” state at this time, noting that authorities had reconsidered the use of mass coercion in the wake of Tiananmen Square.
Non-sensitive survey research
The bulk of contemporary opinion research in non-democracies, political or otherwise, consists of attitudinal studies. Tessler (2002), for example, examines the relationship between Islamic religiosity and democratic-mindedness across several Middle Eastern and North African countries. Related work in the Middle East analyzes attitudes toward religion, national identity, family, gender relations, Western culture, and democracy (Moaddel 2007; Tessler, et al. 2006; Inglehart 2003; Moaddel 2003; Moaddel and Azadarmaki 2002). While these studies may seek to draw inferences about respondents’ political dispositions, their questions tend to be of a general nature and avoid specific issues that could be considered politically sensitive (e.g., Tessler, et al. 2004).
Politically sensitive surveys under repression
Recent analyses of political opinion under repression have been confined to research questions and designs that can accommodate respondent fear. Geddes and Zaller (1989), for example, analyze support for government policies as a function of political awareness at the height of Brazil’s authoritarian rule in the early 1970s. Despite the repressive nature of the regime, complete with “abductions, torture, and murder to deal with outspoken opponents” (ibid., 325), Geddes and Zaller argue that the data are not overly taxed by problems of response desirability: the interviewers, who were college-aged and anti-regime, judged most interviewees to be ‘sincere’ respondents, and respondents judged ‘sincere’ and ‘insincere’ exhibited no differences in opposition to the regime (ibid., 326). More importantly, since their concern was with patterns rather than levels of support, distortions in reported levels of support for the government rendered response-desirability problems a non-issue for the analysis.
Similarly, Zhu and Rosen (1993) analyze individual-level causes of support for anti-government protests in China, a politically sensitive issue in the wake of the country’s 1986-87 student demonstrations. Of the three survey questions composing their dependent variable, support for protest, the two most politically sensitive, regarding the student demonstrations and the resignation of General Secretary Hu Yaobang, elicited “don’t know” responses of 34 and 44 percent, respectively. These “don’t knows” are retained in the analysis as a neutral category between support for and opposition to protests. Most recently, in Kern and Hainmueller’s (2009) analysis of the effects of Western media exposure on support for the government among East Germans, preference falsification does not pose a problem for the research design, for reasons similar to those cited by Geddes and Zaller (ibid., 381).
However, it is the exception rather than the rule that, in a repressive setting, the nature of the research question negates concerns over desirability pressures from sensitive questions. Early efforts to measure the validity of politically sensitive questions in the context of repression reflect this reality. In Poland, Sulek (1989) compared identical questions asked by the government-run opinion center (CBOS) and by an independent academic institute between 1983 and 1987. He found that the identity of the survey institution affected respondents’ willingness to voice opinions critical of the government, though this effect diminished as the Communist system unraveled in the late 1980s. Under perhaps more repressive conditions, in Bulgaria, Welsh (1981, 192) reports more striking “interviewer effects”: when identical questions were asked by party officials and by associates of an academic institution, support for the regime was 15 percent higher among respondents answering the regime’s surveyors.
Using Surveys from Politically Repressive Environments
1: Proper and transparent survey design
However, assuming the transparency of a survey’s method, the method should also be one that gives the survey basic integrity: the sample should be randomly drawn from the population of interest and large enough to mirror that population with reasonable certainty. For example, a random sample of about 1,000 respondents is adequate for surveys of the adult population of the United States; the best surveys of public opinion in the Russian Federation use a sample size of about 1,600. A sample of inadequate size will lack precision and limit the analyst’s ability to make inferences about the opinions of the population. However, quota-sampling techniques, which fill predetermined respondent ‘quotas’ by approaching additional potential respondents until each quota is met, may be inappropriate in repressive settings, where one type of respondent is willing to participate and another type is not. Quota sampling in repressive contexts may inadvertently oversample regime supporters and the more outspoken regime opponents while under-representing those who oppose the regime or its policies but are fearful of voicing that opinion.
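To make the sample-size logic concrete, the sketch below computes the margin of error for an estimated proportion under simple random sampling at a 95 percent confidence level. This is an illustration of the statistical intuition only, assuming simple random sampling and maximum response variance (p = 0.5); it is not drawn from any of the surveys discussed here.

```python
import math

def margin_of_error(n: int, p: float = 0.5, z: float = 1.96) -> float:
    """Margin of error for a proportion p from a simple random sample of size n.

    z = 1.96 corresponds to a 95 percent confidence level; p = 0.5 is the
    worst case (largest margin of error).
    """
    return z * math.sqrt(p * (1 - p) / n)

for n in (400, 1000, 1600):
    print(f"n = {n:5d}: +/- {100 * margin_of_error(n):.1f} percentage points")
# Prints roughly +/- 4.9, 3.1, and 2.5 percentage points, respectively,
# which is why samples of about 1,000-1,600 suffice for national surveys.
```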
2: Survey reliability
3: Survey validity
Substantive validity is the degree to which responses to politically sensitive questions reflect actual opinions and preferences. Even if criteria for survey design, execution, reliability, and demographic validity are met, the absence of substantive validity in a politically sensitive survey means that respondents are failing to tell the truth, presumably out of fear of the governing regime. In democratic countries, the honesty of individuals responding to politically sensitive questions is, ceteris paribus, the premise of all political and policy-oriented survey research. Such honesty cannot be assumed under political repression, necessitating new tests for discerning the truthfulness of individuals’ responses to questions they may fear to answer frankly.
Tests for substantive validity are more complex than the tests already summarized above; a full series of such tests is detailed in Horne and Bakker (2009), using several politically oriented opinion surveys from Iran. In brief, tests for substantive validity involve identifying a survey’s politically oriented questions and classifying them into two categories: ‘critical questions,’ in which the respondent has the opportunity to express criticism of the regime, and ‘non-critical questions,’ in which the opportunity to express support for or opposition to the government and its policies does not exist. Having identified a survey’s critical and non-critical questions (which must, among other qualities, possess ordinal-level response categories), it is possible to compare the attributes of the two question types. Surveys possessing a high degree of substantive validity are more likely to possess critical and non-critical questions with (1) similar variances and (2) similar frequencies of non-response. Significantly less variance in critical questions suggests a possible validity problem, inasmuch as responses become more uniform precisely when respondents have the opportunity to criticize the regime. That is, respondents may be ‘toeing the party line’ when the variance in response to critical questions is narrow relative to the variance in non-critical questions. Similarly, when non-response varies dramatically between critical and non-critical questions, it is likely that a significant number of respondents are adjusting their answers to fit the expectations of the regime.
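As a minimal sketch of the two comparisons described above (and not a reproduction of Horne and Bakker’s actual procedure), the following Python fragment contrasts the variance and the non-response rate of a hypothetical critical question with those of a hypothetical non-critical question; all responses shown are invented for illustration.

```python
import numpy as np
from scipy import stats

# Hypothetical ordinal responses on a 1-4 scale; np.nan marks non-response
# ("don't know" or refusal). The critical item invites criticism of the
# regime; the non-critical item does not.
critical = np.array([1, 1, 2, 1, 1, np.nan, 1, 2, 1, 1, np.nan, 1], dtype=float)
non_critical = np.array([1, 3, 2, 4, 2, 3, 1, 4, 2, 3, 2, 4], dtype=float)

crit = critical[~np.isnan(critical)]
noncrit = non_critical[~np.isnan(non_critical)]

# (1) Similar variances? Much lower variance on the critical item suggests
#     respondents may be 'toeing the party line'.
print("variance, critical:    ", crit.var(ddof=1))
print("variance, non-critical:", noncrit.var(ddof=1))
print("Levene test for equal variances:", stats.levene(crit, noncrit))

# (2) Similar non-response? A much higher rate on the critical item suggests
#     evasion of questions respondents fear to answer.
print("non-response, critical:    ", np.mean(np.isnan(critical)))
print("non-response, non-critical:", np.mean(np.isnan(non_critical)))
```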
Conclusion: Other Measures of Public Opinion
In the short term, however, the more common problem for many country analysts will be the lack of regular survey data emanating from their country of interest. How can the practitioner or academic speak about public opinion without surveys? A means of assessing opinion in the absence of surveys has already been suggested by Charles Tilly (1983, 462), with reference to the study of opinion in the era before surveys: “...through a wide variety of collective action ordinary people have left a trail of interests, complaints, demands, and aspirations that remains visible to observers who know where to look.”
Taking her cue from Tilly, Herbst (1993, 48) details no fewer than a dozen techniques (what Tilly calls the repertoires for expression) for the expression and assessment of public opinion throughout history: from the oratory of ancient Greece to printing in the 16th century; the use of crowds, petitions, and salons in the 17th century; revolutionary movements in the 18th century; and organized strikes, general elections, straw polls, newspapers, and letter-writing in the 19th century. The 20th century saw the introduction of mass-media political programming, refinements in crowd estimation (ibid., 133ff), and finally the first sample-based political surveys in the 1930s. In a subsequent work, Herbst (1994) argues that indicators of opinion such as these are actually capable of telling us more about the sentiment of distinct interest groups than is sample-based polling. Politically marginalized groups generally are interested in expressing themselves within the political mainstream, Herbst argues; successful groups are adept at developing forms of communication accessible to elites and other segments of society.
The argument that the repertoires of collective action constitute a means for assessing public opinion in history is equally applicable to assessing opinion under repression. Further, the study of collective action in the absence of surveys is not limited methodologically to qualitative analysis along the lines of Herbst’s work. Burstein and Freudenburg (1978) assess the impact of anti-war demonstrations on Senate votes on the Vietnam War from 1964 to 1973; Burstein (1979) again looks to demonstrations to explain variation in the passage of civil rights legislation since World War II. More recently, Burstein and Linton (2002) provide a meta-study of findings pertaining to the policy effects of political parties, interest groups, and social movement organizations.
Beyond these examples, events-data analysis can capture both the types and degrees of collective action, and can do so in politically opaque parts of the world, though this method has yet to be applied to the study of political responsiveness. Operational coding of autocratic leaders’ public statements, which may vary in content and emphasis over time, may provide clear clues about the state of domestic opinion from the perspective of the governing elite. Broader proxies of public sentiment, such as economic indicators and migration patterns, may serve as useful robustness checks. Regardless of the approach taken, if empirical research on the domestic politics of the most autocratic states is to advance in the near term, it cannot be overly wedded to survey methods. A broad approach to the measurement of public opinion in politically repressive countries will serve the diplomatic practitioner and academic best.
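To illustrate how events data might operationalize both the type and degree of collective action, consider the brief sketch below. The event records, fields, and aggregation scheme are hypothetical, chosen only to show the kind of measures (counts by repertoire, participation as an intensity proxy) that such an analysis could produce.

```python
from collections import Counter
from dataclasses import dataclass

@dataclass
class Event:
    month: str          # "YYYY-MM"
    kind: str           # repertoire: "strike", "demonstration", "petition", ...
    participants: int   # estimated participation

# Invented records standing in for a coded events dataset.
events = [
    Event("1989-09", "demonstration", 5_000),
    Event("1989-10", "demonstration", 70_000),
    Event("1989-10", "strike", 12_000),
    Event("1989-11", "demonstration", 300_000),
]

# Type of action: monthly counts per repertoire.
by_type = Counter((e.month, e.kind) for e in events)

# Degree of action: total participation per month as a crude intensity proxy.
intensity = Counter()
for e in events:
    intensity[e.month] += e.participants

print(by_type)
print(sorted(intensity.items()))
```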
Anderson, Leslie. 1994. “Neutrality and bias in the 1990 Nicaraguan Pre-election Polls: A Comment on Bischoping and Schuman.” American Journal of Political Science 38(2): 486-94.
Beltran, Ulises and Marcos Valdivia. 1999. “Accuracy and Error in Electoral Forecasts: The Case of Mexico.” International Journal of Public Opinion Research 11(2): 115-34.
Bischoping, Katherine and Howard Schuman. 1992. “Pens and Polls in Nicaragua: An Analysis of the 1990 Pre-election Surveys.” American Journal of Political Science 36(2): 331-50.
________. 1994. “Pens, Polls, and Theories: The 1990 Nicaraguan Election Revisited: A Reply to Anderson.” American Journal of Political Science 38(2): 495-99.
Burstein, Paul. 1979. “Public Opinion, Demonstrations, and the Passage of Antidiscrimination Legislation.” Public Opinion Quarterly 43(2): 157-72.
Burstein, Paul and William Freudenburg. 1978. “Changing Public Policy: The Impact of Public Opinion, War Costs, and Anti-War Demonstrations on Senate Voting on Vietnam War Motions, 1964-1973.” American Journal of Sociology 84: 99-122.
Burstein, Paul and April Linton. 2002. “The Impact of Political Parties, Interest Groups, and Social Movement Organizations on Public Policy.” Social Forces 81: 380-408.
Cantril, Hadley and Mildred Strunk. 1951. Public Opinion 1936 – 1946. Princeton, NJ: Princeton University Press.
Chen, Jie. 1999. “Comparing Mass and Elite Subjective Orientations in Urban China.” Public Opinion Quarterly 63: 193-219.
Geddes, Barbara and John Zaller. 1989. “Sources of Popular Support for Authoritarian Regimes.” American Journal of Political Science 33(2): 319-47.
Gollin, Albert E. 1992. “Public Opinion Research as Monitor and Agency in Revolutionary Times: Editor’s Introduction.” International Journal of Public Opinion Research 4(4): 299-301.
Grushin, Boris. 1993. “Pochemu Nelzia Verit Bolshinstvu Oprosov, Provodimykh V Byvshem SSSR” [Why the Majority of Surveys Conducted in the Former USSR Cannot Be Trusted]. Nezavisimaia Gazeta, October 28.
Herbst, Susan. 1993. Numbered Voices: How Opinion Polling Has Shaped American Politics. Chicago: University of Chicago Press.
________. 1994. Politics at the Margin: Historical Studies of Public Expression Outside the Mainstream. Cambridge, UK: Cambridge University Press.
Horne, Cale and Ryan Bakker. 2009. “Public Opinion in an Autocratic Regime: An Analysis of Iranian Public Opinion Data 2006-2008.” Paper presented at the Midwest Political Science Association 2009 Annual Convention, Chicago, Illinois, March 2009.
Inglehart, Ronald (ed.). 2003. Human Values and Social Change: Findings from Values Surveys. Leiden, Netherlands: Brill.
Kern, Holger Lutz and Jens Hainmueller. 2009. “Opium for the Masses: How Foreign Media Can Stabilize Authoritarian Regimes.” Political Analysis 17: 377-99.
Kwiatkowski, Piotr. 1992. “Opinion Research and the Fall of Communism: Poland 1981-1990.” International Journal of Public Opinion Research 4(4): 358-74.
Mason, David S. 1985. Public Opinion and Political Change in Poland, 1980-1982. Cambridge, UK: Cambridge University Press.
Millard, William J. 1989. “The USIA Central American Surveys.” Public Opinion Quarterly 53: 134-35.
Miller, William L., Stephen White, and Paul Heywood. 1996. “Twenty-five Days To Go: Measuring and Interpreting the Trends in Public Opinion During the 1993 Russian Election Campaign.” Public Opinion Quarterly 60: 106-27.
Moaddel, Mansoor. 2003. “Public Opinion in Islamic Countries: Survey Results.” Footnotes 31(1): 1-7.
________. 2007. Values and Perceptions of the Islamic Publics: Findings from Values Surveys. New York: Palgrave.
Moaddel, Mansoor and Taghi Azadarmaki. 2002. “The Worldviews of Islamic Publics: The Cases of Egypt, Iran, and Jordan.” Comparative Sociology 1(3/4): 299-319.
Noelle-Neumann, Elisabeth. 1993. The Spiral of Silence: Public Opinion—Our Social Skin. Chicago: University of Chicago Press.
Norris, Pippa and Ronald Inglehart. 2004. Sacred and Secular: Religion and Politics Worldwide. Cambridge, UK: Cambridge University Press.
Petrenko, Elena and Alexander Olson. 1994. “Predskazuiema Li Politicheskaia Situatsia V Rossii” [Is the Political Situation in Russia Predictable?]. Moskovskie Novosti, March 27.
Raghubir, Priya and Gita Venkataramani Johar. 1999. “Hong Kong 1997 in Context.” Public Opinion Quarterly 63: 543-65.
Shlapentokh, Vladimir. 1973. The Empirical Validity of Sociological Information. Moscow: Statistika.
________. 1994. “The 1993 Russian Election Polls.” Public Opinion Quarterly 58: 579-602.
Sieger, Karin. 1990. “Opinion Research in East Germany: A Challenge to Professional Standards.” International Journal of Public Opinion Research 2(4): 323-44.
Slider, Darrell. 1985. “Party-Sponsored Public Opinion Research in the Soviet Union.” Journal of Politics 47(1): 209-27.
Suleiman, Michael W. 1987. “Challenges and Rewards of Survey Research in the Arab World: Problems of Sensitivity in a Study of Political Socialization.” In Survey Research in the Arab World by Mark A. Tessler, Monte Palmer, Tawfic E. Farah, and Barbara Lethem Ibrahim. Boulder, CO: Westview Press.
Sulek, A. 1989. “O rzetelnosci i nierzetelnosci badan sondazowych w Polsce. Proba analizy empirycznej” [On the Reliability and Unreliability of Survey Research in Poland: An Attempt at Empirical Analysis]. Kultura i Społeczeństwo 1: 23-49.
Swafford, Michael. 1992. “Sociological Aspects of Survey Research in the Commonwealth of Independent States.” International Journal of Public Opinion Research 4(4): 346-57.
Tabin, Marek. 1990. “Podziemne badania ankietowe w Polsce” [Underground Survey Research in Poland]. Kultura i Społeczeństwo 34(1): 203-11.
Tessler, Mark. 1987. “Introduction: Survey Research in Arab Society.” In Survey Research in the Arab World by Mark A. Tessler, Monte Palmer, Tawfic E. Farah, and Barbara Lethem Ibrahim. Boulder, CO: Westview Press.
________. 2002. “Islam and Democracy in the Middle East: The Impact of Religious Orientations on Attitudes toward Democracy in Four Arab Countries.” Comparative Politics 34(3): 337-54.
Tessler, Mark, Carrie Konold, and Megan Reif. 2004. “Political Generations in Developing Countries: Evidence and Insights from Algeria.” Public Opinion Quarterly 68(2): 184-216.
Tessler, Mark, Mansoor Moaddel, and Ronald Inglehart. 2006. “Getting to Arab Democracy: What Kind of Democracy Do Iraqis Want?” Journal of Democracy 17(1): 38-50.
Tilly, Charles. 1983. “Speaking Your Mind Without Elections, Surveys, or Social Movements.” Public Opinion Quarterly 47(4): 461-78.
Welsh, William A. 1981. Survey Research and Public Attitudes in Eastern Europe and the Soviet Union. New York: Pergamon Press.
Zhu, Jian-Hua and Stanley Rosen. 1993. “From Discontent to Protest: Individual-Level Causes of the 1989 Pro-Democracy Movement in China.” International Journal of Public Opinion Research 5(3): 234-49.