Part II of the report will appear in the Summer 1998 issue of American Diplomacy
TISS Conference Report: Bridging Gaps in the Study of Public Opinion and American Foreign Policy
Timothy Hynes, State University of West Georgia
David Cheshier, Georgia State University
Erik Doxtader, University of North Carolina-Chapel Hill
Ole Holsti, Duke University (Presenter)
Following Professor Holsti's presentation, a panel composed of two notable young scholars of what might be called "public argument studies," David Cheshier and Erik Doxtader, raised questions regarding Professor Holsti's conception of the public, how a public is called into existence on particular issues, and the implications of his research.
Hynes: To begin the facilitation of that conversation across interdisciplinary lines, let me now call on Professors David Cheshier and Erik Doxtader. We will begin with David Cheshier.
David Cheshier: It's a privilege to be here and to interact with Professor Holsti. I'm an admirer of your work and your book.
I would like to start by describing, just for a minute, the way the term "public" is talked about in the field of communication and public argument, because it sets up a series of questions I'd like to raise. I consider this a useful adjunct to work that tries objectively to ascertain the public interest, but, however we want to put it, there are many who study public argument who emphasize in their work the sense in which the public is always essentially illusory. You are absolutely right to point out the influence of Walter Lippmann's work on this and his suggestive phrase, "the phantom public." His sense of that in the early 1920s basically emphasizes that we don't have the public we need: for legitimate authority to function, we need a certain kind of public that we'll never have, because it lacks certain capacities.
Today, in certain forms of public argument scholarship, the argument is made even more forcefully, to the position that the public is always necessarily illusory. Which is to say that since the public in contemporary culture is never ascertainable (publics don't come walking up your driveway), what's interesting is the way in which the public and its interest are represented in discourse and public controversy. That way of thinking about the public and its interest as constituted in language, where polling, for example, is not of the public itself but a representation of the public and its interest, suggests a series of questions that I'd like to pursue pertinent to your work. To start at the most general level, I'd like to ask in what sense you take the word "public" in the term "public opinion"? Who is the public of public opinion?
Holsti: This is an interesting question. The premise of a lot of research is that we can get some handle on that through polls. We know that a lot of the issues that were quite controversial earlier, sampling problems for example, have been worked through. Gallup's ability to call the 1936 election, when the Literary Digest poll, which had been largely seen as the most authoritative survey, failed badly, demonstrated that sampling is a much better way to try to get at the public. For most people, though, in trying to do research, to assess trends, and to make comparisons across groups, we have to, for better or worse, rely on what we can get from surveys. The alternative, if we really take the assumption to its logical conclusion, would probably lead to a determination that the undertaking is not worthwhile.
Cheshier: Maybe another way to think about it is, if you start with the assumption that the public is illusory, to stress that even illusory publics are very necessary fictions; the idea of the public interest is evoked by politicians who use it to great benefit to call the nation to a higher moral purpose, for example. But what I'm suggesting is less a mythological point than this: if one starts with the assumption that the public, as represented even in sophisticated polling data, is a representation (admittedly a precise representation), that assumption indicates that perhaps our attention should be focused in some different places. For example, maybe it's better to attend not so much to the different attitude structures of élites and the masses, but rather to the ways in which political leaders who seek authority invoke particular visions of the public and its interests, and how, in turn, groups of citizens respond to those invocations in particular ways. Maybe that's what you're getting at with your call for archival case-by-case research.
Holsti: That's part of it, but there's another related issue, which is how in fact the public knows what it knows. That takes us into the media, into education, and into other kinds of issues. But certainly the focus on the leaders is very, very important, and it is a mistake to assume, as was often done in the past, that the public's impact is a constant rather than a variable. It's a mistake to fall into either the hardcore realist position or the hardcore liberal position, both of which tend to argue that the impact of the public is a constant. I would certainly agree that the focus on the ways in which leaders are able to evoke things like the national interest is very crucial.
Erik Doxtader: I will simply follow up on that idea in terms of the question of how leaders evoke particular ideas in the formation of policy and how this may shape public opinion. Before I do that, I would echo David's comments about the book. This is an incredibly impressive book. The depth of analysis and simply the richness of the bibliography make it invaluable to everyone.
My question is based in part on the claim you make in the book that the process of polling is a product of the Cold War. During the Cold War, institutions (military institutions, foreign policy institutions) employed and developed very precise logics. Here I'm particularly thinking about nuclear deterrence and the many ways it's used to define the "state of security." At the same time, these institutions are working under what appear to be enormous legitimacy burdens. The stakes of mistakes are quite high. In both cases, the public may become confused. For instance, in NSC-68 the military claims to represent the interests of all citizens and then, a paragraph later, argues that criticism of American national security policy is itself evidence of Communist infiltration. Thus it would seem that the military relies on public opinion, but goes to some lengths to shape it.
My question then is, how exactly does and can the process of polling assess how institutions use polls to promote their own programs? Is there a way of separating the formation of opinion from the formation of an institutional agenda which may or may not serve what we are broadly calling public interest?
Holsti: During the later stages of the Vietnam War, surveys like the Verba-Stanford survey grew out of a sense that existing polls, which largely asked, "Do you approve of the administration's policy in Vietnam or Southeast Asia?", were really quite inadequate, particularly given the kind of argumentation that was so common in the Johnson administration: that we have three choices. We can essentially be cowards and renege on our commitments, we can do what the radical right hawks want and nuke them, or we can pursue current policies. This is a way of using two horrendous examples to try to legitimate a particular position.
The sense was that the existing surveys simply did not adequately get at what, in fact, public opinion was on the war in Southeast Asia. These surveys did not show that the public was ready to get out of Vietnam at all costs, but they showed that there was much more complex reasoning about the war. Lyndon Johnson was famous for always having in his back pocket the latest poll that supported what he wanted. Surveys can at least try to break down some of that relatively simplistic kind of thinking.
One further example on the Cold War concerns reactions to the death of Stalin within the Eisenhower Administration. This is recounted in a book by one of Eisenhower's speech writers [Emmet John Hughes]. How do you respond to the death of Stalin? One of the arguments that Dulles made over and over again was that we can't let up because if we do, the public not only in the U.S., but throughout the Western world will think that this competition is all over. And so the great trap is to play ball with the post-Stalin leadership, not only because they may not be sincere, but because essentially the public would lose its interest in supporting the existing policies.
Doxtader: Let me follow up with that. On the one side of it, that seems to point out the idea that we need to make polling questions more complex, that simply saying "Are you in favor of a particular policy?" doesn't really get us very much. At the same time, there's research that's come out of economics specifically dealing with the problem of polling and contingent valuation. This work in economics indicates that there's something which is produced in the process of polling called a "warm glow." This means that people are answering poll questions the way that they think they ought to be answering the questions. In other words, they reply with what they believe to be a sort of abstract moral imperative, whether it's waving the flag or something else. The question that I have is, is there a way of assessing this problem, and more fundamentally, what is it that we are studying when we are doing polling? What is it that constitutes an opinion? Is an opinion simply a belief that citizens arrive at independently of others, or is opinion formation a deliberative process?
Holsti: There are a whole lot of questions clearly embedded in that. One of them is, do people respond with their true opinions or with what they believe to be politically correct? We know from experience, for example, that in the 1968 election polls repeatedly found that the number of people who said they would vote for George Wallace was less than the number who actually voted for him. Did you reveal yourself to be a redneck bigot if you said you were going to vote for George Wallace? That's a real problem.
Another is the question of the framing effect of previous questions leading up to a specific question. One of the ways we deal with that, particularly now that computer-aided survey techniques are available in a number of places, is to use an experimental design. For example, a certain number of the people surveyed may be given questions in one order, while others are given them in a different order. Paul Sniderman, who's at both Stanford and Berkeley, has done some really interesting studies of an issue that lends itself very much to the whole question of political correctness: affirmative action. By developing an experimental research design you create a kind of interactive effect between the interviewer and the subject, and you can try to cope with some of those kinds of problems as you try to get at the impact of the framing and the order of the questions.
We have taken some useful steps in the right direction, but ultimately the enterprise has to rely to some extent on the assumption that respondents will, whether it's sort of a gut reaction or deeply held belief, respond in the ways that they really feel. There's not an attempt to deliberately mess up results. If that assumption turns out to be fundamentally wrong in huge numbers of cases, then the whole enterprise is very much suspect.
Cheshier: This brings us to something I find a little bit of a curiosity. It's a particular argument you are making. At every point in the book you're arguing for methodologies which get, in various nuanced and sophisticated ways, at the public's opinions on foreign policy issues. That argument is combined with findings that the public has a certain stable attitudinal structure or belief structure, but that on particular issues people are very ignorant of the details of a particular case. That too seems to lead to a conclusion which calls for intensive case study. But then there's this recommendation you make at the end for standardized questioning, in the context of an argument for "complexification." I wonder what standardized questioning really gets us?
Holsti: The argument for standardized questioning is simply that because responses are sensitive to the way in which the issues and questions are framed, in the absence of at least some standard questions it's very, very hard to accumulate results. It's very hard to compare results over time. One of the nice things about the surveys on American foreign policy that the Chicago Council has been doing every four years (which deal with both a large general public sample and a small élite sample) is that they have carried over a number of their questions. If they do a 1998 survey it will be the seventh in twenty-four years. This gives us at least some way of gauging changes over time. I'm not arguing that everybody ought to do every survey using identical questions; I'm saying that it would be terrific if, for example, Richard Sobel's list or someone else's became a kind of standard, with everyone building some parts of it into their surveys. We'd have the opportunity to accumulate results and to compare across surveys. Right now it's very, very hard to do so.
This is one of the insights that Gallup was credited with adding to surveys, but even Gallup did not always follow his own advice. My favorite example is a question about whether you support foreign aid; one year he threw in the clause "in order to undermine the Communist threat." Well, that is a very different kind of question. What I'm advocating is not that every survey be a carbon copy of every other one, but that if everybody would use some common questions, we would be able over a longer period of time to build up the kind of time series comparable to those on presidential approval and performance.
Cheshier: Let me follow that up quickly, because I understand that we need to ask both kinds of questions, not just standardized ones, but the question remains: what do the standardized questions add, what extra information do they provide? It seems to me you offer some examples in the book where standardized questions not only didn't add anything but in fact diverted us from getting at the heart of what was going on. The general Vietnam questions tended to mask the real, subtle changes that were going on. The general Desert Storm polls on whether we ought to intervene militarily tended to mask the growing support for a military response. It was only when we moved to more precise questioning that we discovered that these general questions were failing us. So what does this kind of longitudinal questioning add?
Holsti: There are two facets. The more standardized questions would deal not with a specific issue, because those issues change over time. One of the problems that Almond's book ran into is that he asked people, "What's on your mind? What is the most important issue?" Well, those things change. In periods of high unemployment, it's reasonable that people think unemployment is a big problem. In a great Cold War crisis, that comes up as most important.
The point is that there are certain things that we know are recurring issues of foreign affairs. For example: how actively should the U.S. be involved in the world? Levels of support for things like foreign aid, the United Nations, and the North Atlantic Treaty Organization, the appropriate levels of defense spending, and issues like that. These are long-run, recurring kinds of issues, and they are the ones appropriate for standardized questions. If South Korea goes bankrupt tomorrow, pollsters are going to be asking questions about how we should respond. That is not a recurring issue, and you wouldn't want to build in a standard question dealing with such specifics. It's a question of being able to separate out the current, non-recurrent kinds of issues from those things that predictably are a recurring part of foreign affairs.
Doxtader: I'd like to ask a general and perhaps a more specific question about what we do with this kind of work. There is an argument to be made which says that simply knowing what people think is intrinsically valuable, that the more we know the better off we are for whatever reason or for no reason at all. In the field of critical security studies, as well as in other places, there is an increasing sense that this argument has become a ruse, and that this is particularly true within the sort of dynamic between realism and liberalism that's situated at the beginning of the book. On the one hand, there's an argument to be made that says: realism fractures public opinion into an incoherent babble that re-inscribes what critical security studies have called the "myth of state sovereignty." On the other hand, there's an argument that liberalism presupposes that the public is composed of autonomous hyper-rational human beings who more often than not happen to be men.
The question then is, what do we do with polling data, and how do we decide, and who decides, what the role of the public ought to be in the creation of policy? I think that gets a little bit at the question you raised about how we begin to study impact. But prior to the question of how we study impact: do we have some obligation to think about what that impact should be?
Holsti: There's no consensus now on the question of what the impact of public opinion should be. There has not been in the past, and it's not likely in the future. There's going to continue to be a debate which has been with us since the Founding Fathers and earlier.
On the question of the purposes to which a survey can be put, these are largely determined by the people who undertake the surveys. To go back to the example of the Verba-Stanford surveys: these emerged out of long, agonizing discussions among people at Stanford about the way the Vietnam War was going and the way it was being portrayed, or misportrayed, by the administration, and out of the sense that it would be immensely useful, in a scholarly but even more in a policy sense, to be able to determine in a much more nuanced way what in fact the American public believed about the war in Vietnam. Each investigator thinks about the purposes of the survey: what kinds of questions, and perhaps what kinds of samples, are appropriate? That kind of pluralism is really quite appropriate. I would assume and hope that among the surveys being undertaken and to be undertaken are some that try to take this kind of critical stance. This is a very appropriate use of social science methods.
Doxtader: To follow up, you make what I think is a compelling argument about the need for debate over the problem of the public and foreign policy. However, Fishkin's work on deliberation suggests that polls and the process of polling within our culture have watered down our incentives to have those kinds of debates, that there is a sort of attitude that if we answer the poll we've done our civic duty. I'm curious if you can speak a little bit to the differences that may or may not exist between the kinds of polls you're working with and, for instance, what Fishkin is up to in his models of deliberation, attempting to derive public opinion through a process of debate.
Holsti: Well, I don't see these as being mutually incompatible. It seems to me that what you're asking me about is the level of public discourse about critical foreign policy issues. Those of you who are familiar with Fishkin's work may recall that a few years ago he brought a group of citizens into publicly televised discussions for three or four days about critical issues. That's not incompatible with trying to do the other things as well. It seems to me that surveys, properly used, can in fact help to generate some of this discussion.
Again, I'll go back to the example of the Verba-Stanford survey. Because that survey hit the front pages of the New York Times and other major newspapers, it did in fact help in some way to stimulate a debate about what the war in Vietnam was all about. The administration's assumption was that as long as fifty-plus percent of the public thought the course was right, that ended the discussion.
The Verba-Stanford survey was a very appropriate use of survey data. Surveys can serve to stimulate this kind of public discussion. Thus I don't see the Fishkin approach and the Gallup approach as being incompatible.