- Demographics and Political Geography
- Survey-Based Research and Social Group Theory
- The Michigan School: Party Identification and Issue Impacts
- Party Identification and Voting Predispositions
- Issue Voting and Rational Choice Theory
- Economic Voting
- Predispositions and Cues at the Aggregate Level
- The Macro Polity: An Update of The American Voter at the Aggregate Level
- Multimotivated Voting
- Representational Consequences
- Election Data
As the defining institution of democracy, elections have always been a major focus of attention for political theorists and analysts, as well as for journalists and practicing politicians. Indeed, it is hard to talk about democracy, either generally or at the national level, without referring to elections. This research paper focuses on specialized, mostly quantitative, research into voting at general elections. It thus passes over the interesting and growing literature on direct policy elections (referendums and initiatives), since these are still not a major feature of contemporary democracies except in Switzerland and, at the state level, in the United States. On the other hand, the choices voters make are framed by political parties, so the paper also has to consider party activities, particularly as they impinge on voting.
Demographics and Political Geography
While analyses of party organization and campaigning appeared from the latter part of the 19th century onward, the tradition of systematic quantitative study of voting statistics began in the early 20th century with André Siegfried in France, Harold Gosnell in the American Midwest, and V. O. Key in the American South. Such analyses used demographic data, social and voting statistics generally drawn from the smallest constituency unit available, to make sophisticated inferences about individual voting behavior and how it was shaped by personal characteristics and political history.
This tradition of research, however, passed out of the mainstream under the impact of two developments. The first was the recognition of the ecological fallacy involved in inferring individual behavior from aggregate statistics. For example, the finding that constituencies in the United Kingdom with a large Black population voted disproportionately for the racist British National Party (BNP) does not imply that the BNP attracts non-White votes. On the contrary, it is Whites in such constituencies who vote for it disproportionately, reacting to high immigration. Aggregate statistics alone cannot tell us what exactly is going on at the level of the individual voter. The second development was the rise of mass survey techniques.
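The fallacy can be made concrete with a toy simulation (all figures invented for illustration): a set of constituencies in which BNP support rises with the local Black population share at the aggregate level, even though, by construction, no Black voter supports the party.

```python
# Toy illustration of the ecological fallacy: a strong aggregate
# correlation coexists with the opposite individual-level behavior.
# All numbers are invented for illustration.

# Each constituency: (share of Black residents, BNP vote share)
constituencies = [
    (0.02, 0.01), (0.05, 0.02), (0.10, 0.04), (0.20, 0.07), (0.30, 0.10),
]

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

shares = [c[0] for c in constituencies]
bnp = [c[1] for c in constituencies]
print(pearson(shares, bnp))  # close to +1: strong aggregate correlation

# Yet if, within every constituency, the BNP vote comes entirely from
# White voters, the individual-level pattern runs the other way: BNP
# support among White voters rises with the local Black share, while
# Black voters' BNP support is zero throughout.
for black_share, bnp_share in constituencies:
    white_rate = bnp_share / (1 - black_share)  # BNP share among Whites
    print(round(white_rate, 3), 0.0)            # Black voters' rate: 0
```

The aggregate correlation is real; what it cannot do is license the inference that the group driving the aggregate pattern is the group voting for the party.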
Survey-Based Research and Social Group Theory
In addition to the ecological fallacy, the difficulties for the statistical foundations of demographic research were compounded by the application of mass survey techniques to study individual voters directly. This approach was pioneered by Paul Lazarsfeld and his associates in Erie County, Ohio, for the U.S. presidential election of 1940, and replicated in Elmira, New York, in 1948. Not only did these studies demonstrate the feasibility of applying statistical techniques of sampling and quantitative analysis to voting, but they also demonstrated the way in which data collection and analysis had to be driven by theory to produce relevant conclusions, or even to develop a questionnaire in the first place.
Coming from a market research background, Lazarsfeld envisaged voters as behaving like consumers choosing between tins of beans on supermarket shelves. If voters relied mainly on advertising for their information, brands promoted more heavily in the media should sell better than those promoted less. The 1940 questionnaire thus featured many questions on voters' media exposure (with parallel content analyses of actual media coverage of the candidates), with a few questions on social characteristics thrown in for classification purposes. To their surprise, the research team found few and weak correlations between media coverage and voting behavior, but strong correlations with class, religion, and urban versus rural residence.
In response, they jettisoned the media and adopted instead a social group theory of voting behavior, measured through an Index of Political Predisposition (IPP), which roughly measured the probability that individuals with varying combinations of social characteristics would vote Democrat or Republican. On this basis, the questionnaire for the 1948 U.S. presidential election sought to understand how voters' flow of information and voting preferences were structured by their social group membership, pulling them back to the historical group choice in the course of the campaign. Social group theory reached its apotheosis, with the help of computers, when Ithiel de Sola Pool, Robert Abelson, and Samuel Popkin pooled all existing public opinion surveys of American voters to create demographic profiles of the U.S. states and to estimate the impact of current issues on small subgroups of voters. On the basis of these simulations, they correctly predicted the results of the 1960 and 1964 presidential elections in advance, achieving high correlations with the actual Electoral College votes of the non-Southern states.
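The logic of such an index can be sketched roughly as follows. The scores and cutoffs below are invented for illustration, not Lazarsfeld's original 1940 coding, but the mechanism is the same: social characteristics historically associated with one party are scored and summed into a predicted leaning.

```python
# A rough sketch of an Index of Political Predisposition: a voter's
# social characteristics are scored and summed, and the total maps to
# a predicted historical party leaning. All scores and cutoffs here
# are invented; they are not the original 1940 coding.
def ipp(ses_high, protestant, rural):
    score = sum([2 if ses_high else 0,      # higher socioeconomic status
                 2 if protestant else 0,    # Protestant vs. Catholic
                 1 if rural else 0])        # rural vs. urban residence
    return "Republican-leaning" if score >= 3 else "Democrat-leaning"

# On this toy index, high-SES rural Protestants lean Republican,
# low-SES urban Catholics lean Democrat.
print(ipp(ses_high=True, protestant=True, rural=True))
print(ipp(ses_high=False, protestant=False, rural=False))
```

Voters whose characteristics pointed in different directions ("cross-pressured" voters) fell near the cutoff, and it was precisely these voters whom the campaign was expected to pull back toward the historical group choice.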
The Michigan School: Party Identification and Issue Impacts
The publication by Angus Campbell, Philip Converse, Warren Miller, and Donald Stokes of The American Voter in 1960 not only substituted national mass surveys for local ones but also proposed an alternative conceptualization of voting choice to social group theory. It was not that this "Michigan School" ignored the influence of social factors. They saw them, however, as lying rather far back in the "funnel of causality" preceding the voting act. Their influence was channeled through an individual's party identification, an affective orientation to an important group object in the individual's environment. Social influences were thus transmuted into individual psychology. Party loyalty then colored voters' reactions to immediate influences on how they viewed the candidates and issues of the campaign. Changes in these short-term forces between the late 1940s and the 1950s explained the landslide election of the Republican Dwight Eisenhower after Harry Truman's narrow Democratic victory in 1948.
This conceptualization of voting as essentially individual and psychological in nature dominated empirical research on elections in both the United States and the rest of the world for the next 30 years, and in modified form it continues to this day. Its dominance is only partly explained by its comprehensive reach and intellectual power. It was also embedded in the design of questionnaires for the mass election surveys now carried out regularly in almost all the developed democracies of the world. The Michigan authors themselves were active in stimulating and organizing such surveys in Europe, whence they spread to Asia, Latin America, and Australasia.
For the first time, such surveys provided an insight into the political thinking and behavior of ordinary individuals, complemented by the massive increase in both the number and sophistication of public opinion polls. In addition, for the first time, successive surveys provided information about the dynamics of opinion change between elections and the effect this had on party support. Such over-time analyses required continuity in the questions asked at each election, simply to provide comparable information over long periods. This embedded the Michigan School’s conceptualizations and measures even in studies that did not necessarily share their preconceptions or that even set out to challenge them.
Party Identification and Voting Predispositions
The major challenge was to understand the concept of party identification and whether it really differed, methodologically or conceptually, from voting intention. Michigan-style surveys carried out in Europe seemed to indicate that the two were correlated and fluctuated together. If so, what was the point of distinguishing the two?
One answer was to generalize from the specific concept of party identification to the idea of enduring predispositions toward voting in a certain way. The majority of electors vote in the same way from election to election. At the same time, some change their vote, providing the dynamics for election change and party competition. It therefore seems that both predispositions and specific campaign influences such as issues and candidates have to be brought into the explanation of voting choice. A group of largely European scholars in Party Identification and Beyond (2009) argued that both Campbell and his associates and Lazarsfeld and his associates had been trying to do just that. Both party identification (PI) and the IPP were trying to measure predispositions, while both allowed for the differentiated impacts on strong partisans and weaker ones of short-term campaign effects or election cues.
Issue Voting and Rational Choice Theory
This recognition of short-term effects rendered the debate of the 1970s on the “replacement” of PI by “issue effects” from new issues, such as Vietnam, gender bias, and the environment, rather superfluous. It did, however, mark the emergence for the first time of purely theoretical influences on voting research as distinct from the theory-driven data analysis that had dominated the field up to then. These influences stemmed from the seminal presentation by Anthony Downs of An Economic Theory of Democracy (1957)—a collection of many theories and models, not generally reconcilable with each other, of how office-seeking parties competed for votes and electors cast votes for the party closest to them on policy.
The most influential of Downs's models (because it seemed to fit Anglo-American politics so well) was the one in which two parties compete on policy in a one-dimensional left-right space, along which electors range themselves in a symmetrical unimodal distribution whose central peak coincides with the position of the median (middle) voter. To form a majority of 50% plus one vote, the median voter is crucial, so both parties move toward that moderate position to secure a majority. This accounted for the consensual bargaining and policy compromises of the Anglo-American democracies, as contrasted with the immobilisme and ideological rigidity of multiparty governments.
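The convergence logic can be sketched in a small simulation, assuming a uniform electorate on an integer left-right scale and purely vote-seeking parties (all numbers illustrative): each party accepts any one-step platform move that raises its vote share, and both platforms end up at the median.

```python
# A minimal simulation of Downs's two-party spatial model: voters on a
# 0-100 left-right scale support the nearest party, and each party
# greedily accepts any one-step move that raises its vote share.
# Both platforms converge on the median voter position, 50.
voters = range(101)  # one voter at each integer position

def vote_share(own, rival):
    """Share of voters strictly closer to `own`, with ties split."""
    wins = sum(abs(v - own) < abs(v - rival) for v in voters)
    ties = sum(abs(v - own) == abs(v - rival) for v in voters)
    return (wins + ties / 2) / 101

left, right = 10, 90  # initial platforms, far from the center
for _ in range(100):  # parties alternate small vote-seeking moves
    for move in (1, -1):
        if vote_share(left + move, right) > vote_share(left, right):
            left += move
            break
    for move in (1, -1):
        if vote_share(right + move, left) > vote_share(right, left):
            right += move
            break

print(left, right)  # prints: 50 50
```

With a unimodal symmetric distribution the same convergence holds; it is when voters are not purely policy-driven, as discussed later in this paper, that the parties' incentive to converge disappears.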
Downs's two-party spatial model had an increasing influence on election research from the 1970s onward, in line with the growing dominance within political science of rational choice reasoning and mathematical modeling. One aspect of his ideas was not closely pursued, however: his demonstration that it is not rational to vote. He assumed that electors decide whether to vote on the basis of the utility they would gain from their preferred policy outcome, discounted by the probability of their single vote changing the election result. Since this probability is always vanishingly small, nobody should vote. As the majority do vote (in general elections at least), other factors, principally a sense of duty, have to be drafted in as explanations, supported so far as they go by the survey data.
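The turnout calculus can be illustrated with a worked example. The figures for the decisiveness probability p, the benefit B, the cost C, and the "duty" term D usually added to rescue observed turnout are all invented for illustration.

```python
# Downs's turnout calculus with illustrative numbers: the expected
# policy benefit of voting is p * B (probability the vote is decisive
# times the benefit of the preferred outcome), set against a cost C.
p = 1e-7   # chance one vote decides a national election (illustrative)
B = 1000   # value to the voter of the preferred outcome (illustrative)
C = 1.0    # cost of voting: time, travel, gathering information

expected_benefit = p * B             # 0.0001: vanishingly small
print(expected_benefit < C)          # True: benefit dwarfed by the cost
                                     # so, on this account, abstain

D = 2.0    # sense of civic duty, added to explain observed turnout
print(expected_benefit - C + D > 0)  # True: duty makes voting rational
```

The point is structural rather than numerical: for any plausible B and C, p is so small that the expected policy benefit alone can never cover the cost, which is why nonpolicy factors such as duty have to carry the explanation.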
Economic Voting
Rational choice theorists seemed to have little difficulty incorporating duty or altruism into their equations for turnout. Voting choice, on the other hand, seemed potentially explainable by self-interested, primarily economic, calculations. Governments' ability to sustain prosperity and growth, increase employment, and avoid inflation was increasingly seen as the main driver of the vote. Conceptually, this also seemed to link political science more closely with economics in the study of political economy.
While such analyses could be, and were, carried out at the individual level using survey data, both economic indicators and voting statistics were readily available in aggregated form. Where necessary, survey responses themselves could be aggregated to give preference and policy time series. As a result, the focus of empirical research shifted in the 1980s from surveys and individual-level analysis to the aggregate level. As the major objective was to explain (and preferably predict) the overall election outcome, aggregate equations and econometric methods formed a more direct way to approach it. Moreover, the general availability of aggregate indicators lent a comparative dimension to research: all governments published them, whereas election surveys were not always carried out in the countries of interest.
Despite 2 decades of intensive analysis, however, little progress was made toward the ultimate goal of a unified theory of economic voting. A review of around 2,000 studies concluded that economic conditions produced contradictory and conflicting political results. The main generalization to emerge from numerous studies across different countries was that governments generally lost votes, at an average rate of 2.2% of their previous vote. The loss was the cost of governing, often expressed in economic terms but hardly due to exclusively economic factors.
Predispositions and Cues at the Aggregate Level
One reaction to the failure of economic trends to produce consistent voting responses was to regard them as one type of issue among others. A comprehensive classification of issues could be made on the basis of surveys and newspaper reports, and a universal direction and weight could be assigned to each issue type. By seeing which issues were prominent in each campaign, the result could be predicted and compared with the actual outcome.
The idea that certain issues always favor certain parties stemmed from a salience theory of party competition and elections, whereby parties always strove to make their own issues prominent and downgrade those of their opponents. However, the number of votes affected by issues prominent in the campaign was always less than the basic vote of the party—the average vote it received independently of issue effects. This vote was explained as being cast by core supporters or party loyalists— party identifiers or core social group voters under another name. In this way, the old survey-based frameworks reemerged at aggregate level under other names—still necessary concepts, however, to cope with the stability and change simultaneously present in election results.
Ian Budge and Dennis Farlie's attempt to explain and predict elections in this way remained an isolated venture. The same logic, however, was used in ad hoc models designed to predict the outcome of a particular general election. In Britain, these took the form of the Essex model: a multivariate regression equation that allowed previous voting support, economic expectations, and tax levels to explain all the month-to-month fluctuation in voting intentions that they could. Sharp falls or rises in support were identified with events occurring at the time (e.g., the Blair effect following Tony Blair's election as Labour leader in 1994). The combined equation, modified for each election, successfully predicted votes in the 1997 and 2001 general elections. This predictive success was bought at some cost in generalizability and explanatory power (why should Blair have this effect?), although the overall approach to prediction is itself transferable to other contexts.
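The flavor of such a vote-intention equation can be sketched on fabricated monthly data. The variable names, coefficients, and data below are illustrative, not those of the actual Essex model: government support is generated from lagged support and an economic expectations index, and ordinary least squares then recovers the coefficients.

```python
# A sketch of an Essex-style vote-intention equation: regress this
# month's government support on last month's support and an economic
# expectations index, using OLS via the normal equations. The data are
# fabricated to follow such a model; all names and numbers are
# illustrative.
import random

random.seed(1)

# Generate 60 "months": support depends on lagged support and economic
# optimism, plus noise (true coefficients 0.75 and 1.5).
support = [40.0]
econ = [random.uniform(-2, 2) for _ in range(60)]
for t in range(1, 60):
    support.append(10 + 0.75 * support[t - 1] + 1.5 * econ[t]
                   + random.gauss(0, 0.5))

# Design matrix rows: [1, lagged support, economic expectations]
X = [[1.0, support[t - 1], econ[t]] for t in range(1, 60)]
y = support[1:]

def ols(X, y):
    """Solve the normal equations (X'X) b = X'y by Gaussian elimination."""
    k = len(X[0])
    A = [[sum(r[i] * r[j] for r in X) for j in range(k)] for i in range(k)]
    b = [sum(r[i] * yi for r, yi in zip(X, y)) for i in range(k)]
    for i in range(k):                       # forward elimination
        for j in range(i + 1, k):
            f = A[j][i] / A[i][i]
            A[j] = [aj - f * ai for aj, ai in zip(A[j], A[i])]
            b[j] -= f * b[i]
    coef = [0.0] * k
    for i in reversed(range(k)):             # back substitution
        coef[i] = (b[i] - sum(A[i][j] * coef[j]
                              for j in range(i + 1, k))) / A[i][i]
    return coef

intercept, lag, economy = ols(X, y)
print(round(lag, 2), round(economy, 2))  # near the true 0.75 and 1.5
```

In the real models, residual spikes not captured by the regressors are what get identified with discrete events such as the Blair effect, and it is this event-by-event tailoring that limits generalizability.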
The Macro Polity: An Update of the American Voter at the Aggregate Level
The same approach involving econometric modeling and aggregate data was used in adapting the concerns of The American Voter to modern times.
Using basically the same framework as The American Voter, The Macro Polity (2002), by Robert Erikson, Michael MacKuen, and James Stimson, models voting at the aggregate level as the joint product of aggregate predispositions to vote for a party and the effects of issues, particularly economic issues, in changing votes. Reflecting a growing recognition that voting has to be put in context, The Macro Polity also concerns itself with the end result of the process, that is, how election outcomes affect government policy intentions and enacted policy, and even with how these match up with voter preferences, thus addressing the central concern of democracy. A major technical achievement of the project has been to devise a measure of the (left-right) policy mood of the electorate from hundreds of different policy questions asked in opinion polls over the past 50 years. This provides a basis not only for explaining voting behavior but also for investigating the extent to which public policy reflects popular preferences.
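A much simplified sketch of such a mood measure can convey the core idea. The real dyad-ratios algorithm is considerably more involved, and the poll figures below are invented: each question series is indexed to its own mean, and the indexed series are averaged across questions by year.

```python
# A simplified sketch of a "policy mood" index in the spirit of
# Stimson's measure: many poll questions, each asked in several years,
# are put on a common footing by indexing each series to its own mean,
# then averaging across questions per year. The real dyad-ratios
# algorithm is more involved; the data here are invented.

# Percent giving the "liberal" response, by question and year
polls = {
    "more_spending_on_health":       {1990: 60, 1995: 55, 2000: 50},
    "government_should_create_jobs": {1990: 48, 2000: 40},
    "raise_taxes_on_wealthy":        {1995: 62, 2000: 58},
}

def mood(polls):
    by_year = {}
    for series in polls.values():
        mean = sum(series.values()) / len(series)
        for year, pct in series.items():
            # Index relative to this question's own average (100 = avg)
            by_year.setdefault(year, []).append(100 * pct / mean)
    return {yr: sum(v) / len(v) for yr, v in sorted(by_year.items())}

for year, index in mood(polls).items():
    print(year, round(index, 1))  # above 100: more liberal than average
```

Indexing each question to its own mean is what allows questions with very different wording, and therefore very different baseline response levels, to be combined into one over-time series.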
The limitations of The Macro Polity are also those of its predecessor: it focuses on the politics of one country over a limited period of time. This limits the number of cases to 12 (postwar presidential elections). It also runs the danger of tying its investigation too closely to the specifics of U.S. politics, which, as The American Voter showed, have their peculiarities. Parallel studies are currently under way in Britain and France. One may hope that, after 40 years, they do not stir up the same controversies over concepts and measures as party identification did. So far, policy mood seems to apply as well in Europe as in the United States.
Multimotivated Voting
When reading through long lists of detailed and variable findings from the election studies, one is sometimes tempted to ask just what the detailed and expensive analyses of elections over the past 5 decades have shown. There is much confusion about this. One consistent lesson, however, from the earliest to the latest studies, is that voters are multimotivated: they do not vote on their policy preferences alone, and some votes may in fact be cast on grounds unrelated to policy. This is not solely because electors themselves have nonpolicy concerns. To a considerable extent, the institutional structure of general elections requires them to settle nonpolicy questions: Who is best at governing us? Which party has the most appealing and trustworthy candidate? Which has best ridden out the most recent scandal? All these factors may enter quite legitimately into the voting decision.
The fact that voters are multimotivated puts current rational choice models largely out of court so far as explaining elections is concerned. They all postulate that voters are purely policy oriented. If they are not, parties have no need to converge, and the median preference (or seeming median preference, since the declared distribution of votes cannot be unambiguously interpreted as a policy distribution) loses its preeminent position.
Representational Consequences
Elections are at the heart of the democratic representational process and are thus expected to reveal and enforce popular preferences. As Bingham Powell (2000) notes, recognition of their role has prompted the broadening of election research noted above, to the extent that elections are increasingly studied in their full representational context as "instruments of democracy."
Here the recognition of voters as multimotivated, and not driven by pure policy concerns, poses an acute dilemma for representative democracy. If elections do not clearly reveal the popular will so far as public policy is concerned, how can democracy be directed to effect it? Or do we simply define democracy, with Joseph Schumpeter (1942), as a struggle for power between competing teams of leaders, decided by voters in elections, who then have to put up with whatever policy the winning leaders decide on subsequently?
The solution may be to recognize, with Powell, that elections express the popular will in two ways:
- the proportional way, operationalized in the preference of the median voter (even though the median position, which emerges from the declared distribution of party votes, may be a bit unstable and subject to challenge), and
- the majoritarian way, in which the popular will is identified with the policy preference of plurality party voters.
The plurality position may also be challenged, since nobody knows how many of the votes cast for the largest party were actually cast on nonpolicy grounds. In the end, however, the declared distribution of votes has to be accepted as providing authoritative guidance on policy preferences, as on other matters. With no way of deciding between the median and plurality claims to represent the popular will, researchers have to accept both and arrange compromises between them. How this is done, however, is more a matter for research into governments than into elections.
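The two readings, and how they can diverge, can be illustrated with an invented three-party distribution of votes along a left-right scale:

```python
# A toy illustration, with invented numbers, of the two readings of the
# popular will: the median reading locates it at the party containing
# the median voter; the majoritarian reading at the plurality (largest)
# party. The two can point to quite different policy positions.

# (name, position on a 0-10 left-right scale, vote share in percent)
parties = [("Left", 2, 30), ("Centre", 5, 25), ("Right", 8, 45)]

# Plurality reading: the position of the largest party.
plurality = max(parties, key=lambda p: p[2])

# Median reading: order parties left to right and find the party whose
# cumulative vote share first reaches 50%.
cum = 0
for name, pos, share in sorted(parties, key=lambda p: p[1]):
    cum += share
    if cum >= 50:
        median = (name, pos, share)
        break

print("plurality:", plurality[0], "at", plurality[1])  # Right at 8
print("median:   ", median[0], "at", median[1])        # Centre at 5
```

Here the plurality party sits well to the right of the party containing the median voter, so the two claims to embody the popular will genuinely conflict, which is why some compromise between them has to be arranged.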
Election Data
While election research has generated and tested many theories of the determinants of voting choice, its most significant achievement to date may well be the generation of vast amounts of information: primarily survey data, but broadening out to time series and comparative collections of economic indicators, voting statistics, government expenditures and policy indices, the personnel and structures of governments, and text-based counts of policy emphases in campaign issues, manifestos, and speeches. The range and quantity of this information is truly staggering, and it remains underexplored. Combining information at the several levels of electors, parties, governments, and policies gives political scientists the ability to settle many of the unanswered questions raised in the previous discussion. If there is not yet a complete theory of democratic decision making and of the role of elections within it, researchers certainly now have the material with which to formulate and check one, not just for any one country but for the whole range of democracies. This is perhaps the major product of election research over the past 50 years, which has contributed cumulatively to the understanding of voting choice and of the democratic electorate.
- Berelson, B. R., Lazarsfeld, P. F., & McPhee, W. N. (1954). Voting. Chicago: University of Chicago Press.
- Budge, I., Crewe, I., & Farlie, D. J. (Eds.). (2009). Party identification and beyond. Colchester, UK: ECPR Press. (Original work published 1976)
- Budge, I., & Farlie, D. J. (1983). Explaining and predicting elections. London: Allen & Unwin.
- Campbell, A., Converse, P. E., Miller, W., & Stokes, D. E. (1960). The American voter. New York: Wiley.
- Downs, A. (1957). An economic theory of democracy. New York: Harper.
- Erikson, R. S., MacKuen, M. B., & Stimson, J. A. (2002). The macro polity. Cambridge, UK: Cambridge University Press.
- Pool, I. de S., Abelson, R. P., & Popkin, S. L. (1965). Candidates, issues and strategies. Cambridge, MA: MIT Press.
- Powell, G. B. (2000). Elections as instruments of democracy. New Haven, CT: Yale University Press.
- Schumpeter, J. A. (1950). Capitalism, socialism and democracy. New York: Harper. (Original work published 1942)