Glossary of Terms

The terms, concepts, and categories used in international studies derive from an assortment of academic disciplines.  In order to reflect these multiple perspectives, the definitions for this glossary have been drawn from three sources.  Terms marked #1 are taken from Andrew Edgar and Peter Sedgwick, editors, Key Concepts in Cultural Theory (London: Routledge, 1999).  Terms marked #2 are taken from Raymond Williams, Keywords: A Vocabulary of Culture and Society (New York: Oxford University Press, 1976).  Terms marked #3 are taken from Adam Kuper and Jessica Kuper, The Social Science Encyclopedia (London: Routledge, 1996).  In some cases, definitions have been shortened, spelling has been Americanized, and citations have been omitted.  You may wish to consult these reference works for additional sources.

binary opposition

center and periphery
civil society
commodity fetishism
consumer behavior
cultural capital
cultural relativism
cultural reproduction
culture industry

democratic transition
dialectical and historical materialism

economic development
economic growth
Enlightenment, The
ethnic politics

false consciousness
feminist theory
forces of production

gender/gender and sex
grand narrative

human capital
human nature
human rights

industrial revolutions
information society
international relations
international trade

labor theory of value

Malthus, Thomas Robert
mass media
means of production
mode of production
multicultural education



popular culture
post-industrial society
public sphere

rational choice theory
relations of production

social contract
social control
Social Darwinism
social democracy
social mobility
social stratification
social structure
state, origins of
systems theory

technological progress



World Bank
world systems theory (see also systems theory)

youth culture




agriculture

    1. A fundamental form of culture (as its name suggests), and as such a key site at which humanity confronts and transforms nature to its own ends. Agriculture is therefore relevant as the subject-matter of much high and popular culture (from Virgil’s Georgics of the first century BC, through Thomas Hardy’s Wessex, to that quintessentially English (radio) soap opera, The Archers (‘an everyday story of country folk’)). Yet it also continues to be a boundary where the inter-relationship of culture and nature is negotiated, as is indicated, for example, by contemporary concerns about the genetic manipulation of crops and farm animals. Such concerns may conceal the fact that existing agricultural products are themselves already the outcome of centuries of cultural manipulation.

authority

    1. Concept in sociology and political philosophy indicating the legitimate use of power. An agent thus submits willingly to, or is obedient to, the commands of another agent if that agent is perceived to be in authority. Obedience to authority is not induced through coercion and threat of violence. In social theory, the analysis of authority is first developed by Weber. He focuses on the question of why certain agents have authority. He offers three ideal types in explanation. Authority may be legal-rational, in which case authority is bestowed upon rules or laws typically through some regular and public process of law formation or a demonstration of the necessity and efficiency of the rules (as in the case of bureaucracies). Traditional authority again follows more or less well defined rules, but such rules are grounded in traditional practices, customs, and cosmologies, rather than in recent, public processes of formation. Charismatic authority rests, not in rules, but in the personality (and sanctity or heroism) of a particular leader, and thus in that person’s teaching and example.

    2. In political philosophy, the question of authority may be seen to receive a crucial modern formulation in the work of Hobbes. Hobbes effectively addresses the question of the need for authority (contingently in the face of the social disorder of civil war), and the grounds upon which individuals should submit to it. In Weberian terms, Hobbes’s account is a legal-rational one. It is, for Hobbes, rational to form a free social contract with a sovereign, providing that the sovereign maintains the social order and delivers peace. This approach is developed in the liberal tradition. A state is perceived to have authority in so far as its rules and laws would be acceptable to all rational citizens, independently of any particular interests they may wish to pursue. John Rawls’s thought experiment of an ‘original position,’ in which potential citizens plan a society in ignorance of their own talents and interests, is the most sophisticated contemporary version of such social contract accounts. In contrast, communitarian political philosophy suggests the primacy of traditional authority. In contradistinction to liberalism, agents are understood as already embedded in a particular community and culture. The agent’s judgement of authority will thus depend upon values taken for granted in their community.

      Political ‘realists,’ such as Vilfredo Pareto and Gaetano Mosca, reject the distinction between authority and power, arguing that all submission and obedience is ultimately imposed upon the mass of social members. The distinction between authority and power is questioned more subtly by certain accounts of ideology. Within Marxism particularly, the possibility that agents may be coerced, not merely by the use or threat of physical violence, but also by the control that a dominant group or class can exercise over ideas (for example, through control over education, mass media and religion) is broached. A state may have authority in the eyes of its citizens only because those citizens are denied the relevant cultural resources and information necessary to recognize that it is not acting in their best interests. The increasing difficulty states find in maintaining authority has been analyzed by Habermas within the theory of a legitimation crisis.

avant-garde

    1. Metaphorical term used in art theory and political philosophy. The French ‘avant-garde,’ or English ‘vanguard,’ literally refers to the foremost part of an army. Metaphorically, since the beginning of the 20th century, it has been taken to refer to political or cultural leadership by an elite. Implicit in this idea are assumptions of political and cultural progress, which the avant-garde pursues. The mass of society will be more or less indifferent to, or ignorant of, their interest in this progress, and will resist or be hostile to the avant-garde. As a key aspect of cultural modernism, the avant-garde typically expresses itself through obscure and innovative techniques, deliberately resisting easy assimilation into popular or mass culture. In political theory, the avant-garde is seen as a necessary intellectual elite, leading a mass that remains afflicted by ideology and thus by a false consciousness that blinds it to its own best interests. With the increasing questioning of modernism, and indeed of Marxism, the validity of the avant-garde has itself come into question.

binary opposition

    1. Concept in structuralism, rooted in Saussure’s linguistics but also in Radcliffe-Brown’s cultural anthropology, serving to explain the generation of meaning in one term or sign by reference to another, mutually exclusive term. The two terms may be seen to describe a complete system, by reference to two basic states in which the elements in that system can exist (e.g. culture/nature; dark/light; male/female; birth/death). One side of the binary opposition can be meaningful only in relation to the other side. Each side has the meaning of not being its opposite. A term may therefore appear in more than one binary opposition, with its meaning being modified accordingly. (Thus, death may be understood as an event, as ‘not birth’; or as a state, as ‘not life’.) Binary oppositions structure perception and interpretation of the natural and social world.

    2. In any system of signs, certain binary oppositions may be seen to stand in determinate relationships to each other. One binary opposition may be open to transformation into another, therefore enriching the meaning of all the terms concerned. Thus, for example, in western cultures, the opposition between birth and death may be transformed into an opposition between white and black (for example manifest in white christening robes and black hearses). Put differently, white is to black, as birth is to death. In addition, the binary opposition may contain an implicit evaluation, so that, for example, birth and white are associated with good, death and black with bad. The analysis of such series of oppositions provides a crucial insight into the working of ideology. Consider, for example, the following series: male/female; public/private; culture/nature; reason/emotion. Ideology may therefore work precisely to the degree that such series of binary oppositions are taken for granted, appearing to reflect rather than to structure the world. The critique of ideology entails the explication of a series of binary oppositions as a culturally specific interpretation, selection and privileging of elements from the ambient world.

      A further implication of the theorization of binary opposition focuses upon the status of ambiguous categories. Anything that shares characteristics of both sides of the opposition is suspect or otherwise problematic. Anthropologists have therefore suggested that the importance given to human hair or nail clippings in magic and folklore rests in their ambiguous status. They are at once part of the body, yet have no feeling and are easily cut from the body without pain or damage. Similarly, rites de passage mark ambiguous stages in human development between childhood and adulthood. Magic, ceremony, and the sacred are thus seen to be concerned with ambiguous categories.

bourgeois

    1. Bourgeois is a very difficult word to use in English; first, because although quite widely used it is still evidently a French word, the earlier Anglicization to burgess (from oF burgeis and mE burgeis, burges, borges – inhabitant of a borough) having remained fixed in its original limited meaning; secondly, because it is especially associated with Marxist argument, which can attract hostility or dismissal (and it is relevant here that in this context bourgeois cannot be properly translated by the more familiar English adjective middle-class); thirdly, because it has been extended, especially in English in the last twenty years, partly from this Marxist sense but mainly from much earlier French senses, to a general and often vague term of social contempt. To understand this range it is necessary to follow the development of the word in French, and to note a particular difficulty in the translation, into both French and English, of the German bürgerlich.

    2. Under the feudal regime in France bourgeois was a juridical category in society, defined by such conditions as length of residence. The essential definition was that of the solid citizen whose mode of life was at once stable and solvent. The earliest adverse meanings come from a higher social order: an aristocratic contempt for the mediocrity of the bourgeois, which extended, especially in the 18th century, into a philosophical and intellectual contempt for the limited if stable life and ideas of this ‘middle’ class (there was a comparable English 17th-century and 18th-century use of citizen and its abbreviation cit). There was a steady association of bourgeois with trade, but to succeed as a bourgeois, and to live bourgeoisement, was typically to retire and live on invested income. A bourgeois house was one in which no trade or profession (lawyers and doctors were later excepted) could be carried on.

      The steady growth in size and importance of this bourgeois class in the centuries of expanding trade had major consequences in political thought, which in turn had complicating effects on the word. A new concept of society was expressed and translated into English, especially in the 18th century, as civil society, but the equivalents for this adjective were, and in some cases still are, the French bourgeois and the German bürgerlich. In later English usage these came to be translated as bourgeois in the more specific 19th-century sense, often leading to confusion.

      Before the specific Marxist sense, bourgeois became a term of contempt, but also of respect from below. The migrant laborer or soldier saw the established bourgeois as his opposite; workers saw the capitalized bourgeois as an employer. The social dimension of the later use was thus fully established by the late 18th century, although the essentially different aristocratic or philosophical contempt was still an active sense.

      The definition of bourgeois society was a central concept in Marx, yet especially in some of his early work the term is ambiguous, since it was used in relation to Hegel, for whom civil (bürgerlich) society was an important term, to be distinguished from the state. Marx used, and in the end amalgamated, the earlier and later meanings. Marx’s new sense of bourgeois society followed earlier historical usage, from established and solvent burgesses to a growing class of traders, entrepreneurs and employers. His attack on what he called bourgeois political theory (the theory of civil society) was based on what he saw as its falsely universal concepts and institutions, which were in fact the concepts and institutions of a specifically bourgeois society: that is, a society in which the bourgeoisie (the class name was now much more significant) had become or was becoming dominant. Different stages of bourgeois society led to different stages of the capitalist mode of economic production, or, as it was later more strictly put, different stages of the capitalist mode of production led to different stages of bourgeois society and hence bourgeois thought, bourgeois feeling, bourgeois ideology, bourgeois art. In Marx’s sense the word has passed into universal usage. But it is often difficult to separate it, in some respects, from the residual aristocratic and philosophical contempt, and from a later form especially common among unestablished artists, writers and thinkers, who might not and often do not share Marx’s central definition, but who might sustain the older sense of hostility towards the (mediocre) established and respectable.

      The complexity of the word is then evident. There is a problem even in the strict Marxist usage, in that the same word, bourgeois, is used to describe historically distinct periods and phases of social and cultural development. In some contexts, especially, this is bound to be confusing: the bourgeois ideology of settled independent citizens is clearly not the same as the bourgeois ideology of the highly mobile agents of a para-national corporation. The distinction of petit-bourgeois is an attempt to preserve some of the earlier historical characteristics, but is also used for a specific category within a more complex and mobile society. There are also problems in the relation between bourgeois and capitalist, which are often used indistinguishably but which in Marx are primarily distinguished as social and economic terms. There is a specific difficulty in the description of non-urban capitalists (e.g. agrarian capitalist employers) as bourgeois, with its residual urban sense, though the social relations they institute are clearly bourgeois in the developed 19th-century sense. There is also difficulty in the relation between descriptions of bourgeois society and the bourgeois or bourgeoisie as a class. In a bourgeois society one class is dominant, but there can be difficulties of usage, associated with some of the most intense controversies of analysis, when the same word is used for a whole society in which one class is dominant (but in which, necessarily, there are other classes) and for a specific class within that whole society. The difficulty is especially noticeable in uses of bourgeois as an adjective describing some practice which is not itself defined by the manifest social and economic content of bourgeois.

      It is thus not surprising that there is resistance to the use of the word in English, but it has also to be said that for its precise uses in Marxist and other historical and political argument there is no real English alternative. The translation middle-class serves most of the pre-19th century meanings, in pointing to the same kinds of people, and their ways of life and opinions, as were then indicated by bourgeois, and had been indicated by citizen and cit and civil; general uses of citizen and cit were common until the late 18th century but less common after the emergence of middle-class in the late 18th century. But middle-class, though a modern term, is based on an older threefold division of society - upper, middle and lower - which has most significance in feudal and immediately post-feudal society and which, in the sense of the later uses, would have little or no relevance as a description of a developed or fully formed bourgeois society. A ruling class, which is the socialist sense of bourgeois in the context of historical description of a developed capitalist society, is not easily or clearly represented by the essentially different middle class. For this reason, especially in this context and in spite of the difficulties, bourgeois will continue to have to be used.

bourgeoisie

    1. A much used, but often poorly understood, term referring to the dominant class in capitalist society. In Marxist theory, it is most strictly employed in opposition to ‘proletariat,’ where it refers to the owners of productive capital (and thus to mercantile, industrial, and financial entrepreneurs). What distinguishes the bourgeoisie is that they have no need to sell their labor in order to survive. While such a bold contrast might be effective in the analysis of early capitalism, it fails to grasp the role and status of the administrative and managerial classes that have emerged with the development of high and late capitalism. Thus, ‘bourgeoisie’ is frequently used to refer to the ‘middle classes’ of contemporary capitalism. While such classes may still need to sell their labor (as does the working class), their higher financial reward and higher status entail that the continuation of capitalism is as much in their interests as in the interests of any class of owners. The role that the middle classes have in shaping culture has led to the frequent use of the adjective ‘bourgeois’ as a derogatory term.

bureaucracy

    1. As the term is understood in contemporary sociology, bureaucracy is that form of administration in which decision-making power is vested in offices, rather than in identifiable individuals. While bureaucracies have existed in pre-industrial societies (including imperial China), it is the fundamental role that bureaucracy plays in the organization and control of 20th-century capitalism that has received the greatest theoretical and empirical study.

    2. The classic source for the theory of bureaucracy is Max Weber’s work, published in the 1920s. Weber proposed a six-part model (or ideal type) of bureaucracy, which serves to specify its distinctive characteristics (even if these characteristics need not all be present in any particular empirical example of a bureaucracy). Weber’s characteristics are as follows: a high degree of specialization, with complex tasks broken down and clearly allocated to separate offices; a hierarchy, with chains of authority and responsibility clearly defined; activity governed by a consistent system of abstract rules; officials working impersonally, without emotional or personal attachment either to colleagues or clients; personnel recruited and promoted on the grounds of technical knowledge, ability and expertise; and the official’s activities as an official wholly separate from his or her private activities (so that a professional position cannot be used for personal advantage). For Weber, this structure is the most efficient (and therefore most instrumentally rational) way in which to organize the complex activities of modern industrial society. As such, bureaucracy is an unavoidable feature of advanced society, not merely in industry, but in almost every area of social life. Mommsen has thus written of the total bureaucratization of life. Weber himself predicted not just the growing influence of bureaucracy in capitalism, but also a convergence between capitalist and Soviet communist societies, in terms of the dominant role played by bureaucracy in both.

      While bureaucracy is technically efficient, for Weber, it also has undesirable consequences for democracy. Precisely because nearly all social activities must proceed through stages that are pre-determined by bureaucracies, and given that those bureaucratic structures are themselves inflexible and possibly unresponsive to change, innovative activity, or activity that does not make sense within the narrow parameters of the bureaucracy, is inhibited. Further, technical expertise is concentrated within the democratically unaccountable offices of the bureaucracy, so that bureaucratic decisions and procedures are not easily challenged. Bureaucracy thereby becomes a ‘steel-hard cage’ that encloses us all.

      Marxism has perhaps contributed little to the theory of bureaucracy. Bureaucracies were less extensive when Marx and Engels were writing, and they may be seen to be generally antipathetic to bureaucracy. The classic Marxist writings notably underestimate the significance that administrative structures have in capitalism (and thus have little to say on the significance of the managerial classes). The Marxists who have had most to say about bureaucracy tend to be those who seek to fuse Marxist and Weberian theories. In History and Class Consciousness, the Hungarian Marxist Lukács began to use Weberian accounts of bureaucracy and rationalization to extend Marx’s theory of commodity fetishism into an account of the reification of the social totality (and thus to explain the distinctive ideological forms of contemporary capitalism, in so far as society confronts the individual as an autonomous, quasi-natural object, rather than as a product of human agency and choice). This in turn influenced the Frankfurt School, and especially T.W. Adorno, in developing a characterization of late capitalism as a totally administered society.

    3. Bureaucracy appears in English from the middle of the 19th century. Carlyle in Latter-Day Pamphlets (1850) wrote of the ‘Continental nuisance called "Bureaucracy,"’ and Mill in 1848 wrote of the inexpediency of concentrating all the power of organized action ‘in a dominant bureaucracy.’ In 1818, using an earlier form, Lady Morgan had written of the ‘Bureaucratie or office tyranny, by which Ireland had been so long governed.’ The word was taken from the French bureaucratie, from bureau – writing-desk and then office. The original meaning of bureau was the baize used to cover desks. The English use of bureau as office dates from the early 18th century; it became more common in American use, especially with reference to foreign branches, the French influence being predominant. The increasing scale of commercial organization, with a corresponding increase in government intervention and legal controls, and with the increasing importance of organized and professional central government, produced the political facts to which the new term pointed. But there was then considerable variation in their evaluation. In English and North American usage the foreign term, bureaucracy, was used to indicate the rigidity or excessive power of public administration, while such terms as public service or civil service were used to indicate impartiality and selfless professionalism. In German, Bureaukratie often had the more favorable meaning, as in Schmoller (‘the only neutral element,’ apart from the monarchy, ‘in the class war’), and was given a further sense of legally established rationality by Weber. The variation of terms can still confuse the variations of evaluation, and indeed the distinctions between the often diverse political systems which ‘a body of public servants,’ or a bureaucracy, can serve.
Beyond this, however, there has been a more general use of bureaucracy to indicate, unfavorably, not merely the class of officials but certain types of centralized social order, of a modern organized kind, as distinct not only from older aristocratic societies but from popular democracy. This has been important in socialist thought, where the concept of the ‘public interest’ is especially exposed to the variation between ‘public service’ and ‘bureaucracy.’

    4. In more local ways, bureaucracy is used to refer to the complicated formalities of official procedures, what the Daily News in 1871 described as ‘the Ministry…with all its routines of tape, wax, seals, and bureauism.’ There is again an area of uncertainty between two kinds of reference, as can be seen by the coinage of more neutral phrases such as ‘business methods’ and ‘office organization’ for commercial use, bureaucracy being often reserved for similar or identical procedures in government.



capital

    1. Concept from economics, referring most obviously and intuitively to the machines, plant and buildings used in the industrial manufacturing process. More technically, capital is one of four factors of production. A factor of production is a resource that is valued, not for its own sake, but for its function in the production of other goods or services that are of intrinsic value. The other factors of production are land (including all natural resources prior to their extraction, the land surface, sea and space), labor (being the ability of human beings to engage in productive work), and entrepreneurship (being the ability to organize together the other three factors in the production process). Capital is any resource or item used in the production process that has already been subjected to some form of productive labor.

capitalism

    1. A form of social and economic organization, typified by the predominant role played by capital in the economic production process, and by the existence of extensive markets through which the production, distribution and consumption of goods and services (including labor) is organized. The development of capitalism may most readily be linked to industrialization, and thus has its purest manifestation in 19th-century Britain and the USA. However, a more limited form of (mercantile) capitalism, characterized by limited markets in commodities, and thus by the existence of a small capitalist class of merchants, but without industrial production or free labor markets, existed in medieval Europe.

    2. Different theories of capitalism exist, especially within social theory, providing different explanatory models of the origin of capitalism and of its predominant features. In Marxism, capitalism is theorized in terms of the organization of production and the resultant relationship between economic classes. The emergence of capitalism is thus explained in terms of the development of industrial technology (or the forces of production). A capitalist society is structured through the antagonism of two dominant classes: the bourgeoisie, which owns and controls the means of production, and the proletariat, which owns only its ability to work (and therefore survives by selling its labor power). At the surface, there appears to be a fair and free exchange of commodities, including labor power, through the market mechanism. In Marx’s analysis, beneath this surface lies a systematic exploitation of the proletariat, in so far as the price of labor set on the free market is less than the value of labor’s product. The bourgeoisie are therefore seen to appropriate surplus value, equivalent to the discrepancy between the costs of producing a commodity and the total revenue received from its sale. While Max Weber’s analysis of capitalism shares much with Marx’s, Weber places greater emphasis on the surface organization of capitalism, and thus on capitalism as a system of exchange and consumption. The link between capitalism and rationalization is central to this account. For Weber, a precondition of capitalist development is the development of double-entry book-keeping (and thus the possibility of rational control and prediction of the capitalist’s resources).

      At the beginning of the 20th century, European and American capitalism developed in a number of key areas. Weber’s analysis of rationality responded to the increasing bureaucratization of capitalism, as more complex production required ever more sophisticated forms of administration and control. This in turn led to the rise of a white-collar middle class that is distinct in its interests and allegiances from either the working-class proletariat or the bourgeoisie. Furthermore, banks and other financial organizations became more significant, as day-to-day control of production was increasingly separated from ownership. A distinctive form of finance capital was identified, for example, by the Austro-Marxist Rudolf Hilferding around 1910. Linked to this development is both the increasing concentration of capital, so that production is controlled by fewer, larger corporations (leading to monopoly capitalism), and the expansion of capitalism into colonial markets. Increasing state intervention, not merely in the regulation of capitalist production, but also in the ownership of the means of production, led to a further deviation from the ‘pure’ model of free-market capitalism. A period of organized capitalism thus begins to emerge after the First World War, and continues, with the increasingly multi-national consumption and production bases of major corporations, under the rise of welfare state capitalism and Keynesian economic policies, at least into the 1970s. All these developments may be seen to obscure the basic lines of class conflict identified by Marx. The proletariat is increasingly differentiated within itself and, through greater job security and real income, is more integrated into the capitalist system. The economic crises predicted by Marx are at worst managed and at best avoided by interventionist governments.

      Recent developments, in technology, with the decline of traditional manufacturing industries and the rise of communications or knowledge based industries; in consumerism, with increasingly affluent working and middle classes; and in the political shifts of the 1980s away from state intervention, demand new theories of the organization of contemporary societies. Thus theories of late capitalism, post-industrial society, and various accounts of postmodernism suggest a more or less radical break from capitalist modes of organization.

    3. Capitalism as a word describing a particular economic system began to appear in English from the early 19th century, and almost simultaneously in French and German. Capitalist as a noun is a little older; Arthur Young used it, in his journal Travels in France (1792), but relatively loosely: ‘moneyed men, or capitalists.’ Coleridge used it in the developed sense – ‘capitalists…having labour at demand’ – in Table Talk (1823). Thomas Hodgskin, in Labour Defended against the Claims of Capital (1825), wrote: ‘all the capitalists of Europe, with all their circulating capital, cannot of themselves supply a single week’s food and clothing,’ and again: ‘betwixt him who produces food and him who produces clothing, betwixt him who makes instruments and him who uses them, in steps the capitalist, who neither makes nor uses them and appropriates to himself the produce of both.’ This is clearly a description of the economic system.

    4. The economic sense of capital had been present in English from the 17th century and in a fully developed form from the 18th century. Chambers Cyclopedia (1727-51) has ‘power given by Parliament to the South-Sea company to increase their capital,’ and a definition of ‘circulating capital’ is in Adam Smith (1776). The word acquired this specialized meaning from its general sense of ‘head’ or ‘chief’: French capital, from Latin capitalis, from caput – head. There were many derived specialist meanings; the economic meaning developed from a shortening of the phrase ‘capital stock’ – a material holding or monetary fund. In classical economics the functions of capital, and of various kinds of capital, were described and defined.

      Capitalism represents a development of meaning in that it has been increasingly used to indicate a particular and historical economic system rather than any economic system as such. Capital and at first capitalist were technical terms in any economic system. The later (early 19th century) uses of capitalist moved towards specific functions in a particular stage of historical development; it is this use that crystallized in capitalism. There was a sense of the capitalist as the useless but controlling intermediary between producers, or as the employer of labor, or, finally, as the owner of the means of production. This involved, eventually, and especially in Marx, a distinction of capital as a formal economic category from capitalism as a particular form of centralized ownership of the means of production, carrying with it the system of wage-labor. Capitalism in this sense is a product of a developing bourgeois society; there are early kinds of capitalist production but capitalism as a system – what Marx called ‘the capitalist era’ – dates only from the 16th century and did not reach the stage of industrial capitalism until the late 18th and early 19th century.

      There has been immense controversy about the details of this description, and of course about the merits and working of the system itself, but from the early 20th century, in most languages, capitalism has had this sense of a distinct economic system, which can be contrasted with other systems. As a term capitalism does not seem to be earlier than the 1880s, when it began to be used in German socialist writing and was extended to other non-socialist writing. Its first English and French uses seem to date only from the first years of the 20th century. In the middle of the 20th century, in reaction against socialist argument, the words capitalism and capitalist have often been deliberately replaced by defenders of the system by such phrases as ‘private enterprise’ and ‘free enterprise.’

      These terms, recalling some of the conditions of early capitalism, are applied without apparent hesitation to very large or para-national ‘public’ corporations, or to an economic system controlled by them. At other times, however, capitalism is defended under its own now common name. There has also developed a use of post-capitalist and post-capitalism, to describe modifications of the system such as the supposed transfer of control from shareholders to professional management, or the coexistence of certain nationalized or ‘state-owned’ industries. The plausibility of these descriptions depends on the definition of capitalism which they are selected to modify. Though they evidently modify certain kinds of capitalism, in relation to its central sense they are marginal. A new phrase, state-capitalism, has been widely used in the middle of the 20th century, with precedents from the early 20th century, to describe forms of state ownership in which the original conditions of the definition – centralized ownership of the means of production, leading to a system of wage-labor – have not really changed. It is also necessary to note an extension of the adjective capitalist to describe the whole society, or features of the society, in which a capitalist economic system predominates. There is considerable overlap and occasional confusion here between capitalist and bourgeois. In strict Marxist usage capitalist is a description of the mode of production and bourgeois a description of a type of society. It is in controversy about the relations between a mode of production and a type of society that the conditions for overlap of meaning occur.

3. The term capitalism relates to a particular system of socioeconomic organization (generally contrasted with feudalism and socialism), the nature of which is more often defined implicitly than explicitly. In common with other value-loaded concepts of political controversy, its definition - whether implicit or explicit - shows a chameleon-like tendency to vary with the ideological bias of the user. Even when treated as a historical category and precisely defined for the purpose of objective analysis, the definition adopted is often associated with a distinctive view of the temporal sequence and character of historical development. Thus historians such as Sombart (1915), Weber (1930[1922]) and Tawney (1926), who were concerned to relate changes in economic organization to shifts in religious and ethical attitudes, found the essence of capitalism in the acquisitive spirit of profit-making enterprise and focused on developments occurring in the 16th, 17th and early 18th centuries. Probably a majority of historians have seen capitalism as reaching its fullest development in the course of the Industrial Revolution and have treated the earlier period as part of a long transition between feudalism and capitalism. Marxist historians have identified a series of stages in the evolution of capitalism - for example, merchant capitalism, agrarian capitalism, industrial capitalism and state capitalism - and much of the debate on origins and progress has hinged on differing views of the significance, timing and characteristics of each stage.
Thus Wallerstein (1979), who adopts a world-economy perspective, locates its origins in the agrarian capitalism that characterized Europe of the 16th, 17th and 18th centuries; while Tribe (1981), who also takes agrarian capitalism as the original mode of capitalist production, sees the essence of capitalism in a national economy where production is separated from consumption and is coordinated according to the profitability of enterprises operating in competition with each other.

Whatever the historical or polemical objective of writers, however, their definition is likely to be strongly influenced by Karl Marx (1867-94), who was the first to attempt a systematic analysis of the 'economic law of motion' of capitalist society and from whom most of the subsequent controversy on the nature and role of capitalism has stemmed. For Marx, capitalism was a 'mode of production' in which there are basically two classes of producers: the capitalists, who own the means of production (capital or land), make the strategic day-to-day economic decisions on technology, output and marketing, and appropriate the profits of production and distribution; and the laborers, who own no property but are free to dispose of their labor for wages on terms which depend on the numbers seeking work and the demand for their services. This was essentially the definition adopted, for example, by non-Marxist economic historians such as Lipson and Cunningham and by Marxists such as Dobb (1946).

Given this perspective, it is primarily the emergence of a dominant class of entrepreneurs supplying the capital necessary to activate a substantial body of workers which marks the birth of capitalism. In England, and even more emphatically in Holland, it can be dated from the late 16th and early 17th centuries. Holland's supremacy in international trade, associated with its urgent need to import grain and timber (and hence to export manufactures) enabled Amsterdam to corner the Baltic trade and to displace Venice as the commercial and financial center of Europe. The capital thus amassed was available to fund the famous chartered companies (Dutch East India Company 1602, West India Company 1621) as well as companies to reclaim land and exploit the area's most important source of industrial energy - peat. It also provided the circulating capital for merchants engaged in the putting-out system whereby they supplied raw materials to domestic handicrafts workers and marketed the product. Specialization within agriculture drew the rural areas still further into the money economy, and the urban areas supplied a wide range of industrial exports to pay for essential raw material imports.

Dutch capitalists flourished the more because they were subject to a Republican administration which was sympathetic to their free market, individualist values. In England, where similar economic developments were in progress in the 16th and early 17th centuries, the rising class of capitalists was inhibited by a paternalistic monarchical government bent on regulating their activities for its own fiscal purposes and power objectives and in terms of a different set of social values. The Tudor system of state control included checking enclosures, controlling food supplies, regulating wages and manipulating the currency. The early Stuarts went further in selling industrial monopolies and concessions to favored entrepreneurs and exclusive corporations and infuriated the majority whose interests were thus damaged. The English capitalists carried their fight against monopolies to the Cromwellian Revolution. When the monarchy was restored in the 1660s, the climate of opinion had been molded by religious, political and scientific revolution into an environment which favored the advancement of capitalism and laid the foundations for its next significant phase - the Industrial Revolution.

Orthodox economic theorists eschew the concept of capitalism: it is too broad for their purposes in that it takes into account the social relations of production. Modern economic historians adhering to an orthodox framework of economic theory also tend to avoid the term. They do, however, recognize a significant aspect of capitalism by emphasizing the rational, profit-maximizing, double-entry bookkeeping characteristics of capitalist enterprise, and in the post-Second World War debates on economic development from a backward starting-point, there has been a tendency to regard the emergence of this 'capitalist spirit' as an essential prerequisite to the process of sustained economic growth in non-socialist countries.

The modern debate on capitalism in contemporary advanced economies has revolved around its being an alternative to socialism. Marxist economists follow Marx in seeing capitalism as a mode of production whose internal contradictions determine that it will eventually be replaced by socialism. In the aftermath of the Second World War, when the governments of most developed countries took full employment and faster economic growth as explicit objectives of national economic policy, there was a marked propensity for the governments of capitalist economies to intervene actively and extensively in the process of production. At that stage the interesting issues for most Western economists seemed to be the changing balance of private and public economic power (see Shonfield 1965), and the extent to which it was either desirable or inevitable for the increasingly ‘mixed’ capitalist economies to converge towards socialism. In the late 1960s and 1970s, when the unprecedented post-war boom in world economic activity came to an end, Marxist economists were able to point confidently to the ‘crisis of capitalism’ for which they found evidence in rising unemployment and inflation in capitalist countries; but non-Marxist economists had lost their earlier consensus. The economic debate on capitalism is now taking place in a political context which is relatively hostile to state intervention; and those economists who believe that the ‘spirit of capitalism,’ or free enterprise, is the key to sustained technological progress and that it is weakened by socialist economic policies, seem to carry more conviction than they did in the 1950s and 1960s.

center and periphery
     3. The two concepts center and periphery form part of an attempt to explain the processes through which capitalism is able to affect the economic and political structure of underdeveloped or developing societies. Drawing on the Marxist tradition, this view assumes that in the central capitalist countries there is a high organic composition of capital, and wage levels approximate the cost of reproducing labor. By contrast, in the peripheral countries, there is a low organic composition of capital, and wages are likely to be low, hardly meeting the cost of reproducing labor. This happens because in peripheral areas the reproduction of labor is often dependent on some degree of non-capitalist production, and the wages paid to workers are subsidized by subsistence production. In some cases, such as with plantation workers, smallholder plots may contribute as much as the actual money wage paid, or in mining, the migrant male wage laborer may receive a wage which supports him but not his family, who depend on subsistence production elsewhere. In the center, wages are determined largely by market processes, whereas at the periphery non-market forces, such as political repression or traditional relations of super- and subordination (as between patrons and clients), are important in determining the wage rate.

The use of the concepts center and periphery implies the world-system as the unit of analysis, and ‘underdevelopment’ as an instituted process rather than a mere descriptive term. Underdevelopment is the result of contradictions within capitalist production relations at the center. It is the outcome of attempts to solve these problems and is a necessary part of the reproduction of capitalism on a world scale.

Attempts to analyze the processes of surplus extraction, together with the claim that the world economy had become capitalist, gave rise to two major interrelated debates. One concerned the precise definition of capitalism and whether it is to be satisfactorily characterized by a specific system of production or of exchange relations. The other tried to identify links between center and periphery, and thus the nature of the system, in terms of the relations and articulations between different modes of production. In trying to clarify these theoretical issues together with their political implications, the use of the terms center and periphery was elaborated and empirically researched. This gave rise to various forms of world-system theory, represented in the writing of Wallerstein (1974), Frank (1978) and Amin (1976); it also revived interest in theories of national and global economic cycles, for example in the work of Mandel (1980). In addition, in attempting to explain the position of such countries as Brazil, Argentina and Mexico, the concept of the semi-periphery was developed. This concept involves the idea that particular political cultures and the mixed nature of their industrialization place these countries in a buffer position, particularly in their international political stance, between central capitalist countries and those of the true periphery.

citizenship
3. Only a state, that is, an internationally recognized entity, can grant a person citizenship. One cannot be a citizen of an ethnic group or of a nationality which is not organized as a state. Nor is citizenship confined to democratic states. The distinction between citizens (who belong to a republic) and subjects (who belong to a monarchy) became obsolete when democracy matured in states that retained a monarchical façade. Non-democratic states would not now tolerate the international stigmatization of their population by a refusal to term them ‘citizens.’

Citizenship is a legal status defined by each state. Rights and obligations are nowadays ascribed equally to all citizens, since it has become inexpedient to acknowledge the existence of second-class citizens, whether on the basis of place of birth or residence, gender, beliefs, behavior, race, or caste. ‘Civil’ rights protect their safety and ability to act. Raymond Aron (1974) affirms that ‘modern citizenship is defined by the Rights of Man,’ but ancient polities (e.g. the Roman Empire) emphasized liberties and procedural guarantees on behalf of their recognized members. In addition to these civil rights, other rights, again supposedly modern, are termed social rights or entitlements. These entitle the citizen to some level of well-being and of social (i.e. socially guaranteed and organized) security. It should not be forgotten, however, that ancient states had their own form of entitlements, the panem et circenses of Rome being inconsiderable when compared to the state socialism of some ancient empires in America or Asia. Moreover, civil rights, true prerequisites of democracy, have for centuries been viewed as ‘natural’ possessions of individuals, which had only to be protected from the state (hence their often negative description: ‘Congress shall make no law…abridging the freedom of speech or of the press’). Social rights, in contrast, are popular aspirations which are not always enforceable, even where they are endorsed by the state, because of socioeconomic or ideological constraints. Westbrook therefore suggests that ‘a civil right is thus defined in spite of ordinary politics, a social right is defined by the grace of ordinary politics.’

The first duty of the citizen is to obey the laws of the state: ‘subjection to the sovereign is the defining characteristic of citizenship.’ Serving the state, from paying taxes up to risking one’s life in its defense, is part of civic duty. However, most political philosophers have stressed that the state exists to serve the citizens: hence it is necessary for the citizens to assert political control over the state in order to ensure that their civic responsibilities do not become too heavy in proportion to their rights.

It is therefore easy to understand why citizenship, originally merely a classification of membership, has become enmeshed with the three most formidable ideological currents of modern times: nationalism and democracy, which advocate an active, dedicated citizenship, and tend to reject on the one hand the non-patriots and on the other hand the non-democrats; and third, but not least, the ideology of the welfare state, which emphasizes a passive, consumerlike citizenship and which merges easily with the Hobbesian view of a profitable authority. These three conflicting ideologies are themselves internally divided on such issues as the relevance of ethnicity for the nation, and of majority rule for democracy. In consequence of these ideological conflicts being grafted upon it, the juridical notion of citizenship is sure to remain a matter of urgent public debate for many years to come.

civil society
    1. Before the work of the philosopher Hegel, the term ‘civil society’ was roughly equivalent in meaning to the term ‘state.’ Hegel, in using this term, was alluding to the social domain of market exchange (the market economy – a notion derived from such texts as Adam Smith’s Wealth of Nations) in which individual civil agents freely engage in the pursuit of financial wealth, and the ownership and exchange of goods. Civil society is contrasted by Hegel with the realm of the family, in which the ties between members are based on mutual affection (the bonds of love). In contrast to the family, civil society is defined as a realm of engagement in which an individual pursues their own private ends, and in doing so encounters others primarily as means for the satisfaction of subjective needs (in other words, the relationship between individuals is an instrumental one). In civil society the individual thereby gains a sense of identity derived from his or her relative independence from others. Yet, in Hegel, this independence contains within it a shared characteristic, for through the active pursuit of their subjective ends individuals also develop a sense of mutual interdependence. Civil society, therefore, is not for Hegel merely to be understood as the outcome of individuals engaged in the free pursuit of their own desires (a domain purely of the market economy in Adam Smith’s sense), but as bringing with it a sense of shared interests in which individuals recognize both the duty they have to support themselves and their duties toward one another (for instance, within civil society, Hegel argues, individuals can claim certain entitlements such as the right to job security, the right to education and the protection from such social hardships as poverty). Because of this, civil society is characterized by Hegel as constituting a ‘universal family,’ which is composed of groups or ‘corporations’ of individuals who are affiliated by means of a common craft or profession.
On Hegel’s account civil society is contrasted with the state, which is ultimately concerned with the ethical good of the whole and takes the principle of the universal family to its logical fruition by functioning as a means of mediating between the competing claims of differing interests (both of individuals and corporations) with the aim of achieving the well-being of the whole of society (in Hegel’s terms, the ‘ethical life’).

    2. The young Karl Marx inherited Hegel’s conception of civil society, and displayed a more or less uncritical attitude toward it. In his later writings, however, Marx came to adopt the view that civil society and the state are intimately connected, contending that the apparent freedom of individual association and pursuits in civil society is in fact a masked manifestation of an underlying structure of state power, the latter being in the hands of a wealthy capitalist minority whose aim is the exploitation of the majority in the interests of enhanced profit. On a Marxian view, therefore, the realm of civil society is intimately connected with issues of power and ideology. Some recent commentators tend to adhere to the Hegelian view, namely that civil society is a sphere of individual association which may be contrasted with the domain of state power. The meaning of the term has not, therefore, been exhausted by Marx’s attempted revaluation of it.

3. This is an old concept in social and political thought that has recently been revived, especially in eastern Europe but also in the west. Traditionally, up to the 18th century, it was a more or less literal translation of the Roman societas civilis and, behind that, the Greek koinónia politiké. It was synonymous, that is, with the state or ‘political society.’ When Locke spoke of ‘civil government,’ or Kant of bürgerliche Gesellschaft, or Rousseau of état civil, they all meant simply the state, seen as encompassing - like the Greek polis - the whole realm of the political. Civil society was the arena of the politically active citizen. It also carried the sense of a ‘civilized’ society, one that ordered its relations according to a system of laws rather than the autocratic whim of a despot.

The connection of citizenship with civil society was never entirely lost. It forms part of the association that lends its appeal to more recent revivals of the concept. But there was a decisive innovation in the second half of the 18th century that broke the historic equation of civil society and the state. British social thought was especially influential in this. In the writings of John Locke and Tom Paine, Adam Smith and Adam Ferguson, there was elaborated the idea of a sphere of society distinct from the state and with forms and principles of its own. The growth of the new science of political economy - again largely a British achievement - was particularly important in establishing this distinction. Most of these writers continued to use the term civil society in its classical sense, as in Adam Ferguson's Essay on the History of Civil Society (1767); but what they were in fact doing was making the analytical distinction that was soon to transform the meaning of the concept.

It is to Hegel that we owe the modern meaning of the concept of civil society. In the Philosophy of Right (1821), civil society is the sphere of ethical life interposed between the family and the state. Following the British economists, Hegel sees the content of civil society as largely determined by the free play of economic forces and individual self-seeking. But civil society also includes social and civic institutions that inhibit and regulate economic life, leading by the ineluctable process of education to the rational life of the state. So the particularity of civil society passes over into the universality of the state.

Marx, though acknowledging his debt to Hegel, narrowed the concept of civil society to make it equivalent simply to the autonomous realm of private property and market relations. 'The anatomy of civil society,’ Marx said, 'is to be sought in political economy'. This restriction threatened its usefulness. What need was there for the concept of civil society when the economy or simply 'society' - seen as the effective content of the state and political life generally - supplied its principal terms? In his later writings Marx himself dropped the term, preferring instead the simple dichotomy 'society-state'. Other writers too, and not only those influenced by Marx, found less and less reason to retain the concept of civil society. The 'political society' of Alexis de Tocqueville's Democracy in America (1835-40) recalled the earlier sense of civil society as education for citizenship; but Tocqueville's example did little to revive the fortunes of what was increasingly regarded as an outmoded term. In the second half of the 19th century 'civil society' fell into disuse.

It was left to Antonio Gramsci, in the writings gathered together as the Prison Notebooks (1929-35), to rescue the concept in the early part of the 20th century. Gramsci, while retaining a basic Marxist orientation, went back to Hegel to revitalize the concept. Indeed he went further than Hegel in detaching civil society from the economy and allocating it instead to the state. Civil society is that part of the state concerned not with coercion or formal rule but with the manufacture of consent. It is the sphere of 'cultural politics'. The institutions of civil society are the Church, schools, trade unions, and other organizations through which the ruling class exercises hegemony over society. By the same token it is also the arena where that hegemony is challengeable. In the radical decades of the 1960s and 1970s, it was Gramsci's concept of civil society that found favor with those who attempted to oppose the ruling structures of society not by direct political confrontation but by waging a kind of cultural guerrilla warfare. Culture and education were the spheres where hegemony would be contested, and ended.

New life was also breathed into the concept by the swift-moving changes in central and eastern Europe in the late 1970s and 1980s. Dissidents in the region turned to the concept of civil society as a weapon against the all-encompassing claims of the totalitarian state. The example of Solidarity in Poland suggested a model of opposition and regeneration that avoided suicidal confrontation with the state by building up the institutions of civil society as a 'parallel society'. In the wake of the successful revolutions of 1989 throughout the region, the concept of civil society gained immensely in popularity. To many intellectuals it carried the promise of a privileged route to the post-communist, pluralist society, though they were vague about the details. Western intellectuals too were enthused anew with the concept. For them it suggested a new perspective on old questions of democracy and participation, in societies where these practices seemed to have become moribund.

Civil society, it is clear, has renewed its appeal. As in the 18th century, we seem to feel once more the need to define and distinguish a sphere of society that is separate from the state. Citizenship appears to depend for its exercise on active participation in non-state institutions, as the necessary basis for participation in formal political institutions. This was Tocqueville’s point about American democracy; it is a lesson that the rest of the world now seems very anxious to take to heart. The question remains whether ‘civil society’ will simply be a rallying cry and a slogan, or whether it will be given sufficient substance to help in the creation of the concrete institutions needed to realize its goals.

    1. Civilization is now generally used to describe an achieved state or condition of social life. Like culture, with which it has had a long and still difficult interaction, it referred originally to a process, and in some contexts this sense still survives.

    2. Civilization was preceded in English by civilize, which appeared in the early 17th century, from civiliser, French – to make a criminal matter into a civil matter, and thence, by extension, to bring within a form of social organization. The root word is civil from civilis, Latin – of or belonging to citizens, from civis, Latin – citizen. Civil was thus used in English from the 14th century, and by the 16th century had acquired the extended senses of orderly and educated. Hooker in 1594 wrote of ‘Civil Society’ – a phrase that was to become central in the 17th and especially 18th centuries – but the main development towards a description of an ordered society was civility, from civilitas, Latin – community. Civility was often used in the 17th and 18th centuries where we would now expect civilization, and as late as 1772 Boswell, visiting Johnson, ‘found him busy, preparing a fourth edition of his folio Dictionary…He would not admit civilization, but only civility. With great deference to him, I thought civilization, from to civilise, better in the sense opposed to barbarity, than civility.’ Boswell had correctly identified the main use that was coming through, which emphasized not so much a process as a state of social order and refinement, especially in conscious historical or cultural contrast with barbarism. Civilization appeared in Ash’s dictionary of 1775, to indicate both the state and process. By the late 18th century and then very markedly in the 19th century it became common.

      In one way the new sense of civilization, from the late 18th century, is a specific combination of the ideas of a process and an achieved condition. It has behind it the general spirit of the Enlightenment, with its emphasis on secular and progressive human self-development. Civilization expressed this sense of historical process, but also celebrated the associated sense of modernity: an achieved condition of refinement and order. In the Romantic reaction against these claims for civilization, alternative words were developed to express other kinds of human development and other criteria for human well-being, notably culture. In the late 18th century the association of civilization with refinement of manners was normal in both English and French. Burke wrote in Reflections on the Revolution in France: ‘our manners, our civilization, and all the good things which are connected with manners, and with civilization.’ Here the terms seem almost synonymous, though we must note that manners has a wider reference than in ordinary modern usage. From the early 19th century the development of civilization towards its modern meaning, in which as much emphasis is put on social order and on ordered knowledge (later, science) as on refinement of manners and behavior, is on the whole earlier in French than in English. But there was a decisive moment in English in the 1830s, when Mill, in his essay on Coleridge, wrote:

        Take for instance the question how far mankind has gained by civilization. One observer is forcibly struck by the
        multiplication of physical comforts; the advancement and diffusion of knowledge; the decay of superstition; the
        facilities of mutual intercourse; the softening of manners; the decline of war and personal conflict; the progressive
        limitation of the tyranny of the strong over the weak; the great works accomplished throughout the globe by the
        cooperation of the multitudes…
This is Mill’s range of positive examples of civilization, and it is a fully modern range. He went on to describe negative effects: loss of independence, the creation of artificial wants, monotony, narrow mechanical understanding, inequality and hopeless poverty. The contrast made by Coleridge and others was between civilization and culture or cultivation.
        The permanent distinction and the occasional contrast between cultivation and civilization…The permanency of the
        nation…and its progressiveness and personal freedom…depend on a continuing and progressive civilization. But
        civilization is itself but a mixed good, if not far more a corrupting influence, the hectic of disease, not the bloom
        of health, and a nation so distinguished more fitly to be called a varnished than a polished people, where this
        civilization is not grounded in cultivation, in the harmonious development of those qualities and faculties
        that characterize our humanity. (On the Constitution of Church and State, V)

Coleridge was evidently aware in this passage of the association of civilization with the polishing of manners; that is the point of the remark about varnish, and the distinction recalls the curious overlap, in 18th-century English and French, between polished and polite, which have the same root. But the description of civilization as a ‘mixed good,’ like Mill’s more elaborated description of its positive and negative effects, marks the point at which the word has come to stand for a whole modern social process. From this time on this sense was dominant, whether the effects were reckoned as good, bad or mixed.

Yet it was still primarily seen as a general and indeed universal process. There was a critical moment when civilization was used in the plural. This is later with civilizations than with cultures; its first clear use was in French (Ballanche) in 1819. It is preceded in English by implicit uses to refer to an earlier civilization, but it is not common anywhere until the 1860s.

In modern English civilization still refers to a general condition or state, and is still contrasted with savagery or barbarism. But the relativism inherent in comparative studies, and reflected in the use of the plural civilizations, has affected this main sense, and the word now regularly attracts some defining adjective: Western civilization, modern civilization, industrial civilization, scientific and technological civilization. As such it has come to be a relatively neutral form for any achieved social order or way of life, and in this sense has a complicated and much disputed relation with the modern social sense of culture. Yet its sense of an achieved state is still sufficiently strong for it to retain some normative quality; in this sense civilization, a civilized way of life, the condition of civilized society may be seen as capable of being lost as well as gained.

    1. Classes may primarily be understood as economic groupings, although the relevant economic factors that serve to identify a class may be disputed. Thus, in the Marxist tradition, classes are defined in terms of the ownership of productive wealth, while other traditions look to differences in income or occupation. Class divisions are typically seen as fundamental to the stratification of society, and as such may be associated with differences in power and culture. Crucially, classes are not typically understood as aggregates of individuals, where class analysis would be concerned with classifying individuals according to some common attribute that they share. Rather, classes are understood as social entities that have a reality that is independent of the individuals that make them up. As such, class may be a crucial causal factor in explaining the constitution of the individual human subject.

    2. Marx and Engels’ famous, if slightly glib, comment that all preceding history has been the history of class conflict, expresses much that is fundamental to the Marxist approach to class. The analysis of any given society, at any moment of history, can focus on the latent or explicit conflict that exists between two major classes. The subordinate class will be active economic producers in the society. However, the members of that class will not have control over the production process, and thus will not be able to retain the full value of what they produce, or otherwise determine the allocation and distribution of that product. This is because the dominant class will own and control the society’s stock of economic resources (or means of production), and will thereby control the fate of whatever is produced with these resources. The relationship between the dominant and subordinate classes will therefore be one of exploitation, although the precise nature of the exploitation will depend upon the particular historical stage, or mode of production, in which it occurs. In capitalism, for example, the dominant class is the bourgeoisie, which owns capital, while the subordinate class is the proletariat (the members of which have only their ability to labor, which they must sell in order to survive). Exploitation occurs through the appropriation of surplus-value, which is to say that the proletariat’s reward for selling its labor is worth less than the exchange value of the product when it is sold. While the bourgeoisie and proletariat are recognized as the major historical players within capitalism, Marx recognized that other classes will exist. At any moment in history, these classes can be the remnants of earlier historical stages (so that, for example, a feudal aristocracy survived into capitalism), or may be the early form of a class that will subsequently become significant (such as the mercantile capitalists who existed in late feudalism).
Other groups may have ambiguous class positions, such as small, petit-bourgeois producers (including shopkeepers and independent entrepreneurs) in capitalism, who own insufficient productive property to free themselves from the necessity of labor.

      Class conflict, within Marxism, is understood in terms of the conflicting interests of classes. It is in the interests of the dominant class for the existing economic relations to continue. It is in the interests of the subordinate classes to see the ending of those relations. Overt class conflict, in the form of revolution, is however inhibited, at least in large part, through ideological mechanisms (such as educational institutions, religion and the mass media) existing in the society. A theory of ideology suggests that the dominant class does not maintain its position purely through the exercise of physical force (or control of the means of violence). Rather, the threat of violence is complemented, and possibly in the short term rendered redundant, by structures of belief that appear to give legitimacy to the dominance of the ruling class. Thus, under the influence of ideology, the subordinate classes will hold beliefs that are against their own objective long-term interests. The issue of ideology becomes a core issue for cultural studies when more sophisticated theories of ideology (not least those centering around the concept of hegemony) suggest that the subordinate classes do not simply accept, passively, an account of the world that is in the interests of the dominant class, but rather more or less successfully negotiate and resist that account, in the light of their own experience. Culture thereby comes to be seen as fundamentally structured in terms of class inequalities.

      While the Marxist tradition tends to explain all social inequalities through reference to economic differences (so that the dominant economic class is also expected to be dominant politically and culturally), in the tradition of sociological analysis that arises from the work of Max Weber, a more layered account of social inequality is favored. Weber complements an economic analysis of class by analyses of differences in power and social status. Weber’s approach to the economic determinants of class is itself more varied than that of Marx. Firstly, Weber does not presuppose that all social differences can be collapsed onto economic differences (noting, for example, that the aristocratic Junkers in late 19th-century Germany held political power, in spite of the existence of an economically powerful bourgeoisie). Further, for Weber, at least with respect to contemporary capitalism, an individual’s class position does not depend exclusively upon his or her relationship to the means of production, but is realized through the market. Weber thus talks of market opportunities, such that an individual brings various resources, including ownership of stocks of capital, the ability to labor and, crucially, high levels of skill, to the labor and capital markets. Different resources will earn different levels and kinds of material and symbolic reward (or life-chances). This allows the Weberian to make differentiations within Marxism’s proletariat class, in order to explain the higher levels of material reward and status accorded to intellectuals and managers or administrators over those of manual workers. This in turn throws light on the ambiguous class position of those groups, in that while they are to be strictly defined as laborers, their short-term or apparent class interests, self-understanding and cultural identity may accord more closely with those of the property-owning bourgeoisie. (Analyses of these groups have been a key part of E.O. Wright’s class theory, for example.) In addition, analysis of these differences in the social status, or the prestige and respect, that is associated with different social positions, can lead to an analysis of the distinctive lifestyles that are associated with different classes (so that class is again seen as a cultural, rather than purely economic, phenomenon).

      There is a danger that the Weberian approach to class analysis can be reduced to an account of class purely in terms of occupational difference, and thus to something akin to the Registrar General’s classification of Socio-Economic Groups (of professional; employers and managers; skilled manual and self-employed non-manual; semi-skilled manual; unskilled manual) found in the UK. Without a rigorous underpinning in class theory, such classifications tend to do little more than label, for administrative purposes, aggregates of diverse individuals, rather than to describe and account for classes as real social entities and to explain the constitutive role that they have in our lives. A further problem with all class analysis, that its reduction to Socio-Economic Groups serves to exemplify, is its failure to take account of the position of women. Precisely because class analysis is conducted predominantly in terms of economic activity, women have either remained invisible or been allocated to the class of their male partner, on the grounds that they were not active wage earners, or, if they were wage earners, that their wage (and associated economic position) was secondary to that of their partner. Socialist feminists have attempted to analyze the relationship between men and women as itself analogous to a class relationship, by focusing on the male expropriation of female labor (for example in unpaid housework, or in the differential that continues to exist between male and female wages).
3. When most of the world's colonial dependencies attained independence as sovereign nation-states in the middle of the 20th century, it seemed that an epoch had ended logically as well as historically. Colonies had been particular forms of imperialism, created during the tide of western European expansion into other continents and oceans from the 16th century onwards. At high noon, the end of the 19th century, almost every society outside Europe had become or had been the colony of a western European state. Colonialism began as a series of crude ventures. Whether by force or by treaty, sovereignty was seized, and exercised by governors responsible to foreign states. If indigenous rulers were retained, they mainly veiled the reality of power. As colonial states became more secure and elaborate, they intruded more pervasively into the daily life of subject populations; popular resentment seemed to foreshadow the gradual transfer of governmental machinery to indigenous nationalist leaders.

The actual outcomes were much less orderly. Dislodged or dislocated by the Second World War, and assailed by the United Nations rhetoric of national self-determination, many colonial states in South and South-east Asia passed without ceremony into the hands of nationalists; several in Africa negotiated independence before they had trained indigenous bureaucracies; others had to mount and endure sustained warfare to reach the same goal; and yet others - usually islands with small populations - chose to remain dependencies, with rights of individual entry into the former metropolitan country. A few fragments were retained by metropolitan powers, either as conveniently remote nuclear testing facilities or as significant entrepôts. Despite this messy variety, most scholars believed that a logical narrative had come to an end: western colonialism, born in the 16th century and maturing until the 20th, had reached its apotheosis in the sovereign states which succeeded them and took their places in the community of nations.

Yet colonialism continues to fascinate social scientists, both applied and theoretical. The successor-states were often bounded by arbitrary frontiers which exacerbated ethnic identities, and were vulnerable to the rivalries of the Cold War. Brave attempts to generate solidarity on the basis of the whole Third World, or even more modest efforts such as African unity, failed to reorient the ex-colonies' long-established links with the former metropolitan powers. Their inherited bureaucracies were better equipped to control than to mobilize the populace, and their formal economies were well designed to export unprocessed commodities. Their scanty education and training institutions could not immediately cope with the rising expectations of burgeoning populations. Independence was expected to liberate national energies in escaping from underdevelopment and backwardness, in pursuit of development and modernization. The United Nations Development Decade focused international attention on these goals, and academic centers of development studies graduated platoons of experts to assist in achieving them by finding technical solutions.

Few ex-colonies responded as planned to strategies of agricultural intensification and economic diversification. Most of tropical Africa and some other parts of the world have performed so poorly that they have, in effect, retired from the global economy, their peoples committed to environmentally dubious practices in order to survive, their cities swelling and their recurrent budgets dependent upon aid flows. These catastrophes contrast sharply with the performance of the economic tigers of South-east and East Asia, whose success owes rather little to the advice of development experts. In either case, the imperatives of economic development legitimized authoritarian styles of government, which often adopted and elaborated the colonial structures of control in ways which hardly meet the old criteria of modernization. These disconcerting tendencies add grist to the post-modernist mill and its critique of academic positivism. Stage theories of human progress have suffered especially badly at the hands of post-modern writers. Colonialism was not, after all, an aberration. Some see it as inherent in the western Enlightenment, quoting (for example) John Locke's opinion that 'God gave the world to men in Common, but since He gave it them for their benefit and the greatest conveniencies of life they were capable to draw from it, it cannot be supposed He meant it should always remain common and uncultivated. He gave it to the use of the industrious and rational'. Edward Said's (1978) seminal Orientalism, and the Subaltern Studies pioneered and elaborated by expatriate South Asian scholars (Guha 1994), insist that colonialism was neither a finite era nor a particular set of governmental mechanisms. The desired achievement of post-colonialism therefore requires a great deal more than the application of technical solutions to discrete problems.
Colonial apologetics and the triumphalist narratives of anti-colonial nationalism are portrayed as two sides of the same coin, alternative stage theories of social evolution which distort rather than illuminate past and present human experience. Only their deconstruction may exorcise colonialism's diffuse influences in the minds and practices of subalterns and superiors alike.

Just as colonialism has been applied beyond its earlier chronological limits, so it has burst its conventional boundaries in space. Ethnic identities which survived the homogenizing pressures of the modern nation-state have been unleashed by the end of the Cold War. When communities seek shelter from open competition, in solidarities which transcend market relations, ethnic identities are reasserted and historic charters invoked. Whether the immediate demands are political recognition and decentralized institutions or land rights or other tangible benefits, the specter of ethnic nationalism walks abroad, wearing the bloody robe of colonialism. Curiously, this extension of the term's application severs the ties with capitalism and imperialism, and brings it closer to an earlier usage, whereby the movement of any group of settlers into territory claimed or occupied by others could be described quite interchangeably as colonialism or colonization. See also imperialism.

    1. A commodity is an object (or service) that is produced for exchange (or a market) rather than for consumption or use by the producer. ‘Commodity’ is the most basic category in Marx’s economics, for it opens up his analysis of capitalism, and specifically of the part that the commodity and commodity exchange play in the exploitation of the proletariat.
commodity fetishism
    1. ‘Commodity fetishism’ encapsulates much of Marx’s criticism of the capitalist economy (which is to say, an economy grounded in the ownership of private property and in the exchange of commodities through markets). Marx argues that in the exchange of commodities, the social relationships between human beings take on the appearance of a relationship between objects. Indeed, this relationship between things takes on a phantasmagorical appearance, such that the things confront us as if they themselves were a strange and obscure crowd of persons. Interpreted slightly differently, properties (such as price) that are ascribed to objects through cultural processes, come to appear as if they were natural or inherent properties of the objects.

    2. Commodity fetishism occurs because, in a capitalist economy, producers only come into contact with each other through the market. As such, they relate to each other, not as substantial, complex and unique human beings, but as producers of commodities, and these commodities are made comparable to (and therefore interchangeable with) any other commodities through the common standard of money. Thus, that which is qualitatively unique and distinctive, both in producers and product, is concealed by transformation into a pure quantity.

      The theory of commodity fetishism therefore suggests that capitalism reproduces itself by concealing its essence beneath a deceptive appearance. Just as quality appears as quantity, so objects appear as subjects, and subjects as objects. Things are personified and persons objectified. Ultimately, market exchange becomes the appearance of the real essence of production, so that humans falsely understand themselves as consumers rather than producers. This, in turn, conceals the process of exploitation inherent to capitalism (expropriation of surplus-value).

      The theory of commodity fetishism was fundamental to the development of the theory of ideology within western Marxism, in the account of reification offered by Lukács and members of the Frankfurt School.

    1. Community has been in the language since the 14th century, from communeté, old French, communitatem, Latin – community of relations or feelings, from the rootword communis, Latin – COMMON. It became established in English in a range of senses: (i) the commons or common people, as distinguished from those of rank (14th-17th century); (ii) a state or organized society, in its later uses relatively small (14th century-); (iii) the people of a district (18th century-); (iv) the quality of holding something in common, as in community of interests, community of goods (16th century-); (v) a sense of common identity and characteristics (19th century-). It will be seen that senses (i) to (iii) indicate actual social groups; senses (iv) and (v) a particular quality of relationship (as in communitas). From the 17th century there are signs of the distinction which became especially important from the 19th century, in which community was felt to be more immediate than society, although it must be remembered that society itself had this more immediate sense until the 18th century, and civil society (see civilization) was, like society and community in these uses, originally an attempt to distinguish the body of direct relationships from the organized establishment of realm or state. From the 19th century the sense of immediacy or locality was strongly developed in the context of larger and more complex industrial societies. Community was the word normally chosen for experiments in an alternative kind of group-living. It is still so used and has been joined, in a more limited sense, by commune (the French commune – the smallest administrative division – and the German Gemeinde – a civil and ecclesiastical division – had interacted with each other and with community, and also passed into socialist thought (especially commune) and into sociology (especially Gemeinde) to express particular kinds of social relations).
The contrast, increasingly expressed in the 19th century, between the more direct, more total and therefore significant relationships of community and the more formal, more abstract and more instrumental relationships of state, or of society, in its modern sense, was influentially formalized by Tönnies (1887) as a contrast between Gemeinschaft and Gesellschaft, and these terms are sometimes used, untranslated, in other languages. A comparable distinction is evident in the mid-20th-century uses of community. In some uses this has been given a polemical edge, as in community politics, which is distinct not only from national politics but from formal local politics and normally involves various kinds of direct action and direct local organization, ‘working directly with the people,’ and as such it is distinct from ‘service to the community,’ which has an older sense of voluntary work supplementary to official provision or paid service.

    2. The complexity of community thus relates to the difficult interaction between the tendencies originally distinguished in the historical development: on the one hand the sense of direct common concern; on the other hand the materialization of various forms of common organization, which may or may not adequately express this. Community can be the warmly persuasive word to describe an existing set of relationships, or the warmly persuasive word to describe an alternative set of relationships. What is most important, perhaps, is that unlike all other terms of social organization (state, nation, society, etc.) it seems never to be used unfavorably, and never to be given any positive opposing or distinguishing term.

    1. The notions of ‘consciousness’ and ‘mind’ are often taken as interchangeable. Consciousness is the awareness by an individual (human or animal) of its environment and, if self-conscious, of its place in and relationship to that environment. Humans, higher primates and certain other creatures, e.g. dolphins, are usually regarded as self-conscious. Some philosophers, e.g. Jonathan Glover, have conjectured that there exists a progressive spectrum of consciousness starting with lower, merely conscious animals and ending with self-conscious human beings.

    2. The stance one adopts regarding the nature of consciousness and on what can possess this property, depends upon one’s view of the nature of the mind. A dualist such as René Descartes would view ‘souls’ (minds) and bodies as two radically different substances. Bodies, according to Descartes, have shape, mass and location both in time and space. Minds, on the other hand, although containing thoughts that have duration, do not share any other properties with bodies. This radical separation of minds and bodies led to the infamous mind/body problem. This is the problem of how two substances, so totally different in their natures, can causally interact, granted that minds do in fact affect bodies and vice versa. Descartes would not agree that animals are conscious since he held only humans have souls.

      In modern philosophy of mind the attempt to answer the mind/body problem usually results in the adoption of materialism. Materialists attempt to explain the mind in physical and biological terms. Behaviorists suggest that the mind is nothing more than a series of dispositions to behave in various ways given certain sorts of environmental stimuli. Most behaviorists reject all talk of inner psychological processes. Supporters of the mind-brain identity theory take a reductionist approach, holding that the mind is nothing more than the brain. Functionalists argue that mental phenomena or psychological states can be understood in terms of the causal relationships that exist between causal stimuli, other mental states and the behavior that results. Eliminativists suggest that all our common-sense talk of psychological states, such as beliefs and desires, is wrong. In fact, eliminativists, such as Paul Churchland, hold that science will ultimately generate a much better model than the one we have now for explaining consciousness. This new model will result in a wholly different view of what minds are and how they work.

    1. Conservatism is perhaps better described as constituting an attitude toward politics and society rather than a political ideology. Its origins can be traced back to Edmund Burke’s Reflections on the Revolution in France (1790), in which the events of the French Revolution provoked Burke into articulating the basic characteristics underlying conservative thinking. As such, modern conservatism may well be said to have drawn its first inspiration from a reaction to the rationalist ideals of the Enlightenment, which found (albeit rather distorted) expression in the French Revolution. The characteristic elements of this reaction are: (i) a negative attitude toward social change; (ii) a tenaciously held faith in the moral and political rightness of traditionally held attitudes and beliefs; (iii) a generally bleak and pessimistic view of human nature, i.e. conservatives tend to think that individuals left completely alone to pursue their own goals will generally descend into an at best immoral, and at worst amoral, lifestyle (a view which stands in direct contrast to the more optimistic conception of the individual held by both liberalism and socialism); (iv) the view that society is an interconnected structure of relationships constituting a community.

    2. In the 20th century there have been a number of significant (or at least well-known) exponents of conservatism. Michael Oakeshott has frequently been cited in this connection, although his political thinking, as well as owing a significant debt to such philosophers as Aristotle, Thomas Hobbes and G.W.F. Hegel (the latter two of whom display ‘conservative’ tendencies), also has features in common with the thinking of communitarianism and is, in any case, far more complex than such a label might imply. Leo Strauss and, more recently, Roger Scruton, might both be taken as better examples of modern conservative thought.

      More recently the German philosopher Jürgen Habermas has provided an account of conservatism which links it to the writings of postmodernism (e.g. Jacques Derrida, Michel Foucault, Jean-François Lyotard). Postmodern thinking, Habermas argues, in articulating its criticisms of the Enlightenment (i.e. of the Enlightenment faith in reason and science), is in effect the expression of a resurgent conservatism which takes its inspiration from the writings of those ‘darker’ thinkers of the bourgeois tradition, Sade and Nietzsche (although it may well be equally germane to connect the thought of a thinker like Lyotard with the liberal tradition, with which his later work shares some common features).

    1. In modern English consumer and consumption are the predominant descriptive nouns of all kinds of use of goods and services. The predominance is significant in that it relates to a particular version of economic activity, derived from the character of a particular economic system, as the history of the word shows.

    2. Consume has been in English since the 14th century, from consumer, French, rootword, consumere, Latin – to take up completely, devour, waste, spend. In almost all its early English uses, consume had an unfavorable sense; it meant to destroy, to use up, to waste, to exhaust. This sense is still present in ‘consumed by fire’ and in the popular description of pulmonary phthisis as consumption. Early uses of consumer, from the 16th century, had the same general sense of destruction or waste.

      It was from the mid-18th century that consumer began to emerge in a neutral sense in descriptions of bourgeois political economy. In the new predominance of an organized market, the acts of making and using goods and services were newly defined in the increasingly abstract pairings of producer and consumer, production and consumption. Yet the unfavorable connotations of consume persisted, at least until the late 19th century, and it was really only in the mid-20th century that the word passed from specialized use in political economy to general and popular use. The relative decline of customer, used from the 15th century to describe a buyer or purchaser, is significant here, in that customer had always implied some degree of regular and continuing relationship to a supplier, whereas consumer indicates the more abstract figure in a more abstract market.

      The modern development has been primarily American but has spread very quickly. The dominance of the term has been so great that even groups of informed and discriminating purchasers and users have formed Consumers’ Associations. The development relates primarily to the planning and attempted control of markets which is inherent in large-scale industrial capitalist (and state-capitalist) production, where, especially after the depression of the late 19th century, manufacture was related not only to the supply of known needs (which customer or user would adequately describe) but to the planning of given kinds and quantities of production which required large investment at an early and often predictive stage. The development of modern commercial advertising (persuasion, or penetration of a market) is related to the same stage of capitalism: the creation of needs and wants and of particular ways of satisfying them, as distinct from and in addition to the notification of available supply which had been the main earlier function of advertising (where the kind of persuasion could be seen as puff and puffery). Consumer as a predominant term was the creation of such manufacturers and their agents. It implies, ironically as in the earliest senses, the using-up of what is going to be produced, though once the term was established it was given some appearance of autonomy (as in the curious phrase consumer choice). It is appropriate in terms of the history of the word that criticism of a wasteful and ‘throw-away’ society was expressed, somewhat later, by the description consumer society. Yet the predominance of the capitalist model ensured its widespread and often overwhelming extension to such fields as politics, education and health. In any of these fields, but also in the ordinary field of goods and services, to say user rather than consumer is still to express a relevant distinction.

    1. The idea that capitalism had become a 'consumer society' arose, at least in western Europe, in the 1950s, in response to increased affluence and changes in the economic and industrial structure (a move away from traditional heavy industry and towards new technologies and service provision) after the Second World War. This awareness gradually led to an increased interest in consumption as a culturally significant activity. However, important theories of consumption can be found from the late 19th century onwards.

    2. Social theorists such as Thorstein Veblen and Georg Simmel were amongst the first to begin to articulate the significance of consumption to urban existence. Veblen's (1953) account of the 'conspicuous consumption' of the new bourgeois leisure class suggested that class identity could rest, not upon occupation, but upon patterns of consumption that served to construct distinctive lifestyles and express status. Similarly, Simmel's essays, including those on 'The Metropolis and Mental Life' (1950) and on 'Fashion' (1957), analyze the manner in which consumption may be used to cultivate what is, for Simmel, a sham individuality. Such sophisticated, and indeed blasé, consumption allows the consumer to differentiate him or herself. Fashion is thus seen to work through a curious interplay of conformity and dissension, of familiarity and strangeness, in so far as fashion-conscious consumers at once consolidate their membership of the fashionable even as they distinguish themselves from the mass. Fashion, for Simmel, represents an attraction to the exotic, strange and new, and yet, thanks to its continual historical change, an opportunity to ridicule the fashions of the past (and thus paradoxically one's own once fashionable self).

      Marxists typically demonstrate a similar, or even more pronounced, skepticism as to the value of consumption, not least in so far as Marxist social theory is grounded in the view of human beings as primarily producers. An emphasis on humans as consumers suggests an ideological distraction from the essence of economic and political struggle, or at best a manifestation of the unfulfilling or alienating nature of production within capitalism. Perhaps the most sustained Marxist engagement with consumption came from the Frankfurt School. The account of the culture industry proposed by Horkheimer and Adorno (1972) holds that 20th-century capitalism is a distinct mode of production at least in comparison with the high capitalism of Marx's own time. For Marx, 19th-century consumers could freely choose between commodities on the grounds of utility (or use-value) that they would derive from them. A useless commodity would be rejected, and thus the consumer retained some vestige of power within high capitalism. Horkheimer and Adorno argue that in late capitalism, use-value has been brought within the control of the capitalist producers, thanks to the power of advertising and the mass media. The consumers buy, crudely, what capitalism wants them to buy. The model of the culture industry is, however, more subtle than this. The consumers are not, on Horkheimer and Adorno's account, passive dupes of the capitalist system.

      Rather, they recognize that the most efficient way of surviving, and of gaining some pleasure, within the constraints of a highly bureaucratic and instrumental society is to accept the goods offered, and that consumption may serve to express a deep awareness of the damage that capitalism is inflicting upon them. Adorno imagines a 'shop girl' who visits the cinema, not because she believes that the fantastic events of the cinema could happen to her, but because only in the cinema can she admit that they will not happen to her. This vignette expresses a side of Frankfurt theory that is often lost to its less sensitive readers.

      More recent approaches to consumption recognize the utopian element inherent in shopping. An ideology of shopping may be analyzed, where shopping or consumption are perceived as solutions to the discontents of one's life. In Lacanian terms, shopping promises to make us whole again. Yet, as with Freud's analysis of dreams, the pursuit of consumption may be interpreted as an illusory solution to the real problems of social life. In effect, this returns the analysis to the Frankfurt position. The continual round of consumerism is rejected as a short-term and ultimately illusory solution to one's problems. The task of theory would be to expose the real (social and psychological) problems that cause this discontent in the first place. Jacques Attali (1985) has reflected upon this theme, suggesting that when we purchase music (in the form of records), what we do is exchange our own labor (and thus involvement in the pressures and necessities of working life) for a commodity. But, unlike most other commodities, we carry out this exchange only in the utopian expectation of some day having the leisure time to enjoy it. (We work, in effect, for the promise of a work-free future.) This time, of course, never comes, and the use-value of the music lies forever unrealized.

      More positive accounts of consumption, not least in that they suggested the potential of consumption as a form of political resistance, first emerged in association with subcultural theory. Youth subcultures, from the 1950s onwards, were seen as consuming the products of capitalism, but not in a manner that accorded with the expectations of the producers. The consumer is thus credited with the ability to make his or her own use-value from the commodity. Michel de Certeau (1984) thus describes consumption as 'secondary production'. While the products may be imposed by capitalism, the ways of using them are not. The shopping center itself (as well as a number of key contemporary commodities such as the 'Walkman' (du Gay et al. 1997) and 'Barbie' (Rand 1995)) has become the focus of much analysis from cultural studies. Shopping is recognized as a highly popular leisure activity (and not simply the means to other leisure activities). The shopping center becomes one focus of this activity, not least in so far as the shopping center may offer attractions other than shopping (including restaurants, cinemas, and other leisure facilities). Yet, again, different groups will consume the center itself differently. The young, unemployed, elderly and homeless, despite the fact that they are overtly excluded from consumerism due to lack of economic resources, will still find use within the center (for example as a source of shelter, warmth and entertainment, or as a meeting place) (Morris 1993).

      The theoretical issues in the analysis of the political and social significance of consumption perhaps revolve around the conceptualization and understanding of human autonomy and individuality. Empirical evidence (for example that 80 percent of all new products are rejected by consumers) is, in itself, of little value in establishing whether or not consumers have exercised active and autonomous choice. Simmel's pseudo-individualism, and even Horkheimer and Adorno's culture industry, are not incompatible with such statistics. Yet consideration of consumption does indicate much about how humans find scope for self-expression (however glorious or impoverished this expression is ultimately judged to be) within the close restrictions of their everyday life.

counterculture
    1. The term 'counterculture' was coined in the 1960s, largely in response to the emergence of middle-class youth movements (such as the hippies), to refer to groups that questioned the values of the dominant culture. While centering on an opposition to the Vietnam War, the hippie counterculture also expressed its dissatisfaction with the values and goals of capitalism, such as consumerism, the work ethic and a dependence on technology. In general, the concept of counterculture may now be extended to the values, beliefs and attitudes of any minority group that opposes the dominant culture and, more precisely, does so in a relatively articulate and reflective manner. Thus, at its emergence, the Christian religion was a counterculture, in opposition to the dominant Jewish and Roman cultures. In the early period of British capitalism, the Quakers and the Methodists represented countercultures in opposition to the dominant values of Anglicanism. (See also subculture, youth culture.)
cultural capital
    1. Class membership is defined, at least within the Marxist tradition, in terms of the individual's access to and control of economic capital (such as industrial machinery, raw materials and also finance). Pierre Bourdieu (1973) drew an analogy to an individual's access to cultural resources in order to explain the workings of the educational system in a class-divided capitalist society. Children will have differing degrees of cultural competence (including information and skills), acquired prior to school within the family. The educational system will not then overtly discriminate in favor of the children of the dominant class. Rather, all children will be assessed 'neutrally', in terms of their ability to perform according to the same criteria of excellence. These criteria will, however, be derived from the dominant culture. The children of the dominant class will do better, so yielding interest (in terms of 'symbolic power') on their parents' investment in cultural capital.
cultural relativism
    1. The view that fundamentally different standards of morality, practices and belief systems operate in different cultures and cannot be judged with regard to their worth from a standpoint exterior to them. Cultural relativism thus holds that there is a fundamental incommensurability between the value-systems of different cultures. Whether or not such a view commits one to a relativism with regard to questions of knowledge (see epistemology) is a further issue which depends upon whether or not one is inclined to hold that the rules of validity which apply with regard to the construction of knowledge claims (for example, the principle of non-contradiction) are culturally constructed. However, it is difficult to see how a cultural relativist can defend any notion of epistemic validity from the charge of being likewise culturally produced, and therefore incommensurable with conceptions of validity that are generated within different cultures or contexts. It is possible to define more recent cultural relativism in terms of its commitment to a particular model of language and meaning derived from (or having strong parallels with) the work of the later Wittgenstein. Thus, Richard Rorty's espousal of a liberalism and postmodernism which is relativistic about the practices and procedures that constitute interpretative communities owes a debt to the Wittgensteinian 'meaning is use' thesis. Although it has often been claimed that the cultural relativist is interested in giving voice to the perspectives of marginalized interests and cultures, it is by no means clear that this is the case. Some have argued (cf. Christopher Norris, The Contest of Faculties (1985)) that Rorty's espousal of cultural and epistemic relativism brings with it the specter of cultural imperialism.
cultural reproduction
1. The term 'cultural reproduction' was coined by Pierre Bourdieu (1973), to refer to the process by which the culture, and thus political power, of the dominant class is maintained from one generation to the next, through the education system. More generally, the term may be seen to highlight the problem of how societies continue to exist and remain relatively stable over long periods of time. This continued existence requires more than just physical reproduction, in the sense of sufficient births to replace those who have died or left the society. The culture of that society must be transmitted to the new generation. Cultural reproduction is thus intimately linked to the role that socialization, or the process through which individuals internalize the culture of their societies, plays in this stability. As Bourdieu's definition highlights, part of this problem of cultural transmission is not simply the stability of the manner in which society is organized, or the stability of the key values and beliefs of its culture, but rather the stability of the political structures and the structures of domination and exploitation within the society. As such, it may be seen as a process by which political structures are given legitimacy or authority.

In the Marxist tradition, social reproduction refers to conditions necessary for the renewal of labor. Again, this is not simply a matter of physically replacing laborers, but more centrally involves the place of social and cultural institutions, such as housing, education and health care in that process.

culture
    1. 'Culture' is not easily defined, not least because it can have different meanings in different contexts. However, the concept that lies at the core of cultural studies, it may be suggested, is very much the concept that is found in cultural anthropology. As such, it avoids any exclusive concern with 'high' culture (which is still found, for example, in the writings of Arnold and Leavis, and in elite and mass society theories). It entails recognition that all human beings live in a world that is created by human beings, and in which they find meaning. Culture is the complex everyday world we all encounter and through which we all move. Culture begins at the point at which humans surpass whatever is simply given in their natural inheritance. The cultivation of the natural world, in agriculture and horticulture, is thus a fundamental element of a culture. As such, the two most important or general elements of culture may be the ability of human beings to construct and to build, and the ability to use language (understood most broadly, to embrace all forms of sign system).

    2. Gillian Rose's use of the Jewish myth of the Tower of Babel is illuminating in this context (1993). At Babel, humans attempted to reach heaven by building a tower. God did not merely destroy the Tower, but in order to prevent a further attempt, He prevented communication by imposing a multiplicity of languages. This story is often seen as an allegory of language. Rose, however, takes it further, as an allegory of language and architecture. It is therefore seen to comment upon key themes of cultural studies, including the community, the conflict of diverse cultures, power, law and morality, and knowledge. A few of these themes may be outlined. Rose's argument is that Babel represents, not simply an architectural project, but also the building of a city. Cities are a crucial cultural watershed, for in the city, diverse cultures (customs, beliefs and values) come together. In a city, people become aware, perhaps for the first time, that they have a culture, for there is always someone who disagrees with what you have always taken for granted. Our self-awareness as cultural beings is grounded in this confrontation, and thus in the exercise of power (as we struggle to sustain our own values against an assault from others). The point of Babel, and perhaps of all human culture, is that in the architectural achievement of the tower-city, humans gained a sort of immortality. While the individual may die, the buildings of his or her generation will live on and become part of the future. Cultures endure even though the individuals who build them die. So, at the very least, our understanding of time is transformed, and our understanding of history created. Yet this ‘reach,’ as Rose calls it, entails the loss of a naïve self-certainty. The unity and universality of the isolated, nomadic early Jewish tribe is confronted and questioned by its encounter with a plurality of other cultures and their claims to universality. 
Paradoxically, at the very moment in which we become aware of ourselves as cultural beings, we are enabled (we can do new things and, in principle, do anything we like), but we can no longer ever be certain what is the right thing to do, and so, in doing anything, we fall into conflict with others.  Thus, cultural studies is necessarily concerned with artificiality, and the political struggle to find and defend meaning.

    3. Culture is one of the two or three most complicated words in the English language. This is so partly because of its intricate historical development, in several European languages, but mainly because it has now come to be used for important concepts in several distinct intellectual disciplines and in several distinct and incompatible systems of thought.

    4. Cultura, Latin, from the root word colere, Latin, had a range of meanings: inhabit, cultivate, protect, honor with worship. Some of these meanings eventually separated, though still with occasional overlapping, in the derived nouns. Thus 'inhabit' developed through colonus, Latin, to colony. 'Honor with worship' developed through cultus, Latin, to cult. Cultura took on the main meaning of cultivation or tending, including, as in Cicero, cultura animi, though with subsidiary medieval meanings of honor and worship (in English culture as 'worship' in Caxton (1483)). The French forms of cultura were couture, which has since developed its own specialized meaning, and later culture, which by the early 15th century had passed into English. The primary meaning was then in husbandry, the tending of natural growth.

      Culture in all its early uses was a noun of process: the tending of something, basically crops or animals. The subsidiary coulter ('ploughshare') had traveled by a different linguistic route, from culter, Latin ('ploughshare'), to the variant English spellings culter, colter, coulter, and as late as the early 17th century culture (Webster, Duchess of Malfi, III, ii: 'hot burning cultures'). This provided a further basis for the important next stage of meaning, by metaphor. From the early 16th century the tending of natural growth was extended to the process of human development, and this, alongside the original meaning in husbandry, was the main sense until the late 18th and early 19th centuries. Thus More: 'to the culture and profit of their minds'; Bacon: 'the culture and manurance of minds' (1605); Hobbes: 'a culture of their minds' (1651); Johnson: 'she neglected the culture of her understanding' (1759). At various points in the development two crucial changes occurred: first, a degree of habituation to the metaphor, which made the sense of human tending direct; second, an extension of particular processes to a general process, which the word could abstractly carry. It is of course from the latter development that the independent noun culture began its complicated modern history, but the process of change is so intricate, and the latencies of meaning are at times so close, that it is not possible to give any definite date. Culture as an independent noun, an abstract process or the product of such a process, is not important before the late 18th century and is not common before the mid 19th century. But the early stages of this development were not sudden.
There is an interesting use in Milton, in the second (revised) edition of The Readie and Easie Way to Establish a Free Commonwealth (1660): 'spread much more Knowledg and Civility, yea, Religion, through all parts of the Land, by communicating the natural heat of Government and Culture more distributively to all extreme parts, which now lie num and neglected.' Here the metaphorical sense ('natural heat') still appears to be present, and civility is still written where in the 19th century we would normally expect culture. Yet we can also read 'government and culture' in a quite modern sense. Milton, from the tenor of his whole argument, is writing about a general social process, and this is a definite stage of development. In 18th-century England this general process acquired definite class associations, though cultivation and cultivated were more commonly used for this. But there is a letter of 1730 (Bishop of Killala to Mrs. Clayton; cit. Plumb, England in the 18th Century) which has this clear sense: 'it has not been customary for persons of either birth or culture to breed up their children to the Church.' Akenside (Pleasures of Imagination, 1744) wrote '…nor purple state nor culture can bestow.' Wordsworth wrote 'where grace of culture hath been utterly unknown' (1805), and Jane Austen (Emma, 1816) 'every advantage of discipline and culture.'

      It is thus clear that culture was developing in English towards some of its modern senses before the decisive effects of a new social and intellectual movement. But to follow the development through this movement, in the late 18th and early 19th centuries, we have to look also at developments in other languages and especially in German.

      In French, until the 18th century, culture was always accompanied by a grammatical form indicating the matter being cultivated, as in the English usage already noted. Its occasional use as an independent noun dates from the mid 18th century, rather later than similar occasional uses in English. The independent noun civilization also emerged in the mid 18th century; its relationship to culture has since been very complicated. There was at this point an important development in German: the word was borrowed from French, spelled first (late 18th century) Cultur and from the 19th century Kultur. Its main use was still as a synonym for civilization: first in the abstract sense of a general process of becoming 'civilized' or 'cultivated'; second, in the sense which had already been established for civilization by the historians of the Enlightenment, in the popular 18th-century forms of the universal histories, as a description of the secular process of human development. There was then a decisive change of use in Herder. In his unfinished Ideas on the Philosophy of the History of Mankind (1784-91) he wrote of Cultur: 'nothing is more indeterminate than this word, and nothing more deceptive than its application to all nations and periods'. He attacked the assumption of the universal histories that 'civilization' or 'culture' - the historical self-development of humanity - was what we would now call a unilinear process, leading to the high and dominant point of 18th-century European culture. Indeed he attacked what he called European subjugation and domination of the four quarters of the globe, and wrote:

        Men of all the quarters of the globe, who have perished over the ages, you have not lived solely to manure the earth
        with your ashes, so that at the end of time your posterity should be made happy by European culture. The very thought
        of a superior European culture is a blatant insult to the majesty of Nature.
It is then necessary, he argued, in a decisive innovation, to speak of 'cultures' in the plural: the specific and variable cultures of different nations and periods, but also the specific and variable cultures of social and economic groups within a nation. This sense was widely developed, in the Romantic movement, as an alternative to the orthodox and dominant 'civilization'. It was first used to emphasize national and traditional cultures, including the new concept of folk-culture. It was later used to attack what was seen as the 'mechanical' character of the new civilization then emerging: both for its abstract rationalism and for the 'inhumanity' of current industrial development. It was used to distinguish between 'human' and 'material' development. Politically, as so often in this period, it veered between radicalism and reaction and very often, in the confusion of major social change, fused elements of both. (It should also be noted, though it adds to the real complication, that the same kind of distinction, especially between 'material' and 'spiritual' development, was made by von Humboldt and others, until as late as 1900, with a reversal of the terms, culture being material and civilization spiritual. In general, however, the opposite distinction was dominant.)

On the other hand, from the 1840s in Germany, Kultur was being used in very much the sense in which civilization had been used in 18th-century universal histories. The decisive innovation is G. F. Klemm's Allgemeine Kulturgeschichte der Menschheit - 'General Cultural History of Mankind' (1843-52) - which traced human development from savagery through domestication to freedom. Although the American anthropologist Morgan, tracing comparable stages, used 'Ancient Society', with a culmination in Civilization, Klemm's sense was sustained, and was directly followed in English by Tylor in Primitive Culture (1870). It is along this line of reference that the dominant sense in modern social sciences has to be traced.

The complexity of the modern development of the word, and of its modern usage, can then be appreciated. We can easily distinguish the sense which depends on a literal continuity of physical process as now in 'sugar-beet culture' or, in the specialized physical application in bacteriology since the 1880s, 'germ culture.’ But once we go beyond the physical reference, we have to recognize three broad active categories of usage. The sources of two of these we have already discussed: (i) the independent and abstract noun which describes a general process of intellectual, spiritual and aesthetic development, from the 18th century; (ii) the independent noun, whether used generally or specifically, which indicates a particular way of life, whether of a people, a period, a group, or humanity in general, from Herder and Klemm. But we have also to recognize (iii) the independent and abstract noun which describes the works and practices of intellectual and especially artistic activity. This seems often now the most widespread use: culture is music, literature, painting and sculpture, theatre and film. A Ministry of Culture refers to these specific activities, sometimes with the addition of philosophy, scholarship, history. This use, (iii), is in fact relatively late. It is difficult to date precisely because it is in origin an applied form of sense (i): the idea of a general process of intellectual, spiritual and aesthetic development was applied and effectively transferred to the works and practices which represent and sustain it. But it also developed from the earlier sense of process; cf. 'progressive culture of fine arts', Millar, Historical View of the English Government, IV, 314 (1812). In English (i) and (iii) are still close; at times, for internal reasons, they are indistinguishable as in Arnold, Culture and Anarchy (1867); while sense (ii) was decisively introduced into English by Tylor, Primitive Culture (1870), following Klemm. 
The decisive development of sense (iii) in English was in the late 19th century and the early 20th century.

Faced by this complex and still active history of the word, it is easy to react by selecting one 'true' or 'proper' or 'scientific' sense and dismissing other senses as loose or confused. There is evidence of this reaction even in the excellent study by Kroeber and Kluckhohn, Culture: A Critical Review of Concepts and Definitions, where usage in North American anthropology is in effect taken as a norm. It is clear that within a discipline, conceptual usage has to be clarified. But in general it is the range and overlap of meanings that is significant. The complex of senses indicates a complex argument about the relations between general human development and a particular way of life, and between both and the works and practices of art and intelligence. It is especially interesting that in archaeology and in cultural anthropology the reference to culture or a culture is primarily to material production, while in history and cultural studies the reference is primarily to signifying or symbolic systems. This often confuses but even more often conceals the central question of the relations between 'material' and 'symbolic' production, which in some recent argument - cf. my own Culture - have always to be related rather than contrasted. Within this complex argument there are fundamentally opposed as well as effectively overlapping positions; there are also, understandably, many unresolved questions and confused answers. But these arguments and questions cannot be resolved by reducing the complexity of actual usage. This point is relevant also to uses of forms of the word in languages other than English, where there is considerable variation. The anthropological use is common in the German, Scandinavian and Slavonic language groups, but it is distinctly subordinate to the senses of art and learning, or of a general process of human development, in Italian and French.
Between languages as within a language, the range and complexity of sense and reference indicate both difference of intellectual position and some blurring or overlapping. These variations, of whatever kind, necessarily involve alternative views of the activities, relationships and processes which this complex word indicates. The complexity, that is to say, is not finally in the word but in the problems which its variations of use significantly indicate.

It is necessary to look also at some associated and derived words. Cultivation and cultivated went through the same metaphorical extension from a physical to a social or educational sense in the 17th century and were especially significant words in the 18th century. Coleridge, making a classical early 19th-century distinction between civilization and culture, wrote (1830): 'the permanent distinction, and occasional contrast, between cultivation and civilization.' The noun in this sense has effectively disappeared but the adjective is still quite common, especially in relation to manners and tastes. The important adjective cultural appears to date from the 1870s; it became common by the 1890s. The word is only available, in its modern sense, when the independent noun, in the artistic and intellectual or anthropological senses, has become familiar. Hostility to the word culture in English appears to date from the controversy around Arnold's views. It gathered force in the late 19th and early 20th centuries, in association with a comparable hostility to aesthete and aesthetic (q.v.). Its association with class distinction produced the mime-word culchah. There was also an area of hostility associated with anti-German feeling, during and after the 1914-18 War, in relation to propaganda about Kultur. The central area of hostility has lasted, and one element of it has been emphasized by the recent American phrase culture-vulture. It is significant that virtually all the hostility (with the sole exception of the temporary anti-German association) has been connected with uses involving claims to superior knowledge (cf. the noun INTELLECTUAL), refinement (culchah) and distinctions between 'high' art (culture) and popular art and entertainment. It thus records a real social history and a very difficult and confused phase of social and cultural development.
It is interesting that the steadily extending social and anthropological use of culture and cultural and such formations as sub-culture (the culture of a distinguishable smaller group) has, except in certain areas (notably popular entertainment), either bypassed or effectively diminished the hostility and its associated unease and embarrassment. The recent use of culturalism, to indicate a methodological contrast with structuralism in social analysis, retains many of the earlier difficulties, and does not always bypass the hostility. See civilization.

  3. Culture is one of the basic theoretical terms in the social sciences. In its most general sense in the social sciences, culture refers to the socially inherited body of learning characteristic of human societies. This contrasts with the ordinary meaning of the term, which refers to a special part of the social heritage having to do with manners and the arts. Both the ordinary and the social science uses are derived from the Latin cultura, from the verb colere, 'to tend or cultivate'. An excellent history of the term culture can be found in Kroeber and Kluckhohn's (1963) classic work, Culture: A Critical Review of Concepts and Definitions.

In the social sciences the term culture takes much of its meaning from its position within a model of the world which depicts the relations between society, culture and the individual.

The social science model

Human society is made up of individuals who engage in activities by which they adapt to their environment and exchange resources with each other so that the society is maintained and individual needs are satisfied. These activities are learned by imitation and tuition from other humans, and hence are part of the social heritage, or culture, of a society. These learned activities persist from generation to generation with only a small degree of change unless external factors interfere with the degree to which these activities succeed in satisfying social and individual needs. Learned activities are only one part of the society's culture. Also included in the social heritage are artifacts (tools, shelters, utensils, weapons, etc.), plus an ideational complex of constructs and propositions expressed in systems of symbols, of which natural language is the most important. By means of symbols it is possible to create a rich variety of special entities, called culturally constructed objects, such as money, nationhood, marriage, games, laws, etc., whose existence depends on adherence to the rule system that defines them. The ideational systems and symbolic systems of the social heritage are necessary because human adaptive activities are so complex and numerous that they could not be learned and performed without a large store of knowledge and a symbolic system to communicate this knowledge and coordinate activities.

Much, but not all, of the social heritage or culture has a normative character; that is, the individuals of a community typically feel that their social heritage - their ways of doing things, their understandings of the world, their symbolic expressions - are proper, true and beautiful, and they sanction positively those who conform to the social heritage and punish those who do not.

Individuals perform the activities and hold the beliefs of their social heritage or culture not just because of sanctions from others, and not just because they find these activities and beliefs proper and true, but because they also find at least some cultural activities and beliefs to be motivationally and emotionally satisfying.

In this formulation of the model the terms social heritage and culture have been equated. The model ascribes to culture or social heritage the properties of being socially and individually adaptive, learned, persistent, normative, and motivated. Empirical consideration of the content of the social heritage leads directly to an omnibus definition of culture, like that given by Tylor: 'Culture ... is that complex whole which includes knowledge, belief, art, law, morals, custom, and any other capabilities and habits acquired by man as a member of society'; that is, to an enumeration of the kinds of things that can be observed to make up the social heritage.

However, many social scientists restrict the definition of culture to only certain aspects of the social heritage. Most frequently, culture is restricted to the non-physical, or mental part of the social heritage. The physical activities that people perform and the physical artifacts they use are then treated as consequences of the fact that people learn, as part of their social heritage, how to perform these activities and how to make these artifacts. Treating actions and artifacts as the result of learning the social heritage gives causal efficacy to culture; in such a definition culture not only is a descriptive term for a collection of ideas, actions and objects, but also refers to mental entities which are the necessary cause of certain actions and objects.

The current consensus among social scientists also excludes emotional and motivational learnings from culture, focusing on culture as knowledge, or understandings, or propositions. However, it is recognized that some cultural propositions may arouse strong emotions and motivations; when this happens these propositions are said to be internalized.

Some social scientists would further restrict the term culture to just those parts of the social heritage which involve representations of things, excluding norms or procedural knowledge about how things should be done. Other social scientists would further restrict the definition of culture to symbolic meanings, that is, to those symbolic representations which are used to communicate interpretations of events. Geertz, for example, uses this further restriction of the social heritage not only to exclude affective, motivational and normative parts of the social heritage, but also to argue against the notion that culture resides in the individual. According to Geertz, culture resides in the intersubjective field of public meaning, perhaps in the same transcendent sense in which one might speak of algebra as something that exists outside of anyone's understanding of it.

Many of the disagreements about the definition of culture contain implicit arguments about the causal nature of the social heritage. For example, there is controversy about whether or not culture is a 'coherent integrated whole', that is, whether or not any particular culture can be treated as 'one thing' which has 'one nature'. If it were found that cultures generally display a high degree of integration, this would be evidence that some causal force makes different parts of the culture consistent with one another. However, social scientists are now more likely to stress the diversity and contradictions to be found among the parts of a culture. Although almost any element of the culture can be found to have multiplex relations to other cultural elements (as Malinowski, in his great book Argonauts of the Western Pacific, demonstrated), there is little evidence that these relations ever form a single overall pattern which can be explicitly characterized, Ruth Benedict's (1934) Patterns of Culture notwithstanding.

Issues involving the integration of culture are related to issues concerning whether or not culture is a bounded entity. If culture is conceived of as a collection of elements which do not form a coherent whole, then the only thing that makes something a part of a particular culture is the fact that it happens to be part of the social heritage of that particular society. But if one believes that cultures are coherent wholes, then the collection of cultural elements which make up a particular culture can be bounded by whatever characterizes the whole.

The boundary issue leads in turn to the problem of sharedness, that is, if culture is not a bounded entity with its own coherence and integration, then some number of individuals in a society must hold a representation or norm in order for it to qualify as a part of the social heritage. However, no principled way has been found to set a numerical cut-off point. In fact, there is some evidence that cultural elements tend to fall into two types: first, a relatively small number of elements that are very highly shared and form a core of high consensus understandings (e.g. red lights mean stop); second, a much larger body of cultural elements which need to be known only by individuals in certain social statuses (e.g. a tort is a civil wrong independent of a contract).

These and other problems have led to disenchantment with the term culture, along with a number of replacement terms such as 'ideology' and 'discourse'. It is not that the importance of the social heritage is being questioned within the social sciences; rather, it is that splitting the social heritage into various ontological categories does not seem to carve nature at the joints. For example, for a culture to work as a heritage - something which can be learned and passed along - it must include all kinds of physical objects and events, such as the physical sounds of words and the physical presence of artifacts - otherwise one could not learn the language or learn how to make and use artifacts. Since the cultural process necessarily involves mental and physical, cognitive and affective, representational and normative phenomena, it can be argued that the definition of culture should not be restricted to just one part of the social heritage.

Behind these definitional skirmishes lie important issues. The different definitions of culture can be understood as attempts to work out the causal priorities among the parts of the social heritage. For example, behind the attempt to restrict the definition of culture to the representational aspects of the social heritage lies the hypothesis that norms, emotional reactions, motivations, etc. are dependent on a prior determination of what's what. The norm of generalized exchange and feelings of amity between kin, for example, can exist only if there is a category system that distinguishes kin from non-kin. Further, a cultural definition of kin as 'people of the same flesh and blood' asserts a shared identity that makes exchange and amity a natural consequence of the nature of things. If it is universally true that cultural representations have causal priority over norms, sentiments, and motives, then defining culture as representation focuses attention on what is most important. However, the gain of sharp focus is offset by the dependence of such a definition on assumptions which are likely to turn out to be overly simple.

culture industry
    1. The term ‘culture industry’ was coined by the Frankfurt School
theorists Horkheimer and Adorno in The Dialectic of Enlightenment (1972), to refer to the production of mass culture. This deliberately contradictory term (setting the culture against its apparent antithesis in industry) attempts to grasp something of the fate of culture in the highly instrumentally rational and bureaucratic society of late capitalism. The account of the culture industry may be seen, at root, as economic, and as such an integral part of the reinterpretation of dialectical materialism that is a central theme of The Dialectic of Enlightenment. The culture industry, embracing advertising as much as radio and cinema, serves to transform use-value (the utility that consumers derive from a commodity) into something that is produced by the capitalist system. It may be suggested that the combination of advertising and the mass media promotes less particular products, and more a capitalist lifestyle.

This account of the absorption of use-value into production goes hand in hand with Adorno's analysis of the fate of the relationship between the forces and relations of production in 20th-century capitalism. The independence of use-value in 19th-century capitalism gave the human subject genuine autonomy and thus potential for resistance (thereby destabilizing capitalism). This autonomy is now increasingly lost. Similarly, administrative techniques, that developed as part of the forces of production (to increase the efficiency of industry), now become fundamental to the relations of production (so that market exchange and property ownership are subordinated to bureaucratic organization, and the employed and the unemployed alike become claimants for welfare payments). The contradiction between the forces and relations of production, that for Marx would bring about the fall of capitalism, is removed in this totally administered society.

The account of the culture industry has frequently been trivialized by its critics (not least those within cultural studies). Horkheimer and Adorno do not, for example, obviously assume that human subjects are passive victims of the culture industry, and nor is the culture industry an instrument of class rule. The total administration of contemporary capitalism embraces and constrains everyone, so that although the property-owning bourgeoisie may continue to benefit materially from the system, they are as powerless before it as the non-property-owning classes. Yet these powerless subjects continue to struggle with the system, and to survive within it. Horkheimer and Adorno hint that consumption of culture industry products is diverse. The radio ham, for example, attempts to retain some autonomy and individuality by building and operating his or her own radio, rather than accepting what is given, ready made. Others use the cover of culture industry institutions, such as the cinema, to admit the unhappiness that would paralyze them in the real world. Even within the culture industry, not all of its products are homogeneous. Orson Welles (and later Michelangelo Antonioni) demonstrate that cinema has the critical and self-reflective potential that Adorno attributes to all autonomous art; Bette Davis keeps alive the tradition of great acting; and if the nuances of the text are to be believed, Warner Brothers cartoons do not share the simple-minded capitulation to authority that is the hallmark of Disney.



democracy

3. In the classical Greek polis, democracy was the name of a constitution in which the poorer people (demos) exercised power in their own interest as against the interest of the rich and aristocratic. Aristotle thought it a debased form of constitution, and it played relatively little part in subsequent political thought, largely because Polybius and other writers diffused the idea that only mixed and balanced constitutions (incorporating monarchic, aristocratic and democratic elements) could be stable. Democracies were commonly regarded as aggressive and unstable and likely to lead (as in Plato's Republic) to tyranny. Their propensity to oppress minorities (especially the propertied) was what Burke meant when he described a perfect democracy as the most shameless thing in the world.

Democracy as popular power in an approving sense may occasionally be found in early modern times (in the radical thinkers of the English Civil War, the constitution of Rhode Island of 1641, and in the deliberations of the framers of the American Constitution), but the real vogue for democracy dates from the French Revolution. The main reason is that 'democracy' came to be the new name for the long-entrenched tradition of classical republicanism which, transmitted through Machiavelli, had long constituted a criticism of the dominant monarchical institutions of Europe. This tradition had often emphasized the importance of aristocratic guidance in a republic, and many of its adherents throughout Europe considered that the British constitutional monarchy with an elected parliament was the very model of a proper republic. This idea fused in the 19th century with the demand to extend the franchise, and the resulting package came generally to be called 'democracy'.

It is important to emphasize that democracy was a package, because the name had always previously described a source of power rather than a manner of governing. By the 19th century, however, the idea of democracy included representative parliaments, the separation of powers, the rule of law, civil rights and other such liberal desirabilities. All of these conditions were taken to be the culmination of human moral evolution, and the politics of the period often revolved around extensions of the franchise, first to adult males then to women, and subsequently to such classes as young people of 18 (rather than 21) and, recently in Britain, to voluntary patients in mental hospitals. Democracy proved to be a fertile and effervescent principle of political perfection. Inevitably, each advance towards democracy disappointed many adherents, but the true ideal could always be relocated in new refinements of the idea. The basis of many such extensions had been laid by the fact that democracy was a Greek term used, for accidental reasons, to describe a complicated set of institutions whose real roots were medieval. The most important was representation, supported by some American founding fathers precisely because it might moderate rather than reflect the passions of an untutored multitude. The Greekness of the name, however, continually suggests that the practice of representation is not intrinsic to modern democracy, but rather a contingent imperfection resulting from the sheer size of modern nations by comparison with ancient city states. In fact, modern constitutional government is quite unrelated to the democracy of the Greeks.

Although modern democracy is a complicated package, the logic of the expression suggests a single principle. The problem is: what precisely is the principle? A further question arises: how far should it extend? So far as the first question is concerned, democracy might be identified with popular sovereignty, majority rule, protection of minorities, affability, constitutional liberties, participation in decisions at every level, egalitarianism, and much else. Parties emphasize one or other of these principles according to current convenience, but most parties in the modern world (the fascist parties between 1918 and 1945 are the most important exception) have seldom failed to claim a democratic legitimacy. The principle of democracy was thus a suitably restless principle for a restless people ever searching for constitutional perfection.

Democracy is irresistible as a slogan because it seems to promise a form of government in which rulers and ruled are in such harmony that little actual governing will be required. Democracy was thus equated with a dream of freedom. For this reason, the nationalist theories which helped destroy the great European empires were a department of the grand principle of democracy, since everybody assumed that the people would want to be ruled by politicians of their own kind. The demographic complexities of many areas, however, were such that many people would inevitably be ruled by foreigners, even on the national principle, which constitutes some as the nation, and the rest as minorities. In claiming to be democratic, rulers might hope to persuade their subjects that they ruled in the popular interest.

Democracy is possible only when a population can recognize both sectional and public interests, and organize itself for political action. Hence no state is seriously democratic unless an opposition is permitted to criticize governments, organize support, and contest elections. But in many countries, such oppositions are likely to be based upon tribes, nations or regions, which do not recognize a common or universal good in the state. Where political parties are of this kind, democratic institutions generate quarrels rather than law and order. In these circumstances, democracy is impossible, and the outcome has been the emergence of some other unifying principle: sometimes an army claiming to stand above 'politics', and sometimes an ideological party in which a doctrine supplies a simulacrum of the missing universal element. One-party states often lay claim to some eccentric (and superior) kind of democracy - basic, popular, guided and so on. In fact, the very name 'party' requires pluralism. Hence, in one-party states, the party is a different kind of political entity altogether, and the claim to democracy is merely window-dressing. This does not necessarily mean, however, that such governments are entirely without virtue. It would be foolish to think that one manner of government suited all peoples.

Democracy as an ideal in the 19th century took for granted citizens who were rationally reflective about the voting choices open to them. Modern political scientists have concentrated their attention upon the actual irrationalities of the democratic process. Some have even argued that a high degree of political apathy is preferable to mass enthusiasm which endangers constitutional forms. See also citizenship, democratization.


democratic transition

3. The process through which authoritarian regimes are transformed into democratic regimes is called democratization. It must be kept analytically distinct both from the process of liberalization and from the process of transition. Liberalization is simply the decompression of the authoritarian regime taking place within its framework. It is controlled by the authoritarian rulers themselves. It consists in the relaxation of the most heinous features of authoritarianism: the end of torture, the liberation of political prisoners, the lifting of censorship and the toleration of some opposition. Liberalization may be the first stage in the transition to democracy. However, the transition to democracy truly begins when the authoritarian rulers are no longer capable of controlling domestic political developments and are obliged to relinquish political power. At that point, an explosion of groups, associations, movements and parties signals that the transition has started.

There is no guarantee that, once begun, a transition from authoritarianism will necessarily lead to a democratic regime. Though it will simply be impossible to restore the previous authoritarian regime, in many cases the political transition will be long, protracted and ineffective. In other cases, the major features of a democratic regime will come into being. Usually, political parties re-emerge representing the old political memories of the country or new parties are created to represent the dissenting groups and the adamant oppositions during the authoritarian regime. Depending on the tenure of the previous authoritarian regimes, there will appear different leadership groups. If the authoritarian regime has lasted for some decades, then few old political leaders retain enough social popularity and political support to be able to play a significant role in the transition and new young leaders will quickly replace them. If the authoritarian regime has lasted less than a decade, it will be possible for the political leaders ousted by the authoritarian regime to restructure their political organizations and to reacquire governmental power. During the process of democratization new institutions will be created. The once atomized and compressed society enters into a process of self-reorganization and provides the social basis for new political actors.

The reorganization of society has proved easier in non-communist authoritarian regimes. In former communist authoritarian regimes, all social organizations have been destroyed by the communist party. Only groups supporting the communist party and dominated by it were allowed to function. Few dissenting associations and movements were tolerated and allowed to be active in politics. On the contrary, in southern European and Latin American countries, the various authoritarian regimes never succeeded in destroying all forms of pluralism, or organized groups. Moreover, their rate of economic growth, though limited, created the premises of a pluralist society almost ready-made for the process of democratization. Eastern European communist authoritarian regimes collapsed in a sort of sociopolitical void. Only in Poland did a powerful organized movement exist, Solidarnosc, that could inherit political power. Otherwise, citizens' forums and umbrella associations had to emerge while former communists slowly reorganized themselves. For these reasons, free elections have determined a new distribution of political power in eastern Europe without yet stabilizing a democratic regime.

According to Harvard political scientist Samuel P. Huntington (1991), so far there have been three waves of democratization and two reversals: the first wave took place from 1828 to 1926 and the first reverse wave from 1922 to 1942. The second wave appeared from 1943 to 1962 and the second reverse wave from 1958 to 1973. Finally, the third wave materialized starting from 1974 and is still going on. The overall process of democratization has moved from the Anglo-Saxon and northern European countries to the southern European rim and to the Latin American continent. It has now reached all eastern European and several Asian countries. Democratization is no longer a culturally bounded phenomenon and, contrary to previous periods, it has found a largely supportive international climate. Though not all newly created democratic regimes are politically stable and socioeconomically effective, they appear to have won the bitter and prolonged struggle against authoritarian actors and mentalities. Only Muslim fundamentalist movements now represent a powerful and dogmatic alternative to the attempts to democratize contemporary regimes. See also democracy.

demographic transition

3. Demographic transition, also known as the demographic cycle, describes the movement of death and birth rates in a society from a situation where both are high to one where both are low. In the more developed economies, it was appreciated in the 19th century that mortality was declining. Fertility began to fall in France in the late 18th century; and in north-west and central Europe, as well as in English-speaking countries of overseas European settlement, in the last three decades of the 19th century. Fertility levels were believed to have approximated mortality levels over much of human history, but the fact that fertility declined later than mortality during demographic transition inevitably produced rapid population growth. In France this situation appeared to have passed by 1910, as birth and death rates once again drew close to each other, and by the 1930s this also seemed to be happening in the rest of the countries referred to above.

Thompson (1929) categorized the countries of the world into three groups according to their stage in this movement of vital rates (later also to be termed the vital revolution). This process was to be carried further by C. P. Blacker (1947), who discerned five stages of which the last was not the reattainment of nearly stationary demographic conditions but of declining population, a possibility suggested by the experience of a range of countries in the economic depression of the 1930s. However, it was a paper published in 1945 by Notestein, the director of Princeton University's Office of Population Research, which introduced the term demographic transition. Notestein implied the inevitability of the transition for all societies and, together with another paper published seven years later, began to explore the mechanisms which might explain the change. Notestein argued that the mortality decline was the result of scientific and economic change, and was generally welcomed. However, fertility had been kept sufficiently high in high-mortality countries to avoid population decline only by a whole array of religious and cultural mechanisms which slowly decayed once lower mortality meant that they were no longer needed. He also believed that the growth of city populations, and economic development more generally, created individualism and rationalism which undermined the cultural props supporting uncontrolled fertility. Demographic transition theory is less a theory than a body of observations and explanations. Coale (1973) has summarized research on the European demographic transition as indicating the importance of the diffusion of birth control behavior within cultural units, usually linguistic ones, with diffusion halting at cultural frontiers.
Caldwell (1976) has argued that high fertility is economically rewarding in pre-transitional societies to the decision makers, usually the parents, and that, if subsequent changes in the social relations between the generations mean that the cost of children outweighs the lifelong returns from them, then fertility begins to fall. The Chicago Household Economists (see Schultz 1974) place stress on the underlying social and economic changes in the value of women's time as well as on the changing marginal value of children.

After the Second World War doubt was cast as to whether the transition necessarily ended with near-stationary population growth because of the occurrence in many industrialized countries of a baby boom, but by the 1970s this was regarded as an aberrant phenomenon related largely to a perhaps temporary movement toward early and universal marriages. By this time the demographic transition's claim to be globally applicable had received support from fertility declines (usually assisted in developing countries by government family planning programs) in most of the world with the major exceptions of Africa and the Middle East.

Although demographic transition refers to the decline of both mortality and fertility, social scientists have often employed it to refer almost exclusively to the latter phenomenon. Because the world's first persistent fertility decline began in north-west and central Europe in the second half of the 19th century, demographers were tempted to employ the term second demographic transition for subsequent large-scale movements of this type, such as the somewhat later fall of the birth rates in southern and eastern Europe or the fertility decline in much of Latin America and Asia from the 1960s. However, second demographic transition has now found acceptance as the description of the fertility decline which followed the baby boom in developed countries and which began first in those countries which earliest participated in the first demographic transition, north-west and central Europe and the English-speaking countries of overseas European settlement. Between the mid-1960s and early 1980s fertility fell by 30-50 per cent in nearly all these countries so that by the latter date fertility was below the long-term replacement level in nearly all of them and the rest of Europe as well. Philippe Ariès (1980) wrote of two successive motivations for the decline of the Western birth rate, stating that, while the first had aimed at improving the chances of the children in achieving social and occupational advancement in the world, the second was parent-oriented rather than child-oriented and had in fact resulted in the dethroning of the child-king (his term).
Ariès and others agreed that in a faster changing world parents were now planning for their own futures as well as those of their children, that the process had been accelerated by later and fewer marriages and the trend towards most women working outside the home, and that it had been facilitated by the development of more effective methods of birth control, such as the pill and IUD, and readier resort to sterilization. Henri Léridon (1981) wrote of the second contraceptive revolution, Ron Lesthaeghe and Dirk van de Kaa (1986) of two demographic transitions and van de Kaa (1987) of the second demographic transition. See also fertility.

dialectical and historical materialism

    1. Historical materialism is the theory of social change developed by Karl Marx and Friedrich Engels. History is divided into a series of epochs or modes of production. Each is characterized by a distinct economy and a distinct class structure. Historical change is fuelled by the progressive expansion of the productive power of the economy (and thus the development of technology, or the forces of production) and is manifest in overt class conflict and revolution.

    2. Dialectical materialism encompasses those aspects of Marxist philosophy other than the theory of history, including epistemology and ontology. It became the dogmatic official philosophy of the Soviet Union. The term was not used by Marx or Engels, with attempts to develop a coherent dialectical materialist philosophy beginning with Plekhanov and Lenin, building on Engels's Anti-Dühring (1947), and Dialectics of Nature (1973). Dialectical materialism is characterized by its materialism and its rejection of any form of skepticism. The material world is held to have primacy over the mental, so that the body is the precondition for consciousness. It is held that this material world is, in principle, knowable through the work of the empirical sciences. In addition, the philosophy is dialectical, in that it presents reality as in development. This is to argue, not simply that there is change in the material world, but rather that reality is characterized by the emergence of qualitatively new properties.


economic development

3. A central question in the study of economic development has turned out to be ‘in what precisely does the economic development of a society consist?’ For about twenty years after 1945, the accepted view was that the prolonged and steady increase of national income was an adequate indicator of economic development. This was held to be so because it was believed that such an increase could be sustained over long periods only if specific economic (and social) processes were at work.

These processes, which were supposed to be basic to development, can be briefly summarized as follows:

  1. The share of investment in national expenditure rises, leading to a rise in capital stock per person employed.
  2. The structure of national production changes, becoming more diversified as industry, utilities and services take a larger relative share, compared with agriculture and other forms of primary production.
  3. The foreign trade sector expands relative to the whole economy, particularly as manufactured exports take a larger share in an increased export total.
  4. The government budget rises relative to national income, as the government undertakes expanded commitments to construct economic and social infrastructure.

Accompanying these structural changes in the economy, major changes of social structure also occur:

  5. The population expands rapidly as death rates fall in advance of birth rates. Thereafter, a demographic transition occurs in which improved living conditions in turn bring the birth rate down, to check the rate of overall population increase.
  6. The population living in urban areas changes from a small minority to a large majority.
  7. Literacy, skills and other forms of educational attainment are spread rapidly through the population.

This conceptualization of economic development as the interrelation of capital accumulation, industrialization, government growth, urbanization and education can still be found in many contemporary writers. It seems to make most sense when one has very long runs of historical statistics to look back over. Then the uniformities which this view implies are most likely to be visible. One doubt has always been whether generalizing retrospectively from statistics is not an ahistorical, rather than a truly historical, approach. It presupposes some theory of history which links the past to the future. The theory may not be transparent, or it may be unsubtly mechanistic and deterministic.

Another major doubt about the adequacy of the view of development described in processes 1-7 centers around the question of income distribution. If the basic development processes described above either do not make the distribution of income more equal, or actually worsen the degree of inequality for more than a short period, some theorists would argue that economic development has not taken place. They prefer to distinguish economic growth from economic development which, by their definition, cannot leave the majority of the population as impoverished as they originally were. For them, indicators of growth and structural change must be complemented by indicators of improvement in the quality of everyday life for most people.

The latter can be of various kinds. They can focus on the availability of basic needs goods - food, shelter, clean water, clothing and household utensils. Or they can focus on life expectation tables and statistics of morbidity. The availability and cost of education opportunities are also relevant. Although the distribution of income may be a good starting-point, the distribution of entitlements to consume of all kinds is the terminus. Similar kinds of consideration arise when one examines the role of political liberty in economic development. Is rapid growth and structural change induced by an oppressive, authoritarian regime true development? Those who object to the 'costs' of the development strategies of the former Soviet Union or the People's Republic of China do not think so. From a libertarian standpoint, they refuse to accept the standard account of economic development as sufficiently comprehensive.
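The income-distribution indicators discussed here are most often summarized in a single statistic such as the Gini coefficient. As an illustrative sketch (not drawn from the source texts), the coefficient can be computed from a list of incomes:

```python
def gini(incomes):
    """Gini coefficient: 0 = perfect equality, approaching 1 = maximal inequality.

    Uses the sorted-rank formula G = 2*sum(i*x_i)/(n*sum(x)) - (n+1)/n,
    where incomes are sorted ascending and ranked from 1.
    """
    xs = sorted(incomes)
    n = len(xs)
    total = sum(xs)
    if total == 0:
        return 0.0
    weighted = sum((i + 1) * x for i, x in enumerate(xs))
    return (2 * weighted) / (n * total) - (n + 1) / n

# A perfectly equal distribution has a Gini of 0
print(gini([10, 10, 10, 10]))              # 0.0
# A highly concentrated one approaches 1
print(round(gini([0, 0, 0, 100]), 2))      # 0.75
```

Two development paths with the same growth of average income can thus be distinguished by whether this statistic is rising or falling.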

The difficulty here is clearly with weighting all of the different indices involved to arrive at a single measure of the degree of development in this extended sense. Perhaps it cannot be done; and perhaps, if policy rather than international league tables is our main concern, this failure is not very important. The most familiar recent attempt in this vein is the United Nations Development Programme's Human Development Report series (UNDP 1990- ).
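The Human Development Report's approach to this weighting problem is to normalize each dimension between fixed "goalposts" and combine the normalized indices into one number (a geometric mean in the HDI's post-2010 form; earlier reports used an arithmetic mean). The sketch below uses invented goalposts and values purely for illustration:

```python
def normalize(value, lo, hi):
    """Scale an indicator onto [0, 1] between chosen goalposts, clamping."""
    return max(0.0, min(1.0, (value - lo) / (hi - lo)))

def composite_index(indices):
    """Geometric mean of the dimension indices."""
    prod = 1.0
    for x in indices:
        prod *= x
    return prod ** (1.0 / len(indices))

# Hypothetical goalposts and country values, for illustration only
life = normalize(70, 20, 85)         # life expectancy in years
schooling = normalize(10, 0, 15)     # mean years of schooling
income = normalize(9.2, 6.0, 11.0)   # log of income per capita
print(round(composite_index([life, schooling, income]), 2))
```

The geometric mean penalizes uneven achievement across dimensions, so a country cannot fully offset, say, low life expectancy with high income.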

Linked with these questions about the meaning of development is the problem of conceptualizing the process of development. Perhaps the most famous of all models of this process is the classically-based model of Sir Arthur Lewis (1954). This attempts to capture the simultaneous determination of income growth and income distribution. Its key assumptions are the availability within traditional, technologically backward agriculture of surplus population (surplus in the sense that their marginal product in agriculture is zero); and the existence of a conventional subsistence wage in the industrial sector which does not rise as the surplus agricultural population is transferred to industrial employment. The transfer of labor from agriculture to industry at a constant wage rate (which may or may not involve physical migration, but usually does) permits industrial capitalists to receive an increasing share of a rising national income as profit and to reinvest their profits in activities which progressively expand the share of industry in the national output. Thus Lewis explained what he regarded as the central puzzle of economic development, namely to understand the process which converted economies which habitually saved and invested 4-5 per cent of the national income into economies which save and invest 12-15 per cent.
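The Lewis mechanism can be caricatured in a few lines of code. This is a toy sketch with invented parameters, not the model's formal statement: each period, industrial output splits into a constant subsistence wage bill and profits, and reinvested profits hire more workers out of the agricultural surplus, so the economy's profit (and hence savings) share rises.

```python
def lewis_step(labor, wage=1.0, productivity=2.0, hire_rate=0.1):
    """One period of the toy model: profits = output minus the wage bill;
    a fraction of profits finances additional hiring at the same wage."""
    output = productivity * labor
    profits = output - wage * labor
    next_labor = labor + hire_rate * profits / wage
    return next_labor, profits, output

AGRI_OUTPUT = 100.0   # stagnant traditional sector (invented figure)
labor = 10.0          # initial industrial employment (invented figure)

_, profits0, output0 = lewis_step(labor)
initial_share = profits0 / (AGRI_OUTPUT + output0)

for _ in range(30):
    labor, profits, output = lewis_step(labor)
final_share = profits / (AGRI_OUTPUT + output)

# The profit share of national income rises as labor transfers,
# mimicking the shift from a low- to a high-saving economy.
print(round(initial_share, 2), "->", round(final_share, 2))
```

The wage never rises in this sketch, which is exactly the assumption the neo-classical critics discussed below called into question.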

The Lewis model can be elaborated to explain other stylized facts of development. If labor transfer involves physical migration, urbanization will follow. If capitalists are defined, as Lewis defines them, as those with an accumulation mentality, they can operate in the public as well as the private sector, and expansion of the government share in national output can be understood in these terms. If industrial employment in some sense requires labor to be healthy and educated, these major social changes - including a demographic transition - may be set in train.

Much of the subsequent literature on economic development can be read as an extended commentary on the Lewis model. Neo-classical economists have criticized the assumptions of the model, questioning whether labor in the agricultural sector does have a zero marginal product, and whether labor transfer can be effected without raising the real wage rate. Alternatives to the Lewis model as an account of rural-urban migration have been proposed. The Lewis model's sharp focus on physical capital formation has been strongly questioned. Some critics have gone so far as to deny that physical capital formation is necessary at all to economic development (e.g. Bauer 1981). A less extreme view is that human capital formation or investment in the acquisition of good health and skills is a prerequisite, rather than an inevitable consequence, of the successful operation of physical capital. A balance is therefore required between physical and human investments to ensure that an economy raises its level of technological capability in advance of a physical investment drive.

The sectoral destination of investment in the Lewis model also provoked a strong reaction. Although less narrowly focused than Dobb's model (1955), where investment priority was given to the capital goods sector of industry, the Lewis model's focus on investment in the modern industrial sector was seen as inimical to the development of agriculture, and the failure of agricultural development was increasingly identified as a cause of slow growth and income maldistribution in developing countries (as argued by Lipton 1977). The debate about sectoral investment balance has since been subsumed into the analysis of project appraisal, as pioneered by Little and Mirrlees (1974) and others. This provides, in principle, a calculus of the social profitability of projects in all sectors of the economy. It is worth noting, however, that the rationale for the social pricing of labor in the Little and Mirrlees method is based on the Lewis conception of agriculture-industry labor transfer.

The Lewis model recognized the possibilities of state capitalism as well as private capitalism. The infrequency in practice with which such potential has been realized has led to demands that governments confine themselves to their so-called 'traditional functions' and the creation of an incentive and regulatory framework for the promotion of private enterprise. This has been one of the major thrusts of the counter-revolution in development thinking and policy of the 1980s (Toye 1993).

Foreign trade plays a minor role in the Lewis model and other early models of economic development. This reflected the pessimism of many pioneers (such as Prebisch 1959 and Singer 1950) about the tendency of the terms of trade of primary commodity producers to decline. It also responded to a belief that, historically, isolation from the world economy had spurred development in Meiji Japan (Baran 1973) and Latin America in the 1930s (Frank 1969). More recently, the expansion of manufactured exports has been seen as a major element in the astonishingly successful development performances of East Asian countries like South Korea and Taiwan. Debate still rages, however, about whether this kind of trade expansion experience validates liberal trade and finance policies, or an intelligent and selective style of government intervention in these markets (as argued by Wade 1990).

The concessional transfer of finance and technical assistance from developed to developing countries fitted well with the Lewis emphasis on physical capital formation as the key to growth and income distribution. More recently, the effectiveness of aid has been questioned. Although simple supporters and enemies remain vocal, it is common now to see more clearly the complexities of the aid process, and to stress the many lessons that have been learned from hard experience to improve the likelihood of aid achieving its desired objectives (e.g. Cassen and Associates 1986; Lipton and Toye 1990).

Somewhat greater agreement exists on the facts of recent economic development than on how to bring it about. That many poor countries have experienced much economic growth and structural change since 1945 is widely accepted. Few still claim that growth in developed countries systematically causes increased poverty in other, poorer countries. A weaker version of this thesis is that there is an ever-widening gap between richest and poorest, which can arise even when the welfare of the poorest is constant or rising. Even this weaker version is controversial, on the grounds that countries are ranged evenly along a spectrum of wealth/poverty, and thus to split this spectrum into two groups of rich and poor in order to compare group statistics of economic performance can be somewhat arbitrary. In fact, the measured growth rates of developed and developing countries since the early 1960s show relatively small differences, and ones that may well lie within the margins of error that attach to such estimates. The countries classified as developing also show increasing differentiation among themselves.

But, although the overall record of economic growth at least need not give cause for deep gloom, certain geographical regions do appear to have markedly unfavorable development prospects. Such regions include sub-Saharan Africa and parts of South Asia, and of Central and South America. The reasons for their poor prospects vary from place to place. Some are held back by severe pressure of population on cultivable land; some by inability to generate indigenous sources of appropriate technical progress; some by the persistence of intense social and political conflict; some by unenlightened policy making; and some by the continuing failure to evolve a worldwide financial system which does not tend to amplify the inherent unevenness (over place and time) of economic development.

It is also true that the rapid increase in the world's population makes it possible for the absolute number of people whose consumption falls below a given poverty line to increase, even when the percentage of the world's people who are poor on this definition is falling. This is what seems to be happening at the moment. Despite all the evidence of widespread economic development, theoretical and practical work on poverty alleviation has, therefore, a growing urgency and relevance. See also economic growth, industrial revolution, modernization, underdevelopment.

economic growth

Economic growth is usually taken to mean the growth of the value of real income or output. The word 'real' signifies that only changes in quantities, and not changes in prices, are allowed to affect the measure. It is not equivalent to growth in welfare or happiness, although it may have important consequences for both, and there has been much debate about its benefits and costs. Measurable real income means, in turn, the maximum rate of measurable real consumption of goods and services that could be sustained indefinitely from the chosen point of time forward. It therefore presupposes that the costs of maintaining that rate of consumption are fully met, for example, that flocks are renewed, that roads are maintained, as also are the numbers of people and levels of their health and education. In principle, properly measured income would be net of the costs of environmental degradation, and would allow for depletion of minerals, thus acknowledging concerns which, however, have often been exaggerated. If income were all consumed, we would have a static economy in which all outputs would be maintained, but in which economic arrangements would be essentially unchanged.
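In practice, 'real' growth is obtained by deflating nominal income by a price index, so that pure price changes drop out of the measure. A minimal sketch, with invented figures:

```python
def real_growth(nominal_now, nominal_prev, deflator_now, deflator_prev):
    """Growth of real income: nominal values converted to constant
    prices with a price deflator before taking the growth rate."""
    real_now = nominal_now / deflator_now
    real_prev = nominal_prev / deflator_prev
    return real_now / real_prev - 1.0

# Nominal income rises 10%, but prices rise 6%:
# real growth is only about 3.8%, not 10%.
g = real_growth(110.0, 100.0, 1.06, 1.00)
print(round(g * 100, 1))  # 3.8
```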

To make the economy grow requires economic arrangements to change, and the cost of making these changes is investment. This can take many different forms, for example, increasing flocks, building more or better roads, making more or better machinery, educating more people, or to a higher standard, undertaking research and development, and so on. Countries will grow faster the more of their incomes they devote to investment and the more efficient that investment is. The former Soviet Union invested a large fraction of its income, but it was so inefficiently done that little benefit accrued to its people. Experience suggests that efficiency is best secured through a free market system, although there are some important large-scale investments that need to be centrally planned. In a market system, profits are important for investment for several reasons. They generally provide most of the savings and finance. High profits strengthen business confidence to undertake investment. Apart from taxation, the higher the share of profits, the lower must be the share of wages, and the stronger the incentive to employ more labor. But profits must not be the result of monopolistic agreements to raise prices and restrict output. Such agreements reduce growth, as does government protection from foreign competition.

The above emphasis on investment as the source of economic growth may seem little more than common sense. For a long time, however, economists have placed their emphasis on technical progress. This is because of the seemingly well-established empirical finding that, taken together, investment and employment growth can account for little more than one-half of the growth of many developed economies. The residual, unexplained, part of growth is then attributed to technical progress, which, according to theory, is the discovery of new ways to increase output from given inputs of labor and capital. Unfortunately, the residual in these studies results from the mistaken way in which the contribution of investment to growth has been estimated. In reality this contribution has been far greater, and there is no residual to be explained.
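The residual referred to here comes from standard growth accounting: subtract the factor-share-weighted growth of capital and labor from output growth, and attribute whatever remains to technical progress. A sketch with hypothetical figures (the capital share of 0.3 is a conventional illustrative value, not a number from the source texts):

```python
def solow_residual(g_output, g_capital, g_labor, capital_share=0.3):
    """Growth-accounting residual (total factor productivity growth),
    assuming output elasticities equal factor shares:
        g_A = g_Y - a * g_K - (1 - a) * g_L
    """
    return g_output - capital_share * g_capital - (1 - capital_share) * g_labor

# Output grows 4%, capital 5%, employment 1% (hypothetical figures):
r = solow_residual(0.04, 0.05, 0.01)
print(round(r, 3))  # 0.018 -> nearly half of growth is left 'unexplained'
```

The passage's point is that if investment's true contribution is larger than `capital_share * g_capital` implies, this residual shrinks toward zero.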

The earlier growth theories of Solow and others not only attributed too little growth to investment but also claimed that in the long run increasing the share of investment in output would leave the rate of growth unchanged. What mattered was technical progress, but its causes were left unexplained. Subsequently, attempts were made to explain it, usually by attributing it to some narrow category of investment such as research and development expenditure. These theories have erred in implicitly treating most investment as if it were just reduplication of existing assets. Since investment is the cost of changing economic arrangements, it is never mere reduplication. It is this which explains why investment opportunities are continually present, and are not steadily eliminated as they are taken up. Undertaking investment creates new opportunities as fast as it eliminates others. In the 19th century, for example, it was the railways and the telegraph which opened up the interior of the USA and which led to the development of much larger manufacturing and distributing enterprises able to take advantage of economies of scale and scope. Research and development are important, especially for some industries (e.g. aerospace, electronics and pharmaceuticals), but they are by no means the only important generators of investment opportunities. The fastest growing countries in the world are not those with the greatest expenditures on research and development. They are those Asian countries in which wages are low and profitability and investment are both high, as businesspeople take advantage of their ability to imitate and catch up with the more developed countries in a relatively free market system. They have also benefited through a large transfer of workers from agriculture to manufacturing and other enterprises.
Perhaps the chief constraint on growth in the west is the fact that a prolonged boom, which is needed to generate high rates of profit and investment, results too soon in unacceptably high rates of inflation, and leads western governments to slam on the brakes. By contrast, the Asian countries seem able to sustain their booms for much longer. See also economic development.


elites

1. An elite is a small group that has leadership in some sphere of social life (such as a cultural elite), or has leadership of society as a whole. The elite is typically understood to be relatively homogeneous and with a largely closed membership. Modern elite theory developed in the early years of the 20th century, through the work of Vilfredo Pareto (1963), Gaetano Mosca (1939) and others. This theory was opposed to socialism, not least in so far as it argued for the inevitability of the division of all societies into an elite (with superior organizational abilities) and an inferior mass. More significantly, at a theoretical level, elite theory suggested, again in contrast to socialism and Marxism, that the power of the dominant group in society did not have to be rooted in economic power. In so far as classes are economically defined, elite theory therefore offered an alternative account of social stratification and hierarchies from that provided by class theory. In this light, the work of C. Wright Mills (1956) on the 'power elite' is significant. Mills argued that contemporary America was dominated by an elite that unified three key spheres of society: industry, politics and the military. Unlike earlier elite theorists, Mills's concern was to expose the elite, and the adverse effects that it had on democracy, rather than to celebrate its inevitability.
In the study of culture, elite theory has had its greatest impact through mass society theory, and in the assumption that there is an inherently superior elite culture. This culture is seen, at worst, as threatened and eroded by the contemporary mass media, or, at best, as something the mass media are incapable of serving. As such, elite theory explicitly or implicitly judges popular culture by the standards of elite culture, and finds it wanting. It is therefore typically insensitive to the subtleties and complexities of popular culture.

3. The term elite is part of a tradition which makes modern social scientists uneasy. At the same time, its use facilitates historical and contemporary analysis by providing an idiom of comparison that sets aside institutional details and culture-specific practices, and calls attention instead to intuitively understood equivalencies. Typically, an adjective precedes the word elite, clarifying its aim (oligarchic elite, modernizing elite) or its style (innovating elite, brokerage elite) or its institutional domain (legislative elite, bureaucratic elite) or its resource base (media elite, financial elite) or the decisional stage it dominates (planning elite, implementing elite) or its eligibility grounds (birth elite, credentialed elite).

Two quite different traditions of inquiry persist. In the older tradition, elites are treated as exemplars: fulfilling some historic mission, meeting a crucial need, possessing superior talents, or otherwise demonstrating qualities which set them apart. Whether they stabilize the old order or transform it into a new one, they are seen as pattern setters.

In the newer approach, elites are routinely understood to be incumbents: those who are collectively the influential figures in the governance of any sector of society, any institutional structure, any geographic locality or translocal community. Idiomatically, elites are thus roughly the same as leaders, decision makers or influentials, and not too different from spokespersons, dignitaries or central figures. This second usage is more matter-of-fact, less normative in tone.

Still, elites are seen by many as selfish people in power, bent upon protecting their vested interests, contemptuous of the restraints on constitutional order, callous about the needs of larger publics, ready to manipulate opinion, to rig elections, to use force if necessary to retain power. A conspiratorial variant worries those who fear revolutionary subversive elites: fanatical, selfless, disciplined, competent and devoted to their cause, equally contemptuous of political democracy, constitutional order or mass contentment, willing to exploit hatred and misery, to misrepresent beliefs and facts, and to face personal degradation and social obloquy. Whether to preserve old patterns of life or to exemplify new ones, elites are those who set the styles.

When most social scientists talk about elites, they have in mind 'those who run things' - that is, certain key actors playing structured, functionally understandable roles, not only in a nation's governance processes but also in other institutional settings - religious, military, academic, industrial, communications, and so on.

Early formulations lacked this pluralist assumption. Mosca (1896) and Pareto both presumed that a ruling class effectively monopolized the command posts of a society. Michels (1911) insisted that his 'iron law of oligarchy' was inevitable; in any organization, an inner circle of participants would take over, and run it for their own selfish purposes. By contrast, Lasswell's (1936) formulation in the 1930s was radically pluralistic. Elites are those who get the most of what there is to get in any institutionalized sector of society and not only in the governing institutions and ancillary processes of organized political life. At every functional stage of any decision process - indeed, in any relevant arena - some participants will be found who have sequestered disproportionate shares of those values, whether money, esteem, power or some other condition of life which people seek and struggle for. They are the elite at that stage and in that context. For Lasswell (1977), the question whether a situation is fully egalitarian - that is, extends elite status to every participant - is an empirical question, not a conceptual one. Nor is there necessarily any institutional stability. Macro-analysis of history shows periods of ascendancy for those with different kinds of skills, such as in the use of violence, propaganda, organization, or bargaining strategy.

The social formations - classes, communities and movements - from which elites derive are not fixed. Elites are usefully studied by asking which communities they represent or dominate, which classes they are exponents or products of, which interests they reflect or foreshadow, which personality types they are prone to recruit or to shunt aside, which circumstances of time and place (periods of crisis, tranquillity or transition) seem to provide missions and challenges for them.

Elites may change their character. Elite transformation has often been traced. Pareto saw vitality and decay as an endless cycle. Students of Third-World modernization often note the heightened tensions within a governing elite that accompany the shift in power from a revolutionary-agitational generation to a programmatic-executive generation. Specialized elites - engineers, soldiers or priests - have often served as second-tier elites, recruited in the service of a ruling class that continues to set a governing style but whose members lack the skill to cope with new and pressing problems. Some scholars hold that a true elite emerges when those who perform the historic mission - whether to bring change, adapt to change, or resist to the end - become convinced that only they can carry out the mission properly. Self-consciously, they come to think of themselves as superior by nature - for example, able to think like scientists or soldiers, willing to take risks like capitalists or revolutionaries.

For some centuries, the historical forces that have been shaping the institutions of modern, urban, industrial, interdependent, institutionally differentiated societies have had a net effect that enlarges, democratizes and equalizes the life chances for elitehood. Everywhere the political stratification system typically resembles a pyramid, reflecting the striking cross-national uniformity that only tiny fractions of a country’s citizens have more than an infinitesimal chance of directly influencing national policy or even translocal policies. At the same time, fewer disadvantages linked to social status, educational attainment, geographic residence, cultural claims, age and sex attributes, or institutional credentials now appear to operate as conclusively or comprehensively as in the past.

Viewed as incumbents, those who hold key positions in the governing institutions of a community are, collectively, the elite. They are the custodians of the machinery for making policy. Once a sector of society becomes institutionally differentiated, its ability to adjust to conditions on its own terms is likely to be seriously constrained. Even with its semi-autonomous domains, a custodial elite finds it hard to sustain a network or co-ordinate sector-wide efforts. Medical elites are typically locality-rooted. Military services feud with one another. Scientists are engrossed with specialized lines of inquiry. Commercial elites are fragmented. Industrial giants are rivals.

In the modern world, when elites are seen as housed within conventionally recognized establishments such as military, diplomatic, legislative or party organizational structures, mid-elites and cadres are linked hierarchically to top elites and specialized to implement the specific public and system goals of their domains. When elites are viewed as the top talent in a vocational field – lawyers, academics, entrepreneurs, and so on – the elite structure is much more disjointed. Mid-elites are the source of eligible talent, engaged in tasks having no necessary articulation with what top-elites do, but nonetheless tasks that train and test, groom and screen individuals who may in due course reach top elitehood in their field.

Top elites in a custodial structure do not necessarily work well together. The structural complexity of legislatures is such that they typically have rather segmented power structures. In characterizing a military elite, the rivalries of services and branches, the geographic deployment and generational gaps between echelons all must be acknowledged. The illusion of homogeneity about the administrative elite is dispelled when one looks closely. Career services give some coherence to relatively autonomous fields, like police, fire, diplomacy and health. But in specific policy domains, clientele elites often dominate the picture. Especially when talking about elites in rather amorphous fields of endeavor, the implications of structural disjunctions for the perspectives of those in top positions seem far-reaching. Most communications elites are set at working odds with one another in the various media where their contacts and skills apply. At community levels, civic leaders rarely sustain close contacts with their counterparts in other localities.

Elites are studied both in context, in what can be called elite characterization work, and out of context, in what is referred to as elite survey work. There are two main genres of the former: those in which elites are characterized by their mission or function, and those in which elites are seen in a custodial capacity, and characterized by the performance of the institutional processes they control.
In a corresponding way, elite surveys - in which elites are taken out of context - also have two main genres: those in which the investigator is mainly interested in what the elites think, in the acumen, loyalty and ideological bent of mind typical of certain elite perspectives, and those which explore the recruitment of elite figures by looking at the changing opportunity structure, at social credentials, screening criteria, processes of sponsorship, grooming and socialization, and at the gatekeepers, brokers and mentors who affect the cursus honorum of a career. In modern systematic survey work, it is customary to say at the outset how, when and where the elite status of those studied has been established, whether by reputation, position held, or process participated in. Interviews are then held, often rather long interviews, to learn their beliefs, perceptions, preferences and appraisals. Necessarily, in survey work, elites are not studied 'in action.' See social mobility, social stratification.

Enlightenment, The

1. An intellectual movement which occurred in France (but also in Britain in the form of the ‘Scottish Enlightenment’) during the latter part of the 18th century. Key thinkers associated with The Enlightenment were d’Alembert, Diderot, Hume, Kant, Rousseau, Smith and Voltaire. The maxim propounded by Kant, ‘Dare to understand!’, sums up well the underlying optimism which spurred much Enlightenment thinking. This thinking was characterized by a number of significant attitudes: a faith in the ability of reason to solve social as well as intellectual and scientific problems; an aggressively critical perspective on what were perceived as the regressive influences of tradition and institutional religion (the latter expressed in Voltaire's famous declaration concerning the Christian religion: 'Crush the infamy'); a faith in humanism and the ideal of progress; and the espousal of a politics of toleration and free thinking. In spite of the generally critical stance towards religion, not all Enlightenment thinkers were, like Diderot, avowed atheists; Voltaire espoused a passionately held belief in a non-Christian deity, whilst Hume was phlegmatically agnostic with regard to such matters, although his famous criticism of the belief in miracles demonstrates a typical Enlightenment commitment to a skeptical view of metaphysical beliefs in the light of advances in the physical sciences after Newton's Principia. That said, Hume's thought often cuts against the grain of the Enlightenment faith in reason, while Rousseau's writings are often associated with the development of Romanticism.

Commentators such as Habermas continue to adhere to the basic project of Enlightenment as set out by Kant, i.e. an adherence to a critical project of modernity which has as its aim the articulation of a rational basis for discourses of knowledge, and political and social criticism. Lyotard (most notorious for his early (1979) espousal of postmodernism) also takes the Enlightenment to signify a key moment in the development of critical reason, namely the initiation of postmodernity (found in the writings of Kant - principally the Critique of Judgement). Other thinkers in the 19th and 20th centuries have either reacted against the Enlightenment project, or attempted to rearticulate it in diverse ways. For example, (i) Nietzsche's thinking (in spite of his current association with postmodern anti-Enlightenment thought) without doubt owes a significant debt to the Enlightenment tradition, especially his books of the late 1870s and early 1880s (Human, All-Too-Human (1878), for instance, was dedicated to the memory of Voltaire when it was first published, and adopts a methodological skepticism which shows the influence of Enlightenment thought); and (ii) Horkheimer and Adorno's Dialectic of Enlightenment (1947) seeks to unpack the key methodological presuppositions underlying the Enlightenment conception of rationality while adhering to its critical ideals.

entrepreneurship

3. The term entrepreneur seems to have been introduced into economic theory by Cantillon (1755) and was first accorded prominence by Say (1803). It was variously translated into English as merchant, adventurer or employer, though its precise meaning is 'the undertaker of a project.' John Stuart Mill (1848) popularized the term in Britain.

In the neo-classical theory of the firm, entrepreneurial ability is analogous to a fixed factor endowment because it sets a limit to the efficient size of the firm. The static and passive role of the entrepreneur in this theory reflects its emphasis on perfect information, which trivializes management and decision making, and on perfect markets, which do all the coordination that is necessary and leave nothing for the entrepreneur.

According to Schumpeter (1934), entrepreneurs are the prime movers in economic development, and their function is to innovate or carry out new combinations. Five types of innovation are distinguished: the introduction of a new good (or an improvement in the quality of an existing good); the introduction of a new method of production; the opening of a new market - in particular an export market in new territory; the 'conquest of a new source of supply of raw materials or half-manufactured goods'; and the creation of 'a new type of industrial organization' - in particular the formation of a trust or some other type of monopoly.

Schumpeter is also very clear about what entrepreneurs are not: they are not inventors, but people who decide to allocate resources to the exploitation of an invention; they are not risk-bearers: risk-bearing is the function of the capitalist who lends funds to the entrepreneur. Essentially, therefore, Schumpeter's entrepreneur has a managerial or decision-making role.

This view receives qualified support from Hayek (1937) and Kirzner (1973), who emphasize the role of entrepreneurs in acquiring and using information. Entrepreneurs' alertness to profit-opportunities, and their readiness to exploit these through arbitrage-type operations, make them the key element in the market process. Hayek and Kirzner regard entrepreneurs as responding to change - as reflected in the information they receive - while Schumpeter emphasizes the role of entrepreneurs as a source of change. These two views are not incompatible: a change effected by one entrepreneur may cause spill-over effects, which alter the environment of other entrepreneurs. Hayek and Kirzner do not insist on the novelty of entrepreneurial activity, however, and it is certainly true that a correct decision is not always a decision to innovate; premature innovation may be commercially disastrous. Schumpeter begs the question of whether someone who is the first to evaluate an innovation, but decides (correctly) not to innovate, qualifies as an entrepreneur.

Knight (1921) insists that decision making involves uncertainty. Each business situation is unique, and the relative frequencies of past events cannot be used to evaluate the probabilities of future outcomes. According to Knight, measurable risks can be diversified (or laid off) through insurance markets, but uncertainties cannot. Those who take decisions in highly uncertain environments must bear the full consequences of those decisions themselves. These people are entrepreneurs: they are the owners of businesses, not the salaried managers who make the day-to-day decisions.

Leibenstein (1968) regards the entrepreneur as someone who achieves success by avoiding the inefficiencies to which other people - or the organizations to which they belong - are prone. Leibenstein's approach has the virtue of emphasizing that, in the real world, success is exceptional and failure is the norm.

Casson (1982) defines the entrepreneur as someone who specializes in taking decisions where, because of unequal access to information, different people would opt for different strategies. Casson shows that the evaluation of innovations, as discussed by Schumpeter, and the assessment of arbitrage opportunities, as discussed by Hayek and Kirzner, can be regarded as special cases. Casson also shows that if Knight's emphasis on the uniqueness of business situations is used to establish that differences of opinion are very likely in all business decisions, then the Knightian entrepreneur can be embraced within his definition as well. Because the definition identifies the function of the entrepreneur, it is possible to use conventional economic concepts to discuss the valuation of entrepreneurial services and many other aspects of the market for entrepreneurs.

Perhaps the aspect of entrepreneurship that has attracted most attention is the motivation of the entrepreneur. Hayek and Kirzner take the Austrian view that the entrepreneur typifies purposeful human action directed towards individualistic ends. Schumpeter, however, refers to the dream and will to found a private dynasty, the will to conquer and the joy of creating, while Weber (1930) emphasizes the Protestant Ethic and the concept of calling, and Redlich (1956) the role of militaristic values in the culture of the entrepreneur. Writers of business biographies have ascribed a whole range of different motives to people whom they describe as entrepreneurs. For many students of business behavior, it seems that the entrepreneur is simply someone who finds adventure and personal fulfillment in the world of business. The persistence of this heroic concept suggests that many people do not want a scientific account of the role of the entrepreneur.

Successful entrepreneurship provides an avenue of social advancement that is particularly attractive to people who are denied opportunities elsewhere. This may explain why it is claimed that immigrants, religious minorities and people denied higher education are over-represented among entrepreneurs. Hypotheses of this kind are difficult to test without carefully controlled sampling procedures. The limited evidence available suggests that, in absolute terms, the most common type of entrepreneur is the son of an entrepreneur.

epistemology

    1. A philosophical term meaning ‘theory of knowledge.’ Epistemology concerns itself with the analysis of what is meant by the term ‘knowledge’ itself, and with questions about (i) what we can be said to know (the limits and scope of knowledge), (ii) its reliability, and (iii) what constitutes justification or warrant for holding a belief and thereby deeming that belief to be ‘knowledge.’ Thus, philosophers may ask: ‘Is there any difference between knowing and believing something to be the case?’ or, ‘To what extent does the acquisition of knowledge depend upon reason or the senses?’ There have been a wide variety of approaches to this issue. Plato (c. 428-348 BC) held that our rational capabilities are an intrinsic property of our minds and are the sole source of knowledge (a view usually placed under the rubric of ‘rationalism’). The exponents of empiricism, in contrast, argue that human understanding and hence knowledge is a result of sense experience alone. Hence, according to empiricism, what we know is the consequence of our ability to have perceptions of the world via our senses (this view is primarily associated with thinkers such as Locke, Berkeley and Hume).

    2. Against the empiricists, the German philosopher Immanuel Kant argued that there are necessary conditions of knowing that cannot be reduced to mere experience. Thus, Kant offered an account of the 'a priori' conditions of the possibility of experience. A priori judgements can be arrived at independently of experience. On this view, we have a form of knowledge (a priori knowledge) which exists prior to, and independently of, any empirical knowledge. Indeed, according to Kant such knowledge (for example, the 'pure intuitions' of time and space) is the precondition of the possibility of our having any knowledge of experience at all. One can best understand Kant's point by way of a comparison with Locke's empiricist conception of the mind. According to Locke, the human mind is like a 'blank sheet' which is then 'written' upon by sensory experience. This view, however, is open to the objection that if the mind is capable of having experiences then this must be so in virtue of some structure that it has prior to having any particular experience. If our minds were simply 'blank sheets' then how would we be able to recognize any experience as an experience in the first place? The ability to have experiences, Kant argues, cannot therefore be derived from any particular experience, hence there must be a priori judgements which constitute the conditions of the possibility of experience. Kant holds that there are two kinds of a priori knowledge, one based upon 'analytic' judgements, the other upon 'synthetic' judgements. Analytic a priori knowledge would include such propositions as 'all triangles have three sides' (i.e. it is true by definition, and we need no experiential data to establish its truth). Thus, in thinking a subject, A, and a predicate, B, the predicate is contained within A as part of it. In contrast, in synthetic judgements the predicate, B, is external to the subject, A (Critique of Pure Reason, A7/B11).
Synthetic judgements thus involve an act of inference which goes beyond the scope of the analytically derived concepts one has at one's disposal independently of experience (i.e. such judgements involve the empirical or external world). All judgements concerning experience are, for Kant, synthetic, and all knowledge that has any genuine value is knowledge about experience.

      In addition to such debates as those listed above concerning where our knowledge comes from, it is worth noting that philosophers also tend to draw distinctions between kinds of knowing. For example: (i) ‘knowing that…,’ which involves knowledge claims that are factual and capable of being established by way of reference to evidence; (ii) ‘knowing how …,’ the kind of knowledge required to do certain kinds of things (such as riding a bicycle); (iii) ‘knowledge by acquaintance,’ which includes such things as knowledge gained through individual experience, or personal knowledge (e.g. memories) and is not necessarily verifiable in the way that the knowledge mentioned in (i) is; (iv) ‘knowledge by description,’ which involves knowledge that is derived from our being informed about certain relevant facts, characteristics, etc. that pertain to something or someone (e.g. ‘Shakespeare’ is the person who wrote Hamlet, King Lear and other plays, was married to Anne Hathaway, and so on). As is often the case with philosophers, there is some considerable disagreement as to the usefulness of these definitions.

      Significant amongst other perspectives on knowledge are the views put forward by thinkers such as Friedrich Nietzsche (1844-1900) and, following him, Michel Foucault (1926-84). There are many possible interpretations of Nietzsche’s attitude to questions of knowledge (his work has, for instance, certain parallels with some of the ideas central to pragmatism). However, one dominant interpretation of knowledge that has exerted an influence upon views associated with post-modernism and post-structuralism is derived from the manner in which Foucault interpreted Nietzsche’s work. For Nietzsche, ‘knowledge’ is not something which can be analyzed properly in the absence of considerations of relations of power. This is because, on Nietzsche’s view, what we deem ‘knowledge’ is in fact the expression of an assemblage of drives and interests (see for instance the posthumously published notes which go to make up The Will to Power). This attitude parallels Nietzsche’s interpretation of the meaning of morality, offered in On the Genealogy of Morals (1887). Here, Nietzsche offers an account of ethical systems which identifies the values they espouse with their genealogical heritage: 'slave' morals valorize the 'meek' because the slave is a victim; 'noble' morality, in contrast, values what is powerful. Both slave and master, in short, in one way or another affirm themselves through their moralities. Foucault developed an argument on the basis of this account which sought to analyze knowledge forms as expressions of determinate social interests (see genealogy). Whatever the respective merits and problems of their views, one thing is clear: neither Nietzsche (as represented in this way) nor Foucault has an 'epistemology' in the way in which other thinkers, such as Kant, have had.
Indeed, if we are persuaded by them, then it is a short step to abandoning epistemology in favor of an intricate analysis of social relations (although what the status of such analyses would be as forms of knowledge is perhaps an awkward issue, especially for Foucault).

      However, it is not clear that one can abandon epistemology so easily. As Nietzsche himself noted at the beginning of Human, All-Too-Human (1878-80), providing an analysis of something's origins does not necessarily count as an exhaustive explanation of it. Thus, whatever the conditions or intentions that gave rise to a discourse, it may not be a straightforward matter to reduce its meaning merely to those conditions. Equally, although he certainly did not construct a formal ‘theory of knowledge,’ Nietzsche never entirely abandoned the temptation to pose epistemic questions, and many of his observations remain relevant to the study of epistemology (for instance, it is arguable that from the Genealogy one could derive a normative account of justification which could be situated comfortably within the domain of epistemological inquiry). Likewise, the genealogical method developed by Foucault can be subjected to various criticisms derived from alternative readings of Nietzsche (a good example is offered by Peter Dews, in Krell and Wood 1988). What this kind of perspective offers that is perhaps most significant is its inherently critical attitude to Cartesian epistemology, for in so far as power is constitutive of modes of knowledge it is also constitutive of the knower.


essentialism

    1. The view that there are essential properties which define what something is, and without which it could not be what it is. One form of essentialism ascribes these properties in virtue of a definition being given. For example, an essentialist of this kind would hold that there are certain essential properties which define what the term 'gold' refers to (a particular atomic weight, color, properties of hardness, malleability, etc.). In turn, any piece of gold must have those properties which are included within the definition of 'gold' in order to be designated as real gold. Whether or not adoption of this view commits one to holding that these properties must exist in reality prior to the act of naming an object, so that a definition, if it is true, is a priori true (see Lyotard's criticism of essentialism in The Differend: Phrases in Dispute (1988)), is perhaps an open question.

    2. Note also that there is a difference between this form of essentialism and the view which holds that objects must possess a hidden, concrete or 'real' essence which in turn causes us to attribute to them their observable properties (i.e. their 'nominal essence'). This position was first elaborated by the empiricist philosopher John Locke. A variant of this view was revived in the 1980s in the wake of the American philosopher Saul Kripke's arguments about the nature of proper names. Simply put, Kripke's account implies that since language succeeds in referring to things by means of proper names (Kripke calls such names 'rigid designators'; it should be noted that, for him, instances such as 'gold' are proper names), what it refers to must possess properties which make the referent of the name what it is independently of that language. This position is often referred to as ‘a posteriori [i.e. after the fact] essentialism.’ This is because on Kripke's account it is only the act of naming and thereby fixing a reference that is necessary a priori (i.e. before the fact), whereas the particular properties selected when one names something may be ‘accidental’ to what is referred to, and it could turn out that what is named does not have all or some of these properties.


ethnicity

    1. Generally a word used to refer to different racial or national groups which identifies them in virtue of their shared practices, norms and systems of belief. By terming groups 'ethnic' they are usually implicitly identified as being in a minority, and as possessing a different range of attitudes or traditions to the ones held and adhered to by the majority of a society's members. In turn, 'ethnicity' denotes the self-awareness on the part of a particular group of its own cultural distinctiveness. As is self-evident, the assertion of ethnic identity can be unifying or divisive in equal measure - often depending upon who is asserting it, of whom, and in which context. In some situations the self-aware possession of an ethnic identity could be a unifying experience (for instance, a point of focus for a given community). In other instances, the attribution of 'ethnicity' might well be regarded as a provocative and injurious form of stereotyping embodying racism. Thus, the issue turns upon who actively designates one particular social grouping as 'ethnic': for to be defined as 'ethnic' and to assert one's own 'ethnicity' are two very different things. In both cases, what is at stake may well be an issue of power, in so far as the affirmation of ethnicity can be read as an assertion of identity in the face of a social status quo, whereas to be defined in this way by 'majority opinion' may well be an oppressive manifestation of the power of more dominant forces and interests within a society.
3. Ethnicity is a fundamental category of social organization which is based on membership defined by a sense of common historical origins and which may also include shared culture, religion or language. It is to be distinguished from kinship in so far as kinship depends on biological inheritance. The term is derived from the Greek noun ethnos, which may be translated as 'a people or a nation.’ One of the most influential definitions of ethnicity can be found in Max Weber's Economy and Society (1922), where he describes ethnic groups as ‘human groups (other than kinship groups) which cherish a belief in their common origins of such a kind that it provides a basis for the creation of a community.’

The difficulty in reaching a precise definition of the term is reflected in the many different words employed in the literature to describe related or similar concepts, such as race and nation. While usage varies, 'race', like kinship, has biological connotations, although these are frequently without foundation, and nation implies a political agenda - the goal of separate statehood - beyond that generally associated with ethnic groups. According to Weber (1922), 'a nation is the political extension of the ethnic community as its members and leadership search for a unique political structure by establishing an independent state.’

In predominantly immigrant societies, like the USA, Argentina, Australia and Canada, the study of ethnic groups forms a central theme of their social, economic and political life. Systematic research on American ethnic groups can be traced to the sociologists of the Chicago School during the 1920s, led by W. I. Thomas and Robert Ezra Park, who were concerned with the processes of ethnic group assimilation into the dominant white, Anglo-Saxon, Protestant (WASP) mainstream. Park's 'race relations cycle', outlining a sequence of stages consisting of 'contact, competition, accommodation and assimilation', implied that successive ethnic groups would be absorbed into a relatively homogeneous US society. The underlying assumption of ethnic group theory was that a gradual process would result in the disappearance of separate ethnic groups into an American melting pot.

This unilinear interpretation gave way to more pluralistic conceptions of ethnicity in the USA, in which various dimensions of assimilation were identified by sociologists like Milton Gordon (1964). Gordon distinguished between cultural assimilation (acculturation) and structural assimilation, the former signifying the adoption of the language, values and ideals of the dominant society, while the latter reflected the incorporation of ethnic groups into the institutions of mainstream society. While cultural assimilation did not necessarily result in an ethnic group's inclusion within the principal institutions of society, structural assimilation invariably meant that assimilation on all other dimensions - from personal identification to intermarriage - had already taken place.

This conceptualization contrasts with that of M. G. Smith (1987), who argued that the key issue involved in a general theory of ethnic relations was the differential incorporation of ethnic groups into larger social units. Smith distinguished between three types of social incorporation: the universalistic type, where individuals are incorporated directly and on identical conditions in a common society; the differential mode, which is the same process except that individuals are incorporated on an unequal basis, either in a superior or inferior position; and segmental incorporation, where ethnic groups are incorporated in a common society 'as units of equivalent status on identical terms'. In this third case, individuals are incorporated indirectly, either on an egalitarian or unequal basis, giving a variety of possible ethnic outcomes.

Scholarly concern with ethnicity and ethnic groups has become increasingly salient since the 1960s. Faced with the proliferation of separatist movements throughout the world, and the rise of the so-called 'unmeltable ethnics' in North America, the inadequate assumptions underlying theories of modernization have been exposed in all types of societies, whether they are in the capitalist, socialist or developing world. The notion that modernity would result in a smooth transition from gemeinschaft (community) to gesellschaft (association), with the gradual dissolution of ethnic affiliations, simply did not fit the facts. Some social scientists argued that there was a primordial basis to ethnic attachments, while others explained the apparent persistence of ethnicity in largely instrumental terms, as a political resource to be mobilized in appropriate situations. Not only has ethnic loyalty taken on new meaning in many industrial societies, but also ethnic divisions have continued to frustrate the efforts of nation-building in most post-colonial societies. Even the countries of the Communist bloc could contain the ethnic demands of their multinational, subject populations only by a judicious blend of co-optation and political oppression.

The focus of research on ethnicity has shifted away from studies of specific groups to the broad processes of ethnogenesis, the construction and perpetuation of ethnic boundaries, and the meaning of ethnic identity. The question of the ethnic origin of nations has produced the same tension between those who stress the continuity of ethnic history and others who emphasize its situational nature. While most social scientists recognize the flexibility of ethnic identification - that under certain circumstances ethnicity becomes salient whereas in others it remains a dormant capacity waiting to be mobilized - some take the position that its impact has been greatly exaggerated. It is often merely ‘symbolic ethnicity’, or its influence is largely an illusion based on the 'invention of tradition', to serve the interests of ethnic political entrepreneurs or, in the neo-Marxist literature, the ruling class.

One of the most influential writers on ethnic boundaries has been the anthropologist Fredrik Barth (1969), whose stress on processes of group inclusion and exclusion can be seen as a parallel development to the sociological insights of Max Weber. Weber pointed to the tendency of social groups to attempt to monopolize wealth, prestige and political power by systematically excluding outsiders from achieving membership. Immigration restrictions are one way this can be attempted in modern societies. Another is the manner in which citizenship is defined by the state, so that in the case of Germany, for example, the dominant principle reflects a sense of shared ancestry, jus sanguinis, while in France the critical factor has been residence, jus soli. While some writers have stressed the voluntary nature of ethnic group membership and the variety of ethnic options available to individuals in many post-industrial societies, others point to the coercive element to be found in all forms of ethnic stratification, which can be viewed as more crucial in most situations than any hypothetical elements of preference and choice.

A central concern of social scientists has been the attempt to understand the nature of ethnic conflict and violence. Few issues have been of greater practical importance, as the post-Cold War era has been marked by a resurgence of ethnic warfare and genocide in societies as diverse, and remote from each other, as Bosnia and Rwanda. In other societies, like South Africa, a relatively peaceful transfer of power in the elections of April 1994, from a white minority to the black majority, rests on a volatile sub-structure of ethnic divisions and fragile compromises.

A wide variety of theoretical perspectives can be found supporting contemporary studies of ethnicity and ethnic conflict. Some, like rational choice theory, are methodologically individualistic and apply a cost-benefit formula to account for ethnic preferences and to explain the dynamics of ethnic group formation. These have been criticized on the grounds that they fail to appreciate the collective dynamics of much ethnic behavior and underestimate the irrational side of ethnic violence. Other common perspectives focus on ethnic stratification: neo-Marxist theories stress the economic components underlying much ethnic discrimination, while those following in the tradition of scholars like Weber and Furnivall provide a more pluralistic interpretation of differences in ethnic power. In general, these originate from conquest and migration, and are used to account for the hierarchical ordering of ethnic and racial groups. Further theories point to psychological factors, like prejudice and ethnocentrism, as important explanations for the persistence of ethnicity. Two highly controversial arguments center on genetic imperatives, which operate through the mechanism of kin-selection, and form part of the application of sociobiological thinking to ethnic relations, and neo-conservative theories that concentrate attention on cultural characteristics, which (it is asserted) are disproportionately distributed among certain ethnic groups. Such theories have been vigorously challenged because of their deterministic implications. The heat of the debate reinforces the conclusion that no one theory provides a generally accepted and comprehensive paradigm to explain the complexity of ethnic group formation or the persistence of ethnic conflict in the world today. See also ethnic politics, race.

ethnic politics

3. More than 80 per cent of contemporary states that comprise the United Nations are ethnically plural, in that they contain two or more mobilized ethnic communities. These communities compete, sometimes by civic methods, sometimes by violence, for hegemony (control of the state apparatus), for autonomy (ranging from regional self-government to secession), or for incorporation into the society and polity on more favorable terms. Inter-ethnic relations may vary from stratificational (one group dominating the others politically and economically), to segmentational (each party controlling significant resources and institutions). In the contemporary era, ethnic politics implicate the state, because the state has become the principal allocator of the values that affect the relative power, status, material welfare, and life-chances of ethnic collectivities and their individual constituents. The values at stake may be political - control of territory, citizenship, voting rights, eligibility for public office and the symbols of the state; economic - access to higher education, employment, land, capital, credit and business opportunities; or cultural - the position of religion, the relative status of language in education and in government transactions.

Ethnic politics may be generated by the grievances of territorially concentrated peoples, demanding greater autonomy for their homeland and more equitable representation in the central government; or by immigrant diasporas asking for more equitable terms of inclusion in the polity, combined often with claims for recognition and official support for their distinctive cultural institutions. These initiatives often trigger counter-mobilization in the interest of ethnic groups that feel threatened by these claims and by state authorities committed to the ethnic status quo.

The latter have the principal responsibility for managing or regulating ethnic conflicts. Their strategies may be directed in three ways: first, at maintaining pluralism by coercive domination of subordinated ethnic communities or by consensual processes such as federalism and power sharing; second, at eliminating pluralism by genocide, expulsion or induced assimilation; or third, at reducing the political salience of ethnic solidarity by cultivating crosscutting affiliations, delegitimizing ethnic organizations and ethnic political messages, and emphasizing individual participation in the economy and polity. Ethnic conflicts are seldom settled or resolved; though specific issues may be successfully compromised, the parties remain to focus their grievances and demands on other issues. Thus ethnic politics is a continuing feature of ethnically divided states.

Government policies may contribute to stimulating and rewarding ethnic mobilization, as well as to mitigating ethnic conflict. Complicating inter-ethnic relations is the inevitability of factions within ethnic communities, each competing for available resources, for support within their constituency, and for the right to represent it to outsiders. Factional conflicts within ethnic communities may result in expedient, often tacit, understandings and coalitions with counterparts across hostile ethnic boundaries or representatives of the state.

Many ethnic disputes spill over the borders of individual states, especially where ethnic kinfolk inhabit neighboring states. Domestic ethnic conflicts thus intrude into international relations, prompting intervention by other states, by sympathizers with one of the parties to the dispute, and by international organizations attempting to mediate, restore and maintain order or mitigate the suffering of civilians and refugees. With the termination of the Cold War, violent ethnic conflicts including full-scale civil wars have emerged as a major source of international instability that preoccupies national politicians and attentive publics; they have overwhelmed the diplomatic, financial and operational capacities of the United Nations. Liberals, Marxists and modernizers, despite their differences, have joined in perceiving ethnic solidarity as the residue of earlier stages of human development and in predicting and advocating its early disappearance in favor of more rational forms of association. They continue to treat it as a dangerous and essentially illegitimate phenomenon. 
Others explain the resurgence of politicized ethnicity and thus ethnic politics variously as, first, the search for community in increasingly bureaucratized and impersonal industrialized societies; second, more reliable sources of security and material opportunity than class-based organizations or weak, unrepresentative Third-World governments; third, efficient vehicles for mobilization and representation of individual and collective interests in modern societies; or fourth, the consequence of the disintegration of colonial empires and multi-ethnic states that leaves ethnic collectivities as their residual legatees. These explanations relate to an ongoing dispute between 'primordialists', who argue that collective ethnic identities are deeply rooted historical continuities nurtured by early socialization and reinforced by collective sanctions, and 'instrumentalists', who hold that ethnic identities and solidarities are fluid, pragmatic and opportunistic, often constructed by ethnic entrepreneurs to justify demands for political and especially material advantages. Self-determination is the ideology that legitimizes ethnic activism on behalf of peoples who demand independence or increased territorial autonomy. Multiculturalism justifies demands for institutional separation and self-management where territorial autonomy is not feasible. Demands for non-discriminatory inclusion, which may run parallel to cultural pluralism, are inspired by universalistic liberal principles. State nationalism may either confirm the superordinate position of a dominant ethnic community or claim a higher-order allegiance to the state that amalgamates and supersedes constituent loyalties in an ethnically plural society. See also ethnicity, multicultural education, nationalism.


ethnocentrism

    1. The tendency to refer exclusively to one's own cultural values and practices, even when engaged with others who may not share those values. Likewise, the tendency to describe and judge the systems of value and dominant practices of other cultures from the standpoint of one's own. Such an attitude has connections with the stereotyping of others and can be a feature of racism and prejudice.


ethnography

3. Ethnography is a term that carries several historically situated meanings. In its most general sense, the term refers to a study of the culture that a given group of people more or less share. The term is double-edged and has implications for both the method of study and the result of such study. When used as a method, ethnography typically refers to fieldwork (alternatively, participant observation) conducted by a single investigator who 'lives with and lives like' those who are studied, usually for a year or more. When used as a result, ethnography ordinarily refers to the written representation of a culture. Contemporary students of culture emphasize the latter usage and thus look to define ethnography in terms of its topical, stylistic and rhetorical features.

There are three moments (discernible activity phases) associated with ethnography. The first moment concerns the collection of information or data on a specified culture. The second refers to the construction of an ethnographic report; in particular, the compositional practices used by an ethnographer to fashion a cultural portrait. The third moment of ethnography deals with the reading and reception that an ethnography receives across relevant audience segments both narrow and broad. Each phase raises distinctive issues.

The greatest attention in the social sciences has been directed to the first moment of ethnography -- fieldwork. This form of social research is both a product of and a reaction to the cultural studies of the mid- to late 19th century. Early ethnography is marked by considerable distance between the researcher and the researched. The anthropologists of the day based their cultural representations not on firsthand study but on their readings of documents, reports and letters originating from colonial administrators, members of scientific expeditions, adventurers and, perhaps most importantly, faraway correspondents guided by questions posed by their stay-at-home pen-pals. Not until the early 20th century did ethnographers begin to enter, experience and stay for more than brief periods of time in the strange (to them) social worlds about which they wrote. Bronislaw Malinowski is most often credited with initiating by example a modern form of fieldwork that requires of the ethnographer a sustained intimate and personal acquaintance with 'what the natives say and do.'

There is, however, a good deal of variation in terms of just what activities are involved in fieldwork and, more critically, just how such activities result in a written depiction of culture. Current practices include intensive interviewing, count-and-classify survey work, participation in everyday routines or occasional ceremonies engaged in by those studied, the collecting of samples of native behavior across a range of social situations, and so on. There is now a rather large literature designed to help novice or veteran fieldworkers carry out ethnographic research.

Yet much of the advice offered in fieldwork manuals defies codification and lacks the consensual approval of those who produce ethnographies. Fieldnotes, for example, are more or less de rigueur for documenting what is learned in the field, but there is little agreement as to what a standard fieldnote - much less a collection of fieldnotes - might be. Moreover, how one moves from a period of lengthy in situ study to a written account presumably based on such study is by no means clear. Despite seventy or so years of practice, fieldwork remains a sprawling and quite diverse activity.

The second moment of ethnography - writing it up - has by and large been organized by a genre labeled 'ethnographic realism.' It is a genre that has itself shifted over time from a relatively unreflective, closed and general (holistic) description of native sayings and doings to a more tentative, open and partial interpretation of native sayings and doings. Yet realism remains a governing style for a good deal of ethnography, descriptive or interpretative. It is marked by a number of compositional conventions that include, for example, the suppression of the individual cultural member's perspective in favor of a typified or common-denominator 'native's point of view'; the placement of a culture within a timeless ethnographic present; and a claim for descriptive or interpretive validity based on the author's 'being there' (fieldwork) experience.

Some ethnographers, though by no means all, express a degree of dissatisfaction with ethnographic realism. Partly as a response to critics located outside ethnographic circles who wonder just how personal experience can serve as the basis for a scientific study of culture, some ethnographers make visible - or, more accurately, textualize - their discovery practices and procedures. Confessional ethnography results when the fieldwork process itself becomes the focus in an ethnographic text. Its composition rests on moving the fieldworker to center stage and displaying how the writer comes to know a given culture. While often carefully segregated from an author's realist writings, confessional ethnography often manages to convey a good deal of the same sort of cultural knowledge put forth in conventional realist works, but in a more personalized fashion.

Other genres utilized for ethnographic reporting are available as well. Dramatic ethnographies, for example, rest on the narration of a particular event or sequence of events of apparent significance to the cultural members studied. Such ethnographies present an unfolding story and rely more on literary techniques drawn from fiction than on the plain-speaking, documentary techniques ('the style of non-style') drawn from scientific reports. Critical ethnographies provide another format wherein the represented culture is located within a larger historical, political, economic, social and symbolic context than is said to be recognized by cultural members, thus pushing the writer to move beyond traditional ethnographic frameworks and interests when constructing the text. Even self- or auto-ethnographies have emerged in which the culture of the ethnographer's own group is textualized. Such writings offer the passionate, emotional voice of a positioned and explicitly judgmental fieldworker and thus obliterate the customary distinction between the researcher and the researched.

A good deal of the narrative variety of ethnographic writing is a consequence of the post-1970s spread of ethnography beyond the specialized and relatively insular disciplinary confines of anthropology and, to a lesser degree, sociology. Growing interest in the contemporary idea of culture - as something held by all identifiable groups, organizations and societies - has put ethnography in play virtually everywhere. No longer is ethnography organized simply by geographic region, society or community. Adjectival ethnographies have become common, and sizeable literatures can be found in such areas as medical ethnography, organizational ethnography, conversation ethnography, school ethnography, occupational ethnography, family ethnography and many more. The results of the intellectual and territorial moves of both away and at-home ethnography include a proliferation of styles across domains and an increase in the number of experimental or provisional forms in which ethnography is cast.

The expansion of ethnographic interests, methods and styles is a product of the third moment of ethnography -- the reading of ethnographic texts by particular audiences and the kinds of responses these texts appear to generate. Of particular interest are the categories of readers that an ethnographer recognizes and courts through the topical choices, analytic techniques and compositional practices displayed in a text. Three audience categories stand out. First, collegial readers are those who follow particular ethnographic domains most avidly. They are usually the most careful and critical readers of one another's work and the most familiar with the past and present of ethnography. Second, general social science readers operate outside of ethnographic circles. These are readers attracted to a particular ethnography because the presumed facts (and perhaps the arguments) conveyed in the work help further their own research agendas. Third, there are some who read ethnography for pleasure more than for professional enlightenment. Certain ethnographic works attract a large, unspecialized audience for whom the storytelling and allegorical nature of an ethnography is salient. Such readers look for familiar formats - the traveler's tale, the adventure story, the investigative report and, perhaps most frequently, the popular ethnographic classics of the past - when appraising the writing. Ironically, the ethnographer charged with being a novelist manqué by colleagues and other social scientists is quite likely to be the ethnographer with the largest number of readers.

For each reader segment, particular ethnographic styles are more or less attractive. Collegial readers may take in their stride what those outside the field find inelegant, pinched and abstruse. The growing segmentation across collegial readers suggests that many may be puzzled as to what nominal ethnographic colleagues are up to with their increasingly focused research techniques and refined, seemingly indecipherable, prose styles. This creates something of a dilemma for ethnographers, for it suggests that the distance between the general reader and the ethnographic specialist, as well as the distance between differing segments of the ethnographic specialists themselves, is growing. While ethnography itself is in little or no danger of vanishing, those who read broadly across ethnographic fields may be fewer in number than in generations past. This is a shame, for strictly speaking an unread ethnography is no ethnography at all.


exchange-value

    1. Exchange-value is one of the key concepts in Marxist economics. Marx identifies two forms of value in commodities. Use-value is grounded in the possibility of the object satisfying some identifiable human need or desire. The 'value' of the object, however, lies in the fact that it is a product of human labor. According to Marx's version of the labor theory of value, the value of a commodity depends upon the amount of labor time that has been spent in its production. Marx qualifies this simple observation by noting that the actual labor time expended is not what is relevant (so that the products of a slow, lazy or unskilled worker will not be worth more than those of a fast and efficient worker, simply because the slow worker took longer to produce anything). Rather, Marx refers to 'socially necessary labor time,' which is that required to produce a given amount of a useful commodity 'under the conditions of production normal for a given society and with the average degree of skill and intensity of labor prevalent in that society.' This value is understood as exchange-value when different sorts of commodities (that is, commodities with different use-values) are exchanged. Thus, if it takes 5 hours to produce 10 yards of linen, and 20 hours to produce a coat, then 40 yards of linen are equivalent to (or have the same exchange-value as) one coat. Exchange-value is expressed in (although is not strictly identical to) a monetary price.
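The linen/coat arithmetic can be checked directly. The following is only an illustrative calculation using the figures quoted above; the variable names are inventions for the example:

```python
# Marx's linen/coat illustration: commodities exchange in proportion to the
# socially necessary labor time embodied in them (figures from the text).

HOURS_PER_10_YARDS_LINEN = 5   # 5 hours of labor yield 10 yards of linen
HOURS_PER_COAT = 20            # 20 hours of labor yield 1 coat

# Labor time embodied in one yard of linen.
hours_per_yard = HOURS_PER_10_YARDS_LINEN / 10   # 0.5 hours per yard

# Quantity of linen embodying the same labor time as one coat.
yards_per_coat = HOURS_PER_COAT / hours_per_yard

print(yards_per_coat)  # 40.0 -> 40 yards of linen exchange for one coat
```

The calculation simply equates labor times: 40 yards x 0.5 hours per yard = 20 hours, the labor time of one coat.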


false consciousness

    1. In Marxism, false consciousness occurs when a class fails to recognize the course of political action and allegiances that are in its real interests. Such a class is under the sway of an ideology.

feminism

    1. The core of feminism is the belief that women are subordinated to men in western culture. Feminism seeks to liberate women from this subordination and to reconstruct society in such a way that patriarchy is eliminated and a culture created that is fully inclusive of women's desires and purposes. There are many different kinds of feminist theory, but they all have these goals in common. Where they differ is in their particular visions of what such a reconstructed society would look like and in the strategies they employ to achieve it.

The first well-documented feminist theorist in the Anglo-American tradition is Mary Wollstonecraft, who produced a social theory of the subordination of women in her tract A Vindication of the Rights of Woman in 1792. Wollstonecraft engendered a political activism that has remained at the core of western feminism.

Initially, feminism was primarily concerned with women's political and economic equality with men. It gathered pace in the 19th century with political publications cataloguing the injustice of sexual inequality, for example The Subjection of Women (co-authored by J.S. Mill and Harriet Taylor Mill in 1869), and through activist organization of women's suffrage groups such as the Women's Social and Political Union (WSPU) (founded in 1903). The 20th century saw the proliferation of civil rights movements and groups campaigning for economic equality who focused on the issues of state welfare for mothers, equal education and equal pay. These early feminist issues continue to be a priority for all feminists and are a vital prop for later feminist theory in their emphasis on the importance of economic and political equality as a prerequisite for women's emancipation. They are especially prominent in Liberal Feminism, which has its roots in the civil rights movement and which maintains that equal opportunities and equal rights are the key to full social equality.

Whereas early feminism emphasized political and economic equality with men, the feminism that had its beginnings in the decades after the Second World War aimed to achieve a fuller and more sophisticated understanding of the cultural nature of oppression. To this end 'second wave' feminists look at the ways in which cultural institutions themselves underpin and perpetuate women's subordination. In particular, feminists reject the assumed universality of male values. Instead, they argue, in order to fully emancipate themselves from patriarchy, women must look to their own experience to create their own values and their own identities.

As feminism has developed, different areas of theory have concentrated on different aspects of oppression: Marxist Feminism claims all oppression to be a product of social and economic structures; Radical Feminism locates sexual oppression in the male manipulation of women's sexuality; Psychoanalytic Feminism looks at the construction of women's subjectivity in a sexist culture; Socialist Feminism combines many of these insights in a theory of the systematic oppression and exploitation of women in a patriarchal society, where women's procreative role is co-opted in the service of capitalism.

Moreover, theorists argue that women's oppression is deeply rooted in the very structures of our cultural norms. A particular feature is the existence of binary oppositions predicated on the assumed polarity of the sexes, which work to undermine the feminine in a variety of instances. For example, in politics the distinction between the public (male) and the private (female) serves to exclude women from positions of social importance and authority; in language, Hélène Cixous (The Newly Born Woman, 1987) has argued that gendered binary oppositions are an intrinsic part of grammar and syntax and so affect the possibilities of knowledge; in ethics, Carol Gilligan (In a Different Voice, 1982) has argued that care, traditionally the province of the female, is devalued in opposition to a male idea of justice.

Recently, western feminism has come to the realization that it is itself the product of a particular cultural tradition, that of white Europeans and Americans, rather than a universal expression of women's struggle for emancipation. For black women and women of color the fight for liberation is as much a racial as a gender issue. They criticize the ethnocentricity of the western feminist tradition at the same time as endorsing the common fight against oppression.

Partly as a reaction to the charge of ethnocentricity, so-called ‘third wave’ feminism seeks to overcome the difficulties surrounding the question of what or who exactly ‘woman’ is, and who it is that the feminist movement claims to represent. In common with post-structuralism, third wave feminism abandons the concept of a single collective identity. Instead it offers ideas of ambiguity and difference as a means of understanding the unique issues and interests of each woman. This development is a controversial issue within feminism. Its critics argue that the notion of identity is itself fundamental to the analysis of oppression. Its dissolution undercuts the possibility of resistance and change, thus compromising feminism’s political commitment.

feminist theory

All variants of feminist theory tend to share certain major assumptions: gender is a social construction that oppresses women more than men; patriarchy (the male domination of social institutions) shapes these constructions; and women's experiential knowledge best helps us to envision a future non-sexist society. These shared premises shape a double agenda: the task of critique (attacking gender stereotypes) and the task of construction, sometimes called feminist praxis (constructing new models). Feminist theory focuses particularly on women's experiences of sexuality, work and the family, and so inevitably challenges traditional frameworks of knowledge and puts in question many assumptions of the social sciences, such as universalism.

Although foremothers like Mary Wollstonecraft (1759-97) are often claimed as feminist, the term feminism began to be used only in the 1890s. In the 20th century Virginia Woolf (1882-1941) and Simone de Beauvoir (1908-86) anticipate second wave feminism's attack on women's oppression. In the 1960s student and civil rights movements gave an impetus, shaping the topics and language of current feminist theory. As an identifiable area of the social sciences then, feminist theory dates from the 1970s with the publication of Kate Millett's Sexual Politics.

Feminist theory is, first, intensely interdisciplinary, ranging across customary subject divisions in the social sciences, including history, philosophy, anthropology and the arts, among others. Second, certain themes recur: reproduction, representation and the sexual division of labor. Third, and most striking, are new concepts such as sexism and essentialism, created to address absences in existing knowledge as well as the social discriminations these concepts describe. Fourth, women's subjective experiences are drawn upon to enrich scholarship and scientific theories. The starting-point is often consciousness raising, where the personal can become political. MacKinnon argues that feminist theory is the first theory to emerge from those whose interests it affirms. Androcentric knowledge, feminist psychoanalysts claim, derives from masculine experiences of separation learned in childhood.

Since feminism developed at a time when the participation of women in the workforce was rising fast but discrimination persisted, critics first focused on the sexism of language and of cultural and economic institutions. While intellectual ideas rarely present themselves in neat chronological order, the feminism of the 1970s tackled the causes of women's oppression (capitalism/masculinity), describing society as a structure of oppressors (male) and oppressed (female). This moment is usually divided into forms of feminism (liberal, Marxist/socialist, cultural/radical). Liberal feminism argues that women's liberation will come with equal legal, political and economic rights, and Friedan attacked the 'feminine mystique' preventing women from claiming equality. More comprehensive Marxist/socialist assessments of economic gender exploitation were made by Juliet Mitchell and others. The key questions were: Did women form a distinct sex-class? How far is capitalism structured by patriarchy? By widening the Marxist concept of production to include household labor and childcare, feminists could highlight further sexual divisions (the 'domestic labor' debate) as well as women's unequal status at work (the 'reserve army of labor'). For example, Firestone argued that the 'material' of woman's reproductive body was as much a source of oppression as material inequality.

While dual systems theory argues that both capitalism and patriarchy construct gender, requiring a synthesis of Marxism with radical feminism, MacKinnon suggests that only radical feminism is feminism because it is post-Marxist. In opposition to a Marxist focus on production, cultural and radical feminists focused on reproduction, mothering and creativity. Although the labels cultural or radical are often misapplied, in general radical theorists take the view that sexuality, specifically male violence, is the cause of women's oppression, condoned by the institutionalization of heterosexuality. This is the theme of Rich's milestone essay 'Compulsory Heterosexuality', which builds on de Beauvoir's premise that women are originally homosexual to propose that lesbianism can be part of every woman's cultural, if not physical, experience. This argument that 'lesbian' is shaped as much by ideological preference as by explicit practice built on the notions of 'women-identified women' and 'feminism is the theory, lesbianism is the practice' in second wave feminism.

A major rethinking of symbolic and social structures of gender difference was undertaken by French feminists (écriture féminine). They claimed that the cultural and gendered binaries man/woman and culture/nature always made 'woman' inferior. Binaries ignore women's fluid identity and the semiotic world of mother/infant bonding. American feminists drew on object-relations psychoanalysis to locate the source of male power and fear of women in men's early experience of learning to be 'not the mother'. These accounts of gender identity and objectification greatly enriched feminist film and media study. The notion that there is a distinctive and gendered perception (the male 'gaze') is supported by the feminist standpoint theorists, who challenge false notions of rationality and universalism in the social sciences. The 1980s saw a crucial shift in feminist theory when black feminist writers directed attention to ethnic differences. Criticizing the three-form, or phase, typology (liberal/Marxist/cultural) as a white women's mental map which ignored the experiences of black women, they described discrimination as an interlocking system based on race, class and gender. They also introduced fresh theoretical arguments, suggesting, for example, that the family was not necessarily patriarchal but could be a site of resistance. Black theory derives from Afrocentric history, as well as from a 'both/or' reality (the act of being simultaneously inside and outside society), and has a particular view of mothering experience.

These critiques of white essentialism were paralleled by feminist post-structuralist and post-modern critiques of structured systems of subjectivity. Drawing on ideas from deconstruction and discourse analysis, feminists argued that gender structures are historically variable and not predetermined. This led to what Barrett calls 'the turn to culture' and a renewed interest in cultural symbols. Italian feminists, for example, created the term autocoscienza for the collective construction of new identities. Through cultural study many of these themes were brought together in feminist peace theory, which argues that violence stems from traditional gender socialization. In opposition, pacifists created women-centered symbolic models of environmental action.

Feminist challenges to mainstream social science are diverse and influential, a central claim being that all science is motivated by gendered ideologies whether these are conscious or unconscious. The academic future of feminist theory is now more secure with the growth of women's studies. See also gender and sex, patriarchy.


fertility

Fertility (also referred to as natality) always refers in demographic usage to the achievement of live births. This is in keeping with its Latin etymological derivation from ferre (to bear), but in contrast to the verb fertilize, which relates to conception. In English-language social science, the capacity to bear children is described as fecundity and the fact of giving birth as fertility. This is the reverse of the usage in French and other Romance languages. It also conflicts with much popular, medical and biological usage, where infertility means not childlessness but infecundity or sterility (confusingly, the last can be employed in both senses even by demographers).

Fertility has long been identified with fruitfulness and productiveness, not only in terms of human reproduction but also with regard to the availability of game for hunters and the yield of crops. Indeed, the perceived relationship has played a major role in religion since paleolithic times. The dependence of fertility upon preceding sexual relations has meant that both fertility and coitus play a central role in much of human culture and morality. In some cultures, particularly in the Middle East, the fact of pregnancy or childbirth to a married woman is usually the cause of pleasure, but should she not be married the reaction of her relatives might be so antagonistic as to result in her death and in great problems in securing the marriage of her siblings.

In spite of the biblical advice to be fruitful and multiply, and its mirroring in the adages of many pre-industrial societies, the maximization of fertility is usually constrained by other competing social objectives. Fertility is usually not favored outside marriage, partly because it may interfere with achieving the desired marriage. It may be discouraged soon after the birth of another child, because of the risk to health and life of both mother and children, or by grandmothers, because of the conflict between grandmaternal and maternal duties. Traditionally these constraints have been embedded in religion and mores rather than being expressed solely in terms of conflicting roles and danger to health.

Fertility may be expressed as a measure of the behavior of a society, a couple or an individual. In theory, reproductive measures are just as valid for individual males as for females, but estimates for the former are rarely attempted because the fact of a man's fathering a child is less obvious to the community and may be unknown to the progenitor himself. The most meaningful measure of a woman's reproduction is the number of births she experiences between menarche (or puberty) and menopause. For the whole society, the average for all women is known as completed fertility. However, this measure can be determined only in retrospect, a quarter of a century after the peak in fertility for most women completing their reproductive spans, and societies frequently demand more immediate measures, which are necessarily those for aggregate populations of different ages for a specified period (usually one year, and hence described as an annual rate). The most common aggregate measure is the crude birth rate, or the number of live births per year per thousand population. For national populations, this varied in 1993 from 53 in Malawi to 10 in Germany, Greece, Italy and Spain. The crude birth rate can prove to be an unsatisfactory measure in a society where immigration or other social changes have distorted the distribution of the population by sex or age, and more statistically refined measures relate births only to women of specified age or marital condition. The general fertility rate is the ratio of the births during a year to the total number of women 15-49 years of age. The relating of births to women of a specific age, or age range, for a specified period (usually one year) is termed the age-specific birth rate (or fertility rate), and its sum over the whole reproductive age range is the total fertility rate, which, in a society characterized by constant fertility over several decades, is an annual measure of the same magnitude as completed fertility.
The total fertility rate ranged in 1993 from 7.7 in Malawi to 1.2 in Hong Kong, 1.3 in Italy and Spain and 1.4 in Germany and Macao. In former East Germany it was lower still, while in Asia levels of 1.5 were found in Japan, 1.6 in South Korea, 1.7 in Singapore and 1.9 in China. Attention may be confined to married women so as to determine marital age-specific birth rates and the total marital fertility rate. If only female births are related to mothers of each age, then the cumulative measure is known as the gross reproduction rate. Because for societies the effective measure of reproduction is not live births but surviving children, a measure known as the net reproduction rate has been devised. This may be defined as the ratio of female births in the next generation to those in this generation in conditions of constant fertility and mortality, and hence measures the eventual multiplication of a society's numbers from one generation to the next, once the age structure changes so as to conform with these stable conditions. If the society maintains a rate of unity for a considerable period (half a century or more in societies which were previously growing rapidly) it will become stationary, while a lower rate will imply eventually declining population size, and a higher rate a growing population. In 1980, levels below unity were recorded by thirty-six European countries (the exceptions being Iceland, Ireland and Moldavia), seven East Asian countries (including China), seven Caribbean countries, and also the USA, Canada, Australia and Georgia. However, only Hungary also exhibited a decline in numbers (Bulgaria, Czechoslovakia, Denmark, Greece, Ireland, Italy, Latvia and Romania being stationary), because such rates have been achieved so recently that there are still disproportionately more women in the potentially most fertile ages than would be the case in a population which had exhibited a net reproduction rate at or below unity for many years.
Births within marriage may be described as nuptial or legitimate and those outside as exnuptial or illegitimate.

The female reproductive span varies between women and between societies (or in the same society at different dates), but extends approximately from around 15 years of age to the late forties. If fertility were in no way constrained, not even by the institution of marriage or by the practice of breast-feeding which tends to depress fertility, completed family size would be around 15 (Coale employs 15.3 in his model). The total marital fertility rate of the Hutterites, a religious community in the western USA opposed to deliberate fertility control, was in the late 1920s at least 12.4 - a level employed by Coale in his model - but this figure was almost certainly rising because of the reduction of the period of breast-feeding. Where breast-feeding is of traditional duration (two years or more) the following completed family sizes are found if deliberate control of marital fertility is not practiced. First, where female marriage is early and widow remarriage is common, as among the Ashanti of West Africa (who practice only short periods of postpartum abstinence), around 8. Second, where female marriage is early and widow remarriage is discouraged, as in India prior to the family planning program, around 6.5. Third, where female marriage is late and there are no strong feelings about widow remarriage, as in western Europe before the Industrial Revolution, around 6. The term natural fertility has been employed to describe the level of fertility, and its structure by female age, found in societies which do not deliberately restrict marital fertility (but in which sexual abstinence may be practiced after childbirth, and terminal sexual abstinence after becoming a grandmother).

However, contemporary interest in fertility largely arises from the decline in fertility in all industrialized and many other societies and the possibility of further reduction in developing countries. The latter has been assisted by family planning programs which have now been instituted by a majority of Third-World governments (beginning with India in 1952). The determinants of fertility have been classified as, first, intercourse variables (age at first entrance to sexual union; the proportion of women never entering a union; the period spent after or between unions; voluntary and involuntary abstinence; and frequency of intercourse); second, conception variables (subfecundity or infecundity; contraception; and sterilization); and third, gestation variables (spontaneous or induced abortion). The list does not separately identify the duration of breast-feeding, which was undoubtedly in most traditional societies the major determinant of marital fertility, or sexual activity outside stable unions. Bongaarts has demonstrated that only four factors - the proportion of the female reproductive period spent in a sexual union (in many societies the period of marriage), the duration of postpartum infecundability (that is, the period without menstruation or ovulation plus any period beyond this of postpartum sexual abstinence), the practice of contraception and its effectiveness, and the extent of induced abortion - provide 96 per cent of the explanation of the variance in fertility levels in nearly all societies.
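Bongaarts's four factors are conventionally expressed as multiplicative indices scaling down a maximum "total fecundity." The sketch below assumes the common 15.3 figure for total fecundity (the level Coale employs, mentioned above); the index values are invented for illustration.

```python
# A hedged sketch of Bongaarts's proximate-determinants framework:
# observed total fertility as total fecundity scaled down by four
# indices, each between 0 (full inhibition) and 1 (no inhibition).
# The index values used below are illustrative assumptions.

TOTAL_FECUNDITY = 15.3  # births per woman with no inhibiting factors

def bongaarts_tfr(c_marriage, c_postpartum, c_contraception, c_abortion,
                  total_fecundity=TOTAL_FECUNDITY):
    """TFR = Cm * Ci * Cc * Ca * TF, where Cm reflects time spent in a
    sexual union, Ci postpartum infecundability, Cc contraceptive use
    and effectiveness, and Ca induced abortion."""
    return (c_marriage * c_postpartum * c_contraception * c_abortion
            * total_fecundity)

# E.g. near-universal marriage (Cm=0.85), long breast-feeding (Ci=0.6),
# little contraception (Cc=0.95), no induced abortion (Ca=1.0):
print(round(bongaarts_tfr(0.85, 0.6, 0.95, 1.0), 2))  # ≈ 7.41
```

With all four indices at 1.0 the model returns the unconstrained maximum, which is why the duration of breast-feeding (through Ci) can dominate marital fertility even where no deliberate control is practiced.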

Beginning in France in the late 18th century, and becoming more general in industrialized countries from the late 19th century, fertility has fallen in economically developed countries so that most appear likely to attain zero population growth. This has been achieved largely through the deliberate control of marital fertility, in most countries by contraception (before the 1960s by chemical or mechanical means as well as rhythm, abstinence, and withdrawal or coitus interruptus and subsequently increasingly by the use of the Pill, intrauterine devices and sterilization), supplemented by different levels of abortion. By 1993 fertility was clearly low or declining in every major world region except sub-Saharan Africa (where birth rates had begun to fall in South Africa, Botswana, Zimbabwe and Kenya - too few countries to affect the regional rate). Fertility also remained high and constant in parts of the Middle East and South-west Asia. Increasingly the relationship between the sexual act and conception has been weakened, and this has allowed a weakening in the relation between sexual activity and marriage. See also demographic transition.


    1. In Marxist theory, feudalism is the mode of production (or historical
epoch) that precedes capitalism within western Europe. Feudalism may be characterized by its decentralized structure of authority, and its pattern of land-holding. A feudal lord was linked to a politically subordinate vassal through an oath of fealty. This vassal swore loyalty to the lord, and expressed this loyalty typically through the willingness to provide military services. The vassal would fund this army through large land holdings divided amongst his own subordinates. (This lord-vassal relationship would occur through several levels of the aristocratic hierarchy, with knights at the bottom, in a process called ‘sub-infeudation’). At the base of the feudal economy, serfs were legally tied to work the land owned by their lords. The serf (or peasant) did have some control over the means of production, although without any legal ownership (in contrast to the proletariat in capitalism). Exploitation within feudalism occurred through the payment of rent. Serfs were legally obliged to transfer a portion of their product to the lord, either in kind, in money, or through working on the lord's land. (The Marxist model of feudalism inevitably oversimplifies the actual structure, focusing as it does on the two most significant classes, the aristocracy and the serfs or peasants. In practice, from the twelfth century onwards, significant numbers of serfs were able to buy their freedom, and move to the growing towns. The scope of feudal authority was thus increasingly restricted.) The dominant culture of feudalism, particularly in so far as culture is understood as an ideology that legitimates the existing political order, centered on the role of the church, in offering a morality of obedience and acceptance of one's place in the social order.

forces of production

1. In Marxism, forces of production are the productive capacities available to a society. As such, they include material technology

(such as machines, tools and sources of power), and the physical and

intellectual skills and capacities of the population. Marx suggests that forces of production continue to develop, in terms of their productive capacity, throughout history. Social change occurs through the growing conflict between the developing forces of production and the essentially static economic, political and legal organization of a society (the relations of production). Exploitation of a new technology will therefore require the overthrow of the existing order. (See mode of production).


    1. A significant development in the organization of systems of
industrial production in the 20th century. Fordism, as the name implies, is derived from the name of American factory owner and car manufacturer, Henry Ford. Ford developed a system of production which concentrated all the resources and materials necessary for manufacture on one site - the factory - and which allocated specialized tasks to different workers in a 'production line' in order to ensure the maximum degree of economic efficiency. The products which resulted from this mode of organization were mass produced and necessarily took on a standardized form with a view to their mass consumption.

The term 'post-Fordism' signals a move away from this model of mass production into diversified sites of production. Thus, the large-scale factory is replaced by smaller industrial units. This is evident even in the case of the production of commodities like cars, where it is often the case that the parts of a product are made in a variety of different places, and then assembled elsewhere. Post-Fordism is often associated with the rise of modern technology, and with the replacement of older, heavy-industry forms of production by it. The significance of this transition is a matter of some debate; as is the relationship between the two forms of production – in so far as post-Fordist models still utilize strategies that are common to the earliest Fordist model and rely upon the same basic commodity-based conception of value as exchange value. Aspects of post-Fordism have been related to postmodernism - including the implications of technology for cultural and social life. On David Harvey’s account of postmodernity, the postmodern era signifies precisely this movement away from large-scale centers of production. Thus, post-Fordism/postmodernism may be characterized in terms of a historical development into a global capitalist culture, which uses and coordinates the efforts of localized workforces in order to deal with a more flexible market.

3. Fordism is generally understood as a post-war affair, often synonymous with the long post-war boom from the 1950s to the early 1970s, and centered largely on the US and European economies. Antonio Gramsci is credited with the first use of the term, to convey something distinctive about the cultural values emanating from the American way of life in the 1930s; indeed, it is the broad vision of Fordism as more than just an economic phenomenon which has tended to hold sway. For many, Fordism symbolizes the arrival of a new type of mass worker, a different kind of mass lifestyle, and the onset of an interventionist, welfare state. Put another way, Fordism acts as a metaphor for modern capitalism. The looseness of the term has been reduced, however, by Sayer's (1989) specification of four different senses of Fordism.

The first sense of Fordism takes its cue from the mass production systems pioneered by Henry Ford and his engineers at the Detroit car factory in 1913/14. Fordism essentially refers to a labor process based upon assembly-line techniques, specialized machinery, a fragmentation of tasks, and control by management over the line. The influence of Frederick Taylor's form of scientific management is to be found in the standardization of work tasks and the more detailed division of labor, as well as in the rise of semi-skilled labor. Where Fordism differs from Taylorism, however, is in the use of technology to impose a work discipline upon the labor force, rather than rely upon the reorganization of work.

The second sense of Fordism goes beyond the labor process to include the economic role of the mass production industries within an economy as a whole. In relation to their size, Fordist industries are seen to have a disproportionate effect on economic growth through their ability to generate and transmit growth to other sectors, many of which will include small-scale producers. Growth in the car industry, for example, creates a demand for a whole range of inputs, from electrical components to glass fittings and plastic accessories. Such industries may grow by meeting the demands of the mass producers. Equally, the mass producers may generate disproportionate amounts of growth through the economies that arise from their scale of production. Although Fordist plants have high fixed costs, once in place they can realize increasing returns to scale as output rises. They are thus able to exert a dominant role in an economy.

A third meaning attached to Fordism refers to its hegemonic role in an economy. This again refers to the disproportionate role of the mass production industries, although in this instance it refers to the influence of the Fordist model outside of the mass production sector. So, for example, the extent to which collective bargaining agreements, rate-for-the-job contracts, or the imposition of managerial hierarchies occurs among small batch producers or even in the public services sector would be regarded as an indication of how widespread the Fordist model had become.

A fourth sense of Fordism comes closer to the notion of Fordism as an industrial era. It refers to Fordism as a mode of regulation aimed at sustaining a particular kind of economic growth; namely one that attempts to maintain the balance between mass production and mass consumption. Institutions such as the state are seen as central to the maintenance of such a balance, and, in the case of the long post-war boom the Keynesian interventionist, welfare state is an example of how a mode of regulation may hold together a particular industrial era. Michel Aglietta’s account of Fordism set out in A Theory of Capitalist Regulation: The US Experience (1979) is perhaps the best known regulationist study of the long post-war boom and, indeed, equally well known for drawing attention to the much heralded crisis of Fordism.

The crisis of Fordism, which dates from the early 1970s, is said to mark the end of an industrial era and the movement towards a new era based upon more flexible forms of economic organization and production, together with more diverse patterns of consumption and lifestyles. The terms used to describe this shift are neo- or post-Fordism, depending upon which characteristics of the route out of Fordism are stressed. Neo-Fordism emphasizes the continuity of the labor process under Fordism, whereas post-Fordism stresses a break with all that is Fordist. The hallmark of both, however, is that flexibility is said to represent a solution to the rigidities of an economy organized along Fordist lines.

Such solutions are far from unanimously held, however, as indeed is the notion of Fordism itself. One weakness of the concept of Fordism, noted by post-industrialists in particular, is the inability to see beyond large-scale mass production and to take into consideration the development of the service industries in the post-war period. At root here is the assumption, disputed by post-industrialists, that manufacturing, not services, represents the ‘engine of growth’ within a modern economy. See also Taylorism.


    1. Functionalism was the dominant paradigm within cultural
anthropology and sociology throughout the first half of the 20th century. At its most basic, it attempts to explain any given social or cultural institution in terms of the consequences that particular institution has for the society as a whole. (Functionalism is therefore an alternative to historical accounts of the emergence of institutions or societies.) Functionalist explanations assume that all institutions ideally participate in maintaining the stability of the society, and thus in reproducing the society from one generation to the next. Society, in accord with a frequently used analogy to a biological organism, is assumed to have the property of homeostasis, which is to say, the various parts of the society work to maintain the society as a whole. Thus, for example, the functions of the modern family are those of physically nurturing and socializing the young. The culture (including the morality, or norms and values of the society) is thus transmitted, largely unchanged, from one generation to the next, and the economy is provided with a supply of individuals who are capable of playing useful roles.

The American sociologist Robert K. Merton (1968) proposed the distinction between manifest and latent functions. Latent functions of social institutions are those functions of which the social actors are not conscious. Such functions then go beyond any deliberate intentions that the actors may have in carrying out their own particular activities. Thus, the priests or shamans who officiate at a rain dance may regard themselves as attempting to control the weather. The functionalist sociologist or anthropologist will rather say that the ceremony serves to raise the morale of the group, and thus stabilize and integrate it, perhaps in the face of stresses caused by sustained bad weather.

The most complex version of functionalism was developed largely by Talcott Parsons (1951). He used a systems theory approach borrowed from cybernetics. A system is theorized as maintaining its integrity in relation to an external environment. If a society is treated as a system, then there would be a set of four 'functional pre-requisites' that the social system, like any system, would have to perform in order to maintain integrity and so survive. The first functional pre-requisite that needs to be satisfied is the adaptation to the external environment. This, in effect, is the task of the economy in any society (to make the resources of the external environment available to the society). The second pre-requisite is goal-attainment. Certain institutions in society (such as the political institutions) must be capable of directing the society. Integration, the third pre-requisite, maintains internal order (and so can be seen in the work of the police and education). The final pre-requisite, pattern-maintenance, entails the motivation of the members of the system to perform the functions required of them. This pre-requisite is met by the cultural sub-system. Culture is thus, for Parsons, itself to be understood as a system (and thus it will have the four pre-requisites of any system). In principle, Parsons' analysis of sub-systems within systems can be carried on ad infinitum, or at least down to the individual social agent, who is, him- or herself, also a system.

Functionalism has been criticized for its inability to deal with social conflict and social change. Functionalists tend to assume that society is a largely homogeneous whole, with a substantial consensus over the core norms and values. In terms of its analysis of culture, functionalism gives no scope for a theory of ideology, with the implication that a consensus could be manufactured or contested. There is, in addition, little scope to recognize conflict between sub-groups within the society, either as suggested by the Marxist model of class conflict, or in terms of the conflict theorist’s account of conflict as a sign of a politically vibrant, open society. Deviance from the consensual norm is condemned as ‘dysfunctional,’ which is to say disruptive to the social whole. The conservatism inherent in this account of conflict is also seen in the treatment of social change. Societies are seen to change not through revolutionary convulsions, as suggested by the Marxists, but rather through an ever finer differentiation of social functions (and thus, creation of sub-systems). As societies become more sophisticated, new specialist institutions will arise to fulfill functions previously carried out less satisfactorily elsewhere. Thus, the pre-industrial family was largely responsible for a child’s education. In industrial society, the school emerges as a specialist educational institution.

Functionalism’s greatest fault was perhaps its inability to deal with meaning, and to recognize the capacity of social actors actively to construct a meaningful social world in which they could live and move. For this reason, the first significant challenge to functionalism’s supremacy in the social sciences came from symbolic interactionism. The more sophisticated versions of functionalism, linked to systems theory, have seen a revival in recent years, not least in the work of German social theorist Niklas Luhmann. This version of functionalist theory has also been influential in the work of Jürgen Habermas.



3. The General Agreement on Tariffs and Trade (GATT) is a treaty between states (contracting parties) under which they undertake to adhere to specific rules and principles relating to trade policies and to participate in periodic rounds of multilateral trade negotiations (MTNs) to lower trade barriers. MTN rounds have in fact lowered average tariffs substantially, restricted domestic pressures for protection and contributed significantly to post-war reconstruction, growth and globalization of the world economy. From its creation in 1947, GATT has evolved from a replacement for the stillborn International Trade Organization into a pillar of the international trading system. Under the Uruguay Round agreement (1994) it will be subsumed into the World Trade Organization (WTO) in 1995.

The WTO is to oversee the GATT, as well as two further agreements arising from the Uruguay Round: the General Agreement on Trade in Services (GATS) and the Agreement on Trade-related Intellectual Property Rights (TRIPs). The GATT will continue as a separate body concerned with trade in goods, indeed with its range of influence expanded and basic disciplines reinforced.

The influence of GATT has increased steadily over time, as the number of contracting parties (over 120 by 1994 and set to grow further as more developing and eastern European countries join) and as the policy instruments subject to GATT rules have increased. Agricultural and industrial producer lobbies, consumer associations and political parties are concerned with the outcome of MTNs. As a result, GATT and MTNs are the focus of major political debate, national and international, and of the mass media.

There are a number of possible reasons for GATT's longevity as an institution. There has, for instance, been a significant momentum for trade liberalization during much of the post-Second World War era, particularly until the first oil shock of 1973-4. Memories of the beggar-thy-neighbor policies that were pursued in the 1930s have remained sufficiently vivid. This historical perspective has been reinforced by the experience of the growth of incomes in the major trading nations that accompanied liberal trade policies. Clearly trade liberalization was not the sole factor at work here, but the association between trade liberalization and growth has certainly not been harmful to GATT's status.

The Articles of the GATT unambiguously call for open markets and the use of transparent and non-discriminatory policies so as to foster free and fair competition in international trade. In practice GATT has tended to be a rather more pragmatic and flexible institution. The MTNs have largely ignored at times certain sensitive issues, such as protectionism in agriculture and textiles, or have introduced different treatment for special cases such as the developing countries. Similarly GATT rules have often been broken by contracting countries without GATT trying to enforce those obligations, in order to avoid serious conflict between important members. In large part this has been a reflection of the nature of its relatively small, Geneva-based secretariat, which has had limited power and resources to police the international trading system and has had therefore to serve the wishes of its members, in particular its most powerful members. But GATT has demonstrated a capacity to respond to new challenges and to widen its competence. Initially largely restricted to customs duties, GATT first shifted its attention to other non-tariff measures in the Kennedy Round (1964-7) and now is more concerned with regulating the use of a potentially enormous array of non-border controls (including subsidies, competition policies and anti-dumping procedures) that may be used by national governments to restrict international competition. Following the Uruguay Round and the creation of the WTO, that competence and power to regulate the trading system is likely to increase. See also international trade.


    1. The concept of 'gender' is typically placed in opposition to the
concept of 'sex.’ While our sex (female/male) is a matter of biology, our gender (feminine/masculine) is a matter of culture. Gender may therefore be taken to refer to learned patterns of behavior and action, as opposed to that which is biologically determined. Crucially, biology need not be assumed to determine gender. This is to suggest that, while what makes a person male or female is universal and grounded in laws of nature, the precise ways in which women express their femininity and men express their masculinity will vary from culture to culture. Thus, qualities that are stereotypically attributed to women and men in contemporary western culture (such as greater emotional expression in women; greater tendencies to violence and aggression in men) are seen as gender, which entails that they could be changed. The literature of cultural anthropology gives many examples of different expressions of gender in non-western societies (with the work of Margaret Mead being exemplary in this respect). The reduction of gender to sex (which would be to see gender differences as themselves biologically determined) may be understood as a key move in the ideological justification of patriarchy.

gender and sex

The study of gender has its roots in the anthropology of women and for this reason is often mistakenly taken to be solely about women. Gender studies, however, are concerned with the cultural construction of embodied human beings, women and men. They examine the differences and similarities as experienced and interpreted in various contexts, taking this to mean all relationships whether they involve subjects of the same or different genders. Gender has often implied and/or been contrasted to sex, the biologically defined categories of male and female.

Despite the fact that the biologically determining nature of sexual differences had been questioned by Margaret Mead as early as 1935, gender studies began essentially in the 1960s with the growing awareness of the need to 'write women into' male-biased ethnographies. This women-centered approach, the anthropology of women as it came to be known, did not, as Moore notes, arise from the absence of women in the traditional ethnographies, but was rather due to a growing concern that women and their world-views were not being adequately represented.

In order to redress this imbalance, women's views had to be heard in the ethnographies. It was felt that women researchers, as women, were more capable of approaching and understanding women in other cultures. Underlying this belief was the view, often identified as essentialist, that there exists a universal women's nature: that despite cultural variations, biological sexual differences are stable and pre-social and are reflected in the socially constructed gender categories, men and women.

Many anthropologists felt that this position was untenable. In order to avoid the ghettoization that would occur with an anthropology of women, and equally important in order to surpass past, male-biased theories and biological determinism, they had to develop new theoretical and analytical approaches. They had to shift their interests from the study of women to gender studies, that is, to the study of women and men. In constructionist terms they asked how sexual differences were constituted through social and historic discourse and interaction. The question which remained central to their thesis was why and how women in nearly all societies seemed to experience some form of subordination to men.

One of the first to adopt such an approach was Edwin Ardener. He proposed a theory of ‘muted groups' in which dominance was explained as a group's ability, the group often identified with men, to express a world-view while in turn muting alternative, often women's, models. Other authors noted the western bias for language as a means of expression often controlled by men, as opposed to other non-verbal forms of expression such as bodily gestures, weaving and cooking. Furthermore, anthropologists, both men and women, use male models which have been drawn from their own culture to interpret models present in other cultures.

By pointing out that women anthropologists were not necessarily privileged in their studies of women of other cultures, researchers were able to pursue the study of gender as a cultural and sociological construction. Two theoretical approaches can be discerned in the study of the position of women and of gender. The first, influenced by Engels's distinction between production and reproduction and his analysis of the sexual division of labor, took economic relations as central to its thesis. This approach preceded the anthropology of women but later fed into the study of gender. The second based its analysis on the separation between nature and culture, rooted in the works of Freud and Lévi-Strauss.

Marxist-oriented researchers associated the subordination of women with the domestic/public dichotomy and the sexual division of labor. This analysis aligned the subordination of women with their exclusion from the public sphere of production and their subsequent relegation to reproductive labor within the household. They sought to explain women's position in society on the basis of women's access to the means of production. This view proved to be too narrow and ethnocentric, its theoretical premises rooted in industrialized, class-based societies. For example, some anthropologists noted that in hunter-gatherer societies where there is no sharp distinction between the public and private domains, the sexual division of labor is not based on relations of inequality and asymmetry but on relations of complementarity. Other ethnographic evidence showed how women often took part in both productive and reproductive labor. Although the definition of reproduction was expanded in some instances to include social reproduction, the subordination of women was still often linked to their role in biological reproduction.

By contrast, adopting a structuralist approach and in line with Simone de Beauvoir's position, Sherry Ortner argued that women's universal subordination was a result of their association with nature due to their ability to bear children, whereas men were everywhere associated with the implied superior domain of culture and its production. Michelle Rosaldo, in the same vein, pointed out that women were identified with the domestic domain because of their roles as mothers. She stressed the distinction between the ascribed status of women and the achieved status of men. For Rosaldo, women could overcome their subordinate role only if they moved out of the domestic and into the public, male domain.

These propositions were widely criticized for making too simple a universalization. For example, in MacCormack and Strathern, the contributors showed that the structuralist dichotomy between nature and culture was a western construct, historically constituted. In many societies, it was noted, this dichotomy was differently constructed and in some it was questioned whether it existed at all. Rosaldo, too, later modified her position, noting that the distinction between the domestic and the public could not necessarily be universally applied.

In view of this critique, gender categories and the relations between genders have to be understood in a different manner. Ortner and Whitehead proposed that gender could be understood as a ‘prestige structure' and had to be correlated with other systems of social evaluation. Errington, for example, notes that in island Southeast Asia, the differences between men and women are not highly marked. This may be due to their social invisibility to western-trained researchers. It might also have to do with how Euro-Americans define power and status. Women in these societies have instrumental power and control over practical matters and money. Yet their economic power may be the opposite of the spiritual power which brings the greatest prestige.

In short, the universals that had been proposed, that is, the dichotomy between nature and culture, and its companion, domestic and public, had been questioned and undermined by the ethnographic evidence. Similarly questioned was the universality of such categories as subordination and inequality, since these categories were shown to be context bound. Also brought to light were the multiple experiences of women and men, even within the same society. Women of different race, class and ethnic backgrounds did not necessarily share the experiences of white, middle-class women. This evidence also challenged those theories which viewed the categories of women and men as universally given. Gender categories, it was argued, had to be observed and interpreted within a particular time and place.

Of equal importance was that the sex categories, male and female, were increasingly being viewed as presumed rather than proven by researchers. It was becoming evident that though the distinction between biologically determined sex and culturally determined gender had assisted researchers in examining the relations between men and women and in viewing gender categories as socially and culturally constituted, this dichotomy in the end echoed that of nature and culture. Sex, as biologically given and therefore pre-social and causally prior to gender, was being challenged.

Social historians, prominent among them Michel Foucault, laid bare the historical construction of sex as a western category. Laqueur, for example, notes how recent the two-sex model of Euro-America is. For centuries people held a one-sex model in which women were seen as inverted men. The two-sex model developed not only in consequence of the dominance of the Cartesian model but also due to the growing power of biomedical discourses.

In line with Schneider's critique of kinship, Yanagisako and Collier argued that the two different and exclusive biologically defined categories, male and female, are derived from the Euro-American folk model of heterosexual reproduction - the same model underlying concepts of kinship. Therefore, in order to free ourselves from the blinkering category of sex, Collier and Yanagisako proposed that the study of gender should be disengaged from sex; that cultural construction alone should be studied.

Some reacted to this proposition. Errington, for example, makes the distinction between 'sex', 'Sex' and 'gender'. By 'Sex' she refers to a particular social construct of human bodies. The term 'sex' by contrast refers to the physical nature of human bodies, while 'gender' refers to what different cultures make of sex. Given these distinctions she suggests that Yanagisako and Collier have conflated the meanings of sex and Sex. To disassociate gender from sex, that is from physical bodies, would lead to confusion about what gender is, and would simply reaffirm the distinction between nature and culture and the presumed hierarchical relations between them. Rather, the relation between sex and gender, biology and culture, is interactive, the one not predetermining the other.

Yet if sex as a biological category is itself a product of western history, can it exist independently outside of a social matrix? Moore notes this, taking issue with both Yanagisako and Collier and with Errington. She argues that if the category sex does not exist independently of a social context, we can only really speak of Sex in any given society. As is shown by historical studies as well as by ethnographic research, not all societies have two mutually exclusive sexual categories, but they do have a model of Sex. Given this, the analytic distinction between sex and gender, as Moore and others note, is no longer clear.

This does not mean that we must necessarily do away with the analytical categories of sex and gender; rather we must explore them further, and examine how they define and encompass one another in different contexts and discourses. Anthropologists seem to be moving towards new ways of understanding people's views of themselves and their relations, towards what some would define as an anthropology of identity. See also feminist theory, patriarchy.


genealogy

    1. A method of analysis of forms of ethical (Nietzsche) or
epistemological (Foucault) discourse. Nietzsche, in On the Genealogy of Morals (1887), was the first to outline this approach, and Foucault's work owes much to him. Nietzsche's text argues that the basis of morality and the meaning of value-attributions such as 'good', 'evil' and 'bad' are not derived, as is often supposed to be the case, from either altruistic or utilitarian modes of valuing (nor, it might be added, from any divine sanction). Rather, ethical systems can be understood in terms of their 'genealogy', that is, as being produced by social and historical processes. Above all, morality, for Nietzsche, represents not a disinterested conception of what constitutes the 'good', but is rather an expression of the interests of particular social groups. Thus, the notion of 'good' has, he argues, two modes of derivation, which signify two very different social perspectives and hence systems of valuing. First, the 'good', in its original sense, expressed the viewpoint of the noble classes who inhabited the ancient world. 'Good', taken in this sense, meant 'beloved of God', and was the expression of the nobles' affirmation of their own identity. 'Bad', in turn, expressed a secondary phenomenon, i.e. the nobles' reaction to those who were their social inferiors ('common', 'plebeian', etc.). Noble (or master) morality was thus premised on an affirmation of the identity of the noble as a bestower of values. Second, 'good' in the second sense Nietzsche outlines was a secondary mode of valuing, derived from the appellation 'evil' ascribed by slaves to their oppressors (the nobles). Slave morality, as Nietzsche terms it, therefore derived its notion of 'good' as a secondary consequence of the negative valuation 'evil'. In this way, negation is the 'creative deed' of the slave. Slave morality, Nietzsche argues, is the morality of both the Hebraic tradition and of Christianity, and is a 'ressentiment' morality, i.e.
one whose genealogy is that of the slave's resentment of the nobles'/ master's power over them. It is, in Gilles Deleuze's phrase, a 'reactive' morality, rather than an active or affirmative one.

Nietzsche's genealogical method is in fact a variant on a project outlined in one of his earlier works, Human, All Too Human (1878-80). In the opening sections of that work he argues for the construction of a 'chemistry' of the religious and moral sensations and values. In other words, Nietzsche takes the view that values (and, indeed, feelings/sensations) can be revealingly understood by producing a causal and historical account of them which seeks to unearth their origins. To this extent, the genealogical approach fits in with much of Nietzsche's philosophical thinking, which often expresses the view that what has hitherto been regarded as valuable (or even sacred) can be adequately accounted for within a materialist methodology of explanation. Foucault's genealogical method of investigation, likewise, takes as its point of departure the historical conditions which constitute discourses of knowledge. His analysis of, for example, the clinical definitions and treatments of madness since the 17th century, emphasizes the importance of social relations (above all, relations of power) in the construction of knowledge, and seeks to reveal through painstaking historical analysis the influences and interests which underlie and are concealed by discourses which claim to articulate objective knowledge. A key problem, at least with Foucault's application of the genealogical method, is that in applying it to forms of knowledge he opens himself to the criticism that his own discourse is itself a production of historical factors and an expression of interests (see Peter Dews's criticisms listed in the readings below, which provide a Nietzschean criticism of Foucault's methodology).


globalization

3. The development of the world economy has a long history, dating from at least the 16th century, and is associated with the economic and imperial expansionism of the great powers. By globalization we refer to a more advanced stage of this process of development. The global economy is one in which all aspects of the economy - raw materials, labor, information and transportation, finance, distribution, marketing - are integrated or interdependent on a global scale. Moreover, they are so on an almost instantaneous basis. By global economy 'we mean an economy that works as a unit in real time on a planetary basis'. The forces of globalization thereby tend to erode the integrity and autonomy of national economies.

Newly emerging and consolidating global corporations are the driving force behind these developments. Where multinational corporations in the past operated across a number of national economies, economic globalization now requires corporate interests to treat the world as a single entity, competing in all major markets simultaneously, rather than sequentially. This may involve the marketing of global products or world brands such as Coca-Cola, McDonald's or Kodak. In most cases, however, global competitiveness will require more complex and differentiated strategies. Managing in a borderless world in fact necessitates the segmentation of corporate organization and marketing according to transnational regions, notably those of Europe, North America and the Far East. Some global corporations describe their approach, more precisely, as one of global localization, recognizing the continuing significance of geographical difference and heterogeneity. The globalization of economies is more accurately seen in terms of the emergence of a global-local nexus.

Globalization has been made possible through the establishment of worldwide information and communication networks. New telecommunication and computer networks are overcoming the barriers of time and space, allowing corporate and financial interests to operate on a twenty-four-hour basis across the planet. The inauguration of information superhighways promises to further extend this compression of our spatial and temporal worlds. Global media are also part of this complex pattern of transborder information flows. Using new satellite and cable systems, channels like CNN and MTV have begun to create truly global television markets and audiences (though here too, there is growing realization of the need to be sensitive to local differences). Instantaneous and ubiquitous communication gives substance to Marshall McLuhan's idea, first put forward in the 1960s, that the world is becoming a global village.

As national economic spaces become less functional in the global context, cities and city-regions are assuming a new role as the basing points in the spatial organization of international business. Cities are consequently compelled to attract and accommodate the key functions of the global economy (services, finance, communications, etc.). This results in inter-urban competition across national borders, leading to the formation of a new international urban hierarchy. Cities must aim to become key hubs in the new global networks. Metropolitan centers such as New York, Tokyo and London may be described as truly 'world cities' or 'global cities,’ the command centers in the global economy. Competition among second-level global cities involves the struggle to achieve ascendancy within particular zones of the world. This competition also requires cities to distinguish their assets and endowments through strategies of place marketing and differentiation: in a context of increasing mobility, the particularities of place become a salient factor in the global positioning of cities. As well as attracting global investors and tourists, cities are also the destinations of migrant and refugee populations from across the world. Global cities are also microcosms in which to observe the growing dualism between the world's rich and poor and the encounter of global cultures.

We should consider what globalization means for the world's cultures. Is there a global culture? What might we mean by this? In the case of commercial culture (film and television, popular music, etc.), there are certainly aspirations towards creating a unitary, worldwide market. Global media corporations, such as Time Warner, Sony and News Corporation, are thinking in terms of global products and global audiences. This is possible only with certain kinds of programming, however, and for the most part global media interests operate in terms of transnational media spaces (e.g. the 'Eurovision' region; the 'Asian' region served by Murdoch's Star TV). At the same time, there are contrary tendencies, towards the proliferation of national and also regional (e.g. Basque, Gaelic) media. This may be seen in terms of the (re)assertion of cultural difference and distinction in the face of globalizing tendencies. Again it is the relation between the global and the local that is significant. The globalization of the media should be understood, then, in terms of the construction of a complex new map of transnational, national and subnational cultural spaces.

Cultural globalization – associated with flows of media and communication, but also with flows of migrants, refugees and tourists – has brought to the fore questions of cultural identity. For some, the proliferation of shared or common cultural references across the world evokes cosmopolitan ideals. There is the sense that cultural encounters across frontiers can create new and productive kinds of cultural fusion and hybridity. Where some see cosmopolitan complexities, others perceive (and oppose) cultural homogenization and the erosion of cultural specificity. Globalization is also linked, then, to the revalidation of particularistic cultures and identities. Across the world, there are those who respond to global upheaval by returning to their 'roots', by reclaiming what they see as their ethnic and national homelands, by recovering the certainties of religious tradition and fundamentals. Globalization pulls cultures in different, contradictory and often conflictual ways. It is about the deterritorialization of culture, but it also involves cultural reterritorialization. It is about the increasing mobility of culture, but also about new cultural fixities.

We may see globalization in terms of the new possibilities opened up by global communications, global travel and global products. Or, alternatively, we may consider it from the perspective of those for whom it represents unwelcome destabilization and disorientation. To some extent, this difference may be a matter of who will gain from global change and who will lose or be marginalized. Globalization occurs as a contradictory and uneven process, involving new kinds of polarization (economic, social and cultural) at a range of geographical scales. The encounter and possible confrontation of social and cultural values is an inevitable consequence. We have a global economy and a global culture: we do not, however, have global political institutions that could mediate this encounter and confrontation. See also multinational enterprises, world-system theory.

grand narrative

    1. A term associated with Jean-François Lyotard’s account of
postmodernism. A grand narrative (or meta-narrative) is a narrative form which seeks to provide a definitive account of reality (e.g. the analysis of history as a sequence of developments culminating in a workers' revolution offered by classical Marxism). In terms of Lyotard's later work, meta-narratives (or meta-genres of discourse) are founded on the logical aporia (or 'double-bind') of classes discussed by the analytic philosopher Bertrand Russell: 'either this genre is part of the set of the genres, and what is at stake in it is but one among others, and therefore its answer is not supreme. Or else, it is not part of the set of the genres, and it does not therefore encompass all that is at stake, since it excepts what is at stake in itself' (The Differend: Phrases in Dispute).
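The aporia Lyotard invokes parallels the set-theoretic paradox Russell identified. As a brief illustration (a standard formulation, not drawn from the glossary itself), the paradox of the class of all classes that are not members of themselves runs:

```latex
% Russell's paradox: define the set of all sets that are not
% members of themselves.
\[
  R = \{\, x \mid x \notin x \,\}
\]
% Asking whether R belongs to itself yields a contradiction
% either way -- just as a meta-genre can neither stand inside
% nor wholly outside the set of genres it claims to govern:
\[
  R \in R \iff R \notin R
\]
```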



hegemony

    1. The term 'hegemony' is derived from the Greek hegemon, meaning
leader, guide or ruler. In general usage it refers to the rule or influence of one country over others, and to a principle around which a group of elements is organized. In 20th-century Marxism, it has been developed by the Italian theorist Antonio Gramsci (1891-1937) to explain the control of the dominant class in contemporary capitalism. He argues that the dominant class cannot maintain control simply through the use of violence or force. Due to the rise of trade unions and other pressure groups, the expansion of civil rights (including the right to vote), and higher levels of educational achievement, rule must be based on consent. The intellectuals sympathetic to the ruling class will therefore work to present the ideas and justifications of the class's domination coherently and persuasively. This work will inform the presentation of ideas through such institutions as the mass media, the church, school and family. However, precisely because this hegemonic account of political control entails consent, ideas cannot simply be imposed upon the subordinate classes. On the one hand, the ruling class will have to make concessions to the interests and needs of the subordinate classes. On the other hand, the subordinate classes will not accept hegemony passively. The ideas of the dominant class will have to be negotiated and modified, in order to make them fit the everyday experience of the subordinate classes. (Members of the subordinate classes may therefore have a dual consciousness. They will simultaneously hold contradictory or incompatible beliefs, one set grounded in hegemony, the other in everyday experience.) The theory of hegemony was of central importance to the development of British cultural studies (not least in the work of the Birmingham Centre for Contemporary Cultural Studies). It facilitated analysis of the ways in which subordinate groups actively respond to and resist political and economic domination.
The subordinate groups need not then be seen merely as the passive dupes of the dominant class and its ideology.

2. Hegemony was probably taken directly into English from the word egemonia, Greek, root word egemon, Greek - leader, ruler, often in the sense of a state other than his own. Its sense of a political predominance, usually of one state over another, is not common before the 19th century, but has since persisted and is now fairly common, together with hegemonic, to describe a policy expressing or aimed at political predominance. More recently hegemonism has been used to describe specifically 'great power' or 'superpower' politics intended to dominate others (indeed hegemonism has some currency as an alternative to imperialism).

There was an occasional early use in English to indicate predominance of a more general kind. From 1567 there is 'Aegemonie or Sufferaigntie of things growing upon ye earth', and from 1656 'the Supream or Hegemonick part of the Soul'. Hegemonic, especially, continued in this sense of 'predominant' or of a 'master principle'.

The word has become important in one form of 20th-century Marxism, especially from the work of Gramsci (in whose writings, however, the term is both complicated and variable). In its simplest use it extends the notion of political predominance from relations between states to relations between social classes, as in bourgeois hegemony. But the character of this predominance can be seen in a way which produces an extended sense in many ways similar to earlier English uses of hegemonic. That is to say, it is not limited to matters of direct political control but seeks to describe a more general predominance which includes, as one of its key features, a particular way of seeing the world and human nature and relationships. It is different in this sense from the notion of 'world-view', in that the ways of seeing the world and ourselves and others are not just intellectual but political facts, expressed over a range from institutions to relationships and consciousness. It is also different from ideology in that it is seen to depend for its hold not only on its expression of the interests of a ruling class but also on its acceptance as 'normal reality' or 'commonsense' by those in practice subordinated to it. It thus affects thinking about revolution in that it stresses not only the transfer of political or economic power, but the overthrow of a specific hegemony: that is to say an integral form of class rule which exists not only in political and economic institutions and relationships but also in active forms of experience and consciousness. This can only be done, it is argued, by creating an alternative hegemony - a new predominant practice and consciousness. The idea is then distinct, for example, from the idea that new institutions and relationships will of themselves create new experience and consciousness.
Thus an emphasis on hegemony and the hegemonic has come to include cultural as well as political and economic factors; it is distinct, in this sense, from the alternative idea of an economic base and a political and cultural superstructure, in which, as the base changes, the superstructure is changed, with whatever degree of indirectness or delay. The idea of hegemony, in its wide sense, is then especially important in societies in which electoral politics and public opinion are significant factors, and in which social practice is seen to depend on consent to certain dominant ideas which in fact express the needs of a dominant class. Except in extreme versions of economic determinism, where an economic system or structure rises and falls by its own laws, the struggle for hegemony is seen as a necessary or as the decisive factor in radical change of any kind, including many kinds of change in the base. See culture, imperialism.


historicism

1. A theory which holds that an historical analysis of human beliefs, concepts, moralities and ways of living is the only tenable means of explaining such phenomena. Thus, an historicist rejects the belief that, for example, there are any a-historical necessary truths concerning the construction of human identity (see also essentialism), on the grounds that such concepts are the result of historical processes particular to specific cultures and cultural forms. Historicism therefore extols a cultural relativism. Thinkers associated with the historicist approach include sociologist Karl Mannheim, who (combining an epistemological relativism and a cultural relativism) argued that all knowledge of history is a matter of relations, and that the perspective of the observer cannot be excised from historical analysis. Michel Foucault's work, in turn, argues for the belief that the self is historically constructed, rather than a naturally produced and universal structure common to all times and cultures. This position has led to arguments about the construction of aspects of identity in relation to issues of race and gender.


In the United States, Foucault's work (as well as that of Raymond Williams) has had an influence in initiating New Historicism, which takes as its point of departure a cross-fertilization between theories associated with poststructuralism and Marxism. New Historicists are interested in the social and ideological effects of meaning and its construction. They offer readings of primarily literary texts which, in contrast to the non-historical, text-based approach of traditional criticism, seek to interpret them in the cultural context of their production by way of an historical methodology, and yet spurn the development of grand narratives of history or knowledge. Writers who have adopted this approach include Stephen Greenblatt, who provided a first elaboration of New Historicism in his The Forms of Power and the Power of Forms in the Renaissance (1980).


2. In its earliest uses history was a narrative account of events. The word came into English from the word histoire, French, historia, Latin, from the root word istoria, Greek, which had the early sense of inquiry and a developed sense of the results of inquiry and then an account of knowledge. In all these words the sense has ranged from a story of events to a narrative of past events, but the sense of inquiry has also often been present (cf. Herodotus: '…why they went to war with each other'). In early English use, history and story (the alternative English form derived ultimately from the same root) were both applied to an account either of imaginary events or of events supposed to be true. The use of history for imagined events has persisted, in a diminished form, especially in novels. But from the 15th century history moved towards an account of past real events, and story towards a range which includes less formal accounts of past events and accounts of imagined events. History in the sense of organized knowledge of the past was from the late 15th century a generalized extension from the earlier sense of a specific written account. Historian, historic and historical followed mainly this general sense, although with some persistent uses referring to actual writing.

It can be said that this established general sense of history has lasted into contemporary English as the predominant meaning. But it is necessary to distinguish an important sense of history which is more than, though it includes, organized knowledge of the past. It is not easy either to date or define this, but the source is probably the sense of history as human self-development which is evident from the early 18th century in Vico and in the new kinds of Universal Histories. One way of expressing this new sense is to say that past events are seen not as specific histories but as a continuous and connected process. Various systematizations and interpretations of this continuous and connected process then become history in a new general and eventually abstract sense. Moreover, given the stress on human self-development, history in many of these uses loses its exclusive association with the past and becomes connected not only to the present but also to the future. In German there is a verbal distinction which makes this clearer: Historie refers mainly to the past, while Geschichte (and the associated Geschichtsphilosophie) can refer to a process including past, present and future. History in this controversial modern sense draws on several kinds of intellectual system: notably on the Enlightenment sense of the progress and development of civilization; on the idealist sense, as in Hegel, of world-historical process; and on the political sense, primarily associated with the French Revolution and later with the socialist movement and especially with Marxism, of historical forces - products of the past which are active in the present and which will shape the future in knowable ways. 
There is of course controversy between these varying forms of the sense of process, and between all of them and those who continue to regard history as an account, or a series of accounts, of actual past events, in which no necessary design, or, sometimes alternatively, no necessary implication for the future, can properly be discerned. Historicism, as it has been used in the mid-20th century, has three senses: (i) a relatively neutral definition of a method of study which relies on the facts of the past and traces precedents of current events; (ii) a deliberate emphasis on variable historical conditions and contexts, through which all specific events must be interpreted; (iii) a hostile sense, to attack all forms of interpretation or prediction by 'historical necessity' or the discovery of general 'laws of historical development' (cf. Popper). It is not always easy to distinguish this kind of attack on historicism, which rejects ideas of a necessary or even probable future, from a related attack on the notion of any future (in its specialized sense of a better, a more developed life) which uses the lessons of history, in a quite generalized sense (history as a tale of accidents, unforeseen events, frustration of conscious purposes), as an argument especially against hope. Though it is not always recognized or acknowledged as such, this latter use of history is probably a specific 20th-century form of history as general process, though now used, in contrast with the sense of achievement or promise of the earlier and still active versions, to indicate a general pattern of frustration and defeat.

It is then not easy to say which sense of history is currently dominant. Historian remains precise, in its earlier meaning. Historical relates mainly but not exclusively to this sense of the past, but historic is most often used to include a sense of process or destiny. History itself retains its whole range, and still, in different hands, teaches or shows us most kinds of knowable past and almost every kind of imaginable future.


holism

1. A contextualist theory of truth, meaning and interpretation favored by some philosophers - notably W.V. Quine - and also by many cultural and literary theorists working in the broadly hermeneutic tradition that runs from Schleiermacher to Heidegger and Gadamer. On this view it is impossible to assign meanings or interpret beliefs except in a context wider than that of the individual statement or utterance. Opinions vary as to just how widely this interpretive 'horizon' has to be drawn, or whether - in principle - there is any limit to the range of relevant background knowledge that might be involved. For the most part philosophers in the Anglo-American ('analytic') camp tend to adopt a pragmatic outlook and not worry too much about the demarcation issue, while 'continental' thinkers follow Heidegger in espousing a depth-hermeneutic approach that concerns itself centrally with just this issue.

Thus, for Heidegger, the history of 'western metaphysics' from Plato to Husserl is essentially the history of an error, that which resulted when thinking turned away from truth-as-unconcealment (aletheia) vouchsafed through language, and instead sought to analyze the structure and content of truth through various theories of knowledge and representation. Only by overcoming that fateful legacy - nurturing a receptive openness to language holistically construed - could philosophy be set back upon the path to authentic, primordial truth. Heidegger's interpreters have differed widely in the extent of their willingness to follow him along this path. For his closest disciples, Gadamer among them, it is the way towards a deeper and fuller understanding of the so-called 'hermeneutic circle', that is to say, the ongoing dialogue between past and present wherein interpretation is always guided - or its 'horizon' already marked out - by traditional meanings and values. Hence, the charge of uncritical conservatism leveled against Gadamer by Jürgen Habermas and other dissenting commentators.

This charge has a bearing on our topic here since the holistic turn in philosophy of language and interpretation theory can be seen as lending support to various forms of cultural-relativist argument. For if the truth-value of individual statements is a function of their role within the wider context of statements-held-true at any given time, and if these make sense only when construed against the background horizon of communally sanctioned beliefs, then it follows that statements and beliefs cannot be criticized except on the evaluative terms laid down by some existing cultural consensus. Such is the reading of Heidegger proposed by a number of Anglo-American philosophers in quest of alternative ideas from outside the mainstream analytic tradition. Thus, according to Richard Rorty, we can dump all that portentous depth-ontological talk about 'western metaphysics', truth-as-unconcealment, authentic Dasein, etc., while taking Heidegger's pragmatist point about language as a way of being-in-the-world which requires nothing more in the way of justifying grounds or epistemological back-up. This goes some way towards explaining the recent (on the face of it unlikely) convergence between a certain strain of 'post-analytic' philosophy and a certain, albeit selective, appropriation of Heideggerian themes. What unites them across some otherwise sizeable differences of method and approach is the belief that meaning cannot be accounted for by the kinds of logico-semantic analysis that characterized philosophy of language in the line of descent from Frege and Russell.

Quine's essay 'Two Dogmas of Empiricism' is a classic statement of the case and one that has exerted a strong influence on recent Anglo-American debate. Its argument may be stated very briefly as follows. Philosophers have often assumed that there exists a clear-cut categorical distinction between analytic statements (such as 'all bachelors are unmarried men') whose truth is purely definitional and hence self-evident to reason, and synthetic statements (such as 'water is the substance with molecular structure H2O') which involve some item of acquired knowledge, and whose truth is therefore neither self-evident nor merely tautological. Such was the position maintained by Kant in his Critique of Pure Reason, where he also asserted the existence of a priori synthetic truths, i.e., those - like the principle of causality - that were always necessarily presupposed in every act of empirical judgement, and which thus provided the transcendental ground (or condition of possibility) for all experience and knowledge. The empiricist Hume also drew a distinction between 'truths of reason' and 'matters of fact', one that was taken up and developed by various 20th-century thinkers, among them Bertrand Russell, Rudolf Carnap and the Logical Positivists. Where the two traditions converged - despite all their deep-laid differences of philosophic principle - was on the basic point that individual statements (judgements or propositions) were the units of meaningful discourse, and moreover that these could be analyzed so as to reveal their underlying structure or logico-semantic form.

Such was Russell's celebrated 'Theory of Descriptions', designed to remove certain ambiguities of reference and scope in ordinary (natural) language by providing a clear-cut logical paraphrase in terms of quantifiers, variables, and logical constants. In this respect it paralleled Frege's theory of sense and reference which sought to distinguish genuinely referring expressions from other (e.g. fictive or mythical) names - such as 'Pegasus' or 'Odysseus' - that failed to correspond to any real-world, objective, or historically existent entity. These are paradigm examples of analytic philosophy in so far as they assume (1) that the meaning of a statement is given by its truth-conditions, and (2) that those conditions are definable in terms of its various component parts.

Quine's 'Two Dogmas of Empiricism' was an attack on this entire program of analysis, especially the version of it laid out in Carnap's book The Logical Construction of the World. According to Quine that program ran up against a number of intractable problems. The most basic of these was its failure to justify the presumed distinction between analytic and synthetic statements, or logical truths-of-reason and empirical matters-of-fact. For it could always be shown that any definition of the term 'analytic' had to rely on other terms - like 'synonymous' or 'logically equivalent' - which themselves relied on the notion of analyticity, thus falling prey to the charge of circular argument. In which case there is no possibility of holding a firm, categorical line between logic conceived as the a priori basis of all valid reasoning and those various items of empirical knowledge that are always open to challenge or revision under pressure from recalcitrant evidence. That is to say, we might always be forced to revise some presumptive logical 'law of thought' - such as bivalence or excluded middle - if it came into conflict with the best current theories of physical science. Thus, to take Quine's example: on one interpretation of quantum mechanics it might be deemed necessary to suspend the 'law' of excluded middle so as to accommodate otherwise unthinkable phenomena like quantum superposition or the wave/particle dualism.

It is in this context that Quine offers his famous metaphor of the totality of human knowledge at any given time as a 'man-made fabric' extending all the way from a core region of putative logical ground-rules to a periphery where observation-statements link up with the data of empirical experience. His point is that nothing is immune from revision since we can always save some cherished item of belief or conserve some pragmatically useful theory by making adjustments elsewhere in the fabric. Hence Quine's argument concerning the holistic character of all interpretation - whether in the natural or the social and human sciences - and the lack of any ultimate (non-scheme-relative) criteria for distinguishing factual from theoretical components in our overall scheme of beliefs. For theories are always 'underdetermined' by the best evidence to hand, while observation-statements are always 'theory-laden' in the sense that they involve a wide range of standing ontological commitments, from the 'posits' of our everyday commonsense object-language to quarks, gluons, muons and other such specialized candidate items. According to Quine there is no good reason - pragmatic convenience apart - for supposing that some of these objects enjoy a privileged ontological status (i.e., that they really exist quite apart from our present framework of beliefs) whereas others must be counted theory-dependent or as 'existing' only by virtue of their role in the discourse of advanced theoretical physics. Such distinctions have to drop out if we take his point about ontological relativity and the extent to which all our reality-ascriptions are contingent on this or that preferred way of adjusting the belief-fabric.

Indeed Quine is willing to push this argument to the stage of denying that there is ultimately any difference between macrophysical 'posits' (such as brick houses on Elm Street), subatomic particles, forces, numbers, mathematical sets or classes, centaurs and the gods of Homer. All these entities 'enter our conception only as cultural posits', even if - as Quine readily concedes - 'the myth of physical objects is epistemologically superior to most in that it has proved more efficacious ... as a device for working a manageable structure into the flux of experience'. Thus, any choice between them will always turn on 'vaguely pragmatic inclination' (that which leads us to adjust one or another strand in the fabric) plus an empirically informed estimate of 'the degree to which they expedite our dealings with sense experience'. His own inclination is to go with the current best theories of physical science and admit just that range of posits - from brick houses to certain forces, particles, and whatever is required in the way of more abstract entities such as numbers, classes, etc. – in order to bring theory into line with the best observational data. Thus ‘[f]or my part I do, qua lay physicist, believe in physical objects and not in Homer’s gods; and I consider it a scientific error to believe otherwise.’ However, ‘in point of epistemological footing the physical objects and the gods differ in degree not kind,’ since they are both – along with every other candidate item – imported into various conceptual schemes as a matter of pragmatic convenience or predisposed belief.

It is not hard to see why Quine's argument has struck a sympathetic chord not only among 'post-analytic' philosophers like Rorty but also with theorists in a range of other disciplines such as cultural studies, sociology of knowledge, ethnography, literary criticism and the human sciences at large. It is often invoked by way of support for the cultural-relativist (or social-constructivist) view that truth and reality just are whatever we make of them according to some particular set of linguistic, discursive or social conventions. Thus, Quine turns up in a range of improbable contexts or allied with thinkers whose arguments he would scarcely find congenial, given his own attitude of sturdy confidence in a physicalist (if not a realist) approach to epistemological issues. Among them are Kuhnian philosophers of science who adopt a holistic theory of scientific paradigm-change; Foucauldian archeologists (or genealogists) of knowledge who push this doctrine yet further in a skeptical-relativist direction; Wittgensteinian social theorists who view all truth-claims as relative (or 'internal') to some given language-game or cultural 'form-of-life'; and proponents of a depth-hermeneutical approach who greet Quine's arguments as marking the end of a narrowly analytic or reductionist conception of meaning, knowledge and truth. In one case only – Kuhn's theory of scientific revolutions – can the theory be said to derive directly from Quine's philosophical ideas or to represent a consistent working-out of their further implications for philosophy and history of science. But there is also a plausible link between Quine's argument for a full-fledged contextualist (or meaning-holistic) approach and Foucault's notion that 'truth' is nothing more than a product of historically shifting configurations in the discursively-produced and socially-mediated 'order of things.'

The theory of meaning-holism developed out of a strong reaction against the kinds of logico-semantic approach that took the isolated statement or proposition as their primary object of analysis. In particular it marked a determined break with the philosophy of logical atomism espoused (however briefly) by Russell and carried on in a somewhat different, less overtly reductionist form by logical empiricists like Carnap and Tarski. Thus, according to Quine, 'it is nonsense, and the root of much nonsense, to speak of a linguistic component and a factual component in the truth of any individual statement.' Such was the error of logical empiricism and such the mistake of all those philosophers - from Kant and Hume on down - who thought to distinguish analytic from synthetic judgements, or 'truths of reason' from 'matters of fact'. 'Taken collectively', Quine continues, 'science has its double dependence upon language and experience; but this duality is not significantly traceable into the statements of science taken one by one.' In which case we should give up the fruitless quest for a theory of knowledge (or philosophy of science) premised on the old-style atomist belief that any statement could be verified - or falsified - by adducing this or that item of empirical evidence. Rather, 'any statement can be held true come what may, if we make drastic enough adjustments elsewhere in the system'. And again: 'even a statement very close to the [observational] periphery can be held true in the face of recalcitrant experience by pleading hallucination or by amending certain statements of the kind called logical laws.'

It is this aspect of Quine's thinking - his contextualist and (arguably) cultural-relativist theory of knowledge and truth - that has opened a way to the current rapprochement between certain strains of 'post-analytic' and 'continental' thought. However, it is a distinctly strained alliance and one that takes no account of Quine's frequent protestations of belief in science as our best, most rational source of guidance in epistemological matters. Nevertheless 'Two Dogmas' lays itself open to just such a skeptical-relativist reading through its adoption of a meaning-holistic approach and – following from that – its doctrine of wholesale ontological relativity. At any rate, cultural theorists should be aware that there exist strong arguments against this approach (see for instance Fodor and LePore (1991)) and in favor of a truth-based propositional account of meaning and belief-content. These arguments have been mostly advanced by philosophers in the Anglo-American camp who seek to avoid what they see as the path leading from holistic theories of interpretation to cultural-relativist or strong-sociological modes of thought. The issue is posed with particular force when Wittgensteinian social theorists such as Peter Winch deny that it is possible to criticize beliefs, language-games or cultural 'life-forms' other than our own without presuming to adopt a stance outside and above the communal practices in question, and hence failing to understand them on their own internally self-validating terms.
In which case, as the critics of this doctrine point out, we could never be justified in criticizing any cultural practice – from witch-burning to clitoridectomy, racial segregation or (a good example offered by Mary Midgley) the samurai custom of chopping off the head of the first stranger one meets in order to test one's new sword – since of course these practices are interwoven with a vast range of other customs and beliefs which we denizens of a late 20th-century secular culture just happen not to share.

So there are some large issues behind the debate as to whether certain items of belief can be criticized on factual, logical, ethical or other grounds without bringing in the entire background range of associated meanings and values. In philosophy of science likewise it is hard to see how discoveries or progress could ever come about if indeed there was always the possibility – as Quine argues – of invoking some alternative auxiliary hypothesis in order to save appearances, or redistributing predicates and truth-values over the total fabric of belief so as to achieve a workable trade-off between logic, theory and empirical observation. 'Conservatism figures in such choices,' Quine remarks, 'and so does the quest for simplicity.' Some statements – those nearest the periphery – may seem especially 'germane' to certain experiences, i.e., strongly supported by the evidence and hence most resistant to challenge. However, 'in this relation of "germaneness" I envisage nothing more than a loose association reflecting the relative likelihood, in practice, of our choosing one statement rather than another for revision in light of recalcitrant experience.' And, as we have seen, this thesis of across-the-board revisability extends from the logical 'laws of thought' to statements concerning the existence of physical 'posits' like brick houses on Elm Street. So there is clearly a sense - whatever his own more cautious statements on the matter - in which meaning-holism of the Quinean variety consorts readily enough with other skeptical-relativist doctrines such as those promoted by post-structuralists, postmodernists, disciples of Foucault and strong sociologists of knowledge.

As I have said, resistance to this line of thought has come mainly from philosophers trained up in the Anglo-American analytic tradition. However, there are also continental theorists - notably Paul Ricoeur - who have drawn upon various analytic and other (e.g. Habermasian) theories of meaning and truth in order to criticize certain aspects of hermeneutic thinking in the Heidegger-Gadamer line of descent. Ricoeur is himself much influenced by hermeneutic theory and has devoted the larger part of his work to issues in just that sphere. However, he also acknowledges the implicit conservatism - as well as the methodological quandaries - of any theory or philosophy of interpretation which conceives understanding as always caught within the 'hermeneutic circle' of pre-existent values, meanings and beliefs. What is required in order to break that circle is something more than vague Gadamerian talk of the 'fusion' of interpretive horizons or the interplay of past and present cultural perspectives. In brief, it is the capacity of critical thought to analyze its own and other people's presuppositions or acculturated habits of belief, and to do so (moreover) without claiming some impossible vantage-point above and beyond all belief-attachments or value-commitments.

Of course this goes clean against the holistic thesis that statements have meaning or beliefs possess content only when construed in relation to the entirety of what counts as knowledge at any given time. For in that case - as the Wittgensteinians are fond of observing - we could never criticize any item of belief without bringing an entire belief-system into question and thus, in effect, disqualifying ourselves as competent interpreters or critics. Thus the doctrine of radical meaning-holism very often leads on to an outlook of generalized skepticism with regard to the very possibility of interpreting other people's meanings and beliefs while dissenting from them on this or that matter of factual, logical, or ethical-evaluative judgement. Which is also to suggest - in company with various critics of the doctrine - that holism need not (so to speak) be swallowed whole since we can assign content and truth-conditions to particular statements while acknowledging the extent to which they are informed by a range of background beliefs and presuppositions. As so often with such debates there is a tendency to polarize the issue so that somehow - absurdly - we are offered what amounts to a straight choice between logical atomism (or something very like it) and the idea of meaning and belief-content as unspecifiable except with reference to the entire circumambient culture. It is, to say the least, an unenviable choice and one that bears no resemblance to what actually goes on in our everyday decision-procedures as well as in other, more specialized (e.g., scientific) contexts of inquiry.

Literary critics have mostly got by on some weaker version of meaning-holism, whether as applied to the complex of meanings within some particular text, or to the various kinds of relationship assumed to exist between text and wider historical or cultural context. Thus, formalist critics tend to emphasize immanent structures of metaphor, ambiguity, irony, etc., on the premise that contextualism can be held within well-defined bounds, while New Historicists and Cultural Materialists focus rather on the social dynamics of meaning or the force-field of 'resistance and negotiation' (Stephen Greenblatt) which exceeds all such restrictively work-based ideas of what counts as a relevant context. In this respect they share the holistic approach of a hermeneutic theorist like Gadamer, though reading more often on the look-out for conflicts and instances of ideological tension, and not with a view to some ultimate convergence of interpretive horizons.

Of course it may be said that literary criticism, as generally practiced, is not so much concerned with any truth-claims, propositions, or statements to be found in literary texts, but rather with interpreting their meaning or significance in a broadly contextual and non-assertoric sense. To this extent holism – in one or another version – is the default philosophy of most literary criticism. At any rate it doesn’t entail the kinds of far-reaching anti-realist or skeptical conclusion that result when similar arguments are applied to philosophy of science or other branches of epistemological inquiry. Nevertheless some critics - including William Empson in his book The Structure of Complex Words (1951) - have made a strong case for interpreting literary language in terms of its implicit propositional structures or logico-semantic grammar, rather than some vaguely inclusive rhetoric of paradox, irony, or whatever. Thus, Empson rejects the holistic theory, developed by I.A. Richards, that meanings are somehow 'spread out' over more or less extended passages, and hence that any implied propositional content can only be a matter of associative linkage through a process of gradually emergent contextual definition. On the contrary, Empson argues: what often occurs is the reverse process whereby a whole range of meanings and the various logical entailment relations between them are condensed into a single 'complex word' which is then felt to carry a 'compacted doctrine' and to act as a focal point for interpreting the wider context of argument.

Among these keywords are 'wit' in Pope's Essay on Criticism; 'sense' in a wide range of texts from Shakespeare to Jane Austen and Wordsworth; 'all' in Milton's Paradise Lost; 'honest' in Othello; 'fool' in Erasmus's The Praise of Folly and King Lear; and 'dog' (when addressed to human beings) as a word which runs the whole gamut of meanings from cynical contempt - as in Timon of Athens - to a Restoration usage where it serves to convey a kind of proto-Darwinian admiration for those 'rock-bottom' animal virtues (fidelity, stoicism, a straightforward pleasure in the senses) that mark the emergence of a secular-humanist ethos. In each case Empson applies his logico-semantic 'machinery' - developed at length in the book's early chapters - to draw out the various verbal 'equations' (or structures of implied statement) which enable those words to express such a range of complex, ideologically charged, and often conflictual meanings. It was this aspect of his work that inspired Raymond Williams to write his book Keywords (1976) where the method is extended - minus much of the machinery - to an analysis of various words that Williams sees as having played a crucial role in the shaping of social and cultural attitudes over the past two centuries.

Empson's main theoretical point, as against Richards, is that we simply could not interpret language - let alone explain its power to communicate across large distances of time and cultural milieu - on anything like the full-scale meaning-holistic account. For on this theory, as Richards describes it, there is nothing more to the business of interpretation than a kind of open-ended contextual adjustment very like the process that Quine describes in 'Two Dogmas of Empiricism'. And from here it is but a short step to the conclusion - eagerly embraced by cultural relativists - that meaning and truth are likewise nothing more than products of pragmatic or interpretive convenience. This is why Empson devotes such efforts to developing an alternative, i.e., a truth-based, propositional theory of complex words with application both to literary texts and to (so-called) 'ordinary language'. Even in the case of 'simple flat prosaic' words such as the deceptively unassuming 'quite' there is room for some quite extraordinary subtleties of tone, meaning and social implication. For it is precisely Empson's point - as against the 'old' New Critics and formalists of various persuasions - that poets communicate in much the same way as everyday language-users, albeit very often at a higher level of semantic complexity. What enables them to do so - and readers to interpret their meaning - is this capacity of language for conveying intentions through structures of implied logico-semantic entailment, structures that are context-sensitive but not entirely context-dependent or (in the Quinean-holistic sense) context-relative. At any rate it seems fair to conclude - like Fodor and LePore in their survey of the field - that an adequate case for meaning-holism has not yet been made and that so far the doctrine has produced more problems than constructive or persuasive solutions.

human capital

3. Human capital is the stock of acquired talents, skills and knowledge which may enhance a worker's earning power in the labor market. A distinction is commonly made between general human capital - which is considered as affecting potential earnings in a broad range of jobs and occupations - and specific human capital, which augments people's earning power within the particular firm in which they are employed but is of negligible value elsewhere. An example of the former would be formal education in general skills such as mathematics; an example of the latter would be the acquired knowledge about the workings of, and personal contacts within, a particular firm. In many cases human capital is of an intermediate form, whether it be acquired 'off the job', in the form of schooling or vocational training, or ‘on the job’ in terms of work experience.

In several respects the economic analysis of human capital raises problems similar to those of capital as conventionally understood in terms of firms' plant and equipment. It is likely to be heterogeneous in form; it is accumulated over a substantial period of time using labor and capital already in existence; further investment usually requires immediate sacrifices (in terms of forgone earnings and tuition fees); its quality will be affected by technical progress; the prospective returns to an individual are likely to be fairly uncertain; and the capital stock will be subject to physical deterioration and obsolescence. Nevertheless there are considerable differences. Whereas one can realize the returns on physical or financial capital either by receiving the flow of profits accruing to the owner of the asset or by sale of the asset itself, the returns on human capital can usually be received only by the person in whom the investments have been made (although there are exceptions, such as independent workers), and usually require further effort in the form of labor in order to be realized in cash terms. The stock of human capital cannot be transferred as can the titles to other forms of wealth, although the investments that parents make in their children's schooling and in informal education at home are sometimes taken as analogous to bequests of financial capital.

While the idea of investment in oneself commands wide acceptance in terms of its general principles, many economists are unwilling to accept stronger versions of the theory of earnings determination and the theory of income distribution that have been based on the pioneering work of Becker and Mincer. This analysis generally assumes that labor markets are everywhere sufficiently competitive, the services of different types of human capital sufficiently substitutable and educational opportunities sufficiently open, such that earnings differentials can be unambiguously related to differential acquisition of human capital. On the basis of such assumptions estimates have been made of the returns (in terms of increased potential earnings) to human investment (measured in terms of forgone earnings and other costs) by using the observed earnings of workers in cross-sectional samples and in panel studies over time. The rates of return to such investment have usually been found to be in the range of 10-15 per cent. However, it should be emphasized that such estimates often neglect the impact of other economic and social factors which may affect the dispersion of earnings.

human nature

3. The concept of human nature, central to the study of human social life, can be traced to the ancient Greeks who elaborated the idea of 'nature' underlying western science. After Thales, Anaxagoras and other cosmologists began the quest for universal principles that explain the world, Sophists like Antiphon and Gorgias concluded that such rules of nature were different from – and contradicted by – human-made rules of law or cultural conventions. Socrates and his students challenged such a division between human nature and law or social virtue, claiming that what is 'right' or 'just' is 'according to nature' (Plato, Republic) and that humankind is 'the political animal' (Aristotle, Politics). Ever since, some political and social theorists (e.g. Hobbes, Locke and Rousseau) have viewed human nature as essentially selfish and derived society from the behavior of individuals, whereas others (e.g. Hegel, Marx and Durkheim) have argued that humans are naturally sociable and traced individual traits to society and its history. The former assumption has generally been adopted in such disciplines as behaviorist psychology and classical economics, the latter in sociology, cultural anthropology and history.

Contemporary scientific research has demonstrated the impossibility of reducing this ancient controversy to the simplistic nature vs nurture dichotomy. Some aspects of human behavior seem primarily shaped by individual experience, the present social situation or the cultural environment, while others can be influenced by genetic predisposition, prenatal events or critical periods in childhood development. Many individual variations in physiology and behavior are influenced by hormones, neurotransmitters or innate structures in the brain, but such biological response systems depend in turn on individual development or experience.

Due to this interaction of genetic, developmental and social factors, human nature is complex and highly adaptable. Individuals of our species are thus by nature both cooperative and competitive, both selfish and altruistic. As a result, ‘mankind viewed over many generations shares a single human nature within which relatively minor hereditary influences recycle through ever-changing patterns between sexes and across families and entire populations.’ An evolutionary perspective clarifies the age-old debates concerning human nature by distinguishing relatively invariant and universal aspects of human behavior from sources of variability that are, at least in part, under biological control. Not only does human nature entail the development of linguistic and cultural abilities that vary from one social environment to another, but also many differences among humans need to be understood as natural.

Common traits shared with other species

Among those attributes shared by every human being, some can be traced to our earliest vertebrate ancestors and are generally found among sexually reproducing animals: the overall bodily structure of vertebrates and basic drives, including those for food, sex, security and - for social species - predictable status. Like mammals, humans are warm-blooded, develop social bonds, and express emotions in ways that serve as social signals. Like primates, we are a highly intelligent species adopting varied social patterns and individual behavioral strategies in response to food supplies, physical environments, and individual or group experiences. This evolutionary history is reflected in the 'triune' structure of the human central nervous system, in which the brain stem controls the basic drives common to all vertebrates, the limbic system modulates the mammalian emotions, and the enlarged primate neo-cortex permits extraordinary learning and behavioral plasticity.

Each of these levels of evolution has behavioral consequences for all members of our species. Like most vertebrates, humans exhibit what ethologists call fixed action patterns, including social displays and the consummatory behaviors satisfying needs of nutrition and reproduction. Like most mammals, human females give birth after gestation, breastfeed and care for neonates (unless the mother-infant bond has been disturbed) and hence usually invest more than males in the reproductive process. Like most primates, humans recognize other members of the group individually and use a repertoire of non-verbal displays, including facial expressions, to modulate social interactions. Also common to all humans are traits unique to our species. Most obvious are speech (the complex of linguistic abilities, utilizing grammar and syntax to produce a truly 'open' means of communication and information processing), complex productive technologies (including domestication of other species, elaborate manufacture of tools or weapons, irrigated agriculture, and industrial machinery) and of course cultural systems that use symbolic and linguistic skills to elaborate religious, political and artistic achievements unknown to other animals. Although chimpanzees exhibit many aspects of cultural variability, human nature cannot be reduced to its evolutionary roots.

Variable traits shared by all humans

The amazing diversity of our species' social and cultural behaviors is itself a characteristic of human nature. Some elements of this variability are themselves influenced by biological factors. Personality differences can be traced to heritable temperaments which vary along multiple dimensions such as shyness and sociability, risk-taking and harm-avoidance, or novelty-seeking and predictability. Individual variations in mate choice and sexual behavior may be partly due to genetics, partly due to prenatal hormonal exposure, and partly due to individual experience or social setting. Although the evidence on IQ is highly contested, differences in such specific abilities as fine or gross motor co-ordination, musical or artistic skills, and excellence in mathematics, while requiring training for full expression, seem to some degree heritable. Some have also found evidence of genetic susceptibility to cancer, mental disease, learning disabilities, alcoholism and crime, though each of these categories can apparently be produced by different combinations of inherited, developmental and environmental factors.

The exploration of a heritable component in gender differences has been particularly controversial, in part because both critics and proponents often ignore the way that patterns of variation overlap. For example, although the personality dimensions described above are normally distributed, males are on average more predisposed to risk-taking than females. As a result, a given personality type may be widely observed among both males and females even though there is a significant gender difference in the overall distribution of the trait. Often, such statistical patterns can be traced to hormonal differences during neonatal development.

Some behavioral traits reflect physiological responses to specific situations of evolutionary significance. In humans as in many primates, for instance, dominant males have elevated levels of the neurotransmitter serotonin which depend on the sight of submissive behavior by subordinates; after social status has changed, there are corresponding modifications in neurotransmitter levels. It seems likely that such natural mechanisms of plastic behavioral response will be greatly elucidated by research in the late 1990s.

Cultural variation and the environment

At the level of entire societies, there seem to be natural relationships between cultural practices and the natural or social environment comparable to those studied in other species by behavioral ethologists. Among hunter-scavengers or hunter-gatherers like the Kalahari San, egalitarian status patterns with informal group leadership can be viewed as an adaptive response to the environment, whereas hypergamous patterns of social stratification and gender inequality are characteristic responses to an environment of chaotic resource flows and interspecific or intraspecific predation.

Although many social scientists assume that such variations have little to do with human nature, many facultative traits depend on a specific environment for their expression. As a result, human nature is in many respects a variable rather than a constant. Because the environment plays such an important role in shaping the expression of natural potentialities, for example, it is no longer possible to assert that a specific social institution like monogamous marriage is always and everywhere more 'natural' than alternatives, such as polygamy, homosexuality or (in some situations) celibacy. References to 'human nature' in the singular, therefore, need to be understood as describing a central tendency that is often subject to shaping or variation, depending on time and place. While there are indeed broad universals, such as the sense of justice expressed when moralistic aggression is directed at violations of group norms, these general patterns can rarely be used to decide social conflicts in complex societies undergoing rapid change. While future scientific research will add further detail to this picture, contemporary evidence confirms the view that human nature is complex and changeable rather than a fixed essence that could be deduced from an eternal and immutable natural law. See also human rights, nature, sociobiology.

human rights

3. Human rights are rights which all persons hold by virtue of the human condition. They are thus not dependent upon grant or permission of the state and they cannot be withdrawn by fiat of the state. While laws under different national legal systems may vary, the human rights to which each person is entitled are rights in international law. For example, the human right to a fair trial is the same for a person who lives under a legal system of common law, civil law or Roman law. States have the obligation to ensure that their discrete legal systems reflect and protect the international human rights which those within their jurisdiction hold.

Are human rights universal?

There has been a long-running debate on whether human rights are universal or whether they are necessarily the product of particular cultures and societies. The suggestion that human rights represent western values, and are imposed upon others, is more a product of liberal democratic sensitivity than a reflection of the views of non-western states or their populations. The very wide acceptance of the International Covenants on Human Rights of 1966 (themselves based on the unanimously adopted Universal Declaration of Human Rights) appeared to have answered this issue, in favor of the perceived universality of the rights. Over 140 states are parties to the International Covenant on Civil and Political Rights. These include the former socialist countries of eastern Europe as well as many developing countries. Egypt, Tunisia, Iraq and Iran are among the Islamic countries that have freely chosen to ratify this instrument. Large numbers of socialist, non-Christian and developing states have accepted that their citizens are as entitled as those residing in western countries to fundamental freedoms and human rights. Human rights constitute the common language of humanity. If individuals choose to identify with a particular culture, which may restrict the rights to which they would otherwise be entitled by international law, that is their prerogative. But that identification with a culture or religion may not be imposed by a state against the wish of an individual. This is not to insist upon 'western' human rights, but rather to insist that human rights are for people and not for states.

From the entry into force of the Covenants until the early 1990s there was an incremental growth in the concept of the universality of human rights. In 1989, shortly after the fall of the Berlin Wall, it was decided to convene a World Conference on Human Rights. The conference took place in Vienna in 1993, and the preparations for it, and the conference meetings themselves, were used by some states that had never undertaken the obligations of the Covenants to try to persuade others that human rights represented a western cultural imperialism. This was coupled with proposals for new regional human rights treaties that would be more reflective of cultural particularity. These efforts did not in fact prevail. Article 5 of the Vienna Declaration, in the formulation of which all UN members participated and which was adopted by consensus, proclaims that

while the significance of national and regional particularities and various historical, cultural and regional backgrounds must be borne in mind, it is the duty of states, regardless of their political, economic and cultural systems, to promote and protect all human rights and fundamental freedoms.

Article 57 makes clear that the role of regional arrangements is not to detract from universal standards, but to reinforce them.

The content of human rights

Human rights do not consist only of civil and political rights. There also exist economic, social and cultural rights, notably those reflected in the International Covenant on Economic, Social and Cultural Rights. Western countries have been skeptical about whether the requirements contained in that instrument (for example, the right to housing, the right to education) should properly be described as rights, or whether they are mere aspirations. It has also been suggested that if a stated obligation is not justiciable in the courts, it is not a legal right. Developing countries have been anxious about their ability to deliver these rights in the short term. However, the International Covenant on Economic, Social and Cultural Rights is now ratified by countries from all parts of the world. The work of its monitoring Committee has done much to address these concerns. It has made clear, for example, that while the full attainment of an economic right may not be immediate, there is an immediate obligation to take designated and agreed steps to that end. Economic, social and cultural rights entail immediate obligations of ‘best efforts,’ coupled with obligations of result. All of these factors are to be taken into account in determining whether, at any given moment, a specific country is or is not in violation of its obligations regarding such rights. Certain aspects of this category of rights may be justiciable, for example, if housing is being provided in a discriminatory manner. But the absence of justiciability in any event reflects not an absence of entitlement but a need for diverse mechanisms for guaranteeing such entitlements.

Individual and group rights

The beneficiaries of human rights, as reflected in the major international instruments, are individuals. This is true even of minority rights, which are articulated as the right of individuals to pursue their culture, or speak their language, or engage in worship, with others from their group. The sole exception arises in relation to the right to self-determination, which stands in a separate part of each of the Covenants, and is a right of ‘all peoples.’ From the western perspective, the emphasis on the individual as the beneficiary of rights is a necessary antithesis to the power of the state, and also to the power of groups that serve the purposes of the state. There is, however, now an interest in exploring again whether some rights do not properly inhere in groups. The cataclysmic events in the former Yugoslavia and in eastern Europe in the early 1990s have led to the perception that minority rights may need to be more broadly fashioned than is possible so long as they remain the rights of individuals. The question of group rights has also become relevant in the context of new ‘third and fourth generation’ rights now being proposed, such as the right to a clean environment, the right to sustainable development, the rights of indigenous people and others. With regard to these ‘new generation’ rights, there is still considerable debate as to their status as human rights, not only because of the novelty of groups or peoples as the beneficiary, but also because of the uncertainty of the content of the right or the obligations imposed thereby and on whom.

The sources and institutions of human rights law

General international law is the source of some human rights, but they are most clearly set out in a remarkable system of international treaties, all developed since the mid-1960s. The two International Covenants on Human Rights (1966) cover between them all the major civil and political, and economic, social and cultural rights. They are open to all states. Certain of those rights have been made the subject of single-topic treaties, which specify the right concerned in more detail and provide for further procedural guarantees. These, too, are open to all states. These UN treaties have monitoring bodies, which receive reports, examine the state parties, and, in certain cases, sit as quasi-judicial tribunals in respect of individual claims. The Committee under the Covenant on Civil and Political Rights in particular has developed a significant jurisprudence. At the regional level, too, there are treaties that cover the generality of human rights and treaties that address single topics. These are open to the states of the region, or the regional institutions. The American Convention on Human Rights, which has its own Commission and Court, is an important instrument for the Americas. The Commission does much important work, much of it in loco. In the last fifteen years the Court has begun to develop its jurisprudence. All members of the Council of Europe adhere to the European Convention on Human Rights. Those newly seeking admission to the Council of Europe must also be prepared to ratify the European Convention and accept the right of their citizens to bring cases against them. The European Commission of Human Rights and the European Court of Human Rights have since 1950 developed the most detailed and important jurisprudence on the rights. They also deal with some inter-state cases.
The jurisprudence of the European Court of Human Rights is relied on in the courts of those states that have made the European Convention part of their own domestic law, whether by incorporation or otherwise. Even in those few countries that have not – for example, the United Kingdom – the decisions of the European Court are binding and an adverse finding may require alterations to legislation or to administrative practices. In 1994 a Protocol was signed which envisages important alterations to the institutions of the European Convention, most notably replacing the Commission and Court with a new permanent Court.

Limitations on rights

The balance between the rights of individuals and the legitimate concerns of the state, which has to take into account the general good, is met through the device of permitted limitations. Very few human rights are absolute. The prohibition against torture is such a right. Most rights may be qualified, in a particular case, if certain conditions are met. A law prescribing the limitation must pre-exist its use and be accessible and known. A restriction upon the right must be shown to be necessary. There are usually further conditions to be met, for example, that a restriction be for reasons of public order, public health or state security. In times of national emergency states are permitted to derogate from human rights - that is to say, to suspend their obligations to guarantee these rights for the duration of the emergency. Again, certain rights may not be derogated from, whatever the circumstances. For example, no emergency justifies torture, nor can it remove a person's freedom of thought, conscience or religion. See also citizenship, human nature.


humanism

    1. A word with a variety of meanings. Usually, a viewpoint which advocates the supreme value of human beings: ‘man the measure of all things.’ During the period of Renaissance Europe, those who studied the classics (i.e. Ancient Greek and Roman texts) were deemed humanists. They espoused an optimism about human possibilities and achievements. During the 20th century, being a humanist commonly implies an attitude antithetical to religious beliefs and institutions.

In the post-war period debates have been waged between academics over the term humanism in a variety of contexts (e.g. politics, ethics, philosophy of language). In this context, a humanist has come to signify (amongst other things) someone who advocates a view of human nature which stresses the autonomy of human agency with regard to such matters as moral or political choice, or one who adheres to the view that human subjectivity is the source of meaning in language-use. A humanist, on this view, is someone who presupposes that there are essential properties (e.g. autonomy, freedom, intentionality, the ability to use language for the purpose of producing meaningful propositions, rationality) which define what it is to be human. Such a conception of subjectivity has been criticized by way of an invocation of theories of meaning derived from structuralism and post-structuralism. Following on from such thinkers as Nietzsche, writers within these schools have argued that the production of meaning, and therefore subjectivity, is a matter of relations of discourses of power (Foucault) or processes of semantic slippage within language (Derrida) rather than a matter of an extra-linguistic subject who exists ‘outside’ the domain of language and subsequently ‘uses’ language to express their intentions. Such views have been taken up by advocates of postmodernism, who have claimed, for example, that the politics that purportedly accompanies humanism is susceptible to being undermined by these forms of analysis. Such a view depends upon whether or not one is inclined to accept the claim that the advocacy of a particular ontology of the subject commits one to a particular kind of politics. Certainly, many facets of liberalism are not so easily swept away by advocating anti-humanism.
For example, the anti-humanism implicit in Jean-François Lyotard’s conventionalist account of language in The Differend: Phrases in Dispute does not circumvent certain key principles of liberal thought as elaborated by J.S. Mill in On Liberty, but might rather be said to be compatible with them.

Other thinkers who adopt an anti-humanist attitude include Heidegger (whose conception of dasein should not be confused with ‘humanist’ accounts of subjectivity; indeed, Heidegger explicitly rejected the humanism of Jean-Paul Sartre’s existentialism in his ‘Letter on Humanism’ (1947)); and Louis Althusser, whose ‘structural Marxism’ opposed Marx’s contention that humans were the authors of their own destiny with the view that social relations are instrumental in the construction of identity, belief systems and forms of consciousness.



identity

    1. The issue of identity is central to cultural studies, in so far as cultural studies examines the contexts within which and through which both individuals and groups construct, negotiate and defend their identity or self-understanding. Cultural studies draws heavily on those approaches to the problem of identity that question what may be called orthodox accounts of identity. Orthodoxy assumes that the self is something autonomous (being stable and independent of all external influences). Cultural studies draws on those approaches that hold that identity is a response to something external and different from it (an other).

    2. In orthodox European philosophy, at least from Descartes' writings in the 17th century, it has been assumed that the self (ego or subject) exists as an autonomous source of meaning and agency. Descartes himself found that the only thing that he could not doubt was that he existed, and that this existence took the form of a 'thinking substance.’ This notion of the autonomous subject, sure of its own identity and continuing throughout the individual human being's life, was dominant not just in philosophy, but also in political thought (not least as a grounding assumption of liberalism) and psychology. The idea was questioned, however, not least by the Scottish philosopher David Hume, in the 18th century. Hume observed that the contents of his consciousness included images (or sense-impressions) of everything of which he was thinking (either directly perceiving, or recalling in memory). There was, though, no image of the self that was supposedly doing this perceiving and remembering. Hume therefore proffered what was commonly known as the 'bundle theory' of the self, such that the self is nothing more than a bundle of sense impressions, which continually changes as the individual has new experiences or recalls old ones.

      In the late 19th century, Emile Durkheim posed a fundamental challenge to liberal individualism. The liberal presupposed the primacy of the individual, and thus that society was composed out of individuals (brought together, for example, in a social contract). In contrast, Durkheim argued that the individual was a product of society (not that society was a product of individuals). His point was that a modern understanding of individuality (and thus, self-understanding of humans in modern society) was a product of that particular culture. In pre-industrial societies, with little or no economic specialization (or division of labor), all members of that society would be similar in attitudes, values and norms. Such societies were held together purely because of this homogeneity. In contrast, in industrial society, with its high degree of specialization, individualism occurs because people live distinctive lives with distinctive experiences. Their values and attitudes can then diverge. Durkheim therefore argues that individual identity is not primary, but is a product of economic organization.

      George Herbert Mead’s analysis of self poses an alternative set of problems for the idea of the autonomous ego. For Mead, the self is constructed through its relations with others. Mead distinguishes the ‘I’ from the ‘me,’ arguing that: ‘The "I" is the response of the organism to the attitudes of others which one himself assumes.’ The ego thus collapses into little more than an animal response. The self, and thus self-consciousness, rests rather upon the internalization of the viewpoint of others. The ‘I’ becomes self-conscious only in so far as it can imagine how it is seen by others, and responds accordingly. The development of the self therefore depends upon the others it encounters. This line of thought is fundamental to the symbolic interactionist approach in sociology. In the work of Erving Goffman (1959) it is taken further. Goffman suggests that the self is a product of particular interactions, in so far as the individual’s capacities, attitudes and ways of behaving (and possibly, of conceiving of him- or herself) change as the people around him or her change. Alone, a person is either not self-conscious, and as such does not have, at that moment, a self, or is self-conscious, in so far as he or she is aware of how he or she would appear to some more or less specific other. The self therefore has no stability, being almost as fluid as the self proposed by Hume.

      Psychoanalysis opens up a further series of questions against the orthodox view of identity. For Freud, identity rests on the child’s assimilation of external persons. The self is structured through the relationship of the ego, id and super-ego. While the id is the instinctive substrate of the self, and the super-ego, crucially, is the constraining moral consciousness that is internalized in the process of psychological development, the ego may be understood either as the combination of the id and super-ego, or as an agency separate from these two. The latter interpretation is, in the current context, possibly the more interesting, for it suggests that the ego is never self-identical. Erik Erikson's psychodynamic theory develops upon this. Identity for Erikson is a process between the identity of the individual and the identity of the communal culture. It was Erikson who coined the phrase 'identity crisis' in the 1940s. At first, the term referred to a person who had lost a sense of 'personal sameness and historical continuity.’ As such, the individual is separated from the culture that can give coherence to his or her sense of self. Later, it came to characterize youth, as a stage in the psychological development of any individual.

      In Lacan's reinterpretation of Freud, the problematic identity of the self, or subject, is explored further. For Lacan, self-consciousness emerges only at the mirror stage (at approximately six to eighteen months). Here the infant recognizes its reflection as a reflection of itself. It therefore comes to know itself, not directly, but through the mirror image. The self emerges as the promise of control in the face of the fragmentation that occurs as the child is separated from the mother. However, as for Freud, the male child's identity depends upon that of the mother (allowing, in English at least, a pun on (m)other). The child enters language through the imposition of the law by the father, with the 'no' that prohibits incest with the mother. The child desires the mother in order to regain a primal unity. This is a desire to disobey the father's prohibition, and yet it must be repressed. Thus, Lacan can argue, the unconscious is structured like language. In effect, this is to argue that the self (or more properly the subject) is positioned by language, which is to say that it is positioned as always repressing its own lack of unity. Althusser's structuralist version of Marxism offers a parallel account of the subject, albeit now as a product of ideology. Social institutions such as the church, education, police, family and mass media 'interpellate' or hail the subject, again positioning him or her within society.

      The work of Foucault may also be interpreted through the centrality of the question of identity. Thus, in his early work on madness (1971), he analyses how madness is conceived differently in different ages (comparing, for example, the Renaissance view of madness as its own form of reason, with the rationalist 17th century's exclusion of the insane from society). Madness is thus socially constructed and specific, and historically variable social practices exist to constrain it. Yet, crucially for the 17th and 18th centuries, madness is also the other, in comparison to which the sane and rational define themselves. The identity of the dominant group in society therefore depends upon its construction of its own other. In Foucault's later writings, he turns to the problem of the construction of the 'self' (especially in relation to sexuality) through its positioning within discourses (1981). From this, the self may be theorized in terms of the conceptual and other intellectual resources that it calls upon in order to write or talk about itself, and in the way in which it is written about, or written to. The way in which a text is composed will anticipate, and thus situate, a certain self as reader.

      Structuralist and post-structuralist questioning of the nature of self-identity, as found in the work of Lacan, Althusser and Foucault, may also be linked to an identity politics. The recognition that identity is not merely constructed, but depends upon some other, opens up the theoretical space for marginal or oppressed groups to challenge and re-negotiate the identities that have been forced upon them in the process of domination. Ethnic identities, gay and lesbian identities and female identities are thus brought into a process of political change. (See also self.)

ideology

    1. It can plausibly be suggested that a theory of ideology is fundamental to any critical social or cultural science. However, the exact meaning of the term is often elusive or confused. Its most common use may be simply to refer to a more or less coherent set of beliefs (such as political ideology, meaning the beliefs, values and basic principles of a political party or faction). ‘Ideology’ is used in this sense in some branches of political science. In Marxism and the sociology of knowledge, however, it has taken on much more subtle meanings, in order to analyze the way in which knowledge and beliefs are determined by the societies in which they emerge and are held.

    2. The term was coined at the end of the 18th century, by the French philosopher Destutt de Tracy, to refer to a science (logos) of ideas. Such a science would be based in analysis of human perception, conceived itself as a sub-discipline of biology, and the idéologues sought to reform educational practice on the basis of it. (This origin is more important than it may initially seem, for it presents the argument that ideas depend on some non-ideational substrate. For de Tracy, this is biology; for social science it will be the material, economic and political practices and structures of society.) Napoleon’s ridiculing of the idéologues led to ‘ideology’ becoming a pejorative term.

      It is with Marx that ideology becomes an important critical concept. Marx’s approach to ideology may be introduced through the famous observation that, for any society, the ideas of the ruling class are the ruling ideas. This is to suggest that our understanding and knowledge of the world (and especially, if not exclusively, of the social world) is determined by political interests. There are certain beliefs, and certain ways of seeing the world, that will be in the interests of the dominant class (but not in the interests of the subordinate classes). For example, it was in the interests of the dominant class in feudalism to believe in the divine right of kings. The authority of the king and the aristocracy is given by God, and is thus beyond question. It is in the interests of the bourgeoisie (the owners and controllers of industry) in capitalism to see the social world as highly individualistic and competitive. What for Marx is the genuinely social and collective nature of human life (not least in class membership) is thereby concealed, and the possibilities of effective proletarian resistance to capitalism are minimized. The dominant class is able to propagate its ideas throughout society due to its control of various forms of communication and education (such as the mass media, church and schools).

      While ideology, in the Marxist sense, is a distorted way of viewing the world, it is not strictly false (and so ideology is not simply a synonym for false consciousness). Marx’s observation that religion is the opium of the masses expresses this more complex idea. On one level, religion does distort the subordinate classes’ understanding of the social world, not least in its promise of a reward in heaven, for the injustices suffered in this world. Yet, the metaphorical reference to opium is important, not just because opium dulls our experience of pain, but also because opium induces dreams. Heaven is therefore an idea to be taken seriously (although not literally), for it does contain an image of justice – but one that should be realized in this world, not the hereafter. In this sense, ideology is an illusory solution to a real problem. The task of the critic of ideology is therefore to recognize this – to recognize the way in which ideology inverts our understanding of real problems – and thereby identify and tackle the real problem.

      The Marxist theory of ideology presupposes that ideology is a distortion. It may therefore be set against true knowledge. In the sociology of knowledge, not least in its development by the German sociologist Karl Mannheim (1960), ideology loses its links to class and to domination, and so challenges this notion of truth. Mannheim retains the link that Marx establishes between ideas and the material base of society, but in order to argue that people from different sections of society will understand the world in different ways. The difference between the bourgeois understanding of the world and that of the proletariat is not then the difference between the views of a dominant and reactionary class and a subordinated, progressive class, but simply the difference between two, equally valid, worldviews. For Mannheim, there is then no single truth against which all ideologies can be judged. Each ideology will have its own standards of truth and accuracy, dependent upon the social circumstances within which it is produced.

      The Marxist account of ideology can be seen to have undergone two important revisions in the 20th century. First, the development of the theory of hegemony, by the Italian theorist Gramsci, tackled the problem that the theory of ideology appeared to suggest that ideas could be passively imposed upon the subordinate classes. The theory of hegemony suggests, rather, that ideologies are actually negotiated in the face of contradictory evidence and life experiences. The second revision stems from the work of the French structuralist, Althusser. Althusser overturned the emphasis in the theory of ideology on ideas. Ideology need not be about what people think, but rather about how they act - 'lived relations'. Ideological practices, which are taken-for-granted, constitute the human subject and his or her identity within capitalism, thus allowing him or her to function.

2. Ideology first appeared in English in 1796, as a direct translation of the new French word idéologie which had been proposed in that year by the rationalist philosopher Destutt de Tracy. Taylor (1796): 'Tracy read a paper and proposed to call the philosophy of mind, ideology.’ Taylor (1797): '. . . ideology, or the science of ideas, in order to distinguish it from the ancient metaphysics.’ In this scientific sense, ideology was used in epistemology and linguistic theory until the late 19th century.

A different sense, initiating the main modern meaning, was popularized by Napoleon Bonaparte. In an attack on the proponents of democracy - 'who misled the people by elevating them to a sovereignty which they were incapable of exercising' - he attacked the principles of the Enlightenment as 'ideology.’

It is to the doctrine of the ideologues - to this diffuse metaphysics, which in a contrived manner seeks to find the primary causes and on this foundation would erect the legislation of peoples, instead of adapting the laws to a knowledge of the human heart and of the lessons of history - to which one must attribute all the misfortunes which have befallen our beautiful France.
This use reverberated throughout the 19th century. It is still very common in conservative criticism of any social policy which is in part or in whole derived from social theory in a conscious way. It is especially used of democratic or socialist policies, and indeed, following Napoleon's use, ideologist was often in the 19th century generally equivalent to revolutionary. But ideology and ideologist and ideological also acquired, by process of broadening from Napoleon's attack, a sense of abstract, impractical or fanatical theory. It is interesting in view of the later history of the word to read Scott (Napoleon, vi, 251): 'ideology, by which nickname the French ruler used to distinguish every species of theory, which, resting in no respect upon the basis of self-interest, could, he thought, prevail with none save hot-brained boys and crazed enthusiasts' (1827). Carlyle, aware of this use, tried to counter it: 'does the British reader ... call this unpleasant doctrine of ours ideology?' (Chartism, vi, 148; 1839).

There is then some direct continuity between the pejorative sense of ideology, as it had been used in the early 19th century by conservative thinkers, and the pejorative sense popularized by Marx and Engels in The German Ideology (1845-7) and subsequently. Scott had distinguished ideology as theory 'resting in no respect upon the basis of self-interest,' though Napoleon's alternative had actually been the (suitably vague) 'knowledge of the human heart and of the lessons of history'. Marx and Engels, in their critique of the thought of their radical German contemporaries, concentrated on its abstraction from the real processes of history. Ideas, as they said specifically of the ruling ideas of an epoch, 'are nothing more than the ideal expression of the dominant material relationships, the dominant material relationships grasped as ideas.' Failure to realize this produced ideology: an upside-down version of reality.

If in all ideology men and their circumstances appear upside down as in a camera obscura, this phenomenon arises just as much from their historical life process as the inversion of objects on the retina does from their physical life process. (German Ideology, 47)

Or as Engels put it later:

  Every ideology . . . once it has arisen develops in connection with the given concept-material, and develops this material further; otherwise it would cease to be ideology, that is, occupation with thoughts as with independent entities, developing independently and subject only to their own laws. That the material life conditions of the persons inside whose heads this thought process goes on in the last resort determine the course of this process remains of necessity unknown to these persons, for otherwise there would be an end to all ideology. (Feuerbach, 65-6)

Or again:

  Ideology is a process accomplished by the so-called thinker, consciously indeed but with a false consciousness. The real motives impelling him remain unknown to him, otherwise it would not be an ideological process at all. Hence he imagines false or apparent motives. Because it is a process of thought he derives both its form and its content from pure thought, either his own or his predecessors'. (Letter to Mehring, 1893)
Ideology is then abstract and false thought, in a sense directly related to the original conservative use but with the alternative - knowledge of real material conditions and relationships - differently stated. Marx and Engels then used this idea critically. The 'thinkers' of a ruling class were 'its active conceptive ideologists, who make the perfecting of the illusion of the class about itself their chief source of livelihood' (German Ideology, 65). Or again: 'the official representatives of French democracy were steeped in republican ideology to such an extent that it was only some weeks later that they began to have an inkling of the significance of the June fighting' (Class Struggles in France, 1850). This sense of ideology as illusion, false consciousness, unreality, upside-down reality, is predominant in their work. Engels believed that the 'higher ideologies' - philosophy and religion - were more removed from material interests than the direct ideologies of politics and law, but the connection, though complicated, was still decisive (Feuerbach, 277). They were 'realms of ideology which soar still higher in the air ... various false conceptions of nature, of man's own being, of spirits, magic forces, etc.' (Letter to Schmidt, 1890). This sense has persisted.

Yet there is another, apparently more neutral sense of ideology in some parts of Marx's writing, notably in the well-known passage in the Contribution to the Critique of Political Economy (1859):

The distinction should always be made between the material transformation of the economic conditions of production ... and the legal, political, religious, aesthetic or philosophic - in short, ideological - forms in which men become conscious of this conflict and fight it out.

This is clearly related to part of the earlier sense: the ideological forms are expressions of (changes in) economic conditions of production. But they are seen here as the forms in which men become conscious of the conflict arising from conditions and changes of condition in economic production. This sense is very difficult to reconcile with the sense of ideology as mere illusion. In fact, in the last century, this sense of ideology as the set of ideas which arise from a given set of material interests or, more broadly, from a definite class or group, has been at least as widely used as the sense of ideology as illusion. Moreover, each sense has been used, at times very confusingly, within the Marxist tradition. There is clearly no sense of illusion or false consciousness in a passage such as this from Lenin:

  Socialism, insofar as it is the ideology of struggle of the proletarian class, undergoes the general conditions of birth, development and consolidation of an ideology, that is to say it is founded on all the material of human knowledge, it presupposes a high level of science, demands scientific work, etc. ... In the class struggle of the proletariat which develops spontaneously, as an elemental force, on the basis of capitalist relations, socialism is introduced by the ideologists. (Letter to the Federation of the North)

Thus there is now 'proletarian ideology' or 'bourgeois ideology', and so on, and ideology in each case is the system of ideas appropriate to that class. One ideology can be claimed as correct and progressive as against another ideology.
It is of course possible to add that the other ideology, representing the class enemy, is, while a true expression of their interests, false to any general human interest, and something of the earlier sense of illusion or false consciousness can then be loosely associated with what is primarily a description of the class character of certain ideas. But this relatively neutral sense of ideology, which usually needs to be qualified by an adjective describing the class or social group which it represents or serves, has in fact become common in many kinds of argument. At the same time, within Marxism but also elsewhere, there has been a standard distinction between ideology and science, in order to retain the sense of illusory or merely abstract thought. This develops the distinction suggested by Engels, in which ideology would end when men realized their real life-conditions and therefore their real motives, after which their consciousness would become genuinely scientific because they would then be in contact with reality (cf. Suvin). This attempted distinction between Marxism as science and other social thought as ideology has of course been controversial, not least among Marxists. In a very much broader area of the 'social sciences', comparable distinctions - between ideology (speculative systems) and science (demonstrated facts) - are commonplace.

Meanwhile, in popular argument, ideology is still mainly used in the sense given by Napoleon. Sensible people rely on experience, or have a philosophy; silly people rely on ideology. In this sense ideology, now as in Napoleon, is mainly a term of abuse.


imperialism

2. Imperialism developed as a word during the second half of the 19th century. Imperialist is much older, from the early 17th century, but until the late 19th century it meant the adherent of an emperor or of an imperial form of government. Imperial itself, in the same older sense, was in English from the 14th century; from the word imperialis, Latin, root word imperium, Latin - command or supreme power.

Imperialism, and imperialist in its modern sense, developed primarily in English, especially after 1870. Its meaning was always in some dispute, as different justifications and glosses were given to a system of organized colonial trade and organized colonial rule. The argument within England was sharply altered by the evident emergence of rival imperialisms. There were arguments for and against the military control of colonies to keep them within a single economic, usually protectionist system. There was also a sustained political campaign to equate imperialism with modern civilization and a 'civilizing mission'.

Imperialism acquired a new specific connotation in the early 20th century, in the work of a number of writers - Kautsky, Bauer, Hobson, Hilferding, Lenin - who in varying ways related the phenomenon of modern imperialism to a particular stage of development of capitalist economy. There is an immense continuing literature on this subject. Its main effect on the use of the word has been an evident uncertainty, and at times ambiguity, between emphases on a political system and on an economic system. If imperialism, as normally defined in late 19th century England, is primarily a political system in which colonies are governed from an imperial center, for economic but also for other reasons held to be important, then the subsequent grant of independence or self-government to these colonies can be described, as indeed it widely has been, as 'the end of imperialism'. On the other hand, if imperialism is understood primarily as an economic system of external investment and the penetration and control of markets and sources of raw materials, political changes in the status of colonies or former colonies will not greatly affect description of the continuing economic system as imperialist. In current political argument the ambiguity is often confusing. This is especially the case with 'American imperialism,' where the primarily political reference is less relevant, especially if it carries the 19th century sense of direct government from an imperial center, but where the primarily economic reference, with implications of consequent indirect or manipulated political and military control, is still exact. Neo-imperialism and especially neo-colonialism have been widely used, from the mid-20th century, to describe this latter type of imperialism.
At the same time, a variation of the older sense has been revived in counter-descriptions of 'Soviet imperialism', and, in the Chinese version, 'social imperialism', to describe either the political or the economic nature of the relations of the USSR with its 'satellites' (cf. 'the Soviet Empire'). Thus the same powerful word, now used almost universally in a negative sense, is employed to indicate radically different and consciously opposed political and economic systems. But as in the case of democracy, which is used in a positive sense to describe, from particular positions, radically different and consciously opposed political systems, imperialism, like any word which refers to fundamental social and political conflicts, cannot be reduced, semantically, to a single proper meaning. Its important historical and contemporary variations of meaning point to real processes which have to be studied in their own terms. See hegemony.


individual

    1. A person or a self. Taken in the sense of something which cannot be subject to any further division, an individual is often contrasted with a group. The view that individual selves are (i) irreducible, (ii) endowed with the ability to use their rationality according to their own dispositions and desires, and (iii) ought to be free civic agents, is associated with individualism. This conceives of the individual as a free agent in the marketplace and advocates a view of political and social liberty on these terms. It is often linked to the influence of the writings of Adam Smith (for example, in the UK in the 1980s to the impact of his ideas on Margaret Thatcher, who advocated a free-market individualism).

    2. Individual originally meant indivisible. That now sounds like a paradox. 'Individual' stresses a distinction from others; 'indivisible' a necessary connection. The development of the modern meaning from the original meaning is a record in language of an extraordinary social and political history.

    The immediate forerunner, individualis, Latin, is derived from individuus, Latin, 6th century, a negative (in-) adjective from the root word dividere, Latin - divide. Individuus was used to translate atomos, Greek - not cuttable, not divisible. Boethius, 6th century, defined the meanings of individuus:

Something can be called individual in various ways: that is called individual which cannot be divided at all, such as unity or spirit (i); that which cannot be divided because of its hardness, such as steel, is called individual (ii); something is called individual, the specific designation of which is not applicable to anything of the same kind, such as Socrates (iii). (In Porphyrium commentarium liber secundus).

Individualis and individual can be found in the sense of essential indivisibility in medieval theological argument, especially in relation to the argument about the unity of the Trinity (the alternate form, indivisible, was also then used). Thus: 'to the ... glorie of the hye and indyvyduall Trynyte' (1425). Sense (i) continued in more general use into the 17th century: 'Individuall, not to bee parted, as man and wife' (1623); '. . . would divide the individuall Catholicke Church into severall Republicks' (Milton, 1641). Sense (ii), in physics, was generally taken over by atom, from the 17th century. It is sense (iii), indicating a single distinguishable person, which has, from the early 17th century, the most complicated history.

The transition is best marked by uses of the phrase 'in the individuall' as opposed to 'in the general'. Many of these early uses can be read back in a modern sense, for the word is still complex. Thus: 'as touching the Manners of learned men, it is a thing personal and individual' (Bacon, Advancement of Learning, I, iii; 1605). In the adjective the first developing sense is 'idiosyncratic' or 'singular': 'a man should be something that men are not, and individuall in somewhat beside his proper nature' (Browne, 1646). The sense is often, as here, pejorative. The word was used in the same kind of protest that Donne made against the new 'singularity' or 'individualism':

 For every man alone thinks he hath got
To be a Phoenix, and that then can be
None of that kind of which he is but he.
                (First Anniversarie, 1611)
In this form of thought, the ground of human nature is common; the 'individual' is often a vain or eccentric departure from this. But in some arguments the contrast between 'in the general' and 'in the individual' led to the crucial emergence of the new noun. It was almost there in Jackson (1641): 'Peace ... is the very supporter of Individualls, Families, Churches, Commonwealths', though 'individualls' is here still a class. It was perhaps not till Locke (Human Understanding, III, vi; 1690) that the modern social sense emerged, but even then still as an adjective: 'our Idea of any individual Man'.

The decisive development of the singular noun was indeed not in social or political thought but in two special fields: logic, and, from the 18th century, biology. Thus: 'an individual ... in Logick ... signifies that which cannot be divided into more of the same name or nature' (Phillips, 1658). This formal classification was set out in Chambers (1727-41): 'the usual division in logic is made into genera ... those genera into species, and those species into individuals'. The same formal classification was then available to the new biology. Until the 18th century individual was rarely used without explicit relation to the group of which it was, so to say, the ultimate indivisible division. This is so even in what reads like a modern use in Dryden:

That individuals die, his will ordains;
The propagated species still remains.     (Fables Ancient and Modern, 1700)
It is not until the late 18th century that a crucial shift in attitudes can be clearly seen in uses of the word: 'among the savage nations of hunters and fishers, every individual ... is ... employed in useful labour' (Adam Smith, Wealth of Nations, i, Introd., 1776). In the course of the 19th century, alike in biology and in political thought, there was a remarkable efflorescence of the word. In evolutionary biology there was Darwin's recognition (Origin of Species, 1859) that 'no one supposes that all the individuals of the same species are cast in the same actual mould'. Increasingly the phrase 'an individual' - a single example of a group - was joined and overtaken by 'the individual': a fundamental order of being.
The emergence of notions of individuality, in the modern sense, can be related to the break-up of the medieval social, economic and religious order. In the general movement against feudalism there was a new stress on a man's personal existence over and above his place or function in a rigid hierarchical society. There was a related stress, in Protestantism, on a man's direct and individual relation to God, as opposed to this relation mediated by the Church. But it was not until the late 17th century and the 18th century that a new mode of analysis, in logic and mathematics, postulated the individual as the substantial entity (cf. Leibniz's 'monads'), from which other categories and especially collective categories were derived. The political thought of the Enlightenment mainly followed this model. Argument began from individuals, who had an initial and primary existence, and laws and forms of society were derived from them: by submission, as in Hobbes; by contract or consent, or by the new version of natural law, in liberal thought. In classical economics, trade was described in a model which postulated separate individuals who decided, at some starting point, to enter into economic or commercial relations. In utilitarian ethics, separate individuals calculated the consequences of this or that action which they might undertake. Liberal thought based on 'the individual' as starting point was criticized from conservative positions - 'the individual is foolish ... the species is wise' (Burke) - but also, in the 19th century, from socialist positions, as most thoroughly in Marx, who attacked the opposition of the abstract categories 'individual' and 'society' and argued that the individual is a social creation, born into relationships and determined by them.

The modern sense of individual is then a result of the development of a certain phase of scientific thought and of a phase of political and economic thought. But already from early 19th century a distinction began to be made within this. It can be summed up in the development of two derived words: individuality and individualism. The latter corresponds to the main movement of liberal political and economic thought. But there is a distinction indicated by Simmel: 'the individualism of uniqueness - Einzigheit - as against that of singleness - Einzelheit'. 'Singleness' - abstract individualism - is based, Simmel argued, on the quantitative thought, centered in mathematics and physics, of the 18th century. 'Uniqueness', by contrast, is a qualitative category, and is a concept of the Romantic movement. It is also a concept of evolutionary biology, in which the species is stressed and the individual related to it, but with the recognition of uniqueness within a kind. Many arguments about 'the individual' now confuse the distinct senses to which individualism and individuality point. Individuality has the longer history, and comes out of the complex of meanings in which individual developed, stressing both a unique person and his (indivisible) membership of a group. Individualism is a 19th century coinage: 'a novel expression, to which a novel idea has given birth' (tr. Tocqueville, 1835): a theory not only of abstract individuals but of the primacy of individual states and interests. See society.


industry

2. There are two main senses of industry: (i) the human quality of sustained application or effort; (ii) an institution or set of institutions for production or trade. The two senses are neatly divided by their modern adjectives industrious and industrial.

Industry has been in English since the 15th century, from the word industrie, French, root word industria, Latin - diligence. Elyot wrote in 1531: 'industrie hath nat ben so longe tyme used in the englisshe tonge as Providence; wherfore it is the more straunge, and requireth the more plaine exposition,' and he went on to define it as quick perception, fresh invention and speedy counsel. Yet there were uses, contemporary with this, in contrast to sloth and dullness; as a synonym for diligence; and, in a specialized use, as a working method or device. Industrious, meaning either skilful or assiduous, was the common derived adjective from the mid-16th century, but there was also a 16th century appearance of industrial in a distinction between cultivated (industriall) and natural fruits. Industrial is then rare or absent until the late 18th century, when it began the development which made it common by the mid-19th century, perhaps in a new borrowing from French.

It was from the 18th century that the sense of industry as an institution or set of institutions began to come through. There was mention of a 'College of Industry for all useful Trades and Husbandry' in 1696, and of subsequent 'schools of industry' associated with Sunday Schools. But the most widespread 18th century use was in 'House of Industry,' the workhouse, where the ideas of forced application and useful work came together. Then, in Adam Smith, there was a modern generalizing use: 'funds destined for the maintenance of industry' (Wealth of Nations, II, iii; 1776). By the 1840s, at latest, this use was common: Disraeli - 'our national industries' (1844); Carlyle - 'Leaders of Industry' (1843). Industry as a human quality rather than an institution, while continuing to be used, was on the whole subordinate after this period, and survives mainly in different kinds of patronizing reference.

The sense of industry as an institution was radically affected, from the period of its main early uses, by two further derivations: industrialism, introduced by Carlyle in the 1830s to indicate a new order of society based on organized mechanical production, and the phrase industrial revolution, which is now so central a term. Industrial revolution is especially difficult to trace. It is usually recorded as first used by Arnold Toynbee, in lectures given in 1881. But there were much earlier uses in French and German. Bezanson (1922) traced several French associations of révolution and industrielle between 1806 and the 1830s, but analysis of these depends on understanding the ways in which both revolution and industrial were shifting, in both English and French. Most of the early uses referred to technical changes in production - a common later meaning of industrial revolution itself - and this was still the primary sense as late as 'Grande Révolution Industrielle' (1827). The key transition, in the developed sense of revolution as instituting a new order of society, was in the 1830s, notably in Lamartine: 'le 1789 du commerce et de l'industrie,' which he described as the real revolution. Wade (History of the Middle and Working Classes, 1833) wrote in similar terms of 'this extraordinary revolution.' This sense of a major social change, amounting to a new order of life, was contemporary with Carlyle's related sense of industrialism, and was a definition dependent on a distinguishable body of thinking, in English as well as in French, from the 1790s. The idea of a new social order based on major industrial change was clear in Southey and Owen, between 1811 and 1818, and was implicit as early as Blake in the early 1790s and Wordsworth at the turn of the century.
In the 1840s, in both English and French ('a complete industrial revolution', Mill, Principles of Political Economy, III, xvii; 1848 - revised to 'a sort of industrial revolution'; 'l'ère des révolutions industrielles,' Guilbert, 1847) the phrase became more common. But the decisive uses were probably by Blanqui (Histoire de l'économie politique, II, 38; 1837): 'la fin du dix-huitième siècle ... Watt et Arkwright ... la révolution industrielle se mit en possession de l'Angleterre'; and by Engels (Condition of the Working Class in England; written in German, 1845): 'these inventions ... gave the impulse to an industrial revolution, a revolution which at the same time changed the whole of civil society.' Though the phrase was not in common use in English until the late 19th century, the idea was common from the mid-19th century and was clearly forming in the early 19th century. It is interesting that it has survived in two distinct (though overlapping) senses: of the series of technical inventions (from which we can speak of Second or Third Industrial Revolutions); and of a wider but also more historically specific social change - the institution of industrialism or industrial capitalism. (It must be noted also that the relations between industrialism and capitalism are problematic, and that this is sometimes masked by the terms. In one use, industrialism is euphemistic for capitalism, but problems of 'socialist' industrialization have elements in common with the industrial capitalist history.)

From the early 19th century, association with organized mechanical production, and the series of mechanical inventions, gave industry a primary reference to productive institutions of that type, and distinctions like heavy industry and light industry were developed in relation to them. Industrialists - employers in this kind of institution - were regularly contrasted not only with workpeople - their employees - but with other kinds of employer - merchants, landowners, etc. This contrast between industry as factory production and other kinds of organized work was normal to the mid-20th century and is still current. Yet since 1945, perhaps under American influence, industry has again been generalized, along the line from effort, to organized effort, to an institution. It is common now to hear of the holiday industry, the leisure industry, the entertainment industry and, in a reversal of what was once a distinction, the agricultural industry. This reflects the increasing capitalization, organization and mechanization of what were formerly thought of as non-industrial kinds of service and work. But the development is not complete: industrial workers, for example, still primarily indicates factory workers, as distinct from other kinds of worker, and the same is true of industrial areas, industrial town and industrial estate. Industrial relations, however, has become specialized to relations between employers and workers in most kinds of work; cf. industrial dispute and the interesting industrial action (strikes, etc.), where the sense depends on a contrast, within the Labor Movement, with political action.

industrial revolutions

3. The term industrial revolution is of fundamental importance in the study of economic history and economic development. The phrase is, however, full of pitfalls for the unwary, partly because it has been used with several different meanings, and has, of course, generated a good deal of controversy.

An early use of 'industrial revolution' referred to what are best seen as periods of industrial growth with quite limited implications for overall economic development. Well-known examples include Carus-Wilson on the 13th and Nef on the 16th centuries. This usage is now seen as unhelpful and is generally disparaged.

Still very much current is the use of 'industrial revolution' in the sense of technological revolution, as in Freeman. This approach is often used by researchers concentrating on science and technology to describe developments which they feel had widespread economic and social ramifications. This school of thought envisages successive technological or (if you will) industrial revolutions. These would typically include the famous inventions of the period 1750-1850 based on steam power, the so-called second industrial revolution of the late 19th and early 20th centuries involving new chemicals, electricity and automobiles, and the information technology revolution of the years after 1970.

Among economists and economic historians 'industrial revolution' most frequently relates to a fundamental part of the experience of economic development, namely the spread of industrialization in the economy as a whole. Since Kuznets, this has been associated with the onset of modern economic growth. Econometric research has confirmed the existence of systematic patterns of change in the structure of economies as real incomes rise, although it is widely recognized that countries have not followed identical routes to modernity. Industrialization is typically accompanied by acceleration in economic growth, increases in investment in both physical and human capital, and improvements in technology and urbanization.

Perhaps the most common use of all refers to the classic and pioneering example of industrialization which occurred in Britain during the late 18th and early 19th centuries, famously described by Rostow as featuring a spectacular take-off into self-sustained growth. It now seems more probable that this episode was characterized by particularly rapid changes in economic structure but quite slow growth. In this sense the British industrial revolution not only subsumed technological innovation in industry but also embraced much wider organizational changes in agriculture, finance, commerce, trade, etc. Indeed the term 'industrial revolution' is a metaphor which should not be taken literally in this context.

A large literature has sought the ultimate causes of the first industrial revolution and has produced a great variety of hypotheses, many of which are unpersuasive, although hard to refute completely. What is generally accepted is that British industrialization resulted from prowess in technology and investment but ultimately depended on the institutions of a market economy which had their origins in the distant past. While the structural changes and their implications for living standards can fairly be described as revolutionary, this does not detract from the point that the industrial revolution was the culmination of evolutionary changes which had been proceeding for centuries. See also economic development.

information society

The information society is a broad concept which has been used since the 1970s to refer to the wide range of social and economic changes linked to the growing impact of information technology. It highlights the role that information technology plays in the way that individuals live, work, travel and entertain themselves. The use of the term information society has now become so widespread that the concept cannot be understood as a reference to any specific thesis. Journalists, futurists and social scientists often use this term to denote a more information-centric society in the same vein as others use such concepts as the information economy, the wired nation, the communications revolution, the microelectronics revolution and the knowledge society.

Others see the information society in terms of a prescription rather than a forecast. In Japan and Europe, as well as North America, the information society is often promoted as a vision for the 21st century and as a means to help policy makers anticipate and nurture the information sector in local, national and regional economies. In the 1990s US and other national initiatives to build modern information infrastructures – the so-called ‘information super-highway’ – were based on such visions.

For social scientists interested in the role of information and communication technology in social and economic development, the information society is a central idea. It builds on seminal work by the American sociologist Daniel Bell, who focused on forecasting the ‘post-industrial society.’ Bell posited information as the defining technology of the post-Second World War era, while raw materials were the core technology of the agricultural society, and energy was the core technology of the industrial society.

Broadly speaking, information technology refers to knowledge about how to create, manage and use information to accomplish human purposes, and so includes not only advances in computing and telecommunications, but also advances in the techniques and skills for using these systems for such purposes as modeling and computer simulation.

Bell identified major trends in what he called the post-industrial society, focusing on the USA as the exemplary case. The principal trends tied to the development of an information society include the growth of employment in information-related work; business and industry tied to the production, transmission and analysis of information; and the increasing centrality of technologists - managers and professionals skilled in the use of information for planning and analysis - to decision making.

The most significant trend is the shift in the majority of the labor force from agriculture (the primary sector) and manufacturing (the secondary sector) to services (the tertiary sector). Growth in information work, primarily white-collar occupations, has contributed to growth in service sectors. Information work includes a broad array of jobs, ranging from programmers and software engineers to teachers and researchers. New information industries, such as the providers of on-line data and communication services, account for some of this growth, but information work has also become more central to every sector of the economy, including agriculture and manufacturing. In this respect, the occupational shifts associated with the information society do not necessarily imply a decline in the relevance of primary or secondary sectors to national or global economies, as some critics have argued, but rather a diminishing need for labor within these sectors as computing, telecommunications and management science techniques are used to redesign the way in which work is accomplished.

A second trend identified in post-industrial information societies is the increasing importance of knowledge – including theoretical knowledge and methodological techniques, and its codification – to the management of social and economic institutions. Knowledge and technique, such as systems theory, operations research, modeling and simulation, are viewed as critical to forecasting, planning and managing complex organizations and systems, which Bell posited as central problems of the post-industrial era. According to Bell, the complexity and scale of emerging social and economic systems require systematic forecasting and foresight rather than a previously trusted reliance on common sense or reasoning based on surveys and experiments.

A third set of trends involves power shifts, particularly the growing prominence of the professional and managerial class – the knowledge workers. These are the individuals who understand and know how to work with knowledge, information systems, simulation and related analytical techniques. They will become increasingly vital to decision-making processes in situations of growing complexity. Thus, the relative power of experts should rise with the emergence of an information society.

Despite the significance and longevity of the concept, there remains no consensus on the definition of an information society, or indeed whether we are in fact living in an increasingly information-oriented society. Controversy over the trends and historical underpinnings of an information society has generated a lively debate within the social sciences. Critics of Bell's theory focus on his identification of information technology as central to long-term macrolevel changes in society – particularly in the structure of occupations and social strata – and the resultant deterministic view of social change. Whether or not this is an oversimplification of the information society thesis, it has led to a valuable shift in the focus of social science inquiry. This no longer looks only at the social implications of technological change, but also considers the social, political and economic factors that have shaped the design and use of information and communication technologies.


institution

1. As a technical term in social science, an institution is a regular and continuously repeated social practice. As such, the term has a wider coverage than in everyday usage, including not merely prisons, asylums, schools, hospitals and government offices, but also language, and moral and cultural practices.


intellectuals

3. A strict definition of intellectuals would be that they are persons whose role is to deal with the advancement and propagation of knowledge, and with the articulation of the values of their particular society. In that sense all societies have their intellectuals, since even the most so-called primitive societies will maintain priests or other interpreters of the divine will and natural order. For most of history, intellectuals have of necessity been supported by the political and religious institutions of their societies, so that rebels against accepted institutions and mores have tended to be critical of what they regarded as the over-intellectual approach of the recognized teachers of their time.

The role of intellectuals was altered in major respects by the advent of printing, and consequently of a public for a wide variety of reading matter including freer discussion of basic problems in science, morals, politics, and even religion. The French philosophes of the 18th century, later to be saddled by some historians with responsibility for the advent of the great Revolution, gave a precedent for the modern idea that intellectuals stand somehow outside the power structures and are, by definition, critical of existing social arrangements.

In the 19th century, the concept and its resonance differed in different societies. In France and the other advanced countries of western Europe, intellectuals were distinguished from scientists and scholars who depended upon institutions and academies funded by the state, and from those practitioners of literature whose appeal was strictly aesthetic. To be an intellectual was to claim a degree of independence of outlook; and the word in general parlance implied respect and approval. In central Europe, where the state was more suspicious of radical ideas, intellectuals, while courted by the political parties, were looked upon with suspicion by the authorities, especially if they were recruited largely from minority groups. Nationalist (and later fascist) movements appealed to populist anti-intellectual prejudice against the Jewish intellectuals of Vienna at the turn of the century, and in the German Weimar Republic.

Britain differed from its neighbors in that, although there were eminent social critics in the Victorian age, the interaction between the world of the intellect and the political and administrative worlds was very close. Intellectuals could preach reform and hope to have an influence. For this reason, the word intellectuals was held to represent a foreign rather than a British reality and was given a slightly scornful edge, as implying a lack of contact with everyday life. Few British people would have wished or now would wish to be called intellectuals. In the USA the similar role of intellectuals was diminished after their triumph in the success of the anti-slavery movement. Towards the end of the 19th century, a new movement of radical social criticism did develop among what can be seen as the American equivalent of European intellectuals, and this was renewed after the First World War and Woodrow Wilson’s temporary mobilization of some of them in pursuit of his domestic and international ideals. So great was their alienation in this second phase that they became susceptible to Communist penetration and influence to a greater extent than was common in Europe in the 1930s, although Marxism was to enjoy an efflorescence in liberated Europe after the Second World War, notably in the Latin countries.

In Tsarist Russia the differentiation between intellectuals and the members of learned professions was narrower, and they were grouped together as members of the intelligentsia. Faced with an absolutist regime, to be a member of the intelligentsia was almost by definition to be a critic of the social order and an opponent of the regime, although on occasion from a right-wing rather than a left-wing angle. In the former Soviet Union, and subsequently in eastern Europe as well, the monopoly of the communist party in defining and expounding the ruling doctrine, and the monopoly of the state and party in access to the media, forced intellectuals seeking to follow their own bent to go underground so that, as under Tsarism, to be an intellectual was to be classed as an opponent of regimes whose instruments of repression were greater and used with less scruple than those of earlier times.

In the overseas European empires of the 19th and 20th centuries, a class of intellectuals influenced by their western-style education came into being alongside the more traditionally educated and motivated intellectuals of the indigenous tradition. The ideas to which they were exposed, combined with the limited roles available to them, produced a similar effect to that noted in relation to tsarist Russia, predisposing them towards political opposition. Another similarity was the extension of the concept to include more than the small minority who were full-time intellectuals in the western sense. What was created was again an intelligentsia. This important aspect of the prelude to independence of the countries of the so-called Third World has had strong repercussions. Ingrained habits of criticism and opposition proved difficult to discard when these intelligentsias took power. Intellectuals, when called upon to rule, rarely perform well and usually have to give way to more disciplined elements such as the military.

A reaction against the adoption of western values and attitudes by intellectuals in Third-World countries has produced a revival of a traditional, largely religious-oriented leadership, notably in parts of the Islamic world, and a specific repudiation of intellectuals thought to be tarnished by western liberal or Marxist contacts.

Intellectuals whose mission is to examine everything are naturally prone to examine their own roles. Their self-consciousness has been heightened by the anti-intellectualism of some populist movements, an anti-intellectualism which has surfaced more than once on the American political scene. There are a number of recurring problems for intellectuals generally. Should they seek solitude to produce and develop their own ideas, or does the notion itself imply a constant commerce between intellectuals such as took place in the salons of 18th-century Paris and Regency London, or later in the cafes of Paris and Vienna, or as it now takes place in the many international congresses and seminars supported by American foundations? Should intellectuals engage directly in current controversies or content themselves with publishing their own ideas, leaving the arena to others? Should they accept public office or even seek the suffrages of the people for themselves? Should philosophers be kings?

international relations

3. In the most general sense international relations have existed ever since people formed themselves into social groups and then developed external relations with groups like themselves. Relationships were most frequently conflictual or warlike, although occasionally they were cooperative; but they took place in a system of anarchy and not within the framework of any political or legal or customary rules. These peculiar relationships were little considered by writers in the western world before Machiavelli, but from the 17th century onwards international law (Grotius, Pufendorf, Vattel) and the problems of war and peace (Rousseau, Kant) began to attract attention. These historical origins, combined with the horror of the First World War, led to the subject's emergence as a policy-making perspective and normative study: war was an intolerable evil, its recurrence must forever be prevented, and the duty of international relations scholars was to show how to achieve this. It was assumed that nobody could want war, so if states were democratic and governments were accountable to their peoples, and if the system's anarchy were ended (hence the League of Nations), war might be banished.

The diagnosis was too simple. The aspirations and actions of Hitler, Mussolini, the Japanese, and the Bolsheviks in Moscow showed the truth of the dictum of Morgenthau that peace and security is the ideology of satisfied powers. Scholars now turned their minds away from the study of ways to achieve a supposedly universal goal to the study of how things in the international arena in fact were. The modern subject of international relations was born. From the outset, though at first not explicitly, the subject was approached by different scholars from two different points of view. The first sought to establish why the significant units (or actors) on the international stage behaved in the ways they did: most such scholars saw states as the significant actors, and this branch of the subject became foreign policy analysis. The second group focused on the arena within which relations occurred, and was concerned to identify the mechanisms by which patterned relationships with a fair degree of stability and order were able to be maintained in conditions which, formally at least, were anarchical.

The 1950s and 1960s saw a burgeoning of methodological experimentation and quasi-theoretical speculation, and a proliferation of journals. The behavioralist revolution in the USA invaded international relations, as it did other social sciences, and a great debate with the so-called traditionalists raged through the 1960s and early 1970s, and is not yet concluded. But in the 1970s and 1980s disappointment at the relative lack of success in the creation of theories with explanatory power for real-world problems led to some redirection of attention towards substantive questions, to smaller-scale analyses and to theorizing over limited ranges of phenomena.

Foreign policy analysis is the branch of the subject in which most practical advances have occurred. Many conceptual frameworks have been developed, the most comprehensive probably being that of Brecher et al., but the central components of such frameworks are now widely agreed. States are conceived as having objectives of various kinds – political/security, economic, ideological. Objectives are not consistently compatible one with another, and a short-term objective may be inconsistent with a long-term goal. Objectives are ranked differently by different groups, organizations, and political leaderships within states, and rankings change over time. Explanation of policy decisions thus requires understanding of political interplay and bureaucratic process. But the determination of policy is conditioned also by states' capabilities – economic, demographic, political, military – and by decision makers' perceptions of the comparative efficacy of their own capabilities as against those of the other state(s) with which they are dealing, all in the context of support relationships (alliances, economic aid) and of respective commitments elsewhere in the system. Most, if not all, relationships have elements of conflict and common interest, and are essentially of a bargaining character; but the conflictual element usually predominates, and the concept of power is thus central to the analysis. A check-list of such considerations affecting foreign-policy decisions enables rudimentary comparisons of foreign policies to be made, but also makes possible greater awareness among policy makers of the likely consequences of their decisions.

The purposes of studies at the second or system level are to determine the factors that make the stability of the system more or less probable, and the effect on international outcomes of the system's structure. Essential structural components are the number of significant units (or actors) in the system, the nature, quality and quantity of interactions among the units, the distribution of capabilities among them, and the degree to which realignment of relationships is easy or is constrained (a system that is ideologically highly polarized, for example, is relatively inflexible). Analysis at the system level is commonly more highly abstract than analysis of state behavior: this makes possible theory construction of a more rigorous kind, but by the same token makes application of theory to the real world more difficult.

At both levels statistical and mathematical techniques are used, as well as more traditional methods relying on historical and verbally described data. The distinction between levels is, of course, analytical only. To take just one example of interdependence: at the unit behavior level, the extent to which states are economically, militarily or ideologically interdependent will very greatly affect the policy choices that are open; at the system level, the extent to which the realignment of units is impeded by their interdependence will fundamentally affect both outcomes and the stability of the system. Mention of interdependence calls attention to the fact that while states are widely accepted as still the most significant actors in the international arena, there are now many other actors, including intergovernmental organizations (the International Monetary Fund) and non-governmental organizations (guerilla groups or multinational corporations). The roles of these, in interplay with the behavior of states, and as components of international systems, all form part – and some would say an increasingly important part – of the study of international relations.

international trade

International trade is not intrinsically different from transactions in which commodities do not cross national boundaries. Nevertheless, the study of international trade has traditionally constituted a separate branch of microeconomics. It may be distinguished from other branches by its focus on situations where some but not all goods and factors are mobile between countries; and from international macroeconomics by its focus on real rather than nominal variables (trade flows and relative prices rather than exchange rates and money supplies), and by a tendency to examine medium-run issues using equilibrium analysis rather than short-run positions of disequilibrium. One of the first and most durable contributions to the analysis of international trade is the principle of comparative advantage due to Ricardo. This is the antecedent of both the normative and positive strands of international trade theory. At a normative level, it postulates that even an absolutely inefficient country will nevertheless gain from trade; at a positive level, it predicts the direction of trade: each country will tend to export those goods which it produces relatively cheaply in the absence of trade. As an explanation of trade patterns, the principle has met with some success. However, in its classical form it is open to two objections: it assumes unrealistically that unit production costs are independent of scale or factor proportions; and it fails to explain why they differ between countries in the first place.
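Ricardo's principle can be sketched numerically. In the toy calculation below, the countries, goods and unit labor costs are hypothetical figures chosen for illustration (not drawn from any source); Portugal is absolutely more efficient at producing both goods, yet each country still holds a comparative advantage in exactly one of them:

```python
# Ricardo's comparative advantage with hypothetical unit labor costs
# (hours per unit of output). Portugal is absolutely more efficient
# at BOTH goods, yet each country has a comparative advantage in one.
costs = {
    "England":  {"cloth": 100, "wine": 120},
    "Portugal": {"cloth": 90,  "wine": 80},
}

def exporter_of(good: str, other: str) -> str:
    """The country with the lower opportunity cost of `good`
    (units of `other` forgone per unit of `good`) tends to export it."""
    return min(costs, key=lambda c: costs[c][good] / costs[c][other])

print(exporter_of("cloth", "wine"))   # England
print(exporter_of("wine", "cloth"))   # Portugal
```

England gives up only 100/120 of a unit of wine per unit of cloth, against Portugal's 90/80, so England exports cloth and Portugal wine; this is why trade benefits both even though one country is absolutely less efficient.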

A theory which overcomes these deficiencies was developed by the Swedish economists Heckscher and Ohlin, who stressed international differences in factor endowments as the basis for comparative advantage and trade. Thus a country which is relatively capital-abundant will tend to export goods which are produced by relatively capital-intensive techniques. Largely through the influence of Samuelson, a highly simplified version of this theory, assuming only two goods and two factors in each country, has come to dominate the textbooks. In this form, it is a useful teaching device for introducing some of the basic concepts of general equilibrium theory but, not surprisingly, it is overwhelmingly rejected by the data. The most notable example of this is the so-called Leontief Paradox, an early application by Leontief of his technique of input-output analysis, which found that the presumably capital-abundant USA exported labor-intensive commodities, thus contradicting the theory. Nevertheless, for most economists probably the preferred explanation of trade patterns between countries at different levels of economic development is an eclectic theory of comparative advantage along Heckscher-Ohlin lines, allowing for many factors of production, some of them (such as natural resources) specific to individual sectors.

Even this theory fails to account for certain features of contemporary international trade, especially between advanced economies with similar technology and factor endowments. Such trade is frequently intra-industry, involving differentiated products within a single industry. Various theories explain such trade in terms of imperfectly competitive firms producing under conditions of increasing returns. Attention has also focused on the increased international mobility of factors (in part through the medium of multinational corporations) which in different circumstances may act as a substitute for or a complement to trade.

As well as attempting to explain the pattern of trade, positive trade theory also makes predictions about many aspects of open economies. Most notorious of these is the implication of the Heckscher-Ohlin model known as the factor price equalization theorem, which predicts that free trade will equalize the prices of internationally immobile factors. The theory also makes predictions concerning such issues as the effects of tariffs and international transfers on foreign and domestic prices, the effects of trade policy on domestic income distribution, and the consequences of structural change. Turning to normative trade theory, its traditional focus has been the merits of free trade relative to autarky, stemming from increased specialization in production and increased efficiency and diversity of choice in consumption. Similar arguments favor partially restricted trade relative to autarky, although the benefits of selective trade liberalization (such as the formation of a customs union) are not as clear-cut. The persistence of protectionist sentiment, despite these theoretical arguments, may be explained by the fact that gains from trade accruing to the economy as a whole are not inconsistent with losses to individual groups, especially owners of factors specific to import-competing sectors.

Two exceptions to the case for free trade are normally admitted. The optimal tariff argument states that a country with sufficient market power can gain by behaving like a monopolist and restricting the supply of its exports. The infant-industry argument defends transitional protection to enable a new industry to benefit from learning and scale economies. (As with many arguments for trade restriction, the latter on closer examination is less an argument against free trade than against laissez faire.) Work on strategic trade policy has added to these arguments the possibility that a government's ability to pre-commit tariffs or subsidies may allow it to give an advantage to home firms competing in oligopolistic markets.

Other special models have been developed to deal with important features of contemporary international trade. Thus, the growth of trade in intermediate goods (as opposed to goods for final consumption) has inspired the theory of effective protection, which builds on the insight that an industry benefits from tariffs on its outputs but is harmed by tariffs on its inputs. The post-war decline in importance of tariffs (at least between developed countries), due largely to international agreements such as the General Agreement on Tariffs and Trade (GATT) and the formation of free-trade areas and customs unions such as the European Union (formerly the EC), has focused attention on the widespread use of non-tariff barriers (such as quotas, health and safety regulations and government procurement policies) as methods of restricting trade.
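The effective-protection insight can be made concrete with the standard effective-rate formula, ERP = (t_o − a·t_i)/(1 − a), where t_o and t_i are the tariff rates on the output and its traded input and a is the input's cost share at free-trade prices. The rates and cost share below are hypothetical numbers chosen for illustration:

```python
# Effective rate of protection (the standard formula): proportional
# change in an industry's value added per unit caused by the tariff
# structure. All rates and the input cost share are hypothetical.
def effective_protection(t_output: float, t_input: float, input_share: float) -> float:
    """t_output, t_input: tariff rates on the output and its traded
    input; input_share: the input's cost share at free-trade prices."""
    return (t_output - input_share * t_input) / (1 - input_share)

# A 10% output tariff alone yields 20% effective protection when
# inputs are half of costs...
print(effective_protection(0.10, 0.00, 0.5))
# ...but a 20% tariff on the input wipes that protection out, and an
# input tariff with no output tariff leaves effective protection negative.
print(effective_protection(0.10, 0.20, 0.5))
```

The sketch shows both halves of the insight in the text: tariffs on outputs raise value added, tariffs on inputs reduce it.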


irony

1. The term 'irony' is derived from the Greek eironeia, meaning 'simulated ignorance.' Its precise definition is, however, elusive. At its simplest, it is a figure of speech in which what a person says is the opposite to what he or she means (so referring to the tall as short, the cowardly as courageous, and so on). This inversion captures little of the subtlety of irony. A liar or confidence trickster may say the opposite of what he or she means, but the liar is not using irony, for those who understand an utterance as ironic will recognize the inversion of meaning. The point of the inversion is therefore important – why say the opposite of what you mean, unless you are trying to deceive your audience? Two reasons can be offered. First, irony is a form of mockery or critical comment. Ironically to dub the cowardly courageous is to mock their lack of courage. Irony usefully saves the speaker from committing him or herself to a positive position, and to a degree may keep the speaker detached from the issues upon which he or she comments. (A classic example of literary irony is Swift's A Modest Proposal (1729), in which he advocated eating Irish babies as a solution to the population problem. He thereby ridicules existing solutions to the 'Irish problem', without offering a serious solution of his own.) Second, recognition of irony as irony may serve to distinguish the sophisticated members of an in-group from the more simple creatures without.

Two special meanings of irony may be noted. 'Socratic irony' refers to the manner of argument employed by Socrates, at least as he is represented in the early dialogues of Plato. Socrates pretends both ignorance and a sympathy with the position of a supposed expert on some topic. This affectation allows Socrates to question his victims, harrying them until their arguments collapse into contradiction and incoherence. 'Romantic irony' is especially associated with early 19th-century German philosopher-poets, including Hölderlin and Friedrich Schlegel. Such irony, drawing on Socratic irony, is explicitly associated with ambiguity, uncertainty and fragmentation of meaning. For Schlegel, in irony 'everything should be playful and serious, guilelessly open and deeply hidden.' Or again: 'Irony is the form of paradox. Paradox is everything which is simultaneously good and great.' Irony therefore disrupts the taken-for-granted meaningfulness of utterance and writing, exposing its artificiality. It is this emphasis on the problematic and ultimately indeterminate nature of the interpretation of any utterance or text that carries irony into contemporary literary theory. Thus, for Barthes, irony is the 'essence of writing,' in that it exposes the inability of the writer to control the interpretation of the text.



labor

1. In economics, labor is one of the four factors of production, alongside capital, land (or natural resources) and enterprise, which is to say, it is one of the four general types of input or resource required for economic production. In orthodox economics, labor includes the number of people actually employed in, or who are available for, production, or a little more abstractly, the capacity to produce (understood in terms of intellectual and manual skills, and their exertion). In Marxist economics, labor is the source of all economic value (hence the labor theory of value). In addition, the proletariat (the subordinate class within capitalism) are characterized by having to exchange their capacity to labor (or labor-power) for the commodities that they require in order to live.

labor theory of value

1. The labor theory of value is an attempt to explain the value of goods and services in terms of the costs of their production, as opposed to their usefulness (or use-value). Elements of the labor theory can be traced back at least to the 17th-century political philosopher John Locke, who analyzed the appropriation of private property in terms of a person's ability to 'mix' their labor with natural resources. The British economist David Ricardo (1772-1823) gave the first coherent account of the theory, in part in response to the 'paradox of value.' It was argued that the usefulness of a good could not determine its value, as very useful entities, such as air and water, are generally free or very inexpensive. In contrast, apparently useless luxury goods (gold and diamonds, say) can be very expensive. The labor theory explains this in terms of the amount of labor (or labor-time) that went into their production, either directly, or indirectly through having been stored up in the production of machinery and other capital goods. Water is easily found and conveyed to consumers, in contrast to the great amount of time needed to find and extract diamonds. In practice, the actual amount of labor expended in production is of less relevance than a social average labor-time (for otherwise the theory would imply that the products of the lazy would be worth more than those of the efficient). While the theory is fundamental to Marxist economics, in orthodox economics, since the late 19th century, it has been replaced by more sophisticated explanations of value grounded in usefulness (beginning with Marshall's account of marginal utility).
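The role of the social average can be shown with a toy calculation (the producers and hours below are invented for illustration): since units exchange at the social average labor-time, a slower producer's extra hours create no extra value per unit.

```python
# Toy illustration of the 'social average labor-time': units exchange
# at the average, so slower producers create no extra value per unit.
# Producers and hours are invented for illustration.
hours_per_unit = {"efficient": 8.0, "average": 10.0, "slow": 15.0}

def social_average(hours: dict) -> float:
    """Unweighted average labor-time across producers of one good."""
    return sum(hours.values()) / len(hours)

norm = social_average(hours_per_unit)
print(norm)  # 11.0
for producer, hours in hours_per_unit.items():
    # Positive gap: the producer's unit embodies less time than it fetches.
    print(producer, "gain vs. norm:", norm - hours)
```

A fuller treatment would weight the average by each producer's share of output, but the point stands: value tracks the social norm, not individual effort.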


legitimation

1. A term in Max Weber's sociology of politics which means the acknowledgement on the part of a society's subjects of the right of their rulers to rule them. In the post-war period legitimation has become a central issue in social, political and cultural discussion. For Jean-François Lyotard, for example, the question of legitimation is one that is continually suspended within a theoretical double-bind. Questions of legitimation, on this view, are really genre-questions concerning appropriate means to particular ends, and cannot be divorced from considerations of their social and cultural dimension. Lyotard argues that there are no universal criteria for legitimation and that, in consequence, the political level is a realm of cultural antagonism between contending purposes rather than a goal-oriented one. He does, however, reserve a critical space for the study of language: the open-ended philosophical analysis of rules. Politics, on a Lyotardean model, would be about competing claims being fought out within the space of cultural life, not in terms of some overall, most desirable state of affairs towards which society should be aiming. Jürgen Habermas, in contrast, has tried to argue against this view (which endorses a politics of conflict or 'dissensus') with a consensual reading of the social language of 'communicative action.' (See also rationality.)


liberalism

1. A key term within political philosophy, the word 'liberalism' is associated with a large number of thinkers (including Locke, Adam Smith, Malthus, Condorcet, J.S. Mill, Rawls and more recently Richard Rorty). The origins of liberalism can be traced back at least as far as the writings of John Locke (1632-1704). Indeed, Locke's work exhibits many of the key features that have subsequently been used to define liberalism. For instance, in the Two Treatises of Government (1690) Locke is concerned to show that the analysis of political power involves consideration of certain key attributes all human beings possess (in Locke's case this means analyzing human beings in their 'natural state,' or the 'state of nature' – a notion derived from the work of Thomas Hobbes (1588-1679)). By taking this approach Locke in effect asserts that there are a number of principles of political right that operate outside the realm of civil society, and indeed function to ground it. These principles are (i) freedom of action, and (ii) equality of right. Thus, in the state of nature no individual has the right to transgress another individual's basic freedom. Locke justifies this claim by way of reference to a conception of natural law derived from the claims of reason, 'the common rule and measure God hath given to mankind.' From a rational point of view, it is claimed, every individual has the right both to self-protection and to claim compensation for suffering a wrong at the hands of another. From this it is clear that a particular conception of the human individual (conceived in a manner which divorces human subjectivity from the constraints of modes of social organization) forms the basis for Locke's political discourse.

Each individual is, in Locke’s view, self-interested. From this it follows that some form of regulative body is required for the impartial administration of these rights. This forms part of the basis of Locke’s justification for the existence of government, which constitutes a means of arbitrating between the disputes which necessarily will arise between individuals situated in a state of nature. Government, in turn, rests on the constitution of a civil society, which is voluntarily arrived at through a contract. Thus, in Locke’s view the legitimacy of governmental power should be derived from the consent of those who fall under it. In principle, one is only subject to the power of government if one has agreed to enter into civil society, and thereby become a civil agent.

For Locke, civil society is ultimately derived from one basic principle of natural law which operates within the state of nature: the right to the possession of one's own body and the products thereof. Locke's argument can be summarized thus: (i) all humans situated in the state of nature have the right to self-preservation; (ii) the earth is the common possession of all human beings equally; (iii) its natural products thus belong in principle to everybody; (iv) however, since these products are available for use it follows that there must be some means whereby they may be appropriated and thereby subsequently owned; (v) there is one piece of property all humans possess, namely their own bodies; (vi) if you own your body, then the products of your labor are also yours; (vii) hence, if you appropriate anything from the state of nature this must, by definition, be the result of your labor and consequently become yours. Once the latter point has been reached, Locke says, it follows that other persons do not have the right to take possession of what is now yours, viz. the products of your labor, for goods appropriated in this manner from the state of nature become through this process a matter of 'private right'. This right is God-given, since God would not have put the world of nature at humanity's disposal if it were not to be taken advantage of. There is, it follows, a 'law of reason', an 'original law of nature', which grounds the ownership of private property and thereby grounds civil society. In turn, on a Lockean account, the proper function of government is to protect the rights of individuals and of their property (both in the form of the individual's own body and the products of their labor). A limitation to appropriation in the state of nature is set by use: one may only own what can be used without waste (e.g. if one appropriates more apples than one can eat they will go off and be wasted; and the same point goes for land).
However, with the invention of money (which is a nonperishable good) this limitation is overcome. For instance, one may indeed own a large quantity of land, the products of which can be exchanged for cash and hence do not go to waste. In turn, it is possible thereby to justify unequal property ownership: 'since gold and silver, being little useful to the life of man, in proportion to food, raiment, and carriage, has its value only from the consent of men ... it is plain that men have agreed to the disproportionate and unequal possession of the earth.’ Liberty, it follows, does not guarantee equality. Indeed, the progression from the state of nature to civil society is, for Locke, one which brings with it a necessary inequality with regard to the possession of goods.

Locke's thought exhibits a number of features common to many liberal thinkers. First, a central concern is with the basis of the individual's right to the ownership of goods, including above all their own body. Second, this right is paramount and it is the function of good government to protect it. Third, liberty, in turn, is understood as the freedom to be left alone to pursue one's own goals with the minimum of interference from others. Fourth, the function of the state is articulated and established within this basic assumption concerning liberty: a state should be based on consent (from which it derives its legitimacy and authority), and has as its proper function the protection of the rights of civil agents. Fifth, the state therefore has a limited role in the lives of individuals: it is not there to prescribe particular modes of behavior which individuals ought to adhere to, but rather ought only to oversee the behavior of individuals to the extent of ensuring that one person's actions do not infringe the rights of another. It follows that for thinkers within the liberal tradition the individual takes precedence over all other political concerns (i.e. individual liberty has priority over other values, such as equality).

These features are also evident in J.S. Mill's classic text On Liberty (1859). Mill's avowed aim in this text is to explore 'the nature and limits of the power which can be legitimately exercised by society over the individual' in the context of the social 'struggle between liberty and authority.’ There is, for Mill, an inherent political tension which exists between the spheres of liberty and authority, between individual freedom of thought and 'collective opinion' (manifested at its worst in the 'tyranny of the majority'). The individual is for Mill an independent entity with an accompanying right to this independence: 'his independence is, of right, absolute.’ An individual exhibits abilities (such as those of reflection and choice) as well as passions, desires and purposes. Taken together, these features allow for the identification of the individual as that which possesses interests. Given a situation in which a diversity of individuals are present in a society, it follows that such a society will also contain a diversity of interests. It is just such a form of society, one which both contains and is an expression of the diversity of human possibility, manifested in the form of the individual, that Mill favors as being the most progressive. Hence, Mill's account of individuality and political authority simultaneously implies an affirmation of a particular conception of cultural life. A more 'progressive culture' is taken to be synonymous with a liberal political culture, i.e. one in which individuality is fostered as the key basic value: 'It is not by wearing down into uniformity all that is individual ... but by cultivating it and calling it forth, within the limits imposed by the rights of others, that human beings become a beautiful and noble object of contemplation.’ As with Locke, then, for Mill the individual has rights which are established by way of reference to a regulative model of negative freedom. 
Freedom is, in other words, conceived as the freedom to act according to one's individual desires, providing that one does not infringe the liberties of others in the process ('freedom from ...', as opposed to 'freedom to ...'). As such, the liberal conception of individuality sets up a normative restriction which tells us what the boundaries of an agent's actions ought to be, even as it asserts the absolute right of individuals to be free from either state or consensual pressures which might impede their basic right to liberty.

More recently, John Rawls (in A Theory of Justice, 1972) has rearticulated many of the central tenets which underlie the thinking of both Locke and Mill. As with these two thinkers, Rawls is concerned to demonstrate that political right must be derived from the protection of individual interests, which are anchored within a rational framework capable of providing a normative model for individual agency. In Rawls's case, this framework is articulated through the postulation of the 'original position.’ In the 'original position', Rawls says, a group of individuals would be placed behind a 'veil of ignorance' and asked to choose the basic rules which would underpin the society in which they will subsequently live. In such a position, these individuals have no knowledge of such things as what social status they will have, how much money they will possess, etc. Thus, the 'original position' functions as a heuristic device intended to show what choices rational agents divested of individual interest would make about the most favorable form of social order.

Rawls's conception of the 'original position' shares common features with Locke's 'state of nature' theory. For example, it envisages that it is possible to describe rational human subjects removed from the constraints of social hierarchy, and in turn to adduce that they would favor a social order which maximizes personal liberty. In addition, however, Rawls also argues that such individuals would elect for a society in which the possible injustices they would suffer were they to draw the short straw and find themselves at the bottom of the social pile are minimized (what is termed the 'maximin' principle). Once again, though, it is evident that Rawlsean liberalism envisages the key political issue as being concerned with individual liberty and how best to both maximize and protect it. As with Locke and Mill, individuals have liberty granted to them with the proviso that it ought not to transgress the interests of others.
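The 'maximin' principle can be stated compactly in decision-theoretic notation (a sketch; the symbols are ours, not Rawls's):

```latex
% If u(a, s) denotes the well-being of an agent occupying social
% position s under a candidate social arrangement a, the parties in
% the original position choose the arrangement whose worst-off
% position is as well off as possible:
a^{*} \;=\; \arg\max_{a \in A} \; \min_{s \in S} \, u(a, s)
```

That is, each candidate arrangement is ranked by the payoff of its worst-off position, and the arrangement with the highest such minimum is selected.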

It is apparent from the work of these three thinkers, however, that liberalism is not a term which may be used to define a particular procedural attitude concerning how to arrive at the best model of social order. Thus, where Locke and Rawls both resort to a model of justification which, in effect, removes the individual from their social context in order to derive the principles of right and liberty which then apply to them, for Mill this move is not necessary. In other words, Mill does not envisage a 'state of nature' theory (or something akin to it) as being necessary to the project of arguing for the primacy of the liberty of individual political agents. Indeed, Mill's conception of the individual is more socially embedded to the extent that individuality gains its meaning, for him, from the social context in which agents engage in their personal pursuits. Nevertheless, Mill is equally committed to the view that the individual's rights are paramount, and that the pursuit of the conditions which maximize individual liberty will lead to the most desirable forms of social organization and cultural life. With regard to the state, likewise, liberals are not in agreement. As already noted, a Rawlsean would argue that the maximization of liberty must nevertheless be compatible with the minimization of the risks to individual well-being that are present in society. A certain level of wealth redistribution carried out by government is therefore justifiable in Rawls's view; whereas for a thinker like Locke, the unequal distribution of goods is a necessary consequence of human activity in civil society and one must simply accept this fact.

Along with their emphasis on the importance of individual liberty, liberals also show a commitment to a fairly rigid distinction between the public and private spheres of life. In other words, for a liberal like Mill, what an individual chooses to do with their own goods and even life is not a matter for public concern, so long as any choices that are made do not adversely affect the private rights of others. This line of thinking reflects the liberal emphasis on the individual as the basic unit of political discourse. Putting the matter another way, one might say that liberals are in general committed to an ontology of the individual - a metaphysical conception of the individual as an irreducible entity endowed with an existence that can be taken to transcend the limitations of any particular culture or society.

It may be tempting, in the light of the above, to oppose the thought of liberalism to more recent developments within postmodernism. For example, the postmodern critique of the subject, if convincing, might be regarded as sounding the death-knell of the liberal conception of subjectivity and its accompanying commitment to its particular conception of liberty. However, this may not be the case. The American pragmatist thinker Richard Rorty, for example, does not shy away from describing himself as both a postmodernist and a liberal. Nor, it might be added, is it necessarily the case that certain liberal principles are exorcised by postmodernist criticism. Amongst the postmodernists, the work of Jean-François Lyotard may be cited as an example of a thinker who, in spite of his commitment to a critique of liberal conceptions of the political, nevertheless retains features which can with justification be termed ‘liberal.’ Thus, in his book The Differend: Phrases in Dispute (and indeed elsewhere), Lyotard’s advocacy of the pursuit of a plurality of ‘genres of discourse’ is not incompatible with the liberal’s advocacy of a plurality of individual modes of existence. Indeed, it may be more germane to oppose liberal thought to that of the tradition of Marxism which, unlike that of the postmoderns, does not tend to regard the pursuit of multiplicity for its own sake in an uncritical light.


libertarianism

1. As a political doctrine, libertarianism may be situated as an extreme form of liberalism, and like liberalism, it is historically rooted in the work of 17th-century political philosopher John Locke. Libertarianism places a central emphasis upon the moral and political necessity of respecting human freedom, autonomy and responsibility. This freedom is principally expressed through the exercise of the right to own and enjoy property. Humans must be free to acquire property (but not by stealing the property of others) and to transfer property (by giving it away or by selling and exchanging it). The libertarian will therefore argue that state interference in the life of its citizens must be restricted. The state will have a duty to protect basic freedoms of its citizens (and so will provide a police force and the legal apparatus necessary to support it). The state cannot, however, appropriate its citizens’ property (in the form of taxation) for any other purpose. For example, to provide state education or health care would, firstly, require illegitimately appropriating citizens’ property (to pay for these services), and secondly would fail to respect the autonomy and responsibility of citizens to organize their own education and health care. In libertarian thinking, the market plays a key role in the organization of a free society.


lifestyle

1. As developed in the sociology of the 1960s and 1970s, ‘lifestyle’ referred to the patterns of consumption and use (of material and symbolic goods) associated with different social groups and classes. As developed in cultural studies, lifestyles may be understood as a focus of group or individual identity, in so far as the individual expresses himself or herself through the meaningful choice of certain items or patterns of behavior, as symbolic codes, from a plurality of possibilities. The choice of lifestyle may be seen as a form of resistance to the dominant social order. However, the analysis of lifestyles has also to address the problem of the degree to which choice of lifestyle represents a genuinely free and creative choice, and the degree to which it represents the influence of advertising and other mass media over everyday life, and thus the incorporation of the individual into the dominant social order.


Malthus, Thomas Robert (1766-1834)

3. Thomas Robert Malthus, cleric, moral scientist, and economist was born near Guildford, Surrey. He entered Jesus College, Cambridge, in 1784, graduated in mathematics as ninth Wrangler in 1788 and was a non-resident fellow of his college from 1793 until his marriage in 1804. Originally destined for a career in the Church of England, he became curate of Okewood in Surrey in 1796, and Rector of Walesby in Lincolnshire in 1803; but from 1805 until his death he served as professor of history and political economy at Haileybury College, then recently founded by the East India Company for the education of its cadets.

The source of Malthus's reputation as a political economist lay in his Essay on the Principle of Population published in 1798; but this essay was originally written to refute the 'perfectibilist' social philosophies of such writers as Godwin and Condorcet, and as such was developed within the context of an essentially Christian moral philosophy, as was all of Malthus's writing. At the core of Malthus's argument was his theory that

Population, when unchecked, increases in a geometrical ratio. Subsistence increases only in an arithmetical ratio.... By that law of our nature which makes food necessary to the life of man, the effects of these two unequal powers must be kept equal. This implies a strong and constantly operating check on population from the difficulty of subsistence.

In the first edition of his Essay, Malthus identified the checks to population as either preventive (keeping new population from growing up) or positive (cutting down existing population); hence followed the bleak conclusion 'that the superior power of population cannot be checked without producing misery or vice'. In the second, much enlarged, edition (1803) he extended the category of preventive checks to include 'moral restraint', thus admitting the possibility of population being contained without either misery or vice as necessary consequences. Even when thus modified, Malthus's population principle seemed to impose narrow limits to the possibilities of economic growth and social improvement, although he himself did not so intend it. Idealists and reformers consequently railed against the implications of the theory, but his fellow economists accepted both its premises and its logic and for most of the 19th century it remained one of the classical 'principles of political economy'.
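The two ratios in the passage above can be written out explicitly (a sketch in modern notation, which Malthus himself did not use):

```latex
% Population grows geometrically (by a constant ratio r > 1),
% subsistence arithmetically (by a constant increment a > 0):
P_t = P_0 \, r^{t} \qquad \text{(population)}
S_t = S_0 + a\,t \qquad \text{(subsistence)}
% Since a geometric term eventually outruns any arithmetic one,
% the ratio of population to subsistence grows without bound:
\lim_{t \to \infty} \frac{P_0 \, r^{t}}{S_0 + a\,t} = \infty
% which is why, on Malthus's argument, checks on population
% must operate continually.
```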

His population principle was not Malthus's only contribution to economic thought: he was among the first to state (in 1815) the theory of rent as a surplus generally associated with the name of his friend and contemporary, David Ricardo. Both were followers of Adam Smith, but Malthus's development of Smith’s system differed significantly from Ricardo’s, notably in his use of supply and demand analysis in the theory of value as against Ricardo's emphasis on labor-quantities, and in his explanation of the ‘historical fall’ of profits in terms of competition of capitals rather than by the 'necessity of resort to inferior soils’ which Ricardo stressed.

Malthus and Ricardo debated at length ‘the possibility of a general glut' of commodities. Ricardo argued for the validity of Say's Law, that ‘supply creates its own demand', while Malthus asserted the possibility of over-saving (and over-investment) creating an excess supply. Ricardo's apparently watertight logic won acceptance for Say's Law for over a century, until Keynes in 1933 drew attention to Malthus’s use of the principle of 'effective demand' and contended that ‘the whole problem of the balance between Saving and Investment had been posed' in the Preface to his Principles of Political Economy in 1820. Economists tend to see Malthus's Principles not so much as containing a notable foreshadowing of Keynes’s theory of employment as presenting, albeit through a glass darkly, a subtle and complex analysis of the conditions required to initiate and maintain balanced growth in a market economy.


markets

3. Markets are institutions that enable exchange to take place through bargaining or auction. They play a crucial role in allocating resources and distributing income in almost all economies, and also help to determine the distribution of political, social and intellectual influence. Only complete markets, in which every agent is able to exchange every good directly or indirectly with every other agent, can secure optimal production and distribution. Such markets must also be competitive, with many buyers and sellers, no significant barriers to entry or exit, perfect information, legally enforceable contracts, and an absence of coercion. While the obvious common-sense models of markets refer to those for widely used commodities, the same analysis can be applied to markets for capital, land, credit, labor and so on. A system of perfect markets provides a mechanism to coordinate the activities of individuals who pursue their own self-interest, and thus embodies the essential social and political institutional framework that allows economists' assumptions about human behavior to result in optimal efficiency and welfare outcomes.

Inevitably, most economic analysis of markets, from Aristotle onwards, has focused on their imperfections, and something like the modern theory of oligopoly was propounded in the early 19th century by Cournot (1838). Modern discussion of incomplete markets has been dominated by debate over the work of Arrow and Debreu, which suggested that uncertainty would inhibit the securing of Pareto-optimal solutions, linking up with a complementary analysis of the consequences of imperfect competition such as that set out in Stigler. The absence of complete markets, especially the lack of information or enforceable contracts, or the presence of distortions caused by taxes or subsidies or imperfect property rights, is seen to lead to market failure: the circumstance in which allocations achieved by markets are not efficient.

Market imperfections and failures that result in inefficient and exploitative socioeconomic structures have been analyzed in detail for developing economies in the Third World, especially for the agrarian institutions of their rural areas. While the older literature often stressed a lack of rationality or the survival of a non-material tradition to explain the absence of effective market institutions in Africa and Asia, it is now possible to see that the interlinking of markets where the same agents supply land, credit and employment under conditions of monopoly or oligopoly lies at the heart of the problems of food production and distribution in South Asia and elsewhere. As a result, those with labor to sell are not given adequate entitlements and those with land and capital use their social power to extract rents without effective competition. More generally, as Bardhan and others have argued forcefully, social institutions such as share-cropping and debt-bondage can distort land and labor markets to create a sub-optimal and static economic equilibrium, and should not simply be seen as second-best adaptations to imperfect market conditions brought about by exogenous uncertainty or imperfect information. The role of the state in substituting for missing markets was seen by development economists and policy makers in the 1950s and 1960s as justified by the inadequate or exploitative nature of market allocations. By the end of the 1970s, however, with the obvious distortions in prices that resulted from state intervention, and the heightened dangers of rent-seeking by which the owners of scarce factors of production such as capital and land were guaranteed returns that rewarded them for the ownership of their assets, not the efficient use of them, a more neo-classical view of the relations between markets and governments was reasserted by Lal among others.
This debate is complicated by the fact that many developing economies achieved independence and began their development policies at a time (the late 1940s and early 1950s) when national and international market institutions were badly damaged by the impact of the Great Depression and the Second World War.

Despite the resurgence of neo-classical rigor in development theory, awkward questions about the practical power of market arrangements to secure optimal solutions to economic problems have persisted. If all markets are in some sense imperfect, then questions remain about how best to complete them to ensure equity and efficiency by using other institutions. While some economists have suggested that complementary institutional change to reassign property rights or establish effective prices by removing all subsidies and distorting taxation is all that is necessary, others have seen the problem as more complex than this.

In developed economies, doubts about the efficiency of markets to ensure dynamic structural change have been focused on the history of large corporations, especially in the USA. Here the work of Coase and others led to the identification of transactions costs that can inhibit the contractual arrangements on which competitive markets must be based, and provided a framework for analyzing institutions as substitutes for missing markets in an environment of pervasive risks, incomplete markets, information asymmetry and moral hazard. Thus in the words of Williamson, large firms have come to be seen as alternatives to markets, not monopolists but 'efficiency instruments' that can coordinate the economic activity needed for rapid structural and technological change through the visible hand of the corporation more effectively than the invisible hand of the market. The work of business historians that stresses the importance of firms' investments in managerial capabilities in constructing internationally competitive industries (notably Chandler) can be used to flesh out these points empirically. The emergence of efficient and rapidly growing industrial economies in East Asia has given another twist to this analysis. The market activity that has accompanied rapid technical change and dynamic growth in Japan, South Korea and Taiwan, in particular, has been seen as managed or governed by a developmental state. This has produced the paradoxical situation that economies with apparently less competitive markets have outperformed those such as the USA in which free enterprise is still the ideal. Thus the question of whether the competitive market, as usually understood by Anglo-Saxon economists, was ever or is still the most effective way of securing economic growth and social justice is at the top of the agenda for social scientists once more. See also capital.

mass media

1. The mass media of communication are those institutions that produce and distribute information and visual and audio images on a large scale. Historically, the mass media may be dated from the invention of the printing press, and thus in the west, from Johann Gutenberg's commercial exploitation of printing around 1450. The early products of printing presses were religious or literary works, along with medical and legal texts. In the 16th and 17th centuries, periodicals and newspapers began to appear regularly. Industrialization led to a further expansion in the book and newspaper industries in the 19th century. The 20th century has seen the introduction and rapid expansion of electronic media (cinema, radio and television), to the point at which they have become a dominant element in the experience and organization of everyday life.

The first significant attempts to theorize the mass media in the 20th century began within the framework of mass society theory. Developed most significantly in the second quarter of the century, not least as a response to the rise of Nazism and Fascism, mass society theory typically presented industrial society as degenerating into an undifferentiated, irrational and emotive mass of people, cut off from tradition and from any fine sensitivity to aesthetic or moral values. The mass entertainment media are thereby presented as key instruments in the creation of this mass, precisely in so far as they are seen to appeal to the more base elements of popular taste (thus reducing all content to some lowest common denominator) in the search for large audiences. The media thereby serve to undermine traditional and local cultural difference, and in the emotional nature of their content, to inhibit rational responses to the messages they present. Entertainment is complemented by the use of radio, especially, as an instrument of political propaganda, or more precisely in Marxism, as one of the core contemporary instruments of ideology. Mass society theory may therefore be seen to attribute enormous power to the media, and, as a complementary presupposition, to present the audience as the more or less passive victim of the messages foisted upon it. The empirical research that such theory fostered, 'effects' research, tends to look for the harmful effects that the media had, both politically (in inhibiting democracy) and morally (for example in encouraging violence). This assumption of media power was, paradoxically, in the media's own interests, in that it implied that they were a powerful and effective tool of advertising.

A more subtle approach to media research emerged in the post-war period, within the framework of sociological functionalism. 'Uses and gratifications' research attributes greater activity and diversity to members of the audience, in so far as they are assumed to have subjectively felt needs, created by the social and physical environment, that the media can fulfill. The central functions performed by the media include escapism (in so far as media consumption allows a legitimate withdrawal from the pressures of normal life), the establishing of personal relationships (including the use of media programs as the focus of discussion and other social interaction), and the formation of personal identity (whereby the values expressed by programs are seen to reinforce one's personal values).

In the 1950s, a Canadian school of media theory emerged, principally in the work of Harold Innis and Marshall McLuhan. The central argument here was that there was a causal link between the dominant form of communication and the organization of a society. Thus, Innis distinguished 'time biased media' from 'space biased media'. The former, such as clay and stone, could not easily be transported, but were durable, thus leading to stable social phenomena, grounded in the reproduction of tradition over long periods of time. The latter (such as paper) are less durable, but are easily transported. They could therefore support the expansion of administrative and political authority over large territories. McLuhan argued that the development of new media technologies has a fundamental impact on human cognition. The introduction of printing leads to greater compartmentalization and specialization of the human senses, as communication comes to be dominated by the printed page (as opposed to oral communication previously). Vision thus becomes dominant, but deals with information that is presented in a linear, uniform and infinitely repeatable manner. Thought thus becomes standardized and analytical. Print also leads to individualism, as reading becomes silent and private. Print culture, which for McLuhan as for Innis is space biased, is challenged by electronic media. Electronic media, in their proliferation and continual presence, annihilate space and time. Confronting us continually, modern media do not have to be sought out. Similarly, the act of reading or consuming various media is no longer confined to particular periods of the day. Information from diverse locations and even periods in history is juxtaposed in a single newspaper or evening's television. The modern experience is thus one of an unceasing relocation of information in space and time, leading to what McLuhan termed 'the global village'. While McLuhan's theories fell from fashion in the 1970s, they bear a resemblance to much recent postmodernist thinking.

New strands of media theory emerged in the 1960s and 1970s, in no small part through increasing interest specifically in television. Two extremes may be identified. At one, concern is with the material base that determines cultural production. The political economy of the mass media thus focused on institutional structures that underpinned media production (and thus its contents and value orientations). Murdock and Golding (1977), for example, looked at the structures of share ownership and control that linked media organizations into multi-national capitalism. At the other, emphasis is placed upon media content as texts, in need of interpretation or decoding. The increasing influence of semiotics led to a fundamental re-evaluation of the role of the media audience. They cease to be mere victims of the media, and come to be seen as actively engaging with media products, interpreting them in a plurality of ways that may be at odds with the possibly ideological intentions of the producers. The work of the Birmingham Centre for Contemporary Cultural Studies and Stuart Hall is crucial here. From this, cultural studies may be seen to lead, less to theorization of the mass media per se, than to the development of distinctive theories and accounts of specific media (such as television, popular music, and even the Sony Walkman).

Jürgen Habermas and Jean Baudrillard offer two distinct, yet general, accounts of the place of the mass media in the experience and development of contemporary society. Habermas's theory centers on the concept of the public sphere. The bourgeois public sphere emerged in Europe in the 17th and 18th centuries, as critical self-reflection and reflection upon the state, conducted first in coffee houses and salons, and then through pamphlets, journals and newspapers. While in practice this public sphere was exclusive, allowing participation only by the propertied, rational, and male bourgeoisie, Habermas finds in it a principle of the open, and thus democratic, use of public reason. Contemporary electronic media are seen to have a complex, dialectical impact on this sphere. Positively, modern production techniques can make complex, critical and culturally demanding material widely available. In practice, however, cultural consumption has become increasingly privatized, breaking up the public sphere, and dominated by low-quality material designed to have a mass appeal. In politics, this leads to the degradation of political debate and policy formation into an increasingly stage-managed political theatre.

Baudrillard understands contemporary capitalism in terms of symbolic (as opposed to strictly economic) exchange. The contemporary world is therefore dominated by signs, images and representations, to such a degree that the distinction between the sign and its referent, the real world, collapses (so that one can no longer speak to the real needs or interests of the people, for example). The mass media (and particularly television) are central to this production and exchange of signs, and it is to the nature of the consumption of these signs that Baudrillard looks in order to outline a pessimistic theory of the impact of the mass media on democratic society. Baudrillard's consumer is typically a channel-hopper and couch potato. On the one hand, television transforms the world into easily consumable fragments, and yet does so within the gamut of media that produce more information than any one person could absorb and understand, so that it attracts only a superficial 'ludic curiosity'. On the other hand, the media swallow up private space, for although typically consumed privately, they intrude upon our most intimate moments by making them public. Nothing is taboo any longer, and the immediacy of media coverage inhibits the possibility of critical reflection. An opinion poll, for example, cannot appeal to a genuine public. It does not manipulate the public, for the public (and the distinction between public and private) has ceased to exist. The expression of political opinion is reduced to a yes/no decision, akin to the choice or rejection of a supermarket brand or film. Resistance, for Baudrillard, can then rest only in a refusal to participate in this system.

3. Mass media together comprise a new social institution, concerned with the production and distribution of knowledge in the widest sense of the word, and have a number of salient characteristics, including the use of relatively advanced technology for the (mass) production and dissemination of messages; the systematic organization and social regulation of this work; and the direction of messages at potentially large audiences who are unknown to the sender and free to attend or not. The mass media institution is essentially open, operating in the public sphere to provide regular channels of communication for messages of a kind determined by what is culturally and technically possible, socially permitted and in demand by a large enough number of individuals.

It is usual to date the beginnings of mass media from the first recognizably modern newspaper, in the early 17th century, which in turn was a new application of the technology of printing, already in use for over 150 years for the multiple reproduction of book manuscripts. The audiovisual forms which have subsequently been developed, mainly since the end of the 19th century, have caused existing media to adapt and have enlarged the total reach of media, as well as extended the diversity of their social functions.

This history of media development is, nevertheless, more than a record of technical advance and of increasing scale of operation. It was a social innovation as much as a technological invention, and turning points in media history are marked, if not caused, by major social changes. The history of the newspaper, still the archetypal, as well as the first, mass medium, illustrates the point very well. Its development is linked to the emergence to power of the bourgeois (urban-business-professional) class, which it served in cultural, political and commercial activities. It became an essential instrument in subsequent economic and political struggles, a necessary condition for economic liberalism, constitutional democracy and, perhaps, also revolution and bureaucratic centralism. Its development thus reflects not only political and economic forces but also major social and cultural changes. The latter include urbanization; rising living standards and the growth of leisure; and the emergence of forms of society which are, variously, democratic, highly organized, bureaucratic, nationalistic and committed to gradual change. Consideration of newer media, especially film, radio and television, would not greatly modify this assessment, and these media have not greatly widened the range of functions already performed by the newspaper as advertiser, entertainer and forum for the expression of opinion and culture.

Early social science views of mass media reflect some of these historical circumstances. Commentators were struck by the immense popular appeal of the new media and by the power which they might exert in society. Beyond that, views divided sharply on whether to welcome or regret the new instruments of culture and information, and a division between pessimists and optimists has been an enduring feature of assessments of mass media, starting to fade only as the inevitability and complexity of the media are accepted. The pessimistic view stems partly from the pejorative connotations of the term 'mass', which includes the notions of vast scale, anonymity, impersonality, uniformity, lack of regulation and mindlessness. At the extreme, the media were regarded, sometimes by conservative and radical critics alike, as instruments for manipulation, a threat to existing cultural and spiritual values and to democracy. But optimists saw the mass media as a powerful means of disseminating information, education and culture to the previously excluded classes and of making feasible a genuine participatory democracy. By the 1930s some circumstantial evidence and enough theory supported both sides, but there was little systematic investigation.

The first period of scientific investigation of mass media, undertaken between the mid-1930s and the late 1950s, resulted in a much more modest estimate of media effects than was previously assumed, even a new myth of media powerlessness. The earlier stimulus-response model of influence was replaced by a model of indirect influence, according to which the media were seen to be subject to mechanisms of selective attention, perception and response, such that any effects would be more likely to reinforce existing tendencies than to cause any major change. Further, the working of media was seen to be subordinate to the existing patterns of social and personal influence and thus not well conceived of as an external influence. While the evidence reassured many critics and discomfited prophets of doom, it seemed to lead to no slackening of efforts to use media, in ever more subtle ways, for political and commercial ends. Since the 1960s there has been further development in the assessment of mass media effects in the direction of a renewed belief in their potency.

The earlier research, despite its reassuring message, left open the possibility that media effects could be considerable under certain conditions: first, where there exists a monopoly or uniformity of message content; second, where the messages seem to concern matters beyond immediate experience or direct relevance; and third, where there is accumulation over a long period of time of similar messages. Research attention has thus shifted from the search for direct, short-term effects on individuals and towards the following: structures of ownership and control of media; patterns of ideology or culture in messages and texts; and the professional and organizational contexts in which media knowledge is manufactured. Experts assessing the influence of mass media emphasize what people learn from the media, thus cognitive effects in the widest sense.
We may learn from the media what is normal or approved, what is right or wrong, what to expect as an individual, group or class, and how we should view other groups or nations. Aside from the nature and magnitude of media effects on people, it is impossible to doubt the enormous dependence of individuals, institutions and society as a whole on mass media for a wide range of information and cultural services.

If the mass media play an essential part in mediating a wide range of relationships within societies, they have also come to be seen as playing a comparable part in mediating relations between nation-states and world blocs. The flow of information and culture by way of mass media does much to establish and confirm patterns of perception, of hostility and attraction and also the relations of economic dependence and latent conflict between the different worlds of east and west, north and south. While mass media still largely consist of separate national systems, the increasing internationalization of networks and content is now interesting researchers.

The history of mass media has so far been fairly short and very eventful, but it already seems on the point of a new and significant departure which may change the essential character of mass communications. The most important developments are of smaller-scale, point-to-point and potentially interactive media, employing cable, satellite or computer technology. It is likely that there will be a move away from centralized and uniform media of distribution towards abundant and functionally diversified provision of messages based on receiver demand. The boundaries between mass communication and the emerging new forms of information transfer are likely to become even more blurred in what is being hailed as an emerging 'information society'. Nevertheless, the issues which shaped early debates about mass media are still relevant in the new conditions, especially those which concern the contribution of mass communication to equality or inequality, order or fragmentation. Because of their public functions, nationally and internationally, mass media are unlikely to be replaced by the new media, although the balance of use and significance will change.


materialism

2. Materialism and the associated materialist and materialistic are complex words in contemporary English because they refer (i) to a very long, difficult and varying set of arguments which propose matter as the primary substance of all living and non-living things, including human beings; (ii) to a related or consequent but again highly various set of explanations and judgments of mental, moral and social activities; and (iii) to a distinguishable set of attitudes and activities, with no necessary philosophical and scientific connection, which can be summarized as an overriding or primary concern with the production or acquisition of things and money. It is understandable that opponents of the views indicated in senses (i) and (ii) often take advantage of, or are themselves confused by, sense (iii) and its associations. Indeed in certain phases of sense (ii) there are plausible connections with elements of sense (iii), which can hardly, however, be limited to proponents of any of the forms of senses (i) and (ii). The loose general association between senses (i) and (ii) and sense (iii) is in fact an historical residue, which the history of the words does something to explain.

The central word, matter, has a suitably material primary meaning. It came into English, in varying forms, from French matere, from Latin materia - a building material, usually timber (with which the word may be etymologically associated, as also with domestic; cf. 'will sliver and disbranch from her material sap', King Lear, IV, ii); thence, by extension, any physical substance considered generally, and, again by extension, the substance of anything. In English this full range of meanings was established very early, though the most specific early sense was never important and was quickly lost. Among early established uses, matter was regularly distinguished from form, which it was held was required to bring matter into being. There was a related distinction between material and formal, but the most popular distinction was between material and spiritual, where spirit was the effective theological specialization of form. Matter was also contrasted, from the late 16th century, with idea, but the important modern material/ideal and materialist/idealist contrasts, from the early 18th century, were later than the material/formal and material/spiritual contrasts. It is this latter contrast which has most to do with the specific meanings of material and materialist in sense (iii). It is not easy to trace these, but there was a tendency to associate material with 'worldly' affairs and an associated distinction, of a class kind, between people occupied with material activities and others given to spiritual or liberal pursuits. Thus Kyd (1588): 'not of servile or materiall witt, but ... apt to studie or contemplat'; Dryden (1700): 'his gross material soul'. This tendency would probably have developed in any event, but it was to be crucially affected by the course and context of the philosophical argument.

Philosophical positions that we would now call materialist are at least as old as the 5th century BC, in the Greek atomists, and the fully developed Epicurean position was widely known through Lucretius. It is significant that in addition to simply physical explanations of the origins of nature and of life, this doctrine had connected explanations of civilization (the development of natural human powers within a given environment), of society (a contract for security against others), and of morality (a set of conventions which lead to happiness and which may be altered if they do not, there being no pre-existing values where the only natural force is self-interest). The key moment in English materialism, though still not given this name, was in Hobbes, where the fundamental premise was that of physical bodies in motion - mechanics - and where deduction was made from the laws of such bodies in motion to individual human behavior (sensation and thought being forms of motion) and to the nature of society - human beings acting in relation to each other (and submitting to sovereignty for necessary regulation). In 18th-century France, for example in Holbach, it was comparably argued that all causal relationships were simply the laws of the motion of bodies, and, with a new explicitness, that alternative causes, and especially the notion of God or any other kind of metaphysical creation or direction, were false. It was from the mid-17th century that doctrines of this kind became known as materialist and from the mid-18th century as materialism. The regular association between physical explanations of the origins of nature and of life, and conventional or mechanical explanations of morality and society, had the understandable effect, much sharpened when they became explicit denials of religion, of transferring materialism and materialist in one kind of popular use to the sense of mere attitudes and forms of behavior.
In the furious counterattack, by those who would give religious and traditional explanations of nature and life, and thence other kinds of cause in moral behavior and social organization, materialism and materialist were joined to the earlier sense of material (worldly) to describe not so much the antecedent reasoning as the deduced moral and social positions, and then, in a leap of controversy, to transfer the notion of self-interest as the only natural force to 'selfishness' as a supposedly recommended or preferred way of life. It hardly needs to be pointed out that both the conventional and the mechanical forms of materialist moral argument had been concerned with how this force - 'self-interest' - might be or actually was regulated for mutual benefit. In the 18th century the usage was still primarily philosophical; by the early 19th century the rash and polemical extension from a proposition to a recommendation had deeply affected the senses of materialism and materialist, and the suitably looser materialistic followed from the mid-19th century.

So complex an argument cannot be resolved by tracing the development of the words. Some people still assert that a selfish worldliness is the inevitable even if unintended consequence of the denial of any primary moral force, whether divine or human. Some read this conclusion back to qualify the physical arguments; others accept, explicitly or implicitly, the physical arguments but introduce new terms for social or moral explanation. In religious and quasi-religious usage, materialism and its associates have become catchwords for description and free association of anything from physical science to capitalist society, and also, significantly often, the socialist revolt against capitalist society. The arbitrary character of this popular association has to be seen both critically and historically. But what has also to be seen, for it bears centrally on this argument, is the later development of philosophical materialism. Thus Marx's critique of the materialism hitherto described accepted the physical explanations of the origin of nature and of life but rejected the derived forms of social and moral argument, describing the whole tendency as mechanical materialism. This form of materialism had isolated objects and had neglected or ignored subjects, and especially human activity as subjective. Hence his distinction between a received mechanical materialism and a new historical materialism, which would include human activity as a primary force. The distinction is important but it leaves many questions unresolved. Human economic activity - men acting on a physical environment - was seen as primary, but in one interpretation all other activity, social, cultural and moral, was simply derived from this primary activity. (This allows, incidentally, a new free association with the popular sense of materialism: economic activity is primary, therefore materialists are primarily interested in activities which make money - which is not at all what Marx meant.)
Marx's sense of interaction - men working on physical things and the ways they do this, and the relations they enter into to do it, working also on 'human nature', which they make in the process of making what they need to subsist - was generalized by Engels as dialectical materialism, and extended to a sense of laws, not only of historical development but of all natural or physical processes. In this formulation, which is one version of Marxism, historical materialism refers to human activity, dialectical materialism to universal processes. The point that matters, in relation to the history of the words, is that historical materialism offers explanations of the causes of sense (iii) materialism - selfish preoccupation with goods and money - and, so far from recommending it, describes social and historical ways of overcoming it and establishing co-operation and mutuality. This is of course still a materialist reasoning as distinguished from kinds of reasoning described, unfavorably, as idealist or moralistic or utopian. But it is, to take the complex senses of the words, a materialist argument, an argument based on materialism, against a materialistic society. See dialectic.

means of production

1. In Marxism, ‘means of production’ refers to all the material resources used in production. The major class divisions in any society are understood in terms of ownership and control, or lack of ownership and control, of the means of production. Thus, in capitalism, the bourgeoisie owns factories, raw materials and other productive resources, and is able to control what is produced, and the disposal of that product. The subordinate proletariat have only their ability to labor, which they sell to the bourgeois capitalist.


meritocracy

1. A meritocracy is a society with an occupational hierarchy (see social stratification). Different occupations will enjoy different rewards, power and status. However, in a meritocracy, individuals move up and down this hierarchy (see social mobility) on the basis of merit, which is to say, on the basis of the talents and qualifications that they possess, and the appropriateness of these attributes to the tasks required in a given occupation. The most highly rewarded occupations will also be those which are most important to the society, that require rare skills or skills and knowledge that take a long time to acquire, and which carry the highest levels of responsibility. (It is assumed that financial and other rewards are necessary in order to motivate the most appropriate people to undertake the training necessary to fulfill the occupation).

The liberal philosopher John Rawls has offered a highly influential defense of meritocracy as fundamental to a just and fair society (1972). He is, however, at pains to distinguish what he calls a 'callous meritocracy' from fair equality of opportunity. In the former, a person's education will depend predominantly upon what his or her parents can afford. Thus, the children of successful parents will be more likely to acquire prestigious jobs, because they are likely to have had a better education. This would lead to wide inequalities in society. Rawls therefore defends an education system to which everyone has equal access, to ensure that the talents a person does have are recognized and cultivated, regardless of parental background.


migration

3. Migration is a generic term used to refer both to immigration (or in-migration) and to emigration (or out-migration). Formally, these terms may refer to various types of change of residence, but we customarily speak of immigration and emigration when the change of residence is between nations, and of in-migration and out-migration when the change of residence is between subunits of a nation. The term 'net migration' is used to denote the difference between the number of in-migratory events and the number of out-migratory events with respect to a particular geographic unit during a given time period.

Events of immigration (in-migration) and emigration (out-migration) constitute two of the four components of population change; the other two are births and deaths. For large areas, population change is generally determined predominantly by the balance of births and deaths ('natural increase'). However, for small areas the net migration is often larger than the natural increase.

A migration stream is defined as the total number of migratory events from Place A to Place B during a given time period. The counterstream is defined as the number of migratory events from Place B to Place A. The sum of events in the stream and counterstream is termed the gross interchange between A and B. The effectiveness of migration is defined as the ratio of the net migration between A and B to the gross interchange between the two places. Logically, therefore, the effectiveness of migration can vary from a low of 0 to a high of 1. For most pairs of geographic units the effectiveness of migration tends to be much closer to 0 than to 1.
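These stream measures reduce to simple arithmetic; the sketch below works through a hypothetical pair of places (all counts are invented for illustration):

```python
# Hypothetical migratory events between Place A and Place B in one period.
a_to_b = 700  # stream: events from A to B
b_to_a = 300  # counterstream: events from B to A

# Net migration and gross interchange, as defined above.
net_migration = a_to_b - b_to_a        # 400
gross_interchange = a_to_b + b_to_a    # 1000

# Effectiveness: net migration relative to gross interchange,
# so it always falls between 0 and 1.
effectiveness = abs(net_migration) / gross_interchange

print(effectiveness)  # 0.4
```

An effectiveness near 0 means the stream and counterstream largely cancel each other; a value near 1 means movement is almost entirely one-way.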

Certain types of migration are commonly distinguished. Petersen made useful distinctions between the concepts of free, impelled and forced migration. In free migration, the will of the migrant is the main factor. In impelled migration, the will of the migrant is subordinated to the will of other persons. In forced migration, the will of other persons is paramount, and the will of the migrant is of no weight at all. Return migration is defined as migration back to a place in which one had formerly resided. For most individuals who have migrated several times during their lifetime, return migrations are an important component of the total number of movements. Chain migration refers to the common pattern whereby individuals migrate to a particular destination in which they already have kin or friends who have previously migrated from the individual's own area of origin.

Migration differentials

It is universally observed that the propensity to migrate is strongest among young adults. Other differentials in migration tend to be limited to particular cultures or locales.

Determinants of migration

The determinants of migratory behavior may conveniently be analyzed in terms of a preference system; a price system; and the total amount of resources available for all goals.

First, the preference system describes the relative attractiveness of various places as goals for potential migrants, compared to other goals which their resources would allow them to pursue. An area's attractiveness is the balance between the positive and negative values which it offers. Among the most important of the positive values is the prospect of a better-paying job. Other advantages achieved by migration include the chance to live in a more favorable climate, freedom from persecution, marriage and the continuation of marital ties, and the desire for more adequate housing, a factor particularly important with respect to central city to suburb movements.

However, migration also creates negative values. A major disincentive to migration is that it involves a disruption of interpersonal relationships with kin and old friends. Chain migration is so attractive precisely because it mitigates this disruption of relationships. Other negative aspects of migration are the necessity to learn new customs and sometimes a new language. Laws restraining legal entry or departure are also, of course, important deterrents to migration.

Second, the price system describes costs in money, energy and time (which cannot be used in the pursuit of other goals) imposed by a given migration decision. As the cost of migration generally varies in direct proportion to the distance traveled, the number of migrants to a given place tends to vary inversely with the distance.

Third, the total resources available for all goals also affect the decision to migrate. If the only drawback to migration is the expense of the move, then an increase in monetary income should increase the probability of migration. The secular increase in monetary income since the end of the 19th century in the developed nations should have increased rates of migration, provided that the value and price of migration had remained constant. However, to the extent that regional differences in job opportunities may also decline, the factor of increasing resources may be offset.

Consequences of migration

Migration has consequences for the areas of net out-migration and net in-migration, as well as for the larger social system, which includes both the area of out-migration and the area of in-migration.

First, net out-migration may have several important consequences for an area. It may relieve population pressure and cause the average level of wage and salary income to rise. However, it may cause the value of land and real estate to decline. Areas of net out-migration incur the loss of the investments made to raise and educate those children who spend their productive years elsewhere.

Second, net in-migration may also have important consequences. If the area is definitely underpopulated, the resultant population increase may help the area achieve economies of scale and thus raise the general standard of living. Under other circumstances, net in-migration may result in a decline in average wage and salary income. In either case, a net flow of in-migrants will tend to raise the price of land and real estate. It is also possible that a high rate of in-migration fosters social disorganization, as social-control networks are not easily established among persons who are strangers to one another.

Third, for the system comprising the areas of both net inflow and net outflow, migration promotes a redistribution of population. If migrants have been responsive to differences in job opportunities this redistribution will further the economic development of the total system. Usually, migration also has consequences for the degree of regional homogeneity within the total system. Since migrants tend to move from low to high income areas, regional income inequalities are generally reduced by migration. Moreover, migration often helps to reduce regional disparities in racial, ethnic and religious composition.

Migration policies and legislation affecting migration

It is useful to distinguish between migration policies, which are intentionally designed to influence migratory flows, and legislation affecting migration, which in fact influences the flow of migrants even though it is designed to serve some other major goal or goals. Almost all nations have adopted policies with respect to international migration. Most such policies severely restrict immigration, so that the actual stream of legal immigrants is much smaller than it would have been if no barriers had been imposed. As a result, many nations, particularly the USA, have a large number of illegal immigrants. However, certain nations have actively encouraged immigrants - of a particular type. Australia, for example, in the twenty-year period following the Second World War, actively sought to increase its population through subsidizing immigration from Europe while at the same time discouraging immigration from Asia. Although most governments proclaim the right of their citizens to emigrate, many of them place restrictions on emigration of selected persons for reasons of national security. Moreover, restrictions on emigration can be very severe, as exemplified until recently in the former Soviet Union.

The stream of rural to urban migration has marked every nation, both developed and less developed, since the beginning of the Industrial Revolution. In the developed nations, the net stream of rural to urban migration has in most cases ceased; in many of the less developed nations, it is still of considerable magnitude.

Explicit policies concerning internal migration are less common than with respect to external migration, but in most nations there is a large body of legislation which affects internal migration either negatively or positively. Nations which do have explicit policies regarding internal migration have generally tried either to discourage the growth of their largest cities or to encourage settlement of sparsely populated areas with important natural resources.

mode of production

1. In Marxism, history is understood as the determinate succession of distinct epochs or modes of production. Marx identifies six historical epochs: primitive communism; ancient slave society; feudalism; capitalism; socialism; and communism. Each has a distinctive economic character, analyzed in terms of its forces and relations of production, which is to say, the level of technology within the society and the relationship between producers and the owners or controllers of the resources required for production (the means of production). The mode of production is therefore the distinctive relationship of forces and relations of production, and their associated structures of economic exploitation. While strictly no historically specific social structure can be fully analyzed in terms of a single mode of production, and there has been fruitful debate over distinctions within the capitalist mode of production (for example, as to a break between high capitalism and late capitalism), the basic Marxist account offers a powerful, if abstract, model of social change.

This may be illustrated through reference to the transition from feudalism to capitalism. Feudal technology depends on sources of natural power (including animal power, wind power and human strength), while capitalism has machinery powered by the burning of fossil fuels. The relatively low power of feudal technology can be fully exploited through small-scale, and predominantly agrarian, production methods. The greater power of capitalist technology entails that a single source can provide the power for a large number of workers. The factory therefore emerges as the most appropriate way to exploit this power. However, the factory, and its organization, are themselves strictly part of the forces of production. To make the factory possible, the feudal relations of production must be broken. These relations are those existing between the feudal lord and the serf, where the serf is bound to a particular piece of land, and to service for a particular lord. The lord can exploit the serf by appropriating a portion of the production of this land, and by requiring the serf to work for a period on the lord's own land. Capitalism, and thus the bourgeois or capitalist class that seeks to take full advantage of the new technology, requires a labor force that is free to move between employers (according to the demands and motivations of a free labor market). Capitalist relations of production therefore center upon the market. The laborer is formally free to work for anyone willing and able to employ them, for a wage determined by the market. The capitalist will own, not just the means of production, but also the product of the labor that is exerted within their factories. The capitalist is free to dispose of this product as they wish (again, at a price largely determined by the market in consumer goods). Exploitation of the subordinate class is now concealed within the exchanges made on the labor and commodity market, all of which are superficially fair.

The value paid to the laborer as a fair and mutually agreed wage for a given amount of labor is less than the exchange-value received by the capitalist in selling the product. (Exploitation therefore occurs through the appropriation of surplus-value.)

The transition between modes of production is violent (brought about through revolutions that are the overt manifestation of class conflict). This violence is necessitated by the inherently conservative or static nature of the relations of production, in contrast to the dynamic nature of the forces of production. Revolution occurs when a contradiction arises between the forces of production and the relations of production. This is to say that the existing relations of production are no longer adequate to exploit the productive potential of the forces of production. The dominant feudal class, and thus feudal relations of production, are seen as being incapable of making full use of industrial technology. The rising capitalist class is only able to develop the potential of industrial technology if it can first overthrow the feudal relations of production, in order to remove the feudal inhibitions on the expansion of a mobile and free labor force. Capitalist relations of production are thus seen to be somehow implicit in early industrial technology, and this implicit capitalism is in contradiction to the reality of the old feudal order.

Through appeal to the base and superstructure metaphor (and, in various forms of 20th-century Marxism, analyses of commodity fetishism and reification), Marxists may suggest that the economic elements of the mode of production (the economic base) have a determining influence over the legal and cultural aspects of society. If so, then different modes of production are not merely characterized in terms of different economic characteristics, but also in terms of different cultural characteristics (and most importantly, by the ideological mechanisms that are used to give legitimacy to the rule of the dominant class).

modern

2. Modern came into English from French moderne, Latin modernus, from the Latin root modo, 'just now.' Its earliest English senses were nearer our contemporary, in the sense of something existing now, just now. (Contemporary, or, until the mid-19th century, the equivalent co-temporary, was used, as it is still often used, to mean 'of the same period,' including periods in the past, rather than 'of our own immediate time.') A conventional contrast between ancient and modern was established before the Renaissance; a middle or Medieval period began to be defined from the 15th century. Modern in this comparative and historical sense was common from the late 16th century. Modernism, modernist and modernity followed, in the 17th and 18th centuries; the majority of pre-19th century uses were unfavorable, when the context was comparative. Modernize, from the 18th century, had initial special reference to buildings (Walpole, 1748: 'the rest of the house is all modernized'); spelling (Fielding, 1752: 'I have taken the liberty to modernize the language'); and fashions in dress and behavior (Richardson, 1753: 'He scruples not to modernize a little'). We can see from these examples that there was still a clear sense of a kind of alteration that needed to be justified.

The unfavorable sense of modern and its associates has persisted, but through the 19th century and very markedly in the 20th century there was a strong movement the other way, until modern became virtually equivalent to improved, or satisfactory or efficient. Modernism and modernist have become more specialized, to particular tendencies, notably to the experimental art and writing of c.1890-c.1940, which allows a subsequent distinction between the modernist and the (newly) modern. Modernize, which had become general by the mid-19th century (cf. Thackeray (1860): 'gunpowder and printing tend to modernize the world'), and modernization (which in the 18th century had been used mainly of buildings and spellings) have become increasingly common in 20th-century argument. In relation to institutions or industry, they are normally used to indicate something unquestionably favorable or desirable. As catchwords of particular kinds of change the terms need scrutiny. It is often possible to distinguish modernizing from modern, if only because (as in many such actual programs) the former term implies some local alteration or improvement of what is still, basically, an old institution or system. Thus a modernized democracy would not necessarily be the same as a modern democracy.

modernity

1. The precise meaning of the concepts of 'modernity' and 'modernism' depends, very much, upon the context in which they originate and are used. Thus, the concept of 'modernity' typically implies an opposition to something, and particularly to an historical epoch that has passed and has been superseded. Thus, as derived from the Latin 'modernus' (and 'modo', meaning recently), modernity comes to characterize the Christian epoch (from the fifth century, in the writings of St Augustine), in contrast to a pagan past. This distinction is revised at a number of points throughout the European middle-ages and into the Renaissance. (The Renaissance, for example, as a modern age, was initially understood in opposition to the preceding 'middle' ages, but not to the now revalued pagan epoch (or antiquity).) In the 17th and 18th centuries, modernity came to be associated with the Enlightenment. This entailed a revision of the historical understanding of the present. The understanding of time and history in the Christian middle-ages, and even in the Renaissance, was shaped by the expectation, on the part of Christianity, of the imminent end of the world. The more secular Enlightenment presupposes that history will unfold into an open, possibly limitless future. In addition, technological and industrial development, with associated social change, became visibly more rapid during this period. As such, modernity ceases to be merely that which is most recent or new, and now becomes that which is most progressive. Thus, the contemporary social theorist Jürgen Habermas can still defend the 'unfinished project' of modernity. Such a project suggests that modernity has not merely technological, but more importantly political and moral goals (particularly in the emancipation of humanity from the superstitions and unquestioned authority of the past). In this context, 'modernism', in its contemporary meaning, can be seen to emerge in the political revolutions of 1848.

In sociological thinking, modernity is typically placed in contrast to traditional, and therefore pre-industrial, societies. Sociology, as a discipline, emerges in the theorization of modernity in this sense. In the work of Emile Durkheim, at the close of the 19th century, contemporary modern society is contrasted, in terms of its complex division of labor and greater sense of individual identity and separateness, with the mechanical solidarity of pre-industrial societies. The German social theorist Tönnies similarly distinguished the integrated and homogeneous ‘community’ of pre-industrial society from the fragmentation, isolation and artificiality of modern ‘society.’ In the work of Max Weber, the development of modernity is linked to increasing rationalization in all aspects of social life. This rationalization entails that all social activities (from the economy, through law and political administration, to architecture and music) are subject to scrutiny in order to determine the most instrumentally efficient means of achieving their goals. In these accounts, modernity is never purely a good thing. The idea of modernity as simple, unambiguous progress is thrown into question, as the problems and tensions of existence in modern society are thrown into relief (from Durkheim’s anomie, through Marxist theories of alienation, to Weber’s iron cage of bureaucracy that curtails individual and political freedom and spontaneity).

In the arts and other areas of culture, modernism may be taken to refer to the development of a more self-reflective art form towards the end of the 19th century. Thus, in 1863, the poet Baudelaire writes of the French painter Constantin Guys in an essay significantly entitled ‘The Painter of Modern Life.’ However, modernism in painting is typically tied to Edouard Manet (1832-83), and under his influence, the development of Impressionism. Crucially, in this work, the conventions of realist art are thrown into question. The artist’s concerns therefore shift away from the overt subject matter of the painting, to the process of painting itself. (As the composer Schoenberg once remarked, painters do not paint trees, they paint paintings.) Similar shifts can be seen in music (with the break from the conventions of tonality at the beginning of the 20th century, for example in the work of the Second Viennese School) and in literature (as the conventional narrative of the realist novel is questioned by such figures as Proust and Joyce). Yet it may be suggested that an increasing interest in the techniques of the artistic medium itself, or in form, is only one aspect of modernist art. This emphasis on form serves to separate the art work from anything outside art (culminating, not merely in the practice of Abstract Expressionist painting, for example, but also more importantly in the way in which that work is theorized and defended by such critics as Clement Greenberg (1992) and Michael Fried (1992)). In contrast, much art that can be fairly described as modernist shows a greater commitment to political and social change, or an engagement with the project of producing an art that is appropriate to contemporary (modern) social life. Thus, futurism, for example, sought to celebrate the achievements of an industrial age, and the power and speed of modern technology. Modern architecture, for example in the work of Le Corbusier and the Bauhaus, sought a building design and urban planning that was appropriate to a rational age, stripped of the conventions and ornaments of the past.

Modernism in art and architecture tended to be characterized by an elitism and insularity that made it unpalatable to a wider public. The crisis of modernism comes as its aspirations to universalism (and thus its tendency to dictate, from a privileged position, what culture and architecture should be) are revealed as concealing a closure against the many alternative voices that had in fact been excluded from modernist developments (see postmodernism).


3. Modernity refers to a historical period which began in Western Europe with a series of cultural, social and economic changes during the 17th century, and it is usually characterized by three features: first, culturally, a reliance on reason and experience conditioned the growth of science and scientific consciousness, secularization and instrumental rationality; second, as a mode of life it was based on the growth of industrial society, social mobility, market economy, literacy, bureaucratization and consolidation of the nation-state; and third, it fostered a conception of the person as free, autonomous, self-controlled and reflexive. Opposed to traditional forms of thought and life, modernity can be conceptualized as a mode of social and individual experience that is shared by many men and women all over the world due to the expansion and prestige of scientific inquiry, technological innovation, political models of democracy and nation-state boundaries, and the subjective drive for self-development. Modernity is inherently globalizing. Giddens has argued that the globalizing tendencies of modern institutions are accompanied by continuous changes in the perception of the self and redefinitions of identities. From this perspective, modernity is more than a historical product; it is an uncompleted program that can still play a very creative role in present-day societies. Modernity implies an openness towards a determinate future characterized by progress, social stability and self-realization.

According to Berman, to be modern is to live in an environment that, at the same time, promises adventure, growth, joy, power and transformation of the self and the world, and threatens to destroy everything we have and everything we are. Modernity is a paradoxical unity which has constantly been on trial. The paradoxes of modernity are closely related to the discontinuities between the growth of reason, the logic of industrialism, the power of the nation-state, and the personal quest for freedom and self-realization. Bauman has pointed out that modernity would be better defined by the consciousness of a universal order that allows no place within the boundaries of the nation-state for strangers, diversity and tolerance. In his perspective, the Holocaust is a product of modernity. Foucault has analyzed the emergence of disciplinary powers (in psychology, penology and sexology) in circumstances of modernity. Wagner argues that the modern project is unable to reconcile its conflicting commitments to liberty and to discipline, and that its history is characterized by the coexistence of the discourses of liberation and control.

It has also been stated that in no contemporary society have religion or collective identities disappeared, and Mingione has shown that reciprocal obligations between individuals continue to be recognized in modern societies. Family or religious ties, ethnic solidarity, and gender and sexual identities play a very important role in the processes of allocating power and distributing resources. In contemporary societies, western and non-western, the realization of the self is often accompanied by loyalties, formerly seen as pre-modern, to different communal groups. See also modernization, postmodernism.


modernization

3. In academic development economics and related disciplines, and also in actual public policy on development, the word modernization slips and slides, alludes and obtrudes, both as a key or code term as well as a perfectly ordinary word meaning updating, upgrading, renovation, reconstruction or stabilization in the face of adverse social, physical or economic structures. In this ordinary usage, sometimes a particular history, political approach or ideology is intended, sometimes not. Often all that is meant is professionalism, rationality, planning or progress in general. Where no particular history or episode of development is taken to be at issue, probably any implied allusion to, say, the Russian debate about industrialization, or peasant participation in policy, will be sovietized, sanitized, populist perhaps, and certainly depoliticized. Where some particular historical reference is intended, such as to congeries of changes which included economic and demographic developments in western Europe from the 16th to the 19th centuries, unfortunately there is similarly likely to be much ellipsis and little historiography. As a result, the model matters alluded to tend in this literature to be more misunderstood than understood. For instance, 'industrialization' as in 'western industrial revolution' will be bandied about as if, for example, English, French and Dutch history in this regard had been the same, as if the Rochdale pioneers in the co-operative movement had not been non-agricultural, not engaged in political protest against a regime from which they felt excluded, not an urban class or class segment with a distinctive religious zeal.

Turning now to its other sense, before modernization as a technical and emotive key or code term or discourse emblem can do for one what it does already for others, some special initiation may be necessary. For example, modernization as a policy remedy for rural or some other backwardness problem may be proposed essentially as an alternative paradigm or option to another policy remedy: self-reliance (understood in a special sense) is the answer for another policy problem, dependency. Modernization theory and dependency theory are constantly pitted in the development studies literature as exclusive and hostile rivals. In development economics since Bretton Woods, this has served indeed as the principal polarization in this literature. Undoubtedly there are some striking contrasts between them with regard, for example, to the consequences for international relations, with each favoring recourse to its own pivotal terms about development problems and solutions. Exponents of modernization will preach dualism, diffusion of innovations, economies of scale, development administration, human resources development, financial and foreign aid. Believers in dependency theory will talk about core and periphery, world-system, unequal exchange, small is beautiful, delinking, or adjustment. Yet there are also some equally important, if seldom identified, similarities. Both schools of thought adopt comparable concepts of what one calls traditional, and the other pre-capitalist society and economy. Both are preoccupied with crises and turning-points and stages of development. Both put a heavy stress on First and Second World determinisms on the Third (and Fourth). Both tend to prefer structuralist analyses and to look for structural change.

Neo-classical economic studies of growth and development say they are or ought to be unadulterated by sociological, political and other non-economic variables. Is modernization neo-classical in the way in which dependency theorists often claim it is (and dependency is not)? Much will depend on the degree to which distinctions are drawn in each as regards dogma and actual practice, and one area or sector compared with another. Lack of stated institutional (as in institutional economics, comparative social institutions, and so on) analysis is not necessarily and equally a matter of implicit default as well, at least to the same extent or form. In modernization (and dependency) theory and practice, some institutional analysis goes - erroneously or otherwise - by omission as well as commission. For instance, nothing could be more institutionalist - and attitudinalist - than modernization's (and again dependency's) ideas of traditional society and economy (and underdevelopment). There is none the less much useful truth in the complaint that - in its coded sense - modernization 'crudely foreshortens the historical development of society ... is a technocratic model of society, conflict-free and politically neutral [which dissolves] genuine social conflict and issues in the abstractions of "the scientific revolution" [and] "productivity" [presuming] that no group in the society will be called upon to bear the costs of the scientific revolution'. Modernization (like dependency yet again, so really there is very much similarity indeed) tends to self-correct its policies in the light of its disappointments with the actual development record as it unfolds: that is, it self-adjusts within its own shell of epistemological and other assumptions as further challenges present themselves. Thus, unfortunately, the historical perspectives and changes in development studies are seldom those of the economies and policies to which they say they are addressed.
So, and again as with dependency no less, modernization can often be best understood not as a particular development - or development theory or method for the study of development and development theory - but rather as a recurring pattern of perennial speech about such development, theory and method, and would-be practical action. In many development studies and policies this tends to be discourse about solutions which are more likely to be in search of, than for, problems. Whose discourse is this? On the whole this is the perennial speech of modernizing elites as well as about modernizing elites (neither of which, as most notably in Iran, might on empirical investigation turn out in effect to be modernizing). These are the writers and actors who align their own best efforts with state-building, but in the name of nation-building. See also economic development, modernity, underdevelopment.

multicultural education

3. 'Multicultural education' began as an educational reform movement in the USA during the civil rights struggles of African Americans in the 1960s and 1970s. Substantial societal changes such as the integration of public schools and an ever-increasing immigrant population have had a profound impact on educational institutions. As educators struggle to explain the disproportionate failure and dropout rates of students from marginalized ethnic groups, some have proposed that these students lack sufficient cultural knowledge for academic success. However, many multicultural theorists attribute school failure to institutional inequities that create obstacles to marginalized youths' academic success.

Banks has described the evolution of multicultural education in four phases. First, there are efforts to incorporate ethnic studies at all levels of the curriculum. Second, this is followed by multi-ethnic education, an attempt to establish educational equity through the reform of the entire educational system. Third, other marginalized groups, such as women, people with disabilities, and gays and lesbians, begin to demand fundamental changes in educational institutions. The addition of various groups with differing needs and agendas has resulted in a myriad of theoretical focuses. However, during the fourth phase of theory development, research and practice, attention to the interrelationship of race, gender and class has resulted in a common goal for most theorists, if not practitioners, of multicultural education. This reform movement seeks nothing less than the transformation of the schooling process and educational institutions at all levels so that all students, whatever their race or ethnicity, disability, gender, social class or sexual orientation, will enjoy equal opportunities to learn.

Most proponents of multicultural education agree that their goal is an education that is anti-racist; attends to the basic skills and knowledge necessary to world citizenry; is important for all students; is pervasive throughout all aspects of the education system; develops attitudes, knowledge and skills that enable students to work for social justice; is a process in which staff and students together unlearn anti-democratic attitudes and behaviors and learn the importance of cultural variables for academic success; and employs a critical pedagogy that emphasizes the social construction of knowledge and enables students to develop skills in decision making and social action. They share also a view of the school as a social system of interrelated components such as staff attitudes and actions, school policy and politics, school culture and hidden curriculum, student learning styles, assessment and testing procedures, instructional materials, formalized curriculum of study, teaching styles and strategies, languages and dialects of the school, and community participation. Multicultural education, then, extends beyond reform of the curriculum to the transformation of all components in the system. See also ethnic politics.

multinational enterprises

3. A multinational enterprise owns and controls productive activities located in more than one country. It owns the outputs of these activities even though it may not own the assets used: these may be hired locally in each country. The multinational does not necessarily transfer capital abroad; finance can often be obtained locally as well. The multinational is thus, first, an international producer, and only second, a foreign investor.

The activities of the multinational enterprise form an integrated system; they are not usually a mere portfolio of unrelated operations. The rationale for integration is that managerial control within the enterprise coordinates the activities more profitably than would arm's length contractual relations. The antecedents of the modern multinational enterprise are found in the late 19th century, in British direct investments in the colonies, and in the merger movement in the USA from which the modern corporation evolved. In the interwar period, multinational operations focused upon backward integration into minerals (especially oil). Horizontal integration was effected through international cartels rather than multinational firms. After the Second World War, many US enterprises began to produce in Western Europe, particularly in high-technology industries producing differentiated products. They transferred to Europe new US technology, together with improved management and accounting practices, and the experience of selling to a multicultural market of the kind that was developing within the European Community. In the 1970s European firms began to produce in the USA on a larger scale than before, often in the same industries in which US firms were producing in Europe. At the same time, Japanese firms began to produce abroad on a large scale in low-wage South-East Asian countries, particularly in low-technology industries such as textiles.

The value added by some of the world's largest multinationals now exceeds the gross national products of some of the smaller countries in which they produce. However, there are increasing numbers of very small multinational firms: not all multinationals conform to the popular image of the giant corporation.

Multinational operations provide firms with a number of benefits in addition to the operating economies afforded by integration. Intermediate products transferred between the parent company and its overseas subsidiaries - or between one subsidiary and another - can be valued at transfer prices which differ from those prevailing in arm's length trade. The transfer prices can be set so as to minimize ad valorem tariff payments, to reallocate profits to subsidiaries in low-tax countries, and to allow the enterprise to bypass exchange controls by disguising capital transfers as income. Transfer prices are particularly difficult for fiscal authorities to detect when the resources transferred are inherently difficult to value: this is particularly true of payments for technology and management services, which are very common in firms in high-technology industries. Reliable evidence on transfer pricing is difficult to obtain, though there are some proven instances of it.

Multinational operations also give the enterprise access to privileged information through membership of producers' associations in different countries, and enable it to co-ordinate internationally the lobbying of government for favorable changes in the regulatory environment. Multinationals are often accused of enlisting the support of powerful governments in the pursuit of their interests in foreign countries, though once again reliable evidence is difficult to obtain. The United Nations actively monitors the behavior of multinationals through its Centre on Transnational Corporations. See also globalization, international trade.

myth

1. 'Myth' is a term that has a number of subtly interrelated meanings. At its most fundamental, a myth is a (typically anonymous) narrative about supernatural beings. The importance of the myth lies in the way in which it encapsulates and expresses beliefs and values that are shared by, and definitive of, a particular cultural group. Thus, a myth may explain the origin of the group (or of the world in general), the place of that group in the world, and its relationship to other groups, and illustrate or exemplify the moral values that are venerated by the group. Mythology has been subject to various theoretical approaches.

In psychoanalysis, mythical themes are typically treated as expressive of universal psychic conflicts (with the Oedipus Complex being the most famous example). Through an extensive study, not just of mythologies, but also dreams, religion and art, Jung developed his account of archetypes as the basic and universal formative processes that structure mythologies. In functionalist approaches to cultural anthropology, myths are explained in terms of the needs they meet in the reproduction and stabilization of society. Thus, by encoding group norms, a mythology serves to strengthen the cohesion and integrity of the society. In Durkheimian sociology, mythology may be seen to be expressive of the collective conscience, that is to say, the norms and beliefs into which individuals are socialized, and that serve as the cement that holds together both pre-industrial and industrial societies. Something akin to this understanding of myth, as that which binds and motivates a group, is found in Reflections on Violence, by the French Marxist theorist Georges Sorel (first published in 1907). Sorel treats accounts of contemporary political and social events as potential myths (notably in the example of the general strike). Such myths are necessary to evoke sentiments that would serve to motivate mass political action. This echoes, in a revolutionary manner, Plato’s conservative account of noble lies. In his utopian republic, individuals will be motivated to keep their place in society, thanks to a mythology of metals in the soul. The dominant guardians have gold in their souls, while the warrior class has silver, and the artisans iron. The social and political relationship between groups is thereby expressed in a fictional account of natural differences.

In Lévi-Strauss’s structuralist anthropology, inspired by Saussure’s semiology, myths are treated as sign-systems. While myth is still important as the medium through which cultures reflect upon the tensions of social existence, for Lévi-Strauss, the appropriate way to analyze myths is as a surface expression of an underlying deep structure (akin to Saussure’s langue). On one level, his four-volume Mythologies recounts in faithful detail a vast array of myths from the anthropological literature. On another level, the study attempts to identify the rules that govern the transformation from one myth to another. The semiological approach to myth is taken up by Roland Barthes (1973), particularly as a tool to analyze a wide range of images and activities in contemporary culture.

Barthes’ analysis works as follows. A sign is understood to have both denotative and connotative orders. It denotes by pointing or referring to something in the world. Thus, a photograph of a family denotes two adults (a mother and a father) and, let us say, two children. As connotation, the sign expresses or alludes to certain, culturally specific, values. The precise values involved will depend both upon the culture within which the sign is produced and interpreted, and the way in which the sign is presented. Thus, our family photograph could be brilliantly lit, emphasizing bright colors and a sunny day. The photograph would then connote the contentment and security associated with family life. Conversely, a bleak, black and white photograph might express the pressures of family life and the tensions between generations. Mythology builds upon this structure of denotation and connotation. As myth, the sign gives concrete and particular expression to abstract concepts, through which we make sense of a particular social experience. Thus, when we look at a photograph, it does not merely evoke values of which we are consciously aware, but also values or ideas that are so taken-for-granted that we remain unaware of our own attention to them. Our photographs of the family then evoke myths of family life. These may be the myth of the harmonious heterosexual family and the benefits of marriage to the social and moral order (for our color photograph), and the myth of the decline of family life (for the black and white one). The photographs work as mythology precisely in so far as they immediately lend support to taken-for-granted and oversimplified beliefs. The belief leads to a certain understanding of the photograph, and the photograph reinforces the veracity of that belief. The mythical beliefs transform complex cultural processes into apparently natural, unchangeable and self-evident ones. (The association with Plato's noble lies, where the cultural becomes natural, is worth noting.)




1. In its modern sense, a political community is differentiated from other such communities by virtue of its autonomy with regard to its legal codes and governmental structures, head of state, boundaries, systems of military defense, etc. A nation-state likewise has a number of symbolic features which serve to present its identity in unified terms: a flag, national anthem, a popular self-image, etc. It is worth noting that the nation-state is not synonymous with the possession of nationhood. In the 19th century, nationalistic struggles to achieve the political autonomy of a nation-state were mounted by nations which did not possess political autonomy (e.g. the Italian states, or the unification of the German states under the leadership of Prussia in 1871). Likewise, today there are nations which do not necessarily have an accompanying status of statehood (e.g. Wales and Scotland in the UK). From this it follows that what a nation-state is cannot be determined with reference to such notions as nationality, ethnicity, culture, or language. It is, rather, the political, social and economic modes of organization which appear fundamental in this matter: nation-states have political autonomy, different norms and codes with regard to their systems of social relations, and a relatively independent economic identity.


1. Nationalism presents itself not simply as a political phenomenon, but also as a matter of cultural identity. As such, any conception of the nation to which it refers must take account of ethnic, historic and linguistic criteria, as well as political notions such as legitimacy, bureaucracy and presence of definable borders. Nationalists make a number of specific claims for the nation, which vary in relative significance according to the particular historical situation. A primary argument is that the nation has a right to autonomy, and that the people of the nation must be free to conduct their own affairs. As a corollary to this autonomy, nationalists presuppose (or demand) that the members of the nation share a common identity, which may be defined according to political or cultural (ethnic, linguistic) criteria. This notion of identity may be extended to create a sense of unity of purpose, whereby the projects of individuals are subsumed within the projects of the nation.

Nationalism thus defined is a modern phenomenon, becoming prevalent towards the end of the 18th century. Despite the existence of similar ideas in ancient times, the development of nationalism is concomitant with the development of the modern state, primarily in Europe and North America. The dates of the American Declaration of Independence (1776) and the French Revolution (1789) are frequently cited as marking the beginning of nationalism. Its roots as an intellectual movement are nonetheless vague; although steeped in the Enlightenment tradition of Rousseau and Herder, nationalism’s appeal to an authentic existence based on a return to a shared cultural heritage has much in common with the themes prevalent within Romanticism and the writings of Fichte and Hegel. Analytical study of nationalism as a political force had to wait, however, until the latter half of the 19th century, and it was not until the post-colonial era that scholarly interest became widespread.

Given the disputed nature of the nation in political and cultural theory, it is hardly surprising that a universally accepted theory of nationalism remains elusive. In particular, theorists remain divided over the relative importance of nationalism’s political and cultural dimensions. Ernest Gellner’s definition of nationalism as a ‘political principle, which holds that the political and national unit should be congruent’ is an example of a position stressing the former aspect, whereas so-called ‘primordialists,’ exemplified by the anthropologist Clifford Geertz, argue that nationalism stems from patterns of social ordering deeply embedded in all ethnic psyches. By contrast, Eric Hobsbawm and Elie Kedourie have proposed that nationalism is an invention on the part of social elites which fails to address the arbitrary and contingent formation of nations, instead positing invented traditions which thence constitute a superficial cultural heritage. In addition, scholars are divided as to whether a distinction can be made between ‘good’ and ‘bad’ nationalism (patriotism and chauvinism). Despite disagreement concerning its nature, however, nationalism remains a potent ideology in contemporary society, and its popularity appears to have diminished little in the face of potential threats such as globalization, mass communication and multi-national institutions.

3. Nationalism is the belief that each nation has both the right and the duty to constitute itself as a state. There are many difficulties in specifying what a nation is – in Europe, for example, the candidates range from the Welsh and the Basques to Occitanians and Northumbrians - but some common culture is indispensable and a shared language highly desirable. The Swiss have so far got by without a common language, but its lack has sorely tried the rulers of Belgium. Nationalist theory usually attributes conflict to cross-national oppression, and thus offers a promise of world peace when self-determination has become a global reality.

Nationalism emerged in the hatred of cosmopolitanism which registered the resentment of Germans and other Europeans who were coming to feel marginal in terms of the universalistic rationalism of the French Enlightenment. The romantic idea that true humanity must be mediated by a deep involvement in one's own unique culture led to an admiration for songs, poems, stories, plays and other creations understood as emanations of the national soul. The language of a people was accorded a unique value, no less as the medium of cultural self-expression than as a practical rule of thumb about how far the boundaries of a putative nation might stretch. The conquests of Napoleon turned these particularistic passions in a practical direction, and Fichte's Addresses to the German Nation delivered at Berlin in 1807-8 struck a responsive chord throughout Germany. Italy and Germany were both plausible candidates for state creation and both duly became states, though Italy remains imperfectly national to this day, while German unity owed more to Bismarck than to popular passion for nationhood.

The spread of nationalist ideas to Eastern Europe and beyond, where very different peoples were inextricably intertwined, was bound to create difficulties. Doctrinal diffusion was facilitated by the growth of industry, and of cities. Teachers, journalists, clergy and other intellectuals found in nationalist ideas an identity for the present and a vision for the future. Some set to work writing down languages previously purely oral; others constructed a literature and elicited a suitable history. Opera and the novel were favorite vehicles of nationalist feeling. The politics of these endeavors triumphed with the Treaty of Versailles in 1919, which settled Europe in terms of the principle of national self-determination.

Throughout Africa and Asia, nationalist ideas fuelled the campaigns to replace the old European empires with home-grown rulers, but since there were few plausible nations in this area, successor states which had been constructed on a variety of principles claimed freedom in order to begin the process of cultural homogenization which might lead to nationhood. Pakistan, based upon the religious identity of Islam, attempted to hold together two separated areas inherited from the British Raj, and could not be sustained in that form; the eastern region broke off as Bangladesh in 1971. The artificial boundaries of imperial Africa have, however, been a surprisingly successful container of the often chaotic mixture of tribes they contained, though virtually all have had to compensate for lack of homogeneity by centralizing and frequently tyrannizing governments.

Political scientists often find in nationalism an attractive form of explanation because it promises to explain the hidden causes of conflict between different ethnic groups. In this usage, nationalism is not a belief, but rather a force supposed to move people to both action and belief. Such a concept provokes a search for the conditions under which the force is triggered. The promise of this research program, like many another in political science, far exceeds the performance. Nationalism is better treated as a complex of ideas and sentiments which respond flexibly, decade by decade, to new situations, usually situations of grievance, in which people may find themselves.


2. Nation (from the word nation, French, nationem, Latin - breed, race) has been in common use in English from the late 13th century, originally with a primary sense of a racial group rather than a politically organized grouping. Since there is obvious overlap between these senses, it is not easy to date the emergence of the predominant modern sense of a political formation. Indeed the overlap has continued, in relation to such formations, and has led on the one hand to particularizing definitions of the nation-state and on the other hand to very complex arguments in the context of nationalist and nationalism. Clear political uses were evident from the 16th century and were common from the late 17th century, though realm, kingdom and country remained more common until the late 18th century. There was from the early 17th century a use of the nation to mean the whole people of a country, often in contrast, as still in political argument, with some group within it. The adjective national (as now in national interest) was used in this persuasive unitary sense from the 17th century. The derived noun national, which is clearly political, is more recent and still alternates with the older subject. Nationality, which had been used in a broad sense from the late 17th century, acquired its modern political sense in the late 18th and early 19th centuries.

Nationalist appeared in the early 18th century and nationalism in the early 19th century. Each became common from the mid-19th century. The persistent overlap between grouping and political formation has been important, since claims to be a nation, and to have national rights, often envisaged the formation of a nation in the political sense, even against the will of an existing political nation which included and claimed the loyalty of this grouping. It could be and is still often said, by opponents of nationalism, that the basis of the group's claims is racial. (Race, of uncertain origin, had been used in the sense of a common stock from the 16th century. Racial is a 19th century formation. In most 19th-century uses racial was positive and favorable, but discriminating and arbitrary theories of race were becoming more explicit in the same period, generalizing national distinctions to supposedly radical scientific differences. Racial was eventually affected by criticism of these kinds of thinking, and acquired both specific and loose negative senses. Racialism is a 20th-century formation to characterize, and usually to criticize, these explicit distinctions and discriminations.) It was also said that the claims were 'selfish', as being against the interests of the nation (the existing large, political group). In practice, given the extent of conquest and domination, nationalist movements have been as often based on an existing but subordinate political grouping as upon a group distinguished by a specific language or by a supposed racial community. Nationalism has been a political movement in subjected countries which include several 'races' and languages (as India) as well as in subjected countries or provinces or regions where the distinction is a specific language or religion or supposed racial origin. Indeed in nationalism and nationalist there is an applied complexity comparable with that of native.
But this is often masked by separating national feeling (good) from nationalist feeling (bad if it is another's country, making claims against one's own), or by separating national interest (good) from nationalism (the asserted national interest of another group). The complexity has been increased by the usually separable distinction between nationalism (selfish pursuit of a nation's interests as against others) and internationalism (co-operation between nations). But internationalism, which refers to relations between nation-states, is not the opposite of nationalism in the context of a subordinate political group seeking its own distinct identity; it is only the opposite of selfish and competitive policies between existing political nations.

Nationalize and nationalization were early 19th-century introductions to express the processes of making a nation or making something distinctively national. The modern economic sense emerged in the mid-19th century and was not common before the late 19th century, at first mainly in the context of the proposed nationalization of land. In the course of political controversy each word has acquired specific tones, so that it may be said without apparent difficulty that it either is or is not in the national interest to nationalize.

See ethnic, racial, status.


3. At the heart of the term nationalization is the act of converting a privately owned resource into one owned by the central government (or local government in the case of 'municipalization'). One might then ask how the use and development of the resource and the economic organization of production may be predicted to change. Instead of exploring this issue, many economists in both Europe and North America have taken an essentially prescriptive stance. 'What advice can one give about the use of the resources?' they have asked, invariably on the presumption that the managers, civil servants and ministers are disinterested recipients of that advice. Since no one would want to deny that resources should be used efficiently, economists have translated their own concept of efficiency into guidelines of behavior. Publicly owned industries should, as a first approximation, set user prices and extend the use of resources up to the point where the marginal cost of output equals price. The rationale for this is that no gains could then be made by switching resource usage in or out of the industry, since consumer valuation of the marginal dose of resources is just equal to its valuation in other activities. The implications of such a rule are quite striking, suggesting, for example, different electricity tariffs for different times of day, high fares and tariffs for transport, gas and electricity to high-cost rural areas, low fares and freight rates for bulky, long-distance rail journeys. Much work has been undertaken on the detailed implementation of these policy proposals, in terms of identifying short- and long-run marginal costs, demand elasticities and time-stream aspects of investment projects.
While many economists have not felt that the 'price equals marginal cost' rule should be modified to take into account questions of income distribution - on the grounds that the tax system is the way to handle that - they have not advocated the simple rule when spill-over effects exist or when information flows have been regarded as deficient. Health and education are therefore viewed as areas raising other considerations.

The forgotten question about how the use of resources would actually change under public ownership re-emerged in the 1970s, partly as a product of the growing influence of a persistent element in US economic thinking - the study of institutional behavior - and partly because the economists' policy prescriptions were either ignored or found too difficult to implement. The restriction of a private interest to the end of promoting a public interest can be achieved in a variety of ways. Such 'regulation' has a long history in Britain, embracing areas like the factory inspectorate and the control of private railway and fuel companies in the interwar period. The shift in the immediate post-1945 period to public ownership of strategic industries may itself be a reflection of the siege mentality of the 1930s and 1940s. Study of such issues is still awaited. Instead the main thrust of 'positive' theories has come from US thinking on the property rights characteristics of public firms. For example, one approach stresses that citizen-owners can dispose of their rights in publicly owned activities only by engaging in high-cost activities like migration or concerted political action. This is contrasted with private ownership, where each owner has the unilateral ability to buy and sell shares, an act viewed as a capitalization of the expected results of current management action. A significant wedge between owner and management therefore arises in public firms, the nearest approximation to which for private firms is the cost to owners of monitoring management behavior. In the former case the wedge permits scope for discretionary behavior by civil servants, management and politicians. The precise outcome in each public firm would depend on the way in which property rights are specified and the constraints on the various parties in the pursuit of their own utility maximizing position.
But the broad expectation is that productivity will be lower and unit costs higher in public than in private firms. Testing such theories is difficult, for when public firms have product monopolies there is no contemporaneous private firm to act as benchmark, and in the absence of monopoly one has to separate the effects of competition from the effects of ownership. Because of the wide variety of institutional types within many of its industries, the USA is proving a fruitful data source with comparisons between publicly owned firms (municipal rather than national) and private firms, some of which are regulated. The evidence on productivity and unit costs shows a very varied pattern, with public firms coming out better in electricity supply, private firms in refuse collection and water supply, and with no clear-cut differences in transport. Pricing structures in public firms seem unambiguously to be less closely geared to the supply costs of particular activities, though whether this is due to electoral influences, empire building or a disinterested pursuit of fairness is not yet clear. Little work has yet been done on explaining why some activities are taken into public ownership whilst others are not.



1. True knowledge that is, or should be, value-neutral. Thus, objective knowledge is knowledge of how things really are, as opposed to how they appear to be. In the natural sciences (e.g. physics) objectivity is an indispensable notion (with regard to the application of theories and, above all, their verification by experiment). Objectivity presupposes that there is a real, external world which is independent of our knowledge of it, and that it is possible to describe this world accurately. On this view of science, scientific methodology aspires to provide the rules whereby reality can be known (a variant of this can be found in positivism). Philosophers like Nietzsche have criticized this notion. For Nietzsche, there is no knowledge which is not interested knowledge, i.e. which does not have an interest in, and therefore does not presuppose some value with regard to, its subject-matter. Likewise, Foucault has taken a similar line. The implication of this attitude is that the aspiration to value objectivity above all else in knowledge is itself generated historically and culturally. (See also self, epistemology.)


    1. A concept that can be traced back to the work of Hegel, and to be found in a variety of approaches to epistemology, questions of cultural identity and psychoanalysis. Amongst others, treatments of this notion are found in the writings of Lacan, in Sartrean existentialism, Derridean deconstruction, and Edward Said’s analysis of the colonial European study of Oriental cultures, Orientalism (inspired in part by the thought of Michel Foucault). The term, not surprisingly, is highly ambiguous. In the context of theories of culture, perhaps the most prominent contemporary use of this notion has been made by Said. In these terms, the Other may be designated as a form of cultural projection of concepts. This projection constructs the identities of cultural subjects through a relationship of power in which the Other is the subjugated element. In claiming knowledge about ‘orientals’ what Orientalism did was construct them as its own (European) Other. Through describing purportedly ‘oriental’ characteristics (irrational, uncivilized, etc.) Orientalism provided a definition not of the real ‘oriental’ identity, but of European identity in terms of the oppositions which structured its account. Hence, the ‘irrational’ Other presupposes (and is also presupposed by) the ‘rational’ self. The construction of the Other in Orientalist discourse, then, is a matter of asserting self-identity, and the issue of the European account of the Oriental Other is thereby rendered a question of power.
Orientalism

3. Orientalism is the extension and application to anthropology of the work of the literary and cultural critic Edward Said. His Orientalism (1978) describes how westerners have understood the Middle East, classically called the Orient, and particularly the understanding developed by the academic discipline called Oriental Studies. As a generic term in anthropology, orientalism refers to distortions in the perception and analysis of alien societies that resemble the distortions that Said discerns in Oriental Studies. These distortions, critics say, are found frequently in anthropology.

One distortion is exaggerating 'the difference between the familiar (Europe, the West, "us") and the strange (the Orient, the East, "them").' This is found in ethnography that focuses on the exotic in the societies it describes. More subtly, it is found in theories or models that compare the west and the alien and that portray the alien as little more than a mirror-image of the western. Examples include the comparison of hierarchic (India) and egalitarian (western) societies and the comparison of gift (Melanesian) and commodity (western) societies.

A second distortion is treating a society as though it is an unchanging expression of some basic essence or genius, a distortion sometimes called 'essentialism'. Here, social and cultural practices and institutions are portrayed or understood as being 'what they are because they are what they are for all time, for ontological reasons that no empirical matter can either dislodge or alter' (Said 1978: 70). This refers particularly to anthropologists who seek to discern a stable and coherent social order that somehow simply inheres in the society being described as part of its essence. Closely related to this is a third distortion, portraying and analyzing a society as though it is radically separated from the west. This occurs especially in ethnography that ignores points of contact between the society and the west, and so ignores the colonial relations that may have existed between the two societies, as it also ignores western intrusions in that society. These are ignored because they do not reflect what is taken to be the true essence of the society involved. Although many condemn orientalism, it may be inescapable in anthropology. Groups commonly distinguish themselves from others by casting those others as exotic in some way, and there is no reason to expect that anthropologists are exempt from this tendency. Similarly, comparison is at the heart of anthropology, and to compare two societies is almost necessarily to stress their differences and slight their similarities, and to construe them in terms of fundamental attributes or essences.



    1. The term ‘patriarchy’ literally means the ‘rule of the father.’ It has been adopted by the majority of feminist theorists to refer to the way in which societies are structured through male domination over, and oppression of, women. Patriarchy therefore refers to the ways in which material and symbolic resources (including income, wealth and power) are unequally distributed between men and women, through such social institutions as the family, sexuality, the state, the economy, culture and language. While there is no single analysis of the workings of patriarchy, debate over its nature and historical development has been important in the development and differentiation of schools of feminist thought. A number of key issues can be identified in the theorizing of patriarchy. The relationship of male domination to biology was an early source of contention. While patriarchal structures may be found in all known human societies, the reduction of patriarchy to biological invariants, such as the roles of women and men in child-birth and nurturing, suggests that patriarchy is an essential and unchangeable natural relationship. Feminism tends, rather, to argue that patriarchy is, at least, the cultural interpretation of those natural relationships, if not itself wholly cultural. Psychological, and especially psychoanalytic, theories may locate patriarchy in the early socialization of the child (and especially the break of the child from the mother at the Oedipal stage). Feminist responses to Lacanian psychoanalysis, from for example Kristeva and Cixous, are significant in seeing dominant culture, language and reason (Lacan's 'symbolic') as inherently patriarchal. They therefore seek to recover a pre-patriarchal stage, expressed in an écriture féminine, through which women can articulate themselves to themselves outside the distortions of male language.
The relationship of patriarchy to other forms of oppression, such as class and race, receives diverse theorization. Questions include that of the primacy or otherwise of patriarchy over other forms of domination, and the way in which different forms of domination may interact and reinforce each other. Thus, socialist feminists have typically sought to link patriarchy to class exploitation (Barrett 1980). The importance of race and ethnicity has indicated that a potential flaw in an all-encompassing theory of patriarchy is that it remains indifferent to divisions between women. The exploitation and domination of all women is not alike, and women cannot therefore be theorized as a single, homogeneous group.
3. Patriarchy, literally meaning the rule of the father, is a term which has been widely used in a range of contrasting accounts which seek to describe or explain the conditions of male superiority over women. What is not always understood is that the different uses of the term reflect differing understandings of the relationship between nature and culture in the organization of social life.

The modern history of this term starts with the lawyer Henry Maine's Ancient Law (1861), which argued that 'The patriarchal family was the fundamental and universal unit of society.’ Like many of his contemporaries, Maine defined human society as society with law, and saw legality as being historically founded on the authority which fathers exercised over their families.

Maine was quickly challenged by evolutionary theorists (influenced by Darwin), especially Bachofen (1861), McLennan (1865) and Morgan (1877), who claimed that modern society developed through a succession of stages from nature to culture. This contradicted Maine's view that human organization had been fully social from its beginnings.

According to the evolutionists, the earliest stage of human organization was matriarchy, based on biological links between mother and child rather than social links with the father (patriarchy), which was a later and more advanced stage.

The idea of patriarchy as a vital developmental stage can be seen in the social theory of Marx, Engels and Weber, and in the psychoanalytic theory constructed by Freud. Engels's (1884) writing focused on the connection between private property, the patriarchal family and the origins of female oppression. Patriarchal household heads controlled women as the reproducers of children. Thus, in the tradition of Morgan, Engels saw women's social position, unlike men's, as structured by their physical nature. Engels's account provided the framework for Marxist feminist critiques of patriarchy. However, a continuing tension developed between Marxist historical materialism, which insisted that a change in class relations would free women of their oppression, and the implications of Engels's biologically based account, which inadvertently introduced the possibility that it would not.

Attempts to resolve this contradiction have not only included the development of dual systems theory, but have also led some Marxist feminists to reject the use of the term patriarchy altogether.

Whereas the Marxist approach argues that material structures determine relations between men and women, radical feminists reverse the equation. For them, patriarchal values structure relations between the sexes and these inequalities of gender become paradigmatic of all other social inequalities and are not reducible to any other causes. However, although this view of patriarchy is a social explanation of gender oppression, it also tends, despite itself, to take for granted a natural distinction between men and women due to its central focus on an antagonistic gender dichotomy.

In these debates, the question constantly being asked was whether the oppression of women was universal and whether it was natural. Because of its cross-cultural perspective, anthropology had always potentially offered a critique of assumptions that relations between men and women are everywhere the same. However, it was not until the 1970s that the discipline began to engage with feminist perspectives, and began to shift its focus away from kinship and towards gender. Drawing on ethnographic evidence from outside Europe, anthropologists increasingly suggested that the apparently obvious biological differences between men and women did not necessarily account for, or directly explain, the very many different ways in which relationships between the sexes can be envisaged and enacted. Non-western societies do not necessarily make a strong dichotomous distinction between male and female based in biology, nor do they necessarily oppose nature to culture. The concept of patriarchal domination may therefore seriously misrepresent the complexity of sexual relations and gendered identity, both outside and within the west. Writing from the mid-1980s to the 1990s has therefore moved away from the question of the causes of patriarchy to a comparative ethnography of the different components of gendered identities, including, for example, race. See also feminist theory, gender and sex.


peasant

2. Peasant came from paisant, French, root word pagus, Latin – country district, whence in another development pagan. It was in common use in English from the 15th century, often distinguishable from rustic (rusticus, Latin – countryman, root word, rus, Latin – country) in that peasant usually meant working on the land as well as living in the country. The collective noun peasantry came in the 16th century. Peasant continued in its traditional sense in English until the 20th century, though increasingly in literary usage only. The social and economic transformation of English agriculture, from the 16th to the 19th centuries, created a special difficulty in uses of the word. The class of small working landholders in feudal or semi-feudal relationships to landowning aristocracy, as found in pre-revolutionary France or Russia, and often described by this primarily French word, had virtually ceased to exist in England by the late 18th century, and had been replaced by the new capitalist relationships of landlord, tenant and laborer. Cobbett, in Political Register, LXX, c. 695 (1830), noted ‘the "peasantry," a new name given to the country laborers by the insolent boroughmongering and loan-mongering tribes.’ From this period, in English, peasant and peasantry have been either declining literary words or, in social description, in effect re-imports from other languages, mainly French and Russian. There has also been a specialized use, again imitated from French and Russian, in which peasant is a loose term of abuse – in English usually very self-conscious and exaggerated – of ‘uneducated’ or ‘common’ people. At the same time, in descriptions of other societies and especially of the Third World, peasantry still carries a major sense, of a distinct social and economic group, and peasant has, in some contexts, been given both descriptive and heroic revolutionary connotations.

popular culture

1. A simple definition of the term 'popular culture,' as the culture that appeals to, or that is most comprehensible by, the general public, may conceal a number of complexities and nuances of its use within cultural studies. The term is frequently used either to identify a form of culture that is opposed to another form, or as a synonym or complement to that other form. The precise meaning of 'popular culture' will therefore vary, for example, as it is related to folk culture, mass culture or high culture. In addition, popular culture may refer either to individual artifacts (often treated as texts) such as a popular song or a television program, or to a group's lifestyle (and thus to the pattern of artifacts, practices and understandings that serve to establish the group's distinctive identity).

Theories of mass culture (that were dominant in American and European sociology in the 1930s and 1940s) tended to situate popular culture in relation to industrial production, and in opposition to folk culture. While folk culture was seen as a spontaneous production of the people, mass society theories focused on those forms of popular culture that were subject to industrial means of production and distribution (such as cinema, radio and popular music) and theorized them as being imposed on the people. The approach therefore tended to assume that the audience were passive consumers of the goods foisted upon them. The message and purpose of these goods were interpreted within the context of a more or less sophisticated theory of ideology, so that the mass of the people were seen to be manipulated through the new mass media. Perhaps the most sophisticated version of this approach is found in the Frankfurt School's concept of the culture industry.

With the development of the sociology of the mass media and of cultural studies from the 1950s onwards, not least with the work of Hoggart and the Birmingham Centre for Contemporary Cultural Studies, the consumers of popular culture came to be seen as increasingly active, and the process by which the message of popular culture is communicated as increasingly complex. The activity of the people can be identified at two levels. On the first, the people are identified as the producers of popular culture (so that popular culture becomes the folk culture of an industrial society). On the second, more sophisticated level, the people are the interpreters of this culture. Thus, using, for example, a theory of hegemony, the propagation of mass culture cannot be seen as simply inflicting a message on the audience, despite the use of industrial production and distribution techniques. Rather, the audience will interpret, negotiate and appropriate the cultural artifacts or texts to its own uses, and make sense of them within its own environment and life experience. Precisely in so far as more sophisticated (and especially semiological and structuralist) approaches to communication emphasized the fact that the interpretation of a message can never be self-evident, the audience came to be credited with greater interpretative skills, and thus with the ability to resist an interpretation of the culture that is simply in the interests of the dominant class. The analysis of women's magazines, for example, may at once recognize the systems of codes and other mechanisms that integrate the reader into a particular ideological construction of femininity (and thus into particular patterns of commodity consumption), but also the space that the magazine opens up in which the reader can enjoy and indulge in this construction and yet see through it as a fiction. Thus, popular culture may be understood in terms of ideological struggles, and as a central element in any cultural politics.

Popular cultural artifacts serve to articulate the differentiation of society in terms of gender, age or race, and to constitute the self-understanding of those groups. Popular music for example has a key role in articulating the gender, class and ethnic identities of teenagers (and indeed in constituting the 'teenager' as a distinctive age group). However, precisely because much popular culture continues to depend upon the resources of industrial capitalism for its production and distribution, a tension remains in the selection of popular cultural products between the interests of capitalism (even if these are the purely commercial interests of profit maximization) and the cultural and political interests of the consumers. Fiske, for example, distinguishes between the financial and the cultural economies within which cultural artifacts circulate. While the former is concerned with the generation of exchange-value, and thus with the accumulation of wealth and the incorporation of the consumer into the dominant economic order, the latter is concerned with the production of meanings and pleasures by, and for, the audience. Precisely because the production of meanings within the cultural economy is not as readily controlled as is the production of wealth, the audience, as producer of meanings, is credited with considerable power to resist the financial forces of incorporation. Popular culture is therefore seen by Fiske as a key site of resistance to capitalism.


postcolonialism

1. A term generally used to indicate a range of global cultural developments which occurred in the aftermath of the Second World War. To this extent, it has both historical nuances and theoretical ones. On the one hand, 'postcolonialism' signifies something distinctive about this period as one in which the cultural, economic and social events which have constituted it mark the decline of European imperialism. On the other hand, theories of 'postcoloniality' concern themselves with a wide range of metaphysical, ethical, methodological and political concerns. Issues which are addressed from this perspective include the nature of cultural identity, gender, investigations into concepts of nationality, race and ethnicity, the constitution of subjectivity under conditions of imperialism and questions of language and power. One of the earliest writers who brought attention to such issues was Frantz Fanon (1925-61), who sought to articulate the oppressed consciousness of the colonized subject. He argued that imperialism initiated a process of 'internalisation' in which those subjected to it experienced economic, political and social inferiority not merely in 'external' terms, but in a manner that affected their sense of their own identity. Hence, material inferiority creates a sense of racial and cultural inferiority. In turn, Fanon attempted to show the role of language within this process. Colonization, he argues, also took place through language: under French domination the Creole language is rendered 'inferior' to French, and the colonized subject is compelled to speak the tongue of his or her imperial rulers, thereby experiencing their subjugation in terms of their own linguistic abilities and identity (an experience, it might be added, not uncommon within the context of Europe itself, e.g. the colonial experiences of Irish and Welsh cultures under the dominion of English expansion since the 16th century).

In the wake of the work of such figures as Fanon, writers have raised questions about the applicability of definitions of culture and humanity (for instance, the question of nationhood) which have been offered within the context of western cultural domination (see, for example, Bhabha), or have elucidated the cultural bias inherent in particular forms of European discourse (see Edward Said's writings on Orientalism). Likewise, notions, such as those of 'hybridity' and diaspora, have been developed in order to emphasize the notion of an implicit cultural diversity underlying the identities of so-called 'Third World' or post-colonial cultures (see, for example, the writings of Stuart Hall or Homi Bhabha). Within this context, theories of discourse and narrative have often been deployed as a means of articulating the distinctions between western and non-western culture, and in turn questioning its hierarchical superiority. Some of these theories have been derived from Marxism or the thinking of postmodernism and post-structuralism - although the anti-realism implicit in the work of thinkers associated with these last two movements has led to some criticism, for instance by Said, of its applicability to the experience of 'postcolonial' subjects (and, perhaps, one ought to mention the possible criticism that much of the thought inherent in postmodernism and post-structuralism has itself been produced within the western academy).

It is also worth noting that the use of 'post-colonialism' to define such theories, or indeed even an historical period, is controversial. This is not least because it is possible to argue that the word preserves within it the presupposition that western culture retains the predominance it attained during the past two or three hundred years as a consequence of colonial expansion. To be identified as 'post-colonial', in other words, involves a retention of the belief that colonialism continues to exert its influence through providing a definition of the identity of ‘post-colonial’ subjects and their cultures. Equally, whether the post-war period can be seen as really signifying a move away from colonial forms is questionable. The rise of colonial imperialism rooted in the political form of the European nation-state occurred in conjunction with capitalism in the modern era, and the predominance of this form has perhaps subsided. But the cultural and economic power of the west, it is arguable, retains its dominance in the form of those processes of globalization which have been delineated by some critics as characteristic developments within late capitalism (see the discussion of David Harvey’s work in the postmodernism entry).

post-industrial society

1. The idea of a 'post-industrial state', grounded on an economy of small-scale, workshop-based craft production, was first proposed in the late 19th century by followers of the utopian socialist William Morris. However, in its current usage, 'post-industrial society' was articulated almost simultaneously in the early 1960s by Daniel Bell (1973) and Alain Touraine (1968). The concept of 'post-industrial society' is intended to encapsulate the changes that have occurred within capitalism in the post-war period. The post-industrial society was presented as a new social form, as different from industrial capitalism as capitalism had been from feudalism. The central idea is that theoretical knowledge has now become the source of social change and policy formation. The society is highly educated, with significant levels of resources invested into the production of theoretical knowledge (in higher education and commercial research and development). The economy therefore shifts from the production of goods and raw materials, to the production of services. The dominant industries become those which are dependent upon theoretical knowledge (such as computing and aerospace). This is accompanied by a decline in the old working class, and the rise of 'white collar' (or non-manual) classes. New professional and technical classes (or a 'knowledge class') become dominant. The difference between Bell's and Touraine's accounts rests largely upon the enthusiasm with which they embrace post-industrial society. For Bell it is a positive development, leading to greater social integration, and the reduction of political conflict. For Touraine, post-industrial society threatens to become a society dominated by a technocratic elite, who are insensitive to the humanist values of traditional university education.


postmodernism

1. 'Post-modern, if it means anything,' Anthony Giddens argues, 'is best kept to refer to styles or movements within literature, painting, the plastic arts, and architecture. It concerns aspects of aesthetic reflection upon the nature of modernity.’ Giddens in fact also links it to Nietzsche and Heidegger, and an abandonment of the Enlightenment project of rational criticism. Postmoderns, though, Giddens continues, have nothing better to offer in place of the ideals of the Enlightenment. Amongst other critical works which have dealt with postmodernism, David Harvey's The Condition of Postmodernity has sought to analyze it in socio-economic terms. Harvey argues that the postmodern can be taken to signify a decentralized, diversified stage in the development of the market place, in which the Fordist rationale of production concentrated in a single site (the factory) has been replaced by a form of manufacture which co-ordinates a diversity of sources (e.g. parts of one final product are made in more than one place and then shipped elsewhere for purposes of assembly) in search of greater flexibility of production. In turn, this has had the effect of producing workforces which are mobile and disposable in a way in which the earlier labor markets of Fordism were not. Thus, for Harvey postmodernism is in fact an extension of those social processes which Marx diagnosed as being characteristic of the logic of capitalist society. In effect, on this view, postmodernism (at least in its philosophical guise) may well be regarded as a form of apology for capitalism.

One thing, therefore, is certain about postmodernism: the uses of the word display such a diversity of meanings that it defies simple definition. In architecture, for example, postmodernism has been taken to mean the overcoming of earlier, rigid conventions underlying modernist tastes (as exemplified by Le Corbusier's functionalism) in favor of a more eclectic, playful and nonfunctional aesthetic. The 'postmodern' novel, in contrast, could be described as embodying an experimentalism with narrative form, through which a rejuvenation of the established conventions of the form itself is sought (by way of a simultaneous retention and redeployment of those conventions in the name of an avant-gardism which harks back to modernism). Writers often associated with postmodernism include Jean Baudrillard, Jacques Derrida, Michel Foucault, and Luce Irigaray.

Perhaps the most coherent account of what constitutes postmodernism has been offered by the philosopher Jean-François Lyotard in The Postmodern Condition: A Report on Knowledge, and most succinctly in the essay included at the end of that volume, 'Answering the Question: What is Postmodernism?' In The Postmodern Condition, Lyotard provides an account of postmodernity which stresses the collapse of 'grand narratives' (e.g. that of Marxism), and their replacement with 'little narratives' in the wake of technologies which have transformed our notion of what constitutes knowledge. To that extent, the view offered in this text concentrates on the epistemology of postmodernity, i.e. the postmodern conceived of in terms of a crisis in our ability to provide an adequate, 'objective' account of reality.

In the essay 'Answering the Question: What is Postmodernism?', Lyotard offers an analysis of Kant's notion of the sublime (as presented in the Critique of Judgment) as a means of elucidating the postmodern. The sublime, Kant argues, is a feeling aroused in the spectator by the presentation to the intellect of something which defies conceptualization. Likewise, Lyotard holds, the postmodern can be characterized as a mode of expression which seeks to put forward new ways of expressing the sublime feeling. In other words, postmodernism is an avant-garde aesthetic discourse, which seeks to overcome the limitations of traditional conventions by searching for new strategies for the project of describing and interpreting experience. Significantly, Lyotard argues that the postmodern ought not to be understood in terms of an historical progression which signals a present departure from a past modernism. Rather, modernism is in fact characterized as a response to a set of concerns which are themselves already postmodern. According to Lyotard, modernism embodies a nostalgic yearning for a lost sense of unity, and constructs an aesthetics of fragmentation in the wake of this. Postmodernism, in contrast, begins with this lack of unity but, instead of lamenting it, celebrates it - a claim made most evident by Lyotard's comparison of the modernist 'fragment' (i.e. the art-work conceived of as a part of a greater, albeit unattainable, whole) with the postmodern 'essay' (taken in the sense of an essaying-forth, in the spirit of an experimentalism which disdains either to construct or lament totality - the characterization of the latter bearing a strong resemblance to T.W. Adorno's analysis in his 'The Essay as Form').

More recently, Lyotard has moved away from his earlier exposition of postmodernism. On the one hand, he has sought to redefine it in terms of a 'rewriting' of the project of modernity (see the essays collected in The Inhuman). On the other hand, a work like The Differend: Phrases in Dispute at least hints that postmodernism may be considered in a rather less positive (and certainly more modest) light than that afforded it in The Postmodern Condition: 'an old man who scrounges in the garbage-heap of finality looking for leftovers ... a goal for a certain humanity.’

Italian philosopher Gianni Vattimo has also offered an account of the postmodern in his essay 'Nihilism and the Postmodern in Philosophy' in The End of Modernity. Contrary to Giddens' view, Vattimo specifically relates postmodernism to philosophy, rather than the arts. As with Giddens, two thinkers mark the opening of postmodernity: Nietzsche and Heidegger. Vattimo turns to Heidegger's notion of Verwindung as a means of explicating his position. The word Verwindung represents neither an Überwindung (i.e. a critical overcoming of contradiction through the use of reason), nor a Kantian Verbindung, which seeks to establish a priori modes of combination as a means of grounding transcendental critique in primary rules of understanding and principles of reason. A Verwindung, rather, is a ‘twisting’ of meaning which makes room for a form of relativistic criticism which disdains all pretensions to objectivity. This, then, allows Vattimo to account for the ‘post-‘ in postmodernism, for it does not presuppose the possibility of transcendental critique. Interestingly, it is Nietzsche, and not Heidegger, whom Vattimo regards as the first philosopher to talk in the terminology of Verwindung. Indeed, for Vattimo, postmodernity is born with Nietzsche’s writing. Turning to Nietzsche’s book Human, All-Too-Human, Vattimo argues that this work defines modernity as a process of constant replacement, wherein the old (expressed through notions such as ‘tradition’) is abandoned in favor of the new, which in its turn decays and is replaced by ever newer forms. Within such a context, the modern can never be overcome, since each overcoming is merely another repetition of the fetish of the new. Having offered this diagnosis, Nietzsche’s text refuses to envisage a way out of modernity by way of recourse to, for example, a Kantian transcendentalism. Rather, a Nietzschean account seeks to radicalize the modern through a dissolution of ‘its own innate tendencies’ (p. 166).
This is achieved through the following chain of reasoning: (i) a criticism of mores (dominant forms of ethical behavior) is undertaken by Nietzsche through a strategy of ‘chemical reduction’ (see Human, All-Too-Human, sections 1ff., where Nietzsche writes of constructing a ‘chemistry of the moral and religious sensations’); which leads to (ii) the realization that the ontological ground and methodological basis for this reduction (i.e. truth) is destined likewise to dissolve under such scrutiny; and (iii) that truth, in consequence, is rendered the product of historical contingency. As such, it is realized that truth (and consequently the language of truth) is both (a) subject to and (b) molded by forces such as the need for survival, and rests on such notions as the untenable belief that reality can be known; this, in turn, leads to the conclusion that (iv) truth is rooted in the metaphorical function of language (language as a tool for coping with the world, not as a means of describing reality). Within this context, truth is dissolved and (most famously) God dies, slain by his own metaphysics (the Christian metaphysical demand for truth, having turned on Christianity itself, finds it unable to live up to its own ideal). For Vattimo, this nihilistic conclusion offers a way out of modernity, and marks the birth of postmodernity, i.e. an interest in grounding knowledge in concepts of truth and Being is replaced by one which stresses the historical analysis of 'appearance' and the predominance of contingency in our forms of knowledge. It is worth noting that such an account leaves out many aspects of Nietzsche's thought which would not conform with Vattimo's view (e.g. his later diagnosis of modernity as a decadent form which must be 'overcome', and likewise his criticisms of modern 'nihilism' as a symptom of 'decadence' or cultural decline).


post-structuralism

1. Movement of thought in various fields - literary criticism, cultural studies, political theory, sociology, ethnography, historiography, psychoanalysis - which grew out of (and to some extent reacted against) the earlier structuralist paradigm adopted by mainly French theorists in the 1950s and 1960s. Structuralism took its methodological bearings from the program of theoretical linguistics devised some four decades earlier by Ferdinand de Saussure. This work was rediscovered - with considerable excitement - by structuralist thinkers who proceeded to apply his ideas to a range of social and cultural phenomena supposedly exhibiting a language-like (systemic) character, and hence amenable to description and analysis in terms deriving from Saussure's structural-synchronic approach. Thus, in each of the above-mentioned disciplines, the aim was to break with an existing (merely 'empirical' or case-by-case) treatment of the innumerable narratives, myths, rituals, social practices, ideologies, case histories, cultural patterns of belief, etc., and to focus rather on the underlying structure - the depth-logic of signification - which promised to fulfill Saussure's great dream of a unified general semiology. Such would be the structuralist key to all mythologies, one that explained how such a massive (empirically unmanageable) range of cultural phenomena could be brought within the compass of a theory requiring only a handful of terms, concepts, distinctions and logical operators. Among them - most importantly - were Saussure's cardinal distinctions between signifier and signified, langue and parole, and the twofold (diachronic and synchronic) axes of linguistic-semiotic research.
Beyond that, the main task was to press this analysis to a point where it left no room for such supposedly naive ideas as that of the subject - the 'autonomous' subject of humanist discourse - as somehow existing outside or beyond the various structures (or 'subject positions') that marked the very limits of language and thought at some specific cultural juncture.

Thus structuralist thinking most often went along with a strain of theoretical anti-humanism which defined itself squarely against such earlier 'subject-centered' movements of thought as phenomenology and existentialism. In this respect, and others, there is a clear continuity between structuralism and post-structuralism. Indeed, there has been much debate among theorists as to how we should construe the 'post-' prefix, whether in the strong sense ('superseding and displacing the structuralist paradigm') or simply as a matter of chronological sequence ('developing and extending the structuralist approach in certain new directions'). Post-structuralism also finds its chief theoretical inspiration in the program of Saussurean linguistics, though it tends to play down - or reject outright - any notion that this might give a 'scientific' basis for the analysis of texts, semiotic systems, cultural codes, ideological structures, social practices, etc. That claim is now viewed as just a species of ‘meta-linguistic’ delusion, an example of the old (typically structuralist but also Marxist) fallacy which holds that theory can somehow attain to a critical standpoint outside and above whatever it seeks to interpret or explain. On the contrary, post-structuralists argue: there is no way of drawing a firm methodological line between text and commentary, language and metalanguage, ideological belief-systems and those other (theoretical) modes of discourse that claim to unmask ideology as a product of false consciousness or - in the language of a structural Marxist like Louis Althusser - a form of 'Imaginary' misrecognition. Such ideas took hold through the false belief that theory could achieve a decisive 'epistemological break' with the various kinds of naturalized 'commonsense' knowledge which passed themselves off as straightforwardly true but which in fact encoded the cultural values of a given (e.g. bourgeois-humanist) sociopolitical order.
However, this position becomes untenable once it is realized that all subject-positions - that of the analyst included - are caught up in an endless process of displacement engendered by the instability of language, the 'arbitrary' relation between signifier and signified, and the impossibility that meaning can ever be captured in a moment of pure, self-present utterer's intent.

Thus the 'post-' in 'post-structuralism' is perhaps best understood - by analogy with other such formations, among them 'postmodernism', 'post-Marxism', and more lately 'post-feminism' - as marking a widespread movement of retreat from earlier positions more directly aligned with the project of political emancipation and critique. However, post-structuralism does lay claim to its own kind of radical politics, one that envisages a 'subject-in-process' whose various shifting positions within language or discourse cannot be captured by any theory (structuralist, Marxist, feminist or whatever) premised on old-style 'enlightenment' ideas of knowledge and truth. Most influential here, at least among literary theorists, was the sequence of changing allegiances to be seen in the work of Roland Barthes, from his early high-structuralist phase (in texts such as Mythologies (1957) and 'The Structural Analysis of Narratives’ (1977)) to his late style of writing (e.g. S/Z (1970) and The Pleasure of the Text (1973)) where he renounces all claims to theoretical rigor, and instead draws freely and idiosyncratically on whatever sources come to hand - literature, linguistics, structuralism, psychoanalysis, Marxism, a vast range of intertextual allusions - while treating them all with a consummate deftness and irony which disclaims any kind of orthodox methodological commitment. In Mythologies Barthes had provided by far the most convincing application of a highly systematic (Saussure-derived) structuralist method to the analysis of various items of late-bourgeois 'mythology', from advertising images to French culinary fashion, from 'The Romans on Film' to the myth of the jet pilot, and from 'the brain of Einstein' (a fetish-object created by the modern ideology of scientific genius) to the spectacle of boxing as a prime example of cultural artifice passing itself off as a natural sporting event.
A decade later he reflected ruefully that this method could now be applied by anyone who had picked up the necessary analytic tools and learned to demythologize just about everything that came their way. So one had to move on, renounce that false idea of 'metalinguistic' analysis, and instead produce readings that would 'change the object itself' - the title of a later essay - by actually re-writing the myths concerned through a process of creative textual transformation. Otherwise there would always come a stage - repugnant to Barthes - when radical ideas began to settle down into a new orthodoxy, or when theories that had once seemed challenging and subversive (like those of 'classical' structuralism) were at length recycled in a safely packaged academic form.

In Barthes's later writing one can see this diagnosis applied to certain aspects of post-structuralism even though that movement had not yet acquired anything like its subsequent widespread following. Thus, for instance, it became a high point of post-structuralist principle (deriving from the psychoanalytic theories of Jacques Lacan) that the unconscious was 'structured like a language', that its workings were by very definition inaccessible to conscious thought, and that the human subject was irreparably split between a specular realm of false (‘imaginary’) ego-identification and a symbolic realm where ‘identity’ consisted of nothing more than a series of shifting, discursively produced subject-positions. Then again, post-structuralists have been much influenced by Michel Foucault's skeptical genealogies of knowledge, his argument that 'truth' is always and everywhere a product of vested power-interests, so that different regimes of 'power-knowledge' give rise to various disciplinary techniques or modes of subjectively internalized surveillance and control. These ideas are presented as marking a break - a radical break - with the concepts and values of a humanist discourse which concealed its own will-to-power by fostering the illusion of autonomous freedom and choice.

So the claim is that post-structuralism affords a potentially liberating space, a space of 'plural', 'decentred', multiple or constantly destabilized subject-positions where identities can no longer be defined according to such old 'essentialist' notions as gender or class-affiliation. For some theorists, Ernesto Laclau and Chantal Mouffe among them, it points the way towards a politics - an avowedly 'post-Marxist' politics - that acknowledges the sheer range and variety of present-day social interests. On this view it is merely a form of 'metanarrative' delusion to suppose that any one privileged theory (like that of classical Marxism) could somehow speak the truth of history or rank those interests on a scale of priority with socio-economic or class factors as the single most important issue. Rather we should think - in post-structuralist terms - of subjects as 'dispersed' over a range of multiple positions, discourses, sites of struggle, etc., with nothing (least of all some grand 'totalizing' theory) that would justify their claim to speak on behalf of this or that oppressed class or interest-group. Still there is a problem when it comes to explaining how anyone could make a reasoned or principled choice in such matters if every such 'choice' were indeed just a product of the subject's particular mode of insertion into a range of pre-existing discourses.

Nor is this problem in any way resolved by the idea that subjects are non-self-identical, that subjectivity is always an ongoing process, or again - following Lacan - that there never comes a point where the ego escapes from the endless 'detours' of the signifier and at last achieves a wished-for state of 'imaginary' plenitude and presence. For this still works out as a determinist doctrine, a theory of the subject as constructed in (or by) language, whatever the desire of some post-structuralists to give it a vaguely utopian spin by extolling the 'freeplay' of the signifier or the possibility of subjects adopting as many positions - or 'performative' roles - as exist from one situation to the next. In Barthes's later work it is the very act of writing, exemplified in certain avant-garde literary texts, that is thought of as somehow accomplishing the break with oppressive (naturalized or realist) norms, and thus heralding a new dispensation where identity and gender are no longer fixed by the grim paternal law of bourgeois 'classical realism'. Such ideas have a certain heady appeal when compared with the bleak message conveyed by theorists such as Foucault and Lacan. Nevertheless, they are open to the same objection: that the subject remains (in Lacan's phrase) a mere 'plaything' of language or discourse, and that reality likewise becomes just an optional construct out of various signifying codes and conventions.

One result - as seen in post-structuralist approaches to historiography and the social sciences - is a blurring of the crucially important line between fictive discourse (novels, stories, imaginary scenarios of various kinds) and those other kinds of narrative that aim to give a truthful account of past or present events. That confusion of realms is carried yet further in the writing of postmodernist thinkers like Jean Baudrillard who argue - largely on the same premise - that we now inhabit a world of ubiquitous mass-media simulation where the very idea of a reality 'behind appearances' (along with the notions of truth, critique, ideology, false consciousness and so forth) must be seen as belonging to a bygone age of naïve Enlightenment beliefs. This is all - as post-structuralists would happily concede - a very long way from Saussure's original program for a structural linguistics based on strictly scientific principles. Whether or not their more radical claims stand up to careful scrutiny is still a topic of intense dispute among theorists of varying persuasions.


3. Definitions of power are legion. To the extent that there is any commonly accepted formulation, power is understood as concerned with the bringing about of consequences. But attempts to specify the concept more rigorously have been fraught with disagreements. There are three main sources of these disagreements: different disciplines within the social sciences emphasize different bases of power (for example, wealth, status, knowledge, charisma, force and authority); different forms of power (such as influence, coercion and control); and different uses of power (such as individual or community ends, political ends and economic ends). Consequently, they emphasize different aspects of the concept, according to their theoretical and practical interests. Definitions of power have also been deeply implicated in debates in social and political theory on the essentially conflicting or consensual nature of social and political order. Further complications are introduced by the essentially messy nature of the term. It is not clear if power is a zero-sum concept; if it refers to a property of an agent (or system), or to a relationship between agents (or systems); if it can be a potential or a resource; if it is reflexive or irreflexive, transitive or intransitive; nor is it clear if power can only describe a property of, or relationship between, individual agents, or if it can be used to describe systems, structures or institutions; furthermore, it is not clear whether power necessarily rests on coercion or if it can equally rest on shared values and beliefs. Nor is it at all clear that such disputes can be rationally resolved, since it has been argued that power is a theory-dependent term and that there are few, if any, convincing metatheoretical grounds for resolving disputes between competing theoretical paradigms.

In the 1950s discussions of power were dominated by the conflicting perspectives offered by power-elite theories, which stressed power as a form of domination exercised by one group over another in the presence of fundamental conflicts of interests; and structural-functionalism which saw power as the 'generalized capacity of a social system to get things done in the interests of collective goals.' Parsons thus emphasized power as a systems property, as a capacity to achieve ends; whereas Mills viewed power as a relationship in which one side prevailed over the other. Mills's views were also attacked by pluralists, who argued that he assumed that some group necessarily dominates a community; rather, they argued, power is exercised by voluntary groups representing coalitions of interests which are often united for a single issue and which vary considerably in their permanence. Against class and elite theorists the pluralists posed a view of US society as 'fragmented into congeries of small special-interest groups with incompletely overlapping memberships, widely differing power bases, and a multitude of techniques for exercising influence on decisions salient to them.' Their perspective was rooted in a commitment to the study of observable decision making, in that it rejected talk of power in relation to non-decisions, the mobilization of bias, or to such disputable entities as 'real interests'. It was precisely this focus on observable decision making which was criticized by neo-elite and conflict theorists, who accused the pluralists of failing to recognize that conflict is frequently managed in such a way that public decision-making processes mask the real struggles and exercises of power; both the selection and formulation of issues for public debate and the mobilization of bias within the community should be recognized as involving power. 
Lukes further extended the analysis of covert exercises of power to include cases where A affects B contrary to B's real interests - where B's interests may not be obtainable in the form of held preferences, but where they can be stated in terms of the preferences B would hold in a situation where B exercises autonomous judgement. Radical theorists of power have also engaged with structural-Marxist accounts of class power over questions of whether it makes sense to talk of power without reference to agency. Although these debates have rather dominated discussions of power in social and political theory, we should not ignore the work on power in exchange and rational-choice theory, nor the further criticisms of stratification theories of power which have been developed from positions as diverse as Luhmann's neo-functionalism and Foucault's rather elusive post-structuralism.

Definitional problems seem to be endemic to discussions of power. One major problem is that all accounts of power have to take a stand on whether power is exercised over B, whether or not the respect in which B suffers is intended by A. Similar problems concern whether power is properly restricted to a particular sort of effect which A has on B, or whether it applies in any case in which A has some effect on B. These two elements, intentionality and the significance of effects, allow us to identify four basic views on power and to reveal some of the principal tensions in the concept.

The first view makes no distinction between A’s intended and unintended effects on B; nor does it restrict the term power to a particular set of effects which A has on B. Power thus covers phenomena as diverse as force, influence, education, socialization and ideology. Failing to distinguish a set of significant effects means that power does not identify a specific range of especially important ways in which A is causally responsible for changes in B's environment, experience or behavior. This view is pessimistically neutral in that it characteristically assumes that power is an ineradicable feature of all social relations, while it makes no presumption that being affected by others in one way or in one area of life is any more significant than being affected in any other. One plausible version of this view is to see power as the medium through which the social world is produced and reproduced, and where power is not simply a repressive force, but is also productive. Note that with this conception there is no requirement that A could have behaved otherwise. Although this is an odd perspective, it is not incoherent, as it simply uses power to refer to causality in social and interpersonal relations.

The second view isolates a set of significant effects. Thus, A exercises power over B when A affects B in a manner contrary to B's preferences, interests, needs, and so on. However, there is no requirement that A affect B intentionally, nor that A could have foreseen the effect on B (and thus be said to have obliquely intended it). Poulantzas's Marxism provides one such view by seeing power in terms of 'the capacity of a social class to realize its specific objective interests.' Any intentional connotations are eradicated by his insistence that this capacity is determined by structural factors. The capacity of a class to realize its interests 'depends on the struggle of another class, depends thereby on the structures of a social formation, in so far as they delimit the field of class practices.' As agency slips out of the picture, so too does any idea of A intentionally affecting B. Although idiosyncratic, this view does tackle the problem of whether we can talk meaningfully of collectivities exercising power. If we want to recognize the impact which the unintended consequences of one social group's activities have over another, or if we want to recognize that some group systematically prospers while others do not, without attributing to the first group the intention of doing the others down, then we shall be pushed towards a view of power which is not restricted solely to those effects A intended or could have foreseen. The pressures against this restriction are evident in Lukes's and Connolly's work. Both accept unintended consequences so as to capture conceptually some of the most subtle and oppressive ways in which the actions of some can contribute to the limits and troubles faced by others. Both writers, however, also recognize that attributions of power are also often attributions of responsibility, and that to allow unintended effects might involve abandoning the possibility of attributing to A responsibility for B's disbenefits.
Consequently both equivocate over how far unintended effects can be admitted, and they place weight on notions of A's negligence with regard to B's interests, and on counterfactual conditionals to the effect that A could have done otherwise. Stressing 'significant effects' also raises problems, as the criteria for identifying such effects are hotly disputed. Thus radical theorists criticize pluralists for specifying effects in terms of overridden policy preferences, on the grounds that power is also used to shape or suppress the formation of preferences and the articulation of interests. Again, two pressures operate, in that it seems sociologically naive to suppose that preferences are always autonomous, yet it is very difficult to identify appropriate criteria for distinguishing autonomous and heteronomous preferences. Taking expressed preferences allows us to work with clearly observable phenomena, since B can share the investigator's ascription of power to A - it thus has the advantage of methodological simplicity and congruence with the dependent actor's interpretation. However, taking 'repressed' preferences or real interests can be justified, since it provides a more theoretically persuasive account of the complexities of social life and of the multiple ways in which potential conflicts are prevented from erupting into crisis. Yet this more complex theoretical account is under pressure to identify a set of real interests, and the temptation is to identify them in terms of autonomous/rational preferences; the problem with this is that it often carries the underlying implication that power would not exist in a society in which all agents pursued their real interests. Power is thus used to describe our deviation from utopia.

The second view is primarily concerned with identifying the victims of power - not the agents. The focus is on A's power over B. The third view, which attributes power only when A intends to affect B, but which does not place any restrictions on the manner in which A affects B, switches the focus from A's power over B to A's power to achieve certain ends. Power is concerned with the agent's ability to bring about desired consequences - 'even' but not necessarily 'against the resistance of others.' This view has a long pedigree (Hobbes 1651) and it satisfies some important theoretical interests. In so far as we are interested in using A's power as a predictor of A's behavior, it is clearly in our interests to see A's power in terms of A's ability to secure high net profit from an action - the greater the anticipated profit, the more likely A is to act. Another reason for focusing on A's intention is the difficulty in identifying a range of significant effects which is not obviously stipulative. Concentrating on A's intended outcomes allows us to acknowledge that there are a number of ways in which A can secure B's compliance and thereby attain A's ends. Thus force, persuasion, manipulation, influence, threats, throffers, offers, and even strategic positioning in decision procedures may all play a role in A's ordering of A's social world in a way that maximally secures A's ends. But seeing power solely in terms of A's intentions often degenerates into an analysis where all action is understood in power terms, with behavior being tactical or strategic to the agent's ends. On this view agents become, literally, game-players or actors, and we are left with a highly reductive account of social structures and institutions.

Finally, the fourth perspective analyses power in terms of both intentional action and significant effects. It concentrates on cases where A gets B to do something A wants which B would not otherwise do. Two sets of difficulties arise here. The first concerns the extensiveness of the concept of power and its relationship with its companion terms, authority, influence, manipulation, coercion and force. On some accounts power is a covering term for all these phenomena; on others it refers to a distinct field of events. Getting B to do something that B would not otherwise do may involve mobilizing commitments or activating obligations, and it is common to refer to such compliance as secured through authority. We may also be able to get B to do something by changing B's interpretation of a situation and of the possibilities open to B - using means ranging from influence and persuasion to manipulation. Or we may achieve our will through physical restraint, or force. Finally, we may use threats and throffers in order to secure B's compliance - that is, we may coerce B. In each case A gets B to do something A wants which B would not otherwise do, although each uses different resources (agreements, information, strength, or the control of resources which B either wants or wants to avoid), and each evidences a different mode of compliance (consent, belief, physical compliance or rational choice). Although exchange and rational choice theorists have attempted to focus the analysis of power on the final group of cases, to claim that the others are not cases of power is clearly stipulative.
Yet it is these other cases which introduce some of the pressures to move away from a focus solely on intended effects and significant affecting. Where A's effect on B is intended, instrumental to A's ends and contrary to B's preferences, and where B complies to avoid threatened costs, we have a case which firmly ties together A's intention and the set of effects identified as significant (B's recognized costs are intended by A and functional to A's objectives). But the other cases all invite extensions, either in the direction of covering cases in which A secures A's will, disregarding the nature of the effects on B, or towards cases where B's options or activities are curtailed by others, either unintentionally, or unconditionally. Also, this view of power risks focusing on A's exercise of power over B, to the detriment of the alternative and less tautological view that power is a possession, that it may exist without being exercised, and that a crucial dimension of power is where A does not secure B's compliance, but is in a position to do so should A choose. Wealth, status, and so on, are not forms of power, but they are resources which can be used by A to secure B's compliance. An adequate understanding of power in a given society will include an account of any systematic inequalities and monopolies of such resources, whether they are being used to capacity or not. The pressure, once again, is against exclusive concentration on A's actual exercise and towards a recognition of A's potential. But once we make this step we are also likely to include cases of anticipatory surrender, and acknowledging these cases places further pressure on us to move beyond easily attributable, or even oblique, intention on A's part. These pressures are resisted mainly by those who seek to construct a clear and rigorous, if stipulative, theoretical model of power.
But there is also some equivocation from those who seek to match ascriptions of power with ascriptions of moral responsibility. Part of the radical edge of Lukes's case stems precisely from the use of ascriptions of power as a basis for a moral critique. But much is problematic in this move. A may act intentionally without being sufficiently sane to be held morally responsible; A may intentionally affect B to B's disbenefit without violating moral norms (as in a chess game, competitions, some exchange relations with asymmetrical results, and so on); it is also important to recognize that B’s compliance must maintain proportionality with A’s threat in order for B to be absolved of moral responsibility.

The theoretical and practical pressures which exist at the boundaries of these four possible interpretations of power account for much of the concept's messiness. Each has its attractions. The fourth view is most promising for model or theory building, the third for the prediction and explanation of action, the second for the study of powerlessness and dependency, and the first for the neutral analysis of the strategic but non-intentional logic of social dynamics. Although metatheoretical grounds for arbitration between competing conceptions of power seem largely absent, we can make a few comments on this issue. Although restrictivist definitions of power may serve specific model- and theory-building interests, they inevitably provide a much simplified analysis of social order and interaction. However, more encompassing definitions risk collapsing into confusion. Thus, while there are good theoretical grounds for moving beyond stated preferences to some notion of autonomous preferences - so as, for example, to give a fuller account of B's dependence - we should be cautious about claiming that A is as morally responsible for B's situation as when A intentionally disbenefits B. Indeed, depending on how we construe the relevant counterfactuals, we might deny that agents are liable for many of the effects of their actions. Thus, we might see social life as inevitably conflict ridden, and while we might recognize that some groups systematically lose out it might not be true that A (a member of the elite) intends to disadvantage any individual in particular, or that A could avoid harming B without allowing B to harm A (as in Hobbes's state of nature). Also, although we are free to use several different definitions of power (such as the three dimensions identified by Lukes), we should recognize that each definition satisfies different interests, produces different results and allows different conclusions, and we need to take great care to avoid confusing the results.
Finally, we should recognize that although definitions of power are theory-dependent, they can be criticized in terms of the coherence of the theory, its use of empirical data, and the plausibility of its commitments to positions in the philosophies of mind and action. See also authority.


1. A philosophical movement that exerted a profound influence upon American thought during the first part of the 20th century. Principal thinkers associated with pragmatism include C.S. Peirce (1839-1914), William James (1842-1910), John Dewey (1859-1952), George Herbert Mead (1862-1931) and Clarence Irving Lewis (1883-1964). However, these thinkers do not share one basic doctrine on the basis of which they may all straightforwardly be classified as pragmatists. It is, rather, in virtue of a shared approach to philosophical problems that the term 'pragmatism' is best applied to each of them. Although an exclusively American movement, pragmatism unsurprisingly (given that its thinkers were schooled in European philosophy and literature) owes much to British and continental European philosophy. Thus, pragmatists like Peirce devoted their attention to elucidating problems in the sphere of theory of knowledge that they had encountered in the work of Descartes or Kant. It is perhaps best to turn to Peirce's own account of pragmatism, given in the essay 'What Pragmatism Is', for a concise exposition of his notion of pragmatism:

a conception, that is, the rational purport of a word or other expression, lies exclusively in its conceivable bearing upon the conduct of life [ ... ] if one can define accurately all the conceivable experimental phenomena which the affirmation or denial of a concept could imply, one will have therein a complete definition of the concept, and there is absolutely nothing more in it.

In other words, in Peirce's view, pragmatism involves placing emphasis upon the concrete outcomes of our concepts as a means of determining their value as expressions of knowledge. Thus, according to Peirce in 'Definition and Description of Pragmatism', there is 'an inseparable connection between rational cognition and rational purpose'. Hence, Peirce outlined pragmatism as 'the doctrine that the whole "meaning" of a conception expresses itself in practical consequences, consequences either in the shape of conduct to be recommended, or in that of experiences to be expected, if the conception be true'. In turn, he argued for viewing inquiry as a process which proceeds from a state of doubt and is resolved in belief. According to Peirce, the best way of establishing belief is according to the dictates of scientific method.

William James is probably the most famous thinker associated with pragmatism. James was a friend of Peirce and therefore formulated his ideas in conjunction with the development of Peirce's thought, so it is not easy to separate the intellectual development of the two men. However, James's conception of pragmatism differs from that offered by Peirce. Whereas Peirce (who, as a realist, formulated pragmatism primarily as a theory of meaning) sought to ground meaning in the sphere of practical and concrete human action, James looked elsewhere. For James, what is highlighted is his account of the role of concepts and ideas in human experience. Our beliefs, he claims, affect our actions in the world, and his pragmatism therefore concentrates upon the ways in which ideas and beliefs relate to our experiences. In turn, James is not committed to the realism that Peirce endorses, but instead embraces a kind of nominalism. More significantly, for James, pragmatism involves constructing a more general account of human thought and action (including psychology) of which a pragmatic theory of meaning is merely one part.

John Dewey's work represents another variant of the pragmatist theme. As with James, Dewey started out by developing a psychological approach. However, he later turned to a more behavioristic and socially nuanced account of human action. In time, Dewey came to term his own brand of pragmatism 'instrumentalism'. Principal amongst his philosophical concerns was education, which Dewey came to regard as having supreme importance as the primary means for the transmission of knowledge and ideas within society. Society, for the mature Dewey, comes to be regarded as a kind of educational institution, which as the sphere in which human life is actually lived, is taken as the educative means to the end of living. In turn, Dewey developed a view which emphasized the links between human action and the social realm: action does not occur 'in' a social space, since the social is itself an essential aspect of human behavior. Dewey's criticism of the Cartesian conception of subjectivity (i.e. mind-body dualism) clarifies his view of the social realm: the philosophical division between mind and body allows us to ignore the fact that the thinking individual is itself a part of the social structure in which thinking occurs. Dewey envisaged this relationship in terms of a 'circuit' (see his 'The Unit of Behavior: The Reflex Arc Concept in Psychology' of 1896). Equally, he was also interested in developing an account of the relationship between knowledge and value, arguing that self-reflexive scientific inquiry, understood as an active selecting and therefore valuing of what it investigates, is a prime example of ethical action.

From the consideration of Peirce's and James's 'pragmatism' and Dewey's 'Instrumentalism', it is evident that the primary question pragmatists ask with regard to knowledge is 'does it work?' Dewey's term is thus apposite: pragmatists are essentially instrumentalists when it comes to the issue of what counts as reliable knowledge.

Amongst contemporary thinkers Richard Rorty (1931-) has adopted a form of pragmatism which endorses an anti-essentialism with regard to questions of rationality, cultural identity and politics. This is coupled with an extolling of bourgeois liberalism. Rorty is perhaps more Jamesean than Peircean in his approach. For example, he has consistently criticized realism, which is a central component of Peirce's pragmatism. For instance, on one of Rorty's arguments, since we cannot escape from language our thinking must, it follows, relate only to language, i.e. there is no 'reality' independent of language to which we refer when we speak (in philosophical parlance, there are no 'matters of fact'). Those who believe that there is an 'outside' to language Rorty has deemed 'representationalists'; and it is against this position that he espouses his own 'antirepresentationalism':

By dropping a representationalist account of knowledge, we pragmatists drop the appearance-reality distinction in favour of a distinction between beliefs that serve some purposes and beliefs that serve other purposes [ ... ] We drop the notion of beliefs being made true by reality [ ... ] (1998: 206)

Hence, on Rorty's view, since our language cannot be identified in terms of some mind-independent realm, it must be culturally situated, and our knowledge of the world depends upon the cultural norms at our disposal and our aims. In short, Rorty views himself as a 'pragmatist' in so far as he, too, advocates an instrumentalism. Rorty likewise advocates a cultural relativism. Aspects of his views have been criticized by, amongst others, another thinker with a pragmatist heritage, Hilary Putnam (1926-), who has claimed that Rorty's argument in support of his account of language is 'terrible.' As Putnam remarks, what if it were instrumentally useful for us to believe in things like 'matters of fact'? If so, then Rorty's argument hardly goes very far towards mounting a serious objection to such notions. Although not following Rorty's line of thought, Putnam, too, has sought to develop some of the ideas first outlined by his pragmatist predecessors (addressing, for instance, the importance of education to democratic forms of life in the wake of Dewey's writings).


1. As contested a concept as its question-begging dictionary definition ('forward movement,' or 'improvement over time') would suggest. The idea of progress has been in circulation for upwards of 2500 years, but gained its most sustained momentum during the Enlightenment. Whether as the rationalization of the capacity of things in general to get better, or with a tighter focus, say on the expansion of scientific knowledge, progress became bound up with the steady emancipation of humankind from blinkered subservience, blind faith, and the pull of myth and mysticism. Thus Condorcet and Kant among others proclaimed progress as a trajectory of increasing reflexive self-awareness, on a cultural as well as individual level. Even so, Kant's writing on the topic makes seemingly incongruous reference to a 'hidden plan of nature' to bring about 'the sole state in which all of humanity's natural capacities can be developed.'

The tension has proved stubborn. Treated on the one hand as a matter of uninterruptible historical evolution, the idea of progress took strongest hold as the interventionary power of human agency began finally to displace the fatalistic acceptance of providence in the tenor of social thinking. How then, to quantify progress? Its intimate relations with notions as ideologically charged as development, civilization, and technological advancement have made it eminently deconstructible - not least in its most emphatic, Hegelian version in which (since the real is rational and the rational is real) philosophy sets itself the task of revealing the gradual triumph of human reason in all departments of cultured and social life. Marx, in subverting Hegel's abstract, sanguine diagnosis of the seamless unfolding of universal reason, invoked a materialist conception of progress as emancipation through the realization of hitherto-suppressed human potentialities and control of our natural environment. 'The philosophers have only interpreted the world, in various ways,' as he famously stated in the Theses on Feuerbach; 'the point is to change it.' But for all Marx's emphasis on active participation in history, and the integral role of class-based schism and revolution, crucial to both schemas is an objective linearity to the historical process. This is progress as teleology: as a more or less vital journey towards a given, universally redemptive, end.

As such, it is a prime example of what postmodern theorists deem a 'metanarrative,' their incredulity towards which accompanies the jettisoning of all ideas of Progress with a capital 'P.' This is usually on the basis of an appeal to recent history as flatly contradicting the very idea that general social improvement has, in any real sense, been afoot. Lyotard and Bauman, to name but two, have linked the atrocities and excesses of the 20th century (for example, Auschwitz, the Gulag Archipelago, the nuclear build-up) to the overweening hubris of the Enlightenment's prediction of an emancipatory triumph of reason and virtue. Like Adorno and Horkheimer before them (though without their residual Marxist affinities), they trace the fruition of 'instrumental' reason exemplified in recent barbarities back to modernity's fetishizing of universal reason and the concurrent banishment of the irrational, illusory or retrograde. Thus scientific or technological advance does not by itself a good society make; and indeed the valorization of science as supreme source of knowledge makes more likely the regimentation, normalization and silencing of those not party to the expert culture - hardly 'progress' in the modern definition.

Whether this disposes of, or rather asks anew, the question of the nature of progress is another matter. Does the naïveté or danger of conceiving progress as structurally guaranteed bury too those alternative accounts which would put it down to collective human agency? Can we really look at the history of science or medicine and deny that substantive advances have taken place? Is the end of slavery just a culturally determined and administered value? 'We have stopped believing in progress,' remarked Borges; 'What progress that is!' It's a pregnant contradiction. Hedgy and unfashionable as progress-talk has become, it is hard to see how normative social theory - whether at the fin of a given siècle or not - can get along without it. Nor has its theoretical beleaguering served fully to extinguish its obviously cultural and rhetorical import.


proletariat

1. The term 'proletariat' has been popularized through its use in Marxist theory, where it refers to the subordinate class within capitalism. The proletariat is composed of that proportion of the urban population who own only their own ability to labor. They are therefore compelled to sell this labor power in order to be able to purchase all other goods that are required for their continued existence. Less formally, the term is frequently used as a synonym for working class. Strictly, the working class, composed of those who are occupied in any form of manual labor, are only a portion of the proletariat, for few if any of the (non-manual) middle class own enough productive property or capital to generate enough income to do away with the necessity of working for a living.



propaganda

1. Propaganda is the conscious attempt to control or change the attitudes and behavior of a group, through manipulation of communication (either in the provision of information, or the use of imagery). Qualter identifies several properties of propaganda: it is deliberate, and aims to influence an audience; it attempts to affect behavior by modifying attitudes (rather than through the threat of violence or offer of reward); it is essentially elitist, with a small group attempting to influence the behavior of the many; it uses all forms of symbol (including verbal language, music and visual images). Given this definition, it is surprisingly hard to analyze, not least in terms of separating propaganda from ideology, and propaganda from the usual course of news reporting or political debate in the liberal democracies. Ideologies are belief systems that are in the interests of a dominant class, and are propagated throughout society. However, while the content of much school education may fruitfully be analyzed as ideology (and is indeed deliberately designed to influence the behavior and attitudes of pupils), it does not seem appropriate or fruitful to call it propaganda. Further, propaganda is not unproblematically untrue. While it may falsify facts, it may also simply be selective with facts, and present those facts in an emotive manner. To deem propaganda untrue is to minimize the degree to which the 'truth' is itself negotiated and contested in everyday life. What is one person's truth is another's lie. Hence, while truth is frequently judged to be the first casualty of war, as propaganda and selective reporting take over, it is worth considering the degree to which news reporting in peace time is (and has to be) highly selective. The news cannot report everything that happens without discrimination. An analysis of propaganda is therefore faced with the problem of distinguishing legitimate selectivity from illegitimate selectivity, or at least, with explaining the particular circumstances that make the illegitimate selection propaganda, and not ideology.

public sphere

3. The concept of the public sphere is used most commonly to refer to the realm of public discourse and debate, a realm in which individuals can discuss issues of common concern. The public sphere is generally contrasted with the private domains of personal relations and of privatized economic activity. One of the most important accounts of the public sphere was provided by Jürgen Habermas in his classic work The Structural Transformation of the Public Sphere (1962). Habermas traced the development of the public sphere (Öffentlichkeit) from Ancient Greece to the present. He argued that, in 17th- and 18th-century Europe, a distinctive type of public sphere began to emerge. This 'bourgeois public sphere' consisted of private individuals who gathered together in public places, like salons and coffee houses, to discuss the key issues of the day. These discussions were stimulated by the rise of the periodical press, which flourished in England and other parts of Europe in the late 17th and 18th centuries. The bourgeois public sphere was not part of the state but was, on the contrary, a sphere in which the activities of state authorities could be confronted and criticized through reasoned argument and debate.

The development of the bourgeois public sphere had important consequences for the institutional form of modern states. By being called before the forum of the public, Parliament became increasingly open to scrutiny; and the political role of the freedom of speech was formally recognized in the constitutional arrangements of many modern states. But Habermas argued that, as a distinctive type of public domain, the bourgeois public sphere gradually declined in significance. Many salons and coffee houses eventually disappeared, and the periodical press became part of a range of media institutions which were increasingly organized on a commercial basis. The commercialization of the press altered its character: the press gradually ceased to be a forum of reasoned debate and became more and more concerned with the pursuit of profit and the cultivation of images.

Habermas's argument concerning the transformation of the public sphere has been criticized on historical grounds, and in terms of its relevance to the social and political conditions of the late 20th century. But the concept of the public sphere remains an important reference point for thinkers who are interested in the development of forms of political organization which are independent of state power. It also remains a vital notion for theorists who are concerned with the impact of communication media in the modern world. The concept emphasizes the importance of open argument and debate - whether conducted in the media or in a shared locale - as a means of forming public opinion and resolving controversial political issues.



race

3. Few concepts in modern times have been less understood and few more liable to misuse than the concept of race when applied to humankind. Such powerful feelings has it aroused that its use is sharply declining among the writers of physical anthropology textbooks in the USA. Of twenty such textbooks published between 1932 and 1969, thirteen (65 per cent) accepted that races of humans exist, three (15 per cent) claimed that they do not exist, while of the remaining four, two did not mention the subject and two stated that there was no consensus on the subject. Of thirty-eight such textbooks that appeared between 1970 and 1979, only twelve (32 per cent) stated that races of humans exist, whereas fourteen (37 per cent) claimed that races do not exist; of the remaining twelve texts, four were non-committal on the matter, three failed to mention race and five indicated that there was no consensus. It is of course a moot point how much we may conclude from a study of the contents of textbooks, but it is, to say the least, striking that, during the 1970s, there was in the USA so marked a swing away from the earlier widespread acceptance of the existence of human races. Critics of the study cited have raised the question of the degree to which that change reflects new concepts flowing from new data and novel approaches, and the extent to which the change might have been predicated upon extraneous factors, such as a swing of fashion, political considerations, or the composition of classes of students to which the texts were directed. Nor is it clear whether the tendency in the USA typifies other parts of the world of physical anthropology.

Certainly the change tells us that, even among experts, no less than in the public mind, the concept of race is being critically re-examined and that no consensus, let alone unanimity, among specialists on the validity or the usefulness of the race concept appears to exist at the present time. It is worthwhile therefore to examine the meaning of race. Since race is basically a concept of biology in general, we shall start by examining race as a biological notion.

Race as a biological concept

Many, perhaps most, species of living things comprise numbers of populations which may be dispersed geographically and among varying ecological niches. To impart order to the subdivisions within a species, biologists have used several terms such as subspecies, race and population to classify the various groupings of communities that make up a species. Thus, in a species within which two or more subspecies are recognized, a race comprises populations or aggregates of populations within each formally designated subspecies. Often the term race is qualified: biologists recognize 'geographic races' (which may be synonymous with subspecies); 'ecological races' where, within a species, there occur ecologically differentiated populations; and 'microgeographic races' which refers to local populations.

Although students of any group of living things may differ from one another on the finer details of such intraspecific classifications, there has for some time been fairly general agreement that race is a valid biological concept. Classically, the differences among the races in a species have been identified by their morphology, that is, their observable physical structure. Since the mid-1930s, and especially since 1950, biologists, not content with studying the morphological make-up of populations within species, have been studying the genetic composition of the subdivisions within species. These studies have directed attention to a number of non-morphological traits such as the genes for blood-groups and for specific proteins. When these hereditary characters are analyzed, they reveal that there are no hard and fast boundaries between races and populations within a species. For any such genetic marker, it is not uncommon to find that the frequency of the trait in question is distributed along a gradient (or cline) which cuts across the boundaries of races, as delimited by morphology. Such gene clines often do not parallel any detectable environmental gradient; they appear to be neutral in relation to natural selective agencies in the environment.

Different genetic markers within a species may vary along different gradients. Thus, if one were to base one's thinking about the subdivisions of a species on the distribution of any one genetic marker, one would be liable to reach a different conclusion from that which might flow from the use of another genetic marker.

Hence, newer methods of analysis combine the frequencies of many different genetic markers, in the hope that the resulting sorting of populations will more nearly reflect the objective genetic relationship of the subgroups within a species.

Character-gradients apply as well to some morphological features. That is, some structural features such as body size, ear size, or coloring, change gradually and continuously over large areas. Such gradients, unlike the genetic clines, appear to parallel gradients in environmental features and have probably resulted from the action of natural selection. However, the frequencies of the genes governing morphological characters are less commonly used in the study of genetic interrelationships within a species, for several good reasons: first, such traits are often of complex, difficult and even uncertain genetic causation; second, many of them and, particularly, measurable characters are determined not by a single gene-pair, but by numbers of different gene-pairs; third, such characters are especially subject to environmental modification: for example, if animals live in a lush area and eat more food, they would be expected to grow bigger than those of the same species living in a more arid region. This 'eco-sensitivity' of the body's metrical traits renders them less useful in an analysis of genetic affinities.

In sum, race is a biological concept. Races are recognized by a combination of geographic, ecological and morphological factors and, since the 1970s, by analyses of the distribution of gene frequencies for numbers of essentially non-morphological, biochemical components. As long as one focused on morphological traits alone, it was sometimes not difficult to convince oneself that races were distinctly differentiated, one from another, with clear-cut boundaries between them; the progressive application of genetic insights and analyses to the problem revealed that recognizable gene variants (or alleles) are no respecters of such hypothetical boundaries. Often, indeed, one race merges with the next through intermediate forms; members of one race can and do interbreed with members of other races. Hence, the importation of genetic appraisal into the discussions on race has led to a definite blurring of the outlines of each race, and so to an attenuation of the concept of race itself.

Race in human biology

The biological concept of race, as just sketched, has been applied to the living populations of the human species. At least since the time of the Swedish naturalist and systematist Linnaeus (1707-78), all living human beings have been formally classified as members of a single species, Homo sapiens. The accumulation since the middle of the 19th century of fossil remains of the human family has revealed that earlier forms lived which could validly be regarded as different species of the same genus, for example Homo habilis and Homo erectus. Our species, Homo sapiens, probably made its appearance between one-half and one-third of a million years before the present.

As Homo sapiens spread across first the Old World and, more latterly, the New World, the species diversified, in varied geographical zones and ecological niches, into numerous populations. At the present time we have a situation in which living humanity is divided into several major and many minor subdivisions among which the same kinds of variation are encountered as apply to other living things. Thus, the populations show morphological variation, including some gradients associated with environmental gradients, and varying gene frequencies with clines of distribution that, for individual genetic markers, breach the limits of morphologically defined groups of populations. Physical anthropologists, relying on morphological traits, have for long divided living humankind into great geographical races (also called major races, subspecies and constellations of races). Most classifications recognized three such major subdivisions, the Negroid, Mongoloid and Caucasoid; some investigators designated other major races, such as the Amerind and the Oceanian. Within the major races, several dozen minor races (or, simply, races) were recognized, the number identified varying with the investigator. As with other living groups, historically the classification of living Homo sapiens was based on morphological traits, such as skin color, hair form and body size. As genetic analysis came to be applied, in respect first of blood-groups and later of a variety of proteins, clines were found which cut across the boundaries of minor and even of major races. Moreover, it was found that the genic variation between the major races was small in comparison with the intraracial variation. Doubts began to be expressed as to whether there was any biological basis for the classification of human races.

The problem is compounded by the fact that, even when genetical analysis became based not just on a few traits such as the ABO, MN and Rh blood-groups, but on a number of traits, different results were obtained according to which combinations and numbers of traits were used. For example, Piazza et al. analyzed frequency data for eighteen gene loci in fifteen representative human populations: they found that the Negroid populations were genetically closer to the Caucasoid populations than either group of populations was to those populations classified as Mongoloid. This, in turn, was interpreted as signifying an earlier phylogenetic split between Mongoloid, on the one hand, and Negroid-Caucasoid on the other, and a later (more recent) split between Negroid and Caucasoid.

However, Nei's analysis, based on eleven protein and eleven blood-group loci in twelve human populations, revealed a first splitting between Negroid and Caucasoid-Mongoloid. Subsequently, Nei and Roychoudhury used a still larger number of genetic traits, namely sixty-two protein loci and twenty-three blood-group loci, that is, eighty-five gene loci in all, for which data were available for some eighteen world populations. Interestingly, while the protein data revealed a first splitting between Negroid and Caucasoid-Mongoloid, the blood-group data suggest a slightly closer affinity and therefore a slightly more recent splitting between Negroid and Caucasoid.

Clearly, the last word has not been said on the exact pattern of affinities among the living races. Nor is there a consensus as to whether the large size of intraracial genetic variation, compared with interracial, vitiates any biological basis for the classification of human races. As two representative studies, we may cite Lewontin who believes there is no basis; and Nei and Roychoudhury who disagree with Lewontin and assert that, while the interracial genic variation is small, the genetic differentiation is real and generally statistically highly significant. Furthermore, it is clear that, by the use of genetic distance estimates, Piazza, Nei and Roychoudhury and others have been enabled to study the genetic relationships among the mainly morphologically defined human races, to construct dendrograms and to impart some understanding of the pattern of recent human evolution. Thus, the latter investigators have found evidence from protein loci to suggest that the Negroid and the Caucasoid-Mongoloid groups diverged from each other about 110,000 ± 34,000 years before present, whereas the Caucasoid and Mongoloid groups diverged about 41,000 ± 15,000 years before the present. These estimates do depend on a number of assumptions and may be modified with the accretion of more data.
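The genetic distance estimates mentioned above can be illustrated with a minimal sketch of Nei's standard genetic distance, the measure associated with the Nei and Roychoudhury studies the text cites. The two populations and their allele frequencies below are invented purely for illustration; as the text notes, real analyses draw on dozens of loci and many populations.

```python
import math

def nei_distance(pop_x, pop_y):
    """Nei's standard genetic distance between two populations.

    pop_x, pop_y: lists of loci; each locus is a list of allele
    frequencies (the frequencies at a locus sum to 1).
    D = -ln( Jxy / sqrt(Jx * Jy) ), where Jx, Jy, Jxy are the mean
    homozygosities and mean cross-population gene identity over loci.
    """
    n = len(pop_x)
    jx = sum(sum(p * p for p in locus) for locus in pop_x) / n
    jy = sum(sum(p * p for p in locus) for locus in pop_y) / n
    jxy = sum(
        sum(px * py for px, py in zip(lx, ly))
        for lx, ly in zip(pop_x, pop_y)
    ) / n
    return -math.log(jxy / math.sqrt(jx * jy))

# Hypothetical allele frequencies at three loci (two alleles each)
pop_a = [[0.9, 0.1], [0.5, 0.5], [0.7, 0.3]]
pop_b = [[0.6, 0.4], [0.5, 0.5], [0.4, 0.6]]

print(round(nei_distance(pop_a, pop_b), 4))

# Identical populations give a (numerically) zero distance
assert abs(nei_distance(pop_a, pop_a)) < 1e-9
```

Under the assumptions of the model, D accumulates roughly linearly with time since divergence, which is what permits divergence-date estimates of the kind quoted in the text (with the wide error margins noted there).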

One further point may be mentioned here: the extent of genetic differentiation among the living human races, as determined by the study of protein loci, is not always closely correlated with the degree of morphological differentiation. Indeed, evolutionary change in morphological characters appears to be governed by quite different factors from those governing genetic differentiation in protein-forming genes of the human races, on presently available evidence. Genetic differentiation at protein loci seems to occur largely by such biological processes as mutation, genetic drift and isolation, with migration playing an important role in the establishment of current genetic relationships among human races. However, morphological characters have apparently been subject to stronger natural selection than 'average protein loci.'

In short, the race concept can be applied to modern humans, even when one uses the most modern analytical procedures of population geneticists, and such application has been found of heuristic value. Nevertheless, irrespective of sociopolitical considerations, a number of modern investigators of human intraspecific variation find it more useful and more valid to base such studies on populations, as the unit of analysis, and to discard the race concept in these attempts.

Abuses and aberrations of the race concept

Among the various misconceptions that surround the concept of race are ideas about 'race purity', the effects of racial hybridization, superior and inferior races, race and mental differences, race and culture. A full review of this vast subject is not possible here: it has been dealt with in a number of studies.

Although writers adopt widely differing standpoints, especially on the subject of race and intelligence (as supposedly reflected by IQ test results), it would not be unfair to claim that the following points reflect the view of a great majority of physical anthropologists, human biologists and human geneticists at this time.

    1. Race is an idea borrowed from biology.
    2. At a stage when the study of human populations was primarily, if not exclusively, morphological and its objective classificatory, the race concept helped to classify the immense variety of living and earlier human beings of the species Homo sapiens. With the advent of genetic analysis and the discovery that clines of genetic differentiation transcend the supposed boundaries of human races, the race concept has been appreciably weakened.
    3. While some population geneticists have found that race still serves a useful purpose in the study of the genetic affinities of living populations, in the determination of the causal factors that have operated to produce genetic differentiation and in the reconstruction of the phylogenetic history of modern human diversity, others have found the concept of such negligible value in these studies as to have led them to discard race entirely. Time will tell whether we are witnessing 'the potential demise of a concept in physical anthropology,’ or whether the concept will survive the politico-social abuses to which it has been subject and which have been regarded by some as the primary cause of its decline from favor among many investigators and writers of textbooks.
    4. If, for purposes of this analysis, we accept the existence of human races (as of other living things), we must note that races differ not in absolutes, but in the frequency with which different morphological and genetic traits occur in different populations.
    5. The overwhelming majority of the genes of Homo sapiens are shared by all humankind; a relatively small percentage is believed to control those features which differentiate the races from one another.
    6. The formation of the modern human races is a relatively recent process, extending back in time for probably not much more than 100,000 years. As against this period of recent diversification, each race has spent at least forty times as long a period of its hominid ancestry in common with all other races as it has spent on its own pathway of differentiation. This statement is based on the evidence that fossilized members of the human family (the Hominidae) are known from 4 million years before the present; molecular and some other evidence suggests that the appearance of the hominids may go back to 5 or more million years before the present.
    7. Racially discriminatory practices make certain assumptions about race, some overt, some tacit. These include the assumptions that races are pure and distinct entities; that all members of a race look alike and think alike, which assumption, in turn, is based upon the idea that how one behaves depends entirely or mainly on one's genes; and that some races are better than others.
    8. The scientific study of human populations has provided no evidence to validate any one of these assumptions.
    9. Genetical and morphological analysis of human populations has failed to confirm that some races are superior and others inferior.
    10. Accidents of geography and history, difficulties of terrain, physical environment and communication, are sufficient to account for the contribution which different populations have made to the varying advancement of human culture and to civilization.
    11. Culture, language and outlook are not inseparably bound up with particular morphological or genetic racial features; for example, human culture is altering the direction of human evolution, as the species spreads into every corner of the world, and as cultural and racial divergence gives way over large areas to cultural and racial convergence.
    12. The myth of the pure race has been thoroughly disproved. There are no pure (genetically or morphologically homogeneous) human races and, as far as the fossil record goes, there never have been.
    13. Not only is purity of race a non-existent fantasy, but there is no evidence to support the notion that purity of race is a desirable thing.
    14. Racial groups are highly variable entities; for many traits intraracial variability is greater than interracial variability. Intermediates exist between one race and the next.
    15. Members of all races are capable of interbreeding with members of all others, that is, all that have been put to the test.
    16. The supposed evils attendant upon race-crossing do not bear scientific scrutiny: neither sterility, diminished fertility, nor physical deterioration, has been proven to be a biological consequence of race-mixing. If there are unfortunate effects from such crossing, they are social (not biological) and they appear to result from the way in which other members of the populations in question look at and treat the 'hybrids'.
    17. The study of the races of humankind has been based on physical (that is morphological, physiological and biochemical) and genetic traits; mental characteristics have not been used in the classification of the human races, nor have they been found useful for such a purpose.
    18. Scientific studies have not validly demonstrated any genetically determined variations in the kinds of nervous systems possessed by members of different human races, nor any genetically determined differences in the patterns of behavior evinced by members of different races.
    19. The claim that genetic factors contribute as much as 75 or 80 per cent of the variance of IQ test-score results and are therefore largely responsible for Black-White differences in mean test-score results has been seriously questioned in a number of investigations. It has been shown that a heritability estimate of 0.75 does not apply to American Blacks, among whom a much smaller percentage of the variance of test-score results has been shown to be genetically determined, and a larger proportion environmentally determined. The immense literature that has accumulated since Jensen put forward his hypothesis that American Blacks are genetically inferior in intelligence to Whites has revealed many flaws that were implicit in the reasoning behind the hypothesis. The main conclusion that many of these studies have reached is that 'currently available data are inadequate to resolve this question in either direction.' A number of investigations have led to the development of environmental hypotheses. For example, Scarr found evidence in her studies to support a twofold hypothesis: such differences as exist between comparable populations she attributed partly to environmental factors and partly to cultural factors. On this additional cultural hypothesis, her work led her to stress a different relevance of extra-scholastic or home experience to scholastic aptitudes and achievement: 'The transfer of training from home to school performance is probably less direct for Black children than for White children.' Clearly, at this stage of our ignorance, it is unjustified to include intelligence, however tested, among the validly demonstrated, genetically determined differences among the races of humankind.

2. Race came into English in the 16th century, from the French race and the Italian razza. Its earlier origins are unknown. In the early uses it has a range of meanings: (i) offspring in the sense of a line of descent - 'race and stock of Abraham' (1570) - as in the earlier uses of 'blood', and the synonymous 'stock', used thus from the 14th century in the extended metaphor from stoc, Old English - trunk or stem; (ii) a kind or species of plants (1596) or animals (1605); (iii) general classification, as in 'the human race' (1580); (iv) a group of human beings in extension and projection from sense (i) but with effects from sense (ii) - 'the last Prince of Wales of the Brittish race' (1600).

This range has persisted, but it is from sense (iv), with effects from sense (i), that the word has become problematic, and especially in its overlap and confusion with the relatively simple senses (ii) and (iii). Race has been used alongside both genus and species in classificatory biology, but all its difficulties begin when it is used to denote a group within a species, as in the case of the 'races of man'. This derives, at one level, from the old senses of 'blood' or 'stock', but it has been widely extended from traceable specific offspring to much wider social, cultural and national groups. However, at another level, serious physical anthropology, from Blumenbach (1787), was indeed tracing broad differential groups among humans; Blumenbach's classification, largely based on the measurement of skulls, distinguished the Caucasian, the Mongolian, the Malayan, the Ethiopian and the American (Indian), marked also by skin color - white, yellow, brown, black, red. More complex systems of physical anthropology have followed this, including pre-human and other hominid types, but from the emergence of 'true humans' tracing differences within an unquestioned single species.

This serious scientific work became radically confused, in the 19th century, with other ideas derived from social and political thought and prejudice. One landmark is Gobineau's Essai sur l'inégalité des races humaines (1853-5), which proposed the idea of an 'Aryan race' (by extension from Aryan, Sanskrit - noble, which had been widely used from the early 19th century to describe the Indo-European 'family' of languages established by comparative linguistics or, more restrictively, the Indo-Iranian division of that family). The transposition from a linguistic to a physical (racial) group was especially misleading when it was combined, as in Gobineau, with ideas of a pure stock, of the superiority of the 'Nordic strain' within this, and then the general notion of inherent racial inequalities. It is indeed from the mid-19th century that racial comes into use in English. There was then a further effect from the ideas which became known as 'Social Darwinism', in which ideas of evolution as a competitive struggle for existence and as the 'survival of the fittest' were extended from their biological source, where they referred to relations between species, to social and political conflicts and consequences within one species, the human. In relation to race, this took its most influential form in eugenics, a word introduced by Galton in 1883, from Greek roots, with the sense of 'the production of fine offspring'. In some branches of eugenics, ideas of both class and racial superiority were widely propagated, and scientific evidence of variable heredity was mixed with and often overridden by pre-scientific notions of 'pure racial stocks' and of the inheritance, through blood or race, of culturally acquired characteristics (which Galton himself had rejected). In its gross forms, this doctrine of inherent racial superiority interacted with ideas of political domination and especially imperialism.
It is characteristic of the period to find uses such as 'distinctions of race-character in governing (Negroes)' (1866). The supposed historical missions of the 'Anglo-Saxon' and of the 'German' races (later to be in 'national' conflict with each other) were widely propagated.

Thus the group of words around racial came to be effectively distinct from the older group around race, though it is obvious that the groups can never be finally separated. Racialism appeared in the early years of the 20th century (racialist is recorded from 1930). These are almost invariably hostile words (in recent years often shortened to racism and racist, and then always hostile), used to describe the opinions and actions of the proponents of racial superiority or discrimination. To a certain extent they have compromised continuing work in physical anthropology and in genetics, where scientific inquiry into heredity and variation within the human species is still important and productive.

Race-hatred, as a term, is recorded from 1882, though we should also note Macaulay's 'in no country has the enmity of race been carried further than in England' (1849). It is clear that the very vagueness of race in its modern social and political senses is one of the reasons for its loose and damaging influence. It has been used against groups as different in terms of classification as the Jews (culturally specific Europeans and North Americans, in the most usual context), American Blacks (a mixed minority within the heterogeneous population of the United States), 'Orientals' (as in the projection of 'the Yellow Peril'), 'West Indians' (a mixed population identified by geographical origin, but with the term persisting when this has ceased to apply), and then, in different ways, both Irish and Pakistanis, where the 'Aryan' (Indo-European) assumption is stretched literally to its limits, but in excluding ways. Physical, cultural and socio-economic differences are taken up, projected and generalized, and so confused that different kinds of variation are made to stand for or imply each other. The prejudice and cruelty that then often follow, or that are rationalized by the confusions, are not only evil in themselves; they have also profoundly complicated, and in certain areas placed under threat, the necessary language of the (non-prejudicial) recognition of human diversity and its actual communities. See ethnic, imperialism, nationalist.


3. Racism, the idea that there is a direct correspondence between a group's values, behavior and attitudes, and its physical features, is one of the major social problems confronting contemporary societies. Racism is also a relatively new idea: its birth can be traced to the European colonization of much of the world, the rise and development of European capitalism, and the development of the European and American slave trade. These events made it possible for color and race to become pivotal links in the relations between Europeans, Americans and the people of Africa, Asia, Latin America and Australia. Though belief in a link between race and behavior has never been proven, ideas supporting this connection have proved tenacious, elevated to the status of folk truth among the general population in many, if not most, countries. Indeed, if the assertion of such a relationship were the only defining aspect of racism, its impact might be less damaging, though no less unacceptable. Instead, a more pernicious feature of racism entails the belief that some groups, those of a certain hue, with less power and low status, are inferior; others, of another hue, with greater power and high status, are deemed superior.

Racism is a highly complex and multifaceted concept and can be delineated into several areas, but it is important first to differentiate racism from ethnocentrism, a concept with which it is often confused and, unfortunately, used interchangeably. For example, Jones begins his critique of racism by distinguishing the two terms. Ethnocentrism entails the acceptance of the belief that individuals and groups seek to interpret events and situations, and to evaluate the actions, behavior and values of other individuals and groups, from their particular cultural perspectives. This view simply assumes that all insider values are 'acceptable', 'right' and 'good'; conversely, all outsider values are 'unacceptable', 'wrong' and 'bad'. What distinguishes ethnocentrism from racism is that in the former, there is no attempt to base insider/outsider differences along racial or color lines. Oliver C. Cox makes a similar point in his study of class, caste and color: studies of early civilizations and empires demonstrated that ethnocentrism was clearly evident, but that it focused solely on language and culture. That is, one was civilized if one understood the language and culture of the insider, but a barbarian if one did not. The early Greek idea of dividing the world into these two spheres, the civilized and the barbarian, was typical.

The Social Darwinism of the 19th century laid the foundation for what is called 'ideological racism'. The logic is as follows: nature rewards groups which win the struggle for existence; strong groups, the winners, have won the right to control and, hence, decide the fate of the losers, the weaker groups. Those groups which lose in the struggle against other groups, by dint of this loss, confirm their weakness and inferiority. Since this ideology emerged simultaneously with the rise of European imperialism and the colonization of the continents, and gave credence to these events, and because the people and races being colonized and conquered were Africans, Asians and Native Americans, the close relationship between race, color and ideas of superiority or inferiority was viewed by Europeans and Americans as having been confirmed. As the European and American political, economic and cultural powers became more deeply entrenched in what DuBois called the 'coloured world', other attempts were made to justify the ever-increasing racial inequality. One new doctrine may be called 'scientific racism'. This racism entailed the use of 'scientific techniques' to sanction the belief in European and American racial superiority. The first technique was the use of 'objective' IQ tests, and their results, to confirm the high position of Europeans and the low positions of all other races in what its proponents called a racial hierarchy. Almost simultaneously with the use of 'scientific tests' was the use of brain size to prove inferiority or superiority. Those who believed in racial inequality were, thus, eager to use the lofty name of science to support their efforts to dominate and control other races and continents. In one of his studies, Pierre van den Berghe cut to the heart of the racist logic when he stated that despite all talk of inferiority or superiority, groups dominate other groups because only by doing so can they ensure and enforce inequality.
But it can be said that this enforced inequality has yet a more ulterior motive which is even more central to the idea of racism: to isolate, penalize, ostracize and push the pariah group outside of the normal and ongoing social, political, economic and cultural discourse so that the pariah group will, in fact, be 'made' inferior.

During the 1960s, when race and racism were crucial themes, Stokely Carmichael and Charles Hamilton coined the term 'institutional racism' to distinguish it from individual racism and to stress the overwhelming importance of the former over the latter. An individual may be a racist and choose to discriminate against another individual or a group. This individual act is in contrast to institutional racism, in which organizational networks linked to rules, procedures and guidelines make it difficult for members of one group to affiliate institutionally. In this case, it is not so much the individual who discriminates, though individuals may do so as supervisors and managers for the company. In institutional racism, institutional rules and procedures which have been established on the basis of the qualifications and standards of the group in power serve to keep all other groups out, though this may not have been the intent of the original rules, procedures and guidelines. In fact, individuals employed in racist institutions may attest to their own lack of racism while proclaiming that they too are trapped and imprisoned by the laws, rules and procedures. There are other instances, however, when institutions willingly and knowingly discriminate. Since the mid-1980s, for example, the United States government has uncovered extensive patterns of institutional racism in housing, employment, education and banking, generally directed against racial minorities. Turner et al. presented a concise history of the interlocking networks which provide power and force to the racism which permeates institutions. One of the glaring consequences of the intensity of the traditional patterns of institutional racism has been the extent to which White Americans, and Whites in South Africa and Britain, have been the recipients of massive affirmative action programs in which they, Whites, had a monopoly on jobs, incomes and bureaucratic positions, while those not White were removed from the competitive field.
We have just recently begun to understand the extent to which centuries of affirmative action for Whites have consigned minorities to a secondary role in economics, politics, education and other areas of social life.

In the USA, some attention has focused on the idea of 'reverse racism'. Racism in any form, practiced by any group, should be challenged and contested, but the idea that minorities in the USA now have sufficient power to injure the interests of the majority group is not consistent with the facts. In all areas of living (political, economic, educational, etc.) Whites continue to have a huge monopoly. When one looks closely at the data provided by those who claim that reverse racism is alive and real, one generally sees anecdotal evidence in which much of the information used is obtained third or fourth hand, that is, a friend of a friend said his friend did not get a job or lost a job to a Black. When these anecdotal sketches are used, the minority who gets the job or the promotion is invariably less qualified, very incompetent, etc. In the USA, a member of a minority group who is a racist may insult a member of the majority, but in no area of American life are minorities, who may be racists, in a position to control institutionally or determine the opportunity structure for the majority. When majorities are racists, and when they control the major institutions, as described by Turner, they can and do control the opportunity structure for minority people.

Since the 1960s, when racial analysis became a major issue in social relations, ample data have been collected verifying the negative consequences of racism for minority groups. Generally, these negative consequences resonate around the theme of powerlessness in all areas. In the 1980s some sociologists began to focus on the impact of racism on Whites. This new twist on the consequences of racism shifts the focus somewhat, for it suggests that racism is not merely something which happens to the oppressed; rather there are social, emotional and ethical issues for the majority culture which controls the institutions which constitute the continuing source of racism. Attention has also been devoted to the idea that racism may be a more consciously directed act and idea than previously assumed. One 1981 study revealed that many parents did, in fact, socialize their children to be racists; racial training did not occur by chance. Children are guided in their racial training by adults, mainly parents. However, during their teen years, even the children from the most racist families tend to move away from the positions of parents and to assert their own views of other groups based on their relationship with these groups at school, work or in various social circles.

In the mid-1990s, the abolition of apartheid in South Africa will certainly alter the racial history in that country. But we now know, based on history in the USA and Britain, that the abolition of racially restrictive laws will not end all semblance of racism. Though much of the myth of race resides in institutional arrangements, another large part resides in patterns of racial thinking and the ideological orientation of individuals and groups in the society. Laws restricting discrimination may be effective to some degree, and groups may be frightened enough by the price they might pay for discriminating, yet ideological racism, enshrined in the deeply held racial myths in a society, may survive among the population in many forms. This then is the test for nations which contain diverse racial groups and which have had a history of racial domination and conflict: how to ensure individual and group equity; how to ensure that group cultural and racial differences be viewed as objective social and biological realities without the accompanying invidious distinctions. See also ethnicity, Social Darwinism.

rational choice theory

3. Rational choice theory (RCT) - or alternatively rational action theory (RAT) - adopts the view that behavioral units (usually individual people) optimize their choices (actions) under specific conditions. Colloquially, individuals (units) do the best they can given their conditions.

RCT is tied to no particular notion of what is 'best', but in the absence of any independent evidence to the contrary usually assumes that individuals look out for themselves (self-regard). There are, however, RCT models of altruism and malice. Evolutionary theory is often used to justify self-regard, by claiming to show that self-regard survives in competitive environments. This is, however, controversial.

It is the assumption of optimizing action, when conjoined with descriptions of the conditions of action, which gives RCT its predictive (explanatory) properties. It may also be used normatively to guide actions in an optimal direction. Differences in the conditions of action lead to variations in the type of theory which is analytically appropriate. The most basic distinction is between parametric and strategic conditions. The former implies that the focal decision/action taker, when optimizing, need make no calculation about how others will calculate about what the focal actor does, in taking the decision. When this is the case, the appropriate theory is decision theory. By contrast, when the focal actor needs to calculate how others will choose (act) in taking their own decisions (actions), then the conditions are strategic. Game theory is the theory of strategic choices/actions.

The conditions assumed in decision theory are as follows: first, the set of alternative actions available to the decision maker; second, the decision maker's degree of certainty about the relevant outcomes of each of the available actions; and third, the decision maker's ranking of the available actions on the basis of presumed rankings of relevant outcomes. It is conventional to divide the decision maker's degree of certainty into three types: decisions under certainty, where the decision maker is certain about the relevant outcomes; decisions under risk, where the decision maker can in some way assign probabilities to the outcomes; and decisions under uncertainty, where the decision maker cannot even assign probabilities. With decisions under certainty the optimal choice is the highest-ranking action; with decisions under risk, expected utility theory is usually deployed. The analysis of decisions under uncertainty is controversial but usually involves an attempt to assign some sort of probabilities.
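The expected utility rule for decisions under risk can be sketched in a few lines. This is a minimal illustration, not drawn from the source text: the action names, probabilities and utility numbers below are entirely hypothetical, chosen only to show the calculation.

```python
# Decision under risk: each action is a "lottery" over outcomes, written as
# a list of (probability, utility) pairs. Expected utility theory says the
# optimal choice is the action whose lottery has the highest expected utility.

def expected_utility(lottery):
    """Expected utility of an action: the sum of probability * utility."""
    return sum(p * u for p, u in lottery)

def best_action(actions):
    """Return the name of the action with the highest expected utility."""
    return max(actions, key=lambda name: expected_utility(actions[name]))

# Hypothetical example: carry an umbrella or not, with a 30% chance of rain.
actions = {
    "umbrella":    [(0.3, 8), (0.7, 5)],   # dry if it rains; mild nuisance if not
    "no_umbrella": [(0.3, 0), (0.7, 10)],  # soaked if it rains; unencumbered if not
}

print(best_action(actions))  # prints "no_umbrella" (EU 7.0 versus 5.9)
```

A decision under certainty is the degenerate case where each lottery has a single outcome with probability 1, so the rule reduces to picking the highest-ranked action.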

Game theory extends RCT to strategic situations. Games may be classified in a number of ways, including zero-sum and non-zero sum; normal (or strategic) form and extensive form; complete and incomplete information; perfect and imperfect information; cooperative and non-cooperative; one shot and repeated (finite or infinite times). It is generally held that cooperative games can be reduced to non-cooperative ones, by including some initial bargaining moves. Nash equilibrium is the fundamental solution concept (predictive or normative). A Nash equilibrium is a set of (actions) strategies (one for each (actor) player) such that none has an incentive to change strategy given the other actors play the strategies specified in the set. Some games have more than one Nash equilibrium; then additional criteria must be applied for a solution (so-called equilibrium selection). Game theory is a theory of social interaction which is increasingly used in the social sciences, notably economics, but is now advancing in sociology and political science also.
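The Nash equilibrium definition above can be checked mechanically for a small game. The sketch below uses the standard textbook Prisoner's Dilemma payoffs (the numbers are the conventional illustrative ones, not taken from the source text): a strategy pair is a Nash equilibrium exactly when neither player can gain by deviating unilaterally.

```python
# Brute-force search for pure-strategy Nash equilibria in a two-player
# normal-form game, using the Prisoner's Dilemma as the worked example.

from itertools import product

# payoffs[(row_strategy, col_strategy)] = (row_player_payoff, col_player_payoff)
payoffs = {
    ("cooperate", "cooperate"): (3, 3),
    ("cooperate", "defect"):    (0, 5),
    ("defect",    "cooperate"): (5, 0),
    ("defect",    "defect"):    (1, 1),
}
strategies = ["cooperate", "defect"]

def is_nash(row, col):
    """True if neither player has a profitable unilateral deviation."""
    row_payoff, col_payoff = payoffs[(row, col)]
    no_row_gain = all(payoffs[(r, col)][0] <= row_payoff for r in strategies)
    no_col_gain = all(payoffs[(row, c)][1] <= col_payoff for c in strategies)
    return no_row_gain and no_col_gain

equilibria = [(r, c) for r, c in product(strategies, strategies) if is_nash(r, c)]
print(equilibria)  # prints [('defect', 'defect')]
```

The search confirms the familiar result: mutual defection is the unique pure-strategy Nash equilibrium, even though mutual cooperation would leave both players better off. Games with several Nash equilibria would return a longer list, which is where the equilibrium-selection criteria mentioned above come in.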


1. A word which usually signifies the possession of reason. The notion of rationality is a central theme in the western cultural tradition, and has been used by way of cultural self-definition and in order to define the identity of others. How rationality has been conceived of, what it consists in, and the main problems that the analysis of it entails, have formed the basis for much discussion of the nature of self, society and culture within the western tradition.

In the thought of 17th-century philosopher René Descartes (Meditations on First Philosophy (1641)), rationality is an attribute of human minds and is shown in the form of self-evident truths, like the law of non-contradiction (the principle which states that a thing cannot both exist and not exist at one and the same time). Descartes’ rationalism held, in common with the work of a wide variety of ‘rationalist’ approaches (e.g. Plato (c. 428-348 BC), Spinoza (1632-77), Leibniz (1646-1716)), that it is possible through the use of reason to obtain knowledge of the nature of existence, and that there is a systematic relationship between existence and our knowledge. Thus, rationality, on this conception, pertains to objectivity, and is possessed independently of contingent factors, such as those to do with history or the constitution of society. On this view, rationality is a universal and non-cultural phenomenon.

One criticism of this view was inaugurated by the empiricists, who placed an emphasis upon the role of experience in grounding human subjectivity and knowledge. David Hume (1711-76) even went so far as to argue, notoriously, that ‘Reason is, and ought only to be, the slave of the passions’ (Treatise of Human Nature, III (iii):3). For Hume, the self was not primarily rational, and human knowledge resided not in rational principles, but in the force of ‘custom or habit.’ In other words, reason is not transcendent, but is culturally located and linked to human desires and dispositions. In response to this view, Immanuel Kant (1724-1804) sought to present a critique of the nature and limits of rationality through an interrogation of the structures which must be in place in order for knowledge to be possible. Kant formulated the theory of the ‘transcendental subject’ which, rather than signifying an empirical subject, is an attempt to indicate the fundamental features of subjectivity which must be present in order for experience to be possible. On this view, subjectivity must follow certain rules if we are to have knowledge of the world, and these rules form the basis of our rationality. This conception of subjectivity is thus normative; but it is also a rational one in so far as it delineates the scope and boundaries of scientific and rational inquiry.

For Friedrich Nietzsche (1844-1900), Kant’s criticisms of metaphysics were not extended far enough, and demanded to be expanded into the realms of subjectivity and reason themselves. On a popularly disseminated ‘Nietzschean’ reading, rationality is, in reality, an instrument which does not define humanity as such, but is a product of needs and drives which allowed humans to survive, or it is related to power. In other words, reason can be read as the production of a particular species of animal, and also in terms of its role in the cultural construction of concepts like ‘truth’ and ‘reality.’ The 20th century has seen the development of criticisms which follow Nietzsche’s injunction (in a poem which concludes the volume Human, All-Too-Human (1878)) to ‘bring reason to its senses’ by subjecting it to an extensive criticism. Criticisms of rationality and subjectivity have been forthcoming from a variety of perspectives, e.g. from the Frankfurt School theorists such as Adorno and Horkheimer, who argue that modern rationality has, through the influence of both the Enlightenment and as a consequence of the increasing rationalization of modern societies, taken on the form of a primarily instrumental function, and thereby neglects its proper cultural role of critical reflection. Likewise, various criticisms have been offered by figures associated with post-structuralism and communitarianism, which replace dehistoricized conceptions of subjectivity and rationality with more historically aware, or linguistically-based, conceptions (e.g. by Foucault, although his analysis is not without its own problems, and Lyotard – see also self). In contrast to the latter thinkers’ full-blown criticisms of the rational subject, Jürgen Habermas has sought in recent years to develop a theory of rationality which takes into account the normative and linguistic aspects of social interaction between agents.
Habermas draws upon the work of both analytical philosophy (Austin’s conception of speech-act theory) and the later Wittgenstein, to formulate a conception of rationality which argues for its being understood in terms of the material and historical factors underlying its development, and yet preserves a space for critical and rational discourse in the shape of ‘communicative action.’ This notion can be contrasted with that of instrumental rationality. The latter involves only the calculation of means to attain given ends (thus, in the sciences, a rationality based upon calculation is used as a means of problem-solving), whereas communicative action depends upon binding ‘consensual norms’ which serve to underpin interaction between social agents. On Habermas’s conception, this realm of action constitutes a fundamental component in cultural life: it is the sphere in which questions about the validity of our norms and value-systems can be raised, and is therefore concerned with a non-instrumental form of rationality and justification.

rationality, rationalism and reason

3. Rationality is a problem shared by the social sciences and philosophy. Before considering the various issues it raises, it is best to offer provisional definitions of the three related notions of reason, rationality and rationalism.

Reason is the name of an alleged human faculty capable of discerning, recognizing, formulating and criticizing truths. Philosophic disputes about reason concern its very existence (extreme irrationalism may deny that any such faculty exists at all); the nature of its operations (for example, can it actually secure data, or can it only make inferences; how powerful are the inferences it can make; can it make discoveries or can it only check discoveries made by other means?); the areas of its operations (is it restricted to deductive reasoning, or to science; can it be applied to moral, aesthetic, political issues; can it initiate conduct?).

Rationality is a trait which individuals or collectivities display in their thought, conduct or social institutions. Various features can be seen, singly or jointly, as marks or defining features of rationality:

    1. A tendency to act only after deliberation and calculation, as opposed to acting impulsively or in obedience to unexamined intimations.
    2. A tendency to act in accordance with a long-term plan.
    3. A control of conduct by abstract and general rules.
    4. Instrumental efficiency: the selection of means purely by their effectiveness in bringing about a clearly specified aim, as opposed to allowing means to be selected by custom or impulse.
    5. A tendency to choose actions, institutions, and so on in terms of their contribution to a single and clearly specified criterion, rather than by evaluating them by multiple, diffuse and unclear criteria, or accepting them in virtue of their customariness.
    6. A propensity to systematize convictions and/or values in a single coherent system.
    7. An inclination to find human fulfillment in the exercise or satisfaction of intellectual faculties rather than in emotion or sensuality.
Rationalism is the name of a number of doctrines or attitudes:
    1. The insistence of the authority of individual, independent, cognitive activity, as opposed to authority of some extraneous privileged sources (Revelation, Church).
    2. The higher valuation of thought or inference as against sensation, observation or experiment, within cognitive activity.
    3. The view that collectivities, or individuals, should best conduct their lives in accordance with explicit and intellectually chosen plans, rather than by custom, trial and error, or under guidance of either authority or sentiment.
It should be noted that doctrine (1) opposes the partisans of human reason, assumed to be fairly universally or evenly distributed among all people, to followers of privileged Authority. In other words, Rationalists in sense (1) include both sides of dispute (2), that is, both adherents of thinking and adherents of sensing as the main source of knowledge. Thus issues (1) and (2) cut across each other. As 'rationalism' is widely used in both senses, and the issues are cross-related in complex ways, failure to see this ambiguity leads to confusion. A key figure in western rationalism was Descartes, who was a rationalist in both senses. On the one hand, he recommended that all traditional, inherited ideas be subjected to doubt, a kind of intellectual quarantine, and be awarded certificates of clearance only if they were found logically compelling to the inquiring mind. Though Descartes, when applying this method, did in fact eventually award just such a certificate to the theism of the faith of his birth, the sheer fact of making inner reason the first and last Court of Appeal in effect constituted and encouraged rationalism in sense (1). But he was also a rationalist in the second sense, and considered innate rational powers to be far more important than sensory information. His view of the human mind has been powerfully revived by the contemporary linguist Noam Chomsky, notably in Cartesian Linguistics (1966), and supported by the argument that the amazing range of linguistic competence of most humans cannot be explained without the assumption of an innate grammatical structure present in all minds, which thus corresponds to one aspect of the old 'reason'.

In the 17th and 18th centuries, the program of Descartes's rationalism (sense 1) was implemented, among others, by the school of 'British empiricists', of whom the greatest was probably David Hume. However, at the same time they repudiated rationalism (sense 2). Hume (1739-40), for instance, basically considered thinking to be nothing but the aftertaste of sensations: thinking about a given object was like having an aftertaste of a dish when one is no longer eating.

The 18th century is often said to have been the Age of Reason; in philosophy, however, it was also the age of the Crisis of Reason. This was most manifest in the work of Hume. His main discovery was this: if rationalism (1), the subjection of all belief to evidence available to the individual, is combined with empiricism, the view that the senses alone supply the database at the individual's disposal, we end with an impasse: the database supplied by the senses simply is not strong enough to warrant our endorsement of certain convictions which seem essential for the conduct of life - notably, the presence of causal order in the world, of continuous objects, or of moral obligation. Hume's solution for this problem was that these crucial human tendencies, such as inferring from the past to the future, or feeling morally constrained, not being warranted by the only database available to us, were simply rooted in and justified by habit, a kind of Customary Law of the mind.

Immanuel Kant (1781) tried to provide a stronger and less precarious refutation of Hume's skepticism. His solution was, in substance, that the human mind has a rigid and universal structure, which compels humans (among other things) to think in terms of cause and effect, to feel obliged to respect a certain kind of ethic (a morality of rule-observance and impartiality, in essence), and so on. So the inner logical compulsions on which Descartes relied as judges of culturally inherited ideas were valid after all, but they were only valid for the world as experienced by beings with our kind of mind; they were not rooted in the nature of things, as they were 'in themselves'. They were rooted in us.

It is among Kant's numerous intellectual progeny that the problem of reason becomes sociological. The two most important ones in sociology were Emile Durkheim and Max Weber. Each of them very obviously inherits the Kantian problem, but they apply it to society and to the diversity of human cultures in radically different, indeed almost diametrically opposed, ways. Durkheim followed Kant in being concerned with our conceptual compulsions, in holding conceptual compulsion to be central to our humanity. But where Kant was content to explain it by invoking an allegedly universal structure of the human mind, operating behind the scenes in each individual mind, Durkheim sought the roots of compulsion in the visible life of distinct communities and above all in ritual. The core of religion is ritual, and the function of ritual is to endow us with shared concepts, and to endow those concepts with a compelling authority for all members of a given community. This is the central argument of his The Elementary Forms of Religious Life (1912). For Durkheim, all humans are rational, rationality manifests itself in conceptual compulsion, but the form that rationality takes varies from society to society. Sharing the same compulsions makes humans members of a social community.

If for Durkheim all humans are rational, for Weber some humans are more rational than others. He notes that the kind of rationality which Kant analyzed - orderly rule-bound conduct and thought - is specially characteristic of one particular tradition, namely the one which engendered the modern capitalist and industrial society. (Weber is less explicitly concerned with Kant than is Durkheim, but the connection is nevertheless obvious.) Weber's problem is not why all humans are rational (all humans think in concepts and are constrained by them), but why some humans are specially rational, quite particularly respectful of rules and capable of selecting means for their effectiveness rather than for their congruence with custom, thereby becoming apt at establishing modern capitalist and bureaucratic institutions.

Weber (1924; 1922) noted that the kind of world codified by the great philosophers of the Age of Reason, a world amenable to rational orderly investigation and manipulation rather than propitiation, was not a world inhabited by all humankind, but only by the participants of the historical tradition which had engendered capitalism and large-scale bureaucracy. He believed that this kind of rational mentality was an essential precondition of a capitalist or bureaucratic civilization, and was not the necessary corollary of the other preconditions of that civilization: in other words, in opposition to historical materialism, he did not believe that the emergence of that civilization could be explained in terms of its material preconditions alone. One further and independent necessary factor was also required. (He modified rather than inverted the materialist position, in so far as he did not claim or believe that the nonmaterial necessary condition, or any set of such conditions, could ever be sufficient.) Hence in his hands the philosophical problem of rationality becomes a sociological one - how did rationality come to dominate one particular civilization and eventually, through it, the entire world?

The Durkheimian and Weberian traditions are not the only ones through which the philosophers' concern with Reason reaches the social sciences. There are at least two others.

In Kant, the attribution of rationality to a rigid and universal structure of the human mind, but not to the material which that mind (or those minds) handled, led to a tense and uncomfortable dualism: the world was a blind, amoral machine, and the intrusion of either cognitive rationality or moral conduct into it was a mysterious imposition by our minds of order on to material indifferent and alien to that order. At the core of the philosophy of Hegel lay the supposition that Reason was not merely (as Kant thought) responsible for the individual striving for consistent behavior and explanations, but that it was also a kind of grand and impersonal Puppet Master of History. In other words, the pattern of history had an underlying principle which was not alien to the rational strivings within us, but, on the contrary, provided a kind of guarantee for them. The idea is attractive, inherently and inescapably speculative, but it did seem to receive some support from the vision of history as Progress, which had become fashionable at about the same time. Marxism, while disavowing the mystical elements in Hegel, nevertheless took over the underlying intuition of a rational historic design. People who continue to uphold some version of this view are not normally called rationalists, but nevertheless their ideas are relevant to the debate about the relation of reason to life.

The other relevant tradition, in addition to the Marxist-Hegelian one, is associated with the great names of Schopenhauer, Nietzsche and Freud. Kant had identified Reason with all that was best in humankind. Schopenhauer (1819) taught that humans were dominated by a blind irrational Will, whose power they could not combat in the world, though they could at best occasionally escape it through aesthetic contemplation and compassion. Nietzsche (1909-13) shared Schopenhauer's views, but inverted his values: why should the Will be condemned, in the name of a morality which was really the fruit of resentment, of a twisted and devious manifestation of that very Will which was being damned? Freud (1930) took over the insights of both thinkers (though not Nietzsche's values), provided them with an elaborate setting in the context of clinical practice and psychiatry, invented a technique purporting to alleviate at least some of the more pathological manifestations of irrational forces, and set up an organization for the application and supervision of that technique. In so far as he did not applaud the dominance of irrational forces but on the contrary sought to mitigate them, he cannot (unlike Nietzsche) be accused of irrationalism; but his views of the devious and hidden control of seeming reason by hidden unreason closely resemble Nietzsche's, though as stated they are elaborated in what seems to be a far more specific form, and are linked to clinical practice.

The social scientist is likely to encounter the problem of Reason and Rationality (under a diversity of formulations) in connection with the various traditions and problems which have been specified. The main problem areas are:

    1. Innate reason vs experience as a source of cognition, the debate opposing thinkers such as Descartes and Chomsky to empiricists and behaviorists.
    2. The anchoring of inner logical compulsions either to an allegedly universal human mental equipment, or to the specific culture - in other words the opposition of (say) Kant and Durkheim.
    3. The question of a historically specific form of rationality, its roots, and its role in engendering modern civilization - what might be called the Weberian problem.
    4. The feasibility, in principle or in particular cases, of locating a rational plan in history.
    5. The debate as to whether the real driving force, and the location of genuine satisfaction, within the human psyche is to be sought in irrational drives or in rational aim, calculation, insight or restraint (or in what proportion).
    6. Rationality in the sense of explicit criteria and conscious plan, as opposed to respect for tradition and continuity in the management of a polity.
    7. Rationalism in the sense of favoring free inquiry as against the authority of either Revelation or Tradition.
These various issues are of course interrelated, although by no means identical, but they are often confused, and the lack of terminological consistency frequently furthers this confusion.


regulation

3. Regulation, here defined as any rule laid down by the government which affects the activities of other agents in the economy, takes many forms, but in general the types of activities concerned and the methods of control vary together. Three broad areas can be identified.

The first is legislation: this approach is commonest for issues such as safety. Most countries have regulations concerning health and safety at work, such as protection for workers against dangerous machinery; other examples include the wearing of seatbelts in cars. Enforcement may be carried out by the normal police authorities or by special agencies such as factory inspectors.

The second category is the regulation of monopolies. A monopoly will charge higher prices than a competitive industry, so consumers’ interests need some protection. It is useful to distinguish the general case, where action is infrequent, from the particular case of natural monopoly typified by public utilities, where regulation is more continuous.

General competition law operates when a company either has, or is about to acquire, a significant share of the market; then a body such as the Monopolies and Mergers Commission in the UK would determine whether a proposed merger or takeover should go ahead. However, the benefits of monopoly can also be gained by a group of firms acting collusively, and the threat of this has led to antitrust legislation such as the Sherman Act in the USA.

The second area of monopoly regulation is that applied to industries where competition is not feasible for structural reasons; this includes much of the transport, communications and energy sectors. Here a regulator is needed more or less permanently. The difficulty then is to control the monopoly sufficiently tightly without removing all its incentive to cut costs and develop new products. Two main methods have been used: rate-of-return regulation, where the firm may not exceed a given percentage return on its capital assets, and price-capping, which controls prices directly.

The final method of regulation is self-regulation, where an industry polices itself. This seems to occur where the problem is incomplete knowledge on the part of consumers. In areas such as medicine or the law, consumers depend on the doctor or lawyer making the right decision on their behalf. Some control of practitioners is therefore needed, and licensing is delegated to their professional bodies: someone 'struck off' their registers can no longer practice. Financial services markets are often self-regulatory too, requiring membership of the appropriate organization to work in the market, although the banking system is regulated by the government's central bank as part of its general responsibility for the stability of the financial system.


reification

1. Reification is literally the transformation of something subjective or human into an inanimate object. In social and cultural theory it therefore refers, most generally, to the process by which human society (that is ultimately the product of largely conscious and intentional human actions) comes to confront its members as an external, seemingly natural and constraining force. In a more precise or technical sense, the theory of reification (or Verdinglichung in the original German) was developed by Georg Lukács (1923) from Marx's theory of commodity fetishism. Marx analyzed the process in capitalism by which relationships between human beings (i.e. the meeting of humans in commercial exchange in the market) take on the appearance of relationships between things (such that the relationships between humans come to be governed by properties - exchange-values - that appear to be inherent to the commodities exchanged). For Lukács, this inversion is manifest in all social relations (and not merely in the economy), as in an increasingly rationalized and bureaucratic society, that which is qualitative, unique and subjective in human relationships is lost, as they are governed according to the purely quantitative concerns of the bureaucrat and the manager.

relations of production

1. In Marxism, the relations of production are the social relations that exist between the class of producers and the class of owners within an economy. In Marxist theory, all societies are characterized in terms of conflict between two major classes. The subordinate class is the class that actually produces goods and services, through the exercise of its labor power. The dominant class owns and controls the resources that are used in the production process (the means of production), and as such are able to control the production process and the fate of the product. Different modes of production, or historical epochs, are characterized by distinct relations of production and levels of technology (or forces of production). The relations of production are inherently static, and social revolution occurs when the productive potential inherent in developing forces of production can no longer be contained or fully exploited within the existing relations of production. (See mode of production).


representation

1. (1) On some theories, a function of language (i.e. representation conceived of as either (a) the representation of thought in language, or (b) the linguistic representation of the world of empirical experience). (2) In social terms, representation has (a) a political meaning (the representation, through institutional bodies or pressure groups, of the interests of political subjects – a notion inextricably linked with modern, liberal conceptions of the democratic process), and (b) a more nuanced meaning, linked to the practices and norms of representing, which may, for example, be used in the mass media in order to present images of particular social groups. In sense 2b, representation does not necessarily signify the representing of the interests of the group or individual represented. A group can be represented in a manner which might be conceived of as stereotyping it. Thus, in this context, 'representation' may be characterized as misrepresentation: as the 'presentation' or construction of identity. Such constructions of identity may be closely allied to questions of ideology and power, and to the forms of discourse implicated in the procedures whereby such images are created. Thus, the construction of concepts relating to issues of gender, race or sexuality are questions of representation. Sense 2b is, in many ways, related to senses 2a and 1b. In terms of the representation of political subjects (2a), the constitution of modes of representation may have an important role to play within the political process, in so far as such issues as those concerned with the construction of discourses surrounding matters of race or ethnicity can also be conceptualized as political issues.
Likewise, the view that language may have a role in constructing ‘reality,’ rather than simply reflecting it (1b), is an important one in this connection; for, if we were to be convinced that language does not merely ‘mirror’ the world of experience but constructs it, the same must go for its role in the world of social experience. The question of the role of representation can also be raised in the context of discourses of knowledge (cf. Edward Said’s account of Orientalism).


revolution

2. Revolution now has a predominant and specialized political meaning, but the historical development of this meaning is significant. The word came into English from the 14th century, from the word revolucion, Old French, revolutionem, Latin, from the root word revolvere, Latin - to revolve. In all its early uses it indicated a revolving movement in space or time: 'in whiche the other Planetes, as well as the Sonne, do finyshe their revolution and course according to their true tyme' (1559); 'from the day of the date heereof, to the full terme and revolution of seven yeeres next ensuing' (1589); 'they recoyl again, and return in a Vortical motion, and so continue their revolution for ever' (1664). This primary use, of a recurrent physical movement, survives mainly in a technical sense of engines: revolutions per minute, usually shortened to revs.

The emergence of the political sense is very complicated. It is necessary to look first at what previous word served for an action against an established order. There was of course treason (with its root sense of betraying lawful authority) but the most general word was rebellion. This was common in English from the 14th century. The sense had developed in Latin from the literal 'renewal of war' to the general sense of armed rising or opposition and, by extension, to open resistance to authority. Rebellion and rebel (as adjective, verb and noun) were then the central words for what we would now normally (but significantly not always) call revolution and revolutionary. There was also, from the 16th century, the significant development of revolt, from the word révolter, French, revolutare, Latin - to roll or revolve, which from the beginning, in English, was used in a political sense. The development of two words, revolt and revolution, from the sense of a circular movement to the sense of a political rising, can hardly be simple coincidence.

Revolution was probably affected, in its political development, by the closeness of revolt, but in English its sense of a circular movement lasted at least a century longer. There are probably two underlying causes for the transfer (in both revolt and revolution) from a circular movement to a rising. On the one hand there was the simple physical sense of the normal distribution of power as that of the high over the low. From the point of view of any established authority, a revolt is an attempt to turn over, to turn upside down, to make topsy-turvy, a normal political order: the low putting themselves against and in that sense above the high. This is still evident in Hobbes, Leviathan, II, 28: 'such as are they, that having been by their own act Subjects, deliberately revolting, deny the Soveraign Power' (1651). On the other hand, but eventually leading to the same emphasis, there was the important image of the Wheel of Fortune, through which so many of the movements of life and especially the most public movements were interpreted. In the simplest sense, men revolved, or more strictly were revolved, on Fortune's wheel, setting them now up, now down. In practice, in most uses, it was the downward movement, the fall, that was stressed. But in any case it was the reversal between up and down that was the main sense of the image: not so much the steady and continuous movement of a wheel as the particular isolation of a top and bottom point which were, as a matter of course, certain to change places. The crucial change in revolution was at least partly affected by this. As early as 1400 there was the eventually characteristic:

It is I, that am come down
Thurgh change and revolucioun. (Romance of the Rose, 1436)
A sense of revolution as alteration or change is certainly evident from the 15th century: 'of Elementys the Revoluciouns, Chaung of tymes and Complexiouns' (Lydgate, c. 1450). The association with fortune was explicit as late as the mid-17th century: 'whereby one may see, how great the revolutions of time and fortune are' (1663).

The political sense, already well established in revolt, began to come through in revolution from the early 17th century, but there was enough overlap with older ways of seeing change to make most early examples ambiguous. Cromwell made a revolution, but when he said that 'God's revolutions' were not to be attributed to mere human invention (Abbott, Writings and Speeches of Cromwell, III, 590-2) he was probably still using the word with an older sense (as in Fortune, but now Providential) of external and determining movements. Indeed the most fascinating aspect of this complex of words, in the 17th century, is that Cromwell's revolution was called, by its enemies, the Great Rebellion, while the relatively minor events of 1688 were called by their supporters the Great and eventually the Glorious Revolution. It is evident from several uses that revolution was gaining a political sense through the 17th century, though still, as has been noted, with overlap to general mutability or to the movements of Fortune or Providence. But it is very significant that in the late 17th century the lesser event attracted the description Revolution while the greater event was still Rebellion. Revolution, that is to say, was still the more generally favorable word, and from as late as 1796 we can find that distinction: 'Rebellion is the subversion of the laws, and Revolution is that of tyrants'. (Subversion, it will be noted, depends on the same physical image, of turning over from below; and cf. overthrow.) The main reason for the preference of revolution to rebellion was that the cyclical sense in the former implied a restoration or renovation of an earlier lawful authority, as distinct from action against authority without such justification.

From the late 17th century the sense of revolution in English was dominated by specific reference to the events of 1688. The ordinary reference (Steele, 1710; Burke, 1790) was to 'the Revolution', and revolutioner, the first noun for one engaged in or supporting revolution, was used primarily in that specific context. Yet a new general sense was slowly making its way through, and there was renewed cause for distinction between rebellion and revolution, according to point of view, in the rising and declaration of independence of the American states. Revolution won through in that case, both locally and generally. In a new climate of political thought, in which the adequacy of a political system rather than loyalty to a particular sovereign was more and more taken as the real issue, revolution came to be preferred to rebellion, by anyone who supported independent change. There is a surviving significance in this, in our own time. Rebellion is still ordinarily used by a dominant power and its friends, until (or even after) it has to admit that what has been taking place - with its own independent cause and loyalties - is a revolution, though also with an added sense of scale: 'Sire ... it is not a revolt, it is a revolution' (Carlyle, French Revolution, V. vii; 1837). (It is worth noting that revolt and revolting had acquired, from the mid-18th century, an application to feeling as well as to action: a feeling of disgust, of turning away, of revulsion; this probably accentuated the distinction. It is curious that revulsion is etymologically associated with revel, which itself goes back to rebellare, Latin - to rebel. Revel became specialized, through a sense of riotous mirth, to any lively festivity, rebel took its separate unfavorable course; revulsion, from a physical sense of drawing away, took on from the early 19th century its sense of drawing away in disgust.)

It was in this state of interaction between the words that the specific effects of the French Revolution made the modern sense of revolution decisive. The older sense of a restoration of lawful authority, though used in occasional justification, was overridden by the sense of necessary innovation of a new order, supported by the increasingly positive sense of progress. Of course the sense of achievement of the original rights of man was also relevant. This sense of making a new human order was always as important as that of overthrowing an old order. That, after all, was now the crucial distinction from rebellion or from what was eventually distinguished as a palace revolution (changing the leaders but not the forms of society). Yet in political controversy arising from the actual history of armed risings and conflicts, revolution took on a specialized meaning of violent overthrow, and by the late 19th century was being contrasted with evolution in its sense of a new social order brought about by peaceful and constitutional means. The sense of revolution as bringing about a wholly new social order was greatly strengthened by the socialist movement, and this led to some complexity in the distinction between revolutionary and evolutionary socialism. From one point of view the distinction was between violent overthrow of the old order and peaceful and constitutional change. From another point of view, which is at least equally valid, the distinction was between working for a wholly new social order (socialism as opposed to capitalism) and the more limited modification or reform of an existing order ('the pursuit of equality' within a 'mixed economy' or 'post-capitalist society'). The argument about means, which has often been used to specialize revolution, is also usually an argument about ends.

Revolution and revolutionary and revolutionize have of course also come to be used, outside political contexts, to indicate fundamental changes, or fundamentally new developments, in a very wide range of activities. It can seem curious to read of 'a revolution in shopping habits' or of the 'revolution in transport', and of course there are cases when this is simply the language of publicity, to describe some 'dynamic' new product. But in some ways this is at least no more strange than the association of revolution with violence, since one of the crucial senses of the word, early and late, restorative or innovative, had been simply important or fundamental change. Once the factory system and the new technology of the late 18th and early 19th centuries had been called, by analogy with the French Revolution, the Industrial Revolution, one basis for description of new institutions and new technologies as revolutionary had been laid. Variations in interpretation of the Industrial Revolution - from a new social system to simply new inventions - had their effect on this use. The transistor revolution might seem a loose or trivial phrase to someone who has taken the full weight of the sense of social revolution, and a technological or second industrial revolution might seem merely polemical or distracting descriptions. Yet the history of the word supports each kind of use. What is more significant, in a century of major revolutions, is the evident discrimination of application and tone, so that the storm-clouds that have gathered around the political sense become fresh and invigorating winds when they blow in almost any other direction.



self

1. A term which is linked to issues of subjectivity and identity, and which also has ramifications in a variety of discursive contexts (e.g. politics, liberalism, individualism, epistemology, ethics).

The notion of the self is invoked as soon as one asks a question like 'Who am I?’ At first glance, this might not seem very difficult to answer, and you might respond by just giving your name. But giving your name does not adequately answer the 'Who am I?' question if you also take it to mean 'What am I?' In general, philosophers have held that asking who you are necessarily also involves considering what you are. Here is one possible answer to this question: 'I am a mind and a body. I think and I also move about in the world as a material being.’ But answering in this way does not solve the problem, unless you are also able to say how such things as minds and bodies are related to one another. In turn, then, a consideration of the nature of the self usually entails a number of related questions; e.g. how is the mind connected to the body? (put another way: What is the relationship between mind and body?). Also, if one holds the view that each of us is a mind plus a body, another issue arises, namely, which came first?

A number of approaches to this issue are possible. Plato, in the Phaedo (c. 380 BC), argued that the soul (mind) and the body are distinct. Moreover, he held that the soul must have existed prior to the body. The essence of what each of us is resides in this contrast. The essential part of each of us (the mind/soul) never changes, because what is essential (and hence true) must by definition never change. In contrast, the realm of the material world changes. Here is Plato's argument, presented by Socrates in the Phaedo: (i) There are two sorts of existence: the seen (the physical world) and the unseen. (ii) The world of experience (the seen) is a realm of change, whereas the unseen is unchanging. (iii) We are made of two parts - body and soul. (iv) The body is akin to the seen, and therefore changing; and the soul is akin to the unseen, and therefore unchanging. (v) Of these two, the soul is akin to the divine (which is unchanging) - in short, we have an immortal soul. (vi) Therefore, the soul is indissoluble. From this, it follows that the self is the immortal part of each of us, and the body the mere vessel in which this essence is instantiated. Plato, following this chain of reasoning, held that what is essential about each of us endures after death (i.e. that the soul/mind is immortal).

Such a view can be contrasted with 18th-century philosopher David Hume's treatment of the matter in the Treatise of Human Nature (1739). According to Hume, whenever I speak of myself I always do so in the context of some particular thought or feeling. There is no self over and above thoughts and feelings which can be held to be independent of them. What 'I' am is a bundle of sensations; the self, therefore, is a product of a body's ability to have sensations, experiences, etc. Hence, on Hume's account, nothing about the self can be said to exist independently of such sensations: the self is mortal. The self is therefore something added to experiences; it is a fiction or an illusion. Put another way, the self is not an entity independent of the sensations a body is capable of feeling, but is produced by them. Thus, for Hume the self is a kind of interpretation of these sensations.

These two accounts, whatever their respective shortcomings, offer contrasting ontological views about the nature of the self. In making some claim about the nature of the self (i.e. what the self is), we are committed to some kind of ontology. This is the case even if, like Hume, we are tempted to deny that the self exists in any ontological sense: we are still making an ontological claim about the self on the basis of what we hold reality to be.

Important elements of Plato's view are by no means restricted to him. Many philosophical, religious and ethical attitudes and ideas contain within them the (albeit perhaps tacitly held) belief that mind and body are distinct from one another in kind. Likewise, with regard to knowledge, considered from both a philosophical point of view and from the vantage point of science, the question of the self is a significant one. This is because in talking of knowledge the question necessarily arises concerning who or what is it that has, or is the subject of, knowledge. For example, within the sciences, some notion of what an inquirer is must be presupposed.

The 17th-century philosopher René Descartes, in reply to the writings of contemporary skeptics who questioned whether we can have any certain knowledge, attempted to show that there is at least one certain piece of knowledge we are in possession of. Descartes starts by claiming that he has been struck by the large number of false beliefs he has accepted since being a child. He resolves to 'demolish' all his beliefs as a prologue to constructing the foundations of knowledge (this approach is often known as the 'skeptical method', since it proceeds from doubt). In order to do this, it is sufficient to bring into question all one's opinions, i.e. to show that they are not certain, rather than that they are false. However much one may doubt the veracity of one's beliefs, Descartes claims, one thing remains true: whatever happens I am still thinking: 'I must conclude that this proposition, I am, I exist, is necessarily true whenever it is put forward or conceived by me in my mind'. This is most famously expressed in the phrase 'I think, therefore I am' (cogito, ergo sum).

What is this 'I' that thinks? Descartes draws a distinction between (i) the mechanical structure of the human body and (ii) the activities which humans pursue: they walk about, eat, have perceptions through their senses, etc. These activities are, he claims, the actions of a soul or mind. The properties of a body are physical: it can be seen, touched, occupies a particular space, can be moved, etc. The 'power of self-movement,' however, is not a property we can attribute to a body. In line with the precepts of the skeptical method, the body can be doubted. But the self that thinks, Descartes argued, cannot be doubted. Thus, Descartes holds that he is a mind, 'not that structure of limbs which is called a human body' (a view termed 'mind-body dualism'). In other words, this standpoint contends that what is essential about humans is that they are thinking things, and that the property denoted by the term 'mind' is essentially different from that denoted by the term 'body'. This forms the basis for his view of knowledge: certain (i.e. true) knowledge derives from the 'I think', the self conceived as a mental essence. Amongst those who have criticized this approach was Nietzsche, who, in Beyond Good and Evil, pointed out that there was no necessary connection between thinking and the self; that is, we cannot show with complete certainty that it is the self which is the agency behind the activity of thinking. For Nietzsche, in contrast, the self is always to be comprehended as being situated within particular contexts and, indeed, as the product of human culture, rather than an ontological category which grounds the basis of experience and therefore knowledge.

With the 'linguistic turn' in philosophy during the 20th century, and also in the light of intellectual developments such as psychoanalysis, accounts have been offered of the self which address, for example, the question of its construction within the domain of language and discourse. For Jean-François Lyotard, for example, the notion of a self apart from language derives from an anthropocentric view of the nature of meaning which can be challenged. Selves, on this account, are not situated in a language-independent realm, nor are their attitudes, dispositions and intentions alone sufficient to secure an epistemological foundation for knowledge. Rather, such things as intentions, dispositions and interests are realized in and through language. Thus, Lyotard criticizes Wittgenstein's conception of 'language games' as being too limited. For instance, in drawing an analogy between language games and the game of chess, Wittgenstein, says Lyotard, remains trapped within a view of meaning which privileges a self which is independent of language: he presupposes that a 'player' moves a piece in a chess game, yet remains apart from the game. Equally, Jacques Derrida has argued that the meaning of such things as propositions is not simply a matter of the intentions of a speaker. For Derrida, although 'meaning has its place,' what is instrumental in the production of meaning is language and context. Likewise, the work of Michel Foucault, following Nietzsche, concentrated on reconceptualizing the notion of the self in terms of the relations between discourses of power.


sexuality

1. 'Sexuality' is probably the most misunderstood concept in Freudian psychoanalysis. It is commonly conflated with the term 'genital.' For Freud, sexuality functions as a superordinate term: the genital is merely one of the aspects of sexuality. The pansexualist criticism of psychoanalysis is based on the idea that Freud reduces everything to sexuality (i.e. the genital). In his Three Essays on the Theory of Sexuality (1905), Freud widens the ambit of sexuality to include infantile sexuality, polymorphous perversity, the function of symptoms (which represent the sexual life of the subject), and the sheer diversity and deviations that pertain to object choice. Sexuality cannot be reduced to instinctual behavior since the relationship between the drive and the object is arbitrary. Sexuality does not merely frame the phenomenology of the neurotic symptom but helps the psychoanalyst to understand its etiology as well. It was Freud's insistence on the sexual etiology of the neuroses that led to a parting of ways between him and his early followers, Alfred Adler and Carl Gustav Jung.

Sexuality in psychoanalysis is described through a developmental model where the infant progresses through different stages: the oral, the anal, the genital and the phallic. Contingent disturbances in any of these levels will determine the distributions of libido that structure the subject's life. Neuroses were initially understood as regressions to one of these levels of libidinal fixation. The regression is made necessary by the subject's inability to respond to the demands of 'reality.’ Sexuality is understood to manifest itself from the time of early infancy. The premature demands of the sexual drive are repressed in the so-called Oedipal phase and the child switches from an Imaginary identification with the mother to a Symbolic identification with the father. This is followed by a period of latency. At puberty, sexuality once again makes its exorbitant demands on the subject thereby leading to the revival of modes of behavior that constitute the libidinal matrix of childhood.

In subjects who fail to make a proper Oedipal crossover from the mother to the father, sexual impressions of early childhood take on a traumatic aspect during puberty, resulting in the return of the repressed. This leads to the production of neurotic symptoms, which constitute the sexual life of the subject. The typology of neuroses can also be classified along a model of stages. For example: in terms of fixation, hysteria is to orality what obsessionality is to anality. Lacan, however, called this model of biological stages into question without doing away with it completely. In the Lacanian model, though the infant must travel through these stages, there is nothing specifically biological about it. It is the fear of castration that mediates the subject's relation to any of these 'stages.’ Castration has a specifically symbolic dimension in Lacan. Symbolic castration is the radical disjunction between the subject and its object of desire, such that no object can exhaust the restlessness of the drive. The Oedipal drama is the symbolic realm where the subject is first alienated in its desire. Subsequently, the sexual drive can only seek an object in a complex imitation or distortion of the lost object. Sexuality therefore cannot be reduced to instinctual behavior; instead, it takes on a dialectical relationship with the absent, the forbidden, and finally, the impossible.

social contract

3. The doctrine that government should be for and by the people informs the constitution of all countries claiming to be democratic, even when the precept is not observed in practice. Democratic governments rest their claims to legitimacy and obedience on electoral consent, but the concept of consent itself derived originally from contract theory which discovered the origins of government in a primal act of consent, 'the social contract'. The foremost exponents of contract theory, Hobbes, Locke and Rousseau, did not believe that savages had literally congregated and agreed to set up governments; contract was, rather, a hypothetical device. Its purpose was to show that governments should be viewed as if they had been established by the people and evaluated according to whether they served the purpose of protection for which they were instituted. In Hobbes's case the theory had illiberal implications: almost any government, however bad, would serve to keep anarchy at bay. But for Locke, the people had the right to resist a government which failed to protect their lives and property. Whether the conclusions of contract theory were reactionary or revolutionary depended on its basic assumptions.

In Leviathan (1651), Hobbes, fresh from the horrors of civil war, imagined people in an anarchic state of nature, living in fear of sudden death. These people would eventually make a contract to guarantee peace, for their own protection. But since none would trust their fellows, they would then appoint a sovereign, independent of the contract, to enforce it and maintain order by all necessary means, including coercion. Because Hobbes sees authorization as a blank check, imposing no accountability on those in authority, his sovereign would have unqualified power over those who authorized him.

Locke's contract theory (1690) was developed partly in protest against Hobbes's absolutist conclusions, partly to vindicate the revolution of 1688, which replaced the Stuarts with a constitutional monarchy. His state of nature is peaceful and orderly; people follow natural, moral laws and cultivate land and acquire property. But the absence of laws to resolve disputes leads people to establish a government by agreement. In making the contract, individuals surrender their natural rights, receiving instead civil rights and protection. The government thus created has a limited, fiduciary role. Its duty is the preservation of 'life, liberty and estate', and if it reneges, the people have the right to overthrow it. Although Locke argued that the contract enforced consent to majority rule, his was not a theory of democracy but an argument for a balanced constitution with a people's legislature, an executive monarch and an independent judiciary. This innovatory constitutionalism was a far cry from Hobbes's axiom that sovereignty was necessarily indivisible. Locke's doctrine that post-contract generations must consent to government, either actively or tacitly, later gave rise to 'consent theory'.

Contractualism is developed in a different direction by Rousseau (1762), who argued that governments originally resulted from conspiracies of the rich to protect their property. But through an ideal social contract, individuals would freely consent to exchange their natural autonomy for a share in government. This could be achieved only by a direct, participatory democracy, which would be directed by the 'General Will'. The General Will is 'that which wills the common good', the decision which all citizens would accept if they laid aside personal interests. Dissenters from the General Will could be 'forced to be free', that is, compelled to obey laws for the public good which, if less self-interested, they would themselves freely have chosen. The General Will thus represents our 'better selves', but liberal theorists have often regarded it as a potential justification for authoritarianism or for totalitarian regimes claiming to act in the 'real interests' of the people (although undoubtedly this was not Rousseau's intention) and have therefore rejected Rousseau's contract theory.

Despite their differences, all three theories reflect the same desire to make the legitimacy of governments rest on the people's choice. The cultural environment which produced this desire was one of increasing individualism, secularization and legalism: the doctrine of individual free will dictated that no persons should be governed without their own consent, while the decline of the 'divine right of kings' dogma meant that a secular justification for political power was needed. The recourse to a contractual justification mirrored the growing reliance on contracts in the expanding commercial world, and a new, anti-feudal, legalistic attitude to public affairs.

The central fallacy of contract theories, as T. H. Green stated, is that they presuppose 'savage' people with notions of rights and legality which could be generated only within a society. More damning for critics such as Hume, Bentham and Paine was the fact that existing governments were blatantly based on coercion, not consent, and operated largely for the benefit of the governors. History too suggested that most governments had been established through conquest and force. Such criticisms explain why contract theory was later replaced by the more plausible idea of democratic consent. However, contractarianism was revived in Rawls's Theory of Justice (1971), which identified the principles of justice as those to which people would consent, if deliberating in a state-of-nature-like vacuum. Rawls's work, which vindicates a broadly liberal view of justice, illustrates again how the original assumptions, especially those concerning human nature, determine the form and the contents of a hypothetical social contract. Contract theory is not abstract speculation, but a political myth tailored to prove a point.

Post-Rawlsian debate has been informed by ‘rational choice theory'. In particular, the Prisoners' Dilemma model, which shows that it is rarely in the interests of individuals to co-operate, has been used to confront social contract theory. But Gauthier, Taylor and others have invoked rational choice techniques to illustrate the possibility of co-operation and a rolling social contract, based on self-interest, for the provision of public goods, including government.
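The dominance reasoning behind the Prisoners' Dilemma can be made concrete with a short sketch. The payoff numbers below are conventional illustrative values, not drawn from the source: whatever the other party does, defection yields the higher individual payoff, yet mutual defection leaves both worse off than mutual cooperation.

```python
# Prisoners' Dilemma sketch. payoff[(a, b)] gives (payoff to A, payoff to B)
# for moves a and b; higher is better. The numbers are illustrative assumptions.
payoff = {
    ("cooperate", "cooperate"): (3, 3),
    ("cooperate", "defect"):    (0, 5),
    ("defect",    "cooperate"): (5, 0),
    ("defect",    "defect"):    (1, 1),
}

def best_reply(opponent_move):
    """Return A's payoff-maximizing move against a fixed opponent move."""
    return max(["cooperate", "defect"],
               key=lambda my_move: payoff[(my_move, opponent_move)][0])

# Defection is a dominant strategy: it is the best reply whatever the other
# party does...
for other in ("cooperate", "defect"):
    assert best_reply(other) == "defect"

# ...yet mutual defection (1, 1) is worse for both than mutual cooperation (3, 3),
# which is why rational self-interest alone rarely sustains co-operation.
assert payoff[("defect", "defect")] < payoff[("cooperate", "cooperate")]
```

The Gauthier-style rejoinder mentioned above amounts to changing this game's structure, for example by repeating it, so that co-operation can become individually rational.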

Despite the logical and empirical shortcomings of contract theory, it deserves serious attention because of its relation to central political ideas such as the will of the people, legitimacy and political obligation. All these have been employed manipulatively, often by regimes which have no basis in the people's choice. To avoid such ideological maneuvers and sleights of hand, we need to reject the rhetorical invocation of implicit, tacit or imaginary social contracts and to develop a doctrine of meaningful and participatory choice and consent.

social control

3. The concept of social control is widely and variously used in the social sciences. In sociology and anthropology it is used as a generic term to describe the processes which help produce and maintain orderly social life. In the specialist field of criminology, it usually carries a narrower meaning, referring to the administration of deviance by criminal justice, social welfare and mental health agencies. Sometimes the term takes on a critical sense, for example in social history and women's studies, where the notion of social control has been used to point to the subtle constraints and forms of domination present in institutions such as the family or the welfare state.

The breadth and imprecision of the social control concept has meant that it has tended to work as an orienting device for thinkers and researchers, rather than as an explanatory tool of any refinement. Sociologists in the early 20th century developed the concept to explore the problem of social order in the industrialized, urbanized societies then emerging. Criminologists in the 1960s used the term to redirect attention away from an exclusive focus upon the individual criminal and to stress the role which social rules and reactions play in the process of criminalizing particular behaviors and persons. Social historians in the 1970s employed the notion of social control as a means of subverting and revising orthodox accounts of social reform which had tended to overlook the hidden class-control aspects of many reform programs.

However, once such reorientations have been achieved, the general concept of social control often ceases to be useful and gives way to more specific questions about the different forms, objectives, supports and effects of the control practices under scrutiny. Like many sociological concepts, social control is a subject of continuing contestation, either by those who deny the appropriateness of this approach to particular phenomena, or else by those who find the term insufficiently precise to do the analytical and critical work required. That the concept is also and inevitably tied into political debates - either as part of a conservative quest for social order, or else in support of a radical critique of social institutions - serves to deepen the controversy surrounding its use. So too does the semantic proximity of the term to other concepts such as socialization, regulation, domination, power and culture. Social scientists who use this concept are obliged to define it for their own purposes or else invite misunderstanding.

Given this conceptual state of affairs, the most illuminating way of understanding the term is to summarize its intellectual history rather than adopt a purely analytical exposition. However, one should bear in mind that contemporary usage still draws upon all of these past conceptions, and often reinvents under a different name many of the ideas and distinctions which earlier writers first established.

The classical social theorists of the 19th century - Comte, Marx, Durkheim, Weber, Simmel, etc. - did not employ the term social control, although their work certainly dealt with the issues of social self-regulation, enforcement of norms, and class domination which social control theorists were later to address. Instead, the concept was first developed by the sociologists of early-20th-century USA, particularly by E. A. Ross and W. G. Sumner, who sought to identify the myriad ways in which the group exerts its influence upon the conduct of the individual. Ross's (1901) Social Control took as its starting-point the shift from small-scale, agrarian, face-to-face communities to dense, industrialized urban societies, and argued that this shift entailed a qualitative transformation in the bonds which made social order possible. Whereas the earlier Gemeinschaft communities had been held together by what Ross regarded as a 'living tissue' of natural controls such as sympathy, sociability and a shared sense of justice, social control in the newer Gesellschaft societies was a matter of 'rivets and screws' which had to be consciously created and maintained if order was to be achieved within these complex and conflictual social settings. Ross's work catalogued and anatomized these foundations of order, dealing in turn with public opinion, law, belief, education, religion, art and social ceremony. In much the same way, Sumner's (1906) Folkways described how usages, manners, customs and morals provided the basic underpinning of social regulation, upon which the more formal system of law was built.

Underlying this early work is a Hobbesian conception of individual human nature as disruptive and anti-social, to be reined in by the imposition of group controls and sanctions - as well as an anxiety that deliberate social control was more than ever necessary in burgeoning metropolises such as Chicago and New York, with their masses of newly arrived immigrants and diverse ethnic groups. Later work, by writers such as G. H. Mead (1925) and C. H. Cooley (1920), developed a different conception of the individual's relation to society in which social control was accomplished not by suppressing individuality but rather in its very creation. For these writers, the self emerges through a process of interaction with others, out of which develops the individual's capacity to take the point of view of other people. To the extent that this socialization process is successful, individuals are constituted as social selves, internalizing the norms of the wider society ('the generalized other'), and adapting their conduct to the various roles demanded by social interaction. Thus the social control which permits coordination and integration operates in and through the individual, rather than over against individuality, and the internal self-control of the individual is as much a social matter as the rules and regulations which constrain individuals from the outside. (In somewhat different language, and from somewhat different premises, Durkheim, Freud and Piaget had come to a similar conclusion: see Coser. See also Elias's (1939) The Civilizing Process, which sets out a sociohistorical account of how social controls and self-controls change over time.)

For Chicago sociologists such as Park and Burgess, 'all social problems turn out finally to be problems of social control'. By this they meant that the practical problems of administering social institutions and governing populations, as well as the theoretical problems of understanding the dynamics of social organizations, turn on the ability of the analyst or policy maker to comprehend the mechanisms which permit groups to regulate their own activities without recourse to violence and in accordance with specified moral ideals. In this conception, social control is contrasted with coercive control (and indeed, with minority or class domination) and is viewed as an integral part of purposive social planning and the rationalization of social institutions. Moreover, it can take both positive and negative forms, operating to elicit and evoke action in individuals through rewards, persuasion or education, or else to restrain and repress by means of gossip, satire, laughter and criticism. For the early American Pragmatists and Progressives - and, indeed, for a few later sociologists - social control was thus an ideal to strive after. (One version of this ideal was the Welfare State, which utilized controls such as Keynesian demand management, insurance-based protections, personal social services, and processes of democratic opinion formation, in an attempt to govern in the interests of a positive ethic of solidarity and security.) By the 1970s, many writers had come to regard the history of 20th-century rationalization and social engineering in a much more negative light, with the consequence that social control came to be regarded by some as a regrettable feature of social organization and a subject for critical attack.

In anthropology, the issue of social control was first explicitly addressed as part of a controversy about the social organization of 'primitive' societies. In his account of Crime and Custom in Savage Society (1926) Malinowski attacked the then orthodox view that small-scale, pre-industrial societies were held together by virtue of the 'spontaneous obedience' of group members, whose individuality was stifled by the operation of a harshly conformist conscience collective. Against this view, Malinowski argued that social control was embedded in the reciprocity of routine social relations, and was supported by the self-interest of each individual in maintaining his or her place in the system of exchange and reputation, as well as by the ceremonial enactment of social obligations, which ensured that individual deviance was made visible to others. Social control involved not only the individual internalization of social norms, but also the active pursuit of self-interest, and occasionally the operation of coercive sanctions. As Etzioni later pointed out, social organizations can be distinguished in terms of the three modes of control which Malinowski identifies. Thus prisons, mental hospitals and concentration camps are founded primarily (though not exclusively) on coercive controls; organizations such as the workplace depend upon the utilitarian controls of remuneration; while others - such as religious, voluntary and political associations - maintain their cohesion primarily through the normative commitment of their members.

In the 1950s Parsons defined the theory of social control as 'the analysis of those processes in the social system which tend to counteract ... deviant tendencies'. Social control thus came to refer to a sub-system or, at most, a special remedial aspect of social relations, rather than the patterning effect of social relations as such. This narrower usage has tended to prevail in the specialist field of criminology, where it has spawned a large literature analyzing the effects - especially the unintended effects - of the actions of 'agencies of social control' such as the police, prisons, psychiatrists, etc. (However, Hirschi's influential control theory of crime utilizes the term in its older sociological sense to argue that the key variable in explaining offending is the strength of the social bonds which tie individuals into patterns of conformity.) In the 1960s Lemert and Becker transformed that field by arguing that 'social control leads to deviance' rather than the reverse, and by stimulating research on the ways in which the 'labeling' and 'stigmatizing' of deviant conduct by officials tends to reinforce and amplify deviant identities and behavior.

In the 1970s, this critical attitude towards the practices of social control came to take a more radical form, partly under the influence of Marxist theories of the state, and partly as a result of the new social history which argued that the emergence of modern institutions such as the reformatory prison, the asylum, and the social welfare system ought to be seen not as humane and progressive reforms but instead as strategic measures to consolidate the subordination and control of the lower classes. In the 1980s there emerged a new specialism, the sociology of social control, focused upon the developmental forms and functioning of the 'control apparatus.' An influential thesis which has emerged from this work - and which resonates with the dystopian themes of Orwell’s (1949) Nineteen Eighty-Four and Huxley's (1932) Brave New World - asserts that, since the 1960s, there has been an 'increasing expansion, widening, dispersal, and invisibility' of the 'net of social control.’ The assumption underlying this thesis is that modern society is increasingly governed by reference to expert knowledge, classification systems, and professional specialists in the administration of deviance. Styles of social control may change (e.g. the shift from reliance upon closed institutions to greater use of community-based provision for mentally ill people), as may the ideologies which gloss these practices (e.g. the decline of 'rehabilitative' philosophies in the penal system), but the build up of the control apparatus is viewed as a secular trend.

Critics of the concept object to its tendency to imply that diverse strategies and practices of governance - operating in different social sites, involving different personnel, different techniques and different programmatic objectives – can somehow be said to share a common source (usually the state or the ruling class) and a common purpose (integration or domination). To lump all of these together as social control is to impart a spurious unity to a variety of problems and practices which may have little in common. Others object to the implication that social controls are imposed upon the subordinate classes, rather than negotiated or invited by the groups concerned, and to the frequently made assumption that the social control objectives implicit in a reform program are automatically realized just because laws are passed or agencies set up. In this respect, the alleged 'social control' effects of reform measures often function as a post-hoc explanation for a theoretically predicted event (usually a revolution) which failed to occur. Like its more Marxist cognates 'ideology' and 'hegemony', social control can operate as a device to protect a theory from falsifying evidence rather than as a tool for exploring the social world.

Social Darwinism

1. An appropriation of the evolutionary principles outlined in Charles Darwin's Origin of Species (1859), Social Darwinism was first propounded by Herbert Spencer (1820-1903). Spencer's theories in fact pre-date the publication of Darwin's own text; he first drew on contemporary science as a means of justifying his hypotheses, and later used Darwin's work in order to validate the authority of his own. Spencer's project aimed to integrate different disciplines (e.g. the then developing discipline of sociology, and the methods and theories of the physical sciences) within an evolutionary account of human society. Thus, whereas Darwin's model of evolution is concerned with physical fact (the realm of nature), Spencer's conception of evolution may be characterized by way of its claim to be a science of society.

Spencer holds that the evolutionary process is one in which there is a spontaneous 'change from incoherent homogeneity (i.e. unity) to a coherent heterogeneity (i.e. diversity), accompanying the dissipation of motion and integration of matter' (Structure, Function and Evolution). From this premise he constructs a depoliticized model of society which is both naturalized and ahistorical: left alone, society will regulate itself according to the principle of the 'survival of the fittest,’ which is driven by this movement towards increasing coherence and diversity. His view is highly conservative in its implications: hierarchical stability is considered by Spencer to be essential to the 'coherence' (i.e. stability) of social structure. Hence, any outbreak of social disorder which threatens hierarchy is conceived of as a negative force, akin to illness in the human body. Both are disorganizing regressions and obstruct the evolutionary process by causing heterogeneity without coherence. Defining the production of disease in the human body, Spencer comments on how, in successive stages, ‘lines of organisation, once so precise, disappear’ and parallels this description with social disorder, which is a 'loosing of those ties by which citizens are bound up into distinct classes and subclasses.’ The 'survival of the fittest' is thereby taken as being fought out in the form of an economic and social struggle for existence, and the only justifiable attitude to matters of social organization is one which lets the forces of progressive evolution take their course. Spencer held his view to be the most 'scientific' of philosophical theses because it could, he thought, be tested empirically. The theory attracted a number of adherents associated with fascism, most notably Nazi leader Adolf Hitler.

Leaving the question of the history of authoritarianism to one side, there are, of course, a number of objections to Spencer's evolutionary theory. For instance, although the 'survival of the fittest' principle may apply to nature, it is not clear that it is applicable in the same way, if at all, to the sphere of human culture. Also, recent empirical work in paleontology has suggested that natural forms do not necessarily develop from states of relative 'simplicity' and homogeneity to states of 'complexity' and heterogeneous diversity. (See Stephen Jay Gould's account of the enormous diversity of forms of life found in the Burgess Shale - a deposit of sedimentary rock around 530 million years old. Gould argues that, on this evidence, evolutionary history cannot be thought of as a straightforward progression towards increasing diversity and heterogeneity, since the forms of animal design existing today represent a reduction of the number of designs found in the older Burgess Shale. Interestingly, Gould holds that seeking to explain why certain forms survived and others became extinct only in terms of which had 'better' body-designs will not work, since it is not always possible to see what perceptible advantages one Burgess form may have had over another. He argues that chance may play a key role in the process of evolution: run the clock of history again from the same starting point and it might turn out differently – i.e. there might have been no humans).

3. Social Darwinism refers loosely to various late 19th-century applications (mostly misapplications) to human societies of ideas of biological evolution associated (often erroneously) with Darwin. Though often associated with conservatism, laissez-faire capitalism, fascism and racism, Social Darwinism was, in fact, a pervasive doctrine of the late 19th and early 20th centuries, especially in Britain and North America, and its influence covered the entire political spectrum, including, for example, British Fabian socialism.

Its two leading intellectual proponents were the British philosopher Herbert Spencer (who coined the phrase 'survival of the fittest') and William Graham Sumner, a professor of anthropology at Yale University. To Spencer, we owe the misleading analogy that a society is like an organism (hence, the term 'organicism' sometimes used to describe his theories). Just as an organism is composed of interdependent organs and cells, a human society is made up of specialized and complementary institutions and individuals, all belonging to an organic whole.

Spencer himself was never very clear about his analogy: he claimed that society was both 'like an organism' and that it was a 'super-organism'. His central notion, however, was that the whole (organism-society) was made up of functionally specialized, complementary and interdependent parts. Thus, he is also considered to be one of the main fathers of sociological functionalism. Sumner's concept of 'mores' (his term, by the way) and his turgid disquisitions on morality are his most lasting contributions. What his writings have to do with Darwinism is questionable. 'Bad mores are those which are not well fitted to the conditions and needs of the society at the time.... The taboos constitute morality or a moral system which, in higher civilization restrains passion and appetite, and curbs the will.’ Sumner uses terms such as 'evolution' and 'fitness' to be sure, but his moralistic pronouncements and his repeated emphasis on the 'needs of the society' are the very antithesis of Darwin's thinking. Spencer was also prone to inject ethics into evolution, seeing an 'inherent tendency of things towards good'. Darwin saw evolution as a random process devoid of ethical goals or trends, and natural selection as a blind mechanism discriminating between individual organisms on the basis of their differential reproductive success.

Another central theme in Sumner is that 'stateways cannot change folkways', meaning that state action is powerless to change the underlying mores. This certainly made him an apostle of laissez-faire. Indeed, he went so far as to contradict himself and suggest that state intervention is worse than useless: it is noxious. These propositions probably form the core of the doctrine associated with Social Darwinism, namely, that the existing social order with its inequalities reflects a natural process of evolution in which the 'fitter' rise to the top and the 'unfit' sink to the bottom. Any attempt, through social welfare, for example, to reduce inequalities is seen as noxious because it allows the unfit to 'breed like rabbits'. Indeed Spencer, as a good Victorian puritan, believed that intelligence and reproduction were inversely related. Overproduction of sperm, he thought, leads first to headaches, then to stupidity, then to imbecility, 'ending occasionally in insanity.'

Again, these ideas are quite antithetical to those of Darwinian evolutionary theory. If the lower classes reproduce faster than the upper classes, it means they are fitter, since, in evolutionary theory, the ultimate measure of fitness is reproductive success. To say that the unfit breed like rabbits is a contradiction in terms. Social Darwinism, in short, is a discredited moral philosophy that bears only a superficial terminological resemblance to the Darwinian theory of evolution, and is only of historical interest. See also race.

social democracy

3. Social democracy is a party-based political movement, inspired by socialism, which has held different meanings in different times and places. It has remained a substantially European movement, with Australia, New Zealand and Israel as the main exceptions. Not all social democratic parties have adopted the name, the most common alternatives being Socialist Party (France) and Labour Party (The Netherlands, Britain and Norway).

Three phases can be distinguished: from 1875 when the German Social Democratic Party (SPD) was founded to 1914; between the World Wars; and the period since 1945.

The first period, one of expansion and consolidation of the movement, coincided with the industrialization of Europe and the formation of a large proletariat. Social democrats (or socialists - the terms were then interchangeable) were then members of centralized and nationally based parties, loosely organized under the banner of the Second International (1889), supporting a form of Marxism popularized by Friedrich Engels, August Bebel and Karl Kautsky. This held that capitalist relations of production would dominate the whole of society, eliminating small producers until only two antagonistic classes, capitalists and workers, would face each other. A major economic crisis would eventually open the way to socialism and common ownership of the means of production. Meanwhile, social democrats, in alliance with trade unions, would fight for democratic goals such as universal suffrage and a welfare state and traditional workers' demands such as a shorter working day. It was assumed that in democratic countries power could be achieved peacefully, though participation in bourgeois governments was ruled out. Some (e.g. Rosa Luxemburg) proposed the mass general strike as the best revolutionary weapon. The SPD provided the main organizational and ideological model though its influence was less pronounced in southern Europe. In Britain the powerful trade unions tried to influence the Liberals rather than forming a separate working-class party. Not until 1900 was a Labour Party created. It was never Marxist and adopted a socialist program only in 1918.

The First World War and the October Revolution divided the movement, setting the supporters of Lenin's Bolsheviks against the reformist social democrats, most of whom had backed their national governments during the war. Thereafter the rivalry of social democrats and communists was interrupted only occasionally, as in the mid-1930s, in order to unite against fascism. Between the wars socialists and social democrats formed governments in various countries including Britain, Belgium and Germany. In Sweden, where social democrats have been more successful than elsewhere, they governed uninterruptedly from 1932 to 1976.

After 1945, while in northern Europe socialists called themselves social democrats, in other countries, notably in Britain, France and Italy, social democrat was the label given to right-wing socialists who sided with the USA in the Cold War, objected to extensive nationalization, and were hostile to Marxism. In practice the differences between various tendencies were never as significant as internal doctrinal debates suggest. Eventually all social democratic parties discarded Marxism, accepted the mixed economy, loosened their links with the trade unions and abandoned the idea of an ever-expanding nationalized sector. In the 1970s and 1980s they adopted some of the concerns of middle-class radicals, such as feminism and ecologism. Socialism was no longer a goal at the end of a process of social transformation but the process itself, in which power and wealth would gradually be redistributed according to principles of social justice and equality - a position advocated by Eduard Bernstein in the 1890s. The success of capitalism in the 1950s favored such revisionism, enshrined in the SPD at its Bad Godesberg Conference in 1959 and popularized in Britain by Anthony Crosland. All social democrats assumed that continuous economic growth would sustain a thriving public sector, assure full employment and fund a burgeoning welfare state. These assumptions corresponded so closely to the actual development of European societies that the period 1945-73 has sometimes been referred to as the era of 'social democratic consensus'. Significantly, it coincided with the 'golden age' of capitalism. Since 1973, left-wing parties have obtained power where they had never held it before, in Portugal, Spain, Greece and France (except briefly in 1936-7). But social democracy faced a major crisis of ideas.
Contributing factors were the end of continuous economic growth, massive unemployment (which made the welfare state more difficult to fund), global economic interdependence (which made national macroeconomic management inadequate), popular resistance against high levels of taxation, a sharp fall in the size of the factory-based proletariat, and the challenge of neo-liberal and anti-statist ideas of deregulation and privatization. The collapse of communism and the transformation of communist parties into social democratic ones provided little comfort for social democrats when the ideology of the free market appeared stronger than ever before.


socialism

1. A political creed whose origins are normally traced back to the mid-19th century. There have been many types of socialist (e.g. utopian socialists, Fabian socialists, Guild socialists) but they share in common an adherence to particular principles with regard to how human society should be organized. In contrast to liberalism, which advocates the primacy of the individual's liberty and rights, socialists have traditionally placed emphasis upon the importance of equality as a political principle. This is, in turn, expressed in terms of the importance of economic relationships within society. Socialists are particularly opposed to the individualism of liberal capitalist society, holding that a desirable form of social order (which would be based upon mutuality, co-operation and shared public ownership of the means of production) is not possible as long as human relationships are dominated by the self-interested and antagonistic principles which underlie civil society. In contrast to liberals, therefore, socialists see justice as a matter of how society is ordered with regard to the distribution of goods within it, not in terms of the guardianship of freedoms which enable individuals to pursue their own purposes. Socialism thus has much in common with communism with regard to its holding that the most desirable form of social organization embodies principles of egalitarianism. However, whereas advocates of communism traditionally adhere to a theoretical perspective derived from Marx's claim that his analysis of the development of capitalist societies attains a scientific status, which in turn holds that proletarian revolution is the inevitable outcome of class antagonisms, socialism has tended to be more pragmatic and less confrontational in its approach. Socialists have, for example, emphasized the importance of democratic procedures within the political process.

Socialism, like communism, can be defined as an internationalism. This tendency can be traced back to the organization (by Marx) of the First Socialist International in London in 1864, although the spirit of unity embodied by this event gave way to fragmentation into opposed groups (socialists, communists, anarchists) by the time of the Second International, nearly a quarter of a century later. The international aspect of socialism can be seen in its adherence to values associated with humanism, for instance, the notion of a universal conception of value with regard to such things as the establishment of supranational mutuality, shared norms of justice and human rights between different nations.

social mobility

1. Social mobility refers to the movement of individuals between hierarchical social groups, most typically between classes. Study of social mobility is an important complement to studies of social stratification, because a hierarchical society may not be considered undesirable if there is free movement between the different levels of the hierarchy. Such free movement would suggest that the ruling class or elite may not be a closed and self-serving group, and similarly, a person born in the lower strata of society is not condemned to a life of relative powerlessness and low income. However, Marxist approaches to social mobility have long suggested that the ruling class will recruit the most able members of the lower classes into its ranks in order to prevent them becoming effective agents of revolution.

social stratification

1. The differentiation of society into separate groupings becomes social stratification when these groupings can be seen as forming a hierarchy. Traditionally, in sociology, three major types of strata have been recognized. In a caste system, different strata are characterized in terms of ethnic purity, with no movement between castes (so that a person lives his or her entire life within the caste into which he or she is born). In an estate system, typical of feudal societies, again there is little or no mobility between strata. The estates are defined through land ownership (on the part of the dominant stratum) and bondage. In industrial societies, stratification is in terms of class, with classes understood as economically defined. Class hierarchies formally allow for social mobility (although the actual amount of mobility, and thus the real opportunities to leave the class of one's birth, may be restricted through unequal access to economic and cultural resources, such as education). Disputes continue, firstly over the relevant criteria for defining class. In the Marxist tradition, two major classes are identified and distinguished in terms of ownership and control of the means of production. (In Marxism, estates and castes are subsumed within the concept and theory of class, being understood as different forms that class and exploitation take in different historical epochs.) In other sociological traditions, defining class in terms of occupation allows for a more subtle and comprehensive account of social stratification. However, it is not clear that other hierarchies, such as power, material reward and status, necessarily map onto class hierarchies in any simple manner. (Thus, as Max Weber noted, the nouveaux riches may have the income and wealth typical of the highest class, yet they will not have the status or respect that traditionally attends old money.)
Further, a predominantly economic analysis of social stratification can fail to recognize the significance of other hierarchical social groupings, such as gender and ethnicity.

social structure

1. While social structure is one of the most widely used concepts in sociology and social theory, and indicates some regular and stable patterning of social action and social institutions, its precise meaning is not easily determined. While 'structure' itself may be defined as the organizing relationships between parts in a whole, and in social structure the whole is society (albeit that 'society' is by no means an unproblematic term), the parts or elements of the whole may be variously understood. In the organic analogy, whereby society is compared to an organism, the elements are institutions which perform functions necessary to the survival and stability of the whole. Thus, in functionalism, social structure may be understood as a set of relationships between institutions. Conversely, the elements may be understood as roles, or as variously defined (or self-defining) groups within the society. The validity of the concept may however be challenged. While critical theorists, such as the first generation members of the Frankfurt School, tend to adopt the concept very much in the sense in which it is used in functionalism, they do not treat it as a value-neutral description of society. That society is structured, and that these structures can confront the individual as natural forces, constraining and determining their action, is taken to be indicative of a reified and thus false society. Conversely, the existence of social structures is denied altogether by certain micro-social theories, such as ethnomethodology. This is to reject the idea of any social entity existing independently of, or prior to, individuals' mundane competence to produce that entity (through common acknowledgement of each other's skills and practices) in social interaction.
Thus, while particular interactions may be structured, in the sense of being ordered and meaningful events, this order is produced, spontaneously and co-operatively by the agents involved, and is not determined by some independent mechanism.


society

1. In its modern sense, an arrangement of institutions, modes of relationship, forms of organization, norms, etc. constituting an interrelated whole within which a group of humans live. That said, there is no simple definition which will fit all theories with equal ease. How one understands the term usually depends upon how one conceives of the distinction between the individual and society. Traditional liberalism (e.g. Locke, Mill, Rawls) conceives of society as a collection of free agents whose properties and characteristics are constituted independently of the modes of relationship which operate within any particular context. Thus, society is not coterminous with the individual, and the institutions which ground social relations are independent of the individual's identity. Communitarian critics have argued against the liberal view, asserting that there is a necessary link between being a social entity and any conception of the self we might have. Marxists traditionally view society in terms of the history of economic and institutional relationships (the economic base and ideological superstructure) which have exerted a determining effect on class interests and differences, and would likewise oppose the liberal conception.

Writers associated with fascism also attacked the view that individuals could be contrasted with society. On the fascist model, the assertion of a fundamental division between individual and society embodies a 'mechanistic' attitude, which is opposed to the unity of collectivity and tradition which underlies the organic whole that makes up human relationships. Equally, conservatives (e.g. Edmund Burke) have often discussed society in terms of the organic unity of its traditions and, in contrast to the liberal conception of it as an aggregate of individuals, have used this to argue that the life of society must be preserved by way of safeguarding these traditions.

Within sociology (the science which studies society) a similar contrast is detectable. Thus, functionalism shares in common with the conservative viewpoint an adherence to the organic model; while the Weberian and interactionist approaches tend to view society in terms of the abilities of individuals to make sense of their social environment and react to it in an independent way.

2. Society is now clear in two main senses: as our most general term for the body of institutions and relationships within which a relatively large group of people live, and as our most abstract term for the condition in which such institutions and relationships are formed. The interest of the word is partly in the often difficult relationship between the generalization and the abstraction. It is mainly in the historical development which allows us to say 'institutions and relationships', and we can best realize this when we remember that the primary meaning of society was companionship or fellowship.

Society came into English in the 14th century from Old French société, Latin societas, root word Latin socius - companion. Its uses to the mid-16th century ranged from active unity in fellowship, as in the Peasants' Revolt of 1381, through a sense of general relationship - 'they have neede one of anothers helpe, and thereby love and societie ... growe among all men the more' (1581) - to a simpler sense of companionship or company - 'your society' (late 16th century). An example from 1563, 'society between Christ and us', shows how readily these distinguishable senses might in practice overlap. The tendency towards the general and abstract sense thus seems inherent, but until the late 18th century the other more active and immediate senses were common. The same range can be seen in two examples from Shakespeare. In 'my Riots past, my wilde Societies' (Merry Wives of Windsor, III, iv) society was virtually equivalent to relationship or to one of our senses of association, whereas in 'our Selfe will mingle with Society' (Macbeth, III, iv) the sense is simply that of an assembled company of guests. The sense of a deliberate association for some purpose (here of social distinction) can be illustrated by the 'societe of saynct George' (the Order of the Garter, 15th century), and over a very wide range this particular use has persisted.

The general sense can be seen as strengthening from the mid-16th century. It was intermediate in 'the yearth untilled, societie neglected' (1533) but clear though still not separate in 'a common wealth is called a society or common doing of a multitude of free men' (1577). It was clear and separate in 'societie is an assemblie and consent of many in one' (1599), and in the 17th century such uses began to multiply, and with a firmer reference: 'a due reverence ... towards Society wherein we live' (1650). Yet the earlier history was still evident in 'the Laws of Society and Civil Conversation' (Charles I, 1642; conversation, here, had its earliest sense of mode of living, before the additional (16th-century) sense of familiar discourse; the same experience was working in this word, but with an eventually opposite specialization). The abstract sense also strengthened: 'the good of Humane Society' (Cudworth, 1678) and 'to the benefit of society' (1749). In one way the abstraction was made more complete by the development of the notion of 'a society', in the broadest sense. This depended on a new sense of relativism (cf. culture) but, in its transition from the notion of the general laws of fellowship or association to a notion of specific laws forming a specific society, it prepared the way for the modern notion, in which the laws of society are not so much laws for getting on with other people but more abstract and more impersonal laws which determine social institutions.

The transition was very complex, but can now be best seen by considering society with state. State had developed, from its most general and continuing sense of condition (state of nature, state of siege, from the 13th century), a specialized sense which was virtually interchangeable with estate (both state and estate were from Old French estat, Latin status - condition) and in effect with rank: 'noble stat' (1290). The word was particularly associated with monarchy and nobility, that is to say with a hierarchical ordering of society: cf. 'state of prestis, and state of knyghtis, and the thridd is staat of comunys' (1300). The States or Estates were an institutional definition of power from the 14th century, while state as the dignity of the king was common in the 16th and early 17th centuries: 'state and honour' (1544); 'goes with great state' (1616); 'to the King ... your Crowne and State' (Bacon, 1605). From these combined uses state developed a conscious political sense: 'ruler of the state' (1538); 'the State of Venice' (1680). But state still often meant the association of a particular kind of sovereignty with a particular kind of rank. Statist was a common term for politician in the 17th century, but through the political conflicts of that century a fundamental conflict came to be expressed in what was eventually a distinction between society and state: the former an association of free men, drawing on all the early active senses; the latter an organization of power, drawing on the senses of hierarchy and majesty. The crucial notion of civil society (see civilization) was an alternative definition of social order, and it was in thinking through the general questions of this new order that society was confirmed in its most general and eventually abstract senses.
Through many subsequent political changes this kind of distinction has persisted: society is that to which we all belong, even if it is also very general and impersonal; the state is the apparatus of power.

The decisive transition of society towards its most general and abstract sense (still, by definition, a different thing from state) was an 18th-century development. I have been through Hume's Inquiry Concerning the Principles of Morals (1751) for uses of the word, and taking 'company of his fellows' as sense (i) and 'system of common life' as sense (ii) found: sense (i), 25; sense (ii), 110; but also, at some critical points in the argument, where the sense of society can be decisive, sixteen essentially intermediate uses. Hume also, as it happens, illustrates the necessary distinction as society was losing its most active and immediate sense; he used, as we still would, the alternative company:

As the mutual shocks in society, and the oppositions of interest and self-love, have constrained mankind to establish the laws of justice ... in like manner, the eternal contrarieties, in company, of men's pride and self-conceit, have introduced the rules of Good Manners or Politeness ... (Inquiry, VIII, 211)

At the same time, in the same book, he used society for company in just this immediate sense, where we now, wishing for some purposes to revive the old sense, would speak of 'face-to-face' relationships; usually, we would add, within a community.

By the late 18th century society as a system of common life was predominant: 'every society has more to apprehend from its needy members than from the rich' (1770); 'two different schemes or systems of morality' are current at the same time in 'every society where the distinction of rank [see class] has once been established' (Adam Smith, Wealth of Nations, II, 378-9; 1776). The subsequent development of both general and abstract senses was direct.

A related development can be seen in social, which in the 17th century could mean either associated or sociable, though it was also used as a synonym for 'civil', as in social war. By the late 18th century it was mainly general and abstract: 'man is a Social creature; that is, a single man, or family, cannot subsist, or not well, alone out of all Society ...' (though note that Society here, with the qualification all, is still active rather than abstract). By the 19th century society can be seen clearly enough as an object to allow such formations as social reformer (although social was also used, and is still used, to describe personal company; cf. social life and social evening). At the same time, in seeing society as an object (the objective sum of our relationships) it was possible, in new ways, to define the relationship of man and society or the individual and society as a problem. These formations measure the distance from the early sense of active fellowship. The problems they indicate, in the actual development of society, were significantly illustrated in the use of the word social, in the early 19th century, to contrast an idea of society as mutual co-operation with an experience of society (the social system) as individual competition. These alternative definitions of society could not have occurred if the most general and abstract sense had not, by this period, been firm. It was from this emphasis of social, in a positive rather than a neutral sense, and in distinction from individual, that the political term socialist was to develop. An alternative adjective, societal, was used in ethnology from the early 20th century, and has now a broader, more neutral reference to general social formations and institutions. One small specialized use of society requires notice if not comment.
An early sense of good society in the sense of good company was specialized, by the norms of such people, to Society as the most distinguished and fashionable part of society: the upper class. Byron (Don Juan, XIII, 95) provides a good example of this mainly 19th-century (and residual) sense:

Society is now one polish'd horde

Formed of two mighty tribes, the Bores and Bored.

It is ironic that this special term is the last clear use of society as the active companionship of one's (class) fellows. Elsewhere such feelings were moving, for good historical reasons, to community, and to the still active senses of social. See class, individual.


sociobiology

3. Sociobiology is used in both a wide and a narrow sense. As Ernst Mayr put it, 'Sociobiology, broadly speaking, deals with the social behavior of organisms in the light of evolution.' This definition would include ethology, biopolitics, primatology, behavioral zoology, eugenics, population genetics, biosocial anthropology, evolutionary ecology, and all the disciplines that accept the neo-Darwinian mandate. In the narrow sense, following the coinage of E. O. Wilson in his Sociobiology: The New Synthesis (1975), it refers to the application of theories of evolutionary genetics, stemming from the work of the modern synthesists of the 1930s and 1940s (Huxley, Haldane, Fisher, Wright) as modified by Hamilton, Williams, Maynard Smith and Trivers in the 1960s and 1970s. Here we shall explore both senses of the word.

All the sociobiological disciplines derive ultimately from Darwin's (1859) theory of natural selection (as modified by later discoveries in genetics), differing only in their interpretations. Thus they should not be confused with so-called Social Darwinism, which was in fact derived more from Herbert Spencer's developmental and progressive theories than from Darwin's essentially non-progressive theory of 'descent by modification'. In keeping with Darwin's own approach, especially in The Descent of Man (1871) and The Expression of the Emotions in Man and Animals (1872), sociobiology maintains that humankind is part of the natural world, and that therefore human behavior is subject to analysis by the principles of natural science. Thus it stands firmly on the side of human sciences as natural sciences, as opposed to the view of them as purely cultural sciences: in terms of the traditional debate as set by Dilthey in the 1880s, it is for Naturwissenschaft as against Geisteswissenschaft (or Kulturwissenschaft in Dilthey's later formulation). It thus stands also in firm opposition to many current anti-positivist trends in philosophy and the social sciences and humanities. Whatever the differences among themselves, and these can be quite profound despite outside perceptions of homogeneity, the sociobiological sciences have a common aim of developing theories that apply to social behavior in general, whether human or non-human. Human behavior has unique qualities, but is not therefore exempt from the laws of natural selection, and even those unique qualities must be explained on its principles.

There are several strands or traditions that follow in the Darwinian tradition. One is the natural history tradition: the careful observation of the behavior of both domestic and wild animals, birds and fish, from the lowest organisms to the higher mammals. Much of this was carried on outside the academy by dedicated amateurs, as it had been since long before Darwin. Indeed Darwin could be counted as a member of its ranks. But like the other strands, it was to receive much new impetus, and above all a working theory, from Darwinism. Whitman in the USA and Spaulding in the UK, whose work influenced William James, were followed by such notables as Lack on the robin and Fraser Darling on the red deer. During the 1930s, this observational tradition developed an academic base under Heinroth and Lorenz in Germany, Tinbergen in The Netherlands, Huxley in Britain and Allee in the USA. The academic development came to be known as ethology (a word originally coined by John Stuart Mill). Its general principle was that behavior throughout the life cycle of an organism emerged according to an evolved program, but a program that needed releasers or stimuli from the environment for its completion, as in the classic case of 'imprinting' - by young animals on their parents - for example. The environment included other organisms, and the main stress was on the communication mechanisms that evolved to make social interaction possible. After the Second World War it was joined by remarkable developments in primatology stemming from both zoology and anthropology, principally under Hall, Kummer, Washburn and the Japanese students of their indigenous macaque populations. Later Jane Goodall (under the direct influence of Louis Leakey and the investigation of human origins) was to pioneer the field study of chimpanzees, and George Schaller that of gorillas.

The involvement of anthropologists, as well as the growing interest in human behavior among the ethologists proper, led to various attempts to apply principles derived from animal behavior studies (which continued across many species) to human behavior ('comparative ethology'). These involved such areas as territorialism, dominance hierarchies, mother-infant bonding, male bonding, female hierarchies, optimism, aggression, ritualization, attention structure, kinship, incest avoidance, social categories, attachment, facial expression, courtship, childhood behavior, art, fathering and politics, to name but a few. The first general attempt at synthesis for social scientists came with Tiger and Fox's The Imperial Animal (1971). The general attempt to use theories and methods of ethology to study human behavior and society - either through interpretation or by direct study - came to be known as human ethology and continues as a lively tradition, especially in Europe.

Primatology has developed almost as a separate discipline from ethology and includes field studies of primate kinship and mating, ecology, communication, politics, and the linguistic abilities of apes. Studies of captive colonies and laboratory animals, as in the work of Hinde and Harlow, for example, have been important. Because of the heavy input from anthropology and the nearness of relationship between humans and the other primates, the relevance of primate studies for the study of the evolution of human behavior has always been central. In the spirit of comparative ethology, anthropologists have continued to examine the social life of primates for clues to the evolution of human social organization. Human ethology also maintains a continuing close relationship with studies of hominid evolution generally, and such developments in the neurosciences as the study of the complex relationships between hormones, neurotransmitters, and social behavior ('neuroethology'). The work of Chomsky and the transformational grammarians and such biolinguists as Lenneberg, for example, was important in bringing human verbal communication under the aegis of ethological interpretation, and away from the purely cultural.

Another major tradition stems from the rediscovery of Mendel and the growth of genetics. This was an independent development in its origins, but once married to Darwin's theory of natural selection (after an initial confusing period when it was thought of as a rival to that theory) it developed an impressive field of evolutionary population genetics which was concerned with the causes of the shifting of gene frequencies in populations over time. There was a remarkable convergence of these ideas in the 1930s which, following the title of a book by Julian Huxley, has come to be known as the 'modern synthesis' and involved the work of, among others, R. A. Fisher, J. B. S. Haldane and Sewall Wright. What the synthesis did was to marry the population concerns of the naturalists with the mathematical concerns of the geneticists and show how natural selection was the bridge. It did not, however, concern itself overly with the evolution of behavior in the manner of the ethologists. The latter continued to operate largely under the aegis of 'group selection' theory, although this was often unstated. In the clear case of Wynne-Edwards in his Animal Dispersion in Relation to Social Behavior (1962) it was central, stating that conventional behaviors evolved, for example, in order to achieve the control of populations.

During the 1960s a reaction set in to this form of thinking that was to have profound effects for the future of human behavioral evolutionary studies. Williams produced an elegant series of arguments insisting that natural selection could work only on individual organisms, not groups, and Hamilton, in an attempt to produce an answer to Darwin's unsolved puzzle of the existence of sterile castes in insects, demonstrated that such 'altruism' (i.e. sacrifice of one's own reproductive fitness to further that of others) could spread in populations if certain conditions were met. Haldane had made the half-serious suggestion that while he would not lay down his life for his brother, he would for two brothers or eight first cousins. Hamilton worked out the mathematical genetics of this self-sacrificial behavior in which it paid altruists, in terms of their own reproductive fitness, to sacrifice their immediate fitness (offspring) in so far as in doing so they preserved the fitness of enough relatives, who carried genes identical by descent with their own, to compensate. The logic of this implied that the real units of evolution were in fact the genes themselves, and that organisms were the mechanisms, or vehicles, by which the genes ensured that replicas of themselves were reproduced. This logic was beautifully worked out by Dawkins, who coined the now popular term 'selfish gene'.
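Hamilton's result is usually stated as a compact inequality, now known as Hamilton's rule; the notation below is the standard modern one, not drawn from the glossary sources:

```latex
% Hamilton's rule: a disposition to altruism can spread when
\[ r\,b > c \]
% where r is the coefficient of relatedness between altruist and
% recipient, b is the reproductive benefit conferred on the recipient,
% and c is the reproductive cost to the altruist.
%
% Haldane's quip follows from the standard coefficients of relatedness:
% r = 1/2 for a full sibling and r = 1/8 for a first cousin, so saving
% two brothers (2 x 1/2 = 1) or eight first cousins (8 x 1/8 = 1) just
% balances, in expected copies of one's genes, the loss of oneself (r = 1).
```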

The originality of Hamilton's position, however, was to show how 'altruism' and 'selfishness' were two sides of the same coin, and not necessarily incompatible. In helping relatives we were, in genetic terms at least, helping ourselves. To this interesting mix Trivers added the necessary formula to deal with altruism towards strangers, again demonstrating how this could evolve if there were a pay-off for the altruist. Maynard Smith had shown how game theory could handle much of the conceptual and mathematical elements of this theory, and the philosopher's favorite paradox of the 'Prisoners' Dilemma' became the foundation of theories of what Trivers dubbed 'reciprocal altruism'. A major element of this theory involved the possibility of cheating, since freeloaders could always take the benefits without paying the costs. They could not, however, do this to excess since there would then be no altruists to cheat. Hence arose the concept of the evolutionarily stable strategy (ESS), in which two such behaviors could evolve in tandem.
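The logic of reciprocal altruism and cheating can be illustrated with a toy iterated Prisoner's Dilemma. The payoff values and strategies below are illustrative assumptions, not taken from Trivers or Maynard Smith; the sketch only shows why conditional cooperation ('tit for tat') outperforms both the unconditional altruist it cannot be exploited like, and the cheat who gains nothing after the first encounter:

```python
# Minimal iterated Prisoner's Dilemma: 'C' = cooperate, 'D' = defect.
# Payoffs follow the standard ordering T > R > P > S (values are assumptions).
PAYOFF = {('C', 'C'): (3, 3),   # mutual cooperation (R, R)
          ('C', 'D'): (0, 5),   # sucker's payoff vs temptation (S, T)
          ('D', 'C'): (5, 0),
          ('D', 'D'): (1, 1)}   # mutual defection (P, P)

def always_cooperate(opponent_moves):
    # Unconditional altruist: cooperates no matter what.
    return 'C'

def always_defect(opponent_moves):
    # Unconditional cheat: takes the benefit, never pays the cost.
    return 'D'

def tit_for_tat(opponent_moves):
    # Reciprocal altruist: cooperates first, then mirrors the last move seen.
    return opponent_moves[-1] if opponent_moves else 'C'

def play(strat_a, strat_b, rounds=10):
    """Return total payoffs (score_a, score_b) over repeated rounds."""
    seen_by_a, seen_by_b = [], []   # each player's record of the other's moves
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strat_a(seen_by_a)
        move_b = strat_b(seen_by_b)
        pay_a, pay_b = PAYOFF[(move_a, move_b)]
        score_a += pay_a
        score_b += pay_b
        seen_by_a.append(move_b)
        seen_by_b.append(move_a)
    return score_a, score_b
```

Over ten rounds a cheat fully exploits an unconditional altruist (50 against 0), but against a reciprocator gains only a one-round advantage (14 against 9), while a pair of reciprocators earn 30 each: conditional cooperation and a residue of cheating can coexist, which is the intuition behind the ESS.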

Thus two powerful new concepts were developed: the concept of inclusive fitness, the fitness of the individual plus that of those with whom the individual shared genes in common by descent; and the concept of reciprocal altruism. A corollary of inclusive fitness was kin selection, which sounds like group selection in that it sees selection working essentially on units of related organisms. The difference is that kin selection does not need to assume that behaviors evolve for these groups, but only for the benefit of individuals and their gene replicas in other organisms. Trivers further added the concept of parental investment (PI), which clarified the differing roles of the sexes in the rearing to viability of their offspring, and hence the problems of parent-offspring and sibling-sibling conflict, as well as the profound asymmetry in male and female mating strategies. As a consequence, attention was called to Darwin's neglected concept of sexual selection, especially as it affected differential male-female reproductive strategies. One might say that sociobiology in the narrower sense was born when E. O. Wilson (1975) took these theoretical concepts, married them to the data of entomology, ethology and primatology, and produced a 'new synthesis' to replace and advance the 'modern synthesis' of Huxley. Wilson's ambition (which was anything but narrow) was to order all behavior across species, from insects through primates to humans, according to the set of principles deriving from the precepts of Williams, Maynard Smith, Hamilton and Trivers. The influence of his massive synthesis has been profound, not only on the science of human behavior, but also throughout the biological sciences. We, however, are concerned here with the subsequent developments in the study of human social behavior.

Largely under the influence of Alexander, a school of sociobiological thought emerged which took as its central precept the maximization of reproductive success. Its main assumption is that such maximization - deriving directly from Darwinian fitness - is a basic motive and can explain a whole range of human mating and kin-related behaviors. It married this idea to the basic ideas of kin altruism ('nepotism'), inclusive fitness, parental investment and paternity certainty. Thus organisms - humans included - would strive to maximize reproductive success through inclusive fitness, and attempt to ensure (largely in the case of males) that the genes they 'invested' in were their own, that is, ensure the certainty of paternity. Ethnographic, sociological and historical examples were ransacked to discover examples of these principles at work. Thus the problem of the avunculate (the special relationship between mother's brother and sister's son), and hence the origins of matrilineal descent, were attributed to 'low paternity certainty' in promiscuous societies, where males would prefer to invest in sisters' sons, with a low but definite degree of genetic relationship, rather than their own sons, whose degree of relationship could be zero. The logic of this general position has been applied to hypergamy, despotic polygyny, child abuse, legal decisions, kin support in illness, family structure, cross-cousin marriage, mate competition, kin-term manipulation, polyandry, bridewealth, morality and parental care, among many others.

Another tradition, however, rejects the primacy of reproductive fitness maximizing. It argues that while differential reproductive success in the past, and particularly in the species' environment of evolutionary adaptedness (EEA), certainly led to specific adaptations, no such generalized motive can explain ongoing behavior. The motive, it is argued, does not give specific enough instructions to the organism, which is more likely to act on proximate motives like desire for sex, avoidance of cheaters, accrual of resources, achievement of status, etc. These may well lead to reproductive success, but they are not based on any general desire for its maximization. Studies, then, of contemporary behavior which demonstrate - as do many of the studies cited above - that certain practices in certain circumstances lead to reproductive success, or that they involve nepotistic actions towards kin, tell us only that ongoing behavior shows ingenious adaptability, but tell us nothing about whether or not these practices stem from genuine evolutionary adaptations.

The influence, among other things, of cognitive science has led many sociobiologists to reject the 'adaptation agnostic' stance of their brethren and to look for specific adaptational mechanisms in human perception and cognition ('information processing mechanisms'), whether or not these lead to current reproductive success. This school of evolutionary psychology has attempted to devise means of testing for 'domain specific algorithms' in the human mind, and is firmly opposed to 'domain general mechanisms' such as are proposed, for example, by many artificial intelligence theorists. It is sympathetic to those philosophers of mind, like Fodor, who prefer a modular to a unitary model of mind, and echoes, for example, Lumsden and Wilson's idea of 'epigenetic rules' or the 'biogrammar' of Tiger and Fox, and various other developments in the field of 'cognitive ethology' and transformational grammar ('the language acquisition device'). For example, Tooby and Cosmides have looked for cognitive mechanisms for social exchange and the detection of cheating, Buss for mate-preference mechanisms, Wilson and Daly for male proprietary behavior, Silverman and Eals for sex differences in spatial ability, Profet for pregnancy sickness as an adaptive response to teratogens, and Orians and Heerwagen for evolved responses to landscape. Linguists, like Pinker, adhering to this approach take issue with Chomsky and others who, while seeing linguistic competence as innate, do not see it as a product of natural selection. Evolutionary linguists see language, on the contrary, as having all the design hallmarks of an evolved adaptation. The 'adaptationist' school, however, like the one it rejects, is itself largely brain agnostic, in that it does not try to link its domain specific modules to brain functions.
But developments in neuroscience and artificial intelligence, particularly theories of parallel distributed processing (PDP), may well show the way to a future melding of these approaches.

Sociobiology, both broadly and narrowly speaking, thus shows a continuing, vigorous development, with influences in all the social sciences as well as philosophy and literature. Even criticisms of it have developed beyond simple-minded objections to genetic determinism or Social Darwinism to become serious attempts to grapple with the real issues. The general issue of conceptualizing the relation of genes and culture continues to be contentious, and the issue of group selection is by no means settled, but the future holds promise for the development of a normal science paradigm within which constructive disputes will be possible and cumulative progress made. The big question mark for the social sciences is the degree of their willingness to enter into a constructive debate with the sociobiologists now firmly established in their midst. See also human nature, Social Darwinism.


3. State refers, in its widest sense, to any self-governing set of people organized so that they deal with others as a unity. It is a territorial unit ordered by a sovereign power, and involves officeholders, a home territory, soldiers distinctively equipped to distinguish them from others, ambassadors, flags, and so on. Since the 1880s, the inhabitable land of the world has been parceled up into such units; before that, quite large areas had been either unclaimed and uninhabited, or inhabited by nomadic and wandering peoples who were not organized as states. Most states are now represented at the United Nations, and they vary in size and significance from China and the USA at one extreme, to Nauru and the Seychelles at the other.

More specifically, however, the term state refers to the form of centralized civil rule developed in Europe since the 16th century. This model has been imitated, with varying success, by all other peoples in the modern world. What most distinguishes the state as an organizational entity is the freedom and fluency with which it makes and unmakes law. The empires of the east, by contrast, were predominantly bound by custom, while in Europe in the medieval period, authority to rule was dispersed among different institutions, and was in any case slow to acquire the habits of fluent legislation.

The modern European state came into being gradually, and has never ceased to evolve. Its emergence can in part be traced in each of the major European realms by way of the growing currency of the word 'state', along with its analogues in other European languages: stato, état, estado, Reich and so on. The idea, however, has played a varying role in different countries - much less, for example, in Britain than in some continental countries. Machiavelli in The Prince (1513) exhibits a clear grasp of the emerging realities of central power, but while he sometimes talks of lo stato, he can also use expressions like loro stato (their state) which suggest that he is not altogether clear about the difference between a state and a regime. In Jean Bodin's Six Livres de la République (1578) later in the 16th century, the French state was explicitly theorized in terms of the idea of sovereignty, as the absolute and perpetual power of both making and unmaking laws. The unmaking of laws is important, because it constitutes one reason why the growth of absolute power could be welcomed as a liberation from the dead hand of inherited rules. A weariness with the civil strife of the 16th and 17th centuries further disposed many people to welcome absolute rulers as guarantors of peace. Monarchs were, of course, far from loath to acquire this power, and set to work diminishing the co-ordinate powers and jurisdictions inherited from earlier times. The Church was perhaps the most important of these jurisdictions, and lost power no less in realms that remained Catholic than in those which became Protestant. Parliamentary institutions fell into desuetude everywhere except in England. The nobility, which had been turbulent in the exercise of its feudal powers, were domesticated as courtiers, most famously at the Versailles of Louis XIV. Monarchy became strictly hereditary and evolved mystiques both of blood and divine right.
The absolute power thus generated was often used with a ruthless cynicism typified in the motto 'cannons are the arguments of princes' and exemplified in the careers of spectacularly aggrandizing monarchs like Charles XII of Sweden and Frederick the Great of Prussia. But all states alike tried to expand their power, both by mobilizing the resources available and by conquering new territory. It would be a mistake, however, to think that this early modern absolutism became indistinguishable from despotism. The sovereigns remained subject to myriad customary restrictions and had to operate for the most part in terms of law, whose abstractness limits its usefulness as an instrument of pure policy. Further, as the new system settled down in the later 17th century, the more powerful classes, such as the nobility, clergy and the bourgeoisie in the towns, solidified into corporations which sensibly limited the freedom of action exercised by monarchs, who found in Enlightenment rationalism a doctrine highly conducive to their dreams of mobilizing national power. What emerged was the ancien régime, a social form so immobile that it needed a French Revolution and a Napoleon to destroy it.

The issues raised by the emergence of this quite new form of civil association can best be grasped by their reflection in European political philosophy. A pure theory of the state was presented by Thomas Hobbes in Leviathan (1651). Hobbes argued that subjection to a sovereign ruling by law was the only alternative to the incessant discord created when proud and insecure individuals jostled together. Hobbes was clear, as Machiavelli was not, that the state (or Leviathan) is an abstract and impersonal structure of offices conditionally exercised by particular individuals. People must, as subjects, rationally consent to the absolute power of the sovereign, but this consent lapses if the sovereign cannot protect them, or if the sovereign begins directly to threaten their lives. The boldness of the Hobbesian conception, which reflects the thoroughness with which Hobbes thought the issue through, lies in the extrusion of any external limitations on the sovereign power: what the sovereign declares to be just is ipso facto just, and the sovereign has the right to determine religious belief, and what may be taught in the schools. Liberty is the private enjoyment of the peace brought by civil association, a peace in which alone culture and material prosperity may be garnered.

Being a philosophical work, the Leviathan explained but did not justify, and fragments of its argument were appropriated by both sides in the English civil war. Both sides were offended by it. The Leviathan was publicly burned at Oxford in 1685. Immediately after the revolution of 1688, John Locke published Two Treatises of Government, which softened the intolerably austere picture of the state Hobbes gave. This was an occasional work which popularized the notion that governments rested upon the consent of their subjects, and were limited by natural rights (to life, liberty and property) with which people entered civil society. Their business was to protect such rights. Locke avoided the idea of sovereignty altogether and emphasized that the rulers represented the ruled. The spread of liberalism in the next two centuries extended this idea, both in theory and in practice.

In the course of the 18th century, it became clear that the modern European state raised quite new problems, both practical and theoretical. It was a free association of individuals claiming the power to legislate for themselves, without any necessary moral, religious or metaphysical commitments. Two ideas, potentially disharmonious, consequently dominated further development: community and freedom. The best formulation of the problem is in Chapter 6 of Rousseau's Social Contract (1762):

How to find a form of association which will defend the person and goods of each member with the collective force of all, and under which each individual, while uniting himself with others, obeys no one but himself, and remains as free as before.

Rousseau's solution focused on a general will constituting a community of citizens devoted to the public interest. Such a conception clearly emerged from the ancient conception of the virtuous republic which had haunted European thought since Machiavelli's Discourses on the First Ten Books of Livy (1513), and which was unmistakably subversive of the European system of extended monarchies. Just how subversive it was soon became evident, both in the thought of Immanuel Kant, who argued that republics were the condition of perpetual peace, and in the French Revolution, whose protagonists adopted Rousseau posthumously as one of their own.

The problem was that the classical republic was possible only in a small city with a homogeneous population. Montesquieu had argued in De l’esprit des lois (1748) (The Spirit of the Laws) that no such thing was possible in the conditions of modern Europe. In the 1820s Hegel presented in the Philosophy of Right (1821) an account of the modern state as the objective embodiment of the fully developed subjective freedom towards which the human spirit had always been tending. At the time, however, a whole group of writers emerged to emphasize the misery and repression, as they saw it, of modern life and the iniquity of the state. Marx and Engels argued that the state was an illusion masking the domination of the bourgeois class, and predicted that after a proletarian revolution, the state would wither away. A newly homogeneous humankind would be able to surpass the unity and virtue of the classical republics on a worldwide scale.

The actual history of states has been one of continuous growth, both in their claim to regulate the lives and property of their subjects, and in their physical capacity to enforce such claims. It is, for example, possible to regulate a literate society much more completely than an illiterate one. The propensity of European states to engage in war with one another has provided frequent emergencies in which necessity trained governments in how to regulate; and all states now have bureaucracies and other instruments of control. Yet, paradoxically, the increase in the state's range and power has produced countervailing decreases in effectiveness. When its functions were limited to guaranteeing order and security, the state was accorded immunity from some of the moral restraints binding on individuals. The doctrine called 'reason of state' authorized the breaking of treaties, deceit, and the employment of violence, when necessary. From the 19th century onwards, some extensions of state power (especially the redistributions of wealth which began to constitute the state as a system of welfare for all members of society) were justified on the ground that the state stood for a higher morality. Citizens thus came to believe that they had rights against the state. The state's claim to suspend law, to guard its own secrets, to the use of non-legal measures in dealing with enemies who themselves resorted to terror - all the traditional apparatus of raison d'état - was challenged, and it was felt to be the duty of the state to represent the highest moral standards even against those who violated them. In developments such as this, and in the persistently transforming dynamism of the idea of democracy, will be found reasons for seeing the modern state, at least in its European heartland, not as an abstract idea, but as an institution ceaselessly responsive to the beliefs that move its subjects. See also citizenship, civil society, state, origins of.

state, origins of

3. Since the 17th century, much western scholarship has focused on the origin of the state, a form of political organization in which power rests in the hands of a small governing group that monopolizes the use of coercive force to maintain internal order and cope with neighboring peoples. This type of government is found in all large-scale societies, which are also invariably characterized (even in socialist examples) by marked political and economic disparities.

Theorizing has centered on whether states evolve primarily through consent or conflict and as a result of internal or external factors. It has also been debated keenly whether states have improved or degraded the human condition by comparison with smaller, seemingly more natural societies. 'Social contract' theorists, including Thomas Hobbes and John Locke, believed that individuals submitted willingly to the state in return for the protection it offered their persons and property. Karl Wittfogel and Julian Steward argued that the state first evolved to manage large irrigation systems in arid regions. Others see the state developing in order to regulate the production, importation and redistribution of valuable materials.

The oldest 'conflict' theories postulated that states originated as a result of conquest, especially of agricultural peoples by pastoralists. Marxists view agricultural and craft specialization as producing socioeconomic differentiation that results in class conflict and ultimately the formation of the state as a means to maintain ruling-class dominance. Robert Carneiro argues that population increase within geographically or socially circumscribed areas results in warfare over arable land and the emergence of the state. More generally, Mark Cohen maintains that population increase leads to the intensification of food production and eventually to competition for arable land, population agglomeration, and the development of the state. In each of these theories, the state evolves at least in part to protect social and political inequalities.

None of these theories explains satisfactorily the origin of the state. Marxists assume that emergent classes developed prior to the state, a position that is not widely accepted. Other explanations do not appear to cover all, or even most, cases. 'Prime-mover' theories of the origins of the state have been rejected. Synthetic theories, which combine a number of causal variables, have not proved more successful. It is widely accepted that many different factors promote the development of larger and more differentiated societies which require state controls. This has led interest to shift away from explaining why the state develops to how.

Increasing use has been made of 'information theory' to account for the development of the state. It is argued that the delegation of decision making to central authorities becomes increasingly necessary for political systems to function adequately as their size and complexity increase. Centralized control requires the collection of information from all parts of the system and the effective transmission of orders from the center to these parts. Archaeologists have argued that the state can be equated with settlement hierarchies of at least three levels, corresponding to an equal number of levels of decision making. This does not explain, however, why early civilizations were characterized by marked economic and status disparities rather than simply by functionally differentiated leadership roles.

A further application of information theory suggests that gossip, ridicule, accusations of witchcraft and other economic and political leveling mechanisms that are found in small, egalitarian societies work only when people are known to one another. As larger societies develop, leaders can use their newly acquired control of public information to weaken opposition and silence critics. This facilitates the concentration of political and economic power among an elite and the development of conspicuous consumption as a feature of elite lifestyles.

At the same time, it is recognized that rulers must curb the predatoriness of officials and excessive increases in the bureaucracy and distribution of state largess if exactions are to remain within limits that are acceptable to the taxpayer. This assumes that the state provides services, such as defense and internal order, which the bulk of the population accepts as essential. If exactions are kept within accepted limits, most people will continue, at least passively, to support the state.

Additional theories stress the emulation of inegalitarian behavioral patterns within extended and nuclear families and other spheres of personal interaction. This strengthens the power of the state by making domination appear universal and natural from each person's earliest infancy. Attention is also paid to hegemonic ideologies which elites employ to naturalize inequality and enhance their own power. There is, however, disagreement about the extent to which dominated classes accept elite claims or construct counter ones. The latter position rejects theocratic explanations of state power. While the state is not coterminous with human societies, discussions of its origins remain intimately tied to an understanding of human nature. See also state.


1. Social status refers to the prestige and honor publicly ascribed to particular positions and occupations within society. The possibility of identifying a hierarchy of status groups within society, that would be strictly independent of class hierarchies, was recognized by Max Weber. The classic example is that of priests and other religious professionals in contemporary society, whose status is disproportionate to their income or political power (although it may be indicative of their influence on the formation of public opinion). Status groups may be expected to have distinctive lifestyles, including patterns of behavior, belief systems and patterns of preference and consumption. For Weber a caste system was representative of a hierarchy of status groups, not of classes. Crucially, social status is to be understood as the prestige that is ascribed to the social position, which need not necessarily correspond to an individual member’s self-perception.


1. A stereotype is an oversimplified and usually value-laden view of the attitudes, behavior and expectations of a group or individual. Such views, which may be deeply embedded in sexist, racist or otherwise prejudiced cultures, are typically highly resistant to change, and play a significant role in shaping the attitudes of members of the culture to others. Within cultural studies, the role of stereotypes is possibly most marked in the products of the mass media (including the portrayal of women and ethnic minorities in drama and comedy, and in the shaping and construction of news coverage), although they are also significant in education, work and sport (in channeling individuals into activities deemed appropriate to their stereotyped group).


1. The concept of a 'subculture', at its simplest, refers to the values, beliefs, attitudes and life-style of a minority (or 'sub-') group within society. The culture of this group will diverge from, although be related to, that of the dominant group. Although now associated in large part with the cultures of young people (mods and rockers, skinheads, punks), it may also be applied to ethnic, gender and sexual groups. The concept was in fact developed largely through work in the sociology of deviance (referring, for example, to the culture of 'delinquents', criminals or drug-users). An early explanation of the behavior of working-class delinquents saw it as a matter of youths over-conforming to the working-class values of their parents (such as toughness and masculinity, cunning as against gullibility, risk-taking), so that in over-conforming they come to violate the dominant norms of middle-class culture.

The concept of 'subculture' is important, precisely because it allows recognition of the diversity of cultures within a society. While the older concept of youth culture tended to assume a single, homogeneous culture amongst young people, the subcultural approach stresses the fragmentation of that culture, especially along class lines. As with the concept of 'counterculture', 'subculture' tends to presuppose some form of resistance to the dominant culture. However, 'counterculture' increasingly comes to refer to groups that are able to provide an intellectual justification and account of their position. Subcultures articulate their opposition principally through exploiting the significance of styles of dress and patterns of behavior (or rituals). (Semiotic approaches, decoding the dress and behavior of subcultures, have therefore been highly influential. The skinhead's dress of braces, cropped hair and Doc Martens makes sense as a comment upon an imagined industrial past, and as an attempt to come to terms with powerlessness in the face of a predominantly middle-class culture, in which the skinhead has neither the financial nor the cultural resources to participate.) The subculture may therefore be seen to negotiate a cultural space, in which the contradictory demands of the dominant parent culture can be worked through, or resisted, and in which the group can express and develop its own identity. The subcultural approach can therefore be characterized by its sympathy with the position of the subculture, suggesting that subcultures are an important source of cultural variation and diversity - as opposed to the implicit or explicit condemnation of subcultural activity that accompanied earlier studies of deviance.

The sixties mods offer a neat illustration of a subculture and its analysis. Mods may be characterized by their concern with fashion and consumption, and a hedonist life-style. Typically, the mod was employed in low grade, non-manual (clerical) work. The mod is thus very much part of his or her time, responding to the increased consumerism of the 1960s, and the shift in economy from traditional manual and manufacturing work, to non-manual, service industry. Indeed, the mod takes consumerism to its limits. Unlike so many other subcultures, the mod is disturbing, not because he or she shuns the 'parent' culture's demands for smart dress, but because he or she is just too smart. The problem faced by the mod is that employment (that is traditionally associated with a work ethic of self-denial and self-discipline) is at once necessary, in order to pay for a hedonistic life-style, and yet at odds with the life-style (for self-discipline is the opposite of hedonism). The mod therefore conforms to the paradoxical demands of consumerism and work, through the use of amphetamines.

Certain criticisms have been made against the subcultural approach as it has developed within cultural studies. It has been seen to be overly selective in the subcultures it has studied. Crucially, much of its work has focused on masculine activities, to the exclusion of either female participation in the subculture, or more importantly, the recognition of distinctive female subcultures. Similarly, it may be argued that it has been excessively concerned with working-class subcultures, leading to a romanticizing of the subculture as a source of resistance (and politically progressive values). Further, an over-emphasis on subcultures may serve to distort the picture that cultural studies has of youth as a whole. The concept of youth culture remains important. An emphasis on subcultures may serve to highlight the spectacular at the cost of ignoring the more mundane forms that are of concern to the majority of young people. This majority may be more appropriately seen as belonging to youth culture (or cultures), not to a resisting subculture. A crude opposition between conformist youth (or even 'pop') culture and a radical subculture is itself inappropriate, as it fails to recognize the degree to which the two merge.


1. A word with a variety of meanings. Symbols pervade human life, and are used in a wide range of specialized discourses, as well as in everyday living. Usually, the word ‘symbol’ is taken as referring to a sign or action of some kind which is used to communicate a meaning to somebody in virtue of a shared set of norms or conventions. A symbol therefore communicates a meaning because it stands for something else, although there is no necessary connection between it and what it stands for (hence its use and meaning are both matters of convention; a conception which Peirce uses in his semiotics). In analytical philosophy ‘symbolic logic’ involves the substitution of symbols for terms which occur in natural language (‘~’ means ‘not’; ‘∨’ means ‘or’, etc.) as a means of analyzing the structure of arguments. In Freudian psychoanalysis, symbols are taken to stand in place of some object which has been repressed (in this sense, symbols usually have some (often metaphorical) relation to their referents; although Freud – a smoker – stated that there are times when a cigar is simply a cigar, from the psychoanalytic point of view the latter could be taken as a metaphor for the phallus when it occurs in a patient’s dreams). In Peirce’s semiotics, a symbol is a kind of sign which bears no relation or resemblance to what it stands for. A symbol can also have historic significance and a multitude of resonances of meaning linked to this (e.g. in European culture, the sign of the cross can be a potent symbol not only for Christian faith, but also for the institutions, identity, traditions and values associated with that culture).

systems theory

1. Various forms of systems theory have been used in analyses of society throughout the 20th century, with functionalism being perhaps the longest lasting and most influential variant. In general, a system may be understood as a collection of interrelated parts. The system is divided from an external environment by a boundary. The environment is more complex than the system. The system is thus characterized by the degree of order it manifests, not least in so far as it excludes certain relationships between its parts and enforces others. (For example, a meaningful sentence is a relatively simple affair, its meaning being determined by the rules of its language. The sounds of traffic, other people, bird song, rain and so on around me, when I utter that sentence, are far more complex and indeed seemingly chaotic.) A system maintains this boundary between itself and its external environment, both maintaining an internal order and also drawing the resources necessary for its survival and reproduction from the external environment. (Thus, an animal organism can be understood as a system. Its skin is a boundary between it and the external world. It must be able to draw sustenance from that world, and maintain itself as a vital organism. At its death, the boundary collapses, and the organism decays into its environment.) It may be argued that any system must satisfy a set of abstract conditions in order to remain stable and vital. These include adaptation to the external environment, internal integration, and the motivation to realize the goals of the system as a whole.

Society may be treated as a system at a number of different levels. For example, the interaction between two people can be understood as a system. That interaction will have a purpose. The system, strictly, co-ordinates not the people, but their actions. Other people and other irrelevant events and actions will be excluded from it. It will be conducted according to various rules that give it coherence and integrity. Thus, for example, a market is a system that co-ordinates the actions, not merely of two, but potentially of many people. In systems theory the market is not understood as a meaningful exchange between people (so the systems theorist is not interested in social background or motivations of those involved in the market) but only in the co-ordination of the actions of buying and selling. Society at a higher level, for example that of a nation-state or country, may similarly be understood as a system. The systems theorist therefore responds to a particular aspect of contemporary societies: the way in which they confront their members as having a power to constrain and control them. (The point is not simply that a market can be viewed as a system, theoretically blanking out our subjective experience of it, but that markets are increasingly becoming pure systems. The argument would be that markets, like bureaucracies, increasingly only recognize the ability of agents to buy or sell. Money alone matters in coordinating our actions together. The market thus takes on the force of an objective law, so that we are obliged to obey it, whether we like it or not, and in addition, despite the fact that we may realize that the market is really just one more set of cultural conventions.) It has then been argued that modern societies may be characterized by this high level of systematization: social actions are increasingly coordinated by sets of rules and conventions that fall outside of the understanding (or even experience) of society's members.
For some, such as Niklas Luhmann (who has done much to revive systems theory in the social sciences), this is a good thing, for it removes a burden of responsibility from the individual. For others, such as Jürgen Habermas, it can pose a threat, in removing society from the control of the people who constitute it.


Taylorism

1. The scientific management movement, which developed in the early 20th century in association with the writings of F.W. Taylor, sought to bring rational administration to the workplace. Scientific management seeks to increase the efficiency of industrialized mass production. Taylor proposed three principles for the reorganization of work: (i) an extreme division of labor, reducing tasks to the simplest possible actions (and if possible to a single repeated operation), and as such reducing the skills required of manual labor; (ii) managers (rather than foremen or skilled workers) would have complete control of the workplace, which in turn serves to validate management as a skill distinct from mere ownership; (iii) time-and-motion studies would be used to control costs and the efficiency of movement within the workplace.

Taylorism is a key component of Fordism, which is the term coined by the Italian Marxist Gramsci to refer to the writings of the industrialist Henry Ford. Fordism, however, combines scientific management with an extreme mechanization of the production process. Ford, in addition, advocated high wages, that at once rewarded the workforce for submission to the disciplines of the scientifically managed workplace, and (if the policy were adopted by all producers) would facilitate the market for mass produced products. The proletariat are thereby incorporated into capitalism, in that they come to benefit, to some degree, from its advance.

In the late 1970s, the inapplicability of Taylorism to the new knowledge-based technologies (such as computing and micro-electronics) was partially responsible for a crisis in Fordism, leading to the prospect of a 'post-Fordism', where production would be grounded in the return of craft specialization and greater flexibility and responsibility amongst the workforce.

3. Taylorism refers to a form of systematic management whose principal aim is to remove control over the organization of work from those who actually do the work. The separation of thinking about how to perform a particular task from its actual execution lies at the heart of Taylorism, and in Frederick Taylor's day at the turn of the 20th century, it was a radical innovation. In a world of craft knowledge and internal contracts, the manufacturing workshop at the end of the 19th century was one organized and largely run by craftsmen in their respective trades. Taylor's (1911) notion of 'scientific management' represented a direct challenge to this form of organization and laid the basis of the modern factory system.

Scientific management, as professed by Taylor, involves wresting control from the workforce of what they do and how they do it, and passing this knowledge to a group of workers whose sole task it is to manage. Management thus becomes a distinct activity (separate from ownership) whose function is to establish a set of work standards, lay down how they should be achieved, and select the most appropriate workers to perform them. To achieve this process of task management it was necessary to analyze the existing labor process: breaking jobs down into their component parts and then calculating how to get the best out of each component. Time and motion studies were used by management to obtain detailed knowledge about each step of the work process. Taylor himself was fond of telling the story of Schmidt, a pig iron handler at the Bethlehem Steel Company, to convey the principles of his system.

After watching a gang of workmen loading pig iron on to railway cars at the rate of 12½ tons per person each day, Taylor's first act was to select 'scientifically' the appropriate workmen. Schmidt was selected for his physical and social attributes. After a series of time and motion studies, where the details of picking up pig iron and walking with it were recorded, Schmidt was instructed to load pig iron in a systematic way, his every movement timed, and a piece rate wage system offered as an incentive to improve his output. The end result, according to Taylor, was that Schmidt was able to carry 47½ tons of pig iron each day. The broader message, however, was that it was now possible to break down jobs into their minimum skill requirements, reduce training times, and still improve productivity levels. It also implied that cheaper, unskilled labor could be hired to perform the tasks.

It is debatable how much influence Taylorism has had upon patterns of work organization in the industrialized world, especially outside of the USA. It is apparent, however, that Taylor's ideas on the rationalization of work did influence Henry Ford's engineers at Detroit in 1913-14 and contributed towards the rise of mass production techniques at the Highland Park Factory. What distinguishes Ford's innovations from those of Taylor, however, is that whereas Taylor took for granted the existing level of technology and sought greater productivity from labor, Ford used technology to mechanize the labor process and to eliminate labor. Fordism, as exemplified by the moving assembly line, represents the dominance of machinery over labor and is thus a step beyond Taylor in the organization of work and the development of modern industry. See Fordism.

technological progress

3. The importance of technological progress for economic and social development is undeniable, but it is a field where understanding and analytical effort have lagged far behind other areas, such as short-term supply-demand analyses. This is due at least partly to the complexity of the process of technical change and the difficulty of obtaining precise definitions and measurements of it. Important advances have been made since the 1970s, but it remains a relatively neglected field.

Schumpeter, one of the few distinguished economists to put technological progress at the center of his analysis, stressed the importance of new products, processes, and forms of organization of production – factors which have been clearly associated with enormous changes in the economic structures of developed economies since the Industrial Revolution. The rise of major new industries, such as railways and steel in the 19th century, and automobiles, synthetic materials and electronics in the 20th, depended upon a complex interaction of inventions, innovations and entrepreneurial activity, which Freeman aptly described as ‘technological systems.’ Since the onset of the post-1973 recession, the idea that developed capitalist economies are subject to long waves of alternating periods of prosperity and stagnation, each wave being of around fifty to sixty years’ duration, has been revived: some commentators argue that new technological systems are primarily responsible for the onset of an upswing, which begins to slow down as the associated technologies and industries reach maturity. Other economists, while accepting the notion of such cycles, argue that technological progress is a consequence, rather than a cause, of them. Outside the long-wave literature, there is an ongoing debate concerning the direction of causality regarding observed statistical associations between the growth of an industry and the pace of technical innovation.

At the macroeconomic level, the traditional neo-classical growth models treat technological progress as part of a residual factor in ‘explaining’ increases in output, after accounting for the effects of changes in the volume of the factors of production (capital, labor, and so on). This residual is normally large, and implicitly incorporates factors such as the education of the workforce and management expertise which contribute to improvements in efficiency, in addition to technological progress. In such approaches technological change is purely ‘disembodied,’ that is, unrelated to any other economic variables. The class of so-called vintage capital models, which have become quite widely used since the 1970s, treat technological progress as at least partly embodied in new fixed investment: plant and machinery are carriers of productivity improvements and the gains from technological progress depend on the level of investment in them. Even the latter approach, however, does not go far in capturing the processes and forces by which new techniques are absorbed into the production system; the ‘evolutionary’ models pioneered by Nelson and Winter attempt to explore the conditions under which entrepreneurs will strive to adopt improved techniques. Such approaches are, however, in their infancy.

Discussion of how new techniques are generated and adopted is typically conducted at a more microeconomic case-study level. An invention is a new or improved product, or a novel procedure for manufacturing an existing product, which may or may not become translated into an innovation, that is, the (first) commercial adoption of the new idea. In many cases, scientific discoveries pave the way for inventions which, if perceived as having potential market demand, are adopted commercially; in the 19th century, the inventor/innovator was frequently an independent individual, but in the 20th century the emphasis has moved to scientific and technological work being carried out in-house by large firms. If an innovation is successful, a period of diffusion often follows, where other firms adopt or modify the innovation and market the product or process. It is at this stage that the major economic impact frequently occurs. Freeman illustrated this process in the case of plastics, where fundamental scientific research work in Germany in the early 1920s on long-chain molecules led directly to the innovation of polystyrene and styrene rubber, and indirectly to numerous other new products in the 1930s. Further innovations and massive worldwide diffusion took place after the Second World War, facilitated by the shift from coal to oil as the feedstock for the industry. In the 1970s the industry appeared to have matured with a slow-down in demand and in the rate of technological progress.

The measurement of inventive and innovative activity is beset with difficulties. Input measures include the personnel employed and financial expenditure, although there is necessarily a degree of arbitrariness in defining the boundary of research and development activity. Output measures of invention include patent statistics, but these need to be interpreted with caution, owing to the differences in propensity to patent between firms, industries and countries with different perceptions of whether security is enhanced by patent protection or not, and differences in national patent legislation. The use of numbers of innovations as an output measure normally requires some (necessarily subjective) assessment of the relative 'importance' of the individual innovations. Despite their limitations, however, the use of several indicators in combination can provide a basis for comparisons between industries or between countries.

Over the post-war period, governments increasingly recognized the importance of attaining or maintaining international competitiveness in technology. The emergence of Japan as a major economic power owed much to a conscious policy of importing modern foreign technology and improving it domestically. Most countries have a wide variety of schemes to encourage firms to develop and adopt the new technologies, and policies for training or retraining the workforce in the skills needed to use new techniques. In the current context, attention is, of course, focused particularly on micro-electronics-related technologies; and – whatever their validity – fears that these technologies could exacerbate unemployment problems generally take second place to fears of the consequences of falling behind technologically, in the eyes of governments and trade unions alike. Forecasts of the impact of new technologies are notoriously unreliable. The cost-saving potential of nuclear power was dramatically overstated in the early stages, while the potential impact of computers was first thought to be extremely limited. For good or ill, we can, however, say that technological progress shows no sign of coming to a halt. See technology.

technology

1. The word 'technology' is derived from the Ancient Greek word 'tekhne', meaning either 'art' or 'craft.' In modern parlance, however, the meaning of 'technology' has tended to take on the instrumental aspect implied by the word 'craft.' The use of the word 'technology' can in turn be divided into two separate but linked domains. First, 'technology' concerns that web of human practices within which the manipulation of (raw) materials is undertaken with a view to giving them a functional and useful form. In this sense, technology is primarily a matter of technique, and its employment presupposes some notion of purpose or design with regard to the manner in which materials are subsequently used. Second, the end product of such a process of manipulation is also called 'technology.' Thus, when we refer to a 'piece of technology,' such as a computer or an aircraft, we are not referring to the manipulation of materials which gave rise to them, but in each case to something which, by its very nature, is deemed different in kind to other types of object that we might encounter in the world (e.g. rocks and stones, plants, animals etc.). 'Technology', therefore, refers both to a web of human practices and to the products of those practices.

In the late modern era, it might, with some good cause, be argued that the burgeoning of particular forms of technology has been a significant element in the social and political transformations which mark out the history of the industrial and post-industrial periods. Thus, with the rise of industrial forms of production in Britain dating from the late 18th century there were accompanying changes in the distribution and concentration of population (an increased concentration in urban centers) and the concentration and distribution of wealth (a burgeoning capitalist class). Equally, there were accompanying developments in the political constitutions of representative bodies (e.g., by the end of the 19th century, there was an increasingly widening political franchise which elected members of parliament). Without attempting to fill in the historical picture, it is clear what the possible links between technological developments and such social and political developments might be. First, the development of industrial technology increases the viability of producing more goods at cheaper prices, since such technology brings with it the possibility of mass production. In economic terms, this implies an increased turnover both at the levels of production and consumption: speaking from the vantage point of the mercantile capitalist, the more items of a product you can make efficiently (i.e. cheaply) the cheaper you can sell it, and the cheaper you can sell it the more you can sell. In turn, the efficient mass production of goods requires the concentration of labor forces in restricted areas, and this is achieved through offering more financial remuneration for labor than can be obtained in what can subsequently be deemed 'rural' (i.e. non-industrialized) areas. Such movement and concentration of population, it is clear, will have important social effects, in so far as some of the social relations which predominated in rural social forms will no longer apply.
Hence, there may be increased fluidity of labor and job opportunity; likewise, there is the possibility that new social antagonisms will develop (e.g., between those who own systems of production and those who work for them) and, following the account offered by Marxism, individuals will develop self-consciousness through the development of class divisions resulting from the division of labor that mass production institutes. These social ramifications can have knock-on effects, in that the development of wealth among the mercantile class is probably going to be accompanied by an increased desire to see that wealth realized in terms of concrete political power. Likewise, one might expect those who work for the capitalists to want to see an expression of their interests in political terms.

Alongside the approach represented by, for instance, Marx's analysis of social relations, the kind of understanding of the significance of technology implicit in the above approach is also present in the work of thinkers associated with more recent intellectual developments that are often classified under the rubric of postmodernism. One such example is Jean-François Lyotard's The Postmodern Condition (1979). In this book, Lyotard puts forward the view that technology has a determining influence on forms of knowledge. In other words, Lyotard is claiming that the social and cultural effects of technology are not limited to such matters as the socio-historical development of classes with defined interests which spring from the predominance of the economic relation to industrial technological forms. Rather, according to Lyotard, the ways in which we think about, categorize and valorize experience are also subject to change at the hands of technological forces. In short, the question concerning what knowledge is (cf. epistemology) is, on Lyotard's account, an issue which must itself be transformed by the advent of modern technology. This is because technology comes to serve as the primary criterion for evaluating what counts as knowledge within contemporary culture. Technology, in this sense, transforms knowledge to the extent 'that anything in the constituted body of knowledge that is not translatable in this way will be abandoned […] the direction of new research will be dictated by the possibility of its eventual results being translatable into computer language.' Thus, on Lyotard's view the postmodern era is one which bears witness to the 'hegemony of computers,' for it is this hegemony which serves to dictate what counts as knowledge by imposing the criterion of 'translatability' upon those propositions which make claims about reality. One outcome of this is that the primacy of human subjectivity is displaced by the machinic tendencies of modern technology.
This displacement, in turn, renders the thinking subject a secondary phenomenon with regard to knowledge, simply because subjectivity can no longer be taken as the foundational principle which underlies what counts as knowledge. Speaking from the point of view of an inwardly oriented conception of subjectivity (as exemplified by, for example, the Cartesian cogito – cf. self), knowledge, under the conditions dictated by technology, becomes externalized. Knowledge, transformed in this way, becomes linked to market exchange-value and the play of exterior forces. What is noteworthy in Lyotard's account is the claim that material forces (in the shape of technology) have the capability to transform not merely social norms and relations, but can alter radically the ways in which we think about ourselves and our abilities. How 'radical' an insight this is may well be a point of some debate. For example, if one understands 'knowledge' to be best defined in terms of justification (i.e. as 'justifiable belief'), then what has been transformed through the technological process Lyotard alludes to is not necessarily something delineated by the proper name 'Knowledge', but rather the criteria and practices which serve to define what justification is. In other words, even if we might accept that what counts as knowledge must now be judged in terms of its suitability for translation into technological terms, this does not necessarily entitle us to the further claim that knowledge 'itself' has been thereby transformed, since the definition of knowledge as 'justifiable belief' has not changed, only the criteria which constitute justification.

As with Lyotard, aspects of Jean Baudrillard's conception of the postmodern are articulated in the wake of technological developments. Thus, on Baudrillard's argument, technology is again regarded as something capable of transforming our conceptions of experience and knowledge. On Baudrillard's view, the power of technology to influence our understanding of the significance of events through processes of representation is highlighted. Most famously (or notoriously, depending on where your sympathies lie in this context) Baudrillard claimed that the Gulf War was a staged spectacle enacted through the technology of the mass media which, in 'reality', never happened. In other words, the issue of what constitutes an 'event' is taken by Baudrillard to be a matter which is now determined by the representational function of technology. His views, understandably, have been met with a variety of responses.

Whatever their merits as forms of possible explanation of socio-historical events, such accounts as those mentioned above do not, however, necessarily take us any further towards a clear understanding of what technology is. Equally, if we do not understand what technology is, then it might be somewhat problematic to claim that we can construct a persuasive account of its social or cultural significance. One possible approach to this problem has been offered by the German philosopher Martin Heidegger. In his 1953 essay 'The Question Concerning Technology,' he attempts to show that a purely instrumental understanding of technology is a reductive one: it is reductive because if we discuss technology only in instrumental terms we leave out of our account what technology presupposes, and thereby something essential concerning what technology is. Thus, Heidegger claims, if we do not account for technology in terms of what is presupposed by it, then we ignore its 'essence'. Equally, Heidegger is careful to show that 'the essence of technology is by no means anything technological.' In other words, what is presupposed by technology (namely what is essential to it in order for it to be what it is) cannot be accounted for in technological terms. The contemporary view of technology, in contrast, is regarded by Heidegger as being both 'instrumental' and 'anthropological'. In short, this means that technology is generally taken to be a means to an end, and this implies that the desires and purposes of humans constitute an exhaustive definition of it. Such a view is, according to Heidegger, correct as far as it goes. But this view does not go far enough, for it presupposes that we can define notions like 'means' and 'ends' in an unproblematic manner. '[W]herever instrumentality reigns there reigns causality,' and therefore if we do not provide an acceptable account of causality, then we cannot be said to have engaged with the question of what technology is in sufficient depth.

On Heidegger's account, causality can be best elucidated in terms of its 'fourfold' nature: (1) The matter out of which a thing is made; (2) The form which is imposed on the material; (3) The purpose of the thing; (4) That which brings about this transformation (the agent). Heidegger claims that it is essential to see the relationship between each of these four elements as an immanent one. In other words, the agent (4) does not stand 'outside' of, or independently of, 1-3. Rather, each of these is a mode of 'bringing-forth,' i.e. a process in which what is hidden in the world is made manifest. Moreover, 'bringing-forth' is itself 'grounded in revealing,' and revealing involves uncovering and thereby showing how things are. 'Technology is no mere means. Technology is a way of revealing. If we give heed to this, then another whole realm for the essence of technology will open itself to us. It is the realm of revealing, i.e., of truth.' Above all, it is in its capacity as a mode of revealing, not as a mere 'manufacturing', that technology is a 'bringing-forth.' The 'bringing-forth' involved in modern technology is a 'challenging' which 'sets upon' nature so as to impose order upon it with the aim of achieving 'the maximum yield at the minimum expense.' Nature, in short, is conceptualized as a mere resource by modern technology, a storehouse of energy. But even this is 'no mere human doing.' In the same way as a mountain range is formed and folded by forces which are not to be confused with the range itself, so humans are propelled into this 'challenging' by what Heidegger calls 'Gestell' (enframing). 'Enframing means the gathering together of the setting-upon that sets upon man.' In modern technology, humans are themselves 'set-upon' and thereby engage with the world in a manner which cannot be accounted for in purely anthropological terms.
Although the process of enframing which occurs as technology takes place within the sphere of human action, enframing does not 'happen exclusively in man, or definitively through man.' This is because humans are themselves set upon by the conditions of their existence and hence challenged into responding to these conditions through the enframing which underlies technology. Thus, the essence of technology is revealed in the process of enframing, and enframing itself is shown to be a mode of engaging with, and thus revealing, the conditions of existence. This mode of engagement 'starts man upon the way of that revealing through which the actual everywhere […] becomes standing reserve [i.e. a resource].' In this sense, there is a determinacy with regard to how humans encounter the conditions of their existence, once the enframing which constitutes the essence of technology has set them upon the course of revealing which technology embodies. This process, which underlies all modes of revealing, Heidegger calls 'destining' (Geschick). Humans exist within the domain of destining, but are never compelled by it, since destining is itself the 'free space' within which human action is rendered possible. As such, it is 'the realm of freedom.' Technology, in turn, is thus always already situated within the domain of freedom. Given this last point, it cannot make sense to talk of our being 'compelled' by technology, either in the sense of 'a stultified compulsion to push on blindly' with it or, what comes to the same, to rebel helplessly against it and curse it as 'the work of the devil.'

On Heidegger's view, then, we cannot take a stand either 'for' or 'against' technology. However, the danger presented by technology lies in the fact that it may come to subvert all other possible modes of revealing in its pursuit of ordering the world (i.e. mastery over it). In turn, such mastery would reduce both humanity and all other entities to the status of a mere resource for technological goals. Nevertheless, the technological mode of enframing can never entirely subvert the very conditions which gave rise to its historical development, and for Heidegger this means that a space must remain within which to articulate different modes of thinking that can engage with the world. For Heidegger this means, above all, formulating a poetic form of dialogue with which to engage with Being - a theme which pervades much of his work.

Of other accounts of technology, thinkers associated with the Frankfurt School have alluded to the relationship between the rise of technology and the development of modern forms of rationality. Significant amongst these is Max Horkheimer's conception of 'instrumental rationality' and his accompanying criticisms of positivism. According to Horkheimer, modernity can be characterized in terms of a modulation towards a conception of reason which highlights its purposive/instrumental aspect. In short, by 'reason' what is meant in modern culture is a form of thinking which gives priority to the attainment of a given purpose or end, rather than any process of critical reflection upon a broader range of issues which fall outside the purview of the 'means and ends' rationality of instrumental reason. Instrumentalism is thus a form of thinking which takes purposes as 'givens' which are then to be acted upon, rather than a reflective and critical engagement with the question as to whether particular purposes are justifiable. In philosophy, Horkheimer argues, this has led to the development of 'positivism', which seeks to emulate the methodology of science. Positivism, in seeking to emulate science, Horkheimer claims, merely becomes a passive and uncritical voice with regard to questions of knowledge, since it is content to leave the arbitration of what counts as justification to the hegemony of modern instrumental reason.

2. Technology was used from the 17th century to describe a systematic study of the arts or the terminology of a particular art. It is from the word tekhnologia, Greek, and technologia, Latin – a systematic treatment. The root is tekhne, Greek – an art or craft. In the early 18th century, a characteristic definition of technology was ‘a description of arts, especially the Mechanical’ (1706). It was mainly in the mid-19th century that technology became fully specialized to the ‘practical arts’; this is also the period of technologist. The newly specialized sense of science and scientist opened the way to a familiar modern distinction between knowledge (science) and its practical application (technology), within the selected field. This leads to some awkwardness as between technical – matters of practical construction – and technological – often used in the same sense, but with the residual sense (in -logy) of systematic treatment. In fact there is still room for a distinction between the two words, with technique as a particular construction or method, and technology as a system of such means and methods; technological would then indicate the crucial systems in all production, as distinct from specific ‘applications.’

Technocrat is now more common, though technocracy, from c. 1920, was a more specific doctrine of government by technically competent persons; this was often anti-capitalist in the USA in the 1920s and 1930s. Technocrat is now more local, in economic and industrial management, and has overlapped with part of the sense of bureaucrat.

3. Technology admits a wide variety of definitions. First, it refers to physical objects or artifacts, for example, a car. Second, it refers to activities or processes - the system of car production, the pattern of organization around vehicle technologies, the behavior and expectations of car users, and so on. Third, it can refer to the knowledge and skills associated with the production or use of technologies - the expertise associated with car design and use, as well as to broader cultural images generated and sustained by the car industry.

Conventionally, technology has been the focus of social science interest from the point of view of its actual and potential impacts on society, or more specifically on work and the organization of labor. This follows the well-known position of technological determinism associated with some forms of Marxism: that technologies have the capacity to determine the course of historical evolution. In this view, then, the proper focus of social science attention is the effects of technology upon society. This position also draws upon views of the evolution of technology as a process whereby new developments are extrapolated from the existing (technical) state of affairs.

Against this it can be pointed out that technology is not independent of society; that 'society' can also have a significant impact upon the course of technological development; and that the determinist thesis is undermined by the myriad examples where the effects of a technology diverge from the intended effects, or where a whole series of different effects result from the same technology.

These criticisms underpin the 'social shaping' approach to technology, wherein the central question is what shapes technology in the first place, before it has 'effects'? What role does society play in shaping technology? Axiomatic to this approach is the presumption that technologies cannot be considered neutral, but are the upshot of various social and political forces. A celebrated example is Winner's analysis of Robert Moses's bridges on Long Island, New York: the apparently unremarkable structural form of these bridges is said in fact to embody the social class bias and racial prejudice of their designer. The bridges were designed with a low headway; buses could not pass under them, so that poor people and Blacks, who were habitually dependent on bus transportation, were kept off the roads. Hence the technology embodies sociopolitical factors.

A similar theme occurs in attempts to apply social constructivism as developed for the analysis of scientific knowledge. We thus find the same post-Kuhnian critique of preconceptions of technology as was applied to scientific knowledge: the role of the great individual inventor must be seen in social context; technological growth can no longer be seen as a linear accumulation of artifacts each extrapolated from an existing corpus of technological achievement; technology involves social process as well as product. In short, technology is to be regarded as the upshot of a process of social construction: a stabilized design or artifact is the contingent product of social circumstances rather than the logical outcome of technical trajectory.

Similarly technology has been construed as a cultural artifact. In this way of thinking, technology is congealed social relations, that is, a frozen assemblage of the practices, assumptions, beliefs, language, and so on, involved in its design and manufacture. Technology is thus a cultural artifact or system of artifacts which provides for certain new ways of acting and relating. The apposite slogan is that technology is society made durable: technology re-presents a form of social order (a defined concatenation of social relations) in material form. It freezes and offers this fixed version of social relations such that its adequately configured users re-enact the set social arrangements. They can only 'adequately' (that is, socially accountably) use/make sense of the technology if they conform to the community of social relations which the technology makes available.

It is unclear to what extent these social science perspectives pose a radical challenge to widely entrenched preconceptions about the nature of technology. The key point of the social science critique is that technologies do not contain intrinsic (or given) technical capacities and potential; these qualities are the upshot of contingent social shaping and/or their interpretation and use. Yet, arguably, critics themselves deploy uninterrogated versions of 'what the technology can do'. At one level, there is the danger that the social study of technology becomes a mere application of the constructivist formula, thereby overlooking the strategic significance of this form of relativism for fundamental questions about the adequacy of social science explanation.

In order further to stress the interpretive flexibility of technology, the wide and contingent variety of possible designs and uses, it has been useful to deploy the metaphor of technology as a 'text'. The analogy highlights the social contingency of the processes of both designing (writing) and using (consuming, interpreting, reading) technology. In particular, it draws attention to the complex social relations between producers and consumers, and points to the importance of conceptions of user which are embodied by the technology text. The technology text makes available a particular reading which can be drawn upon by adequately configured users.

One benefit of this perspective is that it sets technology within a more theoretical frame of understanding how cultural artifacts in general are created and used. The production and consumption of cultural artifacts in general can be understood as occurring in virtue of the reorganization of sets of social relations. However, by comparison with other cultural artifacts, technology and science are particularly hard: that is, the congealed social relations are especially costly to unpack; by contrast, for example, cultural artifacts such as social science texts comprise social relations which seem relatively easy and cheap to dismantle.

It is a truism that technology is increasingly central to modern social life. But from an analytic point of view, it is useful to recognize that this has always been the case; it is just that features of life once popularly regarded as technology have now been absorbed into routine. For example, writing is not now commonly thought of as a technology, yet it is a practice and system whose initial introduction provoked profound questions about the nature of reason and practice. This way of broadening our conception of technology - from physical objects and their associated patterns of social organization to a more general notion of 'a system of social arrangements' - allows us to extend the perspective developed for the skeptical analysis of inherent technical qualities.

This perspective on technology has important implications for current thinking about the relation between technology and work. On the whole, this latter tradition has followed a determinist line by concentrating on the effects upon work organization of the introduction of new technologies. The sociology of technology proposes considerably more flexibility in the interpretation, use and implementation of technology in work situations. Technology is also an important focus for examining and confronting deeply-held preconceptions about human nature. This follows from the fact that the emergence and evolution of a new technology can become the focus of discussion and concern about potential changes to the established order of social relationships. Thus, for example, just as 17th century mechanical puppets aroused substantial moral concern about the implications for qualities defined as uniquely human, so too recent debates about artificial intelligence can be understood as discussions about what, after all, are the quintessential features of human (that is, non-mechanical) nature. See technological progress.


3. Territoriality is a strategy which uses bounded spaces in the exercise of power and influence; this can take place at a great variety of spatial scales, ranging from the student in a library who spreads books on a desk so as to prevent others sitting nearby, to a state apparatus which delineates and defends its national borders.

The use of territoriality has been identified in a range of animal species, leading some scientists to argue that it is a genetically inherited trait. Most social scientists avoid this claim, however, and instead focus on the efficiency of territoriality as a strategy, in a large variety of circumstances, involving the exercise of power, influence and domination. Sack defines territoriality as the establishment of differential access to people and things. It comprises three necessary facets: a classification of space (i.e. the definition of the relevant territory); communication of that classification by the means of boundaries (so that you know whether you are within or outside the relevant territory); and enforcement or control of membership (i.e. subjection to certain rules if within the territory and limits on crossing its boundary).

The value of this strategy in enforcing control rests on a number of characteristics of bounded spaces. First, as a classification a territory is an extremely efficient way of defining membership - those inside a territory are subject to the controls therein - which can readily be communicated by boundary markers (which might be as effective as walls, as in prisons). Territoriality is also a means of reifying and depersonalizing power, associating it with the space rather than with the individuals who implement it, and therefore can be used to deflect attention from the reality of unequal relationships.

The efficiency of territoriality is exemplified by the large number of 'containers' into which the earth's surface is divided. By far the best example of its benefits to those wishing to exercise power is the state, which is necessarily a territorial body. Within its territory, the state apparatus assumes sovereign power: all residents are required to 'obey the laws of the land' in order for the state to undertake its central roles within society; boundaries are policed to control people and things entering and leaving. Some argue that territoriality is a necessary strategy for the modern state, which could not operate successfully without it.

Many social groups use territoriality, either formally (with delineated boundaries, as with estate walls) or informally (as with the 'turfs' of street gangs), to advance their interests. These may involve defensive strategies, as when minority groups retreat into ghettos the better to withstand threats. Territoriality is important in the creation and maintenance of group consciousness - as in nationalism, which often involves people being socialized into allegiance to a territory rather than to a human institution (i.e. the state apparatus in control of that territory). As people identify with one territory, they define others as not of that territory, and therefore different from themselves. This can be a major cause of tension: the definition of 'in-groups' (associated with positive characteristics) and 'out-groups' (with negative features) leads to a polarization of social attitudes at a variety of scales (and so some argue for social engineering which will reduce the polarization by mixing rather than separating groups, that is, by removing the use of territoriality). Those in control of state apparatus may well build on this polarization of attitudes in, for example, the development of support for foreign policies (as with US President Reagan's presentation of the Soviet Union as the 'evil empire').



3. The original meaning of underdevelopment was a neutral one, simply defining the condition of poorer countries which then were called underdeveloped countries. However, this term was felt to be derogatory and has since disappeared from the international vocabulary, being replaced by the more euphemistic 'developing countries'. As a result the term underdeveloped has assumed a specific and rather different meaning. It is now closely associated with the so-called dependency school, and it indicates a belief that in the world economy there are centrifugal forces at work, strengthening the position of the already rich core while keeping the periphery poor and in a state of permanent underdevelopment. The chief author using and building on this term was André Gunder Frank. Frank was also the first to speak of 'development of underdevelopment', meaning the development of a rich country/poor country or core/periphery relationship which results in the impoverishment of the poor or periphery partner.

There are a number of variants within the underdevelopment school. These range from the radical wing which identifies underdevelopment with neo-colonial relationships and is an outgrowth of Marxist thinking, to non-political or non-ideological explanations such as the principle of cumulative causation developed by Gunnar Myrdal. The principle of cumulative causation states that in the case of poor countries or poor groups a vicious circle is at work keeping them poor (for example, low income causing low savings and low investment, in turn causing low income in the next round; or low income leading to poor health leading to low productivity and low income). By contrast, in rich countries, or among rich groups, a reverse beneficial circle enables them to go from strength to strength and to improve their condition progressively. The strict Marxian view is perhaps best represented by Rodney (1972) in How Europe Underdeveloped Africa: 'An indispensable component of modern underdevelopment is that it expresses a particular relationship of exploitation: namely the exploitation of one country by another.' This view logically also leads to the use of the concept in describing domestic relations within developing countries (as in relations between an urban elite and the rural poor), but in practice the term is now associated with an international context of relations between countries. In between these two extremes are various other schools of thought explaining that the system of international trade relations has a tendency to benefit rich countries more than poor countries. The best known of these schools is the Prebisch-Singer theory according to which the terms of trade of primary products tend to deteriorate in relation to the prices of manufactured goods.

The radical view that any international contact between rich and poor countries will be to the disadvantage of the latter obviously leads to the policy conclusion that poorer countries should either try to be self-sufficient or inward-looking in their development; while in the case of smaller countries, where this is not feasible, regional groupings of developing countries are advocated. One does not have to be an advocate of the underdevelopment school, however, to support such policies; it is clear that trade, investment and other economic relations among the developing countries are conspicuously and abnormally sparse compared with relations between rich and poor countries. It can be argued that it is also in the interest of the richer industrialized countries to support such closer south-south cooperation.

The milder variation is that international contacts are advantageous for both partners, in accordance with liberal doctrine and the law of comparative advantage, but that the benefits are unequally distributed.

The belief of the more radical underdevelopment school that international relations are positively harmful to the poorer partners can in turn lead to two different policy conclusions. One is to reduce north-south contacts and instead develop south-south relations; the other is to reform the international system so that its benefits are more equally distributed. The latter approach is implied in the pressure of the developing countries for a New International Economic Order which has dominated the international discussions since the mid-1970s and also in such reform proposals as the two Brandt Reports. See center and periphery, economic development, modernization, and world-system theory.



1. To value something may be defined as ascribing worth to it, and thus placing it within some hierarchy. Three core areas of value are of relevance to cultural theory: the aesthetic; the moral; and the economic.

Aesthetic value includes the worth of cultural goods and activities. Orthodox aesthetics is, in part, concerned with the principles that ground the ascription of value to particular works of art. While aesthetics may not itself be concerned with valuing particular works of art (which is more properly the task of art criticism), the attention that it gives to art, and especially the cultural products consumed by the dominant classes within society (not least, European society since the 18th century) presupposes that they are valuable objects and activities, and that there is such a thing as aesthetic value. At the end of the 18th century, Kant's Critique of Judgement is significant for proposing and defending a distinction between the pleasure that is derived from beauty (and thus art) and the mere sensuous enjoyment of useful, non-art objects (such as food). The autonomy and distinctiveness of aesthetic value has been increasingly challenged. On the one hand, politically, links have been drawn between art and ideology. The aesthetically valued art of the dominant class is explained by reference to the role it plays in legitimating and propagating the political and moral values of the dominant class. On the other hand, aesthetic value may be linked to economic value. It may be argued that the prime purpose of aesthetics is not to ascribe the ultimately illusory aesthetic value to objects, but to give that which is otherwise of minimal use an economic value. Aesthetically valued objects can be traded at high prices.

The development of sociology as a discipline may be seen to center on the empirical study of values, not least in Emile Durkheim's conception of 'moral facts.' The integration and stability of a society is seen to depend upon the internalization of the consensual values of the society (encapsulated in Durkheim's concept of the 'conscience collective'), through the process of socialization. Functionalism, as the dominant American approach to sociology up to the 1960s, presupposed a consensus on moral values as a precondition of a stable society. This presupposition was increasingly challenged by the sociology of deviance, with the recognition of a wide range of alternative subcultures, with markedly divergent value systems, within a single society. Similarly, the reemergence of Marxism as a significant force within sociology in the 1960s led to an increased recognition that consensual values were themselves the products of political and above all ideological and hegemonic practices, as conflicting groups sought to defend, promote and negotiate conflicting value systems. The work of Michel Foucault on punishment and sexuality served to restore to sociology Nietzschean perspectives on the power struggles that underpin value systems, and in which values are inculcated.

The question of economic value centers upon explanations of the value and price ascribed to commodities. Marxism is characterized by an appeal to the labor theory of value, whereby the exchange-value of a good depends upon the amount of labor-time that has gone into its production. Orthodox economics, in contrast, explains value (or price) through appeal to the interaction of supply and demand in the market.


World Bank

3. Along with the International Monetary Fund, the International Bank for Reconstruction and Development (IBRD or World Bank) was established in 1944 and began operations in 1946. It is essentially an international development agency whose primary role is to make long-term development project loans in foreign currency to member governments. It is the largest of all official development agencies. Since its creation its role has changed substantially.

Its original role (as suggested by its original name) was to facilitate the reconstruction of European economies after the Second World War. It originally had 45 members compared with around 150 in the mid-1990s. The bulk of its early lending was to Europe and even by the mid-1950s lending to Europe accounted for two-thirds of its total. As the post-war reconstruction of the European economies was rapid (due partly to bilateral Marshall Plan assistance rather than the activities of the IBRD) the World Bank's focus changed. It was in this process that it emerged as the world's leading economic development agency making loans to developing countries. Its focus remains the making of loans to foster economic development. It has become the world's largest official lender for the development of low-income countries. As well as finance, the Bank also provides substantial technical assistance on the projects it finances.

In 1956 the Bank was expanded by the establishment of the International Finance Corporation whose purpose is to provide and facilitate finance exclusively for the development of private enterprise in member countries. A further extension was made in 1960 with the creation of the International Development Association focused on low-income borrowers with loans made on more concessionary terms. In 1988 the Multilateral Investment Guarantee Agency was established as part of the World Bank group to facilitate and encourage foreign direct investment in developing countries. It does this partly by an insurance program to alleviate perceived political risks by potential investors. The World Bank group is, therefore, a bank, a broker, a consultant, and an insurance agency.

Its loans are usually for large-scale projects in the area of energy, transportation, infrastructure, communications and public utilities. Such loans alleviate domestic savings and foreign currency constraints to borrowing countries' economic development. A characteristic of developing countries is that the optimum level of capital formation exceeds the capacity of the country to generate domestic savings, and foreign currency reserves are insufficient to fill the gap on a continuing basis. The World Bank acts as an intermediary by channeling savings generated in the developed world towards investment in developing countries. All loans of the World Bank are made to either governments or to entities guaranteed by governments (e.g. public utilities).

Typically, loans are repayable over a period of ten to fifteen years and carry interest rates which reflect the cost of funds to the Bank. The Bank has never suffered a default on a loan and it has never had to call on any part of its capital. The World Bank has the highest possible credit-rating in the world's capital markets.

While historically development projects have dominated its lending, priorities and focus have changed over time. Increasing emphasis has been given to alleviating poverty per se and this is reflected in the expansion of lending to the rural sector. In its 1991 Annual Report it stated that 'the eradication of poverty remains the World Bank's top priority'. This shift in emphasis, a reflection also of the ability of higher-income developing countries to tap international banking and capital markets directly, is seen in the increasing proportion of lending to low-income countries. Thus, in 1981, 35 per cent of loans were made to this group of countries while by 1991 the proportion had risen to over 40 per cent. Lending has been directed to health and nutrition projects as well as to education. Environmental issues have also been given an increased priority.

Since 1988, the Bank has also participated in debt-reduction and debt-restructuring programs of those developing countries which encountered severe debt-servicing problems following many years of massive private market borrowing.

The Bank is owned by its member governments who provide the Bank's capital. Its main lending operations, however, are financed by its own borrowing on the international capital markets. It has become one of the world's largest single issuers of bonds. It can borrow on very fine terms not only because of its untarnished record of repayments and debt-servicing, but also because it has never borrowed amounts in excess of its own capital. This gearing ratio of 1:1 makes it the most cautious and prudent bank in the world. In effect, the World Bank borrows in its own name and makes loans to countries which are unable to gain direct access to private markets or can do so only on less advantageous terms than the World Bank.

The World Bank inevitably has its critics from across the political spectrum. It is criticized from one end of the spectrum for being an agent of capitalism and imperialism and for creating conditions on its lending that unduly interfere with governments' social and political priorities. This alleged bias arises because it is controlled by wealthy developed countries with poorer borrowing nations having little voice in its decision-making process. At the other end of the spectrum the criticism is that it is too passive, and not sufficiently responsive to market forces in economic development. This school also argues that it has a bias towards public sector rather than private enterprise projects and has not been sufficiently vigorous in fostering deregulation.

Whatever the merits of the conflicting criticisms, the fact remains that the World Bank has become both the world's largest development agency and one of the largest borrowers in international capital markets.

world-system theory

3. The sociologist Immanuel Wallerstein developed world-system theory in the early 1970s in an attempt to explain the origins and processes of capitalism, the industrial revolution, and the complex interconnections of the First, Second and Third Worlds. The multidisciplinary research of world-system theory focuses on historical studies of the growth of the world-system and on contemporary processes within it.

The modern world-system arose in western Europe about 500 years ago. It was based on capitalist trade networks which transcended state boundaries, hence it is called the capitalist world-economy. The drive for capital accumulation via production for exchange caused increasing competition among capitalist producers for labor, materials and markets. As competition waxed and waned through repeated crises of overproduction, various regions of the world were incorporated into the unevenly expanding world-economy. These cyclic processes are a fundamental property of the world-system.

Uneven expansion differentiates the world into three interrelated types of societies. The central or 'core' societies specialize in industrial production and distribution, have relatively strong states, a strong bourgeoisie, a large wage-labor class, and are heavily involved in the affairs of non-core societies. At the other extreme, in the 'periphery', societies concentrate on the production of raw materials, have weak states, a small bourgeoisie, a large peasant class, and are heavily influenced by core societies. The remaining societies form the semiperiphery, which shares characteristics of both the core and periphery. Semiperipheral societies are typically rising peripheral societies, or declining core societies. The semiperiphery blocks polarization between core and periphery, thus stabilizing the system. The economic and political interrelations of the core and periphery are the presumed sources of development in the core, and the lack of development in the periphery.

A key assumption of world-system theory is that the world-economy must be studied as a whole. The study of social change in any component of the system - nations, states, regions, ethnic groups, classes - must begin by locating that component within the system. The typical component analyzed is a state. Thus world-system theory has a dual research agenda. It examines the consequences of dynamic changes in its components (such as states) for evolution of the system and for the movement of various components within the system. It also examines the consequences of dynamic changes in the world-system for the internal dynamics and social structure of its various components.

Case studies investigating the emergence and evolution of the world-system offer finer-grained analyses of various components of the system and complement global analyses. Controversy surrounds the measurement and explanation of the system and its parts, and centers on two major issues: first, to what degree and how is underdevelopment in the periphery necessary to the development of the core; and second, whether market (exogenous) factors or social-structural (endogenous) factors, especially class, are the primary agents of change.

World-system literature is complicated by a number of intertwined polemics, which focus on the role of socialist states in the contemporary world-system; the probability of a world socialist revolution; the degree to which underdevelopment is a necessary consequence of core development; the effects of various policies on the evolution of the world-system; and whether world-system theory is a useful extension or crude distortion of Marxist theory.

Since the mid-1980s world-system theory has begun to address a number of issues which critics have noted it had neglected. So much new work has been done that these criticisms are losing salience. Indeed, a scholar who consulted only the works of Wallerstein, or summary works written in the early 1980s, would be poorly informed. Some new topics are various cyclical processes in the world-system; the consequences of the collapse of the Soviet Union; the roles of women, households, and gender in the world-economy; the role of culture in the world-economy; case studies of slavery, agrarian capitalism, and the incorporation of aboriginal populations into the world-economy; and pre-capitalist world-systems.

The last topic has generated a great deal of work and debate among anthropologists and archaeologists. Debates center around whether there has been one sporadically growing world-system since the origin of states or several types of world-systems, of which the modern world-system is only one. These new evolutionary studies open a number of assumptions about the modern world-system to empirical, historically grounded investigation, often done with an eye to foreseeing possible future transformations of the contemporary system.

Polemical debates notwithstanding, world-system theory has generated many studies of long-term social change. These studies use techniques from all the social sciences and are published in a wide range of journals. Several journals have devoted special issues to world-system issues. Review, published by the Fernand Braudel Center at the State University of New York, Binghamton, is devoted to world-system studies. See also capitalism, center and periphery, globalization, imperialism, underdevelopment.


1. From the German ‘Weltanschauung.’ A shorthand term signifying the common body of beliefs shared by a group of speakers about the world and their relationship to it. There is a close interrelationship between this notion and that of ‘language-game,’ discourse or paradigm. Thus, it is one’s place within a language-game, discourse, or paradigm which supplies one with the beliefs and assumptions necessary to construct one’s worldview. It therefore follows from this close interrelationship that, given a change of language-game, discourse or paradigm, there will be a corresponding change of worldview (see also cultural relativism).


youth culture

1. The idea of a youth culture emerges in sociology in the 1950s and 1960s, in recognition of the fact that the culture of young people, especially in their teens or early twenties, is distinct from that of their parents. Youth will have different values, attitudes and patterns of behavior from those in the dominant culture. Youth cultures are seen to emerge under certain conditions. First, youth must form a sufficiently large cohort. Second, rapid social change may disrupt young people’s integration into the adult world, through, for example, changes in industry removing the traditional occupations or simply causing unemployment. Finally, an increasing pluralism in society will provide a stimulus to new ideas and life-styles. The idea of a youth culture presents that culture as largely homogeneous. It was particularly challenged by theories of youth subcultures, which recognize the fragmentation of youth culture according to class, gender and ethnic divisions. A swing back in favor of theories of youth culture may now be perceived, as subcultural accounts are seen to place too much emphasis upon exotic or marginal aspects of the everyday life of young people. (See also counterculture.)