Wednesday, December 14, 2011

Flattery Inflation

Reading Aloys Winterling’s entertaining revisionist biography of Caligula (which combines my interests in crazy dictatorships and the classical Greco-Roman world – two great tastes that go even better together!), I came across the useful concept of “flattery inflation” (cf. p. 188). Though Winterling is talking about the relationships between the emperors and the senatorial aristocracy in the early Roman Empire, the idea seems more broadly useful to anyone interested in understanding the development of cults of personality and other forms of status recognition gone haywire.


First, the context. From Augustus (Caligula’s great-grandfather, the first emperor) onward, the emperor was the most powerful person in Rome, partly due to his control of the Praetorian Guard, and partly due to the economic resources the imperial household had come to control. At the same time, the emperor depended (at least early on) on the senatorial aristocracy to rule the empire. In more technical terms, the roughly 600-member senate constituted the emperor’s selectorate, the group from which the emperor needed to draw the people who could command the legions, coordinate the taxation of the provinces, and in general govern the empire and keep him in power. The emperor could differentially favour members of the senatorial aristocracy (by promoting them to various high-status positions), but segments of the aristocracy could also conspire against him and potentially overthrow him, selecting a different emperor, especially since principles of hereditary succession were never clearly institutionalized (though emperors early on had wide latitude in selecting their own successors). Nevertheless, though senators as a group might dislike a particular emperor, they did not necessarily agree on any given alternative (much less on any alternative acceptable to the Praetorian Guard, which also had some say in the matter), and at any rate individual senators could always benefit from convincing the emperor that some other senators were conspiring to unseat him (via maiestas [treason] trials, in which the convicted were executed and their property confiscated – something which incidentally provided an incentive for accused senators to commit suicide before their trial, so that their families could keep their property). Senators thus faced some coordination costs in acting against even a hated emperor. These obstacles were not insurmountable (conspiracies did take place, and sometimes succeeded), but they were not insignificant either.


So far, so good: nothing too different here from any number of autocracies in the ancient world (and many modern ones as well). Yet there is one thing that makes this strategic situation interesting: despite the huge disparity in military and economic resources between the emperor and the members of the aristocracy, emperors and senators did not at first have widely different social statuses, and the senate remained the central locus for the distribution of honours in Roman society. Senators jockeyed over relative status (marked by such things as the seating order in the circus or the theatre, the order of voting in the senate, the lavishness of their hospitality in their private parties, the achievement of political office, the number of their clients, etc.) while recognizing the primacy of the emperor, but they remained notional social equals. Augustus was known as the princeps, literally the “first citizen” (hence the early Roman Empire is normally called the “principate”); the standard republican offices were filled more or less normally and retained their meaning as markers of status (though elections were basically rigged, when they were held at all, to produce the results decided in advance by the emperor); the senate voted triumphs and special festivals in honour of particular people and events, and technically confirmed the emperor’s own position; even the title imperator originally meant nothing more than military commander (though it came to be applied exclusively to the princeps or certain members of his family).
Most importantly for our purposes, the first two emperors (and many later ones as well) did not (and could not, for reasons that should become clear shortly) compel the sorts of marks of obeisance typical of Hellenistic monarchies, where the “status distance” between the rulers and the members of the traditional elite had been much larger than in Rome: proskynesis (prostration), kissing the feet or the robe, worship as a god, elaborate forms of address, clear hereditary succession, etc. (Incidentally, in these monarchies, as in the later Roman empire, the immediate “key supporters” of the ruler tended to be assimilated to or incorporated into the ruler’s “household,” which limited the extent to which they could gain status at his expense: though the ruler might treat you like family, you could always be seen as his “slave”).


In fact, Augustus in particular went out of his way not to signal any sort of intention to become a “king,” that is, a ruler like the Hellenistic monarchs of an earlier time (including, most famously, Alexander the Great), despite the fact that the Roman polity had obviously become a “monarchy” in all but name, something that was common knowledge among all members of the elite. He lived in a relatively small house on the Palatine hill; stood for office in the normal way, and sometimes resigned it; and let the senate conduct the business of the republic in appearance, cleverly signalling his intentions so that senators could reach the “right” result (i.e., the result Augustus wanted). Why?


Part of the answer to this question has to do with the way in which signalling any intention to become a king was thought to risk nearly certain conspiracy. This was, after all, what happened to Julius Caesar (Augustus’ adoptive father). By behaving in ways that signalled an intention to become a king in the Hellenistic sense (whether or not he actually wanted to do so), he threatened to destroy the foundations of senatorial status in the Republic, i.e., to drastically humiliate the senators vis-à-vis the emperor. The Republic was built on norms that rejected kingship and competitively allocated relatively “equal” high social status among the senatorial class, so that any credible signal of an intention to re-establish kingship seems to have greatly lowered the coordination costs dissatisfied senators faced in conspiring against the emperor.


So how do we get from Augustus to Caligula, who attempted (among other things) to widen enormously the social distance between himself and the senatorial elite, especially in the last year of his reign, when a full-blown emperor cult – a cult of personality – was instituted? More generally, how do we get to the empire of 100-150 years later, which was not too different from the hereditary Hellenistic monarchies that had seemed abhorrent to the senatorial aristocracy of a few generations earlier, and which included proskynesis, emperor cults, etc.?


Here is where the idea of flattery inflation comes in. The process is grounded in the “disequilibrium” between material resources (military and economic, in particular) and social status noted above. The emperor controlled more material resources than any given senator, but his social status was not fully commensurate with his resources. Senators as a group liked this situation. But individual senators could benefit (both materially and in status terms) from credibly signalling special loyalty to the emperor. Such signalling could take two forms, which I’ll call “negative” and “positive.” The negative form consisted of informing on each other. The disadvantage of such negative signalling (for the emperor), however, was that denunciations also increased the risk of actual conspiracies and devastated the elite on which he relied. The positive form consisted in what we normally call “flattery.” The problem here was that any particular form of flattery quickly became devalued, and the emperor lost the ability to distinguish genuine supporters from non-supporters. Moreover, flattery inflation tended to diminish the collective social status of the senatorial aristocracy: the more the emperor was praised, the more the senators were abased. For example, in Roman elite society the morning salutatio was an important indicator of status: friends and clients visited their friends and patrons in the mornings, and the more visitors a senator had, the higher his status. But nobody could afford not to visit the emperor every morning, or to signal that they weren’t really “friends” with the emperor. So the morning salutatio at the emperor’s residence turned into a crush of hundreds of senators, all of them jostling to get a little bit of the emperor’s attention, and all of them pretending to be the emperor’s “friends,” regardless of their private feelings. Similarly with senate votes granting honors to the emperor. 
In principle, the senate retained some discretion in the matter, but individual senators could always sponsor extraordinarily sycophantic resolutions in the hopes of gaining something from the emperor (offices, marriages, etc.), and other senators could not afford not to vote for such resolutions.


In sum, flattery inflation was, from the point of view of the senators, a kind of tragedy of the commons: as each senator tried to further his relative social status within the aristocracy, senators as a group devalued their collective status. And it was not necessarily a good thing from the point of view of the emperors either, who could not easily distinguish sycophantic liars and schemers from genuine supporters, and who often disliked the flattery. So the emperors tried to dampen it or manage it to their advantage. Winterling distinguishes three different responses.


First, as noted earlier, Augustus managed flattery inflation through ostentatious humility. Everybody could then pretend that things remained the same even though they all knew that Augustus was ultimately in charge. But this required indirectly signalling his intentions so that senators had enough guidance to know what to vote for and who to denounce without ordering them to do anything (which would have resulted in a catastrophic loss of status for the senators, potentially risking a conspiracy). Such indirection could lead to confusion when practiced by a less able political operator, like Tiberius. Tiberius apparently detested flattery, but he was at the same time unable to clearly communicate his intentions to the senate, unlike Augustus. His inability to master the complex signaling language that Augustus had used prevented him from containing flattery inflation very well, leading him to use increasingly blunt instruments to tame it (like moving to Capri permanently and banning the senate from declaring certain honours: this is sort of the equivalent of price controls in "economic" inflation, and was just about as effective). This provided endless opportunity for denunciations, since senators were constantly making “mistakes” about what Tiberius really wanted. The more denunciations, moreover, the less actual conspirators had to lose, leading to a poisoned and dangerous atmosphere, especially as factions of Tiberius’ family schemed over the succession. Most potential heirs didn't live long; Caligula was the last man standing.

At first, Caligula tried the Augustan policy, and was reasonably good at it. But for a number of reasons that Winterling describes, he seems to have changed tack in the third year of his reign to deliberately encourage flattery hyperinflation. He did this, in part, by taking the senators literally: when they said that he was like a god, he basically demanded proof of this, thus forcing them to worship him as a god. Or when he was invited to dinner, he forced senators to ruin themselves to please him. And he demonstrated contempt for their status by the way he behaved in the circus and elsewhere. (The famous story of how he planned to make his horse a consul can be understood as one such insult). Yet the senators could not retaliate by revealing their true feelings; their coordination costs had increased insofar as their individual incentives were always to flatter Caligula.

Strategically speaking, the point of this seems to have been to lessen his dependence on the senatorial aristocracy and to move the regime towards a Hellenistic model. (Winterling discusses some suggestive evidence that Caligula might have been planning to move to Alexandria, an obviously symbolic move to the historic capital of Hellenistic dynasts). Runaway flattery inflation not only makes it exceedingly difficult for conspirators to succeed (even the most innocuous comment can be used against you when flattery inflation is in full swing) but also succeeds in completely humiliating the flatterers (in this case the senatorial aristocracy) and lowering their collective social status vis-à-vis the ruler. If flattery hyperinflation is not stopped, the end result is that the ruler no longer has to use "ambiguous" language to manage his relationship to the selectorate. He can just order them to do things, without worrying about slighting their status. One might also speculate that it helps to institutionalize the principle of hereditary succession, which was not clearly established in the early empire, and which would contribute to a shift in the selectorate from the aristocracy to the imperial household. (It does not seem to be coincidental that cults of personality in the modern world appear to be associated with forms of hereditary succession even in regimes that are not in principle hereditary, like North Korea or Syria). But of course flattery hyperinflation doesn't always work for the ruler: the humiliation of the aristocracy eventually led to the downfall of Caligula, and (according to Winterling) contributed to his characterization by later writers as the "mad emperor."

Anyway, I think one can extract a more general model of flattery inflation from all this. When material resources are much more unequally distributed than status, and status is competitively allocated, flattery inflation can result. But rulers (or those who control material resources) will usually try to dampen or manage this kind of inflation, since flattery has obvious disadvantages from their perspective. Yet there seem to be circumstances under which they will try to encourage flattery hyperinflation, e.g., when the costs of coordination for challengers are relatively low and the maintenance of "low inflation" requires extensive communication management. One could also imagine other ways in which this process may play out. For example, if status is more unequally allocated than material resources, high status rulers may encourage flattery (hyper)-inflation (e.g., cults of personality) in order to accumulate these resources. (This seems to have happened in the Soviet system under Stalin and in North Korea). And if material resources become more equally distributed, or more diverse in their effects, as in many modern economies, one might see flattery deflation.
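The positional ratchet at the heart of this model can be sketched as a toy best-response simulation (my own illustrative construction, not anything in Winterling): if each senator's standing with the ruler depends on flattering more than the average senator, everyone outbids the current average each round, so average flattery ratchets upward while the senators' collective status erodes.

```python
# Toy model of flattery inflation as a positional arms race.
# Assumptions (mine, for illustration only): relative payoff depends on
# (own flattery - average flattery), so matching the average gains nothing
# and falling below it loses the ruler's favour; collective senatorial
# status declines as average flattery rises.

N = 100          # number of senators
STEP = 0.1       # how far a senator outbids the current average
ROUNDS = 20      # rounds of best-response play

flattery = [0.0] * N
history = []     # (average flattery, collective status) per round

for _ in range(ROUNDS):
    avg = sum(flattery) / N
    # Each senator best-responds by slightly exceeding the average.
    flattery = [avg + STEP for _ in range(N)]
    new_avg = sum(flattery) / N
    collective_status = 1.0 / (1.0 + new_avg)  # erodes as flattery grows
    history.append((new_avg, collective_status))

# Average flattery ratchets upward every round...
assert all(history[i][0] < history[i + 1][0] for i in range(ROUNDS - 1))
# ...while the senators' collective status monotonically declines.
assert all(history[i][1] > history[i + 1][1] for i in range(ROUNDS - 1))
```

The point of the sketch is only that no senator can unilaterally stop the escalation: defecting to a lower flattery level is individually costly even though universal restraint would preserve everyone's status, which is exactly the tragedy-of-the-commons structure described above.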

[Update 15/12/2011 - added the bit about Tiberius moving to Capri, clarified a transition]

[Update 17/12/2011 - fixed some minor typos.]

Monday, November 14, 2011

Exit, Voice, and Democracy

Speaking of exit, voice, and domination, here's a very interesting paper by Mark E. Warren in the latest APSR: Voting with Your Feet: Exit-based Empowerment in Democratic Theory (gated, ungated earlier version), Volume 105, Issue 04, November 2011 pp 683-701. Abstract:
Democracy is about including those who are potentially affected by collective decisions in making those decisions. For this reason, contemporary democratic theory primarily assumes membership combined with effective voice. An alternative to voice is exit: Dissatisfied members may choose to leave a group rather than voice their displeasure. Rights and capacities for exit can function as low-cost, effective empowerments, particularly for those without voice. But because contemporary democratic theory often dismisses exit as appropriate only for economic markets, the democratic potentials of exit have rarely been theorized. Exit-based empowerments should be as central to the design and integrity of democracy as distributions of votes and voice, long considered its key structural features. When they are integrated into other democratic devices, exit-based empowerments should generate and widely distribute usable powers for those who need them most, evoke responsiveness from elites, induce voice, discipline monopoly, and underwrite vibrant and pluralistic societies.
Warren explicitly argues for a connection between mechanisms of exit and the promotion of nondomination, something which I had idly wondered about, and rightly argues that exit has often been neglected in democratic theory, even though modern democracies obviously depend at a basic level on certain forms of exit (e.g., from one political party to another). I also found Warren's discussion of the varieties of exit and their interaction with voice mechanisms (e.g., exit as signalling vs. exit as silence, and exit as free-riding vs. exit as empowerment) insightful, and his discussion of the ways in which exit mechanisms can be incorporated into modern democracies provocative.  (I should note that my first reaction to his argument was "I wish I'd written this paper!").

I have some quibbles, however. Warren notes that democracy is typically understood in terms of a voice-monopoly model in which collective voice is required to discipline the potentially problematic effects of the state monopoly on violence:

The democratic case for voice usually assumes monopoly organizations. It does so normatively—voice is most important within the context of monopolistic organizations. And it does so structurally—monopoly induces voice by restricting exit. 
In these two respects, Hirschman's analysis tracks the fact that modern democracy was born of a specific kind of monopoly—that of states. In its eighteenth- and nineteenth-century origins, the democratic project focused on increasing inclusions within states that had effectively consolidated power by controlling territory, developing administrative capacities, and regularizing sovereignty through constitutional means (Poggi 1990, chaps. 1–2). The justifications for voice are closely related to these elements of monopoly in two ways. First, when a collectivity controls key features of livelihood, such as security, and solves collective action problems through coercion, then individuals subject to that coercion should have a say in how it is deployed. Second, the greater the costs of exit to individuals, the greater the need for voice. Though liberal-democratic states do not legally restrict exit from their territories, they recognize that exit is costly: It is disruptive of family, social support networks, careers, language, and culture, and can mean giving up the protections and welfare entitlements of citizenship. These monopoly-like effects are well recognized and justified by the existence of voice mechanisms—that is, democratic processes that legitimate the monopoly-like properties of the state. Thus it is appropriate that democratic theory has focused on equalities of political resources, secured by positive political rights (voting, speech, association) and related welfare rights (education and income security), as well as on the mechanisms such as electoral systems, judicial systems, public sphere discourse, and civil society activism through which citizens’ voice is translated into influence over law and policy (see, e.g., Habermas 1996, chap. 8).
The depth of attachment to monopoly within democratic theory stems from the fact that it is structurally necessary for the provision of common goods. As Hirschman's analysis suggests, democracies are sensitive to problems of collective action: Defectors from collectivities undermine democracy by undermining the possibility of common choice (Barry 1974). Union organizing is the archetypal case: The worker who breaks with the solidarity of the bargaining unit also undermines the capacity of the union to serve its members. More generally, as Olson (1971) famously detailed, when individuals are left to weigh the costs and benefits of collective action, larger groups tend to return fewer benefits, causing individuals to exit the collectivity, which in turn undermines the provision of public goods. As Hobbes understood, monopoly removes the threats to common security and provision posed by defection. Similarly, democratic theorists—particularly those focused on the important relationship between solidarity and collective choice—view exit opportunities as harmful, indeed, so much so that, as Hirschman (1970) observes more generally, exit is often branded criminal or treasonous (17).

Warren then rightly argues that voice is insufficient to the task of disciplining monopoly, especially given the scale of and the dispersion of power in modern states, and that forms of exit can also play a role in ensuring that people affected by collective decisions are not unjustly dominated. Yet he does not discuss the possibility of exit from the state monopoly (aside from the brief mention of migration quoted above) except in terms that assimilate these forms of exit to "free-riding" (e.g., capital flight that hollows out public services). And the forms of exit he does discuss (what he calls "enabled" and "institutionalized" exit) are more or less dependent on the state insofar as they require the state to provide resources to make effective the ability of individuals to leave dominating relations, or to transform relations of domination into relations of choice. For example, a policy of full employment can be understood to enable exit from oppressive employment relations by reducing the costs of unemployment; similarly, extensive social safety nets or a guaranteed minimum income can enable exit from such relations by making formal options (like quitting a job) much more easily taken. But forms of enabled exit are presented as dependent on the state in ways that suggest that Warren implicitly values voice more than exit, or at least thinks that voice is normatively or structurally prior to exit and sets limits to its exercise, a view that seems to me to be unwarranted. (So Warren is fairly critical of the market as a mechanism for exit, in part because he thinks that the market tends to be biased against those with fewer resources). Yet it is by no means clear that all forms of exit from the state monopoly should be understood as forms of free-riding (see, for example, James C. Scott's work), or that enabled exit (making effective formal opportunities for exit) should be understood as something that only states can (or should) structure and provide, even if enabling exit may on occasion require large-scale collective action.

Perhaps this is a result of trying to fit a discussion of exit within democratic theory rather than simply liberal theory. It seems to me that there is something like a liberalism of voice that incorporates exit to a greater or lesser extent in its basic structure, and a liberalism of exit that similarly incorporates voice to some greater or lesser extent in its structure. Both forms of liberalism are concerned with nondomination, but they differ in their normative evaluations of the relative importance of exit and voice, in part due to different understandings of the relationships between, and the value of, the individual and the community. Warren's argument pushes a liberalism of voice closer to a liberalism of exit, but his position remains, in important respects, a liberalism of voice.

Monday, October 31, 2011

Endnotes


I haven't done one of these in a couple of months. So, for your Monday (or Sunday - we're ahead of the world here in NZ) reading pleasure:
  • Via a link from Cosma Shalizi, more on Arendt and Occupy Wall Street by The Slack Wire. There's some interesting discussion in the comments as well, which implicitly brings out some points I didn't stress in my post: concrete political action with specific goals always ends up transforming the space of appearances and introducing elements of surveillance, hierarchy, and the like (sometimes with very good reason!). Organized hierarchy appears to be unavoidable in both politics and economic life, but (according to Arendt) there is something that is always lost in that transition. Hence the need for a different balance between spaces of appearance, spaces of surveillance, and spaces for escaping visibility. (Maybe I'll write more about this later). 
  • Speaking of Cosma Shalizi, I enjoyed his discussion of an obscure book on Marxist econophysics and of Bayesianism and the law in the UK. It is obscure, but you'd be surprised by how much you learn about the perils and difficulties of using models in the social sciences! Besides, it comes with a mention of the call-in show at Radio Yerevan, and who doesn't like that?
  • Question to Radio Yerevan: Is it correct that Grigori Grigorievich Grigoriev won a luxury car at the All-Union Championship in Moscow? 
    Answer: In principle, yes. But first of all it was not Grigori Grigorievich Grigoriev, but Vassili Vassilievich Vassiliev; second, it was not at the All-Union Championship in Moscow, but at a Collective Farm Sports Festival in Smolensk; third, it was not a car, but a bicycle; and fourth he didn't win it, but rather it was stolen from him.
  • Via BK Drinkwater: creating a totalitarian society inside a film set. And then living in it. And refusing to finish the film.
No government in the world pours more resources into patrolling the Web than China’s, tracking down unwanted content and supposed miscreants among the online population of 500 million with an army of more than 50,000 censors and vast networks of advanced filtering software. Yet despite these restrictions — or precisely because of them — the Internet is flourishing as the wittiest space in China. “Censorship warps us in many ways, but it is also the mother of creativity,” says Hu Yong, an Internet expert and associate professor at Peking University. “It forces people to invent indirect ways to get their meaning across, and humor works as a natural form of encryption.” To slip past censors, Chinese bloggers have become masters of comic subterfuge, cloaking their messages in protective layers of irony and satire.
This is not a new concept, but it has erupted so powerfully that it now defines the ethos of the Internet in China. Coded language has become part of mainstream culture, with the most contagious memes tapping into widely shared feelings about issues that cannot be openly discussed, from corruption and economic inequality to censorship itself. “Beyond its comic value, this humor shows where netizens are pushing against the boundaries of the state,” says Xiao Qiang, an adjunct professor at the University of California, Berkeley, whose Web site, China Digital Times, maintains an entertaining lexicon of coded Internet terms. “Nothing else gives us a clearer view of the pressure points in Chinese society.”
    The core of his argument is that even Caligula’s wildest behavior reflected the instability of the political order, not of his mind. The transition from republic to empire in the decades prior to his reign had generated a rather convoluted system of signals between the Senate (the old center of authority, with well-established traditions) and the emperor (a position that emerged only after civil war).
    The problem came from deep uncertainty over how to understand the role that Julius Caesar had started to create for himself, and that Augustus later consolidated. The Romans had abolished their monarchy hundreds of years earlier. So regarding the emperor as a king was a total non-starter. And yet his power was undeniable – even as its limits were undefined.
    The precarious arrangement held together through a strange combination of mutual flattery and mutual suspicion, with methods of influence-peddling ranging from strategic marriages to murder. And there was always character assassination via gossip, when use of an actual dagger seemed inconvenient or excessive.
    Even those who came to despise Caligula thought that his first few months in power did him credit. He undid some of the sterner measures taken by his predecessor, Tiberius, and gave a speech making clear that he knew he was sharing power with the Senate. So eloquent and wonderful was this speech, the senators decided, it ought to be recited each year.
    An expression of good will, then? Of bipartisan cooperation, so to speak?
    On the contrary, Winterling interprets the flattering praise for Caligula’s speech as a canny move by the aristocrats in the Senate: “It shows they knew power was shared at the emperor’s pleasure and that the arrangement could be rescinded at any time…. Yet they could neither directly express their distrust of the emperor’s declaration that he would share power, nor openly try to force him to keep his word, since either action would imply that his promise was empty.” By “honoring” the speech with an annual recitation, the Senate was giving a subtle indication to Caligula that it knew better than to take him at his word. “Otherwise,” says Winterling, “it would not have been necessary to remind him of his obligation in this way.”
    The political chess match went smoothly enough for a while. One version of what went wrong is, of course, that Caligula became deranged from a severe fever when he fell ill for two months. Another version has it that the madness was a side-effect of the herbal Viagra given to him by his wife.
    But Winterling sees the turning point in Caligula’s reign as strictly political, not biomedical. It came when he learned of a plot to overthrow him that involved a number of senators. This was not necessarily paranoia. Winterling quotes a later emperor’s remark that rulers’ “claims to have uncovered a conspiracy are not believed until they have been killed.”
    In any event, Caligula responded with a vengeance, which inspired at least two more plots against him (not counting the final one that succeeded); and so things escalated. Most of the evidence of Caligula’s madness can actually be taken, in Winterling's interpretation, as ways he expressed contempt for the principle of shared power -- and, even more, for the senators themselves. Giving his horse a palace and a staff of servants and announcing that the beast would be made consul, for example, can be understood as a kind of taunt. “The households of the senators,” writes Winterling, “represented a central manifestation of their social status…. Achieving the consulship remained the most important goal of an aristocrat’s career.” To put his horse in the position of a prominent aristocrat, then, was a deliberate insult. It implied that the comparison could also be made in the opposite direction.
More evidence for the "signaling" interpretation of cults of personality. (Working on a paper on the topic right now).
In one sense, the Information Sharing Environment is a medium tending toward unobstructed transmission; it is like an ocean that conducts whale songs for hundreds of miles. But in another sense, the ISE has created a very private pool of publicly circulating information. Simplified Sign-On, for example, gives those who qualify total access to "sensitive but unclassified" information—but it gives it only to them, and with only internal oversight on how that information is used. The problem is not simply that private information is now semi-public but that the information is invisible to anyone outside organizations that "need to share."
Citron and Pasquale have suggested that if technology is part of the problem, it can also be part of the solution—that network accountability can render total information sharing harmless. Rather than futilely attempting to reinforce the walls that keep information private, publicly regulating how information is used can mitigate the trends that caused the problem in the first place. Immutable audit logs of fusion-center activity would not impede information sharing, but they would make it possible to oversee whom that information was shared with and what was done with it. In fact, it was John Poindexter, the director of the Total Information Awareness program, who first suggested this method of oversight, though even today, many fusion centers have no audit trail at all. Standardization and interoperability might also provide means of regulating what kinds of data could be kept. The technological standards that make information available to users can also facilitate oversight, as Poindexter himself realized.  
Spaces of surveillance are worse when the watchers cannot be watched.
This fusion of despotism and postmodernism, in which no truth is certain, is reflected in the craze among the Russian elite for neuro-linguistic programming and Eriksonian hypnosis: types of subliminal manipulation based largely on confusing your opponent, first developed in the US in the 1960s. There are countless NLP and Eriksonian training centres in Moscow, with every wannabe power-wielder shelling out thousands of dollars to learn how to be the next master manipulator. Newly translated postmodernist texts give philosophical weight to the Surkovian power model. Jean-François Lyotard, the French theoretician of postmodernism, began to be translated in Russia only towards the end of the 1990s, at exactly the time Surkov joined the government. The author of Almost Zero loves to invoke such Lyotardian concepts as the breakdown of grand cultural narratives and the fragmentation of truth: ideas that still sound quite fresh in Russia. One blogger has noted that ‘the number of references to Derrida in political discourse is growing beyond all reasonable bounds. At a recent conference the Duma deputy Ivanov quoted Derrida three times and Lacan twice.’
In an echo of socialism’s fate in the early 20th century, Russia has adopted a fashionable, supposedly liberational Western intellectual movement and transformed it into an instrument of oppression. In Soviet times a functionary would at least nominally pretend to believe in Communism; now the head of one of Russia’s main TV channels, Vladimir Kulistikov, who used to be employed by Radio Free Europe, proudly announces that he ‘can work with any power I’m told to work with’. As long as you have shown loyalty when it counts, you are free to do anything you like after hours. Thus Moscow’s top gallery-owner advises the Kremlin on propaganda at the same time as exhibiting anti-Kremlin work in his gallery; the most fashionable film director makes a blockbuster satirising the Putin regime while joining Putin’s party; Surkov writes a novel about the corruption of the system and rock lyrics denouncing Putin’s regime – lyrics that would have had him arrested in previous times.
In Soviet Russia you would have been forced to give up any notion of artistic freedom if you wanted a slice of the pie. In today’s Russia, if you’re talented and clever, you can have both. This makes for a unique fusion of primitive feudal poses and arch, postmodern irony. A property ad displayed all over central Moscow earlier this year captured the mood perfectly. Got up in the style of a Nazi poster, it showed two Germanic-looking youths against a glorious alpine mountain over the slogan ‘Life Is Getting Better’. It would be wrong to say the ad is humorous, but it’s not quite serious either. It’s sort of both. It’s saying this is the society we live in (a dictatorship), but we’re just playing at it (we can make jokes about it), but playing in a serious way (we’re making money playing it and won’t let anyone subvert its rules). A few months ago there was a huge ‘Putin party’ at Moscow’s most glamorous club. Strippers writhed around poles chanting: ‘I want you, prime minister.’ It’s the same logic. The sucking-up to the master is completely genuine, but as we’re all liberated 21st-century people who enjoy Coen brothers films, we’ll do our sucking up with an ironic grin while acknowledging that if we were ever to cross you we would quite quickly be dead.

Bet you cannot do that.

More here, while it lasts.

[Update 10/31/2011: added Geobacter picture, fixed some typos, some minor wording changes]

Friday, October 28, 2011

Spaces of Appearance, Spaces of Surveillance, and #OccupyWallStreet


(Warning: contains self-promotion and potentially hazardous levels of theory).

It is a bit of an occupational hazard for bloggers that one is always tempted to comment on current events. It’s the pundit temptation that comes from suddenly coming into (temporary, fragile) possession of an audience. And it is a truth universally acknowledged that a single blogger in possession of an audience must be in want of an opinion. (Or is it that a single blogger in possession of an opinion must be in want of an audience?). I try to avoid this, since for the most part my opinions on most current topics are not that insightful, and besides they are often more than a little uncertain and muddled. The #OccupyWallStreet movement is no exception; I am still trying to figure out what I think about it. (I’ve been thinking of visiting the “Occupy Wellington” camp to see what’s going on, among other things). But it so happens that I have an actual academic article coming out early next year [update: now out!] that might (might – results not guaranteed!) shed some light (laterally, at odd angles) on the “Occupy X” protests taking place around the globe. The piece is called “Spaces of Surveillance and Spaces of Appearance” ([update: gated final version here; ungated nearly-final version here]), and it is forthcoming in Polity (vol. 44, issue 1, January 2012, pp. 6-31). Here’s the abstract:

Hannah Arendt and Michel Foucault developed different but complementary theories about the relationship between visibility and power.  In an Arendtian “space of appearance,” the common visibility of actors generates power, which is understood as the potential for collective action.  In a Foucauldian “space of surveillance,” visibility facilitates control and normalization.  Power generated in spaces of appearance depends on and reproduces horizontal relationships of equality, whereas power in spaces of surveillance depends on and reproduces vertical relationships of inequality.  The contrast between a space of appearance and a space of surveillance enhances both Arendt’s and Foucault’s critiques of modern society by both clarifying Arendt's concerns with the rise of the “social” in terms of  spaces of surveillance, and enriching Foucault's notion of “resistance.”

Basically, your bog-standard interpretive piece on Arendt and Foucault, mostly of interest to specialists in (certain kinds of) political theory; I try to put Arendt and Foucault in dialogue with one another with respect to the question of the relationship between power and visibility, and to extract some ideas from both I think are useful for thinking beyond Arendt and Foucault (and not necessarily in harmony with their specific theoretical projects), especially about the relationship between surveillance, appearance, and forms of economic organization in society. But the key points of the piece are relatively intuitive, and some of its arguments may have some relevance to current events, particularly the concluding thoughts on how modern society could do with more spaces of appearance and fewer spaces of surveillance (which, if I’m not too mistaken, is at least in the spirit of the “Occupy” movement). So let me see if I can explain the main points of the paper without too much reference to Arendt and Foucault. (Those who prefer a fuller discussion of Arendt and Foucault can read the paper – and I’d be happy to hear your thoughts about it).

The paper starts by considering the relationship between visibility and power. We can distinguish four ideal-typical ways in which visibility and power are related in particular spaces:

1)      In some spaces, the visibility of those present generates power (the capacity for collective action) by enabling people to act with, and in front of, others. We can call these, following Arendt, spaces of appearance. Her main examples are “egalitarian” democratic spaces like the participatory Soviets of the early Russian revolution, the New England town meeting, and the classic public spaces of the agora, the parliamentary assembly, etc.; the “General Assembly” at a typical “Occupy” event would be one such space. But lots of other spaces, including spaces structured in non-egalitarian ways, also have the characteristic of generating (forms of) power and influence for those who are visible: consider how a politician’s power is often mediated through his/her visibility to many, and would be reduced by becoming less visible. The key point is that in such spaces visibility enables those who are visible to initiate and coordinate action.

2)      By contrast, in some spaces, visibility subjugates or subjects people to power, insofar as they are prevented from escaping (or find it costly to escape) the gaze of particular spectators (including, sometimes, one another). We can call these, following Foucault, spaces of surveillance. The panopticon is Foucault’s ideal-typical case, but one can easily think of many other spaces where visibility functions in this way. Modern society is in fact notable for the wide variety of spaces in which people are surveilled (for good and bad reasons, by the way – I’m not passing judgment on any particular form of surveillance at this point). Spaces of economic production within firms, in particular, tend to be spaces of surveillance due to obvious principal-agent problems. The key point is that in such spaces it is difficult (but not impossible) for those who are visible to avoid various kinds of sanctions for deviating from whatever norms or rules are current among spectators. These sanctions do not need to be very “explicit” to work: the permanent and unavoidable gaze of others (who may not themselves be visible) can induce powerful pressures for conformity even in the absence of explicit or obvious punishments for noncompliance. People want to get along, or they dread ridicule, and even the otherwise powerful politician fears scandal.

3)      Conversely, in some spaces invisibility enables some people to escape subjugation or subjection, and can even empower them in various ways. We can call these private or secret spaces. The private space of the home, for example, enables people (on occasion) to escape the prying eyes of others; and the secret recesses of intelligence agencies enable people in suits to plan mischief against the rest of us, while their invisibility prevents us from controlling their activities. In accordance with the logic of exit, invisibility (or at least the possibility of making oneself invisible) can have a liberating effect.

4)      Finally, in some spaces invisibility marginalizes people, disempowering them. For completeness, we call these marginal spaces. For example, the oikos to which the Greek citizen could retire after a day spent at the agora was at the same time the space to which women were confined.

These spaces are all related, of course, and they are not always sharply distinguished. Within any given space some people may have power that is mediated through their visibility, while others may be surveilled and marginalized. Surveillance is not always asymmetrical, as in the Foucauldian panopticon; it may also be mutual, as in David Brin’s idea of the “transparent society.” It is also never perfect. By the same token, any significant degree of visibility in spaces of appearance is accompanied by the potential for surveillance: the politician who is powerful precisely because he is in the public eye faces powerful pressures for regulating his behaviour so long as he cannot escape that same public eye or hide parts of his life from it. (Even voluntary self-disclosure, as when people share stuff on Facebook or blog, is subject to these pressures to some extent). Spaces of appearance are always tainted by surveillance and pressures for conformity; invisibility often implies some degree of marginalization even if it sometimes also serves to escape from subjugation; and marginalization is often accomplished through various forms of surveillance.

Much of Hannah Arendt’s political theory is a defence of certain kinds of “egalitarian” spaces of appearance on non-instrumental grounds. For Arendt, egalitarian spaces of appearance are valuable not because they promote specific ends like welfare or justice, or because such spaces somehow represent the only way in which political life could be organized so as to respect the equal rights of people, or because they induce appropriate forms of deliberation, but because they are the only spaces in which we can truly be “persons” – actors with individual stories that transcend the routine and repetitive aspects of the human condition. In acting together with and in front of others, we disclose ourselves as virtuous or vicious, or as the people who are responsible for this or that act; we acquire a story, rather than a living. And in acting together with others in such spaces, we can modify the roles and rules that regulate our ordinary intercourse; our actions put these norms in question and enable us to “begin something new,” i.e., to come up with new ways of regulating our ordinary lives. But action itself in such spaces is never “ordinary” or “routine,” and it is never simply effaced behind some achievement. In fact, Arendt indicates that what matters most about action in such spaces (from her perspective, if not necessarily the perspective of the activist, who certainly has some objective in mind) is not the achievement of some particular goal, an achievement that is at any rate uncertain, given human freedom: political activity is not, in her view, like the making of a work of art, or the implementation of some blueprint. What matters is the possibility of appearance in front of others as such; without such a possibility, in her view, our lives tend to the routine isolation of “making a living,” or the self-effacement of other forms of creative activity where what matters ultimately is the work produced (the painting, the book, the poem, etc.) rather than the person and his/her story.

This distinctive understanding of what we might call the joy of public action seems to be echoed in many descriptions of what happens in OWS protests. People discover a sense of themselves as joint actors in the world, and they generally enjoy this above and beyond anything they may or may not accomplish; to put the point in non-Arendtian terms, there is something fun and exciting about revolutions, even when they are supremely risky, and there is something about the public spaces that such movements create that helps people experience each other as people who are engaged in a common story in which they all have some part. (This is also what makes some people annoyed about things like the OWS protests: participants seem too concerned with their own voices and actions, and too little concerned with “getting things done.” There is something narcissistic about every “revolutionary” movement and every protest: admiration is an important part of any space of appearance).

But Arendt was also concerned about what she called the “substitution of making for acting,” which involved (in her view) the attempt to use these modes of action characteristic of spaces of appearance for the solution of very specific problems through the implementation of “policies” understood as blueprints for social organization. This, I argue in the paper (drawing on Foucault), always requires not appearance but forms of surveillance: using collective action to produce specific effects in the world necessarily involves forms of visibility that conflict with the possibilities of self-disclosure through stories in spaces of appearance. E.g., if you provide food to people, unless you have unlimited resources, you will need to monitor your activities and make distinctions between those who should and should not receive it.

Arendt thus worried a lot about the transformation of politics into administration, and stressed that politics properly speaking should not be concerned with “economic” and social questions, a position that earned her much criticism. (What else are politicians going to talk about?). But I think what she had in mind had to do with the kinds of power appropriate to different kinds of activity. In her view, “to the degree that politics (which is predominantly conducted in spaces of appearance, however imperfect) becomes ever more directly concerned with the management of production (a development that she connected with the rise of the “social question” ever since the French revolution), the more politics turns into bureaucratic administration (which is pre-eminently conducted in spaces of surveillance): more like Soviet bureaucratic communism than like the original Soviets Arendt praised in her book On Revolution.” (Here I quote myself). Action in public spaces provides an opportunity for putting in question, and perhaps changing (unpredictably), the overall norms and rules that govern our everyday interaction, but it does not offer a model for governing everyday life.

What this perspective suggests is that an important problem about modern societies concerns the balance between spaces of appearance, spaces of surveillance, and other spaces. Let me quote myself again to close this post:
…  Arendt’s worries about the colonization of public space by the social can be restated as a worry about the balance between spaces of appearance and spaces of surveillance, and their proper relationship, within modern societies. The modern welfare state appears then less as a successful or unsuccessful attempt to manage material inequalities than as a diminution of available spaces of appearance and an expansion of spaces of surveillance, and in particular disciplinary spaces. In such a state, any gains in the “empowerment” of individuals occur at the expense of the possibility of self-disclosing collective action (and hence “power” in a different sense). Similarly, Arendt’s other complaints about the rise of the “social” realm can be understood as concerns that even when this realm is not directly concerned with economic production it nevertheless functions as a space of mutual surveillance where common visibility leads to hypocrisy and conformism rather than to self-disclosure and creative individuality.  Arendt’s views converge, on this reinterpretation, with Foucault’s views on the expansion of “biopower,”  where the concern with the management of “life” was accompanied by the development of disciplinary techniques and objects of surveillance (like populations) that produced an intricate ecology of spaces of surveillance.

But where Foucault appears to think that the problematic aspect of these developments lies in the way in which previously more or less unregimented areas of human life come to be regulated by infra-legal mechanisms,  yet at times seems to recommend a strategy of pure resistance that is at the very least easily misunderstood as a kind of nihilism because of his inability or unwillingness to articulate an alternative vision of the operation of power,  an Arendtian perspective is perhaps more illuminating about what is lost in this process, and about what sorts of political action might make things better. On the one hand, we find a shrinkage of spaces of appearance, where human beings in their plurality may emerge in their full individuality, and their replacement by “social” spaces and other spaces where conformity rules, i.e., by spaces where visibility is turned into an instrument of control or regulation, including self-regulation. This includes the deployment of ever more elaborate technologies of surveillance and (self)-monitoring that extend their tendrils into ever more “ordinary” aspects of social life, and the relative narrowing of public spaces to those mediated spaces of modern democracy where only relatively few political leaders can appear and act. On the other hand, and less obviously, we find the “unmooring” of important spaces of appearance from control by a public, so that genuine action not only remains restricted to a few, but these actors are now too much in control of their own visibility to be properly accountable to their publics: the public’s surveillance is no longer sufficiently effective to undermine [or at least exercise some degree of control over] the ordinary hierarchical relationships that structure the modern state. 
In other words, not only is the space of appearances colonized by people who have too much control over their own visibility, but the spectators are in turn more surveilled and normalized than before, losing control over their own visibility.

(I draw here on an interesting book by Jeffrey Edward Green, The Eyes of the People: Democracy in an Age of Spectatorship, which I should like to review properly at some point).

But if this is in fact a problem (a big if, I suppose), how could we think about what to do? I suggest in the paper that “a solution to these problems would at least involve the expansion of spaces of appearance (even if they can never be untainted by surveillance) and the reduction of the reach of spaces of surveillance,” which seems to me to be sort of what movements like OWS are trying to do at some level. (Of course, they are also trying to do all kinds of other things, like decrease income inequality and punish bankers.) But I also indicate that the point is not to eliminate spaces of surveillance, or to transform all of society into a big public space: any moderately complex society, and indeed any society that aspires to a certain level of material security, will certainly contain a very large number of spaces of surveillance, though it would be better if, following on the work of people as diverse as James C. Scott and Hayek, these spaces of surveillance were not large and centralized. But, to be honest, I’m not very good at thinking about the classic “what is to be done” question.

[Update 10/28/2011 5:45pm - fixed some minor typos]

Thursday, October 13, 2011

The Little Ice Age and Other Unintended Consequences of the Conquest of the Americas

I recently read Charles C. Mann's 1493: Uncovering the New World Columbus Created, which I highly recommend. It's the best kind of popular history, full of amazingly interesting, perspective-altering stories, and I hope to blog more about it, if time permits. (Samurai in Mexico City in the sixteenth century to guard the silver coming from Potosi: somebody should make a movie about that). One of the points that Mann makes both in 1493 and in his earlier (and also excellent, in fact better) 1491 is that the "conquest" of the Americas by Europeans (and in particular, Spaniards) was ultimately made possible by the fact that the Europeans brought more lethal microbes with them. It wasn't metal, or guns, or horses, or political organization, that made the crucial difference, but smallpox (and to a lesser extent malaria and yellow fever); without these invisible armies, Cortes and Pizarro would never have won the kinds of astonishingly quick victories they achieved against the Mexica and the Inca. (In fact, Cortes almost lost, even with his microbial allies).

Smallpox was so lethal in the Americas that in places more than a third of the native population was quickly (very quickly!) wiped out, leaving in its wake collapsed political and social structures; and smallpox usually raced ahead of the Spanish and other Europeans, like an enormously powerful advance force. And after smallpox, malaria and yellow fever usually moved in, especially in wet and hot areas like the Amazon basin, making life difficult both for any natives that survived smallpox and for the Europeans themselves, with important political consequences. For one thing, in places where malaria became endemic, Europeans were often unable to settle in any great numbers, but they were able to create "extractive" institutions using malaria-resistant slave labor forces imported from West Africa; and the consequences of such extractive institutions have been enormously far-reaching and long-lasting.

There is something inhuman about this idea: the conquest of the Americas, with all its enormous injustices, is ultimately reduced to a biological event, driven by forces none of its human protagonists could understand, let alone control. But it gets worse (or more interesting, depending on your perspective). Most native peoples in the Americas, lacking iron tools, practised forms of agriculture that made much use of fire. These were not "primitive" forms of agriculture, but complex land-management practices that made possible great population densities, even in places that are today only lightly inhabited (like the Amazon). Low-level burning kept grasslands from turning into forests, helped create forests that looked to Europeans like great parks, and produced charcoal that was used to make thin soils fertile through terra preta. And these practices effectively kept enormous amounts of carbon dioxide constantly in the atmosphere rather than locked into trees and other vegetation. When  native populations collapsed, however, the burning stopped or was greatly reduced, and the carbon dioxide was quickly locked up into forests again. Now, what follows is quite controversial. Mann cites some recent research that argues that this must have made a big contribution to the so-called Little Ice Age: the sudden drop in carbon dioxide in the atmosphere, perhaps in combination with natural variations in solar radiation, generated global cooling from around 1550 to around 1660. And this global cooling in turn appears to have produced a great "general crisis" in Europe: famine, war, and pestilence. I quote from a recent piece in Ars Technica summarizing recent research by Zhang et al:
The General Crisis of the 17th Century in Europe was marked by widespread economic distress, social unrest, and population decline. A significant cause of mankind’s woes during these times was the climate-induced shrinkage of agricultural production. Bioproductivity, agricultural production, and food supply per capita all showed immediate responses to changes in temperature. In the five to 30 years following these changes, there were also responses in terms of social disturbance, war, migration, nutritional status, epidemics, and famine. 
Cooling during the Cold Phase (1560-1660 AD) reduced crop yields by shortening the growing season and shrinking the cultivated land area. Although agricultural production decreased or became stagnant in a cold climate, population size still grew, leading to an increase in grain price and an increased demand on food supplies. Inflating grain prices led to hardships for many, and triggered social problems and conflicts such as rebellions, revolutions, and political reforms. 

Many of these disturbances led to armed conflicts, and the number of wars increased 41 percent during the Cold Phase. During the latter portion of the Cold Phase, the number of wars decreased, but the wars lasted longer and were far more lethal—most notable was the Thirty Years War (1618-1648), where fatalities were more than 12 times those of the conflicts between 1500 and 1619.

Famine became more frequent too. Nutrition deteriorated, and the average height of Europeans shrunk 2cm by the late 16th century. As temperatures began to rise again after 1650, so did the average height.

The economic chaos, famine, and war led people to emigrate, and Europe saw peak migration overlapping the time of peak social disturbance. This widespread migration, in conjunction with declining health caused by poor nutrition, facilitated the spread of epidemics, and the number of plagues peaked during 1550-1670, reaching the highest level during the study period. As a result of war fatalities and famine, the annual population growth rate dropped dramatically, eventually leading to population collapse.
I am tempted to make a bad joke about “Montezuma’s revenge,” except that would be in terribly bad taste. More seriously, perhaps, I wonder what the complexity of natural and social systems implies for political theory. Given that none of the forces set in motion by the arrival of European colonists in the Americas in the late 15th century was understood, or could have been controlled even if understood (in fact, even today we do not understand them very well), how should we think about what the colonists did, and about the kinds of political systems they could or should have created? And should these sorts of stories matter for thinking about the ways in which we may be unleashing similar forces today?

More on this later - Mann has another story, about the silver trade and the collapse of the Ming dynasty, that is also quite instructive for these purposes.

Tuesday, October 11, 2011

Technical Request

Dear readers,

I'm currently the secretary of the New Zealand Political Studies Association. We are a small professional association (there are about 150 people on our mailing list) which basically disseminates announcements of interest to people who study politics in New Zealand, presents a number of awards for NZ postgraduate students, and helps fund and coordinate our annual conference. I am basically in charge of membership matters. My problem is, our membership records are a bit of a mess. Right now, we have a jury-rigged system involving a Google spreadsheet to keep track of new and existing members, but existing records do not allow me to tell for sure who is and is not a member (so our mailing list probably overstates the extent of our paid membership), and it is more difficult than we would like for members to pay their fees. So we've been looking into upgrading our system for keeping track of members and processing membership renewals and requests, and I thought that my (wise and discerning) readers might have some good ideas about how to go about doing this.

Ideally, we would like a system that:
  • Allows people to become members or renew their membership and pay their membership fees online via credit card or some other means (a modest NZ$20 per year, NZ$10 for students)
  • Allows members to update their own membership records
  • Sends automatic reminders to people whose membership is about to expire
  • Allows exec committee members to send e-mails to the entire membership or to specific "sections" (we have a political theory network and a media and communications network)
  • Allows members to register their interest in being available to the media or other people as experts in some particular field (e.g., elections, MMP, etc.) and makes the names, contact details, and fields of expertise of these members available in a searchable database
  • Is easy to maintain and not too expensive
I've looked at a couple of membership software packages and a few other things (including Zoho Creator), but nothing seems quite right. We could also pay someone to set up a system like this (like a very limited version of the APSA website), though we have limited funds. Do my readers have any ideas about this? If you have any thoughts/suggestions, leave them in comments below or e-mail me at xavier.marquez@vuw.ac.nz.
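(For what it's worth, the record-keeping core of the wish list above is simple enough to sketch; the genuinely hard parts are online payments, hosting, and maintenance, which is why a packaged solution is attractive. The sketch below is purely illustrative - all the names, fields, and the 30-day reminder window are my own inventions, not features of any package we've looked at.)

```python
from dataclasses import dataclass, field
from datetime import date, timedelta

@dataclass
class Member:
    name: str
    email: str
    paid_through: date                    # last day the current membership is valid
    sections: set = field(default_factory=set)     # e.g. {"political theory"}
    expertise: list = field(default_factory=list)  # e.g. ["elections", "MMP"]

def needs_reminder(member, today, notice_days=30):
    """True if the membership lapses within `notice_days` (or has already lapsed)."""
    return member.paid_through <= today + timedelta(days=notice_days)

def section_emails(members, section):
    """Addresses for a mailing to one 'section' of the association."""
    return [m.email for m in members if section in m.sections]

def expert_search(members, topic):
    """A minimal searchable 'available experts' list: members claiming a topic."""
    return [m.name for m in members if topic in m.expertise]

# Hypothetical records, for illustration only:
members = [
    Member("A. Scholar", "a@example.org", date(2012, 3, 1),
           {"political theory"}, ["elections"]),
    Member("B. Student", "b@example.org", date(2011, 11, 1),
           {"media and communications"}, ["MMP"]),
]
today = date(2011, 10, 11)
reminders = [m.name for m in members if needs_reminder(m, today)]
```

The point of keeping a single `paid_through` date per member is that "who is and is not a member" stops being a judgment call: the mailing list, the reminder run, and the paid-membership count all derive from the same field.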

Sunday, September 25, 2011

On the Meaning of Political Support


In the closing pages of Eichmann in Jerusalem, Hannah Arendt notoriously claimed that “politics is not like the nursery; in politics obedience and support are the same” (p. 279). Her point was that whatever Eichmann’s motivations or beliefs might have been, he had ultimately made himself a “willing instrument in the organization of mass murder;” and ethically and legally speaking, that fact was all that mattered. To support a regime (especially a murderous one) could be nothing more and nothing less than to act in whatever way the regime asks you to.

There is something harsh and uncompromising about this view. We often seem to want to distinguish between support and obedience, or at least to excuse some forms of obedience on the grounds that such obedience was not granted willingly or not grounded in genuine support. We might speak of “preference falsification” and attempt to separate overt obedience, given out of fear or lack of options or greed, from the “real” or “baseline” support that would have been given in the absence of ignorance, coercion, peer pressure or other incentives. (I have often written in this way, and find it a useful shorthand for thinking about things like cults of personality). And when we think about questions of responsibility in coercive regimes we sometimes engage in a complicated moral calculus that balances the inculpatory force of actual obedience against the exculpatory force of morally objectionable incentives (partially) underlying that obedience. Here I take it that our usual intuitions indicate that negative incentives for obedience (like threats of violence) are more exculpatory than positive incentives (like jobs or money), and positive incentives are more exculpatory than “intrinsic” preferences. The man who falsely denounces his neighbour on pain of seeing his son put in prison and tortured may do a wrong, but the wrong is partly excused by the threat of violence (perhaps he does the lesser of two evils), whereas the man who denounces his neighbour in exchange for money behaves less excusably (even if he really needs the money), and the man who denounces his neighbour for fun is simply a monster. (And what about the man who supports a coercive system because he thinks it is the right system? Here our intuitions seem inconsistent, or perhaps depend on what we think about the source of the belief).
In other words, we typically believe that obedience gained at gunpoint expresses less “genuine” support than obedience gained by an appeal to material interest, and that the most genuine support is manifested in purely “disinterested” obedience or collaboration.

I want to put aside for a moment the moral questions about responsibility and exculpation, and just focus on whether we can speak about “support” independently of obedience, i.e., about some “real” level of support underlying a person’s obedience to or collaboration with authority. And here I think Arendt was on to something: to ask about “real” motivations in politics is often fruitless, and sometimes positively perverse. The only way to demonstrate support in politics is by obeying, collaborating, or otherwise doing what the group one supports expects of you; the demand for additional proofs of support can only result in socially destructive (if sometimes individually advantageous) signalling games (see here, here, and here for some examples in this blog; Arendt’s favourite example was the destructive politics of purity during the terror in the French revolution). And the inner world of motivation and belief is too obscure (even to the agent) and fragile to survive the light of publicity, as Arendt repeatedly stressed.

More precisely, I am not sure that it makes sense to speak of political support independently of the institutions that condition obedience and collaboration. For purposes of analysis, we can (sometimes) separate out various “inputs” of what we might call the obedience-production function – coercion, monetary incentives, peer pressure and so on – and call the residual “real or genuine support,” a pure preference for collaboration with or obedience to a group or leader. This is basically what you get in Kuran’s classic analysis of preference falsification and its consequences, which I quite like (in fact, I use it constantly); but it is at best a simplification of the complex phenomenology of belief and motivation, especially when coercion and other external “incentives” dominate over whatever “intrinsic” preferences one may care to postulate. For one thing, in environments where coercion and other incentives are large enough, this residual preference is itself likely to be at least partially produced by all the other forces at work and is likely to be quite small in magnitude; and perhaps more importantly, it won’t always make sense to speak of this residual as a “preference” for the leader or the regime (or as a belief in its legitimacy, for that matter).
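To make the simplification concrete, a Kuran-style decomposition can be sketched as a toy model (this is my own illustration, not anything from Kuran’s book or from the sources discussed here; all the names, weights, and the threshold are hypothetical): each agent publicly obeys whenever the sum of the “inputs” crosses a threshold, and “preference falsification” is just the gap between that public act and the small “intrinsic” residual.

```python
# Toy illustration of an "obedience-production function" in the spirit of
# Kuran's preference-falsification analysis. All names and numbers are
# hypothetical; the point is only that expressed obedience is the output
# of several inputs, with "intrinsic preference" as a small residual.

def expresses_support(intrinsic, coercion, money, peer_pressure,
                      threshold=0.0):
    """Return True if the agent publicly obeys/supports the regime.

    intrinsic      -- residual "genuine" preference (can be negative)
    coercion       -- expected cost of visible disobedience
    money          -- material rewards for collaboration
    peer_pressure  -- social cost of deviating from neighbours
    """
    return (intrinsic + coercion + money + peer_pressure) > threshold

def falsifies_preference(intrinsic, coercion, money, peer_pressure):
    """True when public obedience diverges from the intrinsic residual."""
    public = expresses_support(intrinsic, coercion, money, peer_pressure)
    private = intrinsic > 0
    return public != private

# A mildly disaffected agent (intrinsic = -0.2) still obeys under
# heavy coercion: overt obedience, no "real" support.
print(expresses_support(-0.2, coercion=1.0, money=0.3, peer_pressure=0.5))  # True
print(falsifies_preference(-0.2, 1.0, 0.3, 0.5))  # True

# Remove the external inputs and the same agent disobeys:
print(expresses_support(-0.2, coercion=0.0, money=0.0, peer_pressure=0.0))  # False
```

The sketch also shows what the objection in the text amounts to: the model treats `intrinsic` as an independent input, but in heavily coercive environments that residual is itself partly produced by the other terms, so reading it off as a separate explanatory factor is misleading.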

These ideas came to mind when reading Robert F. Worth’s superb and disturbing NY Times piece on the last days of the Qaddafi regime:

Unlike Benghazi, the old opposition stronghold in eastern Libya where the rebellion began in February, Tripoli had been a relative bastion of support for Qaddafi. Even the bravest dissidents, who risked their lives for years, often posed as smiling backers of Qaddafi and his men. Now the masks were off, but another game of deception was under way. At all the military bases I visited, I found soldiers’ uniforms and boots, torn off in the moments before they had, presumably, slipped on sandals and djellabas and run back home. Even the prisoners I spoke with in makeshift rebel jails had shed their old identities or modified them. “I never fired my gun,” they would say. “I only did it for the money.” “I joined because they lied to me.”

Everyone in Tripoli, it seemed, had been with Qaddafi, at least for show; and now everyone was against him. But where did their loyalty end and their rebellion begin? Sometimes I wondered if the speakers themselves knew. Collectively, they offered an appealing narrative: the city had been liberated from within, not just by NATO’s relentless bombing campaign. For months, Qaddafi’s own officers and henchmen had quietly undermined his war, and ordinary citizens had slowly mustered recruits and weapons for the final battle. In some cases, with a few witnesses and a document or two, their version seemed solid enough. Others, like Mustafa Atiri, had gruesome proof of what they lived through. But many of the people I spoke with lacked those things. They were left with a story; and they were telling it in a giddy new world in which the old rules — the necessary lies, the enforced shell of deference to Qaddafi’s Mad Hatter philosophy — were suddenly gone. It was enough to make anyone feel a little drunk, a little uncertain about who they were and how they got there.

Were these people deceiving themselves or others? Did the soldiers really support Qaddafi in the past but now do not? Do some of these people support Qaddafi still? The question makes less sense to me than it once did. It is clear that they once obeyed Qaddafi and now do not; and that the change from obedience to non-obedience must be explained as a result of a changing configuration of “inputs” to the obedience-production function, so to speak (changing configurations of coercion, monetary incentives, peer pressure, views of the rebels, etc.); but to attempt to determine if, in their heart of hearts, these people supported Qaddafi then (net of all of these forces) and now do not seems slightly absurd. Their obedience and disobedience, support and lack of support are nothing but the vector sum of all the forces (threats of coercion, positive incentives, beliefs about Qaddafi, idiosyncratic likes and dislikes, moral convictions, obscure and half-formed ideas about the future, etc.) operating through them. It may make sense to attempt to disentangle these forces if we are interested in legal or moral responsibility, or in the private tragedies of everyday life in Libya, but it does not make sense to me to attempt to figure out if Qaddafi enjoyed some “genuine” level of support (independent of coercion, money, etc.) as a separate explanatory factor.

But didn’t some people love Qaddafi? And doesn’t such love make a difference? (This is basically the old “fear and love” problem). I do not think it makes the explanatory difference it is sometimes thought to make: those with more “love” for Qaddafi were not necessarily those more committed to the defence of his regime, for example. Here is another passage that jumped out at me in the piece (but really, read it all, though some of the stories are quite disturbing):

Of all the former Qaddafi loyalists I spoke with, only one offered a rationale that went beyond money or compulsion. His name was Idris, and he was a handsome 21-year-old medical student with a downy wisp of beard, a pink T-shirt and jeans. Idris (he asked me not to use his full name) talked about Qaddafi’s loss in a baffled, crestfallen way. We drove to a cafe not far from Algeria Square — since renamed Qatar Square by the rebels, in deference to Qatar’s support for the Libyan revolt — and got a table. I was amazed to see that Idris still had an image of Qaddafi on the screen of his cellphone. “I’ve been passionate for Qaddafi ever since I was born,” he said. His parents felt the same way, though he insisted they had not held any position or drawn any special benefits. “Libya is just a bunch of tribes, and there are blood feuds,” Idris said, when I asked him why. “We see Qaddafi as the only wise man with the power to stop the feuds. If he fails, there will be no one to mediate.” I asked what he thought of Qaddafi’s apparent support for terrorists and his reputation as a maniac in the West. “We see him as a brave man who speaks out against American bullying, as other Arab leaders do not,” Idris said. “So they accuse him of these things.” Idris conceded that Qaddafi made the mistake of surrounding himself with bloodthirsty people like Abdullah Senussi, his security chief and brother-in-law. He also said, like many loyalists, that he was misled about the rebels by Libyan state television, which portrayed them as terrorists. Yet he gave no ground in his love for Qaddafi. When I asked how he felt about Tripoli’s fall, he said: “Devastated. It’s like someone you love, and they’re gone.”

Our conversation began to draw interest from two men sitting at a nearby table, and Idris was getting nervous. We got back into the car and drove to his neighborhood, Abu Selim, a stronghold of support for Qaddafi. The neighborhood is known for criminals and immigrants — a ready base of support for the regime — but Idris’s area was more middle-class. As we drove down his own street, he pointed derisively to the new rebel flags hanging outside the houses. “This was all green flags until last week,” he said. “They love Qaddafi. They haven’t opened their shops, everything is still closed. They are afraid.” Later, he added: “Honestly, before February there was no such thing as pro- or anti-Qaddafi. Only those people who were directly affected, the prisoners or the very religious men, had any view.” We drove past the stalls of a local market, blackened by fire in the final days of fighting. Idris gazed out sadly. “Change is not worth this kind of destruction,” he said. On one wall, I saw the words “Who are you?” It was a satire, like so much of the graffiti, aimed at one of Qaddafi’s recent speeches, in which he repeatedly asked the rebels who they were. But in this neighborhood, full of silent and resentful young men like Idris, the words took on a very different meaning.

I think Idris inadvertently hits on a couple of important points. First, it is interesting to note that when one strips away all the other “inputs” to the production of support – money, coercion, peer pressure, etc. – we are forced to speak of things like “love” (for Qaddafi!). But this love is hardly comprehensible as a preference for Qaddafi over the alternatives, or even as a belief in the “legitimacy” of Qaddafi’s regime; it is obscurely wrapped up with a person’s identity and understanding of the world, and its political consequences appear not to have been significant. (Idris does not appear to have fought for Qaddafi when things got tough, despite his love for him, unlike many other people who were loyal to Qaddafi out of a variety of pragmatic considerations of interest and fear). As a side note, I suspect that one cannot normally speak of beliefs in legitimacy except in the Hobbesian sense of beliefs that converge on particular rules or persons as sovereign. To believe in the legitimacy of a regime is simply to expect that other people will obey its rules and officials and collaborate with its authority; when that expectation disappears, so does the regime, but this is obviously very different from something that can be measured by means of opinion polls, and it seems to have very little to do with the personal feeling that someone like Idris might have had for Qaddafi.

Second, Idris is right to note that before people were forced to take sides, “there was no such thing as pro- or anti-Qaddafi. Only those people who were directly affected, the prisoners or the very religious men, had any view.” The public act of taking a position obviates any question of “inner” support, since the public act is a clear signal of support. And without that public act, there is really no such thing as pro- or anti-Qaddafi “support” other than the ordinary collaboration of everyday life. It is only when people are called upon to do something one way or the other – to shoot prisoners, as some of the people whose stories are told in the piece were called upon to do, or spy on their neighbours, or anything that actually puts them at risk – that we can speak of support (or lack of support) in politically significant ways. And here Arendt is obviously right: obedience and support then are the same; to support the regime was to fight for it, whatever complex motivations one might have had for doing so. It is worth understanding the complexity and tragedy of these motivations (the story of Furjani, in the article, gives a glimpse of the tragic situation in which some people are placed when coercion is the dominant input to the obedience-production technology), but from the point of view of explaining the maintenance and fall of the regime these will add very little beyond the obvious facts that most people supported the regime because they thought it was in their interest to do so or were afraid to do otherwise.

Thursday, September 22, 2011

Just War Theory and Other Philosophical Responses to Warfare (A Footnote on Aquinas, Erasmus, and Machiavelli)


(Warning: a rambling disquisition about the point of just war theory in history. Tries to articulate some thoughts I've been nursing for the last couple of months, and some things I've tried to say in my class on political philosophy and international relations). 

How should one respond to the fact of war? I do not mean how we ought to respond to this or that war, but rather how we ought to respond to the enduring fact that human beings engage in warfare: what should we do about this fact, at the most general level, if anything? And in particular, what constitutes a proper philosophical response to the fact of war?

One common response to the fact of warfare is articulated by the theory of just war. Just war theory presupposes that war is an unfortunate but sometimes unavoidable aspect of the human condition: given some general facts about human psychology (for example, the fact that at least some people lust for power or strongly believe that particular ideologies must be imposed on others), we must expect war to flare up from time to time, though its frequency may wax and wane for a variety of reasons (demographic, technological, cultural, etc.). Yet some of these wars will be justifiable: there will be good reason for (some people) to fight them in order to protect important values. The proper response to the fact of warfare thus involves articulating the principles and rules that distinguish between justifiable and unjustifiable wars (and between justifiable and unjustifiable conduct in war), and appealing to or forcing those who engage in that practice to regulate their conduct according to these principles and rules; and at least the first task necessarily involves philosophical reflection.

The ideal here is not the elimination of war, but the reduction of unjustifiable war through the moral (and sometimes legal) regulation of the practice. And though this regulation may take institutional form (as it does, imperfectly, nowadays), it need not: all the just war theorist presumes is that most people are relatively receptive to moral argument, at least when such moral argument appeals to relatively noncontroversial principles and is presented in a clear way. And even if such appeals sometimes fall on deaf ears, the just war theorist assumes that they are not entirely ineffectual. One can appeal to the conscience of those in power, even if sometimes they have trouble hearing its voice, or at least force them to pay a decent respect to the opinions of others, and one can train those who actually fight to be responsive to moral precepts that constrain what they can do in the heat of battle.

The basic principles of just war theory seem to have a certain universal appeal, given that they have changed little since Aquinas articulated them in the 13th century. (And he was merely systematizing ideas that were even older, going back to Cicero and the Stoics in the late Roman Republic). We still discuss ius ad bellum in terms of the basic triad of proper authority (who can authorize a war?), just cause (is there a good reason to fight, and in particular a reason that can justify the collective use of armed force?), and right intention (is the just cause merely a pretext for more nefarious purposes, or do the people waging war genuinely intend to protect some important values by going to war?). Other principles – like “reasonable chance of success” – sometimes enter the discussion, but the basic framework remains ancient. Witness the debate about the recent intervention in Libya, for example. Disagreement about the morality of the intervention revolved around the questions of who had the authority, if anyone, to permit the use of armed force against Gaddafi’s government, whether Gaddafi’s actions to put down a rebellion against his government gave other countries a good reason for engaging in war against him, and whether NATO members genuinely intended to protect Libyan civilians and/or help the Libyan rebels overthrow an oppressive regime (or were, on the contrary, acting to secure control over Libya’s oil or Western influence in the Middle East). Similarly, debates about the morality of particular tactics in bello (e.g., the use of precision munitions to attack particular people in urban areas) all revolve around the basic triad of principles of innocent immunity (is the target a civilian or a combatant?), proportionality (are the means proportionate to the end, or are they “overkill”?), and double effect (are the deaths of civilians a genuinely unavoidable result of the use of proportionate means?). 
Though the full articulation of the principles of ius in bello is of somewhat more recent vintage (they are more sketchily described in Aquinas than the principles of ius ad bellum, for example), they are still quite old and broadly accepted.

But though the basic principles of just war theory are widely accepted, the fact of disagreement obviously indicates that their application is much more controversial. The more one moves from broad principles to specific rules, and even more to particular judgments, the less convincing arguments about the justice of particular wars or tactics will be. Arguments come to depend on distinctions that are far less obvious and much more contestable. For example, the US Air Force consistently argues (and I’m sure mostly in good faith; as far as I know, American soldiers do receive explicit training on the principles of just conduct in war) that its use of precision munitions respects all the basic principles of ius in bello: such munitions are used only against people who intelligence indicates are “combatants,” and responsible people attempt to minimize “collateral damage” (i.e., apply the proportionality and double effect principles). Yet many people vocally disagree with them about all aspects of this argument, including the weight that should be given to the evidence of combatant status (what is the acceptable false positive rate for a target?) and whether the use of 500 pound weapons in urban areas represents due care for the lives of non-combatants (what is the acceptable rate of civilian death from attacks on genuine military targets?).

The problem is not that there is no right answer to these questions, but that no particular answer can depend on premises that are all widely acceptable. Many if not most positions can muster plausible arguments (I get a glimpse of this every year when I ask my students to write essays applying the principles of just war to various recent military conflicts). Even sincere attempts by serious and well-trained thinkers to apply these principles to particular conflicts lead to ambiguous results. When one reads Vitoria’s exhaustive examination (in the 16th century) of what would count as a just cause of war against the natives of the Americas (and hence would justify conquering them and taking their land), it is hard to say for sure whether he supported or opposed the conquest; though in private he made it clear that he was appalled, he thought that there could be (and perhaps were?) circumstances in which the conquest would have been justified.

The pervasiveness of disagreement, and the fact that such disagreement is necessarily entwined with important (even existential) interests tends to make moral argument about just war appear as a form of rationalization, and worse, as legitimating the designs of the powerful. The suspicion arises that in trying to distinguish between justified and unjustified forms of war we merely enable more warfare; and that we would all be better off if the considerable intellectual energy spent on making these distinctions were instead spent on delegitimizing warfare as such. Already in the 16th century, when the School of Salamanca was at the height of its influence (the Valladolid debates on the justice of the Spanish conquest of the Americas were not just for show!) and just war theory had evolved into a highly sophisticated discourse, there were people who thought precisely that. In his Dulce Bellum Inexpertis, Erasmus railed against what he saw as the enabling role of theologians in justifying too many wars. For him, just war reasoning was corrupting: it turned theologians and philosophers into advocates of their patrons’ predatory projects. The correct philosophical response to the fact of warfare, in Erasmus’ view, was not to help regulate it by articulating the principles and rules that can justify particular wars or practices within wars, but to deploy the full power of rhetoric to depict the horror of warfare and to delegitimize it as much as possible. (It is worth noting that the Dulce Bellum Inexpertis was a sort of 16th century best seller. The printing press was still relatively new in Europe, and Erasmus was very good at making use of it to publicize his views).

The point is not that Erasmus thought that no war could ever be just (he does suggest here and there that some wars could be justified), but that asking which wars are just is (most of the time) the wrong question, since most wars will not be just. Intellectual energy is better spent delegitimizing warfare as much as possible by depicting its material and moral costs as vividly as possible, denouncing its general injustice, and indicating potential alternatives. (This is implicitly an argument about the “responsibility of intellectuals,” though of course the point is never put that way by Erasmus). In this way, if wars must be fought, they will tend to be fought less often, and with more restraint; the use of rhetoric to delegitimize warfare as such will if nothing else tend to “ratchet” up the restrictive force of just war principles, increasing the rhetorical cost that must be paid to start or wage a war. Whether this is in fact the case is difficult to tell; there does seem to have been a gradual, if haphazard, “tightening” of the restrictive force of just war principles over time, though whether this “tightening” is at least partly due to the efforts of people like Erasmus is anybody’s guess. For example, whereas Aquinas in the 13th century thought that almost any “wrongdoing” that could not be redressed by the political authorities of a single political community could constitute a just cause of war, we now treat suspiciously any form of warfare that is not obviously defensive. And this “tightening” of the principles of just war has been correlated (I’m not claiming causality, however) with apparently large declines in the overall frequency and murderousness of war. (Yes, there are exceptions, and very long-term trends obscure significant variation over shorter periods of time. But the overall trends are striking, despite the greater destructiveness of modern technologies of warfare. 
At any rate, one only has to read Thucydides’ History of the Peloponnesian War to understand that the modern era is not particularly inhumane in its way of waging war).

The 16th century also gave rise to a very different response to the fact of warfare. Here the exemplary figure is Machiavelli, and the problem is not what to do to reduce warfare (should one help regulate it, or delegitimize it?), but how to use warfare to accomplish important goals. Warfare is not seen as a uniquely awful experience, but as a tool of politics; and one must study “The Art of War,” not because one ought to avoid war, but because one must learn to use it efficiently. Machiavelli (among others, though he most of all) wants to study the “economy of violence,” in Sheldon Wolin’s useful phrase, to put war to use, and in particular to put it to use for purposes that are internal to political life (the achievement of power, the foundation and preservation of political communities, etc.). Machiavelli’s thought is especially original not so much because he wants to study the economy of violence, however (there are many precedents, and Machiavelli’s advice in this respect, though generally acute, is not always great), but because he thinks that the standards by which we must judge the use of violence are themselves internal to the practice of politics: greatness rather than goodness. The point is to learn to do memorable and admirable deeds, and the most admirable deeds are those which produce lasting authority structures (founding religions and political communities, for example), not those that are most in keeping with conventional moral rules (or are accomplished with the least amount of violence). (I might write more on this point. It’s something I’ve been thinking about). But even if one disagrees with Machiavelli that these are legitimate goals, and thinks instead that reducing warfare is much more important, one might still think that doing so requires understanding the economy of violence and using it judiciously: that seems to me to be the genuine moral core of “realism” as a kind of consequentialist theory.

Though the Machiavellian response is not a direct reaction to the development of just war theory, it is nevertheless a logical response to the same concerns that led Erasmus to move away from just war reasoning. It’s interesting to me that the European experience of the 16th century produced these entirely divergent responses to war, despite the fact that all of the writers who were operating in these traditions had similar understandings of what war entailed (war was after all a very common experience in their world). None of them were especially naive about human beings and their limitations, and many had real influence with those in power. Yet these three responses seem to be fundamentally different, and the difference is not always rooted in radically different understandings of human nature (though they do differ on this point, especially Erasmus). In my class, I sometimes put the point in slogan form: just war theory says (about war) “regulate it,” Erasmian pacifism says “delegitimize it,” and Machiavellian political science says “study and use it.” 

Yet which of these responses is the best one? And how are they related to one another? Are they complementary responses, such that a division of intellectual labor between their proponents is possible, and capable of promoting important values over time? (Just war theory and Erasmian-style pacifism do seem to me to be related in something like this way, but to be in tension with Machiavellian political science). Or are they ultimately incompatible, so that we must choose among them? And does the development of just war theory necessarily generate, in a “dialectical” fashion, these alternative responses to war? I suspect that it does: as just war theory becomes more complex, it comes to seem more futile, giving rise both to Erasmian-style “delegitimize it” responses and Machiavellian-style “study and use it” responses, and yet it never fully disappears; and perhaps just war theory itself becomes more relevant after periods where both Erasmian and Machiavellian responses seem to fail (perhaps the period after WWII).

[Update 9/22: fixed some typos and made some minor wording changes]

Sunday, August 07, 2011

Endnotes

Haven't done one of these posts for over a month now, and the links accumulate. In no particular order for your Sunday reading pleasure:

More links accumulate here. HT: Zunguzungu, Cosma Shalizi, Henry Farrell, and probably others too.