The Language War

by Robin Tolmach Lakoff
  • ISBN13: 9780520216662
  • ISBN10: 0520216660

  • Format: Hardcover
  • Copyright: 2000-06-01
  • Publisher: University of California Press

Summary

Robin Lakoff gets to the heart of one of the most fascinating and pressing issues in American society today: who holds power and how they use it, keep it, or lose it. In a brilliant and vastly entertaining discussion of news events that have occupied an enormous amount of media space--political correctness, the Anita Hill/Clarence Thomas hearings, Hillary Rodham Clinton as First Lady, O. J. Simpson's murder trial, the Ebonics controversy, and the Clinton sex scandal--Lakoff shows that the struggle for power and status at the end of the century is being played out as a war over language. Controlling language is a basis for all power, she says, and therefore it is worth fighting for. As a result, newly emergent groups, especially blacks and women, are contending with middle- to upper-class white men for a share in "language rights." Lakoff's introduction to linguistic theories and the philosophy of language lays the groundwork for an exploration of news stories that meet what she calls the UAT (Undue Attention Test). As the stories became the subject of talk-show debates, late-night comedy routines, Web sites, and magazine articles, they were embroidered with additional meanings, depending on who was telling the story. Race, gender, or both are at the heart of these stories, and each one is about the right to construct meanings from language--in short, to possess power. Because language tells us how we are connected to one another, who has power and who does not, the stories reflect the language war. We use language to analyze what we call "reality," the author argues, but we mistrust how language is used today--witness the "politics of personal destruction" following the Clinton impeachment. Yet Lakoff sees in the struggle over language a positive goal: equality in the creation of our national discourse. Her writing is accessible and witty, and her excerpts from the media are used to great effect.

Author Biography

Robin Tolmach Lakoff is Professor of Linguistics at the University of California, Berkeley.

Table of Contents

Acknowledgments ix
Introduction. What I Am Doing Here, and How I Am Doing It 1
Language: The Power We Love to Hate 17
The Neutrality of the Status Quo 42
"Political Correctness" and Hate Speech: The Word as Sword 86
Mad, Bad, and Had: The Anita Hill/Clarence Thomas Narrative(s) 118
Hillary Rodham Clinton: What the Sphinx Thinks 158
Who Framed "O.J."? 194
Ebonics--It's Chronic 227
The Story of Ugh 252
Notes 283
References 303
Index 313

Supplemental Materials

What is included with this book?

The New copy of this book will include any supplemental materials advertised. Please check the title of the book to determine if it should include any access cards, study guides, lab manuals, CDs, etc.

The Used, Rental and eBook copies of this book are not guaranteed to include any supplemental materials. Typically, only the book itself is included. This is true even if the title states it includes any access cards, study guides, lab manuals, CDs, etc.

Excerpts


Chapter One

LANGUAGE

The Power We Love to Hate

THE STORIES THAT MAKE THE NEWS

Some of the stories in the news over the last few years:

The fight over Political Correctness

The Anita Hill / Clarence Thomas hearings

The David Mamet play Oleanna

The role of Hillary Rodham Clinton

The Bobbitt contretemps

The Nancy Kerrigan-Tonya Harding faceoff

The O. J. Simpson saga

Adultery in high places

Sexual misconduct in the military

The Ebonics controversy

The fight to make English the "official" language of the United States

The death of Princess Diana

The "Cambridge Nanny" case

Sex (or whatever) in the Oval Office

Each of these stories is different, but they share at least one striking similarity: we hate to let them go. Each story, each issue, each case spurs passionate discussion, spilling out of the media and into our personal conversations, our private thoughts, our secret dreams.

    There are moralists and pundits who are incensed at the amount of attention paid to what they feel ought to be ephemera, especially since it's impossible to get anyone to care about what ought to matter: campaign spending corruption, genocide in Yugoslavia, the collapse of the Asian economy, the standoff in Iraq. Why doesn't anyone want to argue about those?

    Theories abound. It's "the culture of narcissism." The media's ratings frenzy. The short attention span created by MTV. Bad education. But the stories keep us enthralled despite the pundits' diatribes. The common thread in the explanatory narratives spun by the moralists and experts is a belief in the intellectual deficits of a populace inveigled into an addiction to trivia. The real stories, it is said or implied, the ones a serious and intelligent public would be paying attention to, are complex. We decadent Americans are drawn to these narratives by their very simplicity--all we postmodern consumers can understand.

    In truth, these tales are as deep and convoluted as they are gossipy and salacious. They grab us by our emotions, or even less dignified parts of ourselves, so that we are often embarrassed at our fascination. But they do so not because (well, not entirely because) our sensibilities have grown tawdry. Rather, these stories stay around much longer than they logically ought to because they involve the complexities of being American, and indeed being human, in the years just before the millennium. Each of these stories, whatever it appears to be about, is really about how hard, yet how interesting, it is to be us, here and now.

    Stories like these are our new mythology, our current just-so stories. As we tell them and hear them, they have a core of real-life truth. We've read about them in the papers, seen them on television. But we embroider them with other meanings, deeper and more troubling, as people have done with their stories since the making of the classic myths. O.J. is our Achilles, Hillary Rodham Clinton our Helen of Troy, and their stories help make sense of our reality.

    These fables are diverse in content and even ultimate staying power, but all pass what I call the Undue Attention Test. The UAT selects stories and issues that seem by "serious" standards to be ephemeral and trivial, to merit no more than passing media attention, if even that. But a story that passes the UAT does not fade away quickly. It acquires "legs." It shows up night after night on TV--in purportedly hard-news venues; on infotainment shows; on TV talk shows; on late-night shows; on the magazine shows. It turns up on radio talk shows as hearers call in with empurpled opinions; it occupies the front pages of newspapers and news-magazines. We talk about it spontaneously. Serious people may deride our fascination, but we are not ashamed. We know that our Attention is not Undue, that something we need to understand is playing out before our eyes.

    Some of these stories stay around only briefly (though longer than they should according to serious standards); some lived with us for several years before receding; and some seem destined never to depart. But all have received Undue Attention. Most of these cases involve problems currently causing unrest and dissension at all levels of national discourse: gender and race. A great many focus on the respective roles of women and men: who can do and say what, and what it means. Others ask the same questions about white and black persons; and some of the most persistent refer to both race and gender.

    We use these stories to explore the hardest questions we have to face, as ways to circle around our feelings and test possible resolutions, ways to mask our confusion with titillation. We wallow in Schadenfreude concerning the rich and famous and try not to acknowledge that some of the sources of their embarrassment play a part in our own lives.

    Several of these stories share a further similarity. They are about language: who has the ability and the right to make meaning for everyone. Language-based controversies like these are really about which group is to enter the new millennium with social and political control. Whose take on things will be the take? Who gets to make meaning for us all--to create and define our culture? Culture, after all, is the construction of shared meaning. These cases are about nothing less than our definitions of ourselves and who can make them. Therein resides power, directly or not.

    Does this argument seem strange? One reason it might is that we often consider language no more than mere effusions of breath, a representation of reality, not reality itself, and therefore incompetent to be the motive force behind social change. And the power to make language and through it meaning has been vested in one powerful group (typically middle- and upper-class white males) for so long and so totally that that perception became a transparent lens through which we viewed "reality": the view of that group seemed to all of us the plain, undistorted, normal and natural view, often the only view imaginable (if you weren't totally crazy). (I will discuss this point at length in Chapter 2.) For those still immured in this perspective, the suggestion that there might be more than one way to construct meaning is meaningless, threatening, or bizarre. The common conception of language as abstract and impotent seems to support this argument. But our passionate opposition to any new understanding of meaning-making gives the lie to that comforting old belief. Often the pundits (typically supporters of the old order) call these news stories superficial and those who pay attention to them mindless because they don't see that both are harbingers of a truly terrifying social change: the democratization of meaning-making.

    We are currently engaged in a great and not very civil war, testing whether the people who always got to make meaning for all of us still have that unilateral right and that capacity. The answer that seems to be winning is NO, but those who want to check the YES box are unaccustomed to not having their choices win by default and are fighting back with the zeal common to lost causes.

LANGUAGE MAKES REALITY

Language is, and has always been, the means by which we construct and analyze what we call "reality." The pundits opine that the economy wins and loses elections, but who has actually encountered, touched, or smelled an Economy? What we know of it we know through carefully selected words, images that tell us what we ought to think and believe we know. It is no accident that, at the very moment at which meaning-making rights are being contested, politicians and others in the public eye have developed armies of specialists whose job it is to construct public meanings via the skillful manipulation of language: old-time speechwriters, image consultants, media advisers, press secretaries, spin doctors, spokespersons, PR experts, pollsters, and many more. "Just language" has become big business. If in Calvin ("Silent Cal") Coolidge's time, the business of America was business, in ours the business of America is language.

    But how can language have this kind of power--explanatory and cohesive, on the one hand; divisive and threatening on the other? How can something that is physically just puffs of air, a mere stand-in for reality, have the power to change us and our world?

    In some of my classes, to illustrate the role language plays in the making of what we call "reality," I write a sentence on the blackboard:

    Christopher Columbus discovered America in 1492.

I then point out that, on the one hand, this is a sentence that early on each of us was encouraged to think of as simple, immutable, physically verifiable historical truth. On the other hand, every word in it (particularly including "in") represents an ambiguity or potential point of rational disagreement. Conservative critics rail at the notion that history may not be literally and absolutely "true." And of course, neither I nor any responsible person argues that the literal event the above sentence immortalizes did not actually occur. What is in dispute is the interpretation of that event. In the possibility of multiple interpretations lies the slippage, but without those interpretations the bare "fact" is meaningless. So you can choose between several competing meanings and no meaning at all.

    I am suggesting, then, that language not only has the ability to allocate political power for all of us as a society, but also is the means and the medium by which we construct and understand ourselves as individuals, as coherent creatures, and also as members of a culture, a cohesive unit.

    There is a paradox latent here. Language is just air after all--it is not a gun, it has no power on its own. Yet it changes reality. How is air transmuted into concrete reality--how does language become action?

    The British philosopher of language J. L. Austin (1962) proposed a way. As a first gambit, he suggested that most utterances were "constative," that is, merely descriptive of reality: The sky is blue; It will snow tomorrow; I like artichokes; Columbus discovered America. Such utterances could be judged as either true or false, based on comparison with reality.

    But there are other kinds of utterances as well. Leaving aside nondeclarative sentence types (Close the door! Who is this?), there are sentences or utterances that are declarative in form, but do not describe an externally determinable reality. Rather, by their very utterance they bring into being, or perform, the situation they represent. Austin called such expressions "performative."

    Performative utterances have specific requirements. They must all contain a first-person, present-tense verb of linguistic activity: I order you to leave; I excommunicate you; I promise to pay you within a week. They are not subject to verification procedures. It doesn't make sense to retort to them, "That's false!" since by my utterance of the words themselves I have made the situation they describe true--providing, of course, I have the right and the means to make that utterance appropriately or, in Austin's term, felicitously. Thus if I give an order but haven't the power to enforce it; or if the order I give cannot be carried out by a human being ("I order you to fly around the room"), the performative will not work. But if a performative utterance is felicitous, it transmutes the air of language into concrete reality. You must leave; you are no longer a member of my church; you expect money from me at a specified future time. Of course, performative utterances can change only those real-world situations that are changeable via language alone.

    Thus far Austin appears to distinguish between constative and performative sentence types, with only the latter (a much less common category) having world-changing properties. But toward the end of his book he slips in a subtle but powerful revision. In fact, all utterances are performative, even those that look merely constative.

    His argument is ingenious. Consider the pair of utterances:

I will give you $100 on Monday.

I promise to give you $100 on Monday.

The first utterance could be understood as either a prediction or a promise; the second, clearly a promise, may be constative or performative. If we interpret the first utterance as a promise, and the second as performative, the two are essentially identical in meaning. Yet by Austin's original analysis, based on its form (the verb is not one of linguistic communication) the first could be only constative. That doesn't make too much sense, inasmuch as it can (though it need not) be used with performative, that is, reality-creating, force. The same can be said about any utterance: cats eat bats is an assertion--that is, it performs the speech act of asserting a proposition--and as such it is equivalent to the explicit performative I say that cats eat bats. And since function, not form, determines category, it moves from constative to performative. In the same way, Austin suggests that all utterances are performative, and all language world-changing. In some cases ("explicit performatives" like "I promise to give you $100"), the performative expression is itself physically present as part of the utterance; in others ("primary performatives" like "I will give you $100"), it is formally absent, but functionally present as much as if it were literally there.

    There are categories of performativity. Some kinds of performative utterances have more palpable effects than others, some are more constrained as to who can perform them felicitously, under what conditions (excommunicate is highly constrained, order less so, and promise the freest, although there are conditions for a felicitous promise). Battles over language not infrequently are fought over performatives: who can use which ones to whom, under what conditions. One case that has received a lot of attention of late is the apology.

APOLOGIES AS LANGUAGE POLITICS

An apology is certainly performative: it changes the world for participants in terms of their relative status and their future relationship. In making an apology, the maker (1) acknowledges wrongdoing; (2) acknowledges that the addressee is the wronged party; (3) admits needing something (forgiveness) from the addressee to make things right again. Apologies put their makers at a disadvantage in two ways: as transgressors, and as people in need of something from those against whom they have transgressed. Hence a true apology is always painful, and real apologies tend to occur either between equals, or from lower to higher. Higher ups "never explain, never apologize," first because they don't have to, and second because it might threaten their high status.

    Yet we persist in making apologies. Why? Because even as we lose status by making them, we get credit for making them: we are seen as nice, responsible people deserving of forgiveness and even praise. But to commit an error without apologizing to the wronged party is to appear haughty and uncaring, sins as bad as the original malfeasance. Unlike most speech acts it is the form of the apology that counts. It is less important whether it is sincere than that it gets made. In fact sometimes a true, obviously heartfelt apology is just what you don't want; all you want is a little reminder that you are owed something.

    Suppose you're in a movie theater, the movie is in progress, and someone steps on your foot to get to his seat. What would you prefer at this juncture? A long, loud expression of mortification followed by promises of perpetual atonement? Or a grunted "sorry"? (But you do want that "sorry.")

    Sometimes an apology is little more than a social formula meaning, "I care about you, I hear your distress." Women seem especially prone to this usage, apologizing even when no discernible wrong has been done or the speaker has had no imaginable part in the wrongdoing that has occurred. Sometimes a pro forma apology is used to placate a disappointed interlocutor, even if the speaker had nothing to do with the annoyance. ("Can I speak to Mr. Bigg?" "I'm sorry, he's in a meeting.")

    Because genuinely apologizing is humiliating, the full, explicit performative form ("I apologize for stepping on your cat") is seldom encountered. Instead, we have a host of indirect and ambiguous ways to accomplish the nasty business.

I'm sorry I stepped on your cat.

I'm sorry the cat got stepped on.

The cat looks upset.

Why was the cat under my foot?

Can't the damn animal watch where it's going?

You shouldn't have let the cat in the room.

You will note the progression from an ambiguous expression that could be either a true apology or a mere expression of regret to an apparent statement of a simple factual observation to an accusation blaming someone else--the cat, or the addressee.

    These choices are language politics in action: the way we determine who bears the responsibility for a mishap, who owes what to whom, reflects current power differences among participants and creates their future relationships. The options chosen by a speaker make meaning by creating or acknowledging the existence of a frame (see Chapter 2), clues that tell everyone how to understand what has occurred. If I acknowledge fault in the incident, then I am forever after in the eyes of those present either clumsy or a hater of cats, and I must undo the damage or make it up somehow. My identity shifts, however slightly, against my will and beyond my control. If I do not make an unambiguous apology, I may not have to undergo this debasement. Yet I may still get some credit for acknowledging ... well ... something.

    The public apology shares much of its form and some of its function with the private apology, though its consequences may be different. After an "incident," a nation's government must decide whether to apologize, or whether to accept the offending state's apology. If either decision is negative, the only recourse may be war. Sometimes national leaders intentionally perform actions that normally would require apology, and then refuse to apologize--forcing the offended nation into a declaration of war, the only way to save face--as a way to become involved in a war without appearing to be the aggressor. This was certainly the case in the prelude to the Gulf War in the summer and fall of 1990.

    Iraqi President Saddam Hussein was known to be a man of passionate temperament, not used to being thwarted, one who did not brook insults calmly. Yet during the summer of 1990 U.S. President George Bush repeatedly stepped on Saddam's figurative toes. He and members of his administration regularly mispronounced Saddam's name--always a sign of contempt. He referred to Saddam as a "butcher" and a "madman" and worse. He announced that he had "drawn a line in the sand," a statement of unilateral power. Through most of this, Saddam maintained remarkable tolerance, but finally he snapped and hostilities began.

    The president could have apologized at any point. Instead, he escalated his insults. The absence of apology was itself an act of hostility, and no doubt was intended to be read as one, albeit a slightly cowardly one, in avoiding explicit expressions of aggression while taunting the target to attack.

    The absence of any apology from President Bush seems in this case to have been deliberately provocative. But even when there is no affirmative reason to avoid apologizing, a person in power will often take evasive action anyway. A striking instance occurred during the presidency of Ronald Reagan (a man alert to the potency of symbols if ever there was one). The problem arose in the course of the debate over whether to make the birthday of Dr. Martin Luther King an official holiday. Liberals favored it; conservatives were generally less enthusiastic. As the discussion in Congress proceeded, Senator Jesse Helms argued against the proposal on the grounds that, according to information in possession of the FBI, King had been under Communist influence.

    At a press conference in October 1983, the president was asked whether he agreed with Helms's assessment. Reagan replied: "We'll know in about thirty-five years, won't we?" apparently believing that the files would be under seal for that length of time (it was actually forty-four years). Democrats immediately demanded that the president apologize to King's widow, Coretta Scott King. According to the Los Angeles Times report, as it appeared in the San Francisco Chronicle:

White House press aide Larry Speakes asserted, "The president said what he meant and meant what he said," and declared there were no plans to call Mrs. King. But later the president did.

Speakes said Reagan told her he "thought his remarks were misinterpreted."

    After the conversation, Mrs. King was asked by reporters what had transpired. The article reports her comment and its aftermath.

"He apologized to me," Coretta Scott King said after the conversation. "He said it was a flippant remark made in response to what he considered a flippant question."

White House aides, however, denied that the president had apologized. "It was an explanation," assistant press secretary Anson Franklin said. "He didn't mean the remarks the way they sounded."

A number of interesting questions arise here. First of all, why should Mr. Reagan's remark, superficially a mere prediction of a future discovery, be considered to require an apology? How was it possible for his reported conversation with Mrs. King to be interpreted by her as an "apology" and by his spokesman as an "explanation"? Finally, why was it so important to the Reagan administration that the statement not go into the public record as an "apology" that they were willing to risk offending a significant segment of the (voting) population?

    It is relatively simple to see how Mr. Reagan's remarks, if they were as Mrs. King indicated (that is, if he said something essentially equivalent to, "It was a flippant remark made in response to what I considered a flippant question"), could be interpreted as two different speech acts by the two parties involved. Part of the explanation has been suggested above: since apologies are painful, they are apt to be made indirectly, without the explicit performative "I apologize." That provides fertile soil for ambiguity. Mrs. King, hearing the words quoted or paraphrased above (there appears to be no disagreement about what the president said), and having in her mind a context (the president had said something that could be construed as offensive) in which an apology was appropriate, heard "flippant" as self-criticism, a not unreasonable interpretation, especially coming from a public figure whose every word is expected to be dissected by the punditry, and who therefore may be presumed to know enough to shun the primrose path of nonliteral expression. (One of the burdens of power, in a democracy that is at the same time a mediacracy, is the impossibility of linguistic playfulness: everything you say had better be literal.) So to Mrs. King, the president's comment was equivalent to, "I spoke irresponsibly, as I should not have done." Since she was the person most closely related to the one harmed by the misspeaking, and since the president made the statement to her, she was reasonably taking it as a true apology: a statement made acknowledging wrongdoing, to the person hurt by it. Furthermore, Mrs. King's response to the president's comments suggests that she not only construed them as an apology, but acted accordingly--expressed forgiveness: "Mrs. King said she replied to the president, 'I understand. We all make mistakes, and I attribute this one to human error.'"

    President Reagan (or his spokesman) construed the utterance differently. For him, the emphasis falls on the second part, "... in response to what he considered a flippant question." One might still wonder what is "flippant" about a question suggesting that a national hero was in reality a traitor (this colloquy occurred at a time when the Soviet Union was still the "Evil Empire"--in Mr. Reagan's own phrase). In this interpretation, the president's utterance becomes an "explanation," with a communicative force of "It is normal to answer a question in the spirit in which it was asked. So if someone asks you something in jest (flippantly), it is only fitting and proper to respond in kind. If I had responded seriously, I would show myself to be communicatively incompetent, which (as the Great Communicator) I could not possibly be. Therefore, if you understand anything about how communication works, you will see that I did the only rational thing, and your concern (indicating that you took my remark seriously) merely shows that you don't know as much about how to communicate as I do, which is why I have to explain it all to you."

    It is always preferable to be in a position of explaining rather than apologizing: as much as the latter puts you one-down, the former places you one-up, with your addressee presumably needing something from you (information) and grateful to you for supplying it. But when the speaker in question is a person of authority, it is all the more crucial that he be seen to be on top from start to finish, lest he lose face as a possible wrongdoer. So it was important for the president's staff to insist on the construction of his remarks as explanatory rather than apologetic.

    Other interpretations were, and remain, conceivable. But there is an informal rule that whoever publicly makes the first interpretation, especially if it is a personage of influence, generally gets the meaning-making rights. So Anson Franklin's gloss was the last word (that I know of) on the subject.

    But how did the president's superficially innocent prediction about the unsealing of the tapes become open to an interpretation ("The Rev. Dr. King was a Communist") reasonably requiring apology? A powerful person's words tend to be subjected to more thorough exegesis than other people's. So if an ordinary person had made Mr. Reagan's remark, it might very well have been taken at face value: everyone has better things to do than expatiate on my meanings or yours. But the President of the United States' meanings--that's another story! Especially a remark by a conservative white president about an African-American heroic figure, in a racially polarized and racially tense society. This social and psychological context figures into the possible "meanings" that can be made, here as elsewhere.

    In order to account for the extended (apology-requiring) interpretation, it is necessary to realize first that when an ambiguity is uttered and it involves the possibility of both positive (or innocent) and negative (or harmful) interpretations, the literal surface reading generally is the one with the positive interpretation, the one requiring exegesis the negative. If I say, "You're a real genius," the literal interpretation is the positive one, you're smart; the ironic, that is, the nonliteral one, is negative, you're an idiot. On the other hand, I can't normally say "You're a real idiot," meaning ironically that you're smart. This follows from our natural instincts as social animals for politeness and self-defense: better to be some distance away (or on to a different topic) when the hearer gets what I really meant.

    Moreover, when public persons say something, in an informational situation like a press conference, the assumption is that what they say will be "interesting." It will be informative, it will say or suggest things we don't know or haven't thought of. So if an utterance under these conditions is capable of multiple understandings, the one preferred in this case (as opposed to that of normal small-talk) is the one that carries the most informational weight, is the most unexpected, or would have the most significant consequences.

    The question prompted by Mr. Reagan's remark is, "Just what will we know in thirty-five years?" If we have to wait thirty-five years, and it's to be worth the long wait, it had better be the richer possibility. In the normal course of events, once the tapes are unsealed, we would then learn one of two things: King was a Communist, or he wasn't. But the latter is much less shocking, and therefore less interesting, than the former. While that interpretation is certainly theoretically possible, it is significantly less likely in the context in which the remark was made. So there are good reasons to demand an apology, and equally good reasons to avoid making one. Once we understand the way language creates and enhances power relations, this otherwise rather odd colloquy begins to make perfect sense.

    In similar fashion, the mayor of New York, Rudolph Giuliani, managed to evade a real apology when many demanded one. On Columbus Day it is traditional for New York mayors and mayoral candidates to attend a Catholic mass. The mayor's opponent in the 1997 mayoral election, Ruth Messinger, is Jewish (Giuliani is Catholic). Messinger did not attend the mass, while Giuliani did. According to a report in the New York Times, Giuliani described Messinger's absence as a sign of disrespect to those who attended the mass. The remark was seen by some critics as covertly antisemitic and aroused a storm of demands for an apology. Giuliani, whose thin skin is legendary, made a statement which was construed by the Times reporter and other observers as an apology.

I was not suggesting that anyone who has any reservations about going to a Mass or a religious service should be required to go to it, or any pressure should be put on them to go to it. I think people have a right to make those choices for themselves. It was probably a mistake to put the focus on the Mass.

Technically Giuliani's statement could be put into the "apology" category, but only with some serious reservations that take away much of the sting. As the article notes,

The Republican Mayor pointedly refused to apologize to his Democratic opponent herself, saying his remarks were directed only at "anybody who feels they need an apology." And he said he would not retract his assertion that Ms. Messinger had shown disrespect to Italian Americans and Roman Catholics who celebrate the holiday. (Nagourney 1997)

    One essential part of a true apology is absent here: an acknowledgment that it is the recipient who has been wronged, and who therefore has to be the addressee of the apology. Giuliani's phraseology permits the reader to understand that there might be nobody to whom the remarks were addressed--the "anybody" he alludes to might not include anyone. In that case there would of course be no "apology" made at all. And there seems to be no acknowledgment of wrongdoing in Giuliani's direction of his remarks to "anybody who feels they need an apology"--very different from acknowledging that an actual person had really been wronged and was therefore truly in need of an apology. And since he would not retract one especially hurtful part of the original assertion, it's hard to know just what his "apology" consisted of. In other words, Giuliani's utterance was a skillful exercise in having it both ways: it took the form of an apology of sorts, so that only a crybaby or malcontent could grouse; but for himself and like-minded others, no retreat had been made, and the loss of face consistent with a felicitous apology had not occurred.

    Nice work if you can get it.

THE UN-APOLOGY

I have suggested that public figures avoid making apologies. But there is an exception in the eagerness, over the last several years, of high public officials in this and other countries to make public "apologies," almost always for behavior occurring prior to their term of office, usually before they (or those to whom the apology is made) were born. How do we explain this current fad? For one thing it is easier to "apologize" for what people long dead have done to people long dead. President Clinton appears perfectly happy to make explicit apologies for bad behavior by his predecessors (to African Americans for the Tuskegee syphilis experiments of the early twentieth century; to Hawaiians for the overthrow of Queen Liliuokalani at the end of the nineteenth), but he waffled and talked around a direct "apology" to his wife and the public at large in the Sixty Minutes interview of the Clintons in January 1992, in response to reports of his marital infidelities; and public dissatisfaction with his apology for an "inappropriate relationship" with Monica Lewinsky in August 1998 almost led to his ignominious demise (see Chapter 8).

    When the occasion for an apology lies outside of the here and now, it is easier for a public person to do the deed. Hence the long list of public apologies for deeds in the more or less distant past. For instance:

· President George Bush's apology to Japanese Americans for the World War II internment camps;

· Canada's apology to its native population for the damage done by "150 years of paternalistic assistance programs and racist residential schools" (New York Times, January 8, 1998);

· Japan's apology to Korea for its use of Korean "comfort women" during World War II;

· Switzerland's apology to the Jews for appropriating the moneys of Holocaust victims;

· The French Catholic clergy's apology for the Church's silence in the face of the antisemitic laws passed by the Vichy government during World War II, and their consequences;

· Great Britain's apology to Ireland for the potato famine (although Queen Elizabeth refused to apologize to the Indians for the 1919 Amritsar massacre);

· Australia's apology to its aboriginal population for its mistreatment of them;

· South Africa's apology for apartheid;

· Pope John Paul II's expression of "regret" for the Church's inaction during the Holocaust.

This is quite a list, all the more extraordinary because such behavior scarcely ever occurred before the 1990's. But by now, the genre has become so familiar that it has become the butt of satire. For example, the comedian Steve Martin (1997) writes in the persona of a candidate for public office "looking out over the East River from my jail cell,"

Once, I won a supermarket sweepstakes even though my second cousin was a box boy in that very store.... When I was twenty-one I smoked marijuana every day for one year.... Finally, I would like to apologize for spontaneously yelling the word "Savages!" after losing six thousand dollars on a roulette spin at the Choctaw Nation Casino and Sports Book. When I was growing up, the meaning of this word in our household closely approximated the Hawaiian "Aloha," and my use of it in the casino was meant to express, "Until we meet again."

In a similar vein, a cartoon by David M. Hitch shows a political candidate's television ad: Re-Elect Flembrik: The Real Sorry Candidate, with a bemused viewer commenting, "I think things have gotten out of hand with all these political apologies."

    Why has the public infelicitous apology become so popular of late? And why (as the parodies suggest) is resentment developing among those in whose name the apologies are made? I suspect it's because the more perceptive recognize that these apologies, for all their literal meaninglessness, have serious consequences, reallocating as they do the right to determine what events mean, and how they shall be spoken of, and who shall speak of them in public. In the past, groups powerful enough to perpetrate the kinds of bad behavior that normally trigger apologies also had the power to refuse to apologize and used their cultural clout to inculcate a general belief that no apology was called for. The only response by the nonpowerful that was deemed appropriate was humble gratitude, thanks to the former oppressor for ending slavery, for instance. So the fact that representatives of the perpetrators must now perform acts of public ritualized contrition is in itself a significant communication. The apology itself, and the specific details of its wording, matter less than the fact that the once grovel-proof have been made to grovel, an ineluctable sign that the times have changed. Beyond this, the patent insincerity, smarminess, and inappropriateness of such apologies make them irresistible targets for ridicule.

THE IDENTITY CRISIS

Public apologetics are not the only current discourse fad that has created controversy. A second seems at first unrelated, although further analysis suggests connections. Contemporary rhetorical style has devised two novelties that are disturbing because they fit so neatly into our tendency to erode or question the concept of individual identity: the use of the third person for self-reference and the erosion of our trust in personal memory, that creator of the cohesive ego that we confidently refer to as "I." In an increasingly diverse society, as more and more of us expect to be active makers of the public discourse, there necessarily arises a nervousness about who "we" are, whether there even is a "we," and if so, how it is created. Only if we see ourselves as a cohesive entity sharing a collective past, similarities of outlook, a common language (metaphorically and otherwise), and common interests, can we allow others the right to interpret any one of us as an individual. If I can trust your good intentions (because you and I have established grounds for mutual trust), at least to a degree, even if at first your view of me sounds outrageous, I may try to listen and may conclude that you are speaking in my best interests. And if you (the interpreter) have more power than I (the interpretee), I will not be overly concerned: I will assume that you will use your power for my good. Even if I don't feel so confident of your goodwill, there is no one to whom I can turn for redress. I won't even have access to a public language to argue against your construction of me. In such a world, one group's interpretation of another goes unchallenged. Stereotypes proliferate. Even their targets may accept them in silence. For there is a covert promise made: assimilate--become just like us--and we will take you in as one of us. And a covert threat: accept our view, or be ignored (or worse).

    Once it becomes clear that the cohesive "we" is a fictive construction, the rules change. I can no longer allow you to speak for me. That new perspective enables the underdogs, as well as their masters (who always had that right) to establish sharp boundaries between "we" and "you" or "they." We can still interpret each other, because we "speak the same language." We can still tell our ethnic jokes among ourselves, kid around about our resemblance to the stereotypes. But they can't do that to us any more. We --all of us who are included--now take over the responsibility for constructing both our individual and group identities. I create myself. The differentiation and definition of "I" have effects at many communicative levels, from the concrete choice of personal pronouns to more abstract and complex decisions about the definition of narrative genres and the right to claim a personal memory. Memory is a major source of identity, and whoever determines whose memories are legitimate controls the identity of others.

    At the simplest level is an increasing usage of third-person self-reference. Normally we call ourselves "I," "me," or "myself," as in,

    I am trying to explain third-person self-reference.

But

    Robin Lakoff is trying to explain third-person self-reference,

is third-person self-reference. The form itself is nothing new. Julius Caesar used it famously throughout his memoir, Commentaries on the Gallic War, in the first century B.C.E. His choice was that of a master manipulator of language and people, meant to suggest the absolute trustworthiness and objectivity of his account. If he had used the first person, readers might well have discerned an attempt to illegitimately influence the body politic, and might have begun to perceive distortions of the truths--Caesar was well known to his contemporaries as a tricky character. But his third-person presentation lulled his readers into complacency. It felt like history, and history is "truth." How could someone writing about Caesar's exploits in the third person have an axe to grind?

    More elaborate structures have lately been built on Caesar's foundation, most notoriously by Lt. Col. Oliver North in his 1987 congressional testimony. Unlike Caesar, North moved back and forth between first- and third-person self-reference. Indeed, North pushed the form still further. He chose his third-person personae from several different evocative options, depending on which image of himself he wished to project at any given point: "This Marine Lieutenant Colonel," to suggest his obedience to orders and his military practicality and toughness; "this American citizen," to link him to viewers on television and underscore his patriotism and trustworthiness; "Ollie North," in refuting rumors about marital infidelities: just a regular guy like you and me, nothing wrong with Ollie! He thus created a stageful of characters, all parts of "Oliver North," yet each distinct; none identical with the person ("I") who was testifying; vivid, unlike Caesar's deliberately gray third person, but at the same time impersonal and thus trustworthy: no propaganda, no razzle-dazzle from him! The ploy worked well for the eminently theatrical and charismatic Marine Lieutenant Colonel.

    Successful third-person self-reference seems to be confined to people in positions of power, people who naturally see themselves as the cynosure of all eyes, for whom the external point of view invited by the third-person pronoun makes sense. But used by the charisma-challenged, even among the prominent, the device can backfire. Bob Dole, the 1996 Republican presidential nominee, used it a lot, generally in the form "Bob Dole." Despite his position as Senate majority leader and presidential candidate, his persona seemed insufficient to bear the weight, particularly as he used the form both often and predictably. Commentators began referring ironically to "bobdole," and as he fell inexorably behind in the polls, his persistent third-person self-reference began to feel more like a demand for recognition or a gimmick gone haywire than (as it must to be effective) a reflection of presupposed significance.

    The burgeoning of third-person self-reference in the public discourse marks the beginning of a slippage: no one can be absolutely sure any more that there is a difference between I and he, the internal and external worlds; the I who knows what is going on inside, the owner of the inner life, the knower of motives; and the he who is outside, objective, seeing with and through the eyes of the hearer/reader. The user of third-person self-reference seems to want to play both roles at once. But this advantageous prospect has a downside: "You" don't know who "you" are any more.

    Therefore the concomitant creation of another postmodern puzzle is inevitable, namely the argument over what, if anything, distinguishes "fact" from "fiction," particularly in the genre of autobiography or memoir. In the last several years, the formerly hard line between the two, assumed since the development of mass literacy in fifth-century Athens, has grown fuzzier.

    The current confusion goes back to the late 1960's with the publication of Truman Capote's In Cold Blood and Norman Mailer's The Armies of the Night. In both, the line between reportorial factuality and novelistic imagination was blurred, a blurring made more problematic in the second case by Mailer's interjection of himself, in the third person, as a character in the novel--or rather, a subject of the report. (Mailer continued the game in The Executioner's Song, published about a decade later, in which the reporter/novelist entered into the thoughts of a convicted murderer, with whom he had not actually spoken.) Still, there was no dispute about the veracity of the reports themselves--the external world was still trustworthy, its reporters (whether in the guise of novelists or newspapermen) reliable.

    Things got more complicated with the publication in the mid-1990's of several allegedly "true" first-person accounts couched as memoir or autobiography, the veracity of which, when subjected to scrutiny, proved dubious. An author's use of a first-person narrator need not require sophisticated readers to believe in the truthfulness of the narrative or the authenticity of the author's borrowed identity. So for instance both the personages and the accounts of "Lemuel Gulliver" in Jonathan Swift's Gulliver's Travels and of "Humbert Humbert" in Vladimir Nabokov's Lolita are unproblematically fictional. But the recent controversies arose over stories that were represented explicitly as nonfictional, and whose authors represented themselves not only in the narrative proper, but also in interviews and other public discourse as literally identical to the "I" of the narratives, to whom the events described had actually, historically, happened. They kept their own names--not necessarily a statement about their true identities (occasionally a writer of fiction creates an "I" with the writer's own name who is nonetheless understood to be a fiction), but certainly suggestive of identification. In particular Kathryn Harrison and Lorenzo Carcaterra, authors of The Kiss and Sleepers, respectively, have come under criticism for obscuring the boundaries between "fact" and "fiction," between real-I and fictive-I. Their use of the trustworthy and empathic first-person arouses feelings of betrayal in readers who learn of the blurred lines--more than would be true if the writers had cast their protagonists as more clearly fictional third persons.

    Any argument that writers should keep a clear distinction between fact and fiction has to seem naive, if plausible at all, to sophisticated people in the postmodern period. Modern people often feel that a work that knowingly blurs that line and invites readers to examine their resultant confusion is in fact more honest than the sort of thing that has been around for centuries--purportedly true works of alleged nonfiction that skew reality in ways that cannot be checked or controverted: propaganda of one sort or another.

    Perhaps at this impasse one is tempted to duck the confusion by retreating back into one's own, trustworthy "I," the deep recesses of the psyche. But there is no refuge there. As Freud pointed out a century ago, we are not in control of our minds, though we believe ourselves to be. Our "memories" may be fictions (or not--we have no way of knowing); the motives we attribute to ourselves may well not be genuine; that self we have painfully put together out of our memories, motives, and daily interactions is as fictional a construct as the protagonist of a novel. If we can't trust our memories, there is no identity--no "I"--we can claim with certainty. This concern has been part of the culture for nearly a century, but only now are its disturbing consequences becoming manifest.

    On whose authority are one's memories (and with them, one's self) to be declared reliable? Here too politics enters the fray. Early in the development of psychoanalysis Freud wondered why his patients had become neurotic. In many cases their recollections included memories of sexual molestation in childhood. When these repressed memories were made conscious, (sometimes) the patient's symptoms would be mitigated. Freud's first hypothesis was simple: the need to repress all memory of the painful events was the cause of neurotic symptoms, themselves distortions of those very events.

    This theory not surprisingly found little sympathy in Freud's professional circle, from whose families many of his first analysands were drawn. And Freud himself was the father of three young daughters. Perhaps, too, for a mind as convoluted as Freud's, that theory was ungratifyingly simple and obvious. The "seduction theory" was short-lived; by 1905 Freud had reversed his original hypothesis. By then he had developed his theory of the Oedipus Complex, the claim that children sexually desire the parent of the opposite sex, and that neuroses result from repressing that wish. In this narrative the patient-to-be desires her father; as a result, she fantasizes his act of seduction. So patients' (or anyone's) tales of childhood sexual molestation were ascribed by generations of psychoanalysts to Oedipal fantasy.

    That Freud's switch inspired virtually no argument until the late 1960's is not surprising. Males--particularly of the authoritative medical type--were almost universally seen as the realists, females as misty-eyed romantics with a tenuous hold on truth. So fathers' denials were taken as true, female patients' stories as wish-fulfilling fantasies. Thus was authoritative male power again reasserted over female identities, meanings, and memories. Any woman who fought these interpretations was "resisting"--proof positive of the neurosis that made her memory unreliable.

    With the rise of the women's movement in the late 1960's, there was a reexamination of male interpretations of women, in psychoanalysis as elsewhere. Writers like Kate Millett, Shulamith Firestone, and Phyllis Chesler (and, almost a generation later, Jeffrey Moussaieff Masson) reexamined the "seduction theory" and suggested that Freud had gotten his story right the first time. By the early 1980's the argument that women's recovered memories were real had been largely accepted, if grudgingly, even within psychoanalysis. The pendulum had swung 180 degrees.

    By the mid-1980's it was not uncommon for women who had "recovered" "memories" of early sexual abuse in therapy, spontaneously or under hypnosis, to confront their alleged abusers, sometimes in the courtroom. During the same period there were several large and well publicized trials of day-care workers on similar charges. Popular myth had it that children never "lied" about such things, and popular (as well as professional) opinion saw only two options: they were lying, or their stories were utterly true. Trust persisted in many quarters even when these stories, both the children's and the adults', included bizarre additions like satanic ritual, cannibalism, and multiple murder. In the daycare cases there were a number of guilty verdicts followed by extremely long sentences for those found guilty.

    Eventually psychologists began to understand that, especially for young children and adults under the influence of hypnosis, there was something in between "lying" and "truthfulness." While small children do not spontaneously invent such stories, they are notoriously susceptible to suggestion. And hypnosis is, above all, heightened suggestibility. Work by the cognitive psychologist Elizabeth Loftus (1979) demonstrated the unreliability of even adult short-term memory and observation. Sentences began to be overturned, and there were a few cases of therapists being sued by alleged abusers.

    The result has been a significant backlash, with several influential books (e.g., Crews 1995; Ofshe and Watters 1994; Showalter 1997) taking an absolutist position against the existence of repression and recovered memory. Once again "experts" (predominantly male) are telling "victims" (predominantly female) what their symptoms "mean," denying them the right or capacity to make their own interpretations. It would certainly be naive and dangerous to accept all reports of recovered memory of childhood sexual molestation as true; but there is equal danger in discarding all, sight unseen, as false. The culture as a whole seems to have a problem with gray areas, preferring either-or to both-and or some-of-each. But--in the (typical) absence of reliably corroborating eyewitnesses--there is seldom a clear diagnostic for telling true stories from confabulations.

    So finally we are left in a state of undecidability. In almost any such case, memory (mine, yours, the culture's collective) may represent absolute literal truth; may represent a construction of truth plus embellishments or reorganizations; or may represent pure imagination. There is no way to be sure, and the debunkers' use of one or two or a few demonstrably false constructions merely warns us to be skeptical in all cases--it does not prove that the entire theory of repressed memory is faulty.

    The problems raised, both by our doubts about memory and by the absence of any means by which to resolve those doubts, only exacerbate our other uncertainties about the reality of our selves. If our sense of self--the uniqueness, cohesion, and autonomy that we call "I"--is created and justified by our sense of continuity brought about by reliance on the belief in an unbroken and coherent chain of memories; and if we now have to believe that some if not all of our memories, especially the oldest, are mere ex post facto constructions ... then who are "we"? Even if we no longer surrender to authoritative others the control of our identities via memory, we ourselves must give up those rights as well.

    We are dangerously close to the situation in which Alice found herself after her descent down the rabbit-hole. She is no longer the same size and shape she used to be; indeed, she shifts sizes and shapes uncontrollably and unpredictably. So her physical attributes no longer enable her to establish cohesive identity. At the same time she discovers that the psychic determinants of identity have also turned unreliable: she no longer knows the things she used to know. Her painfully garnered (she is seven years old) school knowledge has vanished. And her practical know-how is continually belied by the special rules of Wonderland.

"Dear, dear! [said Alice to herself] How queer everything is to-day! And yesterday things went on just as usual. I wonder if I've been changed in the night? Let me think: was I the same when I got up this morning? I almost think I can remember feeling a little different. But if I'm not the same, the next question is, `Who in the world am I?' Ah, that's the great puzzle! ..."

"I'm sure those are not the right words," said poor Alice, and her eyes filled with tears again as she went on. "I must be Mabel after all, and I shall have to go and live in that poky little house, and have next to no toys to play with, and oh, ever so many lessons to learn!" ( Alice in Wonderland , chapter 2)

    The connection between these two dilemmas--the loss of certainty of selfhood and trust in memory, and our current taste for group apologetics--may be that the first creates a need for the second. Implicit in public apology is the invocation, and acknowledgment, of shared memory: All of us remember that this event occurred. Such apologies underscore our reliance on presumably shared memory: if we didn't agree that it happened, if we couldn't all be sure that it happened, then the apology would be infelicitous--it couldn't work in our hearts and minds as an apology! So the fact that we accept it as legitimate (if in fact we do) signifies that there is some shared memory we can trust. And that in turn encourages us to go on believing in the myth of ourselves as members of a cohesive society, a valid "we," composed of members such as ourselves, who have at least this group "memory," even if our individual ones are questionable. What Freud hath put asunder, the President of the United States can bring back together.

    But difficulties keep arising out of the diffusion and incoherence of our individual selves and the impossibility of keeping "me" distinct from "them." The latter problem is thrust painfully into our attention by the loosening of common bonds in our multicultural society. In the metaphorical melting pot, you and I might eventually merge. In the newer metaphorical "mosaic" or "salad," the boundary between I and you remains distinct. So even if all of us spoke English (and worse luck yet, it appears that all of us don't!), we cannot expect to "understand" one another. It has become an article of faith that men and women "speak different languages." But we always suspected as much. Now we are assailed by the realization that words don't mean the same thing under all conditions for everyone (as the O. J. Simpson verdict and its aftermath made painfully clear; see Chapter 6). Context--where, by whom, in what tone words are uttered--counts. And in disputes over meanings, those who could always count on having the last word can't any more. Arguments about what kinds of language are fit for public consumption (see especially Chapter 7) become more rancorous as more sides demand a hearing.

    There is public disagreement over the use of troublesome words: for instance, over who determines what constitutes a "slur," and therefore, who determines how a dictionary (once the unquestionable voice of authority) should define the word "nigger," or even if it should be in the dictionary at all. The very dictionary has become overtly political.

    According to an article by Torri Minton in the San Francisco Chronicle (October 17, 1997), the Merriam-Webster's Collegiate® Dictionary, Tenth Edition has aroused the ire of the African-American community because it "does not state immediately that the word is offensive." It does say so, but rather as an afterthought: "1: a black person--usu. taken to be offensive," adding at the end of the complete definition, "It now ranks as perhaps the most offensive and inflammatory racial slur in English." The Merriam-Webster's definition "inflicts extensive emotional damage on young people who have to identify with that definition--because the definition states who they are," according to Jamal-Harrison Bryant, national youth and college director of the NAACP, quoted in the article.

    That is an extraordinary statement. First and most obviously, the idea that anyone--let alone members of historically unempowered groups--can take issue with a dictionary, its makers, or its definitions is shocking. But Mr. Bryant's argument is reactionary rather than revolutionary because it reinforces the old order: the reason it makes sense to create a fuss is precisely that the dictionary is our maker of meanings, our semantic arbiter. The definition in the dictionary states who you are; your identity still depends on how authority views you. True revolutionaries would reject the authority of the dictionary to define them altogether.

    There is an analogous dispute about the use of the word "Holocaust." Must it refer only to the Nazis' killing of six million Jews? Or can it be applied to other cases of mass murder or mass mistreatment of one group by another? Can African Americans appropriate the term to refer to slavery? Does any group have the right to appropriate a word for themselves or their own experience? Again, the use of the word "Holocaust" is only the surface issue; the deeper one is about who gets to decide who can make the language, choose words, assign meanings, mediate between the real-world referent and the concept via language. It is true that "Holocaust" was originally created in Greek, in the course of translating the Old Testament from Hebrew, to render a Hebrew term. But its current capitalized sense did not arise until well after the events it describes, in 1957.

    Like "nigger," "Holocaust" packs a real wallop because both words unlock group memories of horrific events. Because both words have become invested with a special mystique, they have meaning for their users beyond the literal. Victims of both slavery and the Nazis want to hold on to the mystique of their words, want them to retain both their terror and their clout, lest we forget in an age in which memory is vaporous. And by asserting control over words that purport to describe the indescribable, the victims of those histories seek to gain some control over their past through control over the language that describes it. But in the normal course of events, nobody retains a copyright on words. (Even manufacturers who try to do so, with words connected specifically to physical products that they own the rights to--like Xerox, Kleenex(r), and Scotch(tm) Tape--have a hard time keeping them specific and uppercase.) Whatever the morality involved, it is unlikely that the Jewish community can retain sole rights to "Holocaust." Inevitably, over time, it will be used more and more loosely, to cover events that are less and less horrific. The word will lose its mythic and mystical power.

    We find ourselves in a curious situation. We see ourselves as awaiting the dawn of a new millennium, a time of unparalleled scientific understanding, a time when we ought to be free of the silly superstitions of the past. Yet at the same time, words retain their ancient magical powers. We want to regulate language in order to magically control the course of real-world events.

    Language is not "just words." It enables us to establish our selves, and ourselves, as individuals and as members of groups; it tells us how we are connected to one another, who has power and who doesn't. The cases I have looked at in this chapter are mostly small-scale: a word, a phrase, or a sentence. Later chapters will examine the way we construct and dispute larger units--whole narratives, even more potent ways to make meaning. Now more than ever language is construed as something worth fighting for, or at least over.

Copyright © 2000 Regents of the University of California. All rights reserved.
