
Rights Gone Wrong: How Law Corrupts the Struggle for Equality

by Richard Thompson Ford

  • ISBN13: 9780374250355
  • ISBN10: 0374250359
  • Format: Hardcover
  • Copyright: 2011-10-25
  • Publisher: Farrar, Straus and Giroux

Summary

Since the 1960s, ideas developed during the civil rights movement have been astonishingly successful in fighting overt discrimination and prejudice. But how successful are they at combating the whole spectrum of social injustice, including conditions that aren't directly caused by bigotry? How do they stand up to segregation, for instance, a legacy of racism, but not the direct result of ongoing discrimination? It's tempting to believe that civil rights litigation can combat these social ills as efficiently as it has fought blatant discrimination. In Rights Gone Wrong, Richard Thompson Ford, author of the New York Times Notable Book The Race Card, argues that this is seldom the case. Civil rights do too much and not enough: opportunists use them to get a competitive edge in schools and job markets, while special-interest groups use them to demand special privileges. Extremists on both the left and the right have hijacked civil rights for personal advantage. Worst of all, their theatrics have drawn attention away from more serious social injustices. Ford, a professor of law at Stanford University, shows us the many ways in which civil rights can go terribly wrong. He examines newsworthy lawsuits with shrewdness and humor, proving that the distinction between civil rights and personal entitlements is often anything but clear. Finally, he reveals how many of today's social injustices actually can't be remedied by civil rights law, and demands more creative and nuanced solutions. In order to live up to the legacy of the civil rights movement, we must renew our commitment to civil rights, and move beyond them.

Author Biography

Richard Thompson Ford is the George E. Osborne Professor of Law at Stanford Law School. He has published regularly on the topics of civil rights, constitutional law, race relations, and antidiscrimination law. He is a regular contributor to Slate and has written for The New York Times, The Washington Post, The Boston Globe, and the San Francisco Chronicle.

Excerpt

1

Entitlement and Advantage
Now you want me to tell you my opinion on autism…? A fraud, a racket. For a long while we were hearing that every minority child had asthma … Why was there an asthma epidemic amongst minority children? Because I’ll tell you why: the children got extra welfare if they were disabled, and they got extra help in school. It was a money racket … Now the illness du jour is autism. You know what autism is? I’ll tell you what autism is. In 99 percent of the cases, it’s a brat who hasn’t been told to cut the act out. That’s what autism is … Everybody has an illness … Stop with the sensitivity training. You’re turning your son into a girl and you’re turning your nation into a nation of losers.
On July 16, 2008, the radio talk show host Michael Savage managed to offend parents of disabled children, racial minorities, and women in less than a minute and a half—an accomplishment that his rivals Rush Limbaugh and Glenn Beck can only aspire to. The group Autism United demonstrated in front of the New York radio station that carries Savage’s program. One of his sponsors, the insurance company Aflac, promptly gave Savage some unwelcome sensitivity training: it pulled its advertising from his program, explaining that the company found “his recent comments about autistic children to be both inappropriate and insensitive.” Criticism was almost unanimous among doctors, child psychologists, disability rights advocates, parents, and pundits alike. Several local stations dropped Savage’s program in response to public outrage.
Savage is a provocateur—deliberately insulting and extreme, with a loose regard for factual accuracy. According to the clinical psychologist Catherine Lord, autism is “just like epilepsy or … diabetes or a heart condition. [Savage’s comments are] like blaming the child with a heart condition for not being able to exercise.”1 Savage eventually backpedaled, saying his remarks were “hyperbole,” designed to draw attention to the problem of fraudulent diagnosis. He agreed to devote another show to the subject so that parents of autistic children and others could air dissenting views.
Savage, like Limbaugh and Beck, is conservative and contentious, but he is also idiosyncratic—often unexpectedly thoughtful, even cerebral. While Limbaugh and Beck are activists for conservative politicians and causes, Savage is distinguished by a kind of crotchety ennui. As contemptuous of other conservatives as he is of liberals (he called Glenn Beck a “hemorrhoid with eyes”), he treats partisan politics with an aloof disdain: “You’ll have to go to one of the other talk-show hosts to get ‘Obama’s a Ma-a-arxist’ and ‘McCain is a wa-a-ar hero.’”2 As a result, where other conservative talk show hosts are annoyingly predictable, Savage’s off-the-cuff ramblings and intemperate tirades are often surprising and intriguing, and they often contain at least a grain of truth. For instance, Dr. Lord admitted that mild autism is vaguely defined and can be a catchall diagnosis for children with behavioral problems who fit no other category. A year and a half after Savage’s remarks, the psychiatrists in charge of writing the fifth edition of the Diagnostic and Statistical Manual of Mental Disorders announced that they were considering folding several types of mild autism—such as Asperger’s syndrome and pervasive developmental disorder—into a single broad category—autism spectrum disorder—reflecting a new understanding that autism is not a single disorder but rather a range of conditions, from severe mental disabilities to mild emotional abnormalities that can come with extraordinary mental gifts.
There’s a professional consensus that severe autism is a discrete neurobiological condition, but mild cases can be hard to distinguish from less well-defined conditions, such as attention deficit hyperactivity disorder (ADHD) and other vaguely defined “learning disabilities.” Here, diagnosis is difficult and contestable, and expert opinions differ. “We’re fairly good about making the diagnosis of kids who are classically autistic, but as you move away from that specific disorder, it gets harder … [F]or kids who are of average, close to average or above average intelligence, it is difficult to sort out what is eccentricity versus what is a real social deficit,” said Dr. Lord.3
Federal law doesn’t reflect a continuum that includes mild autism and learning disabilities along with eccentricity and poor concentration. For legal purposes, a disability is a discrete condition: either you have it, and therefore have a right to an array of special concessions and extra help, or you don’t. The law doesn’t define learning disabilities with precision, but it does provide a partial definition: “a severe discrepancy between achievement and intellectual ability.”4 In practice, this means that learning disabilities are diagnosed, in large part, by identifying a gap between a child’s performance in academic settings and the performance one would expect of a child of his or her age and IQ.
Civil rights laws entitle all disabled people to special accommodations and services: a blind person might require an exam to be administered orally or written in Braille; a paraplegic might require voice-recognition software or transcription. These accommodations let the disabled reach their potential. Children with learning disabilities are also legally entitled to accommodations and services that other children are not, such as special tutoring and extra time on exams. In theory, just as a blind person needs Braille, a Seeing Eye dog, or a cane to overcome his blindness, a person with ADHD may need extra time to get organized and overcome his inability to concentrate.
But there are some important differences between severe disabilities like blindness and milder learning and behavioral disabilities. First, conspicuous disabilities often trigger reflexive animus or prejudice. Many employers wrongly assume disabled people can’t work, and businesses discriminate against them because of squeamishness and irrational aversion. A business that refuses to accommodate a disabled person might secretly wish to exclude him. Milder disabilities don’t trigger such reflexive prejudice because, for the most part, they are not conspicuous: typically an employer learns of a learning or an emotional disability only when an employee seeks an accommodation for it. Second, most of the accommodations that people with severe disabilities need wouldn’t help a nondisabled person at all. A sighted person wouldn’t benefit from having an exam written in Braille; an able-bodied person wouldn’t get much of an edge from using voice-recognition software or a professional transcriber. By contrast, people with learning and emotional disabilities often enjoy extra time on competitive exams, costly one-on-one tutoring, and exemptions from discipline for disruptive behavior—things that would benefit anyone. Finally, unlike blindness or a physical disability, many learning disabilities are hard to define objectively; as Dr. Lord admits, they are on a continuum with ordinary “eccentricity.” Put these together and you have a recipe for gaming the system: no one would suggest that an eccentric person with a wandering mind has a right to extra time on a timed exam, but someone with ADHD does—and the two can be hard to distinguish. This doesn’t suggest that civil rights for people with mild cognitive disabilities are a “racket,” but it does suggest that they have the potential to encourage opportunism and can lead to unwarranted advantages.
Suppose two children achieve low scores on a competitive timed exam: one has a diagnosed learning disability, and the other doesn’t. Suppose both of the children’s scores would improve dramatically if they had extra time to complete the exam. Is it fair to give one student extra time and not the other? Maybe. In theory, the extra time isn’t an advantage for the person with a learning disability; it’s just the way he copes with his disability. But if the disability is on a continuum with garden-variety poor concentration, then in fairness anyone with poor concentration should be entitled to extra time in proportion to the severity of his concentration deficit. This would, of course, defeat the purpose of a timed exam, which is to test not only skills and knowledge but also the ability to perform quickly.
*   *   *
The Harvard medical student Sophie Currier became a heroine to advocates of breast-feeding in 2007 when she demanded and eventually won the right to a breast-pumping break during a medical licensing exam. No hothouse flower, Currier first took the exam—widely considered to be one of the most challenging of all professional qualification exams—when eight months pregnant and came just short of a passing score. Currier chose to nurse her newborn baby as most experts in the medical profession she was poised to join recommend. But she still needed to pass the exam in order to start her residency at Massachusetts General in the fall. So she asked the National Board of Medical Examiners to give her a break—specifically, an extra hour each day to express and store her breast milk. The board refused, informing Currier that it would accommodate only disabilities as defined by the Americans with Disabilities Act.
Currier wasn’t the first woman to get a less-than-nurturing reaction to her nursing. Until recently, nursing an infant in public was considered indecent exposure and could result in citation or even arrest. Businesses and employers not only refused to accommodate nursing mothers but often deliberately embarrassed them or asked them to leave. The problem isn’t a relic of the era of three-martini lunches and cars with tail fins either. In October 2006, Emily Gillette was flying with her husband and twenty-two-month-old daughter on a Freedom Airlines flight from Burlington, Vermont. Freedom Airlines didn’t give Gillette the freedom to feed her baby; instead, a flight attendant barked, “You need to cover up. You are offending me,” and thrust a blanket into Gillette’s hand. Gillette balked: “No thank you. I will not put a blanket on top of my child’s head.” The flight attendant kicked her off the flight. In response Gillette filed a complaint against the airline with the Vermont Human Rights Commission. Her story inspired over eight hundred women to stage a “nurse-in” at thirty-nine airline ticket counters nationwide.5 This wasn’t the first time lactation took on the character of social protest: a year earlier women staged a “nurse-in” in front of ABC studios after Barbara Walters spoke unapprovingly about a woman nursing her baby on a flight.
A growing number of women have decided that Mother Nature is a more wholesome provider than Gerber or Nestlé and nurse their newborns for a year or longer. In reaction to social squeamishness about breast-feeding and widespread ignorance of its many virtues, some have become “lactivists,” proselytizing to pregnant women and young mothers about the benefits of the breast, lobbying for policy changes to accommodate nursing mothers, and agitating against inhospitable businesses and employers. Their goal is to reverse the decades-long trend toward bottle-feeding, which they see as the result of a conspiracy among hubristic scientists, perverse moralists who eroticize the female breast, and callous industrialists anxious to get new mothers back on assembly lines and behind desks. While breast-feeding was, for obvious reasons, almost universal before the Industrial Revolution, it declined throughout the twentieth century: by 1972 only 22 percent of American mothers nursed their infants.6 Lactivists reject the notion of better living through technology and cite mounting evidence that breast-fed children are less susceptible to illness and emotionally healthier than those who receive only manufactured formula. Scandals involving contaminated baby formula and conspiracies to foist costly baby formula on an impoverished third world have only strengthened their resolve and increased their numbers.
Medical opinion has shifted decisively in favor of nursing: the American Academy of Pediatrics decided in 1997 to recommend that mothers breast-feed their infants for six months. The U.S. Department of Health and Human Services started a campaign to encourage breast-feeding. Public opinion followed quickly, and today bottle-feeding is tantamount to child abuse among the Bugaboo stroller set. As mothers found themselves caught between the old-school squeamishness of blanket-wielding prudes and a trendy new obligation to breast-feed, some feminists began to wonder whether the new ethos was a totem for women’s liberation or a Trojan horse. Hanna Rosin complained in The Atlantic: “In Betty Friedan’s day, feminists felt shackled to domesticity by the unreasonably high bar for housework, the endless dusting and shopping and pushing the Hoover around … When I looked at the picture on the cover of [Dr.] Sears’s Breastfeeding Book—a lady lying down, gently smiling at her baby and still in her robe, although the sun is well up—the scales fell from my eyes: it was not the vacuum that was keeping me and my twenty-first-century sisters down, but another sucking sound.”7
Nursing requires a significant commitment. Nursing mothers must either feed their children directly or express the milk every several hours; failure to do either can lead to painful engorgement, infections, and a reduction in the milk supply. The National Women’s Health Information Center helpfully suggests to working mothers of newborns: “Let your employer know that you are breastfeeding and explain that, when you’re away from your baby, you will need to take breaks throughout the day to pump … Ask where you can pump at work, and make sure it is a private, clean, quiet area … If your direct supervisor cannot help you with your needs … go to your Human Resources department to make sure you are accommodated.”8
Or, failing that, go to court. Sophie Currier v. National Board of Medical Examiners wasn’t even a close contest in the end. The National Board of Medical Examiners, with their creaky old rules and their hand-wringing about the integrity of their precious exam, didn’t have a chance against the sisterhood of virtuous lactation—a powerful fusion of modern feminism and the Victorian cult of pure womanhood, backed by the American Academy of Pediatrics, with Angelina Jolie as glamorous spokesmodel. Currier lost her sex discrimination lawsuit at the trial court but won handily on appeal: Judge Gary Katzmann held that “in order to put the petitioner on equal footing as the male and non-lactating female examinees, she must be provided with sufficient time to pump breast milk.”9
Pumping breast milk is time-consuming and uncomfortable: a machine must be assembled, the milk must be pumped, the machine must be cleaned so it’s ready for next time (which will be roughly four hours later) and disassembled for storage, and the milk must be stored on ice so that it is still fit for the baby to drink later. This could easily consume the entire forty-five-minute standard break for the medical licensing exam, leaving Currier no time to eat or use the restroom. Pumping might not take the entire hour that Currier asked for, but any extra time wouldn’t really give her an edge. She couldn’t use it to think through or reconsider her answers, because the exam was administered in discrete blocks, and once a block was finished, the examinee could not return to it. The board’s concern that the accommodation would compromise the exam seemed unwarranted: after all, Currier wasn’t asking for extra time to take the exam itself.
But actually, she was. Currier had been diagnosed with ADHD and dyslexia; as an accommodation, she had demanded and received a full eight hours of additional exam time—double the normal limit. The board granted this request because ADHD and dyslexia are recognized disabilities within the definition of the Americans with Disabilities Act. Having failed the exam once even with the extra time, Currier had come back to the board with another demand for an additional accommodation.
It was starting to look as if Currier wanted to keep changing the rules until she passed. This may explain why relatively few feminists or lactivists took up her cause. Pondering the lack of support for Currier, Slate’s legal analyst Dahlia Lithwick complained that “if we can’t stand up for a woman with a brilliant career who is fighting to care for her babies as she chooses … you really have to wonder if we can stand up for anyone at all,” but worried that “it’s harder to sympathize … when we learn that she is already getting a whole extra day to take the test because she has ADHD and dyslexia, or that she received extra accommodation in her schooling as well … Suddenly … she isn’t a pioneer for the rights of working moms. She’s a crybaby and an opportunist.”10 This lack of sympathy was widely expressed on blogs and websites devoted to working mothers and lactation rights. “This woman is a disgrace,” groused an anonymous commenter on a motherhood blog. “Not only has she failed the exam, she is expecting everyone else to fix her problems for her … I am a physician, a working, nursing mom, who passed her general and subspecialty boards (written and oral) while nursing without difficulty.” On another site a nursing mother complained, “As a nursing mother who has managed to get through a LOT of daylong exams without whining … I can only say there is a limit to special entitlements … Ms. Currie [sic] is simply an example of entitlement gone too far.” Another woman wrote, “While I sympathize with her for nursing … keep in mind that she did get lots of extra help [and didn’t pass the first time] … Is there any chance of passing the 2nd time? Maybe, with the extra 2 days she has been given for a one day test, plus the extra time given for her to lactate … In a way, I am glad [she won] … now other people will get an awareness and learn how to get … perks … when going through the educational system.”11
Doctors, on the whole, were even less sympathetic. One insisted: “The USMLE is a STANDARDIZED test to assess a minimum competency … If you don’t pass, then the exam is doing what it was intended to do: preventing somebody without a core knowledge of medicine [from] practicing … When the patient dies on the table [because the doctor is too slow] who is going to be supporting her when her excuse is ‘I needed to breast feed at that moment.’” Another echoed this macabre theme: “When your Father has a heart attack, do you want [someone who] is … practicing only because he/she was granted 3 months of time to pass his licensing exam while every other MD passed it in 8 hours?”12
Few observers bothered to distinguish between the accommodations Currier received for her disabilities and those she received to pump. Currier’s supporters typically treated the extra eight hours she received due to her dyslexia and ADHD as irrelevant: “If a man were to have ADHD and dyslexia … [and] were to also have cancer … he’d be given accommodations for his ADHD and dyslexia, and I would think that additional accommodations would be made for his cancer … as well.” Her critics thought that each accommodation—regardless of the justification—compromised the integrity of the exam and gave Currier an unfair advantage: “Allowing some students to have a time advantage, no matter the reason, destroys the integrity of the exam.”13
But there’s a big difference between Currier’s modest request for an extra break to pump and the extra eight hours of exam time she enjoyed as an accommodation of her disabilities. Perversely, federal civil rights law gave Currier an entitlement to the more extreme accommodation while leaving the modest request open to debate (Currier eventually got her pumping break under Massachusetts state law). Contrary to the complaints of her critics, letting Currier take an extra hour to pump doesn’t compromise the exam much, if at all. The extra break is pretty close to the amount of time Currier would actually need to pump and store her milk—leaving her no better off than a non-lactating examinee. You might think that the extra time away from the test would give Currier a recuperation advantage, but any woman who has used a breast pump will tell you that it’s not exactly relaxing or rejuvenating. Currier’s critics often remarked that she won’t be able to ask for extra time in the operating room, but unless she’s lactating again when she needs to perform an eight-hour surgical procedure, she won’t need to. The break simply compensates for the effects of a temporary condition that would otherwise depress Currier’s test results and make the exam an inaccurate measure of her true abilities.
We can’t say the same of the legally mandated accommodation for Currier’s disabilities. ADHD and dyslexia are not temporary conditions. If they affect Currier’s ability to take the exam, they will affect her ability to perform any similar task under time pressure. Of course, an exam isn’t a perfect measure of real-life job skills: plenty of people who do poorly on exams excel in real-life situations, and just as many do well on exams and poorly on the job. But when used to test for minimum competence, the exams serve an important function: they are a cheap and efficient way to screen out the ill prepared and the incompetent. You’d be a fool to entrust your health to a doctor just because she had a high score on her medical boards, but you’d be a bigger fool to entrust it to someone who couldn’t pass them. Here the morbid fantasies of Currier’s critics are relevant: if Currier couldn’t focus on a make-or-break professional exam because of her ADHD, will she be able to focus on a life-or-death time-sensitive medical procedure or complete a complex diagnosis? Perhaps Currier will choose a medical specialty where speed and concentration are never required. But if that’s the reason to give her extra time, shouldn’t anyone willing to limit himself to time-insensitive specialties get extra exam time?
*   *   *
Several federal laws prohibit discrimination against people with disabilities. The most important are the Rehabilitation Act, the Americans with Disabilities Act, the Fair Housing Act, and the Individuals with Disabilities Education Act (IDEA). Together these laws cover employers, landlords, proprietors of public facilities, public schools, and any other organization that receives federal funding. The Rehabilitation Act and the Americans with Disabilities Act define a disability as a physical or mental impairment that substantially limits a major life activity. The IDEA adopts a similar definition but also specifically defines as learning disabled any child who fails to “achieve commensurate with his or her age and ability levels … [and] has a severe discrepancy between achievement and intellectual ability.”14
The idea behind these laws is that the failure to accommodate a disability is a kind of discrimination. Before the 1970s most disabled people were excluded from meaningful social interaction and gainful employment. Blatant discrimination was the norm, and few institutions made any effort to be accessible to disabled people. The all-too-common view was that if someone was unable to attend school, enter public buildings, or hold jobs because of his handicap, it was a tragic fact of life about which nothing could be done.
Advocates for the disabled, inspired by the civil rights movement, began to challenge this widespread idea in the 1970s. They insisted that disabled people could lead productive lives without science-fiction technological cures if society made an effort to accommodate them. In fact, they argued, many disabled people suffered less from the natural consequences of their physical condition than from discriminatory practices and insensitive policies established in disregard of their needs. Many people were openly contemptuous of the disabled, insulted their dignity with condescension and pity, or avoided them out of an irrational squeamishness. And how different were the myriad subtler decisions made in callous ignorance of disabled people and their needs? A wheelchair-bound architect would never design a building with stairs as the only means of ingress and access to upper floors. A deaf school administrator would make sure teachers provided written as well as oral instruction. Just as discriminatory laws once excluded blacks, discriminatory employment standards, educational policies, and architectural design excluded the disabled.
Congress passed the first major law prohibiting discrimination against the disabled—the Rehabilitation Act—in 1973, prohibiting recipients of federal funding from discriminating. It passed the Education for All Handicapped Children Act banning discrimination in public education two years later. But these laws were too mild and too limited: the disabled remained locked out of the mainstream of the job market and public life. When Congress passed the Americans with Disabilities Act (ADA) in 1990, banning discrimination in employment and businesses open to the public, it found rampant discrimination against the disabled that had resulted in widespread unemployment and poverty in their ranks: “Two-thirds of all disabled Americans between the age of 16 and 64 are not working at all … Fifty percent of all adults with disabilities have household incomes of $15,000 or less. Among non-disabled persons [the figure is] only twenty-five percent.”15 The ADA forbids discrimination against people with disabilities and defines “discrimination” to include a failure to make “reasonable accommodations” of their disabilities. The simple nondiscrimination provisions require employers, landlords, and proprietors to treat disabled people as well as they treat people without disabilities. The accommodation provisions require employers, landlords, and proprietors to make special exceptions and take affirmative steps to help the disabled succeed.
The idea that disabled people were limited by laws, policies, and design rather than by their physical handicaps inspired a cumbersome but instructive nominal innovation: the disabled became “differently abled.” For instance, the idea that blind people developed their other four senses to an almost superhuman degree was sufficiently mainstream by 1967 to serve as the premise of the Hollywood film Wait Until Dark. Audrey Hepburn played a blind woman who is terrorized by criminals looking for smuggled drugs. In the climactic sequence, her character fends off a knife-wielding man by plunging her apartment into darkness, giving her the advantage over her sighted assailant. In the same year the television police drama Ironside featured Raymond Burr as the retired detective Robert Ironside, who had been paralyzed by a sniper’s bullet. Aided by a modified police van designed to accommodate his wheelchair, Ironside remained an ace sleuth, using his years of experience and intelligence to solve crimes his able-bodied colleagues couldn’t crack. Under the right conditions, a handicap could be a strength.
Social movements for the disabled followed the lead of Black Power and turned what had been a cause for stigma into a source of power. And just as black pride matured into multiculturalism, with its vague but consistent implication that any social practice that was sufficiently widespread among a racial group was a part of that group’s unique and precious “culture,” some disability rights groups came to see their conditions and unique methods of coping as parts of a distinctive and precious culture as well. For instance, activists for the hearing impaired argued for the existence of a “deaf culture” grounded in sign language.16 Some in the deaf culture movement rejected lip-reading as a demeaning form of assimilation. Some went as far as to reject hearing aids and other medical devices designed to restore lost hearing as an insult to deaf culture: these interventions implied that deafness is a defect to be fixed rather than a condition that gives rise to an equally valid and valuable alternative mode of interaction with the world.
Disability rights laws were inspired by the long-overdue recognition that disabled people could make valuable contributions if given the chance. But the laws could also give effect to a much more questionable claim: that disabilities are not in fact disabling, but simply define different, equally effective modes of perception and interaction. It follows from the stronger claim that any practical impediment to the full and equal interaction of disabled people is the result of some form of invidious discrimination: the wrongful hegemony of bipedal over alternative modes of locomotion prevents a wheelchair-bound paraplegic from easily entering a nineteenth-century building built with grand staircases; the unjust emphasis on concentration and speed keeps a person with ADHD from passing the medical licensing exam.
It can be hard to tell the difference between the natural limitations of a disability and limitations that are imposed or magnified by bigotry, callous indifference, and careless oversight. Until recently, most people assumed that the disabled were simply incapable of making valuable contributions to society, so very few things were designed to accommodate them. Often, minor changes could have accommodated disabled people at relatively little cost. Doors can be widened slightly to accommodate wheelchairs, written materials made available to the deaf to supplement an oral presentation, oral descriptions used to aid the blind. And these changes may inadvertently improve things for a much larger group of people: ramps designed to accommodate wheelchairs also help people with wheeled carts, baby strollers, and wheeled luggage; written supplements to an oral presentation benefit the large number of people who find spoken lectures hard to follow and remember. Rights for the disabled have improved public life dramatically by punishing irrational prejudice and encouraging everyone to rethink habitual practices.
But disabilities are disabling. No amount of design accommodation will allow a blind person to pilot an aircraft safely or help a person with Parkinson’s disease to practice delicate surgery. And even when accommodation is possible, disability rights present difficult trade-offs: How much can we afford to change norms, rules, and physical infrastructure to help people with disabilities? Ramps and elevators to accommodate wheelchairs are expensive; remodeling older buildings can destroy their architectural character; Braille translations are costly and hard to acquire; closed-captioning isn’t free. We’ve correctly decided to make the changes in many cases—but not all. The law requires that employers, landlords, and proprietors make “reasonable” accommodations, inviting a cost-benefit analysis. Courts often find that a disabled person is entitled to some accommodation, but not everything that he or she might want. To accommodate a wheelchair, an employer may have to remodel a bathroom but not a staff kitchen. New construction must be designed to accommodate the disabled, but older buildings can remain inaccessible until they are substantially remodeled. Employees must be able to perform the “essential functions of the job” in order to qualify for mandatory accommodations: that rules out the blind pilot and the surgeon with the shakes.
Unfortunately, thinking of these conflicts in terms of civil rights encourages claimants to ignore the necessity of tough decisions and trade-offs. Sophie Currier and her supporters consistently argued that her demands for accommodation were questions of simple fairness, as if there were no downside to changing the rules just for her. Judge Katzmann, for example, insisted that Currier’s accommodations just put her “on an equal footing” with other examinees, and another Currier supporter was confident that the accumulation of special breaks didn’t matter: someone with ADHD, dyslexia, and cancer should get extra time for all three conditions, she insisted.
There’s a reasonable argument that fairness required giving Currier extra break time to pump. But there’s also a strong argument that giving her two days to complete a time-sensitive exam doesn’t put her on an “equal footing” with the examinees who had only eight hours; it gives her an advantage. The argument for this accommodation was that the exam was biased against Currier and the extra time only corrected the bias. But the exam was “biased” only if speedy performance is irrelevant. And in that case the exam is biased against all examinees who would have passed if they had had more time. If the speed limitation is arbitrary and misguided, the National Board of Medical Examiners should drop it entirely rather than make case-by-case exceptions.
There’s a sound civil rights precedent for such an approach: Title VII of the Civil Rights Act requires an employer to abandon an employment practice that disproportionately screens out members of a minority group and isn’t job related. But the employer has to drop the practice entirely—not suspend it or change it just for members of the minority group. On the other hand, if the practice is job related, the employer can use it regardless of its effect on minority groups.
It’s a conceptual sleight of hand to define one person’s inability to answer questions quickly and accurately as a disability that society must accommodate in order to reach the merits, when the same inability is considered a lack of merit for other people. This is especially true of a disability like ADHD, which many experts believe differs only in degree from what we might simply call a high-strung or absentminded personality trait. It makes little more sense to insist that exceptions to the normal rules simply “make up” for ADHD than it would to insist that an exam that favors smart people “discriminates” against the less intelligent. We all have unique natural strengths and weaknesses that make us better suited to some jobs than to others. Short people are at a disadvantage in basketball tryouts; socially awkward people typically don’t succeed in politics; clumsy people make bad jugglers. Isn’t it possible that people who have a hard time concentrating usually don’t make the best doctors?
*   *   *
Tom Freston may be best known as the man who discovered music videos. He got involved in cable television in 1979, when it still seemed doubtful that people would pay for television when they could get Big Three network programming for free. Along with the legendary adman George Lois, who designed Esquire magazine’s avant-garde covers during its golden age in the 1960s, Freston created the now iconic “I Want My MTV” ad campaign that defined shopping-mall chic during the early 1980s. He went on to turn MTV from a cultural phenomenon into a global media empire, launching VH1, Nickelodeon, Comedy Central, and many other cable channels and creating independent programming to edify the masses, including SpongeBob SquarePants, South Park, and Beavis and Butt-head. Freston’s MTV Networks invented reality television with The Real World, the first television show to place several strangers in a house together and tape their every move. An arrangement that would have been an unambiguous violation of professional ethics if done in the name of science was an unqualified success as entertainment. He became president of Viacom—MTV’s parent company—in 2004, where he remained until 2006. He left with a $60 million severance package.
In 1995, after his son Gilbert was diagnosed with ADHD, Freston enrolled him in the Stephen Gaynor School, a private school specializing in learning disabilities. The Gaynor school isn’t cheap: one year there cost $21,819 in 1999. Still, that wasn’t much more than what a typical New York City private school would cost, and few people with Freston’s wealth send their kids to public schools in New York: private school tuition is simply one of the many extraordinary expenses that wealthy urbanites consider a necessity.
The Individuals with Disabilities Education Act requires all states that receive any federal funding for special education to provide all children with disabilities a free and appropriate public education. The law requires that public schools develop “specially designed instruction, at no cost to parents, to meet the unique needs of a child with a disability.” If the school district fails to provide a child with an “appropriate” education, the parents are legally entitled to tuition reimbursement for private schools. Public schools are often unable to accommodate a child with a rare and severe disability at a reasonable cost: private placement may be better for the child and cheaper for the district. And if a district simply fails to offer an appropriate education due to incompetence or neglect, parents should be able to take matters into their own hands and make sure their child gets the education he or she needs. The law makes sure that disabled children have the same access to a free public education as any other child: the school district must either provide an education that meets their needs or outsource the job to someone who can.
But what about parents who would never consider public school for a nondisabled child? Freston asked the New York City Board of Education to evaluate Gilbert and recommend an educational program suited to his special needs, but nothing the school district had to offer could match the pricey private school Gilbert was already attending. Freston sued the school district for his son’s private school tuition in 1997 and 1998, and the district agreed to compensate him. He later argued that this was a tacit acknowledgment that the district had not offered Gilbert an appropriate education; the district insisted that it paid up only in order to avoid litigation. In 1999 the district offered Gilbert a coveted placement in the city’s Lower Lab School for Gifted Education with a student-faculty ratio of fifteen to one along with additional tutoring and counseling. But Freston never visited the school or met with any of its staff, and later testified that “it was sort of a moot point … I spent the summer in California … The down payment [for private school] had been made.”17
School administrators from coast to coast worried that wealthy parents would game the system to get school districts to pay for private schools—and more. The San Francisco Chronicle described the parents of a student with learning disabilities and anxiety disorder who enrolled in a “$30,000-a-year prep school in Maine—then sent the bill to their local public school district.” According to the Chronicle, “Parents of special education students seek extra-special education at public expense: private day schools, boarding schools, summer camps, aqua therapy, horseback therapy … Special education is a growing portion of budgets in many districts, squeezing out services for other pupils.” Similarly, Time magazine reported that an autistic child’s parents “informed Colorado’s Thompson school district it had to pick up the bill for Boston Higashi’s $135,000 annual tuition.” The New York Times quoted a Westport, Connecticut, school superintendent who faced special education reimbursement requests for horseback riding and personal trainers.18 These reports suggested that a law designed to help the disabled and needy had become a giveaway for the rich and greedy. Mainstream media coverage of “extra-special education” echoed the radio talk show host Michael Savage’s claim that learning disabilities had become a “money racket.”
New York fought Freston’s claim for private school tuition. Joined by a coalition of other large urban school districts, the city argued that “many parents ask public school districts to develop an [educational plan] for their child despite intending from the outset to reject whatever … is developed and then claim that the district is unable to provide [an appropriate education] … These parents, who never intended to use the public schools, unilaterally place their child in the private school in which they planned to enroll their child all along, and then request reimbursement, hoping for a windfall.”19 The cities pointed out that private schools for the disabled often encourage parents to sue local school districts for tuition reimbursement; some even gave parents a list of “contact information for … lawyers and … instructions on how to sue the city.” The cities insisted that in order to “prevent abuse by parents who never intended to use the public schools,” the IDEA allowed parents to seek tuition reimbursement only after their children had tried public schools and they had proven inadequate.
Advocates for the disabled countered that most disabled children do not come from wealthy families; to the contrary, “30 percent of children with disabilities live in foster care … Almost 25 percent … are living in poverty.”20 The advocacy group Autism Speaks warned that disabled children who are forced to “try out” inappropriate public school placements before moving to an effective private school may miss a critical window of opportunity for development: “The effectiveness of intervention depends on early application … When the opportunity presented during this window passes, the squandered potential cannot be regained later.”21 As for the threat of escalating expenses, advocates for the disabled pointed out that private placements accounted for only a tiny fraction of the costs of special education and most private placements involved severely disabled children, whom school districts admitted they couldn’t serve. The cases where parents unilaterally put their children in private school and sued the district for reimbursement were trivial in number.
Moreover, the cost of private placement was typically not much more than an adequate public education: in fact, New York City’s public schools spent more on average for a disabled pupil attending public schools than Freston had requested in reimbursement.22 One of the briefs filed on behalf of the City of New York complained that “in one recent school year, public schools spent over 20% of their general operating budgets on special education students.”23 But, as a brief filed on behalf of Freston pointed out, most of that amount was spent on special education in public schools—not on tuition reimbursement.24 Taken together, these arguments implied, perhaps unintentionally, that tuition reimbursement wasn’t a unique problem; it was just a dramatic example of the cost of special education generally.
Mark Kelman, my colleague at Stanford, and Gillian Lester, now a professor at UC Berkeley Law School, conducted an extensive study of learning disability claims in public schools. They visited a number of local school districts and talked to local school administrators, teachers, and parents to see how the disability rights laws worked in practice. They came away convinced that treating the education of learning disabled children as a civil rights issue benefited rich families at the expense of the poor and actually made it harder to educate most students—disabled and nondisabled alike.
For nondisabled children, the problem is obvious: the law requires school districts to spend more—often a lot more—on costly special services reserved exclusively for children diagnosed with learning disabilities. This might make sense if the districts were awash in money, or if the special services were uniquely helpful to the children with learning disabilities, the way, say, Braille texts are uniquely helpful to the blind. But in fact many of the special services the schools are required to provide for children with learning disabilities would benefit any child: smaller classes with better student-teacher ratios, one-on-one tutoring, immunity from discipline for disruptive behavior, extra time on exams. One administrator Kelman and Lester interviewed worried that only
maybe half the people we label are “really” LD. The problem is that the truly LD kids are irremediable. The 25 percent who eventually show significant changes were probably misdiagnosed. In theory, the LD kids have alternative coping mechanisms, and the educator should try to help the kids tap into these alternatives, [but] slow learners [who aren’t diagnosed as learning disabled] may also have untapped abilities … The difference between the two is merely a matter of degree.
Good teaching, simply, is what makes it work … For the LD kids or for anyone else, good teaching is good teaching.25
Some administrators believe the law requires them to prevent services earmarked for a child with a learning disability from “leaking” over to other, presumably undeserving students who may simply be slow learners. For instance, if a student with a mild learning disability attends class with nondisabled students and receives one-on-one tutoring during the school day, can the tutor also help other kids who have questions about the day’s lesson? While some school officials think they must prevent the diversion of special education resources to nondisabled students, others welcome “leakage” as a way to compensate for the inevitably imprecise diagnosis of learning disabilities. “This way, the sharp categories formally exist, but all students who need assistance … get it,” said one California administrator.26
Special education services eat up a growing share of the public school budget in many districts. In 1979 there were 796,000 students diagnosed with learning disabilities; in 2003 there were 2,848,000, and the number continues to grow at a rapid pace. Perhaps too few students were diagnosed with learning disabilities in the 1970s, but as a larger and larger percentage of students are said to have a “disability” that keeps them from learning, one has to wonder whether the cause is truly neurobiological, or whether it’s political and social. Under the IDEA, schools that fail to effectively educate disabled children can be made to pay for private school tuition. But the public schools—especially those in large cities like New York—are failing to educate many of their students who aren’t disabled too. For instance, in 2006 over 3 percent of all the students served by the Washington, D.C., school district were in private placements at a cost, according to The Washington Post, of 15 percent of the district’s entire budget.27 But, as two special education experts acknowledged, “the D.C. schools struggle to provide an adequate education to any of their students. Disabled students are entitled … to demand an adequate education … The nondisabled students … lack the same mechanism for exiting failing schools.”28 Contrary to the civil rights theory underlying the IDEA, disabled students who don’t receive an adequate education aren’t necessarily being discriminated against; tragically, they’re often receiving the same-quality education as everyone else.
The civil rights approach to special education also disserved many disabled children—especially those from poor and minority families. Historically, special education has been split along the lines of family income and race. Culturally unsophisticated children—often poor blacks and Latinos and poor people who had moved from rural to urban areas—accounted for the lion’s share of children labeled “slow,” mentally retarded, emotionally disturbed, or culturally deprived. These students were typically either expelled from school or shunted off into dead-end special ed classes. The problem was so pervasive that civil rights activists in the 1950s and 1960s worried that special ed had become a cloak for racial discrimination and lobbied hard for provisions designed to ensure that minority students were not segregated from mainstream public education.29
Meanwhile, the category of “learning disability” emerged due to the efforts of wealthier, predominantly white families in the 1950s and 1960s who saw their underachieving children slip through the cracks of the educational system. Armed with psychological research that had identified discrete neurological causes (such as dyslexia) for certain cases of poor academic performance, they lobbied for a new category that would distinguish their children from the “mentally retarded” and from children who were simply lazy or slow—a recognition of a discrete condition that did not actually decrease intelligence, but only masked it. In studies of children with learning disabilities published in the 1960s and early 1970s, 98.5 percent were white and 69 percent were of middle-class or higher socioeconomic status.30
Today’s learning disability rights laws are a result of the efforts of these two groups: litigation to prevent the isolation and expulsion of retarded, emotionally disturbed, and hyperactive children eventually led to the Education for All Handicapped Children Act in 1975, now renamed the Individuals with Disabilities Education Act. Special education under the IDEA can range from reimbursement of expensive private school tuition to isolation in a dead-end class with “slow” children. Kelman and Lester worry that poor children typically receive very different treatment under the IDEA mandates than do the children of wealthy parents, who have the wherewithal to pressure school districts for better and more costly options: “The IDEA system … permit[s] relatively privileged white pupils to capture high-cost … in-class resources that others with similar educational deficits cannot obtain while, at the same time, allowing disproportionate numbers of African-American and poor pupils to be shunted into [dead-end special ed] classes.” There was even more reason to worry that the IDEA system benefited the rich at the expense of the poor in the case of demands for tuition reimbursement like Tom Freston’s because only wealthy parents could afford to send their child to an expensive private school and sue for reimbursement later. As the coalition of urban school districts warned in its amicus brief: “Every dollar spent on tuition reimbursement is a dollar that can no longer be spent to improve public special education programs … [This harms] students with the greatest need for public services, namely those whose families cannot afford to seek services outside the public school system.”31
Those families face deteriorating schools with large classes and dramatically reduced extracurricular activities. In New York City, kindergarten classes averaged 22 students in 2009, and elementary and middle school classes averaged 25.8 students.32 In California, budget cuts have made classes of over 30 students commonplace, and many students have to pay for extracurricular activities such as sports and music out of their own pocket—if they are offered at all.33 It’s easy for parents to argue that public school classes don’t offer an adequate education to their learning disabled children when they don’t offer an adequate education to anyone. Given the state of many American public schools, who can blame parents for seeking private alternatives or trying to finagle extra resources for their children? And even the top public schools can’t compete with the best that money can buy. New York’s offer of a much-sought-after spot at the prestigious Lower Lab School for Gifted Education paled in comparison to the education Gilbert Freston was receiving at the private Gaynor school, where he enjoyed a four-to-one student-staff ratio: the head teacher at Gaynor suggested that the city’s proposed class size of fifteen “could be a bit overwhelming.”34
Freston insisted that he sued as a matter of principle: after taking his case all the way to the U.S. Supreme Court, he donated the tuition reimbursement that he was awarded to tutoring for public school children. But all things considered, Freston’s stance is somewhat perverse: What sound moral principle would force cash-strapped public schools to provide a gourmet education for some students while others must make do with a dog’s breakfast?
In 2009 students with learning disabilities accounted for almost half the entire population of disabled students receiving special services under the IDEA. It’s no accident that the explosion of learning disability diagnoses comes at the same time the public schools are increasingly troubled by overcrowding, spotty teaching quality, and violence. The strongest students manage to learn despite overcrowding and poor teaching, but weaker students don’t. So while all students suffer from overcrowding and indifferent teaching, poor performers—whether diagnosed with disabilities or not—suffer most. The parents of such students are right to insist that the schools are failing to help their children realize their potential, and failing them more dramatically than they are failing students who learn easily and without much help. In that sense, poor schools are inherently discriminatory: they make any student who has difficulty learning—for whatever reason—worse off than students who learn easily. But of course in this sense any poorly provided public service “discriminates” against the people who need it most: badly run hospitals discriminate against the injured and the sick; incompetent police departments discriminate against people living in crime-ridden neighborhoods; inadequately maintained parks discriminate against people without backyards.
The solution is obvious: better public services for everyone. But the IDEA doesn’t make the public schools better; instead, it shifts resources to a small fraction of the larger group of people who need them most. This might make some sense if that small fraction were especially injured by inadequate education or if they would uniquely profit from the extra resources. But if, as many educators believe, these children need the same things that any other student needs—good teaching in small classes—then it’s wrong to treat their needs as inalienable civil rights when we treat the needs of other students as luxuries that nearly bankrupt districts can’t afford. At any rate, the IDEA doesn’t even try to find out whether children with learning disabilities get more out of extra resources than other children would. Instead, the law mandates that some children should have more than others whether or not they need it more or will benefit more from it. All in the name of equality.
*   *   *
Dr. Paul Steinberg, a Washington, D.C., psychiatrist, argues that many students with what we call learning disabilities may in fact simply learn differently than other students and excel in different areas: for instance, “attention deficit disorder” may be a valuable asset in situations that demand spontaneity. “Essentially, ADHD is a problem dealing with the menial work of daily life, the tedium involved in many school situations and 9-to-5 jobs … [but] in many situations of hands-on activities or activities that reward spontaneity, ADHD is not a disorder.” But in today’s economy of technical and professional specialization, concentration is king, spontaneity is less valued, and impulsiveness can be ruinous: “What once conferred certain advantages in a hunter-gatherer era, in an agrarian age or even in an industrial age is now a potentially horrific character flaw.”35 Of course there have always been tasks that required concentration. But in past eras, a lot of things didn’t require sustained concentration: people we now would diagnose with ADHD could be great hunters, gladiators, knights, traveling minstrels, or rich aristocrats who didn’t need to work. During the Industrial Revolution, at least until the era of Henry Ford and modern management science, factory managers expected that workers would daydream and lose focus on the job. By contrast, in the information economy it’s harder and harder to find a good job where focus and detail orientation are optional.
This suggests that ADHD—even if it is the result of a discrete neurological condition—isn’t really a disability in the way that blindness, paralysis, severe autism, or even dyslexia is. Steinberg suggests we abandon the idea that some people have an attention deficit and instead think of everyone else as blessed (or cursed) with “attention-surplus disorder.” He argues that “children … with attention disorder may need more hands-on learning. Some may perform more effectively using computers and games rather than books. Some may do better with fieldwork and wilderness programs.” Steinberg urges that we “change the contexts in schools to accommodate the needs of children who have [ADHD], not just support and accommodate the needs of children with attention-surplus disorder.” Changing the context doesn’t suggest case-by-case exceptions to a general rule: it suggests a new pedagogical approach. If some children learn better using computers and fieldwork, we should introduce these teaching methods, and there’s no reason to limit them to children with diagnosed learning disabilities. Making viable alternatives available to all children who would profit from them would make the accommodations more equitable, further the important goal of integrating disabled children into regular classes, and eliminate any stigma now attached to “special education.”
Of course that’s practical only if games, fieldwork, and wilderness programs prepare children for life in the modern economy as well as “tedious” conventional schoolwork. Unfortunately, such ideas are often more attractive as therapy than as pedagogy. Educators tried out similar new and untested pedagogical methods in the 1960s and 1970s: when I was in grade school, for several years we learned “new math” and were graded on the quality of our ideas, regardless of whether they were well composed using proper grammar and sentence structure. The idea behind these new pedagogical methods was much the same as Dr. Steinberg’s idea: different children have different learning styles, and many children aren’t engaged by conventional pedagogy. These experiments were often short-lived because the new methods didn’t teach children as effectively: in order to tackle advanced subjects such as trigonometry, calculus, and college-level composition, you needed to have mastered the “old” math, with its multiplication tables and long division, and the boring old rules of grammar, sentence structure, and vocabulary. Moreover, students needed the mental discipline that the old methods imposed: part of the point of rote memorization was to teach children to focus on a single task for long periods of time.
Dr. Steinberg points out that “each child and adult learns and performs better in certain contexts than others.” Of course, this is true whether the person in question is diagnosed with a learning disability or not. It’s best to encourage people to pursue interests and careers for which they are well suited. Let’s face it: in many jobs a wandering mind isn’t a superficial condition that somehow masks an employee’s good performance; it’s a flaw that makes for poor performance. This is true whether the cause is an immutable neurological condition, inadequate practice, or a simple lack of diligence. We should help people with short attention spans find jobs where sustained attention isn’t important, not artificially inflate their grades and test scores and kid ourselves that concentration and speedy performance aren’t important in jobs where they are.
*   *   *
From the beginning, the precise rationale for disability rights has been unclear. Disability rights enjoyed widespread support among both liberals and conservatives, but for very different reasons. That has made it hard for courts to know how to interpret the law and easy for new claimants to press for expanded application and new entitlements. Liberals typically saw the extra resources and special exceptions for students with learning disabilities as civil rights that advance equality—part of a larger set of egalitarian social welfare policies designed to level hierarchies based on what philosophers might call “morally irrelevant” differences. But it’s unclear how far liberals will go in pursuit of this conceptual goal. Arguably all differences in innate ability and intelligence are morally irrelevant. But of course differences in ability—whether due to disabilities or not—are very relevant practically. Disability accommodations have less to do with the mainstream civil rights goal of equal opportunity than with equality of result—forbidding even-handed policies and practices that happen to disadvantage the disabled. Requiring employers, proprietors, landlords, and schools to ignore differences in ability and absorb the extra costs of compensating for such differences goes further than simply prohibiting irrational discrimination: it’s effectively a redistribution of wealth. In many cases, that redistribution makes sense; for instance, forcing building owners to pay for wheelchair ramps when they remodel or forcing employers to make allowances for blind or handicapped employees gives a long-neglected and disproportionately impoverished group of people a chance to lead fulfilling and constructive lives. But we should evaluate such accommodations as social welfare policies—not categorically accept them as inalienable civil rights.
Conservatives, by contrast, saw the disabled as among a small group of deserving unfortunates who suffer through no fault of their own—unlike the much larger group of losers who have their own shiftlessness and irresponsible behavior to blame for their misfortunes. Disability rights correct for variations in human ability caused by accidents and genetic randomness while leaving more patterned and predictable inherited inequalities firmly in place. Educational accommodations for students with learning disabilities are a conspicuous example: a diagnosis of a learning disability often effectively allows successful parents to pass their advantages in academic accomplishment along to their less successful children. Perhaps this is why many conservatives have supported fairly aggressive disability rights but have opposed much milder civil rights for other groups. To the extent disability rights protect existing socioeconomic statuses, they are consistent with a conservative tradition at least as old as Edmund Burke that places a high value on continuity and social stability. But they are at odds with the more widely accepted libertarian conservatism of today, which emphasizes self-reliance, free enterprise, and the discipline of the market. And they are certainly at odds with the civil rights tradition, which abhors hierarchies of birth and prizes social equity.
Disability rights serve two important purposes: they prohibit discrimination based on irrational aversion or inaccurate stereotypes, and they help to integrate disabled people into the mainstream of society. As for simple discrimination, given the long history of aversion to and prejudice against the disabled, it makes sense to require reluctant employers and proprietors to give disabled people a chance—even when doing so requires some extra effort or expense. There’s no doubt that simple prejudice against the disabled is still a serious and pervasive problem. Like race and sex, most disabilities are conspicuous: a blind person with a cane or Seeing Eye dog, a paraplegic in a wheelchair, or a mentally ill person muttering to herself makes an obvious target for the bigoted employer, landlord, or proprietor. But the milder emotional and learning disabilities aren’t conspicuous; in fact, they weren’t considered disabilities at all until recently. When people with these conditions do poorly in school or at work, they aren’t suffering because of irrational prejudice or inaccurate stereotypes; they’re suffering from an accurate assessment of their performance.
Disability rights also help to integrate the disabled into the mainstream. Congress noted the isolation and resulting impoverishment of the disabled when it passed the Americans with Disabilities Act in 1990. Again, people with more severe and conspicuous disabilities are the ideal beneficiaries of such an integrationist policy. Without mandatory accommodations, people with severe disabilities would be unable to compete for jobs, unable to communicate effectively, and unable to get around in cities designed for the able-bodied. But mild emotional and learning disabilities don’t prevent people from finding gainful employment or making their way in the world. They may prevent some people from getting the jobs they most want, but many, many nondisabled people can’t get the jobs they most want because they lack the required skill, temperament, or intelligence—deficits that are at least partially determined by discrete neurological conditions too. That’s not a civil rights issue—that’s life. Mandatory accommodations for the disabled involve the redistribution of resources—to disabled employees from employers, to disabled customers from proprietors, and to disabled students from the other students who compete with them for teaching resources and high grades. Deciding when such redistribution is justified requires difficult trade-offs between competing policy priorities—not an inflexible legal entitlement.
Rock of the Aged: Civil Rights for Older Workers
Google was one of the few high-tech Silicon Valley firms that emerged from the dot-com crash of 2001 not only unscathed but actually stronger. By 2003 its Internet search engine had become so popular that its lawyers had to worry that the “Google” name might become a generic term for Internet search (“I’ll Google it”), jeopardizing its legal status as a trademark. Having conquered web searches, Google moved on to dominate Internet maps, directions, and real-time traffic conditions. It launched an ambitious—some said quixotic—plan to digitally scan every book in the world for a text-searchable database: Google Books collaborated with some of the world’s largest libraries and alienated some of the largest publishers and literary agencies, who organized a lawsuit to block the project. Another Google project bested some of the world’s most powerful military intelligence organizations: in 2006 a satellite image from Google Earth revealed top secret U.S. operations in Pakistan in a photograph sharp enough to render the painted lines on the tarmac. When Google went public in 2004, the offering was among the most anticipated in Silicon Valley history, although many were skeptical that the company could hold its initial valuation of $27 billion. In 2009, in the midst of the worst economy since the Great Depression, Google was worth $140 billion.36
Google had done so much so quickly with a huge team of energetic, talented, and fiercely dedicated employees. The Googleplex—its main offices in Mountain View, California—is a sprawling campus of four large buildings, each surrounded by lawns, courtyards, and, this being suburban California, ample parking. The Googleplex is a cross between a university quadrangle and the ultimate party house. Google offers its employees three free meals a day at eleven cafeterias, free laundry, free hairstyling, a state-of-the-art gym complete with stationary lap pool, a volleyball court, lounges with pool tables, foosball, video games, and replicas of a spaceship and a dinosaur skeleton. Google provides bicycles and Segway scooters for employees to move around its campus, where they work on laptops at informal workstations, “yurts,” and “huddle rooms” that encourage collaboration and out-of-the-box thinking. The company provides toys to entertain the children of employees, and dogs are always welcome.
With all of these postmod cons, Google employees never need leave work. “We have a preference for those who like to work hard and play hard and are enthused about working on collaborative global teams,” announces each Google job listing. Brian Reid started working at Google in 2002 as director of operations and engineering. A former professor of electrical engineering at Stanford, Reid was an early Internet pioneer. He helped invent one of the first Ethernet networks at Stanford in 1981 and worked on the first e-mail protocols and on the first Internet search engine—AltaVista—in 1995. Reid was fifty-two years old when he started work at Google—a good two decades older than Google’s founders, Larry Page and Sergey Brin. Reid received positive evaluations from his superiors, in which they described him as “very intelligent” and “creative” and complimented his “confidence when dealing with fast changing situations” and his “excellent attitude.” But, in what was to prove a foreboding observation, his first performance review noted that “adapting to the Google culture is the primary task for the first year here … Google is simply different: Younger contributors, inexperienced first line managers, and the super fast pace are just a few examples.”
Reid lasted less than two years at Google. During his short tenure on the company’s campus, he was the target of a series of age-related jokes and disparaging comments. His immediate supervisor called him “lethargic” and dismissed his ideas as “obsolete” and “too old to matter.” His co-workers referred to him as an “old man” and an “old fuddy-duddy.” A CD jewel case served as an office placard for Google managers: some quipped that Reid’s should be a vinyl LP. Google’s management began to ease Reid out in October 2003 when they replaced him as director of operations with someone fifteen years younger. Reid was moved to a new position in charge of a pilot program that would allow Google’s engineers to earn graduate degrees on-site at Google. The new program turned out to be little more than a place to park Reid before driving him out: the degree program was never staffed or funded, and in January of the next year Google’s top management worked on “a proposal … on getting [Reid] out.” On February 13, 2004, the vice president of engineering, Wayne Rosing, told Reid he was not a “cultural fit” in Google’s engineering department. Reid applied for positions in other departments but the company’s management had already made sure Reid would not find a new home in the Googleplex. Various department heads had coordinated by e-mail to adopt a uniform line with Reid. “My line at the moment is that there is no role for him,” wrote the vice president of business operations. “We’ll all agree on the job elimination angle,” advised the human resources director, Stacy Sullivan. Five months later, Reid sued Google for age discrimination.37
Much of Reid’s complaint focused on Google’s corporate culture. Reid was let go because he wasn’t a cultural fit at Google. He argued that the atmosphere at the Googleplex was biased against older workers. Was Google’s culture a culture of youth? In many ways it was: Google used physical activities, such as skiing and hockey, as a way of team building; the Googleplex is modeled on a college campus; the decor is bright, colorful, and eccentric, like a dream house designed by MTV; and many of the perks offered to employees—from foosball to table tennis to free T-shirts—are likely to appeal to the young. But it’s just these attributes that make Google both a successful business and a beloved employer. Google is widely considered one of the best places to work in the high-tech industry. Its youthful culture is a deliberate attempt to cultivate the fresh, innovative thinking that has made the company a success. Even the age-related comments Reid rightly complained of may have reflected this emphasis on novel, out-of-the-box thinking rather than age-based animus. It’s not inconceivable that an older person could fit in at Google; in fact, the company hired Reid with the expectation and hope that he would adapt to the Google lifestyle. Is it a civil rights issue when an employer wants a workforce that’s young at heart?
*   *   *
In 1967 roughly half of all private job openings were explicitly closed to applicants over the age of fifty-five; one-fourth were limited to those forty-five or younger. Older people were disproportionately unemployed and stayed jobless longer than younger people. Spurred to action by the pathetic image of a jobless older person reduced to eating pet food in his cold-water flat, Congress passed the Age Discrimination in Employment Act of 1967, or ADEA. Naturally, the ADEA was modeled on the Civil Rights Act of 1964, which prohibited discrimination on the basis of race, color, national origin, religion, and sex. Unlike the 1964 act, the ADEA sailed through a Congress made up overwhelmingly of middle-aged and older people, buoyed by a broad consensus that discrimination on the basis of age was cruel, inefficient, and unjust. “Nobody defends such discrimination, and—it ought to be stopped,” declared a labor union representative in his testimony before Congress.
As expected, employers quickly took down the “Elderly Need Not Apply” signs after the ADEA was enacted. But unemployment among the elderly actually increased in the ten years after the ADEA was passed.38 Was the reason lack of enforcement? Did employees not know their rights under the ADEA? To the contrary, there was a “backlog” of age discrimination lawsuits that had grown every year since the ADEA was passed. The problem wasn’t in enforcement; it was in the design of the ADEA. Age discrimination was different from discrimination based on race, sex, and religion. It was oddly lopsided: older people suffered discrimination in hiring, but once hired, they fared as well as or better than younger workers. The Department of Labor’s commissioner on aging found that older employees were “frequently preferred over the younger” for promotions and received favorable treatment on the job.39 That’s still true today. “If you are old and have a job, you are less likely … to be fired,” said Alicia Munnell of the Center for Retirement Research at Boston College in 2009.40 There was really no need for a civil rights law to protect older workers; it was older applicants for jobs who needed protection. This was still the case over forty years after the ADEA was passed: in the recession of 2009 older people who were laid off were out of work for an average of 22.2 weeks; by contrast, younger people were unemployed an average of 16.2 weeks.41
The ADEA didn’t do much for job applicants, because an unemployed person is unlikely to sue for a job he didn’t get. A job applicant has very little information about the hiring process, the decision makers, or the qualifications of other people vying for the same job: it’s hard to know whether discrimination or legitimate factors made the difference. Moreover, litigation takes time—time that could be better spent continuing the job search. That’s why only about 9 percent of all employment discrimination claims challenge hiring decisions.42 By contrast, once hired, people become invested in their jobs and are much more likely to sue to keep them and to advance in them. Accordingly, almost 80 percent of all employment discrimination claims involve firing and denied promotions. Unsurprisingly, then, the ADEA encouraged current employees to sue over promotions and termination—where age discrimination wasn’t much of a problem. The ADEA empowered older workers who, as a group, were already better off than younger employees, but did little to solve the problem of chronic unemployment among the elderly. In fact, because the law gave older people a new weapon to use against employers—a weapon that they were much more likely to use once hired—the law probably encouraged employers to discriminate against older job applicants, making the problem the law was designed to solve worse.
The civil rights solution didn’t fit the problem of unemployment among older workers very well. Even when the ADEA was passed, it was well-known that “age discrimination is … seldom a matter of blind or arbitrary prejudice which often exists for reasons of race, creed, color, national origin, or sex … [It] is a more subtle series of problems … a combination of institutional factors and stereotyped thinking.”43 But the civil rights approach focused only on stereotypes, neglecting the subtler institutional factors that are actually the biggest cause of the problem. Congress basically cribbed from the 1964 Civil Rights Act, extending the same civil rights protections to a new group—the elderly—as if age discrimination were caused by irrational bigotry against the aged and inaccurate stereotypes about their qualifications. For instance, Secretary of Labor Willard Wirtz argued that age discrimination reflected outdated attitudes, “a failure on the part of employers to realize how technology and the life sciences have combined to increase the value of older people’s work”;44 and William D. Bechill, the commissioner on aging, insisted that “stereotyped attitudes about the ability of older people … play a major role in barring older workers from fair … consideration.”45
Stereotypes about older people are a problem, but as the NYU law professor Samuel Issacharoff and his coauthor Erica Harris point out in a detailed discussion of the ADEA, age discrimination is often driven by simple economics. Labor economists describe the typical career path in terms of a “life cycle.”46 The typical employee needs training early in her career, becomes a seasoned and efficient worker in mid-career, and then slows down a bit near the end of her career. If wages and salary perfectly matched the productivity of each employee, most people would receive very low compensation early in their careers, when they are still learning—in some cases the costs of training might be greater than the contribution new employees make, suggesting an “apprenticeship” model where the new employee should work for very little. Then pay would rise very quickly after the employee became accomplished. Finally, pay would level off and eventually drop as the older employee slowed down and thoughts turned from work to a well-earned retirement filled with grandchildren, gardening, and exotic travel (at least that’s what I hope to be looking forward to at seventy).
But of course this isn’t how compensation is usually set. Typically, compensation rises relatively slowly but inexorably: pay cuts reflect a firm in financial crisis or a sanction for unacceptably poor performance. This type of arrangement involves an implicit bargain between employer and employee: the junior employee builds up a debt while being trained, which she pays off as she gains skills, eventually banking a surplus, which she will collect as she ages and her compensation exceeds her value to the firm. Viewed sympathetically, it’s a humane and practical model based on a long-term relationship: younger people receive a decent wage and avoid the indignities of apprenticeship, and older workers enjoy compensation that reflects the esteem and respect they’ve earned throughout a long career—even if it exceeds their current productivity.
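To make the implicit bargain concrete, here is a minimal numeric sketch of the life-cycle model. Every figure in it is invented purely for illustration; none of it comes from Issacharoff and Harris or from any real payroll data.

```python
# A stylized career, tracking the implicit "bank" an employee builds
# while productivity exceeds pay and then draws down near retirement.
# All numbers are hypothetical, chosen only to mirror the shape of the
# life-cycle model described in the text (figures in $1,000s per year).

def productivity(age):
    """Annual value contributed: low in training, peaking mid-career."""
    if age < 30:
        return 40 + 2.0 * (age - 22)   # still learning the job
    if age < 55:
        return 56 + 1.6 * (age - 30)   # seasoned and efficient
    return 96 - 2.5 * (age - 55)       # slowing down near the end

def wage(age):
    """Annual pay: rises slowly but inexorably with seniority."""
    return 45 + 1.2 * (age - 22)

balance = 0.0
for age in range(22, 68):
    balance += productivity(age) - wage(age)
    if age in (25, 35, 50, 58, 67):
        print(f"age {age}: banked surplus so far = {balance:+,.0f}")
```

On these invented numbers the employee runs a deficit until her late twenties, banks a growing surplus through her fifties, and draws it down thereafter—and a worker hired for the first time at sixty would step straight into the draw-down phase with nothing banked, which is precisely the hiring disincentive described next.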
But the deferred compensation and cross subsidy inherent in this approach encourage three types of age discrimination. Two are necessary parts of the implicit bargain and therefore defensible if one accepts its terms; one is an indefensible breach of the implicit bargain by the employer.
First, an employer might refuse to hire older employees because they will enter the pay scale at just the point where compensation exceeds contribution, never having “banked” the surplus by working when their contributions exceeded compensation. If the firm pays according to seniority, or guarantees retirement benefits at a certain age, the newly hired older worker will arrive just in time to collect the subsidy. Even an employer that is happy to retain older workers hired early in their careers or in mid-career might not want to hire workers already near the end of their careers. Of course, such an employer could hire a senior employee for a wage she’s worth—even if it’s much less than other workers of the same seniority who have contributed to the firm during their most productive and less well-compensated years. But this would require the firm to be explicit about the implicit cross subsidy involved in the salary structure. An advantage of the employment life cycle is that the subsidies are implicit. Making them explicit would be bad for employee morale: mid-career employees would resent subsidizing older and younger employees, and older employees would lose esteem and respect.
Of course, discrimination in hiring is just what the ADEA was supposed to prevent. But ironically the law may have made this kind of discrimination more likely by giving employers an additional reason to avoid hiring older people. Under the ADEA, an employer who hired an older employee on a trial basis would have to worry about a lawsuit if things didn’t work out. It’s not surprising that employers responded to the ADEA by eliminating conspicuous discriminatory policies while continuing to discriminate against older job applicants in less obvious ways.
Second, a smoothly and inexorably rising pay scale assumes mandatory retirement—a form of age discrimination. At some point the surplus banked in mid-career runs out. Before the ADEA was amended to prohibit the practice, most employers openly discriminated by imposing a mandatory retirement age. The ADEA originally covered only employees between forty and sixty-five, but the upper limit was eliminated in 1986, effectively outlawing most mandatory retirement.47 As a result, many older workers had the option of staying on at high compensation long after having recouped any surplus they had contributed in the middle of their careers. Many employers adjusted to the new legal regime by replacing “lockstep” compensation based on seniority with bottom-line-oriented compensation such as merit pay and low base salaries supplemented with productivity bonuses. This shift ultimately affected more than compensation: it was part of a change in attitude, the demise of a more collaborative and genteel business relationship and the rise of a more beady-eyed, “eat what you kill” approach. As Issacharoff and Harris put it, “Employees who currently have expectations of wages above marginal output will appear to be an unaffordable luxury in highly competitive markets.”48 For example, an internal Wal-Mart memo that came to light in 2005 notes disapprovingly that “the cost of an associate with seven years of tenure is almost 55 percent more than the cost of an associate with one year of tenure, yet there is no difference in his or her productivity. Moreover, because we pay an associate more in salary and benefits as his or her tenure increases, we are pricing that associate out of the labor market, increasing the likelihood that he or she will stay with Wal-Mart.”49 Perhaps the atmosphere that Brian Reid encountered at Google reflects this harder-edged employment relationship and the resulting contempt for employees even slightly “past their prime.”
Third, an unscrupulous employer may breach the implicit bargain and fire a loyal employee just as she is about to recoup the surplus she built up during her most productive years: a nasty bait and switch. Ironically, the ADEA does not prevent employers from sacking older employees just before their pensions vest—provided they do so out of simple greed and not bias. When sixty-two-year-old Walter Biggins was fired just a few weeks before his pension benefits vested, he sued for age discrimination and won a jury trial verdict, which was sustained on appeal. But the Supreme Court held that Biggins hadn’t suffered age discrimination. According to the Court, “It is the very essence of age discrimination for an older employee to be fired because the employer believes that productivity and competence decline with old age … on the basis of inaccurate and stigmatizing stereotypes.” But in Biggins’s case “the decision [was] not … the result of an inaccurate and denigrating generalization about age, but … rather … an accurate judgment … that he indeed is ‘close to vesting.’”50 Cheating an employee of his pension doesn’t involve anti-elderly bias—just evenhanded avarice. As a result, the Court held that age discrimination laws don’t prohibit one of the most common tricks employers use to cheat older employees. In fact, the employer’s conspicuous avarice was almost a defense to the age discrimination claim. “Inferring age motivation … may be problematic in cases where other unsavory motives … [are] present,” opined Justice Sandra Day O’Connor.
But the Court didn’t leave Walter Biggins without a remedy. Although his employer didn’t discriminate on the basis of age, it did violate the federal Employee Retirement Income Security Act, which regulates employee pension plans. Biggins doesn’t imply that there’s nothing wrong with cheating an employee of his nearly vested pension. Instead, it suggests that much of what we call age discrimination may be better dealt with outside the civil rights framework.
*   *   *
Instead of reducing unemployment among the needy elderly, the ADEA benefited older workers who had jobs—the group that was already “frequently preferred over … younger” employees. The ADEA let employees forty and older—and only employees forty and older—sue when they were fired or passed over for promotion: unlike every other civil rights law, the ADEA expressly rules out most “reverse discrimination” lawsuits. Of course, if discrimination against the elderly is the problem, perhaps it makes sense to limit the law’s protection to older plaintiffs. But such an asymmetry does amplify the concern that a law designed to ensure fair treatment will become a boondoggle for a specific, politically connected group. We’ve all heard this concern voiced loud and long in the context of race-based affirmative action, which California’s former governor Pete Wilson famously condemned as a “racial spoils system.” Ironically, the concern is much more valid—though less noticed—in the context of age-based civil rights. Although the ADEA is, strictly speaking, an antidiscrimination law and not an affirmative action law, because of its built-in asymmetry—age discrimination is unlawful only when it disadvantages older people—one could argue that the entire law is a form of affirmative action. And because older Americans as a group are not disadvantaged at all, the remedial purpose of these rights is more dubious than for race- or sex-based affirmative action policies. In fact, older people have a vastly disproportionate share of the nation’s wealth and political influence, and their fortunes were improving dramatically even as Congress continued to expand and strengthen age discrimination laws. Between 1970 and 1984 the median income of people over sixty-five rose by 35 percent as compared with less than 1 percent for people from twenty-five to sixty-four51—yet Congress expanded the ADEA to prohibit mandatory retirement in 1986.
The ADEA looks even more like an age-based spoils system when one looks at its effect on retirement benefits. After the ADEA outlawed mandatory retirement, employers were faced with the prospect of having to retain older, overpaid employees long after they had recouped the deferred compensation earned in their more productive and less remunerated mid-career years. Many tried to buy their way out of the problem by offering older employees a “golden handshake”—a onetime cash payment on retirement. Already this was a huge windfall for older employees: the older life-cycle wage arrangement assumed mandatory retirement at a specific age; outlawing mandatory retirement essentially rewrote the deal in favor of older employees. The golden handshake is the amortized cash value of this imposed revision of the employment contract—a direct transfer of wealth from employers (and, indirectly, younger employees) to older employees.
Employers typically offered the golden handshake to employees who were young enough that they were likely to continue working for many years unless bribed to retire, and offered younger employees more than older employees. In other words, the employers discriminated on the basis of age. But this had nothing to do with animus or negative stereotypes; it was a straightforward reflection of the economics that led employers to offer golden handshakes in the first place. The employer was basically buying older employees out of what had become an unprofitable mandatory employment contract from the employer’s perspective. The value of the buyout would depend on the expected length of the employees’ tenure. Age was a pretty good proxy for expected length of tenure: older employees would generally retire without inducement earlier than younger employees. Moreover, the golden handshake had to be enough to allow the employee to live comfortably in retirement, and that figure would be higher for younger people, who would have to make it stretch over a longer period of time. Adjusting the size of the golden handshake by age made sense, both from a narrow economic perspective and from a more humane perspective.
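A back-of-the-envelope sketch makes the age scaling visible. The dollar gap, retirement age, and life expectancy below are all invented for illustration, not drawn from any real buyout plan.

```python
# Hypothetical arithmetic behind age-scaled "golden handshakes."

def handshake(age, pay_minus_productivity=20,   # annual excess pay, in $1,000s
              expected_retirement_age=68, life_expectancy=82):
    # Years of above-productivity pay the employer avoids via the buyout ...
    years_bought_out = max(expected_retirement_age - age, 0)
    employer_savings = years_bought_out * pay_minus_productivity
    # ... and the retirement the payment has to stretch over.
    years_to_fund = life_expectancy - age
    return employer_savings, years_to_fund

for age in (58, 62, 66):
    savings, years = handshake(age)
    print(f"age {age}: buyout worth ~${savings}k to the employer, "
          f"stretched over ~{years} years of retirement")
```

Both quantities fall with age, which is why, on this logic, employers offered younger employees more.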
The American Association of Retired Persons (AARP) had emerged as a powerful lobby in favor of expanded and strengthened ADEA provisions by 1986, when Congress revised the ADEA to outlaw mandatory retirement. It continued that role, lobbying Congress in the late 1980s to outlaw age-targeted retirement incentives. Its position was that targeted retirement incentives were age discrimination and therefore violated the ADEA per se. But the AARP changed its tune when it realized that a strict prohibition of age discrimination would kill the goose that gave the golden handshakes altogether. Strict application of the ADEA’s age discrimination rule would require employers to offer retirement inducements to everyone over the age of forty—or no one at all. As Issacharoff and Harris explain, “No employer could afford to offer retirement inducements to its entire workforce. At this point, the AARP did an about-face and began to lobby heavily for the preservation of [age-targeted retirement incentives] … so long as they were offered to everyone over a minimum age … When it came to benefiting older workers,… violations of the equal treatment principle proved to be more than just acceptable—they were required.”52
Issacharoff and Harris aptly describe this and other lobbying for expanded age discrimination laws as “wealth-grabbing self-interest” resulting in a “windfall to older workers.” It’s important to add that only some older workers benefited, and some benefited much more than others. A common argument against mandatory retirement is that changes in society have made it an anachronism. Advances in public health have allowed people to stay healthy and productive well into what would once have been their golden years, and today many people are psychologically invested in their careers in a way that their parents were not. But in fact the average age of voluntary retirement from the workforce has declined steadily and steeply since the mid-twentieth century, from about age sixty-eight in the early 1950s to age sixty-two in the late 1990s.53 The abolition of mandatory retirement makes no difference to the many people who choose to retire early.
The main exception to the trend toward earlier retirement is highly educated professionals and business managers. Because these careers are not physically demanding, people don’t “burn out” as readily as in other jobs. And because performance in such careers can be hard to measure objectively, status and reputation play a large role. Perhaps older professionals and managers retire later because they are more likely to be productive later in their careers. But productivity may just be harder to measure in the professions and upper management, allowing today’s older employees to use their reputations to hang on to coveted positions when an earlier generation would have “passed the baton” to younger protégés.
The uncharitable might think it telling that the legal profession has been largely exempt from age discrimination laws. For the most part, law firm partners are not considered employees covered under the ADEA, and mandatory retirement is still relatively common: over half of large law firms had mandatory retirement of some form in 2007.54 This may be changing: disgruntled older law firm partners have sued their firms over mandatory retirement, and the Equal Employment Opportunity Commission has taken up the cause of these unlikely subalterns—a move that has prompted many firms to drop mandatory retirement.55 But law firms have clung to mandatory retirement for good reasons. Many large law firms are still prime examples of the life-cycle model of compensation, which is responsible for preserving what collegiality remains in the typical Big Law sweatshop. Starting salaries at the more prestigious law firms in large cities are, as I write, as high as $170,000 a year. With all due admiration for the graduates of our nation’s law schools, I’m certain that no student fresh from law school or a clerkship can justify these salaries, especially when the costs of inevitable on-the-job practical training are deducted from the balance sheet. It’s well-known that many firms lose money on first- and sometimes even second-year attorneys and begin to break even only in the third year. Law partnerships remain lucrative because senior associates and young partners bring in much more than they earn in salary: salaries rise with seniority, but not as rapidly as skill and productivity do. For associates, compensation is typically tied to seniority and to the number of hours billed. But for the most highly compensated partners, compensation reflects the value of client relationships—a partner’s “book of business.” A partner who brings an important client into the firm will receive a yearly draw that reflects the billings to the client, even if he or she does little of the actual legal work for the client. So some senior partners earn much more than their current productivity would justify, based on the firm’s total billings to clients in their “book.” In effect, the mid-career lawyers subsidize both the novices and many older partners.
Mandatory retirement is indispensable to such a compensation structure. If partners can keep clients in their accounts indefinitely, the firm will become top-heavy with partners taking more out of the partnership than their current contributions justify. Of course, partners deserve to be compensated for “rainmaking,” as client cultivation is referred to in the profession. But not all client relationships are the result of a current partner’s rainmaking efforts: many clients have been with the same firm for generations. Traditionally, as partners age, they groom younger partners, to whom they will hand off their client relationships when they retire. But without the predictable and orderly handing down of client relationships that mandatory retirement encourages, partners within the same firm will begin to compete with each other for clients. Junior partners, with little prospect of building their own books of business within the firm, will try to build a book by leaving and then poaching clients they have worked with from senior partners at their old firms. Senior partners will become stingy mentors, guarding their client relationships from the younger lawyers with whom they work. Collegiality, mentoring, and client service will all suffer.
There’s no doubt that rigid mandatory retirement deprived businesses and society of talented older employees. But nothing required employers to impose mandatory retirement, and nothing prohibited talented older workers who faced mandatory retirement at one job from working elsewhere. At its best, mandatory retirement was an orderly and humane way of ending the employment relationship on a high note—with a gold watch and a party rather than with a bad performance evaluation and a pink slip. It allowed older workers to train and mentor younger replacements without the now-common fear of grooming one’s own competition, and it gave younger workers some assurance that coveted high-level positions, department chairs, and client portfolios would eventually be open to them.
The ADEA was first conceived of as a modest intervention to correct a flaw in employment markets that locked many older people in unemployment. It failed to correct that flaw, which continues to injure older people looking for work today. Instead, the ADEA morphed into what Issacharoff and Harris call “a benefits protection regulation for older workers … [mandating] open ended obligations to provide older workers lump sum buyouts [and other benefits] without giving any consideration to the economic rationale for these programs.”56 This has contributed to a backlash against employee benefits generally. For example, it’s now a commonplace quip among business managers and consultants that General Motors (now radically downsized even after receiving billions in government bailout money) was once a car company that offered employee benefits and is now a benefits company that happens to make cars. In 2005, the Nobel laureate economist Paul Krugman pointed out that GM’s health-care benefits alone accounted for $1,500 of the price of every car the company makes.57 The ADEA is hardly the sole cause of this crisis in employee benefits, but it played a role by disrupting the arrangements that made compensation and fixed-benefits plans economically viable and forcing employers to offer a windfall to a narrow class of older workers at the expense of younger workers, future generations, consumers, and—when the employers go broke and the federally insured pension plans become insolvent—taxpayers.
This has little to do with equality or justice for older people. If the real issue were bias against the elderly or irrational stereotypes, we wouldn’t find such ready and compelling economic justifications for so many of the practices the ADEA forbids. If the real issue were the integration of older people into the workforce or the alleviation of material disadvantage, the law would return to its original focus on the jobless elderly rather than locking in and increasing advantages for people who are already employed. If the real problem were that discrimination on the basis of age is somehow inherently demeaning or presumptively suspect, then the law would prohibit discrimination against the young as well as against the old rather than limit coverage to people forty and older as the ADEA does. After all, unlike stereotypes about race, sex, or disability, which fall almost exclusively on specific maligned groups, every questionable stereotype about the aged is mirrored by a denigrating generalization about the young: if the elderly are “sluggish,” the young are “reckless”; if older people are set in their ways, young people lack wisdom; if the aged are complacent, younger people are naive and inexperienced.
Ironically, civil rights law includes this asymmetry only in the context where it is least justified. Title VII prohibits so-called reverse discrimination as a matter of principle, even though there is no evidence of, say, widespread anti-white or anti-male bias. By contrast, the egalitarian rationale that might justify asymmetrical protection for “protected groups” in the case of race, sex, and religion doesn’t apply to age, because older people are disproportionately wealthy and powerful and hence can and do take care of themselves quite well in the market and in politics. The Seventh Circuit judge and University of Chicago Law School professor Richard Posner makes this point well:
It is as if the vast majority of persons who established employment policies and who made employment decisions were black, federal legislation mandated huge transfer payments from whites to blacks, and blacks occupied most high political offices in the nation. It would be mad in those circumstances to think the nation needed a law that would protect blacks from discrimination in employment. Employers—who … for the most part are not young themselves—are unlikely to harbor either serious misconceptions about the vocational capacities of the old … or a generalized antipathy toward old people.58
Today’s age discrimination laws are not “mad.” They make perfect sense—as interest-group politics. If age discrimination laws just benefited older people at the expense of the young, any inequity they caused would be short-lived: after all, with luck we will all be old someday. But because age discrimination laws benefit well-off older people—relatively wealthy professionals and business managers—much more than the poor, they are, in effect, a tax on industry levied for the benefit of the relatively rich. Much of this extra income will go unspent, like the inherited advantages locked in by learning disability accommodations, and will pass to future generations. In effect, age discrimination laws have created a sort of reverse inheritance tax, helping well-off upper-level managers and wealthy professionals to have plenty left over to leave to their offspring even after enjoying a lush retirement of travel and comfortable leisure. A bumper sticker popular in places like Coral Gables, Florida, Palm Springs, California, and Leisure World, Arizona, reads, “I’m Spending My Children’s Inheritance.” Thanks to civil rights laws, some of the nation’s richest retirees won’t need to.
From Civil Rights to Individual Entitlements
Each of the civil rights laws I’ve discussed here serves a legitimate purpose. It’s wrong and shortsighted to disfavor the disabled or the elderly because of inaccurate stereotypes or irrational aversion. Civil rights laws have an important role to play in making sure such prejudices don’t poison the labor market and pollute the public sphere. And prejudice aside, government, employers, schools, and building owners should help isolated and disadvantaged groups join the mainstream of society. But even as they offer a fair deal to the disadvantaged, these laws have also become a perk for the privileged.
We could reduce unemployment and social isolation for the disabled and the elderly without treating them as civil rights issues. The civil rights approach is based on the questionable premise that racial isolation and poverty, gender hierarchy, the isolation of the disabled, and unemployment among the elderly are all diseases caused by a common virus—irrational discrimination. This is plausible only if one has a very capacious definition of discrimination. For instance, in the 1970s Paul Brest, my former dean at Stanford Law School, argued that for legal purposes, discrimination should include distinctions based on not only irrational animus and stereotypes but also what he called “selective sympathy and indifference.”59 This definition of bigotry is familiar enough: If one of every four young white men were languishing in prison, you can bet Congress would have found another approach to law enforcement. Or: If men got pregnant, abortion would be a sacrament. This definition of discrimination reflects a generous notion of collective responsibility and social justice: society has a responsibility to eliminate and avoid not only overt discrimination but also inequality caused by milder forms of bias, such as the inability or unwillingness of decision makers to sympathize with people unlike themselves. But it wreaks havoc when translated into individual entitlements.
If you try, you can make a case that some kind of bigotry—animus, stereotypes, or selective sympathy and indifference—is behind almost any inequality. Why are black men overrepresented in prisons? Because of racial animus on the part of police and prosecutors. And even if crime rates really do vary among racial groups, tough law enforcement reflects “selective sympathy and indifference” toward the groups with higher crime rates. Why are the lines for the ladies’ rooms usually longer than for the gents’? Because architects and building planners are selectively indifferent to the needs of women. Why did most American employers have mandatory retirement at age sixty-five until the 1980s? Because of stereotypes about the productivity of older people and selective indifference to them. Why were older public buildings designed with stairs and not wheelchair ramps? Because landlords and architects were selectively indifferent to the needs of the disabled.
In fact these social problems are quite different from one another: they have different histories; they are perpetuated by different institutions in different ways; and they are justified by different misconceptions (and in some cases, valid concerns). It’s not very helpful to insist that they all involve “discrimination” and deserve condemnation for that reason. Discrimination, strictly speaking, is often both necessary and just. Exams are supposed to discriminate between people who have acquired the necessary skills and mastered the relevant material and those who haven’t. Discrimination on the basis of age made sense and was reasonably fair given the old life-cycle model of employment where compensation rose with years of service rather than productivity. Of course, it’s possible that certain exams don’t measure the right skills, and it’s possible that the life-cycle model of compensation should be discouraged and employers should be forced to tie compensation more tightly to job performance. But both of these questions demand distinctive, fact-specific inquiries and controversial judgment calls.
Instead of making the necessary inquiries and judgment calls, civil rights thinking equates all discrimination with bigotry and assumes that inequalities between groups must be the result of bigotry against the less fortunate groups. If we assume the problem is bigotry, then all of the tricky questions of implementation (how can we best address the real causes of inequality?) and distributive justice (who should pay?) disappear, and the answers seem simple: we should eliminate bigotry, and the bigots should pay. Driven by the unexamined presumption that exams that disfavor students diagnosed with learning disabilities are “biased” against them, the law demands that those students receive special exceptions. But no one asks whether the exams accurately measure relevant skills and knowledge generally, because garden-variety poor performers aren’t a discrete group against which educators could be biased or about which they might harbor stereotypes. Age discrimination law condemns mandatory retirement as a reflection of “stereotypes” about the aged, despite its fairly obvious practical function of offering older workers with declining productivity a dignified exit from the workplace. But it allows an employer to breach the implicit employment contract and cheat his older employee of an almost-vested pension—one of the most common hazards for older employees—because the bait and switch is motivated not by stereotypes but by simple avarice. Shoehorning such a wide range of social problems into the discrimination framework has made it harder to remove the impediments that trip up many disabled and elderly people, even as it’s made it easy for people to turn civil rights into selfish entitlements—and feel justified in doing so.


 
Copyright © 2011 by Richard Thompson Ford


Excerpts

1
 
Entitlement and Advantage
 
 
Now you want me to tell you my opinion on autism…? A fraud, a racket. For a long while we were hearing that every minority child had asthma … Why was there an asthma epidemic amongst minority children? Because I’ll tell you why: the children got extra welfare if they were disabled, and they got extra help in school. It was a money racket … Now the illness du jour is autism. You know what autism is? I’ll tell you what autism is. In 99 percent of the cases, it’s a brat who hasn’t been told to cut the act out. That’s what autism is … Everybody has an illness … Stop with the sensitivity training. You’re turning your son into a girl and you’re turning your nation into a nation of losers.
On July 16, 2008, the radio talk show host Michael Savage managed to offend parents of disabled children, racial minorities, and women in less than a minute and a half—an accomplishment that his rivals Rush Limbaugh and Glenn Beck can only aspire to. The group Autism United demonstrated in front of the New York radio station that carries Savage’s program. One of his sponsors, the insurance company Aflac, promptly gave Savage some unwelcome sensitivity training: it pulled its advertising from his program, explaining that the company found “his recent comments about autistic children to be both inappropriate and insensitive.” Criticism was almost unanimous among doctors, child psychologists, disability rights advocates, parents, and pundits alike. Several local stations dropped Savage’s program in response to public outrage.
Savage is a provocateur—deliberately insulting and extreme, with a loose regard for factual accuracy. According to the clinical psychologist Catherine Lord, autism is “just like epilepsy or … diabetes or a heart condition. [Savage’s comments are] like blaming the child with a heart condition for not being able to exercise.”1 Savage eventually backpedaled, saying his remarks were “hyperbole,” designed to draw attention to the problem of fraudulent diagnosis. He agreed to devote another show to the subject so that parents of autistic children and others could air dissenting views.
Savage, like Limbaugh and Beck, is conservative and contentious, but he is also idiosyncratic—often unexpectedly thoughtful, even cerebral. While Limbaugh and Beck are activists for conservative politicians and causes, Savage is distinguished by a kind of crotchety ennui. As contemptuous of other conservatives as he is of liberals (he called Glenn Beck a “hemorrhoid with eyes”), he treats partisan politics with an aloof disdain: “You’ll have to go to one of the other talk-show hosts to get ‘Obama’s a Ma-a-arxist’ and ‘McCain is a wa-a-ar hero.’”2 As a result, where other conservative talk show hosts are annoyingly predictable, Savage’s off-the-cuff ramblings and intemperate tirades are often surprising and intriguing, and they often contain at least a grain of truth. For instance, Dr. Lord admitted that mild autism is vaguely defined and can be a catchall diagnosis for children with behavioral problems who fit no other category. A year and a half after Savage’s remarks, the psychiatrists in charge of writing the fifth edition of the Diagnostic and Statistical Manual of Mental Disorders announced that they were considering folding several types of mild autism—such as Asperger’s syndrome and pervasive developmental disorder—into a single broad category—autism spectrum disorder—reflecting a new understanding that autism is not a single disorder but rather a range of conditions, from severe mental disabilities to mild emotional abnormalities that can come with extraordinary mental gifts.
There’s a professional consensus that severe autism is a discrete neurobiological condition, but mild cases can be hard to distinguish from less well-defined conditions, such as attention deficit hyperactivity disorder (ADHD) and other vaguely defined “learning disabilities.” Here, diagnosis is difficult and contestable, and expert opinions differ. “We’re fairly good about making the diagnosis of kids who are classically autistic, but as you move away from that specific disorder, it gets harder … [F]or kids who are of average, close to average or above average intelligence, it is difficult to sort out what is eccentricity versus what is a real social deficit,” said Dr. Lord.3
Federal law doesn’t reflect a continuum that includes mild autism and learning disabilities along with eccentricity and poor concentration. For legal purposes, a disability is a discrete condition: either you have it, and therefore have a right to an array of special concessions and extra help, or you don’t. The law doesn’t define learning disabilities with precision, but it does provide a partial definition: “a severe discrepancy between achievement and intellectual ability.”4 In practice, this means that learning disabilities are diagnosed, in large part, by identifying a gap between a child’s performance in academic settings and the performance one would expect of a child of his or her age and IQ.
Civil rights laws entitle all disabled people to special accommodations and services: a blind person might require an exam to be administered orally or written in Braille; a paraplegic might require voice-recognition software or transcription. These accommodations let the disabled reach their potential. Children with learning disabilities are also legally entitled to accommodations and services that other children are not, such as special tutoring and extra time on exams. In theory, just as a blind person needs Braille, a Seeing Eye dog, or a cane to overcome his blindness, a person with ADHD may need extra time to get organized and overcome his inability to concentrate.
But there are some important differences between severe disabilities like blindness and milder learning and behavioral disabilities. First, conspicuous disabilities often trigger reflexive animus or prejudice. Many employers wrongly assume disabled people can’t work, and businesses discriminate against them because of squeamishness and irrational aversion. A business that refuses to accommodate a disabled person might secretly wish to exclude him. Milder disabilities don’t trigger such reflexive prejudice because, for the most part, they are not conspicuous: typically an employer learns of a learning or an emotional disability only when an employee seeks an accommodation for it. Second, most of the accommodations that people with severe disabilities need wouldn’t help a nondisabled person at all. A sighted person wouldn’t benefit from having an exam written in Braille; an able-bodied person wouldn’t get much of an edge from using voice-recognition software or a professional transcriber. By contrast, people with learning and emotional disabilities often enjoy extra time on competitive exams, costly one-on-one tutoring, and exemptions from discipline for disruptive behavior—things that would benefit anyone. Finally, unlike blindness or a physical disability, many learning disabilities are hard to define objectively; as Dr. Lord admits, they are on a continuum with ordinary “eccentricity.” Put these together and you have a recipe for gaming the system: no one would suggest that an eccentric person with a wandering mind has a right to extra time on a timed exam, but someone with ADHD does—and the two can be hard to distinguish. This doesn’t suggest that civil rights for people with mild cognitive disabilities are a “racket,” but it does suggest that they have the potential to encourage opportunism and can lead to unwarranted advantages.
Suppose two children achieve low scores on a competitive timed exam: one has a diagnosed learning disability, and the other doesn’t. Suppose both of the children’s scores would improve dramatically if they had extra time to complete the exam. Is it fair to give one student extra time and not the other? Maybe. In theory, the extra time isn’t an advantage for the person with a learning disability; it’s just the way he copes with his disability. But if the disability is on a continuum with garden-variety poor concentration, then in fairness anyone with poor concentration should be entitled to extra time in proportion to the severity of his concentration deficit. This would, of course, defeat the purpose of a timed exam, which is to test not only skills and knowledge but also the ability to perform quickly.
*   *   *
The Harvard medical student Sophie Currier became a heroine to advocates of breast-feeding in 2007 when she demanded and eventually won the right to a breast-pumping break during a medical licensing exam. No hothouse flower, Currier first took the exam—widely considered to be one of the most challenging of all professional qualification exams—when eight months pregnant and came just short of a passing score. Currier chose to nurse her newborn baby as most experts in the medical profession she was poised to join recommend. But she still needed to pass the exam in order to start her residency at Massachusetts General in the fall. So she asked the National Board of Medical Examiners to give her a break—specifically, an extra hour each day to express and store her breast milk. The board refused, informing Currier that it would accommodate only disabilities as defined by the Americans with Disabilities Act.
Currier wasn’t the first woman to get a less-than-nurturing reaction to her nursing. Until recently, nursing an infant in public was considered indecent exposure and could result in citation or even arrest. Businesses and employers not only refused to accommodate nursing mothers but often deliberately embarrassed them or asked them to leave. The problem isn’t a relic of the era of three-martini lunches and cars with tail fins either. In October 2006, Emily Gillette was flying with her husband and twenty-two-month-old daughter on a Freedom Airlines flight from Burlington, Vermont. Freedom Airlines didn’t give Gillette the freedom to feed her baby; instead, a flight attendant barked, “You need to cover up. You are offending me,” and thrust a blanket into Gillette’s hand. Gillette balked: “No thank you. I will not put a blanket on top of my child’s head.” The flight attendant kicked her off the flight. In response Gillette filed a complaint against the airline with the Vermont Human Rights Commission. Her story inspired over eight hundred women to stage a “nurse-in” at thirty-nine airline ticket counters nationwide.5 This wasn’t the first time lactation took on the character of social protest: a year earlier women staged a “nurse-in” in front of ABC studios after Barbara Walters spoke unapprovingly about a woman nursing her baby on a flight.
A growing number of women have decided that Mother Nature is a more wholesome provider than Gerber or Nestlé and nurse their newborns for a year or longer. In reaction to social squeamishness about breast-feeding and widespread ignorance of its many virtues, some have become “lactivists,” proselytizing to pregnant women and young mothers about the benefits of the breast, lobbying for policy changes to accommodate nursing mothers, and agitating against inhospitable businesses and employers. Their goal is to reverse the decades-long trend toward bottle-feeding, which they see as the result of a conspiracy among hubristic scientists, perverse moralists who eroticize the female breast, and callous industrialists anxious to get new mothers back on assembly lines and behind desks. While breast-feeding was, for obvious reasons, almost universal before the Industrial Revolution, it declined throughout the twentieth century: by 1972 only 22 percent of American mothers nursed their infants.6 Lactivists reject the notion of better living through technology and cite mounting evidence that breast-fed children are less susceptible to illness and emotionally healthier than those who receive only manufactured formula. Scandals involving contaminated baby formula and conspiracies to foist costly baby formula on an impoverished third world have only strengthened their resolve and increased their numbers.
Medical opinion has shifted decisively in favor of nursing: the American Academy of Pediatrics decided in 1997 to recommend that mothers breast-feed their infants for six months. The U.S. Department of Health and Human Services started a campaign to encourage breast-feeding. Public opinion followed quickly, and today bottle-feeding is tantamount to child abuse among the Bugaboo stroller set. As mothers found themselves caught between the old-school squeamishness of blanket-wielding prudes and a trendy new obligation to breast-feed, some feminists began to wonder whether the new ethos was a totem for women’s liberation or a Trojan horse. Hanna Rosin complained in The Atlantic: “In Betty Friedan’s day, feminists felt shackled to domesticity by the unreasonably high bar for housework, the endless dusting and shopping and pushing the Hoover around … When I looked at the picture on the cover of [Dr.] Sears’s Breastfeeding Book—a lady lying down, gently smiling at her baby and still in her robe, although the sun is well up—the scales fell from my eyes: it was not the vacuum that was keeping me and my twenty-first-century sisters down, but another sucking sound.”7
Nursing requires a significant commitment. Nursing mothers must either feed their children directly or express the milk every several hours; failure to do either can lead to painful engorgement, infections, and a reduction in the milk supply. The National Women’s Health Information Center helpfully suggests to working mothers of newborns: “Let your employer know that you are breastfeeding and explain that, when you’re away from your baby, you will need to take breaks throughout the day to pump … Ask where you can pump at work, and make sure it is a private, clean, quiet area … If your direct supervisor cannot help you with your needs … go to your Human Resources department to make sure you are accommodated.”8
Or, failing that, go to court. Sophie Currier v. National Board of Medical Examiners wasn’t even a close contest in the end. The National Board of Medical Examiners, with its creaky old rules and its hand-wringing about the integrity of its precious exam, didn’t have a chance against the sisterhood of virtuous lactation—a powerful fusion of modern feminism and the Victorian cult of pure womanhood, backed by the American Academy of Pediatrics, with Angelina Jolie as glamorous spokesmodel. Currier lost her sex discrimination lawsuit at the trial court but won handily on appeal: Judge Gary Katzmann held that “in order to put the petitioner on equal footing as the male and non-lactating female examinees, she must be provided with sufficient time to pump breast milk.”9
Pumping breast milk is time-consuming and uncomfortable: a machine must be assembled, the milk must be pumped, the machine must be cleaned so it’s ready for next time (which will be roughly four hours later) and disassembled for storage, and the milk must be stored on ice so that it is still fit for the baby to drink later. This could easily consume the entire forty-five-minute standard break for the medical licensing exam, leaving Currier no time to eat or use the restroom. Pumping might not take the entire hour that Currier asked for, but any extra time wouldn’t really give her an edge. She couldn’t use it to think through or reconsider her answers, because the exam was administered in discrete blocks, and once a block was finished, the examinee could not return to it. The board’s concern that the accommodation would compromise the exam seemed unwarranted: after all, Currier wasn’t asking for extra time to take the exam itself.
But actually, she was. Currier had been diagnosed with ADHD and dyslexia; as an accommodation, she had demanded and received a full eight hours of additional exam time—double the normal limit. The board granted this request because ADHD and dyslexia are recognized disabilities within the definition of the Americans with Disabilities Act. Having failed the exam once even with the extra time, Currier had come back to the board with another demand for an additional accommodation.
It was starting to look as if Currier wanted to keep changing the rules until she passed. This may explain why relatively few feminists or lactivists took up her cause. Pondering the lack of support for Currier, Slate’s legal analyst Dahlia Lithwick complained that “if we can’t stand up for a woman with a brilliant career who is fighting to care for her babies as she chooses … you really have to wonder if we can stand up for anyone at all,” but worried that “it’s harder to sympathize … when we learn that she is already getting a whole extra day to take the test because she has ADHD and dyslexia, or that she received extra accommodation in her schooling as well … Suddenly … she isn’t a pioneer for the rights of working moms. She’s a crybaby and an opportunist.”10 This lack of sympathy was widely expressed on blogs and websites devoted to working mothers and lactation rights. “This woman is a disgrace,” groused an anonymous commenter on a motherhood blog. “Not only has she failed the exam, she is expecting everyone else to fix her problems for her … I am a physician, a working, nursing mom, who passed her general and subspecialty boards (written and oral) while nursing without difficulty.” On another site a nursing mother complained, “As a nursing mother who has managed to get through a LOT of daylong exams without whining … I can only say there is a limit to special entitlements … Ms. Currie [sic] is simply an example of entitlement gone too far.” Another woman wrote, “While I sympathize with her for nursing … keep in mind that she did get lots of extra help [and didn’t pass the first time] … Is there any chance of passing the 2nd time? Maybe, with the extra 2 days she has been given for a one day test, plus the extra time given for her to lactate … In a way, I am glad [she won] … now other people will get an awareness and learn how to get … perks … when going through the educational system.”11
Doctors, on the whole, were even less sympathetic. One insisted: “The USMLE is a STANDARDIZED test to assess a minimum competency … If you don’t pass, then the exam is doing what it was intended to do: preventing somebody without a core knowledge of medicine [from] practicing … When the patient dies on the table [because the doctor is too slow] who is going to be supporting her when her excuse is ‘I needed to breast feed at that moment.’” Another echoed this macabre theme: “When your Father has a heart attack, do you want [someone who] is … practicing only because he/she was granted 3 months of time to pass his licensing exam while every other MD passed it in 8 hours?”12
Few observers bothered to distinguish between the accommodations Currier received for her disabilities and those she received to pump. Currier’s supporters typically treated the extra eight hours she received due to her dyslexia and ADHD as irrelevant: “If a man were to have ADHD and dyslexia … [and] were to also have cancer … he’d be given accommodations for his ADHD and dyslexia, and I would think that additional accommodations would be made for his cancer … as well.” Her critics thought that each accommodation—regardless of the justification—compromised the integrity of the exam and gave Currier an unfair advantage: “Allowing some students to have a time advantage, no matter the reason, destroys the integrity of the exam.”13
But there’s a big difference between Currier’s modest request for an extra break to pump and the extra eight hours of exam time she enjoyed as an accommodation of her disabilities. Perversely, federal civil rights law gave Currier an entitlement to the more extreme accommodation while leaving the modest request open to debate (Currier eventually got her pumping break under Massachusetts state law). Contrary to the complaints of her critics, letting Currier take an extra hour to pump doesn’t compromise the exam much, if at all. The extra break is pretty close to the amount of time Currier would actually need to pump and store her milk—leaving her no better off than a non-lactating examinee. You might think that the extra time away from the test would give Currier a recuperation advantage, but any woman who has used a breast pump will tell you that it’s not exactly relaxing or rejuvenating. Currier’s critics often remarked that she wouldn’t be able to ask for extra time in the operating room, but unless she’s lactating again when she needs to perform an eight-hour surgical procedure, she won’t need to. The break simply compensates for the effects of a temporary condition that would otherwise depress Currier’s test results and make the exam an inaccurate measure of her true abilities.
We can’t say the same of the legally mandated accommodation for Currier’s disabilities. ADHD and dyslexia are not temporary conditions. If they affect Currier’s ability to take the exam, they will affect her ability to perform any similar task under time pressure. Of course, an exam isn’t a perfect measure of real-life job skills: plenty of people who do poorly on exams excel in real-life situations, and just as many do well on exams and poorly on the job. But when used to test for minimum competence, the exams serve an important function: they are a cheap and efficient way to screen out the ill prepared and the incompetent. You’d be a fool to entrust your health to a doctor just because she had a high score on her medical boards, but you’d be a bigger fool to entrust it to someone who couldn’t pass them. Here the morbid fantasies of Currier’s critics are relevant: if Currier couldn’t focus on a make-or-break professional exam because of her ADHD, will she be able to focus on a life-or-death time-sensitive medical procedure or complete a complex diagnosis? Perhaps Currier will choose a medical specialty where speed and concentration are never required. But if that’s the reason to give her extra time, shouldn’t anyone willing to limit himself to time-insensitive specialties get extra exam time?
*   *   *
Several federal laws prohibit discrimination against people with disabilities. The most important are the Rehabilitation Act, the Americans with Disabilities Act, the Fair Housing Act, and the Individuals with Disabilities Education Act (IDEA). Together these laws cover employers, landlords, proprietors of public facilities, public schools, and any other organization that receives federal funding. The Rehabilitation Act and the Americans with Disabilities Act define a disability as a physical or mental impairment that substantially limits a major life activity. The IDEA adopts a similar definition but also specifically defines as learning disabled any child who fails to “achieve commensurate with his or her age and ability levels … [and] has a severe discrepancy between achievement and intellectual ability.”14
The idea behind these laws is that the failure to accommodate a disability is a kind of discrimination. Before the 1970s most disabled people were excluded from meaningful social interaction and gainful employment. Blatant discrimination was the norm, and few institutions made any effort to be accessible to disabled people. The all-too-common view was that if someone was unable to attend school, enter public buildings, or hold jobs because of his handicap, it was a tragic fact of life about which nothing could be done.
Advocates for the disabled, inspired by the civil rights movement, began to challenge this widespread idea in the 1970s. They insisted that disabled people could lead productive lives without science-fiction technological cures if society made an effort to accommodate them. In fact, they argued, many disabled people suffered less from the natural consequences of their physical condition than from discriminatory practices and insensitive policies established in disregard of their needs. Many people were openly contemptuous of the disabled, insulted their dignity with condescension and pity, or avoided them out of an irrational squeamishness. And how different were the myriad subtler decisions made in callous ignorance of disabled people and their needs? A wheelchair-bound architect would never design a building with stairs as the only means of ingress and access to upper floors. A deaf school administrator would make sure teachers provided written as well as oral instruction. Just as discriminatory laws once excluded blacks, discriminatory employment standards, educational policies, and architectural design excluded the disabled.
Congress passed the first major law prohibiting discrimination against the disabled—the Rehabilitation Act—in 1973, prohibiting recipients of federal funding from discriminating. It passed the Education for All Handicapped Children Act banning discrimination in public education two years later. But these laws were too mild and too limited: the disabled remained locked out of the mainstream of the job market and public life. When Congress passed the Americans with Disabilities Act (ADA) in 1990, banning discrimination in employment and businesses open to the public, it found rampant discrimination against the disabled that had resulted in widespread unemployment and poverty in their ranks: “Two-thirds of all disabled Americans between the age of 16 and 64 are not working at all … Fifty percent of all adults with disabilities have household incomes of $15,000 or less. Among non-disabled persons [the figure is] only twenty-five percent.”15 The ADA forbids discrimination against people with disabilities and defines “discrimination” to include a failure to make “reasonable accommodations” of their disabilities. The simple nondiscrimination provisions require employers, landlords, and proprietors to treat disabled people as well as they treat people without disabilities. The accommodation provisions require employers, landlords, and proprietors to make special exceptions and take affirmative steps to help the disabled succeed.
The idea that disabled people were limited by laws, policies, and design rather than by their physical handicaps inspired a cumbersome but instructive nominal innovation: the disabled became “differently abled.” For instance, the idea that blind people developed their other four senses to an almost superhuman degree was sufficiently mainstream by 1967 to serve as the premise of the Hollywood film Wait Until Dark. Audrey Hepburn played a blind woman who is terrorized by criminals looking for smuggled drugs. In the climactic sequence, her character fends off a knife-wielding man by plunging her apartment into darkness, giving her the advantage over her sighted assailant. In the same year the television police drama Ironside featured Raymond Burr as the retired detective Robert Ironside, who had been paralyzed by a sniper’s bullet. Aided by a modified police van designed to accommodate his wheelchair, Ironside remained an ace sleuth, using his years of experience and intelligence to solve crimes his able-bodied colleagues couldn’t crack. Under the right conditions, a handicap could be a strength.
Social movements for the disabled followed the lead of Black Power and turned what had been a cause for stigma into a source of power. And just as black pride matured into multiculturalism, with its vague but consistent implication that any social practice that was sufficiently widespread among a racial group was a part of that group’s unique and precious “culture,” some disability rights groups came to see their conditions and unique methods of coping as parts of a distinctive and precious culture as well. For instance, activists for the hearing impaired argued for the existence of a “deaf culture” grounded in sign language.16 Some in the deaf culture movement rejected lip-reading as a demeaning form of assimilation. Some went as far as to reject hearing aids and other medical devices designed to restore lost hearing as an insult to deaf culture: these interventions implied that deafness is a defect to be fixed rather than a condition that gives rise to an equally valid and valuable alternative mode of interaction with the world.
Disability rights laws were inspired by the long-overdue recognition that disabled people could make valuable contributions if given the chance. But the laws could also give effect to a much more questionable claim: that disabilities are not in fact disabling, but simply define different, equally effective modes of perception and interaction. It follows from the stronger claim that any practical impediment to the full and equal interaction of disabled people is the result of some form of invidious discrimination: the wrongful hegemony of bipedal over alternative modes of locomotion prevents a wheelchair-bound paraplegic from easily entering a nineteenth-century building built with grand staircases; the unjust emphasis on concentration and speed keeps a person with ADHD from passing the medical licensing exam.
It can be hard to tell the difference between the natural limitations of a disability and limitations that are imposed or magnified by bigotry, callous indifference, and careless oversight. Until recently, most people assumed that the disabled were simply incapable of making valuable contributions to society, so very few things were designed to accommodate them. Often, minor changes could have accommodated disabled people at relatively little cost. Doors can be widened slightly to accommodate wheelchairs, written materials made available to the deaf to supplement an oral presentation, oral descriptions used to aid the blind. And these changes may inadvertently improve things for a much larger group of people: ramps designed to accommodate wheelchairs also help people with wheeled carts, baby strollers, and wheeled luggage; written supplements to an oral presentation benefit the large number of people who find spoken lectures hard to follow and remember. Rights for the disabled have improved public life dramatically by punishing irrational prejudice and encouraging everyone to rethink habitual practices.
But disabilities are disabling. No amount of design accommodation will allow a blind person to pilot an aircraft safely or help a person with Parkinson’s disease to practice delicate surgery. And even when accommodation is possible, disability rights present difficult trade-offs: How much can we afford to change norms, rules, and physical infrastructure to help people with disabilities? Ramps and elevators to accommodate wheelchairs are expensive; remodeling older buildings can destroy their architectural character; Braille translations are costly and hard to acquire; closed-captioning isn’t free. We’ve correctly decided to make the changes in many cases—but not all. The law requires that employers, landlords, and proprietors make “reasonable” accommodations, inviting a cost-benefit analysis. Courts often find that a disabled person is entitled to some accommodation, but not everything that he or she might want. To accommodate a wheelchair, an employer may have to remodel a bathroom but not a staff kitchen. New construction must be designed to accommodate the disabled, but older buildings can remain inaccessible until they are substantially remodeled. Employees must be able to perform the “essential functions of the job” in order to qualify for mandatory accommodations: that rules out the blind pilot and the surgeon with the shakes.
Unfortunately, thinking of these conflicts in terms of civil rights encourages claimants to ignore the necessity of tough decisions and trade-offs. Sophie Currier and her supporters consistently argued that her demands for accommodation were questions of simple fairness, as if there were no downside to changing the rules just for her. Judge Katzmann, for example, insisted that Currier’s accommodations just put her “on an equal footing” with other examinees, and another Currier supporter was confident that the accumulation of special breaks didn’t matter: someone with ADHD, dyslexia, and cancer should get extra time for all three conditions, she insisted.
There’s a reasonable argument that fairness required giving Currier extra break time to pump. But there’s also a strong argument that giving her two days to complete a time-sensitive exam doesn’t put her on an “equal footing” with the examinees who had only eight hours; it gives her an advantage. The argument for this accommodation was that the exam was biased against Currier and the extra time only corrected the bias. But the exam was “biased” only if speedy performance is irrelevant. And in that case the exam is biased against all examinees who would have passed if they had had more time. If the speed limitation is arbitrary and misguided, the National Board of Medical Examiners should drop it entirely rather than make case-by-case exceptions.
There’s a sound civil rights precedent for such an approach: Title VII of the Civil Rights Act requires an employer to abandon an employment practice that disproportionately screens out members of a minority group and isn’t job related. But the employer has to drop the practice entirely—not suspend it or change it just for members of the minority group. On the other hand, if the practice is job related, the employer can use it regardless of its effect on minority groups.
It’s a conceptual sleight of hand to define one person’s inability to answer questions quickly and accurately as a disability that society must accommodate in order to reach the merits, when the same inability is considered a lack of merit for other people. This is especially true of a disability like ADHD, which many experts believe differs only in degree from what we might simply call a high-strung or absentminded personality trait. It makes little more sense to insist that exceptions to the normal rules simply “make up” for ADHD than it would to insist that an exam that favors smart people “discriminates” against the less intelligent. We all have unique natural strengths and weaknesses that make us better suited to some jobs than to others. Short people are at a disadvantage in basketball tryouts; socially awkward people typically don’t succeed in politics; clumsy people make bad jugglers. Isn’t it possible that people who have a hard time concentrating usually don’t make the best doctors?
*   *   *
Tom Freston may be best known as the man who discovered music videos. He got involved in cable television in 1979, when it still seemed doubtful that people would pay for television when they could get Big Three network programming for free. Along with the legendary adman George Lois, who designed Esquire magazine’s avant-garde covers during its golden age in the 1960s, Freston created the now iconic “I Want My MTV” ad campaign that defined shopping-mall chic during the early 1980s. He went on to turn MTV from a cultural phenomenon into a global media empire, launching VH1, Nickelodeon, Comedy Central, and many other cable channels and creating independent programming to edify the masses, including SpongeBob SquarePants, South Park, and Beavis and Butt-head. Freston’s MTV Networks invented reality television with The Real World, the first television show to place several strangers in a house together and tape their every move. An arrangement that would have been an unambiguous violation of professional ethics if done in the name of science was an unqualified success as entertainment. He became president of Viacom—MTV’s parent company—in 2004, where he remained until 2006. He left with a $60 million severance package.
In 1995, after his son Gilbert was diagnosed with ADHD, Freston enrolled him in the Stephen Gaynor School, a private school specializing in learning disabilities. The Gaynor school isn’t cheap: one year there cost $21,819 in 1999. Still, that wasn’t much more than what a typical New York City private school would cost, and few people with Freston’s wealth send their kids to public schools in New York: private school tuition is simply one of the many extraordinary expenses that wealthy urbanites consider a necessity.
The Individuals with Disabilities Education Act requires all states that receive any federal funding for special education to provide all children with disabilities a free and appropriate public education. The law requires that public schools develop “specially designed instruction, at no cost to parents, to meet the unique needs of a child with a disability.” If the school district fails to provide a child with an “appropriate” education, the parents are legally entitled to tuition reimbursement for private schools. Public schools are often unable to accommodate a child with a rare and severe disability at a reasonable cost: private placement may be better for the child and cheaper for the district. And if a district simply fails to offer an appropriate education due to incompetence or neglect, parents should be able to take matters into their own hands and make sure their child gets the education he or she needs. The law makes sure that disabled children have the same access to a free public education as any other child: the school district must either provide an education that meets their needs or outsource the job to someone who can.
But what about parents who would never consider public school for a nondisabled child? Freston asked the New York City Board of Education to evaluate Gilbert and recommend an educational program suited to his special needs, but nothing the school district had to offer could match the pricey private school Gilbert was already attending. Freston sued the school district for his son’s private school tuition in 1997 and 1998, and the district agreed to compensate him. He later argued that this was a tacit acknowledgment that the district had not offered Gilbert an appropriate education; the district insisted that it paid up only in order to avoid litigation. In 1999 the district offered Gilbert a coveted placement in the city’s Lower Lab School for Gifted Education with a student-faculty ratio of fifteen to one along with additional tutoring and counseling. But Freston never visited the school or met with any of its staff, and later testified that “it was sort of a moot point … I spent the summer in California … The down payment [for private school] had been made.”17
School administrators from coast to coast worried that wealthy parents would game the system to get school districts to pay for private schools—and more. The San Francisco Chronicle described the parents of a student with learning disabilities and anxiety disorder who enrolled in a “$30,000-a-year prep school in Maine—then sent the bill to their local public school district.” According to the Chronicle, “Parents of special education students seek extra-special education at public expense: private day schools, boarding schools, summer camps, aqua therapy, horseback therapy … Special education is a growing portion of budgets in many districts, squeezing out services for other pupils.” Similarly, Time magazine reported that an autistic child’s parents “informed Colorado’s Thompson school district it had to pick up the bill for Boston Higashi’s $135,000 annual tuition.” The New York Times quoted a Westport, Connecticut, school superintendent who faced special education reimbursement requests for horseback riding and personal trainers.18 These reports suggested that a law designed to help the disabled and needy had become a giveaway for the rich and greedy. Mainstream media coverage of “extra-special education” echoed the radio talk show host Michael Savage’s claim that learning disabilities had become a “money racket.”
New York fought Freston’s claim for private school tuition. Joined by a coalition of other large urban school districts, the city argued that “many parents ask public school districts to develop an [educational plan] for their child despite intending from the outset to reject whatever … is developed and then claim that the district is unable to provide [an appropriate education] … These parents, who never intended to use the public schools, unilaterally place their child in the private school in which they planned to enroll their child all along, and then request reimbursement, hoping for a windfall.”19 The cities pointed out that private schools for the disabled often encourage parents to sue local school districts for tuition reimbursement; some even gave parents a list of “contact information for … lawyers and … instructions on how to sue the city.” The cities insisted that in order to “prevent abuse by parents who never intended to use the public schools,” the IDEA allowed parents to seek tuition reimbursement only after their children had tried public schools and they had proven inadequate.
Advocates for the disabled countered that most disabled children do not come from wealthy families; to the contrary, “30 percent of children with disabilities live in foster care … Almost 25 percent … are living in poverty.”20 The advocacy group Autism Speaks warned that disabled children who are forced to “try out” inappropriate public school placements before moving to an effective private school may miss a critical window of opportunity for development: “The effectiveness of intervention depends on early application … When the opportunity presented during this window passes, the squandered potential cannot be regained later.”21 As for the threat of escalating expenses, advocates for the disabled pointed out that private placements accounted for only a tiny fraction of the costs of special education and most private placements involved severely disabled children, whom school districts admitted they couldn’t serve. The cases where parents unilaterally put their children in private school and sued the district for reimbursement were trivial in number.
Moreover, the cost of private placement was typically not much more than an adequate public education: in fact, New York City’s public schools spent more on average for a disabled pupil attending public schools than Freston had requested in reimbursement.22 One of the briefs filed on behalf of the City of New York complained that “in one recent school year, public schools spent over 20% of their general operating budgets on special education students.”23 But, as a brief filed on behalf of Freston pointed out, most of that amount was spent on special education in public schools—not on tuition reimbursement.24 Taken together, these arguments implied, perhaps unintentionally, that tuition reimbursement wasn’t a unique problem; it was just a dramatic example of the cost of special education generally.
Mark Kelman, my colleague at Stanford, and Gillian Lester, now a professor at UC Berkeley Law School, conducted an extensive study of learning disability claims in public schools. They visited a number of local school districts and talked to local school administrators, teachers, and parents to see how the disability rights laws worked in practice. They came away convinced that treating the education of learning disabled children as a civil rights issue benefited rich families at the expense of the poor and actually made it harder to educate most students—disabled and nondisabled alike.
For nondisabled children, the problem is obvious: the law requires school districts to spend more—often a lot more—on costly special services reserved exclusively for children diagnosed with learning disabilities. This might make sense if the districts were awash in money, or if the special services were uniquely helpful to the children with learning disabilities, the way, say, Braille texts are uniquely helpful to the blind. But in fact many of the special services the schools are required to provide for children with learning disabilities would benefit any child: smaller classes with better student-teacher ratios, one-on-one tutoring, immunity from discipline for disruptive behavior, extra time on exams. One administrator Kelman and Lester interviewed worried that only
maybe half the people we label are “really” LD. The problem is that the truly LD kids are irremediable. The 25 percent who eventually show significant changes were probably misdiagnosed. In theory, the LD kids have alternative coping mechanisms, and the educator should try to help the kids tap into these alternatives, [but] slow learners [who aren’t diagnosed as learning disabled] may also have untapped abilities … The difference between the two is merely a matter of degree.
Good teaching, simply, is what makes it work … For the LD kids or for anyone else, good teaching is good teaching.25
Some administrators believe the law requires them to prevent services earmarked for a child with a learning disability from “leaking” over to other, presumably undeserving students who may simply be slow learners. For instance, if a student with a mild learning disability attends class with nondisabled students and receives one-on-one tutoring during the school day, can the tutor also help other kids who have questions about the day’s lesson? While some school officials think they must prevent the diversion of special education resources to nondisabled students, others welcome “leakage” as a way to compensate for the inevitably imprecise diagnosis of learning disabilities. “This way, the sharp categories formally exist, but all students who need assistance … get it,” said one California administrator.26
Special education services eat up a growing share of the public school budget in many districts. In 1979 there were 796,000 students diagnosed with learning disabilities; in 2003 there were 2,848,000, and the number continues to grow at a rapid pace. Perhaps too few students were diagnosed with learning disabilities in the 1970s, but as a larger and larger percentage of students are said to have a “disability” that keeps them from learning, one has to wonder whether the cause is truly neurobiological, or whether it’s political and social. Under the IDEA, schools that fail to effectively educate disabled children can be made to pay for private school tuition. But the public schools—especially those in large cities like New York—are failing to educate many of their students who aren’t disabled too. For instance, in 2006 over 3 percent of all the students served by the Washington, D.C., school district were in private placements at a cost, according to The Washington Post, of 15 percent of the district’s entire budget.27 But, as two special education experts acknowledged, “the D.C. schools struggle to provide an adequate education to any of their students. Disabled students are entitled … to demand an adequate education … The nondisabled students … lack the same mechanism for exiting failing schools.”28 Contrary to the civil rights theory underlying the IDEA, disabled students who don’t receive an adequate education aren’t necessarily being discriminated against; tragically, they’re often receiving the same-quality education as everyone else.
The civil rights approach to special education also disserved many disabled children—especially those from poor and minority families. Historically, special education has been split along the lines of family income and race. Culturally unsophisticated children—often poor blacks and Latinos and poor people who had moved from rural to urban areas—accounted for the lion’s share of children labeled “slow,” mentally retarded, emotionally disturbed, or culturally deprived. These students were typically either expelled from school or shunted off into dead-end special ed classes. The problem was so pervasive that civil rights activists in the 1950s and 1960s worried that special ed had become a cloak for racial discrimination and lobbied hard for provisions designed to ensure that minority students were not segregated from mainstream public education.29
Meanwhile, the category of “learning disability” emerged due to the efforts of wealthier, predominantly white families in the 1950s and 1960s who saw their underachieving children slip through the cracks of the educational system. Armed with psychological research that had identified discrete neurological causes (such as dyslexia) for certain cases of poor academic performance, they lobbied for a new category that would distinguish their children from the “mentally retarded” and from children who were simply lazy or slow—a recognition of a discrete condition that did not actually decrease intelligence, but only masked it. In studies of children with learning disabilities published in the 1960s and early 1970s, 98.5 percent were white and 69 percent were of middle-class or higher socioeconomic status.30
Today’s learning disability rights laws are a result of the efforts of these two groups: litigation to prevent the isolation and expulsion of retarded, emotionally disturbed, and hyperactive children eventually led to the Education for All Handicapped Children Act in 1975, now renamed the Individuals with Disabilities Education Act. Special education under the IDEA can range from reimbursement of expensive private school tuition to isolation in a dead-end class with “slow” children. Kelman and Lester worry that poor children typically receive very different treatment under the IDEA mandates than do the children of wealthy parents, who have the wherewithal to pressure school districts for better and more costly options: “The IDEA system … permit[s] relatively privileged white pupils to capture high-cost … in-class resources that others with similar educational deficits cannot obtain while, at the same time, allowing disproportionate numbers of African-American and poor pupils to be shunted into [dead-end special ed] classes.” There was even more reason to worry that the IDEA system benefited the rich at the expense of the poor in the case of demands for tuition reimbursement like Tom Freston’s because only wealthy parents could afford to send their child to an expensive private school and sue for reimbursement later. As the coalition of urban school districts warned in its amicus brief: “Every dollar spent on tuition reimbursement is a dollar that can no longer be spent to improve public special education programs … [This harms] students with the greatest need for public services, namely those whose families cannot afford to seek services outside the public school system.”31
Those families face deteriorating schools with large classes and dramatically reduced extracurricular activities. In New York City, kindergarten classes averaged 22 students in 2009, and elementary and middle school classes averaged 25.8 students.32 In California, budget cuts have made classes of over 30 students commonplace, and many students have to pay for extracurricular activities such as sports and music out of their own pocket—if they are offered at all.33 It’s easy for parents to argue that public school classes don’t offer an adequate education to their learning disabled children when they don’t offer an adequate education to anyone. Given the state of many American public schools, who can blame parents for seeking private alternatives or trying to finagle extra resources for their children? And even the top public schools can’t compete with the best that money can buy. New York’s offer of a much-sought-after spot at the prestigious Lower Lab School for Gifted Education paled in comparison to the education Gilbert Freston was receiving at the private Gaynor school, where he enjoyed a four-to-one student-staff ratio: the head teacher at Gaynor suggested that the city’s proposed class size of fifteen “could be a bit overwhelming.”34
Freston insisted that he sued as a matter of principle: after taking his case all the way to the U.S. Supreme Court, he donated the tuition reimbursement that he was awarded to tutoring for public school children. But all things considered, Freston’s stance is somewhat perverse: What sound moral principle would force cash-strapped public schools to provide a gourmet education for some students while others must make do with a dog’s breakfast?
In 2009 students with learning disabilities accounted for almost half the entire population of disabled students receiving special services under the IDEA. It’s no accident that the explosion of learning disability diagnoses comes at the same time the public schools are increasingly troubled by overcrowding, spotty teaching quality, and violence. The strongest students manage to learn despite overcrowding and poor teaching, but weaker students don’t. So while all students suffer from overcrowding and indifferent teaching, poor performers—whether diagnosed with disabilities or not—suffer most. The parents of such students are right to insist that the schools are failing to help their children realize their potential, and failing them more dramatically than they are failing students who learn easily and without much help. In that sense, poor schools are inherently discriminatory: they make any student who has difficulty learning—for whatever reason—worse off than students who learn easily. But of course in this sense any poorly provided public service “discriminates” against the people who need it most: badly run hospitals discriminate against the injured and the sick; incompetent police departments discriminate against people living in crime-ridden neighborhoods; inadequately maintained parks discriminate against people without backyards.
The solution is obvious: better public services for everyone. But the IDEA doesn’t make the public schools better; instead, it shifts resources to a small fraction of the larger group of people who need them most. This might make some sense if that small fraction were especially injured by inadequate education or if they would uniquely profit from the extra resources. But if, as many educators believe, these children need the same things that any other student needs—good teaching in small classes—then it’s wrong to treat their needs as inalienable civil rights when we treat the needs of other students as luxuries that nearly bankrupt districts can’t afford. At any rate, the IDEA doesn’t even try to find out whether children with learning disabilities get more out of extra resources than other children would. Instead, the law mandates that some children should have more than others whether or not they need it more or will benefit more from it. All in the name of equality.
*   *   *
Dr. Paul Steinberg, a Washington, D.C., psychiatrist, argues that many students with what we call learning disabilities may in fact simply learn differently than other students and excel in different areas: for instance, “attention deficit disorder” may be a valuable asset in situations that demand spontaneity. “Essentially, ADHD is a problem dealing with the menial work of daily life, the tedium involved in many school situations and 9-to-5 jobs … [but] in many situations of hands-on activities or activities that reward spontaneity, ADHD is not a disorder.” But in today’s economy of technical and professional specialization, concentration is king, spontaneity is less valued, and impulsiveness can be ruinous: “What once conferred certain advantages in a hunter-gatherer era, in an agrarian age or even in an industrial age is now a potentially horrific character flaw.”35 Of course there have always been tasks that required concentration. But in past eras, a lot of things didn’t require sustained concentration: people we now would diagnose with ADHD could be great hunters, gladiators, knights, traveling minstrels, or rich aristocrats who didn’t need to work. During the Industrial Revolution, at least until the era of Henry Ford and modern management science, factory managers expected that workers would daydream and lose focus on the job. By contrast, in the information economy it’s harder and harder to find a good job where focus and detail orientation are optional.
This suggests that ADHD—even if it is the result of a discrete neurological condition—isn’t really a disability in the way that blindness, paralysis, severe autism, or even dyslexia is. Steinberg suggests we abandon the idea that some people have an attention deficit and instead think of everyone else as blessed (or cursed) with “attention-surplus disorder.” He argues that “children … with attention disorder may need more hands-on learning. Some may perform more effectively using computers and games rather than books. Some may do better with fieldwork and wilderness programs.” Steinberg urges that we “change the contexts in schools to accommodate the needs of children who have [ADHD], not just support and accommodate the needs of children with attention-surplus disorder.” Changing the context doesn’t suggest case-by-case exceptions to a general rule: it suggests a new pedagogical approach. If some children learn better using computers and fieldwork, we should introduce these teaching methods, and there’s no reason to limit them to children with diagnosed learning disabilities. Making viable alternatives available to all children who would profit from them would make the accommodations more equitable, further the important goal of integrating disabled children into regular classes, and eliminate any stigma now attached to “special education.”
Of course that’s practical only if games, fieldwork, and wilderness programs prepare children for life in the modern economy as well as “tedious” conventional schoolwork. Unfortunately, such ideas are often more attractive as therapy than as pedagogy. Educators tried out similar new and untested pedagogical methods in the 1960s and 1970s: when I was in grade school, for several years we learned “new math” and were graded on the quality of our ideas, regardless of whether they were well composed using proper grammar and sentence structure. The idea behind these new pedagogical methods was much the same as Dr. Steinberg’s idea: different children have different learning styles, and many children aren’t engaged by conventional pedagogy. These experiments were often short-lived because the new methods didn’t teach children as effectively: in order to tackle advanced subjects such as trigonometry, calculus, and college-level composition, you needed to have mastered the “old” math, with its multiplication tables and long division, and the boring old rules of grammar, sentence structure, and vocabulary. Moreover, students needed the mental discipline that the old methods imposed: part of the point of rote memorization was to teach children to focus on a single task for long periods of time.
Dr. Steinberg points out that “each child and adult learns and performs better in certain contexts than others.” Of course, this is true whether the person in question is diagnosed with a learning disability or not. It’s best to encourage people to pursue interests and careers for which they are well suited. Let’s face it: in many jobs a wandering mind isn’t a superficial condition that somehow masks an employee’s good performance; it’s a flaw that makes for poor performance. This is true whether the cause is an immutable neurological condition, inadequate practice, or a simple lack of diligence. We should help people with short attention spans find jobs where sustained attention isn’t important, not artificially inflate their grades and test scores and kid ourselves that concentration and speedy performance aren’t important in jobs where they are.
*   *   *
From the beginning, the precise rationale for disability rights has been unclear. Disability rights enjoyed widespread support among both liberals and conservatives, but for very different reasons. That has made it hard for courts to know how to interpret the law and easy for new claimants to press for expanded application and new entitlements. Liberals typically saw the extra resources and special exceptions for students with learning disabilities as civil rights that advance equality—part of a larger set of egalitarian social welfare policies designed to level hierarchies based on what philosophers might call “morally irrelevant” differences. But it’s unclear how far liberals will go in pursuit of this conceptual goal. Arguably all differences in innate ability and intelligence are morally irrelevant. But of course differences in ability—whether due to disabilities or not—are very relevant practically. Disability accommodations have less to do with the mainstream civil rights goal of equal opportunity than with equality of result—forbidding even-handed policies and practices that happen to disadvantage the disabled. Requiring employers, proprietors, landlords, and schools to ignore differences in ability and absorb the extra costs of compensating for such differences goes further than simply prohibiting irrational discrimination: it’s effectively a redistribution of wealth. In many cases, that redistribution makes sense; for instance, forcing building owners to pay for wheelchair ramps when they remodel or forcing employers to make allowances for blind or handicapped employees gives a long-neglected and disproportionately impoverished group of people a chance to lead fulfilling and constructive lives. But we should evaluate such accommodations as social welfare policies—not categorically accept them as inalienable civil rights.
Conservatives, by contrast, saw the disabled as among a small group of deserving unfortunates who suffer through no fault of their own—unlike the much larger group of losers who have their own shiftlessness and irresponsible behavior to blame for their misfortunes. Disability rights correct for variations in human ability caused by accidents and genetic randomness while leaving more patterned and predictable inherited inequalities firmly in place. Educational accommodations for students with learning disabilities are a conspicuous example: a diagnosis of a learning disability often effectively allows successful parents to pass their advantages in academic accomplishment along to their less successful children. Perhaps this is why many conservatives have supported fairly aggressive disability rights but have opposed much milder civil rights for other groups. To the extent disability rights protect existing socioeconomic statuses, they are consistent with a conservative tradition at least as old as Edmund Burke that places a high value on continuity and social stability. But they are at odds with the more widely accepted libertarian conservatism of today, which emphasizes self-reliance, free enterprise, and the discipline of the market. And they are certainly at odds with the civil rights tradition, which abhors hierarchies of birth and prizes social equity.
Disability rights serve two important purposes: they prohibit discrimination based on irrational aversion or inaccurate stereotypes, and they help to integrate disabled people into the mainstream of society. As for simple discrimination, given the long history of aversion to and prejudice against the disabled, it makes sense to require reluctant employers and proprietors to give disabled people a chance—even when doing so requires some extra effort or expense. There’s no doubt that simple prejudice against the disabled is still a serious and pervasive problem. Like race and sex, most disabilities are conspicuous: a blind person with a cane or Seeing Eye dog, a paraplegic in a wheelchair, or a mentally ill person muttering to herself makes an obvious target for the bigoted employer, landlord, or proprietor. But the milder emotional and learning disabilities aren’t conspicuous; in fact, they weren’t considered disabilities at all until recently. When people with these conditions do poorly in school or at work, they aren’t suffering because of irrational prejudice or inaccurate stereotypes; they’re suffering from an accurate assessment of their performance.
Disability rights also help to integrate the disabled into the mainstream. Congress noted the isolation and resulting impoverishment of the disabled when it passed the Americans with Disabilities Act in 1990. Again, people with more severe and conspicuous disabilities are the ideal beneficiaries of such an integrationist policy. Without mandatory accommodations, people with severe disabilities would be unable to compete for jobs, unable to communicate effectively, and unable to get around in cities designed for the able-bodied. But mild emotional and learning disabilities don’t prevent people from finding gainful employment or making their way in the world. They may prevent some people from getting the jobs they most want, but many, many nondisabled people can’t get the jobs they most want because they lack the required skill, temperament, or intelligence—deficits that are at least partially determined by discrete neurological conditions too. That’s not a civil rights issue—that’s life. Mandatory accommodations for the disabled involve the redistribution of resources—to disabled employees from employers, to disabled customers from proprietors, and to disabled students from the other students who compete with them for teaching resources and high grades. Deciding when such redistribution is justified requires difficult trade-offs between competing policy priorities—not an inflexible legal entitlement.
Rock of the Aged: Civil Rights for Older Workers
Google was one of the few high-tech Silicon Valley firms that emerged from the dot-com crash of 2001 not only unscathed but actually stronger. By 2003 its Internet search engine had become so popular that its lawyers had to worry that the “Google” name might become a generic term for Internet search (“I’ll Google it”), jeopardizing its legal status as a trademark. Having conquered web searches, Google moved on to dominate Internet maps, directions, and real-time traffic conditions. It launched an ambitious—some said quixotic—plan to digitally scan every book in the world for a text-searchable database: Google Books collaborated with some of the world’s largest libraries and alienated some of the largest publishers and literary agencies, who organized a lawsuit to block the project. Another Google project bested some of the world’s most powerful military intelligence organizations: in 2006 a satellite image from Google Earth revealed top secret U.S. operations in Pakistan in a photograph sharp enough to render the painted lines on the tarmac. When Google went public in 2004, the offering was among the most anticipated in Silicon Valley history, although many were skeptical that the company could hold its initial valuation of $27 billion. In 2009, in the midst of the worst economy since the Great Depression, Google was worth $140 billion.36
Google had done so much so quickly with a huge team of energetic, talented, and fiercely dedicated employees. The Googleplex—its main offices in Mountain View, California—is a sprawling campus of four large buildings, each surrounded by lawns, courtyards, and, this being suburban California, ample parking. The Googleplex is a cross between a university quadrangle and the ultimate party house. Google offers its employees three free meals a day at eleven cafeterias, free laundry, free hairstyling, a state-of-the-art gym complete with stationary lap pool, a volleyball court, lounges with pool tables, foosball, video games, and replicas of a spaceship and a dinosaur skeleton. Google provides bicycles and Segway scooters for employees to move around its campus, where they work on laptops at informal workstations, “yurts,” and “huddle rooms” that encourage collaboration and out-of-the-box thinking. The company provides toys to entertain the children of employees, and dogs are always welcome.
With all of these postmod cons, Google employees never need leave work. “We have a preference for those who like to work hard and play hard and are enthused about working on collaborative global teams,” each job listing for Google informs applicants. Brian Reid started working at Google in 2002 as director of operations and engineering. A former professor of electrical engineering at Stanford, Reid was an early Internet pioneer. He helped invent one of the first Ethernet networks at Stanford in 1981 and worked on the first e-mail protocols and on the first Internet search engine—AltaVista—in 1995. Reid was fifty-two years old when he started work at Google—a good two decades older than Google’s founders, Larry Page and Sergey Brin. Reid received positive evaluations from his superiors, in which they described him as “very intelligent” and “creative” and complimented his “confidence when dealing with fast changing situations” and his “excellent attitude.” But, in what was to prove a foreboding observation, his first performance review noted that “adapting to the Google culture is the primary task for the first year here … Google is simply different: Younger contributors, inexperienced first line managers, and the super fast pace are just a few examples.”
Reid lasted less than two years at Google. During his short tenure on the company’s campus, he was the target of a series of age-related jokes and disparaging comments. His immediate supervisor called him “lethargic” and dismissed his ideas as “obsolete” and “too old to matter.” His co-workers referred to him as an “old man” and an “old fuddy-duddy.” A CD jewel case served as an office placard for Google managers: some quipped that Reid’s should be a vinyl LP. Google’s management began to ease Reid out in October 2003 when they replaced him as director of operations with someone fifteen years younger. Reid was moved to a new position in charge of a pilot program that would allow Google’s engineers to earn graduate degrees on-site at Google. The new program turned out to be little more than a place to park Reid before driving him out: the degree program was never staffed or funded, and in January of the next year Google’s top management worked on “a proposal … on getting [Reid] out.” On February 13, 2004, the vice president of engineering, Wayne Rosing, told Reid he was not a “cultural fit” in Google’s engineering department. Reid applied for positions in other departments but the company’s management had already made sure Reid would not find a new home in the Googleplex. Various department heads had coordinated by e-mail to adopt a uniform line with Reid. “My line at the moment is that there is no role for him,” wrote the vice president of business operations. “We’ll all agree on the job elimination angle,” advised the human resources director, Stacy Sullivan. Five months later, Reid sued Google for age discrimination.37
Much of Reid’s complaint focused on Google’s corporate culture. Reid was let go because he wasn’t a cultural fit at Google. He argued that the atmosphere at the Googleplex was biased against older workers. Was Google’s culture a culture of youth? In many ways it was: Google used physical activities, such as skiing and hockey, as a way of team building; the Googleplex is modeled on a college campus; the decor is bright, colorful, and eccentric, like a dream house designed by MTV; and many of the perks offered to employees—from foosball to table tennis to free T-shirts—are likely to appeal to the young. But it’s just these attributes that make Google both a successful business and a beloved employer. Google is widely considered one of the best places to work in the high-tech industry. Its youthful culture is a deliberate attempt to cultivate the fresh, innovative thinking that has made the company a success. Even the age-related comments Reid rightly complained of may have reflected this emphasis on novel, out-of-the-box thinking rather than age-based animus. It’s not inconceivable that an older person could fit in at Google; in fact, the company hired Reid with the expectation and hope that he would adapt to the Google lifestyle. Is it a civil rights issue when an employer wants a workforce that’s young at heart?
*   *   *
In 1967 roughly half of all private job openings were explicitly closed to applicants over the age of fifty-five; one-fourth were limited to those forty-five or younger. Older people were disproportionately unemployed and stayed jobless longer than younger people. Spurred to action by the pathetic image of a jobless older person reduced to eating pet food in his cold-water flat, Congress passed the Age Discrimination in Employment Act of 1967, or ADEA. Naturally, the ADEA was modeled on the Civil Rights Act of 1964, which prohibited discrimination on the basis of race, color, national origin, religion, and sex. Unlike the 1964 act, the ADEA sailed through a Congress made up overwhelmingly of middle-aged and older people, buoyed by a broad consensus that discrimination on the basis of age was cruel, inefficient, and unjust. “Nobody defends such discrimination, and—it ought to be stopped,” declared a labor union representative in his testimony before Congress.
As expected, employers quickly took down the “Elderly Need Not Apply” signs after the ADEA was enacted. But unemployment among the elderly actually increased in the ten years after the ADEA was passed.38 Was the reason lack of enforcement? Did employees not know their rights under the ADEA? To the contrary, there was a “backlog” of age discrimination lawsuits that had grown every year since the ADEA was passed. The problem wasn’t in enforcement; it was in the design of the ADEA. Age discrimination was different from discrimination based on race, sex, and religion. It was oddly lopsided: older people suffered discrimination in hiring, but once hired, they fared as well as or better than younger workers. The Department of Labor’s commissioner on aging found that older employees were “frequently preferred over the younger” for promotions and received favorable treatment on the job.39 That’s still true today. “If you are old and have a job, you are less likely … to be fired,” said Alicia Munnell of the Center for Retirement Research at Boston College in 2009.40 There was really no need for a civil rights law to protect older workers; it was older applicants for jobs who needed protection. This was still the case over forty years after the ADEA was passed: in the recession of 2009 older people who were laid off were out of work for an average of 22.2 weeks; by contrast, younger people were unemployed an average of 16.2 weeks.41
The ADEA didn’t do much for job applicants, because an unemployed person is unlikely to sue for a job he didn’t get. A job applicant has very little information about the hiring process, the decision makers, or the qualifications of other people vying for the same job: it’s hard to know whether discrimination or legitimate factors made the difference. Moreover, litigation takes time—time that could be better spent continuing the job search. That’s why only about 9 percent of all employment discrimination claims challenge hiring decisions.42 By contrast, once hired, people become invested in their jobs and are much more likely to sue to keep them and to advance in them. Accordingly, almost 80 percent of all employment discrimination claims involve firing and denied promotions. Unsurprisingly, then, the ADEA encouraged current employees to sue over promotions and termination—where age discrimination wasn’t much of a problem. The ADEA empowered older workers who, as a group, were already better off than younger employees, but did little to solve the problem of chronic unemployment among the elderly. In fact, because the law gave older people a new weapon to use against employers—a weapon that they were much more likely to use once hired—the law probably encouraged employers to discriminate against older job applicants, making the problem the law was designed to solve worse.
The civil rights solution didn’t fit the problem of unemployment among older workers very well. Even when the ADEA was passed, it was well-known that “age discrimination is … seldom a matter of blind or arbitrary prejudice which often exists for reasons of race, creed, color, national origin, or sex … [It] is a more subtle series of problems … a combination of institutional factors and stereotyped thinking.”43 But the civil rights approach focused only on stereotypes, neglecting the subtler institutional factors that are actually the biggest cause of the problem. Congress basically cribbed from the 1964 Civil Rights Act, extending the same civil rights protections to a new group—the elderly—as if age discrimination were caused by irrational bigotry against the aged and inaccurate stereotypes about their qualifications. For instance, Secretary of Labor Willard Wirtz argued that age discrimination reflected outdated attitudes, “a failure on the part of employers to realize how technology and the life sciences have combined to increase the value of older people’s work”;44 and William D. Bechill, the commissioner on aging, insisted that “stereotyped attitudes about the ability of older people … play a major role in barring older workers from fair … consideration.”45
Stereotypes about older people are a problem, but as the NYU law professor Samuel Issacharoff and his coauthor Erica Harris point out in a detailed discussion of the ADEA, age discrimination is often driven by simple economics. Labor economists describe the typical career path in terms of a “life cycle.”46 The typical employee needs training early in her career, becomes a seasoned and efficient worker in mid-career, and then slows down a bit near the end of her career. If wages and salary perfectly matched the productivity of each employee, most people would receive very low compensation early in their careers, when they are still learning—in some cases the costs of training might be greater than the contribution new employees make, suggesting an “apprenticeship” model where the new employee should work for very little. Then pay would rise very quickly after the employee became accomplished. Finally, pay would level off and eventually drop as the older employee slowed down and thoughts turned from work to a well-earned retirement filled with grandchildren, gardening, and exotic travel (at least that’s what I hope to be looking forward to at seventy).
But of course this isn’t how compensation is usually set. Typically, compensation rises relatively slowly but inexorably: pay cuts reflect a firm in financial crisis or a sanction for unacceptably poor performance. This type of arrangement involves an implicit bargain between employer and employee: the junior employee builds up a debt while being trained, which she pays off as she gains skills, eventually banking a surplus, which she will collect as she ages and her compensation exceeds her value to the firm. Viewed sympathetically, it’s a humane and practical model based on a long-term relationship: younger people receive a decent wage and avoid the indignities of apprenticeship, and older workers enjoy compensation that reflects the esteem and respect they’ve earned throughout a long career—even if it exceeds their current productivity.
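The arithmetic of this implicit bargain is easy to sketch. In the toy calculation below, every figure is invented purely for illustration; the point is only the shape of the pattern, in which the worker’s running account with the firm swings from deficit to surplus and back:

```python
# A rough numerical sketch of the life-cycle compensation model described
# above. All figures are hypothetical, chosen only to make the implicit
# bargain visible: pay exceeds productivity during training, productivity
# exceeds pay in mid-career (the worker "banks" a surplus), and pay exceeds
# productivity again late in the career, when the surplus is drawn down.

stages = [
    # (label, years, annual productivity, annual pay) -- arbitrary units
    ("training (years 1-5)",      5,  60,  80),
    ("mid-career (years 6-25)",  20, 130, 100),
    ("late career (years 26-35)", 10, 90, 110),
]

banked = 0  # cumulative surplus the worker has left "on deposit" with the firm
for label, years, productivity, pay in stages:
    banked += (productivity - pay) * years
    print(f"after {label}: banked surplus = {banked}")

# after training (years 1-5): banked surplus = -100
# after mid-career (years 6-25): banked surplus = 500
# after late career (years 26-35): banked surplus = 300
```

On these invented numbers the worker still has a surplus on deposit at year thirty-five. Whether she ever collects it depends on when, and on whose terms, the employment relationship ends, which is why the model quietly assumes a predictable retirement date.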
But the deferred compensation and cross subsidy inherent in this approach encourage three types of age discrimination. Two are necessary parts of the implicit bargain and therefore defensible if one accepts its terms; one is an indefensible breach of the implicit bargain by the employer.
First, an employer might refuse to hire older employees because they will enter the pay scale at just the point where compensation exceeds contribution, never having “banked” the surplus by working when their contributions exceeded compensation. If the firm pays according to seniority, or guarantees retirement benefits at a certain age, the newly hired older worker will arrive just in time to collect the subsidy. Even an employer that is happy to retain older workers hired early in their careers or in mid-career might not want to hire workers already near the end of their careers. Of course, such an employer could hire a senior employee for a wage she’s worth—even if it’s much less than other workers of the same seniority who have contributed to the firm during their most productive and less well-compensated years. But this would require the firm to be explicit about the implicit cross subsidy involved in the salary structure. An advantage of the employment life cycle is that the subsidies are implicit. Making them explicit would be bad for employee morale: mid-career employees would resent subsidizing older and younger employees, and older employees would lose esteem and respect.
Of course, discrimination in hiring is just what the ADEA was supposed to prevent. But ironically the law may have made this kind of discrimination more likely by giving employers an additional reason to avoid hiring older people. Under the ADEA, an employer who hired an older employee on a trial basis would have to worry about a lawsuit if things didn’t work out. It’s not surprising that employers responded to the ADEA by eliminating conspicuous discriminatory policies while continuing to discriminate against older job applicants in less obvious ways.
Second, a smoothly and inexorably rising pay scale assumes mandatory retirement—a form of age discrimination. At some point the surplus banked in mid-career runs out. Before the ADEA was amended to prohibit the practice, most employers openly discriminated by imposing a mandatory retirement age. The ADEA originally covered only employees between forty and sixty-five, but the upper limit was eliminated in 1986, effectively outlawing most mandatory retirement.47 As a result, many older workers had the option of staying on at high compensation long after having recouped any surplus they had contributed in the middle of their careers. Many employers adjusted to the new legal regime by replacing “lockstep” compensation based on seniority with bottom-line-oriented compensation such as merit pay and low base salaries supplemented with productivity bonuses. This shift ultimately affected more than compensation: it was part of a change in attitude, the demise of a more collaborative and genteel business relationship and the rise of a more beady-eyed, “eat what you kill” approach. As Issacharoff and Harris put it, “Employees who currently have expectations of wages above marginal output will appear to be an unaffordable luxury in highly competitive markets.”48 For example, an internal Wal-Mart memo that came to light in 2005 notes unapprovingly that “the cost of an associate with seven years of tenure is almost 55 percent more than the cost of an associate with one year of tenure, yet there is no difference in his or her productivity. Moreover, because we pay an associate more in salary and benefits as his or her tenure increases, we are pricing that associate out of the labor market, increasing the likelihood that he or she will stay with Wal-Mart.”49 Perhaps the atmosphere that Brian Reid encountered at Google reflects this harder-edged employment relationship and the resulting contempt for employees even slightly “past their prime.”
Third, an unscrupulous employer may breach the implicit bargain and fire a loyal employee just as she is about to recoup the surplus she built up during her most productive years: a nasty bait and switch. Ironically, the ADEA does not prevent employers from sacking older employees just before their pensions vest—provided they do so out of simple greed and not bias. When sixty-two-year-old Walter Biggins was fired just a few weeks before his pension benefits vested, he sued for age discrimination and won a jury trial verdict, which was sustained on appeal. But the Supreme Court held that Biggins hadn’t suffered age discrimination. According to the Court, “It is the very essence of age discrimination for an older employee to be fired because the employer believes that productivity and competence decline with old age … on the basis of inaccurate and stigmatizing stereotypes.” But in Biggins’s case “the decision [was] not … the result of an inaccurate and denigrating generalization about age, but … rather … an accurate judgment … that he indeed is ‘close to vesting.’”50 Cheating an employee of his pension doesn’t involve anti-elderly bias—just evenhanded avarice. As a result, the Court held that age discrimination laws don’t prohibit one of the most common tricks employers use to cheat older employees. In fact, the employer’s conspicuous avarice was almost a defense to the age discrimination claim. “Inferring age motivation … may be problematic in cases where other unsavory motives … [are] present,” opined Justice Sandra Day O’Connor.
But the Court didn’t leave Walter Biggins without a remedy. Although his employer didn’t discriminate on the basis of age, it did violate the federal Employee Retirement Income Security Act, which regulates employee pension plans. Biggins doesn’t imply that there’s nothing wrong with cheating an employee of his nearly vested pension. Instead, it suggests that much of what we call age discrimination may be better dealt with outside the civil rights framework.
*   *   *
Instead of reducing unemployment among the needy elderly, the ADEA benefited older workers who had jobs—the group that was already “frequently preferred over … younger” employees. The ADEA let employees forty and older—and only employees forty and older—sue when they were fired or passed over for promotion: unlike every other civil rights law, the ADEA expressly rules out most “reverse discrimination” lawsuits. Of course, if discrimination against the elderly is the problem, perhaps it makes sense to limit the law’s protection to older plaintiffs. But such an asymmetry does amplify the concern that a law designed to ensure fair treatment will become a boondoggle for a specific, politically connected group. We’ve all heard this concern voiced loud and long in the context of race-based affirmative action, which California’s former governor Pete Wilson famously condemned as a “racial spoils system.” Ironically, the concern is much more valid—though less noticed—in the context of age-based civil rights. Although the ADEA is, strictly speaking, an antidiscrimination law and not an affirmative action law, because of its built-in asymmetry—age discrimination is unlawful only when it disadvantages older people—one could argue that the entire law is a form of affirmative action. And because older Americans as a group are not disadvantaged at all, the remedial purpose of these rights is more dubious than for race- or sex-based affirmative action policies. In fact, older people have a vastly disproportionate share of the nation’s wealth and political influence, and their fortunes were improving dramatically even as Congress continued to expand and strengthen age discrimination laws. Between 1970 and 1984 the median income of people over sixty-five rose by 35 percent as compared with less than 1 percent for people from twenty-five to sixty-four51—yet Congress expanded the ADEA to prohibit mandatory retirement in 1986.
The ADEA looks even more like an age-based spoils system when one looks at its effect on retirement benefits. After the ADEA outlawed mandatory retirement, employers were faced with the prospect of having to retain older, overpaid employees long after they had recouped the deferred compensation earned in their more productive and less remunerated mid-career years. Many tried to buy their way out of the problem by offering older employees a “golden handshake”—a onetime cash payment on retirement. Already this was a huge windfall for older employees: the older life-cycle wage arrangement assumed mandatory retirement at a specific age; outlawing mandatory retirement essentially rewrote the deal in favor of older employees. The golden handshake is the amortized cash value of this imposed revision of the employment contract—a direct transfer of wealth from employers (and, indirectly, younger employees) to older employees.
Employers typically offered the golden handshake to employees who were young enough that they were likely to continue working for many years unless bribed to retire, and offered younger employees more than older employees. In other words, the employers discriminated on the basis of age. But this had nothing to do with animus or negative stereotypes; it was a straightforward reflection of the economics that led employers to offer golden handshakes in the first place. The employer was basically buying older employees out of what had become an unprofitable mandatory employment contract from the employer’s perspective. The value of the buyout would depend on the expected length of the employees’ tenure. Age was a pretty good proxy for expected length of tenure: older employees would generally retire without inducement earlier than younger employees. Moreover, the golden handshake had to be enough to allow the employee to live comfortably in retirement, and that figure would be higher for younger people, who would have to make it stretch over a longer period of time. Adjusting the size of the golden handshake by age made sense, both from a narrow economic perspective and from a more humane perspective.
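A back-of-the-envelope calculation, again with invented numbers, shows why a rational buyout shrinks with age:

```python
# A back-of-the-envelope sketch, with invented figures, of why a sensible
# golden handshake is larger for relatively younger employees: the offer
# prices the years of above-productivity pay the firm would otherwise owe
# before the employee retired on her own.

EXPECTED_UNPROMPTED_RETIREMENT_AGE = 68  # hypothetical voluntary exit age
ANNUAL_OVERPAYMENT = 20_000  # pay minus current productivity (hypothetical)

def golden_handshake(age: int) -> int:
    """Value of buying out the remaining years of the implicit bargain."""
    years_bought_out = max(EXPECTED_UNPROMPTED_RETIREMENT_AGE - age, 0)
    return years_bought_out * ANNUAL_OVERPAYMENT

for age in (55, 60, 65):
    print(f"age {age}: a rational offer is roughly ${golden_handshake(age):,}")

# age 55: a rational offer is roughly $260,000
# age 60: a rational offer is roughly $160,000
# age 65: a rational offer is roughly $60,000
```

The sketch captures only the employer’s side of the ledger; the retirement-funding logic described above points in the same direction.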
The American Association of Retired Persons (AARP) had emerged as a powerful lobby in favor of expanded and strengthened ADEA provisions by 1986, when Congress revised the ADEA to outlaw mandatory retirement. It continued that role, lobbying Congress in the late 1980s to outlaw age-targeted retirement incentives. Its position was that targeted retirement incentives were age discrimination and therefore violated the ADEA per se. But the AARP changed its tune when it realized that a strict prohibition of age discrimination would kill the goose that gave the golden handshakes altogether. Strict application of the ADEA’s age discrimination rule would require employers to offer retirement inducements to everyone over the age of forty—or no one at all. As Issacharoff and Harris explain, “No employer could afford to offer retirement inducements to its entire workforce. At this point, the AARP did an about-face and began to lobby heavily for the preservation of [age-targeted retirement incentives] … so long as they were offered to everyone over a minimum age … When it came to benefiting older workers,… violations of the equal treatment principle proved to be more than just acceptable—they were required.”52
Issacharoff and Harris aptly describe this and other lobbying for expanded age discrimination laws as “wealth-grabbing self-interest” resulting in a “windfall to older workers.” It’s important to add that only some older workers benefited, and some benefited much more than others. A common argument against mandatory retirement is that changes in society have made it an anachronism. Advances in public health have allowed people to stay healthy and productive well into what would once have been their golden years, and today many people are psychologically invested in their careers in a way that their parents were not. But in fact the average age of voluntary retirement from the workforce has declined steadily and steeply since the mid-twentieth century, from about age sixty-eight in the early 1950s to age sixty-two in the late 1990s.53 The abolition of mandatory retirement makes no difference to the many people who choose to retire early.
The main exception to the trend toward earlier retirement is highly educated professionals and business managers. Because these careers are not physically demanding, people don’t “burn out” as readily as in other jobs. And because performance in such careers can be hard to measure objectively, status and reputation play a large role. Perhaps older professionals and managers retire later because they are more likely to be productive later in their careers. But productivity may just be harder to measure in the professions and upper management, allowing today’s older employees to use their reputations to hang on to coveted positions when an earlier generation would have “passed the baton” to younger protégés.
The uncharitable might think it telling that the legal profession has been largely exempt from age discrimination laws. For the most part, law firm partners are not considered employees covered under the ADEA, and mandatory retirement is still relatively common: over half of large law firms had mandatory retirement of some form in 2007.54 This may be changing: disgruntled older law firm partners have sued their firms over mandatory retirement, and the Equal Employment Opportunity Commission has taken up the cause of these unlikely subalterns—a move that has prompted many firms to drop mandatory retirement.55 But law firms have clung to mandatory retirement for good reasons. Many large law firms are still prime examples of the life-cycle model of compensation, which is responsible for preserving what collegiality remains in the typical Big Law sweatshop. Starting salaries at the more prestigious law firms in large cities are, as I write, as high as $170,000 a year. With all due admiration for the graduates of our nation’s law schools, I’m certain that no student fresh from law school or a clerkship can justify these salaries, especially when the costs of inevitable on-the-job practical training are deducted from the balance sheet. It’s well-known that many firms lose money on first- and sometimes even second-year attorneys and begin to break even only in the third year. Law partnerships remain lucrative because senior associates and young partners bring in much more than they earn in salary: salaries rise with seniority, but not as rapidly as skill and productivity do. For associates, compensation is typically tied to seniority and to the number of hours billed. But for the most highly compensated partners, compensation reflects the value of client relationships—a partner’s “book of business.” A partner who brings an important client into the firm will receive a yearly draw that reflects the billings to the client, even if he or she does little of the actual legal work for the client. So some senior partners earn much more than their current productivity would justify, based on the firm’s total billings to clients in their “book.” In effect, the mid-career lawyers subsidize both the novices and many older partners.
Mandatory retirement is indispensable to such a compensation structure. If partners can keep clients in their accounts indefinitely, the firm will become top-heavy with partners taking more out of the partnership than their current contributions justify. Of course, partners deserve to be compensated for “rainmaking,” as client cultivation is referred to in the profession. But not all client relationships are the result of a current partner’s rainmaking efforts: many clients have been with the same firm for generations. Traditionally, as partners age, they groom younger partners, to whom they will hand off their client relationships when they retire. But without the predictable and orderly handing down of client relationships that mandatory retirement encourages, partners within the same firm will begin to compete with each other for clients. Junior partners, with little prospect of building their own books of business within the firm, will try to build a book by leaving and then poaching clients they have worked with from senior partners at their old firms. Senior partners will become stingy mentors, guarding their client relationships from the younger lawyers with whom they work. Collegiality, mentoring, and client service will all suffer.
There’s no doubt that rigid mandatory retirement deprived businesses and society of talented older employees. But nothing required employers to impose mandatory retirement, and nothing prohibited talented older workers who faced mandatory retirement at one job from working elsewhere. At its best, mandatory retirement was an orderly and humane way of ending the employment relationship on a high note—with a gold watch and a party rather than with a bad performance evaluation and a pink slip. It allowed older workers to train and mentor younger replacements without the now-common fear of grooming one’s own competition, and it gave younger workers some assurance that coveted high-level positions, department chairs, and client portfolios would eventually be open to them.
The ADEA was first conceived of as a modest intervention to correct a flaw in employment markets that locked many older people in unemployment. It failed to correct that flaw, which continues to injure older people looking for work today. Instead, the ADEA morphed into what Issacharoff and Harris call “a benefits protection regulation for older workers … [mandating] open ended obligations to provide older workers lump sum buyouts [and other benefits] without giving any consideration to the economic rationale for these programs.”56 This has contributed to a backlash against employee benefits generally. For example, it’s now a commonplace quip among business managers and consultants that General Motors (now radically downsized even after receiving billions in government bailout money) was once a car company that offered employee benefits and is now a benefits company that happens to make cars. In 2005, the Nobel laureate economist Paul Krugman pointed out that GM’s health-care benefits alone accounted for $1,500 of the price of every car the company makes.57 The ADEA is hardly the sole cause of this crisis in employee benefits, but it played a role by disrupting the arrangements that made compensation and fixed-benefits plans economically viable and forcing employers to offer a windfall to a narrow class of older workers at the expense of younger workers, future generations, consumers, and—when the employers go broke and the federally insured pension plans become insolvent—taxpayers.
This has little to do with equality or justice for older people. If the real issue were bias against the elderly or irrational stereotypes, we wouldn’t find such ready and compelling economic justifications for so many of the practices the ADEA forbids. If the real issue were the integration of older people into the workforce or the alleviation of material disadvantage, the law would return to its original focus on the jobless elderly rather than locking in and increasing advantages for people who are already employed. If the real problem were that discrimination on the basis of age is somehow inherently demeaning or presumptively suspect, then the law would prohibit discrimination against the young as well as against the old rather than limit coverage to people forty and older as the ADEA does. After all, unlike stereotypes about race, sex, or disability, which fall almost exclusively on specific maligned groups, every questionable stereotype about the aged is mirrored by a denigrating generalization about the young: if the elderly are “sluggish,” the young are “reckless”; if older people are set in their ways, young people lack wisdom; if the aged are complacent, younger people are naive and inexperienced.
Ironically, civil rights law includes this asymmetry only in the context where it is least justified. Title VII prohibits so-called reverse discrimination as a matter of principle, even though there is no evidence of, say, widespread anti-white or anti-male bias. By contrast, the egalitarian rationale that might justify asymmetrical protection for “protected groups” in the case of race, sex, and religion doesn’t apply to age, because older people are disproportionately wealthy and powerful and hence can and do take care of themselves quite well in the market and in politics. The Seventh Circuit judge and University of Chicago Law School professor Richard Posner makes this point well:
It is as if the vast majority of persons who established employment policies and who made employment decisions were black, federal legislation mandated huge transfer payments from whites to blacks, and blacks occupied most high political offices in the nation. It would be mad in those circumstances to think the nation needed a law that would protect blacks from discrimination in employment. Employers—who … for the most part are not young themselves—are unlikely to harbor either serious misconceptions about the vocational capacities of the old … or a generalized antipathy toward old people.58
Today’s age discrimination laws are not “mad.” They make perfect sense—as interest-group politics. If age discrimination laws just benefited older people at the expense of the young, any inequity they caused would be short-lived: after all, with luck we will all be old someday. But because age discrimination laws benefit well-off older people—relatively wealthy professionals and business managers—much more than the poor, they are, in effect, a tax on industry levied for the benefit of the relatively rich. Much of this extra income will go unspent, like the inherited advantages locked in by learning disability accommodations, and will pass to future generations. In effect, age discrimination laws have created a sort of reverse inheritance tax, helping well-off upper-level managers and wealthy professionals to have plenty left over to leave to their offspring even after enjoying a lush retirement of travel and comfortable leisure. A bumper sticker popular in places like Coral Gables, Florida, Palm Springs, California, and Leisure World, Arizona, reads, “I’m Spending My Children’s Inheritance.” Thanks to civil rights laws, some of the nation’s richest retirees won’t need to.
From Civil Rights to Individual Entitlements
Each of the civil rights laws I’ve discussed here serves a legitimate purpose. It’s wrong and shortsighted to disfavor the disabled or the elderly because of inaccurate stereotypes or irrational aversion. Civil rights laws have an important role to play in making sure such prejudices don’t poison the labor market and pollute the public sphere. And prejudice aside, government, employers, schools, and building owners should help isolated and disadvantaged groups join the mainstream of society. But even as they offer a fair deal to the disadvantaged, these laws have also become a perk for the privileged.
We could reduce unemployment and social isolation for the disabled and the elderly without treating them as civil rights issues. The civil rights approach is based on the questionable premise that racial isolation and poverty, gender hierarchy, the isolation of the disabled, and unemployment among the elderly are all diseases caused by a common virus—irrational discrimination. This is plausible only if one has a very capacious definition of discrimination. For instance, in the 1970s Paul Brest, my former dean at Stanford Law School, argued that for legal purposes, discrimination should include distinctions based on not only irrational animus and stereotypes but also what he called “selective sympathy and indifference.”59 This definition of bigotry is familiar enough: If one of every four young white men were languishing in prison, you can bet Congress would have found another approach to law enforcement. Or: If men got pregnant, abortion would be a sacrament. This definition of discrimination reflects a generous notion of collective responsibility and social justice: society has a responsibility to eliminate and avoid not only overt discrimination but also inequality caused by milder forms of bias, such as the inability or unwillingness of decision makers to sympathize with people unlike themselves. But it wreaks havoc when translated into individual entitlements.
If you try, you can make a case that some kind of bigotry—animus, stereotypes, or selective sympathy and indifference—is behind almost any inequality. Why are black men overrepresented in prisons? Because of racial animus on the part of police and prosecutors. And even if crime rates really do vary among racial groups, tough law enforcement reflects “selective sympathy and indifference” toward the groups with higher crime rates. Why are the lines for the ladies’ rooms usually longer than for the gents’? Because architects and building planners are selectively indifferent to the needs of women. Why did most American employers have mandatory retirement at age sixty-five until the 1980s? Because of stereotypes about the productivity of older people and selective indifference to them. Why were older public buildings designed with stairs and not wheelchair ramps? Because landlords and architects were selectively indifferent to the needs of the disabled.
In fact these social problems are quite different from one another: they have different histories; they are perpetuated by different institutions in different ways; and they are justified by different misconceptions (and in some cases, valid concerns). It’s not very helpful to insist that they all involve “discrimination” and deserve condemnation for that reason. Discrimination, strictly speaking, is often both necessary and just. Exams are supposed to discriminate between people who have acquired the necessary skills and mastered the relevant material and those who haven’t. Discrimination on the basis of age made sense and was reasonably fair given the old life-cycle model of employment where compensation rose with years of service rather than productivity. Of course, it’s possible that certain exams don’t measure the right skills, and it’s possible that the life-cycle model of compensation should be discouraged and employers should be forced to tie compensation more tightly to job performance. But both of these questions demand distinctive, fact-specific inquiries and controversial judgment calls.
Instead of making the necessary inquiries and judgment calls, civil rights thinking equates all discrimination with bigotry and assumes that inequalities between groups must be the result of bigotry against the less fortunate groups. If we assume the problem is bigotry, then all of the tricky questions of implementation (how can we best address the real causes of inequality?) and distributive justice (who should pay?) disappear, and the answers seem simple: we should eliminate bigotry, and the bigots should pay. Driven by the unexamined presumption that exams that disfavor students diagnosed with learning disabilities are “biased” against them, the law demands that those students receive special exceptions. But no one asks whether the exams accurately measure relevant skills and knowledge generally, because garden-variety poor performers aren’t a discrete group against which educators could be biased or about which they might harbor stereotypes. Age discrimination law condemns mandatory retirement as a reflection of “stereotypes” about the aged, despite its fairly obvious practical function of offering older workers with declining productivity a dignified exit from the workplace. But it allows an employer to breach the implicit employment contract and cheat his older employee of an almost-vested pension—one of the most common hazards for older employees—because the bait and switch is motivated not by stereotypes but by simple avarice. Shoehorning such a wide range of social problems into the discrimination framework has made it harder to remove the impediments that trip up many disabled and elderly people, even as it’s made it easy for people to turn civil rights into selfish entitlements—and feel justified in doing so.


 
Copyright © 2011 by Richard Thompson Ford
