
Original site: http://www.newyorker.com/arts/critics/books/2012/05/14/120514crbo_books_acocella?currentPage=all


The battle over the way we should speak.

by Joan Acocella, May 14, 2012


For a long time, many English speakers have felt that the language was going to the dogs. All around them, people were talking about “parameters” and “life styles,” saying “disinterested” when they meant “uninterested,” “fulsome” when they meant “full.” To the pained listeners, it seemed that they were no longer part of this language group. To others, the complainers were fogies and snobs. The usages they objected to were cause not for grief but for celebration. They were pulsings of our linguistic lifeblood, proof that English was large, contained multitudes.

The second group was right about the multitudes. English is a melding of the languages of the many different peoples who have lived in Britain; it has also changed through commerce and conquest. English has always been a ragbag, and that encouraged further permissiveness. In the past half century or so, however, this situation has produced a serious quarrel, political as well as linguistic, with two combatant parties: the prescriptivists, who were bent on instructing us in how to write and speak; and the descriptivists, who felt that all we could legitimately do in discussing language was to say what the current practice was. This dispute is the subject of “The Language Wars: A History of Proper English” (Farrar, Straus & Giroux), by the English journalist Henry Hitchings, a convinced descriptivist.

In England, the most important and thorough prescriptivist volume of the twentieth century was “A Dictionary of Modern English Usage,” written by H. W. Fowler, a retired schoolteacher, and published in 1926. Its first edition is seven hundred and forty-two pages long, and much of it has to do with small questions of spelling and pronunciation. Fowler’s true subject, however—his heart’s home—is a set of two general principles, clarity and unpretentiousness, that he felt should govern all use of language. The book’s fame derives from the articles he wrote in relation to those matters—“genteelism,” “mannerisms,” “irrelevant allusion,” “love of the long word,” to name a few. Fowler defines “genteelism” as “the substituting, for the ordinary natural word that first suggests itself to the mind, of a synonym that is thought to be less soiled by the lips of the common herd, less familiar, less plebeian, less vulgar, less improper, less apt to come unhandsomely betwixt the wind & our nobility.” As is obvious here, Fowler was dealing not just with language but with its moral underpinnings, truth and falsehood. To many people, he seemed to offer an idealized view of what it meant to be English—decency, fair play, roast beef—and to recommend, even to prescribe, those things. Accordingly, Hitchings deplores the book.

England did not. “A Dictionary of Modern English Usage” sold sixty thousand copies in its first year. Its most famous descendant was George Orwell’s 1946 essay “Politics and the English Language.” Published just after the Second World War—that is, just after most of the world had been nearly destroyed by ideologues—the essay said that much political language, by means of circumlocution and euphemism and other doctorings, was “designed to make lies sound truthful and murder respectable.” (Orwell repeated the point three years later, in “1984.”) Orwell was thus the most urgent prescriptivist possible. To him, our very lives depended on linguistic clarity. Hitchings nods at Orwell respectfully but still has questions about the campaign for plain English to which the great man contributed so heavily.

What the plain-English manifestos have been to Britain, “The Elements of Style,” by William Strunk, Jr., and E. B. White, is to the United States. Strunk was an English professor at Cornell, and “The Elements of Style” began life as a forty-three-page pamphlet that he wrote in 1918 and distributed to his students in the hope of reforming what he saw as their foggy, verbose, and gutless writing. His goals were the same as Fowler’s: clarity and unpretentiousness. He also had a mania for conciseness.

A year after the pamphlet appeared, E. B. White, the twenty-year-old son of a piano manufacturer, enrolled in Strunk’s course. After graduation, he forgot his professor’s manual for many years, during which time he became a professional essayist, renowned for his clarity and unpretentiousness. Then, one day, a friend from college sent him a copy of Strunk’s pamphlet, thinking that it might amuse him. Impressed by his old teacher’s wisdom, White agreed to revise the manual for readers of his own time. The volume, now widely known as “Strunk and White,” was published in 1959. It is not without faults: the passive voice, frowned on in the book, occurs eleven times just on page 16 of the fourth edition. Nevertheless, “The Elements of Style” is the most trusted style manual in the United States.

White appended an essay to the manual, “An Approach to Style,” which carried the question of usage beyond correctness, into art. After the book’s many pages of rules, he says that excellence in writing depends less on following rules than on “ear,” the sense of what sounds right. Also, White stressed morals even more than Fowler did. “Style takes its final shape more from attitudes of mind than from principles of composition,” he says. “This moral observation would have no place in a rule book were it not that style is the writer, and therefore what a man is, rather than what he knows, will at last determine his style.” In short, to write well, you had to be a good person.

Strunk and White, together with Fowler and, to some extent, Orwell, addressed their remarks to people who were of their own social class, or who at least had had an education similar to theirs. Hence their ease, their wit, and their willingness to prescribe. None of them had any interest in telling steelworkers how to use English. But in the middle of the twentieth century their prescriptivist assumptions came up against violent opposition, at least in the academic world. The newly popular theory of structural linguistics held, in part, that you couldn’t legislate language. It had its own, internal rules. The most you could do was catalogue them. A second important objection came from the reform politics of the late twentieth century. In a world changed by immigration, and intolerant of the idea of an élite, many people felt that prescriptive style manuals were exclusionary, even cruel. Why should we let some old Protestant men tell us how to write our language?

Also on the level of taste and tone, the books seemed to some readers—for example, Hitchings—provincial and small-minded. “The idea of Fowler,” he writes, “is part of that nimbus of Englishness that includes a fondness for flowers and animals, brass bands, cups of milky tea, net curtains, collecting stamps, village cricket, the quiz and the crossword.” The idea of Strunk and White, too, was a little discomforting. The book became a cult object. A ballet based on it, by Matthew Nash, had its New York première in 1981. Nico Muhly composed a song cycle on the subject, and performed it at the New York Public Library in 2005, in conjunction with the publication of Maira Kalman’s illustrated edition of “Elements.” In 2009, Mark Garvey, a journalist, brought out a book, “Stylized: A Slightly Obsessive History of Strunk & White’s The Elements of Style,” that quotes the correspondence between White and his publishers, reproduces testimonials by celebrated writers, and describes Garvey’s feelings—all his feelings—about the book: “I love its trim size. I love the trade dress of the 1979 third edition: The authors’ last names fill the top half of the honey-mustard cover in a stocky, crimson, sans serif typeface.” For some, such fetishism was a bit nauseating—and also clubbish. Strunk and White could be associated with what some readers saw as the pipe-and-slippers tone of The New Yorker, where White was a celebrated contributor for decades.

The crucial document of the language dispute of the past half century was Webster’s Third New International Dictionary, published in 1961. This 2,662-page revised edition of the standard unabridged dictionary of American English was emphatically descriptivist. “Ain’t” got in, as did “irregardless.” “Like” could be used as a conjunction, as in “Winston tastes good like a cigarette should.” Some of these items had appeared in the preceding edition of the unabridged Webster’s (1934), but with plentiful “usage labels,” characterizing them as slang, humorous, erroneous, or illiterate. In Web. III, usage labels appeared far less often; they bore more neutral names, such as “nonstandard” and “substandard”; and they were defined in subtly political terms. “Substandard,” the dictionary tells us, “indicates status conforming to a pattern of linguistic usage that exists throughout the American language community but differs in choice of word or form from that of the prestige group in that community.” Two examples that the dictionary gave of words acceptable throughout the American language community except in its prestige group were “drownded” and “hisself.”

On many sides, Web. III was met with fury. A number of readers had no memory of having heard “drownded” or “hisself” said by anyone, ever, prestigious or not. Some people—including the influential critic Dwight Macdonald, in an acidulous 1962 essay, “The String Untuned”—went so far as to accuse the editors of equivocating, misleading, and concealing, for political reasons. Even the middle-of-the-road Times ridiculed Web. III. Rex Stout’s beloved detective Nero Wolfe threw the book into the fire because of its failure to distinguish between “imply” and “infer.” This was the closest thing to a public scandal that the quiet little world of English-language manuals had ever seen.

Out of it a new lexicon was born: the American Heritage Dictionary of the English Language, published in 1969. The A.H.D. was a retort to Web. III. It was unashamedly prescriptive and also, strictly speaking, élitist. In the words of its editor, William Morris, the book was written to provide “that sensible guidance toward grace and precision which intelligent people seek in a dictionary.” Intelligent people, dictionary consulters: that’s not everybody. Still, A.H.D.’s makers did their best to keep the doors open. They had put together a “usage panel” of about a hundred people, mostly professional writers and editors, whom they consulted—indeed, they asked them to vote—on controversial words and phrases. The editors then arrived at their decisions, but for many words they added not just a usage label but also a usage “note,” giving the voting results, which were sometimes close. Here, for instance, is the entry on “ain’t”: “Nonstandard. Contraction of am not.” But this is followed by an eighteen-line usage note, saying that while “ain’t” is strongly condemned, “ain’t I” is a little more tolerable than “ain’t” combined with any other word. Actually, sixteen per cent of the panel thought that “ain’t I” was acceptable in speech. (Don’t try it in writing, though. Only one per cent approved this.) Such polling could be viewed as a preëmptive defense against a charge of exclusiveness, but it can also be seen as an attempt to purvey common sense, rather than snobbery or defensiveness, and, in the end, just to tell the truth. In every quarter of the society, there is an élite. Web. III tried to make that fact go away. The A.H.D. did not, but it also demonstrated that occupants of the upper tier often—even usually—disagreed. So this was an élite that you might be able to join. It didn’t have a secret code.

In making the case for the language as it was spoken, the descriptivists did one great service: they encouraged studies of the vernacular. Dictionaries of slang have been around for a long time. In 2010, the Bodleian Library, at Oxford, brought out what its editors claim is the first specimen, a 1699 volume entitled “A New Dictionary of the Terms Ancient and Modern of the Canting Crew, In its Several Tribes, of Gypsies, Beggers, Thieves, Cheats, &c.”—“cant” means slang—whose author is listed only as B.E. I did not know, though I was glad to learn, many of its listings: “Louse-land” (Scotland), “Suck your face” (drink), “Hogen-mogen” (a Dutchman). The grandfather of twentieth-century slang books is considered to be Eric Partridge, whose “Dictionary of Slang and Unconventional English” (1937) shocked many people of the time. Since then, there have been national slang books, theoretical slang books, slang books covering tweets and texts and e-mails (Julie Coleman’s “Life of Slang”). Two years ago, a new contestant lumbered into the field: “Green’s Dictionary of Slang,” a three-volume, six-thousand-page lexicon. It covers the street talk not only of England—home of the book’s author, the language scholar Jonathon Green—but of most other English-speaking countries, and of numerous subcultures within them: the gay, the incarcerated, the military, and so on. An important event in lexicography this year was the publication of the fifth and final volume of Joan Houston Hall’s “Dictionary of American Regional English,” with such items as “too yet” (also); “we-uns” (we, us); “toe jam,” in wide use; and the “toe social,” a party where the women stand behind a curtain, sticking their toes out beneath it, and the men, after appraising the toes, bid for a companion for the evening.

Unsurprisingly, sex is the richest contributor to slang. Jonathon Green claims to have found fifteen hundred words for copulation and a thousand each for “penis” and “vagina.” There have been books strictly limited to obscenity. Green wrote one, “The Big Book of Filth” (1999). More recent is Ruth Wajnryb’s “Expletive Deleted” (2005). Wajnryb breaks no ground in her discussion of the reasons for dirty talk: obscenity enhances your vivacity; it cements fellowship within the group doing the talking. But she does discuss ethnic variations. Arabic and Turkish, she says, are justly praised for elaborate, almost surrealist curses (“You father of sixty dogs”). Bosnians focus on the family (“May your mother fart at a school meeting”). Wajnryb gives generous treatment to the populations, such as the Scots and the African-Americans, who hold actual competitions of verbal abuse, and she offers memorable examples:

I hate to talk about your mother, she’s a good old soul,
She got a ten-ton pussy and a rubber asshole.

For many years, the filthiest word in English was “fuck.” Even the dauntless Partridge had to use “f*ck.” (In Norman Mailer’s 1948 war novel, “The Naked and the Dead,” the G.I.s use “fug.” In what may be an apocryphal story, Mae West, meeting Mailer at a party, said, “Oh, you’re the guy who can’t spell ‘fuck’!”) According to Wajnryb, “fuck” has ceded first place to “cunt.”

While most discussions of slang focus on the lower and lower-middle classes, the gentry, too, have their argot. “U and Non-U: An Essay in Sociological Linguistics,” written in 1954 by the scholar Alan S. C. Ross, was an early and notorious study of this. For many years, language manuals had provided double-column lists of correct and incorrect words. Ross and his colleagues offered parallel columns of upper-class (U) speech versus the speech of (non-U) middle-class people trying to attain, or pretend to, upper-class status. Here is a sample:

U                     Non-U
Expensive             Costly
False teeth           Dentures
Pregnant              Expecting
House (a lovely)      Home (a lovely)
What?                 Pardon?
Napkin                Serviette
Awful smell           Unpleasant odor
Rich                  Wealthy
Curtains              Drapes

Some of the distinctions, such as “house” versus “home,” and “curtains” versus “drapes,” are still in force.

Note how well the non-U words conform to Fowler’s definition of genteelism: the choice of the fancier, rarer, or more euphemistic word. Americans have made their own contributions to non-U. Today, “discomfit” often turns up where “discomfort” should be.

Ross insisted that he did not endorse the U and non-U rules. He was a blameless professor at the University of Birmingham, and his essay was written for an obscure journal of philology in Helsinki. But it was swiftly leaped upon by people in England who did endorse such rules and were happy to talk about them. The essay was reprinted, in modified form, several times—for example, in “Noblesse Oblige,” a volume edited by Nancy Mitford. Here, various contributors added their own notes on U ways. Mitford told us that any sign of haste is non-U. Whenever possible, she said, she avoided airmail.

However descriptive Professor Ross’s intentions, his essay brings us to the obvious vice of the prescriptivists: many of them are indeed snobbish, as the descriptivists charge. The problem is not that they believe in the existence of élite groups—anyone who denies this is fooling himself—but that they are willing to scold us for not belonging to one. The novelist Kingsley Amis, who wrote a very Fowlerian manual called “The King’s English” (1997), instructed us that “medieval” was to be pronounced in four syllables, as “meedy-eeval.” To pronounce it in three syllables was “an infallible sign of fundamental illiteracy.”

Moving to a higher level, can we justly conclude that clear English is significantly related to moral worth? Unclarity, E. B. White says, is “a destroyer of life, of hope.” Such statements are intended, in part, as comical hyperboles, but how funny are they, in the end, since most people would like to be on the side of life and hope? It must be said that the writers in question are not oppressing the masses. No, the object of prescriptivist scorn is Ross’s non-U’s, the aspiring middle class. It is always rewarding, Amis writes, “to spot a would-be . . . infiltrator.” Amis’s father was a clerk in a mustard-manufacturing firm, so his pleasure in spotting arrivistes is understandable. But is it O.K.?

The descriptivists’ response to such statements is one of outraged virtue, and that is their besetting sin: self-righteousness. Hitchings sometimes casts himself as Candide, viewing with dismay the vile underbelly of the linguistic world. The rules are relative, he tells us. (Can it be?) They express the rule-makers’ social class, education, and values. (No!) Accordingly, they are also grounded in the rule-makers’ politics. (Really!) Having arrived at this last conclusion, the main point of his book, Hitchings ceases to be the shocked idealist and becomes an avenger. Purists are bullies, he writes. Even the soft-spoken language manuals are agents of tyranny. He says of Strunk and White’s restraint, “As with so much that masquerades as simplicity, it is really a cover for imperiousness.” Linguistic rigidity, he writes, is the product of its proponents’ “anxieties about otherness and difference.” You know what that means.

To support his points, Hitchings applies a great deal of faulty reasoning, above all the claim that since things have changed before, we shouldn’t mind seeing them change now. Usages frowned on today were once common. (Dr. Johnson split infinitives; Shakespeare wrote “between you and I,” and just about anything else he wanted.) Conversely, words considered respectable now were once decried. (Fowler took a firm stand against “placate” and “antagonize.”) And people have been complaining about the bad new ways, as opposed to the excellent old ways, for millennia. Why should we be so tedious as to repeat their error? Hitchings thinks that many of the distinctions that prescriptivists insist on—not just small things like “disinterested”/“uninterested” but big things like “who”/“whom”—“may already have been lost.”

It is not hard to see the illogic of this argument. What about the existence of a learned language, or a literary language? If Milton took from Virgil, and Blake from Milton, and Yeats from Blake, were those fountains dry, because they were not used by most people? As for the proposition that, if something was good enough for Dr. Johnson, it should be good enough for us, would we like to live with the dentistry, or the penal codes, or the views on race of Johnson’s time?

But the most curious flaw in the descriptivists’ reasoning is their failure to notice that it is now they who are doing the prescribing. By the eighties, the goal of objectivity had been replaced, at least in the universities, by the postmodern view that there is no such thing as objectivity: every statement is subjective, partial, full of biases and secret messages. And so the descriptivists, with what they regarded as their trump card—that they were being accurate—came to look naïve, and the prescriptivists, with their admission that they held a specific point of view, became the realists, the wised-up.

In the same period, the reformism of the sixties became, in some quarters, a stern, absolutist enterprise. Hitchings acknowledges the tie between political correctness (he calls it that) and the descriptive approach to language study. Faithful to his book’s thesis, he steps up to defend the enforcers, who were, he says, decent-minded people “demonized by the political right.” But he has a difficult time reconciling their views with his proclaimed anti-authoritarianism. Things get awkward for him as the book progresses.

Once you check his sources, things get worse. In the prescriptivists’ books, you will find that, contrary to Hitchings’s claims, many of them, or the best ones, are not especially tyrannical. Those men really wanted clear, singing prose, much more than rules, and they bent rules accordingly. White, addressing the question of “I” versus “me,” in “The Elements of Style,” asks, “Would you write, ‘The worst tennis player around here is I’ or ‘The worst tennis player around here is me’? The first is good grammar, the second is good judgment.” Kingsley Amis, for all his naughty jokes, is often philosophical, even modest. His preference for “all right” over “alright,” he tells us, is probably just a matter of what he learned in school. But it is Fowler, that supposedly starchy old schoolmaster, who is the most striking opponent of rigidity. In his first edition, he called the ban on prepositions at the end of a sentence “cherished superstition,” and said that those who avoid split infinitives at the cost of awkwardness are “bogy-haunted creatures.” Even more interesting is to watch him deal with matters of taste. One of his short essays, “vulgarization,” has to do with overusing a fancy word. It’s wrong to do this, he says, but “Nobody likes to be told that the best service he can do to a favourite word is to leave it alone, & perhaps the less said here on this matter the better.” This almost brings a tear to the eye. He doesn’t want people to lose face.

Nowadays, everyone is moving to the center. The big fight produced some useful discussions of linguistic history, including Guy Deutscher’s “The Unfolding of Language” (2005). These books, by demonstrating how language changes all the time, brought about some concessions on the part of the prescriptivists, notably the makers of the A.H.D.’s later editions. First, the editors changed the makeup of their advisory panel. (The original hundred advisers were not dead white men, but most of them were white men, and the average age was sixty-eight.) Some definitions were made more relativist.

Most important is that the editors tried to pull descriptivists over to their side. In the most recent edition, the fifth, they have not one but two introductory essays explaining their book’s philosophy. One is by John R. Rickford, a distinguished professor of linguistics and humanities at Stanford. Rickford tells us that “language learning and use would be virtually impossible without systematic rules and restrictions; this generalization applies to all varieties of language, including vernaculars.” That’s prescriptivism—no doubt about it. But turn the page and you get another essay, by the cognitive psychologist Steven Pinker. He tells us more or less the opposite. There are no rules, he declares. Or they’re there, but they’re just old wives’ tales—“bubbe-meises,” as he puts it, in Yiddish, presumably to show us what a regular fellow he is. And he attaches clear political meaning to this situation. People who insist on following supposed rules are effectively “derogating those who don’t keep the faith, much like the crowds who denounced witches, class enemies, and communists out of fear that they would be denounced first.” So prescriptivists are witch-hunters, Red-baiters. For the editors of the A.H.D. to publish Pinker’s essay alongside Rickford’s is outright self-contradiction. For them to publish it at all is cowardice, in service of avoiding a charge of élitism.

But the A.H.D.’s run for cover is not as striking as the bending over of certain descriptivists, notably Hitchings. Having written chapter after chapter attacking the rules, he decides, at the end, that maybe he doesn’t mind them after all: “There are rules, which are really mental mechanisms that carry out operations to combine words into meaningful arrangements.” We should learn them. He has. He thinks that the “who”/“whom” distinction may be on its way out. Funny, how we never see any confusion over these pronouns in his book, which is written in largely impeccable English.

No surprise here. Hitchings went to Oxford and wrote a doctoral dissertation on Samuel Johnson. He has completed three books on language. He knows how to talk the talk, but, as for walking the walk, he’d rather take the Rolls. You can walk, though. ♦



Annals of Ideas


The brainstorming myth.

by Jonah Lehrer, January 30, 2012

Repeated scientific debunking hasn’t dented brainstorming’s popularity.


In the late nineteen-forties, Alex Osborn, a partner in the advertising agency B.B.D.O., decided to write a book in which he shared his creative secrets. At the time, B.B.D.O. was widely regarded as the most innovative firm on Madison Avenue. Born in 1888, Osborn had spent much of his career in Buffalo, where he started out working in newspapers, and his life at B.B.D.O. began when he teamed up with another young adman he’d met volunteering for the United War Work Campaign. By the forties, he was one of the industry’s grand old men, ready to pass on the lessons he’d learned. His book “Your Creative Power” was published in 1948. An amalgam of pop science and business anecdote, it became a surprise best-seller. Osborn promised that, by following his advice, the typical reader could double his creative output. Such a mental boost would spur career success—“To get your foot in the door, your imagination can be an open-sesame”—and also make the reader a much happier person. “The more you rub your creative lamp, the more alive you feel,” he wrote.

“Your Creative Power” was filled with tricks and strategies, such as always carrying a notebook, to be ready when inspiration struck. But Osborn’s most celebrated idea was the one discussed in Chapter 33, “How to Organize a Squad to Create Ideas.” When a group works together, he wrote, the members should engage in a “brainstorm,” which means “using the brain to storm a creative problem—and doing so in commando fashion, with each stormer attacking the same objective.” For Osborn, brainstorming was central to B.B.D.O.’s success. Osborn described, for instance, how the technique inspired a group of ten admen to come up with eighty-seven ideas for a new drugstore in ninety minutes, or nearly an idea per minute. The brainstorm had turned his employees into imagination machines.

The book outlined the essential rules of a successful brainstorming session. The most important of these, Osborn said—the thing that distinguishes brainstorming from other types of group activity—was the absence of criticism and negative feedback. If people were worried that their ideas might be ridiculed by the group, the process would fail. “Creativity is so delicate a flower that praise tends to make it bloom while discouragement often nips it in the bud,” he wrote. “Forget quality; aim now to get a quantity of answers. When you’re through, your sheet of paper may be so full of ridiculous nonsense that you’ll be disgusted. Never mind. You’re loosening up your unfettered imagination—making your mind deliver.” Brainstorming enshrined a no-judgments approach to holding a meeting.

Brainstorming was an immediate hit and Osborn became an influential business guru, writing such best-sellers as “Wake Up Your Mind” and “The Gold Mine Between Your Ears.” Brainstorming provided companies with an easy way to structure their group interactions, and it became the most widely used creativity technique in the world. It is still popular in advertising offices and design firms, classrooms and boardrooms. “Your Creative Power” has even inspired academic institutes, such as the International Center for Studies in Creativity, at Buffalo State College, near where Osborn lived. And it has given rise to detailed pedagogical doctrines, such as the Osborn-Parnes Creative Problem Solving Process, which is frequently employed by business consultants. When people want to extract the best ideas from a group, they still obey Osborn’s cardinal rule, censoring criticism and encouraging the most “freewheeling” associations. At the design firm IDEO, famous for developing the first Apple mouse, brainstorming is “practically a religion,” according to the company’s general manager. Employees are instructed to “defer judgment” and “go for quantity.”

The underlying assumption of brainstorming is that if people are scared of saying the wrong thing, they’ll end up saying nothing at all. The appeal of this idea is obvious: it’s always nice to be saturated in positive feedback. Typically, participants leave a brainstorming session proud of their contribution. The whiteboard has been filled with free associations. Brainstorming seems like an ideal technique, a feel-good way to boost productivity. But there is a problem with brainstorming. It doesn’t work.

The first empirical test of Osborn’s brainstorming technique was performed at Yale University, in 1958. Forty-eight male undergraduates were divided into twelve groups and given a series of creative puzzles. The groups were instructed to follow Osborn’s guidelines. As a control sample, the scientists gave the same puzzles to forty-eight students working by themselves. The results were a sobering refutation of Osborn. The solo students came up with roughly twice as many solutions as the brainstorming groups, and a panel of judges deemed their solutions more “feasible” and “effective.” Brainstorming didn’t unleash the potential of the group, but rather made each individual less creative. Although the findings did nothing to hurt brainstorming’s popularity, numerous follow-up studies have come to the same conclusion. Keith Sawyer, a psychologist at Washington University, has summarized the science: “Decades of research have consistently shown that brainstorming groups think of far fewer ideas than the same number of people who work alone and later pool their ideas.”

And yet Osborn was right about one thing: like it or not, human creativity has increasingly become a group process. “Many of us can work much better creatively when teamed up,” he wrote, noting that the trend was particularly apparent in science labs. “In the new B. F. Goodrich Research Center”—Goodrich was an important B.B.D.O. client—“250 workers . . . are hard on the hunt for ideas every hour, every day,” he noted. “They are divided into 12 specialized groups—one for each major phase of chemistry, one for each major phase of physics, and so on.” Osborn was quick to see that science had ceased to be solitary.

Ben Jones, a professor at the Kellogg School of Management, at Northwestern University, has quantified this trend. By analyzing 19.9 million peer-reviewed academic papers and 2.1 million patents from the past fifty years, he has shown that levels of teamwork have increased in more than ninety-five per cent of scientific subfields; the size of the average team has increased by about twenty per cent each decade. The most frequently cited studies in a field used to be the product of a lone genius, like Einstein or Darwin. Today, regardless of whether researchers are studying particle physics or human genetics, science papers by multiple authors receive more than twice as many citations as those by individuals. This trend was even more apparent when it came to so-called “home-run papers”—publications with at least a hundred citations. These were more than six times as likely to come from a team of scientists.

Jones’s explanation is that scientific advances have led to a situation where all the remaining problems are incredibly hard. Researchers are forced to become increasingly specialized, because there’s only so much information one mind can handle. And they have to collaborate, because the most interesting mysteries lie at the intersections of disciplines. “A hundred years ago, the Wright brothers could build an airplane all by themselves,” Jones says. “Now Boeing needs hundreds of engineers just to design and produce the engines.” The larger lesson is that the increasing complexity of human knowledge, coupled with the escalating difficulty of those remaining questions, means that people must either work together or fail alone. But if brainstorming is useless, the question still remains: What’s the best template for group creativity?

In 2003, Charlan Nemeth, a professor of psychology at the University of California at Berkeley, divided two hundred and sixty-five female undergraduates into teams of five. She gave all the teams the same problem—“How can traffic congestion be reduced in the San Francisco Bay Area?”—and assigned each team one of three conditions. The first set of teams got the standard brainstorming spiel, including the no-criticism ground rules. Other teams—assigned what Nemeth called the “debate” condition—were told, “Most research and advice suggest that the best way to come up with good solutions is to come up with many solutions. Freewheeling is welcome; don’t be afraid to say anything that comes to mind. However, in addition, most studies suggest that you should debate and even criticize each other’s ideas.” The rest received no further instructions, leaving them free to collaborate however they wanted. All the teams had twenty minutes to come up with as many good solutions as possible.

The results were telling. The brainstorming groups slightly outperformed the groups given no instructions, but teams given the debate condition were the most creative by far. On average, they generated nearly twenty per cent more ideas. And, after the teams disbanded, another interesting result became apparent. Researchers asked each subject individually if she had any more ideas about traffic. The brainstormers and the people given no guidelines produced an average of three additional ideas; the debaters produced seven.

Nemeth’s studies suggest that the ineffectiveness of brainstorming stems from the very thing that Osborn thought was most important. As Nemeth puts it, “While the instruction ‘Do not criticize’ is often cited as the important instruction in brainstorming, this appears to be a counterproductive strategy. Our findings show that debate and criticism do not inhibit ideas but, rather, stimulate them relative to every other condition.” Osborn thought that imagination is inhibited by the merest hint of criticism, but Nemeth’s work and a number of other studies have demonstrated that it can thrive on conflict.

According to Nemeth, dissent stimulates new ideas because it encourages us to engage more fully with the work of others and to reassess our viewpoints. “There’s this Pollyannaish notion that the most important thing to do when working together is stay positive and get along, to not hurt anyone’s feelings,” she says. “Well, that’s just wrong. Maybe debate is going to be less pleasant, but it will always be more productive. True creativity requires some trade-offs.”

Another of her experiments has demonstrated that exposure to unfamiliar perspectives can foster creativity. The experiment focussed on a staple of the brainstorming orthodoxy—free association. A long-standing problem with free association is that people aren’t very good at it. In the early nineteen-sixties, two psychologists, David Palermo and James Jenkins, began amassing a huge table of word associations, the first thoughts that come to mind when people are asked to reflect on a particular word. (They interviewed more than forty-five hundred subjects.) Palermo and Jenkins soon discovered that the vast majority of these associations were utterly predictable. For instance, when people are asked to free-associate about the word “blue,” the most likely first answer is “green,” followed by “sky” and “ocean.” When asked to free-associate about “green,” nearly everyone says “grass.” “Even the most creative people are still going to come up with many mundane associations,” Nemeth says. “If you want to be original, then you have to get past this first layer of predictability.”

Nemeth devised an experiment to escape this trap. Pairs of subjects were shown a series of color slides in various shades of blue and asked to identify the colors. Sometimes one of the pair was actually a lab assistant instructed by Nemeth to provide a wrong answer. After a few minutes, the pairs were asked to free-associate about the colors they had seen. People who had been exposed to inaccurate descriptions came up with associations that were far more original. Instead of saying that “blue” reminded them of “sky,” they came up with “jazz” and “berry pie.” The obvious answer had stopped being their only answer. Even when alternative views are clearly wrong, being exposed to them still expands our creative potential. In a way, the power of dissent is the power of surprise. After hearing someone shout out an errant answer, we work to understand it, which causes us to reassess our initial assumptions and try out new perspectives. “Authentic dissent can be difficult, but it’s always invigorating,” Nemeth says. “It wakes us right up.”

Criticism allows people to dig below the surface of the imagination and come up with collective ideas that aren’t predictable. And recognizing the importance of conflicting perspectives in a group raises the issue of what kinds of people will work together best. Brian Uzzi, a sociologist at Northwestern, has spent his career trying to determine what the ideal composition of a team looks like. Casting around for an industry to study that would most clearly show the effects of interaction, he hit on Broadway musicals. He’d grown up in New York City and attended his first musical at the age of nine. “I went to see ‘Hair,’ ” Uzzi recalls. “I remember absolutely nothing about the music, but I do remember the nude scene. That just about blew my mind. I’ve been a fan of Broadway ever since.”

Uzzi sees musicals as a model of group creativity. “Nobody creates a Broadway musical by themselves,” he said. “The production requires too many different kinds of talent.” A composer has to write songs with a lyricist and a librettist; a choreographer has to work with a director, who is probably getting notes from the producers.

Uzzi wanted to understand how the relationships of these team members affected the product. Was it better to have a group composed of close friends who had worked together before? Or did strangers make better theatre? He undertook a study of every musical produced on Broadway between 1945 and 1989. To get a full list of collaborators, he sometimes had to track down dusty old Playbills in theatre basements. He spent years analyzing the teams behind four hundred and seventy-four productions, and charted the relationships of thousands of artists, from Cole Porter to Andrew Lloyd Webber.

Uzzi found that the people who worked on Broadway were part of a social network with lots of interconnections: it didn’t take many links to get from the librettist of “Guys and Dolls” to the choreographer of “Cats.” Uzzi devised a way to quantify the density of these connections, a figure he called Q. If musicals were being developed by teams of artists that had worked together several times before—a common practice, because Broadway producers see “incumbent teams” as less risky—those musicals would have an extremely high Q. A musical created by a team of strangers would have a low Q.

Uzzi then compared his Q readings with information about how successful the productions had been. “Frankly, I was surprised by how big the effect was,” Uzzi told me. “I expected Q to matter, but I had no idea it would matter this much.” According to the data, the relationships among collaborators emerged as a reliable predictor of Broadway success. When the Q was low—less than 1.7 on Uzzi’s five-point scale—the musicals were likely to fail. Because the artists didn’t know one another, they struggled to work together and exchange ideas. “This wasn’t so surprising,” Uzzi says. “It takes time to develop a successful collaboration.” But, when the Q was too high (above 3.2), the work also suffered. The artists all thought in similar ways, which crushed innovation. According to Uzzi, this is what happened on Broadway during the nineteen-twenties, which he made the focus of a separate study. The decade is remembered for its glittering array of talent—Cole Porter, Richard Rodgers, Lorenz Hart, Oscar Hammerstein II, and so on—but Uzzi’s data reveals that ninety per cent of musicals produced during the decade were flops, far above the historical norm. “Broadway had some of the biggest names ever,” Uzzi explains. “But the shows were too full of repeat relationships, and that stifled creativity.”

The best Broadway shows were produced by networks with an intermediate level of social intimacy. The ideal level of Q—which Uzzi and his colleague Jarrett Spiro called the “bliss point”—emerged as being between 2.4 and 2.6. A show produced by a team whose Q was within this range was three times more likely to be a commercial success than a musical produced by a team with a score below 1.4 or above 3.2. It was also three times more likely to be lauded by the critics. “The best Broadway teams, by far, were those with a mix of relationships,” Uzzi says. “These teams had some old friends, but they also had newbies. This mixture meant that the artists could interact efficiently—they had a familiar structure to fall back on—but they also managed to incorporate some new ideas. They were comfortable with each other, but they weren’t too comfortable.”
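Uzzi’s thresholds lend themselves to a compact summary. The sketch below (in Python; the function name and verdict strings are my own illustrative choices — only the numeric cutoffs and the five-point scale come from the findings as reported above) classifies a team’s Q score:

```python
def q_verdict(q: float) -> str:
    """Classify a team's Q score using the thresholds Uzzi reported:
    below 1.7 shows tended to fail, above 3.2 creativity suffered,
    and the "bliss point" lay between 2.4 and 2.6."""
    if not 1.0 <= q <= 5.0:
        raise ValueError("Q is measured on a five-point scale")
    if q < 1.7:
        return "too unfamiliar: strangers struggle to work together"
    if q > 3.2:
        return "too familiar: repeat relationships stifle new ideas"
    if 2.4 <= q <= 2.6:
        return "bliss point: old friends mixed with newbies"
    return "intermediate: a workable mix of old and new ties"

print(q_verdict(2.5))  # a team in the bliss-point range
```

The point of the sketch is only that the relationship is non-monotonic: success peaks in the middle of the scale rather than at either extreme.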

Uzzi’s favorite example of “intermediate Q” is “West Side Story,” one of the most successful Broadway musicals ever. In 1957, the play was seen as a radical departure from Broadway conventions, both for its focus on social problems and for its extended dance scenes. The concept was dreamed up by Jerome Robbins, Leonard Bernstein, and Arthur Laurents. They were all Broadway legends, which might make “West Side Story” look like a show with high Q. But the project also benefitted from a crucial injection of unknown talent, as the established artists realized that they needed a fresh lyrical voice. After an extensive search, they chose a twenty-five-year-old lyricist who had never worked on a Broadway musical before. His name was Stephen Sondheim.

A few years ago, Isaac Kohane, a researcher at Harvard Medical School, published a study that looked at scientific research conducted by groups in an attempt to determine the effect that physical proximity had on the quality of the research. He analyzed more than thirty-five thousand peer-reviewed papers, mapping the precise location of co-authors. Then he assessed the quality of the research by counting the number of subsequent citations. The task, Kohane says, took a “small army of undergraduates” eighteen months to complete. Once the data was amassed, the correlation became clear: when co-authors were closer together, their papers tended to be of significantly higher quality. The best research was consistently produced when scientists were working within ten metres of each other; the least cited papers tended to emerge from collaborators who were a kilometre or more apart. “If you want people to work together effectively, these findings reinforce the need to create architectures that support frequent, physical, spontaneous interactions,” Kohane says. “Even in the era of big science, when researchers spend so much time on the Internet, it’s still so important to create intimate spaces.”
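The shape of Kohane’s comparison can be illustrated with a toy calculation. The records below are invented for the illustration — only the qualitative pattern (co-authors within ten metres out-cite those a kilometre or more apart) comes from the study:

```python
# Hypothetical (distance between co-authors in metres, citation count) pairs.
papers = [
    (5, 120), (8, 95), (9, 140),       # co-located teams
    (300, 40), (800, 35),              # same campus, different buildings
    (1500, 12), (4000, 9), (2500, 15)  # a kilometre or more apart
]

def mean_citations(rows, predicate):
    """Average citations over papers whose co-author distance satisfies predicate."""
    hits = [c for d, c in rows if predicate(d)]
    return sum(hits) / len(hits)

close = mean_citations(papers, lambda d: d <= 10)    # within ten metres
far = mean_citations(papers, lambda d: d >= 1000)    # a kilometre or more apart
print(f"within 10 m: {close:.1f} citations on average")
print(f"1 km or more: {far:.1f} citations on average")
```

With this toy data the co-located group averages roughly ten times as many citations, which exaggerates the effect; the study’s real claim is only that the difference was consistent and significant.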

A new generation of laboratory architecture has tried to make chance encounters more likely to take place, and the trend has spread in the business world, too. One fanatical believer in the power of space to enhance the work of groups was Steve Jobs. Walter Isaacson’s recent biography of Jobs records that when Jobs was planning Pixar’s headquarters, in 1999, he had the building arranged around a central atrium, so that Pixar’s diverse staff of artists, writers, and computer scientists would run into each other more often. “We used to joke that the building was Steve’s movie,” Ed Catmull, the president of both Disney Animation and Pixar Animation, says. “He really oversaw everything.”

Jobs soon realized that it wasn’t enough simply to create an airy atrium; he needed to force people to go there. He began with the mailboxes, which he shifted to the lobby. Then he moved the meeting rooms to the center of the building, followed by the cafeteria, the coffee bar, and the gift shop. Finally, he decided that the atrium should contain the only set of bathrooms in the entire building. (He was later forced to compromise and install a second pair of bathrooms.) “At first, I thought this was the most ridiculous idea,” Darla Anderson, a producer on several Pixar films, told me. “I didn’t want to have to walk all the way to the atrium every time I needed to do something. That’s just a waste of time. But Steve said, ‘Everybody has to run into each other.’ He really believed that the best meetings happened by accident, in the hallway or parking lot. And you know what? He was right. I get more done having a cup of coffee and striking up a conversation or walking to the bathroom and running into unexpected people than I do sitting at my desk.” Brad Bird, the director of “The Incredibles” and “Ratatouille,” says that Jobs “made it impossible for you not to run into the rest of the company.”

In the spring of 1942, it became clear that the Radiation Laboratory at M.I.T.—the main radar research institute for the Allied war effort—needed more space. The Rad Lab had been developing a radar device for fighter aircraft that would allow pilots to identify distant German bombers, and was hiring hundreds of scientists every few months. The proposed new structure, known as Building 20, was going to be the biggest lab yet, comprising two hundred and fifty thousand square feet, on three floors. It was designed in an afternoon by a local architecture firm, and construction was quick and cheap. The design featured a wooden frame on top of a concrete-slab foundation, with an exterior covered in gray asbestos shingles. (Steel was in short supply.) The structure violated the Cambridge fire code, but it was granted an exemption because of its temporary status. M.I.T. promised to demolish Building 20 shortly after the war.

Initially, Building 20 was regarded as a failure. Ventilation was poor and hallways were dim. The walls were thin, the roof leaked, and the building was broiling in the summer and freezing in the winter. Nevertheless, Building 20 quickly became a center of groundbreaking research, the Los Alamos of the East Coast, celebrated for its important work on military radar. Within a few years, the lab developed radar systems used for naval navigation, weather prediction, and the detection of bombers and U-boats. According to a 1945 statement issued by the Defense Department, the Rad Lab “pushed research in this field ahead by at least 25 normal peacetime years.” If the atom bomb ended the war, radar is what won it.

Immediately after the surrender of Japan, M.I.T., as it had promised, began making plans for the demolition of Building 20. The Rad Lab offices were dismantled and the radio towers on the roof were taken down. But the influx of students after the G.I. Bill suddenly left M.I.T. desperately short of space. Building 20 was turned into offices for scientists who had nowhere else to go.

The first division to move into Building 20 was the Research Laboratory of Electronics, which grew directly out of the Rad Lab. Because the electrical engineers needed only a fraction of the structure, M.I.T. began shifting a wide variety of academic departments and student clubs to the so-called “plywood palace.” By the nineteen-fifties, Building 20 was home to the Laboratory for Nuclear Science, the Linguistics Department, and the machine shop. There was a particle accelerator, the R.O.T.C., a piano repair facility, and a cell-culture lab.

Building 20 became a strange, chaotic domain, full of groups who had been thrown together by chance and who knew little about one another’s work. And yet, by the time it was finally demolished, in 1998, Building 20 had become a legend of innovation, widely regarded as one of the most creative spaces in the world. In the postwar decades, scientists working there pioneered a stunning list of breakthroughs, from advances in high-speed photography to the development of the physics behind microwaves. Building 20 served as an incubator for the Bose Corporation. It gave rise to the first video game and to Chomskyan linguistics. Stewart Brand, in his study “How Buildings Learn,” cites Building 20 as an example of a “Low Road” structure, a type of space that is unusually creative because it is so unwanted and underdesigned. (Another example is the Silicon Valley garage.) As a result, scientists in Building 20 felt free to remake their rooms, customizing the structure to fit their needs. Walls were torn down without permission; equipment was stored in the courtyards and bolted to the roof. When Jerrold Zacharias was developing the first atomic clock, working in Building 20, he removed two floors in his lab to make room for a three-story metal cylinder.

The space also forced solitary scientists to mix and mingle. Although the rushed wartime architects weren’t thinking about the sweet spot of Q or the importance of physical proximity when they designed the structure, they conjured up a space that maximized both of these features, allowing researchers to take advantage of Building 20’s intellectual diversity.

Room numbers, for instance, followed an inscrutable scheme: rooms on the second floor were given numbers beginning with 1, and third-floor room numbers began with 2. Furthermore, the wings that made up the building were named in an unclear sequence: B wing gave onto A wing, followed by E, D, and C wings. Even longtime residents of Building 20 were constantly getting lost, wandering the corridors in search of rooms. Those looking for the Ice Research Lab had to walk past the military recruiting office; students on their way to play with the toy trains (the Tech Model Railroad Club was on the third floor, in Room No. 20E-214) strolled along hallways filled with the latest computing experiments.

The building’s horizontal layout also spurred interaction. Brand quotes Henry Zimmerman, an electrical engineer who worked there for years: “In a vertical layout with small floors, there is less research variety on each floor. Chance meetings in an elevator tend to terminate in the lobby, whereas chance meetings in a corridor tended to lead to technical discussions.” The urban theorist Jane Jacobs described such incidental conversations as “knowledge spillovers.” Her favorite example was the rise of the automobile industry in Detroit. In the eighteen-twenties, the city was full of small shipyards built for the flour trade. Over time, the shipyards became centers of expertise in the internal-combustion engine. Nearly a century later, those engines proved ideal for powering cars, which is why many pioneers of the automotive industry got their start building ships. Jacobs’s point was that the unpredictable nature of innovation meant that it couldn’t be prescribed in advance.

Building 20 was full of knowledge spillovers. Take the career of Amar Bose. In the spring of 1956, Bose, a music enthusiast, procrastinating in writing his dissertation, decided to buy a hi-fi. He chose the system with the best technical specs, but found that the speakers sounded terrible. Bose realized that the science of hi-fi needed help and began frequenting the Acoustics Lab, which was just down the hall. Before long, Bose was spending more time playing with tweeters than he was on his dissertation. Nobody minded the interloper in the lab, and, three years later, Bose produced a wedge-shaped contraption outfitted with twenty-two speakers, a synthesis of his time among the engineers and his musical sensibility. The Bose Corporation was founded soon afterward.

A similar lesson emerges from the Linguistics Department at M.I.T., which was founded by Morris Halle, in the early fifties. According to Halle, he was assigned to Building 20 because that was the least valuable real estate on campus, and nobody thought much of linguists. Nevertheless, he soon grew fond of the building, if only because he was able to tear down several room dividers. This allowed Halle to transform a field that was often hermetic, with grad students working alone in the library, into a group exercise, characterized by discussion, Socratic interrogation, and the vigorous exchange of clashing perspectives. “At Building 20, we made a big room, so that all of the students could talk to each other,” Halle remembers. “That’s how I wanted them to learn.”

One of Halle’s first recruits was Carol Chomsky, a young scholar who was married to a Harvard grad student named Noam Chomsky, also a linguist. Halle encouraged Noam Chomsky to apply for an open position at M.I.T., and in 1955 he joined the linguistics faculty at Building 20. For the next several decades, Halle and Chomsky worked in adjacent offices, which were recalled by a colleague as “the two most miserable holes in the whole place.” Although the two studied different aspects of language—Chomsky focussed on syntax and grammar, and Halle analyzed the sounds of words—they spent much of their day talking about their work. “We became great friends,” Halle says. “And friends shouldn’t be shy about telling each other when they are wrong. What am I supposed to do? Not tell him he’s got a bad idea?”

After a few years at M.I.T., Chomsky revolutionized the study of linguistics by proposing that every language shares a “deep structure,” which reflects the cognitive structures of the mind. Chomsky’s work drew from disparate fields—biology, psychology, and computer science. At the time, the fields seemed to have nothing in common—except the hallways of Building 20. “Building 20 was a fantastic environment,” Chomsky says. “It looked like it was going to fall apart. But it was extremely interactive.” He went on, “There was a mixture of people who later became separate departments interacting informally all the time. You would walk down the corridor and meet people and have a discussion.”

Building 20 and brainstorming came into being at almost exactly the same time. In the sixty years since then, if the studies are right, brainstorming has achieved nothing—or, at least, less than would have been achieved by six decades’ worth of brainstormers working quietly on their own. Building 20, though, ranks as one of the most creative environments of all time, a space with an almost uncanny ability to extract the best from people. Among M.I.T. people, it was referred to as “the magical incubator.”

The fatal misconception behind brainstorming is that there is a particular script we should all follow in group interactions. The lesson of Building 20 is that when the composition of the group is right—enough people with different perspectives running into one another in unpredictable ways—the group dynamic will take care of itself. All these errant discussions add up. In fact, they may even be the most essential part of the creative process. Although such conversations will occasionally be unpleasant—not everyone is always in the mood for small talk or criticism—that doesn’t mean that they should be avoided. The most creative spaces are those which hurl us together. It is the human friction that makes the sparks. ♦

*Editor’s Note: Noam Chomsky’s comments about M.I.T.’s Building 20 were not made directly to Jonah Lehrer, nor was a colleague’s description of Chomsky’s and Morris Halle’s offices as “the two most miserable holes in the whole place.” Chomsky and his colleague were interviewed by Peter Dizikes for his article in the November/December issue of Technology Review.

ILLUSTRATION: Nishant Choksi

Read more: http://www.newyorker.com/reporting/2012/01/30/120130fa_fact_lehrer