And why you’ll never convince your aunt that climate change is real
In 1991, Timothy Egan published a story in The New York Times that sought to determine why apples had suddenly terrified the United States. For two decades, the popular fruit had enjoyed a perch atop the list of healthy snacks, but in the late 1980s, demand fell so sharply, and fear of eating apples ran so deep, that prices dropped below the break-even point for producers. Schools in Los Angeles and New York banned shipments of apple juice, and farmers began dumping produce into ditches or giving it away to homeless shelters rather than attempt to sell through stores. It took nearly five years for the apple industry to recover to pre-crisis levels.
The root cause of the public outcry was the chemical daminozide, commonly known as Alar. Farmers used this pesticide to help keep fruit firm and brightly colored, and in 1988, the year before the scare, only about five percent of store-bought apples had been sprayed with the chemical. At the time, the scientific community agreed that studies showed no significant link between Alar and human harm.
Yet science’s opinion would soon be rendered irrelevant. More than 15 years prior, a private study found that when mice were fed extremely high doses of Alar (equal to 35,000 times the highest estimate of the daily intake at the time), these mice developed tumors. Seeing these results, the Natural Resources Defense Council (NRDC) made an inferential leap to conclude that because children ate proportionally more apples than adults, Alar necessarily posed a cancer risk to thousands of schoolkids. Panicked, the NGO hired a public relations firm to spread the news that America’s favorite fruit contained the kiss of death. An ensuing 60 Minutes segment launched one of the most effective scare campaigns in history.
The public responded with pandemonium, and the EPA quickly banned Alar, even though farmers had already begun giving up the pesticide voluntarily. To no avail: It was easier for shoppers to avoid apples entirely, resulting in plummeting prices and wasted produce. Even an official statement by the federal government proclaiming that apples were safe didn’t change buying habits. By the end, the crisis had cost apple growers $125 million in Washington state alone, and hundreds of small, family-owned farms collapsed into bankruptcy. Two years later, apple prices stabilized somewhat, but as Tom Hale, president of the Washington Apple Commission, stated, the industry “never really got over the bitterness of that image that was created, that we were somehow trying to poison children.”
In the ensuing legal battle, which alleged irresponsibility on the part of the media and the NRDC, defendants blamed the panic on oversimplification.
“We didn’t set out to hurt apple farmers,” said Frances Beinecke, deputy director of the NRDC. “The original report was about pesticides in more than 20 kinds of food. What happened was, the media simplified it and focused on apples.”
Beinecke blamed the media, but the true culprit was Fenton Communications, the PR firm hired to spread NRDC’s concerns. Wisely realizing that the public did not have the attention span for 20 foods, the firm stripped the focus to just one. The message was terrifying in its simplicity: Americans loved apples. Americans trusted apples. If apples were poisonous — if the bedrock of healthy food lists actually wrecked the body with cancerous tumors — then nothing was safe. In a leaked memo, David Fenton outlined his publicity strategy:
“Our goal was to create so many repetitions of NRDC’s message that average American consumers (not just the policy elite in Washington) could not avoid hearing it, from many different media outlets within a short period of time. The idea was for the story to achieve a life of its own, and continue for weeks and months to affect policy and consumer habits.”
“A life of its own.” Fenton correctly realized that for a story like this, the media narrative could never be controlled. In a panic, the public has no time for nuance, no desire for detail. It wasn’t the media that produced the fear that burned the apple industry to the ground; it was the public itself. Fenton simply gave them an excuse.
The crowd hates the crowd.
Outwardly. It admits you or me
as an enormous lidless eye admits glittering
beams. Endless watching, washing us in.
The crowd’s object, its point,
is always vanishing into its own mass. It is a sea
with no concern for us, even as it scores.
Crowd psychology dates back to the 19th century, when bourgeois scholars began to fear uprisings from an ever-growing proletariat. Industrialization launched a population boom that coalesced around cities, and the upper classes began to feel tension from those in poverty who realized they might not be getting the best deal. After the bloody French Revolution, society’s intellectual elite became more convinced than ever of the need for crowd control.
Initially, leading scholars such as Gustave Le Bon naively assumed that joining a crowd eliminated a person’s ability to reason. Le Bon, who referred to crowd members as “barbarians” subject to animalistic behaviors, laid the foundation of crowd psychology research, and some three decades later, public relations guru Edward Bernays published similar thoughts in his book “Propaganda”:
“The conscious and intelligent manipulation of the organized habits and opinions of the masses is an important element in democratic society. We are governed, our minds are molded, our tastes formed, our ideas suggested, largely by men we have never heard of. This is a logical result of the way in which our democratic society is organized.”
The problem with this attitude, as the UK’s Stephen Reicher outlines in “The Psychology of Crowd Dynamics,” is that throughout much of history, scholars did not look for an objective understanding of reality, instead pursuing a set of practices that could keep the status quo intact. While effective for the vast propaganda schemes of the 20th century, the approach was overly simplistic and missed obvious inputs.
For instance, Le Bon’s research completely ignores external forces. To understand this oversight, consider peaceful protesters with no intention of violence: aggressive policing prompts crowd members to respond in kind, and an escalating cycle ends with the once-level-headed protesters rioting out of control. Now consider the opposite. Fifteen years ago, relying on Reicher’s recommendations, the Portuguese Public Security Police changed how they dealt with drunk football fans in an attempt to reduce frequent crowd violence. After encouraging officers to use a firm but kind approach, the PSP saw “an almost complete absence of disorder at England games during Euro 2004,” according to an Aeon report.
To many people, it is obvious that police behavior affects a crowd’s response, and Le Bon’s research has no way of acknowledging this. But broadening the lens to include external forces still falls short of capturing the entire picture. In fact, group psychology is a complex mix of role-playing, ingroup vs. outgroup dynamics, and signaling behavior that can have nothing at all to do with someone’s personality or background.
Repeat: repetition, repetition, repetition
Men like Le Bon and Bernays demonstrate that elite society has always distrusted the public. Perhaps the most famous modern truism, often attributed to Winston Churchill, is that democracy, or literally “government of the people,” is only the least-bad form of government explored so far. But ancient society feared crowds too: Plato’s Republic ranks democracy below aristocracy, timocracy and oligarchy, above only tyranny.
To the ruling classes, crowd behavior, at least on the surface, seems nebulous and unpredictable. Throughout history, demagogues have used ever-changing public whims to decry establishments and rocket to power. Yet even they fear the crowd. In his memoirs, one of the most successful propagandists of the 20th century argued that true democracy is impossible, and the democratic “principles” of the West cleverly disguised an authoritarian regime that could only be managed through public manipulation. “Man is and remains an animal,” he wrote. “Here a beast of prey, there a housepet, but always an animal … man only honors what he conquers or defends.”
As the man behind one of history’s most memorable crowd control campaigns, this propagandist operated through a single strategy, from which he never deviated: The repetition of a few key messages to the public, without end, until the public adopted those messages as truth. In his memoirs, he summarizes:
“It would not be impossible to prove with sufficient repetition and a psychological understanding of the people concerned that a square is in fact a circle. They are mere words, and words can be molded until they clothe ideas and disguise…. The most brilliant propagandist technique will yield no success unless one fundamental principle is borne in mind constantly — it must confine itself to a few points and repeat them over and over.”
This was Joseph Goebbels, the Reich Minister of Propaganda in Nazi Germany. There are many versions of his quote, but the bottom line is that Goebbels believed that the public, after hearing a message enough times, would believe anything. With that understanding, he stripped Germany of its humanity.
It is easy to view Goebbels as an anomaly, pointing to the unique economic concerns, crushing war debt, and lost national pride in Germany as reasons for his success. But that view of history is myopic. Almost 80 years later, the message has changed; the methods have not. Remember Fenton Communications’ publicity plan from the Alar scare? Here it is again:
“Our goal was to create so many repetitions of NRDC’s message that average American consumers (not just the policy elite in Washington) could not avoid hearing it, from many different media outlets within a short period of time.”
In the face of a single, repeated message, facts no longer matter.
Nazism and the Alar scare seem extreme, but subtle examples of these strategies exist everywhere. Since 1977, when it was first documented, scientists have observed what is called “the illusion of truth” in more than a dozen research studies. It is now widely understood that you can increase the likelihood a person will rate information as “true” simply by repeating it to them. This effect holds regardless of supporting facts and even if the information is repeated only once.
On the surface, this seems a bit mind-boggling, leading one researcher to liken the process to “buying a second newspaper to see if the first one was right.” Yet Goebbels’ propaganda schemes continue to play out in clinical laboratory settings, and “the illusion of truth” appears again and again, frustrating psychologists and behavioral economists alike.
If this is discouraging, consider that the effect is compounded when we hear repeated information from diverse sources, especially if those sources are in our social network. In 1999, local governments, frustrated with the lack of recycling, stumbled upon a method to increase curbside participation significantly: they told people that their neighbors recycled. Within a few weeks, with no one wanting to be left out of this perceived social norm, recycling rates climbed substantially, giving credence to Bernays’ 1930s remark that “propaganda is the executive arm of the invisible government.”
Goebbels, Bernays and Le Bon reveal that the mechanisms for crowd control are quite simple, but demagogues have understood the “how” for centuries. Less understood is the “why.” Why does repetition work so well? Why are we so willing to believe our neighbors?
Why are we so seemingly incapable of thinking for ourselves?
Nothing more than political manners
There are two popular theories that describe the way people consume information. The first is called decision heuristics, or the process of relying on cognitive shortcuts to avoid having to think too deeply about an issue. By adopting analyses handed to them as their own, people can save limited brainpower for tasks of greater importance, such as personal relationships or work. After all, what is the point in trying to identify the true source of climate change on your own, when entire governmental organizations are already tackling the issue? Better to nod and agree with what you hear in the media.
The same process holds true for opinions. If a scholar announces that after 30 years of research, he holds the view that gun control is harmful, why bother arguing? He has spent more time on the problem than you ever could. Religion employs a similar strain of logic, albeit on a much greater scale. For millennia, the questions of “where did we come from,” “where are we going,” and “what is morality” have driven countless philosophers and humanistic thinkers to suicide. These questions have no real answers, and it is far more comforting to accept a prefabricated system of rules and beliefs that has been around for thousands of years. At least you have the assurance it has worked before.
The problem with decision heuristics as a theory is that if people relied solely on the media to formulate their opinions, higher media consumption should lead to a more balanced view of the world. We would expect someone who watches the nightly news once a day to hold minimally original opinions about current events, and we would expect that as a person increases his media consumption, his ability to formulate nuanced conclusions improves correspondingly. In reality, the exact opposite occurs: greater media consumption almost always translates to greater partisan attitudes.
A second hypothesis argues the behavior path is actually inverted. People are not looking for someone to hand them an opinion to make sense of the facts they already know; instead, people are looking for someone to hand them facts to support an opinion they already have. Media consumption, then, is still a fact-finding mission; it’s just biased in favor of facts that do not contradict a pre-held belief.
If true, this hypothesis raises the question: If people’s opinions do not originate from the media, and equally do not arrive through a critical study of the facts at hand, where do they come from?
The crowd, according to Yale’s Dan Kahan. People desperately want to feel like they belong to their society, and this need for belonging is so strong, it blinds people to information they instinctively know to be false. In perhaps the best summary I’ve ever read, Dr. Kahan explains the two choices a person faces when considering whether to accept or reject an opinion:
Any mistake an individual makes about the science on, say, the reality or causes of climate change … will not affect the level of risk for her or for any other person or thing she cares about: Whatever she, as a single individual, does and can do will be too inconsequential to have an impact. But, insofar as competing positions on these issues have come to express membership in and loyalty to opposing social groups, a person’s formation of a belief out of keeping with the one that predominates in hers could mark her as untrustworthy or stupid, and thus compromise her relationships with others. These consequences could substantially diminish her welfare — materially and psychically.
Again, we find that facts are irrelevant. In an effort to display loyalty to a political party, a person will adopt a position as his own, regardless of any argument he hears to the contrary, merely as a self-defense mechanism. It’s similar to the notion that manners, while seemingly useless on the surface, are built out of the need to position oneself as part of the in-group. Political opinions are nothing more than ideological manners.
Yet as I’ve mentioned before, blaming the crowd’s behavior (or in this case, attitudes) entirely on in-group dynamics is too limited because it ignores external influences. As the final theory examined in this article shows, we are motivated to hold a belief just as much by the groups with whom we don’t associate as by the groups with whom we do.
I’m a Republican; therefore, I’m not a Democrat
Traditional identity models stipulate that when considering a choice of behavior, we choose not what we truly want to do, but what we believe someone like us would do. We choose from among hundreds of categories into which we slot ourselves, and behavior results from how we interpret the normal behavior defined by these identities. The crowd model of this theory is no different: As Reicher writes, “Crowd members do not simply ask ‘what is appropriate for us in this context?’ but ‘what is appropriate for us as members of this category in this context?’” If the crowd members identify as peaceful protesters, they’re unlikely to resort to violence unless strongly provoked.
Reicher then takes this idea one step further in his Elaborated Social Identity Model, which argues that when we self-categorize, we do so in reference to other groups. In other words, if I say I’m “American,” I’m also saying that I’m not “Chinese” or “European” or any other nationality. By calling myself a “Democrat,” I’m equally stating that I’m “not a Republican.” Therefore, when choosing a behavior, we consider not only what is appropriate for someone in our category but also what is inappropriate for someone in another category. A Democrat thus believes in universal health care both because his party tells him to believe it and because the opposing party specifically tells him not to believe it.
This is why you’ll never convince your aunt that climate change is caused by human beings. It’s crucial to her social identity to tell people that she agrees with the conservative elite who tell her that climate change is nothing to worry about. And the fact that you, as a liberal, are telling her she should believe otherwise clearly displays outgroup behavior that defines for her the path she should not take. It would be better to fake ambivalence about your position; any argument you make is a motivation for her to run in the opposite direction. Facts are completely irrelevant, because this is a battle of identity.
In his book The Art of Seduction, author Robert Greene describes two methods of selling an idea to the public. The hard sell utilizes statistics, success stories, expert opinions and even a little fear in an attempt to win through rationality. This attempt often fails because it is too grounded in reality. Specific facts can be disputed, and experts can be discredited, planting doubt in the minds of potential supporters.
A far more successful approach, argues Greene, is the soft sell, which ignores reality and relies on entertainment to lower people’s innate defensiveness to manipulation. To sell their quackery, 17th-century European charlatans staged circus shows before dramatically revealing fake medicines and elixirs that promised eternal youth and beauty. Today, advertisers pay for billboards of beautiful women holding beer bottles and cigarettes. It’s all for the same reason: the hard sell is based on the merits of a product; the soft sell is based on the merits of a lifestyle.
“Never seem to be selling something,” writes Greene. “That will look manipulative and suspicious. Instead, let entertainment value and good feelings take center stage, sneaking the sale through the side door. And in that sale, you do not seem to be selling yourself or a particular idea or candidate; you are selling a lifestyle, a good mood, a sense of adventure, a feeling of hipness, or a neatly packaged rebellion.”
So too, I would add, you are selling an identity.