Mind the Elephant: How Automatic Judgements Impact Org Change

[Image: elephant with rider. Caption: "Who's really in charge here?"]

When you and I hear something we don’t want to believe, here’s what happens in our brains, according to my thesis research on persuasion[1]:

  1. The emotional centers of our brain are triggered, and we get an unpleasant physical response such as tightness in the chest.
  2. Our brain starts searching for reasons to dismiss the offending statement.
  3. Only if it can’t—which rarely happens—will it rationally process the opposing facts and logic.

Dr. Jonathan Haidt puts this more succinctly: “Intuitions come first, strategic reasoning second.” Haidt is a University of Virginia psychology professor, and a New York University visiting professor of business ethics, who studies the psychology of moral judgements. I am quoting from his book The Righteous Mind: Why Good People Are Divided by Politics and Religion[2]. Subtitle aside, he also discusses how that lesson applies to business, which in turn explains much about resistance to organizational culture change. (I thank Susan Craft of Pfizer for sagely recommending this book to me.)

Haidt covers a mass of evidence that we first react emotionally to moral questions, then come up with rationalizations for those reactions. He adopts from another researcher the metaphor of an elephant and a rider. Our instinctive judgements of right and wrong (the elephants) send us down a path of support or offense before our conscious minds (the riders) realize it. Our elephants are bigger than us and have minds of their own. Usually our riders convince our conscious minds we want to go in the direction our elephants do, because that is easier than changing an elephant’s direction. Haidt writes, “Reasoning can take us to almost any conclusion we want to reach, because we ask ‘Can I believe it?’ when we want to believe something, but ‘Must I believe it?’ when we don’t want to believe.”

I came to think of it as a badly trained and stubborn elephant with an inexperienced rider. Although, as my friend Randy Russell of Red Hat pointed out, every year elephant trainers get killed by their charges!

In lab experiments Haidt describes, people were presented with stories in which no one was physically, mentally, or emotionally harmed. However, each story was designed to trigger a gut-level reaction of disgust. For example, in one a brother and sister on vacation decide to have sex. They both use birth control to be extra safe, and though they never make love again, they do not regret it later on, and in fact feel closer to each other.

As expected, 80% or more of test subjects said the actions in the stories were wrong. Challenged as to why, people kept coming up with claims of harm the stories had made clear did not occur. When an inaccuracy was pointed out, the person simply offered another incorrect reason the action was wrong. In short, people repeatedly tried to find rational reasons for their initial reactions.

The point is, actual harm is not the basis for our moral judgements, much as we want to think it is. Rather, some mix of genetic, family, and cultural factors provide an innate response to something as moral or not. Haidt quotes neuroscientist Gary Marcus in defining innateness this way: “‘Nature provides a first draft, which experience then revises… “Built-in” does not mean unmalleable; it means “organized in advance of experience.”’” We can change our moral positions, but it is difficult.

Building on other researchers’ work, Haidt has identified six bases for all humans’ moral judgements. These are analogous to taste buds, he says: Most cultures have some type of sweet beverage, but the same taste receptors for “sweetness” are at work in each case. Similarly, “Buddha, Christ, and Muhammed all talked about compassion, but in rather different ways,” he writes. “Nonetheless, when you see that some version of kindness, fairness, and loyalty is valued in most cultures, you start wondering if there might be some low-level pan-human social receptors… that make it particularly easy for people to notice some kinds of social events rather than others.”

His six “moral foundations” are shown here with the function each served in early human societies and its results today:

  • “Care/harm”—from “caring for vulnerable children,” making us aware of suffering, need, and cruelty.
  • “Fairness/cheating”—important to “reaping the rewards of cooperation without getting exploited,” which helps us determine whether individuals are likely to return our help or cheat instead.
  • “Loyalty/betrayal”—for “meeting the adaptive challenge of forming cohesive coalitions,” causes us to trust team players and ostracize those who aren’t.
  • “Authority/subversion”—for “forging relationships that will benefit us within social hierarchies,” which causes us to notice “signs of rank or status” and whether people are acting in accordance.
  • “Sanctity/degradation”—first helped us with identifying safe foods, and later “the broader challenge of living in a world of pathogens and parasites,” which has transferred to sensing “symbolic objects and threats” that help create cohesive groups.
  • “Liberty/oppression”—from “living in small groups with individuals who would, if given the chance, dominate, bully, and constrain others,” causing us to notice attempts to dominate.

Haidt provides compelling evidence that these foundations are the reasons American liberals (progressives) and conservatives (traditionalists) have difficulty talking to each other. Both notice social actions related to all six. However, in a series of studies, Haidt and his team found that liberals cared almost exclusively about the Care and Fairness foundations, whereas conservatives rated all six as important, though those two less so than liberals did. In some cases this is just a matter of emphasis. In others, each side literally cannot conceive of the other’s position, and resorts to calling the other side stupid or insane for believing things related to its preferred foundations.

More evidence comes from studies in which identical twins raised in the same or different families are compared to each other; to fraternal twins in each situation; and to non-twin siblings. Haidt re-confirms something I wrote about 30 years ago in a very different context: “Whether you end up on the right or the left of the political spectrum turns out to be just as heritable as most other traits; genetics explains between a third and a half of the variability… on political attitudes.”

The key to the foundations, again, is their contribution to our ability to live and work in relatively small groups. Recent evidence suggests it is quite possible that the cultural innovation of working in groups and the genetic drivers supporting that innovation co-evolved. I won’t dive into the ongoing debate over “group selection” in evolution, but an example is newer research into “epigenetic” triggers that can turn specific genes on or off based on environmental factors. Also, comparing the genomes of people around the world allowed scientists to show that human evolution sped up beginning 40,000 years ago and even more over the last 20,000—the same periods in which our societies arose and became more complex.

If, as seems likely, “there is a genetic basis for feelings of loyalty and sanctity… then intense intergroup competition will make these genes become more common in the next generation.” In other words, “groups in which these traits are common” supplant those in which they aren’t, improving the odds of reproductive success for all members and the further spread of those genes within the human population.

That brings us to the issue we face when a bunch of people are thrown together into a society like… a corporation. Societies create “moral systems,” Haidt argues, which “are interlocking sets of values, virtues, norms, practices… and evolved psychological mechanisms that work together to suppress or regulate self-interest and make cooperative societies possible.” A community’s “moral capital” is the degree to which those sets are consistent.

Therefore, Haidt proclaims, “if you are trying to change an organization or a society and you do not consider the effects of your changes on moral capital, you’re asking for trouble.” Both conservatives and liberals are guilty of doing that, but I’ll pick on liberals in this example from Haidt. Immigration and diversity tend to reduce moral consistency across a society, which explains why conservatives see them as threatening to that society’s survival. When liberals argue for immigration based only on Care and Fairness, they are not accounting for the potential impacts on moral capital or addressing the other moral foundations conservatives cannot help but consider important.

Extending the logic, the same is true during organizational change efforts, as people wanting the change confront those who don’t. Initial resistance arises more from the elephants than the riders, and people have little control over those emotional responses. The wise change leader will account for this and be prepared to argue for the change from every elephant’s perspective.


[1] Jim Morgan, ‘Persuasive Techniques for Journalism’ (University of Missouri-Columbia, 1995).

[2] Jonathan Haidt, The Righteous Mind: Why Good People Are Divided by Politics and Religion (New York, NY: Pantheon Books, 2012).