The Righteous Mind: Why Good People Are Divided by Politics and Religion by Jonathan Haidt

It was only recently that I developed an interest in politics, during my 2016 campaign for election to the Tauranga City Council. I was not elected, but that has not extinguished my newfound fascination with politics: local, national, and international.

I came across a TED Talk in which Chris Anderson interviewed Jonathan Haidt on why Trump was elected to office. I found what he had to say very interesting, so I sought out this book to find out more.

I loved the book. It does a great job of explaining why people vote the way they do.

One section that really stood out to me was in Chapter Four, which talks about how being accountable to an audience increases “evenhanded consideration of alternative points of view”. I saw myself in this: having my audience in mind when I read business books helps me concentrate, and when I read through a large volume of council documents during my election campaign, I kept an open mind because I intended to share my summaries of them with the public.

Another thing I’ll say about Haidt is that he writes amazing chapter summaries.

Forgive me if some of the sections I’ve highlighted seem a bit choppy in places. The justification for some of his statements stretches on for several paragraphs or pages, and I wanted to be brief (even then, the following notes are 6,000 words!).

As always, I encourage you to read this book yourself in full, but in the meantime, here are my notes/highlights on the book “The Righteous Mind: Why Good People Are Divided by Politics and Religion” by Jonathan Haidt [Amazon].

Introduction

When I was a teenager I wished for world peace, but now I yearn for a world in which competing ideologies are kept in balance, systems of accountability keep us all from getting away with too much, and fewer people believe that righteous ends justify violent means.  Not a very romantic wish, but one that we might actually achieve.

The central metaphor of these first 4 chapters is that the mind is divided, like a rider on an elephant, and the rider’s job is to serve the elephant. The rider is our conscious reasoning – the stream of words and images of which we are fully aware. The elephant is the other 99 percent of mental processes – the ones that occur outside of awareness but that actually govern most of our behaviour. The rider and elephant work together, sometimes poorly, as we stumble through life in search of meaning and connection. In this book I’ll use the metaphor to solve puzzles such as why it seems like everyone (else) is a hypocrite and why political partisans are so willing to believe outrageous lies and conspiracy theories. I’ll also use the metaphor to show you how you can better persuade people who seem unresponsive to reason.

ONE. Where Does Morality Come From?

For you, as for most people on the planet, morality is broad.

Some actions are wrong even though they don’t hurt anyone.

Understanding the simple fact that morality differs around the world, and even within societies, is the first step toward understanding your righteous mind.

Piaget argued that children’s understanding of morality is like their understanding of those water glasses: we can’t say that it is innate, and we can’t say that kids learn it directly from adults.

It is, rather, self-constructed as kids play with other kids.

Taking turns in a game is like pouring water back and forth between glasses.

No matter how often you do it with three-year-olds, they’re just not ready to get the concept of fairness, any more than they can understand the conservation of volume.

But once they’ve reached the age of five or six, then playing games, having arguments, and working things out together will help them learn about fairness far more effectively than any sermon from adults.

Piaget and Kohlberg both thought that parents and other authorities were obstacles to moral development.

If you want your kids to learn about the physical world, let them play with cups and water; don’t lecture them about the conservation of volume.

And if you want your kids to learn about the social world, let them play with other kids and resolve disputes; don’t lecture them about the Ten Commandments.

And, for heaven’s sake, don’t force them to obey God or their teachers or you. That will only freeze them at the conventional level.

Morality is about treating individuals well.

It’s about harm and fairness (not loyalty, respect, duty, piety, patriotism, or tradition).

Hierarchy and authority are generally bad things (so it’s best to let kids figure things out for themselves).

Schools and families should therefore embody progressive principles of equality and autonomy (not authoritarian principles that enable elders to train and constrain children).

Even in the United States the social order is a moral order, but it’s an individualistic order built up around the protection of individuals and their freedom.

When you put individuals first, before society, then any rule or social practice that limits personal freedom can be questioned.

If it doesn’t protect somebody from harm, then it can’t be morally justified. It’s just a social convention.

Unexpectedly, the effect of social class was much larger than the effect of city.

In other words, well-educated people in all three cities were more similar to each other than they were to their lower-class neighbors.

I had flown five thousand miles south to search for moral variation when in fact there was more to be found a few blocks west of campus, in the poor neighborhood surrounding my university.

These subjects were reasoning. They were working quite hard at reasoning. But it was not reasoning in search of truth; it was reasoning in support of their emotional reactions.

Where does morality come from?

The two most common answers have long been that it is innate (the nativist answer) or that it comes from childhood learning (the empiricist answer).

In this chapter I considered a third possibility, the rationalist answer, which dominated moral psychology when I entered the field: that morality is self-constructed by children on the basis of their experiences with harm.

Kids know that harm is wrong because they hate to be harmed, and they gradually come to see that it is therefore wrong to harm others, which leads them to understand fairness and eventually justice.

I explained why I came to reject this answer after conducting research in Brazil and the United States.

I concluded instead that:

  • The moral domain varies by culture. It is unusually narrow in Western, educated, and individualistic cultures. Sociocentric cultures broaden the moral domain to encompass and regulate more aspects of life.
  • People sometimes have gut feelings—particularly about disgust and disrespect—that can drive their reasoning. Moral reasoning is sometimes a post hoc fabrication.
  • Morality can’t be entirely self-constructed by children based on their growing understanding of harm. Cultural learning or guidance must play a larger role than rationalist theories had given it.

If morality doesn’t come primarily from reasoning, then that leaves some combination of innateness and social learning as the most likely candidates.

In the rest of this book I’ll try to explain how morality can be innate (as a set of evolved intuitions) and learned (as children learn to apply those intuitions within a particular culture).

We’re born to be righteous, but we have to learn what, exactly, people like us should be righteous about.

TWO. The Intuitive Dog and Its Rational Tail

Yet the result of the separation was not the liberation of reason from the thrall of the passions. It was the shocking revelation that reasoning requires the passions.

The head can’t even do head stuff without the heart.

Margolis proposed that there are two very different kinds of cognitive processes at work when we make judgments and solve problems: “seeing-that” and “reasoning-why.”

“Reasoning-why,” in contrast, is the process “by which we describe how we think we reached a judgment, or how we think another person could reach that judgment.”

“Because I don’t want to” is a perfectly acceptable justification for one’s subjective preferences. Yet moral judgments are not subjective statements; they are claims that somebody did something wrong. I can’t call for the community to punish you simply because I don’t like what you’re doing. I have to point to something outside of my own preferences, and that pointing is our moral reasoning. We do moral reasoning not to reconstruct the actual reasons why we ourselves came to a judgment; we reason to find the best possible reasons why somebody else ought to join us in our judgment.

The Rider and the Elephant

Two different kinds of cognition: intuition and reasoning.

I chose an elephant rather than a horse because elephants are so much bigger—and smarter—than horses. Automatic processes run the human mind, just as they have been running animal minds for 500 million years, so they’re very good at what they do, like software that has been improved through thousands of product cycles. When human beings evolved the capacity for language and reasoning at some point in the last million years, the brain did not rewire itself to hand over the reins to a new and inexperienced charioteer. Rather, the rider (language-based reasoning) evolved because it did something useful for the elephant.

The rider can do several useful things. It can see further into the future (because we can examine alternative scenarios in our heads) and therefore it can help the elephant make better decisions in the present. It can learn new skills and master new technologies, which can be deployed to help the elephant reach its goals and sidestep disasters. And, most important, the rider acts as the spokesman for the elephant, even though it doesn’t necessarily know what the elephant is really thinking. The rider is skilled at fabricating post hoc explanations for whatever the elephant has just done, and it is good at finding reasons to justify whatever the elephant wants to do next. Once human beings developed language and began to use it to gossip about each other, it became extremely valuable for elephants to carry around on their backs a full-time public relations firm.

HOW TO WIN AN ARGUMENT

Hume diagnosed the problem long ago: “And as reasoning is not the source, whence either disputant derives his tenets; it is in vain to expect, that any logic, which speaks not to the affections, will ever engage him to embrace sounder principles.”

If you want to change people’s minds, you’ve got to talk to their elephants.

You’ve got to use links 3 and 4 of the social intuitionist model to elicit new intuitions, not new rationales.

Carnegie repeatedly urged readers to avoid direct confrontations.

Instead he advised people to “begin in a friendly way,” to “smile,” to “be a good listener,” and to “never say ‘you’re wrong.’”

He used a quotation from Henry Ford to express it: “If there is any one secret of success it lies in the ability to get the other person’s point of view and see things from their angle as well as your own.”

If you really want to change someone’s mind on a moral or political matter, you’ll need to see things from that person’s angle as well as your own.

And if you do truly see it the other person’s way—deeply and intuitively—you might even find your own mind opening in response.

Empathy is an antidote to righteousness, although it’s very difficult to empathize across a moral divide.

If you ask people to believe something that violates their intuitions, they will devote their efforts to finding an escape hatch—a reason to doubt your argument or conclusion.

They will almost always succeed.

THREE. Elephants Rule

Here are six major research findings that collectively illustrate the first half of the first principle: Intuitions Come First.

1. BRAINS EVALUATE INSTANTLY AND CONSTANTLY

Animal brains make such appraisals thousands of times a day with no need for conscious reasoning, all in order to optimize the brain’s answer to the fundamental question of animal life: approach or avoid?

Wundt said that affective reactions are so tightly integrated with perception that we find ourselves liking or disliking something the instant we notice it, sometimes even before we know what it is.

The brain tags familiar things as good things. Zajonc called this the “mere exposure effect,” and it is a basic principle of advertising.

In a landmark article, Zajonc urged psychologists to use a dual-process model in which affect or “feeling” is the first process.

It has primacy both because it happens first (it is part of perception and is therefore extremely fast) and because it is more powerful (it is closely linked to motivation, and therefore it strongly influences behavior). The second process—thinking—is an evolutionarily newer ability, rooted in language and not closely related to motivation.

In other words, thinking is the rider; affect is the elephant. The thinking system is not equipped to lead—it simply doesn’t have the power to make things happen—but it can be a useful advisor. Zajonc said that thinking could work independently of feeling in theory, but in practice affective reactions are so fast and compelling that they act like blinders on a horse: they “reduce the universe of alternatives” available to later thinking. The rider is an attentive servant, always trying to anticipate the elephant’s next move. If the elephant leans even slightly to the left, as though preparing to take a step, the rider looks to the left and starts preparing to assist the elephant on its imminent leftward journey. The rider loses interest in everything off to the right.

2. SOCIAL AND POLITICAL JUDGMENTS ARE PARTICULARLY INTUITIVE

The bottom line is that human minds, like animal minds, are constantly reacting intuitively to everything they perceive, and basing their responses on those reactions. Within the first second of seeing, hearing, or meeting another person, the elephant has already begun to lean toward or away, and that lean influences what you think and do next. Intuitions come first.

3. OUR BODIES GUIDE OUR JUDGMENTS

Immorality makes us feel physically dirty, and cleansing ourselves can sometimes make us more concerned about guarding our moral purity.

4. PSYCHOPATHS REASON BUT DON’T FEEL

They feel no compassion, guilt, shame, or even embarrassment, which makes it easy for them to lie, and to hurt family, friends, and animals.

Psychopaths seem to live in a world of objects, some of which happen to walk around on two legs.

Psychopaths learn to say whatever gets them what they want.

It’s a genetically heritable condition that creates brains that are unmoved by the needs, suffering, or dignity of others.

5. BABIES FEEL BUT DON’T REASON

Psychologists discovered that infants are born with some knowledge of physics and mechanics: they expect that objects will move according to Newton’s laws of motion, and they get startled when psychologists show them scenes that should be physically impossible (such as a toy car seeming to pass through a solid object). Psychologists know this because infants stare longer at impossible scenes than at similar but less magical scenes (seeing the toy car pass just behind the solid object).

It makes sense that infants can easily learn who is nice to them. Puppies can do that too. But these findings suggest that by six months of age, infants are watching how people behave toward other people, and they are developing a preference for those who are nice rather than those who are mean.

6. AFFECTIVE REACTIONS ARE IN THE RIGHT PLACE AT THE RIGHT TIME IN THE BRAIN

FOUR. Vote for Me (Here’s Why)

I’ll show that Glaucon was right: “people care a great deal more about appearance and reputation than about reality.”

The most important principle for designing an ethical society is to make sure that everyone’s reputation is on the line all the time, so that bad behavior will always bring bad consequences.

Human beings are the world champions of cooperation beyond kinship, and we do it in large part by creating systems of formal and informal accountability.

In Tetlock’s research, subjects are asked to solve problems and make decisions. For example, they’re given information about a legal case and then asked to infer guilt or innocence. Some subjects are told that they’ll have to explain their decisions to someone else. Other subjects know that they won’t be held accountable by anyone. Tetlock found that when left to their own devices, people show the usual catalogue of errors, laziness, and reliance on gut feelings that has been documented in so much decision-making research. But when people know in advance that they’ll have to explain themselves, they think more systematically and self-critically. They are less likely to jump to premature conclusions and more likely to revise their beliefs in response to evidence.

Exploratory thought is an “evenhanded consideration of alternative points of view.” Confirmatory thought is “a one-sided attempt to rationalize a particular point of view.”

Accountability increases exploratory thought only when three conditions apply:

  1. decision makers learn before forming any opinion that they will be accountable to an audience,
  2. the audience’s views are unknown, and
  3. they believe the audience is well informed and interested in accuracy.

When all three conditions apply, people do their darnedest to figure out the truth, because that’s what the audience wants to hear. But the rest of the time—which is almost all of the time—accountability pressures simply increase confirmatory thought. People are trying harder to look right than to be right.

Tetlock concludes that conscious reasoning is carried out largely for the purpose of persuasion, rather than discovery.

Our moral thinking is much more like a politician searching for votes than a scientist searching for truth.

1. WE ARE OBSESSED WITH POLLS

Because appearing concerned about other people’s opinions makes us look weak, we (like politicians) often deny that we care about public opinion polls. But the fact is that we care a lot about what others think of us.

2. OUR IN-HOUSE PRESS SECRETARY AUTOMATICALLY JUSTIFIES EVERYTHING

Wason called this phenomenon “confirmation bias”, the tendency to seek out and interpret new evidence in ways that confirm what you already think. People are quite good at challenging statements made by other people, but if it’s your belief, then it’s your possession—your child, almost—and you want to protect it, not challenge it and risk losing it.

3. WE LIE, CHEAT, AND JUSTIFY SO WELL THAT WE HONESTLY BELIEVE WE ARE HONEST

Many psychologists have studied the effects of having “plausible deniability.”

Ariely summarizes his findings from many variations of the paradigm like this: “When given the opportunity, many honest people will cheat. In fact, rather than finding that a few bad apples weighted the averages, we discovered that the majority of people cheated, and that they cheated just a little bit.”

People didn’t try to get away with as much as they could. Rather, when Ariely gave them anything like the invisibility of the ring of Gyges, they cheated only up to the point where they themselves could no longer find a justification that would preserve their belief in their own honesty.

The bottom line is that in lab experiments that give people invisibility combined with plausible deniability, most people cheat.

4. REASONING (AND GOOGLE) CAN TAKE YOU WHEREVER YOU WANT TO GO

The social psychologist Tom Gilovich studies the cognitive mechanisms of strange beliefs. His simple formulation is that when we want to believe something, we ask ourselves, “Can I believe it?” Then (as Kuhn and Perkins found), we search for supporting evidence, and if we find even a single piece of pseudo-evidence, we can stop thinking. We now have permission to believe. We have a justification, in case anyone asks.

In contrast, when we don’t want to believe something, we ask ourselves, “Must I believe it?” Then we search for contrary evidence, and if we find a single reason to doubt the claim, we can dismiss it. You only need one key to unlock the handcuffs of must.

Now that we all have access to search engines on our cell phones, we can call up a team of supportive scientists for almost any conclusion twenty-four hours a day.

5. WE CAN BELIEVE ALMOST ANYTHING THAT SUPPORTS OUR TEAM

Many political scientists used to assume that people vote selfishly, choosing the candidate or policy that will benefit them the most. But decades of research on public opinion have led to the conclusion that self-interest is a weak predictor of policy preferences.

Rather, people care about their groups, whether those be racial, regional, religious, or political. The political scientist Don Kinder summarizes the findings like this: “In matters of public opinion, citizens seem to be asking themselves not ‘What’s in it for me?’ but rather ‘What’s in it for my group?’”

Extreme partisans are so stubborn, closed-minded, and committed to beliefs that often seem bizarre or paranoid. Like rats that cannot stop pressing a button, partisans may be simply unable to stop believing weird things. The partisan brain has been reinforced so many times for performing mental contortions that free it from unwanted beliefs. Extreme partisanship may be literally addictive.

THE RATIONALIST DELUSION

Most of the bizarre and depressing research findings make perfect sense once you see reasoning as having evolved not to help us find truth but to help us engage in arguments, persuasion, and manipulation in the context of discussions with other people.

Each individual reasoner is really good at one thing: finding evidence to support the position he or she already holds, usually for intuitive reasons. We should not expect individuals to produce good, open-minded, truth-seeking reasoning, particularly when self-interest or reputational concerns are in play. But if you put individuals together in the right way, such that some individuals can use their reasoning powers to disconfirm the claims of others, and all individuals feel some common bond or shared fate that allows them to interact civilly, you can create a group that ends up producing good reasoning as an emergent property of the social system.

Nobody is ever going to invent an ethics class that makes people behave ethically after they step out of the classroom. Classes are for riders, and riders are just going to use their new knowledge to serve their elephants more effectively.

FIVE. Beyond WEIRD Morality

Shweder found three major clusters of moral themes, which he called the ethics of autonomy, community, and divinity. Each one is based on a different idea about what a person really is.

  1. The ethic of autonomy is based on the idea that people are, first and foremost, autonomous individuals with wants, needs, and preferences.
  2. The ethic of community is based on the idea that people are, first and foremost, members of larger entities such as families, teams, armies, companies, tribes, and nations.
    • In such societies, the Western insistence that people should design their own lives and pursue their own goals seems selfish and dangerous—a sure way to weaken the social fabric and destroy the institutions and collective entities upon which everyone depends.
  3. The ethic of divinity is based on the idea that people are, first and foremost, temporary vessels within which a divine soul has been implanted.
    • In this world, equality and personal autonomy were not sacred values. Honoring elders, gods, and guests, protecting subordinates, and fulfilling one’s role-based duties were more important.

I had escaped from my prior partisan mind-set (reject first, ask rhetorical questions later) and began to think about liberal and conservative policies as manifestations of deeply conflicting but equally heartfelt visions of the good society.

SIX. Taste Buds of the Righteous Mind

According to one of the leading autism researchers, Simon Baron-Cohen, there are in fact two spectra, two dimensions on which we can place each person: empathizing and systemizing.

Baron-Cohen has shown that autism is what you get when genes and prenatal factors combine to produce a brain that is exceptionally low on empathizing and exceptionally high on systemizing.

SEVEN. The Moral Foundations of Politics

Marcus’s analogy leads to the best definition of innateness I have ever seen: “Nature provides a first draft, which experience then revises. … ‘Built-in’ does not mean unmalleable; it means ‘organized in advance of experience.’”

1. THE CARE/HARM FOUNDATION

It is just not conceivable that the chapter on mothering in the book of human nature is entirely blank, leaving it for mothers to learn everything by cultural instruction or trial and error.

2. THE FAIRNESS/CHEATING FOUNDATION

Altruism toward kin is not a puzzle at all. Altruism toward non-kin, on the other hand, has presented one of the longest-running puzzles in the history of evolutionary thinking.

We feel pleasure, liking, and friendship when people show signs that they can be trusted to reciprocate. We feel anger, contempt, and even sometimes disgust when people try to cheat us or take advantage of us.

Everyone cares about fairness, but there are two major kinds. On the left, fairness often implies equality, but on the right it means proportionality—people should be rewarded in proportion to what they contribute, even if that guarantees unequal outcomes.

3. THE LOYALTY/BETRAYAL FOUNDATION

4. THE AUTHORITY/SUBVERSION FOUNDATION

The failure to detect signs of dominance and then to respond accordingly often results in a beating.

But authority should not be confused with power.

When I began graduate school I subscribed to the common liberal belief that hierarchy = power = exploitation = evil. But when I began to work with Alan Fiske, I discovered that I was wrong.

5. THE SANCTITY/DEGRADATION FOUNDATION

Omnivores therefore go through life with two competing motives: neophilia (an attraction to new things) and neophobia (a fear of new things). People vary in terms of which motive is stronger, and this variation will come back to help us in later chapters: Liberals score higher on measures of neophilia (also known as “openness to experience”), not just for new foods but also for new people, music, and ideas. Conservatives are higher on neophobia; they prefer to stick with what’s tried and true, and they care a lot more about guarding borders, boundaries, and traditions.

Plagues, epidemics, and new diseases are usually brought in by foreigners—as are many new ideas, goods, and technologies—so societies face an analogue of the omnivore’s dilemma, balancing xenophobia and xenophilia.

Whatever its origins, the psychology of sacredness helps bind individuals into moral communities. When someone in a moral community desecrates one of the sacred pillars supporting the community, the reaction is sure to be swift, emotional, collective, and punitive.

In Sum

  1. The Care/harm foundation evolved in response to the adaptive challenge of caring for vulnerable children. It makes us sensitive to signs of suffering and need; it makes us despise cruelty and want to care for those who are suffering.
  2. The Fairness/cheating foundation evolved in response to the adaptive challenge of reaping the rewards of cooperation without getting exploited. It makes us sensitive to indications that another person is likely to be a good (or bad) partner for collaboration and reciprocal altruism. It makes us want to shun or punish cheaters.
  3. The Loyalty/betrayal foundation evolved in response to the adaptive challenge of forming and maintaining coalitions. It makes us sensitive to signs that another person is (or is not) a team player. It makes us trust and reward such people, and it makes us want to hurt, ostracize, or even kill those who betray us or our group.
  4. The Authority/subversion foundation evolved in response to the adaptive challenge of forging relationships that will benefit us within social hierarchies. It makes us sensitive to signs of rank or status, and to signs that other people are (or are not) behaving properly, given their position.
  5. The Sanctity/degradation foundation evolved initially in response to the adaptive challenge of the omnivore’s dilemma, and then to the broader challenge of living in a world of pathogens and parasites. It includes the behavioral immune system, which can make us wary of a diverse array of symbolic objects and threats. It makes it possible for people to invest objects with irrational and extreme values—both positive and negative—which are important for binding groups together.

EIGHT. The Conservative Advantage

Republicans understand moral psychology. Democrats don’t. Republicans have long understood that the elephant is in charge of political behavior, not the rider, and they know how elephants work.

Republicans don’t just aim to cause fear, as some Democrats charge. They trigger the full range of intuitions described by Moral Foundations Theory.

Republicans since Nixon have had a near-monopoly on appeals to loyalty (particularly patriotism and military virtues) and authority (including respect for parents, teachers, elders, and the police, as well as for traditions).

The Democrats offered just sugar (Care) and salt (Fairness as equality), whereas Republican morality appealed to all five taste receptors.

We’ve made many improvements since Jesse’s first simple survey, but we always find the same basic pattern that he found in 2006. The lines for Care and Fairness slant downward; the lines for Loyalty, Authority, and Sanctity slant upward. Liberals value Care and Fairness far more than the other three foundations; conservatives endorse all five foundations more or less equally.

THE LIBERTY/OPPRESSION FOUNDATION

Anything that suggests the aggressive, controlling behavior of an alpha male (or female) can trigger this form of righteous anger, which is sometimes called reactance. (That’s the feeling you get when an authority tells you you can’t do something and you feel yourself wanting to do it even more strongly.)

Punishing bad behavior promotes virtue and benefits the group. And just as Glaucon argued in his ring of Gyges example, when the threat of punishment is removed, people behave selfishly.

Why did most players pay to punish? In part, because it felt good to do so. We hate to see people take without giving. We want to see cheaters and slackers “get what’s coming to them.” We want the law of karma to run its course, and we’re willing to help enforce it.

When people work together on a task, they generally want to see the hardest workers get the largest gains.

When a few members of a group contributed far more than the others—or, even more powerfully, when a few contributed nothing—most adults do not want to see the benefits distributed equally.

THREE VERSUS SIX

People don’t crave equality for its own sake; they fight for equality when they perceive that they are being bullied or dominated.

Liberals have a three-foundation morality, whereas conservatives use all six.

NINE. Why Are We So Groupish?

Yes, people are often selfish.

But it’s also true that people are groupish. We love to join teams, clubs, leagues, and fraternities.

I will suggest that human nature is mostly selfish, but with a groupish overlay that resulted from the fact that natural selection works at multiple levels simultaneously.

Real armies, like most effective groups, have many ways of suppressing selfishness.

Whenever a way is found to suppress free riding so that individual units can cooperate, work as a team, and divide labor, selection at the lower level becomes less important, selection at the higher level becomes more powerful, and that higher-level selection favors the most cohesive superorganisms.

Bees construct hives out of wax and wood fibers, which they then fight, kill, and die to defend. Humans construct moral communities out of shared norms, institutions, and gods that, even in the twenty-first century, they fight, kill, and die to defend.

TEN. The Hive Switch

People are happy to follow when they see that their group needs to get something done, and when the person who emerges as the leader doesn’t activate their hypersensitive oppression detectors.

Exploit synchrony.

  • People who move together are saying, “We are one, we are a team; just look how perfectly we are able to do that Tomasello shared-intention thing.” Japanese corporations such as Toyota begin their days with synchronous companywide exercises.
  • Groups prepare for battle—in war and sports—with group chants and ritualized movements. (If you want to see an impressive one in rugby, Google “All Blacks Haka.”)
  • If you ask people to sing a song together, or to march in step, or just to tap out some beats together on a table, it makes them trust each other more and be more willing to help each other out, in part because it makes people feel more similar to each other.

Create healthy competition among teams, not individuals.

  • Pitting individuals against each other in a competition for scarce resources (such as bonuses) will destroy hivishness, trust, and morale.

Transactional leadership appeals to followers’ self-interest, but transformational leadership changes the way followers see themselves—from isolated individuals to members of a larger group. Transformational leaders do this by modeling collective commitment (e.g., through self-sacrifice and the use of “we” rather than “I”), emphasizing the similarity of group members, and reinforcing collective goals, shared values, and common interests.

ELEVEN. Religion Is a Team Sport

If you think about religion as a set of beliefs about supernatural agents, you’re bound to misunderstand it. You’ll see those beliefs as foolish delusions, perhaps even as parasites that exploit our brains for their own benefit. But if you take a Durkheimian approach to religion (focusing on belonging) and a Darwinian approach to morality (involving multilevel selection), you get a very different picture. You see that religious practices have been binding our ancestors into groups for tens of thousands of years. That binding usually involves some blinding—once any person, book, or principle is declared sacred, then devotees can no longer question it or think clearly about it.

Our ability to believe in supernatural agents may well have begun as an accidental by-product of a hypersensitive agency detection device, but once early humans began believing in such agents, the groups that used them to construct moral communities were the ones that lasted and prospered.

We humans have an extraordinary ability to care about things beyond ourselves, to circle around those things with other people, and in the process to bind ourselves into teams that can pursue larger projects. That’s what religion is all about. And with a few adjustments, it’s what politics is about too.

TWELVE. Can’t We All Disagree More Constructively?

Here’s a simple definition of ideology: “A set of beliefs about the proper order of society and how it can be achieved.” And here’s the most basic of all ideological questions: Preserve the present order, or change it?

Whether you end up on the right or the left of the political spectrum turns out to be just as heritable as most other traits: genetics explains between a third and a half of the variability among people on their political attitudes. Being raised in a liberal or conservative household accounts for much less.

Innate does not mean unmalleable; it means organized in advance of experience.

Step 1: Genes Make Brains

Step 2: Traits Guide Children Along Different Paths

Step 3: People Construct Life Narratives

Being small, isolated, or morally homogeneous are examples of environmental conditions that increase the moral capital of a community. That doesn’t mean that small islands and small towns are better places to live overall—the diversity and crowding of big cities makes them more creative and interesting places for many people—but that’s the trade-off. (Whether you’d trade away some moral capital to gain some diversity and creativity will depend in part on your brain’s settings on traits such as openness to experience and threat sensitivity, and this is part of the reason why cities are usually so much more liberal than the countryside.)

John Stuart Mill said that liberals and conservatives are like this: “A party of order or stability, and a party of progress or reform, are both necessary elements of a healthy state of political life.”

“From 600 BC to the present day, philosophers have been divided into those who wished to tighten social bonds and those who wished to relax them.”

Point #1: Governments Can And Should Restrain Corporate Superorganisms

Corporations are superorganisms. They’re not like superorganisms; they are actual superorganisms. So, if the past is any guide, corporations will grow ever more powerful as they evolve, and as they change the legal and political systems of their host countries to become ever more hospitable. The only force left on Earth that can stand up to the largest corporations is national governments, some of which still maintain the power to tax, regulate, and divide corporations into smaller pieces when they get too powerful.

Economists speak of “externalities”—the costs (or benefits) incurred by third parties who did not agree to the transaction causing the cost (or benefit). For example, if a farmer begins using a new kind of fertilizer that increases his yield but causes more damaging runoff into nearby rivers, he keeps the profit but the costs of his decision are borne by others.

When corporations operate in full view of the public, with a free press that is willing and able to report on the externalities being foisted on the public, they are likely to behave well, as most corporations do. But many corporations operate with a high degree of secrecy and public invisibility.

Point #2: Some Problems Really Can Be Solved by Regulation

The chemical industry had been able to block all efforts to ban lead additives from gasoline for decades.

This one regulation saved vast quantities of lives, IQ points, money, and moral capital all at the same time.

Rather than building more prisons, the cheapest (and most humane) way to fight crime may be to give more money and authority to the Environmental Protection Agency.

When conservatives object that liberal efforts to intervene in markets or engage in “social engineering” always have unintended consequences, they should note that sometimes those consequences are positive. When conservatives say that markets offer better solutions than do regulations, let them step forward and explain their plan to eliminate the dangerous and unfair externalities generated by many markets.

YANG #1: LIBERTARIAN WISDOM

People with libertarian ideals have generally supported the Republican Party since the 1930s because libertarians and Republicans have a common enemy: the liberal welfare society that they believe is destroying America’s liberty (for libertarians) and moral fiber (for social conservatives).

Counterpoint #1: Markets Are Miraculous

Only a working market can bring supply, demand, and ingenuity together to provide health care at the lowest possible price. For example, there is an open market for LASIK surgery.

Competition and innovation have driven down the price of the surgery by nearly 80 percent since it was first introduced.

YANG #2: SOCIAL CONSERVATIVE WISDOM

Counterpoint #2: You Can’t Help the Bees by Destroying the Hive

Putnam examined the level of social capital in hundreds of American communities and discovered that high levels of immigration and ethnic diversity seem to cause a reduction in social capital.

The urge to help Hispanic immigrants in the 1980s led to multicultural education programs that emphasized the differences among Americans rather than their shared values and identity. Emphasizing differences makes many people more racist, not less.

Morality binds and blinds. This is not just something that happens to people on the other side. We all get sucked into tribal moral communities. We circle around sacred values and then share post hoc arguments about why we are so right and they are so wrong. We think the other side is blind to truth, reason, science, and common sense, but in fact everyone goes blind when talking about their sacred objects.

Your Thoughts?

Have your say in the comments below.
