Dear Diary 

Moral Tribes: Emotion, Reason, and the Gap Between Us and Them

I promised a while ago to write a post about this excellent book by Joshua Greene, a researcher in the interdisciplinary field of moral cognition -- an intersection of philosophy, psychology, and neuroscience. However, the more I think about what it would take to convey its fascinating and complex argument, the more I realize that doing so would take another book. So instead, I will do three things: write a bird's-eye overview of Greene's argument, quote from Thomas Nagel's review of the book, and add my own comments, as harvested from the multiple bookmarks I made in the book.

Here's the short summary:
  1. What we regard as morality is evolved behavior that fosters survival of social groups via cooperation, by placing US over ME
  2. But while it fosters survival within groups, it also promotes survival at the expense of other groups, by placing US over THEM
  3. Thus, our intuitive, evolved morality is of little help when different cultures -- different 'moral tribes' -- find themselves in conflict over what's moral or not
  4. Different moral tribes can't even talk to each other meaningfully, because their 'moral languages' are so radically different
  5. However, while our moral sense is intuitive and emotive, we also have rational faculties available to us
  6. We, all of our different 'moral tribes', rationally recognize happiness as a worthwhile goal
  7. Thus, the calculus of happiness -- i.e. utilitarianism -- can be the rational 'lingua franca' for different 'moral tribes' to talk to each other. That doesn't make it true; it merely makes it a common basis for discussion
  8. Alas, when Greene tries to apply this approach to a concrete problem -- abortion -- he IMO fails.

Now, excerpts from Nagel's review (bolding mine, italics original -V):

Joshua Greene, who teaches psychology at Harvard, is a leading contributor to the recently salient field of empirical moral psychology. This very readable book presents his comprehensive view of the subject, and what we should make of it. The grounds for the empirical hypotheses that he offers about human morality are of three types: psychological experiments, observations of brain activity, and evolutionary theory. The third, in application to the psychological properties of human beings, is necessarily speculative, but the first and second are backed up by contemporary data, including many experiments that Greene and his associates have carried out themselves.
...
The book is framed as the search for a solution to a global problem that cannot be solved by the kinds of moral standards that command intuitive assent and work well within particular communities. Greene calls this problem the “tragedy of commonsense morality.” In a nutshell, it is the tragedy that moralities that help members of particular communities to cooperate peacefully do not foster a comparable harmony among members of different communities. 

Morality evolved to enable cooperation, but this conclusion comes with an important caveat. Biologically speaking, humans were designed for cooperation, but only with some people. Our moral brains evolved for cooperation within groups, and perhaps only within the context of personal relationships. Our moral brains did not evolve for cooperation between groups (at least not all groups).... As with the evolution of faster carnivores, competition is essential for the evolution of cooperation. 

The tragedy of commonsense morality is conceived by analogy with the familiar tragedy of the commons, to which commonsense morality does provide a solution... As Greene puts it, commonsense morality requires that we sometimes put Us ahead of Me; but the same disposition also leads us to put Us ahead of Them. We feel obligations to fellow members of our community but not to outsiders. So the solution to the tragedy of the commons has generated a new tragedy, which we can see wherever the values and the interests of different communities conflict...

To solve this problem Greene thinks we need what he calls a “metamorality,” based on a common currency of value that all human beings can acknowledge, even if it conflicts with some of the promptings of the intuitive moralities of common sense. Like others who have based their doubts about commonsense morality on diagnoses of its evolutionary pedigree, Greene thinks that this higher-level moral outlook is to be found in utilitarianism, which he proposes to re-name “deep pragmatism” (lots of luck). Utilitarianism, as propounded by Bentham and Mill, is the principle that we should aim to maximize happiness impartially, and it conflicts with the instinctive commonsense morality of individual rights, and special heightened obligations to those to whom one is related by blood or community.

Greene’s argument against the objective authority of commonsense morality hinges on Daniel Kahneman’s distinction between fast instinctive thought and slow deliberative thought... Greene says that they are like the two ways a contemporary camera can operate: by automatic settings or by manual mode... Our decision apparatus, according to Greene, is similar. When it comes to moral judgment—deciding whether an act would be right or wrong—we can be fast, automatic, and emotional, or slow, deliberate, and rational. 
...
Greene wants to persuade us that moral psychology is more fundamental than moral philosophy. Most moral philosophies, he maintains, are misguided attempts to interpret our moral intuitions in particular cases as apprehensions of the truth about how we ought to live and what we ought to do, with the aim of discovering the underlying principles that determine that truth. In fact, Greene believes, all our intuitions are just manifestations of the operation of our dual-process brains, functioning either instinctively or more reflectively.
...
Utilitarianism, he contends, is not refuted by footbridge-type intuitions that conflict with it, because those intuitions are best understood not as perceptions of intrinsic wrongness, but as gut reactions that have evolved to serve social peace by preventing interpersonal violence.
...
While we cannot get rid of our automatic settings, Greene says we should try to transcend them—and if we do, we cannot expect the universal principles that we adopt to “feel right.” 
...
When he tries to describe the appropriate place of utilitarianism in our lives, this is what he says:

It’s not reasonable to expect actual humans to put aside nearly everything they love for the sake of the greater good. Speaking for myself, I spend money on my children that would be better spent on distant starving children, and I have no intention of stopping. After all, I’m only human! But I’d rather be a human who knows that he’s a hypocrite, and who tries to be less so, than one who mistakes his species-typical moral limitations for ideal values. 

The most difficult problem posed by Greene’s proposals is whether we should give up trying to understand our natural moral intuitions as evidence of a coherent system of individual rights that limit what may be done even in pursuit of the greater good. Should we instead come to regard them as we regard optical illusions, recognizing them as evolutionary products but withholding our assent?

Finally, interesting excerpts from the book:
  • Successful, high-earning (developed-world) societies almost universally score much higher on willingness to cooperate than less successful, developing ones. The association between pro-social behavior (including 'altruistic punishment') and societal success is strikingly strong. (A toy sketch of the Public Goods game behind these results appears after this list.)

    [Chart: Public Goods game results across societies]

  • All 'moral tribes' are prone to hold crazy beliefs, because culture trumps fact: once a tribe accepts a belief as a 'cultural identity badge', a shibboleth, its actual factual basis becomes all but irrelevant. No tribe is exempt.

  • Suppose you are asked to look at words and say what color they are written in: bird, tree... easy, right? How about: red written in blue ink, blue written in green... Much harder. Reading the word happens in 'automatic' cognitive mode, where the ventromedial prefrontal cortex (VMPFC) dominates; overriding that initial impulse and performing the designated task is dominated by the dorsolateral prefrontal cortex (DLPFC). This is the essence of the difference between 'automatic' and 'manual' mode, which applies to moral judgment just as much as it does to color naming. (A toy demo of this 'Stroop effect' appears after this list.)

    This is exceptionally well illustrated by 'trolleyology' -- the family of thought experiments commonly used to analyze our moral intuitions. Suppose a runaway trolley is about to kill 5 people, but you can avert that disaster by pulling a lever to redirect it to another branch, where it will only kill 1 -- would you? How about if you can only stop the trolley by pushing a huge fat guy standing near you onto the tracks (you are too thin to stop the trolley by throwing yourself in front of it)?

    The bottom line is, our 'automatic mode' is generally OK with letting one person die to save 5, or even pulling a lever that effectively kills him in order to save 5 lives -- but not with pushing him under the trolley with our own hands. This distinction is clearly not morally significant, but our evolutionary 'automatic mode' doesn't give a f#$k.

  • Utilitarianism is commonly misunderstood, because people imagine that it might lead to all sorts of miserable outcomes; but BY DEFINITION, if your ostensibly utilitarian supposition leads to a miserable outcome, then it's not in fact utilitarian. Utilitarianism must intrinsically account for the realities of human nature.

    So if you think utilitarianism demands that you spend every spare penny on donations to Africa, because your American salary can save tens of people on the other side of the world (that you become a 'happiness pump') -- you are wrong: given the realities of human nature, this would just make you miserable and depressed, until you ultimately lose your job and become an example to others of how bad utilitarianism is. Thus this can't be what utilitarianism demands, and such an understanding of utilitarianism is naive and incorrect.

  • Tragedy of Commonsense Morality -- the seemingly irreconcilable clash of moral values between different moral tribes -- arises BECAUSE evolution 'solved' the cooperation problem via commonsense morality: we evolved with moral sensibilities as the invisible enforcer of intra-tribe cooperative behavior; but those moral sensibilities, which have to be strong enough to make us cooperate, are also so strong that when they conflict (between different tribes), they CONFLICT.

  • There is no way to derive a common metamorality (for resolving inter-tribe moral conflicts) from religion, science, or reason. Thus, we must simply find something which can function as the lingua franca among different moral tribes. We are forced into pragmatism, because the alternative is irreconcilable enmity and, eventually, mutual annihilation.

  • Most seemingly rational beliefs we hold are actually post-hoc rationalizations. We first adopt a belief for emotional reasons -- reasons of 'automatic processing' -- and then invent rational explanations for it; and usually we don't even realize we are doing it.

    A brilliant illustration of this comes from a split-brain patient, who was shown a picture of snow to his right hemisphere and a picture of a chicken claw to his left hemisphere, and asked to pick a matching picture with his left hand (the one controlled by the right hemisphere); he picked a shovel, entirely appropriate for the snow. However, when asked to EXPLAIN his choice -- and language is governed by the left hemisphere, which only saw the claw -- he said: "I saw a claw and picked a shovel, and you can clean a chicken shed with a shovel".

  • Rationalization is the great enemy of moral progress, and thus of deep pragmatism. If moral tribes fight because their members have different gut feelings, then we'll get nowhere by using our manual modes to rationalize our feelings.

  • This doesn't mean that the 'automatic mode' is bad. In fact, most of the time, when there is no disagreement about the conclusions of the 'automatic mode', we should trust it; it is, after all, a finely honed result of millions of years of evolution, geared to enable cooperation and social co-existence. It's just that the automatic mode has its limitations, and when it runs into those limitations, it's inflexible. Thus, we should be prepared to recognize when the 'automatic mode' is not up to the task, and explicitly engage 'manual mode' -- reason -- instead.

  • Talking about rights usually isn't a premise in an argument; it's a way to avoid an argument. Declaring that someone has an intrinsic right to this or that merely means that you are unwilling to examine WHY they should have such a right.

    [classical liberals, like Mill, argued that, e.g., the right to liberty and the right to free speech are paramount because they make for a better, more prosperous, happier society, not because they are metaphysical first-order objects; I agree. Rights must be JUSTIFIED, just like anything else -V]

    Realistically, when we speak of rights, what we are REALLY saying is that those principles have been tested and found strong and important, and declaring them 'rights' expresses our strong and insuperable commitment to them. Thus, the language of 'rights' is properly seen not as declaring their metaphysical existence, but as expressing our strong, considered, informed utilitarian commitment.
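
To make a couple of the items above concrete, here are two toy sketches in Python. First, the Public Goods game from the first bullet: a minimal sketch under assumptions of my own (the endowment, the pot multiplier, and the punishment parameters are illustrative numbers, not values from the studies Greene cites). It shows why free riding pays in the bare game, and how costly 'altruistic punishment' flips the incentive:

    # Public Goods game with optional 'altruistic punishment'.
    # All parameters are illustrative assumptions, not figures from the book.
    ENDOWMENT = 20     # tokens each player starts the round with
    MULTIPLIER = 1.6   # the common pot is multiplied by this, then split evenly
    POINTS = 3         # punishment points each player assigns to a free rider
    COST = 1           # tokens the punisher pays per point assigned
    FINE = 3           # tokens the punished loses per point received

    def play_round(contributions, punish=False):
        n = len(contributions)
        share = sum(contributions) * MULTIPLIER / n
        payoffs = [ENDOWMENT - c + share for c in contributions]
        if punish:
            avg = sum(contributions) / n
            for i, c in enumerate(contributions):
                if c < avg:                # a below-average contributor...
                    for j in range(n):
                        if j != i:         # ...is punished by every other player,
                            payoffs[j] -= POINTS * COST  # at a cost to the punisher
                            payoffs[i] -= POINTS * FINE
        return payoffs

    contribs = [20, 20, 20, 0]  # three full cooperators, one free rider
    print("no punishment:  ", play_round(contribs))               # free rider nets 44.0 vs 24.0
    print("with punishment:", play_round(contribs, punish=True))  # free rider nets 17.0 vs 21.0

Punishing is itself costly here (each punisher ends with 21 tokens instead of 24), which is what makes it 'altruistic': the punisher pays out of pocket to enforce the cooperative norm.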
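
Second, the Stroop effect from the 'automatic vs. manual mode' bullet. This is a toy terminal demo of my own (the color set, the ANSI escape codes, and the trial counts are arbitrary choices); it times how much longer it takes to name the ink color when the word and the ink disagree:

    # Toy Stroop demo: name the INK color, not the word.
    # A sketch of my own; ANSI escape codes color the text in the terminal.
    import random
    import time

    ANSI = {"red": "\033[31m", "green": "\033[32m",
            "blue": "\033[34m", "yellow": "\033[33m"}
    RESET = "\033[0m"
    COLORS = list(ANSI)

    def trial(congruent):
        """One trial: show a colored word, ask for the ink color, time the answer."""
        ink = random.choice(COLORS)
        word = ink if congruent else random.choice([c for c in COLORS if c != ink])
        start = time.monotonic()
        answer = input(f"ink color of {ANSI[ink]}{word.upper()}{RESET}? ").strip().lower()
        elapsed = time.monotonic() - start
        print("correct" if answer == ink else f"wrong, it was {ink}")
        return elapsed

    congruent = [trial(True) for _ in range(5)]     # word matches ink: reading helps
    incongruent = [trial(False) for _ in range(5)]  # word fights ink: must override it
    print(f"mean time, congruent:   {sum(congruent) / 5:.2f}s")
    print(f"mean time, incongruent: {sum(incongruent) / 5:.2f}s")

The incongruent trials reliably take longer: reading the word is automatic, and overriding it to report the ink color takes deliberate, 'manual mode' effort.
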
Finally, Greene attempts to apply his methodology to a high-profile 'commonsense morality' conflict -- abortion.

He analyzes the pro-choice side, and finds it wanting in consistency (though he never touches upon the type of position I have expressed, which hybridizes the right to choose with a right to life starting at viability, using neural complexity as the threshold).

He then analyzes the pro-life side, and while finding it wanting too, concludes that it's stronger on prudential grounds (more people = more happiness, utilitarianism FTW).

But then Greene does a two-step, and declares that the pro-life case is TOO strong: it demands the impossible of people, failing for the same reason as the supposition that utilitarianism demands we turn into perfectly selfless, inexhaustible donors to others' happiness ('happiness pumps'). After all, the salary of someone in the USA can save tens of lives somewhere in Africa, so aren't they OBLIGATED by utilitarian considerations to do so? But no: since this would go against human nature and ultimately result in depression, loss of the job, and the loss of all that income, such a naively utilitarian argument is incorrect. The pro-life argument, in Greene's view, fails for the same reason.

IMO, what Greene has done here is exactly the sort of rationalization he decries. It's perfectly possible to render the pro-life position in such a way that you DON'T have to fall into the 'happiness pump' trap.



Overall, I found the book extremely interesting, informative, and thought-provoking, but IMO it falls short of its stated goal -- offering us a workable, practical 'metamorality' which can be used to resolve conflicts of principle between different 'moral tribes'.

12:40:33 on 09/27/15 by danilche - Category: Philosophical
