The Trolley Problem:
This experimental research, performed by Joshua Greene, centers on a thought experiment posed by the philosophers Philippa Foot and Judith Jarvis Thomson, called the “Trolley Problem.” It consists of two hypothetical situations that pose a moral dilemma. The first is the switch dilemma: a runaway trolley is hurtling down the tracks toward five people who will be killed if it proceeds on its present course. You can save these five people by diverting the trolley onto a different set of tracks, one that has only one person on it, but if you do this that person will be killed. Is it morally permissible to turn the trolley and thus prevent five deaths at the cost of one? Most people say, "Yes." The second is the footbridge dilemma: the trolley is again headed for five people. You are standing next to a large man on a footbridge spanning the tracks. The only way to save the five people is to push this man off the footbridge and into the path of the trolley. Is that morally permissible? Most people say, "No." The two situations raise the question of why, in the switch case, it is permissible to kill one person in order to save the lives of five others, while in the footbridge case, killing the one person is completely unacceptable. Moreover, “how does everyone know (or “know”) that it is okay to turn the trolley but not okay to push the man off the footbridge?”
Study:
In Joshua Greene’s study, brain imaging was used to record people’s responses to moral dilemmas like the ones just mentioned, as well as many others. He also divided the moral dilemmas into two categories based on difficulty, recording not only brain activity but also the relationship between reaction time and the difficulty of the dilemma. He hypothesized that the difference between the two situations is that one case is “up-close-and-personal” while the other is impersonal. Greene predicted that brain imaging would reveal different brain activity when people responded to a personal case as opposed to an impersonal one. The study was a preliminary attempt to answer the question “are the moral truths to which we subscribe really full blown truths, mind-independent facts about the nature of moral reality, or are they… in the mind of the beholder?” It also asked why, in moral dilemmas such as these, one reaction strikes us as horrific and outrageous while another seems completely acceptable. Through his testing he tries to support the theory that this phenomenon is less about structured moral codes enforced through experience or evolution, and more about “the way our brains are wired up.” His hypothesis also argues that in personal dilemmas, a difference in reaction times for “yes” and “no” answers would indicate that different brain activity underlies each judgment.
Results:
When people responded to a more personal moral dilemma, three emotion-related brain regions were more active: the posterior cingulate cortex, the medial prefrontal cortex, and the amygdala. When they responded to an impersonal dilemma, there was greater activity in areas associated with ordinary cognition: the dorsolateral prefrontal cortex and the inferior parietal lobe. Reviewing response times alongside the brain images showed that when a person answered “yes” to questions like the footbridge dilemma, there was intense cognitive activity in the dorsolateral prefrontal cortex and the inferior parietal lobe, as well as in the anterior cingulate cortex (associated with conflict), and their reaction time was extended; when a person answered “no” in such cases, there was heightened emotional activity in the posterior cingulate cortex, medial prefrontal cortex, and amygdala, and their reaction time was significantly faster.
These results suggest that moral judgments emerge from more than one neurological system. Going further, it appears that these emotional and cognitive systems are essentially battling each other, each striving to control and influence the outcome of a moral decision. Greene arrived at this conclusion by localizing morally driven responses in the brain and watching neural activity while simultaneously recording the time it took a person to make a decision; from this he inferred that shorter reaction times indicate the emotional brain systems dominating the cognitive process, while longer reaction times mean that the cognitive systems have overpowered the impulsive emotional response.
Based on his findings, Greene believes that profound moral positions are not, as commonly perceived, invented by humans or handed down by God, but rather that they may somehow be embedded in brain chemistry. He thinks that morality is fundamentally a product of the interaction between neurological systems. With this theory, Greene also considers that over the course of human evolution we have developed a stronger cognitive response to moral dilemmas, and yet the less consequentialist response (consequentialism being the philosophical view that moral judgment is a product of evaluating the consequences of a decision) still very often overrides it.
Kiddie Morality:
This segment looks at how young children adhere, from a very early age, to a moral universe. Dr. Judith Smetana’s research makes clear that moral judgment in these children is not realized solely through the rules enforced by the adults around them. In addition to what they are taught, certain moral concepts seem to be understood and driven by something innate. Smetana interviews preschool children, asking them all kinds of questions that embody complex moral ideas but are presented simply, in terms that a young child can relate to. She asks:
-Who makes the rules here?
· The teacher.
-Can the teacher change the rules if they want to?
· Yes. She’s the teacher; she can do whatever she wants.
-Is there a rule about hitting at your school?
· Yes.
-Suppose the teachers agree that they don’t have a rule about hitting anymore, would it then be okay to hit?
· No. Because that would make somebody feel bad.
-Is there a rule about sitting during lunch? Is that a rule the teacher could change?
· Yes. Yes, if she says okay you can stand up, you can do that... You have to listen to the teacher.
What she found was that with certain rules, the children felt that changing or deviating from the rule would be okay, while with others they expressed that it would not be okay to break them. In cases dealing with hitting, hurting, and teasing, the general consensus was that it would be wrong even if the teacher did not see them or had no rule about hitting; whereas with rules like sitting in a circle during “circle time” or during lunch, the children felt it would be acceptable if the rule were different or if there were no rule at all.
She also addresses how young children develop this moral sense, which could be partially innate but is also largely discovered through experience. While most young children understand a lot of rules and the conditions attached to them, they still often give in to certain intrinsic urges and break the rules. Researchers think these defiant moments could be related to their young, underdeveloped sense of empathy. In one anecdotal story, a woman describes watching her four-year-old son, in his classroom, make his best friend bleed by tackling him in front of the entire class. She explained that although it was extremely difficult not to intervene when she saw how mortified her son was after the incident, it was important for him to understand the emotional consequences of this kind of behavior.
The next story in this segment also looked at moral development from an experiential perspective. Two adults talked about how very specific events from their childhood, in which they had diverged from their moral rules, continue to resonate even after so many years. They explained that they still feel guilt and regret about their poor moral judgments, although, in retrospect, those experiences have heavily shaped their moral sense and ability to empathize.
Controversy Surrounding the Neurological and Psychological Study of Morality
Morality has always been highly controversial for a number of reasons, even before any scientific theories or connections were made. It deals with the way we understand “right” and “wrong,” and from this very subjective perception conflict arises. Morality is also one of the most integral components of religion, and different religions affirm different explanations of why and how morality should be incorporated into our lives. For instance, Christianity grounds morality in the Ten Commandments, physically handed down from God on tablets. In this case, neurological research stands in direct opposition to Christian conviction, because it essentially proposes that morality could in fact be a product of brain chemistry. Because most religious perspectives on morality are so heavily rooted in history, recent scientific studies that radically defy common belief are bound to spark significant controversy and debate.
Resources
Greene, J. D. (2007). The secret joke of Kant’s soul. In W. Sinnott-Armstrong (Ed.), Moral Psychology, Vol. 3: The Neuroscience of Morality: Emotion, Disease, and Development. Cambridge, MA: MIT Press.
Greene, J. D. (2003). From neural “is” to moral “ought”: What are the moral implications of neuroscientific moral psychology? Nature Reviews Neuroscience, 4, 847-850.
“Morality,” Radiolab, WNYC.