
The Origins of Human Morality

How we learned to put our fate in one another’s hands

Credit: Yuko Shimizu

If evolution is about survival of the fittest, how did humans ever become moral creatures? If evolution is about each individual maximizing its own fitness, how did humans come to feel that they really ought to help others and be fair to them?

There have traditionally been two answers to such questions. First, it makes sense for individuals to help their kin, with whom they share genes, a process known as inclusive fitness. Second, situations of reciprocity can arise in which I scratch your back and you scratch mine and we both benefit in the long run.

But morality is not just about being nice to kin in the manner that bees and ants cooperate in acts of inclusive fitness. And reciprocity is a risky proposition because at any point one individual can benefit and go home, leaving the other in the lurch. Moreover, neither of these traditional explanations gets at what is arguably the essence of human morality—the sense of obligation that human beings feel toward one another.

Recently a new approach to looking at the problem of morality has come to the fore. The key insight is a recognition that individuals who live in a social group in which everyone depends on everyone else for their survival and well-being operate with a specific kind of logic. In this logic of interdependence, as we may call it, if I depend on you, then it is in my interest to help ensure your well-being. More generally, if we all depend on one another, then we must all take care of one another.

How did this situation come about? The answer has to do with the particular circumstances that forced humans into ever more cooperative ways of life, especially in acquiring food and other basic resources.

The Role of Collaboration

Our closest living relatives—chimpanzees and bonobos—forage for fruit and vegetation in small parties, but when resources are found, each individual scrambles to obtain food on its own. If any conflict arises, it is settled through dominance: the best fighter wins. In the closest thing to collaborative foraging among apes, a few male chimpanzees may surround a monkey and capture it. But this approach to hunting more closely resembles what lions and wolves do than the collaborative foraging undertaken by humans. Each chimpanzee maximizes its own chances in the situation by trying to block one possible avenue of the monkey's escape. The captor chimp will try to consume the entire carcass alone but typically cannot. Then all the individuals in the area converge on the captured prey and begin grabbing at it. The captor must allow this to happen or else fight the others, which would likely mean losing the food in the melee; thus, a small amount of food sharing takes place.

For a long time humans have done things differently. Around two million years ago the genus Homo emerged, with larger brains and new skills in making stone tools. Soon after, a global cooling and drying period led to a proliferation of terrestrial monkeys, which competed with Homo for many resources.

Credit: Portia Sloan Rollings

Early humans needed new options. One alternative involved scavenging carcasses killed by other animals. But then, according to an account from anthropologist Mary C. Stiner of the University of Arizona, some early humans—the best guess is Homo heidelbergensis some 400,000 years ago—began obtaining most of their food through active collaboration in which individuals formed joint goals to work together in hunting and gathering. Indeed, the collaboration became obligate (compulsory) in that it was essential to their survival. Individuals became interdependent with one another in immediate and urgent ways to obtain their daily sustenance.

An essential part of the process of obligate collaborative foraging involved partner choice. Individuals who were cognitively or otherwise incompetent at collaboration—those incapable of forming joint goals or communicating effectively with others—were not chosen as partners and so went without food. Likewise, individuals who were socially or morally uncooperative in their interactions with others—for example, those who tried to hog all the spoils—were also shunned as partners and so doomed. The upshot: strong and active social selection emerged for competent and motivated individuals who cooperated well with others.

The key point for the evolution of morality is that early human individuals who were socially selected for collaborative foraging through their choice of partners developed new ways of relating to others. Most important, they had strong cooperative motives, both to work together to achieve common goals and to feel sympathy for and help existing or prospective partners. If an individual depended on partners for foraging success, then it made good evolutionary sense to help them whenever necessary to make sure they were in good shape for future outings. In addition, an individual's own survival depended on being seen by others as a competent and motivated collaborative partner. Thus, individuals became concerned with how others evaluated them. In experiments from our laboratory, even young children care about how others evaluate them, whereas chimpanzees seemingly do not.

Absent a historical record and, in many cases, even evidence from fossil remains and archaeological artifacts, our lab in Leipzig, Germany, and others have investigated the origins of human thinking and morality by comparing the behaviors of our close primate relatives with those of young children who have yet to internalize the norms of their culture.

From these studies we have surmised that early humans who engaged in collaborative foraging developed a new kind of cooperative reasoning that led them to treat others as equally deserving partners—that is, not just with sympathy but also with a sense of fairness (based on an understanding of the equivalence between oneself and others). Partners understood that they could, in principle, take on any role in a collaboration and that both of them needed to work together for combined success. Moreover, as two individuals collaborated repeatedly with one another as foragers, they developed an understanding—a mental “common ground”—that defined the ideal way that each partner needed to fulfill a role for mutual success. These role-specific standards shaped the expectation of what each partner should do: for example, in hunting antelopes, the chaser must do X, and the spearer must do Y. These idealized standards were impartial in that they specified what either partner had to do to fulfill the role “properly” in a way that ensured joint success. The roles—each of which had mutually known and impartial standards of performance—were, in fact, interchangeable. As such, each partner on the hunt was equally deserving of the spoils, in contrast to cheats and free riders who did not lend a hand.

In choosing a partner for a collaborative effort, early humans wanted to pick an individual who would live up to an expected role and divide the spoils fairly. To reduce the risk inherent in partner choice, individuals who were about to become partners could use their newfound skills of cooperation to make a joint commitment, pledging to live up to their roles, which required a fair division of the spoils. As part of this commitment, the would-be partners also could pledge implicitly that whoever might renege on the commitment would be deserving of censure.

Anyone who deviated from what was expected and wanted to stay in good cooperative standing would willingly engage in an act of self-condemnation—internalized psychologically as a sense of guilt. A “we is greater than me” morality emerged. During a collaboration, the joint “we” operated beyond the selfish individual level to regulate the actions of the collaborative partners “I” and “you.”

The outcome of early humans' adaptations for obligate collaborative foraging, then, became what is known as a second-personal morality—defined as the tendency to relate to others with a sense of respect and fairness based on a genuine assessment of both self and others as equally deserving partners in a collaborative enterprise. This sense of fairness was heightened by the feeling of obligation, the social pressure to cooperate with and respect one's partner. That is, whereas all primates feel pressure to pursue their individual goals in ways they believe will be successful, the interdependency that governed social life for early humans meant that individuals felt pressure to treat others as they deserved to be treated and to expect others to treat them in the same way. This second-personal morality did not have all the defining attributes of modern human morality, but it already had the most important elements—mutual respect and fairness—in nascent form.

The Birth of Cultural Norms

The second critical step in the evolution of human morality came when the small-scale collaborative foraging of early humans was eventually destabilized by two demographic factors, giving rise to modern humans more than 200,000 years ago. The first factor was competition among human groups. These struggles meant that loosely structured populations of collaborators had to turn into more tightly knit social groups to protect themselves from outside invaders. Each of these groups developed internal divisions of labor, all of which led to a collective group identity.

The second factor was population growth. As numbers grew within these expanding tribal groups, the larger entities split into smaller subunits that still felt bound to the supergroup—or what might be characterized as a distinctive “culture.” Finding ways to recognize members of one's own cultural group who were not necessarily next of kin—and then to distinguish them from members of other tribal groups—became essential. This type of recognition was important because only members of one's own cultural group could be counted on to share one's skills and values and to be trustworthy partners, particularly for group defense. The dependence of individuals on the group thus led to a sense of collective identity and loyalty. Failure to display this group identity and loyalty, meanwhile, could result in being ostracized or dying in clashes with rivals.

Contemporary humans have many diverse ways of marking group identity, but the original ways were mainly behavioral, based on a simple assumption: people who talk like me, prepare food like me and otherwise share my cultural practices are very likely members of my cultural group. From this supposition emerged modern humans' tendency to conform to the group's cultural practices. Teaching one's children to do things in the conventional way defined by the group became mandatory for survival.

Teaching and conformity also laid the foundations for cumulative cultural evolution—in which a practice or an artifact that had been in place for a long time could be improved on, and that innovation could then be passed along to subsequent generations as part of a group's conventions, norms and institutions. Individuals were born into these collaborative social structures and had no choice but to conform to them. The key psychological characteristic of individuals adapted for cultural life was group-mindedness, whereby people took the cognitive perspective of the group as a whole to care for its welfare and to conform to its ways—an inference derived from studies of the behavior of three-year-olds published in the late 2000s.

Individuals who belonged to a cultural group had to conform to the prevailing cultural practices and social norms to advertise that they identified with the group and its way of doing things. But some social norms were about more than conformity and group identity: those that touched on the sense of sympathy and fairness inherited from early humans became moral norms. Thus, just as some norms codified the right and wrong ways of doing things in hunting or making tools, moral norms codified the proper way of treating other people. Because the collective group goals and cultural common ground of human groups created an “objective” perspective—not “me” but “we” as a people—modern human morality came to be characterized as an objective form of right and wrong.

Of course, any individual could choose to act against a moral norm. But when called to task by other group members, the options were limited: one could ignore their criticism and censure and so place oneself outside the practices and values shared by the culture, perhaps leading to exclusion from the group. Modern humans thought of cultural norms as legitimate means by which they could regulate themselves and their impulses and signal a sense of group identity. If a person did deviate from the group's social norms, it was important to justify that uncooperativeness to others in terms of the shared values of the group (“I neglected my duties because I needed to save a child in trouble”). In this way, modern humans internalized not only moral actions but moral justifications as well, creating a reason-based moral identity within the community.

The People of We

In my 2016 book A Natural History of Human Morality, I proceed from the assumption that a major part of the explanation for human moral psychology comes from processes of evolution by means of natural selection. More important, though, the selecting is done not by the physical environment but rather by the social environment. In contrast to evolutionary approaches that base their arguments on reciprocity and the managing of one's reputation in the community, I emphasize that early human individuals understood that moral norms made them both judger and judged. The immediate concern for any individual was not just for what “they” think of me but rather for what “we,” including “I,” think of me. The essence of this account is thus a kind of “we is greater than me” psychological orientation, which gives moral notions their special powers of legitimacy in personal decision making.

The challenge in the contemporary world stems from the fact that humans' biological adaptations for cooperation and morality are geared mainly toward small-group life or cultural groups that are internally homogeneous—with out-groups not being part of the moral community. Since the rise of agriculture some 10,000 years ago, however, human societies have comprised individuals of diverse political, ethnic and religious backgrounds.

As a consequence, it becomes less clear who constitutes a “we” and who is in the out-group. The resulting potential for divisiveness leads to both internal social tensions within a society and, at the level of nations, to outright war—the ultimate example of in- and out-group conflicts. But if we are to solve our largest challenges as a species, which threaten all human societies alike, we had best be prepared to think of all of humanity as a “we.”

Michael Tomasello is a professor of psychology and neuroscience at Duke University and emeritus director of the Max Planck Institute for Evolutionary Anthropology in Leipzig, Germany.
This article was originally published with the title “The Origins of Morality” in Scientific American Magazine Vol. 319 No. 3 (September 2018), p. 74.
doi:10.1038/scientificamerican0918-70