We evolved to argue

They say that politics and religion are to be avoided in polite company, as they are likely to cause an argument. This is perhaps one of the dumbest social norms we have, as moral and political disagreement has been essential to our survival for millennia. If you don’t agree, let’s argue about it.

Let’s start with three simple facts about human beings:

1) The human brain evolved sequentially over time from simpler versions of itself (evolution tinkers, it doesn’t build from scratch);

2) the brain evolved to quickly identify patterns and respond to them, not to deduce truths through formal logic; and

3) human beings as a whole evolved to be social, omnivorous mammals.

These three facts have far-reaching implications for how we interact with one another: As our society has evolved far faster than our bodies, we’ve been left with 21st Century politics, ancient brains and the question of how to reconcile the two.

The brain evolved on the fly

In the philosophy of science, Neurath’s Raft refers to the idea that science only makes progress in reference to itself. If you want to repair a raft while you’re already on the river, you have to stand on an existing plank of the raft at any given time; you can’t ever stop and build the whole raft over again. Evolution operates in much the same way: species never get to pause and start over; they instead make slight adaptations based on what already exists.


The anatomy of the human brain is perhaps one of the best examples of this phenomenon: while the details are obviously a bit more complicated, as a general rule the closer you get to the forehead and the farther away you get from the brain stem, the more complex and more recently evolved the structure in question is. In other words, our brain evolved by building on itself from the bottom up, since it couldn’t replace what was already there.


Consciousness, which primarily resides in a small subset of outer-brain neuroanatomical regions, is the newest and, by extension, least-evolved addition to our cognitive arsenal. Importantly, as a corollary to the evolutionary Neurath’s Raft, it developed to improve upon, not replace, the structures that came before and reside below it. This means that the human brain in its current form is one in which conscious processes interpret and react to unconscious processes that are still very much “in charge.” To this point, the neocortex is in constant dialogue with the anterior cingulate cortex (ACC), a structure that sits between the neocortex and the limbic system (responsible for emotional processing, among other things) and regulates which stimuli warrant conscious awareness.

In short, even consciousness is regulated by unconscious processes – in layman’s terms, we call this selective attention. When the brain is unable to efficiently regulate what should and should not reach consciousness, we call it schizophrenia, a disorder that has been traced, in part, to the ACC.

The brain we have is the result of millions of years of primate evolution, and it shows. As we’ll see, when placed in the 21st Century and tasked with the responsibility of answering some very complicated questions, it reverts to eons-old heuristics in order to come up with workable solutions.

The brain doesn’t function like a computer

Early cognition was simple pattern recognition in response to stimuli: find a reflex that works and figure out the details later (this is why you involuntarily jump when you hear sudden loud noises). The way we currently process information is only a few steps removed from that: our behavior is still founded on the recognition of patterns and the selection of appropriate responses to those patterns. Over time, these behaviors become ingrained in networks of neurons that are activated more efficiently as the same, or similar, stimuli are presented.

The kicker here is that these networks of neurons need not be internally consistent, hence our varied reactions when logically identical situations are presented in different frames. The same person can be equally moved in opposite directions when it comes to supporting or opposing expanding the social safety net when it’s phrased as “aiding the poor” or “expanding welfare,” as the two phrases activate different arrays of neural networks. You can call that hypocritical, but you also have to call it natural, and we’re all susceptible to similar hypocrisies.

You’ll notice that this depiction of human cognition violates one of the principles required of rational actors, namely that the actor respond in identical ways to situations that will produce identical outcomes. A rational actor should be indifferent to framing effects concerning political issues, but we all know that’s not the case (“gay marriage” vs. “marriage equality,” “abortion” vs. “reproductive health,” etc.). These effects persist because the different frames literally activate different parts of our brains, and the incoming political information is therefore processed differently.

Unless they’re consciously laid bare, our minds have no need to iron out the kinks in these inconsistencies. However, when one is explicitly called out, we are prompted to refine our thinking and either rationalize or modify our predisposition. Public policy scholar Howard Margolis calls this the “seeing-that”/“reasoning-why” process of judgment. Normally, simply “seeing-that” a given response conforms to our understanding of the world is a perfectly serviceable and efficient strategy; only when “seeing-that” proves insufficient is it necessary to consciously grapple with our existing mode of thought. By extension, this means that we can only learn and improve the process by which we evaluate information if we have someone to push us to provide reasons.

In other words, the brain evolved to be emotional and efficient at the expense of being logical or consistent. While this form of cognition is essential for survival, it is also prone to error, which can be disastrous in more complex situations, as we essentially sacrificed objectivity and consistency for decisiveness. What makes the tradeoff evolutionarily sustainable is a social environment in which errors are called out and corrected. Emotional, internally inconsistent, pattern-seeking humans depend on one another to tell them when they’re wrong. In this way, large-scale rationality emerges from individual-level non-rationality.

21st Century politics and morality evolved from ancient adaptive advantage

However, this only works if there is variation of thought and interaction between those variations, which brings me to the last plank of my argument: humans evolved ideological diversity as a consequence of overcoming the omnivore’s dilemma concerning what to eat. The theory, posed by moral psychologist Jonathan Haidt in The Righteous Mind, goes like this:

For carnivores, the decision as to what to eat is fairly straightforward. For herbivores, it’s only slightly more complicated, but as a general rule it’s safe to eat green things. However, omnivores in competition with each other for calories have to decide how exploratory they want to be when it comes to ingesting the things they find in nature: be too picky and your competitors will get more calories and have more offspring, washing you out of the gene pool; too exploratory, and you eat the wrong things and die off, exiting the gene pool even more quickly.

The key to winning this evolutionary game isn’t to get lucky and evolve into a species that just “gets” exactly what is and isn’t OK to eat, especially as new species of plants and animals are constantly dying off and being introduced into the environment. The species that wins is the species that includes a variety of adventurous and picky eaters – one could safely call them liberals and conservatives – that form an equilibrium throughout the population. Remnants of this evolutionary process show up along ideological lines. Conservatives register more marked physiological reactions to disgusting images than liberals. Conservatives today are also, on average, pickier eaters, and are even more likely to carry the gene that makes one sensitive to phenylthiocarbamide (PTC), a chemical similar to the one found in cilantro that makes the plant taste bad to some people and not others.

And as society gets more complex and the big questions begin to move beyond what to eat, similar ideological equilibria emerge. As questions arise concerning how to interact with outsiders, how to orient oneself towards authority and how to punish those who violate the social contract, communities that allow varied answers to compete for public acceptance will be more sustainable than communities in which one dogma dominates. And when it comes to these questions, self-described liberals and conservatives are predisposed to serve different functions in answering them. This is true down to our cognitive physiology and behavior: On average, conservatives have larger amygdalae, the brain structures most closely associated with threat perception, than liberals. Unsurprisingly, conservatives were also less distracted by nearby stimuli than liberals when tasked with identifying an angry face in a laboratory setting.

There are two extremely important caveats here, both of which Haidt makes in his book: First, while ideological orientations (and the propensity to participate in politics at all) are in part genetic, biology is not wholly deterministic. As cognitive scientist Gary Marcus notes, “Built-in does not mean unmalleable; it means organized in advance of experience.” Second, none of this evidence alone makes either ideological orientation better or worse a priori; it just makes them different – different to the point at which the two groups really do consider morality across fundamentally different channels. In other words, the liberal and conservative sides of the ideological spectrum evolved together, and it’s likely that neither would be sustainable alone.

So, with all this taken together, what we’re left with is a species in which disagreement about the big questions concerning how best to live is both inevitable and necessary. We need to push each other on our beliefs and actions, as that is how we learn and how society progresses. The individual mind is too primitive to answer large-scale social and moral questions alone.

But don’t we suck at arguing?

Well, kinda. But we’re good enough to make it work.

In a twist of sad irony, it seems as though the reasons why we have to argue are also the reasons why we argue so inefficiently. That our brain processes political information via an emotional, associative process instead of a cool, calculated one means that we are unlikely to readily admit when we’re wrong. To that point, when a team of researchers set about using hard data to challenge participants’ beliefs concerning politically-charged information, they found that the more mathematically competent you were, the less likely you were to change your mind in the face of evidence. For the participants, and fitting with the model of the brain outlined above, conclusions came first, and those who understood data were able to rationalize their positions more effectively than participants who couldn’t explain away the numbers.

And this is why Reddit (and, I’m sorry to say, the comments section on this article) isn’t the saving grace of American democracy. Just because you have two people who disagree with each other talking doesn’t mean that the result will be in any way productive. However, argumentation oriented towards amicable compromise and agreeable outcomes is both possible and useful on a large scale. If that sounded too unicorns-and-rainbows-y, take a look at the participatory budgeting process, first pioneered in Porto Alegre, Brazil.

The process goes something like this: A percentage of the city’s budget is set aside to be allocated by deliberative bodies of regular citizens. In a typical example, a city is divided up into a number of regions, which are then divided up into neighborhoods (think wards and voting precincts), whose residents then meet to discuss how best to allocate the budget. Decisions are made by majority vote, but not before everyone who wants to speak has spoken. Neighborhoods then elect representatives to argue for their district’s interests at regional meetings, where the budget is finalized and sent to the mayor for approval. If the mayor vetoes the budget, the regional council can either modify the budget or override the veto with a two-thirds vote.

If you’re familiar with American political science, this probably sounds a lot like Robert Putnam and his work on social capital (Bowling Alone, Making Democracy Work, etc.), since you’re bringing people together to talk politics, but participatory budgeting actually goes a few steps beyond Putnam to address two major criticisms of his theory: First, even if you can get people to go bowling together, who says that they’re going to talk about politics? Second, the civic traditions that made democracy work so much better in northern Italy than in the civically barren south also made fascism work better, suggesting that civic traditions make government work better, not just democracy.

In places where participatory budgets are implemented along the lines of the one modeled above, citizens’ standard of living and overall wellbeing immediately improve. As the participatory budget in Porto Alegre cut out clientelism and graft in the political process, money was freed up for schools, roads and basic public services, such as sewage. Furthermore, and more relevant to Putnam, regardless of how the budget gets divided up, opening up governance to participatory processes improves citizenship, as measured by the number of neighborhood associations, cooperatives and other organizations pertaining to local governance, as well as lower levels of tax evasion and higher voter turnout. While Putnam’s theory seems to suggest that civic traditions are in the historical cards for some and not others, cities’ experiences with participatory budgeting suggest that you can create civic traditions where few previously existed, making the whole democratic system function more smoothly.

The biggest cause for skepticism in processes like this is that the discussion will be dominated by more sophisticated citizens. However, in Porto Alegre, while education and gender initially affected whether or not a citizen spoke up, by the time a citizen had attended three meetings the only significant predictor of their participation was how many meetings they had attended. As noted by the researchers who were observing the process, repeated deliberation elevated the political capacity of ordinary citizens and led them to establish collaborative relationships with citizens both in and out of their neighborhoods.

The success of the participatory budget jibes with what we’d expect given the evolution and functionality of our brains. Political argumentation is, in many ways, the sharpening of mind on mind through reasoned speech: When we put competing interests and thought processes in the same room and pit them against each other until an agreement emerges, everyone is elevated a little bit: materially, socially and intellectually. So long as the institutional framework is such that a fight can be avoided, an argument is healthy.

So, the next time you find yourself at dinner and someone mentions the 2014 elections, don’t change the subject. Sit up in your chair and have an argument, keeping in mind that your interlocutor may not be evil, or even wrong. Believe it or not, you will quite literally be making our species slightly more viable in the long term.

For a far lengthier account of this and other topics related to political cognition, you can download my senior honors thesis here.


Jon Green is a graduate of Kenyon College with a degree in Political Science and high honors in Political Cognition. A veteran of the campaigns of Congressman Tom Perriello in 2010 and President Obama in 2012, he writes on a number of topics, but pays especially close attention to elections, religion and political cognition. Follow him on Twitter at @JonGreen8, and on Google+.


  • P+T

    Jon, we wanted to learn more about your fascinating and important post. We attempted to download your senior honors thesis, but Digital Kenyon says, “This content is available only to authorized visitors” and we don’t have a Bepress account. Is your thesis available through any other channels? Please advise!

    Thanks,
    P+T
    Peter+Trudy Johnson-Lenz

  • Bubbles

    I teach comparative law overseas. I teach it as a global history of law class. I like to start with pre-history, i.e. hunting and gathering societies. To make my point I show the movie “Dances with Wolves,” which describes the lifestyle of the Native Americans as “harmony.” For a while I worried that this was perhaps too fictional for academics. Then I saw a CNN report on the Hadza tribe in Africa – they live in the valley where it is believed the first humans emerged 100,000 years ago – and they appear to live as a hunting and gathering group of small numbers, harmonious in the manner of Dances with Wolves. I would show the CNN report, but my students’ English is not good enough to understand it. http://www.cnn.com/2014/04/18/world/africa/africas-ancient-hunter-gatherers-hadza/

    Stage 2 is the Neolithic or Agricultural revolution. And this is where humanity begins to live in large societies and people become cruel to each other. It is an absence of fairness/justice. This is especially so before the Axial age (500 B.C.E.). The movie I use to demonstrate this is Apocalypto, which begins where Dances with Wolves leaves off. These early societies were often unstable/brittle and rarely got very large or lasted very long, i.e. prone to collapse, because their foundation was the peasantry. Elites quickly learn they can use their power and position to take as much as they want from the peasantry, until they take too much, become top-heavy, and society collapses because the peasantry can no longer bear the weight of holding it up – Apocalypto serves this up nicely.

    Stage 3 is the Axial age. This is the invention of academic thought as a discipline; the topic is ethics, and the ethic is fairness. It’s called the Axial age because prominent philosophers emerge in isolated societies, almost simultaneously, throughout the Eurasian periphery around 500 B.C.E.: Confucius (who first postulated the “golden rule”) and Lao Tzu in China, Buddha in India, Zoroaster (perhaps a bit earlier) in Iran, Deutero-Isaiah in Israel, and the Sophist philosophers in Greece. At about the same time, Rome, still a small city-state, is dealing with similar issues and rolls out the Twelve Tables, which are the foundation of modern law. The Axial age can be stretched out to include later refinements of Christ and perhaps even Mohammed. Immediately after the Axial age we see the emergence of more stable societies and larger states and empires; fittingly, the Persian Empire emerges as one of these early examples. These societies get bigger and last longer, but eventually they too collapse due to the same phenomenon – the rich get richer, the poor get poorer, but the poor bear the burden of holding up society, so when the rich take too much and leave too little, the society/state collapses. That is, indeed, what eventually happened with the Roman Empire. The big problem is that Axial age ethics are only ethics; they are not hard-wired into societies. It may be unethical and wrong for the rich to take too much, it may even be a sin or evil, but the ethical norm is only an informal norm. Eventually the rich realize that they can live with being “unethical” and unfair if it makes them richer. Ironically, the last Axial Age system, Islam, tries to ‘hard-wire’ fairness into the system. That system is a legalistic one that bundles religion, politics and law (and everything else) into a single system, and by law in that system, among other things, the wealthy must give 2.5% of their wealth to the poor. That system was initially very successful, but eventually was constrained by the excessive bundling.

    The problem still exists today. Fairness is understood by most people as proper. Most normal (middle class) people live according to the golden rule. Only the very rich and the poor do not: the poor because they can’t afford to, the rich because they get no benefit from adhering to it and incur no penalty or loss by not adhering to it. But fairness is still, quite often, an informal norm, and not routinely a matter of law. Democracy was supposed to be the great equalizer – to level the economic playing field – but the rich, in the U.S. at least, have learned how to co-opt democracies.

    Those societies that develop the most inclusive norms gain a competitive edge and tend to burst onto the world stage. But no society has succeeded in formalizing the norms and ethics of fairness without hamstringing itself. The New Deal was an age of inclusiveness; it was enacted all over the free world, and global GNP mushroomed, democracy spread, and history hit its most golden of golden ages, culminating in a man landing on the moon. Beginning around 1970, the rich in the first world learned to reverse the New Deal, and we are now living in the angst of that post-1970 society.

  • Sean

    Thanks for a wonderful, and very heartening post. But looking at some of the depressed responses, I’d like to add that war isn’t the same thing as an argument. It’s the use of violence to get what you want – so the Israel/Gaza tragedy isn’t pertinent. Neither are the ritual insults of trolls. The comment that many people are emotionally committed to their political views, and won’t change easily is true, but I have a little story about that. A couple years back I was visiting my parents and a friend of theirs came to dinner. She is a right-wing Fox News person, and she went off on some story about “flaming liberals,” which she said with a sneer. I shot my hand up and said, “I’m one!” She was shocked. No one had ever called her on that crap before. I argued some points with her before moving to other topics for the actual dinner. I stood my ground, but what I didn’t do was yell or insult, or walk out in a huff. At the end of the evening she made a point of giving me a polite goodbye, and I returned the favor.

  • Hue-Man

    Almost all our evolutionary history has been spent in groups of about 20 people where everyone knew everyone else and developed social rules within that small group. I doubt those “pre-programmed” skills are suited to communities of 50,000 within a larger city of 10 million. More to the point of the posting, the words “noise” and “riot” are used to describe an argument involving more than 2 people!

    I like the food analysis of conservatives vs liberals. My explanation for the last 30 years of Western political history is “The conservatives took over control of the food supply to ensure that only rotten, poisoned food was supplied to the liberals.”

  • keirmeister

    No it doesn’t!

  • bkmn

    Unfortunately the right wingers have lost the ability to change their minds based on sound argument.

  • Indigo

    At that rate, Israel and Gaza are way ahead on the evolutionary scale.
