How Not to Think About Motivated Reasoning

The New Yorker recently published a small piece on motivated reasoning and evolutionary psychology titled "Why Facts Don't Change Our Minds." Motivated reasoning ("MR") is a pet interest of mine – *ahem* Intellectually Honestest – and every so often it gets some run with wider audiences (like the New Yorker). The specific impetus in this case seems to be a new book that tries to answer why we like our preexisting biases so much:

In a new book, "The Enigma of Reason" (Harvard), the cognitive scientists Hugo Mercier and Dan Sperber . . . point out that reason is an evolved trait, like bipedalism or three-color vision.

Cool. I like evolutionary psychology. I tend to think more in economic frameworks, e.g. incentives, and don’t worry so much about the “why,” but all ideas are welcome.

But . . . the New Yorker can’t help itself and eventually tells us why it’s really interested in motivated reasoning. Because, Trump, obviously:

Surveys on many other issues have yielded similarly dismaying results. “As a rule, strong feelings about issues do not emerge from deep understanding,” Sloman and Fernbach write. And here our dependence on other minds reinforces the problem. If your position on, say, the Affordable Care Act is baseless and I rely on it, then my opinion is also baseless. When I talk to Tom and he decides he agrees with me, his opinion is also baseless, but now that the three of us concur we feel that much more smug about our views. If we all now dismiss as unconvincing any information that contradicts our opinion, you get, well, the Trump Administration.

I don’t want to write off the article completely, because it does a good job describing the overriding tendency for reason to (a) favor certainty and conviction; (b) favor what is already assumed to be true; and (c) make us feel good when both (a) and (b) are accomplished.

But I agree with Arnold Kling that the New Yorker mostly succeeds in demonstrating the very thing it purports to describe: “Don’t worry, loyal readers, motivated reasoning is why the roughnecks and uncredentialed don’t get us. They just haven’t fully evolved from our hunter-gatherer ancestors. Amiright?! High Fives!”

As Kling puts it:

Kolbert and her New Yorker readers are reassuring one another that they are right to be contemptuous of President Trump . . .

Suppose that I were to apply the illusion of explanatory depth to the response to the financial crisis, including the bank bailouts. The elites in this country believe that they understand the causes of this policy (too much deregulation) and the consequences of this policy (saved us from another Great Depression). They hold this baseless belief because their fellow elite-members hold this baseless belief. And one could argue that the Trump Administration is a consequence of the fact that the elite view is not convincing to the rest of the country. (Note, however, that I do not claim to understand last year’s election. I am just suggesting that elites can be just as shallow as Trump supporters. I would go further and suggest that flattering yourself because you hate Trump is itself a sign of intellectual shallowness.)

And therein lies the problem with evangelizing concepts like motivated reasoning. Demonstrating that bias is more pervasive than we give it credit for very quickly becomes a demonstration of someone else’s bias. It’s just further evidence that, seriously, we were right all along.

MR: We’re incredibly good at convincing ourselves that we were right all along and that our team is way better.

Person: Yeah, other people are totally like that. It explains why I can’t convince them that they’re wrong — even when I’ve got charts and videos that totally prove I’m right.

MR: No, everyone. That means you and me too. For example, I’m constantly looking for evidence to confirm my belief that people are motivated reasoners.

Person: Yeah. I agree. Other people are totally biased. It’s a problem. How do we fix them?

It’s part of what makes the behavioral economics policy crowd so troubling. It’s a great insight to realize that decision-making is highly contextual and that the “right” thing is often reverse-engineered from “the best thing for me and the things I feel good about and care about.” The problem is that it’s always everyone else who needs “nudging,” never them. The thought doesn’t even cross their minds.

Whatever institutional, social, neurological, or game-theoretic constraints apply to ordinary citizens and justify all kinds of paternalistic interventions are apparently suspended in time and space when you become a policy-maker. Back to the dialogue (and pardon the self-indulgence):

Technocrat: “We need federal agencies to help people overcome their biases and make better decisions.”

Humble Blogger: “Well, what if the federal agencies are also biased in the exact same ways?”

Technocrat: *Blink* “Did you read the New Yorker on Why Facts Don’t Change Our Minds? You totally should — I think it would help you a lot.”

Humble Blogger: “No, but seriously. There’s a ton of research on this – it’s called Public Choice and it’s been around a long time. Incentives for public officials make them do all kinds of things that are mostly good for public officials. It turns out they’re human too. In general, technocracy has a really bad track record, relative to trade-tested trial-and-error, but that’s my bias.”

Technocrat: “Oh, you’re just a paid shill for the Koch brothers. Stupid, profiteering, one-note ideologue. Good luck with the Gilded Age and the Great Depression and the Racism and the sweatshops and the financial crisis and the Super PACs and all the things that we technocrats totally fixed.”

Humble Blogger: “Yeah, have you ever wondered how that account of history became so prevalent?”

Technocrat: “Triumph of reason. Obviously. Have you seen this one chart that proves it all?”

If it seems like I’m doing the same thing to the technocrats, you’re not wrong. There is too much uncertainty, particularly when it comes to something as complex and uncontrolled as human civilization, to be overly certain about anything. Reasonable doubt is reasonable, and it allows all teams to say: “It would have worked, it just wasn’t [socialist/capitalist/conservative/liberal] enough — if only it had been more [regulated / de-regulated / redistributive / libertarian] then you’d see I’m right.”

Granted. But to my mind, the fact of uncertainty strongly favors less prescription, not more. It also favors weight-of-the-evidence approaches over anecdotes, and marginal changes over one-size-fits-all solutions.

By all means, start your commune, just don’t force anyone to join.
