
‘Cognitive Surrender’ Is a New and Useful Term for How AI Melts Brains

Kyle Orland of Ars Technica wrote a blog post about the term “cognitive surrender” on April 3. Maybe I should have noticed it sooner, since it’s been floating around since at least January, when it was, it appears, coined in this context by the Wharton Business School marketing researchers Steven Shaw and Gideon Nave. Their paper is deeply troubling, and once you read about these findings, the term “cognitive surrender” will be stuck in your head too.

If offloading your thinking to a chatbot has left your brain too gelatinous to read the findings in any detail, here’s a video of the authors talking about them:

Shaw and Nave gave 1,372 people a test, along with access to an AI chatbot for help, with one twist: the chatbot sometimes gave wrong answers. The test was an “adapted” version of the Cognitive Reflection Test, meaning every question was the kind of brain-buster you’ve seen before:

“If it takes 5 machines 5 minutes to make 5 widgets, how long would it take 100 machines to make 100 widgets?”

The answer is “5 minutes,” which requires you to use your “slow, deliberative, and analytical reasoning,” rather than your “fast, intuitive, and affective processing.” Your fast intuition might have told you the answer was 100 minutes. These are concepts made famous by a wonderful (if imperfect) airport book called Thinking, Fast and Slow by the late Daniel Kahneman.
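The deliberative route through the puzzle is just rate arithmetic: each machine makes one widget every five minutes, no matter how many machines run in parallel. A minimal sketch of that reasoning (the function name and structure are my own illustration, not anything from the study):

```python
# From the puzzle setup: 5 machines make 5 widgets in 5 minutes,
# so each machine makes 1 widget in 5 minutes.
MINUTES_PER_WIDGET_PER_MACHINE = 5

def time_to_make(widgets, machines):
    """Minutes for `machines` working in parallel to make `widgets`."""
    widgets_per_machine = widgets / machines
    return widgets_per_machine * MINUTES_PER_WIDGET_PER_MACHINE

print(time_to_make(5, 5))      # 5.0 -- the original setup
print(time_to_make(100, 100))  # 5.0 -- not the intuitive 100
```

Doubling the machines and doubling the widgets cancel out, which is exactly the step the fast, intuitive system skips.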

The results of the test were, of course, horrifying. Though I should add before we go any further that you should read up on the replication crisis before you take the results of any experiment as gospel. The validity of experimental results, especially in psychology, has sometimes been called into question over the past few years because they can’t be replicated. That’s not to say I noticed problems with these authors’ work (I’m far from qualified to do so anyway), just a trend worth flagging.

At any rate, in the part of the study where the subjects were allowed to consult the chatbot, they did so about half the time. When it gave correct answers, they accepted them 93 percent of the time. Unfortunately, when it was wrong, they still accepted its answers 80 percent of the time. And keep in mind, they didn’t have to use it at all. They let the bad advice trump their own brains. Even worse, those who used AI rated their confidence 11.7 percent higher than those who didn’t, even when the chatbot was wrong.

The authors write that in addition to Kahneman’s fast and slow “systems” of cognition, this new artificial crutch is creating what they call “System 3.”

The authors write:

Our findings demonstrate that people readily incorporate AI-generated outputs into their decision-making processes, often with minimal friction or skepticism. This seamless engagement with System 3 underscores its potential to enhance everyday cognition by reducing cognitive effort, accelerating decisions, and supplementing or substituting internal cognition with externally processed, vastly resourced, AI-powered insights.

Cognitive surrender isn’t necessarily all bad in their view. It “illustrates the value and integration of System 3, but also highlights the vulnerability of System 3 usage.”

This isn’t the first use of the phrase “cognitive surrender.” The theologian Peter Berger used it in a religious context in the 1990s, where it meant something more like surrendering faith in God to relieve cognitive dissonance. And if you’re like me, you’ve probably noticed that AI-assisted cognitive surrender looks a lot like older forms of mental laziness.

On the classic sitcom Home Improvement, for instance, Tim “The Toolman” Taylor used to ask his neighbor Wilson for advice every week when some situation in his life couldn’t be resolved with “more power.” The sagely Wilson would intone some dusty old piece of wisdom from the ancient ones, and Tim would always completely accept it.

But one might argue that Tim was just using Wilson as yet another time-saving “tool” (if you will), and that he was performing his own bit of cognitive surrender without AI. Wilson’s advice may have been sound, but when Tim tried to repeat it, he would mangle it so horribly that it frequently called into question whether he had performed any cognitive reflection at all, or had indeed just relied on his fast, intuitive system, and accepted Wilson’s intelligence blindly.


Perhaps soon, AI will turn us into a society of Tim Taylors, cognitively surrendering to our AI Wilsons. I can think of worse fates than that for our species.
