A shocking study made me rethink how I use AI, and you should probably do that too

I’ve always thought of myself as a light AI user. I don’t have ChatGPT write my emails or draft my thoughts into a story. Mostly, I use it to quickly look things up or fill in something that’s on the tip of my tongue. It felt like the responsible way to approach things. As a journalist, I am well aware of AI’s hallucination problems and the “burden of truth verification” that comes with relying on an AI assistant. But a new study has me second-guessing whatever little utility I got from AI tools like Google’s Gemini for real-life chores.
The findings are harder to dismiss than you’d expect
The research, conducted across three separate randomized experiments involving math and reading comprehension tasks, found something that should make any AI user pause and think. After around ten minutes of AI-assisted problem-solving, participants who then lost access to the AI performed worse and gave up more frequently than those who had never used it at all. Not after months of dependency. Just ten minutes.
The fact that the effects showed up across both math and reading comprehension is interesting, since these are fundamentally different cognitive skills. This suggests that the findings aren’t a quirk of one type of task but a more general consequence of how we’re using these tools. But here’s the part that stood out the most: it wasn’t the AI itself that caused the damage. It was how people were using it.
Now, on an ordinary day, I might have been inclined to dismiss this study, because research into AI’s benefits and pitfalls has swung back and forth. But this one comes courtesy of joint research from the folks over at esteemed institutions like Carnegie Mellon University, the University of Oxford, the Massachusetts Institute of Technology, and the University of California, Los Angeles.
How you use AI matters more than how much you use it
The majority of participants used AI to get answers directly. They showed the largest declines in performance and persistence, not only compared to the control group, but also compared to those who used AI for hints and clarifications. Participants who used AI for hints showed no significant impairments relative to the control group.
In other words, the people who asked AI to just solve the problem got worse at solving problems themselves, whereas the people who used it for a nudge in the right direction, or for some clarity, were fine. They were statistically indistinguishable from people who hadn’t used AI at all.
That’s a meaningful distinction, and it reframes the whole conversation around AI making people dumber. It shifts the question from “should I use AI?” to “what am I actually doing when I do?” That question matters whether you use AI occasionally or rely on it daily for work or school.
It might be time to change your habits
If you’ve been using AI for cognitive outsourcing, essentially handing your problem off until you get an answer back, this research suggests the habit may be quietly training you to expect rescue at moments of difficulty rather than learning to push through them.
The researchers warn that if these effects accumulate with sustained AI use, current AI systems risk eroding the very human capabilities they are meant to support. You won’t notice it right away, but it will become apparent the next time you have to work through a problem on your own.
I don’t think this means you should stop using AI tools altogether. But starting today, I’m going to be more deliberate about what I’m actually asking for when I open a chat window. Am I looking for a fact? A direction? A sanity check? Or am I just tired of thinking and hoping the chatbot will do it for me?
The first few are probably fine. The last one, not so much.