Here we go again.
A small study, run in conditions far removed from the real world and using techniques that aren’t exactly the gold standard, leads to moral panic about the use of AI.
If people at MIT say AI is making you dumber, then it must be true (they didn’t, but who is going to check the actual paper behind the headlines?).
Wow… this guy actually destroyed trees to print it out and read it, then wrote a thread about it highlighting part of the title. Must be true, then.
Not so fast. I read it too (so you don’t have to). And it’s not peer-reviewed yet, meaning that it was posted online without any other scientists critiquing it.
If you want to read my full critique, then you can head over to Hindustan Times.
Here’s the short version:
If you were stupid before ChatGPT, then obviously you can’t blame it (with any credibility).
If you are stupid (or more stupid) now, you shouldn’t blame ChatGPT either, because the current paper causing hysteria certainly doesn’t show it.
Smart and stupid are both relative. No one can be smart in everything, and that’s the point of AI. You don’t want to be smart in everything you do; that’s why, used right, AI can be a boost. I’ll get to that.
In one camp, the techno-optimists tell us that AI superintelligence, which can do everything better than all of us, is just around the corner. In the other, there’s a group that blames just about everything that goes wrong anywhere on AI. If only the truth were that simple.
Now on to the details.
The MIT study that sparked this panic deserves a sober hearing. Researchers at the Media Lab asked: How does using AI to write affect our brain activity?
But answering that is harder than it looks. Much harder.
So what did they actually do?
The researchers recruited 54 Boston-area university students and divided them into three groups. Each participant wrote 20-minute essays on philosophical topics while EEG sensors recorded their brain activity. One group used ChatGPT, another used Google Search, and a third used only their brains.
Over four months, participants tackled questions like “Does true loyalty require unconditional support?”
What the researchers claim is that ChatGPT users showed less brain activity during the task, struggled to recall what they’d written, and felt less ownership of their work. They call this “cognitive debt.”
One design choice was especially troubling. In the final session, 18 participants from the original groups switched conditions, but this created an unfair comparison. The ChatGPT group writing without assistance for the first time faced off against the brain-only group on their fourth attempt. That's like comparing someone's first tennis match against someone who's been practicing serves for months.
The researchers acknowledged this "familiarization effect" in their limitations section, but no one is talking about it.
Then there's what EEG actually measures: electrical activity at the brain's surface, missing the deeper structures where complex thinking occurs. It's like judging a city's economy by counting visible streetlights.
The researchers noted this, suggesting future studies use fMRI.
Fair enough, but it means their claims about "cognitive debt" rest on incomplete neural data.
But even if we fixed all these methodological issues, we'd still be studying the wrong thing.
Here's what went unnoticed in the media frenzy: participants who started without AI and later used it showed the strongest brain signals of all. That pattern suggests AI works best when it supports users who already have something to contribute. It didn't make headlines because it doesn't fit the "AI makes us dumb" narrative.
And our guy who printed out 200+ pages would have had a less viral post.
Here’s another thing.
If AI really damages how we think, then what participants did between sessions matters. Over four months, were the “brain-only” participants really avoiding ChatGPT for all their coursework? These students were never using AI for anything? I find that hard to believe.
With hundreds of millions using ChatGPT weekly, that seems unlikely, so you have a contaminated baseline. For a fairer comparison, you’d want to compare people who never used AI to those who used it regularly before drawing strong conclusions about brain rot.
Now, let’s get to the task at hand: writing college-level essays. How many adults do you know who are actually writing philosophical essays on love and loyalty?
What real-world correlation does the ability to write about this, or about the standard Indian cow, have with intelligence? It tells you how well someone can write a particular kind of essay.
I don’t doubt that actually writing something requires more brain activity than outsourcing it to AI. This part sounds a bit obvious. Writing is hard. Writing philosophical essays on abstract topics is harder. But doing everything the hard way hardly seems intelligent to me.
And here’s the problem with stretching a small study on writing philosophical college-level essays too far. While journalists were busy writing sensational headlines about “brain rot,” they missed the bigger picture.
Most of us are using ChatGPT to avoid thinking about things we’d rather not think about anyway.
Later this month, I’m travelling to Vietnam. I could spend hours sorting out my travel documents, emailing hotels about pickups and tours, and coordinating logistics. Instead, I’ll use AI to draft those communications, check them, and move on.
One day maybe my AI agent will talk to their AI agent and spare us both, but we’re not there yet.
Using your brain doesn’t mean doing everything you have to do with maximum effort.
In this case, using AI doesn’t make me stupid. It makes me efficient. It frees up mental energy and time for things I actually want to focus on, like writing this newsletter.
This is the key point, and one that I think got lost. Learning can’t be outsourced to AI. It still has to be done the hard way. But collectively and individually we do get to choose what’s worth learning.
When I use GPS instead of memorizing routes, maybe my spatial memory dulls a bit, but I still get where I’m going. When I use a calculator, my arithmetic gets rusty, but that doesn’t mean I don’t understand math. If anyone wants to train their brain like a London cabbie or Shakuntala Devi, they can.
But that’s not my cup of tea.
Our goal isn’t to use our brains for everything. It’s to use them for the things that matter to us.
Rather than fearing this transition, we might ask: What uniquely human activities will we choose to pursue with the time and mental energy AI frees up?
I’ll freely admit that there are ethical issues with using AI that has hoovered up content and is copying people. AI is also leading to a lot of low-quality content out there (my low-quality content is human-generated).
But AI is not making you or me stupid.
That’s it for today.
Take care of yourself and the planet. It’s a jungle out there.
Anirban