The military doesn't do PT every day because every day requires a high level of physical fitness. The military runs every day because there are days when they will have to run, and they want to be ready.
We have no equivalent practice for thinking.
We should. And the fact that we don't is about to become one of the most consequential gaps in how we prepare people, in the military, in business, in education, for a world saturated with artificial intelligence.
The Wrong Way to Hear This
Don't get me wrong. This is not an argument against using AI. I am not about to tell you to put down the chatbot and pick up a pencil. That argument is boring, it's wrong, and it misunderstands the problem completely.
The case for cognitive independence is not anti-AI any more than the morning run is anti-vehicle. Soldiers run every day. They also drive vehicles, fly helicopters, and ride in the backs of Strykers. The running doesn't replace the vehicles. The running makes them better at everything they do, including the things they do from vehicles. Cardiovascular fitness affects alertness, stress tolerance, decision-making under fatigue, and recovery time. You don't run instead of driving. You run so that when you're driving, or planning, or leading, or making a call under pressure, you're operating from a higher baseline.
That's the argument for cognitive independence. Not that you should think without AI. That you should be able to think without AI, so that when you think with AI, you're actually thinking and not just accepting.
The Muscle You Don't Know You're Losing
The research on this is early, but it points in a direction that should make anyone paying attention uncomfortable.
In 2025, a team at MIT's Media Lab ran an experiment. They had people write essays under three conditions: with ChatGPT, with a search engine, or with no tools at all. Then they measured what happened in their brains using EEG. The people who used AI produced their work faster. But they also showed weaker brain connectivity, lower memory retention, and (this part is striking) a fading sense of ownership over what they'd written. The AI-assisted group didn't just think less hard. They stopped experiencing the work as theirs.
That study is small and not yet peer-reviewed, so I want to be careful not to overweight it. But it's consistent with a pattern the automation bias literature has documented for decades. When autopilot systems became standard in commercial aircraft, researchers discovered that pilots who relied on automation for routine flight operations showed measurable degradation in their ability to fly manually when the automation failed. The skills were still there, somewhere. But the reaction times were slower, the judgment was less crisp, and the confidence was lower. This wasn't because the pilots were lazy or bad. It was because the skill wasn't being exercised, and unexercised skills atrophy. That's not a moral failing. That's physiology.
A theoretical perspective paper published in Cognitive Research: Principles and Implications laid out what likely makes AI-specific atrophy particularly insidious. The researchers identified what they called "illusions of understanding": people who work with AI develop a false sense that they understand more than they actually do. They believe they've considered all the options when they've only considered the ones the AI surfaced. They believe they grasp a problem deeply when they've actually just accepted the AI's framing of it. And they believe the AI's output is objective when it carries the biases of its training data.
The worst part? These illusions remain hidden until the AI is removed. Performance looks fine. The person feels competent. The gap only becomes visible at exactly the moment you can least afford to discover it, when you need the independent judgment and it isn't there.
There's another dimension that I think the literature is just starting to catch up with. A 2025 study published in Scientific Reports ran four experiments with over 3,500 participants. People who worked with generative AI and then transitioned to working alone reported significant decreases in intrinsic motivation and increases in boredom. Some of that is predictable. If you've been using a powerful tool and someone takes it away, of course the old workflow feels slower and more tedious. Going back to fat-fingering Python after a month of Claude Code is going to feel boring. That's rational.
But the study found something harder to explain away. Even people who kept the AI for both tasks showed declining motivation. The contrast explanation doesn't cover that. If boredom were just about losing the better tool, the people who never lost it should have been fine. They weren't.
I think what's happening is something that anyone familiar with the research on intrinsic motivation would predict. People are intrinsically motivated by three things: autonomy, mastery, and purpose. The sense that you're directing your own work. The feeling that you're getting better at something difficult. The belief that the difficulty matters. When a new technology takes over the parts of the work where those three things lived (the challenge, the craft, the small acts of problem-solving that prove to you that you're good at what you do), intrinsic motivation drops. Not because the person got lazy, but because the fuel is gone.
This is predictable. It has happened every time a technology has displaced skilled craft work, and it is, at bottom, a leadership problem. When you introduce a technology that strips autonomy, mastery, and purpose out of someone's workflow, you should expect a motivation collapse unless you actively manage the transition. Unless you help people find the new sources of mastery in the AI-augmented workflow. Unless you rebuild purpose around the capabilities that remain distinctly human. Left unmanaged, the gap between "more productive" and "less engaged" will widen until the productivity gains are eaten by the disengagement they created.
Whether you frame it as skill atrophy, illusions of understanding, or the erosion of intrinsic motivation, the direction is the same. The person who used to draft a planning estimate from scratch now edits one that the AI produced. The person who used to argue with a source's methodology now skims the AI's summary and moves on. The person who used to stare at a blank page until the right framing emerged now never sees the blank page at all.
These are capacities. They require exercise. And if the early research is any indication, they are at serious risk of quiet degradation across an entire generation of knowledge workers who are using AI every day without maintaining the underlying cognitive fitness that makes their AI use worth anything.
Every Day Is Leg Day
Most versions of this argument focus on the wrong scenario. They say the danger is the rare day when the technology fails, the network goes down, the power cuts out, the system crashes. And sure, that's real. If your AI tools go offline and you've lost the ability to think without them, you're in trouble.
But the strongest case for cognitive independence is that it matters every single time you use AI. Not just on the day the system fails. Every day. Every interaction.
Every interaction with AI is an evaluation task. The AI produces something. You have to decide: Is this good enough? Is this framed correctly? Is something missing? Should I act on this? Every one of those decisions requires independent judgment, judgment that didn't come from the AI, that exists prior to the AI's output, and that you bring to the interaction from your own thinking.
If you can't do that, if you can't form an independent take before or alongside the AI's output, then you're not using a tool. You're being used by one. You're a rubber stamp with a salary.
The military doesn't just run for the rare day someone has to chase an insurgent through an alley. Cardiovascular fitness affects everything: how clearly you think at hour fourteen of a planning cycle, how quickly you recover from a bad night's sleep, how well you regulate your stress response when the plan falls apart. The fitness isn't for the emergency. The fitness is the baseline that makes everything else work.
Cognitive independence is the same. It's not for the day the network goes down. It's the baseline that makes every AI-assisted decision trustworthy. Without it, you're not collaborating with AI. You're just surrendering to it in slow motion.
The Organizational Blind Spot
If this were just an individual problem, it would be serious but manageable. People can decide to maintain their own cognitive fitness, just like people can decide to go for a run.
But PT in the military isn't optional. It isn't left to individual motivation. It is institutional. It is scheduled. It is led. It is, in many units, the first thing that happens every duty day. The organization decided that physical readiness was too important to leave to personal choice, because personal choice is unreliable when the thing you're choosing is difficult and the consequences of skipping are invisible in the short term.
Every condition that justified making PT institutional applies to cognitive fitness, and then some. Cognitive atrophy is even more invisible than physical atrophy. You can look in the mirror and see that you've gained weight. You can't look in the mirror and see that you've lost the ability to independently evaluate an AI-generated planning estimate. The degradation is silent, the consequences delayed. And by the time you discover the gap, the moment you need the judgment and it isn't there, it's too late to build it.
This is a leadership problem, not a personal development problem. When leaders introduce a technology that displaces the autonomy, mastery, and purpose their people used to find in their work, they own the consequences. Expecting individuals to find new sources of meaning on their own, without organizational support, is like issuing Humvees and canceling PT because "they have vehicles now." Nobody would do that. But that is, functionally, what every organization adopting AI without investing in cognitive independence is doing.
Yet no organization I'm aware of has built cognitive independence maintenance into its daily rhythm the way the military builds in PT. I teach senior military officers. I watch them work with AI every day. The ones who came up solving hard problems on their own still push back on the machine, still catch the framing errors, still say "that's not quite right" and know why.
But no one is scheduling twenty minutes of "think without the machine" before the workday starts. No one appears to be assessing whether their team can still frame problems independently, generate alternatives without AI assistance, or catch errors in AI-generated analysis. We seem to be measuring AI adoption rates, how many people are using the tools, how often, for what tasks, and treating that as progress. We don't seem to be measuring whether the humans in the loop are maintaining the capacity that makes the loop meaningful.
We are tracking how far people drive. We are not checking whether they can still run.
The Invisible Bet
Every organization that has adopted AI without investing in cognitive independence has made a bet. Most of them don't know they've made it.
The bet is: our people will maintain the ability to think independently without any deliberate effort to ensure it. They'll just... keep being sharp. The AI will handle more and more of the cognitive work, but somehow the humans will retain the judgment to evaluate that work, to catch errors, to recognize when the framing is wrong, to know when to override the machine.
That bet has been tested in other domains (aviation, nuclear power, automated trading) and it has lost every time. The more reliable the automation, the harder it was for humans to catch the automation's failures. We already know how this goes.
The AI systems people are using today are more persuasive, more fluent, and more confident-sounding than any automation that came before. They produce outputs that look like expert human work. They structure arguments, cite evidence, and anticipate objections. The psychological pull toward acceptance is enormous, and it increases over time as the user's own independent capacity decreases. It's a flywheel, and it turns in only one direction.
None of this requires AI to be malicious or deceptive. The system doesn't have to be trying to undermine your judgment. It just has to be good enough that you stop exercising your own and the rest takes care of itself.
The Morning You Find Out
There is a moment, and it's coming for a lot of people, that will feel like stepping off a treadmill you didn't know you were running on.
Maybe it's the analyst who has been using AI to draft intelligence assessments for eighteen months and then gets asked, in a meeting with no laptop, to walk a general through her reasoning on a developing situation. The AI isn't there. The polished structure isn't there. And she discovers, in real time, in front of people who matter, that she can't reconstruct the thinking that used to come naturally. She's been editing AI drafts for so long that she's lost the ability to generate one.
Maybe it's the lawyer who has delegated research memos to AI for a year and then gets deposed. Opposing counsel asks how he arrived at a particular legal theory. He knows the answer is in the memo. He can picture the paragraph. But he can't explain the reasoning because the reasoning was never his. He approved it. He didn't build it.
Maybe it's simpler than that. Maybe it's the moment you sit down to write an email, not a report, not an analysis, just an email, and you open the AI out of habit, and then you stop, and you try to write it yourself, and you notice that the words come slower than they used to.
Everyone in these scenarios did what they were supposed to do, what some organizations now require. They used a powerful tool the way it was designed to be used. They got more efficient. They produced more output. They looked, by every metric their organizations track, like high performers. The gap in their capacity was invisible right up until the moment it wasn't.
We know how to prevent this. The military figured it out for physical fitness a long time ago. You don't wait for the moment someone needs to run. You build the running into the daily rhythm so the capacity is there when it matters.
We have not done any of this for cognitive fitness. Not in the military. Not in business. Not in education. We are fielding the most powerful cognitive tools in human history and we have not asked — seriously, institutionally, as a matter of policy — how we keep the humans sharp enough to use them well.
Sooner or later, we are going to need to think for ourselves. Will we still be able to?
