
AI and Cognitive Downfall

MD Rashid Hussain
Jul-2025  -  5 minutes to read
Image source: https://images.thesaurus.ie.edu

The rise of artificial intelligence has undoubtedly marked a significant milestone in human innovation—comparable to the invention of the wheel, the printing press, or the internet. Yet, behind the immense benefits that AI systems promise—efficiency, scalability, precision, and automation—lurks a subtler, more insidious threat: the potential erosion of human cognitive capacity.

I know, you are probably exhausted by all the AI hype flooding our feeds these days. It seems AI excels at everything, and we developers are on the brink of obsolescence. The hype-bros say that AI writes, refactors, explains, and documents code, and even pretends to review PRs better than humans do, and all of it faster, better, and cheaper.

AI is not a silver bullet. It cannot actually think or solve complex problems (as argued in a recent research paper from Apple, "The Illusion of Thinking").

People using these LLMs are actually making themselves less intelligent.

AI is not infallible. It can and does make mistakes, sometimes serious ones. But the people using these tools (often learning things for the first time) are not equipped to identify these mistakes. So they end up learning and propagating incorrect information.

AI is not creative. It can only remix and rehash existing information. It cannot come up with truly original ideas or solutions.

AI is not ethical. It does not understand the consequences of its actions. It can and does produce biased, offensive, or harmful content. Its output often skews toward the majority's common notions, sidelining minority viewpoints. So people using these tools may end up reinforcing existing biases and prejudices.

AI is not a substitute for human judgment. It can provide useful information and suggestions, but ultimately, it is up to humans to make decisions and take responsibility for their actions.

Automation and Mental Atrophy

AI systems, by design, automate complex tasks—language translation, content summarization, route planning, coding assistance, even therapy. While this creates incredible efficiency, it also reduces the mental friction that once drove cognitive growth. Tasks that previously required attention, memory, reasoning, or creativity are now outsourced to machines.

“When we no longer use certain cognitive muscles, they begin to weaken.”

This is not a new phenomenon. GPS dulled our spatial memory. Calculators diminished our mental arithmetic. Now, AI models threaten to externalize higher-order thinking: analysis, synthesis, abstraction.

The Seduction of Passive Consumption

AI-powered content algorithms are optimized for engagement, not enlightenment. The result? An endless scroll of shallow stimuli—videos, headlines, social posts—engineered to capture attention without demanding much in return. Over time, this rewires attention spans and diminishes the appetite for deep, effortful thought.

Most of the things I have mentioned above are not new, and I have my own biases for and against certain tools and technologies. What is new is the scale at which these tools are being adopted and the speed at which they are being integrated into our workflows.

The problem is not with the tools themselves, but with the way people use them. Consider a simple scenario: adding a new feature to a codebase. A developer with some prior knowledge and experience critically examines the requirements and comes up with a solution. Along the way, they research existing solutions, read the docs, ask seniors for help, and discuss with peers. These small steps compound over time and shape them into a well-rounded engineer, who eventually becomes a mentor and helps others grow.

The case is different with AI tools. Now, you give a prompt and the AI generates code for you. You copy-paste it and move on. You don't even bother to understand the code, let alone question its correctness or efficiency. The only measure of success is whether the code works or not. If it works, you are done. If it doesn't, you tweak the prompt and try again.
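To make this concrete, here is a hypothetical sketch (the helper and the scenario are invented for illustration) of the kind of subtly wrong code this workflow lets through: the output looks plausible, so the "does it work?" check passes, while the contract stated in the function's own docstring is silently broken.

```python
# A hypothetical sketch of "prompt, paste, move on" in action.
# The helper below is invented for illustration, not taken from any real tool.

def paginate(items, page, page_size=10):
    """Return the given 1-indexed page of `items`."""
    start = page * page_size                 # Bug: this is 0-indexed math,
    return items[start:start + page_size]    # so the docstring's contract is broken.

items = list(range(25))

# The only check the copy-paster runs: it returns a list, so it "works".
print(paginate(items, page=1))          # [10, 11, ..., 19]

# What a reader who actually questions the code notices: page 1 should
# start at the first item, and items 0-9 can never be served at all.
print(paginate(items, page=1)[0] == 0)  # False
```

The fix is a one-line change once you read the code; the point is that the paste-and-move-on loop never reads it.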

Epistemic Inertia

As AI becomes a crutch for knowledge retrieval, humans risk losing their intuitive grasp of concepts and connections. The more we rely on AI to tell us what to think, the less incentive there is to struggle with ambiguity, navigate contradictions, or build mental models.

This leads to epistemic inertia—the condition in which people stop updating their beliefs or generating new insights because they defer too readily to machine-generated answers. When the map becomes more trusted than the territory, our cognitive agency starts to erode.

Decision-Making Offloading

In professional and personal settings, AI systems are increasingly involved in decision-making—from hiring to criminal sentencing, from medical diagnostics to stock trading. Humans may start to view these decisions as objective or superior, sidelining their own judgment, skepticism, and critical faculties.

While AI can augment decision-making, over-delegation without comprehension breeds learned helplessness: a passive acceptance of outcomes we no longer understand or control.

The Commodification of Creativity

Generative AI tools produce poems, paintings, essays, and music—often with startling quality. But when creativity becomes commodified, humans are tempted to become mere curators rather than creators. Originality becomes a remix of past data; insight becomes statistical probability.

This shift undermines the messy, nonlinear, and deeply human aspects of creativity: doubt, experimentation, risk. We risk losing not just creative output, but the internal transformation that comes with the creative process.

The Collapse of Cognitive Grit

Modern AI systems offer “zero-friction cognition.” But true intellectual growth often involves friction: confusion, failure, prolonged focus. If AI removes the pain of thinking, it may also remove the gains.

Cognitive grit—the capacity to wrestle with difficult problems, to persist in uncertainty, to think independently—is not just a mental skill; it’s a psychological virtue. Without it, societies become intellectually fragile, vulnerable to misinformation, and unable to challenge authority, systems, or ideas.

Intelligence as a Commodity

AI challenges long-held assumptions about intelligence being a uniquely human attribute. When machines outperform us in language, strategy, and perception, we are forced to ask: What does it mean to be intelligent? And what does it mean to be human in a world where machines can replicate many of our mental functions?

The Shift from Thinkers to Overseers

One possible trajectory is that humans become supervisors of machine cognition rather than practitioners of cognition themselves. We prompt, direct, fine-tune, audit—but we rarely think from first principles.

The role of a prompt engineer is emblematic: a meta-thinker guiding a deeper engine of thought. But will that satisfy our intellectual instincts, or merely simulate them?

While the cognitive challenges of AI are real, they are not inevitable. Several countermeasures can help preserve and even enhance human cognition in this new paradigm:

  • Deliberate Friction: Engage in tasks without automation. Write by hand. Calculate mentally. Reflect before searching.
  • Cognitive Workouts: Treat thinking like exercise. Read difficult texts. Solve puzzles. Argue ideas.
  • AI as a Partner, Not Replacement: Use AI to challenge your ideas, not just confirm them. Interact critically, not passively.
  • Techno-Philosophical Education: Teach people not just how to use AI, but how it shapes thought, values, and society.
  • Human-Centric Design: Build systems that amplify human cognition, not replace it—tools that scaffold thought rather than suppress it.

I am not against AI tools; I use them myself. They can be useful for certain tasks, such as generating boilerplate code, formatting code, or finding syntax errors, and they save time and effort on mundane, repetitive work. They can be extremely useful for greenfield projects where you don't have any existing codebase or context, helping you get started quickly and avoid reinventing the wheel.

But as the codebase grows and the complexity increases, the usefulness of these tools diminishes. This is precisely when you need to dig into the complexities of the codebase yourself. If you don't understand the code fully, there is no way you can reproduce it, let alone debug or optimize it.

The hype is loud, and some of it is even true, but do not take marketing materials at face value. Be critical and skeptical. Question everything. Verify the information. Cross-check with other sources. Consider the ethical implications. And most importantly, keep learning and growing as an engineer.

The arrival of AI does not guarantee a cognitive downfall. But it does present a test of our intellectual character. Will we let machines think for us, or will we think better because of them?

If history teaches us anything, it's that tools shape their users. But it also teaches us that awareness breeds resistance. The future of human cognition may hinge not on what AI can do, but on what we choose to stop doing—and why.

Let AI compute. But let us not forget how to think.
