30 Oct 2025

The Impact of AI Usage on Cognition

How is generative AI influencing human cognition?

In this article, our CTO & Co-Founder, Chris Jefferson, examines the growing evidence that GenAI is reshaping core skills such as attention, memory, reasoning, and critical thinking, and considers how we can protect and strengthen human cognition in an AI-driven world.

Words by
Chris Jefferson


The rapid rise of generative artificial intelligence (AI) systems is reshaping how we think, learn and reason. In the recent paper “Protecting Human Cognition in the Age of AI” by Singh et al., the authors synthesise the literature on how generative AI (GenAI) is influencing human cognition, specifically memory, attention, reasoning, metacognition and creativity.

Protecting Human Cognition in the Age of AI

They argue that while AI tools offer enormous promise (for example, boosting productivity, assisting creativity and enhancing access to information), they also carry risks of cognitive offloading, diminished critical thinking, reduced engagement with deeper cognitive processes, and learner over-reliance.

For example, learners using AI tools may skip the “apply / analyse / evaluate / create” stages of learning (from Bloom’s Revised Taxonomy) and instead rely on the machine to deliver answers, thereby bypassing cognitive challenge, metacognitive reflection, and deeper understanding. The paper further explains how GenAI systems may discourage sustained effortful thinking (what the authors call “productive struggle”), reduce the perplexity or confusion that often drives inquiry, and reinforce echo-chamber effects by aligning with users’ prior beliefs.

The implications are profound: educators, designers of human-AI interaction, and learners themselves must rethink how AI is deployed so that key cognitive skills (especially critical thinking and metacognition) are preserved and nurtured.

A striking real-world illustration of cognitive risk arises in the domain of misinformation. 

A study by the BBC found that when generative AI chatbots were asked to summarise news articles, more than half of the summaries contained significant issues: factual inaccuracies, altered or non-existent quotes, and misrepresentations of context. Just over half of pupils struggled to tell whether this content was true, and in some instances simply copied and pasted it.

Pupils struggle to tell if AI content is true, report says - BBC News 

These errors highlight how AI tools may produce outputs that appear polished and persuasive, yet embed distortions or omissions. As the BBC’s research notes:

"They (Pupils) do not yet have that bank of knowledge and experience to test whether something is correct or not."

This example illustrates a key cognitive hazard: if users become accustomed to receiving AI-generated information without sufficient scrutiny, their ability to critically evaluate, to detect errors, and to engage in reflective thinking may atrophy. The very act of trusting a seemingly authoritative output from an AI may displace the cognitive work of verifying, questioning and reflecting, thereby weakening the human cognitive muscles that underpin sound judgement. Encouragingly, the BBC’s article also found that students felt they benefited from AI usage, and reported that they were developing creative writing and critical thinking skills.

Given these intertwined opportunities and risks, it becomes imperative to design effective mechanisms for human-machine teaming that preserve and strengthen, rather than supplant, human cognition. We need systems and educational designs that encourage active engagement, critical questioning, metacognitive prompts (e.g., “What assumption is the AI making? What evidence supports this claim?”), and friction in human-AI interaction, so that users don’t accept outputs uncritically. The Singh et al. paper offers design recommendations such as decreasing AI support at early stages of learning, scaffolding schema connections, and prompting reflective thought.
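To make the idea of friction concrete, here is a minimal sketch of what it could look like in an AI assistant. This is a hypothetical illustration, not an implementation from the Singh et al. paper: the `query_model` function is a stand-in for whatever GenAI backend is in use, and the wrapper simply asks the user to commit to their own answer before revealing the model's output, then closes with metacognitive prompts rather than a verdict.

```python
# Hypothetical sketch of "friction" in human-AI interaction, in the spirit
# of the design recommendations above. Names such as query_model are
# placeholders, not a real API.

METACOGNITIVE_PROMPTS = [
    "What assumption is the AI making?",
    "What evidence supports this claim?",
    "How would you verify this against an independent source?",
]


def query_model(question: str) -> str:
    """Stand-in for a real GenAI call (e.g. an API request)."""
    return f"(model's answer to: {question!r})"


def assisted_answer(question: str) -> str:
    # Friction step: the user commits to their own view before seeing the
    # AI's answer, so the output is weighed against it rather than accepted
    # outright.
    prior = input("Before asking the AI: what do you think the answer is? ")
    answer = query_model(question)
    prompts = "\n".join(f"  - {p}" for p in METACOGNITIVE_PROMPTS)
    return (
        f"Your initial view: {prior}\n"
        f"AI answer: {answer}\n"
        f"Before accepting this answer, consider:\n{prompts}"
    )


if __name__ == "__main__":
    print(assisted_answer("Why did the Library of Alexandria decline?"))
```

The design choice is deliberate: the wrapper never blocks access to the AI, it simply front-loads a moment of independent thought and ends with questions, keeping the cognitive work with the human.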

This is not simply a matter of using AI tools: it’s about being the co-pilot of the AI, not a passive consumer. Protecting human cognition in the age of AI means deliberately creating interaction frameworks, educational practices and personal habits that sustain active thinking, scrutiny, and reflection.