LmCast :: Stay tuned in

Your brain on ChatGPT: Accumulation of cognitive debt when using an AI assistant

Recorded: Jan. 22, 2026, 11:03 a.m.


Your Brain on ChatGPT: Accumulation of Cognitive Debt when Using an AI Assistant for Essay Writing Task — MIT Media Lab


June 10, 2025

People

Nataliya Kos'myna
Research Scientist

Projects

Your Brain on ChatGPT

Groups

Media Lab Research Theme: Life with AI


Nataliya Kosmyna, Eugene Hauptmann, Ye Tong Yuan, Jessica Situ, Xian-Hao Liao, Ashly Vivian Beresnitzky, Iris Braunstein, and Pattie Maes. "Your Brain on ChatGPT: Accumulation of Cognitive Debt when Using an AI Assistant for Essay Writing Task." arXiv preprint arXiv:2506.08872 (2025).

Abstract
This study explores the neural and behavioral consequences of LLM-assisted essay writing. Participants were divided into three groups: LLM, Search Engine, and Brain-only (no tools). Each completed three sessions under the same condition. In a fourth session, LLM users were reassigned to Brain-only group (LLM-to-Brain), and Brain-only users were reassigned to LLM condition (Brain-to-LLM). A total of 54 participants took part in Sessions 1-3, with 18 completing session 4. We used electroencephalography (EEG) to assess cognitive load during essay writing, and analyzed essays using NLP, as well as scoring essays with the help from human teachers and an AI judge. Across groups, NERs, n-gram patterns, and topic ontology showed within-group homogeneity. EEG revealed significant differences in brain connectivity: Brain-only participants exhibited the strongest, most distributed networks; Search Engine users showed moderate engagement; and LLM users displayed the weakest connectivity. Cognitive activity scaled down in relation to external tool use. In session 4, LLM-to-Brain participants showed reduced alpha and beta connectivity, indicating under-engagement. Brain-to-LLM users exhibited higher memory recall and activation of occipito-parietal and prefrontal areas, similar to Search Engine users. Self-reported ownership of essays was the lowest in the LLM group and the highest in the Brain-only group. LLM users also struggled to accurately quote their own work. While LLMs offer immediate convenience, our findings highlight potential cognitive costs. Over four months, LLM users consistently underperformed at neural, linguistic, and behavioral levels. These results raise concerns about the long-term educational implications of LLM reliance and underscore the need for deeper inquiry into AI's role in learning.

via https://arxiv.org/abs/2506.08872



This research study, conducted by Nataliya Kos'myna and colleagues at the MIT Media Lab, investigates the neurological and behavioral consequences of using Large Language Model (LLM) assistance during essay-writing tasks. The core premise centers on the "cognitive debt" that may accrue when writers offload this kind of cognitive labor to AI tools. The study used a controlled experimental design with three participant groups: an LLM group, a Search Engine group, and a "Brain-only" group that used no external writing aids. A key element of the design was a fourth session in which participants switched conditions (LLM-to-Brain and Brain-to-LLM), allowing the researchers to observe how prior tool use carried over into subsequent cognitive engagement.

The research used electroencephalography (EEG) to measure cognitive load during writing, Natural Language Processing (NLP) analysis of the resulting essays, and scoring of the essays by both human teachers and an AI judge. A total of 54 participants completed the first three sessions, and a subset of 18 returned for the fourth, reassignment session. Linguistic analysis showed that named entities (NERs), n-gram patterns, and topic ontologies were homogeneous within each group: essays produced under the same tool condition resembled one another more than essays produced across conditions.
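The within-group homogeneity of n-gram patterns can be illustrated with a minimal sketch; this is a generic illustration, not the paper's actual NLP pipeline. The idea: extract word n-grams from each essay and average the pairwise Jaccard overlap among essays in a group, so a higher score means more similar wording within the group.

```python
from collections import Counter

def ngrams(text, n=2):
    """Lowercase word n-grams of an essay, with counts."""
    words = text.lower().split()
    return Counter(tuple(words[i:i + n]) for i in range(len(words) - n + 1))

def jaccard(a, b):
    """Jaccard overlap between two essays' n-gram vocabularies."""
    sa, sb = set(a), set(b)
    return len(sa & sb) / len(sa | sb) if (sa | sb) else 0.0

def within_group_homogeneity(essays, n=2):
    """Mean pairwise n-gram overlap among one group's essays (0 to 1)."""
    grams = [ngrams(e, n) for e in essays]
    pairs = [(i, j) for i in range(len(grams))
             for j in range(i + 1, len(grams))]
    return sum(jaccard(grams[i], grams[j]) for i, j in pairs) / len(pairs)
```

Under this toy metric, a group whose members converge on the same phrasing (as the study reports for each tool condition) scores closer to 1, while heterogeneous essays score near 0.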

Analysis of the EEG data revealed significant differences in brain network connectivity between the groups. The Brain-only group exhibited the strongest and most widely distributed neural networks, indicating robust engagement of cognitive resources. The Search Engine group showed moderate engagement, consistent with a more deliberate, focused interaction with external information. The LLM group displayed the weakest connectivity. Overall, cognitive activity scaled down as reliance on external tools increased.
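Band-limited connectivity between EEG channels, of the kind compared across groups here, is commonly quantified with magnitude-squared coherence. The sketch below is an illustration of that general technique, not the study's actual analysis method: it uses `scipy.signal.coherence` on synthetic two-channel data that share a 10 Hz alpha rhythm, so coherence is high in the alpha band and low elsewhere.

```python
import numpy as np
from scipy.signal import coherence

def band_coherence(x, y, fs, band=(8.0, 12.0)):
    """Mean magnitude-squared coherence between two channels in a band."""
    f, cxy = coherence(x, y, fs=fs, nperseg=fs * 2)
    mask = (f >= band[0]) & (f <= band[1])
    return float(cxy[mask].mean())

# Toy data: two channels sharing a 10 Hz (alpha) component plus noise.
fs = 250                                    # sampling rate in Hz
t = np.arange(0, 20, 1 / fs)                # 20 s of signal
rng = np.random.default_rng(0)
shared = np.sin(2 * np.pi * 10 * t)         # common alpha rhythm
ch1 = shared + 0.5 * rng.standard_normal(t.size)
ch2 = shared + 0.5 * rng.standard_normal(t.size)

alpha = band_coherence(ch1, ch2, fs)            # high: shared alpha rhythm
beta = band_coherence(ch1, ch2, fs, (13, 30))   # low: only independent noise
```

Averaging such pairwise coherence values over many electrode pairs yields the kind of whole-network connectivity summary that distinguished the Brain-only, Search Engine, and LLM groups.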

The fourth, reassignment session provided critical insights. LLM-to-Brain participants showed reduced connectivity in the alpha and beta frequency bands, which are associated with active processing and attention. This reduction suggests under-engagement when these participants were asked to write without assistance, reinforcing the notion of cognitive debt. Brain-to-LLM participants, by contrast, exhibited higher memory recall and increased activation in occipito-parietal and prefrontal regions, a pattern resembling that of the Search Engine group; their prior unassisted practice appeared to sustain cognitive engagement even after they adopted the tool. Notably, self-reported ownership of the essays was lowest in the LLM group and highest in the Brain-only group, and LLM users struggled to accurately quote their own work, an indicator of diminished cognitive investment and control.

Taken together, the findings suggest a concerning trend: sustained reliance on LLMs for essay writing is associated with a measurable reduction in cognitive resource utilization and in the writer's own cognitive engagement. Over the four-month study period, the LLM group consistently underperformed the other groups at the neural, linguistic, and behavioral levels, indicating a longer-term effect. These results raise serious concerns about the educational implications of widespread LLM use, particularly the potential erosion of critical-thinking skills and cognitive autonomy, and underscore the need for deeper inquiry into how AI tools shape learning and the cognitive costs of automated knowledge construction.