Your brain on ChatGPT: Accumulation of cognitive debt when using an AI assistant
Recorded: Jan. 22, 2026, 11:03 a.m.
Published June 10, 2025 by the MIT Media Lab (project: Your Brain on ChatGPT; research theme: Life with AI). Citation: Nataliya Kosmyna, Eugene Hauptmann, Ye Tong Yuan, Jessica Situ, Xian-Hao Liao, Ashly Vivian Beresnitzky, Iris Braunstein, and Pattie Maes. "Your Brain on ChatGPT: Accumulation of Cognitive Debt when Using an AI Assistant for Essay Writing Task." arXiv preprint arXiv:2506.08872 (2025). Abstract via https://arxiv.org/abs/2506.08872
This research study, conducted by Nataliya Kos’myna and colleagues at the MIT Media Lab, investigates the neurological and behavioral consequences of using Large Language Model (LLM) assistance for essay writing. The core premise centers on the "cognitive debt" that may accrue when this kind of cognitive labor is offloaded to an AI tool. The study employed a controlled experimental design with three groups: an LLM group, a Search Engine group, and a "Brain-only" group that wrote without any external aids. A key element of the design was a final reassignment session in which participants swapped conditions, allowing the researchers to observe how prior tool use shaped subsequent cognitive engagement. The researchers combined electroencephalography (EEG) measurements of cognitive load with Natural Language Processing (NLP) analysis of the essays and with human and AI-based scoring of the resulting work. A total of 54 participants completed the first three sessions, and a subset of 18 returned for the final reassignment session.

NLP analysis of the essays from the initial sessions revealed homogeneous patterns within each group, as evidenced by shared named entities (NERs), n-gram patterns, and topic ontologies, indicating that the writing condition left a characteristic imprint on how participants constructed their essays.

Analysis of the EEG data showed significant differences in brain network connectivity between the groups. The Brain-only group demonstrated the strongest and most widely distributed neural networks, indicating robust and dynamic engagement of cognitive resources. The Search Engine group exhibited moderate connectivity, consistent with a more deliberate, focused engagement with retrieved information. The LLM group displayed the weakest connectivity, revealing diminished overall network engagement; this reduction in network complexity deepened as reliance on the LLM increased. (A simplified sketch of the kind of band-limited connectivity measure involved appears at the end of this summary.)

The final reassignment session provided critical insights. "LLM-to-Brain" participants, who had previously written with the LLM and were now asked to write unaided, showed reduced connectivity in the alpha and beta bands, frequency ranges associated with active processing and attention. This reduction suggests under-engagement or disengagement when these participants were required to work in the Brain-only condition, further supporting the notion of cognitive debt. Conversely, "Brain-to-LLM" participants showed increased activation in occipito-parietal and prefrontal brain regions, mirroring patterns observed in the Search Engine group; this reallocation of neural resources suggests they continued to apply more traditional, and potentially more effective, cognitive strategies even after switching to the LLM. Notably, self-reported ownership of the generated essays was lowest in the LLM group, and these participants also struggled to accurately quote from their own work, a key indicator of diminished cognitive investment and control.

Collectively, the findings suggest a concerning trend: sustained reliance on LLMs for essay writing appears to produce a measurable reduction in cognitive resource utilization and a corresponding decline in the participant's own cognitive engagement. Over the four-month observation period, the LLM group consistently underperformed relative to the other groups at the neural, linguistic, and behavioral levels, indicating a longer-term impact.
These results raise serious concerns about the long-term educational implications of widespread LLM use, particularly the potential erosion of critical thinking skills and of cognitive autonomy. The study calls for further investigation into the nuanced ways in which AI tools shape learning processes and underscores the need for a deeper understanding of the cognitive costs associated with automated knowledge construction.
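To make the band-limited connectivity measures mentioned above more concrete, the following is a minimal, purely illustrative sketch rather than the authors' actual analysis pipeline. It band-pass filters simulated multi-channel EEG into the alpha (8–12 Hz) and beta (13–30 Hz) ranges and reports the mean absolute pairwise channel correlation as a crude stand-in for network connectivity. The sampling rate, channel count, filter order, and correlation-based metric are all assumptions made for illustration, and the data here is synthetic noise.

```python
# Illustrative only: a crude proxy for alpha-/beta-band "connectivity".
# Assumptions (not from the paper): 32 channels, 256 Hz sampling, 4th-order
# Butterworth band-pass, mean absolute pairwise correlation as the metric.
import numpy as np
from scipy.signal import butter, filtfilt

FS = 256  # assumed sampling rate in Hz
BANDS = {"alpha": (8.0, 12.0), "beta": (13.0, 30.0)}

def band_connectivity(eeg, fs, lo, hi):
    """eeg: array of shape (n_channels, n_samples).
    Returns the mean absolute correlation over all channel pairs
    after band-pass filtering to the [lo, hi] Hz range."""
    b, a = butter(4, [lo, hi], btype="bandpass", fs=fs)
    filtered = filtfilt(b, a, eeg, axis=1)
    corr = np.corrcoef(filtered)              # channel-by-channel correlation
    pairs = np.triu_indices_from(corr, k=1)   # unique off-diagonal channel pairs
    return float(np.abs(corr[pairs]).mean())

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    fake_eeg = rng.standard_normal((32, FS * 60))  # 32 channels, 60 s of noise
    for name, (lo, hi) in BANDS.items():
        print(f"{name}-band mean |corr|: {band_connectivity(fake_eeg, FS, lo, hi):.4f}")
```

A real analysis would apply dedicated EEG preprocessing and connectivity estimators to actual recordings; the sketch only shows, under the stated assumptions, what "connectivity within a frequency band" means operationally, namely a summary of how strongly band-limited signals co-vary across electrode pairs.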