
Large language models reduce brain activity, produce ‘soulless’ essays: MIT

People who rely on artificial intelligence chatbots had reduced levels of brain activity and produced “soulless” essays in an MIT study.
Photo Credit: Neuroelectrics.

HQ Team

July 25, 2025: People who rely on artificial intelligence chatbots had reduced levels of brain activity and produced “soulless” essays in an MIT study.

Researchers at the Massachusetts Institute of Technology analysed the brain’s cognitive function in three groups comprising a total of 54 people in the Boston area.

They asked the participants to write an essay: one group with the assistance of OpenAI’s ChatGPT, a second with only online search engines, and a third with no outside tools at all.

Led by MIT Media Lab research scientist Nataliya Kosmyna, the team studied participants between the ages of 18 and 39. 

Participants were recruited from MIT, Wellesley College, Harvard, Tufts University and Northeastern University. The participants were randomly split into three groups consisting of 18 people each.

EEG signals

Each participant had 20 minutes to write an essay from one of three prompts taken from the SATs.

As the participants wrote their essays, they were hooked up to a Neuroelectrics Enobio 32 headset, which allowed researchers to record EEG (electroencephalogram) signals, a measure of the brain’s electrical activity.

Teachers and AI judges who evaluated the essays were provided only with the participants’ educational background (no school names), age, and the conditions under which each essay was written, such as the time limit and the prompt.

The participants who used no tools (neither ChatGPT nor search engines) showed wider variability in topics, word choice and sentence structure, according to the study. Essays written with the help of ChatGPT, on the other hand, were more homogeneous.

Researchers found “robust” evidence that participants who used no writing tools displayed the “strongest, widest-ranging” brain activity, while those who used ChatGPT displayed the weakest. 

‘Essays lack personal nuances’

The ChatGPT group showed a 55% reduction in brain activity, according to the study.

The participants who used only search engines had less overall brain activity than those who used no tools, but they showed a higher level of eye activity than those who used ChatGPT, even though both groups were working on a digital screen.

One of the English teachers, who evaluated the essays, said: “Some essays across all topics stood out because of a close to perfect use of language and structure while simultaneously failing to give personal insights or clear statements. 

“These, often lengthy, essays included standard ideas, recurring typical formulations and statements, which made the use of AI in the writing process rather obvious. 

“We, as English teachers, perceived these essays as ‘soulless,’ in a way, as many sentences were empty about content, and essays lacked personal nuances.”

Decreased analytical processes

Emerging research has raised critical concerns about the cognitive implications of extensive large language model (LLM) use.

Studies indicate that while these systems reduce immediate cognitive load, they may simultaneously diminish critical thinking capabilities and lead to decreased engagement in deep analytical processes, according to the authors of the study.

“This is particularly concerning in educational contexts, where the development of robust cognitive skills is paramount.”

In the study, the LLM reduced the friction involved in answering participants’ questions compared to the search engine. 

“However, this convenience came at a cognitive cost, diminishing users’ inclination to critically evaluate the LLM’s output or ‘opinions’ (probabilistic answers based on the training datasets). 

‘Echo-chamber effect’

“This highlights a concerning evolution of the ‘echo chamber’ effect: rather than disappearing, it has adapted to shape user exposure through algorithmically curated content. What is ranked as ‘top’ is ultimately influenced by the priorities of the LLM’s shareholders.”

The use of LLMs had a measurable impact on participants, the study stated. While the benefits were initially apparent, over the four months of the study the LLM group performed worse than the group that used no outside tools at the neural, linguistic and scoring levels.

The authors called for longitudinal studies to understand the long-term impact of LLMs on the human brain before LLMs are recognised as a net positive for humans.