HealthQuill
Health Opinion Research

Fake research papers fuelling AI algorithms, turning fiction into fact

Fake academic papers fuelling AI algorithms. Photo by Emily Morter on Unsplash

HQ Team

April 21, 2026: When a Swedish researcher named Almira Osmanovic Thunström invented a disease called “bixonimania” in early 2024, she expected failure. She got the opposite.

Thunström, a medical researcher at the University of Gothenburg, dreamt up the fictional skin condition and uploaded two fake studies about it to a preprint server. Her aim was not to deceive doctors, but to test whether large language models would swallow the misinformation and spit it back out as credible health advice. The experiment, intended to expose a theoretical vulnerability, turned into a live demonstration of one of medicine’s most dangerous emerging crises: the collision of fabricated research and AI systems that cannot tell fact from fiction.

The papers described bixonimania as characterised by sore eyes and dark pigmentation around the eyelids, supposedly caused by blue light from computer screens, and were accompanied by convincing AI-generated images. But there were obvious red flags: the text stated plainly that “this entire paper is made up” and that “fifty made-up individuals aged between 20 and 50 years were recruited.”

None of it mattered.

By April 13, 2024, less than a month after the fake papers appeared, Microsoft Bing’s Copilot was describing bixonimania as “an intriguing and relatively rare condition.” Google’s Gemini informed users the disease was caused by excessive blue light and recommended visiting an ophthalmologist. Later that month, both Perplexity AI and ChatGPT were helping users determine whether their own symptoms matched the fictional illness.

The fiction had become, in algorithmic terms, fact.

But the damage did not stop at chatbots. A 2024 paper published in the peer-reviewed journal Cureus by researchers from India’s Maharishi Markandeshwar Institute cited bixonimania as an “emerging periorbital melanosis linked to blue light.” The paper was retracted in March 2026 for including “irrelevant references to a fictitious disease.” An invented condition had migrated from a preprint hoax into a published, peer-reviewed citation, the kind of citation that future researchers, and future AI systems, would train on next.

Problem in plain sight

Bixonimania is alarming precisely because it is not an anomaly. It is the newest, most visible symptom of a medical research ecosystem under severe structural strain.

Consider the most consequential example of recent years. A 2006 Nature paper by neuroscientist Sylvain Lesné of the University of Minnesota purported to identify Aβ*56, a specific protein assembly, as a direct cause of memory impairment in Alzheimer’s disease. The paper became one of the most cited works in the field, and hundreds of millions in public and private funding followed its conclusions. For nearly two decades, it shaped the direction of Alzheimer’s research globally.

The Nature paper was formally retracted in 2024. The retraction notice cited “signs of excessive manipulation, including splicing, duplication and the use of an eraser tool.” Every author except Lesné agreed to the retraction. Following the university’s investigation, Lesné resigned from his tenured professorship, effective March 2025. A generation of scientific work, and the hundreds of millions of dollars that funded it, rested at least partly on fabricated images.

That a single fraudulent paper could distort an entire field of medicine for nearly twenty years is not just a story about one dishonest researcher. It is a story about how vulnerable the scientific publishing system has become.

Fake science

Individual misconduct is serious. But researchers have now identified something more systemic and more frightening: an industrial-scale underground market for fabricated research.

A 2025 Northwestern University study, published in the Proceedings of the National Academy of Sciences, uncovered coordinated global networks of paper mills, brokers, and compromised journals working together to manufacture scientific credibility at scale. According to the study’s findings, the number of fraudulent scientific papers appears to be doubling every 1.5 years, far outpacing the growth of legitimate scientific publishing, which doubles roughly every 15 years.

These paper mills sell authorship slots for hundreds or thousands of dollars, offer first-author positions at premium prices, and guarantee automatic acceptance in journals through sham peer-review processes. “If these trends are not stopped, science is going to be destroyed,” warned Luís A. Nunes Amaral, the study’s senior author and a data scientist at Northwestern.

The medical literature, which clinicians and policymakers rely on to make decisions affecting millions of lives, is being actively contaminated.

When AI amplifies the rot

The bixonimania experiment revealed a specific and urgent new dimension to this problem. A separate 2026 study published in The Lancet Digital Health, led by researchers at Mount Sinai, found that AI models are significantly more likely to amplify misinformation when content is formatted to look professionally medical, such as a hospital discharge note or clinical paper, than when it appears in informal social media posts. In one test, a discharge note falsely advising esophagitis patients to drink cold milk was accepted without challenge by several leading AI models.

The pattern is consistent: AI systems are, at present, better at mimicking expertise than verifying it. They do not read with scepticism. They read with fluency.

As researchers and writers lean more heavily on AI-generated summaries and citations, experts warn that the risk of unverified information slipping through the cracks grows significantly. The knowledge ecosystem is increasingly shaped by systems trained on a corpus that already contains, to an unknown and growing degree, fabricated facts.

The disease, in other words, is real. It just doesn’t have a name yet.
