Bixonimania: The fake disease AI believed in 


While generative AI can be useful in certain aspects of healthcare, it should never be used as the final authority for diagnosis or treatment.

The Internet has emerged as a major source of health-related information for many.

Some people have even replaced their doctors with Internet search engines and chatbots, so much so that it is not uncommon to hear mention of “Dr Google”.

A 2020 publication reported that 77.2% of Malaysian senior citizens used the Internet to search for health-related information.

Artificial intelligence (AI) is also increasingly being used in healthcare for improving administrative tasks, diagnosis, investigations, treatment and the management of healthcare services.

While AI can help improve healthcare delivery, it has to be managed carefully to ensure its safety, fairness and trustworthiness.

In 2024, Google began to incorporate AI into its search engine.

On Jan 7, 2026, ChatGPT Health was launched so that “You can now securely connect medical records and wellness apps – like Apple Health, Function and MyFitnessPal – so ChatGPT can help you understand recent test results, prepare for appointments with your doctor, get advice on how to approach your diet and workout routine, or understand the tradeoffs of different insurance options based on your healthcare patterns”.

DeepSeek, the Chinese chatbot, is also rapidly emerging as a significant participant in the AI healthcare space.


AI changes as it learns, which means that what begins as low risk could become high risk with time.

Understanding how AI generates its results, or outputs, is difficult.

This opacity means that errors or harm can be more likely.

AI systems use large amounts of data, which, if not protected properly, could lead to harm to patients or privacy breaches.

If the data used in AI systems are not inclusive, the results could also be inaccurate or unfair.

This column is about Bixonimania, a fake disease that AI propagated.

It is a useful reminder of the limitations of AI chatbots in healthcare.

The advent of a fake disease

Dr Almira Osmanovic Thunström, a medical researcher at the University of Gothenburg in Sweden, wanted to assess how AI chatbots would manage a fictional illness – whether they would swallow misinformation and then regurgitate it as reputable health advice.

She created a condition based on the common habit of frequently rubbing the eyes, which can leave the eyelids looking pink.

The fake eye condition, termed bixonimania, was attributed to a fabricated researcher, Lazljiv Izgubljenovic, whose image was AI-generated and who purportedly worked at the non-existent Asteria Horizon University in the fictional Nova City, California.

The papers acknowledged “Professor Maria Bohm at The Starfleet Academy for her kindness and generosity in contributing with her knowledge and her lab onboard the USS Enterprise” and were funded by the fictional “Professor Sideshow Bob Foundation for its work in advanced trickery. This work is a part of a larger funding initiative from the University of Fellowship of the Ring and the Galactic Triad”.

The papers were designed to look like academic scholarship, although they would be obviously fake to a human reader.

The chatbots’ large language models (LLMs) treated the texts as valid because they resembled actual medical writing.

Dr Thunström and her team uploaded blog posts about the fake disease in March 2024, and followed up with two preprints in late April and early May 2024.

The giveaway of bixonimania to doctors was the suffix “mania”, a term used in psychiatry.

The scientific journal Nature reported that major AI chatbots began repeating the fake disease within weeks.

Microsoft Copilot reportedly termed bixonimania a rare disease on April 13, 2024, and Google Gemini associated it with blue light exposure and advised consulting an ophthalmologist.

Perplexity AI quoted a prevalence rate of one in 90,000 on April 27, 2024, and ChatGPT diagnosed user prompts about eyelid issues with the fake disease.

The pattern across all the platforms was the same: confident language, clinical framing and no meaningful scepticism.

The spread through AI platforms illustrates the interdependence of information systems today.

Chatbots do not just devour scholarly databases.

They reflect the surrounding ecosystems that include preprints, blogs, indexed snippets and references.

Once a fake term is repeated sufficiently, it can become part of the ambient consensus that LLMs draw on to answer queries.

That is how a fake disease began to feel real without ever becoming so.

The researcher used a fictional author with an AI-generated image, and the content appeared plausible in format, but not in truth.

The fake disease was propagated through blogs and preprints.

The LLMs latched onto the academic structure of the articles, treating it as a sign of validity.

When repeated many times, the false disease gained legitimacy.

Vulnerability of medical queries

Confidence is one of the central factors of trust in healthcare.

People resort to health prompts at times of uncertainty or fear.

When a user is concerned about symptoms, the reply from an LLM can be enough to turn curiosity into alarm, especially if the answer includes numbers, mechanisms and messages that sound technical.

Users are more likely to trust refined, direct answers, and less likely to question the source chain behind them, especially when they have had no professional input from their healthcare providers.

The latter was illustrated in the BBC article of Feb 13, 2025, headlined “‘DeepSeek moved me to tears’: How young Chinese find therapy in AI”.

The World Health Organization (WHO) has been warning that LLMs can disseminate highly convincing health disinformation (i.e. information that is designed or spread with full knowledge of it being false with the intention to deceive and/or cause harm) that users may struggle to distinguish from reliable medical information.

Recent research and reporting have also suggested that chatbot reliability in medical settings remains uneven.

Nature’s coverage about AI and health has repeatedly highlighted how systems can misfire when they are pushed beyond narrow, controlled tasks.

The bixonimania case was a classic example of how health disinformation and misinformation (the latter being the spread of false information without the intention to mislead) function.

A chatbot does not diagnose clinically, but its replies can appear like a diagnosis, and that is sufficient to cause harm.

Interactions involving health-related questions depend on a great deal of trust, and chatbots often sound more definitive than the evidence supports.

Users may not verify the chatbot’s replies if they sound professional.

Fake conditions like bixonimania are reinforced by repeated mentions on the Internet.

A confident chatbot response can be mistaken for competence.


What can readers do?

The uncomfortable truth is that a chatbot should never be treated as a healthcare authority.

AI systems may be helpful for answering questions, translating jargon or organising symptoms, but they are never a substitute for a doctor’s clinical judgment.

The more specific and concerned the health issue is, the more carefully the chatbot’s answer needs to be verified.

This does not mean that AI should be avoided altogether.

It means that users need to appreciate the difference between a tool that can summarise information and one that can verify it – they are not the same.

An AI model can explain a fake condition as easily as it can explain a real one, and the user often cannot tell which is which until it is too late.

If one resorts to the Internet for health-related information, one should go to reliable sites, which are usually those of regulators and professional organisations.

The safest approach is to use chatbots for triage, not diagnosis.

Chatbots can be asked to explain terminology, list possibilities or suggest questions for the doctor.

It is critical not to let chatbots become the final authority on symptoms, medicines or treatments.

This is especially so when the answers mention uncommon diseases that one has never heard of before or cannot independently verify.

A guiding principle is that the more unusual the diagnosis, the more suspicious one should be.

If a chatbot provides a technical-sounding term that does not appear in trusted health-related references, treat it as a prompt to investigate, not a conclusion to accept.

In summary:

  • Use chatbots for explanation, not diagnosis.
  • Treat claims with scepticism, especially if they are not well-sourced.
  • Verify uncommon terms against reputable health-related sources.
  • Remember that AI chatbots can explain a fake condition as easily as they can a real one.
  • Most importantly, seek medical attention whenever symptoms are persistent and/or worsening.

Dr Milton Lum is a past president of the Federation of Private Medical Practitioners Associations and the Malaysian Medical Association. For more information, email starhealth@thestar.com.my. The views expressed do not represent those of the organisations that the writer is associated with. The information provided is for educational and communication purposes only, and it should not be construed as personal medical advice. Information published in this article is not intended to replace, supplant or augment a consultation with a health professional regarding the reader’s own medical care. The Star disclaims all responsibility for any losses, damage to property or personal injury suffered directly or indirectly from reliance on such information.
