AI & Digital Health
Published Date: 25 February 2025

Moving From ‘Hallucinations’ to ‘Fabrications’

Using the word “hallucinations” in the context of AI diminishes the importance of this symptom and the pain it creates for those suffering from mental illness.
As the field of artificial intelligence continues to evolve and expand into various domains, including health care, a pressing issue has emerged that requires immediate attention from the psychiatric community. The term “hallucination” has been increasingly used to describe AI-generated content that is factually incorrect or exaggerated, particularly in health care contexts (Maleki et al., 2024).
However, this terminology is problematic, as it conflicts with the specific scientific meaning of hallucinations in the mental health context. Hallucinations are false sensory perceptions experienced by an individual; they can at times be bizarre, distressing, and disturbing to the person experiencing them. The underlying conditions that typically lead to hallucinations may be psychiatric, neurological, or substance-induced. Using the word “hallucinations” in the context of AI diminishes the importance of this symptom and the pain it creates for those suffering from mental illness.
Unfortunately, the literature abounds with articles regarding AI hallucinations, and it is difficult to sway the technology community to abandon this term (Maleki et al., 2024; Ji et al., 2023). The field of psychiatry needs to champion alternative terms that are less likely to conflict with established mental health terminology. We support a unified definition of “AI hallucinations” as “responses that confidently provide fluent but factually incorrect or exaggerated content, stemming mainly from statistical text prediction over potentially biased training data and the stochasticity inherent in some AI algorithms, rather than from genuine understanding or reasoning.” We also suggest replacing the term “hallucination” with “fabrication” to differentiate this meaning from a hallucination in the mental health context.
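To illustrate what “statistical text prediction” and “stochasticity” mean in practice, the short Python sketch below samples a next token from a toy distribution; the candidate tokens, their scores, and the temperature values are invented for illustration and do not come from any real model or API.

```python
# A toy illustration of "statistical text prediction" and "stochasticity":
# the candidate tokens and scores below are invented for this sketch and
# do not come from any real model.
import math
import random

# Hypothetical scores a model might assign to possible continuations of a
# medical question; "plausible_error" is fluent but factually wrong.
logits = {"correct_fact": 2.0, "plausible_error": 1.6, "off_topic": 0.2}

def sample_next_token(logits, temperature=1.0):
    """Temperature-scaled softmax sampling over candidate continuations."""
    scaled = [score / temperature for score in logits.values()]
    total = sum(math.exp(s) for s in scaled)
    probs = [math.exp(s) / total for s in scaled]
    # Sampling is stochastic: repeated calls can return different tokens,
    # with no understanding involved in the choice.
    return random.choices(list(logits), weights=probs, k=1)[0]

# Higher temperature spreads probability toward the fluent-but-wrong option.
print([sample_next_token(logits, temperature=1.5) for _ in range(5)])
print([sample_next_token(logits, temperature=0.1) for _ in range(5)])
```

The point of the sketch is that the choice between a correct statement and a fluent but wrong one is made by weighted chance, not by clinical reasoning, which is why “fabrication” describes the output more accurately than “hallucination.”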
AI fabrications pose significant risks, including the dissemination of potentially false health-related information (Linfield & Parsonnet, 2023). This is particularly concerning in health care contexts, where AI tools are increasingly being used to provide medical information and advice to patients. The medical informatics community must develop a robust architecture to prevent the spread of such misinformation and ensure the accuracy and trustworthiness of AI-generated content. Robin Emsley has eloquently written about the danger of a future filled with fictitious information (Emsley, 2023).
Beyond changing terminology, we propose a framework for addressing this issue that includes evaluating AI-generated content against ground truth, implementing human-reviewed verification mechanisms, and requiring citations and references for every claim. We also suggest prioritizing correctness, consistency with inputs, and trustworthiness in health care contexts, while limiting creativity and the use of anthropomorphic references.
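As one possible, deliberately simplified reading of these steps, the Python sketch below checks each AI-generated claim for an attached citation and matches it against a small, hypothetical ground-truth set, flagging anything unverified for human review. The claim texts, the ground-truth entries, and the exact-match rule are assumptions for illustration, since the framework is specified here only at a high level.

```python
# Minimal sketch of the proposed checks, using invented names and data; this
# is one possible reading of the framework, not the authors' implementation.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Claim:
    text: str
    citation: Optional[str]  # reference attached by the AI tool, if any

# Hypothetical curated ground-truth statements from a trusted clinical source.
GROUND_TRUTH = {
    "Clozapine requires regular absolute neutrophil count monitoring.",
}

def review_output(claims: List[Claim]) -> List[str]:
    """Flag claims that lack a citation or cannot be matched to ground truth."""
    flags = []
    for claim in claims:
        if claim.citation is None:
            flags.append(f"NO CITATION - needs human review: {claim.text}")
        elif claim.text not in GROUND_TRUTH:
            flags.append(f"NOT IN GROUND TRUTH - needs human review: {claim.text}")
    return flags

# Usage: any flagged claim is withheld and escalated to a clinician rather
# than shown to a patient.
ai_output = [
    Claim("Clozapine requires regular absolute neutrophil count monitoring.", "drug label"),
    Claim("Clozapine has no monitoring requirements.", None),
]
for flag in review_output(ai_output):
    print(flag)
```

A production system would need semantic rather than exact-string matching against curated clinical sources, but the routing logic shown here, verify against ground truth, require a citation, or escalate to a human reviewer, is the part the framework emphasizes.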
This is not a trivial task, and there are no easy solutions. However, it is essential that the psychiatric and medical informatics communities take an active role in developing solutions to address this issue. The risks associated with the dissemination of false, nonfactual health-related information by AI tools are too great to ignore, and it is our responsibility as health care professionals to ensure that AI-generated content is accurate, reliable, and trustworthy.
We must work collaboratively to develop a methodology that prevents the spread of misinformation and ensures the accuracy and trustworthiness of AI-generated content in health care contexts. Psychiatry has a longstanding track record of developing taxonomies and definitions, and we should take the lead in this area; the medical informatics community, likewise, must develop a robust architecture to support these safeguards.
We urge psychiatrists to take an active role in addressing this issue and to work collaboratively to develop solutions that prioritize correctness, consistency with inputs, and trustworthiness in health care contexts. ■

References

Maleki N, Padmanabhan B, Dutta K: AI hallucinations: a misnomer worth clarifying. 2024 IEEE Conference on Artificial Intelligence, Marina Bay Sands, Singapore, 2024.
Ji Z, Lee N, Frieske R, et al.: Survey of hallucination in natural language generation. ACM Comput Surv 2023; 55(12):1-38.
Linfield R, Parsonnet J: 338. ChatGPT conveys incorrect information about the FDA’s antibiotic boxed warnings. Open Forum Infect Dis 2023; 10(Suppl 2):ofad500.409.
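Emsley R: ChatGPT: these are not hallucinations – they’re fabrications and falsifications. Schizophrenia (Heidelb) 2023; 9:52.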

Biographies

Naakesh A. Dewan, M.D., is vice president of behavioral health for GuideWell.
Balaji Padmanabhan, Ph.D., is a professor of decision, operations, and information technologies at the University of Maryland Robert H. Smith School of Business.
Negar Maleki, Ph.D., is a graduate research assistant in the School of Information Systems and Management at the University of South Florida.
Svetlana Bender, Ph.D., is vice president of AI and behavioral science for GuideWell.

History

Published online: 25 February 2025
Published in print: March 1, 2025 – March 31, 2025

Keywords

Artificial intelligence, digital psychiatry, AI hallucinations, AI fabrications, medical informatics

