Viewpoints
Published Online: 28 July 2023

Generative Chatbots Are Not Search Engines

Artificial intelligence (AI) has arrived, and we read daily about how it will change our lives forever, taking over repetitive and mundane tasks much as word processors and answering machines did. No doubt using generative AI to accomplish routine tasks or to enhance our communication will become a welcome aid in our work. At this stage of technological advancement, however, generative chatbots are not ready to assist us accurately in compiling scientific data, such as facts with appropriate references.
About six weeks ago I was finalizing the manuscript for my new book for APA Publishing, Encountering Treatment Resistance: Solutions through Reconceptualization. I wanted one last fact to enhance a discussion: the percentage of patients with psychiatric symptoms who also have transmissible spongiform encephalopathies (for example, Creutzfeldt-Jakob disease). I had searched online for this datum for seven hours before concluding that the question had probably never been researched.
Having just experimented with the generative chatbots ChatGPT-4 from OpenAI and Bard from Google to see whether they could produce an itinerary for an upcoming vacation, I thought I’d see if they could help. I was surprised to get an answer from Bard almost instantly: “0.5%.” I requested a reference, only to read “Oh, no, I cannot provide that.” Cautious and dubious, I inquired as to the source and was told “Mentally ill people don’t get out much.” Following a similar experience with ChatGPT-4, which informed me that transmissible spongiform encephalopathies are not typically associated with psychiatric symptoms (in fact, 80% of infected patients show psychiatric symptoms within the first 100 days, according to Christopher A. Wall and colleagues), I completed the manuscript without the fact.
I also contacted software engineers at Google and prominent search engine developers to learn what had happened. They all agreed that this iteration of AI is merely a text generator, not a search engine that returns trusted sources. It does not search the scientific literature as we might when we query PubMed, Google Scholar, or APA’s PsychiatryOnline. Any relevant content it finds can be used when generating a response, regardless of how trustworthy that content is.
Generative AI uses large language models to “generate” rather than “find” information, based on the data it has been trained on. These chatbots do not match your question to text previously written by humans that might answer it. They encode your words, in sequence and context, into an input stream that weights the importance of these features, then locate other encoded inputs that are associated, but not matched, with your input. They are not finding an answer to your question or need, but generating a text output that fits the context and input given: not scientific data seekers but text generators.
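For readers inclined toward code, the generate-versus-find distinction can be sketched with a toy example. Everything below is invented for illustration and does not depict any real system: a retrieval system looks up a stored answer together with its source, while a generative model strings together statistically likely next words with no document or citation behind the output.

```python
# Toy contrast between "finding" and "generating" an answer.
# All data here are invented purely for illustration.

# Retrieval: a lookup that returns a stored answer together with its source.
documents = {
    "capital of france": ("Paris", "World Atlas, 2020"),
}

def retrieve(query):
    # Returns (answer, citation) if the document exists, otherwise None.
    return documents.get(query.lower())

# Generation: emit whichever word the model learned usually comes next.
# There is no document and no citation behind the output.
continuations = {
    "the capital": "of",
    "capital of": "france",
    "of france": "is",
    "france is": "paris",
}

def generate(prompt, steps=4):
    words = prompt.lower().split()
    for _ in range(steps):
        key = " ".join(words[-2:])          # look at the last two words
        next_word = continuations.get(key)  # most likely continuation
        if next_word is None:
            break
        words.append(next_word)
    return " ".join(words)

print(retrieve("capital of France"))  # ('Paris', 'World Atlas, 2020')
print(generate("the capital"))        # 'the capital of france is paris'
```

Note that `generate` produces fluent, confident text whether or not its training statistics happen to be true, and it can name no source for what it emits; `retrieve` either returns a checkable document or admits it has nothing.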
Traditional search engines, including those aided by other forms of AI, return unaltered information, with links that you can check for relevance and accuracy. Generative AI does not provide this same kind of search. Generative chatbots are programmed to provide you with text; if they have not learned enough context for input like yours, they will generate a response based on a wider and unrelated context. Unsurprisingly, these data can and will be incorrect; sometimes they will even be entirely fabricated, including formal references that do not exist.
Unaware of AI’s current scientific limitations, the less assiduous among us, as well as some patients and their families, might rely on these AI sources, not realizing that the responses may be spurious and invalid.
In a recent Psychiatric News article (“Is It Cheating to Use ChatGPT-4 in the Clinical Practice of Psychiatry?”), Steven Hyler, M.D., reported that his studies showed that ChatGPT-4 provides accurate responses 70% to 80% of the time. Offering our patients an unnecessary 20% to 30% error rate is unacceptable when we have more reliable methods for accessing scientific data.
No doubt the APA and Psychiatric News will continue to provide important guidance to the field on what we can and cannot expect from each iteration and type of AI. Part of being a rational, compassionate clinician is being as certain of our data as possible. Just as we review clinical trial design and methods of statistical analysis before accepting conclusions from randomized, controlled trials or meta-analyses, we must remain aware of the limitations of the technology we and our patients use, as Dr. Hyler also pointed out.
Generative AI will quickly evolve and perhaps eventually offer us more reliable and useful facts. At present, though, we must not confuse process with content: Writing letters for us is not the same as providing evidence for evidence-based medicine. For now, we must understand the difference and rely upon the standard methods, which themselves are already being enhanced by the application of other forms of AI. ■

Biographies

H. Paul Putman III, M.D., has a background in research and the private practice of psychiatry, lecturing, and consulting. He now writes full time and is the author of Rational Psychopharmacology: A Book of Clinical Skills from APA Publishing. Members may purchase the book at a discount.

History

Published in print: August 1, 2023 – August 31, 2023

Keywords

  1. Generative Chatbots
  2. Search Engine
  3. Artificial Intelligence
  4. AI
  5. compiling data
  6. ChatGPT
  7. Generative AI
  8. OpenAI
  9. Bard
  10. Google
