Viewpoint
Published Online: 8 January 2020

Good News: Artificial Intelligence in Psychiatry Is Actually Neither

Discussions about artificial intelligence in health care have raised concerns about the dehumanization of healing relationships (1). Reliance on “big data” to inform treatment decisions might lead to ignoring experiences and values that cannot be reduced to discrete data elements. Computer-generated recommendations may carry a false authority that would override expert human judgment. Concerns regarding the disruptive effects of artificial intelligence on clinical practice, however, probably reflect marketing hype more than near-term clinical reality. When actual uses of big data and machine learning in mental health care are considered, the term artificial intelligence is usually a misnomer.
Some health care applications of big data and machine learning may represent true artificial intelligence: computerized algorithms processing machine-generated data to automatically deliver diagnoses or recommend treatments. For example, an autonomous artificial intelligence system for diagnosis of diabetic retinopathy was recently approved by the U.S. Food and Drug Administration (2).
Clinical applications of so-called artificial intelligence in psychiatry, however, generally depend on human-generated data to predict human experience or inform human action. For example, as investigators, we might use clinical records to identify young people at high risk of a first episode of psychosis. Or we might use data from pretreatment clinical assessments to predict patients’ subsequent improvement with specific depression treatments. Or we might use data from interactions of human patients and human therapists to select helpful responses for an automated therapy program. Each of these examples involves use of machine learning and large records databases to develop prediction models or decision support tools. In each case, however, both the input data and the predicted outcome reflect human experience. There may be complicated mathematics in the middle, but human beings are essential actors at both ends.
Consequently, what is intelligent is not actually artificial. When we use clinical data to predict clinical outcomes or inform clinical decisions, machine learning depends on the results of past assessments and decisions made by human clinicians and patients. In hindsight, some assessments may have been more accurate than others, and some decisions may have led to better outcomes than others. Machine learning can help select the clinical decisions leading to the best outcomes. Artificial intelligence or machine learning tools, however, neither conduct assessments nor make decisions. Nor do they understand why some assessments were more accurate or why some decisions led to better outcomes. Humans explore, decide, experience, and evaluate. Machines simply aggregate and efficiently manipulate the intelligence that humans have created or discovered. We can certainly learn more quickly from the aggregated experience of millions than from the individual observations of a few. But the intelligence we aggregate is still fundamentally human. To use the language of machine learning: humans determine the features, and humans label the outcomes. Machines just select the best-fitting statistical relationship between the two.
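As a concrete illustration of that division of labor, the sketch below fits a simple model to features that humans defined and outcomes that humans labeled; the machine's only job is to select the best-fitting relationship between the two. The feature names, the values, and the use of scikit-learn are illustrative assumptions, not details from this Viewpoint.

import numpy as np
from sklearn.linear_model import LogisticRegression

# Features chosen and recorded by humans (hypothetical values):
# baseline PHQ-9 score, number of prior episodes, weeks in treatment.
X = np.array([
    [18, 2, 6],
    [11, 0, 8],
    [22, 3, 4],
    [14, 1, 12],
])

# Outcomes observed and labeled by humans (1 = improved with treatment).
y = np.array([0, 1, 0, 1])

# The "machine learning": selecting the best-fitting statistical relationship.
model = LogisticRegression().fit(X, y)
print(model.predict_proba([[16, 1, 10]]))  # predicted probability of improvement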
Furthermore, what is artificial is not actually intelligent. The oldest machine learning tools resemble the regression models many of us studied in statistics classes. While newer machine learning methods may appear more complex or intelligent, they typically involve repetition of very simple building blocks (3). For example, we begin the first tree in a random forest by selecting a random sample of the observations we hope to classify. To create the first branch, we select a random sample of possible predictors. We then sort our observations by each of those predictors to find the one predictor that classifies best. The first branch is then complete, and our sample is divided in two. We then repeat that sorting step for each of the two second-level branches, then for each of the four third-level branches, continuing until the branches get too small to divide any further. Using paper and pencil, a human could create each of those branches. But sorting by each of 100 predictors at each of 100 branches in each of 100 trees would add up to sorting each of the observations up to one million times. A human attempting to create a random forest model would require unlimited time, unlimited paper and pencils, and an unlimited tolerance for boredom. Fortunately, computers can do that repetitive work for us.
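To make those repetitive building blocks concrete, here is a minimal sketch of the procedure described above: each tree begins with a random sample of the observations, each branch considers a random sample of the predictors, the observations are sorted by each candidate predictor to find the single best split, and the process repeats until the branches are too small to divide. The function names, the Gini impurity measure, and parameters such as min_node_size are our illustrative choices, not specifications from the text.

import numpy as np

def gini(y):
    # Impurity of a set of binary labels; 0 means perfectly classified.
    p = np.mean(y)
    return 2 * p * (1 - p)

def best_split(X, y, predictor_ids):
    # Sort the observations by each candidate predictor and keep the one
    # split that classifies best (lowest weighted impurity).
    best = None
    for j in predictor_ids:
        order = np.argsort(X[:, j])
        xs, ys = X[order, j], y[order]
        for i in range(1, len(ys)):
            if xs[i] == xs[i - 1]:
                continue
            left, right = ys[:i], ys[i:]
            score = (len(left) * gini(left) + len(right) * gini(right)) / len(ys)
            if best is None or score < best[0]:
                best = (score, j, (xs[i] + xs[i - 1]) / 2)
    return best  # (impurity, predictor index, threshold) or None

def grow_tree(X, y, n_candidate_predictors, min_node_size, rng):
    # Repeat the same simple sorting step at every branch until the
    # branches get too small to divide any further.
    if len(y) < min_node_size or len(set(y.tolist())) == 1:
        return {"leaf": float(np.mean(y))}  # terminal node: predicted probability
    candidates = rng.choice(X.shape[1], size=min(n_candidate_predictors, X.shape[1]), replace=False)
    split = best_split(X, y, candidates)
    if split is None:
        return {"leaf": float(np.mean(y))}
    _, j, threshold = split
    go_left = X[:, j] <= threshold
    return {
        "predictor": int(j),
        "threshold": float(threshold),
        "left": grow_tree(X[go_left], y[go_left], n_candidate_predictors, min_node_size, rng),
        "right": grow_tree(X[~go_left], y[~go_left], n_candidate_predictors, min_node_size, rng),
    }

def grow_forest(X, y, n_trees=100, n_candidate_predictors=10, min_node_size=5, seed=0):
    # Each tree starts from a random sample (bootstrap) of the observations.
    rng = np.random.default_rng(seed)
    forest = []
    for _ in range(n_trees):
        boot = rng.integers(0, len(y), size=len(y))
        forest.append(grow_tree(X[boot], y[boot], n_candidate_predictors, min_node_size, rng))
    return forest

Even this toy version makes the point: every step is a simple sort-and-split that a patient human could perform by hand; the computer contributes only tireless repetition.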
Ironically, machine learning methods may be useful precisely because they lack human intelligence. Machines begin with no preconceptions, so they may detect patterns that humans overlook or ignore when conventional wisdom is not supported by data.
Ideally, our field would abandon the term artificial intelligence in regard to actual diagnosis and treatment of mental health conditions. Using that term raises false hopes that machines will explain the mysteries of mental health and mental illness. It also raises false fears that all-knowing machines will displace human-centered mental health care. Big data and advanced statistical methods have yielded, and will continue to yield, useful tools for mental health care. But calling those tools artificially intelligent is neither necessary nor helpful.
Despite the buildup around artificial intelligence, we need not fear the imminent arrival of “The Singularity,” that science fiction scenario of artificially intelligent computers linking together and ruling over all humanity (4). For the foreseeable future, the most important data regarding mental health conditions will arise from human experience and be recorded by human patients and clinicians. While machine learning may transform those human-generated data into treatment recommendations, the recommendations would still be delivered to human patients and clinicians. A scenario of autonomous machines selecting and delivering mental health treatments without human supervision or intervention remains in the realm of science fiction.
Artificial intelligence as a term has marketing value, however, so it is unlikely to disappear. Buyers should therefore beware of exaggerated claims and unnecessary complexity. We can certainly point to examples of big data delivering useful predictions or advice to human clinicians and patients. But we cannot point to clear examples of more complex (and more opaque) statistical methods proving more useful than simpler (and more transparent) methods. In the example of models using health records data to identify people at risk of suicidal behavior, more complex modeling methods do not yield more accurate predictions, and simpler models facilitate practical implementation (5). For those who hope to profit from prediction models and other artificial intelligence tools, opacity and complexity are central to the business model. For customers who hope to implement those models, skepticism about opaque and proprietary statistical methods is definitely warranted.
If the artificial intelligence label is here to stay, we might still try to change the images it calls to mind. Instead of imagining an omniscient and powerful calculating machine wringing humanity out of mental health care, we could imagine an amusingly practical robot vacuum. That robot vacuum does useful work. It never gets bored, and it eventually covers every spot. But it is neither all-knowing nor all-powerful. In our clinical work, so-called artificial intelligence can be a useful tool for well-defined jobs for which we humans have neither time nor patience.

References

1. Verghese A, Shah NH, Harrington RA: What this computer needs is a physician: humanism and artificial intelligence. JAMA 2018; 319:19–20
2. Abràmoff MD, Lavin PT, Birch M, et al: Pivotal trial of an autonomous AI-based diagnostic system for detection of diabetic retinopathy in primary care offices. NPJ Digit Med 2018; 1:39
3. Beam AL, Kohane IS: Big data and machine learning in health care. JAMA 2018; 319:1317–1318
4. Flight of the Conchords: The Humans Are Dead. New York, Big Deal Music, 2008. https://www.youtube.com/watch?v=0BcFHvEpP7A
5. Kessler RC, Hwang I, Hoffmire CA, et al: Developing a practical suicide risk prediction model for targeting high-risk patients in the Veterans Health Administration. Int J Methods Psychiatr Res 2017; 26

Information & Authors


Published In

Psychiatric Services
Pages: 219 - 220
PubMed: 31910752

History

Received: 19 September 2019
Revision received: 14 October 2019
Accepted: 22 October 2019
Published online: 8 January 2020
Published in print: 1 March 2020

Keywords

  1. Computer technology
  2. Service delivery systems

Authors


Gregory E. Simon, M.D., M.P.H. [email protected]
Bobbi Jo Yarborough, Ph.D.
Kaiser Permanente Washington Health Research Institute, Seattle (Simon); Kaiser Permanente Northwest Center for Health Research, Portland, Oregon (Yarborough).

Notes

Send correspondence to Dr. Simon ([email protected]).

Funding Information

This work was supported by cooperative agreement U19 MH092201 with the National Institute of Mental Health.
