Abstract

Statistical models, including those based on electronic health records, can accurately identify patients at high risk for a suicide attempt or death, leading to implementation of risk prediction models for population-based suicide prevention in health systems. However, some have questioned whether statistical predictions can really inform clinical decisions. Appropriately reconciling statistical algorithms with traditional clinician assessment depends on whether predictions from these two methods are competing, complementary, or merely duplicative. In June 2019, the National Institute of Mental Health convened a meeting, “Identifying Research Priorities for Risk Algorithms Applications in Healthcare Settings to Improve Suicide Prevention.” Here, participants of this meeting summarize key issues regarding the potential clinical application of suicide prediction models. The authors attempt to clarify the key conceptual and technical differences between traditional risk prediction by clinicians and predictions from statistical models, review the limited evidence regarding both the accuracy of and the concordance between these alternative methods of prediction, present a conceptual framework for understanding agreement and disagreement between statistical and clinician predictions, identify priorities for improving data regarding suicide risk, and propose priority questions for future research. Future suicide risk assessment will likely combine statistical prediction with traditional clinician assessment, but research is needed to determine the optimal combination of these two methods.

HIGHLIGHTS

Statistical models using health records data can accurately identify risk for suicidal behavior.
Data are limited regarding the accuracy of risk predictions by real-world clinicians using data available in practice.
The article highlights the need for research to examine how statistical predictions and clinicians’ predictions can be optimally combined.
Despite increased attention to suicide prevention, U.S. suicide rates continue to rise—increasing by more than one-third since 1999 (1). Over three-quarters of individuals attempting or dying by suicide had at least one outpatient health care visit during the previous year (2, 3). Health care encounters are a natural setting for identifying and addressing risk for suicidal behavior (4), but effective prevention requires accurate identification of risk. Unfortunately, assessments based on traditional risk factors are not accurate enough to direct interventions to those at highest risk (5, 6). Results from recent research indicate that statistical models, which are often based on electronic health records (EHRs), more accurately identify patients at high risk for suicide in mental health, general medical, and emergency department settings (7–12). Motivated by that research, some health care systems have implemented risk prediction models as a component of population-based suicide prevention (13). At the same time, some have questioned whether statistical predictions can really inform clinician decisions, given concerns regarding high false-positive rates, potential for false labeling, and lack of transparency or interpretability (14).
Current (13) and planned implementations of suicide risk prediction models typically call for delivering statistical predictions to treating clinicians. Clinicians then assess risk and implement specific prevention strategies (e.g., collaborative safety planning [15]) or treatments (e.g., cognitive-behavioral therapy [16] or dialectical behavior therapy [17]). This sequence presumes that clinician assessment improves on a statistical prediction. Whether or not a clinician assessment adds to statistical risk predictions, however, depends on whether those two methods are competing, complementary, or simply duplicative approaches.
In June 2019, the National Institute of Mental Health convened a meeting, “Identifying Research Priorities for Risk Algorithms Applications in Healthcare Settings to Improve Suicide Prevention” (18). The meeting considered a range of clinical, technical, and ethical issues regarding implementation of prediction models. As participants in that meeting, we present in this article our conclusions and recommendations, which are based on discussions during and after the meeting. We focus on one central issue: how algorithms or statistical predictions might complement, replace, or compete with traditional clinician prediction. First, we attempt to clarify conceptual and practical differences between suicide predictions by clinicians and predictions from statistical models. Second, we review the limited evidence available regarding accuracy of and concordance between these two methods. Third, we present a conceptual framework for understanding agreement and disagreement between clinician-based and statistical predictions. Finally, we identify priorities for improving data regarding suicide risk and propose priority questions for research.
Although we focus on prediction of suicidal behavior, clinicians can expect to encounter more frequent use of statistical models to predict other clinical events, such as psychiatric hospitalization (19) or response to antidepressant treatment (20). We hope that the framework we describe in the following can broadly inform research and policy regarding the integration of statistical prediction and clinician judgment to improve mental health care.

Defining and Distinguishing Statistical and Clinician Prediction

We define statistical prediction as the use of empirically derived statistical models to predict or stratify risk for suicidal behavior. These models might be developed by using traditional regression or various statistical learning methods, but their defining characteristic is the development of a statistical model from a large data set with the aim of applying this model to future practice. We define clinician prediction as a treating clinician’s estimate of risk during a clinical encounter. As summarized in Table 1, statistical and clinician predictions typically differ in the predictors considered, the process for developing predictions, the nature of the resulting prediction, and the practicalities of implementation.
TABLE 1. Distinguishing characteristics of statistical and clinician predictions of suicide risk
Characteristic | Statistical prediction | Clinician prediction
Inputs
 Data sources | Electronic health records and other computerized databases | Clinician observations, clinician review of available records
 No. of possible predictors | Hundreds or thousands | Typically <10
Process
 Combining and weighting predictors | Statistical optimization, often involving machine learning | Clinician judgment regarding relevance and weighting of risk factors
 Accommodating heterogeneity | Interactions or tree-based approaches to identify subgroup-specific predictors | Clinician judgment regarding applicability of specific risk factors to individuals
 Balancing sensitivity and specificity | Explicit assessment of performance at varying thresholds (may also evaluate alternative loss or cost functions) | Clinician judgment regarding importance of false-positive and false-negative errors
Outputs
 Product | Unidimensional prediction, often a continuous score | Clinical formulation
 Use in treatment decisions | Not intended to identify causal relationships or treatment targets | Aims to identify causal relationships and treatment targets
Implementation
 Timing | Typically computed before clinical encounters | Formulated during clinical encounters
 Scale | Batch calculation for large populations | Distinct assessments for each individual

Inputs for Generating Predictions

Data available to statistical models and data available to clinicians overlap only partially. Statistical models to predict suicidal behavior typically consider hundreds or thousands of potential predictors extracted from health records data (EHRs, insurance claims, and hospital discharge data) (21). These data most often include diagnoses assigned, prescriptions filled, and types of services used but could also include information extracted from clinical text or laboratory data (22, 23). Statistical models may also consider “big data” not typically included in health records, such as genomic information, environmental data, online search data, or social media postings (24, 25). In contrast, clinician reasoning is typically limited to a handful of predictors. Clinicians, however, can consider any data available during a clinical encounter, ranging from unstructured clinical interviews to standardized assessment tools (26–29). Most important, clinician assessment usually considers narrative data not systematically recorded in EHRs, such as stressful life events, and subjective information, such as clinicians’ observations of thought processes or nonverbal cues (30, 31).

Prediction Process

Statistical and clinician predictions differ fundamentally in accommodating high dimensionality and heterogeneity of predictive relationships. Statistical methods select among large numbers of correlated predictors and assign appropriate weights in ways a human clinician cannot easily replicate. The statistical methods for reducing dimensionality involve choices and assumptions that may be obscure to both clinicians and patients. In contrast, clinician prediction relies on individual judgment to select salient risk factors for individuals and assign each factor greater or lesser importance. Statistical prediction reduces dimensionality empirically, and clinician prediction reduces dimensionality by using theory, heuristics, and individual judgment.
Statistical models account for heterogeneity or complexity by allowing nonlinear relationships and complex interactions between predictors. Clinician prediction may consider interactions among risk factors but in a subjective rather than a quantitative sense. For example, clinicians might be less reassured by absence of suicidal ideation for a patient with a history of unplanned suicide attempt.
Statistical predictions typically produce a continuous risk score rather than a dichotomous classification. Selecting a threshold on that continuum must explicitly balance false-negative and false-positive errors. A higher threshold (less sensitive and more specific) is appropriate when considering an intervention that is potentially coercive or harmful. Machine-learning methods for developing risk scores also permit explicit choices regarding loss or cost functions that emphasize accuracy in different portions of the risk spectrum. Although clinician prediction may consider the relative importance of false-negative and false-positive errors, that consideration usually is not explicit or quantitative.
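This trade-off can be made concrete with a small sketch. The function and toy data below are illustrative only (not from the article or any real risk model); they show how sensitivity and specificity shift as the threshold on a continuous risk score is raised or lowered.

```python
# Illustrative sketch (toy data, not a real risk model): examining how the
# choice of threshold on a continuous risk score trades sensitivity
# against specificity.

def sensitivity_specificity(scores, outcomes, threshold):
    """Classify scores >= threshold as high risk; compare with outcomes."""
    tp = sum(1 for s, y in zip(scores, outcomes) if s >= threshold and y == 1)
    fn = sum(1 for s, y in zip(scores, outcomes) if s < threshold and y == 1)
    tn = sum(1 for s, y in zip(scores, outcomes) if s < threshold and y == 0)
    fp = sum(1 for s, y in zip(scores, outcomes) if s >= threshold and y == 0)
    sens = tp / (tp + fn) if tp + fn else 0.0
    spec = tn / (tn + fp) if tn + fp else 0.0
    return sens, spec

# Made-up risk scores and observed outcomes (1 = subsequent event).
scores = [0.02, 0.05, 0.10, 0.20, 0.40, 0.60, 0.80, 0.90]
outcomes = [0, 0, 0, 0, 1, 0, 1, 1]

# A higher (less sensitive, more specific) threshold suits interventions
# that are potentially coercive or harmful; a lower threshold suits
# low-burden outreach.
for threshold in (0.1, 0.5):
    sens, spec = sensitivity_specificity(scores, outcomes, threshold)
    print(f"threshold={threshold}: sensitivity={sens:.2f}, specificity={spec:.2f}")
```

In this toy example, raising the threshold from 0.1 to 0.5 trades sensitivity for specificity, which is the explicit, quantitative choice the text contrasts with clinicians' implicit weighing of the same errors.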

Outputs or Products

Statistical prediction typically has narrower goals than does clinician prediction. Statistical models simply identify associations in large samples and combine these associations to optimize a unidimensional prediction, that is, the probability of suicidal behavior during a specific period. The resulting statistical models are optimized for prediction but often have little value for explanation or causal inference. Even when statistical models consider potentially modifiable risk factors (e.g., alcohol use), statistical techniques used for prediction models are not well suited to assess causality or mechanism. In contrast, clinician prediction considers psychological states or environmental factors to generate a multidimensional formulation of risk. Explanation and interpretation are central goals of clinician prediction.
Because statistical prediction values optimal prediction over interpretation, statistical models can paradoxically identify treatments appropriate for suicide prevention (such as starting a mood stabilizer medication) as risk factors. Recent discussion underlines the importance of properly modeling interventions in prognostic models to prevent this phenomenon (32). In contrast, clinician prediction often aims to identify modifiable risk factors as treatment targets. For example, hallucinations regarding self-harm would be both an indicator of risk and a target for treatment.

Implementing Predictions in Practice

The abovementioned differences in inputs and processes have practical consequences for implementation of predictions. Statistical prediction occurs outside a clinical encounter through use of data readily accessible to statistical models. Clinician prediction occurs in real time during face-to-face interactions. Statistical prediction is performed in large batches, with accompanying economies of scale. Clinician prediction is, by definition, a handcrafted activity. Detailed clinician assessment may not be feasible in primary care or other settings where clinician time or specialty expertise is limited. In contrast, statistical prediction may not be feasible in settings without access to comprehensive electronic records.
The boundary between statistical prediction and clinician prediction is not crisply defined. Statistical prediction models may consider subjective information extracted from clinical texts (33). Clinician prediction may include use of risk scores calculated from structured assessments or checklists (6). Additionally, clinicians determine what information is entered into medical records and thus available for statistical predictions. Despite this imprecise boundary, we believe that the distinction between statistical and clinician predictions has both practical and conceptual importance.

Empirical Evidence on Accuracy and Generalizability of Statistical and Clinician Predictions

Little systematic information is available regarding the accuracy of clinicians’ predictions in everyday practice. Outside of research settings, clinicians’ assessments are not typically recorded in any form that would allow for formal assessment of accuracy or comparison to a statistical prediction. Nock and colleagues (34) reported that predictions by psychiatric emergency department clinicians were not significantly associated with probability of repeat suicidal behavior among patients presenting after a suicide attempt. A meta-analysis by Franklin and colleagues (5) found that commonly considered risk factors (e.g., psychiatric diagnoses, general medical illness, and previous suicidal behavior) are only modestly associated with risk for suicide attempt or suicide death. That finding, however, may not apply to clinicians’ assessments that use all data available during a clinical encounter. Any advantage of clinician prediction probably derives from use of richer information (e.g., facial expression and tone of voice) and from clinicians’ ability to assess risk in novel individual situations. No published data have examined the accuracy of real-world clinicians’ predictions in terms of sensitivity, positive predictive value, or overall accuracy (i.e., area under the curve [AUC]).
Some systematic data are available regarding accuracy of standard questionnaires or structured clinician assessments. For example, outpatients reporting thoughts of death or self-harm “nearly every day” on the 9-item Patient Health Questionnaire (PHQ-9) depression instrument (35) were eight to 10 times more likely than those reporting such thoughts “not at all” to attempt or die by suicide in the following 30 days (27). Among Veterans Health Administration (VHA) outpatients, the corresponding odds ratio was ∼3 (36). Nevertheless, this screening measure has significant shortcomings in both sensitivity and positive predictive value. More than one-third of suicidal behaviors within 30 days of completing a PHQ-9 questionnaire occurred among those reporting suicidal ideation “not at all” (27). The 1-year risk for suicide attempt among those reporting suicidal ideation “nearly every day” was only 4%. The Columbia–Suicide Severity Rating Scale has been reported to predict suicide attempts among outpatients receiving mental health treatment when administered either by clinician interview or by electronic self-report (26, 37). The Columbia scale has shown overall accuracy (AUC) of ∼80% in predicting future suicide attempts among U.S. veterans entering mental health care (38) and individuals seen in emergency departments for self-harm (39).
Statistical models developed for a range of clinical situations (e.g., active duty soldiers [9, 40], large integrated health systems [11], academic health systems [7, 41], emergency departments [12], and the VHA [8]) all appear to be substantially more accurate than predictions based on clinical risk factors or structured questionnaires. Overall classification accuracy as measured by AUC typically exceeds 80% for statistical prediction of suicide attempt or suicide death, clearly surpassing the accuracy rates of 55%–60% for predictions based on clinical risk factors. Overall accuracy of statistical models (7–9, 11) also exceeds that reported for self-report questionnaires (27, 36) or structured assessments (26, 37–39). Nearly half of suicide attempts and suicide deaths occurred among patients with computed risk scores in the highest 5% (7–9, 11), an indicator of practical utility in identifying a group with 10-fold elevation of risk.
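The "10-fold elevation" figure follows directly from the reported concentration of events. Treating the reported numbers as round figures, the arithmetic can be checked in a few lines (an illustrative sketch, not patient data):

```python
# Concentration-of-risk arithmetic behind the "10-fold elevation" figure.
# Inputs are the round figures reported in the text, not patient-level data.

share_of_events = 0.5  # ~half of attempts/deaths occur in the flagged group
share_of_population = 0.05  # flagged group = top 5% of computed risk scores

# Risk in the flagged group relative to the population average:
relative_risk_vs_average = share_of_events / share_of_population

# Relative to the unflagged 95%, the elevation is larger still:
relative_risk_vs_rest = (share_of_events / share_of_population) / (
    (1 - share_of_events) / (1 - share_of_population)
)

print(round(relative_risk_vs_average, 1))
print(round(relative_risk_vs_rest, 1))
```

Half the events in 5% of the population implies a rate 10 times the population average in the flagged group, and roughly 19 times the rate among the unflagged 95%.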
Comparison of overall accuracy across samples (clinician prediction assessed in one sample and statistical prediction in a different sample), however, cannot inform decisions about how the two methods can be combined to facilitate suicide prevention. The future of suicide risk assessment will likely involve a combination of statistical and clinician predictions (42). Rather than framing a competition between the two methods, we should determine the optimal form of collaboration. Informing that collaboration requires detailed knowledge regarding how alternative methods agree or disagree.
Few data are available from direct comparisons of statistical predictions and clinician assessments within the same patient sample. Among ambulatory and emergency department patients undergoing suicide risk assessment, a statistical prediction from health records was substantially more accurate than clinicians’ assessments that used an 18-item checklist (10). Of patients identified as being at high risk by the VHA prediction model, the proportion “flagged” as high risk by treating clinicians ranged from approximately one-fifth for those with statistical predictions above the 99.9th percentile to approximately one-tenth for those with statistical predictions above the 99th percentile (43). Among outpatients treated in seven large health systems, adding item 9 of the PHQ-9 to statistical prediction models had no effect for mental health specialty patients and only a slight effect for primary care patients (44).
Although these data suggest substantial discordance between statistical predictions and clinician predictions, they do not identify patient, provider, or health system characteristics associated with such disagreements. Available data also do not enable us to examine how statistical and clinician predictions agree by using higher or lower risk thresholds or over shorter versus longer periods. Theoretically, statistical prediction can consider information available within and beyond EHRs. In statistical prediction models reported to date, however, the strongest predictors remain psychiatric diagnoses, mental health treatments, and record of previous self-harm (7–11). Thus, statistical predictions still rely primarily on data created and recorded by clinicians. Any advantage of statistical prediction derives not from access to unique data but from rapid complex calculation not possible for clinicians.
Prediction models based on EHR data could replicate or institutionalize bias or inequity in health care delivery (45). Given large racial and ethnic differences in suicide mortality rates (46), a statistical model considering race and ethnicity would yield lower estimates of suicide risk for some traditionally underserved groups (44). A model not allowed to consider race and ethnicity would yield less accurate predictions. Whether and how suicide risk predictions should consider race and ethnicity is a complex question. But the role of race and ethnicity in statistical predictions is at least subject to inspection, whereas biases of individual clinicians cannot be directly examined or easily remedied.
Any prediction involves applying knowledge based on previous experience. For either statistical or clinician prediction to support clinical decisions, knowledge regarding risk for suicidal behavior developed in one place at one time should still serve when transported to a later time, a different clinical setting, or a different patient population. Published data regarding transportability or generalizability, however, are sparse. Understanding the logistical and conceptual distinctions between statistical and clinician predictions, we can identify specific concerns regarding generalizability for each method. Because clinician prediction depends on the skill and judgment of human clinicians to assess risk, we must consider how clinicians’ abilities might vary across practice settings and how clinicians’ decisions might be influenced by the practice environment. Because statistical prediction usually depends on data elements from EHRs to represent underlying risk states, we must consider how differences in clinical practice, documentation practices, or data systems would affect how specific EHR data elements are related to actual risk (44). Clinical practice or documentation could vary among health care settings or within one health care setting over time (47). Even if statistical methods apply equally well across clinical settings, the resulting models may differ in predictors selected and weights applied (48).

Understanding Disagreement Between Statistical and Clinician Predictions

Current (13) and planned implementations of statistical prediction models have typically called for a two-step process, with statistical and clinician predictions operating in series. That scheme is shown at the top of Figure 1, simplifying risk as a dichotomous state. Statistical predictions are delivered to human clinicians, who then evaluate risk and initiate appropriate interventions. This two-step process may or may not be reasonable, depending on how we understand the relationship between statistical and clinician risk predictions. Those two processes may estimate the same or different risk states or estimands. Understanding similarities and differences in the risk states or estimands of statistical and clinician predictions is essential to optimally combining these potentially complementary tools.
FIGURE 1. Alternative logical combinations of statistical and clinician predictions of suicide riska
aIn a series arrangement (top), clinician assessment would focus on individuals already identified by a statistical model as having an increased suicide risk. Any subsequent intervention would be limited to those considered at high risk by both statistical prediction and clinician assessment. In a parallel arrangement (bottom), clinician assessment would occur regardless of statistical prediction. Any subsequent intervention would be provided to those considered at high risk by either statistical or clinician prediction.
If statistical and clinician predictions are imperfect measures of the same underlying risk state or estimand, applying these tools in series may be appropriate. Even if clinician prediction is less accurate on average than statistical prediction, combining the two may improve overall accuracy. Statistical predictions can be calculated efficiently at large scale, whereas clinician prediction at the level of individual encounters is more resource intensive. Consequently, it would be practical to compute statistical predictions for an entire population and reserve detailed clinician assessment and clinician prediction for those above a specific statistical threshold (Figure 1, top). Such an approach would be especially desirable if clinician prediction were more accurate for patients in the upper portion of the risk distribution (i.e., those identified by a statistical model as needing further clinician assessment).
We can, however, identify a range of scenarios when a series arrangement would not be optimal. Statistical and clinician predictions could identify different underlying risk states or estimands. For example, statistical models might be more accurate for identifying sustained or longer-term risk, whereas clinician assessment could be more accurate for identifying short-term or imminent risk. Alternatively, statistical and clinician predictions could also identify distinct pathways to the same outcome. For example, clinician prediction might better identify unplanned suicide attempts, and statistical prediction might identify suicidal behavior after a sustained period of suicidal ideation. Finally, statistical and clinician predictions might identify risk in different populations or subgroups. For example, statistical models could be more accurate for identifying risk among those with a known mental disorder, whereas clinician assessment could be more accurate for identifying risk among those without a mental disorder diagnosis. In any of these scenarios, we should implement statistical and clinician predictions in parallel rather than in series. That parallel logic is illustrated at the bottom of Figure 1. We would not limit clinician assessment to persons identified by a statistical model or allow clinician assessment to override an alert based on a statistical model. Instead, we would consider each as an indicator of suicide risk.
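The logical difference between the two arrangements in Figure 1 reduces to a conjunction versus a disjunction, treating risk as a dichotomous state as the figure does. The sketch below is an illustration of that logic, not an implementation of any deployed system:

```python
# Illustrative logic of Figure 1, treating each prediction as a binary flag.

def series(statistical_high, clinician_high):
    """Series arrangement: clinician assessment is reserved for those
    flagged by the statistical model; intervention requires both flags."""
    return statistical_high and clinician_high

def parallel(statistical_high, clinician_high):
    """Parallel arrangement: both assessments occur independently;
    intervention follows either flag."""
    return statistical_high or clinician_high

# A patient flagged by the clinician but not the model receives no
# intervention under the series arrangement but does under the parallel one.
print(series(False, True))   # False
print(parallel(False, True)) # True
```

The scenarios in the text (different risk states, pathways, or subgroups) are precisely the cases in which the clinician-only and model-only cells carry real risk, making the series arrangement's silent discard of those cells costly.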
More complex combinations of statistical and clinician prediction methods might lead to optimal risk prediction. Logical combination of two imperfect measures may improve both accuracy and efficiency (49). Clinician assessment might be guided or informed by results of statistical predictions, with tailoring of clinician assessment based on patterns detected by statistical models. For example, clinicians might employ specific risk assessment tools for patients identified because of substance use disorder and different tools for patients identified because of a previous suicide attempt.
Effective prevention requires more than accurate risk predictions. Both statistical and clinician predictions aim to guide the delivery of preventive interventions, most likely delivered by treating clinicians. The discussion above presumes that statistical and clinician predictions aim to inform similar types of preventive interventions. But just as statistical and clinician predictions may identify different types of risk, they may be better suited to inform different types of preventive interventions. However, for both statistical and clinician predictions, we again caution against confusing predictive relationships with causal processes or intervention targets. If recent benzodiazepine use is selected as an influential statistical predictor, this selection does not imply that benzodiazepines cause risk or that interventions focused on benzodiazepine use would reduce risk. Compared with statistical models, clinicians may be better able to identify unmet treatment needs, but correlation does not equal causation (or imply treatment effectiveness) for clinician or statistical prediction. Caution is warranted regarding causal inference even for risk factors typically considered to be modifiable, such as alcohol use or severity of anxiety symptoms.

Priorities for Future Research

Rather than framing statistical and clinician predictions of suicide risk as competing with each other, future research should address how these two approaches could be combined. At this time, we cannot distinguish between the scenarios described above to rationally combine statistical and clinician predictions. We can, however, identify several specific questions that should be addressed by future research.
Addressing any of these questions will require accurate data regarding risk assessment by real-world clinicians in actual practice. Creating such data would depend on routine documentation of clinicians’ judgments based on all data at hand. This could be accomplished through use of some standard scale allowing for individual clinician judgment in integrating all available information. The widely used Clinical Global Impressions scale and Global Assessment of Functioning scale are examples of such clinician-rated standard measures of symptom severity and disability. Clearer documentation regarding clinicians’ assessment of suicide risk would seem to be an essential component of high-quality care for people at risk for suicidal behavior, but changing documentation standards would likely require action from health systems or payers. In addition, data regarding clinicians’ risk assessment should be linked with accurate and complete data on subsequent suicide attempts and suicide deaths. Systematic identification of suicidal behavior outcomes is essential for effective care delivery and quality improvement, independently of any value for research. With an adequate data infrastructure in place, we can then address specific questions regarding the relationship between statistical and clinician predictions of suicidal behavior.
First, we must quantify agreement and disagreement between statistical and clinician predictions by using a range of risk thresholds. This quantification would require linking data from clinicians’ assessments to statistical predictions by using data available before the clinical encounter. Any quantification of agreement should include the same at-risk population and should consider the same outcome definition(s) and outcome period(s). Ideally, quantification of agreement should consider risk predictions at the encounter level (how methods agree in identifying patients at high risk during a health care visit) and at the patient level (how methods agree in identifying patients at high risk in a defined population).
Second, after identifying overall discordance between predictions, we should examine actual rates of suicidal behavior among those for whom predictions were concordant (i.e., high or low risk by both measures) or discordant (high statistical risk but not high risk by clinician assessment or vice versa). Addressing this question would require linking data regarding both statistical and clinician predictions to population-based data on subsequent suicide attempt or death. If these two methods are imperfect measures of the same risk state, we would expect risk in both discordant groups to be lower than that in concordant groups. Equal or higher risk in either discordant group would suggest that one measure identifies a risk state not identified by the other measure.
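The proposed analysis amounts to cross-classifying patients by the two predictions and comparing observed event rates across the four cells. A minimal sketch of that tabulation, using made-up records rather than real data:

```python
# Sketch of the proposed concordance analysis: cross-classify patients by
# statistical vs. clinician prediction and compare event rates per cell.
# Records are invented for illustration: (statistical_high, clinician_high,
# subsequent_event).
from collections import defaultdict

records = [
    (True, True, 1), (True, True, 1), (True, True, 0),
    (True, False, 1), (True, False, 0),
    (False, True, 0), (False, True, 1),
    (False, False, 0), (False, False, 0), (False, False, 0),
]

cells = defaultdict(lambda: [0, 0])  # (stat, clin) -> [events, n]
for stat, clin, event in records:
    cells[(stat, clin)][0] += event
    cells[(stat, clin)][1] += 1

for (stat, clin), (events, n) in sorted(cells.items(), reverse=True):
    label = "concordant" if stat == clin else "discordant"
    print(f"stat={stat!s:5} clin={clin!s:5} ({label}): events={events}/{n}")
```

If the two methods were imperfect measures of one risk state, event rates in both discordant cells should fall below the concordant-high cell; a discordant cell with comparable risk would suggest that method is detecting a risk state the other misses.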
Third, we should quantify disagreement and compare performance within distinct vulnerable subgroups: those with specific diagnoses (e.g., psychotic disorders or substance use disorders), groups known to be at high risk (e.g., older men or Native Americans), or groups for whom statistical prediction models do not perform as well (e.g., those with minimal or no history of mental disorder diagnosis or mental health treatment). Either statistical prediction or clinician prediction may be superior in any specific subgroup.
Fourth, we should examine how predictions operate in series (i.e., how one prediction method adds meaningfully to the other). As discussed above, practical considerations argue for statistical prediction to precede clinician assessment. That sequence presumes that clinician assessment adds value to predictions identified by a statistical model. Previous research regarding accuracy of clinician prediction has typically examined overall accuracy and not accuracy conditional on a prior statistical prediction. Relevant questions include, Does clinician assessment improve risk stratification among those identified as being at high risk by a statistical model? How often does clinician assessment identify risk not identified by a statistical model? Is clinician prediction of risk more or less accurate in subgroups with specific patterns of empirically derived predictors (e.g., young people with a recent diagnosis of psychotic disorder)?
Finally, we should examine the effects of specific interventions among those identified by statistical and clinician predictions of suicide risk. Clinical trials indicating the risk-reducing effects of either psychosocial or pharmacological interventions have typically included participants identified by clinicians or selected according to clinical risk factors. We should not presume that these interventions would prove equally effective among individuals identified by statistical predictions. Addressing this question would likely require large pragmatic trials, not limited to those who volunteer to receive suicide prevention services. If statistical prediction identifies different types or pathways of risk, those identified by statistical models might experience different benefits or harms from treatments developed and tested in clinically identified populations.

Conclusions

We anticipate both that statistical prediction tools will see increasing use and that clinician prediction will continue to improve with the development of more efficient and accurate assessment tools. Consequently, the future of suicide risk prediction will likely involve some combination of these two methods. Rational combination of traditional clinician assessment and new statistical tools will require both clear understanding of the potential strengths and weaknesses of these alternative methods and empirical evidence for how these strengths and weaknesses affect clinical utility across a range of patient populations and care settings.


Published In

Psychiatric Services
Pages: 555 - 562
PubMed: 33691491

History

Received: 9 April 2020
Revision received: 21 July 2020
Accepted: 21 August 2020
Published online: 11 March 2021
Published in print: May 01, 2021

Keywords

  1. Epidemiology
  2. Suicide and self-destructive behavior
  3. Machine learning
  4. Prediction models
  5. Statistical modeling

Authors

Gregory E. Simon, M.D., M.P.H. [email protected]
Kaiser Permanente Washington Health Research Institute, Seattle (Simon, Shortreed, Coley); Department of Veterans Affairs Rocky Mountain Mental Illness Research, Education and Clinical Center, and Department of Psychiatry, University of Colorado School of Medicine, Aurora (Matarazzo); Department of Medicine and Department of Biomedical Informatics, Vanderbilt University, Nashville, Tennessee (Walsh); Department of Psychiatry, Massachusetts General Hospital, Boston (Smoller); Department of Emergency Medicine and Department of Psychiatry, University of Massachusetts Medical School, Worcester (Boudreaux); Kaiser Permanente Northwest Center for Health Research, Portland, Oregon (Yarborough); Department of Biostatistics, University of Washington, Seattle (Shortreed, Coley); Center for Health Policy and Health Services Research, Henry Ford Health System, Detroit (Ahmedani); Department of Community Medicine and Healthcare, University of Connecticut, Farmington (Doshi); Shifa Consulting, Arlington, Virginia (Harris); Division of Services and Intervention Research, National Institute of Mental Health, Bethesda, Maryland (Schoenbaum).
Bridget B. Matarazzo, Psy.D.
Colin G. Walsh, M.D., M.A.
Jordan W. Smoller, M.D., Sc.D.
Edwin D. Boudreaux, Ph.D.
Bobbi Jo H. Yarborough, Psy.D.
Susan M. Shortreed, Ph.D.
R. Yates Coley, Ph.D.
Brian K. Ahmedani, Ph.D., L.M.S.W.
Riddhi P. Doshi, Ph.D.
Leah I. Harris, M.A.
Michael Schoenbaum, Ph.D.

Notes

Send correspondence to Dr. Simon ([email protected]).

Funding Information

Dr. Smoller reports service on an advisory panel for 23andMe and receipt of an honorarium from Biogen. Dr. Yarborough has received research support from Syneos Health. Dr. Shortreed reports being a coinvestigator on projects funded by Syneos Health. The other authors report no financial relationships with commercial interests.
