Experts in violence risk management are lately grappling, like climatologists, with inconvenient truths: that mental health professionals cannot predict an individual's violent behavior much better than chance (1,2); that clinicians could do a better job of predicting violence if they would use the tools developed by risk experts (3)—but they don't; and that even if clinicians could accurately forecast violence, there is not much they can do about it. Competent patients often decline or stop treatment, which is their right, and the remedies for mental illness are not designed to prevent much of what causes violence anyway (4,5).
Nevertheless, whenever a person with serious mental illness commits a rare act of horrific violence in the community, two allegations surface: that mental health professionals should have seen this coming and that they should have prevented it (6,7,8). What is an expert to do?
Prediction versus prevention
The Virginia Tech Review Panel, in its report on the tragic shooting in Blacksburg last year, cited the campus mental health care team as "ineffective in connecting the dots or heeding the red flags that were so apparent with Cho" (6). The report asserted that "there are particular behaviors and indicators of dangerous mental instability that threat assessment professionals have documented among murderers" and that in Cho's case the professionals either did not see these warning signs or ignored them.
For clinicians so accused, the truth ceases to be inconvenient. Refuge is sought in the skillful defenses of ignorance, ineffectiveness, and irrelevance (9,10,11). "We can't predict violence in individual patients," they say, "so how can we prevent the unpredictable? And we can't cure major causes of violence anyway (poverty and past trauma, for example), so what good is prediction?"
There is a kernel of truth under this cover. After all, what doctors are supposed to do mainly is treat diseases and their symptoms. Psychiatrists are not psychics, and they are not the police.
However, the defense falls short when it conflates prediction and prevention (12,13). That these are two different things—not necessarily related—becomes obvious when one thinks of other areas of medicine. A primary care physician cannot predict which individual patient will have a heart attack or get cancer but is rightfully expected to do something to prevent early deaths from these diseases across a panel of patients: screen for risk factors and early signs of disease, work with patients to reduce modifiable risk factors, and intervene promptly when worrisome signs of pathology are detected (14).
There is also the "bad weather" example: pretty good predictability of totally unpreventable events. Although we can't prevent hurricanes, we can surely take steps to limit a storm's damage to us (15). We pay attention when the Big One is coming, warn everyone in its path, and then load up the dogs and evacuate.
Of course, the argument about violence is that we don't know when the storm is approaching. But maybe it is not such a good idea to be living on a sinking sandbar in the first place. We have large numbers of people with severe mental illness living in jails, homeless shelters, and substandard apartments in impoverished neighborhoods where every block has two liquor stores and a pawn shop. Then we talk about preventing violence by tweaking antipsychotic treatment regimens (4,16).
There is much that both public policy makers and mental health professionals could do to limit the human catastrophes for which they are sometimes, rightly or wrongly, held responsible. To be clear: clinicians actually can predict violence with reasonable certainty—they just need to consider their patients as a group, the way a public-health epidemiologist would (9,14). A clinical team treating 100 persons with schizophrenia in the community could confidently predict that a small proportion—more than one and fewer than ten (or so)—will engage in some serious violent behavior within the coming six months (17). An additional ten to fifteen (or so) will engage in minor acts of violence toward others, such as hitting someone without causing physical injury. The clinicians could count on it.
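The group-level arithmetic can be made concrete with a toy binomial model. This is a minimal sketch, assuming (purely for illustration) an independent 4% per-patient probability of serious violence over six months—a figure chosen to fall within the "more than one and fewer than ten" range:

```python
from math import comb

# Toy binomial sketch: 100 patients, each with an ASSUMED independent
# 4% probability of serious violence over the next six months.
n, p = 100, 0.04
expected = n * p  # expected number of seriously violent patients: 4

# Probability that the observed count falls between 1 and 9 inclusive,
# i.e., within the "more than one and fewer than ten" range.
prob_1_to_9 = sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(1, 10))

print(f"expected cases: {expected:.0f}")
print(f"P(1 to 9 cases) = {prob_1_to_9:.2f}")  # ≈ 0.98
```

The point of the sketch is that even though no one can say *which* patients will be violent, the aggregate count is highly predictable—the epidemiologist's certainty that the clinician lacks.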
The treatment team could go further and confidently expect that these violent acts will occur more frequently among subsets of their patients who have certain characteristics: young adults with severe mental illness who have trauma and violence in their past, substance abuse in their present, and no plans for taking prescribed psychotropic medications in their future (4,16,18). Knowing that some of these patients are going to be violent—that it is not a matter of "if" but "when"—means that mental health professionals are responsible for taking whatever steps they can, within reason, to prepare for violence and thus limit its damage and reach (2,12,19).
Interventions and leverage
And what can mental health professionals do? There is the timeworn strategy of containment: put a solid barrier between the source of harm and potential victims (13). (This is what the U.S. Army Corps of Engineers had in mind when they built the levees in New Orleans, which worked well for a long time.) The problem is that some ways of limiting risk are highly effective but morally illegitimate. Before the 1960s countless potential acts of violence by people with mental illness were probably thwarted by keeping patients locked up in psychiatric hospitals for decades and giving them major tranquilizers. A half-century later, clinicians have come to regard inpatient civil commitment sensibly, the way many people view abortion: it should be safe, legal, and rare.
Community-based "leverage," on the other hand, is not so rare. Use of the mere threat of hospitalization or jail, or of withholding housing or money, to ensure adherence to outpatient mental health treatment has become common practice (20). Is this coercive? Perhaps so, although research suggests that most patients with severe mental illness do not perceive it to be (21). Some patients clearly do feel coerced, but mainly when they are subjected to more than one form of leverage at a time—such as having a judge order treatment and having a representative money manager who believes in rewarding the patient—with the patient's own money—for taking medications (22). Stack the "leverages" on top of each other, and they start to feel heavy.
Does the use of community leverage prevent violence by people with severe mental illness? It can, but not necessarily. Legally mandating services will do nothing (at least nothing good) if the services being mandated are unavailable or ineffective (23). Even if we assume that treatment can sometimes work (16), it needs to target actual risk factors for violence, at least indirectly, in order to prevent violence. The risk factors have to be modifiable by something that mental health professionals can reasonably do. And the clinicians need to know that the risk factors exist in the first place.
The challenge of risk assessment
Apropos of knowing, why don't professionals routinely and systematically assess, document, and monitor dynamic risk factors for violence, as some experts have lately recommended (19,24)? Why not do this at least for patients above a baseline threshold of putative risk, such as those with a previous history of violence or threats or with a history of substance abuse? Given the potential for catastrophe, would it not be better to find out those "known unknowns" (as a famous war poet once put it)?
Maybe not. For one thing, the notion of routine violence risk assessment—built into the machinery of usual care—rests on a besmirching assumption about all of the people seeking mental health services (25,26). For another thing, who is this for? Screening for medical conditions is primarily to benefit the patient. Screening for violence risk is primarily to benefit other people. This difference raises nettlesome ethical questions. Should the patient get to choose whether or not to be screened for violent behavior, given that the interests of others may be at stake? Who gets to decide what to do, or not to do, with the resulting information (deciding not to act on a positive finding, for example)? And what happens if the information is simply wrong?
Structured risk assessment is not that sharp and discriminating. It's not like looking for a 10-mm polyp in a colon—the best endoscopists likely won't miss it if it's there and won't find it if it's not (27). Rather, assessing patients' violence potential often involves a perverse tradeoff between unacceptably high rates of false negatives and false positives (28). The Violence Risk Appraisal Guide, perhaps the most accurate tool yet devised to assess risk, has a sensitivity of 73% and a specificity of 63% (29), substantially below what would be considered acceptable in medicine for a screening instrument. Chest X-rays are not used to screen for lung cancer because the sensitivity and specificity of the procedure when used in this way are only 84% and 90% (30).
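What those operating characteristics mean at a realistic base rate can be seen in a back-of-the-envelope calculation. The 5% six-month base rate below is an assumption for illustration; the sensitivity and specificity are the VRAG figures cited above:

```python
# Back-of-the-envelope positive predictive value (PPV) calculation.
# base_rate is an ASSUMED illustrative figure; sensitivity and
# specificity are the reported VRAG operating characteristics.
base_rate = 0.05     # assumed: 5% of patients seriously violent in 6 months
sensitivity = 0.73   # P(positive screen | violent)
specificity = 0.63   # P(negative screen | not violent)

true_positives = base_rate * sensitivity               # 0.0365
false_positives = (1 - base_rate) * (1 - specificity)  # 0.3515
ppv = true_positives / (true_positives + false_positives)

print(f"P(violent | positive screen) = {ppv:.2f}")  # ≈ 0.09
```

In other words, under these assumptions roughly nine out of ten patients flagged by even the best instrument would not go on to be violent—the false-positive side of the perverse tradeoff described above.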
Then there is the business-model problem. Structured risk assessment is not reimbursed by insurance the way medical tests are, where doctors can make almost as much money doing screening procedures as they could be sued for if they did not (31,32).
Nevertheless, maybe there is a lesson or two to be learned from the way medicine manages the risk of a disease that you probably won't get but that might kill you if you do. If medical professionals tell us to get screened for it, we don't complain; we show up and assume the position. Should prevention fail, we get treatment fast. Should treatment ultimately fail, our family members will not sue the doctor for failing to predict and prevent our demise. They will invite him to our funeral, grateful that "he did everything he could." And the doctor will remember us fondly as he silently thanks us for getting that colonoscopy.
Lamentably, violence risk appraisal and management in mental health practice are not there yet. We need more accurate and efficient prediction tools, particularly for use in nonforensic patient populations. We also need better incentives for practitioners to use them and for payers to reimburse for their use (for example, through pay-for-performance policies). If we do reach a point where it is possible, and feasible, to predict individual patient violence with a high degree of precision, we will need better interventions—more effective and tolerable, less coercive and stigmatizing—to forestall the violence that we are predicting, and we will need more choices in community care than that between a pill and a shot.
In the meantime, clinicians should, of course, do the best they can with the tools and resources that are available to them. Let not the perfect defeat the good (or the better than nothing). Don't screen every patient, but for those who have committed violent acts in the past or report thoughts of hurting someone in the future, a structured risk assessment should be, and is becoming, the standard of care (3).
Conclusions
It hardly needs saying that all patients with serious mental illnesses—not just those at risk of violence—could benefit from accurate assessment of their problems, timely services that include evidence-based interventions, diligent clinical follow-up, and appropriate outreach to those who cannot or will not voluntarily seek the treatment they need. Clearly, there are complex economic, legal, and other systemic reasons why not all psychiatric patients currently get the best treatment that clinicians already know how to provide. But if they did, it is likely that much patient violence—and a great deal of human heartache all around—would be averted in the process.
And the risk experts? Perhaps they should step back and consider the field that they are in. In the end, a career is not all about what one knows but about what good one can do and the value people place on it. The median salary of gastroenterologists in the United States is over 175% of the median salary of psychiatrists; maybe there's some reason for that. On the other hand, psychiatrists make about twice as much as meteorologists (33).
Acknowledgments and disclosures
Preparation of this article was supported by the National Institute of Mental Health through an Independent Research Career Award (K02-MH67864) to the author. The author acknowledges the helpful comments of several colleagues who read a draft of the article: Paul Appelbaum, M.D., Wendell Bell, Ph.D., Alec Buchanan, Ph.D., M.D., Thomas B. Cole, M.D., Eric Elbogen, Ph.D., Kai Erikson, Ph.D., John Monahan, Ph.D., and Marvin Swartz, M.D.
The author reports no competing interests.