A quarter century has passed since Robert Miller (1) described what he thought were the “obvious advantages” of involuntary outpatient commitment and called for three things to ensure the success of this promising but unproven legal practice that was emerging from the ashes of deinstitutionalization. First, rigorous empirical research was needed “to determine how effective involuntary community treatment can be and for what type of patients.” Second, the practice would have to gain widespread support among community-based clinicians; if they did not believe in outpatient commitment, it would never be widely implemented. And third, outpatient commitment “must be accompanied by sufficient resources to permit adequate treatment to be provided.” Unless these changes came to pass, Miller warned, “outpatient commitment is all too likely to remain a theoretical but not practical alternative to revolving-door hospitalizations and community neglect.”
Have we gotten anywhere? After three generations of studies and evaluations and systematic reviews of the evidence for outpatient commitment, there is yet little agreement about whether it works, little systematic effort to implement the practice in states that permit it (with the notable exception of New York State), and dwindling allocations of public funds to pay for intensive community treatment—mandated or not. Meanwhile, debate continues over how to solve an old problem: what to do about adult members of our communities who suffer from debilitating psychiatric illnesses such as schizophrenia, and who, for a variety of reasons—illness severity, lack of needed community supports, insufficient efforts at engagement, and forms of treatment that are poorly adapted to their needs—fail to adhere to treatment until they deteriorate to the point of requiring involuntary hospitalization or commit a crime and get arrested.
As a potential solution to this problem, outpatient commitment remains highly controversial, to such an extent that the controversy has itself become part of the challenge in implementing the practice and developing a broader evidence base for its effectiveness (2). Ironically, stakeholders’ persistent and passionate disagreements over whether outpatient commitment is effective, appropriate, beneficial, necessary, affordable, or fair may hobble good-faith attempts to make it work and to evaluate its impact in different service systems and populations. This Open Forum reviews the debate over outpatient commitment and argues that current evidence of its effectiveness is sufficient to justify more widespread implementation with systematic local evaluations.
Limits of a gold standard
Proponents of outpatient commitment believe that the practice can provide access to needed treatment in less restrictive settings than a hospital. Although the civil court orders for outpatient commitment are variably used, they tend to be initiated at times when a person is already involuntarily hospitalized, thus allowing an earlier reentry into the community in an arrangement not dissimilar to conditional release from involuntary hospitalization (3). Opponents believe that outpatient commitment orders, especially those issued under the newer so-called “preventive” outpatient commitment statutes, such as New York’s Kendra’s Law, are too coercive. Under these regimes, a court can issue an order for outpatient mental health treatment to a person who is neither mentally incompetent nor imminently dangerous and who has broken no laws (4). (Nonadherence to prescribed psychiatric treatment is unwise, perhaps, but not a crime.)
Many observers who are not opposed to outpatient commitment on principle still want to know whether it “works.” A fair-minded reading of the literature on outpatient commitment’s effectiveness would be that the evidence is mixed, with success largely conditioned on effective implementation, the availability of intensive community-based services, and the duration of the court order. But not everyone thinks so, because even such a qualified endorsement rests on equally valuing results from quasi-experimental analyses of outpatient commitment outcomes and results from the presumed gold standard of randomized controlled trials (RCTs) (4).
The debate over outpatient commitment’s effectiveness also invites a broader question: Can any single-site community-based intervention study appropriately generalize its results to the remarkably diverse service systems and communities in which community treatment orders may be applied? Community-based intervention trials bear little resemblance to carefully controlled drug trials where the question is simply whether drug A is more efficacious than drug B. Inevitably, a trial involving a court mandate to participate in outpatient mental health services will produce different results depending on the services locally available, the community environment in which the trial takes place, financing and social insurance schemes, and the sociodemographic characteristics of the participants. Rather than asking whether outpatient commitment orders are effective, we think it is more appropriate to ask, “Under what conditions, and for whom, can outpatient commitment orders be effective?”
The well-documented design constraints and implementation challenges that often bedevil real-world RCTs—bias from inclusion and exclusion criteria, study refusals and dropouts, and protocol deviations and crossovers (5)—were all commonly encountered in the RCTs of outpatient commitment (6–8). But there is no reason to expect any community-based effectiveness trial to remain immune from the validity threats that commonly plague such trials. In our view, the solution is not simply to persist in a quixotic quest for the perfect RCT of outpatient commitment. Rather, we would welcome into the evidence base the results of well-conducted, large-scale, quasi-experimental and naturalistic studies with rigorous multivariable statistical controls. We believe that such studies should be afforded evidentiary status comparable to that of RCTs, while acknowledging that unmeasured and therefore uncontrolled selection bias is an enduring threat to the validity of nonrandomized studies. Unfortunately, Cochrane and other systematic evidence reviews tilt heavily toward RCTs as a gold standard, even for community interventions in which randomized study designs may be infeasible and dubiously generalizable (9).
OCTET and an unanswered question
The recently reported Oxford Community Treatment Order Evaluation Trial (OCTET) in the United Kingdom, the third RCT of outpatient commitment’s effectiveness, encountered other specific design challenges in addition to those described above (8). In OCTET, individuals who were involuntarily hospitalized were enrolled in an unblinded prospective trial and randomly assigned to be released under one of two study conditions. The experimental condition consisted of a community treatment order, the U.K. equivalent of involuntary outpatient commitment authorized under the 2007 Mental Health Act (10). The control condition consisted of an authorized “leave of absence from hospital,” a form of conditional release authorized under Section 17 of the U.K.’s 1983 Mental Health Act (11,12). The primary outcome for OCTET was whether the person was readmitted to the hospital during the 12-month follow-up period. Secondary outcomes included time to first readmission, number of readmissions, total time spent in the hospital, clinical functioning, and social functioning. No significant differences were found across any of the outcomes at the 12-month follow-up.
Unfortunately, there are several reasons that the debate over the effectiveness of outpatient commitment will not end with OCTET—some reasons specific to OCTET’s design and others having to do with the aforementioned nature of real-world community intervention research. For those already convinced that outpatient commitment works, the troubles that OCTET experienced with protocol violations, refusals, and crossovers might be enough to undermine the study’s credibility. To researchers in this field, such difficulties are merely indicative of the fierce headwinds to be encountered in conducting a community-based RCT of this sort (13–16). But the main problem, in our view, is something else: Whereas the fundamental debate over outpatient commitment’s effectiveness is about whether compulsory treatment can work better than voluntary treatment for persons who are eligible, OCTET was never designed to make that comparison.
The similarities between OCTET’s experimental and control conditions turn out to be as important as their differences for the purpose of interpreting the trial’s results and applicability to the larger debate over outpatient commitment’s effectiveness. Although an extended period of hospital leave under Section 17 does not explicitly entail a compulsory outpatient treatment regimen, it can functionally amount to the same thing. It requires “an integrated care programme approach” whereby the responsible inpatient clinician overseeing the person’s leave ensures that the outpatient clinicians and community mental health nurses are aware of prescribed medication to be administered to the person in the community, as authorized on requisite forms. Perhaps most important, some legal leverage over the person remains in place, because the responsible clinician can revoke the leave and have the person returned to the hospital at any time “in the interests of the patient’s health or safety or for the protection of other persons” (11). Thus OCTET’s comparison condition for community treatment orders was itself a variation on legally leveraged and supervised community treatment.
Why was Section 17 hospital leave chosen as the control condition to evaluate community treatment orders? Perhaps the main reason is that ethical approval of OCTET’s protocol in the United Kingdom required a legal opinion that the trial design achieved “legal equipoise” in its two randomized conditions; in validation of such equipoise, the reviewing lawyers opined that “it is unclear whether either condition is more restrictive than the other” (8). Thus the OCTET investigators were prohibited from conducting a trial comparing compulsory and strictly voluntary treatment. As the study unfolded, the individuals in the comparison group experienced far fewer days under legal compulsion than did those under community treatment orders, with no differences in outcome. Nevertheless, it is clear that without a clean randomized comparison between voluntary and compulsory treatment and an adequate “dose” of both—while service availability is held constant—OCTET could not fully settle the debate about the effectiveness of outpatient commitment.
That the OCTET investigators were not permitted to carry out an RCT of compulsory treatment against voluntary treatment speaks to the evolving regulatory climate for research involving human participants. It is reasonable to ask whether today’s institutional review boards would have allowed the two prior RCTs of outpatient commitment in North Carolina and New York City. The constraints placed on OCTET also reveal a prevailing assumption that outpatient commitment is so coercive that it would be unethical to foist it randomly on people who could safely go without it. As a result, the trial could not answer the key real-world question for outpatient commitment research: Under what conditions does compulsory treatment work better than the purely voluntary alternative for individuals who are otherwise legally eligible for compulsory treatment?
Quasi-experimental studies
As for quasi-experimental studies of outpatient commitment, such as the significant “before-and-after” findings of the recent evaluation of assisted outpatient treatment in New York (17), critics have been inclined to dismiss these results as “regression to the mean.” Because many individuals who enter a program at the absolute nadir of their clinical course will improve naturally, no matter what the intervention consists of, this seems a valid criticism of these designs. Skeptics also suggest that the results observed in naturalistic studies of outpatient commitment might be explained by an uncontrolled correlation of service intensity with court orders; if more intensive or preferred treatment is offered to patients under outpatient commitment, these studies cannot distinguish between the effectiveness of the court order and the benefit of intensive services—that is, treatment that might have helped even without the coercion.
Although these may be valid criticisms in principle, we would emphasize that much was done to mitigate potential threats to validity in the evaluation of assisted outpatient treatment in New York State (17). Specifically, the New York evaluation employed rigorous quasi-experimental methods, including propensity score adjustments, to evaluate the experience of several thousand persons—far more than a randomized trial could reasonably recruit. That study also compared results for persons who received assertive community treatment, the most intensive form of community mental health service available, with and without an outpatient commitment order in place and found that the court order provided a significant advantage over and above assertive community treatment alone. The investigators concluded that the court order made a difference by exerting an effect on the individuals in treatment and on the service system (17). In our view, such evidence is sufficient to justify more widespread implementation of outpatient commitment, accompanied where possible by systematic local evaluations similar to the New York assisted outpatient treatment study.
Conclusions
We doubt that OCTET or another RCT of outpatient commitment is going to settle the debate over its effectiveness. It is important to understand why people disagree about outpatient commitment in the first place, and how various entrenched positions about the policy also shape interpretation of evidence of its effectiveness (18). But it is also time to rethink what should count as persuasive evidence that outpatient commitment works when appropriately targeted and funded. Perhaps quasi-experimental and naturalistic studies are “definitive enough.” We at least think they should count in this arena, perhaps just as much as, if not more than, studies that sacrifice real-world validity on the altar of randomization (5).
Acknowledgments and disclosures
The authors report no competing interests.