Implementation of evidence-based practices (EBPs), which aims to integrate the best available evidence into clinical practice, has become one of the primary mandates for state and health care organizations in their efforts to improve quality of care. These efforts are supported by the development of practice guidelines and registries of EBPs (e.g., Patient Outcomes Research Team, Department of Defense/Veterans Affairs) (1,2). Unfortunately, the 17-year lag between research outcomes and the delivery of such EBPs in clinical care is practically legendary. Utilization of clozapine and the individual placement and support employment model provide two examples of interventions that have an extensive evidence base yet are still underused. In these instances, practice clearly lags behind the research. To accelerate implementation of EBPs, implementation science has developed a core set of methods, and although some of these are successful, at times this work has revealed limitations of EBPs, and outcomes comparable to those observed in research have not been attained (3). Our experience suggests that the focus of implementation science has been largely unidirectional, either to implement a new practice or to “course-correct” when there is substantial drift in clinical practice, with a starting point of assuming that practice is “broken.”
Less recognized but perhaps equally important in improving quality of care are efforts to deliver services that may lack a solid research foundation or that go beyond or deviate from research evidence. New clinically based practices can result from careful innovation and creative efforts to tackle challenges in care delivery that are not adequately addressed by established EBPs. Arguably, peer and family support interventions initially evolved rapidly without an array of preexisting randomized controlled trials. Such trials were eventually conducted, creating a situation in which research initially lagged behind practice and then self-corrected.
We assert that efforts to implement EBPs built on scientific research, or to implement practices without a fully developed evidence base, could each either optimize or degrade care and outcomes. Whether the result is an unfortunate inevitability or a creative tension from which to learn depends on the extent to which these two forces in care, evidence-based science on the one hand and local adaptation and experience-driven innovation on the other, remain tethered and within reach of each other, each using the tools and perspectives of the other, built around a backbone of measurement and iterative adaptation guided by the oversight of stakeholders, especially patients. What is called for is the expectation that clinical settings be designed as learning environments and that research be designed to accommodate flexibility and to be conducted in routine practice. This expectation requires a culture change for both clinical leadership and researchers.
Two case examples illustrate the tension in the implementation of evidence-based practice in behavioral health care. Both examples involve the use of integrated care (IC) to justify changing the delivery of mental health care. Case 1 is an example of applying a model of IC despite scant evidence that outcomes improve and without a framework for measuring the adaptation. Case 2 is an example in which the EBP for IC is overgeneralized and implemented without fidelity to the research, again without measuring these changes. Both cases illustrate how applied clinical practice deviates from the evidence base and why it is difficult to understand whether the deviation optimizes or degrades care.
Case 1: Integrated Care for Serious Mental Illness
People with serious mental illnesses have a greater burden of general medical illness and higher mortality than the general population (4,5). The challenge of providing adequate medical care to individuals with serious mental illness has resulted in multiple innovative interventions, including the involvement of a primary care provider in mental health clinics in order to reach this hard-to-treat population (6). For example, one author oversees a state-funded assertive community treatment team that received a two-year grant to add an advanced practice nurse in order to improve access to primary medical care for its population with serious mental illness. This staffing change occurred despite the lack of infrastructure to assess the impact of the addition and little prospect that the program would become self-sustaining. Innovations like these often have been implemented ahead of evidence for better health outcomes. There is some support for improvement of care processes with different IC models but almost no evidence that any IC model improves short-term or long-term health outcomes (7,8).
Are current forms of IC positive and progressive adaptations, or has the implementation of IC in this population moved too far beyond the evidence? Beyond process improvement, the value of IC could involve other important outcomes, such as stakeholder satisfaction, sustainability of revenue support, and consolidation of resources. However, without assessment of these variables, IC risks failing over time, either as a casualty of faddism or because results turn out to be other than expected and desired.
In this case, implementation beyond the evidence should encourage us to consider the potential harm or cost of adaptation that strays too far from the evidence and should prompt formal evaluation of current practices. From a cost-benefit viewpoint, IC may be redirecting resources from other programs that are more definitively known to affect mortality and morbidity, such as programs that treat obesity or aid in smoking cessation, or platforms that improve housing, education, and employment. Measuring the effect of these adaptations would not only build the evidence base for (or against) IC but also provide a set of checks and balances on research- and evidence-driven clinical care.
Case 2: Integrating Mental Health Care Into Primary Care Settings
For individuals with mild to moderate depression, a strong evidence base exists for integrating behavioral health care into primary care settings in order to improve access and outcomes (9–12). Such programs were developed as part of the chronic care model first described by Katon and Sullivan (9). The evidence base was developed by teaming care managers (master’s-level mental health professionals, psychologists, and nurses) with primary care providers under the supervision of psychiatrists. Key features of the model include the use of clinical information systems and measurement-based care to drive treatment planning, as well as an emphasis on brief, focused behavioral interventions such as problem-solving therapy or behavioral activation, development of self-efficacy and self-management, psychiatrist supervision, and short-term engagements (2).
Some clinical practices have broadly adapted the original IC model for mild to moderate depression, applying the same principles to a wider array of mental health conditions, patients with more severe illness, and patients with more substantial psychosocial needs, including housing, employment, and case management. These adaptations have also entailed lower fidelity to key components, such as brief treatment, psychiatric supervision, and measurement-based care, and have resulted in the rise of colocated care focused on a smaller number of more severely ill patients. Such modifications are occurring despite a lack of a priori evidence for them.
In this case, the consequences of the lack of fidelity to the original IC model and of its expansion to other patient populations need to be assessed to determine the model’s effectiveness. Decreased fidelity may well lead to equivocal or even negative outcomes, whereas overgeneralization of IC may overextend resources toward managing complex cases at the expense of patients who are definitively known to benefit from IC. For example, the primary care practice of one author’s academic medical center has been screening all patients annually with the Patient Health Questionnaire–9 but chooses to refer the patients with more complex illness, leaving those with less complex illness to be treated perhaps inadequately. The social worker spends time triaging patients rather than systematically tracking them or using measurement-based approaches, and the system has no mechanism to assess the success of these adaptations.
Discussion and Conclusions
These two IC examples help illustrate the tension between evidence-based care and its application to clinical practice. Ideally, all clinical practice would be founded on adequate evidence and implemented with high fidelity, and the effects of adaptation and innovation would be studied and understood. IC would be implemented to function as a learning environment informed by patient-level outcomes and program evaluation. However, EBPs cannot keep pace with changes in health care systems and evolving knowledge, and clinical trials cannot possibly study every nuance of clinical practice; thus compromises are made. Further, the termination of the National Registry of Evidence-Based Programs and Practices underscores the ambiguity regarding the value of such registries. Overall, the required compromises must be better understood. Chambers and Norton (13) offered one framework based on several truisms: programmatic drift from an initial EBP is inevitable, adaptation can be positive or negative, evidence always evolves, and there is a bidirectional relationship between evidence and implementation/dissemination. This framework suggests that clinical practice should exist in a learning environment in which adaptation is recognized and embraced, adaptation should inform the evidence base, clinical practice should implement measurement-based care so that outcomes can be monitored, and evidence and practice should be tethered at an undefined but appropriate distance.
Most important is a culture in which time is periodically taken to assess current practice and whether, and in what ways, it deviates from EBPs (14). Meanwhile, research needs to evolve toward more pragmatic trials conducted in community practices, with frontline clinicians and broader patient populations.
The belief that implementation is a linear process from intervention development to routine practice may be not only incorrect but also detrimental to desired outcomes. Further, implementation science is used far too often at times of substantial separation between research and practice. There is a natural space, and therefore tension, between implementation and evidence that needs to be recognized and understood. Evidence and implementation orbit each other; sometimes evidence outstrips implementation, and sometimes the reverse is true. Negotiating the space between these two poles requires the creation of learning environments in which the methods created by implementation science become routine for both clinical practice and research design. Variability in clinical practice, which leads to adaptations of the evidence base to fit clinical realities, must always be taken into account. The unfounded belief in the absolute validity of a static “evidence base,” as opposed to an organic evolution of improvements and setbacks between evidence and practice, risks inappropriate allocation of resources and missed opportunities. Thus we must pay close attention to the deliberate movement from evidence to practice and acknowledge the importance of feedback in this process.