The CBTD framework developed by Weisz (4) and the CID framework developed by Hoagwood and colleagues (3) for validating treatments and interventions have the potential to increase the range and utility of empirically supported interventions available to children and families and the pace at which treatment validation and dissemination occur. At each step in the progression from construction and refinement of a treatment through testing of efficacy, levels of effectiveness, cost, and sustainability, decisions must be made about which variables, at which levels, are most relevant. If conditions for the anticipated users of the intervention—clients, practitioners, provider organizations, and funding sources—are considered at the outset of treatment development, some steps in the proposed progression may be taken quickly or even skipped. Of course, not all notions about the factors most relevant to treatment implementation and outcomes can be divined in advance; such notions emerge from research findings and from practical experience. For example, research may uncover moderators of outcome, and replication studies may succeed or fail.
The following discussion illustrates some issues that investigators will face on the journey from treatment efficacy to "street-ready" status to dissemination. The term street-ready describes interventions that can be implemented in representative service settings and systems.
Differences in conditions
Several treatments for children have already been deemed efficacious, probably efficacious, or effective on the basis of research reviews and consensus documents published by the American Psychological Association, the American Academy of Child and Adolescent Psychiatry, the Practice Guidelines Coalition, and the Office of the Surgeon General. Dimensions on which research-based and clinic-based deployment of treatments may differ can be identified, some on the basis of child psychotherapy research (23,24,25) and others on the basis of research on organizational behavior, technology transfer, and diffusion.
Table 1 presents a preliminary list of dimensions and examples of variables that might be subsumed under each dimension. The dimensions are the intervention itself; the practitioners delivering the intervention, including clinical training, support, and monitoring; the client population; service delivery characteristics; the organization employing the practitioners; and the service system, including referral and reimbursement mechanisms and interagency relations.
Research questions might arise out of anticipated differences on one or more dimensions. Beginning with the intervention and practitioner dimensions (Table 1), several variables have been shown to differentiate laboratory-based and community-based child treatment conditions (24,25). Specifically, research treatments tend to be behavioral, problem focused, and based on written manuals; clinicians receive specialized training in the experimental intervention, and the fidelity of its implementation is monitored.
For which interventions and under what circumstances are specialized training and monitoring necessary for effective implementation in real-world service settings? Research on organizational behavior suggests that routine human service tasks generally require less training and support than do more complex and individualized tasks (27). Even routine tasks in industry, however, are monitored to ensure that the quality of the product received by the last customer or client of the day is equal to that received by the first. Specialized training and monitoring may be less necessary for treatments that are less complex, but there is probably no promising treatment for which specialized training and ongoing support for fidelity of implementation can be eliminated altogether.
This proposition challenges norms and regulations that have governed the work of mental health and social service professionals (28) and thus has implications for dissemination. The literature on diffusion of innovation suggests that the extent to which an innovation is perceived as similar to or different from prevailing practice will influence its adoption. Moreover, individual endorsement of prevailing practice is supported by organizational, financial, and value-based structures. Thus the likelihood that practitioners will not only adopt a new treatment but also implement it as intended may be contingent on perceived differences between the treatment and current practice and on the extent to which organizational and fiscal influences support the new practice over the prevailing one. Other features of an innovation, its end users, and the context in which it is introduced may also influence the adoption of innovation (6,29). Discussion of such features is beyond the scope of this paper; however, they have been addressed elsewhere (30).
Multisystemic therapy generally supplants either residential or outpatient treatment in which an eclectic mix of interventions is deployed by a variety of clinicians who meet primarily with the child and occasionally with the child's parent or parents. Three aspects of multisystemic therapy—the service delivery model, the intervention itself, and clinical support and monitoring—contrast sharply with these prevailing practices. First, multisystemic therapy is delivered in a home-based service model that requires a flexible work schedule. Second, it is operationalized in terms of nine treatment principles that integrate key aspects of empirically based treatment approaches for youths and families into an ecological framework. Third, because evidence suggests that therapists' adherence to the multisystemic therapy model predicts outcomes for children (31), intensive clinical supervision and support, along with monitoring of progress in treatment and of barriers to such progress, are ongoing.
In contrast to multisystemic therapy, most evidence-based treatments for children and adolescents have been validated in outpatient service delivery settings. To implement such a treatment, the substance of the practitioner-client interchange during sessions would change, but the location, frequency, and, in some cases, the targets of the treatment may not.
For example, an outpatient clinician who previously spent an hour conducting play therapy with a ten-year-old child with conduct disorder could spend the same hour implementing parent management training with the parent or parents of the child. Ostensibly, the changes required for the practitioner and provider organization to adopt parent management training would be fewer than those required to adopt multisystemic therapy. When multisystemic therapy is adopted, the location, intensity, timing, focus (family and ecology versus the individual child), and content of the intervention all change. To adopt parent management training, only two features would change: the focus—on the parents rather than the child—and the content. These changes are by no means insignificant, however. To the extent that parent management training differs less from current outpatient practice, clinicians, the organizations that employ them, and the entities that reimburse services may adopt it more quickly than they would multisystemic therapy. On the other hand, the presence of all the trappings of previous practice may tempt clinicians and organizations to adapt the parent management training model until it more closely resembles previous practice, and treatment fidelity may erode quickly.
In the service system domain (Table 1), different methods of calculating the costs of treatment and paying for it in research settings and usual-care settings may influence treatment implementation, fidelity, and outcomes. Grant funding of a research study of effectiveness may require time-limited deployment of an intervention, whereas financing at local clinics may be on a fee-for-service basis. Thus outcomes achieved in four months in the study may take six months to achieve in the clinics. If the clinic sessions occur with the frequency and duration of the study sessions, then the clinic-deployed version of the intervention will be more costly because of the prolonged treatment time. Factors contributing to the prolonged treatment time would have to be examined to determine whether they also influence the clinical effectiveness of the intervention.
For example, it will be important to demonstrate to payers, including clients and third-party payers, that replacing an hour of play therapy for a ten-year-old child with parent management training will yield better outcomes at the same or a lower cost. The cost of training and monitoring associated with implementation of parent management training, if training and monitoring are found necessary in effectiveness studies, would be included in the cost equation.
Differences in questions
The scenarios about the implementation of multisystemic therapy and parent management training illustrate how treatment models might differ from usual-care conditions with respect to the intervention itself, the model of service delivery, and costs. Other efficacious and effective interventions can be similarly evaluated by using the dimensions listed in Table 1. For each treatment, the magnitude of similarity or difference between the conditions that characterized the validation studies and those that characterize real-world service settings and systems can be estimated. Data on similarities and differences are not yet available for many dimensions, precisely because we have not conducted research in ways that assess them. Moreover, not all differences will be equally relevant to the real-world effectiveness and ultimate dissemination of all interventions. Thus some educated guessing will be needed to "individualize" the progression of existing treatments to street-ready status.
As suggested by others (3,4), case studies may be needed to explore differences hypothesized to be salient or similarities presumed to exist. If the training and clinical support of clinicians differentiate conditions in the study from those in practice sites, then case studies would focus on these features. If the service setting—for example, an outpatient clinic, a school, or a public social service agency—differentiates conditions in the study from those in practice sites, then case studies would focus on that variable.
However, case studies cannot provide strong evidence for effectiveness or transportability. Larger-scale quasi-experimental or experimental studies are needed to achieve that goal. For example, the type or amount of specialized training and clinical support provided to community-based clinicians who are implementing a new treatment could be experimentally manipulated, with fidelity and outcomes compared across experimental training conditions. The climate or culture of an organization may influence clinicians' implementation of a new treatment. Conversely, clinicians' implementation of the treatment may have an impact on the organization's climate or culture. Effectiveness studies should therefore include a large enough sample of organizations and clinicians in each organization to detect such influences.
The variables most critical to dissemination may or may not include those central to effectiveness. For example, the impact of an organization's climate on treatment adherence may be of primary interest in a study of the effectiveness of parent management training in outpatient settings. If such a study found that climate was associated with adherence and that adherence, in turn, predicted outcomes, is it reasonable to assume that climate is important to dissemination? Such an assumption might be valid if climate were found to be associated with organizational variables that predict the adoption of innovation. Otherwise, perhaps not.
Suppose that a dissemination study indicated that the presence of a "champion" of parent management training predicted the willingness of local mental health agencies to adopt it. An organization's climate, which was found to predict clinician adherence in the hypothetical effectiveness study of parent management training described above, may or may not be correlated with the presence of a champion or with the champion's ability to cultivate interest in the innovation. Thus one variable at the organizational level—that of the champion—may be important for dissemination but not for effectiveness. If the presence of a champion also predicts adherence to the treatment model, then the variable is relevant both to dissemination and to effectiveness.