The National Institute of Mental Health (1, 2) has advocated for the use of experimental therapeutics in its research program. The use of experimental therapeutics enables researchers to open the black box of interventions and illuminate how change in outcomes occurs. This approach requires the delineation of intervention components and the interrogation of presumed mediating targets addressed by those components (3). Raghavan et al. (4) have described the approach and decomposed it with examples, explicating a framework for moving forward. However, little guidance exists on how to apply experimental therapeutics outside medicine, psychology, and public health. This article begins with a focused literature review to document the frequency of research explicitly engaging experimental therapeutics in psychiatric services research as published in some of the American Psychiatric Association’s periodicals. We then describe a menu of methods to guide researchers who seek to adopt the approach.
We reviewed articles published between 2011 and 2021 in some of the American Psychiatric Association’s periodicals (Psychiatric Services, The American Journal of Psychiatry, Psychiatric News, The Journal of Neuropsychiatry and Clinical Neurosciences, and Focus). Using the search term “experimental therapeutics,” we found 30 articles: 18 commentaries or editorials, 10 studies examining biological or neuroscientific phenomena, and two studies focused on human or social sciences. Only five of the articles examined mediation empirically, and none applied qualitative developmental research for experimental therapeutics purposes. Our aim in this article is to discuss methods that can be used to apply the experimental therapeutics approach to gain new knowledge on how interventions work or do not work.
There is ample evidence of ineffective interventions, nonadherence to interventions, clinician resistance to evidence-based practices on the grounds of “fit” with client service plans, challenges of sustainability, implementation complexity, and excessive costs (5). Some of these problems can be addressed if researchers apply experimental therapeutics throughout the intervention development process. The methods we propose in this article are organized into four components (Figure 1), which researchers can draw on to embed experimental therapeutics in the development, refinement, and testing of interventions. These methods are not meant to be exhaustive; rather, our purpose is to describe strategies to move the field toward more acceptable, effective, and cost-efficient services and interventions.
Stepped Approach to Applying Experimental Therapeutics
Component 1: Outline Prerequisites
Identify a public health concern and potential targets for change.
An efficient first step is to conduct a systematic review of what is known in order to identify the most pressing public health needs and potential targets (mechanisms) to resolve significant problems for a given population. Systematic reviews reduce redundancy and result in knowledge to develop an initial conceptualization of the problem (6). Reviews that embed experimental therapeutics principles can point researchers toward effective or promising target mechanisms that are critical to understanding how an intervention affects outcomes and for whom (7). Systematic reviews can uncover targets found to be effective in improving outcomes as well as ineffective targets, thus directing teams to focus elsewhere (8–15). Finally, systematic reviews can discover differences in how individuals or groups respond to different intervention components or target mechanisms to help inform the tailoring of programs to the specific needs of subpopulations.
Incorporate genuine collaborative approaches.
Collaboration with stakeholders, such as community members and service users, is essential for long-term impact of interventions (16) and is instrumental throughout the process of applying experimental therapeutics principles to intervention development. This collaborative process includes formulating questions, identifying promising target mechanisms, and brainstorming intervention components. Researchers and advocates are calling for a deepening of engagement with diverse stakeholders, including service users (17–20). Some authors (21, 22) have characterized stakeholder involvement in research as superficial, lacking rigor, and in need of conceptual frameworks. Studies that do not embrace genuine collaboration are at risk of tokenism (21) and are more likely to develop interventions and to focus on mechanisms that lack relevance (23). Importantly, effective collaborations are more likely to lead to sustainable interventions (24) and an increase in capacity among community leaders, who remain in the community after the research has been completed (24, 25).
One approach to expanding community- and stakeholder-engaged research is to enlarge practice-based research networks (PBRNs). PBRNs focus on deepening collaborations between researchers and provider participants in an effort to pose questions with concrete clinical implications (26, 27). PBRNs are a form of community-based participatory research and increasingly have included community members (28) as well as peer providers (29). Expanding PBRNs to include service users would be a natural next step and could lead to more clarity on how change in outcomes occurs and to an increase in power sharing. Another approach to deepen stakeholder engagement is to reconceptualize translational research from a pipeline to an interlocking loop model that centers service user involvement at every stage (17). Finally, measures to assess community and stakeholder engagement can promote accountability in research (30–34). These measures can examine the quality, depth, consistency, transparency, and impact of community and stakeholder engagement in research. Such approaches may reduce the possibility of research and interventions that lack relevance for end users.
Component 2: Identify Promising Target Mechanisms
Several methods are useful for developing a program evaluation framework and for identifying potential mediators of change in outcome(s). Our approach is focused primarily on mediators of change, because they are the presumed variables that lie on the causal pathway between treatments or interventions and outcome effects.
Conduct in-depth qualitative research.
Qualitative studies can open the black box of interventions by eliciting an understanding of the processes that underlie change in behavior, which can help identify promising mediators of change (35–37). Such studies use open-ended questions with extensive probing to uncover how, or in what ways, an intervention works or does not work. For example, in adult mental health care, research (38–42) has validated the use of peer support. Yet, the field is only beginning to uncover what it is that peers do to make an impact (43, 44). Qualitative studies eliciting information from individuals who receive or provide peer support can help uncover the specific mechanisms, or pathways, to behavior change.
Qualitative research is also crucial for the development of valid and reliable measures of mechanisms or mediators when they do not yet exist, including necessary adaptations when measures are not valid for particular subpopulations. For example, researchers (45) have documented the need for measures that have strong psychometric properties for mechanisms detailed in the research domain criteria matrix (e.g., loss and sustained threat). Researchers can conduct studies with focus groups and individual interviews (e.g., elicitation studies) to inform the development of items and scales that accurately and reliably measure mechanisms tailored to the population of interest (46). Such development could overcome this limitation of psychiatry research, and collaboration with psychometricians is needed to develop measures after specific mechanistic targets have been identified.
Incorporate community-based system dynamics (CBSD).
CBSD is a process that can crowdsource intervention targets from community members (23, 47). It allows for the uncovering of relevant mechanisms through community engagement and enables the process of incorporating these mechanisms into system dynamic models. System dynamic models emphasize three stages: problem scoping and identification; core modeling, planning, and capacity building; and group model building workshops (23). Through such processes, system dynamic models construct representations of complex systems by using “stocks” (elements or properties within a system that can increase or decrease over time) and “flows” (rates of change in a stock over time) to identify the complex processes within the system (23). CBSD approaches both allow for identification of mediators or mechanisms from lived experience and collectively reduce a large set of possible mediators or mechanisms to a more manageable number suitable for an intervention or implementation study (48).
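To make the stock-and-flow vocabulary concrete, the following minimal sketch simulates a single hypothetical stock (service users engaged in care) with one inflow (referrals) and one outflow (dropout). All names and rates are illustrative assumptions and are not drawn from the studies cited above.

```python
# Minimal stock-and-flow simulation illustrating system dynamics concepts.
# All variable names and rates are hypothetical, for illustration only.

def simulate(weeks: int = 52, dt: float = 1.0):
    """Euler integration of a single-stock model: people engaged in care."""
    engaged = 100.0            # stock: service users currently engaged in care
    referral_rate = 8.0        # inflow: new referrals per week
    dropout_fraction = 0.05    # outflow: fraction of the stock lost per week

    history = []
    t = 0.0
    while t < weeks:
        inflow = referral_rate                  # flow into the stock
        outflow = dropout_fraction * engaged    # flow out of the stock
        engaged += (inflow - outflow) * dt      # the stock accumulates net flow
        history.append((t, engaged))
        t += dt
    return history

if __name__ == "__main__":
    for t, stock in simulate()[::13]:
        print(f"week {t:4.0f}: engaged = {stock:6.1f}")
```

In group model building workshops, community members specify which stocks, inflows, and outflows matter; simulations of this kind then show where interventions on a flow would move the stock of interest.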
The CBSD approach has been used to help prioritize interventions and implementation targets for suicide prevention (49), to identify factors that lead to help seeking for behavioral health problems (50), and to uncover factors that impede mental health service utilization (51). These variables, uncovered through CBSD, are targets for which researchers can construct specific intervention and implementation strategies. Uncovering mechanisms through community engagement, rather than by listening solely to academic investigators, may be a more sustainable and valid way of eliciting mechanisms and outcomes of importance.
Use concept mapping.
Concept mapping is a mixed-methods participatory group approach. This approach combines methods to represent perspectives of the group on problem resolution and paths to resolution (e.g., mechanisms) by using visual maps (52). Concept mapping consists of brainstorming around the project question (either in person or online), synthesis of ideas, organization and sorting of ideas, evaluation of ideas on the basis of relevant dimensions (e.g., feasibility, cost, and relative importance), representation of ideas on visual maps, interpretation of the maps, engaging in a rating process, and deciding how to proceed (52, 53).
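The analytic core of this process typically applies multidimensional scaling to participants’ sort data and then hierarchical clustering to derive candidate themes. The sketch below illustrates that pipeline on simulated sort data; the statement counts, pile structure, and cluster number are hypothetical.

```python
# Sketch of the analytic core of concept mapping (point map + cluster map).
# Sort data are simulated; in practice each participant sorts statements into
# piles, and a statement-by-statement co-occurrence matrix is computed.
import numpy as np
from sklearn.manifold import MDS
from scipy.spatial.distance import squareform
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(0)
n_statements, n_participants = 12, 20

# Simulate pile assignments: each participant sorts statements into 4 piles.
piles = rng.integers(0, 4, size=(n_participants, n_statements))

# Similarity = proportion of participants who sorted two statements together.
same_pile = (piles[:, :, None] == piles[:, None, :]).mean(axis=0)
dissimilarity = 1.0 - same_pile

# Point map: 2-D multidimensional scaling of the dissimilarity matrix.
coords = MDS(n_components=2, dissimilarity="precomputed",
             random_state=0).fit_transform(dissimilarity)

# Cluster map: hierarchical clustering of statements into candidate themes.
clusters = fcluster(linkage(squareform(dissimilarity, checks=False),
                            method="average"), t=4, criterion="maxclust")
print(coords.round(2))
print(clusters)
```

Statements that participants repeatedly sorted together land close on the point map and fall into the same cluster; the clusters are then labeled and rated by the group, yielding the visual maps described above.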
Concept mapping can be used to uncover intervention targets. For example, Onken (54) investigated the concept of a “supportive community” in mental health and found that participants’ basic needs, legal rights, community education, and availability of community services were its most important dimensions. Given this finding, interventionists interested in enhancing supportive communities can further operationalize these constructs and use them as targets for intervention studies. In addition to such analysis, concept mapping can be used to uncover mechanisms of how implementation strategies work or do not work. For example, Sommerfeld and colleagues (55) used concept mapping to uncover factors associated with the implementation of cognitive-behavioral social skills training (CBSST) within assertive community treatment programs. They conducted focus groups with 87 stakeholders, which led to an informative visual map with 14 mechanisms deemed important to successful implementation (55). After sorting and rating the mechanisms on importance and changeability, a smaller set of mechanisms emerged as most salient (e.g., training support, alignment of leadership, and perceived benefits of CBSST) (55). In addition to informing implementation efforts, these factors can be modeled and empirically examined in explanatory trials of the implementation of CBSST into assertive community treatment programs.
Completion of studies applying methods from the first two components can lead to an informed conceptual framework with an a priori set of mediators. Once the pool of relevant mediating targets has been identified, iterative research is needed to facilitate intervention development (i.e., component 3, described next). Methods used to accomplish this third task ultimately contribute to the development of the most promising program, service, or policy initiative that is replete with empirical support and protocols to prepare for component 4 (described further below).
Component 3: Identify Intervention Strategies That Address Target Mechanisms
Generate ideas for intervention strategies (What are the “active ingredients”?).
For each plausible target mediator, the question to be answered becomes, Exactly how does the program, service, or policy address or bring about change in that target? It is crucial for program developers to map the specific content, activities, and processes the program uses or will use to bring about mechanistic change, as well as to identify what communication, messaging, or structural changes are likely to most effectively bring about change (56–58). The attention shifts from changing the outcome per se to changing the mechanisms of that outcome, which, once changed, should bring about the desired change in the outcome. This shift in focus is subtle but is an important and defining feature of the experimental therapeutics approach.
Conduct feasibility and acceptability studies.
Feasibility and acceptability are crucial to any initiative. Developing new programs or translating efficacy studies (from highly controlled research settings) to routine clinical practice is complex. Small feasibility and acceptability studies can answer many important questions related to recruitment and retention, program or policy execution, acceptability, safety protocols, measurement, and fidelity. These elements are critical in preparing for a rigorous empirical trial.
Feasibility studies need to engage the population of interest in study development and provide time and space to listen, discuss their perspectives, and refine and possibly change intervention activities. The overarching principles in such activities are sometimes referred to as community-based research or community-partnered participatory research (59, 60).
Conduct preliminary impact studies.
Small developmental trials that incorporate random assignment can be used to preliminarily explore mediational chains that are presumed to influence outcomes when sample sizes and resources do not allow for larger, more sophisticated designs. Such preliminary impact studies set the stage for large-scale randomized controlled trials (RCTs) (61). One such type of randomized trial has been called the randomized explanatory trial (RET) (62). RET denotes trials that are “scientific in motivation and aimed at causal understanding” (63), in contrast with pragmatic trials that “evaluate therapeutic interventions in practice” (63). The RET concept was introduced >50 years ago, and considerable advancements have been made in trials that seek causal understanding of interventions. The current article provides an update of the RET concept by integrating it with experimental therapeutics and by adding modern conceptualizations and methods for trial-based causal analysis. (See the online supplement to this article for elaboration on what some readers may see as a different use of the RET term but which we propose can be useful for mental health services research.) RETs are not limited to small exploratory studies; they can also be large scale (61). We discuss larger trials in the next section.
Importantly, in the context of experimental therapeutics, small-scale pilot RETs inform investigators on whether a program needs refinement. RETs do so by identifying nonsignificant change in the presumed mediators that the program was hypothesized to change. Such nonsignificant change informs investigators of the need to revise program activities aimed at the mediator for which change was not achieved. This information is valuable before embarking on a large and costly trial (64). Pilot RETs also empirically examine whether the presumed mediators are relevant to (or are correlated with) the ultimate outcome as presumed by the intervention designers (61, 64). If a given mediator is found not to be empirically relevant, the team has important information to possibly consider alternative targets for change or to drop that target mechanism. This pilot RET approach is cost-efficient and provides key information for program refinement before costly large-scale trials are pursued.
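The two pilot RET checks described above can be carried out with very simple statistics. The sketch below illustrates both checks on simulated data; the sample size, variable names, and effect sizes are hypothetical.

```python
# Sketch of the two pilot RET checks on simulated data:
# (a) did the program move the presumed mediator?
# (b) is the mediator associated with the ultimate outcome?
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 40                                    # small pilot sample
treat = rng.integers(0, 2, size=n)        # random assignment
mediator = 0.5 * treat + rng.normal(size=n)
outcome = 0.6 * mediator + rng.normal(size=n)

# Check 1: change in the mediator attributable to the program.
t, p_med = stats.ttest_ind(mediator[treat == 1], mediator[treat == 0])
print(f"program -> mediator: t = {t:.2f}, p = {p_med:.3f}")

# Check 2: relevance of the mediator to the ultimate outcome.
r, p_out = stats.pearsonr(mediator, outcome)
print(f"mediator -> outcome: r = {r:.2f}, p = {p_out:.3f}")

# A nonsignificant Check 1 suggests revising the program activities aimed at
# that mediator; a nonsignificant Check 2 suggests reconsidering the target.
```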
Component 4: Use Advanced Analytic Methods
Once preliminary research shows initial support for program success, a fully powered trial can further examine the ability of the program to change target mechanisms and to identify which mediators (mechanisms) are most important to outcome change and for whom.
Conduct full-scale RETs.
A full-scale RET examines multiple mediators and moderators (what works for whom) simultaneously. A causal model links the program to the hypothesized mediators, links the mediators to outcomes, and then specifies moderators of both. Testable hypotheses or predictions are made on the basis of this model and then are empirically evaluated to provide perspectives on model viability (61). Such RETs address two core links: whether the program produces change in a given mediator and the strength of the relationship between the mediator and the outcome, thereby providing feedback on why a program works or does not work and how to improve it (65). If either of these links in the mediational chain is broken (i.e., nonsignificant), the broken link must be addressed in program revisions.
To implement a RET, a service team needs to use methods described in the three components discussed above to develop a strong conceptual logic model (65). In RETs, subgroup differences in program effects and mediator relevance are explored. Whereas many scientists define RCTs as a gold standard for evaluation research, we propose that RETs are an intriguing prospect and are worth exploration in mental health services research because of their capability to simultaneously examine multiple mediators (61). It is not enough for evaluation research to document whether a program works. Rather, we must know how to improve the program. RETs may prove to be an important tool in psychiatric services research to accomplish this goal.
Conduct dismantling studies and multiphase optimization strategies.
When multicomponent interventions are delivered and found to be efficacious, some natural questions arise: Are all components equally effective? Can some components be eliminated, thereby increasing efficiency and reducing costs? One way to address these questions is to conduct dismantling studies (66). A dismantling design is a decomposition of a multicomponent intervention in which investigators compare a smaller intervention (with only a subset of components) with the complete intervention. Results are usually reported as a noninferiority trial, in which equivalence testing is used to examine whether the smaller intervention is no less efficacious than the original intervention (67). Additional arms of such studies can compare subcomponents to one another. Rather than focus exclusively on outcomes, dismantling studies can include mediators to provide insights into the mechanisms through which each component influences (or fails to influence) outcomes.
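One common way to operationalize such an equivalence comparison is the two one-sided tests (TOST) procedure. The sketch below applies it to simulated outcomes for a dismantled versus complete intervention; the data and the equivalence margin are hypothetical assumptions.

```python
# Sketch of an equivalence (TOST) comparison between a dismantled version
# of an intervention and the complete intervention; data are simulated and
# the equivalence margin is a hypothetical, prespecified value.
import numpy as np
from statsmodels.stats.weightstats import ttost_ind

rng = np.random.default_rng(2)
full = rng.normal(loc=1.0, scale=1.0, size=120)      # complete intervention
reduced = rng.normal(loc=0.95, scale=1.0, size=120)  # subset of components

margin = 0.4  # prespecified margin on the outcome scale
p_overall, lower_test, upper_test = ttost_ind(reduced, full,
                                              low=-margin, upp=margin)
# A small overall p suggests the group difference lies within the margin,
# i.e., the reduced intervention performs comparably to the complete one.
print(f"TOST p = {p_overall:.3f}")
```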
The multiphase optimization strategy (MOST) is a way to reduce intervention components to a manageable number, not all of which may be active, and to reduce the time involved in serial experiments when assessing component efficacy (68). The MOST methodology involves first screening interventions to identify a smaller set of efficacious components (preparation phase). These components and their intensity or dose are then calibrated by using further experimental designs to arrive at a finalized (smaller) intervention containing the most efficacious subcomponents (optimization phase). This intervention can then be subjected to a two-arm or k-arm RET to establish efficacy (evaluation phase). These trials also can be extended beyond outcome-only thinking to include mediators of each surviving component. MOST designs can be implemented within the types of multilevel contexts commonly encountered in human services settings (69).
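The screening step in MOST typically relies on a factorial experiment that estimates a main effect for each component. The sketch below simulates a 2×2×2 factorial with one inert component; the component names and effect sizes are hypothetical.

```python
# Sketch of MOST-style component screening: a 2x2x2 factorial experiment
# with main-effect estimates per component. Names and effects are simulated.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 400
df = pd.DataFrame({c: rng.integers(0, 2, size=n)
                   for c in ["coaching", "reminders", "peer_support"]})

# Simulated outcome: two active components, one inert ("reminders").
df["outcome"] = (0.5 * df.coaching + 0.4 * df.peer_support
                 + rng.normal(size=n))

fit = smf.ols("outcome ~ coaching + reminders + peer_support", data=df).fit()
print(fit.params.round(2))
# Components with negligible estimated effects become candidates for removal
# in the optimization phase, before the evaluation-phase trial.
```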
Conduct classical mediational analysis.
Numerous analytic frameworks for both mediation and moderation have been elaborated. Many investigators working in human services research settings are unable to conduct RCTs because of ethical or feasibility concerns. These researchers need to find ways to strengthen causal inference in their observational studies. The strategies described in this section, along with those in the following sections, can help mitigate bias in observational designs. For mediation, initial analytic efforts relied on the modeling of single mediators in regression contexts. Following the early work of Judd and Kenny (70) and Baron and Kenny (71), this approach involves estimating models with and without the presumed mediator and then examining any differences in coefficients linking the treatment to the outcome in the two scenarios. This approach is sometimes referred to as the “difference” method. A second approach involves obtaining the product of two coefficients generated from two separate models: one that regresses the mediator on the intervention and another that regresses the outcome onto the mediator (while controlling for relevant covariates). Mediation is reflected by multiplying select coefficients across the two regression analyses. Such a coefficient product approach is often based on simplified Sobel-like tests and is referred to as the “product” method (72). These and other so-called traditional methods of mediation represent popular approaches to mediational analysis, especially for clinical trials (73). Two more modern methods of analysis have emerged that are worth considering as alternatives: one based in traditional structural equation modeling (SEM) and the other called causal mediation analysis, derived from Pearl’s (74) structural causal modeling (SCM) framework.
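The sketch below implements both classical approaches on simulated single-mediator data, estimating the indirect effect by the difference method and by the product method with a Sobel test; all variables and effect sizes are hypothetical.

```python
# Sketch of the two classical single-mediator approaches on simulated data:
# the "difference" method and the "product" (Sobel) method.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
n = 300
x = rng.integers(0, 2, size=n).astype(float)   # treatment
m = 0.5 * x + rng.normal(size=n)               # mediator
y = 0.4 * m + 0.2 * x + rng.normal(size=n)     # outcome

X = sm.add_constant(x)                          # [const, x]
XM = sm.add_constant(np.column_stack([x, m]))   # [const, x, m]

# Difference method: total effect c minus direct effect c'.
total = sm.OLS(y, X).fit().params[1]            # c: x -> y, ignoring m
direct = sm.OLS(y, XM).fit().params[1]          # c': x -> y, adjusting for m
print(f"difference method: indirect = {total - direct:.3f}")

# Product method: a (x -> m) times b (m -> y given x), with a Sobel test.
fit_a = sm.OLS(m, X).fit()
fit_b = sm.OLS(y, XM).fit()
a, b = fit_a.params[1], fit_b.params[2]
sa, sb = fit_a.bse[1], fit_b.bse[2]
sobel_se = np.sqrt(a**2 * sb**2 + b**2 * sa**2)
print(f"product method: ab = {a * b:.3f}, Sobel z = {a * b / sobel_se:.2f}")
```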
Use SEM and SCM.
SEM is an elegant multivariate framework for testing whether causal models depicted by influence diagrams are consistent with experimental or observational data generated by research designed to provide perspectives on causal dynamics surrounding interventions (61). The causal model represented by the diagram makes predictions about how the research data should be patterned. If the predictions are borne out, one has increased confidence in the hypothesized causal model. If the predictions are not borne out, the model is rejected. Importantly, SEM can address multiple mediators, causal relationships among mediators, correlated disturbances, measurement error, longitudinal dynamics, and interaction effects among mediators as well as between treatments and mediators; it can also handle both linear and nonlinear relationships for variables measured with diverse metrics (e.g., ordinally scaled or binary variables), all while allowing for control of confounders. For examples with RETs, see Jaccard (61) and Jaccard and Bo (65); for an introduction to SEM more generally, see Kline (75) and Hoyle (76); for critiques of SEM, see Bollen and Pearl (77). SEM is a far more powerful method for analyzing mediation than traditional methods based on difference and product coefficient approaches.
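As one illustration of SEM’s ability to test multiple mediational chains at once, the sketch below fits a two-mediator structural model on simulated data. It assumes the open-source Python package semopy, which accepts lavaan-style model syntax (lavaan in R is a widely used alternative); all variables and effects are hypothetical.

```python
# Sketch of a two-mediator structural equation model, assuming the semopy
# package; data are simulated and all paths are hypothetical.
import numpy as np
import pandas as pd
import semopy

rng = np.random.default_rng(5)
n = 500
x = rng.integers(0, 2, size=n).astype(float)
m1 = 0.5 * x + rng.normal(size=n)
m2 = 0.3 * x + rng.normal(size=n)
y = 0.4 * m1 + 0.2 * m2 + 0.1 * x + rng.normal(size=n)
df = pd.DataFrame({"X": x, "M1": m1, "M2": m2, "Y": y})

# Each line specifies one set of structural regressions in the causal diagram.
desc = """
M1 ~ X
M2 ~ X
Y ~ M1 + M2 + X
"""
model = semopy.Model(desc)
model.fit(df)
print(model.inspect())  # path estimates for both mediational chains at once
```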
SCM is related to SEM but has evolved from different mathematical and statistical traditions. A noteworthy facet of SCM is known as causal mediational analysis (CMA), which is used to model mediators and relies on the potential outcomes framework to conceptualize causality (78). CMA is concerned with, among other things, capturing effects of unobserved variables that influence both the mediator and the outcome (“mediator-outcome confounding”) and with estimating interaction effects between the intervention and the mediator (“exposure-mediator interactions”). CMA uses definitions of direct (i.e., unmediated) and indirect (i.e., mediated) effects and estimates models in ways that accommodate interactions, nonlinearities, and other complications of observational data; for details of this method, see Hicks and Tingley (79), Imai et al. (80), and VanderWeele (81); for critiques of CMA, see Keele (82). A variety of macros in SAS, Stata, and SPSS allow for the fitting of such mediational models for single mediator scenarios, but these models need to be extended to the multiple mediator contexts that typify program evaluation research. Both SCM and CMA, being rooted in a theory of causality, represent robust alternatives to traditional methods of estimating causal effects.
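Beyond the SAS, Stata, and SPSS macros noted above, the Python library statsmodels offers a Mediation class that implements the potential-outcomes approach of Imai and colleagues (80) for a single mediator. The sketch below applies it to simulated data; the variable names and effects are hypothetical.

```python
# Sketch of causal mediation analysis for a single mediator, using the
# Mediation class in statsmodels (an implementation of the Imai et al.
# potential-outcomes approach); data are simulated.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.mediation import Mediation

rng = np.random.default_rng(6)
n = 500
df = pd.DataFrame({"treat": rng.integers(0, 2, size=n).astype(float)})
df["mediator"] = 0.5 * df.treat + rng.normal(size=n)
df["outcome"] = 0.4 * df.mediator + 0.2 * df.treat + rng.normal(size=n)

# Unfitted outcome and mediator models, passed to the mediation procedure.
outcome_model = sm.OLS.from_formula("outcome ~ treat + mediator", data=df)
mediator_model = sm.OLS.from_formula("mediator ~ treat", data=df)

med = Mediation(outcome_model, mediator_model, "treat", "mediator")
print(med.fit(n_rep=200).summary())  # ACME (indirect), ADE (direct), totals
```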
Discussion and Conclusions
Mental health services research could benefit from balancing implementation with a deeper understanding of how programs, services, and policies work, what has been deemed the “science of how” (4). In this article, we have described methods that can be applied to systematically evaluate whether presumed relationships are empirically supported while refining a program or policy initiative and uncovering potential mechanisms of change. The approach of focusing on mechanisms of change is also informative for advancing implementation science, as has been articulated by Lewis and colleagues (83). Although experimental therapeutics can slow the research process in terms of pragmatic deliverables, the methods ultimately can help answer the ever-important questions of how and for whom an intervention works, thereby achieving long-term goals of effective interventions more quickly. A key asset of the approach is that it can help inform why certain interventions do not work in certain communities or what unique community-specific mechanisms exist that require targeting. The experimental therapeutics approach challenges the assumption that mechanisms of action are the same within all subgroups of people and accelerates the development of more specific, and more effective, interventions that can help meet the health and social needs of vulnerable populations.
Acknowledgments
The authors are grateful to their colleagues for reading and commenting on the manuscript.