The Assessing the Evidence Base (AEB) series advances understanding of the current evidence supporting school mental health (SMH) interventions. Systematic reviews, such as those in the series, have become pivotal resources for decision makers concerning the acceptability and fit of SMH interventions; their effective implementation; their impact on school climate and on student academic, social, emotional, behavioral, and mental health functioning; and their influence on the training, coaching, and retention of a workforce facing many challenges (e.g., in recruitment, retention, and career advancement). Systematic reviews also guide the aims of researchers seeking to elucidate the key details that decision makers require. These well-organized and thoughtful reviews are enhanced by tables that clearly chronicle the findings (1–3). The authors categorize the interventions on the basis of key program components that are critical to effectiveness, providing helpful information for decision makers in schools. The good news is that the reviews affirm strategies that will help scale effective SMH efforts.
Yet the reviews also unveil critical shortcomings that cloud understanding of the evidence base. Specifically, education and mental health system leaders do not have sufficient information to make strong evidence-informed decisions as they move toward true system integration and beyond the common pattern of merely co-located SMH services delivered by providers from the mental health system (e.g., staff not integrated into team meetings or schools’ multitiered systems of support [MTSS], and educators generally unaware of the interventions provided to students and families). The reviews highlight an enduring problem for the field: understanding what works, for whom, and under what conditions. This knowledge gap is further complicated by the multitude of intervention targets (e.g., aggression, disruption, oppositional behavior, self-regulation, peer relations, attention problems, and social skills); the diverse ages and cultural and ethnic backgrounds of students; and the local relevance of specific interventions, most of which were developed in different contexts, such as other states and countries. Education and mental health system leaders will face challenges in selecting and investing in SMH programs when so little is known about the generalizability of the findings.
The problem of generalizability extends to other factors as well. Few studies reported (or included) race-ethnicity data, and those that did largely implemented the intervention with a diverse sample of students without investigating treatment effect heterogeneity among subpopulations. Moreover, interventions for high school youths are mostly absent from the reviews. Equally concerning is the absence of data on effectiveness with LGBTQ+ students, a group particularly vulnerable to mental health challenges. Further, the strength of the evidence appears to be linked to the number of studies conducted, and few studies reported null effects, again a common problem in intervention research synthesis. Finally, there is essentially no information about cost or cost-effectiveness. A highly relevant factor here is the proliferation of proprietary programs sold to schools, many with a limited (or no) evidence base and manuals that simply sit on shelves. These challenges highlight the need for significant advances in investigating how intervention effects differ under real-world conditions.
Despite the promise of many of the reviewed interventions, there is very little evidence that schools have the capacity and motivation to routinely implement them as intended. The field of implementation science offers many relevant directions on this issue, but implementation in most school settings remains poor. There is growing agreement, supported by the current reviews, that anchoring all programming within well-developed MTSS in schools (as best articulated by positive behavioral interventions and supports, with effective teams, data-based decision making focused on equity, programming aligned across prevention, early intervention, and treatment tiers, and progress and outcome monitoring) sets the stage for effectiveness and impact. However, MTSS is variably implemented in schools across the country, and there is a glaring need to enhance coaching and the ongoing training of MTSS coaches, for which very few models exist.
Would further explicating the factors that influence the effectiveness of existing interventions offer a greater contribution to the field? Similarly, which variables (e.g., cost, cost-effectiveness, ease of implementation, and subgroup effectiveness) matter most to decision makers and should therefore be a routine focus of research? These timely reviews in the influential AEB series underscore the growing promise and importance of the SMH field but also point to a range of research needs that should inform interconnected practice and policy.