School mental health programs are reducing the gap between youths who need and receive mental health services (1). This improvement is related to the many advantages the school context offers for prevention, early intervention, and intervention (1,2). The trend is also due to federal and state policies (3–6) that call for increased school mental health and the use of evidence-based programs and practices. However, evidence-based programs are often implemented in schools unsuccessfully or with poor quality, which increases the need to understand barriers (7) and capacities necessary for high-quality implementation (2) and to elucidate the connection between the delivery of effective evidence-based programs in schools and the achievement of valued outcomes (8).
In 2006, the Substance Abuse and Mental Health Services Administration (SAMHSA) launched the National Registry of Evidence-Based Programs and Practices (NREPP), an objective, scientifically based mechanism for tracking prevention and intervention programs for mental health problems and substance abuse and for helping users evaluate their applicability. Programs are voluntarily submitted for consideration by their developers and evaluated by independent reviewers trained by SAMHSA. The registry documents the empirical support and the resources and materials required for implementing these programs, and it identifies their readiness for dissemination. SAMHSA does not endorse, promote, or approve particular programs, leaving decisions about program selection to the user. Few studies have examined the factors associated with use of NREPP in relation to the actual implementation of programs and practices in community and school settings.
A recent comprehensive review of mental health interventions in NREPP considered programs implemented for all ages and across all settings, including clinical, school, community, and home settings (9). Findings indicated that a greater proportion of youth programs were exclusively proprietary (that is, fees were charged by developers) compared with programs for adults or for adults and youths. Although the review (9) did not focus on the ability of schools, specifically, to implement these programs, it identified potential barriers to program implementation: 57% of intervention program materials and trainings were found to be proprietary. Although exclusively proprietary programs achieved significantly lower NREPP ratings for quality of research compared with programs that provided some or only publicly available materials, programs that provided only publicly available materials yielded lower readiness-for-dissemination scores, creating challenges in decision making for potential consumers (9).
We used this prior review (9) as the basis for this study and examined programs listed in NREPP as appropriate for schools. We describe program characteristics according to each program's level of intervention, area of focus, intended outcome, and proprietary status, and we calculated program costs to examine associations with program characteristics and with the ratings of quality of research and readiness for dissemination provided on NREPP's Web site. Our specific aims were to describe differences among programs with descriptive statistics for program characteristics, costs, and ratings; to examine correlations between program costs and ratings; and to test, with analysis of variance (ANOVA), differences in program costs as a function of program characteristics and ratings. Implications of this review are discussed, including NREPP's practical application, the feasibility of conducting and maintaining these programs within schools, and future directions for school mental health programs.
Methods
During the summer of 2011, we conducted a review of programs in NREPP's database that were classified as mental health promotion or treatment (or both) and were deliverable in school settings. Reference to this date is critical because of the registry's ongoing review process; interventions are continuously added as developers elect to participate through a voluntary, self-nomination system for review. We note, however, that SAMHSA's process does not review all programs and that NREPP is not an exhaustive registry.
Using information provided on the registry, we describe program characteristics, including intervention tier (that is, whether the intervention was universal, selective, or indicated), outcomes (substance use, academic achievement, violence rate, externalizing behaviors, family functioning, and suicide rate), mechanism used to reach outcomes (such as social or emotional skills, parenting, system-level delivery, cognitive-behavioral approaches, psychoeducation, or a combination of these), and proprietary status (whether materials and services were available only for a fee, at no cost, or some combination). Program costs were calculated for the first year of use with ten students and for continuing the program a second year, on the basis of the expenses that the program developers listed (such as for workbooks, training, software, and fidelity monitoring tools). Costs reflected the sum of the items listed by the program developer; costs for two programs could not be determined from the information provided. We acknowledge that these estimates may not encompass costs for hiring staff to deliver programs unless specified by the developer.
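To make the cost calculation concrete, a minimal sketch follows; the expense items, amounts, and recurrence flags are hypothetical, not drawn from any NREPP program. First-year cost sums every listed item; second-year cost sums only items that recur (such as consumable workbooks).

```python
# Illustrative sketch of the cost calculation: sum all developer-listed
# items for year 1, and only recurring items for year 2. All labels and
# dollar amounts below are hypothetical placeholders.

# Each cost item: (label, amount in dollars, recurs_in_year_two)
program_items = [
    ("training", 1500.00, False),              # one-time on-site training
    ("manual", 250.00, False),                 # implementation manual
    ("student workbooks", 12.50 * 10, True),   # consumable, for ten students
    ("fidelity monitoring tool", 300.00, True),
]

first_year_cost = sum(amount for _, amount, _ in program_items)
second_year_cost = sum(amount for _, amount, recurs in program_items if recurs)

print(f"Year 1: ${first_year_cost:,.2f}")                             # $2,175.00
print(f"Year 2: ${second_year_cost:,.2f}")                            # $425.00
print(f"Two-year total: ${first_year_cost + second_year_cost:,.2f}")  # $2,600.00
```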
We considered program ratings, as scored by SAMHSA reviewers, on quality of research and readiness for dissemination. Quality was measured on a 5-point scale ranging from 0, not acceptably addressed, to 4, acceptably addressed. Readiness for dissemination was measured on a 5-point scale ranging from 0, not available, to 4, adequate materials available. The SAMHSA reviewer assessing quality of research assigns ratings according to reliability and validity of measures, intervention fidelity, missing data and attrition, and potential confounds accounting for intervention effects. Readiness-for-dissemination scores are assigned on the basis of the reviewer’s evaluation of implementation material, training and support resources, and availability of quality assurance procedures.
Results
At the time of the review, the search yielded 67 (34%) of the 200 interventions on the registry that met the search criteria (specified by NREPP as being applicable in schools and focused on mental health promotion or treatment); however, four were excluded from this review because their target ages did not include youths. In regard to program characteristics, almost half of the 63 remaining programs consisted of universal mental health promotion (N=29, 46%). Four (6%) programs were classified as selective, and three (5%) were indicated, whereas others addressed a combination of tiers. Only six (10%) programs provided support across all tiers, and five (8%) developers indicated that the program was not classifiable in this way.
The most common intervention outcomes addressed were youth substance use (N=15, 24%), followed by academic achievement (N=12, 19%), violence prevention (N=10, 16%), externalizing behaviors (such as attention-deficit hyperactivity disorder, conduct disorder, and aggression; N=8, 13%), family functioning (N=8, 13%), and suicide prevention (N=5, 8%). Programs used different mechanisms to address mental health: 19 (30%) programs targeted youth social skills, emotional skills, or both; seven (11%) addressed parenting; four (6%) aimed efforts at system-level delivery; four (6%) used cognitive-behavioral approaches; three (5%) were primarily psychoeducational; and 26 (41%) targeted more than one of these areas.
Twenty-eight (44%) of the 63 programs provided all materials, training, and support at a cost; no program items were listed as available for free. Only one program (2%), TeenScreen (10), was publicly available at no cost. TeenScreen's focus is to identify middle school and high school youths who need mental health services because of suicide risk or other mental health concerns. TeenScreen reported free training, manuals, survey tools, consultation, and technical assistance, although costs for staff to administer the screening tool were not itemized in the estimate. This program was the exception; most programs identified expenses associated with program materials and manuals, training, measures, or technical support for adaptation or implementation. Program costs ranged from $0 to $50,000 per year of implementation. The mean±SD program cost was $4,338±$7,589 in the first year and an additional $1,856±$6,805 for continuing the program a second year.
Program scores for quality of research (2.92±.56) and readiness for dissemination (3.34±.66) and their association with program costs are shown in Table 1. Notably, quality-of-research scores were not correlated with readiness-for-dissemination scores (r=.07, ns). Although quality of research was not significantly correlated with first-year costs, as quality scores increased, costs associated with second-year implementation (r=.34, p<.01) and total costs across the two years (r=.28, p<.05) increased.
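These associations are simple Pearson correlations. A minimal sketch of this type of analysis follows; the arrays are randomly generated stand-ins (sized to the 61 programs with determinable costs), not the study's data.

```python
# Illustrative Pearson correlation between quality-of-research ratings
# and second-year costs; both variables are simulated placeholders.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
quality = rng.uniform(2.0, 4.0, size=61)                  # 0-4 rating scale
year2_cost = 1000 + 800 * quality + rng.normal(0, 1500, size=61)

r, p = pearsonr(quality, year2_cost)
print(f"r = {r:.2f}, p = {p:.3f}")
```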
ANOVAs examining differences in costs, quality-of-research scores, and readiness-for-dissemination scores as a function of program characteristics indicated differences in program costs as a function of program tier (F=4.88, df=1 and 55, p<.05). Programs that focused on a single tier (universal, selective, or indicated) had significantly lower costs in the first year ($2,626±$3,284) than programs that addressed multiple tiers ($7,247±$11,699). Program readiness-for-dissemination scores also differed as a function of the tier addressed (F=8.05, df=3 and 54, p<.01). Post hoc analyses indicated that programs that addressed only indicated supports yielded the poorest ratings for readiness for dissemination (1.93±.15), compared with universal interventions (3.48±.55) and interventions targeting multiple tiers (3.45±.59).
ANOVAs did not indicate significant differences in costs, quality-of-research scores, or readiness-for-dissemination scores as a function of program outcome or of the mechanism targeted to address the outcome. Analyses indicated differences in program costs as a function of proprietary status (first-year cost, F=4.78, df=1 and 58, p<.05; total cost across two years, F=4.07, df=1 and 58, p<.05). Programs for which some materials, training, or support were provided at cost and others for free had significantly lower costs in the first year ($2,521±$2,903) and overall ($3,081±$4,202) than programs that provided all resources at cost (first year, $6,719±$10,570; total, $10,229±$19,863).
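A minimal sketch of this type of one-way ANOVA follows, using the cost comparison by tier as the example. The group sizes (36 single-tier and 21 multiple-tier programs, chosen to match df=1 and 55) and the cost values are simulated, not study data.

```python
# Illustrative one-way ANOVA comparing first-year costs between
# single-tier and multiple-tier programs; all values are simulated.
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(1)
single_tier = rng.normal(2626, 3284, size=36).clip(min=0)   # simulated costs
multi_tier = rng.normal(7247, 11699, size=21).clip(min=0)

f_stat, p_val = f_oneway(single_tier, multi_tier)
print(f"F = {f_stat:.2f}, p = {p_val:.3f}")  # df = 1 and 55
```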
Discussion and conclusions
In summary, mental health treatment and promotion programs that can be delivered in schools accounted for 32% of the NREPP registry, and only 10% of those supported the provision of services across the continuum of prevention and selective and indicated intervention. Nearly all programs required substantial expenses, and cost estimates generally did not include financial support and infrastructure for staffing and program sustainability; almost half of the programs (44%) charged fees for all materials, training, and support. Although costs for materials, training, and support were higher for programs that addressed multiple tiers than for those addressing a single tier, adopting a single-tier program may require purchasing additional programs to meet the needs of students in the other tiers. Readiness for dissemination was also low for programs addressing only tertiary or indicated supports in comparison with those addressing multiple tiers.
As found in the recent, more general review of NREPP (9), this review, which focused on school-based programs for youths, suggests that navigating evidence-based program registries to increase the likelihood of evidence-based practice is a challenging and multidimensional set of tasks. Effective integration of mental health intervention into the school environment often includes training school mental health professionals to work effectively in schools, including developing and maintaining relationships with school administrators, teachers, health staff, and others, and understanding relevant educational regulations and policies (11). Notably, the measurement and development of these relationships in school mental health continues to be explored empirically relative to interdisciplinary collaboration and resource sharing (12) and quality assessment and improvement (11).
Although resource sharing and the quality of interventions implemented with manuals are potentially captured by NREPP's readiness-for-dissemination rating, additional assessment of the resources and infrastructure capacities needed to deliver these programs should be considered, given that they are essential to successful implementation (13). In addition, considering modular approaches for tertiary interventions may prove useful in advancing understanding of the transportability and implementation of such programs in school settings (14). Further research is needed to understand the organizational qualities (such as infrastructure, financing, implementation support, and emphasis on quality) that influence successful use of such registries and the transportability of evidence-based programs and practices into common practice settings.
Acknowledgments and disclosures
The authors report no competing interests.