In recent years, many states have taken on the challenge of educating their existing mental health care workforce in evidence-based practices (EBPs), both to improve the quality of care provided to children and adolescents and to address a shift toward reimbursing services for quality rather than quantity (1, 2). Because of the complexity of EBPs, training clinicians to fidelity has proven challenging. There are scant data evaluating these state training programs and even fewer data on whether the use of EBPs is sustained after training (3, 4). Reviews suggest that sustainment of EBPs has been examined less frequently than implementation (4) and that the literature on sustainability is “fragmented and underdeveloped” (5).
New York State (NYS) has been a leader in efforts to educate its existing mental health workforce in EBPs. After September 11, 2001, a large percentage of the NYS population needed mental health services, but the workforce was poorly prepared to deliver them. Through a partnership with Columbia University, the NYS Office of Mental Health launched a training program in cognitive-behavioral therapy for childhood trauma (6). The success of this early effort led to the establishment of the Evidence-Based Treatment Dissemination Center (EBTDC) in 2006. EBTDC was designed as a quality improvement initiative for the clinical workforce in agencies licensed by the NYS Office of Mental Health to serve children and adolescents. In its first 3 years, EBTDC trained 916 clinicians and 275 supervisors (7).
In 2013, EBTDC began training in Managing and Adapting Practice (MAP), an evidence-informed approach that guides clinicians in selecting treatments matched to a child’s characteristics (8). MAP was selected because it covers a broad range of child populations, provides clinician-friendly decision support tools with measurable outcomes, and has been successfully implemented across the United States (1, 9). Although the initial program successfully trained clinicians to fidelity, dropout from training was high (51.2%). Older clinicians were more likely to drop out, as were clinicians from downstate urban areas (1). To reduce dropout, multiple modifications were made to the NYS MAP training. After these modifications, the dropout rate decreased significantly, to 12.3%, and the only predictor of dropout was a low score on the appeal subscale of the Evidence-Based Practice Attitude Scale (EBPAS) (2, 10).
However, no data were collected on sustained use of MAP. Therefore, the objectives of this longitudinal cohort study were to document the rate of sustained use of MAP after completion of the NYS-sponsored training and to identify the characteristics related to sustained use.
Methods
The study population consisted of 89 clinicians who were employed in licensed NYS Office of Mental Health agencies serving children and adolescents and who were trained to proficiency in MAP from January 1 through December 31, 2016 (8). Nine to 18 months posttraining, clinicians were contacted via e-mail up to seven times and asked whether they were still using MAP and their reason for use or nonuse. Fifty-one (57%) clinicians completed the survey. The data collection part of the EBTDC program is considered to be a quality improvement activity and did not require institutional review board review.
Sociodemographic and professional practice characteristics (such as age, sex, race-ethnicity, education level, hours worked, service setting, and licensure) and reasons for enrolling in MAP training were all assessed prior to training. The Texas Christian University (TCU) Survey of Organizational Functioning Efficacy and Director Leadership scales (11) were also administered prior to training; these scales have excellent reliability (12). Clinicians’ familiarity and experience with Excel were assessed at the same time.
The EBPAS was administered after the didactic training and again after the postconsultation Webinars (10). Three of the four subscales (appeal, openness, and divergence; 12 questions in total) were administered. The EBPAS has moderate to good internal consistency and reliability (10, 13). The language was slightly edited to ascertain clinicians’ attitudes toward MAP specifically rather than toward EBPs in general. Responses to the individual items in each subscale (ranging from 1 to 5) were averaged to create a mean subscale score.
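To make the scoring rule concrete, the sketch below (in Python, a minimal illustration rather than the study’s actual code) computes a mean subscale score from 1–5 item responses; the example item values are hypothetical.

```python
# Minimal sketch of the scoring rule described above: each EBPAS
# subscale score is the mean of its 1-5 Likert item responses.
def subscale_mean(responses):
    """Return the mean of Likert item responses, each scored 1-5."""
    if not all(1 <= r <= 5 for r in responses):
        raise ValueError("EBPAS items are scored from 1 to 5")
    return sum(responses) / len(responses)

appeal_items = [3, 4, 2, 3]          # four-item appeal subscale (example values)
print(subscale_mean(appeal_items))   # 3.0
```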
Categorical data were summarized with counts and percentages, whereas continuous data were summarized with means and standard deviations. Differences between the full sample of trained clinicians and the follow-up participants, between follow-up participants and nonparticipants, and between clinicians who continued to use MAP (users) and those who did not (nonusers) were evaluated with unpaired t tests for continuous data, chi-square tests for categorical data, and logistic regression analyses. All analyses used SPSS, version 23.0 (14).
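For readers who want to reproduce this analytic workflow outside SPSS, the following Python sketch (scipy/statsmodels) mirrors the comparisons described above; the data file and column names (map_followup.csv, map_user, ebpas_appeal, license_type) are hypothetical placeholders, not artifacts of the study.

```python
import numpy as np
import pandas as pd
from scipy import stats
import statsmodels.api as sm

# Hypothetical follow-up data set: one row per clinician
df = pd.read_csv("map_followup.csv")
users = df[df["map_user"] == 1]
nonusers = df[df["map_user"] == 0]

# Unpaired t test for a continuous measure (e.g., EBPAS appeal score)
t_stat, p_val = stats.ttest_ind(users["ebpas_appeal"], nonusers["ebpas_appeal"])

# Chi-square test for a categorical characteristic (e.g., licensure)
chi2, p_cat, dof, expected = stats.chi2_contingency(
    pd.crosstab(df["map_user"], df["license_type"])
)

# Logistic regression of sustained use on the appeal subscale score;
# exponentiated coefficients are odds ratios
logit = sm.Logit(df["map_user"], sm.add_constant(df["ebpas_appeal"])).fit()
print(np.exp(logit.params))
```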
Results
Table 1 displays the characteristics of the 89 clinicians trained in 2016 and the 51 (57%) clinicians who responded to the follow-up survey. There were no statistically significant differences between the trained clinicians and the subgroup who completed the follow-up survey in sociodemographic or practice characteristics or in reasons for MAP training. Additionally, the two groups did not differ in their evaluation of organizational leadership and efficacy as measured by the TCU scales, in technology skills, or in EBPAS scores. Compared with responders (N=51), nonresponders (N=38) were less likely to be female (84% versus 100%, p=.003), more likely to report that MAP training was required by their agencies or supervisors (55% versus 33%, p=.05), and less likely to have a personal interest in using MAP (34% versus 63%, p=.01).
Sociodemographic and practice characteristics did not differ between MAP users (N=41, 80%) and nonusers (N=10, 20%). Similarly, there were no differences between MAP users and nonusers in the reason for seeking MAP training, the results of the TCU efficacy and director leadership subscales, the technology questions, or scores on the EBPAS total scale at the end of the didactic training. The most common reason given for discontinued use was lack of time. Differences between MAP users and nonusers were observed in scores on the EBPAS administered after the consultation Webinars, both for the total score (2.80±.44 versus 2.38±.44) and for the appeal subscale (2.69±.52 versus 2.19±.51). Logistic regression results (data not shown) suggested that clinicians who scored high on the appeal subscale were more likely than those who scored low to have continued using MAP (odds ratio=5.98, 95% confidence interval=1.31–27.36).
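As a worked illustration of how the reported odds ratio maps onto the underlying logistic regression output, the sketch below back-calculates the coefficient and standard error implied by the published odds ratio and 95% confidence interval; these derived values are approximations for illustration, not the study’s actual model estimates.

```python
import math

# Back-calculate the logistic regression coefficient (log odds ratio)
# and its standard error from the reported OR and 95% CI.
or_, ci_low, ci_high = 5.98, 1.31, 27.36
beta = math.log(or_)                                       # ~1.79
se = (math.log(ci_high) - math.log(ci_low)) / (2 * 1.96)   # ~0.78

# Round-trip check: exp(beta +/- 1.96*se) reproduces the reported CI
print(math.exp(beta - 1.96 * se), math.exp(beta + 1.96 * se))  # ~1.31, ~27.36
```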
Discussion
Data from this follow-up study of a cohort of clinicians trained in an evidence-informed practice, MAP, suggest that 80% of clinicians who responded to the follow-up survey continued to use the EBP for 9 to 18 months after training. Given the investment that states are making in training agencies and providers, this is an encouraging finding. Brookman-Frazee and colleagues (15) examined the sustained use of six EBPs over a 57-month period among clinicians practicing in the Los Angeles Department of Mental Health. They found that 88.9% of clinicians made claims for at least one of the six EBPs and submitted claims for one or more EBPs for an average of 21.71 months. Notably, MAP was one of two EBPs with a lower risk of discontinuation than the other EBPs examined. Thus MAP appears well suited for use in community mental health settings that serve children. Southam-Gerow et al. (9), also examining MAP implementation in Los Angeles County, reported that 75% of clinicians were trained to proficiency and concluded that MAP could be implemented on a large scale.
Given the importance of sustained use of EBPs to improving the quality of mental health care for children in community settings, the ability to identify trainees who are unlikely to continue using an EBP is critical. Our data suggest that the four-item appeal subscale of Aarons’ EBPAS (10) can differentiate between those who will and will not sustain use. This short set of questions, which in earlier work identified those who dropped out of training (2), can also be used to identify those who might benefit from additional training to improve sustained use of MAP. Interestingly, only the total subscale score, rather than any individual item, predicted sustained use. Clinicians who discontinued use were more likely to report a lack of time to use the MAP system. Future research is needed to document whether providing additional training for clinicians who score low on the EBPAS appeal subscale improves rates of sustained use.
This study had certain limitations. The sample consisted of clinicians who practiced in NYS licensed mental health agencies and who volunteered for MAP training; thus, it was self-selected. Although the cohort of 89 clinicians trained in 2016 was contacted by e-mail up to seven times, the response rate (57%) was suboptimal. Although there were no statistically significant differences between the trained cohort and the follow-up sample in sociodemographic or practice characteristics, reasons for seeking MAP training, or attitudes toward organizational efficacy and leadership, undetected differences between the samples could have biased the results. Further, although considerable staff turnover in these community agencies likely accounts for some of the loss to follow-up, other possible reasons for nonresponse include a lack of perceived relevance of MAP, a lack of agency support for participation, and clinical demands. Comparisons between survey responders and nonresponders suggest that personal interest in using MAP and feeling required by agencies or supervisors to participate in training may have driven nonresponse. Finally, MAP use was self-reported, and no data were collected on fidelity to MAP or on whether its use was related to improved client outcomes.
Conclusions
Data from NYS show that MAP was successfully adapted for use in a state system. Clinicians were trained to fidelity (2), and 80% reported having sustained use of the EBP for 9 to 18 months after training. Importantly, these data also show that the four-item appeal subscale of Aarons’ EBPAS (10) can be used to identify clinicians who are likely to discontinue use and who should be targeted for additional training.