In recent years, coordinated specialty care (CSC) providers have worked to collect client-level data by using harmonized outcome and process measures (e.g., assessment batteries) in an effort to implement data-informed care (1). A core assessment battery often includes several provider-rated scales and client self-report measures that assess psychiatric symptoms, quality of life, and service utilization (2). As CSC continues to be adapted and improved, data-driven care has become an essential component of many models (3). Yet, the implementation of data collection measures that enable the data-driven component of care in real-world settings has received little attention.
At the program level, the collection and monitoring of outcome measures can improve the quality of providers’ service delivery when providers are part of a feedback monitoring system (4). The delivery of measures and collection of data can also set the stage for mental health agencies to provide measurement-based care, which has been linked to improved outcomes for clients with serious mental illnesses (e.g., psychosis, major depression) (5). The data collected often provide empirical evidence that informs the decisions of administrators who distribute funding for services, and data collection is often required by funding agencies (4). Yet, the delivery of measures by providers to collect these essential data is often challenging, and completion rates are generally low (2, 6).
The level of support, education, and training needed to overcome barriers to integrating an assessment battery within CSC has not been well documented. To address this gap, we assessed the enactment of a technical assistance and support program designed to improve measure completion in CSC as a first step toward implementing measurement-based care.
Setting
From January to December 2020, nine CSC programs of community-based outpatient mental health agencies (7) located in the Pacific Northwest of the United States participated in the study and, during its 12-month duration, provided services to 247 clients. Providers were responsible for completing provider-rated scales and facilitating clients’ completion of self-report measures via an online measurement delivery and data platform, a tool developed in Research Electronic Data Capture (REDCap). The assessment battery included two provider-rated scales—Clinician-Rated Dimensions of Psychosis Symptom Severity, which was collected monthly, and a service utilization measure, which was collected weekly—and six client self-report measures—Patient Health Questionnaire–9, Generalized Anxiety Disorder–7, Community Assessment of Psychic Experiences–15, Healthy Days Core Module, Substance Use Monthly and Lifetime Measure, and Physical Health and Medical Measure, which were delivered monthly or quarterly. The Washington State Institutional Review Board certified the study as exempt.
Implementation Strategy
The technical assistance strategy was implemented at the program level, and the end users of change were providers. Implementation comprised three components: in-person training, booster training, and consultation.
In-Person Training
The 12-hour in-person training for providers included a lecture-style presentation that provided an overview of the online measurement delivery and data platform, evidence to support the use of clinical and functional measures in clinical care, client self-report measures and cutoff scores, provider-rated scales, and examples of how data are used (e.g., to inform legislation and for program evaluation). Training also involved hands-on assistance with the tablet devices provided to each program (with funds acquired from the Washington State Health Care Authority), mock data entry, and small-group discussion. Supplemental materials included an overview of all measures and corresponding cutoff scores, a delivery schedule for measures, and recommendations on how to distribute measurement responsibilities across team members to reduce the burden on a single provider.
As-Needed Booster Training Sessions
A condensed version of the in-person training was conducted through videoconference to train new employees or to serve as a refresher for providers. Booster sessions were made available to individual providers or teams and were tailored to meet providers’ needs and address their concerns. Sessions ranged from 1 to 2 hours and were scheduled across several days, depending on staff availability.
Consultation
Technical assistance had a goal-focused approach based on principles of quality improvement and was delivered monthly via videoconference meetings held with program directors and CSC experts (8). These meetings included audits and feedback on the quality of the data and discussion about measurement delivery and completion rates in the past month. Internal and external barriers that required modifications to the online measurement delivery and data platform were addressed. These barriers continued to inform the focus of subsequent meetings, which were tailored to each CSC program. Technical support was offered to all providers, who could contact CSC experts by e-mail, videoconference, or telephone at any time to troubleshoot issues and raise additional concerns or suggestions for improvement.
Outcomes
Attendance and Engagement
Program directors had the opportunity to participate in approximately 12 monthly consultation meetings by videoconference, and their attendance was tracked. Ongoing participation in consultation was summarized as the number and percentage of consultation sessions attended by program directors, the mean number of consultation meetings attended across programs, and the number of booster sessions delivered.
Measurement Completion
Monthly administrative data pulls from the online measurement delivery and data platform were used to assess measurement completion by providers and clients over the 12-month study period. A measure was counted as completed when it had been delivered and then filled out by the provider (provider-rated scales) or by the client (self-report measures), either with the provider’s support during a session or independently afterward via measures sent by providers through text message or e-mail.
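To illustrate how completion rates might be tabulated from such data pulls, the sketch below assumes a hypothetical long-format export with one row per delivered measure; the file name and the columns month, measure_type, and completed are illustrative assumptions, not the platform’s actual export structure.

    library(dplyr)

    # Hypothetical long-format export: one row per delivered measure,
    # with an indicator (0/1) for whether it was completed.
    data_pull <- read.csv("monthly_data_pull.csv")

    completion_by_month <- data_pull %>%
      group_by(month, measure_type) %>%  # provider-rated vs. client self-report
      summarise(
        delivered = n(),
        n_completed = sum(completed),
        completion_rate = n_completed / delivered,
        .groups = "drop"
      )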
Statistical Analysis
Differences in measure completion were examined with logistic generalized linear mixed-effects models with a random intercept to capture the differences between the CSC programs, a random intercept to capture clients’ repeated outcomes (over 12 months) nested within a CSC program, and a fixed effect for time (i.e., month). Missing values were handled by maximum-likelihood estimation, which allows unbiased estimation when data are missing at random. Separate models were fit for each measure completion outcome (total measures completed, client self-report measures completed, and provider rating scales completed). Results are reported as ORs and are presented with 95% CIs as well as the change in probability (model-derived percentages based on random-effects model estimates) for ease of interpretation. A two-tailed alpha error rate of 0.05 was the statistical significance threshold. All statistical analyses were performed with R, version 3.6.2. Generalized linear mixed-effects models were fit using the glmmTMB package, and marginal means were calculated with the emmeans package.
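The model specification described above can be expressed in R roughly as follows, using the glmmTMB and emmeans packages named in the text; the data frame and the variable names completed, month, client_id, and program_id are illustrative assumptions.

    library(glmmTMB)
    library(emmeans)

    # Logistic mixed-effects model: fixed effect of time (month), a random
    # intercept for CSC program, and a random intercept for clients nested
    # within program (repeated monthly outcomes per client).
    fit <- glmmTMB(
      completed ~ month + (1 | program_id / client_id),
      family = binomial,
      data = outcomes
    )

    summary(fit)
    exp(confint(fit, parm = "beta_"))  # fixed effects and 95% CIs on the OR scale

    # Model-derived completion probabilities at months 1 and 12
    emmeans(fit, ~ month, at = list(month = c(1, 12)), type = "response")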
Participation and Attendance
Over the course of 12 months, program directors attended 73 of 103 (71%) technical assistance consultation meetings. Meeting durations ranged from 16 to 60 minutes, with longer meetings focused on features of the online data platform or strategies to engage clients. After the initial training, 22 booster sessions (mean=2.4 per individual or team, range 1–4) were delivered virtually.
Changes in Measurement Completion Over Time
A total of 5,831 provider-rated scales and 3,190 client self-report measures were completed over 12 months. Of the client self-report measures, 109 (3%) were independently completed and submitted by e-mail or text message. Over the 12-month study period, the overall measure completion rate significantly increased (OR=1.09, 95% CI=1.07–1.10, p<0.001). The rate of completion of client self-report measures (OR=1.05, 95% CI=1.03–1.07, p<0.001) and the rate of completion of provider-rated scales (OR=1.18, 95% CI=1.15–1.20, p<0.001) significantly increased. Of the three outcomes investigated, the rate of provider-rated scale completion improved the most over time (estimated percentages: 47% at month 1 vs. 84% at month 12).
Figure 1 shows the improvements in measure completion over time.
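As a rough internal consistency check (treating the reported month-1 estimate and the monthly OR for provider-rated scales as fixed and ignoring the random-effects structure), the month-12 probability implied by an OR of 1.18 per month can be approximated on the logit scale:

    # 11 one-month steps on the logit scale from an estimated 47% at month 1
    plogis(qlogis(0.47) + 11 * log(1.18))  # ~0.85, close to the reported 84%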
Discussion and Conclusions
The delivery and completion of provider-rated scales and client self-report measures in CSC programs significantly improved over the 12 months of the intervention’s implementation. However, results revealed a more pronounced improvement in the completion of provider-rated scales. Prior work suggests that CSC providers experience more challenges with administering and having clients complete self-report measures than with completing provider-rated scales (2). Our findings suggest that although the completion rate of client self-report measures increased slightly, providers may still have difficulty engaging clients in the process of completing measures and explaining to them the utility of doing so.
Although our findings are primarily focused on the delivery and completion of an assessment battery, measure delivery and completion constitute only one component of the provision of data-informed care. In a practice setting, focusing only on measure completion rates without emphasizing the active involvement of clients limits potential improvements in outcomes. Another crucial component that warrants additional attention is how the active involvement of clients (or lack thereof) affects the quality and validity of responses and their meaningful integration into care. Before implementing an assessment battery or prior to scale-up, CSC providers should engage in participatory approaches (e.g., rapid learning) to ensure that the selected measures are meaningful to clients, because those with lived experience tend to be excluded during the preimplementation phase of evidence-based practices. The use of task-shifting or task-sharing approaches, whereby the responsibility to complete a task is shifted from one role to another to better use resources, could also serve as a way to further integrate peer specialists into care delivery (9). That is, rather than providers, peer specialists could be tasked with addressing client needs and improving responsiveness to data-driven components of CSC. For instance, learning health care systems have had peer data champions inform current clients about how data are used to personalize treatment and gather client feedback that can be integrated into care, thereby aiding quality improvement (10).
It should also be noted that the technical assistance strategy was used within a network of CSC providers and was supported by an academic-community partnership funded by Washington State; it may not generalize to CSC providers without this additional level of support. At a practical level, the amount of external support (e.g., training, coaching) and the associated costs required for CSC programs, especially those in low-resource settings, to integrate an assessment battery need to be understood.
The findings from this study address an understudied area in the implementation of CSC and suggest a feasible technical assistance strategy to facilitate the delivery of an assessment battery. The findings also highlight the need for stakeholder input throughout the implementation process as well as for additional strategies to increase client buy-in regarding the utility of data-informed care as programs move toward implementing measurement-based care.