Depressive disorders—compounded by racial disparities in access to care and in the quality and outcomes of care in underresourced communities—are a leading cause of disability in the United States (1–7). Depression care quality improvement programs that use team-based chronic disease management in primary care can improve quality of care and outcomes for depressed adults, including members of racial and ethnic minority groups (8–17). Under health care reform, Medicaid funds can be used to establish behavioral health homes that provide incentives for partnerships among general medical, mental health, and social and community agencies, such as parks and senior centers. These services “must include prevention and health promotion, health care, mental health and substance use, and long-term care services, as well as linkages to community supports and resources” (18). However, few guidelines exist to help organize diverse agencies into systems that support chronic disease management, and no studies have compared alternative approaches to training providers from diverse health care and social-community programs in depression care quality improvement.
This study analyzed data from Community Partners in Care (CPIC), a group-level, randomized, comparative-effectiveness study of two approaches for implementation of evidence-based, depression care quality improvement tool kits adapted for diverse health care and social-community settings. One implementation approach, resources for services (RS), relied on providing more traditional technical assistance to individual programs. The other approach, community engagement and planning (CEP), used community-partnered, participatory research (CPPR) principles to support collaborative planning by a network of agencies seeking to implement the same depression care tool kits (19–25).
Health care and social-community programs were assigned randomly to each approach (20, 21). Six-month follow-up comparing outcomes of clients with depression in CEP and RS revealed that clients in CEP had improved mental health–related quality of life, increased physical activity, reduced homelessness risk factors, reduced behavioral health hospitalizations and specialty care medication visits, and increased use of depression services in primary care or public health, faith-based, and park or community center programs (20). At 12 months, the effects of CEP on mental health–related quality of life continued (25).
This study focused on CPIC’s main intervention effects on program and staff participation in evidence-based, depression care quality improvement training. Training participation (program level) and total training hours (staff level) were the primary outcomes. We hypothesized that CEP would lead to a broader range of staff training options compared with RS. To determine the types of organizations that would participate in training, we compared the interventions’ effects by program type (health care versus social-community). On the basis of prior work, we hypothesized that compared with RS, CEP would increase mean hours of training participation, especially among social-community programs, where such training is novel (26–28). To inform future depression care quality improvement dissemination efforts in safety-net communities, we conducted exploratory analyses of the interventions’ effects on staff training participation for each depression care quality improvement component and by service sector.
Methods
CPIC was conducted by using CPPR, a manualized form of community-based, participatory research, with community and academic partners coleading all aspects of research under equal authority (19–25, 29, 30). The study was designed and implemented by the CPIC Council, which comprises three academic organizations and 22 community agencies. The study design is described elsewhere (19–21, 25).
Sampling and Randomization
Two Los Angeles communities with high poverty and low insurance rates (31), South Los Angeles and Hollywood-Metro, were selected by convenience on the basis of established partnerships among Healthy African American Families II, QueensCare Health and Faith Partnership, Behavioral Health Services, the University of California, Los Angeles, and RAND (19, 32–36).
Programs.
County lists of agencies supplemented by community nominations were used to identify agencies (32). After assessing eligibility, we offered consent to 60 potentially eligible agencies with 194 programs. To be eligible, programs were required to treat at least 15 clients per week, have at least one staff member, and have a focus other than psychotic disorders or home services; 133 of the 194 programs were potentially eligible and were assigned at random to RS (N=65) or CEP (N=68). [A CONSORT diagram of the recruitment and enrollment of agencies, programs, and staff is available as an online supplement to this article.]
Agencies were paired into units, or clusters of smaller programs, on the basis of location and program characteristics and randomized to CEP or RS. Site visits were conducted postrandomization to finalize enrollment; 20 programs were ineligible, 18 programs refused to participate, and 95 programs from 50 consenting agencies were enrolled. Program administrators were informed of intervention status by letter. Participating and nonparticipating programs were located in comparable neighborhoods according to U.S. Census data on age, sex, race, population density, and income at the zip code level (37).
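The pairing-and-randomization step can be illustrated with a short sketch (hypothetical code, not the study’s actual procedure): units defined by community and program characteristics are sorted so that similar units are adjacent, grouped into pairs, and one member of each pair is randomly assigned to CEP and the other to RS.

```python
# Illustrative sketch (hypothetical, not the study's actual procedure): matched-pair
# cluster randomization. Units (agencies or clusters of smaller programs) are sorted
# so that similar units are adjacent, paired, and one member of each pair is randomly
# assigned to CEP and the other to RS.
import random

def pair_and_randomize(units, seed=2009):
    """units: list of dicts with hypothetical keys 'id', 'community', 'sector', 'n_staff'."""
    rng = random.Random(seed)
    ordered = sorted(units, key=lambda u: (u["community"], u["sector"], u["n_staff"]))
    assignment = {}
    for i in range(0, len(ordered) - 1, 2):
        pair = [ordered[i], ordered[i + 1]]
        rng.shuffle(pair)                        # coin flip within the matched pair
        assignment[pair[0]["id"]] = "CEP"
        assignment[pair[1]["id"]] = "RS"
    if len(ordered) % 2 == 1:                    # an unmatched unit gets a simple coin flip
        assignment[ordered[-1]["id"]] = rng.choice(["CEP", "RS"])
    return assignment

# Example with invented units
units = [
    {"id": "U1", "community": "South LA", "sector": "primary care", "n_staff": 12},
    {"id": "U2", "community": "South LA", "sector": "primary care", "n_staff": 15},
    {"id": "U3", "community": "Hollywood-Metro", "sector": "homeless services", "n_staff": 6},
    {"id": "U4", "community": "Hollywood-Metro", "sector": "homeless services", "n_staff": 8},
]
print(pair_and_randomize(units))
```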
Staff.
All staff (paid, volunteer, licensed, and nonlicensed) with direct client contact were eligible for training. The number of eligible staff was indicated on surveys completed by program administrators at baseline. [Survey items related to number of staff with direct client contact are available in the online supplement.]
For missing responses or outliers (low or high values), we made phone calls to programs to obtain, confirm, or correct information. The 95 enrolled programs had 1,622 eligible staff. One eligible administrative staff member who was responsible for oversight of a program assigned to RS and a program assigned to CEP was excluded from analysis, resulting in a final analytic sample of 1,621 staff.
The institutional review boards of RAND and participating agencies approved study procedures. Administrators provided written consent before completing the surveys, and oral consent for use of data related to attendance at training events was obtained from staff.
Interventions.
The interventions were designed to support implementation of depression care quality improvement components relevant to each program’s scope. Both interventions used the same evidence-based tool kits for support of care management (screening, coordination, and patient education), medication management, and cognitive-behavioral therapy (CBT) (16, 19, 21, 25, 38–40). Materials were made available to eligible programs via print manuals, a Web site, and flash drives (34). Tool kits were introduced at one-day kickoff conferences in each community before randomization (19, 21, 25). After randomization and enrollment, staff who had attended prior study meetings were invited by phone, e-mail, and postcard to attend training sessions for the intervention in which their program had been enrolled and were encouraged to circulate the invitations to all eligible staff. Eligible staff could choose to participate in any, all, or no training sessions, which were offered at no charge. The only incentives for participation were continuing education credits, access to training, and the food served during training.
The content, structure, and training intensity of RS were developed by a research team and the authors, rather than by participating RS agencies, to reflect a more traditional approach to depression care quality improvement implementation. Similar to Partners in Care (19–21, 25, 39), the training provided technical assistance to individual programs by using a “train the trainer” model. RS training, consisting of Webinars and primary care site visits, was conducted between December 2009 and July 2010 by an interdisciplinary team of three psychiatrists, who discussed medication management; a nurse care manager; a licensed psychologist, who discussed CBT; an experienced community administrator; and research assistants. Tool kits were modified to fit programs.
Programs assigned to CEP were invited to identify one or more staff to join South Los Angeles and Hollywood-Metro CEP councils. Each council met biweekly for two hours over five months to tailor depression care tool kits and implementation plans that would maximize each community’s strengths. The councils were given a written manual and online materials, including community engagement strategies. In South Los Angeles, the planning meetings occurred from December 2009 to April 2010, and in Hollywood-Metro, meetings were held from March to July 2010. Each council met through January 2011 to oversee implementation. In South Los Angeles, the council included 12 academic and 13 community participants; in Hollywood-Metro, it included 19 academic and 11 community participants. During planning, each council modified the tool kits as well as the goals, intensity, duration, and format of training sessions to fit community and program needs (29, 30). CEP trainings were not prespecified. Each council could have chosen any plan, including replicating RS or conducting no training at all.
Data Sources
Data about service sector and community for the 95 enrolled programs were obtained from administrators during recruitment. The number of eligible staff with direct client contact was obtained from a baseline survey of administrators and follow-up phone calls to administrators. Training event data, such as date, hours, depression care component, and program affiliation of attendees, were obtained from registration forms for training events, training logs, and sign-in sheets. A data set was created listing staff members coded by program sector, intervention status, and community.
Outcomes
At the program level, the primary outcome was program participation in depression care quality improvement training, defined as the percentage of programs with any staff participation in training. At the staff level, the primary outcome was total hours of training participation, examined across all programs and stratified by service sector. Health care sectors included primary care, mental health care, and substance abuse services; social-community sectors included homelessness services and other social and community-based services. Secondary outcomes at the staff level included the percentage of staff who participated in any training and hours of participation in each depression care component (medication management, CBT, care management, or other).
The main independent variable was program random assignment (CEP or RS). Covariates included program service sector (health care or social-community) and community (South Los Angeles or Hollywood-Metro).
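As a hypothetical illustration of how these outcomes could be derived from training records, the following sketch (with invented column names, not the study’s data structure) aggregates sign-in data to staff-level total hours and program-level participation.

```python
# Hypothetical sketch of how the primary outcomes could be computed from training
# sign-in records; the column names are invented for illustration and are not the
# study's data structure.
import pandas as pd

staff = pd.DataFrame({              # one row per eligible staff member
    "staff_id": [1, 2, 3, 4],
    "program_id": ["P1", "P1", "P2", "P3"],
    "arm": ["CEP", "CEP", "RS", "RS"],
    "sector": ["health care", "health care", "social-community", "social-community"],
})
logs = pd.DataFrame({               # one row per staff member per training event
    "staff_id": [1, 1, 3],
    "component": ["CBT", "care management", "CBT"],
    "hours": [2.0, 1.5, 2.0],
})

# Staff-level primary outcome: total training hours (0 for staff who attended nothing).
hours = logs.groupby("staff_id")["hours"].sum()
staff["total_hours"] = staff["staff_id"].map(hours).fillna(0.0)
staff["any_training"] = (staff["total_hours"] > 0).astype(int)

# Program-level primary outcome: any staff participation in training.
programs = staff.groupby(["program_id", "arm"], as_index=False)["any_training"].max()
pct_participating = programs.groupby("arm")["any_training"].mean() * 100
print(programs)
print(pct_participating)            # percentage of programs with any participation, by arm
```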
Statistical Methods
At baseline, we compared program and staff characteristics among programs assigned to each intervention by using chi square tests. For main program-level analyses, we examined the interventions’ effects on outcomes; the analyses were controlled for service sector and community, and results are reported as chi square statistics. For staff-level analyses, we compared the interventions’ effects on total hours of training by using two-part models because of skewed distributions (41). The first part used logistic regression to estimate the probability of receipt of any training. The second part used ordinary least squares regression to estimate the log of total training hours among staff who received any training; the analyses controlled for community and service sector (42).
We used smearing estimates for retransformation, applying separate factors for each intervention group to ensure consistent estimates (43, 44). We adjusted models for clustering by programs by using SAS macros developed by Bell and McCaffrey (45), which used a bias reduction method for standard error estimation. We also conducted exploratory stratified analyses within each service sector by using logistic regression models for dichotomous measures and log-linear models for counts, with intervention condition as the independent variable, adjusted for service sector and community, because the cell sizes for each sector were not sufficient for two-part models (46). To assess robustness, we repeated the same calculations with unadjusted raw data. [The results of the unadjusted models are available in the online supplement.] Analyses were conducted by using SUDAAN, version 11.0, and accounted for clustering of staff within programs (47).
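For readers who want a concrete sense of the two-part model with smearing retransformation and program-clustered standard errors, the following is a minimal sketch in Python using statsmodels (the study itself used SAS macros and SUDAAN); the data frame and variable names are hypothetical.

```python
# Minimal sketch (not the study's code) of a two-part model for skewed training
# hours, with Duan's smearing retransformation applied separately by intervention
# group and standard errors clustered by program. Assumes a staff-level DataFrame
# with hypothetical columns: total_hours, cep (1=CEP, 0=RS), sector, community,
# and program_id.
import numpy as np
import statsmodels.formula.api as smf

def two_part_model(df):
    df = df.copy()
    df["any_training"] = (df["total_hours"] > 0).astype(int)

    # Part 1: logistic regression for any training participation.
    part1 = smf.logit("any_training ~ cep + C(sector) + C(community)", data=df).fit(
        disp=False, cov_type="cluster", cov_kwds={"groups": df["program_id"]}
    )

    # Part 2: OLS on log(hours) among staff who received any training.
    pos = df[df["total_hours"] > 0].copy()
    pos["log_hours"] = np.log(pos["total_hours"])
    part2 = smf.ols("log_hours ~ cep + C(sector) + C(community)", data=pos).fit(
        cov_type="cluster", cov_kwds={"groups": pos["program_id"]}
    )

    # Duan's smearing factors, computed separately for each intervention group
    # so that retransformed (hour-scale) predictions are consistent within group.
    pos["resid"] = part2.resid
    smear = pos.groupby("cep")["resid"].apply(lambda r: np.exp(r).mean())

    # Expected hours = Pr(any training) x E[hours | any training].
    p_any = part1.predict(df)
    hours_if_any = np.exp(part2.predict(df)) * df["cep"].map(smear)
    df["expected_hours"] = p_any * hours_if_any
    return part1, part2, df
```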
Results
Of 95 enrolled and randomized programs, 46 were assigned to RS and 49 to CEP. Randomized programs showed no statistically significant differences by baseline characteristics (community, service sector, and total staff) or participation in study activities before randomization (attendance at a kickoff conference). [A comparison of program characteristics by intervention condition is available in the online supplement.] About half of the programs in each intervention were from each community, and programs were well distributed across sectors—primary care (N=17, 18%), mental health care (N=18, 19%), substance abuse services (N=20, 21%), homelessness services (N=10, 11%), and community-based services (N=30, 32%).
Of 1,621 eligible staff, 723 worked in programs assigned to RS and 898 in programs assigned to CEP; 493 (30%) worked in programs in the primary care or public health sector, 290 (18%) in mental health services, 264 (16%) in substance abuse services, 168 (10%) in homelessness services, and 406 (25%) in community-based services. There were no significant differences in staff characteristics by intervention status [see online supplement].
After program randomization, the training experiences developed by the CEP councils in Hollywood-Metro and South Los Angeles were more intensive, broader, and more flexible than the types of training available at the time of the kickoff conferences and the training offered by RS. Examples of more intensive training included CBT consultation support for staff treating one or two patients over 12 to 16 weeks and a ten-week Webinar providing CBT consultation to groups of staff. Training in self-care for providers and in active listening were examples of broader training. The use of various methods, such as Webinars, conference calls, and multiple one-day conferences, to offer the same content was an example of more flexible training.
Table 1 summarizes training modifications and innovations introduced by CEP. Across both communities, CEP provided 144 training interventions totaling 220.5 hours, including 135.0 hours for CBT, 60.0 hours for care management, 6.0 hours for medication management, and 19.5 hours for other skills.
After randomization, a greater percentage of CEP programs than RS programs participated in training (86% versus 61%, p=.006). Stratified analyses by service sector showed that the percentage of health care programs that participated in training was greater for CEP than for RS (p=.016) (Table 2). A similar trend, although not significant, was found within social-community sectors.
The two-part models showed that staff from programs assigned to CEP were more likely than staff from programs assigned to RS to participate in any training overall (p<.001). In social-community sectors, staff from CEP programs were more likely than staff from RS programs to participate in training (p<.001), but there were no intervention differences in training participation among staff from health care sectors (Table 3). Estimated hours of training among staff who participated in training were greater among CEP staff compared with RS staff in all programs (p<.001), in programs in health care sectors (p=.004), and in programs in social-community sectors (p=.003). Similarly, mean hours of training among staff who participated in training were greater among CEP staff compared with RS staff for all depression care quality improvement components except medication management, which did not differ significantly by intervention.
In exploratory analyses stratified by service sector, there were no intervention differences in the percentage of staff in the primary care or mental health specialty sectors who attended any training. However, participation in training was greater among CEP staff compared with RS staff in the substance abuse services (p=.005), homelessness services (p<.001), and community-based services (p<.001) sectors (Table 4). In addition, training hours were significantly greater for CEP than for RS in all sectors except primary care.
Discussion
Our main finding was that the CEP approach to implementing depression collaborative care developed a broader and more flexible range of training experiences and provided more hours of training across diverse health care and social-community sectors than the technical assistance approach (RS). In turn, staff of programs assigned to CEP had higher rates of training participation than staff of RS programs, both for training overall and for each component of depression care quality improvement. This finding may offer insight into the previously reported positive effects of CEP on clients’ health-related quality-of-life outcomes at six and 12 months (20, 25). Before randomization, there were no significant differences in the percentage of RS and CEP programs that participated in kickoff events. However, after randomization, 86% of CEP programs participated in any training compared with 61% of RS programs; in health care sectors, participation in training was significantly greater among CEP programs (92%) compared with RS programs (66%). The use of a group-randomized trial increases confidence that the observed differences in training are attributable to the intervention approaches themselves. In other words, CEP’s increased training intensity and greater focus on creating a network of training opportunities for programs and providers are consistent with its community-driven plan.
For the primary outcome at the staff level, the study showed that staff assigned to CEP programs were more likely than staff assigned to RS programs to participate in training. At the program level, CEP was also associated with a greater likelihood of participation in training in health care sectors but not in social-community sectors. However, for staff with any training participation, CEP was associated with greater hours of training among programs overall as well as among programs in both the social-community and the health care sectors. Further exploratory analyses suggested that at the program level, CEP’s effects may be greater for health care programs than for social-community programs. However, for staff who attended any training, mean training hours for both health care and social-community programs were higher among staff associated with CEP than with RS.
Few reports in the mental health services literature describe how strategies for implementing evidence-based programs affect the penetration of training among staff and programs. One study found that increased participation in training in an evidence-based child curriculum was associated with increased intervention delivery to patients (48). Another study found that the use of financial incentives for providers promoted depression collaborative care implementation in health care systems (49). In contrast, programs enrolled in CPIC were told that their staff could participate in any, all, or no trainings, with continuing education credits, access to trainings and materials, and the food provided during training as the only incentives. This suggests that community engagement can encourage agencies and providers, particularly those from social-community sectors, to participate in quality improvement efforts to enhance the quality of and access to depression care.
CEP may have increased staff participation compared with RS through several mechanisms. Partnering with local programs and staff to adapt training content may have made the materials more consistent with the programs’ existing capacities or interests, particularly in social-community settings. Although training was offered to programs and staff, it was not mandatory. CEP may have increased participation particularly among staff in programs with engaged leadership. In addition, CEP councils offered more training opportunities in response to community partners’ feedback (29, 30). The inclusion of agency staff as cotrainers may have increased ownership of and trust in training among CEP programs, just as including local opinion leaders in the development of practice guidelines appears to benefit implementation (50–52). The multiagency training plan developed by the community councils may have appealed to programs in both health care and social-community sectors. The CEP group’s development of a more intensive training plan with greater training options may have been more consistent with staff members’ sense of the support needed to implement depression care. More generally, the community engagement principles and activities associated with CEP may have instilled a greater sense of ownership and commitment, especially among programs in social-community sectors, which traditionally are not included in depression care training.
For both interventions, training exposure estimates may be conservative, given that staff who attended a training session may have shared what they learned with staff who were not in attendance. The CEP councils’ efforts to develop a tailored plan for implementing depression care, consisting of biweekly meetings for five months followed by monthly implementation meetings, were substantial but feasible, given the large population (up to two million people) of the participating communities. Conducting the planning required coleadership by community and academic partners with experience in applying CPPR principles to depression care. RS also had a preparation period during which expert leaders conducted outreach to the participating programs by calling or visiting, in some cases up to five times. Future research should clarify which features of CEP promoted more provider engagement relative to RS; identify potential strategies, such as financial incentives, to enhance participation in training; and determine whether training participation mediated the intervention’s impact on patient outcomes.
The study had several limitations. Estimates of eligible staff were based on administrator survey items and follow-up calls, whereas staff training participation was based on registration forms, logs, and attendance sheets. Given that administrator estimates of eligible staff were largely obtained before randomization, it is unlikely that there was differential bias in estimation by intervention condition. Future work may benefit from validating administrator reports with human resources records. Generalizability of our findings may be limited, given that the study design and data did not allow us to separate the effects of increased community engagement from those of the changes in training—such as increased hours, intensity, flexibility, and breadth—associated with CEP. If replicated, the results suggest that CEP groups may offer a different set of training options with different participation effects. The study also was not designed to assess whether increased training participation led to improved quality of care or whether improved quality of care led to improved client outcomes.
Conclusions
As health care reform expands access to care for millions of Americans, including many low-income Latinos and African Americans, building capacity among underresourced communities to implement evidence-based, depression care quality improvement programs (18, 53–56), for example, through Medicaid behavioral health homes and accountable care organizations, will be a continuing priority. Our findings suggest that a CEP approach to developing and implementing training for a network of providers may increase program and staff engagement, particularly among programs in health care sectors. It may also help develop staff capacity in other sectors, such as homelessness services and social services, that are typically located in racial-ethnic minority communities with historical distrust of services and research (22–24, 57–63). Future work is needed to compare the cost-effectiveness of CEP and other interventions related to staff training, replicate this study’s findings in larger samples, clarify which CEP components improve providers’ depression care competencies, and determine whether training participation mediates intervention effects on client outcomes.
Acknowledgments
The authors thank the 28 participating agencies of the Community Partners in Care Council and their representatives; Paul Koegel, Ph.D., Elizabeth Dixon, R.N., Ph.D., Elizabeth Lizaola, M.P.H., Susan Stockdale, Ph.D., Peter Mendel, Ph.D., Mariana Horta, B.A., and Dmitry Khodyakov, Ph.D., for their support; and Robert Brook, M.D., Sc.D., David Miklowitz, Ph.D., Ira Lesser, M.D., and Loretta Jones, M.A., for comments on the manuscript.