Although efforts to measure health care performance have been under way for more than 25 years in the United States, the federal government and states have accelerated their investments in measurement in recent years (
1,
2). The Affordable Care Act (ACA) required the secretary of the Department of Health and Human Services (DHHS) to “establish a national strategy to improve the delivery of health care services, patient health outcomes, and population health.” In March 2011, DHHS released the first report to Congress establishing the National Quality Strategy’s three aims: improve the overall quality of care, improve population health, and reduce the cost of high-quality health care (
3).
Advancing performance measures is of particular importance for behavioral health care. The ACA and the Mental Health Parity and Addiction Equity Act of 2008 are expected to stimulate demand for behavioral health services by providing mental health care and substance abuse treatment benefits to an estimated 62 million additional Americans. Beginning in 2014, the ACA requires all nongrandfathered small-group plans and individual policies to cover mental health and substance abuse treatment as an essential health care benefit (
4). The ACA also provides incentives for the development of integrated service delivery systems such as accountable care organizations, bundled payment systems, pay-for-performance systems, and other delivery and financing structures aimed at containing costs (
5–
8). Performance measures are critical under these new systems to ensure an appropriate balance among costs, access to care, and quality of care. Individuals with mental health and substance use disorders may be particularly vulnerable to low-quality treatment because they disproportionately have low incomes, lack social supports, and have cognitive and functional disabilities, and they may be reluctant to complain about poor-quality care because of concerns about stigma (
9).
Although historically the quality improvement infrastructure for behavioral health care has been less developed than that of medical care (
10,
11), in recent years a number of agencies have responded to the call for more quality measures for behavioral health care. In 2014, the Substance Abuse and Mental Health Services Administration (SAMHSA) released the National Behavioral Health Quality Framework (NBHQF) in an effort to harmonize and prioritize behavioral health measures that reflect the core principles of SAMHSA, as well as to support the National Quality Strategy. The NBHQF identifies six priorities: promoting effective prevention, treatment, and recovery practices for behavioral health disorders; ensuring that behavioral health care is person centered; encouraging the coordination and integration of care; assisting communities to use best practices; increasing the safety of behavioral health care; and fostering affordable high-quality care.
Other organizations, including the National Committee for Quality Assurance (NCQA), the National Quality Forum (NQF), the Institute of Medicine, and the Agency for Healthcare Research and Quality, are also moving forward with new behavioral health performance measures. For example, NCQA’s Healthcare Effectiveness Data and Information Set (HEDIS) contains ten behavioral health measures covering the quality domains of effectiveness of care, access to and availability of care, and utilization of care (see
box on this page). Four of these measures were added in calendar year 2013 to address coordination of primary care for individuals with diagnoses of schizophrenia or bipolar disorder (
12). As of mid-2014, NQF had endorsed approximately 75 measures specific to behavioral health. Eight of the HEDIS measures are currently endorsed by NQF (
13). The Centers for Medicare and Medicaid Services (CMS) is also developing behavioral health quality measures for inpatient psychiatric facilities (
14).
The purpose of this study was to examine the behavioral health care quality measures that state Medicaid and behavioral health agencies use for reporting on quality and to identify common measures or themes among states. Findings will help identify which measures are most salient for states and where gaps in measurement remain.
Methods
We conducted online searches of the Medicaid and mental health and substance abuse agency Web sites of all 50 states in September 2013. We searched the state Web sites for behavioral health quality indicators by using the following terms: behavioral health quality; behavioral health performance; mental health quality; and mental health performance reports, publications, data, and special projects. We also searched for state Medicaid managed care contracts, quality strategies, quality improvement plans, quality and performance indicator data, annual outcomes reports, performance measure specification manuals, legislative reports, and Medicaid waiver requests for proposals. We used the Joint Commission’s definition of a quality indicator: a quantitative measure that can be used to monitor and evaluate the quality of governance, management, and clinical and support functions that affect behavioral health outcomes (
15). These measures were typically listed as such on the agency Web sites.
Through federal Medicaid managed care regulations (42 C.F.R. §438.200), CMS requires all states contracting with a managed care organization to have a written State Quality Strategy for assessing and improving the quality of the managed care services they offer (
16). Therefore, we searched for these CMS-required quality strategy reports. Finally, we searched for any surveys or similar tools administered by the states to assess consumer satisfaction with quality of care.
We then created a database to organize the information collected. The database contained the following information on each quality indicator: state name, indicator name, originator (HEDIS, state, and so on), domain (as established by the National Inventory of Mental Health Quality Measures [NIMHQM]), category (structure, process, or outcome), source document, Web site from which the information was retrieved, and whether the state has a Medicaid managed care behavioral health program.
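For readers interested in assembling a similar catalog, the sketch below shows one possible record structure; it is a minimal illustration only, and the Python field names and example values are hypothetical rather than a description of the actual database used in this study.

```python
# Minimal, illustrative sketch (not the authors' actual database) of one way
# to represent each cataloged quality indicator; all field names and the
# example values below are hypothetical.
from dataclasses import dataclass

@dataclass
class QualityIndicator:
    state: str                    # state name
    name: str                     # indicator name
    originator: str               # e.g., "HEDIS" or "state"
    domain: str                   # NIMHQM domain
    category: str                 # "structure", "process", or "outcome"
    source_document: str          # report or contract in which the indicator appears
    url: str                      # Web site from which the information was retrieved
    medicaid_managed_care: bool   # whether the state has a Medicaid managed care behavioral health program

# Hypothetical example record
example = QualityIndicator(
    state="Iowa",
    name="Percentage of claims for consumer-run services",
    originator="state",
    domain="Recovery-based services",
    category="process",
    source_document="State Quality Strategy",
    url="https://example.state.ia.us/quality",
    medicaid_managed_care=True,
)
print(example.category)  # "process"
```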
We queried a number of states that did not have publicly available quality indicators. The two reasons these states gave for not publishing behavioral health quality indicators on the Internet were that their quality measurement initiatives were still in development and that there was no state mandate to publish them.
Results
Of the 50 states reviewed, 29 provided information on their Web sites: Arizona, California, Colorado, Connecticut, Florida, Illinois, Iowa, Kansas, Kentucky, Louisiana, Massachusetts, Michigan, Nebraska, Nevada, New Hampshire, New Jersey, New Mexico, New York, North Carolina, Oklahoma, Oregon, Rhode Island, South Carolina, Tennessee, Texas, Utah, Vermont, Virginia, and Washington. Except for New Hampshire, each of the 29 states had some form of Medicaid managed behavioral health care. Some states used an integrated plan that offered behavioral and general medical care through the same health plan; other states used a carve-out plan, in which behavioral health services were managed by a separate health plan (
Table 1).
Table 2 provides information on the quality measure reporting that the 29 states use. The Donabedian quality-of-care framework guided us in identifying these measures as relating to structure, process, or outcome (
17). In this framework, structure measures concern the attributes of the setting, human resources, financing, and organizational structure; process measures describe what occurs in giving and receiving care; and outcome measures refer to the effects of health care on the health status of patients and populations (
18).
Table 2 also provides information on consumer experience-of-care surveys that states are using with individuals who receive behavioral health services and information on the originator of the measure.
Structure-Process-Outcome Framework
Seventy-four of the 369 measures (20%) used by states in our sample were classified as structure measures. Fifteen of the states (52%) represented in the sample used at least one measure of financial, human resource, or organizational structure. More than half of the measures used by states in our sample were classified as process measures; 222 measures (60%) focused on the process of giving and receiving care. All states in the sample had at least one process measure, and ten states (34%) used only process measures to evaluate the quality of behavioral health services. Seventy-three of the measures (20%) could be classified as outcome measures, with 14 of the states (48%) using at least one outcome measure.
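As an illustration of how these shares follow from the counts reported above, the short sketch below recomputes the percentages; it assumes only the counts stated in the text (369 cataloged measures in total) and is not part of the original analysis.

```python
# Illustrative check of the reported Donabedian shares; the counts below are
# taken directly from the text (369 cataloged measures in total).
counts = {"structure": 74, "process": 222, "outcome": 73}
total = sum(counts.values())  # 369

for category, n in counts.items():
    print(f"{category}: {n} of {total} measures ({n / total:.0%})")
# structure: 74 of 369 measures (20%)
# process: 222 of 369 measures (60%)
# outcome: 73 of 369 measures (20%)
```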
HEDIS Measures
Table 2 also indicates the states’ use of HEDIS measures and other state-developed measures. Twenty state agencies (69%) in our sample used the behavioral health HEDIS measures for 2011, and eight of these states (28%) relied solely on HEDIS measures to assess behavioral health performance.
Non-HEDIS Measures
Finally, 21 states (72%) were identified as either adapting HEDIS measures or developing their own behavioral health measures by relying on administrative or nonadministrative data gathered from chart review or state-specific databases. Most of these states developed measures in addition to the standard measures, but two states (Connecticut and Michigan) relied mainly on non-HEDIS or locally developed measures. Some representative examples of state-developed measures are shown in
Table 3, grouped into the domains adopted from the NIMHQM.
Some states, such as Iowa and New Mexico, used administrative data to measure recovery-based services; both states measured the percentage of claims for consumer-run services. Administrative data were also used to measure evidence-based pharmacotherapy. Colorado used administrative claims to determine the percentage of enrollees who were prescribed redundant antipsychotic medication, and several states used pharmacy claims to determine whether prescriptions for high-risk medications went unrefilled.
Finally, New Mexico and Utah are examples of states that used nonadministrative data. New Mexico measured the number of programs employing workers who are Native American or who speak Spanish. Utah used a state-developed measure of screening for clinical depression and follow-up as a measure of continuity and coordination of care.
Consumer Experience-of-Care Surveys
All states in our sample except one (New York) used an identified consumer experience-of-care survey. Thirteen states (45%) reported using the Consumer Assessment of Healthcare Providers and Systems (CAHPS) consumer survey. CAHPS is a set of comprehensive surveys, sponsored by CMS, that collect consumer data on the interpersonal aspects of health care (
19). Eight states (28%) required an unnamed state-approved survey. Four states (14%) used the Mental Health Statistics Improvement Program (MHSIP) consumer survey. The MHSIP was developed by SAMHSA to assess the quality of mental health services, specifically in the areas of general satisfaction, access to services, service quality and appropriateness, participation in treatment, treatment outcomes, cultural sensitivity, improved functioning, and social connectedness. Two states (7%) reported using a state-developed survey. One state (3%) required the Experience of Care and Health Outcomes (ECHO) Survey. The ECHO combines aspects of the MHSIP and CAHPS and was endorsed by the NQF in 2007 (
13,
20).
Mental Health and Substance Abuse Treatment Measures
Table 4 provides information regarding the target population for the behavioral health measures. A total of 172 of the measures (47%) used by states in our sample were classified as targeting individuals with mental illness. Twenty-five states (86%) used at least one mental health measure. Only 56 measures (15%) concerned substance use disorders, with 14 states (48%) having at least one substance abuse treatment measure. Sixteen states (55%) published 141 measures (38%) targeting individuals with either a mental illness or a substance use disorder; however, these measures were largely structure measures targeting claims payment, critical incidents, and grievances and appeals.
Discussion
Many state efforts to measure behavioral health performance are under way. State variation in the assessment of behavioral health care services has implications for determining the quality of current care and the impact of health care reforms.
Many states have continued to rely on the NQF-endorsed HEDIS measures to provide stakeholders with an assessment of the quality of these services. However, as the landscape of behavioral health care changes, we found that many states are incorporating, adapting, or developing additional measures to fill gaps. Some alterations broaden the existing HEDIS measures; examples include the expansion of existing HEDIS follow-up measures to assess the quality of follow-up care after hospitalization for substance abuse or after emergency department visits. Some states have attempted to fill perceived gaps in the NQF-endorsed measures, such as by adding measures that address recovery progression, integrated care, and patient safety. States are also using administrative and nonadministrative data to create quality measures that cover a wide variety of domains. An example of a measure using administrative data is Iowa’s measure of the percentage of expenditures used to support consumer-run services. Examples of nonadministrative data measures include Kansas’ measure of the percentage of state-qualified providers of services to children with serious emotional disturbance and Louisiana’s measure of the number of persons served by evidence-based practices and by promising practices that have been implemented with fidelity.
Despite states’ efforts to develop additional quality indicators that promote evidence-based care and increased access to care, gaps in monitoring the quality of care remain. For example, the use of outcome measures among states was limited. Although almost half of the states (48%) included at least one measure that addressed issues such as self-reported improvement in symptom severity or stable living environment, only 20% of all cataloged measures could be classified as outcome measures. A proportion of the outcome measures (21%) were hospital readmission rates, which could be considered a proxy measure of outcome. We consider outcome measures to be an area for expansion in quality measurement, although states may hesitate to include measures that do not account for illness severity. Although the use of behavioral health risk-adjustment models has accelerated in recent years, states may lack the resources to collect the data needed to conduct appropriate risk adjustment (
21). None of the states in our sample indicated the use of risk adjustment for any measure.
Greater efforts are also needed to develop additional standardized substance abuse treatment measures, given that Medicaid expansion is expected to significantly increase coverage for individuals with substance use disorders (
22). Although some states have responded by adapting the few existing HEDIS measures to capture follow-up from substance abuse treatment, additional measures are needed for evaluating screening, integrated care, and use of evidence-based treatment. New York has attempted to fill the gap in substance use disorder treatment measures by developing a comprehensive array of measures for substance abuse treatment, follow-up care, and psychopharmacological treatment.
States also need to standardize measures of crisis services utilization. Many states reportedly struggle with finding psychiatric beds for individuals needing inpatient psychiatric care and with emergency department overuse because of the reduction in psychiatric hospital beds, lack of community-based services, and lack of insurance for behavioral health treatment needs (
The ACA may ultimately reduce overuse by providing needed insurance coverage; however, standardized emergency department utilization and wait-time measures are needed in the interim as more states turn to managed care for their behavioral health services. The standardization and adoption of crisis services utilization measures, such as Louisiana’s emergency department utilization measures or the measures of follow-up treatment after emergency department visits used by Illinois and Colorado, could provide states with a means of comparison and assist in improving quality of care.
The ubiquity of experience-of-care surveys is noteworthy. Our findings reveal that states used several different consumer surveys, including CAHPS, MHSIP, and state-developed or state-approved surveys. Only one state (Oklahoma) used the ECHO, which was developed specifically for managed behavioral health care. There are likely two main reasons for states’ high utilization of experience-of-care surveys. First, some surveys are required for federal funding. The MHSIP is currently part of the data requirements for block grants from the SAMHSA Center for Mental Health Services. States began reporting MHSIP data in 2002, and as of December 2012, all 50 states were reporting MHSIP results (
24). Second, some experience-of-care surveys appear to be related to managed care implementation. Although experience-of-care surveys are not a federal requirement, many states use the survey results as a means to show compliance with Medicaid managed care regulations.
A standardized survey that would allow for national comparisons, particularly among states with behavioral health managed care, would be beneficial. In the absence of widespread adoption of the ECHO, one possibility includes developing standardized modules to add to the MHSIP survey that would be specific to managed care and substance abuse treatment. Standardized administration and sampling strategies would also help states compare the quality of their services with the quality in other states.
Finally, it should be noted that these findings are fairly similar to earlier scans of behavioral health quality measures, which revealed that most state Medicaid programs use HEDIS measures, consumer surveys, and some outcome measures (
25,
26). This relatively static state of behavioral health quality measurement suggests that states may face challenges in meeting the increasingly demanding needs of a complex and evolving health care system, but there are some indications that measurement of behavioral health care quality is improving. For example, states such as Colorado, Kansas, and New York are developing a rich array of measures to meet their needs for performance measurement. In addition, the number of NQF-endorsed measures continues to increase, and state adoption of these measures is expected to follow. Finally, the expansion of Medicaid managed behavioral health care is providing the impetus for states and behavioral health providers to improve their data collection systems, thereby improving their ability to collect performance measurement data.
We acknowledge limitations in our study. We limited our review to a convenience sample defined by state information available on the Internet. The information provided on the Internet for the 29 states reviewed may not allow for a comprehensive assessment. Also, our findings may be skewed toward more information for states with Medicaid managed care, because these states are required to publish and submit a quality strategy to CMS.
Conclusions
Under the ACA insurance expansions, behavioral health services should be more readily accessible to those seeking care. As behavioral health service utilization increases in the United States, a more robust quality improvement infrastructure is needed to improve the overall quality of care, improve population health, and reduce the cost of high-quality health care. Standardized surveys and measures, new measures, standardized administration procedures, and comprehensive sampling strategies should be developed to allow for state-by-state comparisons and for the establishment of national benchmarks. Finally, there is a need for additional nationally endorsed measures in the areas of substance abuse, treatment outcome, and crisis services. States are developing their own measures in these areas to fill the current gap, which may make standardization of measures more challenging in the future. The National Quality Strategy should be considered a foundation for the future development of national benchmarks.