The quality of health care provided by the U.S. Department of Veterans Affairs (VA) has been the subject of controversy. Some reports suggest that the quality of care is as good as or better than that provided in the private sector or by Medicare, whereas others suggest that VA care is characterized by “unchecked incompetence” (1–4). No recent studies have compared the quality of outpatient care provided to veterans with mental disorders with the care provided to a comparable population treated in the private sector across multiple mental health diagnoses, although one study published in 2000 examined the quality of inpatient care episodes from 1993 to 1997 and found that during that time VA care improved “markedly” compared with the private sector (5). This lack of research is important for several reasons: many individuals with mental disorders have complex conditions that are costly to care for, whether in the VA or the private sector; mental health conditions are among the principal sources of disability in the veteran population, and use of appropriate care processes has important consequences for the outcomes of these conditions; and comparison of care processes across systems can provide important insight into the effectiveness of alternative approaches to organizing, financing, and managing services for this large and growing population and can inform efforts to improve care.
U.S. veterans are a vulnerable population, with higher rates of serious mental disorders than those found in the civilian population (6). Among veterans of the Afghanistan and Iraq conflicts, prolonged and repeated deployments have magnified these problems (7). The prevalence of mental health problems, especially posttraumatic stress disorder (PTSD), is also high among veterans of earlier conflicts. Meeting the health care needs of this vulnerable population is the responsibility of the VA, which operates the nation’s largest integrated health care system. In recent years, the VA has made improving mental health care for veterans an institutional priority.
In 2006, the VA Office of Policy and Planning contracted with Altarum Institute and the RAND Corporation to conduct a formal, independent evaluation of the quality of VA mental health and substance use care. The evaluation focused on veterans who had a diagnosis of one of five conditions: schizophrenia, bipolar disorder, PTSD, major depressive disorder, and substance use disorders—conditions that are the most prevalent in this population, are associated with high levels of disability, and are costly to treat. Results of this comprehensive evaluation have been reported elsewhere (8). In the study reported here, we compared the quality of VA care with that received by comparable individuals in the private sector, in an analysis conducted in collaboration with a team of researchers at Rutgers University.
Discussion
We found that the quality of care provided by the VA to veterans with mental and substance use disorders consistently exceeded the quality of care provided by the private sector on the performance indicators examined, sometimes by large margins. These findings are consistent with prior reports that VA performance exceeds that of non-VA comparison groups on process-based quality measures (2,11).
It is likely that the superior performance observed in the VA system is in part the result of the additional structures that the VA has put in place to support and encourage high-quality care. These structures influence both the provider’s ability to deliver care and the patient’s ability to access and adhere to recommended treatment. For example, the colocation of pharmacy and laboratory services near specialty and primary care clinics facilitates patient access to these services, and the integrated electronic medical record means that all providers can instantly review and address patient laboratory results. Colocation of laboratory services may be particularly important for monitoring metabolic parameters among patients receiving medications that can have significant metabolic effects, as reflected in the VA’s superior performance in this area. VA providers also have access to decision support tools, and the electronic medical record supports best practices through automated clinical reminders. Network leadership provides systematic oversight of performance, and the salaried staffing model provides more flexibility in how resources and personnel are organized. Finally, best practices are encouraged through the dissemination of clinical practice guidelines, performance metrics, and financial performance incentives for network leaders.
Some of the differences may have stemmed from differences in patient populations. For example, the VA cohort was older, and it is established in the literature that medication adherence (which influences five of these measures) is positively correlated with age. However, VA performance was superior within each age category, suggesting that population differences were not a primary reason for the observed differences.
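To make the stratification logic concrete, the brief sketch below (with hypothetical counts and age bands; it does not reproduce the study’s data) shows how comparing indicator rates within each age category distinguishes a genuine system-level difference from one driven by age mix:

```python
import pandas as pd

# Hypothetical indicator results: one row per patient, flagging whether the
# recommended care process was received (1) or not (0). The age bands and
# values are illustrative only.
records = pd.DataFrame({
    "system":   ["VA"] * 6 + ["private"] * 6,
    "age_band": ["18-44", "18-44", "45-64", "45-64", "65+", "65+"] * 2,
    "met":      [1, 1, 1, 0, 1, 1,
                 1, 0, 0, 0, 1, 0],
})

# Overall rates can be distorted when the two cohorts have different age mixes...
print(records.groupby("system")["met"].mean())

# ...so the comparison is repeated within each age band.
print(records.groupby(["age_band", "system"])["met"].mean().unstack())
```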
Our results are unlikely to be affected by missing information on dual coverage by Medicare for some individuals in the private-plan population. Because the MarketScan data set we used is derived from an employed population, it should contain very few individuals covered by Medicare in addition to their private insurance, given that only dependents are eligible for such additional coverage. Moreover, because the private plan is the first payer for the services received by such dependents (12), the service and pharmacy use measures based on MarketScan should be valid and should not be biased downward by dual enrollment in Medicare.
Our findings indicate much lower rates of both acute and maintenance antidepressant treatment than the rates reported by Busch and colleagues (11), who compared the quality of VA and private-sector treatment in 2000 by using MarketScan data. In their study, 84.7% of VA patients versus 81% of MarketScan patients received appropriate antidepressant treatment for an acute episode, and 53.9% versus 50.9%, respectively, received appropriate continuation-phase treatment. Our differing results are likely attributable to differences in how our cohorts were constructed and our indicators defined, and they underline how even small variations in the way an indicator and its eligible population are operationalized can have a substantial impact on results. Busch and colleagues required only one outpatient visit with a diagnosis of major depression, rather than two visits, and included individuals who may have had an additional modal diagnosis of bipolar disorder, schizophrenia, or PTSD.
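To illustrate how such seemingly small definitional choices shift the eligible population, the sketch below applies a one-visit and a two-visit eligibility rule to the same hypothetical outpatient claims (the column names, values, and thresholds are illustrative assumptions, not the operational definitions used in either study):

```python
import pandas as pd

# Hypothetical outpatient claims: one row per visit carrying a recorded
# diagnosis. Field names (patient_id, visit_date, diagnosis) are assumptions
# made for this illustration.
claims = pd.DataFrame({
    "patient_id": [1, 1, 2, 3, 3, 3],
    "visit_date": pd.to_datetime(["2010-01-05", "2010-02-10", "2010-03-01",
                                  "2010-01-20", "2010-01-20", "2010-04-15"]),
    "diagnosis":  ["MDD"] * 6,
})

mdd = claims[claims["diagnosis"] == "MDD"]
distinct_visit_days = mdd.groupby("patient_id")["visit_date"].nunique()

# Looser rule: at least one outpatient visit with a major depression diagnosis.
one_visit_cohort = distinct_visit_days[distinct_visit_days >= 1].index.tolist()

# Stricter rule: at least two outpatient visits on distinct dates.
two_visit_cohort = distinct_visit_days[distinct_visit_days >= 2].index.tolist()

print("One-visit rule:", one_visit_cohort)   # [1, 2, 3]
print("Two-visit rule:", two_visit_cohort)   # [1, 3] -- patient 2 drops out
# A smaller, more persistently treated denominator can yield different
# measured rates of acute and continuation-phase treatment.
```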
Our analysis had several limitations. First, the number of indicators compared was relatively small; thus the results may not be generalizable to the care delivered for these conditions more broadly. Second, we do not know the extent of missing data. For example, medication data may have been missing because individuals were paying out of pocket for medications. Although some of the private-plan individuals were covered under behavioral health carve-out arrangements, laboratory and pharmacy claims are typically not included in the carve-out and so should have been present in our data. However, data about laboratory tests may have been missing if the laboratory or physician received bundled or capitated payments for medical care on a per-episode or per-patient basis, which does not encourage the filing of claims for individual tests. This problem may be more common in private plans than in the VA and may explain some of the observed difference; we do not know the extent of bundled or capitated payments.
Third, the privately insured individuals did not have uniform coverage and benefit levels. The MarketScan data represent many different types of health plans, including fee-for-service, fully capitated, and partially capitated arrangements, and we did not have information on the generosity of coverage provided by different plans. Access to specialty care, particularly mental health services, probably varied from plan to plan, because those services may be carved out to behavioral health services companies, with varying screening and preauthorization algorithms.
However, because the contributors to the MarketScan databases tend to be large employers, the health care coverage provided is likely more comprehensive than that held by the privately insured population in general, suggesting that lack of coverage was not the reason for the differences in performance. Out-of-pocket expenses for services (for example, copayments) may also have been larger in private insurance plans than in the VA.
Fourth, the two study populations may have differed on dimensions that we could not observe and measure. Apart from age and gender, we were unable to risk-adjust for unmeasured differences. Although we present national-level estimates of performance by age and gender, there may be other systematic differences between the cohorts (for example, in race-ethnicity, socioeconomic status, or general medical or mental health status) that would be useful in understanding the performance results but could not be included, because the MarketScan database is limited to administrative data, which do not contain this information. To the extent that differences existed in socioeconomic and general medical or mental health status, the veteran population was likely more economically disadvantaged and sicker.
The direction of any bias related to the chronicity and severity of mental disorders is unclear. Because of the stigma associated with a psychiatric diagnosis, providers may record a psychiatric diagnosis only for the sickest individuals. The VA may also be more likely than private plans to identify mental disorders at lower levels of severity because of its extensive screening procedures, particularly for depression. There may have been differences in diagnostic coding practices between VA and private providers, and the severity of mental disorders may have been confounded with willingness to take medications on a long-term basis. Finally, we had no information on medication possession ratios and were unable to show definitively that paid claims in the MarketScan data were equivalent to prescriptions filled in the VA.
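For reference, a medication possession ratio is typically computed as the total days of medication supplied divided by the number of days in the observation period. The sketch below is a minimal version of that calculation, assuming hypothetical fill records; it ignores refinements such as adjusting for overlapping fills or hospital stays:

```python
from datetime import date

def medication_possession_ratio(fills, period_start, period_end):
    """Simple medication possession ratio (MPR): total days supplied divided
    by the number of days in the observation period, capped at 1.0.
    `fills` is a list of (fill_date, days_supplied) tuples within the period.
    Overlapping fills are not adjusted in this minimal sketch."""
    period_days = (period_end - period_start).days + 1
    total_days_supplied = sum(days for _, days in fills)
    return min(total_days_supplied / period_days, 1.0)

# Hypothetical example: three 30-day fills over a 180-day observation window.
fills = [(date(2010, 1, 1), 30), (date(2010, 2, 5), 30), (date(2010, 3, 20), 30)]
print(medication_possession_ratio(fills, date(2010, 1, 1), date(2010, 6, 29)))
# 90 / 180 = 0.5
```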