State Mental Health Policy: Less Is More: Virginia's Performance Outcomes Measurement System

Public mental health authorities are faced with the responsibility of containing costs, improving quality, and providing greater accountability. In response, a number of states began developing mechanisms to assess consumer outcomes and provider performance. In the late 1990s Virginia took a proactive stance by pilot testing the Performance and Outcome Measurement System (POMS), which was based on the Consumer-Oriented Report Card of the Mental Health Statistics Improvement Program (MHSIP). The Virginia Department of Mental Health, Mental Retardation, and Substance Abuse Services implemented the pilot project at eight sites. Implementing POMS statewide was intended to serve two major purposes: it would continuously improve the quality of services, and it would increase accountability for taxpayer dollars.
POMS was designed by a committee that included major stakeholder groups: consumers, family members, and providers. This committee also oversaw the implementation of the pilot project and obtained input from a wider array of stakeholders throughout the implementation process. In early 1997 six community mental health centers and two state hospitals were selected to participate in the pilot project. Two sites each were selected to represent adult mental health, child and adolescent mental health, adult substance abuse, and inpatient services. A total of 46 different indicators were tested across the four populations. The goals of the pilot project were to determine which of these indicators provided the most useful information at the lowest cost and to develop recommendations for improving POMS and for implementing the program statewide. The pilot project took place from 1997 to 1999.

The POMS project

Implementing POMS

An orientation meeting was conducted at each site to present the background, purpose, and expectations of POMS. Local workgroups were established, which included representation from direct service staff members, administrative and management staff members, consumers, and family members. Each location developed a site-specific protocol to guide the implementation of POMS and to ensure compliance with statewide standards. Finally, clinical staff members were trained in administering the assessments.
Initially, 11 standardized instruments were selected for the pilot project, not including the "satisfaction" instruments. Instruments were selected if they seemed to have the potential both to be clinically useful and to serve as good indicators of performance. Standardized assessments varied according to the populations that were being served at the pilot sites and included the Child and Adolescent Functional Assessment Scale (1), the Multnomah Community Ability Scale (2), the Addiction Severity Index (ASI) (3), the MHSIP Symptom Distress Scale (4), and the Specific Level of Functioning Scale (5). In addition, organizational and client data were collected from administrative records.

Assessing implementation

The Southeastern Rural Mental Health Research Center at the University of Virginia conducted an independent evaluation of the pilot project. Goals of the evaluation were to determine the feasibility of POMS, identify refinements that would enhance the efficiency and effectiveness of statewide implementation, and provide initial estimates of the overall costs. The evaluation involved both qualitative and quantitative methods, including focus groups, clinician ratings of instruments, and an analysis of the time associated with completing the various components of the pilot project. The results were used to identify potential problems so that a refined system could be implemented statewide and so that difficulties likely to be encountered during implementation could be anticipated, if not circumvented. Focus groups were conducted before and after the implementation of the pilot program with each site, with the central office of the Department of Mental Health, Mental Retardation, and Substance Abuse Services, and with consumers and family members. Sessions were held at the national office of the National Alliance for the Mentally Ill in Northern Virginia; at the department's central office in Richmond; and at locations in Fairfax, Suffolk, Roanoke, Hanover, Petersburg, Alexandria, and Martinsville.
A key component of the pilot test was to determine the perceived accuracy and clinical utility of the instruments. The instruments were rated by the clinicians at the time of intake, at three and six months after admission, and at discharge. To determine the amount of time that was associated with completing the components of the pilot project, time allocation sheets were completed monthly by persons who were involved in the administration, planning, training, data collection, report preparation, or any other aspect of the pilot project. A total of 838 individuals participated in the evaluation.
Audiotape recordings of focus groups were transcribed and analyzed by using QSR NUD*IST, a software package for theme coding and qualitative data analysis (6). QSR NUD*IST is an all-purpose qualitative data analysis system that was designed to aid researchers in handling nonnumerical unstructured data by supporting processes of indexing, searching, and theorizing. The analysis was inductive, meaning that concepts and themes emerged from the data throughout the process of data collection and analysis.
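The indexing and searching that such software supports can be illustrated with a small sketch. The Python fragment below is purely illustrative and is not part of the POMS evaluation or the NUD*IST software itself: the excerpts, site labels, and theme codes are invented for the example, and a real inductive analysis would develop its codes iteratively from the data rather than fixing them in advance.

from collections import defaultdict

# Hypothetical coded excerpts: (site, excerpt, assigned theme codes).
# These codes and quotations are invented for illustration only.
coded_excerpts = [
    ("Site A", "Training time was too short for new staff.", {"training", "burden"}),
    ("Site B", "The ASI fit into our intake interview easily.", {"instruments", "acceptance"}),
    ("Site A", "We could not reach consumers for follow-up ratings.", {"follow-up", "burden"}),
]

# Build an index from theme code to the excerpts tagged with it,
# analogous to the indexing function of qualitative analysis software.
index = defaultdict(list)
for site, text, codes in coded_excerpts:
    for code in codes:
        index[code].append((site, text))

def search(*codes):
    """Return excerpts tagged with all of the given theme codes."""
    wanted = set(codes)
    return [(site, text) for site, text, tags in coded_excerpts if wanted <= tags]

if __name__ == "__main__":
    print("Excerpts coded 'burden':")
    for site, text in index["burden"]:
        print(f"  {site}: {text}")
    print("Excerpts coded both 'burden' and 'training':")
    for site, text in search("burden", "training"):
        print(f"  {site}: {text}")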
Acceptance. Perhaps most striking was the recognition among clinicians and administrators that greater accountability would be demanded from providers. Overall, participants accepted that a systematic collection of performance indicators was necessary and appropriate.
Problems. The problems with POMS tended to be logistical rather than ideological. Implementation problems occurred at all eight pilot sites. Staff members expressed concern that adequate time was not allocated for training on how to administer the instruments. They also expressed concern that the time required to implement POMS was burdensome to employees and problematic for those with brief tenures, such as students and interns. Some providers commented that the time they spent on POMS detracted from their clinical work. A majority of sites encountered communication problems among local POMS coordinators, management information system staff members, and clinicians, including difficulty notifying clinicians and consumers when forms needed to be completed.
Providers reported that some consumers resisted being interviewed, some consumers were suspicious about the purpose of the interview, and some consumers felt coerced because their treatment was court ordered. Many providers complained about the length of time that was required for the interviews, perhaps resulting in insufficient effort to locate consumers for post-admission assessments.
Despite the national prominence of the standardized instruments, their use during the pilot project was controversial. Resistance was due in part to the belief that the instruments were not clinically useful. Focus groups commented that existing administrative data, such as length of stay and frequency of consumer contacts, would provide more useful information. Staff members were concerned that the instruments would be used to determine service eligibility and that they did not accurately reflect the day-to-day functioning of consumers. Some staff members felt that the instruments served to increase consumer anxiety, paranoia, and fear. An exception was the ASI, which was better accepted because it could be used as an intake interview and did not create much additional burden for consumers of substance abuse services.
Participants from all sites reported problems related to information technology. Most of these problems stemmed from a lack of resources, such as not having enough computer hardware and, especially, not having in-house technical expertise available.

Clinicians rated the study instruments for clinical accuracy and utility at intake, at three and six months after admission, and at discharge. The response rate for the rating survey was poor; only 25 clinicians completed it. Clinicians who responded tended to rate the accuracy and utility of the instruments as less than satisfactory. However, these ratings should be interpreted in light of the possibility of a negative halo effect, and the poor response rate makes them less reliable.
More individuals completed the time allocation sheets; 838 responses were collected from clinicians, administrators, project managers, trainers, and management information system personnel for up to four data points. The number of individuals who completed the sheets per site per data point ranged from three to 43, with a mean±SD of 23±15.7 individuals. Participants reported that they spent a total of 9,849 hours on POMS during the pilot project, ranging from 516 to 2,162 hours per site. The cost of implementing POMS was calculated by using standard wage rates. However, the costs varied from setting to setting, and estimated costs in other settings will depend on assumptions about what upgrades to existing data collection infrastructure will be needed and what marginal increases in staff time will be necessary.
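To make the cost logic concrete, the sketch below shows one way such an estimate could be computed from reported hours and standard wage rates. The breakdown of hours by role and the hourly rates are invented for illustration; only the 9,849-hour total comes from the pilot evaluation, and the resulting dollar figure is not the evaluation's actual cost estimate.

# Hypothetical sketch of estimating implementation cost from time allocation data.
hours_by_role = {  # assumed split of the reported 9,849 hours across roles
    "clinician": 6000,
    "administrator": 1500,
    "project_manager": 1200,
    "trainer": 600,
    "mis_staff": 549,
}

hourly_wage = {  # assumed standard wage rates, in dollars per hour
    "clinician": 25.0,
    "administrator": 30.0,
    "project_manager": 28.0,
    "trainer": 22.0,
    "mis_staff": 24.0,
}

total_hours = sum(hours_by_role.values())
total_cost = sum(hours_by_role[role] * hourly_wage[role] for role in hours_by_role)

print(f"Total hours reported: {total_hours:,}")
print(f"Estimated labor cost: ${total_cost:,.2f}")
print(f"Average labor cost per site (8 sites): ${total_cost / 8:,.2f}")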
The sites did not uniformly recruit consumers to take part in the pilot project. At some sites consumer participation was presented as optional, and informed consent, or at least assent, was obtained. At other sites the standardized assessments were considered a routine part of the intake process, and consent was not obtained. Whether consumer participation is voluntary or mandatory in future POMS efforts should be clarified to ensure consistency with accepted ethical standards for conducting service evaluations.
The software and hardware problems arose because of the variety of computing and software systems that had evolved over time at the sites, which made it difficult to address problems centrally. Systems were not uniform from locality to locality, and localities resisted adopting a uniform system because they had made substantial investments in their existing systems. These problems could be addressed by upgrading and expanding existing hardware and software.
Suggestions. Participants reported that they were satisfied with the training, although it was time-consuming. Participants also reported that having a project coordinator was critical to the success of the project and that more time needed to be allocated for the coordinator. Furthermore, participants reported a need for additional funding for items such as hardware and software and for the costs of data entry and of training in the use of standardized instruments. The pilot project included a very detailed manual of consumer selection and data collection protocols that proved to be invaluable; such a manual should be part of any implementation of a performance measurement system. It was also suggested that assessment instruments be normed at a fourth-grade reading level.
Limitations. It should be noted that the pilot project involved only a single population at each site: adults with serious mental illness, children and adolescents with serious emotional disturbances, or consumers of substance abuse services. Full implementation of POMS would involve all populations that the state authority serves, and administering the complete set of measures would place even greater burdens on the community mental health centers.

Lessons learned

A majority of participants perceived the pilot project to be costly, time-consuming, and burdensome. This type of response may reflect, in part, providers' resistance to increased scrutiny, but it may also reflect objective assessments of the project. A large number of participants were dissatisfied with the standardized instruments that were used, from both methodological and administrative standpoints. However, the ASI and similar instruments were somewhat better received because they could be incorporated into standard clinical practice, thus avoiding duplication of effort. The evaluation determined that the estimated cost of implementing the pilot project was more than twice what was budgeted. Furthermore, many sites indicated that additional staff members would be needed to implement POMS successfully.
Perhaps the greatest lesson learned from Virginia's experience with POMS is that less is more. Measuring a large number of performance and outcome indicators requires a large investment of resources. It is critical that these limited resources be devoted to the indicators with the most favorable balance of cost and benefit: those that can be measured reliably, at low cost, and with the greatest utility.
Future efforts to implement performance measurement systems should use a phased approach. A small set of indicators should be implemented first, and any associated technical difficulties should be resolved before the next set is introduced. This phased approach allows for the gradual introduction of new indicators, rather than imposing the burden of integrating a large set of indicators all at once with the expectation that the number would subsequently be reduced.
In response to the pilot testing, POMS was redesigned to prepare for statewide implementation. The redesign placed greater emphasis on training, protocol fidelity, and integration of existing clinical and administrative data, and it was accompanied by increased funding from the Commonwealth of Virginia. However, despite the large investment and the effort devoted to making POMS more efficient and responsive, the system fell victim to dramatic state budget cuts in 2002. Nonetheless, many performance indicators were retained for statewide use, and many providers continued to use the components of POMS that they found helpful.

Acknowledgments

The authors thank Edna Kamis-Gould, Ph.D., and David Mandell, Sc.D., for their helpful suggestions. This project was supported in part by grant HR1-SM-52525, CFDA No. SM-98-010, from the Center for Mental Health Services and by grant P50-MH-49173 from the National Institute of Mental Health.

Footnote

Dr. Blank is affiliated with the Center for Mental Health Policy and Services Research in the department of psychiatry at the University of Pennsylvania, 3535 Market Street, Suite 3020, Philadelphia, Pennsylvania 19104 (e-mail, [email protected]). Dr. Koch is with the Institute for Drug and Alcohol Studies at Virginia Commonwealth University in Richmond. Dr. Burkett is with the Woodrow Wilson Rehabilitation Center in Fishersville, Virginia. Howard H. Goldman, M.D., Ph.D., is editor of this column.

References

1. Hodges K, Wong MM: Psychometric characteristics of a multidimensional measure to assess impairment: the Child and Adolescent Functional Assessment Scale. Journal of Child and Family Studies 5:445–467, 1996
2. Barker S, Barron N, McFarland BH, et al: A community ability scale for chronically mentally ill consumers: part I. reliability and validity. Community Mental Health Journal 30:363–379, 1994
3. McLellan AT, Luborsky L, Woody GE, et al: An improved diagnostic evaluation instrument for substance abuse patients. Journal of Nervous and Mental Disease 168:26–33, 1980
4. The MHSIP Consumer-Oriented Mental Health Report Card: The Final Report of the Mental Health Statistics Improvement Program (MHSIP) Task Force on a Consumer-Oriented Mental Health Report Card. Hyattsville, Md, Mental Health Statistics Improvement Program, 1996
5. Schneider LC, Struening EL: SLOF: a behavioral rating scale for assessing the mentally ill. Social Work Research and Abstracts 19:9–21, 1983
6. Richards L: QSR NUD*IST User Guide. Melbourne, Australia, Qualitative Solutions and Research, 1997

Published In

Psychiatric Services
Pages: 643-645
PubMed: 15175460

History

Published online: 1 June 2004
Published in print: June 2004

Authors

Michael B. Blank, Ph.D.
J. Randy Koch, Ph.D.
Barbara J. Burkett, Ph.D.
