Published Online: 1 April 2005

State Mental Health Policy: Implementing a Multisource Outcome Assessment Protocol in a State Psychiatric Hospital: A Case Study

Most mental health care facilities that treat patients who have diagnoses of severe and persistent mental illness—for example, psychiatric hospitals and community mental health centers—have clinical services as their primary focus, with sparse resources available for outcome assessment. Nonetheless, all facilities are faced with evidence-based accountability requirements to demonstrate the effectiveness of clinical services. Unfortunately, patients with severe and persistent mental illness are a notoriously difficult population in which to demonstrate reliable treatment gains. The difficulty in tracking change in this population is particularly vexing because their clinical services typically use a disproportionate percentage of the mental health care dollar (1).
Utah State Hospital (USH) is the only inpatient facility that serves such individuals in the state of Utah and consists of ten units (280 inpatient beds) that treat adult, geriatric, and forensic patients. In 1998, USH was awarded a three-year grant from the state academic collaboration committee that provided resources for education and research in the area of outcome assessment. Since then, USH has implemented and refined an outcome management program for monitoring treatment.
Although case studies for implementing outcome management programs in managed care (2), private practice (3), and public settings (4) are available, a specialized model may be necessary when working with persons with severe and persistent mental illness. Such a model might facilitate the implementation of outcome management programs by similar facilities. Accordingly, academically based researchers, USH administrators, mental health care providers, and patient advocates collaborated to review current research and experiential knowledge and to produce a case study describing the development and application of an outcome management program for a population of patients with severe and persistent mental illness. The method used to select the outcome measures is detailed in a companion article in this issue of Psychiatric Services (5).

Program implementation

In 1998, USH created a committee composed of executive staff, unit administrators, discipline chiefs, and line clinical staff to match hospital needs (clinical versus administrative) with measures that have been shown to be effective and sensitive to change in our patient population. The team selected the expanded version of the clinician-rated Brief Psychiatric Rating Scale (BPRS-E) (6) and the self-reported Outcome Questionnaire (OQ) (7) as a multisource outcome battery (5) and implemented it in 1999. The OQ was chosen because it correlated highly with the SCL-90-R, the most frequently used self-report outcome instrument for our population (5), but was far more cost-effective, because the OQ carries no per-administration fee. Moreover, the instrument included an experimental set of 15 items designed to track outcome for our patient population.
The next two tasks facing the committee were to develop an infrastructure to collect outcome data for hospital patients and a training system to ensure the integrity of BPRS ratings.

Development of a data collection protocol

Some have suggested that it is important to identify a "champion" to ensure the success of outcome management initiatives in mental health care organizations (3). Thus a single department was given primary responsibility for each measure. The director of psychology was designated as the BPRS champion and was given responsibility for training, monitoring data collection, and reporting outcomes data to the hospital administration. The director of social work was given the parallel task for the OQ data, which were collected during regularly scheduled individual therapy appointments with social work staff. The psychology staff was initially reluctant to take on this additional responsibility, because the staff did not have regular contact with all patients in the hospital. Similar resistance has been noted in other public-sector outcome projects (4). However, resources from an existing psychology extern program were combined with resources from a predoctoral psychology internship and allocated to support this initiative. Specifically, trained psychology interns and staff would collect BPRS data, and the hospital administration would, in turn, use the data to monitor performance and report treatment effectiveness to its stakeholders through an electronic medical record (EMR) system. The EMR made outcomes data immediately available to the treatment team and other authorized personnel.
The frequency of BPRS administration was established after we weighed the clinical utility of outcomes data to the treatment team, administrative and staff resources, and patients' length of stay (the mean length of stay was 185 days). After assessing these factors, we decided that the BPRS would be administered to all patients within three days of admission, every 90 days thereafter, and within three days of discharge. At a minimum, this approach would provide treatment teams with BPRS data for assessing patients' progress at fixed intervals during the course of treatment; additional BPRS protocols would be available on an as-needed basis.
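To make the cadence concrete, the sketch below generates the due dates that this protocol implies for a single admission. It is a minimal illustration assuming only the three-day windows and the 90-day interval described above; the function name, date handling, and sample stay are ours, not part of the hospital's actual scheduling system.

    from datetime import date, timedelta

    def bprs_due_dates(admission, discharge):
        # Anchor dates for scheduled BPRS ratings: one within three days
        # of admission, one every 90 days thereafter, and one within
        # three days of discharge.
        dates = [admission]                   # rate within three days of this date
        nxt = admission + timedelta(days=90)
        while nxt < discharge:
            dates.append(nxt)                 # routine 90-day reassessment
            nxt += timedelta(days=90)
        dates.append(discharge)               # rate within three days of discharge
        return dates

    # A hypothetical 185-day stay (the hospital's mean length of stay)
    # yields four ratings: admission, day 90, day 180, and discharge.
    print(bprs_due_dates(date(2004, 1, 5), date(2004, 7, 8)))

For the mean stay, then, a treatment team would see at least four BPRS profiles, enough to plot a coarse trajectory across the hospitalization.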
A word on the frequency and interval of assessment: Utah does not have a long-term-care facility for persons with mental illness, so patients with chronic mental illnesses are treated at USH, which lengthens both the stay and the assessment interval. However, the measures we adopted at USH have been found to be useful in tracking change even for inpatient stays of only a few days' duration (8,9). It is critical to select, at the outset, an instrument that has been empirically shown to be sensitive to change (5) for the patient population and the average length of stay under consideration.

The BPRS rater training system

Research has repeatedly shown that individual differences between raters can lower interrater reliability and result in rater drift on the BPRS. To prevent these problems, we established a standard of interrater agreement of .80 (10,11) and adopted two training processes to reach and then maintain that standard. First, an expert BPRS trainer from the University of California, Los Angeles (11), conducted a two-day seminar, which was videotaped so that future psychology interns could benefit from it. The seminar followed Faustman and Overall's (12) procedure, in which one clinician interviews a volunteer patient while other clinicians observe and make simultaneous ratings. After each group practice interview, clinicians compared ratings and received feedback, and the exercise was repeated until group ratings converged. We also purchased UCLA's consensus-coded BPRS tapes.
The second process, intended to keep the training protocol standard, was to equip the psychology department to carry out this training with each new intern class. Each subsequent intake of intern raters received the BPRS manual and then completed four steps: they read the manual and observed interviews conducted by trained BPRS raters; rated six consensus-coded videos of BPRS interviews with actual patients in order to meet the agreement standard of .80; rated actual patients concurrently with a trained BPRS rater to demonstrate .80 agreement in live interviews; and rated a videotaped or live interview three times a year as a quality-assurance check against rater drift. A sketch of the certification arithmetic follows.
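This sketch assumes two indices commonly used in BPRS training work: the intraclass correlation, ICC(2,1), between a trainee's ratings and the consensus codes, and the proportion of items rated within one point of consensus. The item values are hypothetical, and the exact criterion USH applied is not reproduced here.

    import numpy as np

    def icc_2_1(ratings):
        # ICC(2,1): two-way random effects, absolute agreement, single rater
        # (standard Shrout-Fleiss form; `ratings` is an n_items x k_raters array).
        x = np.asarray(ratings, dtype=float)
        n, k = x.shape
        grand = x.mean()
        ss_rows = k * ((x.mean(axis=1) - grand) ** 2).sum()
        ss_cols = n * ((x.mean(axis=0) - grand) ** 2).sum()
        ss_err = ((x - grand) ** 2).sum() - ss_rows - ss_cols
        ms_rows, ms_cols = ss_rows / (n - 1), ss_cols / (k - 1)
        ms_err = ss_err / ((n - 1) * (k - 1))
        return (ms_rows - ms_err) / (
            ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n)

    def within_one_point(trainee, consensus):
        # Proportion of items rated within one scale point of the consensus code.
        return float(np.mean(np.abs(np.asarray(trainee) - np.asarray(consensus)) <= 1))

    # Hypothetical check against one consensus-coded tape
    # (24 BPRS-E items, each rated on a 1-to-7 scale).
    consensus = np.array([4, 2, 5, 3, 1, 6, 2, 4, 3, 5, 2, 1,
                          4, 3, 2, 5, 6, 1, 3, 4, 2, 5, 3, 2])
    trainee   = np.array([4, 3, 5, 3, 1, 5, 2, 4, 4, 5, 2, 1,
                          4, 3, 2, 6, 6, 1, 3, 4, 2, 5, 2, 2])
    print(icc_2_1(np.column_stack([trainee, consensus])))
    print(within_one_point(trainee, consensus) >= 0.80)   # the .80 standard

Under this scheme, certification simply means clearing the .80 threshold on each of the six consensus-coded tapes before rating patients independently.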

Success of the program

The training protocol was associated with an average interrater reliability (intraclass correlation coefficient) of .85 and an average interrater agreement of .90. The latter exceeds the .80 cutoff proposed by Ventura and colleagues (11) for interrater agreement, which suggests that literature-based training principles anchored to a common standard (the UCLA consensus-coded tapes) can yield interrater reliability of the sort seen more often in clinical research than in service delivery settings (13).
Administering the clinician-rated BPRS consumed substantial resources on hospital units whose principal focus is service delivery. By contrast, use of the self-reported OQ generated greater "buy in" from providers. More specifically, transferring responsibility for collecting self-reported patient data to the director of social work created a second, resource-efficient source of outcome data (patient-rated change) while also engaging an important clinical resource: social workers.
Although the cost of collecting self-reported patient-change data was minimal, an equally important consideration, and perhaps a more important one, was its benefit. In other words, were self-reported outcomes data from our patients meaningful? With this patient population, our answer is that it depends. Although the BPRS had been shown to be sensitive to patient change irrespective of diagnosis (unpublished data, Burlingame GM, Seaman S, Johnson J, et al, 2004), the self-reported measure fared less well. Approximately one-fourth of the patients admitted to the facility were either unable (because of the acuity of their illness) or unwilling to complete a self-reported outcome instrument on admission. Specifically, aggregate outcomes data from these patients were either far below expected normative levels for this population or so erratic (item endorsement at both ends of the range) that meaningful interpretation was impossible. Although the proportion of unusable self-reported assessments dropped after patients stabilized (low scores rose as denial and impaired reality testing remitted), the out-of-range values made meaningful change difficult to track.
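The two failure patterns just described lend themselves to a simple screening heuristic. The sketch below is our illustration, not the hospital's actual decision rule: the thresholds, the 0-to-4 item scale, and the normative values are assumptions made for the example.

    import numpy as np

    def screen_self_report(items, norm_mean, norm_sd,
                           scale_min=0, scale_max=4,
                           low_z=-2.0, extreme_frac=0.60):
        # Flag a protocol whose total falls far below clinical norms
        # (possible denial or impaired reality testing) or whose responses
        # pile up at both extremes of the range (possible erratic responding).
        # All thresholds here are illustrative assumptions.
        x = np.asarray(items, dtype=float)
        z = (x.sum() - norm_mean) / norm_sd
        extremes = (x == scale_min) | (x == scale_max)
        if z <= low_z:
            return "flag: far below expected normative levels"
        if (x == scale_min).any() and (x == scale_max).any() \
                and extremes.mean() >= extreme_frac:
            return "flag: erratic endorsement at both ends of the range"
        return "usable"

    # Hypothetical admission protocol on a 0-to-4 item scale.
    print(screen_self_report([0, 4, 0, 4, 4, 0, 0, 4, 4, 0, 4, 0],
                             norm_mean=30, norm_sd=8))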
A balancing consideration was that self-reported change in the remaining 75 percent of cases was moderately correlated with the BPRS change recorded by clinicians who were independent of the patient's actual treatment (the psychology interns). The correspondence in change profiles between two independent sources is clearly promising, especially given the investment of staff resources that BPRS assessment requires. This finding suggests that the progress of a significant portion of hospitalized patients might be tracked with less costly self-reported measures once patient acuity and cooperation reach appropriate levels. Interestingly, even though some patients with severe and persistent mental illness underreported their absolute degree of symptom distress on the self-reported measure, the change trajectory in a portion of this subsample remained similar to that of patients reporting higher levels of distress.
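The convergence check described in this paragraph reduces to correlating change scores from the two sources across patients. A minimal sketch, with hypothetical values standing in for actual patient data:

    import numpy as np

    # Admission-to-discharge change on each instrument for the same
    # patients; the numbers are hypothetical, for illustration only.
    bprs_change = np.array([18, 5, 22, 9, 14, 3, 11, 16])
    oq_change   = np.array([25, 2, 30, 12, 10, 7, 15, 21])

    r = np.corrcoef(bprs_change, oq_change)[0, 1]   # Pearson r between sources
    print(f"cross-source change correlation: r = {r:.2f}")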

Lessons learned

Implementing an outcomes management program at USH has affected both the hospital's funding and its treatment of patients. In the face of increased scrutiny by public agencies and consumers, USH has been able to provide evidence-based outcomes, grounded in the extant literature, demonstrating that its services make a difference in the lives of patients. Outcomes data are presented to state legislators each year as evidence of patients' progress in response to services, and this evidence-based accountability has served on more than one occasion as a strong rationale for continued government support. However, these gains were not without internal costs.
Initially, psychology staff resisted using the BPRS-E because of the instrument's labor-intensive nature. Using psychology interns to cover the bulk of this duty, and justifying their employment on that basis, greatly reduced the initial resistance. An unexpected side benefit for psychology staff was the motivation to keep their knowledge current as they became involved in training and supervising interns fresh from psychology doctoral programs. Similarly, the use of the OQ added to the responsibilities of the social work staff. However, as both instruments became more widely understood and used by treatment teams, acceptance increased.
Perhaps the greatest contributor to "buy in" by clinical staff came when the data began to demonstrate empirically that treatment was effective and made a difference in the functioning of our patients. This was the single most important factor in reducing the staff's dread and suspicion, and it created an atmosphere of pride in a job well done.
The group that has been both the most skeptical and, ultimately, the most accepting has been our psychiatrists. Their critical inquiries often led to extensive data analysis, which increased confidence in the empirical evidence on which disposition decisions could be based. Finally, patients' compliance increased with staff acceptance. Initially, outcome measures were viewed as a necessary evil, but it was rare for a patient to refuse to participate. Over time these assessments have become accepted by staff and patients alike as part of normal hospital routine.
Future directions for the outcome management program involve fuller integration of individual patients' outcomes data into our EMR. To facilitate this goal, outcomes data are entered into the hospital's electronic charting system, providing clinicians with current and individualized patient graphs and scores. These patient reports are used to track patients' progress and to adjust treatment strategies to fit individual patients' needs, a process that has been shown to improve outcome (14).
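A minimal sketch of the kind of chart-ready report such integration can produce follows; the record layout, field names, and scores are hypothetical, not the EMR's actual schema.

    from datetime import date

    def progress_report(history):
        # Render a trajectory as (date, total score, change from the
        # admission baseline); illustrative layout, not the EMR schema.
        baseline = history[0][1]
        return [(d, s, s - baseline) for d, s in history]

    # Hypothetical quarterly BPRS totals pulled from the electronic chart.
    history = [(date(2004, 1, 7), 62), (date(2004, 4, 4), 51),
               (date(2004, 7, 3), 44), (date(2004, 7, 8), 40)]
    for d, score, delta in progress_report(history):
        print(f"{d}  BPRS total = {score:3d}  change = {delta:+d}")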

Footnote

Mr. Earnshaw is assistant clinical director and Dr. Rees is director of psychology at Utah State Hospital in Provo. Mr. Dunn and Dr. Burlingame are with the department of clinical psychology at Brigham Young University in Provo. Dr. Chen is with the division of substance abuse and mental health for the state of Utah. Send correspondence to Dr. Burlingame at Brigham Young University, 238 TLRB, Provo, Utah 84602 (e-mail, [email protected]). Fred C. Osher, M.D., is editor of this column.

References

1. Carey MP, Carey KB: Behavioral research on severe and persistent mental illnesses. Behavior Therapy 30:345–353, 1999
2. Brown GS, Burlingame GM, Lambert MJ, et al: Pushing the quality envelope: a new outcomes management system. Psychiatric Services 52:925–934, 2001
3. Burlingame GM, Lambert MJ, Reisinger CW, et al: Pragmatics of tracking mental health outcomes in a managed care setting. Journal of Mental Health Administration 22:226–236, 1995
4. Blank M, Koch J, Burkett B: Less is more: Virginia's performance outcomes measurement system. Psychiatric Services 55:643–645, 2004
5. Burlingame GM, Dunn TW, Chen S, et al: Selection of outcome assessment instruments for inpatients with severe and persistent mental illness. Psychiatric Services 56:444–451, 2005
6. Lukoff D, Nuechterlein KH, Ventura J: Manual for the expanded BPRS. Schizophrenia Bulletin 12:594–602, 1986
7. Lambert MJ, Burlingame GM, Umphress V, et al: The reliability and validity of the Outcome Questionnaire. Clinical Psychology and Psychotherapy 3:249–258, 1996
8. Doerfler LA, Addis ME, Morran PW: Evaluating mental health outcomes in an inpatient setting: convergent and divergent validity of the OQ-45 and BASIS-32. Journal of Behavioral Health Services and Research 29:394–404, 2002
9. Lachar D, Espadas A, Bailley S: The Brief Psychiatric Rating Scale: contemporary applications, in The Use of Psychological Testing for Treatment Planning and Outcomes Assessment, 3rd ed. Edited by Maruish ME. Mahwah, NJ, Erlbaum, 2004
10. Andersen J, Korner A, Larsen JK, et al: Agreement in psychiatric assessment. Acta Psychiatrica Scandinavica 87:128–132, 1993
11. Ventura J, Green MF, Shaner A, et al: Training and quality assurance with the Brief Psychiatric Rating Scale: the drift busters. International Journal of Methods in Psychiatric Research 3:221–244, 1993
12. Faustman WO, Overall JE: Brief Psychiatric Rating Scale, in The Use of Psychological Testing for Treatment Planning and Outcomes Assessment, 2nd ed. Edited by Maruish ME. Mahwah, NJ, Erlbaum, 1999
13. Hill CE, Lambert MJ: Methodological issues in studying psychotherapy processes and outcomes, in Bergin and Garfield's Handbook of Psychotherapy and Behavior Change, 5th ed. Edited by Lambert MJ. New York, Wiley, 2004
14. Lambert MJ, Whipple JL, Smart DW, et al: The effects of providing therapists with feedback on patient progress during psychotherapy: are outcomes enhanced? Psychotherapy Research 11:49–68, 2001
