
Abstract

Objective:

Given psychiatry's need to implement measurement-based care, the study examined whether direct-care staff could reliably administer brief positive and negative symptom instruments to track symptom changes and inform clinical decision making.

Methods:

Raters (82 case managers) were assessed at baseline. Training was provided for individuals not meeting reliability criteria. These individuals were reassessed to determine the effect of training. In addition, rater drift was assessed for raters judged to be reliable at baseline.

Results:

Seventy-seven percent of direct-care staff met criteria for reliability either at baseline or after they received additional training.

Conclusions:

A majority of direct-care staff can be trained to reliability on brief scales of positive and negative symptoms that can be used to guide clinical decision making. (Psychiatric Services 62:558–560, 2011)
In nearly all domains of medicine, quantified measures of outcome are used to characterize changes in a patient's symptoms during the course of treatment (for example, monitoring changes in blood pressure before and after the prescription of medication to determine its efficacy). Psychiatry has lagged behind other medical disciplines with respect to using standardized assessments of outcome to guide clinical decision making (1,2). In mental health care, assessments of outcome are often based on unstructured conversations between the client and prescriber that yield impressionistic judgments of progress rather than quantifiable data (1).
The Group for the Advancement of Psychiatry recommends that health care systems implement standardized outcome assessments for individuals with mental illnesses (2). This approach, known as measurement-based care, has been found to be both feasible and effective (3). Standardized self-report measures or brief symptom scales have been suggested as practical ways to monitor changes in key symptoms in routine practice (1,3).
Use of brief symptom rating scales is a component of the Texas Implementation of Medication Algorithms (TIMA), a disease management program that pairs specific medication recommendations (including dose and duration) with standardized brief assessments for monitoring outcomes among individuals with psychiatric disorders treated by publicly funded mental health organizations in Texas. TIMA is based on the Texas Medication Algorithm Project (4) and was used statewide during the period of this work. For individuals with schizophrenia, two semistructured assessments are administered to track changes in symptomatology and guide clinical decision making: the Positive Symptom Rating Scale (PSRS) and the Brief Negative Symptom Assessment (BNSA) (5).
The extent to which these brief structured interviews can be reliably applied in routine clinical practice settings by direct-care staff is unclear. We investigated this question in a large, publicly funded community mental health center (CMHC).
Our project had three objectives: first, to determine the level of interrater reliability among case management staff using the PSRS and BNSA; second, to provide training to improve interrater reliability and prevent rater drift; and third, to observe case managers during the administration of these assessments to evaluate interviewing techniques and appropriate use of anchor points (6).

Methods

Participants were 82 direct-care staff responsible for administering the PSRS and BNSA every three months to track symptom changes among their patients. Twenty-eight participants had master's degrees, and 52 had bachelor's degrees; the level of education was not documented for two participants. The mean±SD number of years the staff had worked at the CMHC was 4.18±4.05. Phases 1 to 4 of the project, described below, took place between November 2006 and August 2008. Data were analyzed with the SAS statistical package. Under the regulations of the institutional review board of the University of Texas Health Science Center, the project was not considered human subjects research because it was part of a quality improvement initiative at the CMHC; informed consent therefore was not required.
In phase 1, direct-care staff participated in an initial rater assessment to determine whether their ratings on the combined brief scales agreed with established gold standards. Participants viewed and scored three brief interviews. To attain acceptable reliability, a rater needed to be within 1 point of the criterion rating on 80% of items.
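As a concrete illustration of this criterion (a sketch only; the study's analyses were run in SAS, and this is not the authors' scoring code), the check amounts to a per-item comparison of a rater's scores with the gold-standard ratings, pooled across the three training interviews. The function and variable names below are hypothetical.

```python
def meets_reliability_criterion(rater_scores, criterion_scores,
                                max_diff=1, min_agreement=0.80):
    """Return True when the rater is within `max_diff` points of the
    gold-standard (criterion) rating on at least `min_agreement` of the
    items, pooled across all rated interviews."""
    if len(rater_scores) != len(criterion_scores):
        raise ValueError("Score lists must be the same length")
    hits = sum(abs(r - c) <= max_diff
               for r, c in zip(rater_scores, criterion_scores))
    return hits / len(rater_scores) >= min_agreement
```

Under this rule, a rater who matched the criterion within 1 point on 20 of the 24 pooled PSRS and BNSA item ratings (83%) would pass, whereas one who matched on 19 of 24 (79%) would fall just short.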
In phase 2, experts provided detailed individual training to staff not meeting criteria for reliability. Anchor points of the scales were reviewed, and a detailed explanation of the criterion ratings was provided. Trainers also conducted on-site observation of direct-care staff to evaluate interviewing techniques and the application of the rating scales with actual consumers. The focus of the observation was on the staff member's ability to elicit clear statements regarding the presence or absence of symptoms, their frequency and severity, and the extent to which they interfered with the client's daily life.
In phase 3, raters were asked to score a series of new interviews to determine whether training improved the reliability of raters who had not met the criterion and to assess rater drift among raters already certified as reliable.
In phase 4, we used a standardized rater training program for all new hires at the agency.
The PSRS assesses the four psychosis items from the expanded version of the Brief Psychiatric Rating Scale (BPRS) (7)—hallucinations, unusual thought content, conceptual disorganization, and suspiciousness. The BNSA contains four items drawn from the Schedule for the Assessment of Negative Symptoms (8) and the Negative Symptom Assessment (9)—prolonged time to respond, reduced social drive, poor grooming and hygiene, and blunted affect. Each rater was identified as meeting or failing to meet the criterion for reliability for each item, and reliability was calculated on the basis of the eight items of the two brief scales combined.

Results

Of the 82 direct-care staff members, 57% (N=47) met criteria for rating reliably, and 43% (N=35) did not. There was no relationship between degree attained and whether an individual met criteria for reliability, nor was there one between years of service and reliability. The mean±SD reliability for individuals meeting criteria was 90.1%±6.4%. For those not meeting the reliability criteria, the average score was 69.8%±8.9%.
Results of the phase 2 observation of individuals who did not reach reliability at baseline revealed that several individuals were not using the structured interview questions or were not consulting the anchor points to make ratings. Trainers indicated that the interview and anchor points should be consulted in every case. Training staff reminded the raters to preface questions with statements that reminded consumers of the time frame and to obtain information on the frequency, severity, and the extent to which the symptom interfered with daily functioning before moving to the next question.
In phase 3, there were three opportunities for staff members to review taped interviews. For each participant, scores were averaged across the interviews that they rated (38% completed one tape, 40% completed two tapes, and 22% completed all three tapes). Of the 35 individuals who did not reach the reliability criterion in phase 1, 29 participated in retesting. Of these, 55% (N=16) achieved reliability and 45% (N=13) remained below criterion. Average reliability on this retest was 81.1%±11.8%. Therefore, of the original 82 individuals participating, 77% (N=63) reached reliability on the brief scales component of training.
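As an arithmetic check on the 77% figure, using only the counts reported above (a sketch, not part of the original SAS analysis):

```python
# Counts reported in the text: 47 of 82 raters were reliable at baseline,
# and 16 of the 29 retested raters reached the criterion after training.
baseline_reliable = 47
retrained_reliable = 16
total_raters = 82

overall = (baseline_reliable + retrained_reliable) / total_raters
print(f"{overall:.0%}")  # prints "77%" (63 of 82 raters)
```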
With respect to rater drift, of the original 47 individuals who met the criterion for reliability at baseline, 36 viewed additional tape-recorded interviews that used the brief scales. About 67% (24 of 36) maintained reliability, and the remaining 33% (N=12) did not.
In phase 4, we rolled out standardized rater training for all new employees. Ten new individuals were trained in our formalized program. Eighty percent (eight of ten) met reliability criteria after this training, and the remaining two individuals missed the criterion by 1 percentage point.
In all phases, all raters were made aware of rating deficits. There was some variability in the reliability of specific scale items. Overall, item 1 on the BNSA, “Prolonged time to response,” had the highest pass rate (that is, the percentage of raters scoring within 1 point of the criterion; 95.9%), and item 2 on the BNSA, “Unchanging facial expression,” had the lowest (68.7%). Item 1 has very clear behavioral criteria for rating. Item 2 is based solely on observation and may be the most subjective item on the brief scales. In general, individuals with more disorganized speech and behavior were the most difficult to rate, with average failure rates across items ranging between 22% and 24%.
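Item-level pass rates such as those just described could be tabulated with a small extension of the same comparison. The sketch below is illustrative only; the ratings matrix (one row of eight item scores per rater) is hypothetical rather than the study data.

```python
def item_pass_rates(ratings, criterion, max_diff=1):
    """For each item, compute the share of raters whose score fell
    within `max_diff` points of the criterion (gold-standard) rating.

    `ratings` is a list with one row of item scores per rater;
    `criterion` holds the gold-standard score for each item."""
    n_raters = len(ratings)
    return [
        sum(abs(row[j] - criterion[j]) <= max_diff for row in ratings) / n_raters
        for j in range(len(criterion))
    ]
```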

Discussion and conclusions

It is important for psychiatry to move toward measurement-based care, although there are a number of challenges in doing so. Results of this study suggest that a majority of direct-care staff can apply these rating scales reliably. However, a training program is needed to ensure the reliability of these ratings. Moreover, rater drift must be considered, and periodic recalibration of raters is important (7). Some staff members were unable to use the rating scales reliably even after the standard training was provided. Whether these individuals would improve with more targeted or longer training would need to be investigated in a follow-up program.
Other approaches to measurement-based care include having physicians rather than case management staff conduct the ratings or using self-report measures of symptomatology (1,3). Although self-report may be feasible for individuals with schizophrenia, problems with insight and delusional thinking may interfere with validity (10).
The move to measurement-based care in psychiatry is important to ensure that we are helping individuals attain the most favorable outcomes. Once implemented, care must be taken to ensure that the measurements used are reliable and valid as administered by direct-care staff.

Acknowledgments and disclosures

This work was supported in part by grant R24-MH072830 from the National Institute of Mental Health.
Dr. Miller reports receiving grant funds from Pfizer, Inc., and he is a consultant for RBM, Inc. The other authors report no competing interests.

References

1.
Zimmerman M, McGlinchey JB, Chelminski I: An inadequate community standard of care: lack of measurement of outcome when treating depression in clinical practice. Primary Psychiatry 15(6):67–75, 2008
2.
Valenstein M, Adler DA, Berlant J, et al.: Implementing standardized assessments in clinical care: now's the time. Psychiatric Services 60:1372–1375, 2009
3.
Trivedi MH, Rush AJ, Gaynes BN, et al.: Maximizing the adequacy of medication treatment in controlled trials and clinical practice: STAR*D measurement-based care. Neuropsychopharmacology 32:2479–2489, 2007
4.
Rush AJ, Crismon ML, Kashner TM, et al.: Texas Medication Algorithm Project, phase 3 (TMAP-3): rationale and study design. Journal of Clinical Psychiatry 64:357–369, 2003
5.
Argo TR, Crismon ML, Miller AL, et al.: Texas Medication Algorithm Project Manual: Schizophrenia Treatment Algorithms (Schizophrenia Clinicians Manual). Austin, Texas Department of State Health Services, 2008. Retrieved from www.dshs.state.tx.us/mhprograms/pdf/SchizophreniaManual_060608.pdf. (Manuals no longer available online but can be obtained from the first author)
6.
Miller AL, Lopez L, Gonzalez JM, et al.: Research in community mental health settings: a practicum experience for researchers. Psychiatric Services 59:1246–1248, 2008
7.
Ventura J, Green MF, Shaner A, et al.: Training and quality assurance with the Brief Psychiatric Rating Scale: the drift busters. International Journal of Methods in Psychiatric Research 3:221–224, 1993
8.
Andreasen NC: The Scale for the Assessment of Negative Symptoms (SANS). Iowa City, University of Iowa, 1983
9.
Alphs L, Sommerfelt A, Muller RJ: The Negative Symptom Assessment: a new instrument to assess negative symptoms in schizophrenia. Psychopharmacology Bulletin 25:159–163, 1989
10.
Patterson TL, Goldman S, McKibbin CL, et al.: UCSD performance based skills assessment: development of a new measure of everyday functioning for severely mentally ill adults. Schizophrenia Bulletin 27:235–245, 2001

Authors

Dawn I. Velligan, Ph.D., Linda Lopez, M.A., L.P.C.I., Desirée A. Castillo, B.S., A. Camis Milam, M.D., and Alexander L. Miller, M.D.

Prof. Velligan, Ms. Castillo, and Dr. Miller are affiliated with the Department of Psychiatry, University of Texas Health Science Center, 7703 Floyd Curl Dr., MSC 7797, San Antonio, TX 78229-3900 (e-mail: [email protected]). Ms. Lopez, Ms. Manaugh, and Dr. Milam are with the Center for Health Care Services, San Antonio.

Metrics & Citations

Metrics

Citations

Export Citations

If you have the appropriate software installed, you can download article citation data to the citation manager of your choice. Simply select your manager software from the list below and click Download.

For more information or tips please see 'Downloading to a citation manager' in the Help menu.

Format
Citation style
Style
Copy to clipboard

View Options

View options

PDF/EPUB

View PDF/EPUB

Login options

Already a subscriber? Access your subscription through your login credentials or your institution for full access to this article.

Personal login Institutional Login Open Athens login
Purchase Options

Purchase this article to access the full text.

PPV Articles - Psychiatric Services

PPV Articles - Psychiatric Services

Not a subscriber?

Subscribe Now / Learn More

PsychiatryOnline subscription options offer access to the DSM-5-TR® library, books, journals, CME, and patient resources. This all-in-one virtual library provides psychiatrists and mental health professionals with key resources for diagnosis, treatment, research, and professional development.

Need more help? PsychiatryOnline Customer Service may be reached by emailing [email protected] or by calling 800-368-5777 (in the U.S.) or 703-907-7322 (outside the U.S.).

Media

Figures

Other

Tables

Share

Share

Share article link

Share