Open Forum
Psychiatric Services, November 2008

Improving Care, Improving Performance, or Just Improving Numbers?

Computerized quality improvement systems that link to electronic medical records or other automated databases are being used to assess quality, support clinical decision making, and provide feedback (cognitive and financial) to clinicians and administrators. How well do these systems capture the quality of actual clinical practice, and how useful are they in improving it?

Limitations of computerized systems

Aside from these definitional issues, there is the even more fundamental question of whether these automated databases provide a valid representation of clinical practice. Kramer and colleagues (2) examined adherence to clinical guidelines for newly diagnosed cases of depression identified either from data electronically extracted from VA databases or from data manually extracted from charts. More than a third of the patients who had been "newly" diagnosed on the basis of computerized data turned out, on manual chart review, to have been diagnosed with or treated for depression in the previous six months. Because so many of these supposedly new cases had already been identified or treated, the adherence rate for patients newly diagnosed by electronic means was greater than for those who were actually newly diagnosed according to the medical record. The authors noted that if such a computerized system were used to measure the quality of care, clinicians with higher adherence rates might not be better clinicians; they might simply be treating more chronic patients.
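The arithmetic behind this inflation is easy to make explicit. The following is a minimal sketch rather than a reanalysis: the roughly one-third misclassification rate echoes the study's finding, but both adherence rates are invented for illustration. It shows how mixing already-treated patients into the "newly diagnosed" pool pushes the measured adherence rate above the rate for genuinely new cases.

```python
import random

random.seed(0)

# The ~35% misclassification rate echoes the study's finding; the two
# adherence rates below are assumptions for illustration only.
N = 10_000                  # patients flagged "newly diagnosed" by the database
P_PRIOR = 0.35              # fraction actually diagnosed or treated in prior 6 months
ADHERENCE_TRULY_NEW = 0.55  # assumed guideline adherence for truly new cases
ADHERENCE_CHRONIC = 0.85    # assumed adherence for already-treated patients

def flagged_patient():
    """Return (truly_new, adherent) for one electronically flagged patient."""
    truly_new = random.random() >= P_PRIOR
    rate = ADHERENCE_TRULY_NEW if truly_new else ADHERENCE_CHRONIC
    return truly_new, random.random() < rate

patients = [flagged_patient() for _ in range(N)]
new_only = [adherent for truly_new, adherent in patients if truly_new]

print(f"Measured adherence, all flagged cases: {sum(a for _, a in patients) / N:.1%}")
print(f"Adherence among truly new cases:       {sum(new_only) / len(new_only):.1%}")
```

Under these invented rates, the database-derived figure overstates adherence for genuinely new cases by roughly ten percentage points, without any clinician doing anything differently.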
Rosenheck and colleagues (3) attempted to validate computerized quality indicators by retrospectively analyzing data from 4,000 veterans who had been discharged from 62 specialized posttraumatic stress disorder (PTSD) treatment centers and then clinically assessed four months later. At the individual patient level, the correlations between quality indicators measuring rehospitalization and the intensity of PTSD symptoms or violent behavior were very weak (<.1), although, unsurprisingly at that sample size, statistically significant. However, there was no relationship between clinical variables and quality indicators of outpatient care or of access to, intensity of, or continuity of treatment. Most important, there was no relationship between the performance of the 62 sites on any indicator and the clinical outcomes at those sites.
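That "very weak" and "statistically significant" can coexist is a sample-size effect rather than a paradox, as a quick check shows. The only number below taken from the study is the sample of roughly 4,000 patients; at that n, even a correlation explaining 1% of the variance clears conventional significance thresholds by a wide margin.

```python
import math

def correlation_p_value(r: float, n: int) -> float:
    """Approximate two-sided p-value for a Pearson correlation r from n observations.

    Uses the t statistic t = r * sqrt((n - 2) / (1 - r^2)) with a normal
    approximation to the t distribution, which is accurate at large n.
    """
    t = r * math.sqrt((n - 2) / (1 - r * r))
    return math.erfc(abs(t) / math.sqrt(2))

n = 4000  # approximate sample size in the study
for r in (0.03, 0.05, 0.10):
    print(f"r = {r:.2f}: p = {correlation_p_value(r, n):.1e}, "
          f"variance explained = {r * r:.1%}")
```

At this sample size, a correlation of .1 yields a p value on the order of 10^-10 while leaving 99% of the variance unexplained, which is precisely why statistical significance is no warrant of clinical relevance.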
Even if computerized systems have only a limited ability to assess quality, they might still improve clinical decision making; here, too, the evidence is not encouraging. In a comprehensive review of controlled studies of computerized clinical decision support systems, only four studies dealt specifically with psychiatry (4). All four focused on improving the diagnosis of depression in primary care. Three of the four found no improvement in practitioner performance, and none found any improvement in patient outcomes. The one study that found improved practitioner performance noted that a computerized prompt to general practitioners, as opposed to a prompt written into the medical record, increased the diagnosis of major depression (5). However, that study did not assess clinical outcomes.
Rollman and colleagues (6) examined the rate of diagnosis as well as clinical variables when primary care physicians were electronically presented with the results of a depression screen of their patients. Physicians received one of three levels of electronic feedback: "active care," in which they were given patient-specific advisory messages; "passive care," in which they were simply reminded of the patient's depression diagnosis and encouraged to provide treatment; and "usual care," in which they received no information beyond the results of the depression screen. Over a six-month period, patients in the three conditions did not differ in levels of depression or in the proportions prescribed antidepressants, offered counseling, or referred to a mental health specialist. The only difference was that patients diagnosed as having depression in the active and passive care conditions had more office visits than those in the usual care condition. That the more elaborate interventions increased the number of visits without otherwise improving clinical decision making or patient outcomes suggests that instead of enhancing clinician judgment, the intervention diminished it.
In summarizing their findings, Rollman and colleagues (6) concluded that although electronic feedback through an electronic medical record system can prompt receptive physicians to perform one-time actions, such as diagnosing depression or ordering a mammogram, such systems have little positive effect on clinical outcomes in chronic conditions. These conclusions are consistent with the comprehensive review across medical specialties, which found improvement in two-thirds of the studies that measured practitioner performance but in only 13% of the studies that measured any patient outcomes (4).

Experiences in a state hospital system

The limitations of a computerized quality improvement system were highlighted for me when I served as clinical director of Illinois' mental health authority. We introduced a computerized system to improve physicians' laboratory monitoring of serum levels and potential side effects of psychotropic drugs administered in the state hospitals (7). Initially we focused on whether physicians ordered appropriate tests in a timely fashion. With the computerized system, some of the physicians with the lowest performance (40%–60% correct) became those with the highest performance (90%–100% correct). But because the computer recognized only tests done in our hospitals, it would indicate a deficiency whenever a test had been done at some other facility, which was often the case. Very high rates of performance (90%–100% correct) therefore indicated not clinical excellence but the ordering of redundant tests. Yet there was pressure to do just this; no one wants to be below average, after all. In the 50% of facilities that were by definition below the state median for laboratory monitoring of psychotropic drugs, the hospital administrators pressured the medical directors to raise the facility average, and they in turn pressured the 50% of physicians who were below their facility's average. This is known as "continuous quality improvement."
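The futility of this exercise is a matter of definition, as the toy simulation below illustrates (all scores invented): no matter how dramatically every facility improves, half of them remain below the recomputed median and thus remain targets of pressure.

```python
import random

random.seed(1)

def n_below_median(scores):
    """Count facilities strictly below the median of the current scores."""
    ranked = sorted(scores)
    mid = len(ranked) // 2
    median = (ranked[mid - 1] + ranked[mid]) / 2  # even number of facilities
    return sum(s < median for s in scores)

# Invented monitoring scores (percent "correct") for 12 facilities.
scores = [round(random.uniform(40, 60), 1) for _ in range(12)]
print("Before:", n_below_median(scores), "of", len(scores), "below the median")

# Every facility improves by a dramatic, uniform 30 points.
improved = [s + 30 for s in scores]
print("After: ", n_below_median(improved), "of", len(improved), "below the median")
```

Both lines report 6 of 12: a uniform improvement shifts the median along with the scores, so the pressure never ends.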
Now it is true that we could have adjusted the system so that ratings of 90% and above (or some other cutoff) were interpreted as poor performance, but physicians might then have complied by not ordering the appropriate test every tenth time. We therefore shifted our attention to how well physicians used the information provided by abnormal laboratory results to adjust their prescribing practices. We noted that if a physician had responded inappropriately (according to the computer algorithm) to any one of these abnormal laboratory results, the patient's hospital stay was, on average, twice as long as that of a patient for whom the physician had responded appropriately. There was also a moderately strong correlation (-.45) between the proportion of incorrect responses a physician made on these measures and his or her patients' average length of stay. Such inappropriate responses to abnormal laboratory results were made for over half of our patients, suggesting that with timely feedback we might dramatically reduce the average length of stay. But when we carried out a similar analysis restricted to patients who had no abnormal laboratory results, to our astonishment we found that much of the previously described relationship could be accounted for by situations in which there was no opportunity to make an error because there was no abnormal laboratory test to which to respond. These errors may have been indicators of poor physician practice or dysfunctional clinical teams, but preventing them was not going to do much to improve the physicians' care, make the clinical teams more functional, or shorten the length of stay. It would, however, make the computerized performance measures look great.
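How such an artifactual correlation can arise is easy to demonstrate. In the simulation below every number is invented and, crucially, every physician responds to abnormal results with exactly the same error rate; a physician-level correlation between error proportion and length of stay emerges entirely from case mix, because patients with no abnormal results offer no opportunity for error and, in this sketch, also stay a shorter time. The size and direction of any real-world correlation would of course depend on particulars not modeled here.

```python
import math
import random

random.seed(2)

N_PHYSICIANS = 60
PATIENTS_EACH = 100
P_ERROR = 0.5  # every physician errs on abnormal results at the same rate

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

error_rates, mean_stays = [], []
for _ in range(N_PHYSICIANS):
    # Case mix: how likely this physician's patients are to have abnormal labs.
    severity = random.uniform(0.1, 0.9)
    errors, total_stay = 0, 0.0
    for _ in range(PATIENTS_EACH):
        abnormal = random.random() < severity
        # Sicker patients stay longer regardless of how the physician responds.
        total_stay += random.gauss(40.0 if abnormal else 20.0, 10.0)
        if abnormal and random.random() < P_ERROR:
            errors += 1
    error_rates.append(errors / PATIENTS_EACH)
    mean_stays.append(total_stay / PATIENTS_EACH)

print(f"Error proportion vs. mean stay, across physicians: "
      f"{pearson(error_rates, mean_stays):.2f}")
```

Every simulated physician is equally (in)competent, yet the correlation comes out close to 1: the measure is rewarding and punishing case mix, not clinical judgment.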
Because "improvement" on measures used in computerized quality improvement systems may be only marginally related to actual improvement in clinical care, reliance on such measures to judge clinician performance may be not only incorrect but self-deluding. We must guard against the tendency to believe that the patients are getting the care that they need simply because the system is producing the numbers that administrators might want. Assessment of quality of care may require reliance on other, nonquantitative methods that can be complementary or corrective.

Another perspective

The models that underlie these computerized systems are an outgrowth of attempts early in the 20th century to apply positivist science (measure, predict, and control) to industrial production (8). But there are other ways of knowing, as practiced in the law, the humanities, and the arts, that may be more appropriate for professional as opposed to industrial activities. Careful descriptive and qualitative analysis of the kind used by anthropologists can often provide a far more accurate portrayal of complex situations than numbers do (9). Similarly, there may be value in distinguishing, as Aristotle did, between scientific knowledge, which may be correct in the abstract, and practical knowledge, which is concerned with the particular. For him, medicine and navigation were exemplars of the practical; a good physician or pilot must always be ready to put aside the rules and deal with the particular situation (10).
Returning to these values would mean that good mental health administration would be an outgrowth of good mental health treatment, not the other way around. Because the specialized knowledge of the profession underlies the ability to wisely administer it, mental health administrators must be clinicians first and foremost. They would spend less time looking at and discussing numbers with the other clinicians and more time working alongside them to understand the particular situation with which they are dealing.
This is not to say that quality of care never has been or never will be improved by such computerized performance improvement efforts. But it does question a way of thinking, namely "the only things you can change are the things you can measure," by suggesting that if this is true, then perhaps "the only things you are changing are measurements."

Acknowledgments and disclosures

The author reports no competing interests.

Footnote

Dr. Luchins is affiliated with the Jesse Brown Veterans Affairs Medical Center, 820 S. Damen Ave., 116A, Chicago, IL 60612 (e-mail: [email protected]).

References

1. Kerr EA, McGlynn EA, Van Vorst KA, et al: Measuring antidepressant prescribing practice in a health care system using administrative data. Joint Commission Journal on Quality Improvement 26:203–216, 2000
2. Kramer TL, Owen RR, Cannon D, et al: How well do automated performance measures assess guideline implementation for new-onset depression in the Veterans Health Administration? Joint Commission Journal on Quality and Safety 29:479–489, 2003
3. Rosenheck R, Fontana A, Stolar M: Assessing quality of care: administrative indicators and clinical outcomes in posttraumatic stress disorder. Medical Care 37:180–188, 1999
4. Garg AX, Adhikari NKJ, McDonald H, et al: Effects of computerized clinical decision support systems on practitioner performance and patient outcomes: a systematic review. JAMA 293:1223–1238, 2005
5. Cannon DS, Allen SN: A comparison of the effect of computer and manual reminders on compliance with a mental health clinical practice guideline. Journal of the American Medical Informatics Association 7:196–203, 2000
6. Rollman BL, Hanusa BH, Lowe HJ, et al: A randomized trial of computerized decision support to improve treatment of major depression in primary care. Journal of General Internal Medicine 17:493–503, 2002
7. Luchins DJ: Computerized performance monitoring systems: learning and living with its limitations. Administration and Policy in Mental Health 34:73–77, 2007
8. Gillespie R: Manufacturing Knowledge: A History of the Hawthorne Experiments. Cambridge, England, Cambridge University Press, 1991
9. Geertz C: Available Light: Anthropological Reflections on Philosophical Topics. Princeton, NJ, Princeton University Press, 2000
10. Toulmin S: Return to Reason. Cambridge, Mass, Harvard University Press, 2001

Article Information

Published in Psychiatric Services, pages 1328–1330. PubMed: 18971410. Published in print: November 2008; published online: 13 January 2015.

Author: Daniel J. Luchins, M.D.
