The value of evidence-based medicine is a principle that all physicians endorse. Medicine in general over the past century has progressed from being a “country doctor” profession to becoming a technologically sophisticated and therapeutically powerful endeavor. One hundred years ago we had little to offer except compassion, careful clinical observation, and treatments such as digitalis and anesthesia. In the year 2000 we doctors can take pride in the fact that we have discovered insulin and antibiotics and developed techniques such as blood transfusion and elegant surgical interventions. We can effectively control or cure diabetes, many cancers, cardiovascular diseases, and most infectious diseases.
In psychiatry we have also made great progress. We have moved patients from hospitals to the community. We can offer powerful medications to reduce depression, anxiety, and psychosis, as well as a variety of well-targeted psychotherapies for specific symptoms or problems. We have refined our diagnostic tools and our knowledge of pathophysiology. Increasingly, we understand how various disorders arise from neurodevelopmental or neurodegenerative processes, how life experiences intersect with these processes to help or hinder the development and evolution of disease processes, and how genes, molecules, and circuits interact dynamically in health and disease. We are learning more and more about how our treatments work at the chemical and molecular level. It is an exciting time to be a psychiatrist. As in other medical specialties, a great deal of our clinical practice is evidence-based.
Thus, in psychiatry as well as in general medicine, we have moved far from the days when the “country doc” would simply make a diagnosis and then palliatively manage a natural course of illness. Now we can intervene with treatments, perhaps preventing an acute disorder from becoming chronic and debilitating. This brings us to an era when, more than ever before, we need to develop a deeper understanding of the predictive features of psychiatric diseases, as well as markers for a wide range of possible outcomes. Without this information to guide our treatment interventions, we are left in the world of empirical treatment and “best guesses.”
This issue of the Journal contains a group of articles that contribute new evidence in one important area of clinical practice: how well can we predict the future from the present? If our patients have any single set of questions that they would like to have answered, it is, “Given my current problems, what does the future hold? Will I (or my child) get better? Did I do anything to cause these problems? Is there anything I can do now or in the future to keep them from worsening?” As the articles in this issue indicate, we are making progress, although we still have a long way to go.
For example, the Devanand et al. article asks a question everyone over the age of 50 or 60 would like to have answered. When signs of mild memory impairment occur, is there any way of telling whether they will progress to Alzheimer’s disease? In this study, baseline olfaction scores were a key variable used to predict outcome 2 years later. Group comparisons indicated that both poor olfaction and poor insight about it distinguished those who developed Alzheimer’s disease from those who did not. But can we predict on an individual basis? This is the real goal that we pursue in predictive studies. Depending on how the data are analyzed, olfactory scores may or may not help identify the unlucky individuals. We know from many other studies that younger age, female gender, and higher education are all protective factors. When these are used in the predictive model along with olfactory impairment, olfaction has no additional predictive power. When lack of awareness of the olfactory deficit is included, however, the time to develop Alzheimer’s disease can be predicted with a relative risk of 7.3. This is a fairly high predictive capacity, but it is also based on a subjective measure. So, in this study, we have some interesting progress in prediction. But, on the other hand, we will not be using this single test any time soon to make predictions on a case-by-case basis.
Other articles in this issue examine other aspects of prediction. The Johnson et al. study moves to the opposite end of the age range and examines whether personality disorders diagnosed during adolescence can be used to predict later violent and criminal behavior during adulthood. The answer is “yes, but to a modest degree.” In another study of the impact of childhood experiences on later adolescent behavior, Brown et al. observe that childhood sexual abuse is associated with risky sexual practices during adolescence. In yet another study looking at childhood, Erlenmeyer-Kimling et al. examine whether measures of cognitive and motor skills obtained early in life can be used to predict the later development of schizophrenia-related psychoses. Again, the results are important and suggestive, although not definitive. Fifty percent of those who later develop psychosis can be identified when three measures (attention, verbal memory, and motor skills) are combined.
These studies have two important messages.
The first is clinical. As clinicians, we are dealing at present with probabilities, not certainties. We cannot (yet) give our patients definitive answers about what the future holds. We do not have diagnostic or outcome markers. Lest this seem cause for alarm, we should remind ourselves that we are not a great deal worse off than our friends in pediatrics or internal medicine. Although they may be able to measure blood sugar and glucose tolerance (and therefore are a bit ahead of us with a variety of diagnostic tests), they too usually must use “fuzzy logic” when it comes to predicting who will do well or poorly and which treatment regimen will work best. Laboratory tests to diagnose cancer often rely on subjective judgments concerning the degree of cellular dysplasia that are akin to those made in rating the severity of psychiatric symptoms or insight about the degree of olfactory impairment. Studies of risk factors that predict diabetes, cancer, or coronary artery disease do no better than those published in this issue of the Journal. Like our colleagues in the rest of medicine, we have evidence that we can draw on as we discuss with our patients the meaning of a diagnosis or the rationale for choosing a treatment. We can also discuss the degree to which past experiences and behavior may be associated with particular outcomes in the future, appealing to studies such as those published in this issue. But, as we must say over and over, we have no crystal ball.
The second message concerns the national research agenda. If we want clinical psychiatry to be at the forefront of evidence-based medicine, we need to include a healthy amount of longitudinal clinical research in the national research portfolio, and we need to encourage and support people who undertake such studies. Three of the four predictive studies in this issue of the Journal use longitudinal designs, which are the coin of the realm if one wants to make valid predictions. Longitudinal designs are costly and labor-intensive. They are clinical in nature and have less high-tech glamour than neuroimaging, neuropathology, genetics, or molecular biology. They require taking a long view of things, being very patient, and waiting somewhere between 3 and 20 years before useful results can be “harvested.” The scientist who undertakes and completes such studies is displaying an exemplary ability to delay gratification in order to reach an important long-term goal. The information that we can glean from such studies may not yet be definitively predictive, but it is important and useful in building a clinical evidence base to which psychiatrists can appeal as they answer their patients’ most important question: What does the future hold?