The introduction of fluoxetine and other selective serotonin reuptake inhibitor (SSRI) antidepressants in the late 1980s and 1990s substantially widened the scope of pharmacological treatment for major depression. But over the subsequent decades, there have been few major advances in the treatment of a disorder that affects nearly 15 million Americans annually and is a leading cause of disability worldwide. Recently, highly publicized meta-analyses and examinations of unpublished clinical trials have raised questions about the efficacy of antidepressants and fueled widespread skepticism in the popular media. Taken together, these difficulties have prompted scholars to describe a “crisis of confidence in antidepressants” (1). We have entered an era in which the former editor of the New England Journal of Medicine, in a highly publicized book review, is open to theories that “psychoactive drugs are useless … or worse than useless” (2).
One major factor contributing to both the failure of experimental antidepressant compounds and the controversies surrounding currently approved medications is the remarkably high placebo response rate in clinical trials. Undoubtedly, placebos have an important role in antidepressant trials. They help account for the passage of time in a disorder with episodic fluctuation of symptoms, and they control for nonspecific therapeutic factors such as the attention of an interested clinical investigator. But since 1980, the placebo response rate in antidepressant trials has climbed by as much as 7% per decade and has been as high as 50% in some trials (3, 4).
Why are placebo response rates so high? Some portion of the increase may be attributable to the broadening of inclusion criteria; subjects who are less depressed may be more likely to respond to placebo. Industry-sponsored trials that pay investigators incrementally by subject give raters an incentive to inflate symptom ratings at screening visits relative to midtrial visits, and the incremental payments also discourage careful screening for exclusion criteria. Participation that is induced by cash payments may lead subjects to exaggerate their symptoms. Clinical trials have also become longer in duration, giving nonspecific factors more time to accumulate. Another factor contributing to high placebo response rates may be the limited extent to which volunteers in antidepressant trials are generalizable to patients in clinical practice. Since the initial antidepressant trials in the 1960s, participants have gone from patients recruited primarily from inpatient psychiatric populations to outpatient volunteers who are often recruited by advertisements (4). At times, these symptomatic volunteers have participated in other trials: when we contact potential participants to schedule screening, they often ask to be reminded which trial we are screening for, or they mistake our trial for a different protocol in which they recently participated. The following stories illuminate some of the other problems we have recently encountered in using symptomatic volunteers.
“Mr. A” participated in a trial of a biological marker for selecting medications in the treatment of depression. He improved over the course of the study, so it was surprising that he chose to discontinue his medication when the study concluded. A year later, he responded to an advertisement recruiting depressed subjects for a clinical trial of an experimental antidepressant with a novel mechanism. But after we enrolled Mr. A in this new trial, the study monitor informed us that a subject with the identical birth date and initials had just completed the same trial at another local institution. Mr. A claimed that this was an honest mistake, but we were skeptical. Our suspicions were confirmed when we learned by happenstance that he was also enrolled as a healthy comparison subject in a trial at a neighboring institution.
“Ms. B” is another symptomatic volunteer who responded to an advertisement and enrolled in one of our trials. Three weeks later, we received a call from a colleague across town saying that Ms. B was already a subject in a trial at that site. This double enrollment was discovered when a pharmacy called the second site to verify a duplicate prescription for a controlled substance.

Mr. A and Ms. B are two problematic volunteers whom we have identified in our own clinical trial program over the past year. We worry, however, that there may be similar subjects we are not aware of. In a fall 2009 article about the U.S. pharmaceutical industry moving clinical trials overseas, a Wyeth executive noted that “the trend toward placebo results and so-called failed trials is increasing in the United States. That means we are getting 'fake' patients, treatment-resistant patients, or patients who have been recycled from other studies” (5). Shiovitz et al. (6) recently described similar cases of “professional subjects” and were told anonymously by trial sponsors that the rate of duplicate subjects in some protocols has been as high as 5%.
While our experiences and similar reports from colleagues are merely anecdotal, they do call attention to the assumption that symptomatic volunteers are representative of patients seen in clinical practice. This assumption is made in all areas of clinical research that use symptomatic volunteers, in an era in which web sites such as www.clinicalconnection.com and www.clinicaltrials.gov connect volunteers to investigators studying conditions as diverse as bipolar disorder, diabetes, obesity, and psoriasis. But the problem is more pronounced for psychiatry, which lacks validated biomarkers and often relies on self-report to confirm diagnoses.
While it is always assumed that patients seen in clinical practice are seeking relief from suffering, the motives of the symptomatic volunteer are inherently more opaque. On the altruistic end of the spectrum, many volunteers may find gratification in participating in biomedical research that contributes to our collective evidence base and will provide better treatments for future patients. Understandably, some volunteers desire free access to medical care or cash stipends. On the pernicious end of the spectrum, the “job” of an actively fraudulent volunteer is to present with the correct diagnosis at study entry in order to collect a paycheck for answering questions and completing self-reports. Wherever on the spectrum these volunteers fall, their motivations, and whatever underlying substrate propels them to pick up a telephone and respond to an advertisement, may make them distinct from patients seen in clinical practice.
Perhaps the most effective way to address these problems is to increase the number of patients who self-present for treatment in clinical trials. Relying less on symptomatic volunteers would also mean relying less on for-profit research organizations, which do not treat patients and therefore depend heavily on advertisements for recruitment, and it would foster a greater degree of responsible and transparent collaboration between industry and the medical centers where real patients present for care.
We cannot conclude on the basis of our anecdotal experiences that placebo response rates are being significantly inflated by fraudulent volunteers. But we do think that these incidents point out some of the difficulties with subject recruitment in clinical trials, and we hope that by being transparent about our own difficulties and shortcomings we can generate a broader discussion. We also urge clinicians and professional associations to encourage real patients to volunteer for clinical research. Taken together, these measures could improve the value of clinical research.