The topic of addiction in psychiatry remains contentious, riddled with moral arguments that skew public sentiment and policy. Since the 1970s, when the "War on Drugs" was declared, U.S. discourse has veered between portraying addicted individuals as morally bankrupt criminals and as victims of biology and environment.
The history of opium use in 19th-century Britain illustrates how sociopolitical factors shaped that epidemic and others like it. Public opinion about addiction, as evidenced by the opium epidemic, has been strongly influenced by professional, social, and geopolitical interests.
Prior to the 1868 Pharmacy Act, which restricted the sale of opium to pharmacists, opium was widely available, typically purchased at the grocer (1). Opium's uses were manifold, from toothaches and bruises to cough and diarrhea. The working class used it as a stimulant before going to work, and mothers found laudanum (a tincture of opium) useful for quieting babies. Medical discussion during this period had little to do with opium's addictive potential. Instead, medical experts addressed opium's role in limiting life expectancy and in accidental poisoning, as well as the lack of product purity in the market (2).
The eventual restriction of opiate use in the mid-1800s was influenced by a number of factors, including professional self-interest, class and racial tension, and various international pressures. As a professional group interested in safeguarding its role as gatekeeper to medicines, pharmacists advocated for the 1868 Pharmacy Act, which limited opium's point of sale to specific vendors. Doctors, invested in their role as prescribers, began discouraging self-medication with opium.
Class and racial tensions also contributed to growing public concern. While opium use was "respectable" for the middle class, its spread to the working class raised fears that opium abuse was contributing to their "degeneracy" (3). Later, public sentiment and xenophobia were stirred as opium became associated with Chinese opium dens; in particular, white women were thought to be at risk of being corrupted by foreigners (3).
International political and economic pressures also played a role. The Society for the Suppression of the Opium Trade, founded in 1874, campaigned specifically against Britain's involvement in the opium trade with China; in the process, it became a forceful voice describing opium's addictive nature. Wartime concerns that narcotics were corrupting the character of military men fueled the 1916 drug regulations issued under the Defence of the Realm Act. These were a precursor to the 1920 Dangerous Drugs Act, which penalized opium and its derivatives other than for "legitimate" medical use (2).
Within a century, societal and cultural factors shaped an evolving public perception of opiates. At the beginning of the 19th century, opium was considered a syrup innocuous enough for babies; by the end, it was viewed as an immoral, addictive drug to be tightly regulated. This shift illustrates how such factors can drive both policy and medical management. In the United States, our drug policy debates remain colored by morality-based rhetoric. It behooves us, especially as psychiatrists working to address the problem individually and systemically, to consider the myriad factors influencing our status quo.