Psychiatry has a lengthy and contentious history of coercion in the care of patients with mental illness. Legal frameworks often govern formal use of coercion in psychiatric care, such as involuntary hospitalization, seclusion, physical restraints, and medication over objection. Because these interventions can infringe on individual liberties and cause distress for patients and others, these types of formal coercion have generated considerable attention and controversy (1–9). In recent years, recognition has increased that informal coercion, such as subtle inducements, leverage, or threats, is prevalent and influential in psychiatric care (10–15). For example, in multiple countries, patients admitted to psychiatric units on a voluntary basis often report feeling coerced into admission or during their stay (16–19). In a 1990 British survey of 412 patients with a history of voluntary psychiatric hospitalization, 183 (44%) reported that they did not believe that their admission was genuinely voluntary (16). In a 2005 survey of patients admitted to psychiatric hospitals in England, 61 (48%) of 128 voluntarily admitted patients reported high levels of perceived coercion on the MacArthur Admission Experience Survey (18). Research suggests that informal coercion in psychiatry may arise for numerous reasons, such as clinicians’ desires to promote patients’ adherence to treatment, to act in patients’ perceived best interests, and to avoid use of formal coercion (12, 15).
Digital technologies are rapidly being integrated into psychiatry and bring promise for expanding access to mental health services and information (20, 21). A study of U.S. emergency departments (EDs) in 2016 found that 885 (20%) of 4,507 used telepsychiatry services (22). Surveys estimated that 29% (3,385 of 11,576) of U.S. mental health facilities used telepsychiatry in 2017, up from approximately 15% in 2010 (23). These numbers have likely increased further in recent years, particularly as the COVID-19 pandemic has led to sudden and widespread telepsychiatry adoption (24–26). Telepsychiatry services are far from the only example of the integration of digital technologies into psychiatry. In 2017, Torous and Roberts (27) wrote that >10,000 mobile mental health apps were available for download. A 2018 article on digital psychiatry noted a “dizzying and rapidly changing array of mobile apps, artificial intelligence (AI) resources, and virtual reality (VR)–based therapies currently available” (20). Social media platforms are also being used for research, education, and interventions related to mental illness (28).
Recent literature has explored examples of coercive uses of digital technologies, such as digital coercive control as part of domestic violence and cyberbullying among youths (29–31). Although the adoption of digital technologies in psychiatry has raised concerns about the effectiveness, privacy, regulation, and equitability of these tools (20, 32), less attention has been paid to the potential relationships between digital technologies and coercion in psychiatric care. In two recent systematic reviews on coercion in psychiatry, the words “digital” and “technology” are not mentioned (5, 12). In this review, I use several case examples to examine how the integration of digital technologies into psychiatry may influence patients’ experiences of coercion in psychiatric care. I also offer suggestions for studying and addressing these effects as clinicians adopt and rely on digital technologies for the provision of psychiatric services. The following examples represent composite cases from past clinical experiences, rather than actual, individual cases.
Electronic Medical Record Flags: Case 1
A 45-year-old man is found confused and wandering in a street. Emergency medical services transport him to a local ED, where an emergency physician logs into the man’s electronic medical record (EMR). Upon opening the patient’s electronic chart, the physician sees a flashing pop-up noting high suicide risk. The pop-up also includes a flag noting high violence risk. The physician closes the flags and continues reviewing the patient’s chart, noticing that the patient has a history of psychosis and stimulant use disorder. The emergency physician has several other patients waiting for evaluation as well as an incoming patient with trauma-related injuries arriving within minutes. He briefly evaluates the confused man, then places him on a psychiatric hold for grave disability and consults with the psychiatry department for further assistance. The patient is admitted to a locked psychiatric unit. The next day, the patient develops a fever, and a nurse discovers an infected wound on his inner thigh. The on-call psychiatrist requests input from a medicine consultation team, which determines that the patient’s confusion is likely due to sepsis and delirium and transfers the patient to a medical unit for further care.
The digitization of medical records not only allows for easy access to patients’ current and historical psychiatric information but also brings opportunities to improve psychiatric care. For example, although many patients experience suicidal ideation or have other suicide risk factors, health professionals may miss or not highlight these risks in medical documentation (33). Adding electronic flags, such as alerts about high suicide risk, to patient charts is one way in which clinicians might use EMRs to better care for patients with mental illness and to mitigate the risks for adverse outcomes. These flags might remind clinicians to pursue suicide prevention, such as creating suicide safety plans with patients. A 2015 study of 200 veterans with medical record flags for high risk of suicide found that 180 (90%) had suicide safety plans in their charts (34). Furthermore, the flags may prompt clinicians to connect patients at risk for suicide with mental health services. In a 2012 study of >8,700 veterans with a substance use disorder, the addition of suicide risk flags to patients’ charts was associated with 1.22 times more primary care visit days (95% confidence interval [CI]=1.20–1.25), 1.98 times more substance use disorder visit days (95% CI=1.84–2.13), and 2.22 times more mental health visit days (95% CI=2.17–2.27) from the year before to the year after flag initiation (35).
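As a rough illustration of how such a flag might surface and prompt follow-up actions when a chart is opened, consider the sketch below; the field names, thresholds, and alert wording are hypothetical and are not drawn from any specific EMR product.

```python
# Hypothetical sketch of an EMR decision-support rule that surfaces a
# suicide-risk flag when a clinician opens a patient's chart and suggests
# follow-up actions. Field names, thresholds, and alert wording are invented
# and do not represent any particular EMR product.
from dataclasses import dataclass
from typing import List

@dataclass
class PatientRecord:
    has_suicide_risk_flag: bool
    has_safety_plan: bool
    days_since_last_mental_health_visit: int

def chart_open_alerts(record: PatientRecord) -> List[str]:
    """Return pop-up alerts to display when the chart is opened."""
    alerts = []
    if record.has_suicide_risk_flag:
        alerts.append("HIGH SUICIDE RISK flag is active on this chart.")
        if not record.has_safety_plan:
            alerts.append("No suicide safety plan documented; consider creating one.")
        if record.days_since_last_mental_health_visit > 90:
            alerts.append("No mental health visit in >90 days; consider a referral.")
    return alerts

# Example: a flagged patient with no safety plan and no recent follow-up.
print(chart_open_alerts(PatientRecord(True, False, 120)))
```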
Despite the promise of EMR flags to support psychiatric care, the degree to which these flags might shape coercion of patients warrants consideration. EMRs can transmit stigmatizing language or sensitive information about patients’ histories, which may foster more negative attitudes by clinicians toward patients (36, 37). As suggested by case 1, these flags may specifically draw clinicians’ attention to sensitive aspects of a patient’s psychiatric history, potentially biasing clinical decision making. In case 1, the emergency physician had to click through suicide- and violence-related alerts to get into the patient’s chart, anchoring the busy physician to mental health as the likely reason for the patient’s confused presentation. Because of this anchoring bias, the emergency physician leads the patient’s care down a pathway toward involuntary psychiatric hospitalization rather than pursuing a broader medical workup and discovering that the patient has delirium due to an infected wound and sepsis.
Within this context, the emergency physician and the psychiatric consultant failed to take adequate histories and to complete general medical exams that might have discovered the patient’s infected wound. These mistakes might have led not only to unnecessary involuntary psychiatric care but also to potential morbidity and mortality from untreated sepsis and delirium. These kinds of scenarios are not theoretical. Studies have estimated that as many as 46%–64% of patients with delirium are misdiagnosed when referred for psychiatric consultation, and a history of psychiatric diagnosis is often associated with a missed diagnosis (38–40). A review of data from 1,953 patients admitted to a psychiatric unit from 2001 to 2007 estimated that 55 (3%) of these patients had a medical disorder causing their symptoms and that these patients had fewer complete medical histories, general medical examinations, laboratory studies, and treatments of abnormal vital signs than patients admitted to medical units (41).
EMR flags could foster misattribution of patient presentations to psychiatric concerns or bias clinicians toward coercive interventions. For instance, the study of veterans with substance use disorders identified >8,700 who received suicide risk flags in 2012 (35). Although the study found that ED visit days fell from the year before to the year after flag initiation (incidence rate ratio [IRR]=0.83, 95% CI=0.80–0.85), the number of hospitalized days for a psychiatric disorder (IRR=1.54, 95% CI=1.43–1.66) or for a substance use disorder (IRR=1.41, 95% CI=1.30–1.53) rose significantly over the same period (35). These hospitalizations may have been necessary and helpful for patients, which could underscore the utility of EMR flags, but the study did not specify the extent to which these hospitalizations were voluntary or involuntary. If EMR flags influence clinicians to pursue different degrees of involuntary care, this finding could represent one example of how the use of digital technologies influences coercion in psychiatric care. In surveys conducted between 2018 and 2019, 68 mental health clinicians were asked to consider a situation in which they receive a suicide risk flag for a patient and information about why the flag appeared; nearly half of the respondents reported that they would require the patient to present to an ED or an inpatient unit for admission because of the flag (42).
Clinicians might also use EMR flags to deny access to care or to force patients to receive care that does not match their preferences. Beyond indicating risk for suicide, EMR flags can be used for other purposes, such as identifying patients with histories of violence. At the Veterans Health Administration (VHA), these behavioral flags may include stipulations for patients’ care, such as requiring a police escort anytime patients come onto VHA property or requiring metal detector screening before care is provided (43). These flags may warn clinicians about potentially disruptive patients and foster appropriate safety precautions, but “critics have alleged that behavioral flags are a method used to punish those who complain about their health care” (43). In a 2018 article about behavioral flags, Weinberger et al. (43) estimated that most behaviors leading to EMR flags are verbal and expressed concern that these flags might discourage patients from seeking care or force patients to receive limited care with “humiliating” restrictions.
Surveillance Cameras: Case 2
A 57-year-old woman with schizophrenia is admitted to a locked psychiatric unit for suicidal ideation amid worsening psychosis. Because of her verbal outbursts and threatening postures, staff place her in a seclusion room for safety purposes. When eating, the patient spills juice on herself, and a nurse provides her with a clean set of hospital-provided clothing. The patient is changing her clothing when she notices a small video camera in a ceiling corner. She begins yelling that the staff are “watching her.” Nursing staff attempt to reassure her that no one is watching her change and that video monitoring is used only for safety purposes. Later, a psychiatrist mentions the patient’s “acute paranoia about video cameras” during a civil commitment hearing. The psychiatrist also testifies about recent behaviors of the patient that took place when the patient was alone in her hospital room.
The integration of video surveillance into psychiatric units has drawn attention because of its potential benefits as well as potential ethical concerns. In a 2020 review of 16 articles on this topic, Appenzeller et al. (44) identified “two main purposes of video surveillance: constant surveillance for security purposes and selective observation of the safety and well-being of patients.” They concluded that existing evidence did not support use of video surveillance for security purposes in psychiatric settings and that video surveillance could lead to psychological harm for some patients. However, they also found that video surveillance may be useful under specific circumstances, such as for nighttime observation to avoid sleep disruptions (44). This article and other studies have raised concerns over privacy, consent, dignity, data protection, and potential exacerbation of psychiatric symptoms related to use of video surveillance in psychiatric settings (44–46).
Video surveillance can shape patients’ experiences of coercion in many ways. As noted by Appenzeller et al. (44), these technologies “might directly contribute to an atmosphere of detachment, control, and fear” on psychiatric units. In case 2, the patient voices understandable concerns about changing her clothing in front of a video camera whose purpose has not been explained to her, which might be monitored by unknown persons, and which could be saving its recordings to unspecified devices for unknown periods. Patients might hesitate to object to treatment, to question their clinicians, or to talk about their symptoms if they worry that anything they say or do may be recorded by video surveillance. Under the gaze of surveillance cameras, patients might also not feel comfortable speaking openly with visitors, such as family, friends, or attorneys, about how they are doing and about the circumstances of their hospitalization.
Clinicians may intentionally use video surveillance in ways that coerce patients. The psychiatrist in case 2 uses the patient’s statements about video surveillance—statements that may arise out of reasonable concern rather than psychosis—as evidence that she needs involuntary treatment. In addition, the psychiatrist refers in the civil commitment hearing to behaviors that the patient exhibited when no one else was in the room, presumably captured only by video monitoring. Civil commitment might have been an appropriate intervention for this patient; however, as a result of the psychiatrist’s questionable use of this technology for commitment purposes, this patient might end up confined for longer periods than she otherwise would have been, regardless of whether she might benefit from further care. Although the patient in case 2 became aware of the video surveillance, health professionals have used covert video monitoring to observe patients when concern arises about potential hidden motives or behaviors (e.g., malingering or factitious disorder imposed on another) (47–49). Covert video surveillance could enable clinicians to identify and respond to deceptive behaviors but could also lead to breaches of patient trust, invasions of privacy, and incidental discoveries that foster further coercion (e.g., placement on a psychiatric hold or extension of civil commitment).
Videoconferencing: Case 3
A 27-year-old man is hospitalized involuntarily on a psychiatric unit while he is experiencing a manic episode with psychotic features. A civil commitment hearing is held to determine whether he should remain in the hospital. Because of the COVID-19 pandemic, the hospital and court system began using videoconferencing to complete legal proceedings and to minimize the number of people on the psychiatric unit. During the legal hearing, the patient cannot hear the lawyers or the doctors very well on the monitor. He is not sure whether the Wi-Fi connection is poor or the sound is too low, but he is afraid to say anything because he does not want to get into more trouble. Also, he feels tired and finds it difficult to concentrate on the screen because of a new medication he has received. Suddenly, a court official says that she is upholding the order keeping him in the hospital. The patient walks back to his room after the hearing, confused about why he is still being held in the hospital.
Videoconferencing brings exciting opportunities for family meetings, legal hearings, use of interpreters, multidisciplinary team meetings, and other group interactions in mental health services. Medical institutions have for many years used videoconferencing for family televisitation with inpatients and for conducting civil commitment hearings (50–52). In 1998, the American Psychiatric Association published a resource document noting that videoconferencing may allow family members to be present for clinical interactions and can be useful for civil commitment hearings (53). In 2014, Ithman et al. (52) noted that videoconferencing in civil commitment hearings had “saved health care staff members’ time, improved productivity, enhanced patient and staff safety, and eliminated the burden and embarrassment of transportation in restraints by law enforcement. In particular, this process [helped] preserve the dignity of the patient.” More recently, use of videoconferencing in psychiatric settings may be expanding considerably because of the COVID-19 pandemic. As COVID-19 cases spread globally, mental health facilities started restricting visitors, encouraging physical distancing, and taking other steps to mitigate viral spread; meanwhile, many facilities rapidly adopted videoconferencing technologies to continue providing care, connect patients with loved ones, and conduct legal proceedings (24–26, 54).
Alongside these many benefits, clinicians should also remain mindful of other ways in which videoconferencing might shape patients’ experiences of coercion. Case 3 describes a patient who struggles to understand and participate in his civil commitment hearing because of videoconferencing. The patient’s difficulties with videoconferencing not only lead to the continuation of his civil commitment, which might not have occurred if the hearing had been in person, but also contribute to the patient’s lack of understanding of why he remains in the hospital. Many patients leave commitment hearings confused about why they remain in the hospital regardless of whether videoconferencing was used. Still, when vulnerable patients are placed in coercive settings, such as civil commitment hearings, it is worth considering how the introduction of videoconferencing might affect these patients and how to address problems (e.g., poor quality of videoconferencing) that could arise. For instance, a 2018 review (55) found that telepsychiatry was largely effective and acceptable for providing mental health services in forensic settings. However, the article also raised concern that some people, particularly those with mental disorders or substance use disorders, may feel less connected with their legal counsel or may be hesitant to share sensitive information with strangers when legal proceedings are held by videoconferencing.
Several U.S. courts have supported the use of teleconferencing or videoconferencing in civil commitment hearings (52, 56, 57), but this support has not been universal, and concerns have been raised about the effects of these technologies on patients’ rights. In one example, an inmate faced civil commitment when a North Carolina federal court was piloting videoconferencing for these proceedings; the patient challenged the use of videoconferencing, in part because of concerns that “the quality of the video transmission was not great. . . . Though each participant was recognizable, there was a fuzziness and jerkiness to the video image” (56). After the district court upheld the use of videoconferencing in the hearing, the patient appealed to the U.S. Court of Appeals for the Fourth Circuit, which decided in 1995 that the use of videoconferencing did not violate his constitutional or statutory rights. In a dissenting opinion, one circuit judge questioned “whether a man should be deprived of his liberty by a merely televised witness and whether man should be so deprived of the opportunity to be present and face and address the court” (56). By comparison, a 2017 case before the Supreme Court of Florida arose out of 15 petitions protesting the decision by a county court judge to remotely preside over civil commitment hearings rather than attend in person (58, 59). Although the Second District Court of Appeal upheld the judge’s ability to do so, the Supreme Court of Florida quashed this decision, determining that individuals “have a right to have a judicial officer physically present at their . . . commitment hearing, subject only to their consent to the contrary” (59).
Civil commitment hearings are not the only scenario in which videoconferencing might affect the coercion of patients during psychiatric care. Televisitation can allow family or friends to more easily connect with patients who are hospitalized, and these technologies can prove indispensable in certain situations, for example, to connect patients with family who live far away or to overcome in-person visitor restrictions during the COVID-19 pandemic (26, 50). In other situations, it is possible that some family or friends who would have come in person to visit patients might not do so because of the convenience of videoconferencing. By not visiting psychiatric facilities in person, family and friends may not see firsthand the circumstances of involuntary hospitalization for their loved ones; in addition, they could also miss opportunities to share information with staff or to advocate for the patient in ways that limit the duration of the patient’s civil commitment. Some patients may hesitate to speak openly with family or friends via televisitation, particularly if the patients fear that their visits are being recorded or monitored. Moreover, many psychiatric facilities limit patients’ access to the Internet and electronic devices (60), and staff willingness or availability to provide devices to patients could shape patients’ abilities to engage with remote visitors.
Risk Assessment Tools: Case 4
An 18-year-old man is placed on a 72-hour psychiatric hold and admitted to a psychiatric unit after threatening to kill himself while drunk. After becoming sober, the patient denies any suicidal thoughts, intent, or plans, insisting that he was drunk and in a fight with his girlfriend. He denies any other history of suicidal thoughts, self-harm, psychiatric hospitalizations, or access to firearms. A psychiatrist speaks with the patient’s girlfriend, who says, “I don’t think he was being serious.” The psychiatrist talks with the patient about the seriousness of the threats that he made, creates a written safety plan for any future crises, counsels him about the risks of alcohol use, schedules outpatient follow-up, and plans to discharge the patient. Before discharging him, the psychiatrist remembers a suicide risk assessment tool that a colleague had mentioned to her. She puts the patient’s information into the tool, which estimates that the patient has a 20% probability of dying by suicide within a year. The psychiatrist is unsure how the risk assessment tool works, but a 20% risk seems high to her. Instead of discharging the patient, she keeps him on a psychiatric hold until a civil commitment hearing, when a hearing officer must decide whether to release the patient.
Risk assessment tools aim to estimate the likelihood of adverse outcomes in psychiatric care (e.g., suicide or violence). Clinical risk assessment instruments have existed for many years in psychiatry, but digital technologies are changing the development and use of these tools. Computer-based algorithms can now generate results by combing through large data sets in EMRs, and researchers are using AI methods, such as machine learning, to create and test risk assessment instruments in psychiatry (42, 61). Clinicians can easily access online calculators that incorporate patient information and produce percentage-level risk estimates for specific patient behaviors. The development of risk assessment algorithms that predict suicide, violence, or other adverse events for individual patients with a reasonable degree of accuracy could revolutionize the practice of psychiatry; however, current risk assessment tools remain imperfect. A 2019 systematic review identified 64 suicide prediction models that had included data from more than 14 million participants across five countries (62). The review found that these models could generate accurate overall risk classifications but that their positive predictive values were very low (62).
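This combination of accurate overall classification and low positive predictive value largely reflects the rarity of the predicted outcome. The minimal sketch below works through the arithmetic with Bayes’ rule; the sensitivity, specificity, and prevalence values are hypothetical and chosen only for illustration, not drawn from the cited review.

```python
# Minimal sketch: why suicide prediction models can have very low positive
# predictive values (PPV) despite good overall accuracy. All numbers are
# hypothetical and for illustration only.

def positive_predictive_value(sensitivity: float, specificity: float,
                              prevalence: float) -> float:
    """PPV via Bayes' rule: P(event | flagged positive)."""
    true_positives = sensitivity * prevalence
    false_positives = (1 - specificity) * (1 - prevalence)
    return true_positives / (true_positives + false_positives)

# Suppose a model correctly flags 80% of people who die by suicide
# (sensitivity), correctly clears 90% of people who do not (specificity),
# and the one-year outcome occurs in roughly 1 in 5,000 people (prevalence).
ppv = positive_predictive_value(sensitivity=0.80, specificity=0.90,
                                prevalence=0.0002)
print(f"PPV = {ppv:.3%}")  # ~0.160%: the vast majority of flags are false positives
```

Because the outcome is rare, even a model with high specificity flags far more people who will not die by suicide than people who will, so most positive predictions are false positives.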
Use of these imperfect tools could affect the degree of coercion in psychiatric care. False-negative results are a major concern with risk assessment tools; in the aforementioned surveys of 68 mental health clinicians between 2018 and 2019, 59 (87%) believed that a false negative in suicide risk assessment (i.e., someone at risk is not flagged for suicide) was worse than a false positive (i.e., someone not at risk is flagged for suicide) (42). Yet, false-positive results can also have considerable consequences. As noted in 2019 by Marks (63),
Patients might be hospitalized against their will, and a diagnosis of suicidal thoughts would become part of their permanent medical record. Health care providers may find it difficult to ignore the results of AI-based suicide predictions even when they disagree with the predictions and suspect they might be false positives. . . . [D]octors may be incentivized to follow AI-based suicide predictions because overriding a prediction could expose them to medical malpractice liability if they don’t hospitalize patients who subsequently attempt or complete suicide.
Involuntarily hospitalizing someone on the basis of a false computer-generated prediction of suicide may seem reminiscent of science fiction. However, risk assessment tools related to psychiatry are already being used in these types of contexts. Facebook has developed suicide risk assessment algorithms to scan content on its social networking platforms, and, in a 2018 post, Facebook’s chief executive officer, Mark Zuckerberg, reported that “we’ve helped first responders quickly reach around 3,500 people globally who needed help” (64). Mental health professionals have published case reports about noticing suicidal postings on social media and grappling with the ethical complexities of what to do next (65, 66); evaluating a patient brought to a hospital because a social networking platform’s algorithm identified the person as at risk for suicide further complicates this kind of clinical decision making. A 2020 article noted that “though [Facebook] has not published outcome data for this program, it strains credulity to imagine there have not been some false positive reports” (67).
These types of risk assessment tools not only might produce inaccurate or misleading results that affect coercion in psychiatric care but might also perpetuate structural inequities related to race, sex, socioeconomic status, and other patient characteristics. For example, research suggests that Black patients are more likely than White patients to receive a diagnosis of a psychotic disorder or to be hospitalized for psychiatric reasons (68, 69). Algorithms might incorporate these kinds of systemic biases, compounding discrimination by producing skewed risk assessments and influencing the psychiatric care received by patients who are marginalized. In a recent example, researchers found racial bias in a widely used algorithm for stratifying patients’ health risks and targeting high-risk patients for additional care management (70). Because less money is often spent on Black patients than on White patients with similar needs, and the algorithm stratified risk on the basis of costs rather than illness, the algorithm perpetuated the pattern of devoting less attention to the health needs of Black patients (70).
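The mechanism, that the choice of prediction target (costs) rather than the model itself encodes the disparity, can be illustrated with a brief sketch. The groups, spending gap, and numbers below are invented solely to show the dynamic and are not data from the cited study.

```python
# Hypothetical sketch of how choosing "cost" as the prediction target can
# encode bias: two groups with identical illness burden, but historically
# less money is spent on group B. A score trained to track cost will then
# rank group B as "lower risk" despite equal need. Numbers are invented.
import random

random.seed(0)

def simulate_patient(group: str) -> dict:
    illness_burden = random.uniform(0, 10)          # same distribution in both groups
    spending_factor = 1.0 if group == "A" else 0.6  # historically lower spending on B
    annual_cost = 1000 * illness_burden * spending_factor
    return {"group": group, "illness": illness_burden, "cost": annual_cost}

patients = [simulate_patient("A") for _ in range(5000)] + \
           [simulate_patient("B") for _ in range(5000)]

# A cost-based "risk score" simply mirrors expected spending.
for group in ("A", "B"):
    members = [p for p in patients if p["group"] == group]
    mean_illness = sum(p["illness"] for p in members) / len(members)
    mean_cost_risk = sum(p["cost"] for p in members) / len(members)
    print(f"Group {group}: mean illness {mean_illness:.2f}, "
          f"mean cost-based risk {mean_cost_risk:.0f}")
# Both groups have the same mean illness burden, but group B's cost-based
# risk scores are ~40% lower, so fewer group B patients would be targeted
# for additional care management.
```

Stratifying on a label that reflects historical access to care rather than illness reproduces that history in the algorithm’s output, which is the dynamic described in the cited study (70).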
Case 4 highlights the difficulties with interpreting the results of risk assessment tools, particularly because algorithms may produce probability-based predictions rather than binary “positive” or “negative” predictions. In case 4, the patient has some risk factors for suicide (e.g., recent verbal threat about suicide and substance use) but also many protective factors (e.g., denying suicidal intent or plans, no known history of previous attempts or self-harm, no known history of psychiatric hospitalizations, no known access to firearms, and completion of a safety plan with staff). It is not clear whether the 20% risk for suicide in the next year generated by the risk assessment tool represents a “false positive”; however, the psychiatrist stops the planned discharge of the patient and instead continues his involuntary hospitalization because of the tool’s risk rating.
Clinicians may not know how to interpret or use specific risk assessment tools, which could influence coercion in psychiatric care. In the surveys conducted between 2018 and 2019, 64 (94%) of 68 mental health clinicians reported that they would want to know which clinical features led to a patient receiving a machine learning–based suicide risk flag (42). This finding relates directly to case 4, in which the psychiatrist did not understand how the suicide risk assessment tool worked. The psychiatrist interpreted 20% as a high risk for suicide within the next year; however, this number is not necessarily meaningful without an understanding of how the tool generates it and how it should be interpreted. For instance, a group at the University of Oxford has developed Web-based risk calculators for suicide (Oxford Mental Illness and Suicide tool [OxMIS]) and violence (Oxford Mental Illness and Violence tool [OxMIV]) (71, 72). An OxRisk website (https://oxrisk.com) provides quick access to these risk assessment calculators, including tabs for entering patient characteristics, with risk percentages that update as inputs are entered (73). As noted on the website, these calculators may not apply to every patient because the tools were validated with data from patients in Sweden with bipolar or schizophrenia spectrum disorders. The website describes additional limitations that clinicians need to recognize; for example, the OxMIV calculator provides probability scores for violent offending within 12 months only up to a maximum of 20% and reports any higher estimate simply as >20%.
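For readers unfamiliar with how such calculators behave, the following sketch shows a generic logistic-style risk calculator with a capped display similar in spirit to the >20% behavior described above. The predictors, weights, and cutoff are hypothetical; this is not the OxMIS or OxMIV model.

```python
# Hypothetical sketch of a web-based risk calculator of the general kind
# described above: a small set of patient characteristics is combined into a
# probability, and any estimate above a ceiling is displayed only as ">20%".
# The predictors, weights, and cutoff below are invented for illustration
# and are NOT the OxMIS or OxMIV models.
import math

HYPOTHETICAL_WEIGHTS = {
    "intercept": -4.0,
    "prior_self_harm": 1.2,
    "substance_use_disorder": 0.8,
    "recent_inpatient_admission": 0.8,
}

def predicted_probability(features: dict) -> float:
    """Logistic combination of binary predictors (0 or 1)."""
    logit = HYPOTHETICAL_WEIGHTS["intercept"] + sum(
        HYPOTHETICAL_WEIGHTS[name] * value for name, value in features.items()
    )
    return 1 / (1 + math.exp(-logit))

def displayed_risk(features: dict, ceiling: float = 0.20) -> str:
    """Cap the displayed estimate, analogous to reporting only '>20%'."""
    p = predicted_probability(features)
    return f">{ceiling:.0%}" if p > ceiling else f"{p:.1%}"

print(displayed_risk({"prior_self_harm": 1, "substance_use_disorder": 1,
                      "recent_inpatient_admission": 0}))  # "11.9%"
print(displayed_risk({"prior_self_harm": 1, "substance_use_disorder": 1,
                      "recent_inpatient_admission": 1}))  # ">20%"
```

Without knowing the predictors, the weights, the population in which a tool was validated, and conventions such as a display ceiling, a clinician cannot readily judge whether a number like 20% is high, well calibrated, or even applicable to the patient at hand.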
Moving Forward
These cases represent just a small subset of examples of coercion related to the use and misuse of digital technologies in psychiatry. Numerous digital technologies, including mobile mental health applications, inpatient and outpatient telepsychiatry, and sensor-based therapeutics, are making their way into psychiatric practice (26, 74–76), and many of these technologies may expand access to care, increase patients’ choices, and decrease patients’ experiences of coercion. In addition, coercion in psychiatric care has nuanced implications, and it is not always entirely “bad” or “good.” Instances of coercion, such as involuntary psychiatric hospitalization, may be necessary to save a patient’s life during a psychiatric crisis, even if the experience of coercion is distressing for the patient. Some patients who undergo involuntary hospitalization later report that these experiences were justified (77). Moreover, public surveys in multiple countries suggest widespread support for coercive interventions, such as involuntary hospitalization or medication, in specific situations for patients with mental illness (78–80). In a 2018 survey of 1,173 U.S. adults, respondents read a vignette of a person who met clinical criteria for schizophrenia, and approximately 60% supported coerced hospitalization (80).
Still, it is important to study how the integration of digital technologies into psychiatric care might influence the occurrences of formal and informal coercion. Do suicide risk flags in EMRs affect rates of involuntary, as opposed to voluntary, psychiatric hospitalization? How often do mental health professionals cite evidence from surveillance cameras in civil commitment hearings, and does this information shape the decision making of court officials? Do patients feel that civil commitment hearings are legitimate and inclusive when held via videoconferencing as opposed to in person? Are specific risk assessment tools associated with changes in civil commitment incidence, length of inpatient stays, or other outcomes related to coercion in psychiatric care?
Despite the prevalence of coercion in psychiatric care, publicly available data about coercive interventions, such as the number of involuntary psychiatric hospitalizations in the United States each year, can be sporadic and difficult to find (81–83). Privacy concerns, decentralization of mental health systems, and other constraints may complicate obtaining these types of data (82, 83). The integration of digital technologies into psychiatry not only calls for reassessing the degree of coercion in psychiatric care but also provides opportunities to do so. The digitization of health care information can support efforts by clinicians, policy makers, patient advocacy groups, and others to examine and oversee coercive practices in psychiatry. Because information about psychiatric care can be sensitive and stigmatizing for individuals, steps must be taken (e.g., deidentification) to protect the privacy of those whose data may be used in these efforts.
Disclosing use of specific technologies to patients may be one way to mitigate these concerns about coercion. It is not practical or reasonable to ask that clinicians disclose every use of technology in psychiatric care, such as daily use of smartphones or EMRs. Moreover, some patients, particularly those grappling with acute psychiatric symptoms, substance intoxication, or cognitive impairment, may struggle to understand these types of disclosures. However, when specific technologies are likely to shape patients’ experiences of coercion in psychiatric settings, clinicians should consider discussing these issues with patients and their families in meaningful ways, rather than briefly mentioning these technologies in a checkbox manner. For instance, the 2020 review of surveillance cameras on psychiatric units recommended that “patients need to be clearly informed about when they are observed and when they have privacy so that they have the chance to present themselves accordingly” (44). Similarly, patients who will attend civil commitment hearings deserve to know what these hearings will look like and how they will proceed; if clinical staff, attorneys, or court officials will attend only by videoconferencing, patients should be informed of these arrangements ahead of time so that they can prepare with a reasonable understanding of the proceedings.
Providing patients with mechanisms for “opting out” of certain technologies might safeguard against some of these concerns. In a 2020 article, Torous et al. (84) wrote that “the only established contraindication to telehealth is a patient not wishing to partake,” adding that lack of income, homelessness, language, culture, and other factors may shape people’s willingness or ability to use digital mental health services. In their 2020 article on surveillance cameras, Appenzeller et al. (44) noted that “in units where the default is video monitoring, patients should be given at least the right to opt out in favor of in-person observation,” although staffing constraints may influence the degree to which psychiatric facilities can fulfill these requests. Similarly, for patients who are experiencing psychiatric crises and present to rural EDs, in-person psychiatric consultation may not be available as an alternative to telepsychiatry because of shortages of mental health professionals in rural areas. The abovementioned 2017 decision by the Supreme Court of Florida indicated that patients have a right to have a judicial officer physically present at civil commitment hearings and could request that videoconferencing not be used (59). However, these rights may have limits, and these types of requests may not always be feasible. For instance, to limit disease transmission during the COVID-19 pandemic, the Supreme Court of Florida directed the state’s courts to conduct remote legal proceedings by using technology when possible and temporarily suspended “all rules of procedure, court orders, and opinions applicable to court proceedings that limit or prohibit the use of communication equipment for conducting proceedings by remote electronic means” (85).
Patients may desire options for correcting or even erasing digital trails from their psychiatric care. Patients may not know where, or for how long, electronic information related to psychiatric care, such as video surveillance footage from inpatient units and videoconferencing content from civil commitment hearings, is stored. Expanding patients’ abilities to access their own medical data may allow patients to challenge or correct information with which they disagree. For example, many patients are gaining access to their health information through open EMRs, and some may be displeased to learn about mental health–related flags in their charts. Such patient responses do not mean that clinicians should automatically remove the flags; instead, clinicians may want to engage in a discussion with patients about the meaning and the purpose of these flags. Creating formal processes by which patients can request removal of EMR flags may be an alternative approach. Federal regulations state that patients have the right to submit requests for amendments to their medical records and that health care providers must generally respond to these requests within 60 days (86). A 2014 study examined 818 amendment requests by 181 patients over a 7-year period, finding that 70 (9%) requests were related to psychiatric conditions, 70 (9%) were related to in-clinic behavior, and 52 (6%) were related to drug or alcohol use (87).
Last, alongside the integration of digital technologies into psychiatric settings, clinicians need training and support regarding how these technologies function and their implications for patient care. Patients might experience elements of coercion when digital technology malfunctions, such as poor-quality videoconferencing during civil commitment hearings or false-positive results from a risk assessment algorithm; however, the ways in which clinicians interpret, react to, and use digital technologies in psychiatric settings can influence coercion of patients, particularly when clinicians lack proper training or understanding of these tools. The burden of understanding how digital technologies are integrated into practice should not fall entirely on the shoulders of individual clinicians. By developing guidelines and clinical decision support tools, health care organizations can help clinicians learn about and navigate these types of new digital technologies in psychiatric practice. Alternatively, for highly specialized or complex technologies, health care organizations might embed clinical technology specialists in psychiatric settings who can train frontline clinicians, offer technical support, liaise with patients and families, and monitor technology use to minimize unnecessary coercion (88).
Conclusions
Digital technologies are reshaping psychiatry, and these technologies will in many ways improve the quality, accessibility, and personalization of psychiatric care. Technophobia, or the fear of new technology, would be a misguided reaction to these changes (89), and a balanced approach requires careful consideration of the many potential benefits alongside the possible risks of these technologies in psychiatry. Psychiatric care has long entailed coercive elements even in the absence of digital technologies; still, the integration of digital technologies into psychiatric settings where coercion is frequent or even legally sanctioned warrants further scrutiny. Coercion is just one of many possible outcomes related to digital technology use and misuse in psychiatry, alongside loss of privacy, distress for patients and families, transmission of stigmatizing information, and exacerbation of racial and socioeconomic disparities. At the same time, these technologies bring new opportunities for reconsidering and studying coercive practices to support the well-being of and respect for patients in psychiatric settings.