Reviews & Overviews

Digital Technologies and Coercion in Psychiatry

Abstract

Psychiatry has a contentious history of coercion in the care of patients with mental illness, and legal frameworks often govern use of coercive interventions, such as involuntary hospitalization, physical restraints, and medication over objection. Research also suggests that informal coercion, including subtle inducements, leverage, or threats, is prevalent and influential in psychiatric settings. Digital technologies bring promise for expanding access to psychiatric care and improving delivery of these services; however, use and misuse of digital technologies, such as electronic medical record flags, surveillance cameras, videoconferencing, and risk assessment tools, could lead to unexpected coercion of patients with mental illness. Using several composite case examples, the author proposes that the integration of digital technologies into psychiatric care can influence patients’ experiences of coercion and provides recommendations for studying and addressing these effects.

HIGHLIGHTS

Digital technologies are rapidly being integrated into psychiatry and shaping the care of patients who have mental illness.
The integration of digital technologies into psychiatric settings where coercion is frequent or even legally sanctioned warrants scrutiny.
Use and misuse of digital technologies, particularly by clinicians who lack proper training or understanding of these tools, may influence the degree of coercion in psychiatric care.
The digitization of psychiatric care calls for reassessing coercion in psychiatric care and offers new opportunities for studying coercive practices.
Psychiatry has a lengthy and contentious history of coercion in the care of patients with mental illness. Legal frameworks often govern formal use of coercion in psychiatric care, such as involuntary hospitalization, seclusion, physical restraints, and medication over objection. Because these interventions can infringe on individual liberties and cause distress for patients and others, these types of formal coercion have generated considerable attention and controversy (1–9). In recent years, recognition has increased that informal coercion, such as subtle inducements, leverage, or threats, is prevalent and influential in psychiatric care (10–15). For example, in multiple countries, patients admitted to psychiatric units on a voluntary basis often report feeling coerced into admission or during their stay (16–19). In a 1990 British survey of 412 patients with a history of voluntary psychiatric hospitalization, 183 (44%) reported that they did not believe that their admission was genuinely voluntary (16). In a 2005 survey of patients admitted to psychiatric hospitals in England, 61 (48%) of 128 voluntarily admitted patients reported high levels of perceived coercion on the MacArthur Admission Experience Survey (18). Research suggests that informal coercion in psychiatry may arise for numerous reasons, such as clinicians’ desires to promote patients’ adherence to treatment, to act in patients’ perceived best interests, and to avoid use of formal coercion (12, 15).
Digital technologies are rapidly being integrated into psychiatry and bring promise for expanding access to mental health services and information (20, 21). A study of U.S. emergency departments (EDs) in 2016 found that 885 (20%) of 4,507 used telepsychiatry services (22). Surveys estimated that 29% (3,385 of 11,576) of U.S. mental health facilities used telepsychiatry in 2017, up from approximately 15% in 2010 (23). These numbers have likely increased further in recent years, particularly as the COVID-19 pandemic has led to sudden and widespread telepsychiatry adoption (24–26). Telepsychiatry services are far from the only example of the integration of digital technologies into psychiatry. In 2017, Torous and Roberts (27) wrote that >10,000 mobile mental health apps were available for download. A 2018 article on digital psychiatry noted a “dizzying and rapidly changing array of mobile apps, artificial intelligence (AI) resources, and virtual reality (VR)–based therapies currently available” (20). Social media platforms are also being used for research, education, and interventions related to mental illness (28).
Recent literature has explored examples of coercive uses of digital technologies, such as digital coercive control as part of domestic violence and cyberbullying among youths (29–31). Although the adoption of digital technologies in psychiatry has raised concerns about the effectiveness, privacy, regulation, and equitability of these tools (20, 32), less attention has been paid to the potential relationships between digital technologies and coercion in psychiatric care. In two recent systematic reviews on coercion in psychiatry, the words “digital” and “technology” are not mentioned (5, 12). Using several case examples, I examine in this review how the integration of digital technologies into psychiatry may influence patients’ experiences of coercion in psychiatric care. I also offer suggestions for studying and addressing these effects as clinicians adopt and rely on digital technologies for the provision of psychiatric services. The following examples represent composite cases from past clinical experiences, rather than actual, individual cases.

Electronic Medical Record Flags: Case 1

A 45-year-old man is found confused and wandering in a street. Emergency medical services transport him to a local ED, where an emergency physician logs into the man’s electronic medical record (EMR). Upon opening the patient’s electronic chart, the physician sees a flashing pop-up noting high suicide risk. The pop-up also includes a flag noting high violence risk. The physician closes the flags and continues reviewing the patient’s chart, noticing that the patient has a history of psychosis and stimulant use disorder. The emergency physician has several other patients waiting for evaluation as well as an incoming patient with trauma-related injuries arriving within minutes. He briefly evaluates the confused man, then places him on a psychiatric hold for grave disability and consults with the psychiatry department for further assistance. The patient is admitted to a locked psychiatric unit. The next day, the patient develops a fever, and a nurse discovers an infected wound on his inner thigh. The on-call psychiatrist requests input from a medicine consultation team, which determines that the patient’s confusion is likely due to sepsis and delirium and transfers the patient to a medical unit for further care.
The digitization of medical records not only allows for easy access to patients’ current and historical psychiatric information but also brings opportunities to improve psychiatric care. For example, although many patients experience suicidal ideation or have other suicide risk factors, health professionals may miss or not highlight these risks in medical documentation (33). Adding electronic flags, such as alerts about high suicide risk, to patient charts is one way in which clinicians might use EMRs to better care for patients with mental illness and to mitigate the risks for adverse outcomes. These flags might remind clinicians to pursue suicide prevention, such as creating suicide safety plans with patients. A 2015 study of 200 veterans with medical record flags for high risk of suicide found that 180 (90%) had suicide safety plans in their charts (34). Furthermore, the flags may prompt clinicians to connect patients at risk for suicide with mental health services. In a 2012 study of >8,700 veterans with a substance use disorder, the addition of suicide risk flags to patients’ charts was associated with 1.22 times more primary care visit days (95% confidence interval [CI]=1.20–1.25), 1.98 times more substance use disorder visit days (95% CI=1.84–2.13), and 2.22 times more mental health visit days (95% CI=2.17–2.27) from the year before to the year after flag initiation (35).
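For readers unfamiliar with how such alerts are attached to charts, the following is a minimal sketch in Python of one plausible design. The names here (RiskFlag, PatientChart, on_chart_open) are hypothetical illustrations, not any real EMR vendor’s API.

```python
# Minimal sketch of a chart-open risk alert, assuming a hypothetical EMR
# data model; no real vendor's API is depicted.
from dataclasses import dataclass, field


@dataclass
class RiskFlag:
    kind: str        # e.g., "HIGH_SUICIDE_RISK" or "HIGH_VIOLENCE_RISK"
    rationale: str   # why the flag was placed
    placed_by: str   # clinician or algorithm responsible for the flag


@dataclass
class PatientChart:
    patient_id: str
    flags: list[RiskFlag] = field(default_factory=list)


def on_chart_open(chart: PatientChart) -> list[str]:
    """Return the pop-up messages a clinician must dismiss before charting."""
    return [
        f"ALERT ({f.kind}): {f.rationale} [placed by {f.placed_by}]"
        for f in chart.flags
    ]


chart = PatientChart(
    patient_id="0001",
    flags=[RiskFlag("HIGH_SUICIDE_RISK", "Prior attempt documented", "Dr. A")],
)
for alert in on_chart_open(chart):
    print(alert)
```

Even in this toy design, the rationale and provenance fields matter: a flag that surfaces only a label (“high suicide risk”) gives the clinician nothing to weigh, which is part of what allows such alerts to anchor decision making, as case 1 illustrates.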
Despite the promise of EMR flags to support psychiatric care, the degree to which these flags might shape coercion of patients warrants consideration. EMRs can transmit stigmatizing language or sensitive information about patients’ histories, which may foster more negative attitudes by clinicians toward patients (36, 37). As suggested by case 1, these flags may specifically draw clinicians’ attention to sensitive aspects of a patient’s psychiatric history, potentially biasing clinical decision making. In case 1, the emergency physician had to click through suicide- and violence-related alerts to get into the patient’s chart, anchoring him to mental health as the likely explanation for the patient’s confused presentation. Because of this anchoring bias, the emergency physician leads the patient’s care down a pathway toward involuntary psychiatric hospitalization rather than pursuing a broader medical workup and discovering that the patient has delirium due to an infected wound and sepsis.
Within this context, the emergency physician and the psychiatric consultant failed to take adequate histories and to complete general medical exams that might have discovered the patient’s infected wound. These mistakes might have led not only to unnecessary involuntary psychiatric care but also to potential morbidity and mortality from untreated sepsis and delirium. These kinds of scenarios are not theoretical. Studies have estimated that as many as 46%–64% of patients with delirium are misdiagnosed when referred for psychiatric consultation, and a history of psychiatric diagnosis is often associated with a missed diagnosis (38–40). A review of data from 1,953 patients admitted to a psychiatric unit from 2001 to 2007 estimated that 55 (3%) of these patients had a medical disorder causing their symptoms and that these patients had fewer complete medical histories, general medical examinations, laboratory studies, and treatments of abnormal vital signs than patients admitted to medical units (41).
EMR flags could foster misattribution of patient presentations to psychiatric concerns or bias clinicians toward coercive interventions. For instance, the study of veterans with substance use disorders identified >8,700 who received suicide risk flags in 2012 (35). Although the study found that ED visit days fell from the year before to the year after flag initiation (incidence rate ratio [IRR]=0.83, 95% CI=0.80–0.85), the number of hospitalized days for a psychiatric disorder (IRR=1.54, 95% CI=1.43–1.66) or for a substance use disorder (IRR=1.41, 95% CI=1.30–1.53) rose significantly over the same period (35). These hospitalizations may have been necessary and helpful for patients, which could underscore the utility of EMR flags, but the study did not specify the extent to which these hospitalizations were voluntary or involuntary. If EMR flags influence clinicians to pursue different degrees of involuntary care, this finding could represent one example of how the use of digital technologies influences coercion in psychiatric care. In surveys conducted between 2018 and 2019, 68 mental health clinicians were asked to consider a situation in which they receive a suicide risk flag for a patient and information about why the flag appeared; nearly half of the respondents reported that they would require the patient to present to an ED or an inpatient unit for admission because of the flag (42).
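For readers less familiar with incidence rate ratios, a short worked example may help. The counts below are invented for illustration; the cited study (35) reports only the ratios and confidence intervals.

```python
# Illustrative IRR arithmetic with made-up counts (not data from the study).
days_before = 1_000   # hypothetical psychiatric hospitalization days, year before flags
days_after = 1_540    # hypothetical days in the year after flag initiation
person_years = 8_700  # same cohort observed over both one-year periods

rate_before = days_before / person_years
rate_after = days_after / person_years
irr = rate_after / rate_before
print(f"IRR = {irr:.2f}")  # 1.54, i.e., 54% more hospitalized days per person-year
```

An IRR of 1.54 thus means hospitalized days rose by about half; because the reported 95% confidence interval (1.43–1.66) excludes 1.0, chance alone is an unlikely explanation.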
Clinicians might also use EMR flags to deny access to care or to force patients to receive care that does not match their preferences. Beyond indicating risk for suicide, EMR flags can be used for other purposes, such as identifying patients with histories of violence. At the Veterans Health Administration (VHA), these behavioral flags may include stipulations for patients’ care, such as requiring a police escort anytime patients come onto VHA property or metal detector screening before offering care (43). These flags may warn clinicians about potentially disruptive patients and foster appropriate safety precautions, but “critics have alleged that behavioral flags are a method used to punish those who complain about their health care” (43). In a 2018 article about behavioral flags, Weinberger et al. (43) estimated that most behaviors leading to EMR flags are verbal and expressed concern that these flags might discourage patients from seeking care or force patients to receive limited care with “humiliating” restrictions.

Surveillance Cameras: Case 2

A 57-year-old woman with schizophrenia is admitted to a locked psychiatric unit for suicidal ideation amid worsening psychosis. Because of her verbal outbursts and threatening postures, staff place her in a seclusion room for safety purposes. When eating, the patient spills juice on herself, and a nurse provides her with a clean set of hospital-provided clothing. The patient is changing her clothing when she notices a small video camera in a ceiling corner. She begins yelling that the staff are “watching her.” Nursing staff attempt to reassure her that no one is watching her change and that video monitoring is used only for safety purposes. Later, a psychiatrist mentions the patient’s “acute paranoia about video cameras” during a civil commitment hearing. The psychiatrist also testifies about recent behaviors of the patient that took place when the patient was alone in her hospital room.
The integration of video surveillance into psychiatric units has drawn attention because of its potential benefits as well as potential ethical concerns. In a 2020 review of 16 articles on this topic, Appenzeller et al. (44) identified “two main purposes of video surveillance: constant surveillance for security purposes and selective observation of the safety and well-being of patients.” They concluded that existing evidence did not support use of video surveillance for security purposes in psychiatric settings and that video surveillance could lead to psychological harm for some patients. However, they also found that video surveillance may be useful under specific circumstances, such as for nighttime observation to avoid sleep disruptions (44). This article and other studies have raised concerns over privacy, consent, dignity, data protection, and potential exacerbation of psychiatric symptoms related to use of video surveillance in psychiatric settings (44–46).
Video surveillance can shape patients’ experiences of coercion in many ways. As noted by Appenzeller et al. (44), these technologies “might directly contribute to an atmosphere of detachment, control, and fear” on psychiatric units. In case 2, the patient voices understandable concerns about changing her clothing in front of a video camera whose purpose has not been explained to her, that might be monitored by unknown persons, and that could be saving its recordings to unspecified devices for unknown periods. Patients might hesitate to object to treatment, to question their clinicians, or to talk about their symptoms if they worry that anything they say or do may be recorded by video surveillance. Under the gaze of surveillance cameras, patients might also not feel comfortable speaking openly about how they are doing and about the circumstances of their hospitalization with visitors, such as family, friends, or attorneys.
Clinicians may intentionally use video surveillance in ways that coerce patients. The psychiatrist in case 2 uses the patient’s statements about video surveillance, statements that may arise out of reasonable concern rather than psychosis, as evidence that she needs involuntary treatment. In addition, the psychiatrist refers in the civil commitment hearing to behaviors that the patient exhibited when no one else was in the room, presumably captured only by video monitoring. Civil commitment might have been an appropriate intervention for this patient; however, as a result of the psychiatrist’s questionable use of this technology for commitment purposes, this patient might end up confined for longer periods than she otherwise would have been, regardless of whether she might benefit from further care. Although the patient in case 2 became aware of the video surveillance, health professionals have used covert video monitoring to observe patients when concern arises about potential hidden motives or behaviors (e.g., malingering or factitious disorder imposed on another) (47–49). Covert video surveillance could enable clinicians to identify and respond to deceptive behaviors but could also lead to breaches of patient trust, invasions of privacy, and incidental discoveries that foster further coercion (e.g., placement on a psychiatric hold or extension of civil commitment).

Videoconferencing: Case 3

A 27-year-old man is hospitalized involuntarily on a psychiatric unit while he is experiencing a manic episode with psychotic features. A civil commitment hearing is held to determine whether he should remain in the hospital. Because of the COVID-19 pandemic, the hospital and court system began using videoconferencing to complete legal proceedings and to minimize the number of people on the psychiatric unit. During the legal hearing, the patient cannot hear the lawyers or the doctors very well on the monitor. He is not sure whether the Wi-Fi connection is poor or the sound is too low, but he is afraid to say anything because he does not want to get into more trouble. Also, he feels tired and finds it difficult to concentrate on the screen because of a new medication he has received. Suddenly, a court official says that she is upholding the order keeping him in the hospital. The patient walks back to his room after the hearing, confused about why he is still being held in the hospital.
Videoconferencing brings exciting opportunities for family meetings, legal hearings, use of interpreters, multidisciplinary team meetings, and other group interactions in mental health services. Medical institutions have for many years used videoconferencing for family televisitation with hospitalized patients and for civil commitment hearings (50–52). In 1998, the American Psychiatric Association published a resource document noting that videoconferencing may allow family members to be present for clinical interactions and can be useful for civil commitment hearings (53). In 2014, Ithman et al. (52) noted that videoconferencing in civil commitment hearings had “saved health care staff members’ time, improved productivity, enhanced patient and staff safety, and eliminated the burden and embarrassment of transportation in restraints by law enforcement. In particular, this process [helped] preserve the dignity of the patient.” More recently, use of videoconferencing in psychiatric settings may be expanding considerably because of the COVID-19 pandemic. As COVID-19 cases spread globally, mental health facilities started restricting visitors, encouraging physical distancing, and taking other steps to mitigate viral spread; meanwhile, many facilities rapidly adopted videoconferencing technologies to continue providing care, connect patients with loved ones, and conduct legal proceedings (24–26, 54).
Alongside these many benefits, clinicians should also remain mindful of other ways in which videoconferencing might shape patients’ experiences of coercion. Case 3 describes a patient who struggles to understand and participate in his civil commitment hearing because of videoconferencing. The patient’s difficulties with videoconferencing not only lead to the continuation of his civil commitment, which might not have occurred if the hearing had been in person, but also contribute to the patient’s lack of understanding of why he remains in the hospital. Many patients leave commitment hearings confused about why they remain in the hospital regardless of whether videoconferencing was used. Still, when vulnerable patients are placed in coercive settings, such as civil commitment hearings, it is worth considering how the introduction of videoconferencing might affect these patients and ways to fix problems (e.g., poor quality of videoconferencing) that could arise. For instance, a 2018 review (55) found that telepsychiatry was largely effective and acceptable for providing mental health services in forensic settings. However, the article also raised concern that some people, particularly those with mental disorders or substance use disorders, may feel less connected with their legal counsel or may be hesitant to share sensitive information with strangers when legal proceedings are held by videoconferencing.
Several U.S. courts have supported the use of teleconferencing or videoconferencing in civil commitment hearings (52, 56, 57), but this support has not been universal, and concerns have been raised about the effects of these technologies on patients’ rights. In one example, an inmate faced civil commitment when a North Carolina federal court was piloting videoconferencing for these proceedings; the patient challenged the use of videoconferencing, in part because of concerns that “the quality of the video transmission was not great. . . . Though each participant was recognizable, there was a fuzziness and jerkiness to the video image” (56). After the district court upheld the use of videoconferencing in the hearing, the patient appealed to the U.S. Court of Appeals for the Fourth Circuit, which decided in 1995 that the use of videoconferencing did not violate his constitutional or statutory rights. In a dissenting opinion, one circuit judge questioned “whether a man should be deprived of his liberty by a merely televised witness and whether man should be so deprived of the opportunity to be present and face and address the court” (56). By comparison, a 2017 case before the Supreme Court of Florida arose out of 15 petitions protesting the decision by a county court judge to remotely preside over civil commitment hearings rather than attend in person (58, 59). Although the Second District Court of Appeal upheld the judge’s ability to do so, the Supreme Court of Florida quashed this decision, determining that individuals “have a right to have a judicial officer physically present at their . . . commitment hearing, subject only to their consent to the contrary” (59).
Civil commitment hearings are not the only scenario in which videoconferencing might affect the coercion of patients during psychiatric care. Televisitation can allow family or friends to more easily connect with patients who are hospitalized, and these technologies can prove indispensable in certain situations, for example, to connect patients with family who live far away or to overcome in-person visitor restrictions during the COVID-19 pandemic (26, 50). In other situations, it is possible that some family or friends who would have come in person to visit patients might not do so because of the convenience of videoconferencing. By not visiting psychiatric facilities in person, family and friends may not see firsthand the circumstances of involuntary hospitalization for their loved ones; in addition, they could also miss opportunities to share information with staff or to advocate for the patient in ways that limit the duration of the patient’s civil commitment. Some patients may hesitate to speak openly with family or friends via televisitation, particularly if the patients fear that their visits are being recorded or monitored. Moreover, many psychiatric facilities limit patients’ access to the Internet and electronic devices (60), and staff willingness or availability to provide devices to patients could shape patients’ abilities to engage with remote visitors.

Risk Assessment Tools: Case 4

An 18-year-old man is placed on a 72-hour psychiatric hold and admitted to a psychiatric unit after threatening to kill himself while drunk. After becoming sober, the patient denies any suicidal thoughts, intent, or plans, insisting that he was drunk and in a fight with his girlfriend. He denies any other history of suicidal thoughts, self-harm, psychiatric hospitalizations, or access to firearms. A psychiatrist speaks with the patient’s girlfriend, who says, “I don’t think he was being serious.” The psychiatrist talks with the patient about the seriousness of the threats that he made, creates a written safety plan for any future crises, counsels him about the risks of alcohol use, schedules outpatient follow-up, and plans to discharge the patient. Before discharging him, the psychiatrist remembers a suicide risk assessment tool that a colleague had mentioned to her. She puts the patient’s information into the tool, which estimates that the patient has a 20% probability of dying by suicide within a year. The psychiatrist is unsure how the risk assessment tool works, but a 20% risk seems high to her. Instead of discharging the patient, she keeps him on a psychiatric hold until a civil commitment hearing, when a hearing officer must decide whether to release the patient.
Risk assessment tools may estimate the likelihood of adverse outcomes in psychiatric care (e.g., suicide or violence). Clinical risk assessment instruments have existed for many years in psychiatry, but digital technologies are changing the development and use of these tools. Computer-based algorithms can now generate results by combing through large data sets in EMRs, and researchers are using AI methods, such as machine learning, to create and test risk assessment instruments in psychiatry (42, 61). Clinicians can easily access online calculators that incorporate patient information and provide risk assessments down to a percentage for specific behaviors by patients. The development of risk assessment algorithms that predict suicide, violence, or other adverse events for individual patients with a reasonable degree of accuracy could revolutionize the practice of psychiatry; however, current risk assessment tools remain imperfect. A 2019 systematic review identified 64 suicide prediction models that had included data from more than 14 million participants across five countries (62). The review found that these models could generate accurate overall risk classifications but that the positive predictive values of these models are very low (62).
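A brief calculation shows why very low positive predictive values follow almost inevitably from the rarity of the outcome, even for a model that classifies well overall. The sensitivity, specificity, and prevalence below are illustrative assumptions, not figures drawn from the cited review (62).

```python
# Base-rate arithmetic: why suicide prediction models can have strong
# sensitivity and specificity yet low positive predictive value (PPV).
# All three inputs are illustrative assumptions.
sensitivity = 0.80   # P(flagged | patient will attempt)
specificity = 0.90   # P(not flagged | patient will not attempt)
prevalence = 0.005   # 0.5% of patients attempt within the prediction window

true_positives = sensitivity * prevalence
false_positives = (1 - specificity) * (1 - prevalence)
ppv = true_positives / (true_positives + false_positives)  # Bayes' rule
print(f"PPV = {ppv:.1%}")  # about 3.9%
```

Under these assumptions, roughly 24 of every 25 flagged patients would never attempt suicide, which frames the false-positive concerns discussed next.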
Use of these imperfect tools could affect the degree of coercion in psychiatric care. False-negative results are a major concern with risk assessment tools; in the aforementioned surveys of 68 mental health clinicians between 2018 and 2019, 59 (87%) believed that a false negative in suicide risk assessment (i.e., someone is not flagged for suicide who is at risk) was worse than a false positive (i.e., someone is flagged for suicide who is not at risk) (42). Yet, false-positive results can also have considerable consequences. As noted in 2019 by Marks (63),
Patients might be hospitalized against their will, and a diagnosis of suicidal thoughts would become part of their permanent medical record. Health care providers may find it difficult to ignore the results of AI-based suicide predictions even when they disagree with the predictions and suspect they might be false positives. . . . [D]octors may be incentivized to follow AI-based suicide predictions because overriding a prediction could expose them to medical malpractice liability if they don’t hospitalize patients who subsequently attempt or complete suicide.
Involuntarily hospitalizing someone on the basis of a false computer-generated prediction of suicide may seem reminiscent of science fiction. However, risk assessment tools related to psychiatry are already being used in these types of contexts. Facebook has developed suicide risk assessment algorithms to scan content on its social networking platforms, and, in a 2018 post, Facebook’s chief executive officer, Mark Zuckerberg, reported that “we’ve helped first responders quickly reach around 3,500 people globally who needed help” (64). Mental health professionals have published case reports about noticing suicidal postings on social media and grappling with the ethical complexities of what to do next (65, 66); evaluating a patient brought to a hospital because a social networking platform’s algorithm identified the person at risk for suicide further complicates this kind of clinical decision making. A 2020 article noted that “though [Facebook] has not published outcome data for this program, it strains credulity to imagine there have not been some false positive reports” (67).
These types of risk assessment tools not only might produce inaccurate or misleading results that affect coercion in psychiatric care but might also perpetuate structural inequities related to race, sex, socioeconomic status, and other patient characteristics. For example, research suggests that Black patients are more likely to receive a diagnosis of a psychotic disorder or to be hospitalized for psychiatric reasons than are White patients (68, 69). Algorithms might incorporate these kinds of systemic biases, compounding discrimination by producing skewed risk assessments and influencing the psychiatric care received by marginalized patients. In a recent example, researchers found racial bias in a widely used algorithm for stratifying patients’ health risks and targeting high-risk patients for additional care management (70). Because less money is often spent on Black patients than on White patients with similar needs, and because the algorithm stratified risk by cost rather than by illness, it systematically directed less attention to the health needs of Black patients (70).
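A toy simulation makes this mechanism concrete: when two groups carry identical illness burden but one receives less spending, an algorithm that ranks patients by cost under-selects the lower-spending group for extra care. The numbers below are invented and only loosely modeled on the cited study’s design (70).

```python
# Toy simulation of label-choice bias: cost is used as a proxy for illness,
# but 30% less is spent on group B. All parameters are illustrative.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
illness = rng.gamma(shape=2.0, scale=1.0, size=n)  # identical across groups
group_b = rng.random(n) < 0.5                      # half the cohort
spend_factor = np.where(group_b, 0.7, 1.0)         # less spent on group B
cost = illness * spend_factor * rng.lognormal(0.0, 0.2, n)

# The program enrolls the top 10% of patients ranked by predicted cost
selected = cost >= np.quantile(cost, 0.90)

# Among the truly sickest decile, who actually gets selected?
sickest = illness >= np.quantile(illness, 0.90)
for name, mask in (("group A", ~group_b), ("group B", group_b)):
    print(f"{name}: {selected[sickest & mask].mean():.0%} of its sickest decile selected")
```

Running this sketch selects a markedly larger share of group A’s sickest patients than group B’s, even though illness was drawn from the same distribution for both groups.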
Case 4 highlights the difficulties with interpreting the results of risk assessment tools, particularly because algorithms may produce probability-based predictions rather than binary “positive” or “negative” predictions. In case 4, the patient has some risk factors for suicide (e.g., recent verbal threat about suicide and substance use) but also many protective factors (e.g., denying suicidal intent or plans, no known history of previous attempts or self-harm, no known history of psychiatric hospitalizations, no known access to firearms, and completion of a safety plan with staff). It is not clear whether the 20% risk for suicide in the next year generated by the risk assessment tool represents a “false positive”; however, the psychiatrist stops the planned discharge of the patient and instead continues his involuntary hospitalization because of the tool’s risk rating.
Clinicians may not know how to interpret or use specific risk assessment tools, which could influence coercion in psychiatric care. In the surveys conducted between 2018 and 2019, 64 (94%) of 68 mental health clinicians reported that they would want to know which clinical features led to a patient receiving a machine learning–based suicide risk flag (42). This finding is a key part of case 4, in that the psychiatrist did not understand how the suicide risk assessment tool worked. The psychiatrist interpreted 20% as a high risk for suicide within the next year; however, this number means little without knowing how the tool generates and displays it. For instance, a group at the University of Oxford has developed Web-based risk calculators for suicide (Oxford Mental Illness and Suicide tool [OxMIS]) and violence (Oxford Mental Illness and Violence tool [OxMIV]) (71, 72). An OxRisk website (https://oxrisk.com) provides quick access to these risk assessment calculators, including tabs for entering patient characteristics and risk percentages that update as inputs are entered (73). As noted on the website, these calculators may not apply to every patient because the tools were validated with data from patients in Sweden with bipolar or schizophrenia spectrum disorders. The website describes additional limitations that clinicians need to recognize; for example, the OxMIV calculator reports probabilities of violent offending within 12 months only up to a maximum of 20% and displays any higher estimate simply as “>20%,” as sketched below.
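Such display conventions change what a number means: on a capped calculator, “20%” can stand for anything from 20% upward. The sketch below is a hypothetical illustration of capping, not the OxMIV implementation itself.

```python
# Hypothetical capped risk display, loosely modeled on the capping behavior
# described above; not the actual OxMIV code.
def display_risk(probability: float, cap: float = 0.20) -> str:
    """Render a model probability the way a capped calculator might."""
    if probability > cap:
        return f">{cap:.0%}"  # everything above the cap collapses to ">20%"
    return f"{probability:.0%}"

for p in (0.03, 0.19, 0.21, 0.65):
    print(f"model output {p:.2f} -> displayed as {display_risk(p)}")
```

A clinician who, like the psychiatrist in case 4, reads a capped or otherwise post-processed figure as an exact one-year probability may overestimate or underestimate risk and make coercive decisions on that basis.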

Moving Forward

These cases represent just a small subset of examples of coercion related to the use and misuse of digital technologies in psychiatry. Numerous digital technologies, including mobile mental health applications, inpatient and outpatient telepsychiatry, and sensor-based therapeutics, are making their way into psychiatric practice (26, 74–76), and many of these technologies may expand access to care, increase patients’ choices, and decrease patients’ experiences of coercion. In addition, coercion in psychiatric care has nuanced implications, and it is not always entirely “bad” or “good.” Instances of coercion, such as involuntary psychiatric hospitalization, may be necessary to save a patient’s life during a psychiatric crisis, even if the experience of coercion is distressing for the patient. Some patients who undergo involuntary hospitalization later report that these experiences were justified (77). Moreover, public surveys in multiple countries suggest widespread support for coercive interventions, such as involuntary hospitalization or medication, in specific situations for patients with mental illness (78–80). In a 2018 survey of 1,173 U.S. adults, respondents read a vignette of a person who met clinical criteria for schizophrenia, and approximately 60% supported coerced hospitalization (80).
Still, it is important to study how the integration of digital technologies into psychiatric care might influence the occurrences of formal and informal coercion. Do suicide risk flags in EMRs affect rates of involuntary, as opposed to voluntary, psychiatric hospitalization? How often do mental health professionals cite evidence from surveillance cameras in civil commitment hearings, and does this information shape the decision making of court officials? Do patients feel that civil commitment hearings are legitimate and inclusive when held via videoconferencing as opposed to in person? Are specific risk assessment tools associated with changes in civil commitment incidence, length of inpatient stays, or other outcomes related to coercion in psychiatric care?
Despite the prevalence of coercion in psychiatric care, publicly available data about coercive interventions, such as the number of involuntary psychiatric hospitalizations in the United States each year, can be sporadic and difficult to find (81–83). Privacy concerns, decentralization of mental health systems, and other constraints may complicate obtaining these types of data (82, 83). The integration of digital technologies into psychiatry not only calls for reassessing the degree of coercion in psychiatric care but also provides opportunities to do so. The digitization of health care information can support efforts by clinicians, policy makers, patient advocacy groups, and others to examine and oversee coercive practices in psychiatry. Because information about psychiatric care can be sensitive and stigmatizing for individuals, steps must be taken (e.g., deidentification) to protect the privacy of those whose data may be used in these efforts.
Disclosing use of specific technologies to patients may be one way to mitigate these concerns about coercion. It is not practical or reasonable to ask that clinicians disclose every use of technology in psychiatric care, such as daily use of smartphones or EMRs. Moreover, some patients, particularly those grappling with acute psychiatric symptoms, substance intoxication, or cognitive impairment, may struggle to understand these types of disclosures. However, when specific technologies are likely to shape patients’ experiences of coercion in psychiatric settings, clinicians should consider discussing these issues with patients and their families in meaningful ways, rather than briefly mentioning these technologies in a checkbox manner. For instance, the 2020 review of surveillance cameras on psychiatric units recommended that “patients need to be clearly informed about when they are observed and when they have privacy so that they have the chance to present themselves accordingly” (44). Similarly, patients who are going to attend civil commitment hearings deserve to know what these hearings will look like and how they function; if clinical staff, attorneys, or court officials will attend only by videoconferencing, patients should be informed about these arrangements ahead of time so that they can prepare with a reasonable understanding of how these proceedings will function.
Providing patients with mechanisms for “opting out” of certain technologies might safeguard against some of these concerns. In a 2020 article, Torous et al. (84) wrote that “the only established contraindication to telehealth is a patient not wishing to partake,” adding that lack of income, homelessness, language, culture, and other factors may shape people’s willingness or ability to use digital mental health services. In their 2020 article on surveillance cameras, Appenzeller et al. (44) noted that “in units where the default is video monitoring, patients should be given at least the right to opt out in favor of in-person observation,” although staffing constraints may influence the degree to which psychiatric facilities can fulfill these requests. Similarly, for patients who are experiencing psychiatric crises and present to rural EDs, in-person psychiatric consultation may not be available as an alternative to telepsychiatry because of shortages of mental health professionals in rural areas. The abovementioned 2017 decision by the Supreme Court of Florida indicated that patients have a right to have a judicial officer physically present at civil commitment hearings and could request that videoconferencing not be used (59). However, these rights may have limits, and these types of requests may not always be feasible. For instance, to limit disease transmission during the COVID-19 pandemic, the Supreme Court of Florida directed the state’s courts to conduct remote legal proceedings by using technology when possible and temporarily suspended “all rules of procedure, court orders, and opinions applicable to court proceedings that limit or prohibit the use of communication equipment for conducting proceedings by remote electronic means” (85).
Patients may desire options for correcting or even erasing digital trails from their psychiatric care. Patients may not know where and how long electronic information related to psychiatric care, such as video surveillance on inpatient units and videoconferencing content from civil commitment hearings, is stored. Expanding patients’ abilities to access their own medical data may allow patients to challenge or correct information with which they disagree. For example, many patients are gaining access to their health information through open EMRs, and some may be displeased to learn about mental health–related flags in their charts. Such patient responses do not mean that clinicians should automatically remove the flags; instead, clinicians may want to engage in a discussion with patients about the meaning and the purpose of these flags. Creating formal processes by which patients can request removal of EMR flags may be an alternative approach. Federal regulations state that patients have the right to submit requests for amendments to their medical records and that health care providers must generally respond to these requests within 60 days (86). A 2014 study examined 818 amendment requests by 181 patients over a 7-year period, finding that 70 (9%) requests were related to psychiatric conditions, 70 (9%) were related to in-clinic behavior, and 52 (6%) were related to drug or alcohol use (87).
Last, alongside the integration of digital technologies into psychiatric settings, clinicians need training and support regarding how these technologies function and their implications for patient care. Patients might experience elements of coercion when digital technology malfunctions, such as poor-quality videoconferencing during civil commitment hearings or false-positive results from a risk assessment algorithm; however, the ways in which clinicians interpret, react to, and use digital technologies in psychiatric settings can influence coercion of patients, particularly when clinicians lack proper training or understanding of these tools. The burden of understanding the integration of digital technologies into practice should not fall entirely on the shoulders of individual clinicians. By developing guidelines and clinical decision support tools, health care organizations can help clinicians learn about and navigate these types of new digital technologies in psychiatric practice. Alternatively, for highly specialized or complex technologies, health care organizations might embed clinical technology specialists in psychiatric settings who can train frontline clinicians, offer technical support, liaise with patients and families, and monitor technology use to minimize unnecessary coercion (88).

Conclusions

Digital technologies are reshaping psychiatry, and these technologies will in many ways improve the quality, accessibility, and personalization of psychiatric care. Technophobia, or the fear of new technology, would be a misguided reaction to these changes (89), and a balanced approach requires careful consideration of the many potential benefits alongside the possible risks of these technologies in psychiatry. Psychiatric care has long entailed coercive elements even in the absence of digital technologies; still, the integration of digital technologies into psychiatric settings where coercion is frequent or even legally sanctioned warrants further scrutiny. Coercion is just one of many possible outcomes of digital technology use and misuse in psychiatry, alongside loss of privacy, distress for patients and families, transmission of stigmatizing information, and exacerbation of racial and socioeconomic disparities. At the same time, these technologies bring new opportunities for reconsidering and studying coercive practices to support the well-being of and respect for patients in psychiatric settings.

References

1.
Saks ER: The use of mechanical restraints in psychiatric hospitals. Yale Law J 1986; 95:1836–1856
2.
Kjellin L, Andersson K, Candefjord IL, et al: Ethical benefits and costs of coercion in short-term inpatient psychiatric care. Psychiatr Serv 1997; 48:1567–1570
3.
Husum TL, Bjørngaard JH, Finset A, et al: A cross-sectional prospective study of seclusion, restraint and involuntary medication in acute psychiatric wards: patient, staff and ward characteristics. BMC Health Serv Res 2010; 10:89
4.
Newton-Howes G: Coercion in psychiatric care: where are we now, what do we know, where do we go? Psychiatrist 2010; 34:217–220
5.
Newton-Howes G, Mullen R: Coercion in psychiatric care: systematic review of correlates and themes. Psychiatr Serv 2011; 62:465–470
6.
Szmukler G: Compulsion and “coercion” in mental health care. World Psychiatry 2015; 14:259–261
7.
Steinert T: Ethics of coercive treatment and misuse of psychiatry. Psychiatr Serv 2017; 68:291–294
8.
Turnpenny Á, Petri G, Finn A, et al: Mapping and Understanding Exclusion: Institutional, Coercive, and Community-Based Services and Practices Across Europe. Brussels, Mental Health Europe, 2017. https://mhe-sme.org/wp-content/uploads/2018/01/Mapping-and-Understanding-Exclusion-in-Europe.pdf
9.
Sashidharan SP, Mezzina R, Puras D: Reducing coercion in mental healthcare. Epidemiol Psychiatr Sci 2019; 28:605–612
10.
Lützén K: Subtle coercion in psychiatric practice. J Psychiatr Ment Health Nurs 1998; 5:101–107
11.
Yeeles K: Informal coercion: current evidence; in Coercion in Community Mental Health Care: International Perspectives. Edited by Molodynski A, Rugkasa J, Burns T. New York, Oxford University Press, 2016
12.
Hotzy F, Jaeger M: Clinical relevance of informal coercion in psychiatric treatment—a systematic review. Front Psychiatry 2016; 7:197
13.
García-Cabeza I, Valenti E, Calcedo A, et al: Perception and use of informal coercion in outpatient treatment: a focus group study with mental health professionals of Latin culture. Salud Mental 2017; 40:63–69
14.
Pelto-Piri V, Kjellin L, Hylén U, et al: Different forms of informal coercion in psychiatry: a qualitative study. BMC Res Notes 2019; 12:787
15.
Andersson U, Fathollahi J, Gustin LW: Nurses’ experiences of informal coercion on adult psychiatric wards. Nurs Ethics 2020; 27:741–753
16.
Rogers A: Coercion and “voluntary” admission: an examination of psychiatric patient views. Behav Sci Law 1993; 11:259–267
17.
Hoge SK, Lidz CW, Eisenberg M, et al: Perceptions of coercion in the admission of voluntary and involuntary psychiatric patients. Int J Law Psychiatry 1997; 20:167–181
18.
Sheehan KA, Burns T: Perceived coercion and the therapeutic relationship: a neglected association? Psychiatr Serv 2011; 62:471–476
19.
O’Donoghue B, Roche E, Shannon S, et al: Perceived coercion in voluntary hospital admission. Psychiatry Res 2014; 215:120–126
20.
Hirschtritt ME, Insel TR: Digital technologies in psychiatry: present and future. Focus Am Psychiatr Publ 2018; 16:251–258
21.
Hategan A, Giroux C, Bourgeois JA: Digital technology adoption in psychiatric care: an overview of the contemporary shift from technology to opportunity. J Technol Behav Sci 2019; 4:171–177
22.
Freeman RE, Boggs KM, Zachrison KS, et al: National study of telepsychiatry use in US emergency departments. Psychiatr Serv 2020; 71:540–546
23.
Spivak S, Spivak A, Cullen B, et al: Telepsychiatry use in US mental health facilities, 2010–2017. Psychiatr Serv 2020; 71:121–127
24.
Yellowlees P, Nakagawa K, Pakyurek M, et al: Rapid conversion of an outpatient psychiatric clinic to a 100% virtual telepsychiatry clinic in response to COVID-19. Psychiatr Serv 2020; 71:749–752
25.
Kalin ML, Garlow SJ, Thertus K, et al: Rapid implementation of telehealth in hospital psychiatry in response to COVID-19. Am J Psychiatry 2020; 177:636–637
26.
Morris NP, Hirschtritt ME: Telepsychiatry, hospitals, and the COVID-19 pandemic. Psychiatr Serv 2020; 71:1309–1312
27.
Torous J, Roberts LW: Needed innovation in digital health and smartphone applications for mental health: transparency and trust. JAMA Psychiatry 2017; 74:437–438
28.
Chancellor S, De Choudhury M: Methods in predictive techniques for mental health status on social media: a critical review. NPJ Digit Med 2020; 3:43
29.
Harris BA, Woodlock D: Digital coercive control: insights from two landmark domestic violence studies. Br J Criminol 2019; 59:530–550
30.
Woodlock D, McKenzie M, Western D, et al: Technology as a weapon in domestic violence: responding to digital coercive control. Aust Soc Work 2020; 73:368–380
31.
Moreno MA, Vaillancourt T: The role of health care providers in cyberbullying. Can J Psychiatry 2017; 62:364–367
32.
Aboujaoude E: Telemental health: why the revolution has not arrived. World Psychiatry 2018; 17:277–278
33.
Anderson HD, Pace WD, Brandt E, et al: Monitoring suicidal patients in primary care using electronic health records. J Am Board Fam Med 2015; 28:65–71
34.
Gamarra JM, Luciano MT, Gradus JL, et al: Assessing variability and implementation fidelity of suicide prevention safety planning in a regional VA healthcare system. Crisis 2015; 36:433–439
35.
Berg JM, Malte CA, Reger MA, et al: Medical records flag for suicide risk: predictors and subsequent use of care among veterans with substance use disorders. Psychiatr Serv 2018; 69:993–1000
36.
Goddu AP, O’Conor KJ, Lanzkron S, et al: Do words matter? Stigmatizing language and the transmission of bias in the medical record. J Gen Intern Med 2018; 33:685–691
37.
Martin K, Ricciardelli R, Dror I: How forensic mental health nurses’ perspectives of their patients can bias healthcare: a qualitative review of nursing documentation. J Clin Nurs 2020; 29:2482–2494
38.
Armstrong SC, Cozza KL, Watanabe KS: The misdiagnosis of delirium. Psychosomatics 1997; 38:433–439
39.
Swigart SE, Kishi Y, Thurber S, et al: Misdiagnosed delirium in patient referrals to a university-based hospital psychiatry department. Psychosomatics 2008; 49:104–108
40.
Kishi Y, Kato M, Okuyama T, et al: Delirium: patient characteristics that predict a missed diagnosis at psychiatric consultation. Gen Hosp Psychiatry 2007; 29:442–445
41.
Reeves RR, Parker JD, Loveless P, et al: Unrecognized physical illness prompting psychiatric admission. Ann Clin Psychiatry 2010; 22:180–185
42.
Brown LA, Benhamou K, May AM, et al: Machine learning algorithms in suicide prevention: clinician interpretations as barriers to implementation. J Clin Psychiatry 2020; 81:19m12970
43.
Weinberger LE, Sreenivasan S, Smee DE, et al: Balancing safety against obstruction to health care access: an examination of behavioral flags in the VA health care system. J Threat Assess Manag 2018; 5:35–41
44.
Appenzeller YE, Appelbaum PS, Trachsel M: Ethical and practical issues in video surveillance of psychiatric units. Psychiatr Serv 2020; 71:480–486
45.
Stolovy T, Melamed Y, Afek A: Video surveillance in mental health facilities: is it ethical? Isr Med Assoc J 2015; 17:274–276
46.
Olsen DP: Ethical considerations of video monitoring psychiatric patients in seclusion and restraint. Arch Psychiatr Nurs 1998; 12:90–94
47.
Kay NR, Morris-Jones H: Pain clinic management of medico-legal litigants. Injury 1998; 29:305–308
48.
Flannery MT: First, do no harm: the use of covert video surveillance to detect Munchausen Syndrome by Proxy—an unethical means of “preventing” child abuse. Univ Mich J Law Reform 1998; 32:105–194
49.
Lahey T: A Watchful Eye in Hospitals. The New York Times, 2014. https://www.nytimes.com/2014/02/17/opinion/a-watchful-eye-in-hospitals.html. Accessed June 9, 2020
50.
Nicholas B: Televisitation: virtual transportation of family to the bedside in an acute care setting. Perm J 2013; 17:50–52
51.
Price J, Sapci H: Law and psychiatry: telecourt: the use of videoconferencing for involuntary commitment hearings in academic health centers. Psychiatr Serv 2007; 58:17–18
52.
Ithman M, Gopalakrishna G, Harry B, et al: Videoconferencing for civil commitment: preserving dignity. IEEE Technol Soc Mag 2014 (Winter):35–36
53.
Telepsychiatry via Videoconferencing: Resource Document. Washington, DC, American Psychiatric Association, 1998. http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.173.2939&rep=rep1&type=pdf
54.
Bojdani E, Rajagopalan A, Chen A, et al: COVID-19 pandemic: impact on psychiatric care in the United States. Psychiatry Res 2020; 289:113069
55.
Sales CP, McSweeney L, Saleem Y, et al: The use of telepsychiatry within forensic practice: a literature review on the use of videolink—a ten-year follow-up. J Forensic Psychiatry Psychol 2018; 29:387–402
56.
US v Baker, 45 F3d 837 (4th Cir 1995)
57.
American Bar Association: Civil commitment. Ment Phys Disabil Law Rep 2010; 34:829–835
58.
Pearson A, Ciccone JR: Judicial telepresence in involuntary commitment hearings. J Am Acad Psychiatry Law 2018; 46:250–252
59.
Doe v State, 217 So3d (Fla 2017)
60.
Morris NP: Internet access for patients on psychiatric units. J Am Acad Psychiatry Law 2018; 46:224–231
61.
Brunn M, Diefenbacher A, Courtet P, et al: The future is knocking: how artificial intelligence will fundamentally change psychiatry. Acad Psychiatry 2020; 44:461–466
62.
Belsher BE, Smolenski DJ, Pruitt LD, et al: Prediction models for suicide attempts and deaths: a systematic review and simulation. JAMA Psychiatry 2019; 76:642–651
63.
Marks M: Artificial intelligence-based suicide prediction. Yale J Health Policy Law Ethics 2019; 18:98–121
64.
Zuckerberg M: A Blueprint for Content Governance and Enforcement. Menlo Park, CA, Facebook, 2018. https://www.facebook.com/notes/mark-zuckerberg/a-blueprint-for-content-governance-and-enforcement/10156443129621634. Accessed June 9, 2020
65.
Lehavot K, Ben-Zeev D, Neville RE: Ethical considerations and social media: a case of suicidal postings on Facebook. J Dual Diagn 2012; 8:341–346
66.
Young SD, Garett R: Ethical issues in addressing social media posts about suicidal intentions during an online study among youth: case study. JMIR Ment Health 2018; 5:e33
67.
Cockerill RG: Ethics implications of the use of artificial intelligence in violence risk assessment. J Am Acad Psychiatry Law (Epub May 14, 2020). doi: 10.29158/JAAPL.003940-20
68.
Snowden LR, Hastings JF, Alvidrez J: Overrepresentation of Black Americans in psychiatric inpatient care. Psychiatr Serv 2009; 60:779–785
69.
Schwartz RC, Blankenship DM: Racial disparities in psychotic disorder diagnosis: a review of empirical literature. World J Psychiatry 2014; 4:133–140
70.
Obermeyer Z, Powers B, Vogeli C, et al: Dissecting racial bias in an algorithm used to manage the health of populations. Science 2019; 366:447–453
71.
Fazel S, Wolf A, Larsson H, et al: The prediction of suicide in severe mental illness: development and validation of a clinical prediction rule (OxMIS). Transl Psychiatry 2019; 9:98
72.
Fazel S, Wolf A, Larsson H, et al: Identification of low risk of violent crime in severe mental illness with a clinical prediction tool (Oxford Mental Illness and Violence tool [OxMIV]): a derivation and validation study. Lancet Psychiatry 2017; 4:461–468
73.
OxRisk. Oxford, England, University of Oxford, 2020. https://oxrisk.com. Accessed June 7, 2020
74.
Wang K, Varma DS, Prosperi M: A systematic review of the effectiveness of mobile apps for monitoring and management of mental health symptoms or disorders. J Psychiatr Res 2018; 107:73–78
75.
Rahman T: Should trackable pill technologies be used to facilitate adherence among patients without insight? AMA J Ethics 2019; 21:E332–E336
76.
Hubley S, Lynch SB, Schneck C, et al: Review of key telepsychiatry outcomes. World J Psychiatry 2016; 6:269–282
77.
Gabriel A: Perceptions and attitudes toward involuntary hospital admissions of psychiatric patient. J J Psych Behav Sci 2016; 2:013
78.
Steinert T, Lepping P, Baranyai R, et al: Compulsory admission and treatment in schizophrenia: a study of ethical attitudes in four European countries. Soc Psychiatry Psychiatr Epidemiol 2005; 40:635–641
79.
Joa I, Hustoft K, Anda LG, et al: Public attitudes towards involuntary admission and treatment by mental health services in Norway. Int J Law Psychiatry 2017; 55:1–7
80.
Pescosolido BA, Manago B, Monahan J: Evolving public views on the likelihood of violence from people with mental illness: stigma and its consequences. Health Aff 2019; 38:1735–1743
81.
Lee G, Cohen D: How many people are subjected to involuntary psychiatric detention in the US? First verifiable population estimates of civil commitment. Paper presented at the 23rd annual conference of the Society for Social Work and Research, San Francisco, 2019
82.
Morris NP: Detention without data: public tracking of civil commitment. Psychiatr Serv 2020; 71:741–744
83.
Morris NP: Reasonable or random: 72-hour limits to psychiatric holds. Psychiatr Serv (Epub Aug 4, 2020). doi: 10.1176/appi.ps.202000284
84.
Torous J, Jän Myrick K, Rauseo-Ricupero N, et al: Digital mental health and COVID-19: using technology today to accelerate the curve on access and quality tomorrow. JMIR Ment Health 2020; 7:e18848
85.
Canady CT: No. AOSC20-23: Amendment 2: Comprehensive COVID-19 Emergency Measures for the Florida State Courts. Tallahassee, Supreme Court of Florida, 2020. https://www.floridasupremecourt.org/content/download/633282/7195631/AOSC20-23.pdf
86.
45 CFR § 164.526
87.
Hanauer DA, Preib R, Zheng K, et al: Patient-initiated electronic health record amendment requests. J Am Med Inform Assoc 2014; 21:992–1000
88.
Ben-Zeev D, Drake R, Marsch L: Clinical technology specialists. BMJ 2015; 350:h945
89.
Ben-Zeev D: How I stopped fearing technology-based interventions. Psychiatr Serv 2014; 65:1183

Information & Authors

Published In

Psychiatric Services
Pages: 302–310
PubMed: 33430653

History

Received: 9 June 2020
Revision received: 14 July 2020
Accepted: 23 July 2020
Published online: 12 January 2021
Published in print: March 01, 2021

Keywords

  1. Involuntary commitment
  2. Risk assessment
  3. Coercion
  4. Digital technologies
  5. Videoconferencing
  6. Psychiatry

Authors

Nathaniel P. Morris, M.D. [email protected]
Department of Psychiatry and Behavioral Sciences, University of California, San Francisco, San Francisco.

Notes

Send correspondence to Dr. Morris ([email protected]).

Competing Interests

The author reports no financial relationships with commercial interests.
