Federal and state governments have been grappling with how to regulate augmented or artificial intelligence (AI) across industries. With the significant increase in technological sophistication signified by OpenAI’s release of GPT-4, the model behind ChatGPT, last year, new applications for AI are being considered, tested, and deployed in health care settings. Policymakers are racing to catch up, either by applying existing regulatory frameworks to AI or by making new rules.
AI-enabled technologies have the potential to improve health care equity, access, and quality, but they can also pose critical risks to
patient safety and privacy. APA expert members have been convening discussions to develop policy perspectives and establish guardrails for the practice of psychiatry. These perspectives emphasize the central role of the clinician when AI systems are used and the importance of not ceding clinical decision-making to AI. The discussions also focus on patient protections, including informed consent, data protection, and research on the impact of AI on bias, equity, and effectiveness. APA members can watch a
recording of a recent APA member webinar about AI in psychiatry.
Oversight of AI-enabled technologies is complex for a few reasons. First, AI models learn and change the longer they are deployed, which led the Food and Drug Administration (FDA) to establish a pathway allowing AI developers to describe their technology’s learning algorithm up front rather than
re-applying for FDA approval each time their model changes. Second, some AI technologies fit into existing regulatory pathways (for example, clinical decision support tools) and some do not (for example, “chatbot” psychotherapy). Broadly, our current health care regulations were not built for AI, and this reality affects different technologies in different ways.
Recently, the Health Subcommittee of the House of Representatives Energy and Commerce Committee held a hearing on AI in health care. The committee members’ remarks and questions during the hearing were instructive regarding Congress’s perspective on AI. Key lines of inquiry included striking a balance between innovation and safeguards, the role of federal agencies such as the FDA and the Centers for Medicare and Medicaid Services in overseeing and supporting AI-enabled technologies, patient privacy, and automated claims denials. While witnesses noted that clinicians should be the “ultimate deciders” in treatment, with AI providing decision support, they were largely optimistic about the potential for AI to reduce administrative burden, identify new treatment options, and enhance the precision and effectiveness of existing treatments. Witnesses further noted that AI deployed in non-health care settings can still have profound health impacts (for example, in conjunction with consumer “wellness” apps).
Policymakers need clinical insights to make difficult decisions about regulating potentially innovative—and potentially harmful—technologies in clinical settings. APA members can get involved in state and federal efforts to inform the future of technology-assisted care by contacting their district branch leadership or APA’s policy team at
[email protected]. ■