After FDA’s pivot on clinical AI, we need AI safety research more than ever

On Jan. 6, the Food and Drug Administration released updated guidance for clinical decision support (CDS) tools, relaxing key medical device requirements. With this change, many generative artificial intelligence tools that provide diagnostic suggestions or perform supportive tasks like medical history-taking — tools that probably would have required FDA sign-off under the prior policy — could reach clinics without FDA vetting.

Soon after the FDA’s guidance release, two other major AI announcements also hit the news: Utah began a first-in-the-nation pilot with Doctronic for autonomous AI prescription refills, and OpenAI debuted ChatGPT Health, which will tailor responses to users’ uploaded medical records and wearables data.

Together, these three developments have dramatically shifted the health care AI landscape. Given the abysmal state of patient access and the notorious difficulty of navigating U.S. health care, there is a lot to be cautiously enthusiastic about. At little or no expense, patients could soon receive better health advice, more involved medical support, and easier access to medication refills.

