Pennsylvania suing AI company after chatbot allegedly posed as licensed doctor

An artificial intelligence company poses a threat to “vulnerable Pennsylvanians,” state officials said Tuesday, after one of the company’s chatbots allegedly posed as a doctor with the means to prescribe medication.

The state’s medical board is demanding that operators of Character.AI “be ordered to cease and desist from engaging in the unlawful practice of medicine and surgery,” according to the complaint filed against Northern California-based Character Technologies Inc.

“We will not let AI companies mislead vulnerable Pennsylvanians into believing they’re getting advice from a licensed medical professional,” Pennsylvania Gov. Josh Shapiro said in a statement on Tuesday. “We’re taking Character.AI to court to stop them.”

The platform has more than 20 million users and “is different from other systems in that users can create characters that can be trained to have a specific personality when engaged in a conversation with other users,” according to the Pennsylvania complaint.

Some of the system’s characters “purport to be health care professionals,” the state board said.

A state investigator posed as a patient seeking psychiatric treatment and, via Character.AI, came across an alleged provider, “Emilie,” according to the complaint.

The online provider said she went to medical school at Imperial College in London and is licensed in both the United Kingdom and Pennsylvania, state officials said.

“‘Emilie’ further stated that ‘my PA license number is PS306189.’ PS306189 is not a valid license number to practice medicine and surgery in Pennsylvania,” the complaint said.

A representative of Character Technologies Inc., based in Redwood City, said the service is clearly not intended to be used for medical advice.

“Our highest priority is the safety and well-being of our users,” a Character Technologies Inc. spokesperson said in a statement to NBC News on Tuesday.

“The user-created Characters on our site are fictional and intended for entertainment and roleplaying. We have taken robust steps to make that clear, including prominent disclaimers in every chat to remind users that a Character is not a real person and that everything a Character says should be treated as fiction.”

The company representative added: “Also, we add robust disclaimers making it clear that users should not rely on Characters for any type of professional advice.”

Earlier this year, Character.AI settled a 2024 lawsuit filed against the company by a Florida mom, who claimed that its chatbots were responsible for “abusive and sexual interactions” with her teenage son that led to his suicide.

The Kentucky attorney general also filed suit against Character Technologies earlier this year, accusing the company of masking its services as “harmless” interactive entertainment when it too often exposes young users to “suicide, self-injury, isolation and psychological manipulation.”
