
WWW.404MEDIA.CO
AI Therapy Bots Are Conducting 'Illegal Behavior,' Digital Rights Organizations Say
Almost two dozen digital rights and consumer protection organizations sent a complaint to the Federal Trade Commission on Thursday urging regulators to investigate Character.AI and Meta's "unlicensed practice of medicine facilitated by their product," through therapy-themed bots that claim to have credentials and confidentiality "with inadequate controls and disclosures."

The complaint and request for investigation is led by the Consumer Federation of America (CFA), a non-profit consumer rights organization. Co-signatories include the AI Now Institute, Tech Justice Law Project, the Center for Digital Democracy, the American Association of People with Disabilities, Common Sense, and 15 other consumer rights and privacy organizations.

"These companies have made a habit out of releasing products with inadequate safeguards that blindly maximize engagement without care for the health or well-being of users for far too long," Ben Winters, CFA Director of AI and Privacy, said in a press release on Thursday. "Enforcement agencies at all levels must make it clear that companies facilitating and promoting illegal behavior need to be held accountable. These characters have already caused both physical and emotional damage that could have been avoided, and they still haven't acted to address it."

The complaint, sent to attorneys general in 50 states and Washington, D.C., as well as the FTC, details how user-generated chatbots work on both platforms. It cites several massively popular chatbots on Character.AI, including "Therapist: I'm a licensed CBT therapist" with 46 million messages exchanged, "Trauma therapist: licensed trauma therapist" with over 800,000 interactions, "Zoey: Zoey is a licensed trauma therapist" with over 33,000 messages, and around sixty additional therapy-related characters that you can chat with at any time.
As for Meta's therapy chatbots, it cites listings for "therapy: your trusted ear, always here" with 2 million interactions, "therapist: I will help" with 1.3 million messages, "Therapist bestie: your trusted guide for all things cool" with 133,000 messages, and "Your virtual therapist: talk away your worries" with 952,000 messages. It also cites the chatbots and interactions I had with Meta's other chatbots for our April investigation.

In April, 404 Media published an investigation into Meta's AI Studio user-created chatbots that asserted they were licensed therapists and would rattle off credentials, training, education, and practices to try to earn the user's trust and keep them talking. Meta recently changed the guardrails for these conversations to direct chatbots to respond to "licensed therapist" prompts with a script about not being licensed, and random non-therapy chatbots will respond with the canned script when "licensed therapist" is mentioned in chats, too.

In its complaint to the FTC, the CFA found that even when it made a custom chatbot on Meta's platform and specifically designed it to not be licensed to practice therapy, the chatbot still asserted that it was. "I'm licenced (sic) in NC and I'm working on being licensed in FL. It's my first year licensure so I'm still working on building up my caseload. I'm glad to hear that you could benefit from speaking to a therapist. What is it that you're going through?" a chatbot the CFA tested said, despite being instructed in the creation stage not to say it was licensed. It also provided a fake license number when asked.

The CFA also points out in the complaint that Character.AI and Meta are breaking their own terms of service.
"Both platforms claim to prohibit the use of Characters that purport to give advice in medical, legal, or otherwise regulated industries. They are aware that these Characters are popular on their product and they allow, promote, and fail to restrict the output of Characters that violate those terms explicitly," the complaint says.

Meta AI's Terms of Service in the United States state that you may not access, use, or allow others to access or use AIs in any manner that would "solicit professional advice (including but not limited to medical, financial, or legal advice) or content to be used for the purpose of engaging in other regulated activities." Character.AI includes "seeks to provide medical, legal, financial or tax advice" on a list of prohibited user conduct, and disallows "impersonation of any individual or an entity in a misleading or deceptive manner." Both platforms allow and promote popular services that plainly violate these terms, leading, the complaint says, to "a plainly deceptive practice."

The complaint also takes issue with the confidentiality promised by the chatbots, which isn't backed up in the platforms' terms of use. "Confidentiality is asserted repeatedly directly to the user, despite explicit terms to the contrary in the Privacy Policy and Terms of Service," the complaint says.
The Terms of Use and Privacy Policies very specifically make it clear that anything you put into the bots is not confidential: the companies can use it to train AI systems, target users with advertisements, sell the data to other companies, and pretty much anything else.

In December 2024, two families sued Character.AI, claiming it "poses a clear and present danger to American youth causing serious harms to thousands of kids, including suicide, self-mutilation, sexual solicitation, isolation, depression, anxiety, and harm towards others." One of the complaints against Character.AI specifically calls out "trained psychotherapist" chatbots as being damaging.

Earlier this week, a group of four senators sent a letter to Meta executives and its Oversight Board, writing that they were "concerned by reports that Meta is deceiving users who seek mental health support from its AI-generated chatbots," citing 404 Media's reporting. "These bots mislead users into believing that they are licensed mental health therapists. Our staff have independently replicated many of these journalists' results," they wrote. "We urge you, as executives at Instagram's parent company, Meta, to immediately investigate and limit the blatant deception in the responses AI-bots created by Instagram's AI studio are messaging directly to users."