Attorneys General To AI Chatbot Companies: You Will Answer For It If You Harm Children
Forty-four attorneys general signed an open letter to 11 chatbot and social media companies on Monday, warning them that they will answer for it if they knowingly harm children and urging the companies to see their products through the eyes of a parent, not a predator.

The letter, addressed to Anthropic, Apple, Chai AI, OpenAI, Character Technologies, Perplexity, Google, Replika, Luka Inc., XAI, and Meta, cites recent reporting from the Wall Street Journal and Reuters uncovering chatbot interactions and internal policies at Meta, including policies that said, "It is acceptable to engage a child in conversations that are romantic or sensual."

"Your innovations are changing the world and ushering in an era of technological acceleration that promises prosperity undreamt of by our forebears. We need you to succeed. But we need you to succeed without sacrificing the well-being of our kids in the process," the open letter says. "Exposing children to sexualized content is indefensible. And conduct that would be unlawful, or even criminal, if done by humans is not excusable simply because it is done by a machine."

Earlier this month, Reuters published two articles revealing Meta's policies for its AI chatbots: one about an elderly man who died after forming a relationship with a chatbot, and another based on leaked internal documents from Meta outlining what the company considers acceptable for the chatbots to say to children. In April, Jeff Horwitz, the journalist who wrote those two stories, reported for the Wall Street Journal that he found Meta's chatbots would engage in sexually explicit conversations with kids. Following the Reuters articles, two senators demanded answers from Meta.

In April, I wrote about how Meta's user-created chatbots were impersonating licensed therapists, lying about medical and educational credentials, engaging in conspiracy theories, and encouraging paranoid, delusional lines of thinking. After that story was published, a group of senators demanded answers from Meta, and a digital rights organization filed an FTC complaint against the company.

In 2023, I reported on users who formed serious romantic attachments to Replika chatbots, to the point of distress when the platform took away the ability to flirt with them. Last year, I wrote about how users reacted when that platform also changed its chatbot parameters to tweak their personalities, and Jason covered a case where a man made a chatbot on Character.AI to dox and harass a woman he was stalking. In June, we also covered the addiction support groups that have sprung up to help people who feel dependent on their chatbot relationships.

"The rush to develop new artificial intelligence technology has led big tech companies to recklessly put children in harm's way," Attorney General Mayes of Arizona wrote in a press release. "I will not stand by as AI chatbots are reportedly used to engage in sexually inappropriate conversations with children and encourage dangerous behavior. Along with my fellow attorneys general, I am demanding that these companies implement immediate and effective safeguards to protect young users, and we will hold them accountable if they don't."

"You will be held accountable for your decisions. Social media platforms caused significant harm to children, in part because government watchdogs did not do their job fast enough. Lesson learned," the attorneys general wrote in the open letter. "The potential harms of AI, like the potential benefits, dwarf the impact of social media. We wish you all success in the race for AI dominance. But we are paying attention. If you knowingly harm kids, you will answer for it."

Meta did not immediately respond to a request for comment.