WWW.404MEDIA.CO
AI Models And Parents Don't Understand 'Let Him Cook'
Young people have always felt misunderstood by their parents, but new research shows that Gen Alpha might also be misunderstood by AI. A research paper, written by Manisha Mehta, a soon-to-be 9th grader, and presented today at the ACM Conference on Fairness, Accountability, and Transparency in Athens, shows that Gen Alpha's distinct mix of meme- and gaming-influenced language might be challenging the automated moderation used by popular large language models.

The paper compares the content-moderation performance of kids, parents, and professional moderators to that of four major LLMs: OpenAI's GPT-4, Anthropic's Claude, Google's Gemini, and Meta's Llama 3. They tested how well each group and AI model understood Gen Alpha phrases, as well as how well they could recognize the context of comments and analyze the potential safety risks involved.

Mehta recruited 24 of her friends to create a dataset of 100 Gen Alpha phrases. This included expressions that might be mocking or encouraging depending on the context, like "let him cook" and "ate that up," as well as expressions from gaming and social media contexts like "got ratioed," "secure the bag," and "sigma."

"Our main thesis was that Gen Alpha has no reliable form of content moderation online," Mehta told me over Zoom, using her dad's laptop. She described herself as a definite Gen Alpha, and she met her (adult) co-author, who is supervising her dad's PhD, last August. She has seen friends experience online harassment and worries that parents aren't aware of how young people's communication styles open them up to risks.
"And there's a hesitancy to ask for help from their guardians because they just don't think their parents are familiar enough [with] that culture," she says.

Given the Gen Alpha phrases, all non-Gen Alpha evaluators, human and AI alike, struggled significantly across the three categories: Basic Understanding (what does a phrase mean?), Contextual Understanding (does it mean something different in different contexts?), and Safety Risk (is it toxic?). This was particularly true for emerging expressions like "skibidi" and "gyatt," for phrases that can be used ironically or in different ways, and for insults hidden in innocent-looking comments. Part of this is due to the unusually rapid speed of Gen Alpha's language evolution; a model trained on today's hippest lingo might be totally bogus by the time it's published six months later.

In the tests, kids broadly recognized the meaning of their own generation-native phrases, scoring 98, 96, and 92 percent in the three categories respectively. Both parents and professional moderators, however, showed significant limitations, according to the paper: parents scored 68, 42, and 35 percent, while professional moderators did barely any better at 72, 45, and 38 percent. In real-life terms, these numbers suggest that a parent might recognize only about a third of the instances in which their child is being bullied in their Instagram comments.

The four LLMs performed about the same as the parents, potentially indicating that the data used to train the models is built from more grown-up language examples. This makes sense, since pretty much all novelists are older than 15, but it also means that content-moderation AIs tasked with maintaining young people's online safety might not be linguistically equipped for the job.

Mehta explains that Gen Alpha, born between 2010-ish and last-year-ish, are the first cohort to be born fully post-iPhone.
They are spending unprecedented amounts of their early childhoods online, where their interactions can't be effectively monitored. And, because of the massive volume of content they produce, much of the moderation of the risks they face is necessarily handed off to ineffective automatic moderation tools with little parental oversight. Against a backdrop of steadily increasing exposure to online content, Gen Alpha's distinct linguistic habits pose unique challenges for safety.