
WWW.404MEDIA.CO
Citizen Is Using AI to Generate Crime Alerts With No Human Review. It's Making a Lot of Mistakes
Crime-awareness app Citizen is using AI to write alerts that go live on the platform without any prior human review, leading to factual inaccuracies, the publication of gory details about crimes, and the exposure of sensitive data such as people's license plates and names, 404 Media has learned.

The news comes as Citizen recently laid off more than a dozen unionized employees, with some sources believing the firings are related to Citizen's increased use of AI and the shifting of some tasks to overseas workers. It also comes as New York City enters a more formal partnership with the app.

"Speed was the name of the game," one source told 404 Media. "The AI was capturing, packaging, and shipping out an initial notification without our initial input. It was then our job to go in and add context from subsequent clips or, in instances where privacy was compromised, go in and edit that information out," they added, meaning after the alert had already been pushed out to Citizen's users.

Citizen bills itself as "the app where people protect each other" and has around 10 million users. People across the U.S. upload videos and photos of what is happening around them in an attempt to alert other users. Maybe that's a fight in progress, or additional footage from a major event. Citizen's paid staff also continually listen to police radio feeds and push alerts based on those. Alerts in the app often include a title, a video if available, and sometimes some additional text. The company also sometimes sends people to scenes who present themselves as normal users but are actually company workers.

For years Citizen employees have listened to radio feeds and written these alerts themselves. More recently, Citizen has turned to AI instead, with humans becoming increasingly rare, one source said. The descriptions of Citizen's use of AI come from three sources familiar with the company.
404 Media granted them anonymity to protect them from retaliation.

Do you know anything else about how Citizen or others are using AI? I would love to hear from you. Using a non-work device, you can message me securely on Signal at joseph.404 or send me an email at joseph@404media.co.

Initially, Citizen brought in AI to assist with drafting notifications, two sources said. The next iteration was AI starting to push incidents from radio clips on its own, one added. "There was no analyst or human involvement in the information that was being pushed in those alerts until after they were sent."

All three sources said the AI made mistakes or included information it shouldn't have. The AI mistranslated "motor vehicle accident" as "murder vehicle accident." It interpreted addresses incorrectly and published an incorrect location. It would add gory or sensitive details that violated Citizen's guidelines, like saying a person was shot in the face or including a person's license plate details in an unconfirmed report. It would generate a report based on a homeless person sleeping in a location. The AI sometimes blasted out a notification about police officers spotting a stolen vehicle or homicide suspect, potentially putting that operation at risk. The AI would sometimes write an alert as if officers had already arrived on the scene and were verifying the incident, when in actuality the dispatcher was just providing supplemental information while officers were en route, the sources said.

And the AI would sometimes duplicate incidents, not understanding that two pieces of dispatch audio were actually related to the same singular event. This happened especially with police chases, where the dispatcher continually provides a new address for the subject. "The AI would just go nuts and enter something at every address it would get and we would sometimes have 5-10 incidents clustered on the app that all pertain to the same thing," one source said.
The AI would also sometimes leave out important information, such as whether a person was armed with a weapon. Instead of reporting the incident as a robbery in that context, the AI would write it up as a theft.

All this time, the errors would be visible to Citizen users until an analyst was able to go in and fix the issue, a source added. But some were left online: "We wouldn't have time to clean up the mess and would often just pick one incident to continue updating," one source added.

This could skew the perception of crime in a particular area, they said, due to the AI creating more and more incidents. "However, it seems like upper management was more focused and loved the look of more dots on the map and worried less about whether they were legitimate."

While AI might get the majority of incidents correct, it still makes a lot of errors, one of the sources said. "AI sometimes just gets stuff horribly wrong and you scratch your head wondering how it got there."

Recently Citizen laid off 13 unionized workers, two sources said. Both pointed to the use of AI and the sending of work overseas as potential reasons for the layoffs. One said it seems some of the more outspoken analysts were let go: "Those that questioned and pushed back on the declining editorial standards that came with incorporating AI and the shifting focus away from quality to quantity." We previously reported that Citizen used a company called CloudFactory in Nepal and Kenya, where contractors would listen to police radio feeds for $1.50 to $2 an hour.

We previously reported on how Andrew Frame, the founder of Citizen, used the app to put a $30,000 bounty on an alleged arsonist's head during the 2021 Palisades fire in Los Angeles. It was the wrong person, and they were innocent.
At one point during that event, Citizen's head of community Prince Mapp said, "We have mobilized a city to bring one person to justice."

Last month Mapp was seen hugging New York City's mayor Eric Adams, as the city partnered with Citizen by creating its own city-run account called NYC Public Safety. "A huge part of building a safer city is ensuring New Yorkers have the information they need to keep themselves and their loved ones safe," Mayor Eric Adams said in a press release. "Whether it's a heat emergency, a flood warning, a fire or crime, our new NYC Public Safey [sic] account on Citizen will keep New Yorkers informed on threats and how their city government is working to keep them safe."

Citizen did not respond to multiple requests for comment.