Jimmy Wales Says Wikipedia Could Use AI. Editors Call It the 'Antithesis of Wikipedia'
Jimmy Wales, the founder of Wikipedia, thinks the internet's default encyclopedia and one of the world's biggest repositories of information could benefit from some applications of AI. The volunteer editors who keep Wikipedia functioning strongly disagree with him.

The ongoing debate about incorporating AI into Wikipedia in various forms bubbled up again in July, when Wales posted an idea to his Wikipedia user talk page about how the platform could use a large language model as part of its article creation process.

Any Wikipedia user can create a draft of an article. That draft is then reviewed by experienced Wikipedia editors, who can accept it and move it to Wikipedia's mainspace, which makes up the bulk of Wikipedia and the articles you'll find when you're searching for information. Reviewers can also decline drafts for a variety of reasons, but because hundreds of draft articles are submitted to Wikipedia every day, volunteer reviewers often use a tool called the Articles for Creation helper script (AFCH), which generates templated responses for the most common reasons articles are declined.

This is where Wales thinks AI could help. He wrote that he was asked to look at a specific draft article and give notes that might help the article get published.

"I was eager to do so because I'm always interested in taking a fresh look at our policies and procedures to look for ways they might be improved," he wrote. "The person asking me felt frustrated at the minimal level of guidance being given (this is my interpretation, not necessarily theirs) and having reviewed it, I can see why."

Wales explains that the article was originally rejected several years ago, then someone tried to improve it, resubmitted it, and got the exact same template rejection again.

"It's a form letter response that might as well be 'Computer says no' (that article's worth a read if you don't know the expression)," Wales said. "It wasn't a computer who says no, but a human using AFCH, a helper script [...] In order to try to help, I personally felt at a loss. I am not sure what the rejection referred to specifically. So I fed the page to ChatGPT to ask for advice. And I got what seems to me to be pretty good. And so I'm wondering if we might start to think about how a tool like AFCH might be improved so that instead of a generic template, a new editor gets actual advice. It would be better, obviously, if we had lovingly crafted human responses to every situation like this, but we all know that the volunteers who are dealing with a high volume of various situations can't reasonably have time to do it. The templates are helpful - an AI-written note could be even more helpful."
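To make that idea concrete, here is a minimal, hypothetical sketch of the kind of step Wales describes: pulling a declined draft and asking a chat model for draft-specific feedback instead of emitting a fixed template. It is not the real AFCH helper script or any actual Wikimedia proposal; the model name, prompt wording, and overall design are assumptions for illustration only.

```python
"""Hypothetical sketch: turn a templated decline reason into draft-specific
advice with an LLM. Illustrative only; not the real AFCH tool."""
import requests
from openai import OpenAI

MEDIAWIKI_API = "https://en.wikipedia.org/w/api.php"


def fetch_draft_wikitext(title: str) -> str:
    """Fetch the current wikitext of a draft via the MediaWiki API."""
    params = {
        "action": "query",
        "prop": "revisions",
        "rvprop": "content",
        "rvslots": "main",
        "titles": title,
        "format": "json",
        "formatversion": 2,
    }
    page = requests.get(MEDIAWIKI_API, params=params, timeout=30).json()["query"]["pages"][0]
    return page["revisions"][0]["slots"]["main"]["content"]


def draft_feedback(title: str, decline_reason: str) -> str:
    """Ask a chat model to expand a templated decline reason into specific advice."""
    wikitext = fetch_draft_wikitext(title)
    prompt = (
        f"A Wikipedia Articles for Creation draft was declined for: {decline_reason}.\n"
        "Explain what the submitter should fix, referring only to material that is\n"
        "actually present in the draft below. Do not suggest sources that are not\n"
        "already cited in the draft.\n\n"
        f"{wikitext}"
    )
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model choice, purely illustrative
        messages=[{"role": "user", "content": prompt}],
    )
    # A human reviewer would still need to check this text against policies like
    # WP:N and WP:V before it is ever shown to a newcomer.
    return response.choices[0].message.content


if __name__ == "__main__":
    print(draft_feedback("Draft:Example article", "lack of reliable, independent sources"))
```

As the editors' objections below make clear, the hard part is not wiring something like this up but guaranteeing that the generated advice actually matches policy.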
Wales then shared the output he got from ChatGPT. It included more details than a template rejection, but editors replying to Wales noted that it was also filled with errors.

For example, the response suggested the article cite a source that isn't included in the draft, and rely on Harvard Business School press releases for other citations, despite Wikipedia policies explicitly defining press releases as non-independent sources that cannot help prove notability, a basic requirement for Wikipedia articles.

Editors also found that the ChatGPT-generated response Wales shared confused basic Wikipedia policies, like notability (WP:N), verifiability (WP:V), and properly representing minority and more widely held views on a subject in an article (WP:WEIGHT).

"Something to take into consideration is how newcomers will interpret those answers. If they believe the LLM advice accurately reflects our policies, and it is wrong/inaccurate even 5% of the time, they will learn a skewed version of our policies and might reproduce the unhelpful advice on other pages," one editor said.

Wales and editors proceeded to get into it in the replies to his post. The basic disagreement is that Wales thinks LLMs can be useful to Wikipedia even if they are sometimes wrong, while editors think an automated system that is sometimes wrong is fundamentally at odds with the human labor and cooperation that makes Wikipedia so valuable to begin with.

As one editor writes:

"The reputational risk to adding in AI-generated slop feedback can not be overstated. The idea that we will feed drafts into a large language model - with all the editorial and climate implications and without oversight or accountability - is insane. What are we gaining in return? Verbose, emoji-laden boilerplate slop, often wrong in substance or tone, and certainly lacking in the care and contextual sensitivity that actual human editors bring to review work. Worse, it creates a dangerous illusion of helpfulness, where the appearance of tailored advice masks the lack of genuine editorial engagement. We would be feeding and legitimising a system that replaces mentoring, discourages human learning, and cheapens the standards we claim to uphold. That's the antithesis of Wikipedia, no?"

"It is definitely not the antithesis of Wikipedia to use technology in appropriate ways to make the encyclopedia better," Wales responded. "We have a clearly identifiable problem, and you've elaborated on it well: the volume of submissions [necessitates] templated responses, and we shouldn't ask reviewers to do more. But we should look for ways to support and help them."

This isn't the first time the Wikimedia Foundation, the non-profit that manages Wikipedia, and Wikipedia editors have clashed over AI. In June, the Wikimedia Foundation paused an experiment to use AI-generated summaries at the top of Wikipedia articles after a backlash from editors.

A group of Wikipedia editors have also started WikiProject AI Cleanup, an organized effort to protect the platform from what they say is a growing number of AI-generated articles and images submitted to Wikipedia that are misleading or include errors. In early August, Wikipedia editors also adopted a new policy that makes it easier for them to delete articles that are clearly AI-generated.

"Wikipedia's strength has been and always will be its human-centered, volunteer-driven model, one where knowledge is created and reviewed by people, volunteers from different countries, perspectives, and backgrounds. Research shows that this process of human debate, discussion, and consensus makes for higher-quality articles on Wikipedia," a Wikimedia Foundation spokesperson told me in an email.
"Nevertheless, machine-generated content is exploding across the internet, and it will inevitably make its way to Wikipedia. Wikipedia volunteers have showcased admirable resilience in maintaining the reliability of information on Wikipedia based on existing community-led policies and processes, sometimes leveraging AI/machine learning tools in this work."

The spokesperson said that Wikipedia already uses AI productively, like with bots that revert vandalism and machine translation tools, and that these tools always have a human in the loop to validate automated work.

"As the founder of Wikipedia, Jimmy regularly engages with volunteers on his talk page to share ideas, test assumptions, and respond to questions," the spokesperson said. "His recent comments about how AI could improve the draft review process are an example of this and a prompt for further community conversation."