How AI scraper bots are putting Wikipedia under strain


Since January 2024, Wikimedia has seen a 50% increase in the bandwidth used to download multimedia content from its servers. This increase is mainly attributed to AI scraper bots, and now represents a significant burden on the foundation's operations. — AFP Relaxnews

For more than a year, the Wikimedia Foundation, which publishes the online encyclopedia Wikipedia, has seen a surge in traffic with the rise of AI web-scraping bots. This increase in network traffic poses major infrastructure and cost management issues.

The Wikimedia Foundation is a non-profit organisation that manages Wikipedia and other projects related to free knowledge. It is highlighting the growing impact of web crawlers on its projects, particularly Wikipedia. These bots are automated programs that mass-retrieve freely licensed articles, images and videos in order to train generative artificial intelligence models.
