How AI scraper bots are putting Wikipedia under strain


Since January 2024, Wikimedia has seen a 50% increase in the bandwidth used to download multimedia content from its servers. The increase is mainly attributed to AI scraper bots and now represents a significant burden on the foundation's operations. — AFP Relaxnews

For more than a year, the Wikimedia Foundation, which publishes the online encyclopedia Wikipedia, has seen a surge in traffic driven by the rise of AI web-scraping bots. The added network load creates major infrastructure and cost-management problems for the foundation.

The Wikimedia Foundation, a non-profit organisation that manages Wikipedia and other free-knowledge projects, is highlighting the growing impact of web crawlers on those projects, particularly Wikipedia. These bots are automated programs that mass-retrieve freely licensed articles, images and videos in order to train generative artificial intelligence models.
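As a rough illustration of that retrieval mechanism, the sketch below pulls article text through Wikipedia's public MediaWiki API (the standard api.php "query" endpoint). It is a minimal toy under stated assumptions, not the code of any actual scraper bot: the bot name in the User-Agent header is hypothetical, and real scrapers issue requests like these at vastly higher volume, often without the polite delay.

```python
# Minimal, illustrative sketch of the kind of retrieval described above.
# It uses Wikipedia's public MediaWiki API; the User-Agent string is a
# hypothetical example, and real scraper bots operate at far larger scale.
import time

import requests

API_URL = "https://en.wikipedia.org/w/api.php"
HEADERS = {"User-Agent": "example-research-bot/0.1 (contact@example.org)"}

def fetch_article_text(title: str) -> str:
    """Fetch the plain-text extract of a single Wikipedia article."""
    params = {
        "action": "query",    # standard MediaWiki query module
        "format": "json",
        "prop": "extracts",   # TextExtracts extension, enabled on Wikipedia
        "explaintext": 1,     # plain text rather than HTML
        "titles": title,
    }
    response = requests.get(API_URL, params=params, headers=HEADERS, timeout=30)
    response.raise_for_status()
    pages = response.json()["query"]["pages"]
    # Results are keyed by internal page ID, so take the first page object.
    return next(iter(pages.values())).get("extract", "")

if __name__ == "__main__":
    for title in ["Web crawler", "Wikimedia Foundation"]:
        print(fetch_article_text(title)[:200])
        time.sleep(1)  # polite rate limiting that mass scrapers often skip
```

Looped over millions of titles, and multiplied across many competing AI crawlers, this request pattern is essentially the load the foundation describes.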
