Wikipedia Calls on AI Companies to Use Its Content Responsibly as Traffic Declines

(Image: Freepik)

Wikipedia is drawing a line in the sand for the AI industry. In a blog post published Monday, the Wikimedia Foundation outlined how AI developers should responsibly use the online encyclopedia’s content—highlighting growing concerns around attribution, sustainability, and a noticeable dip in human readership.

The Foundation urged AI companies to properly credit Wikipedia contributors and to access its content through Wikimedia Enterprise, a paid, opt-in service that allows large-scale use of Wikipedia data without straining its servers.

Crucially, the revenue from this product also helps fund Wikipedia’s nonprofit mission, which relies heavily on donations and volunteer editors.
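
For small-scale experimentation, the free public MediaWiki Action API already returns article text together with the page URL and license information needed for attribution; Wikimedia Enterprise is the route the Foundation recommends for bulk, production-grade access. The sketch below is illustrative only: it uses the standard public Action API rather than the Enterprise product, and the helper name, User-Agent string, and output format are made up for the example.

```python
import requests

API_URL = "https://en.wikipedia.org/w/api.php"

def fetch_with_attribution(title: str) -> dict:
    """Fetch a plain-text intro of a Wikipedia article and build a
    CC BY-SA attribution string that links back to the source page."""
    params = {
        "action": "query",
        "format": "json",
        "formatversion": 2,
        "prop": "extracts|info",
        "inprop": "url",       # include the canonical page URL
        "explaintext": 1,      # plain text instead of HTML
        "exintro": 1,          # intro section only, to keep the request light
        "titles": title,
    }
    # Identify your client honestly; Wikimedia asks automated clients
    # not to pose as ordinary browsers.
    headers = {"User-Agent": "ExampleResearchBot/0.1 (contact: you@example.org)"}
    resp = requests.get(API_URL, params=params, headers=headers, timeout=10)
    resp.raise_for_status()
    page = resp.json()["query"]["pages"][0]
    return {
        "text": page.get("extract", ""),
        # Attribution: credit the contributors and link back, per CC BY-SA 4.0.
        "attribution": f"Source: Wikipedia contributors, \"{page['title']}\", "
                       f"{page['fullurl']} (CC BY-SA 4.0)",
    }

if __name__ == "__main__":
    result = fetch_with_attribution("Wikimedia Foundation")
    print(result["attribution"])
```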

The announcement follows an internal review that revealed an unusual spike in traffic earlier this year—traffic that turned out to be AI bots disguising themselves as human users.

Once bot-detection systems were strengthened, Wikipedia discovered that human page views had actually fallen 8% year over year, raising concerns about long-term sustainability as more users rely on AI tools instead of visiting source sites directly.
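
Wikimedia has not published the heuristics it strengthened, but the general idea of separating declared crawlers from traffic that merely claims to be a browser can be shown with a toy classifier. Everything below, from the regex patterns to the request-rate threshold, is an assumption made for illustration and not Wikimedia's actual detection system.

```python
import re
from collections import Counter

# Purely illustrative patterns; real detection combines many signals
# (request rate, IP reputation, behavioral fingerprints), none of which
# Wikimedia has made public.
DECLARED_BOT = re.compile(r"bot|crawler|spider|scraper|curl|python-requests", re.I)

def classify(user_agent: str, requests_per_minute: int) -> str:
    """Label a request source as 'declared-bot', 'suspected-bot', or 'human'."""
    if DECLARED_BOT.search(user_agent or ""):
        return "declared-bot"
    # A browser-like user agent requesting pages far faster than a person
    # reads is the classic sign of a bot disguising itself as a human visitor.
    if requests_per_minute > 60:  # assumed threshold for this sketch
        return "suspected-bot"
    return "human"

if __name__ == "__main__":
    sample = [
        ("Mozilla/5.0 (Windows NT 10.0; Win64; x64) Chrome/126.0", 4),
        ("Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) Safari/605.1.15", 400),
        ("MyResearchBot/1.0 (+https://example.org/bot)", 30),
    ]
    print(Counter(classify(ua, rpm) for ua, rpm in sample))
```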

While the Foundation did not threaten legal action against AI companies scraping its content, it emphasised the ethical and practical need for attribution.

“For people to trust information shared on the internet, platforms should make it clear where the information is sourced from and elevate opportunities to visit and participate in those sources,” the post stated.

Fewer visits, it warned, could mean fewer volunteers, fewer improvements to articles, and fewer small donors.

Earlier this year, Wikipedia published its AI strategy for editors, outlining how it plans to use artificial intelligence to assist with translation, moderation, and other labor-intensive tasks—supporting human editors rather than replacing them.

It also released a dataset designed specifically for AI model training, aiming to give developers high-quality, machine-readable data directly and deter bots from aggressively scraping the site.
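
The article does not name the dataset, so the snippet below uses the wikimedia/wikipedia dump hosted on Hugging Face as a stand-in for "machine-readable Wikipedia data"; the dataset name and snapshot are assumptions about which release is meant, and the point is simply that consuming a prepared dataset replaces scraping the live site.

```python
# A minimal sketch of consuming Wikipedia as a prepared dataset rather than
# scraping live pages. "wikimedia/wikipedia" and the "20231101.en" snapshot
# refer to a publicly hosted dump on Hugging Face and are assumptions about
# which release the Foundation means; swap in the official release you use.
from datasets import load_dataset

wiki = load_dataset("wikimedia/wikipedia", "20231101.en",
                    split="train", streaming=True)

for i, article in enumerate(wiki):
    # Each record carries the article text plus the URL needed for attribution.
    print(article["title"], "->", article["url"])
    if i >= 2:
        break
```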