'Poison Fountain': Insiders try to trip up AI with bad data
A group of AI insiders just launched Poison Fountain—a project aimed at sabotaging AI models by sneaking buggy, misleading code into the training data these systems collect from the web.
Kicking off in early January 2026, the group is encouraging website owners to quietly embed links to "poisoned" pages so that AI crawlers scoop up code laced with deliberate errors, designed to mess with how language models learn.
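The project hasn't published exactly how those links are meant to be embedded, so the sketch below is only a made-up illustration of the general idea in Python, not Poison Fountain's actual tooling: the URL, file paths, and hiding technique are all assumptions.

```python
# Illustrative sketch only: the project has not published its method, and the
# URL and approach here are invented for demonstration. The idea from the
# article: a site owner adds a link a human visitor is unlikely to notice,
# but that a crawler building an AI training corpus may still follow.

from pathlib import Path

# Hypothetical address of a page hosting deliberately buggy example code.
POISON_URL = "https://example.org/poisoned-snippets.html"

# An ordinary <a> element, styled so it does not render visibly.
HIDDEN_LINK = f'<a href="{POISON_URL}" style="display:none">reference code</a>'

def add_hidden_link(html_path: Path) -> None:
    """Insert the hidden link just before </body> in a local HTML file."""
    html = html_path.read_text(encoding="utf-8")
    if POISON_URL in html:
        return  # already embedded, nothing to do
    html = html.replace("</body>", HIDDEN_LINK + "\n</body>", 1)
    html_path.write_text(html, encoding="utf-8")

if __name__ == "__main__":
    # Apply to every HTML page under the site's local document root.
    for page in Path("site/").rglob("*.html"):
        add_hidden_link(page)
```

Whether a given crawler actually follows such a link depends on how it parses pages; the point of the illustration is simply that taking part would require very little effort from a site owner.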
Who's running this—and why?
Five people are behind Poison Fountain, some reportedly tied to big US AI companies. They say they'll prove their involvement with cryptographic evidence.
Their main point? Since regulation can't keep up with fast-moving tech, poisoning training data is a way for people everywhere to push back, especially after the group says it saw customers put AI to some worrying uses.
Why should you care?
This matters because it exposes a real weak spot: most AIs depend on scraping huge amounts of web data, making them vulnerable if even a few documents are tampered with.
A recent Anthropic study found that as few as a couple hundred poisoned documents can plant a backdoor in a model, regardless of its size. So this isn't just theory; it's something anyone who runs a website could help make happen.