How do you train AI models? This is the process
A leaked Surge AI document just gave us a peek into how tough it is to train big chatbots like Anthropic's Claude.
Turns out, the process relies on a global team—often overworked and underpaid—tasked with sorting through tricky stuff like medical advice and hate speech, all while following strict safety rules.
The guidelines ask workers to avoid harmful stereotypes while still permitting harmless jokes, a fine line that makes their job even more complicated.
Most of these moderators are based in countries like the Philippines, Pakistan, Kenya, and India.
While Surge AI says this was just a research tool for safer AI, the leak highlights how much responsibility falls on these workers—and how messy building ethical AI can get.