'Unchecked AI could lead to disaster,' warns Oxford professor
Technology
AI might be heading for a major setback, says Professor Michael Wooldridge from Oxford.
He worries that rushing AI products to market without adequate safety checks—chatbots with weak guardrails, say, or risky self-driving car updates—could cause a disaster severe enough to turn the public against AI, much as the Hindenburg crash did for airships.
Wooldridge's concerns about AI
Wooldridge warns that unchecked AI could flood social media with fake news, putting elections and democracy itself at risk.
He also criticizes large language models (LLMs) for being "very, very approximate" and sometimes giving confidently wrong answers.
He argues that AIs should admit when they don't know something instead of faking an answer.
Who is Wooldridge?
Wooldridge is a professor of computer science at the University of Oxford and a leading voice in AI research, best known for his work on multi-agent systems.