
Google's AI now sorts images like humans


Google DeepMind has built an AI that sees and sorts images more like we do.
Their new approach helps the AI learn new categories from just one example, and it copes better with unusual or distorted images while making more consistent decisions.
The work, published in Nature, tackles a long-standing problem: most AI vision models miss conceptual connections between objects that humans spot right away.

How the AI learned to think like us

Researchers fine-tuned a vision model on real human judgments, such as picking the odd one out of three images, collected over a huge image dataset called THINGS.
They also created AligNet, packed with millions of these human-like decisions.
Training other models on AligNet was meant to teach them to group objects by concept, something earlier vision AIs have struggled with.
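
To give a rough sense of how this kind of fine-tuning can work, here is a minimal sketch in Python of an odd-one-out objective: the model scores each image in a triplet by how similar the other two are to each other, and is nudged toward the choice humans made. This is not DeepMind's published code; the function name, tensor shapes, and use of PyTorch are illustrative assumptions.

    import torch
    import torch.nn.functional as F

    def odd_one_out_loss(emb_a, emb_b, emb_c, human_choice):
        # emb_*: (batch, dim) image embeddings from a vision backbone (assumed shapes).
        # human_choice: (batch,) index in {0, 1, 2} of the image people picked as the odd one out.
        a = F.normalize(emb_a, dim=-1)
        b = F.normalize(emb_b, dim=-1)
        c = F.normalize(emb_c, dim=-1)

        # The odd image is the one left out of the most similar pair, so the score
        # for "image i is odd" is the similarity between the other two images.
        score_a_odd = (b * c).sum(-1)
        score_b_odd = (a * c).sum(-1)
        score_c_odd = (a * b).sum(-1)
        logits = torch.stack([score_a_odd, score_b_odd, score_c_odd], dim=-1)

        # Cross-entropy against the human choice nudges the embedding space so the
        # model's notion of similarity matches human odd-one-out judgments.
        return F.cross_entropy(logits, human_choice)

Minimizing a loss like this over many triplets pulls images that people treat as conceptually similar closer together in the model's embedding space.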

The good and bad of this breakthrough

These upgraded AIs now agree with people more often and even show "human-like" uncertainty when unsure.
That means better reliability for things like facial recognition.
But there's a catch: while this could cut down on some tech biases, it might also copy over our own blind spots—so there's still work to do to keep things fair.