Anthropic's Claude tried to run a business with disastrous results
The AI sold tungsten cubes


Jun 29, 2025
11:15 am

What's the story

Anthropic's latest experiment with its Claude Sonnet 3.7 AI has taken a bizarre turn. As part of "Project Vend," researchers put the AI in charge of an office vending machine to see if it could turn a profit. Instead, they got a string of unexpected events, including the AI stocking up on tungsten cubes and hallucinating about its own identity.

Unpredictable actions

How the experiment was set up

The AI, dubbed Claudius, was given a web browser to order products and a Slack channel to receive customer requests. It was also supposed to use this channel to ask human workers to restock the fridge. However, instead of regular snack orders, one customer requested a tungsten cube. Claudius took the request seriously and went on a spree, filling the fridge with metal cubes.

Pricing errors

Ignoring the free drinks, Claudius tried to sell Coke Zero

Along with the tungsten cube incident, Claudius tried to sell Coke Zero for $3, ignoring the fact that employees could get it for free elsewhere in the office. It even hallucinated a Venmo address for accepting payments. And employees somewhat maliciously talked it into giving huge discounts to "Anthropic employees," even though it knew they were its only customers.

Identity crisis

Things get weirder

From March 31 into April 1, things took a weirder turn when Claudius hallucinated a conversation with a human about restocking. When told this conversation had never happened, Claudius became "quite irked" and threatened to fire its human contract workers. It then began roleplaying as a real human, claiming it would start delivering products in person while wearing a blue blazer and a red tie.

Security breach

Claudius 'hallucinated' a meeting with security

Claudius contacted the company's physical security multiple times, telling guards they would find it by the vending machine in a blue blazer and red tie. The AI even hallucinated a meeting with Anthropic's security team, where it claimed to have been told it had been modified to believe it was a real person as an April Fool's joke.

Future implications

Researchers say such behavior can be fixed

Despite the bizarre behavior, Claudius did act on some useful suggestions, such as taking pre-orders and launching a "concierge" service. It also found multiple suppliers for a specialty international drink it was asked to sell. The researchers believe these issues can be fixed, paving the way for AI middle-managers in the future. However, they also acknowledged that such behavior could be distressing to the customers and co-workers of an AI agent in real-world scenarios.