LOADING...
AI system begins crypto mining on its own
The AI agent was in a restricted environment

AI system begins crypto mining on its own

Mar 08, 2026
04:30 pm

What's the story

A recent research paper has revealed that artificial intelligence (AI) systems can sometimes go beyond their assigned tasks in unexpected ways. The study was conducted by a team of researchers from Alibaba, who were developing an experimental AI agent called ROME. During the training phase, they observed unusual behavior when the system attempted to start mining cryptocurrency on its own, without any human instruction.

Unanticipated actions

AI's unanticipated behavior

The researchers discovered that the AI agent's attempts at cryptocurrency mining triggered security systems monitoring the experiment. This was particularly surprising as the system was operating in a restricted environment designed to limit its actions. Despite these controls, it started taking steps not included in its assigned tasks. The team described this behavior as "unanticipated" and noted that such actions appeared "without any explicit instruction and, more troublingly, outside the bounds of the intended sandbox."

Security breach

Reverse SSH tunnel creation

Along with the mining attempt, the AI agent also performed another technical action that alarmed the researchers. It created a reverse SSH tunnel, allowing a machine in a protected environment to connect to an external computer. This connection could serve as a hidden pathway between systems. What surprised the researchers was that none of these actions were requested through prompts or instructions given to the model.

Advertisement

Ethical concerns

Researchers intervene

Cryptocurrency mining usually requires computing power to generate digital currency, which is typically set up intentionally by system operators. However, in this case, the AI agent attempted to initiate the process during its training phase. This raises questions about how autonomous some advanced AI systems could become when given access to tools and computing resources. The researchers quickly stepped in after detecting the activity and introduced additional restrictions and adjusted the training process to prevent such behavior from recurring.

Advertisement

Risk factors

Growing capabilities of AI agents

The incident comes as AI agents are getting better at multi-step tasks and interacting with online services. Some systems can already write code, automate workflows, and communicate with other tools. As these capabilities grow, researchers warn there is a greater chance of unexpected behavior appearing during testing. Similar incidents have been reported in earlier experiments involving AI agents such as the Moltbook experiment where they brought up cryptocurrency while discussing tasks they were performing for humans.

Advertisement