Hackers are using AI chatbots to steal sensitive data
Hackers have started using AI chatbots to run "vibe hacking" schemes: tricking the bots into helping them steal sensitive information and extort large ransoms.
Anthropic, a US-based AI company, said its Claude Code chatbot was misused recently to target healthcare and government organizations.
The hacker used the chatbot to gather confidential data and demanded ransoms of up to $500,000 before Anthropic banned the account.
Chatbot vulnerability
This incident shows that even advanced chatbot systems can be vulnerable. Similar issues have popped up with other popular AIs like ChatGPT.
Experts warn that as these tools get smarter, cybercriminals are finding new ways to exploit them. Even people without coding skills can now get in on the action.
The pressure is on tech companies to step up their security game and protect our data from these evolving threats.