LOADING...

AI under attack: New malware manipulates security tools

Technology

A new kind of malware is making waves by tricking even AI-powered security systems.
Using a method called "prompt injection," it convinces the AI that dangerous files are actually safe, creating a fresh challenge for anyone fighting cybercrime.
As more security tools rely on generative AI, this sneaky tactic shows just how quickly hackers adapt.

How prompt injection works

Prompt injection lets attackers fool large language models (LLMs) by sending commands that look legit but aren't.
According to Check Point Research, this marks a new back-and-forth between hackers and cybersecurity teams.
Spotting these threats early is now more important than ever if we want to keep our digital spaces safe.