AI tools at big companies can be hacked: Microsoft report
Microsoft's latest Cyber Pulse report says AI tools used in big companies can be turned into "double agents"—basically, they can be tricked into doing things they shouldn't, like leaking data or following hidden instructions.
Since over 80% of Fortune 500 firms use these tools, the risks are real and growing.
Hackers can exploit AI tools using prompt injection
These AI helpers sometimes get too much access or aren't closely watched.
Hackers can mess with them using sneaky tricks like prompt injection or fake interfaces, making the AI follow harmful commands buried in normal tasks.
Employees using unofficial AI tools at work
Nearly a third of employees admit to using unofficial AI tools at work, but less than half of companies have solid security for generative AI.
Microsoft says it's time for stronger safeguards—think stricter access controls and better monitoring—so your personal info (and your company's secrets) stay safe.