Healthcare data increasingly at risk in genAI apps
According to a new Netskope Threat Labs report, 44% of data policy violations involving generative AI included regulated healthcare data.
Nearly all organizations use genAI tools in some way, with most relying on apps that learn from user data.
Shadow AI tools still common
Even though fewer employees are using personal accounts for genAI at work (down from 87% to 71%), "shadow AI" tools, meaning those not approved by IT, remain common.
Plus, the overall amount of data (prompts and uploads) sent to these apps has increased more than 30-fold, raising privacy concerns about exposure of sensitive and regulated data.
More organizations using data loss prevention tech
On the bright side, more healthcare organizations are using data loss prevention (DLP) tech—up from 31% to 54%.
These tools seem to work: after getting alerts, 73% of users stopped risky behavior.
As Ray Canzanese of Netskope puts it, monitoring access and using DLP is key, since violations can carry penalties under GDPR (fines of up to €20 million) and HIPAA (up to $1.5 million per violation).