AI-generated code is full of security issues: Report
A new Veracode report says nearly half of AI-generated code comes with serious security issues.
The trend of "vibe coding"—letting AI quickly write software—sounds convenient, but it is raising red flags about how safe that code really is.
Majority of organizations do not review AI-generated code
Surprisingly, only a minority of organizations review AI-generated code for security.
Even though many organizations are already using or piloting these coding assistants, few actually know where or how AI-generated code ends up in their projects, which leaves plenty of room for mistakes.
Code written by AI assistants often fails at basic input validation
The report found that a large share of AI-written code fails at basic input validation: in cross-site scripting (XSS) cases, 86% of the code was vulnerable. Languages such as Java and Python are especially affected.
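To illustrate the kind of input-validation failure described here, consider a hypothetical sketch (not code from the report) of the classic XSS mistake: interpolating user input directly into HTML versus escaping it first.

```python
import html

def render_greeting_unsafe(name: str) -> str:
    # Vulnerable pattern: user input goes into HTML unescaped, so a
    # "name" like <script>alert(1)</script> would run in the browser.
    return f"<p>Hello, {name}!</p>"

def render_greeting_safe(name: str) -> str:
    # Safer pattern: html.escape() neutralizes HTML metacharacters,
    # so the same payload is rendered as inert text.
    return f"<p>Hello, {html.escape(name)}!</p>"

payload = "<script>alert(1)</script>"
print(render_greeting_unsafe(payload))  # script tag survives intact
print(render_greeting_safe(payload))    # payload is escaped to &lt;script&gt;...
```

The function names and the greeting template are illustrative; the underlying point is that escaping output is a baseline check that, per the report, much AI-generated code omits.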
Plus, misconfigured AI agents are often granted excessive permissions, making systems easier to attack.
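One common mitigation for over-permissioned agents is least-privilege scoping: the agent is handed only the actions its task requires rather than a blanket grant. A minimal sketch, with hypothetical action names and a made-up `perform` helper:

```python
# Task-scoped allowlist: everything not listed is denied by default.
ALLOWED_ACTIONS = {"read_file", "run_tests"}

def perform(action: str, target: str) -> str:
    # Deny-by-default check before the agent touches anything.
    if action not in ALLOWED_ACTIONS:
        raise PermissionError(f"agent is not permitted to {action}")
    return f"{action} on {target}"

print(perform("read_file", "app.py"))  # within scope, allowed
try:
    perform("delete_repo", "app")      # over-broad action, refused
except PermissionError as exc:
    print(exc)
```

This is only a sketch of the principle; real agent frameworks enforce scoping through API keys, roles, or sandboxing rather than an in-process allowlist.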
Experts say regular human reviews and better security checks are a must if we want to keep using these tools safely.