
Almost 50% of code written by AI tools contains vulnerabilities: Report
What's the story
A recent study by application security company Veracode has raised alarms over the security of AI-generated code. The 2025 GenAI Code Security Report found that nearly 45% of the code generated by large language models (LLMs) contained security vulnerabilities. These were not minor bugs, but flaws that could be exploited in modern web applications.
Security concerns
AI's security blindspot
The study also revealed a worrying trend: when given a choice between a secure and an insecure way to complete a task, AI models chose the insecure option almost half of the time. This is particularly concerning because many of the resulting vulnerabilities fall under the OWASP Top 10, the list of the most critical security risks in web applications. And despite steady improvements in producing functional code, the models have shown no progress in producing more secure code.
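To make that secure-versus-insecure choice concrete, here is a minimal, hypothetical Java sketch (illustrative only, not taken from the report) of the kind of decision a model faces when echoing user input back into a web page: writing the value straight into the HTML is a classic OWASP-listed cross-site scripting flaw, while encoding it first is the secure alternative.

    public class GreetingRenderer {

        // Insecure completion: user input is written directly into HTML,
        // so a crafted value can inject script (cross-site scripting).
        static String renderUnsafe(String name) {
            return "<p>Hello, " + name + "</p>";
        }

        // Secure completion: HTML-encode the input before rendering it.
        static String renderSafe(String name) {
            return "<p>Hello, " + escapeHtml(name) + "</p>";
        }

        static String escapeHtml(String s) {
            return s.replace("&", "&amp;").replace("<", "&lt;").replace(">", "&gt;")
                    .replace("\"", "&quot;").replace("'", "&#x27;");
        }

        public static void main(String[] args) {
            String attackerInput = "<script>alert('xss')</script>";
            System.out.println(renderUnsafe(attackerInput)); // script tag survives intact
            System.out.println(renderSafe(attackerInput));   // script tag is neutralized
        }
    }

Both versions compile and produce a greeting; only one is safe to put in front of users, which is exactly the distinction the models keep missing.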
Trust issues
The rise of 'vibe coding'
The report also highlighted the growing trend of "vibe coding," where developers let AI generate code without giving it any explicit security requirements. That approach effectively leaves security decisions to a chatbot, which, Veracode says, often gets them wrong. The company's research team used code-completion tasks tied to known weakness classes from the MITRE CWE list and found that Java had the highest failure rate, at over 70%.
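As a hypothetical illustration of such a code-completion task (the report's actual prompts are not reproduced here), consider a Java method that looks up a user in a database. Concatenating the user-supplied name into the query is the kind of completion that fails the test (SQL injection, CWE-89 on the MITRE list), while a parameterized query is what a passing completion looks like.

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    import java.sql.Statement;

    public class UserLookup {

        // Failing completion: user input is concatenated into the SQL string,
        // so a name like "x' OR '1'='1" rewrites the query (SQL injection, CWE-89).
        static ResultSet findUserUnsafe(Connection conn, String name) throws SQLException {
            Statement stmt = conn.createStatement();
            return stmt.executeQuery("SELECT * FROM users WHERE name = '" + name + "'");
        }

        // Passing completion: a parameterized query keeps the input as data, not as SQL.
        static ResultSet findUserSafe(Connection conn, String name) throws SQLException {
            PreparedStatement ps = conn.prepareStatement("SELECT * FROM users WHERE name = ?");
            ps.setString(1, name);
            return ps.executeQuery();
        }
    }

The two versions take nearly the same effort to write, which underscores why the insecure choice is avoidable rather than forced by the functional requirement.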
Security shift
Vulnerabilities now easier to exploit
The report noted that AI is also making it easier for low-skilled attackers to find and exploit vulnerabilities: they can now use AI tools to scan systems, identify flaws, and generate exploit code. This shift is changing the security landscape and putting defenders on the back foot. As a countermeasure, Veracode advises companies to integrate security into every part of the development pipeline.