Security Researchers Sound the Alarm on Vulnerabilities in AI-Generated Code
Full Story from Infosecurity Magazine
https://www.infosecurity-magazine.com/news/ai-generated-code-vulnerabilities/
“Everyone is saying AI code is insecure, but nobody is actually tracking it. We want real numbers. Not benchmarks, not hypotheticals, real vulnerabilities affecting real users.”
Georgia Tech cybersecurity researcher Hanqing Zhao is referring to vulnerabilities that can arise when people rely on AI-powered large language models (LLMs) to "vibe code" entire projects and then move straight to production without a proper code review process.
Zhao founded Vibe Security Radar in May 2025 to track vulnerabilities directly introduced by AI coding tools. In this interview with Infosecurity Magazine, Zhao explains how Vibe Security Radar works and reveals which LLM is currently "flooding software with new vulnerabilities."