Researchers report that "vibe coding", a style in which developers lean heavily on generative AI assistants, is putting batches of vulnerable code into circulation. The Vibe Security Radar, built by the Systems Software & Security Lab (SSLab) at Georgia Tech, scanned public vulnerability sources and found many examples where AI tools contributed insecure code. The team analysed over 43,000 security advisories to reach these findings.
Graduate research assistant Hanqing Zhao explains how the radar works: for each vulnerability, it locates the flawed code, inspects the repository's history to find who introduced the bug, and checks for AI tool signatures. The radar has confirmed 74 cases so far, including 14 labelled critical and 25 labelled high, covering command injection, authentication bypass and server-side request forgery. Zhao notes that AI models often repeat the same mistakes, so millions of developers using the same models can reproduce the same bugs across projects.
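SSLab has not published the radar's implementation, but the attribution steps Zhao describes map onto ordinary git plumbing. The sketch below is a guess at that pipeline in Python; the `AI_SIGNATURES` patterns and both helper functions are assumptions made for illustration, since real markers vary by tool and can be edited out before committing.

```python
import re
import subprocess

# Commit-message markers that some AI tools are reported to leave behind.
# These exact patterns are illustrative assumptions, not SSLab's list.
AI_SIGNATURES = [
    re.compile(r"^co-authored-by:.*\b(claude|copilot)\b", re.IGNORECASE | re.MULTILINE),
    re.compile(r"generated with .*(claude code|copilot)", re.IGNORECASE),
]

def commit_for_line(repo: str, path: str, line: int) -> str:
    """Find the commit that introduced the flawed line, via git blame."""
    out = subprocess.run(
        ["git", "-C", repo, "blame", "-L", f"{line},{line}", "--porcelain", path],
        capture_output=True, text=True, check=True,
    ).stdout
    return out.split()[0]  # porcelain output starts with the commit hash

def looks_ai_authored(repo: str, commit: str) -> bool:
    """Scan the commit's author email and message for AI tool markers."""
    show = subprocess.run(
        ["git", "-C", repo, "show", "-s", "--format=%ae%n%B", commit],
        capture_output=True, text=True, check=True,
    ).stdout
    email, _, message = show.partition("\n")
    return "[bot]" in email or any(sig.search(message) for sig in AI_SIGNATURES)
```

A detector built only on markers like these misses anything committed after the markers are stripped, which is the limitation described next.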
The radar currently traces metadata such as co-author tags and bot emails, but it cannot identify cases where those markers have been removed. Researchers are also improving verification and expanding the sources they scan. To find AI-written code without metadata, the team is moving toward behavioral detection that reads naming patterns, function structure and error handling.
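The article does not detail the behavioral model, and a classifier trained on learned features is only a plausible shape for it. Purely to illustrate the three signal types named above (naming patterns, function structure, error handling), here is a hypothetical feature extractor in Python; the features, and any thresholds a real detector would apply to them, are invented for this sketch.

```python
import ast

def style_signals(source: str) -> dict:
    """Compute crude style signals of the kind a behavioral classifier
    might consume. Illustrative only; not SSLab's actual feature set."""
    tree = ast.parse(source)
    funcs = [n for n in ast.walk(tree) if isinstance(n, ast.FunctionDef)]
    handlers = [n for n in ast.walk(tree) if isinstance(n, ast.ExceptHandler)]
    return {
        # Naming: average length of function names.
        "avg_name_len": sum(len(f.name) for f in funcs) / max(len(funcs), 1),
        # Structure: share of functions that carry a docstring.
        "docstring_ratio": sum(bool(ast.get_docstring(f)) for f in funcs) / max(len(funcs), 1),
        # Error handling: share of bare or overly broad except clauses.
        "broad_except_ratio": sum(
            h.type is None or (isinstance(h.type, ast.Name) and h.type.id == "Exception")
            for h in handlers
        ) / max(len(handlers), 1),
    }
```

A real system would feed signals like these, computed over many files, into a trained model rather than fixed rules.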
- Detection can use metadata like co-author tags and bot emails.
- Behavioral models aim to identify AI code from the code itself.
- Researchers recommend careful review of AI output, especially input handling and authentication (see the sketch after this list).
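As a concrete instance of that last point, command injection, one of the vulnerability classes the radar confirmed, usually comes from splicing untrusted input into a shell command. The Python example below is hypothetical, written for illustration rather than taken from any of the 74 confirmed cases; the `HOSTNAME` allow-list is an assumption about what careful review of input handling might add.

```python
import re
import subprocess

HOSTNAME = re.compile(r"^[A-Za-z0-9.-]+$")  # simple allow-list, illustrative only

def ping_host(host: str) -> str:
    # Risky pattern an assistant might emit: splicing input into a shell
    # string, so host = "example.com; cat /etc/passwd" runs both commands:
    #   subprocess.run(f"ping -c 1 {host}", shell=True)
    #
    # Safer: validate the input, then pass an argument list with no shell,
    # so the value stays a single argument rather than shell syntax.
    if not HOSTNAME.match(host):
        raise ValueError(f"rejected hostname: {host!r}")
    result = subprocess.run(
        ["ping", "-c", "1", host], capture_output=True, text=True, check=False
    )
    return result.stdout
```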
Zhao warns that as AI agents grow more autonomous, building features, creating files and making architecture decisions on their own, the attack surface increases. Across seven months of 2025 the radar found about 18 cases; in the first three months of 2026 it found 56, and March 2026 alone had 35, more than all of 2025 combined. Claude Code and Copilot account for most detections, partly because they leave the clearest signatures.
Difficult words
- vulnerability — a weakness that attackers can exploit
- generative — able to produce new content, such as text or code
- metadata — data that describes other data
- behavioral — related to patterns of how code behaves
- authentication — process that confirms a user's identity
- attack surface — all possible points an attacker can target
- signature — a unique pattern left by a tool
- verification — checking that code works and is safe
Discussion questions
- What risks arise when many developers use the same AI models, and how could teams reduce those risks?
- How might behavioral detection change the way organisations find AI-written code without metadata?
- Should companies require metadata like co-author tags from AI tools to help detection? Why or why not?