Researchers say a programming style called "vibe coding" is producing large amounts of insecure code. They found many cases in which generative AI tools helped create code with security problems. Vibe coding can involve tools such as Claude, Gemini, and GitHub Copilot.
A research team at Georgia Tech built the Vibe Security Radar to find these cases. The radar scans public vulnerability databases and looks at code history to see who introduced a bug. If it finds an AI signature in metadata, it flags the case.
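The idea of flagging a case when AI metadata appears in code history can be sketched in a few lines. This is only an illustration, not the actual Vibe Security Radar: the marker strings and the commit data structure are assumptions made for the example.

```python
# Illustrative sketch: scan commit metadata for markers that suggest
# an AI tool introduced the code. The markers below are assumptions,
# not the real tool's signature list.
AI_MARKERS = ("Co-Authored-By: Claude", "Generated by Copilot", "Gemini")

def flag_ai_commits(commits):
    """Return hashes of commits whose metadata contains an AI marker.

    `commits` is a list of dicts with 'hash' and 'message' keys,
    a stand-in for real git history data.
    """
    flagged = []
    for commit in commits:
        message = commit["message"].lower()
        if any(marker.lower() in message for marker in AI_MARKERS):
            flagged.append(commit["hash"])
    return flagged

history = [
    {"hash": "a1b2c3", "message": "Fix login bug\n\nCo-Authored-By: Claude"},
    {"hash": "d4e5f6", "message": "Update README"},
]
print(flag_ai_commits(history))  # ['a1b2c3']
```

A real tool would read this metadata from version control and cross-check the flagged code against vulnerability databases, as the article describes.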
Researchers advise reviewing AI-produced code the way you would review a junior developer's pull request. They especially advise checking input handling and authentication, and using tools that scan for vulnerabilities.
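The input-handling check the researchers mention can be as simple as validating user input against an allow-list before it reaches a query or command. This is a minimal sketch of that idea; the pattern and function name are examples, not part of the research.

```python
import re

# Example allow-list: usernames may only contain letters, digits,
# and underscores, and must be 3-20 characters long.
USERNAME_PATTERN = re.compile(r"^[A-Za-z0-9_]{3,20}$")

def is_valid_username(value: str) -> bool:
    """Return True only if the input matches the allow-list pattern."""
    return bool(USERNAME_PATTERN.fullmatch(value))

print(is_valid_username("alice_01"))            # True
print(is_valid_username("alice'; DROP TABLE"))  # False
```

Rejecting unexpected input early like this is one of the simplest defenses against the kinds of vulnerabilities the study found in AI-generated code.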
Difficult words
- insecure — not safe from attacks or security problems
- generative — able to create new content or data
- metadata — data that gives information about other data
- vulnerability — a weakness that allows security problems
- flag — to mark something for attention or action
- authentication — process to check that a user is who they claim
Tip: hover, focus or tap highlighted words in the article to see quick definitions while you read or listen.
Discussion questions
- Would you review AI-produced code like a junior developer's pull request? Why?
- The researchers mention input handling and authentication. Which would you check first?
- Have you used a generative AI tool to write code or text? What did you check?