Anthropic ships automated security reviews for Claude Code as AI-generated vulnerabilities surge


Anthropic launches automated AI security tools for Claude Code that scan code for vulnerabilities and suggest fixes, addressing security risks from rapidly expanding AI-generated software development.
Anthropic launched automated security review capabilities for its Claude Code platform on Wednesday, introducing tools that can scan code for vulnerabilities and suggest fixes as artificial intelligence dramatically accelerates software development across the industry.
The new features arrive as companies increasingly rely on AI to write code faster than ever before, raising critical questions about whether security practices can keep pace with the velocity of AI-assisted development. Anthropic’s solution embeds security analysis directly into developers’ workflows through a simple terminal command and automated GitHub reviews.
“People love Claude Code, they love using models to write code, and these models are already extremely good and getting better,” said Logan Graham, a member of Anthropic’s frontier red team who led development of the security features, in an interview with VentureBeat. “It seems really possible that in the next couple of years, we are going to 10x, 100x, 1000x the amount of code that gets written in the world. The only way to keep up is by using models themselves to figure out how to make it secure.”
The announcement comes just one day after Anthropic released Claude Opus 4.1, an upgraded version of its most powerful AI model that shows significant improvements in coding tasks. The timing underscores an intensifying competition between AI companies, with OpenAI expected to announce GPT-5 imminently and Meta aggressively poaching talent with reported $100 million signing bonuses.

Why AI code generation is creating a massive security problem
The security tools address a growing concern in the software industry: as AI models become more capable at writing code, the volume of code being produced is exploding, but traditional security review processes haven’t scaled to match. Currently, security reviews rely on human engineers who manually examine code for vulnerabilities — a process that can’t keep pace with AI-generated output.
Anthropic’s approach uses AI to solve the problem AI created. The company has developed two complementary tools that leverage Claude’s capabilities to automatically identify common vulnerabilities including SQL injection risks, cross-site scripting vulnerabilities, authentication flaws, and insecure data handling.
The first tool is a /security-review command that developers can run from their terminal to scan code before committing it. “It’s literally 10 keystrokes, and then it’ll set off a Claude agent to review the code that you’re writing or your repository,” Graham explained. The system analyzes code and returns high-confidence vulnerability assessments along with suggested fixes.
The second component is a GitHub Action that automatically triggers security reviews when developers submit pull requests. The system posts inline comments on code with security concerns and recommendations, ensuring every code change receives a baseline security review before reaching production.
