Anthropic has introduced Code Review within its Claude Code platform, a system that deploys a team of agents to analyze every pull request for bugs before any human developer reviews it.
With its models regularly topping the charts in coding and web development tasks, Anthropic launched Claude Code in 2025, an agent-based tool that lets developers produce code very rapidly. That speed, however, has driven a sharp increase in the volume of pull requests, creating a bottleneck in delivery cycles. Anthropic aims to address this with Code Review. Here’s how it works.
An agent team deployed on each pull request
Code Review is a feature within Claude Code that automates code review when a pull request is opened. When a PR is submitted to a repository where the feature is enabled, several agents are dispatched simultaneously to analyze the changes from different perspectives. A central aggregator agent then consolidates the results, removes duplicates, and ranks the issues by severity.
The results are displayed directly on the pull request: a global summary comment, complemented by inline annotations for each identified bug. The tool categorizes issues into three severity levels using a color code: red for critical anomalies, yellow for issues to monitor, and purple for bugs detected in adjacent code not modified by the PR. Agents do not approve pull requests, leaving this decision to the developers.
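The pipeline described above, agents reporting in parallel, then an aggregator deduplicating and ranking by severity, can be sketched in miniature. This is an illustrative toy only: Anthropic has not published Code Review’s internals, and every name below (`Finding`, `aggregate`, the severity labels) is hypothetical.

```python
# Toy sketch of the aggregation step: merge findings from several review
# agents, drop duplicates, and sort by severity. Hypothetical names only;
# not Anthropic's actual implementation.
from dataclasses import dataclass

# Severity labels mirroring the article's color code:
# red = critical, yellow = monitor, purple = adjacent (unmodified) code.
SEVERITY_RANK = {"critical": 0, "monitor": 1, "adjacent": 2}

@dataclass(frozen=True)
class Finding:
    file: str
    line: int
    severity: str  # "critical" | "monitor" | "adjacent"
    message: str

def aggregate(agent_reports: list[list[Finding]]) -> list[Finding]:
    """Merge per-agent findings, dedupe by (file, line, message), rank by severity."""
    unique = {(f.file, f.line, f.message): f
              for report in agent_reports for f in report}
    return sorted(unique.values(),
                  key=lambda f: (SEVERITY_RANK[f.severity], f.file, f.line))

# Example: two agents flag an overlapping issue on the same PR.
agent_a = [Finding("auth.py", 42, "critical", "token never expires")]
agent_b = [
    Finding("auth.py", 42, "critical", "token never expires"),  # duplicate
    Finding("utils.py", 7, "adjacent", "unused helper shadows builtin"),
]
merged = aggregate([agent_a, agent_b])
```

Here `merged` contains two findings: the duplicate critical report collapses into one entry, which sorts ahead of the adjacent-code note.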
The scope is deliberately limited to logical errors, excluding stylistic concerns. Anthropic justifies this choice by aiming to reduce false positives: any logical bug identified, by definition, needs to be corrected. Moreover, agents can scan the entire codebase, not just the modified files, to detect side effects.
A tool already tested internally at Anthropic
Code Review addresses a direct problem arising from the rapid code production enabled by Claude Code: the more code developers produce, the greater the volume of pull requests that need reviewing. Consequently, human reviews are rushed, and bugs slip through the cracks.
Anthropic initially tested Code Review internally on nearly all of its pull requests. Before deployment, 16% of PRs received substantial review comments; since then, the rate has risen to 54%. On large PRs (over 1,000 lines modified), issues are detected in 84% of cases, with an average of 7.5 anomalies identified. Less than 1% of the feedback is judged incorrect by developers.
Available in research preview for Team and Enterprise plans
Code Review is now available in research preview for Team and Enterprise plans of Claude Code. Administrators can activate it from the Claude Code settings, select the relevant GitHub repositories, and the system then runs automatically with each new PR.
Billing is token-based, with an estimated cost ranging from $15 to $25 per review depending on the PR complexity. The average processing time is around 20 minutes. Administrators can set a monthly spending cap, activate the tool selectively by repository, and view a dedicated analytical dashboard.
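For teams planning a spending cap, the quoted $15 to $25 per review implies a simple back-of-the-envelope budget. The helper below is a hypothetical illustration of that arithmetic, not part of any Anthropic tooling; the $500 cap is an arbitrary example.

```python
# Back-of-the-envelope sketch: how many automated reviews a monthly
# spending cap covers at the article's quoted $15-$25 per review.
def reviews_per_month(cap_usd: float,
                      cost_low: float = 15.0,
                      cost_high: float = 25.0) -> tuple[int, int]:
    """Return (worst-case, best-case) review counts under a spending cap."""
    return int(cap_usd // cost_high), int(cap_usd // cost_low)

# Example: a $500/month cap covers roughly 20 to 33 reviews.
worst, best = reviews_per_month(500.0)
```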

Jordan Park writes in-depth reviews and editorial opinion pieces for Touch Reviews. With a background in UI/UX design, Jordan offers a unique perspective on device usability and user experience across smartphones, tablets, and mobile software.