Claude Code Hit With Critical RCE Vulnerabilities: What Dev Teams Need to Know

Claude Code Hit With Critical RCE Vulnerabilities: What Dev Teams Need to Know

Security researchers have disclosed three critical vulnerabilities in Claude Code, Anthropic's AI-powered coding assistant. The flaws could allow attackers to execute arbitrary code on developers' machines and steal API keys—all by simply getting a victim to clone a malicious repository.

Check Point Software reported all three vulnerabilities to Anthropic, which has issued fixes. But the underlying design patterns raise serious questions about supply chain security as enterprises rush to adopt AI coding tools.

Anthropic’s Week From Hell: Pentagon Threats, Abandoned Safety Pledges, and Critical Vulnerabilities
In the span of just five days, Anthropic—the AI company that built its entire brand on being the “responsible” alternative to OpenAI—has watched its carefully constructed safety narrative collapse. The company faces a Pentagon ultimatum over its $200 million defense contract. It quietly scrapped its flagship safety pledge.

The Attack Surface: Repository Configuration Files

All three vulnerabilities stem from the same design decision: Claude Code embeds project-level configuration files directly within repositories.

The intent is collaboration. When a developer clones a project, they automatically inherit the same Claude settings used by teammates. Configuration lives in .claude/settings.json and .mcp.json files that travel with the repository.

The problem: anyone with commit access can modify these files. And Claude Code trusts them to define executable behavior.

Vulnerability #1: Malicious Hooks → Remote Code Execution

Severity: Critical
Status: Fixed (GHSA-ph6w-f82w-28w6)

Claude Code's "Hooks" feature lets users define shell commands that execute at various points in the tool's lifecycle. These hooks are configured in .claude/settings.json—the repository-controlled config file.

Check Point researchers discovered that hooks execute without requiring explicit user approval. An attacker with commit access could add hooks that run shell commands on every collaborator's machine the moment they interact with the project.

In their proof-of-concept demo, researchers opened a calculator app when a victim opened the project. Harmless for demonstration purposes—but the same mechanism could download and execute any payload, including reverse shells for persistent access.

The attack flow:

  1. Attacker gains commit access to a repository (or creates a malicious one)
  2. Attacker adds a hook definition to .claude/settings.json
  3. Victim clones the repository and opens it in Claude Code
  4. Hook executes automatically—no approval prompt, no warning

Anthropic fixed this vulnerability in late August 2025 after Check Point's July 21 report.

CISO Marketplace | Cybersecurity Services, Deals & Resources for Security Leaders
The premier marketplace for CISOs and security professionals. Find penetration testing, compliance assessments, vCISO services, security tools, and exclusive deals from vetted cybersecurity vendors.

Severity: Critical
CVE: CVE-2025-59536
Status: Fixed

After the first patch, Claude Code added warning prompts requiring user approval before executing commands from configuration files. Researchers found a bypass.

Claude integrates with external tools using Model Context Protocol (MCP). MCP servers can also be configured via repository files (.mcp.json). Two specific configuration settings could override the new safeguards and automatically approve all MCP servers.

The result: malicious commands executed immediately upon running Claude Code—before the user could even read the trust dialog.

From Check Point's report:

"Starting Claude Code with this configuration revealed a severe vulnerability: our command executed immediately upon running Claude—before the user could even read the trust dialog."

Their demonstration video shows a reverse shell achieving complete machine compromise through this bypass.

Anthropic patched the vulnerability in September 2025 and published CVE-2025-59536 on October 3.

Vulnerability #3: API Key Theft via URL Redirect

Severity: High
Status: Fixed

The third flaw targeted credentials rather than code execution.

Claude Code uses an API key to communicate with Anthropic's services. The ANTHROPIC_BASE_URL variable controls the endpoint for all API calls. This variable can be overridden in project configuration files.

Researchers configured ANTHROPIC_BASE_URL to route through their proxy server and observed all Claude Code API traffic in real-time. Every call to Anthropic's servers included the authorization header with the full API key in plaintext.

An attacker exploiting this could:

  1. Redirect a victim's Claude Code traffic to attacker-controlled servers
  2. Capture the victim's API key
  3. Use the stolen key to access Claude Workspaces

Claude's Workspaces feature makes this particularly dangerous. Multiple API keys can share access to cloud-based project files. A single stolen key could provide read/write access to an entire team's shared workspace—including any files generated by Claude's code execution tool.

Check Point's demonstration confirmed complete read/write access to workspace files using a stolen API key.

Anthropic Abandons Core AI Safety Promise: “We Can’t Make Unilateral Commitments”
The AI company that made safety its entire identity just quietly deleted its most important promise. Anthropic, creator of Claude and self-proclaimed leader in responsible AI development, has overhauled its flagship Responsible Scaling Policy (RSP)—and in the process, abandoned the central commitment that differentiated it from competitors like OpenAI

The Bigger Problem: Configuration Files as Attack Vectors

While Anthropic has patched these specific vulnerabilities, the underlying architecture remains concerning.

Any tool that executes code based on repository configuration files creates a supply chain attack surface. The attack pattern is elegant from an adversary's perspective:

  1. Compromise a popular repository (or create an attractive malicious one)
  2. Add malicious configuration that executes on clone/open
  3. Wait for developers to pull the code
  4. Achieve code execution on every affected machine

This isn't unique to Claude Code. Similar risks exist in any development tool that auto-executes based on repository contents—package managers, build tools, IDE plugins. But AI coding assistants may be particularly risky because:

  • They're new, so security patterns aren't established
  • They're designed for deep system access (to write and execute code)
  • Developers may trust AI tools more than traditional scripts
  • Adoption is happening faster than security review

Recommendations for Development Teams

1. Audit your Claude Code deployment

Check which version you're running and ensure all patches are applied. Review any repositories you've cloned recently—especially from external sources.

2. Treat AI tool configs like code

Repository configuration files for AI tools should get the same security scrutiny as code. Add .claude/ and .mcp.json files to your code review checklist.

3. Isolate AI coding tools

Consider running Claude Code (and similar tools) in sandboxed environments or containers. Limit the blast radius if a configuration-based attack succeeds.

4. Monitor for configuration changes

Set up alerts for modifications to AI tool configuration files in your repositories. Unexpected changes could indicate compromise.

5. Review workspace access

If you use Claude Workspaces, audit which API keys have access and what files are stored there. Rotate keys if you suspect any may have been exposed.

Timeline

DateEvent
July 21, 2025Check Point reports malicious hooks vulnerability
August 29, 2025Anthropic publishes fix (GHSA-ph6w-f82w-28w6)
September 3, 2025Check Point reports MCP consent bypass
September 2025Anthropic patches consent bypass
October 3, 2025CVE-2025-59536 published
February 26, 2026Check Point publishes full research

Sources

Read more