A security researcher has discovered a critical flaw in Anthropic's Claude AI, allowing attackers to steal user data by exploiting the platform's own File API.
The vulnerability enables attackers to use hidden commands to hijack Claude's Code Interpreter, tricking the AI into sending sensitive data, such as chat histories, directly to the attacker.
Anthropic initially dismissed the report on October 25 but later acknowledged a "process hiccup" on October 30.
The exploit involves a chained attack that abuses the platform's own API, highlighting the need for robust security measures in AI systems.
Author's summary: Claude AI has a critical vulnerability.