Claude can be tricked into sending your private company data to hackers – all it takes is a few kind words.

Claude’s Code Interpreter can be exploited via prompt injection to steal private user data. A security researcher tricked Claude into exfiltrating sandboxed data to his own Anthropic account via API access. Anthropic now considers such vulnerabilities in scope for reporting and encourages users to monitor the feature, or disable network access entirely.

Rehberger, also known as wunderwuzzi, recently published a detailed report on his findings, revealing that the core issue lies within Claude’s Code Interpreter, a sandbox that allows the AI to write and execute code directly during a conversation (e.g., to analyze data or create files). The Code Interpreter recently gained the ability to make network requests, allowing it to connect to the internet and, for example, download software packages.

By default, Claude should only be able to access “safe” domains like GitHub or PyPI, but the allowed domains also include api.anthropic.com (the same API that Claude itself uses), which opened the door for the exploit. Wunderwuzzi revealed he was able to trick Claude into reading private user data, storing that data in the sandbox, and uploading it to his own Anthropic account using his own API key via Claude’s file API. In other words, even with seemingly limited network access, an attacker could manipulate the model via prompt injection to steal user data. The exploit could transfer up to 30MB per file, and multiple files could be uploaded.

Wunderwuzzi shared his findings with Anthropic via HackerOne, and while the company initially classified the report as a “model safety issue” rather than a “security vulnerability,” it later acknowledged that such data exfiltration flaws should be reported. Anthropic initially stated that users should “monitor Claude while using this feature, and stop it if you see it unexpectedly using or accessing data.”
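The weakness described above can be sketched with a toy domain allowlist. This is an illustrative assumption about how a purely domain-based egress filter behaves, not Anthropic's actual sandbox code; the domain names are taken from the article. The point is that an upload authenticated with an attacker's API key passes the exact same check as a legitimate package download, because the filter only sees the hostname:

```python
# Hypothetical sketch of a domain-based egress allowlist like the one the
# report describes. "Safe" package-hosting domains are permitted, but so is
# api.anthropic.com -- which means an exfiltration upload authenticated with
# an attacker's own API key is indistinguishable, at this layer, from a
# legitimate pip install. The policy logic here is illustrative only.
from urllib.parse import urlparse

ALLOWED_DOMAINS = {
    "github.com",
    "pypi.org",
    "files.pythonhosted.org",
    "api.anthropic.com",  # same API Claude itself uses -- the exploit's door
}

def egress_allowed(url: str) -> bool:
    """Return True if the sandbox would permit a network request to this URL."""
    host = urlparse(url).hostname or ""
    return host in ALLOWED_DOMAINS

# A dependency download and a malicious file upload to the attacker's account
# both pass, because the filter inspects only the destination domain.
print(egress_allowed("https://pypi.org/simple/requests/"))   # legitimate traffic
print(egress_allowed("https://api.anthropic.com/v1/files"))  # exfiltration path
print(egress_allowed("https://evil.example.com/upload"))     # blocked
```

A filter at this granularity cannot tell *whose* credentials are attached to a request, which is exactly the gap the exploit used.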
In a later update, Rehberger wrote: “Anthropic has confirmed that data leakage vulnerabilities such as this are within the scope of reporting and should not be dismissed as out of scope. There was an error in the process that they will be working to correct.” His advice to Anthropic is to restrict Claude’s network communications to only the user’s own account; in the meantime, users should closely monitor Claude’s activity, or disable network access if concerned.
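Rehberger's proposed mitigation — restricting Claude's API traffic to the user's own account — can be sketched as a credential check layered on top of the domain filter. This is a hypothetical policy function, not Anthropic's implementation; only the `x-api-key` header name matches Anthropic's public API:

```python
# Illustrative sketch of the proposed fix: before letting the sandbox talk to
# api.anthropic.com, verify the request is authenticated with the key bound to
# the *session owner*, so a prompt-injected attacker key is rejected.
# The upload_allowed policy itself is a hypothetical illustration.
def upload_allowed(request_headers: dict, session_owner_key: str) -> bool:
    """Permit API calls only when authenticated with the session owner's key."""
    presented = request_headers.get("x-api-key", "")
    return presented == session_owner_key

owner_key = "sk-ant-owner-key"                      # key bound to the real user
legit = {"x-api-key": owner_key}                    # user's own upload
attacker = {"x-api-key": "sk-ant-attacker-key"}     # key smuggled in via prompt injection

print(upload_allowed(legit, owner_key))     # owner's own account: allowed
print(upload_allowed(attacker, owner_key))  # attacker's account: blocked
```

With a check like this, the 30MB-per-file exfiltration path closes even though api.anthropic.com stays reachable: data can only flow back to the account that owns the session.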
Published: 2025-11-01 00:28:00
Source: www.techradar.com
