
Claude AI Can Now Control Your PC, Prompting Concern From Security Experts


Security pros suggest hackers could trick Claude’s ‘computer use’ into deploying malware. ‘I’m majorly crossing my fingers that Anthropic has massive guardrails,’ one expert says.
With its latest update, the Claude AI tool from Amazon-backed Anthropic can control your computer. The idea is to have Claude “use computers the way people do,” but some AI and security experts warn it could facilitate cybercrime or impact user privacy.
The feature, dubbed “computer use,” means Claude can autonomously complete tasks on your computer by moving the cursor, opening web pages, typing text, downloading files, and performing other activities. It launched first for developers via the Claude API and is included in the Claude 3.5 Sonnet beta, but could be added to more models in the future. Anthropic warns, however, that the feature is still in its early stages and may be faulty or make mistakes.
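For developers, the capability is exposed through Anthropic’s API as a beta tool. The snippet below is a minimal sketch of what a request might look like; the model string, tool type, and beta flag reflect the identifiers Anthropic published at launch and are assumptions that may change. Notably, Claude does not touch the machine directly through the API: it returns the actions it wants to take, and the developer’s own code executes them.

```python
# Minimal sketch of a "computer use" request via the Anthropic Python SDK.
# The model string, tool type, and beta flag are the identifiers published
# at the feature's launch and may change in later releases.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

response = client.beta.messages.create(
    model="claude-3-5-sonnet-20241022",
    max_tokens=1024,
    betas=["computer-use-2024-10-22"],
    tools=[{
        "type": "computer_20241022",   # beta tool for mouse, keyboard, and screenshot actions
        "name": "computer",
        "display_width_px": 1024,      # dimensions of the screen Claude is told it controls
        "display_height_px": 768,
    }],
    messages=[{"role": "user", "content": "Open the downloads folder and list its contents."}],
)

# Claude replies with tool_use blocks describing the clicks, keystrokes, or
# screenshots it wants. The developer's agent loop performs those actions and
# sends the results back in a follow-up message -- which is exactly the step
# that worries the security researchers quoted in this article.
for block in response.content:
    print(block)
```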
Anthropic says companies like Asana, Canva, and DoorDash are already testing this new feature, asking Claude to complete jobs that normally require “dozens, and sometimes even hundreds, of steps to complete.” This could mean a more automated US economy as employees automate tasks at work, helping them meet deadlines or get more things done. But it could also lead to fewer jobs if more projects ship faster.
Claude may refuse to do certain tasks that could fully automate your social media and email accounts. One coder, however, claims he’s been able to create a “wrapper” that circumvents those restrictions.
‘I’m breaking out into a sweat thinking about how cybercriminals could use this tool.’
From a security standpoint, Jonas Kgomo, founder of the AI safety group Equianas Institute, called Claude’s computer use “untested AI safety territory” and emphasized that cyberattacks are entirely possible with the new tool.
Parrot AI founder Paul Morville tells PCMag in a message that while Anthropic’s advice to only use the new feature when you can supervise it is wise, “there is enormous potential for both intentional and unintentional security problems,” and the capability could one day help hackers deploy autonomous remote access trojans (AI RATs).
