Anthropic has rolled out a groundbreaking feature that allows its AI model, Claude, to take control of users' computers to perform various tasks. This development marks a significant shift in how AI models interact with users, blurring the line between human and machine. The new feature, available to Claude Pro and Claude Max subscribers, enables Claude to access a user's computer, perform tasks, and even operate applications such as web browsers and development tools.
For the feature to work, users must be on a qualifying subscription plan and have a computer running macOS. Claude can perform tasks such as sending files, checking email, and even using the keyboard and mouse to interact with the computer. However, users can stop Claude from performing a task at any time, and the AI model will always ask for permission beforehand.
The feature has raised security concerns, as giving an AI model control over a computer can leave the machine vulnerable to attack. Experts warn that agentic AI can take major actions quickly and with little warning, and that Claude itself could be hijacked by malicious actors. Anthropic has implemented safeguards to minimise these risks, including automatic scanning for vulnerabilities and prompt injections.
Despite these efforts, Anthropic has warned users about potential errors and advises against using the feature with apps that handle sensitive data. The feature is currently available as a research preview and is limited to computers running macOS.
Anthropic's latest move marks a significant step in the evolution of AI models, but it also raises important questions about security and control. As AI models become more autonomous, it is essential that they are designed with safety and security in mind. Nigerian developers and startups can learn from Anthropic's approach, which highlights the need for robust safeguards and user-consent mechanisms.