Anthropic has introduced a new feature that lets users grant its AI model, Claude, control of their computer to perform tasks. This development has significant implications for people who rely on AI assistants to manage their daily activities. With Claude able to interact with the computer directly, users can delegate tasks such as sending files, checking email, and running tests. However, this increased autonomy also raises security concerns, including the potential for malicious actors to exploit the system.
To use the feature, users must be subscribed to a qualifying plan, and the system automatically scans for vulnerabilities. Despite these safeguards, Anthropic warns that the feature is still in its early stages and may make mistakes. The company advises against using it with apps that handle sensitive data, and some of those apps are disabled by default.
The feature is available to Claude Pro and Claude Max subscribers and is limited to computers running macOS. When combined with Anthropic's Dispatch feature, users can assign tasks to Claude from their phone, for example to prepare a morning briefing or run tests while away.
Claude's computer control feature marks a significant step toward more advanced AI capabilities. While it holds promise for increased productivity, it also underscores the need for robust security measures to prevent potential risks. As AI technology continues to evolve, developers must prioritize user safety and address concerns around vulnerabilities and data protection.