Google Tests Remy AI Agent for Gemini as Focus Shifts to User Control

Google is testing a new AI agent named Remy for its Gemini platform, according to a report from Business Insider. The tool is designed to help users with work and daily tasks by acting on their behalf.

Remy is currently being tested only by Google employees, through a staff-only version of the Gemini app. An internal description obtained by Business Insider frames Remy as a “24/7 personal agent” intended to transform Gemini into an assistant that can take autonomous actions on the user’s behalf.

The report cited an internal document and two people familiar with the project. Google declined to comment on the development. No timeline or public release date has been announced, nor has Google identified which services are part of the employee test.

Expanding Gemini Beyond Chat

Remy is part of Google’s broader effort to expand Gemini beyond chat-based interactions. Google already offers agent-related features such as Agent Mode, though access varies by subscription tier and region. Remy is described internally as more advanced than these existing features.

According to the report, Remy is designed to integrate across Google services, monitor user-relevant data, handle complex tasks, and learn user preferences over time. This preference-learning capability makes memory controls and user privacy especially important.

Connected Services and Privacy Controls

Google’s Gemini support documentation shows the current scope of connected services. These include Google Workspace apps such as Gmail, Calendar, Docs, Drive, Keep, and Tasks. Third-party services like GitHub, Spotify, YouTube Music, Google Photos, WhatsApp, Google Home, and Android utilities are also included.

Google’s Gemini Privacy Hub provides context on how the assistant works with connected services. Users can review and delete Gemini Apps Activity, adjust auto delete settings, and control whether data is used to improve Google AI. Users can also manage access to third party apps and saved information.

Existing Gemini documentation outlines actions with varying levels of user impact: retrieving information from Workspace apps, creating calendar events, sending messages, opening apps, and controlling device or smart home functions.

Governance and Autonomy Questions

Google Research has stated that AI agents should have well-defined human controllers, carefully limited powers, observable actions, and the ability to plan. Google Cloud guidance emphasizes that agent activities should be transparent and auditable through logging and clear action documentation. It also recommends limiting agent powers according to the principle of least privilege, aligned with the intended purpose and the user’s risk tolerance.

The report did not provide technical details on Remy’s architecture, the underlying model version, or the level of autonomy being tested. It remains unclear whether Remy can act independently without user confirmation, or how it handles approvals and logs completed actions.

Google describes Remy as a dogfooding project, a term used in technology companies for employees testing products before broader release. The report compared Remy’s concept to OpenClaw, an AI agent that could autonomously reply to messages and conduct research. OpenClaw’s creator was reportedly hired by OpenAI CEO Sam Altman in February.

Google DeepMind CEO Demis Hassabis has previously discussed the goal of building a digital assistant. However, Google has not confirmed whether Remy will become a public Gemini feature.

As of now, no official timeline for public availability has been provided. Further developments will likely depend on internal testing outcomes and Google’s ongoing governance framework for agentic AI systems.
