Google has taken a major step toward bridging the gap between artificial intelligence and the physical world with the introduction of two new AI models for robotics. The tech giant introduced Gemini Robotics and Gemini Robotics-ER (embodied reasoning), both built on Gemini 2.0, which Google calls its “most capable” AI system to date.
Unlike traditional generative AI, which produces text and images, these new models translate AI capabilities into physical action commands that control robots. This represents a significant evolution in how AI can interact with and manipulate the physical environment around us.
Google’s Gemini for Robotics
To advance this technology, Google announced a partnership with Texas-based robotics developer Apptronik to “build the next generation of humanoid robots with Gemini 2.0.” Apptronik brings valuable experience from previous collaborations with industry leaders like Nvidia and NASA.
Google has shown its commitment to this partnership by participating in Apptronik’s recent $350 million funding round.
Demonstration videos released by Google showcase Apptronik robots equipped with the new AI models performing routine tasks such as plugging devices into power strips, packing lunchboxes, moving objects, and closing bags, all in response to voice commands. Google has not said when the technology will reach the market.

Google emphasized that useful AI robots will need three attributes: they must be general enough to adapt to varied conditions, interactive enough to understand and respond quickly to instructions or changes in their environment, and dexterous enough to manipulate objects with human-like touch.
The Gemini Robotics-ER model has been specifically designed for roboticists to use as a foundation for training their own models.
Beyond Apptronik, Google is making this technology available to “trusted testers” including Agile Robots, Agility Robotics, Boston Dynamics, and Enchanted Tools.
Google’s AI Robotics Leap: Gemini 2.0 Enters the Physical World
Google’s push into AI-powered robotics comes amid growing industry interest in this field. In November, OpenAI invested in Physical Intelligence, a startup focused on “bringing general-purpose AI into the physical world” through large-scale AI models and algorithms for robots.
Around the same time, OpenAI hired Meta’s former head of Orion augmented reality glasses to lead its robotics and consumer hardware initiatives. Tesla has also entered the humanoid robotics space with its Optimus robot.
Google CEO Sundar Pichai shared his perspective on X (formerly Twitter), stating that the company views “robotics as a helpful testing ground for translating AI advances in the physical world.” He added that these robots will utilize Google’s multimodal AI models to “make changes on the fly + adapt to their surroundings.”
The release marks a milestone in Google’s AI journey: a shift from virtual interfaces toward systems that can meaningfully perceive, engage with, and act on the physical world.
As AI continues to advance, pairing sophisticated models such as Gemini 2.0 with robotics hardware has the potential to deliver breakthroughs in automation, assistance, and human-machine collaboration across numerous industries and everyday applications.