
Google, Boston Dynamics Teach Robot Dog ‘Spot’ to See, Think, and Act

Apr 19, 2026  Twila Rosenbaum

In a remarkable collaboration, Google DeepMind and Boston Dynamics have significantly advanced the capabilities of Spot, the well-known yellow robotic dog. This enhancement is made possible through the integration of the new Gemini Robotics-ER 1.6 model, which moves beyond simple scripted programming towards what the companies describe as 'embodied reasoning'. This innovation enables Spot to autonomously assess and respond to its surroundings, whether navigating a cluttered room or a complex factory floor.

The Gemini Robotics-ER 1.6 model is designed as a 'reasoning-first' system. It empowers robots to interpret physical environments and data, such as accurately reading a needle on a pressure gauge. This development bridges the gap between digital artificial intelligence and physical interaction, allowing Spot to make informed decisions based on real-time observations.

One standout feature of the new model is agentic vision. Spot can now zoom in on fine details, estimate measurements by generating and running code, and draw on world knowledge to better understand its surroundings. This capability is especially valuable in industrial settings, where Spot can monitor analog instruments or verify the status of chemical sight glasses.

Marco da Silva, Vice President and General Manager of Spot at Boston Dynamics, remarked, 'Capabilities like instrument reading and improved task reasoning will enable Spot to see, understand, and react to real-world challenges autonomously.' This statement underscores the profound impact of the new technology on Spot's operational effectiveness.

Furthermore, the introduction of 'Transparent Reasoning' allows users to gain insight into the decision-making process of the robot. For the first time, operators can observe the rationale behind Spot's actions, alleviating concerns about the AI functioning as a 'black box'.

While Spot's primary applications remain in heavy industries, such as detecting gas leaks, counting pallets, and identifying spills, a recent demonstration showcased the robot performing tasks that resemble human behavior. Spot was observed reading a handwritten to-do list, organizing footwear, and even walking a real dog on a leash, highlighting its versatility and adaptability.

Addressing Safety and Interaction Challenges

In addition to enhancing Spot's intelligence, Google has implemented a safety benchmark known as ASIMOV. This initiative aims to prevent the robot from making potentially dangerous mistakes, such as placing a glass of water precariously close to the edge of a table. Safety remains a critical aspect of the robot's functionality, ensuring that it can operate effectively in various environments without posing risks.

Despite these advancements, a significant challenge persists: the issue of tactile interaction. Most AI systems, including Spot, are primarily trained on data available on the internet, which predominantly consists of visual and textual information. As a result, Spot currently relies heavily on its camera systems to interpret how to engage with different objects, leading to limitations in its tactile responses.

Availability of the New Model

The Gemini Robotics-ER 1.6 model is now accessible to developers through the Gemini API and Google AI Studio. Google DeepMind has also provided a developer Colab that includes examples for configuring the model and prompting it for embodied reasoning tasks, facilitating a smoother integration process for developers.
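As a rough illustration of the kind of workflow the developer Colab covers, the sketch below shows how a pointing-style response from an embodied-reasoning prompt might be handled. The model identifier, the exact prompt, and the response format (a JSON list of `[y, x]` points normalized to a 0-1000 scale, as described in Google's documentation for earlier Gemini Robotics-ER previews) are assumptions here, not details confirmed by this article.

```python
import json

def parse_points(response_text, image_width, image_height):
    """Convert the model's normalized [y, x] points (0-1000 scale,
    an assumed format based on earlier Gemini Robotics-ER previews)
    into pixel coordinates for the source image."""
    points = json.loads(response_text)
    results = []
    for item in points:
        y, x = item["point"]  # normalized [y, x], 0-1000
        results.append({
            "label": item["label"],
            "x_px": round(x / 1000 * image_width),
            "y_px": round(y / 1000 * image_height),
        })
    return results

# In practice the reply would come from a call such as
# client.models.generate_content(model=..., contents=[image, prompt])
# via the google-genai SDK; the reply below is a hypothetical example
# for a prompt like "Point to the pressure gauge."
reply = '[{"point": [480, 250], "label": "pressure gauge"}]'
print(parse_points(reply, image_width=1280, image_height=720))
```

Keeping the model's output in normalized coordinates and converting to pixels client-side means the same response works regardless of the resolution the image was captured at.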

For customers of Boston Dynamics, the transition to the Gemini-powered AIVI-Learning model has been officially rolled out for all enrolled clients as of April 8, 2026. This transition marks a significant step forward in utilizing advanced AI capabilities in robotic applications.

In closing, the enhancements made to Spot not only represent a significant technological leap but also open new avenues for the application of robotics in both industrial and everyday settings. As development continues, the potential for robots like Spot to perform increasingly complex tasks autonomously seems boundless.


Source: eWEEK News

