
Google is reportedly adding AI changes to its Alphabet helper robots

Image Credits: Reuters

Alphabet, Google's parent company, has been working to merge two of its most ambitious research efforts, robotics and AI language understanding, with the goal of building a helper robot that can understand natural-language commands.

A 2019 report from The Verge claimed that Alphabet has been actively developing new robots that can carry out simple tasks, such as fetching drinks or wiping down surfaces, around its offices.

Google to bring AI changes to its Alphabet helper robots 

The project, known as Everyday Robots, is still at an early stage of development, and the robots remain slow and hesitant in operation. However, Google has now given some of the bots a significant upgrade in language understanding, courtesy of its large language model (LLM) PaLM.

Until now, most of these robots could only respond to short, simple instructions, such as "bring me a bottle of water."

However, LLMs like GPT-3 and Google's MUM can parse more oblique commands. In Google's example, you might tell one of these Everyday Robots prototypes, "I spilled my drink, can you help?"

The robot filters such an instruction through an internal list of possible actions and interprets it as, for example, "fetch a sponge from the kitchen to wipe up the spilled drink."

Google has dubbed the resulting system PaLM-SayCan, a name that captures how the model combines the language-understanding skills of LLMs with the "affordance grounding" of its robots, that is, knowing which actions are actually feasible in the current situation.
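The selection logic described above can be sketched as follows. This is a minimal, hypothetical illustration, not Google's actual implementation: the candidate skills and all scores are invented stand-ins for what a real system would obtain from an LLM (how relevant a skill is to the instruction) and a learned affordance model (how feasible the skill is right now).

```python
def select_action(instruction, candidate_skills, llm_score, affordance_score):
    """Pick the skill whose combined score is highest.

    Combined score = LLM relevance to the instruction x affordance
    (feasibility) of the skill in the robot's current context.
    """
    best_skill, best_score = None, float("-inf")
    for skill in candidate_skills:
        score = llm_score(instruction, skill) * affordance_score(skill)
        if score > best_score:
            best_skill, best_score = skill, score
    return best_skill


# Toy, hand-made scores standing in for real model outputs.
llm = {"find a sponge": 0.6, "go to the kitchen": 0.3, "pick up the apple": 0.05}
feasible = {"find a sponge": 0.9, "go to the kitchen": 0.8, "pick up the apple": 0.1}

chosen = select_action(
    "I spilled my drink, can you help?",
    list(llm),
    lambda instr, skill: llm[skill],
    lambda skill: feasible[skill],
)
print(chosen)  # "find a sponge": 0.6 * 0.9 = 0.54 beats the other candidates
```

The key design point is that neither score alone is enough: the LLM might suggest an action the robot cannot perform, while the affordance model alone has no idea what the user asked for. Multiplying the two keeps only actions that are both relevant and feasible.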

After integrating PaLM-SayCan into its robot lineup, Google claims the bots planned correct responses to 101 user instructions with 84 percent accuracy and executed them successfully 74 percent of the time, a substantial jump over what the robots could do before.

Going forward, Google plans to expand the robots' capabilities further, including supporting more instructions, improving how accurately they interpret user requests, and raising execution accuracy.


