Google is bringing Gemini to your car with Android Auto

Google is bringing Gemini, its generative AI, to all cars that support Android Auto in the coming months, the company announced at its Android Show ahead of the I/O developer conference.

The company says in a blog post that adding Gemini functionality to Android Auto, and later this year to cars that run Google's built-in operating system, will make driving "more productive and fun."

"This will really be, we think, one of the largest transformations in the in-vehicle experience that we've seen in a long time," said Patrick Brady, vice president of Android for Cars, during a virtual briefing with members of the media ahead of the conference.

Gemini will come to the Android Auto experience in two main ways.

Gemini will act as a much more powerful smart voice assistant. Drivers (or passengers; Brady said there will not be voice matching tied to whoever owns the phone running the Android Auto experience) will be able to ask Gemini to send messages, play music, and do essentially everything Google Assistant could already do. The difference is that users won't have to be so robotic with their commands, thanks to Gemini's natural language capabilities.

Gemini can also "remember" things, such as whether a contact prefers to receive text messages in a particular language, and handle that translation for the user. And Google says Gemini will be able to perform one of the most commonly trotted-out in-car tech demos: finding good restaurants along a planned route. Naturally, Brady said Gemini will draw on Google's listings and reviews to respond to more specific requests (such as "taco places with vegan options").

The other main way Gemini will show up is with what Google calls "Gemini Live," an option where the digital AI is essentially always listening and ready to engage in full conversations about … whatever. Brady said those conversations could be about anything, from travel ideas for spring break, to brainstorming recipes a 10-year-old would like, to "Roman history."

If all of that sounds a little distracting, Brady said Google believes it won't be. He said the natural language capabilities will make it easier to ask Android Auto to perform specific tasks with less confusion, and that Gemini will therefore "reduce the cognitive load."

That's a bold claim to make at a time when people are loudly asking automakers to move away from touchscreens and bring back knobs and physical buttons, a request many of those companies are starting to oblige.

There is still a lot to work out. For now, Gemini will rely on Google's cloud processing to operate both in Android Auto and in cars with Google built in. But Brady said Google is working with automakers "to build in more compute so that [Gemini] can run at the edge," which would help not only with performance but also with reliability, a challenging factor in a moving vehicle that may be hopping between cell towers every few minutes.

Modern cars also generate a lot of data from onboard sensors and, on some models, even interior and exterior cameras. Brady said Google has "nothing to announce" about whether Gemini could leverage that multimodal data, and that "we talked about it."

"We certainly think that, as cars have more and more cameras, there are some really interesting use cases in the future here," he said.

Gemini on Android Auto and Google built-in will roll out to all countries that already have access to the company's generative AI model, and will support more than 40 languages.

