Google I/O: ever more AI, with all the announcements from 7 p.m.

A close-up on Android 13, new artificial-intelligence features and even, unusually, new products: a smartphone and a connected watch. Google holds the opening keynote of its developer conference this evening, starting at 7 p.m. Live coverage and a summary of the announcements will appear here.
It is the first keynote of the season, and not the least. Google I/O opened today on the Google campus in Mountain View. As befits a conference dedicated to developers, software was in the spotlight, and with it, artificial intelligence.

Google had done a little teasing on Android’s Twitter account and had even offered a small pinball game to pass the time while waiting.
After two years, Google I/O has returned to an in-person format, with a packed theater, Sundar Pichai still at the helm, and the same ambition: to organize information and make it easier to access, here or in Ukraine, as Google’s CEO recalled in his introduction. It was also the occasion to announce the arrival of 24 new languages in Google Translate.

Google Maps has also expanded, notably in Africa, with 300 million buildings added to the base map. The mapping data has been made available to everyone.

Google Maps is also gaining an immersive view, and Sundar Pichai promises routes that consume up to 18% less fuel.

Sundar Pichai also announced a whole host of smart features across the company’s ecosystem. On YouTube, the automatic translation of videos now extends to 16 languages, with Ukrainian to be added next month. In Google Docs, it will be possible to generate an automatic TL;DR summary of documents that are too long to read…

The summary function could also soon be applied to video meetings in Meet, which would let you catch up if you join a meeting late. The work is in progress, in any case, according to Google’s CEO. As always, AI is behind these new features…

Sundar Pichai’s teams then got down to business, starting with Google Search, at the heart of the machine.

Google Search, still at the heart of the Google machine

Google’s ambition, going forward, is to let everyone search for anything, anywhere, in any way…

Voice Search, introduced about ten years ago, was a first step and has been a great success with new users. Google Lens accompanied the rise of the smartphone; it is now used eight billion times a month.

Multisearch, introduced last month, lets you start a search and then add criteria as you browse the results to narrow things down. “Near me” is a feature for finding items spotted online at nearby locations. It also works with a tasty-looking dish: you can ask Google to identify the dish and then find a nearby restaurant that serves it. This function relies in particular on the information users contribute to Maps.

Google is making its skin-tone scale open source so that other web and tech players can use it. “The web must be representative of the world,” explained Annie Jean-Baptiste of Google.

Hey Google, towards more natural…

For some products, like the camera-equipped Nest Hub Max, it will be possible to use Look and Talk: the device detects that you are looking at it, you speak to it, it identifies you by your voice (in English, at least) and answers your question.

Quick Phrases are coming to the Pixel 6 and Nest Hub Max to quickly answer a call, set an alarm, turn off a lamp, and so on. The goal, obviously, is for the interaction to take place in natural language.
A year after the first version, Sundar Pichai also introduced LaMDA 2, a new conversational AI. You can hold a “conversation” on a subject, deepen your questions, and read the answers constructed and written by the AI.
The AI Test Kitchen app can also lay out a series of steps, with advice, to follow to complete a project, such as starting a vegetable garden, for example… It is a way to see what “conversational AI can bring to the world,” according to Sundar Pichai.