At the end of I/O, Google's annual developer conference at the Shoreline Amphitheatre in Mountain View, Google CEO Sundar Pichai revealed that the company had said "AI" 121 times. That, essentially, was the crux of Google's two-hour keynote: stuffing AI into every Google app and service used by more than two billion people around the world. Here are all the major updates that Google announced at the event.
Gemini 1.5 Flash and updates to Gemini 1.5 Pro
Google announced a brand new AI model called Gemini 1.5 Flash, which it says is optimized for speed and efficiency. Flash sits between Gemini 1.5 Pro and Gemini 1.5 Nano, the company's smallest model, which runs locally on device. Google said it created Flash because developers wanted a lighter and cheaper model than Gemini Pro to build AI-powered apps and services, while keeping some of the things that differentiate Gemini Pro from competing models, such as its long context window of one million tokens. Later this year, Google will double Gemini's context window to two million tokens, which means it will be able to process two hours of video, 22 hours of audio, more than 60,000 lines of code, or more than 1.4 million words at the same time.
Project Astra
Google showed off Project Astra, an early version of a universal AI-powered assistant that Google DeepMind CEO Demis Hassabis said was Google's version of an AI agent "that can be helpful in everyday life."
In a video that Google says was shot in a single take, an Astra user moves around Google's London office holding up their phone and pointing the camera at various things (a speaker, some code on a whiteboard, and out a window) while having a natural conversation with the app about what it sees. In one of the video's most impressive moments, the assistant correctly tells the user where she left her glasses earlier, without the user ever having brought up the glasses.
The video ends with a twist: when the user finds and puts on the missing glasses, we learn that they have an onboard camera system and can use Project Astra to seamlessly carry on a conversation with the user, perhaps indicating that Google might be working on a competitor to Meta's Ray-Ban smart glasses.
Ask Google Photos
Google Photos was already clever when it came to searching for specific images or videos, but with AI, Google is taking things to the next level. If you're a Google One subscriber in the US, you will be able to ask Google Photos a complex question like "show me the best photo from each national park I've visited" when the feature rolls out over the next few months. Google Photos will use GPS data, as well as its own judgment of what is "best," to present you with options. You can also ask Google Photos to generate captions for posting the photos to social media.
Veo and Imagen 3
Google's new AI-powered media creation engines are called Veo and Imagen 3. Veo is Google's answer to OpenAI's Sora. It can produce "high-quality" 1080p videos that can last "beyond a minute," Google said, and can understand cinematic concepts like a timelapse.
Imagen 3, meanwhile, is a text-to-image generator that Google claims handles text better than its previous version, Imagen 2. The result is the company's "highest quality" text-to-image model, with an "incredible level of detail" for "photorealistic, lifelike images" and fewer artifacts, essentially pitting it against OpenAI's DALL-E 3.
Big updates to Google Search
Google is making big changes to how Search fundamentally works. Most of the updates announced today, like the ability to ask really complex questions ("Find the best yoga or pilates studios in Boston and show details on their intro offers and walking time from Beacon Hill.") and the ability to use Search to plan meals and vacations, won't be available unless you opt in to Search Labs, the company's platform that lets people try out experimental features.
But a big new feature that Google calls AI Overviews, which the company has been testing for a year now, is finally rolling out to millions of people in the US. Google Search will now present AI-generated answers on top of the results by default, and the company says it will bring the feature to more than a billion users around the world by the end of the year.
Gemini on Android
Google is integrating Gemini directly into Android. When Android 15 releases later this year, Gemini will be aware of the app, image, or video that you're running, and you'll be able to pull it up as an overlay and ask it context-specific questions. Where does that leave Google Assistant, which already does this? Who knows! Google didn't bring it up at all during today's keynote.
There were a bunch of other updates too. Google said it would add digital watermarks to AI-generated video and text, make Gemini accessible from the side panel in Gmail and Docs, power a virtual AI teammate in Workspace, listen in on phone calls and detect in real time whether you're being scammed, and lots more.
Catch up on all of the news from Google I/O 2024 right here!