Google globally launches enhanced Gemini 2.0 Flash model

Google announced several updates coming to its Gemini large language model (LLM) on December 11th. Some of the updates will be globally available and integrated into select Google products, while others are rolling out to trusted testers or being used in experimental products that could shape future features.

The announcements include a new experimental Gemini 2.0 Flash model rolling out globally in select products, as well as several experimental tools heading to an expanded trusted tester program, including one that can browse the web and complete tasks for users.

Starting with Gemini 2.0 Flash, Google claims it’s two times faster than the Gemini 1.5 Pro model while offering better performance. Google described Gemini 2.0 as the AI model for the “agentic era.” 2.0 Flash is multimodal, can generate images, and supports a variety of voice options, with the ability for users to ask it to speak slower or faster. Google says you can even ask it to speak a certain way, such as like a pirate.
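For developers who want to try the model directly, Google also exposes Gemini models through its public Gemini API. Below is a minimal sketch using the google-generativeai Python SDK; note that the model ID "gemini-2.0-flash-exp" reflects the experimental release and is an assumption that may change when the model reaches general availability.

```python
# A minimal sketch of calling Gemini 2.0 Flash through Google's
# generative AI Python SDK. The experimental model ID
# "gemini-2.0-flash-exp" is assumed here; swap it out if Google
# renames the model at general availability.
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # placeholder credential

model = genai.GenerativeModel("gemini-2.0-flash-exp")

# Text-only request; the same call also accepts images and audio
# as parts of a multimodal prompt.
response = model.generate_content(
    "Explain what an 'agentic' AI model is in two sentences."
)
print(response.text)
```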

Starting on December 11th, Gemini 2.0 Flash will be globally available to Gemini and Gemini Advanced users on desktop and the mobile web. Google will also add the model to AI Overviews in Search. AI Overviews recently expanded to Canada, but the feature has been controversial because of inaccuracies — for example, it garnered a reputation for telling people to do things like putting glue on pizza — and also because of its impact on publishers who rely on web traffic from Google. AI Overviews pull information from websites and repackage it directly in Google Search, meaning people don’t need to leave the Search page to find answers to some queries. Google continues to claim that links included in AI Overviews drive traffic to sites, but that doesn’t line up with the traffic drops publishers saw after the feature was implemented.

Google said Gemini 2.0 Flash will become generally available in January alongside more model sizes. It will also come “soon” to the Gemini mobile app.

Project Mariner can complete tasks for users

Next up, Google went over several updates coming to various Gemini-related experimental projects, including Astra and Mariner.

Project Astra is Google’s experiment testing potential future applications of AI assistants and, while it isn’t available to the public, features that Google tests in Astra may eventually come to public-facing Google apps and services. With that in mind, it’s worth paying some attention to what Google does with Astra, but I’d also strongly recommend taking these announcements with a healthy dose of skepticism.

In previous Astra demos, Google showed people having real-time conversations with the AI through their Pixel smartphones. Astra can also ingest visual information through the phone’s camera, letting users ask questions about and interact with the world around them. In the latest demo Google showed press ahead of its Wednesday announcement, this was very much still the case, but there were some notable additions. For one, Google says it updated Astra with a native audio model that handles multiple languages better, even when they’re mixed together. The search giant also connected Astra to Google apps and services to aid with information retrieval and gave it up to 10 minutes of session memory.

Google shared a video demo of Astra (which, again, should be approached with skepticism, since Google has been known to fudge things in its AI video demos before) that depicted someone using Astra on their Pixel 9 Pro to help with day-to-day tasks. The demo included the person asking Astra to look up information in their email, remember details like an apartment door code and recall it later, and answer queries about the world. That last one included pointing the phone camera at a bus and asking if the bus would take them to a specific location; Astra then checked the bus route to see if it went near that location. At the end of the video, the demo switched to prototype smart glasses, a la Ray-Ban’s smart glasses. Perhaps this means we’ll see AI-infused smart glasses from Google in the future. Google Glass 2, anyone?

Google also detailed Project Mariner, an experimental Chrome extension that allows AI to complete web actions for users. It will only be available to trusted testers for now, but Google demoed how Project Mariner works. With the extension installed, users can pull up a sidebar and type instructions for Project Mariner to carry out. As it’s currently set up, Project Mariner only works in the active browser tab, and users can watch it complete tasks in real time. A demo video showed a user asking it to remember a list of company names and then search for contact info for those companies; Project Mariner then navigated to each company’s website and looked for contact information. While it works, users can see the AI’s reasoning appear in the sidebar.

On one hand, it was neat to see the AI carry out tasks like this. On the other, I can’t help but feel that Project Mariner won’t really save anyone much time if they have to sit and watch it carry out the instructions instead of letting the AI work in the background while they do something else in another tab. Google said that users would need to stop or pause Mariner if they wanted to do something else and could resume it later.

Also of note, Google said that Mariner could take any action on the internet that a user could take, but Google set exceptions for tasks it felt should be handled by a person instead. For example, Mariner could theoretically complete shopping tasks like finding products and filling an online cart, but it wouldn’t be able to complete the transaction without the user stepping in.

Other updates

Google shared a few other small updates with press, including that its Jules code generation tool is now available to trusted testers.

It also demoed a Gemini for Games tool that, much like Microsoft’s Copilot gaming demo, allowed players to ask questions about the game they were currently playing. Gemini could respond using the context of what was on screen as well as by pulling in external information. The demo showed people playing Clash of Clans and asking Gemini about the current meta and strategies to help them win. Another player asked Gemini to remember the steps for their daily quests and then asked it to recall that information later in the play session.
