Google is launching its long-awaited new flagship AI model, Gemini 2.0 Pro Experimental, as part of a series of model launches that also includes a "reasoning" model and a more economical one. The company announced the news on its blog.
Google says it has received positive feedback from developers who were the first to test the next generation of the Gemini Pro line. It is positioned as the company's best AI model for coding and complex prompts.
Gemini 2.0 Pro Experimental can process 2 million tokens in a single context window, which is roughly 1.5 million words. For example, it could take in all seven Harry Potter books in a single query and still have about 400,000 words to spare, according to TechCrunch. The model can also use tools such as Google Search.
Gemini 2.0 Pro is available as an experimental model for developers in Google AI Studio and Vertex AI, as well as for Gemini Advanced users. As a reminder, Google accidentally announced 2.0 Pro a week ago, but later removed all references to it.
In addition, the company is releasing the new 2.0 Flash-Lite, a more economical model with a context window of 1 million tokens. It is available in public preview in Google AI Studio and Vertex AI.
Among other things, 2.0 Flash, with a context window of 1 million tokens, will also be made generally available via the Gemini API in Google AI Studio and Vertex AI.
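For developers, trying one of these models through the Gemini API comes down to picking a model identifier. Below is a minimal sketch using the google-generativeai Python SDK; the model name "gemini-2.0-flash" and the prompt are illustrative assumptions, not identifiers confirmed in the announcement.

```python
# Minimal sketch: querying a Gemini 2.0 Flash model via the Gemini API.
# Assumes the google-generativeai SDK is installed (pip install google-generativeai)
# and that "gemini-2.0-flash" is the served model identifier (assumption).
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # API key from Google AI Studio

# Select the model; swap in another identifier (e.g. a Flash-Lite or
# Pro Experimental name) once it appears in your model list.
model = genai.GenerativeModel("gemini-2.0-flash")

response = model.generate_content("Summarize this article in two sentences.")
print(response.text)
```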
Finally, the 2.0 Flash Thinking Experimental model will be added to the Gemini app and will be available in the model menu on desktop and mobile.