Google has unveiled experimental Gemini 2.0 models, including a new Pro version and a lower-cost model positioned to compete with DeepSeek.
Google has just announced a series of major updates to its Gemini 2.0 artificial intelligence model, introducing new variants designed to be more powerful and accessible. These developments aim to improve speed, reduce costs, and integrate advanced reasoning capabilities. This strategic announcement comes alongside a colossal investment in AI, confirming Google’s desire to strengthen its position against OpenAI and other emerging competitors like DeepSeek.
Gemini 2.0: Flash Thinking and Pro, two promising experimental versions
Google recently introduced two new experimental versions of its Gemini 2.0 AI model: Flash Thinking and Pro. The version fully named Gemini 2.0 Flash Thinking Experimental is designed to improve the model’s reasoning by breaking complex queries down into successive steps, which, Google says, should produce more precise and detailed answers. The company hopes this will strengthen both the understanding and the explainability of the AI’s responses. Flash Thinking also “can interact with applications such as YouTube, Search, and Google Maps.”
Gemini 2.0 Flash Thinking Experimental shows its thought process, so you can see why it responded a certain way, what its assumptions were, and trace the model’s line of reasoning.
The Pro Experimental version, meanwhile, is optimized for increased performance in complex tasks such as programming and mathematics. It is expected to offer greater factual accuracy and the ability to handle larger contexts, with a context window of up to 2 million tokens. Both versions are currently available in preview for developers via Google AI Studio and Vertex AI, as well as for Advanced users of the Gemini app.
Gemini 2.0 Flash-Lite, a competitor for DeepSeek?
In response to growing concerns about the costs associated with AI models, Google has also launched Gemini 2.0 Flash-Lite, a stripped-down, more cost-effective version of its model. The company says Flash-Lite matches 1.5 Flash on cost and speed while improving quality across most benchmarks.
The presentation reads almost too perfectly as a response to DeepSeek and to OpenAI with its o3-mini model. According to Reuters, Flash-Lite costs $0.019 per million tokens to process “certain inputs,” while DeepSeek’s equivalent is estimated at $0.014, compared to $0.075 for OpenAI’s 4o-mini model, unveiled in the summer of 2024. For the moment, Flash-Lite is available in Google AI Studio and Vertex AI in “a preliminary version.”
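To put those per-million-token input prices in perspective, a few lines of Python can compare what a given workload would cost on each model. This is only an illustrative sketch based on the figures cited above; real billing also depends on output tokens, model tiers, and current price lists.

```python
# Per-million-token input prices (USD) as cited in the article.
PRICE_PER_MILLION_TOKENS = {
    "gemini-2.0-flash-lite": 0.019,
    "deepseek": 0.014,
    "gpt-4o-mini": 0.075,
}

def input_cost(model: str, tokens: int) -> float:
    """Return the input-processing cost in dollars for a given token count."""
    return PRICE_PER_MILLION_TOKENS[model] * tokens / 1_000_000

# Cost of processing 10 million input tokens with each model:
for model, price in PRICE_PER_MILLION_TOKENS.items():
    print(f"{model}: ${input_cost(model, 10_000_000):.2f}")
# gemini-2.0-flash-lite: $0.19
# deepseek: $0.14
# gpt-4o-mini: $0.75
```

At this scale the absolute differences are small, but they compound quickly for high-volume workloads, which is precisely the market Flash-Lite appears to target.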
A new update for the Gemini 2.0 Flash model
Finally, Google has rolled out an update to its Gemini 2.0 Flash model. This AI benefits from “improved performance” and, most importantly, is becoming more widely available: it is now integrated into Gemini’s web and mobile apps, including enterprise accounts, alongside versions 1.5 Flash and 1.5 Pro. Image generation, updated to the latest version of Imagen 3, and speech synthesis should both be “available soon,” Google explains without specifying a date.
Google goes all out on AI with $75 billion in investments
To support these developments, Google plans to invest $75 billion in AI development in 2025, more than double the $32.3 billion invested in 2023. This dramatic increase reflects the firm’s desire to strengthen its infrastructure and accelerate research into generative AI.
Faced with increased competition, particularly from OpenAI and new Chinese players, this massive investment aims to ensure Google’s dominant position in the market. It should thus fund the improvement of existing models, the development of new features, and the expansion of computing capacity to meet growing demand.