
Google Cloud Next: Gemini 2.5 Flash, new Workspace tools, and agentic AI take center stage

by Jacob Langdon


To no one’s surprise, there have been a lot of AI-related announcements at the Google Cloud Next event. Even less surprising: Google’s annual cloud computing conference has focused on new versions of its flagship Gemini model and advances in AI agents.

So, for those following the whiplash competition between AI heavy hitters like Google and OpenAI, let’s unpack the latest Gemini updates.

On Wednesday, Google announced Gemini 2.5 Flash, a “workhorse” that has been adapted from its most advanced Gemini 2.5 Pro model. Gemini 2.5 Flash has the same build as 2.5 Pro but has been optimized to be faster and cheaper to run. The model’s speed and cost-efficiency are possible because of its ability to adjust or “budget” its processing power based on the desired task. This concept, known as “test-time compute,” is the emerging technique that reportedly made DeepSeek’s R1 model so cheap to train.

Gemini 2.5 Flash isn’t available just yet, but it’s coming soon to Vertex AI, AI Studio, and the standalone Gemini app. On a related note, Gemini 2.5 Pro is now available in public preview on Vertex AI and the Gemini app. This is the model that has recently topped the leaderboards in the Chatbot Arena.


Google is also bringing these models to Google Workspace for new productivity-related AI features. That includes the ability to create audio versions of Google Docs, automated data analysis in Google Sheets, and something called Google Workspace Flows, a way of automating manual workflows like managing customer service requests across Workspace apps.

Agentic AI, a more advanced form of AI that reasons across multiple steps, is the main driver of the new Google Workspace features. But accessing the requisite data to perform those tasks remains a challenge for any model. Yesterday, Google announced that it's adopting the Model Context Protocol (MCP), an open-source standard developed by Anthropic that enables "secure, two-way connections between [developers'] data sources and AI-powered tools," as Anthropic explained.

“Developers can either expose their data through MCP servers or build AI applications (MCP clients) that connect to these servers,” read a 2024 Anthropic announcement describing how it works. Now, according to Google DeepMind CEO Demis Hassabis, Google is adopting MCP for its Gemini models.

This will effectively allow Gemini models to quickly access the data they need, producing more reliable responses. Notably, OpenAI has also adopted MCP.

And that was just the first day of Google Cloud Next. Day two will likely bring even more announcements, so stay tuned.




