Google Gemini API Now Lets You Mix Tools in One Call

Mixing custom function calls with built-in tools like Google Search used to mean extra API calls, extra complexity, and a lot of duct tape. As of March 17, 2026, that’s no longer the case. Google has rolled out a set of Gemini API tooling updates that let developers combine function calling with built-in tools — Search, Maps grounding, code execution — inside a single request. This is a meaningful shift for anyone building agentic applications on top of Gemini 3.

What’s Actually New in the Gemini API Tooling Update

The headline feature is tool composition. Before this update, if you wanted your agent to call a custom function and ground its response in real-time Google Search data, you had to orchestrate that yourself. Now you can declare both in the same API call and let Gemini handle the sequencing. That’s less code, fewer round trips, and a tighter developer experience overall.
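To make the shape of that concrete, here's a minimal sketch of a single request body declaring both a built-in tool and a custom function together. The field names (`tools`, `google_search`, `function_declarations`) follow the Gemini REST API's `generateContent` schema; the function itself (`get_inventory`) and its parameters are hypothetical placeholders, not part of any real API.

```python
# Sketch: one generateContent request body that mixes the built-in
# Google Search tool with a custom function declaration, rather than
# orchestrating two separate calls. Field names follow the Gemini
# REST API; the function is an illustrative placeholder.

def build_mixed_tool_request(user_prompt: str) -> dict:
    """Build one request body declaring both tool types together."""
    return {
        "contents": [
            {"role": "user", "parts": [{"text": user_prompt}]}
        ],
        "tools": [
            # Built-in grounding tool: no configuration required.
            {"google_search": {}},
            # Custom function the model may choose to call.
            {
                "function_declarations": [
                    {
                        "name": "get_inventory",  # hypothetical function
                        "description": "Look up stock for a product SKU.",
                        "parameters": {
                            "type": "object",
                            "properties": {
                                "sku": {"type": "string"}
                            },
                            "required": ["sku"],
                        },
                    }
                ]
            },
        ],
    }

request = build_mixed_tool_request(
    "Is the Pixel 9 in stock near me, and what do reviews say?"
)
```

The point of the composition feature is that Gemini decides the sequencing between these two tools itself; before, that interleaving logic lived in your application code.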

Context Circulation and Why It Matters

There’s also a new context circulation mechanism. In multi-step agentic workflows, keeping track of what the model has already done — which tools it called, what they returned, where the conversation stands — is genuinely hard. Context circulation means the API now passes tool results back through the model’s context window more cleanly, so Gemini can reason over prior tool outputs without you manually managing that state. Think of it as the API doing more of the plumbing work so your code doesn’t have to.
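To see what state is being circulated, here's roughly the bookkeeping a multi-step loop involves: every tool call and its result gets appended back into the running conversation so the model can reason over prior outputs. The part shapes (`functionCall`, `functionResponse`) follow the Gemini REST API; the tool names and values are illustrative, and this is the manual version of the plumbing the update is meant to reduce.

```python
# Sketch of the state a multi-step agent loop has to track manually:
# each tool round trip (the model's call, then our response) is
# appended to the conversation history before the next request.
# Part shapes follow the Gemini REST API; tools and values are
# illustrative placeholders.

def append_tool_turn(history: list, call_name: str, args: dict, result: dict) -> list:
    """Record one tool round trip in the running conversation."""
    # The model's turn: it asked to call a tool.
    history.append({
        "role": "model",
        "parts": [{"functionCall": {"name": call_name, "args": args}}],
    })
    # Our turn: we ran the tool and hand back the result.
    history.append({
        "role": "user",
        "parts": [{"functionResponse": {"name": call_name, "response": result}}],
    })
    return history

history = [{"role": "user", "parts": [{"text": "Plan a lunch stop on my route."}]}]
append_tool_turn(history, "get_route", {"dest": "SFO"}, {"eta_min": 42})
append_tool_turn(history, "find_restaurants", {"near": "route"}, {"top": "Taqueria X"})
# history now carries both tool rounds for the next generateContent call.
```

With more circulation handled API-side, less of this append-and-resend choreography has to live in your code, which is exactly where subtle state bugs tend to hide.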

For developers building anything with more than two or three tool calls in sequence, this is the kind of quality-of-life fix that quietly saves hours of debugging.

Maps Grounding Is the Sleeper Feature Here

The Maps grounding capability for Gemini 3 is worth calling out separately. Developers can now ground responses in real-world geographic data from Google Maps — not just web search results. That opens up a genuinely interesting category of applications: local business agents, logistics assistants, travel planners that actually know what’s near you and what’s currently open.
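A location-aware agent of that kind might declare Maps grounding next to a custom function in one request. This is a hedged sketch only: the `google_maps` tool field and the `toolConfig` location hint follow the naming pattern of the other built-in tools but should be checked against current API docs, and the `book_table` function, place IDs, and coordinates are hypothetical.

```python
# Hedged sketch: a single request grounding in Google Maps data while
# also exposing a custom booking function. The `google_maps` and
# `toolConfig` field names are assumptions modeled on the API's other
# built-in tools; the function and coordinates are placeholders.

def build_maps_grounded_request(prompt: str, lat: float, lng: float) -> dict:
    """Build one request mixing Maps grounding with a custom function."""
    return {
        "contents": [{"role": "user", "parts": [{"text": prompt}]}],
        "tools": [
            {"google_maps": {}},  # built-in Maps grounding (assumed field name)
            {"function_declarations": [{
                "name": "book_table",  # hypothetical custom function
                "description": "Reserve a table at a restaurant.",
                "parameters": {
                    "type": "object",
                    "properties": {"place_id": {"type": "string"}},
                    "required": ["place_id"],
                },
            }]},
        ],
        # Hint the model toward the user's location (illustrative shape).
        "toolConfig": {
            "retrievalConfig": {
                "latLng": {"latitude": lat, "longitude": lng}
            }
        },
    }

req = build_maps_grounded_request(
    "Find an open ramen spot within walking distance.", 37.78, -122.41
)
```

The interesting part is the combination: the model can pull live place data via grounding, then hand your function a concrete target to act on, all in one call.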

Google has been aggressively integrating AI into Maps on the consumer side for a while now. Bringing that grounding capability into the API means developers can start building on the same data layer. I wouldn’t be surprised if this becomes the most-used new feature within a few months, especially for anything consumer-facing with a location component.

How This Stacks Up Against OpenAI’s Approach

This is clearly Google’s answer to what OpenAI has been building on its side. OpenAI turned the Responses API into a full agent runtime not long ago, with built-in tool support and persistent state. The two companies are essentially racing to make their APIs the default foundation for agentic apps.

Here’s the thing: Google has one structural advantage OpenAI doesn’t — first-party access to Search and Maps at the infrastructure level. When Gemini grounds a response in Google Search, it’s not hitting a web scraper or a third-party connector. It’s tapping the actual index. For applications where freshness and accuracy of real-world data matter, that’s a real edge.

OpenAI can offer web search through partnerships and plugins, but it’s a different kind of integration. Developers building apps where real-time grounding is critical should probably be running benchmarks on both right now, because the gap between them is narrowing fast.

Spend Controls Are Already in Place

One practical concern with complex multi-tool agentic calls: costs can spiral quickly if you’re not careful. Google anticipated this — spend caps in AI Studio were added ahead of these tooling updates, which feels intentional. Chaining five tool calls in a single request across Search, Maps, and a custom function could get expensive at scale, and having hard limits baked in before shipping the feature is the right order of operations.

It’s also worth watching how Google’s broader personal intelligence push across Search and Gemini connects to these API capabilities. The consumer features and the developer APIs are clearly being built from the same underlying infrastructure, and the two roadmaps are converging quickly.

For developers already using the Gemini API, testing tool composition on Gemini 3 should be near the top of the backlog this week. For anyone still evaluating which AI API to build on, this update makes Google’s offering meaningfully more competitive for complex, multi-step applications. The next question is how Google prices tool-heavy requests at production scale — that’s where the real adoption calculus happens.