How to Combine Google Search, Google Maps, and Custom Functions in a Single Gemini API Call With Context Circulation, Parallel Tool IDs, and Multi-Step Agentic Chains

In this tutorial, we explore the latest Gemini API tooling updates Google announced in March 2026, specifically the ability to combine built-in tools like Google Search and Google Maps with custom function calls in a single API request. We walk through five hands-on demos that progressively build on each other, starting with the core tool combination feature and ending with a full multi-tool agentic chain. Along the way, we demonstrate how context circulation preserves every tool call and response across turns, enabling the model to reason over prior outputs; how unique tool response IDs let us map parallel function calls to their exact results; and how Grounding with Google Maps brings real-time location data into our applications. We use gemini-3-flash-preview for tool combination features and gemini-2.5-flash for Maps grounding, so everything we build here runs without any billing setup.

import subprocess, sys


subprocess.check_call(
   [sys.executable, "-m", "pip", "install", "-qU", "google-genai"],
   stdout=subprocess.DEVNULL,
   stderr=subprocess.DEVNULL,
)


import getpass, json, textwrap, os, time
from google import genai
from google.genai import types


if "GOOGLE_API_KEY" not in os.environ:
   os.environ["GOOGLE_API_KEY"] = getpass.getpass("Enter your Gemini API key: ")


client = genai.Client(api_key=os.environ["GOOGLE_API_KEY"])


TOOL_COMBO_MODEL = "gemini-3-flash-preview"
MAPS_MODEL       = "gemini-2.5-flash"


DIVIDER = "=" * 72


def heading(title: str):
   print(f"\n{DIVIDER}")
   print(f"  {title}")
   print(DIVIDER)


def wrap(text: str, width: int = 80):
   for line in text.splitlines():
       print(textwrap.fill(line, width=width) if line.strip() else "")


def describe_parts(response):
   parts = response.candidates[0].content.parts
   fc_ids = {}
   for i, part in enumerate(parts):
       prefix = f"   Part {i:2d}:"
       if hasattr(part, "tool_call") and part.tool_call:
           tc = part.tool_call
           print(f"{prefix} [toolCall]        type={tc.tool_type}  id={tc.id}")
       if hasattr(part, "tool_response") and part.tool_response:
           tr = part.tool_response
           print(f"{prefix} [toolResponse]    type={tr.tool_type}  id={tr.id}")
       if hasattr(part, "executable_code") and part.executable_code:
           code = part.executable_code.code[:90].replace("\n", " ↵ ")
           print(f"{prefix} [executableCode]  {code}...")
       if hasattr(part, "code_execution_result") and part.code_execution_result:
           out = (part.code_execution_result.output or "")[:90]
           print(f"{prefix} [codeExecResult]  {out}")
       if hasattr(part, "function_call") and part.function_call:
           fc = part.function_call
           fc_ids[fc.name] = fc.id
           print(f"{prefix} [functionCall]    name={fc.name}  id={fc.id}")
           print(f"              └─ args: {dict(fc.args)}")
       if hasattr(part, "text") and part.text:
           snippet = part.text[:110].replace("\n", " ")
           print(f"{prefix} [text]            {snippet}...")
       if hasattr(part, "thought_signature") and part.thought_signature:
           print(f"              └─ thought_signature present ✓")
   return fc_ids




heading("DEMO 1: Combine Google Search + Custom Function in One Request")


print("""
This demo shows the flagship new feature: passing BOTH a built-in tool
(Google Search) and a custom function declaration in a single API call.


Gemini will:
 Turn 1 → Search the web for real-time info, then request our custom
          function to get weather data.
 Turn 2 → We supply the function response; Gemini synthesizes everything.


Key points:
 • google_search and function_declarations go in the SAME Tool object
 • include_server_side_tool_invocations must be True (on ToolConfig)
 • Return ALL parts (incl. thought_signatures) in subsequent turns
""")


get_weather_func = types.FunctionDeclaration(
   name="getWeather",
   description="Gets the current weather for a requested city.",
   parameters=types.Schema(
       type="OBJECT",
       properties={
           "city": types.Schema(
               type="STRING",
               description="The city and state, e.g. Utqiagvik, Alaska",
           ),
       },
       required=["city"],
   ),
)


print("▶  Turn 1: Sending prompt with Google Search + getWeather tools...\n")


response_1 = client.models.generate_content(
   model=TOOL_COMBO_MODEL,
   contents=(
       "What is the northernmost city in the United States? "
       "What's the weather like there today?"
   ),
   config=types.GenerateContentConfig(
       tools=[
           types.Tool(
               google_search=types.GoogleSearch(),
               function_declarations=[get_weather_func],
           ),
       ],
       tool_config=types.ToolConfig(
           include_server_side_tool_invocations=True,
       ),
   ),
)


print("   Parts returned by the model:\n")
fc_ids = describe_parts(response_1)


function_call_id = fc_ids.get("getWeather")
print(f"\n   ✅ Captured function_call id for getWeather: {function_call_id}")


print("\n▶  Turn 2: Returning function result & requesting final synthesis...\n")


history = [
   types.Content(
       role="user",
       parts=[
           types.Part(
               text=(
                   "What is the northernmost city in the United States? "
                   "What's the weather like there today?"
               )
           )
       ],
   ),
   response_1.candidates[0].content,
   types.Content(
       role="user",
       parts=[
           types.Part(
               function_response=types.FunctionResponse(
                   name="getWeather",
                   response={"response": "Very cold. 22°F / -5.5°C with strong Arctic winds."},
                   id=function_call_id,
               )
           )
       ],
   ),
]


response_2 = client.models.generate_content(
   model=TOOL_COMBO_MODEL,
   contents=history,
   config=types.GenerateContentConfig(
       tools=[
           types.Tool(
               google_search=types.GoogleSearch(),
               function_declarations=[get_weather_func],
           ),
       ],
       tool_config=types.ToolConfig(
           include_server_side_tool_invocations=True,
       ),
   ),
)


print("   ✅ Final synthesized response:\n")
for part in response_2.candidates[0].content.parts:
   if hasattr(part, "text") and part.text:
       wrap(part.text)

We install the Google GenAI SDK, securely capture our API key, and define the helper functions that power the rest of the tutorial. We then demonstrate the flagship tool-combination feature by sending a single request that pairs Google Search with a custom getWeather function, letting Gemini search the web for real-time geographic data and simultaneously request weather information from our custom tool. We complete the two-turn flow by returning our simulated weather response with the matching function call ID and watching Gemini synthesize both data sources into one coherent answer.
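The two-turn history assembly we just performed by hand generalizes into a small helper. The sketch below uses plain dicts that mirror the SDK's Content/Part structure rather than the typed classes, and make_tool_turn_history is a hypothetical name of our own, not part of google-genai:

```python
def make_tool_turn_history(user_prompt, model_content, tool_results):
    """Build the Turn-2 `contents` list: the original prompt, the model's
    Turn-1 content verbatim, then one function_response part per result."""
    response_parts = [
        {"function_response": {"name": name, "id": call_id, "response": result}}
        for (name, call_id, result) in tool_results
    ]
    return [
        {"role": "user", "parts": [{"text": user_prompt}]},
        model_content,  # returned untouched, preserving any thought signatures
        {"role": "user", "parts": response_parts},
    ]


history = make_tool_turn_history(
    "What's the weather in Utqiagvik today?",
    {"role": "model", "parts": [
        {"function_call": {"name": "getWeather", "id": "fc-1",
                           "args": {"city": "Utqiagvik, Alaska"}}}
    ]},
    [("getWeather", "fc-1", {"response": "22°F with strong Arctic winds."})],
)
```

The key invariant is the middle element: the model's Turn-1 content goes back verbatim, so any thought_signature parts it contains survive the round trip.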

heading("DEMO 2: Tool Response IDs for Parallel Function Calls")


print("""
When Gemini makes multiple function calls in one turn, each gets a unique
`id` field. You MUST return each function_response with its matching id
so the model maps results correctly. This is critical for parallel calls.
""")


time.sleep(2)


lookup_inventory = types.FunctionDeclaration(
   name="lookupInventory",
   description="Check product inventory by SKU.",
   parameters=types.Schema(
       type="OBJECT",
       properties={
           "sku": types.Schema(type="STRING", description="Product SKU code"),
       },
       required=["sku"],
   ),
)


get_shipping_estimate = types.FunctionDeclaration(
   name="getShippingEstimate",
   description="Get shipping time estimate for a destination zip code.",
   parameters=types.Schema(
       type="OBJECT",
       properties={
           "zip_code": types.Schema(type="STRING", description="Destination ZIP code"),
           "sku": types.Schema(type="STRING", description="Product SKU"),
       },
       required=["zip_code", "sku"],
   ),
)


print("▶  Turn 1: Asking about product availability + shipping...\n")


resp_parallel = client.models.generate_content(
   model=TOOL_COMBO_MODEL,
   contents=(
       "I want to buy SKU-A100 (wireless headphones). "
       "Is it in stock, and how fast can it ship to ZIP 90210?"
   ),
   config=types.GenerateContentConfig(
       tools=[
           types.Tool(
               function_declarations=[lookup_inventory, get_shipping_estimate],
           ),
       ],
   ),
)


fc_parts = []
for part in resp_parallel.candidates[0].content.parts:
   if hasattr(part, "function_call") and part.function_call:
       fc = part.function_call
       fc_parts.append(fc)
       print(f"   [functionCall] name={fc.name}  id={fc.id}  args={dict(fc.args)}")


print("\n▶  Turn 2: Returning results with matching IDs...\n")


simulated_results = {
   "lookupInventory": {"in_stock": True, "quantity": 342, "warehouse": "Los Angeles"},
   "getShippingEstimate": {"days": 2, "carrier": "FedEx", "cost": "$5.99"},
}


fn_response_parts = []
for fc in fc_parts:
   result = simulated_results.get(fc.name, {"error": "unknown function"})
   fn_response_parts.append(
       types.Part(
           function_response=types.FunctionResponse(
               name=fc.name,
               response=result,
               id=fc.id,
           )
       )
   )
   print(f"   Responding to {fc.name} (id={fc.id}) → {result}")


history_parallel = [
   types.Content(
       role="user",
       parts=[
           types.Part(
               text=(
                   "I want to buy SKU-A100 (wireless headphones). "
                   "Is it in stock, and how fast can it ship to ZIP 90210?"
               )
           )
       ],
   ),
   resp_parallel.candidates[0].content,
   types.Content(role="user", parts=fn_response_parts),
]


resp_parallel_2 = client.models.generate_content(
   model=TOOL_COMBO_MODEL,
   contents=history_parallel,
   config=types.GenerateContentConfig(
       tools=[
           types.Tool(
               function_declarations=[lookup_inventory, get_shipping_estimate],
           ),
       ],
   ),
)


print("\n   ✅ Final answer:\n")
for part in resp_parallel_2.candidates[0].content.parts:
   if hasattr(part, "text") and part.text:
       wrap(part.text)

We declare two custom functions, lookupInventory and getShippingEstimate, and send a prompt that naturally triggers both in a single turn. We observe that Gemini assigns each function call a unique ID, which we carefully match when constructing our simulated responses for inventory availability and shipping speed. We then pass the complete history back to the model and receive a final answer that seamlessly combines both results into a customer-ready response.
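The ID-matching discipline from this demo can be captured in a small dispatcher. The sketch below uses plain dicts in place of the SDK's typed objects, and the handler registry is an illustrative assumption of ours rather than anything google-genai provides:

```python
# Hypothetical handler registry: function name -> local implementation.
HANDLERS = {
    "lookupInventory": lambda args: {"in_stock": True, "quantity": 342},
    "getShippingEstimate": lambda args: {"days": 2, "carrier": "FedEx"},
}


def dispatch_parallel(function_calls):
    """Run each call through its handler, echoing the call's id on the
    response so the model can pair results with requests in any order."""
    parts = []
    for call in function_calls:
        handler = HANDLERS.get(call["name"])
        result = handler(call.get("args", {})) if handler else {"error": "unknown function"}
        parts.append({"function_response": {
            "name": call["name"],
            "id": call["id"],
            "response": result,
        }})
    return parts


# Order of the incoming calls does not matter; the ids keep the mapping intact.
parts = dispatch_parallel([
    {"name": "getShippingEstimate", "id": "fc-2",
     "args": {"zip_code": "90210", "sku": "SKU-A100"}},
    {"name": "lookupInventory", "id": "fc-1", "args": {"sku": "SKU-A100"}},
])
```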

heading("DEMO 3: Grounding with Google Maps — Location-Aware Responses")


print("""
Grounding with Google Maps connects Gemini to real-time Maps data:
places, ratings, hours, reviews, and directions. Pass lat/lng for
hyper-local results. Available on Gemini 2.5 Flash / 2.0 Flash (free).
""")


time.sleep(2)


print("▶  3a: Finding restaurants near a specific location...\n")


maps_response = client.models.generate_content(
   model=MAPS_MODEL,
   contents="What are the best Italian restaurants within a 15-minute walk from here?",
   config=types.GenerateContentConfig(
       tools=[types.Tool(google_maps=types.GoogleMaps())],
       tool_config=types.ToolConfig(
           retrieval_config=types.RetrievalConfig(
               lat_lng=types.LatLng(latitude=34.050481, longitude=-118.248526),
           )
       ),
   ),
)


print("   Generated Response:\n")
wrap(maps_response.text)


if grounding := maps_response.candidates[0].grounding_metadata:
   if grounding.grounding_chunks:
       print(f"\n   {'─' * 50}")
       print("   📍 Google Maps Sources:\n")
       for chunk in grounding.grounding_chunks:
           if hasattr(chunk, "maps") and chunk.maps:
               print(f"   • {chunk.maps.title}")
               print(f"     {chunk.maps.uri}\n")


time.sleep(2)
print(f"\n{'─' * 72}")
print("▶  3b: Asking detailed questions about a specific place...\n")


place_response = client.models.generate_content(
   model=MAPS_MODEL,
   contents="Is there a cafe near the corner of 1st and Main that has outdoor seating?",
   config=types.GenerateContentConfig(
       tools=[types.Tool(google_maps=types.GoogleMaps())],
       tool_config=types.ToolConfig(
           retrieval_config=types.RetrievalConfig(
               lat_lng=types.LatLng(latitude=34.050481, longitude=-118.248526),
           )
       ),
   ),
)


print("   Generated Response:\n")
wrap(place_response.text)


if grounding := place_response.candidates[0].grounding_metadata:
   if grounding.grounding_chunks:
       print("\n   📍 Sources:")
       for chunk in grounding.grounding_chunks:
           if hasattr(chunk, "maps") and chunk.maps:
               print(f"   • {chunk.maps.title} → {chunk.maps.uri}")


time.sleep(2)
print(f"\n{'─' * 72}")
print("▶  3c: Trip planning with the Maps widget token...\n")


trip_response = client.models.generate_content(
   model=MAPS_MODEL,
   contents=(
       "Plan a day in San Francisco for me. I want to see the "
       "Golden Gate Bridge, visit a museum, and have a nice dinner."
   ),
   config=types.GenerateContentConfig(
       tools=[types.Tool(google_maps=types.GoogleMaps(enable_widget=True))],
       tool_config=types.ToolConfig(
           retrieval_config=types.RetrievalConfig(
               lat_lng=types.LatLng(latitude=37.78193, longitude=-122.40476),
           )
       ),
   ),
)


print("   Generated Itinerary:\n")
wrap(trip_response.text)


if grounding := trip_response.candidates[0].grounding_metadata:
   if grounding.grounding_chunks:
       print("\n   📍 Sources:")
       for chunk in grounding.grounding_chunks:
           if hasattr(chunk, "maps") and chunk.maps:
               print(f"   • {chunk.maps.title} → {chunk.maps.uri}")


   widget_token = getattr(grounding, "google_maps_widget_context_token", None)
   if widget_token:
       print(f"\n   🗺  Widget context token received ({len(widget_token)} chars)")
       print(f"   Embed in your frontend with:")
       print(f'   <gmp-place-contextual context-token="{widget_token[:60]}...">')
       print(f'   </gmp-place-contextual>')

We switch to gemini-2.5-flash and enable Grounding with Google Maps to run three location-aware sub-demos back-to-back. We query for nearby Italian restaurants using downtown Los Angeles coordinates, ask a detailed question about outdoor seating at a specific intersection, and generate a full-day San Francisco itinerary complete with grounding sources and a widget context token. We print every Maps source URI and title returned in the grounding metadata, showing how easy it is to build citation-rich, location-aware applications.
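The source-printing loops above repeat the same traversal three times, which suggests a small extraction helper. The sketch below walks a dict shaped like the grounding metadata the demos read (grounding_chunks → maps → title/uri); the exact field names are an assumption to verify against the live response objects:

```python
def maps_sources(grounding_metadata):
    """Collect (title, uri) pairs from Maps-grounded chunks, ignoring
    chunks grounded by other tools such as web search."""
    sources = []
    for chunk in grounding_metadata.get("grounding_chunks", []):
        maps = chunk.get("maps")
        if maps:
            sources.append((maps.get("title"), maps.get("uri")))
    return sources


sources = maps_sources({
    "grounding_chunks": [
        {"maps": {"title": "Trattoria Example", "uri": "https://maps.google.com/?cid=1"}},
        {"web": {"title": "Some blog post"}},  # non-Maps chunk is skipped
    ]
})
```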

heading("DEMO 4: Full Agentic Workflow — Search + Custom Function")


print("""
This combines Google Search grounding with a custom booking function,
all in one request. Context circulation lets the model use Search results
to inform which function to call and with what arguments.


Scenario: "Find a trending restaurant in Austin and book a table."
""")


time.sleep(2)


book_restaurant = types.FunctionDeclaration(
   name="bookRestaurant",
   description="Book a table at a restaurant.",
   parameters=types.Schema(
       type="OBJECT",
       properties={
           "restaurant_name": types.Schema(
               type="STRING", description="Name of the restaurant"
           ),
           "party_size": types.Schema(
               type="INTEGER", description="Number of guests"
           ),
           "date": types.Schema(
               type="STRING", description="Reservation date (YYYY-MM-DD)"
           ),
           "time": types.Schema(
               type="STRING", description="Reservation time (HH:MM)"
           ),
       },
       required=["restaurant_name", "party_size", "date", "time"],
   ),
)


print("▶  Turn 1: Complex multi-tool prompt...\n")


agent_response_1 = client.models.generate_content(
   model=TOOL_COMBO_MODEL,
   contents=(
       "I'm staying at the Driskill Hotel in Austin, TX. "
       "Find me a highly-rated BBQ restaurant nearby that's open tonight, "
       "and book a table for 4 people at 7:30 PM today."
   ),
   config=types.GenerateContentConfig(
       tools=[
           types.Tool(
               google_search=types.GoogleSearch(),
               function_declarations=[book_restaurant],
           ),
       ],
       tool_config=types.ToolConfig(
           include_server_side_tool_invocations=True,
       ),
   ),
)


print("   Returned parts:\n")
fc_ids = describe_parts(agent_response_1)
booking_call_id = fc_ids.get("bookRestaurant")


if booking_call_id:
   print("\n▶  Turn 2: Simulating booking confirmation...\n")


   history_agent = [
       types.Content(
           role="user",
           parts=[
               types.Part(
                   text=(
                       "I'm staying at the Driskill Hotel in Austin, TX. "
                       "Find me a highly-rated BBQ restaurant nearby that's "
                       "open tonight, and book a table for 4 people at 7:30 PM today."
                   )
               )
           ],
       ),
       agent_response_1.candidates[0].content,
       types.Content(
           role="user",
           parts=[
               types.Part(
                   function_response=types.FunctionResponse(
                       name="bookRestaurant",
                       response={
                           "status": "confirmed",
                           "confirmation_number": "BBQ-2026-4821",
                           "message": "Table for 4 confirmed at 7:30 PM tonight.",
                       },
                       id=booking_call_id,
                   )
               )
           ],
       ),
   ]


   agent_response_2 = client.models.generate_content(
       model=TOOL_COMBO_MODEL,
       contents=history_agent,
       config=types.GenerateContentConfig(
           tools=[
               types.Tool(
                   google_search=types.GoogleSearch(),
                   function_declarations=[book_restaurant],
               ),
           ],
           tool_config=types.ToolConfig(
               include_server_side_tool_invocations=True,
           ),
       ),
   )


   print("   ✅ Final agent response:\n")
   for part in agent_response_2.candidates[0].content.parts:
       if hasattr(part, "text") and part.text:
           wrap(part.text)
else:
   print("\n   ℹ  Model did not request bookRestaurant — showing text response:\n")
   for part in agent_response_1.candidates[0].content.parts:
       if hasattr(part, "text") and part.text:
           wrap(part.text)

We combine Google Search with a custom bookRestaurant function to simulate a realistic end-to-end agent scenario set in Austin, Texas. We send a single prompt to Gemini, asking it to find a highly rated BBQ restaurant near the Driskill Hotel and book a table for four. We inspect the returned parts to see how the model first searches the web and then calls our booking function with the details it discovers. We close the loop by supplying a simulated confirmation response and letting Gemini deliver the final reservation summary to the user.
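The manual two-turn pattern we used in every demo extends naturally to an automated loop: call the model, dispatch any function calls it emits, append the responses, and repeat until a turn contains no calls. The sketch below injects the model call as a plain callable so the loop can be exercised without the API; all shapes and names here are illustrative assumptions, not SDK API:

```python
def run_agent(generate, prompt, handlers, max_turns=5):
    """Drive a model/tool loop: stop at the first turn with no function calls."""
    history = [{"role": "user", "parts": [{"text": prompt}]}]
    for _ in range(max_turns):
        content = generate(history)  # one model turn (injected for testability)
        history.append(content)
        calls = [p["function_call"] for p in content["parts"] if "function_call" in p]
        if not calls:
            return content  # no tool requests -> final answer
        history.append({"role": "user", "parts": [
            {"function_response": {
                "name": c["name"],
                "id": c["id"],  # echo the call id so results map correctly
                "response": handlers[c["name"]](c.get("args", {})),
            }}
            for c in calls
        ]})
    raise RuntimeError("agent did not finish within max_turns")


# Exercise the loop with a scripted fake model: one tool turn, then text.
turns = iter([
    {"role": "model", "parts": [
        {"function_call": {"name": "bookRestaurant", "id": "fc-9", "args": {}}}
    ]},
    {"role": "model", "parts": [{"text": "Table booked for 4 at 7:30 PM."}]},
])
final = run_agent(
    lambda history: next(turns),
    "Find BBQ near the Driskill and book a table.",
    {"bookRestaurant": lambda args: {"status": "confirmed"}},
)
```

Injecting `generate` keeps the loop logic independent of the SDK, so the same code can be unit-tested with scripted turns and later wired to client.models.generate_content.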

heading("DEMO 5: Context Circulation — Code Execution + Search + Function")


print("""
Context circulation preserves EVERY tool call and response in the model's
context, so later steps can reference earlier results.  Here we combine:
 • Google Search (look up data)
 • Code Execution (compute something with it)
 • Custom function (save the result)


The model chains these tools autonomously using context from each step.
""")


time.sleep(2)


save_result = types.FunctionDeclaration(
   name="saveAnalysisResult",
   description="Save a computed analysis result to the database.",
   parameters=types.Schema(
       type="OBJECT",
       properties={
           "title": types.Schema(type="STRING", description="Title of the analysis"),
           "summary": types.Schema(type="STRING", description="Summary of findings"),
           "value": types.Schema(type="NUMBER", description="Key numeric result"),
       },
       required=["title", "summary", "value"],
   ),
)


print("▶  Turn 1: Research + compute + save (3-tool chain)...\n")


circ_response = client.models.generate_content(
   model=TOOL_COMBO_MODEL,
   contents=(
       "Search for the current US national debt figure, then use code execution "
       "to calculate the per-capita debt assuming a population of 335 million. "
       "Finally, save the result using the saveAnalysisResult function."
   ),
   config=types.GenerateContentConfig(
       tools=[
           types.Tool(
               google_search=types.GoogleSearch(),
               code_execution=types.ToolCodeExecution(),
               function_declarations=[save_result],
           ),
       ],
       tool_config=types.ToolConfig(
           include_server_side_tool_invocations=True,
       ),
   ),
)


print("   Parts returned (full context circulation chain):\n")
fc_ids = describe_parts(circ_response)
save_call_id = fc_ids.get("saveAnalysisResult")


if save_call_id:
   print("\n▶  Turn 2: Confirming the save operation...\n")


   history_circ = [
       types.Content(
           role="user",
           parts=[
               types.Part(
                   text=(
                       "Search for the current US national debt figure, then use code "
                       "execution to calculate the per-capita debt assuming a population "
                       "of 335 million. Finally, save the result using the "
                       "saveAnalysisResult function."
                   )
               )
           ],
       ),
       circ_response.candidates[0].content,
       types.Content(
           role="user",
           parts=[
               types.Part(
                   function_response=types.FunctionResponse(
                       name="saveAnalysisResult",
                       response={"status": "saved", "record_id": "analysis-001"},
                       id=save_call_id,
                   )
               )
           ],
       ),
   ]


   circ_response_2 = client.models.generate_content(
       model=TOOL_COMBO_MODEL,
       contents=history_circ,
       config=types.GenerateContentConfig(
           tools=[
               types.Tool(
                   google_search=types.GoogleSearch(),
                   code_execution=types.ToolCodeExecution(),
                   function_declarations=[save_result],
               ),
           ],
           tool_config=types.ToolConfig(
               include_server_side_tool_invocations=True,
           ),
       ),
   )


   print("   ✅ Final response:\n")
   for part in circ_response_2.candidates[0].content.parts:
       if hasattr(part, "text") and part.text:
           wrap(part.text)
else:
   print("\n   ℹ  Model completed without requesting saveAnalysisResult.")
   for part in circ_response.candidates[0].content.parts:
       if hasattr(part, "text") and part.text:
           wrap(part.text)




heading("✅ ALL DEMOS COMPLETE")
print("""
  Summary of what you've seen:


  1. Tool Combination   — Google Search + custom functions in one call
  2. Tool Response IDs  — Unique IDs for parallel function call mapping
  3. Maps Grounding     — Location-aware queries with real Maps data
  4. Agentic Workflow   — Search + booking function with context circulation
  5. Context Circulation — Search + Code Execution + custom function chain


  Key API patterns:
  ┌──────────────────────────────────────────────────────────────────┐
  │  tools=[types.Tool(                                             │
  │      google_search=types.GoogleSearch(),                        │
  │      code_execution=types.ToolCodeExecution(),                  │
  │      function_declarations=[my_func],                           │
  │  )]                                                             │
  │                                                                 │
  │  tool_config=types.ToolConfig(                                  │
  │      include_server_side_tool_invocations=True,                 │
  │  )                                                              │
  │                                                                 │
  └──────────────────────────────────────────────────────────────────┘


  Models:
  • Tool combination:  gemini-3-flash-preview (Gemini 3 only)
  • Maps grounding:    gemini-2.5-flash / gemini-2.5-pro / gemini-2.0-flash
  • Both features use the FREE tier with rate limits.


  Docs:
    https://ai.google.dev/gemini-api/docs/tool-combination
    https://ai.google.dev/gemini-api/docs/maps-grounding
""")

We push context circulation to its fullest by chaining three tools (Google Search, Code Execution, and a custom saveAnalysisResult function) in a single request that researches the US national debt, computes the per-capita figure, and saves the output. We inspect the full chain of returned parts (toolCall, toolResponse, executableCode, codeExecutionResult, and functionCall) to see exactly how context flows from one tool to the next within a single generation. We wrap up by confirming the save operation and printing a summary of every key API pattern we have covered across all five demos.
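As a sanity check on the step Demo 5 delegates to code execution, the per-capita division is easy to reproduce locally. The debt figure below is a placeholder we chose for illustration; the live demo fetches the current number via Google Search:

```python
# Local reproduction of Demo 5's code-execution step.
ASSUMED_DEBT_USD = 36_000_000_000_000  # placeholder figure, NOT live data
POPULATION = 335_000_000               # population assumed in the prompt

per_capita = ASSUMED_DEBT_USD / POPULATION
print(f"Per-capita debt: ${per_capita:,.2f}")  # ≈ $107,462.69 for this placeholder
```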

In conclusion, we now have a practical understanding of the key patterns that power agentic workflows in the Gemini API. We see that the include_server_side_tool_invocations flag on ToolConfig is the single switch that unlocks tool combination and context circulation, that returning all parts, including thought_signature fields, verbatim in our conversation history is non-negotiable for multi-turn flows, and that matching every function_response.id to its corresponding function_call.id is what keeps parallel execution reliable. We also see how Maps grounding opens up an entire class of location-aware applications with just a few lines of configuration. From here, we encourage extending these patterns by combining URL Context or File Search with custom functions, wiring real backend APIs in place of our simulated responses, or building conversational agents that chain dozens of tools across many turns.


The post How to Combine Google Search, Google Maps, and Custom Functions in a Single Gemini API Call With Context Circulation, Parallel Tool IDs, and Multi-Step Agentic Chains appeared first on MarkTechPost.