Google Gemini models for ADK agents¶
There are two primary APIs for accessing the Gemini model family: Vertex AI API (available via Google Cloud Console) and the Gemini API (available via Google AI Studio). While both provide access to the same state-of-the-art models, the choice between them depends on your specific security requirements, development phase, and deployment environment.
ADK Integration¶
The ADK does not utilize a proprietary API; instead, it acts as a robust abstraction layer that supports accessing Gemini models through both Vertex AI and the Gemini API. By providing a unified interface, the ADK enables you to easily integrate advanced Gemini features, including:
- Code Execution: Run generated code in a secure environment.
- Google Search: Ground model responses with real-time web results.
- Context Caching: Optimize performance and cost for long-context prompts.
- Computer Use: Enable models to interact with digital interfaces.
- Interactions API: Manage complex conversational flows.
Choosing Vertex AI API or Gemini API¶
When to use Vertex AI¶
Vertex AI is a Google Cloud-managed machine learning platform that provides a unified environment for the entire AI lifecycle. It allows developers, data scientists, and ML engineers to access the latest Gemini models while seamlessly integrating with other Vertex AI services and the broader Google Cloud ecosystem.
Find available models in the Vertex AI documentation.
Consider using Vertex AI in the following cases:
- Existing Google Cloud Infrastructure: Ideal for teams already leveraging Google Cloud services. Vertex AI uses the same IAM authentication as other GCP services, providing a seamless experience if you are already using the Google Cloud Console.
- Enterprise Security: For teams that need to implement industry-leading security practices, including VPC Service Controls, Customer-Managed Encryption Keys (CMEK), and Workload Identity Federation to eliminate long-lived secrets.
- Production Reliability: For applications that require financially backed SLAs (99.9%+), 24/7 technical support, and Provisioned Throughput (PT) to eliminate 429 rate-limit errors.
- GCP Ecosystem Integration: Best for use cases requiring deep, native interaction with services like Agent Engine, BigQuery, Cloud Storage, and GKE.
- Compliance & Governance: For workloads that must adhere to regulatory standards such as HIPAA, SOC 2, ISO, or FedRAMP, or those requiring strict data residency in specific regions.
- Advanced MLOps: Optimized for teams needing managed tools for model monitoring, evaluation, and fine-tuning, or those deploying third-party and open-source models alongside Gemini.
When to use Gemini API (Google AI Studio)¶
The Gemini API, accessible through Google AI Studio, is an interface dedicated to generative AI models, designed for rapid development and application building. It allows developers to quickly prototype and deploy applications using Gemini models with minimal setup.
It is the ideal choice for developers, startups, and businesses that prioritize speed and agility, offering a straightforward path to production without the initial need for complex cloud infrastructure or enterprise-level governance.
Find all available models on the Google AI for Developers site.
Consider using the Gemini API in the following cases:
- Speed of Implementation: Start building in minutes using a simple API key, avoiding the overhead of complex project configurations.
- Agile Prototyping: Optimized for a "build, share, and deploy" workflow, perfect for iterative testing of prompts and model capabilities.
- Low-Friction Scaling: Features a generous free tier for experimentation and a clear pay-as-you-go model for production.
- Standalone Applications: Best for projects that do not require deep integration with the broader Google Cloud enterprise security stack.
Gemini model authentication¶
A primary difference between Vertex AI and the Gemini API is the authentication mechanism. Vertex AI leverages standard Google Cloud authentication methods, including Application Default Credentials (ADC) for local development, while the Gemini API uses API keys directly. See Choosing Vertex AI API or Gemini API above for guidance on selecting between the two.
This section explains how to authenticate for local development using both platforms.
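To make the difference concrete, the two backends are typically selected through environment variables. The sketch below is a simplified, illustrative reading of that selection logic, not the ADK's actual routing code; the variable names (`GOOGLE_GENAI_USE_VERTEXAI`, `GOOGLE_CLOUD_PROJECT`, `GOOGLE_CLOUD_LOCATION`, `GOOGLE_API_KEY`) follow the ADK convention described later in this page:

```python
def select_backend(env: dict) -> str:
    """Simplified illustration of how env vars select a Gemini backend.

    Illustrative only -- not the ADK's actual routing implementation.
    """
    use_vertex = env.get("GOOGLE_GENAI_USE_VERTEXAI", "").strip().lower() in ("true", "1")
    if use_vertex:
        # Vertex AI: authentication comes from ADC; the project and
        # location identify where requests are routed.
        if not (env.get("GOOGLE_CLOUD_PROJECT") and env.get("GOOGLE_CLOUD_LOCATION")):
            raise ValueError("Vertex AI requires GOOGLE_CLOUD_PROJECT and GOOGLE_CLOUD_LOCATION")
        return "vertex-ai"
    # Gemini API (Google AI Studio): a single API key is enough.
    if not env.get("GOOGLE_API_KEY"):
        raise ValueError("Gemini API requires GOOGLE_API_KEY")
    return "gemini-api"

print(select_backend({"GOOGLE_API_KEY": "test-key"}))  # gemini-api
```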
Vertex AI¶
To use Gemini models via Vertex AI for local development, use Application Default Credentials (ADC). This is the standard and recommended method for authenticating with Google Cloud services, as it avoids the security risks associated with long-lived API keys.
Google Cloud Prerequisites¶
- Sign in to Google Cloud:
    - If you're an existing user of Google Cloud, sign in via https://console.cloud.google.com. If you previously used a Free Trial that has expired, you may need to upgrade to a paid billing account.
    - If you're a new user of Google Cloud, you can sign up for the Free Trial program. The Free Trial gives you a $300 welcome credit to spend over 90 days on various Google Cloud products without being billed. During the Free Trial, you also get access to the Google Cloud Free Tier, which offers free usage of select products up to specified monthly limits, as well as product-specific free trials.
- Create a Google Cloud project
    - You can use an existing project or create a new one on the Create Project page. Find more details in the GCP documentation.
- Get your Google Cloud project ID
    - Make sure to note the Project ID (an immutable alphanumeric identifier with hyphens), not the project number (numeric) or the project name (a mutable, human-readable label).

- Enable Vertex AI in your project
    - You need to enable the Vertex AI API. Click the "Enable" button; once enabled, the page should read "API Enabled".
- Grant IAM permissions
    - To call Gemini models, your Google account requires the "Vertex AI User" role (roles/aiplatform.user). If you do not have the Owner role or equivalent broad permissions, ensure this role is granted to your account (and to any other user requiring access).
    - You can grant these permissions in the console by following these steps:
        - Go to the IAM & Admin page.
        - Click the "Add" button.
        - In the "New principals" field, enter your email address.
        - In the "Select a role" field, select "Vertex AI User".
    - Or grant the role by running the following command in the terminal (see the gcloud installation instructions):

    ```shell
    gcloud projects add-iam-policy-binding YOUR_PROJECT_ID --member="user:YOUR_EMAIL_ADDRESS" --role="roles/aiplatform.user"
    ```

    - Find more details and best practices in the GCP IAM documentation.
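The earlier project-setup steps can also be performed from the terminal. This is a sketch assuming the gcloud CLI is installed; `aiplatform.googleapis.com` is the Vertex AI API's service name:

```shell
# List your projects; copy the value from the PROJECT_ID column
# (not the numeric project number or the project name).
gcloud projects list

# Enable the Vertex AI API in your project.
gcloud services enable aiplatform.googleapis.com --project=YOUR_PROJECT_ID
```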
Authentication with Vertex AI¶
With your project ready and permissions set, the final step is to authenticate your environment.
The standard way to authenticate to Vertex AI is using Application Default Credentials (ADC). Ensure you have completed the project prerequisites and follow the steps below to set it up in your local development environment.
- Install the gcloud CLI: Follow the official installation instructions.
- Create local authentication credentials: run `gcloud auth application-default login`. This gcloud command opens a browser to authenticate your user account for local development.
- Set environment variables: create or verify that a `.env` file (or a `.properties` file for Java) is in your project's root directory and add the required configuration lines. ADK will automatically load this file.

Note: See the ADC documentation for more details on how ADC works.
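For the Vertex AI backend, a minimal `.env` looks like the following. The variable names follow the ADK convention; substitute your own project ID, and note that `us-central1` is only an example region:

```env
GOOGLE_GENAI_USE_VERTEXAI=TRUE
GOOGLE_CLOUD_PROJECT=YOUR_PROJECT_ID
GOOGLE_CLOUD_LOCATION=us-central1
```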
Vertex AI also offers Vertex AI Express Mode, a simplified, API-key-based setup designed for rapid prototyping. This allows new users to quickly access Gemini models for a 90-day period without the immediate need for full Google Cloud project configuration.
- Sign up for Express Mode and get your API key.
- Set environment variables: create or verify that a `.env` file (or a `.properties` file for Java) is in your project's root directory and add the required configuration lines. ADK will automatically load this file.
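With Express Mode, the Vertex AI backend is still selected but an API key takes the place of the project configuration. A minimal `.env` might look like this (variable names follow the ADK convention; the key value is a placeholder):

```env
GOOGLE_GENAI_USE_VERTEXAI=TRUE
GOOGLE_API_KEY=PASTE_YOUR_EXPRESS_MODE_API_KEY
```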
Gemini API (Google AI Studio)¶
Using the Gemini API (Google AI Studio) is the simplest and fastest way to use Gemini models. It authenticates with API keys and is recommended for getting started quickly.
- Get an API key: obtain your key from Google AI Studio.
- Set environment variables: create or verify that a `.env` file (or a `.properties` file for Java) is in your project's root directory and add the required configuration lines. ADK will automatically load this file.
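For the Gemini API, only an API key is needed. A minimal `.env` might look like this (variable names follow the ADK convention; the key value is a placeholder):

```env
GOOGLE_GENAI_USE_VERTEXAI=FALSE
GOOGLE_API_KEY=PASTE_YOUR_ACTUAL_API_KEY
```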
Secure Your Credentials
Service account credentials or API keys are powerful credentials. Never expose them publicly. Use a secret manager such as Google Cloud Secret Manager to store and access them securely in production.
Gemini model versions
Always check the official Gemini documentation for the latest model names, including specific preview versions if needed. Preview models might have different availability or quota limitations.
Using Gemini in your agents¶
Once you have authenticated and set the required environment variables (.env file), using Gemini models is seamless. The ADK uses these variables to automatically route requests to either Vertex AI or Google AI Studio. The following code examples show a basic implementation for using Gemini models in your agents:
```python
from google.adk.agents import LlmAgent

# --- Example using a stable Gemini Flash model ---
agent_gemini_flash = LlmAgent(
    # Use the latest stable Flash model identifier
    model="gemini-2.5-flash",
    name="gemini_flash_agent",
    instruction="You are a fast and helpful Gemini assistant.",
    # ... other agent parameters
)
```

```typescript
import {LlmAgent} from '@google/adk';

// --- Example using a stable Gemini Flash model ---
export const rootAgent = new LlmAgent({
  name: 'gemini_flash_agent',
  model: 'gemini-2.5-flash',
  description: 'Gemini Flash agent',
  instruction: `You are a fast and helpful Gemini assistant.`,
});
```

```go
import (
    "context"
    "log"

    "google.golang.org/adk/agent/llmagent"
    "google.golang.org/adk/model/gemini"
    "google.golang.org/genai"
)

// --- Example using a stable Gemini Flash model ---
ctx := context.Background()
modelFlash, err := gemini.NewModel(ctx, "gemini-2.5-flash", &genai.ClientConfig{})
if err != nil {
    log.Fatalf("failed to create model: %v", err)
}
agentGeminiFlash, err := llmagent.New(llmagent.Config{
    // Use the latest stable Flash model identifier
    Model:       modelFlash,
    Name:        "gemini_flash_agent",
    Instruction: "You are a fast and helpful Gemini assistant.",
    // ... other agent parameters
})
if err != nil {
    log.Fatalf("failed to create agent: %v", err)
}
```

```java
// --- Example using a stable Gemini Flash model with ENV variables ---
LlmAgent agentGeminiFlash =
    LlmAgent.builder()
        // Use the latest stable Flash model identifier
        .model("gemini-2.5-flash") // Set ENV variables to use this model
        .name("gemini_flash_agent")
        .instruction("You are a fast and helpful Gemini assistant.")
        // ... other agent parameters
        .build();
```