1247 Commits

Author SHA1 Message Date
Xiang (Sean) Zhou f159bd9c87 fix: Use str() to calculate fingerprint instead of json.dumps
This avoids serialization issues for fields that are not JSON serializable.
It also restructures the debug logs in the context cache manager to make potential issues easier to debug.
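A minimal sketch of the idea behind this fix (the function name and config shape are hypothetical, not the actual ADK code): hashing str(obj) works for any value, whereas json.dumps raises TypeError for non-JSON-serializable fields such as bytes.

```python
import hashlib

def fingerprint(config: dict) -> str:
    # json.dumps(config) would raise TypeError for non-JSON-serializable
    # fields (e.g. bytes, custom objects); str() can render any value.
    return hashlib.sha256(str(config).encode("utf-8")).hexdigest()

# A config containing bytes would break json.dumps but not str():
print(fingerprint({"model": "gemini", "blob": b"\x00\x01"}))
```

The trade-off is that str() output depends on each type's repr, so the fingerprint is stable only as long as those reprs are.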

PiperOrigin-RevId: 811182492
v1.15.0
2025-09-24 22:14:40 -07:00
Ankur Sharma d48679582d feat: Populate AppDetails to each Invocation
AppDetails requires two pieces of information:
1) Instructions
2) Tools

Both pieces of information are gathered from the llm_request that was passed to the model. This approach, while slightly invasive, ensures that we capture the "exact" instructions and tools that were given to the model.

PiperOrigin-RevId: 811180648
2025-09-24 22:06:56 -07:00
Google Team Member 2a2da0fe03 feat: Introduce OAuth2DiscoveryManager to fetch metadata needed for OAuth
This is the first step in bringing ADK into compliance with the MCP Authorization Spec.

PiperOrigin-RevId: 811177152
2025-09-24 21:53:48 -07:00
Ankur Sharma 5a485b01cd feat: Adds Rubric based final response evaluator
The evaluator uses a set of rubrics to assess the quality of the agent's final response.

PiperOrigin-RevId: 811154498
2025-09-24 20:30:51 -07:00
Ankur Sharma 01923a9227 feat: Data model for storing App Details and data model for steps
Details:
1. Data model for storing App Details (the agentic system)
As we move towards LLM-as-Judge metrics, we see that some of these metrics need information about the agentic system that was used for inferencing. We add a data model to capture that.

2. Data model for Steps
We refine the concept of intermediate data. Previously it was stored as multiple lists, losing the chronological information that some of the metrics need. So we redefine intermediate data as a series of logical steps that an agent takes.
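The two models described above can be sketched roughly as follows (field names here are illustrative assumptions, not the actual ADK definitions):

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class AppDetails:
    """Sketch: the agentic system used for inferencing."""
    instructions: str
    tool_names: list[str] = field(default_factory=list)

@dataclass
class Step:
    """Sketch: one logical step an agent takes, in chronological order."""
    tool_calls: list[str] = field(default_factory=list)
    response: Optional[str] = None

# A list of Steps preserves ordering, unlike separate parallel lists
# of tool calls and responses.
steps = [Step(tool_calls=["search"]), Step(response="done")]
```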

PiperOrigin-RevId: 811122784
2025-09-24 18:41:38 -07:00
Xiang (Sean) Zhou 08f3b48305 chore: Add sample agent to test non-text content in static instruction
PiperOrigin-RevId: 810999310
2025-09-24 13:03:11 -07:00
Xuan Yang 6db096a3f4 chore: remove unsupported 'type': 'unknown' in test_common.py for fastapi 0.117.1
PiperOrigin-RevId: 810673476
2025-09-23 19:44:49 -07:00
Xiang (Sean) Zhou 47bd34ac28 chore: Fix the type annotation
PiperOrigin-RevId: 810611299
2025-09-23 15:50:19 -07:00
Xiang (Sean) Zhou ae5592e242 chore: Add tests for instruction provider and merge test_static_instructions.py into test_instructions.py
PiperOrigin-RevId: 810610507
2025-09-23 15:47:46 -07:00
Xiang (Sean) Zhou 61213ce4d4 feat: Support non-text content in static instruction
Non-text parts are moved to user contents and referenced from the instruction.

PiperOrigin-RevId: 810587466
2025-09-23 15:36:15 -07:00
Xuan Yang e86ca5762a chore: remove internal TODO comment
PiperOrigin-RevId: 810583734
2025-09-23 15:36:06 -07:00
Google Team Member cbb609233b chore: Sample Spanner RAG agent that wraps search_tool
Also modified the README to add instructions on when to use which tool.

PiperOrigin-RevId: 810563458
2025-09-23 15:35:57 -07:00
George Weale 657369cffe fix: Adds plugin to save artifacts for issue #2176
PiperOrigin-RevId: 810522939
2025-09-23 15:35:48 -07:00
Xiang (Sean) Zhou c944a12e31 chore: Remove query schema mode, as it doesn't perform as well as embedded schema mode
PiperOrigin-RevId: 810517055
2025-09-23 15:35:40 -07:00
Xiang (Sean) Zhou 26990c2622 chore: Add sample agent to test static instruction
PiperOrigin-RevId: 810516925
2025-09-23 15:35:31 -07:00
Xiang (Sean) Zhou f2ce990867 chore: Add experimental annotation to GeminiContextCacheManager
PiperOrigin-RevId: 810503537
2025-09-23 15:35:22 -07:00
shsha4 86dea5b53a fix(mcp): Initialize tool_name_prefix in McpToolset
Merge https://github.com/google/adk-python/pull/2823

Description
  This change introduces a tool_name_prefix attribute to McpToolset and McpToolsetConfig. This allows for adding a prefix to the
  names of all tools within the toolset, which can help avoid naming collisions and provide better organization.

  The implementation involves updating the McpToolset's __init__ and from_config methods to handle the new tool_name_prefix and
  adding the corresponding field to McpToolsetConfig.

  Testing Plan
  A new unit test file has been added to ensure the functionality works as expected.

   - `tests/unittests/tools/test_mcp_toolset.py`:
     - The test_mcp_toolset_with_prefix test case verifies that the tool_name_prefix is correctly applied to the tool names
       retrieved from the toolset.
     - All tests were run via pytest and passed.

  Related Issue
   - Closes #2814

COPYBARA_INTEGRATE_REVIEW=https://github.com/google/adk-python/pull/2823 from shsha4:fix/issue-2814 e8e5b0d6d5f406d3875faf2229a96701725b7a5e
PiperOrigin-RevId: 810500616
2025-09-23 15:35:12 -07:00
Xiang (Sean) Zhou 6ca2aee829 ADK changes
PiperOrigin-RevId: 810492858
2025-09-23 15:35:02 -07:00
Xuan Yang 374522197f ADK changes
PiperOrigin-RevId: 810223422
2025-09-23 15:34:53 -07:00
Google Team Member aef1ee97a5 fix: make a copy of the columns instead of modifying it in place
This avoids unintentional modifications, especially in the case of a wrapped tool.
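The defensive-copy pattern this fix applies, in a minimal sketch (the function name and column values are illustrative, not the actual ADK code):

```python
def with_metadata_column(columns: list[str]) -> list[str]:
    # Copy instead of mutating the caller's list: a wrapped tool may
    # reuse the original list across invocations, so in-place appends
    # would accumulate between calls.
    result = list(columns)
    result.append("_metadata")
    return result

original = ["id", "name"]
augmented = with_metadata_column(original)
print(original)   # unchanged: ['id', 'name']
print(augmented)  # -> ['id', 'name', '_metadata']
```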

PiperOrigin-RevId: 810175539
2025-09-23 15:34:43 -07:00
Xiang (Sean) Zhou 38bbde6d56 chore: Annotate CachePerformanceAnalyzer as experimental
PiperOrigin-RevId: 809434619
2025-09-23 15:34:34 -07:00
TanejaAnkisetty 78fd4803d5 chore: Set role to user if new_message doesn't have role in Runner.run_async()
Merge https://github.com/google/adk-python/pull/2458

**Summary**
Ensures that user-provided messages are always passed to the LLM with the 'user' role, regardless of whether the role is explicitly set in types.Content. Before this fix, if the user's content lacked the 'user' role, its text was replaced with the standard placeholder "Handle the requests as specified in the System Instruction." and the user's actual content was ignored and never passed to the LLM.

**Code to replicate the problem**

```
from google.adk.agents import LlmAgent
from google.adk.sessions import InMemorySessionService
from google.adk.runners import Runner
from google.genai.types import Content, Part
from google.adk.models.lite_llm import LiteLlm
from google.adk.models import LlmRequest
from google.genai import types
from pydantic import Field

import litellm
litellm._turn_on_debug()

import warnings
warnings.filterwarnings("ignore", category=UserWarning, message=".*InMemoryCredentialService.*")

import os
from dotenv import load_dotenv

# Load environment variables from the agent directory's .env file
load_dotenv()

OPENAI_API_KEY = os.getenv("OPENAI_API_KEY")

# Define agent with output_key
root_agent = LlmAgent(
    name="name_of_agent",
    model=LiteLlm(model="azure/gpt-4o-mini"),
    instruction="You are a customer agent to help the users with their concerns."
)

# --- Setup Runner and Session ---
app_name, user_id, session_id = "state_app", "user1", "session1"

session_service = InMemorySessionService()

runner = Runner(
    agent=root_agent,
    app_name=app_name,
    session_service=session_service
)

print(f"Runner created for agent '{runner.agent.name}'.")

session = await session_service.create_session(
    app_name=app_name,
    user_id=user_id,
    session_id=session_id
)

# --- Run the Agent ---

async def call_agent_async(query: str, runner, user_id, session_id):

    user_message = Content(parts=[Part(text=query)])

    async for event in runner.run_async(
        user_id=user_id,
        session_id=session_id,
        new_message=user_message
    ):
        print("event")
        print(f"  [Event]\n  Author: {event.author}\n  Type: {type(event).__name__}",
        f"\n  Final: {event.is_final_response()}\n  Content: {event.content}")

    return event

event = await call_agent_async("What is the capital of India.",runner=runner,user_id=user_id,session_id=session_id)
```
**Before the fix (current adk-python code output)**
```
00:29:24 - LiteLLM:DEBUG: utils.py:348 -

00:29:24 - LiteLLM:DEBUG: utils.py:348 - Request to litellm:
00:29:24 - LiteLLM:DEBUG: utils.py:348 - litellm.acompletion(model='azure/gpt-4o-mini', messages=[{'role': 'developer', 'content': 'You are a customer agent to help the users with their concerns.\n\nYou are an agent. Your internal name is "name_of_agent".'}, {'role': 'user', 'content': 'Handle the requests as specified in the System Instruction.'}], tools=None, response_format=None)
```

**After the fix**
```
00:28:46 - LiteLLM:DEBUG: utils.py:349 -

00:28:46 - LiteLLM:DEBUG: utils.py:349 - Request to litellm:
00:28:46 - LiteLLM:DEBUG: utils.py:349 - litellm.acompletion(model='azure/gpt-4o-mini', messages=[{'role': 'developer', 'content': 'You are a customer agent to help the users with their concerns.\n\nYou are an agent. Your internal name is "name_of_agent".'}, {'role': 'user', 'content': 'What is the capital of India.'}], tools=None, response_format=None)
```

**Testing**
The following unit test was created to verify the applied changes and added in the location suggested in the guidelines.
adk-python\tests\unittests\models\test_base_llm.py

```
import pytest
from google.genai import types
from google.adk.models.llm_request import LlmRequest
from google.adk.models.lite_llm import _get_completion_inputs

@pytest.mark.parametrize("content_kwargs", [
    # Case 1: Explicit role provided
    {"role": "user", "parts": [types.Part(text="This is an input text from user.")]},
    # Case 2: Role omitted, should still be treated as 'user'
    {"parts": [types.Part(text="This is an input text from user.")]}
])
def test_user_content_role_defaults_to_user(content_kwargs):
    """
    Verifies that user-provided messages are always passed to the LLM as 'user' role,
    regardless of whether the role is explicitly set in types.Content.

    The helper `_get_completion_inputs` should normalize messages so that
    an explicit 'user' role and an implicit (missing) role are equivalent.
    """
    llm_request = LlmRequest(
        contents=[types.Content(**content_kwargs)],
        config=types.GenerateContentConfig()
    )

    messages, _, _, _ = _get_completion_inputs(llm_request)

    assert all(
        msg.get("role") == "user" for msg in messages
    ), f"Expected role 'user' but got {messages}"
    assert any(
        "This is an input text from user." == (msg.get("content") or "")
        for msg in messages
    ), f"Expected the user text to be preserved, but got {messages}"
```

COPYBARA_INTEGRATE_REVIEW=https://github.com/google/adk-python/pull/2458 from TanejaAnkisetty:bug/agent-user-content 381b01418d249b9e6bd91ebb518ff25339a8e47b
PiperOrigin-RevId: 809281620
2025-09-23 15:34:21 -07:00
Google Team Member 632bf8b0bc fix: Filter out thought parts when saving agent output to state
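The filtering this fix describes can be sketched as follows (the Part dataclass stands in for google.genai types.Part, whose parts carry an optional thought flag; the helper name is hypothetical):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Part:
    text: str
    thought: Optional[bool] = None

def output_for_state(parts: list[Part]) -> str:
    # Keep only non-thought parts when persisting agent output to
    # session state, so internal reasoning never leaks into state.
    return "".join(p.text for p in parts if not p.thought)

parts = [Part("internal reasoning...", thought=True), Part("Final answer.")]
print(output_for_state(parts))  # -> Final answer.
```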
PiperOrigin-RevId: 809270320
2025-09-19 18:58:59 -07:00
Wei Sun (Jack) 6e834d3fac feat(conformance): Skips recording for inner runner of AgentTool in conformance tests
PiperOrigin-RevId: 809252704
2025-09-19 17:36:18 -07:00
Xiang (Sean) Zhou 9be9cc2fee feat: Support static instructions
Static instructions:
Always added to system instructions, enabling context caching.

Dynamic instructions:
Added to system instructions when no static instruction exists (for backward compatibility), OR inserted before the last batch of contiguous user content when a static instruction exists.
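The placement rule above can be sketched like this (a simplified stand-in with hypothetical names, modeling contents as (role, text) tuples rather than the actual ADK request types):

```python
def build_request(static_instruction, dynamic_instruction, contents):
    """Sketch: place static vs. dynamic instructions per the rule above."""
    system = []
    if static_instruction:
        # Static instruction always goes to system, so the system prompt
        # stays byte-identical across turns and remains cacheable.
        system.append(static_instruction)
        # Dynamic instruction is inserted just before the last batch of
        # contiguous user content instead of into the system instruction.
        i = len(contents)
        while i > 0 and contents[i - 1][0] == "user":
            i -= 1
        contents = contents[:i] + [("user", dynamic_instruction)] + contents[i:]
    elif dynamic_instruction:
        # Backward-compatible path: no static instruction, so the dynamic
        # one lands in the system instruction as before.
        system.append(dynamic_instruction)
    return system, contents

system, contents = build_request(
    "You are helpful.",
    "Today is Monday.",
    [("user", "hi"), ("model", "hello"), ("user", "what day?")],
)
print(system)    # -> ['You are helpful.']
print(contents)
```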

PiperOrigin-RevId: 809170679
2025-09-19 13:46:36 -07:00