Details:
1. Data model for storing App Details (the agentic system)
As we move towards LLM-as-Judge metrics, we see that some of these metrics need information about the agentic system that was used for inference. We add a data model to capture that.
2. Data model for Steps
We refine the concept of intermediate data. Previously it was stored as multiple separate lists, losing the chronological ordering that some metrics need. We therefore redefine intermediate data as a series of logical steps that an agent takes.
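The steps-based model described above could be sketched roughly as follows. The class and field names here are hypothetical illustrations of the idea, not the actual ADK schema:

```python
from dataclasses import dataclass, field
from typing import Any

@dataclass
class Step:
    """One logical action the agent took, in chronological order."""
    step_type: str                              # e.g. "tool_call", "llm_response"
    payload: dict[str, Any] = field(default_factory=dict)

@dataclass
class IntermediateData:
    """An ordered series of steps, replacing the earlier unordered lists."""
    steps: list[Step] = field(default_factory=list)

    def tool_calls(self) -> list[Step]:
        # Chronology is preserved; filtering by type recovers the old
        # per-category lists when a metric does not need ordering.
        return [s for s in self.steps if s.step_type == "tool_call"]
```

The key design point is that the old per-category lists become a projection of the single ordered sequence, so no information is lost.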
PiperOrigin-RevId: 811122784
Merge https://github.com/google/adk-python/pull/2823
Description
This change introduces a tool_name_prefix attribute to McpToolset and McpToolsetConfig. It allows a prefix to be added to the
names of all tools within the toolset, which helps avoid naming collisions and improves organization.
The implementation updates McpToolset's __init__ and from_config methods to handle the new tool_name_prefix and
adds the corresponding field to McpToolsetConfig.
Testing Plan
A new unit test file has been added to ensure the functionality works as expected.
- `tests/unittests/tools/test_mcp_toolset.py`:
- The test_mcp_toolset_with_prefix test case verifies that the tool_name_prefix is correctly applied to the tool names
retrieved from the toolset.
- All tests were run via pytest and passed.
Related Issue
- Closes #2814
COPYBARA_INTEGRATE_REVIEW=https://github.com/google/adk-python/pull/2823 from shsha4:fix/issue-2814 e8e5b0d6d5f406d3875faf2229a96701725b7a5e
PiperOrigin-RevId: 810500616
Merge https://github.com/google/adk-python/pull/2458
**Summary**
Verifies that user-provided messages are always passed to the LLM with the 'user' role, regardless of whether the role is explicitly set in types.Content. Before this fix, if the user content in the LlmRequest did not carry the 'user' role, the text was replaced with the placeholder "Handle the requests as specified in the System Instruction." and the user's actual content was ignored and never passed to the LLM.
**Code to replicate the problem**
```
from google.adk.agents import LlmAgent
from google.adk.sessions import InMemorySessionService
from google.adk.runners import Runner
from google.genai.types import Content, Part
from google.adk.models.lite_llm import LiteLlm
from google.adk.models import LlmRequest
from google.genai import types
from pydantic import Field
import litellm

litellm._turn_on_debug()

import warnings
warnings.filterwarnings("ignore", category=UserWarning, message=".*InMemoryCredentialService.*")

import os
from dotenv import load_dotenv

# Load environment variables from the agent directory's .env file
load_dotenv()
OPENAI_API_KEY = os.getenv("OPENAI_API_KEY")

# Define the agent
root_agent = LlmAgent(
    name="name_of_agent",
    model=LiteLlm(model="azure/gpt-4o-mini"),
    instruction="You are a customer agent to help the users with their concerns."
)

# --- Setup Runner and Session ---
app_name, user_id, session_id = "state_app", "user1", "session1"
session_service = InMemorySessionService()
runner = Runner(
    agent=root_agent,
    app_name=app_name,
    session_service=session_service
)
print(f"Runner created for agent '{runner.agent.name}'.")

# Top-level `await` assumes an async context (e.g. a notebook).
session = await session_service.create_session(
    app_name=app_name,
    user_id=user_id,
    session_id=session_id
)

# --- Run the Agent ---
async def call_agent_async(query: str, runner, user_id, session_id):
    # Content is created WITHOUT an explicit role to reproduce the bug.
    user_message = Content(parts=[Part(text=query)])
    async for event in runner.run_async(
        user_id=user_id,
        session_id=session_id,
        new_message=user_message
    ):
        print(f"  [Event]\n  Author: {event.author}\n  Type: {type(event).__name__}"
              f"\n  Final: {event.is_final_response()}\n  Content: {event.content}")
    return event

event = await call_agent_async(
    "What is the capital of India.", runner=runner, user_id=user_id, session_id=session_id
)
```
**Before the fix (current adk-python code output)**
```
00:29:24 - LiteLLM:DEBUG: utils.py:348 -
00:29:24 - LiteLLM:DEBUG: utils.py:348 - Request to litellm:
00:29:24 - LiteLLM:DEBUG: utils.py:348 - litellm.acompletion(model='azure/gpt-4o-mini', messages=[{'role': 'developer', 'content': 'You are a customer agent to help the users with their concerns.\n\nYou are an agent. Your internal name is "name_of_agent".'}, {'role': 'user', 'content': 'Handle the requests as specified in the System Instruction.'}], tools=None, response_format=None)
```
**After the fix**
```
00:28:46 - LiteLLM:DEBUG: utils.py:349 -
00:28:46 - LiteLLM:DEBUG: utils.py:349 - Request to litellm:
00:28:46 - LiteLLM:DEBUG: utils.py:349 - litellm.acompletion(model='azure/gpt-4o-mini', messages=[{'role': 'developer', 'content': 'You are a customer agent to help the users with their concerns.\n\nYou are an agent. Your internal name is "name_of_agent".'}, {'role': 'user', 'content': 'What is the capital of India.'}], tools=None, response_format=None)
```
**Testing**
The following unit test was created to cover the applied changes and was added at the location suggested in the guidelines:
`adk-python/tests/unittests/models/test_base_llm.py`
```
import pytest
from google.genai import types
from google.adk.models.llm_request import LlmRequest
from google.adk.models.lite_llm import _get_completion_inputs


@pytest.mark.parametrize("content_kwargs", [
    # Case 1: Explicit role provided
    {"role": "user", "parts": [types.Part(text="This is an input text from user.")]},
    # Case 2: Role omitted, should still be treated as 'user'
    {"parts": [types.Part(text="This is an input text from user.")]},
])
def test_user_content_role_defaults_to_user(content_kwargs):
    """Verifies that user-provided messages are always passed to the LLM with
    the 'user' role, regardless of whether the role is explicitly set in
    types.Content.

    The helper `_get_completion_inputs` should normalize messages so that an
    explicit 'user' role and an implicit (missing) role are equivalent.
    """
    llm_request = LlmRequest(
        contents=[types.Content(**content_kwargs)],
        config=types.GenerateContentConfig()
    )
    messages, _, _, _ = _get_completion_inputs(llm_request)
    assert all(
        msg.get("role") == "user" for msg in messages
    ), f"Expected role 'user' but got {messages}"
    assert any(
        "This is an input text from user." == (msg.get("content") or "")
        for msg in messages
    ), f"Expected the user text to be preserved, but got {messages}"
```
COPYBARA_INTEGRATE_REVIEW=https://github.com/google/adk-python/pull/2458 from TanejaAnkisetty:bug/agent-user-content 381b01418d249b9e6bd91ebb518ff25339a8e47b
PiperOrigin-RevId: 809281620
Static instructions:
Always added to system instructions, for context caching.
Dynamic instructions:
Added to system instructions when no static instruction exists (for backward compatibility), or inserted before the last contiguous batch of user content when static instructions exist.
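The placement rule above can be sketched as follows. The function and the `(role, text)` message representation are hypothetical illustrations, not the actual ADK request model:

```python
def place_instructions(messages, static_instruction, dynamic_instruction):
    """messages: list of (role, text) tuples. Returns (system_parts, messages)."""
    system_parts = []
    if static_instruction:
        # Static instructions always go to system instructions, so the
        # cached prefix stays byte-stable across turns.
        system_parts.append(static_instruction)
        if dynamic_instruction:
            # Insert before the last contiguous run of user messages,
            # keeping it out of the cacheable system prefix.
            i = len(messages)
            while i > 0 and messages[i - 1][0] == "user":
                i -= 1
            messages = messages[:i] + [("user", dynamic_instruction)] + messages[i:]
    elif dynamic_instruction:
        # Backward compatible: with no static instruction, dynamic text
        # joins the system instructions as before.
        system_parts.append(dynamic_instruction)
    return system_parts, messages
```

The point of the split is that anything varying per turn would invalidate a cached system-instruction prefix, so dynamic text is pushed into the message stream only when a static (cacheable) instruction is present.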
PiperOrigin-RevId: 809170679
1. add a context cache config in app level which will apply to all agents in the app
2. pass on cache config through invocation context to llm_request
3. store cache metadata in llm_response
4. lookup old cache metadata from latest event for reusing old cache
5. create new cache if old cache cannot be reused
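Steps 4 and 5 amount to a reuse-or-create decision, sketched below with hypothetical names; the actual cache metadata fields and validity checks in ADK may differ:

```python
import time

def resolve_cache(latest_cache_metadata, fingerprint, now=None, ttl_seconds=1800):
    """Reuse the cache recorded in the latest event if still valid, else create one.

    latest_cache_metadata: dict with assumed keys "fingerprint" and
    "expire_time" (epoch seconds), or None if no prior event carried one.
    """
    now = time.time() if now is None else now
    md = latest_cache_metadata
    if md and md["fingerprint"] == fingerprint and now < md["expire_time"]:
        # Old cache still matches the current request prefix and has not
        # expired: reuse it (step 4).
        return md
    # Otherwise create a new cache entry (step 5).
    return {"fingerprint": fingerprint, "expire_time": now + ttl_seconds}
```

Storing the metadata on the response (step 3) is what makes the lookup in step 4 possible: the latest event always carries the most recent cache handle.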
PiperOrigin-RevId: 809158578