Merge https://github.com/google/adk-python/pull/2458
**Summary**
Verifies that user-provided messages are always passed to the LLM with the 'user' role, regardless of whether the role is explicitly set in `types.Content`. Before this fix, if the user content in the `LlmRequest` did not carry the 'user' role, the text was replaced with the placeholder "Handle the requests as specified in the System Instruction." and the user's actual content was silently dropped and never passed to the LLM.
**Code to replicate the problem**
```python
import asyncio
import os
import warnings

import litellm
from dotenv import load_dotenv
from google.adk.agents import LlmAgent
from google.adk.models.lite_llm import LiteLlm
from google.adk.runners import Runner
from google.adk.sessions import InMemorySessionService
from google.genai.types import Content, Part

litellm._turn_on_debug()
warnings.filterwarnings(
    "ignore", category=UserWarning, message=".*InMemoryCredentialService.*"
)

# Load environment variables (e.g. OPENAI_API_KEY) from the .env file.
load_dotenv()

root_agent = LlmAgent(
    name="name_of_agent",
    model=LiteLlm(model="azure/gpt-4o-mini"),
    instruction="You are a customer agent to help the users with their concerns.",
)

# --- Setup Runner and Session ---
app_name, user_id, session_id = "state_app", "user1", "session1"
session_service = InMemorySessionService()
runner = Runner(
    agent=root_agent,
    app_name=app_name,
    session_service=session_service,
)
print(f"Runner created for agent '{runner.agent.name}'.")


# --- Run the Agent ---
async def call_agent_async(query: str, runner, user_id, session_id):
    # The role is intentionally omitted here to reproduce the bug.
    user_message = Content(parts=[Part(text=query)])
    async for event in runner.run_async(
        user_id=user_id,
        session_id=session_id,
        new_message=user_message,
    ):
        print(
            f" [Event]\n Author: {event.author}\n Type: {type(event).__name__}"
            f"\n Final: {event.is_final_response()}\n Content: {event.content}"
        )
    return event


async def main():
    await session_service.create_session(
        app_name=app_name, user_id=user_id, session_id=session_id
    )
    await call_agent_async(
        "What is the capital of India.",
        runner=runner, user_id=user_id, session_id=session_id,
    )


asyncio.run(main())
```
**Before the fix (current adk-python code output)**
```
00:29:24 - LiteLLM:DEBUG: utils.py:348 -
00:29:24 - LiteLLM:DEBUG: utils.py:348 - Request to litellm:
00:29:24 - LiteLLM:DEBUG: utils.py:348 - litellm.acompletion(model='azure/gpt-4o-mini', messages=[{'role': 'developer', 'content': 'You are a customer agent to help the users with their concerns.\n\nYou are an agent. Your internal name is "name_of_agent".'}, {'role': 'user', 'content': 'Handle the requests as specified in the System Instruction.'}], tools=None, response_format=None)
```
**After the fix**
```
00:28:46 - LiteLLM:DEBUG: utils.py:349 -
00:28:46 - LiteLLM:DEBUG: utils.py:349 - Request to litellm:
00:28:46 - LiteLLM:DEBUG: utils.py:349 - litellm.acompletion(model='azure/gpt-4o-mini', messages=[{'role': 'developer', 'content': 'You are a customer agent to help the users with their concerns.\n\nYou are an agent. Your internal name is "name_of_agent".'}, {'role': 'user', 'content': 'What is the capital of India.'}], tools=None, response_format=None)
```
**Testing**
The following unit test was created to verify the applied changes and added at the location suggested in the guidelines:
`adk-python\tests\unittests\models\test_base_llm.py`
```python
import pytest
from google.genai import types

from google.adk.models.lite_llm import _get_completion_inputs
from google.adk.models.llm_request import LlmRequest


@pytest.mark.parametrize(
    "content_kwargs",
    [
        # Case 1: Explicit role provided.
        {"role": "user", "parts": [types.Part(text="This is an input text from user.")]},
        # Case 2: Role omitted; should still be treated as 'user'.
        {"parts": [types.Part(text="This is an input text from user.")]},
    ],
)
def test_user_content_role_defaults_to_user(content_kwargs):
    """Verifies that user-provided messages are always passed to the LLM with
    the 'user' role, regardless of whether the role is explicitly set in
    types.Content.

    The helper `_get_completion_inputs` should normalize messages so that an
    explicit 'user' role and an implicit (missing) role are equivalent.
    """
    llm_request = LlmRequest(
        contents=[types.Content(**content_kwargs)],
        config=types.GenerateContentConfig(),
    )
    messages, _, _, _ = _get_completion_inputs(llm_request)
    assert all(
        msg.get("role") == "user" for msg in messages
    ), f"Expected role 'user' but got {messages}"
    assert any(
        (msg.get("content") or "") == "This is an input text from user."
        for msg in messages
    ), f"Expected the user text to be preserved, but got {messages}"
```
COPYBARA_INTEGRATE_REVIEW=https://github.com/google/adk-python/pull/2458 from TanejaAnkisetty:bug/agent-user-content 381b01418d249b9e6bd91ebb518ff25339a8e47b
PiperOrigin-RevId: 809281620
Static instructions:
* Always added to system instructions, for context caching.
Dynamic instructions:
* Added to system instructions when no static instruction exists (for backward compatibility), or inserted before the last batch of continuous user content when static instructions exist.
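The placement rules above can be sketched as a small routing function. This is an illustrative pure-Python sketch under stated assumptions, not the actual ADK implementation; the names `place_instructions`, the message dicts, and the `"role"`/`"text"` keys are hypothetical:

```python
def place_instructions(static_instruction, dynamic_instruction, contents):
    """Sketch of the instruction-placement rules described above.

    Returns (system_instructions, contents):
    - a static instruction always goes to the system instructions
      (cache-friendly, since it never changes between requests);
    - a dynamic instruction goes to the system instructions only when no
      static instruction exists; otherwise it is inserted before the last
      batch of continuous user content so the static prefix stays cacheable.
    """
    system_instructions = []
    contents = list(contents)

    if static_instruction:
        system_instructions.append(static_instruction)

    if dynamic_instruction:
        if not static_instruction:
            # Backward-compatible behavior: no static instruction present.
            system_instructions.append(dynamic_instruction)
        else:
            # Find the start of the trailing run of continuous 'user' contents.
            i = len(contents)
            while i > 0 and contents[i - 1]["role"] == "user":
                i -= 1
            contents.insert(i, {"role": "user", "text": dynamic_instruction})

    return system_instructions, contents
```

With both kinds of instruction present, the dynamic one lands just before the trailing user messages; with only a dynamic one, it falls back to the system instructions.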
PiperOrigin-RevId: 809170679
1. add a context cache config in app level which will apply to all agents in the app
2. pass on cache config through invocation context to llm_request
3. store cache metadata in llm_response
4. lookup old cache metadata from latest event for reusing old cache
5. create new cache if old cache cannot be reused
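The lookup-and-reuse flow in steps 4 and 5 can be sketched as follows. This is a hypothetical pure-Python model, not ADK API; `CacheMetadata`, `resolve_cache`, and the expiry check are illustrative placeholders:

```python
import time
from dataclasses import dataclass
from typing import Optional


@dataclass
class CacheMetadata:
    """Sketch of cache metadata stored on an LlmResponse/event (step 3)."""
    cache_name: str
    expire_time: float  # unix timestamp


def resolve_cache(latest_event_cache: Optional[CacheMetadata],
                  now: Optional[float] = None) -> CacheMetadata:
    """Reuse the cache recorded on the latest event if still valid (step 4),
    otherwise create a new one (step 5)."""
    now = time.time() if now is None else now
    if latest_event_cache is not None and latest_event_cache.expire_time > now:
        # Step 4: the old cache is still alive, reuse it.
        return latest_event_cache
    # Step 5: the old cache is missing or expired; create a new one
    # (placeholder name and a one-hour TTL for illustration).
    return CacheMetadata(cache_name=f"cache-{int(now)}", expire_time=now + 3600)
```

The key design point is that the decision is driven purely by metadata persisted on the latest event, so no separate cache registry is needed.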
PiperOrigin-RevId: 809158578
Currently there is a chance of Cloud Monitoring-related errors appearing in logs during shutdown. Disable the metrics part until this is fixed.
PiperOrigin-RevId: 808930635
The docstrings for `compaction_range` and `compacted_content` are updated to reflect that compaction is based on timestamp ranges rather than sequence IDs, and to use consistent terminology ("compacted" instead of "summarized").
PiperOrigin-RevId: 808770610
Merge https://github.com/google/adk-python/pull/2960
1. All-in-one authentication sample (includes an IDP, an agent, and the application) under `contributing/samples/authn-adk-all-in-one/`.
2. All the steps are documented.
3. The agent uses the OAuth 2.0 Authorization Code grant type.
COPYBARA_INTEGRATE_REVIEW=https://github.com/google/adk-python/pull/2960 from nikhilpurwant:main dfcc821602d265c4ae7cc42eb1f5739beaad6f87
PiperOrigin-RevId: 808672120
This adds `GoogleMapsGroundingTool`, a built-in tool for Gemini 2 models that grounds query results with Google Maps. The tool operates internally within the model and is only available when using the Vertex AI Gemini API.
PiperOrigin-RevId: 808650501
Provide a more efficient way to compact LLM context for better agentic performance.
* `app`: the top-level abstraction for an ADK application. It contains a root agent and plugins.
* `content_strategy`: the abstraction for selecting the contents for an LLM request.
* `compaction_strategy`: the abstraction for compacting the events.
* Added `sequence_id` and `summary_range` in event class.
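The abstractions above can be sketched roughly as follows. This is a hypothetical pure-Python model of the described interfaces, not the actual ADK classes; the class names, the toy summarizer, and the `keep_last` parameter are illustrative:

```python
from dataclasses import dataclass
from typing import List, Optional, Tuple


@dataclass
class Event:
    """Sketch of an event carrying the new sequence_id / summary_range fields."""
    sequence_id: int
    text: str
    # When set, this event summarizes events in [start, end] (inclusive).
    summary_range: Optional[Tuple[int, int]] = None


class CompactionStrategy:
    """Abstraction for compacting a run of events into one summary event."""

    def compact(self, events: List[Event]) -> Event:
        summary = " | ".join(e.text for e in events)  # toy summarizer
        return Event(
            sequence_id=events[-1].sequence_id,
            text=summary,
            summary_range=(events[0].sequence_id, events[-1].sequence_id),
        )


class ContentStrategy:
    """Abstraction for selecting the contents to include in the LLM request."""

    def __init__(self, compaction: CompactionStrategy, keep_last: int = 2):
        self.compaction = compaction
        self.keep_last = keep_last

    def select(self, events: List[Event]) -> List[Event]:
        # Keep the most recent events verbatim; compact everything older
        # into a single summary event to shrink the LLM context.
        if len(events) <= self.keep_last:
            return events
        old, recent = events[:-self.keep_last], events[-self.keep_last:]
        return [self.compaction.compact(old)] + recent
```

Splitting selection (`ContentStrategy`) from summarization (`CompactionStrategy`) lets each be swapped independently, e.g. an LLM-based summarizer behind the same `compact` interface.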
PiperOrigin-RevId: 808634224