Runner.run_async()
Merge https://github.com/google/adk-python/pull/2458

**Summary**

Verifies that user-provided messages are always passed to the LLM with the 'user' role, regardless of whether the role is explicitly set in types.Content. Before this fix, if the LlmRequest from the user did not have the 'user' role but did contain user content, the text was replaced with the standard text "Handle the requests as specified in the System Instruction." and the user's content was completely ignored and never passed to the LLM.

**Code to replicate the problem**

```python
from google.adk.agents import LlmAgent
from google.adk.sessions import InMemorySessionService
from google.adk.runners import Runner
from google.genai.types import Content, Part
from google.adk.models.lite_llm import LiteLlm
from google.adk.models import LlmRequest
from google.genai import types
from pydantic import Field
import litellm

litellm._turn_on_debug()

import warnings
warnings.filterwarnings("ignore", category=UserWarning, message=".*InMemoryCredentialService.*")

import os
from dotenv import load_dotenv

# Load environment variables from the agent directory's .env file
load_dotenv()
OPENAI_API_KEY = os.getenv("OPENAI_API_KEY")

# Define agent with output_key
root_agent = LlmAgent(
    name="name_of_agent",
    model=LiteLlm(model="azure/gpt-4o-mini"),
    instruction="You are a customer agent to help the users with their concerns."
)

# --- Setup Runner and Session ---
app_name, user_id, session_id = "state_app", "user1", "session1"
session_service = InMemorySessionService()
runner = Runner(
    agent=root_agent,
    app_name=app_name,
    session_service=session_service
)
print(f"Runner created for agent '{runner.agent.name}'.")

session = await session_service.create_session(
    app_name=app_name, user_id=user_id, session_id=session_id
)

# --- Run the Agent ---
async def call_agent_async(query: str, runner, user_id, session_id):
    user_message = Content(parts=[Part(text=query)])
    async for event in runner.run_async(
        user_id=user_id, session_id=session_id, new_message=user_message
    ):
        print("event")
        print(f"  [Event]\n  Author: {event.author}\n  Type: {type(event).__name__}",
              f"\n  Final: {event.is_final_response()}\n  Content: {event.content}")
    return event

# Top-level await: run inside an async context (e.g. a notebook).
event = await call_agent_async(
    "What is the capital of India.",
    runner=runner, user_id=user_id, session_id=session_id
)
```

**Before the fix (current adk-python code output)**

```
00:29:24 - LiteLLM:DEBUG: utils.py:348 -
00:29:24 - LiteLLM:DEBUG: utils.py:348 - Request to litellm:
00:29:24 - LiteLLM:DEBUG: utils.py:348 - litellm.acompletion(model='azure/gpt-4o-mini', messages=[{'role': 'developer', 'content': 'You are a customer agent to help the users with their concerns.\n\nYou are an agent. Your internal name is "name_of_agent".'}, {'role': 'user', 'content': 'Handle the requests as specified in the System Instruction.'}], tools=None, response_format=None)
```

**After the fix**

```
00:28:46 - LiteLLM:DEBUG: utils.py:349 -
00:28:46 - LiteLLM:DEBUG: utils.py:349 - Request to litellm:
00:28:46 - LiteLLM:DEBUG: utils.py:349 - litellm.acompletion(model='azure/gpt-4o-mini', messages=[{'role': 'developer', 'content': 'You are a customer agent to help the users with their concerns.\n\nYou are an agent. Your internal name is "name_of_agent".'}, {'role': 'user', 'content': 'What is the capital of India.'}], tools=None, response_format=None)
```

**Testing**

The following unit test covers the applied changes and is added at the location suggested in the guidelines: adk-python\tests\unittests\models\test_base_llm.py

```python
import pytest
from google.genai import types
from google.adk.models.llm_request import LlmRequest
from google.adk.models.lite_llm import _get_completion_inputs


@pytest.mark.parametrize("content_kwargs", [
    # Case 1: Explicit role provided
    {"role": "user", "parts": [types.Part(text="This is an input text from user.")]},
    # Case 2: Role omitted, should still be treated as 'user'
    {"parts": [types.Part(text="This is an input text from user.")]},
])
def test_user_content_role_defaults_to_user(content_kwargs):
    """Verifies that user-provided messages are always passed to the LLM
    with the 'user' role, regardless of whether the role is explicitly set
    in types.Content.

    The helper `_get_completion_inputs` should normalize messages so that
    an explicit 'user' role and an implicit (missing) role are equivalent.
    """
    llm_request = LlmRequest(
        contents=[types.Content(**content_kwargs)],
        config=types.GenerateContentConfig(),
    )
    messages, _, _, _ = _get_completion_inputs(llm_request)

    assert all(
        msg.get("role") == "user" for msg in messages
    ), f"Expected role 'user' but got {messages}"
    assert any(
        "This is an input text from user." == (msg.get("content") or "")
        for msg in messages
    ), f"Expected the user text to be preserved, but got {messages}"
```

COPYBARA_INTEGRATE_REVIEW=https://github.com/google/adk-python/pull/2458 from TanejaAnkisetty:bug/agent-user-content 381b01418d249b9e6bd91ebb518ff25339a8e47b

PiperOrigin-RevId: 809281620
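The normalization that the test above exercises can be illustrated with a plain-Python sketch, independent of ADK. `normalize_contents` and the dict shapes here are hypothetical stand-ins for the real `_get_completion_inputs` logic, not the library's actual code:

```python
def normalize_contents(contents):
    """Convert content dicts to chat messages, defaulting a missing
    role to 'user' so user-supplied text is never dropped."""
    messages = []
    for content in contents:
        role = content.get("role") or "user"  # the key fix: default, don't discard
        text = "".join(part.get("text", "") for part in content.get("parts", []))
        messages.append({"role": role, "content": text})
    return messages

# Explicit and implicit roles produce the same message.
explicit = normalize_contents([{"role": "user", "parts": [{"text": "Hi"}]}])
implicit = normalize_contents([{"parts": [{"text": "Hi"}]}])
assert explicit == implicit == [{"role": "user", "content": "Hi"}]
```

The point of the fix is exactly this defaulting step: an absent role is treated as 'user' rather than triggering a placeholder message.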
Agent Development Kit (ADK)
An open-source, code-first Python toolkit for building, evaluating, and deploying sophisticated AI agents with flexibility and control.
Important Links: Docs, Samples, Java ADK & ADK Web.
Agent Development Kit (ADK) is a flexible and modular framework for developing and deploying AI agents. While optimized for Gemini and the Google ecosystem, ADK is model-agnostic, deployment-agnostic, and built for compatibility with other frameworks. ADK was designed to make agent development feel more like software development, making it easier for developers to create, deploy, and orchestrate agentic architectures that range from simple tasks to complex workflows.
🔥 What's new
- Agent Config: Build agents without code. Check out the Agent Config feature.
- Tool Confirmation: A tool confirmation (human-in-the-loop) flow that can guard tool execution with explicit confirmation and custom input.
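The idea behind tool confirmation can be pictured with a generic sketch of the HITL pattern. This is plain Python, not ADK's actual API; `guarded`, `confirm`, and `delete_record` are illustrative names:

```python
def guarded(tool, confirm):
    """Wrap a tool so it only runs after explicit confirmation."""
    def wrapper(*args, **kwargs):
        # Ask the confirmer (a human in a real deployment) before executing.
        if not confirm(tool.__name__, args, kwargs):
            return {"status": "rejected", "tool": tool.__name__}
        return {"status": "ok", "result": tool(*args, **kwargs)}
    return wrapper

def delete_record(record_id: str) -> str:
    """A destructive tool we want gated behind confirmation."""
    return f"deleted {record_id}"

# A real agent would surface this prompt to a human; here we auto-approve.
approve_all = lambda name, args, kwargs: True
safe_delete = guarded(delete_record, approve_all)
print(safe_delete("rec-42"))  # {'status': 'ok', 'result': 'deleted rec-42'}
```

A rejecting confirmer would short-circuit the call and return the `rejected` status instead of running the tool.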
✨ Key Features
- Rich Tool Ecosystem: Utilize pre-built tools, custom functions, OpenAPI specs, or integrate existing tools to give agents diverse capabilities, all for tight integration with the Google ecosystem.
- Code-First Development: Define agent logic, tools, and orchestration directly in Python for ultimate flexibility, testability, and versioning.
- Modular Multi-Agent Systems: Design scalable applications by composing multiple specialized agents into flexible hierarchies.
- Deploy Anywhere: Easily containerize and deploy agents on Cloud Run or scale seamlessly with Vertex AI Agent Engine.
🤖 Agent2Agent (A2A) Protocol and ADK Integration
For remote agent-to-agent communication, ADK integrates with the A2A protocol. See this example for how they can work together.
🚀 Installation
Stable Release (Recommended)
You can install the latest stable version of ADK using pip:
```bash
pip install google-adk
```
The release cadence is roughly bi-weekly.
This version is recommended for most users as it represents the most recent official release.
Development Version
Bug fixes and new features are merged into the main branch on GitHub first. If you need access to changes that haven't been included in an official PyPI release yet, you can install directly from the main branch:
```bash
pip install git+https://github.com/google/adk-python.git@main
```
Note: The development version is built directly from the latest code commits. While it includes the newest fixes and features, it may also contain experimental changes or bugs not present in the stable release. Use it primarily for testing upcoming changes or accessing critical fixes before they are officially released.
📚 Documentation
Explore the full documentation for detailed guides on building, evaluating, and deploying agents:
🌟 Feature Highlight
Define a single agent:

```python
from google.adk.agents import Agent
from google.adk.tools import google_search

root_agent = Agent(
    name="search_assistant",
    model="gemini-2.5-flash",  # Or your preferred Gemini model
    instruction="You are a helpful assistant. Answer user questions using Google Search when needed.",
    description="An assistant that can search the web.",
    tools=[google_search]
)
```
Define a multi-agent system:

Define a multi-agent system with a coordinator agent, a greeter agent, and a task execution agent. The ADK engine and the model then guide the agents to work together to accomplish the task.

```python
from google.adk.agents import LlmAgent, BaseAgent

# Define individual agents
greeter = LlmAgent(name="greeter", model="gemini-2.5-flash", ...)
task_executor = LlmAgent(name="task_executor", model="gemini-2.5-flash", ...)

# Create parent agent and assign children via sub_agents
coordinator = LlmAgent(
    name="Coordinator",
    model="gemini-2.5-flash",
    description="I coordinate greetings and tasks.",
    sub_agents=[  # Assign sub_agents here
        greeter,
        task_executor
    ]
)
```
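Conceptually, the coordinator delegates each request to the sub-agent best suited to handle it. A plain-Python sketch of that delegation pattern (not ADK's implementation; in ADK the model itself decides the routing, while here a simple keyword match stands in for it):

```python
class SimpleAgent:
    """A toy agent: a name plus a handler function."""
    def __init__(self, name, handler):
        self.name = name
        self.handler = handler

    def run(self, request: str) -> str:
        return self.handler(request)

class SimpleCoordinator:
    """Route a request to the first sub-agent whose keyword matches."""
    def __init__(self, routes):
        self.routes = routes  # list of (keyword, agent) pairs

    def run(self, request: str) -> str:
        for keyword, agent in self.routes:
            if keyword in request.lower():
                return agent.run(request)
        return "No agent could handle this request."

greeter = SimpleAgent("greeter", lambda r: "Hello! How can I help?")
task_executor = SimpleAgent("task_executor", lambda r: "Task done.")
coordinator = SimpleCoordinator([("hello", greeter), ("task", task_executor)])

print(coordinator.run("hello there"))  # Hello! How can I help?
print(coordinator.run("run my task"))  # Task done.
```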
Development UI
A built-in development UI to help you test, evaluate, debug, and showcase your agent(s).
Evaluate Agents
```bash
adk eval \
    samples_for_testing/hello_world \
    samples_for_testing/hello_world/hello_world_eval_set_001.evalset.json
```
🤝 Contributing
We welcome contributions from the community! Whether it's bug reports, feature requests, documentation improvements, or code contributions, please see our:
- General contribution guideline and flow.
- If you want to contribute code, please read the Code Contributing Guidelines to get started.
Vibe Coding
If you are developing agents via vibe coding, llms.txt and llms-full.txt can be used as context for an LLM. The former is a summarized version, while the latter contains the full information, in case your LLM has a big enough context window.
📄 License
This project is licensed under the Apache 2.0 License - see the LICENSE file for details.
Happy Agent Building!
