Merge https://github.com/google/adk-python/pull/2937
**Closes #2936**
This Pull Request addresses the issue where `LlmAgent` outputs, when configured with `output_schema` and `tools`, were presenting escaped Latin characters (e.g., `\xf3` for `ó`) in the final response. This behavior occurred because `json.dumps` was being called with `ensure_ascii=True` (its default), which is not ideal for human-readable output, especially when dealing with non-ASCII characters common in many languages like Portuguese.
**Changes Proposed:**
* Modified the `_OutputSchemaRequestProcessor` in `src/google/adk/flows/llm_flows/_output_schema_processor.py` to explicitly set `ensure_ascii=False` when calling `json.dumps` for the `set_model_response` tool's output.
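The difference is reproducible with the standard library alone (illustrative values, not the actual tool output):

```python
import json

data = {"word": "ação", "truck": "caminhão"}

# Default ensure_ascii=True escapes every non-ASCII character.
escaped = json.dumps(data)
# ensure_ascii=False preserves the characters in their natural form.
preserved = json.dumps(data, ensure_ascii=False)

print(escaped)    # {"word": "a\u00e7\u00e3o", "truck": "caminh\u00e3o"}
print(preserved)  # {"word": "ação", "truck": "caminhão"}
```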
**Impact:**
This change ensures that all non-ASCII characters in the structured model response are preserved in their natural form, improving the readability and user experience of agent outputs, particularly for users interacting in languages with accented characters or other special symbols.
**Testing:**
The fix was verified locally by running an `LlmAgent` with an `output_schema` and confirming that responses containing Latin characters (e.g., "ação", "caminhão", "ícone") are now correctly displayed without escaping.
COPYBARA_INTEGRATE_REVIEW=https://github.com/google/adk-python/pull/2937 from amenegola:fix/issue-2936-escape-chars 6cac00f97aa4cd8d8ccaa97ec5fffc74f57995dc
PiperOrigin-RevId: 808622892
This commit introduces a new ContextFilterPlugin which allows for filtering the LlmRequest contents before they are sent to the LLM. This helps in managing and potentially reducing the size of the LLM context.
The plugin provides two primary filtering mechanisms:
* `num_invocations_to_keep`: Keeps only the specified number of the most recent user-model invocations. An invocation is defined as one or more user messages followed by a model response.
* `custom_filter`: Allows a user-defined callable to be applied to the contents for more flexible filtering.
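As a rough illustration of the `num_invocations_to_keep` semantics (a sketch only, not the plugin's actual implementation; the list-of-dicts content shape is an assumption):

```python
def keep_last_invocations(contents, num_invocations_to_keep):
    """Keep only the most recent N user-model invocations.

    An invocation is one or more consecutive "user" messages followed by a
    "model" response; a trailing user turn with no model reply yet counts
    toward the most recent invocation.
    """
    if num_invocations_to_keep is None:
        return contents
    model_seen = 0
    start = 0
    # Walk backwards; each "model" message closes one invocation.
    for i in range(len(contents) - 1, -1, -1):
        if contents[i]["role"] == "model":
            model_seen += 1
            if model_seen == num_invocations_to_keep:
                # Include the user messages preceding this model reply.
                start = i
                while start > 0 and contents[start - 1]["role"] == "user":
                    start -= 1
                break
    return contents[start:]
```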
Unit tests have been added to cover the different filtering scenarios, including:
Filtering by the last N invocations.
Filtering using a custom function.
Combining both filtering methods.
Handling cases with multiple user turns in a single invocation.
Ensuring no filtering occurs when options are not provided.
Gracefully handling exceptions from custom filter functions.
For example, when `num_invocations_to_keep=2`:
-----------------------------------------------------------
Contents:
{"parts":[{"text":"9"}],"role":"user"}
{"parts":[{"text":"I am sorry, I cannot fulfill this request. I need more information on what you would like me to do. I can roll a die or check prime numbers.\n"}],"role":"model"}
{"parts":[{"text":"1"}],"role":"user"}
{"parts":[{"text":"I am sorry, I cannot fulfill this request. I need more information on what you would like me to do. I can roll a die or check prime numbers.\n"}],"role":"model"}
{"parts":[{"text":"10"}],"role":"user"}
-----------------------------------------------------------
PiperOrigin-RevId: 808355316
Cloud Trace, Cloud Monitoring, and Cloud Logging integrations are set up via OTel if the otel_to_cloud CLI param / fast_api arg is provided.
This is similar to the existing Cloud Trace integration via trace_to_cloud, just extended to Monitoring and Logging as well.
PiperOrigin-RevId: 807385680
Cloud Trace, Cloud Monitoring, and Cloud Logging integrations are set up via OTel if the otel_to_cloud CLI param / fast_api arg is provided.
This is similar to the existing Cloud Trace integration via trace_to_cloud, just extended to Monitoring and Logging as well.
PiperOrigin-RevId: 807285744
Cloud Trace, Cloud Monitoring, and Cloud Logging integrations are set up via OTel if the otel_to_cloud CLI param / fast_api arg is provided.
This is similar to the existing Cloud Trace integration via trace_to_cloud, just extended to Monitoring and Logging as well.
PiperOrigin-RevId: 807230668
The similarity search tool supports similarity search on Spanner data by embedding a text query into a vector and running a vector search with the embedded vector.
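A minimal sketch of that two-step flow (the `embed_text`/`run_sql` callables and schema names here are hypothetical; `COSINE_DISTANCE` is the Spanner GoogleSQL vector distance function):

```python
def build_similarity_query(table: str, embedding_column: str, top_k: int) -> str:
    # Spanner GoogleSQL exposes COSINE_DISTANCE for vector comparisons;
    # the query vector is bound to the @query_embedding parameter.
    return (
        f"SELECT *, COSINE_DISTANCE({embedding_column}, @query_embedding)"
        f" AS distance FROM {table} ORDER BY distance LIMIT {top_k}"
    )

def similarity_search(query_text, embed_text, run_sql, table, column, top_k=5):
    # 1. Embed the text query into a vector.
    vector = embed_text(query_text)
    # 2. Run vector search with the embedded vector.
    sql = build_similarity_query(table, column, top_k)
    return run_sql(sql, params={"query_embedding": vector})
```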
PiperOrigin-RevId: 806502499
A recent change in the A2A Client SDK broke the logging utilities. This updates those logging utilities to work with the new A2A SDK structure.
PiperOrigin-RevId: 806482017
Right now the tools always run against the US multi-region by default. With this change the agent builder can scope the tools to data and compute in a particular BigQuery location.
PiperOrigin-RevId: 806473857
The new test verifies that `output_audio_transcription` and `input_audio_transcription` attributes are unique to each `RunConfig` instance, preventing unintended side effects from modifying one instance.
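The pitfall being guarded against is shared mutable defaults. ADK's `RunConfig` is a Pydantic model, but the principle can be shown with a stdlib dataclass (field names borrowed from the test, value shapes simplified):

```python
from dataclasses import dataclass, field

@dataclass
class RunConfig:
    # default_factory builds a fresh object per instance; a single
    # class-level default would be shared (and mutated) across instances.
    output_audio_transcription: dict = field(default_factory=dict)
    input_audio_transcription: dict = field(default_factory=dict)

a, b = RunConfig(), RunConfig()
a.output_audio_transcription["enabled"] = True
# b is unaffected: each instance got its own dict.
```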
PiperOrigin-RevId: 806405671
Both are valid YAML; with indentation, it is visually easier to see the data structure hierarchy.
Before
```
items:
- item1
- item2
- item3
```
After
```
items:
  - item1
  - item2
  - item3
```
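With PyYAML, the default `Dumper` emits block sequences flush with their key; the indented style can be produced by overriding `increase_indent` (a common recipe, shown here as a sketch):

```python
import yaml

class IndentDumper(yaml.Dumper):
    # Force indentless=False so sequences nested in mappings are indented.
    def increase_indent(self, flow=False, indentless=False):
        return super().increase_indent(flow, False)

data = {"items": ["item1", "item2", "item3"]}
print(yaml.dump(data))                       # dashes flush with the key
print(yaml.dump(data, Dumper=IndentDumper))  # dashes indented two spaces
```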
PiperOrigin-RevId: 806117290
The old live/bidi agents use a cache to store context/history during agent transfer, etc. Now that session support has been added for live/bidi, we are migrating the context/history cache to it. This improves scalability, efficiency, and maintainability.
It introduces several changes:
* AudioTranscriber support is removed, as we now use native transcription from the models.
* Transcription is returned in the input_transcription/output_transcription fields and no longer as contents.
* We return a new event with artifact references of file type audio/pcm, in addition to the existing audio response event, so users of this API need to filter accordingly.
PiperOrigin-RevId: 805997675
Details:
- We plan on introducing Rubric based metrics in subsequent changes. This change introduces the data model needed that allows agent developer to provide rubrics.
- We also introduce a data model for the config that the eval system has used for quite some time. It was loosely and informally described as a dictionary of metric names and expected thresholds. In this change, we formalize it using a Pydantic data model and extend it to allow developers to specify rubrics as part of their eval config.
What is a rubric based metric?
A rubric-based metric is an assessment of an agent's response (final or intermediate) against some rubric. This evaluation of the agent's response differs significantly from the strategy where one has to provide a golden response.
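A hypothetical sketch of what such a data model could look like (plain dataclasses for brevity; the actual ADK models are Pydantic and may be shaped differently):

```python
from dataclasses import dataclass, field

@dataclass
class Rubric:
    rubric_id: str
    # A property the agent's response is judged against.
    description: str

@dataclass
class MetricSpec:
    metric_name: str
    threshold: float = 0.0
    # Populated only for rubric-based metrics.
    rubrics: list = field(default_factory=list)

@dataclass
class EvalConfig:
    metrics: list = field(default_factory=list)

config = EvalConfig(metrics=[
    MetricSpec("tool_trajectory_avg_score", threshold=1.0),
    MetricSpec("rubric_based_quality", threshold=0.8,
               rubrics=[Rubric("r1", "Response is grounded in retrieved data.")]),
])
```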
PiperOrigin-RevId: 805488436
These tests verify that `ValueError` is raised when `Runner` is initialized without providing either an `app` instance or both `app_name` and `agent`.
PiperOrigin-RevId: 805427256
Use the A2A Python SDK for A2A remote client support. This enables A2A-based agents that use gRPC or RESTful interfaces, as well as JSON-RPC. It also simplifies client creation and provides simpler mechanisms to inject credentials and observability into remote agent interactions.
PiperOrigin-RevId: 804711466
Changed default values for `session_service`, `artifact_service`, and `run_config` from instances of mutable classes to `None`. Instances are now created within the function body if the argument is not provided, preventing unexpected shared state across function calls.
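The pattern in use (argument names echo the ADK ones; the service class here is a stand-in):

```python
class InMemorySessionService:
    def __init__(self):
        self.sessions = {}

def run(session_service=None, artifact_service=None, run_config=None):
    # A default of InMemorySessionService() in the signature would be
    # evaluated once at function definition time and shared by all calls.
    if session_service is None:
        session_service = InMemorySessionService()
    return session_service

assert run() is not run()      # fresh instance per call
shared = InMemorySessionService()
assert run(shared) is shared   # an explicit instance is respected
```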
PiperOrigin-RevId: 804624564
The system instructions for agent transfer now include a NOTE section that lists all agents available to the `transfer_to_agent` function, covering the target sub-agents and, where applicable, the parent agent. New unit tests verify the correct generation of this NOTE.
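A toy sketch of assembling such a NOTE (the helper name and wording are invented; the real ADK instruction template differs):

```python
def build_transfer_note(sub_agent_names, parent_agent_name=None):
    # List every agent that `transfer_to_agent` may target, including
    # the parent agent when transferring back up the tree is allowed.
    targets = list(sub_agent_names)
    if parent_agent_name is not None:
        targets.append(parent_agent_name)
    return ("NOTE: you can transfer to the following agents via "
            "`transfer_to_agent`: " + ", ".join(targets))
```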
PiperOrigin-RevId: 804569691
Changed default values for `session_service`, `artifact_service`, and `run_config` from instances of mutable classes to `None`. Instances are now created within the function body if the argument is not provided, preventing unexpected shared state across function calls.
PiperOrigin-RevId: 804560641