Checked with a local wheel; it worked as intended. The harness shows suppression works: 0 warnings for all true-like values.
This CL adds ADK_DISABLE_EXPERIMENTAL_WARNING to let users suppress warning messages from features decorated with @experimental.
Previously, using experimental features always triggered a UserWarning. This change adds a way to disable these warnings, which helps avoid flooding logs.
The warning is suppressed if ADK_DISABLE_EXPERIMENTAL_WARNING is set to a truthy value such as "true", "1", "yes", or "on" (case-insensitive).
Added unit tests to verify:
- Warning suppression for functions and classes when the env var is set.
- Case-insensitivity and various truthy values for the env var.
- Loading the env var from a .env file.
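The suppression check described above can be sketched as follows. This is an illustrative helper, not ADK's actual implementation; the function names are assumptions, but the env var name and truthy values come from this CL.

```python
import os
import warnings

# Truthy values accepted for ADK_DISABLE_EXPERIMENTAL_WARNING (case-insensitive).
_TRUTHY = {"true", "1", "yes", "on"}

def experimental_warnings_disabled() -> bool:
    """Return True when ADK_DISABLE_EXPERIMENTAL_WARNING is set to a truthy value."""
    value = os.environ.get("ADK_DISABLE_EXPERIMENTAL_WARNING", "")
    return value.strip().lower() in _TRUTHY

def maybe_warn_experimental(feature_name: str) -> None:
    """Emit the experimental-feature warning unless suppressed (sketch of the
    decorator's behavior; the real warning text may differ)."""
    if not experimental_warnings_disabled():
        warnings.warn(f"{feature_name} is experimental.", UserWarning)
```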
PiperOrigin-RevId: 796649404
* feat: adding build image to deploy cloud_run options
Gives users the ability to set the build image for the Cloud Run deployment step. Currently it is hard-coded to python:3.11-slim; that remains the default, but this change allows the value to be overridden.
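The default-with-override behavior can be sketched as below. The helper name and signature are hypothetical; only the python:3.11-slim default comes from this commit.

```python
# Default build image used for Cloud Run deployment when none is supplied.
DEFAULT_BUILD_IMAGE = "python:3.11-slim"

def resolve_build_image(user_image=None):
    """Return the user-supplied build image, falling back to the default.

    Hypothetical helper illustrating the override; the actual CLI option
    name in adk-python may differ.
    """
    return user_image or DEFAULT_BUILD_IMAGE
```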
* fix: applied formatting scripts
Testing:
Added tests to ensure the CLI's behavior remains consistent whether the option is used or omitted.
* chore: next time run the formatter before you commit.
---------
Co-authored-by: Ivan Cheung <ivans.mailbox@gmail.com>
For the Vertex model backend, we send the response back. This doesn't work for streaming tools whose return type is AsyncGenerator, so the fix here is to ignore the return type when it is AsyncGenerator.
We can't distinguish streaming from non-streaming tools by the AsyncGenerator return type alone, though, since LiveRequestQueue is optional in streaming tools.
Adds an `ignore_response` option to `build_function_declaration` to skip including the return type in the function declaration. This is enabled for tools that return `AsyncGenerator`, as the model does not yet support understanding these return types, while streaming tools can still handle them. Also, removes redundant return statements in `_get_mandatory_params`.
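Detecting an AsyncGenerator return annotation, which would drive the `ignore_response` option described above, can be sketched like this. The helper and tool names are illustrative, not the actual code in `build_function_declaration`.

```python
import collections.abc
import typing

def returns_async_generator(func) -> bool:
    """Return True when a tool function's return annotation is an
    AsyncGenerator, so the declaration builder can skip the return type."""
    hints = typing.get_type_hints(func)
    return typing.get_origin(hints.get("return")) is collections.abc.AsyncGenerator

async def streaming_tool() -> typing.AsyncGenerator[str, None]:
    # A streaming tool yields chunks instead of returning a single value.
    yield "chunk"

def regular_tool() -> str:
    return "done"
```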
PiperOrigin-RevId: 794392846
This addresses name conflicts between tools returned by different toolsets; essentially, it gives each toolset a namespace.
A flag `add_tool_name_prefix` decides whether to apply this behavior.
A `tool_name_prefix` option lets the client specify a custom prefix; if not set, the toolset name is used as the prefix.
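The prefixing rule above can be sketched as a small helper. The underscore separator and the function name are assumptions; the flag and option names come from this CL.

```python
def namespaced_tool_name(tool_name, toolset_name,
                         add_tool_name_prefix=True,
                         tool_name_prefix=None):
    """Prefix a tool name with its toolset's namespace (illustrative sketch;
    ADK's actual separator and internals may differ)."""
    if not add_tool_name_prefix:
        return tool_name
    # Prefer the client-specified prefix, else fall back to the toolset name.
    prefix = tool_name_prefix or toolset_name
    return f"{prefix}_{tool_name}"
```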
PiperOrigin-RevId: 794306796
Merge https://github.com/google/adk-python/pull/1815
fix: path parameter extraction for complex Google API endpoints
- Fix GoogleApiToOpenApiConverter to handle path parameters in complex endpoints like /v1/documents/{documentId}:batchUpdate
- Use Google Discovery Document 'location' field
- Add comprehensive test suite for Google Docs batchUpdate functionality
- Verify parameter location handling for complex endpoint patterns
- Test schema validation for BatchUpdateDocumentRequest/Response
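Extracting path parameters from custom-method endpoints like the one named above can be sketched with a regex. This is an illustrative helper, not the converter's actual code.

```python
import re

def extract_path_params(path: str) -> list:
    """Pull {param} placeholders out of a REST path, including custom-method
    paths with a trailing :verb such as /v1/documents/{documentId}:batchUpdate."""
    return re.findall(r"\{([^}]+)\}", path)
```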
COPYBARA_INTEGRATE_REVIEW=https://github.com/google/adk-python/pull/1815 from goldylocks87:fix-issue-1814-path-parameter-extraction af5508ec6975b1ccbc34931a0041e422ee259c16
PiperOrigin-RevId: 794301898
The Spanner toolset supports basic operations for interacting with Spanner table metadata and query results.
Consolidates BigQueryTool into a generic GoogleTool so that BigQueryToolset and SpannerToolset can share it.
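A minimal sketch of the kind of generic tool wrapper both toolsets could share is shown below. The class shape, attribute names, and signature are assumptions for illustration only, not ADK's actual GoogleTool API.

```python
class GoogleTool:
    """Generic tool wrapping a callable plus optional credentials config,
    of the kind BigQueryToolset and SpannerToolset could both build on
    (illustrative sketch, not the real ADK class)."""

    def __init__(self, func, credentials_config=None):
        self.name = func.__name__
        self.func = func
        self.credentials_config = credentials_config

    def run(self, **kwargs):
        # Delegate to the wrapped callable; product-specific toolsets
        # supply their own functions and credentials.
        return self.func(**kwargs)
```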
PiperOrigin-RevId: 794259782
1. Allow developers to specify output schema and tools together.
2. If both are specified, do the following:
2.1 Do not set output schema on the model config
2.2 Add a special tool called set_model_response(result)
2.3 `result` has the same schema as the requested output_schema
2.4 Instruct the model to use set_model_response() to output its final result, rather than outputting text directly.
2.5 When set_model_response() is called, ADK extracts its content and puts it in a text part, so the client treats it as the model response.
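The set_model_response flow in steps 2.2-2.5 can be sketched as below, assuming a dataclass stands in for the requested output_schema. The schema fields and serialization are illustrative; ADK's actual handling may differ.

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class ModelOutput:
    """Stands in for the developer-requested output_schema (hypothetical fields)."""
    answer: str
    confidence: float

def set_model_response(result: dict) -> str:
    """Sketch of the special tool: validate the model's result against the
    output schema and serialize it as the text part the client receives."""
    validated = ModelOutput(**result)  # raises TypeError on unexpected fields
    return json.dumps(asdict(validated))
```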
PiperOrigin-RevId: 792686011