Part 1 of https://github.com/google/adk-python/discussions/3605.
This change adds a new schema that uses JSON serialization to store Events data in the database. A new "adk_internal_metadata" table is also added to store information such as the schema version. Since we want to keep supporting existing databases, we fork the original schema and call it "v0", while the new one is called "v1".
The change is a no-op for existing users. In a later change, the new schema will be used for new databases, and migration scripts will be provided for existing databases.
Co-authored-by: Liang Wu <wuliang@google.com>
PiperOrigin-RevId: 844986248
Merge https://github.com/google/adk-python/pull/3917
To migrate an existing database:
```sql
-- PostgreSQL
ALTER TABLE events ALTER COLUMN error_message TYPE TEXT;

-- MySQL
ALTER TABLE events MODIFY error_message TEXT;
```
SQLite doesn't enforce VARCHAR length limits, so no migration is needed.
### Link to Issue or Description of Change
**1. Link to an existing issue (if applicable):**
n/a
**2. Or, if no issue exists, describe the change:**
**Problem:**
When storing events with error messages longer than 1024 characters using `DatabaseSessionService`, PostgreSQL raises:
```
ERROR: value too long for type character varying(1024)
```
The `error_message` column in `StorageEvent` is defined as `String(1024)`, which maps to `VARCHAR(1024)`. Error messages can exceed 1024 characters.
**Solution:**
Change the column type from `String(1024)` to `Text` to allow unlimited length error messages.
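The one-line fix amounts to swapping the column type in the model definition. A minimal self-contained sketch (the real `StorageEvent` has many more columns; this model is illustrative only):

```python
from sqlalchemy import Column, Integer, Text, create_engine
from sqlalchemy.orm import Session, declarative_base

Base = declarative_base()


class StorageEvent(Base):
    """Illustrative stand-in for the real StorageEvent model."""

    __tablename__ = "events"

    id = Column(Integer, primary_key=True)
    # Before: Column(String(1024)) -> VARCHAR(1024); PostgreSQL rejects longer values.
    # After: Text maps to TEXT, which has no declared length limit.
    error_message = Column(Text, nullable=True)


engine = create_engine("sqlite:///:memory:")
Base.metadata.create_all(engine)

# A 5000-character error message now fits without truncation errors.
with Session(engine) as session:
    session.add(StorageEvent(id=1, error_message="x" * 5000))
    session.commit()
```

On PostgreSQL the same model emits `error_message TEXT` in the DDL, so values over 1024 characters are accepted.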
### Testing Plan
**Unit Tests:**
- [x] I have added or updated unit tests for my change.
- [x] All unit tests pass locally.
```
$ pytest ./tests/unittests/sessions/ -v
======================= 75 passed, 3 warnings in 26.92s ========================
```
**Manual End-to-End (E2E) Tests:**
- Verified that events with long error messages (>1024 chars) can be stored in PostgreSQL
- Verified backward compatibility with existing databases
### Checklist
- [x] I have read the [CONTRIBUTING.md](https://github.com/google/adk-python/blob/main/CONTRIBUTING.md) document.
- [x] I have performed a self-review of my own code.
- [x] I have commented my code, particularly in hard-to-understand areas.
- [x] I have added tests that prove my fix is effective or that my feature works.
- [x] New and existing unit tests pass locally with my changes.
- [x] I have manually tested my changes end-to-end.
- [x] Any dependent changes have been merged and published in downstream modules.
### Additional context
This is a minimal change (1 line) that only affects the `error_message` column type definition.
Co-authored-by: Xiang (Sean) Zhou <seanzhougoogle@google.com>
COPYBARA_INTEGRATE_REVIEW=https://github.com/google/adk-python/pull/3917 from hiroakis:main 1474fd552cdbd7206de383e5507fd8a733aecda1
PiperOrigin-RevId: 844845692
This change ensures that file URI parts passed to LiteLLM always include a "format" field. If `mime_type` is not explicitly provided in `FileData`, the system attempts to infer it from the URI's file extension. If inference fails, a default "application/octet-stream" is used. This is necessary because LiteLLM's Vertex AI backend requires the "format" field for GCS URIs.
Close #3787
Co-authored-by: George Weale <gweale@google.com>
PiperOrigin-RevId: 843753810
Merge https://github.com/google/adk-python/pull/2870
## Summary
Add `token_endpoint_auth_method` field to the `OAuth2Auth` class to allow configuring OAuth2 token endpoint authentication methods. This enables users to specify how the client should authenticate with the authorization server's token endpoint.
• Add `token_endpoint_auth_method` field to `OAuth2Auth` with default value `"client_secret_basic"`
• Update `create_oauth2_session()` to pass the authentication method to `OAuth2Session`
• Maintain backward compatibility with existing OAuth2 configurations
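The data flow can be illustrated with a hypothetical minimal mirror of the fields involved (the real `OAuth2Auth` is a pydantic model, and `create_oauth2_session()` builds an Authlib `OAuth2Session`; this sketch only shows how the new field is forwarded):

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class OAuth2Auth:
    """Hypothetical mirror of the fields relevant to this change."""

    client_id: str
    client_secret: str
    token_endpoint_auth_method: Optional[str] = "client_secret_basic"


SUPPORTED_AUTH_METHODS = frozenset({
    "client_secret_basic",  # credentials in the Authorization header (default)
    "client_secret_post",   # credentials in the request body
    "client_secret_jwt",    # JWT signed with the client secret
    "private_key_jwt",      # JWT signed with a private key
})


def session_kwargs(auth: OAuth2Auth) -> dict:
    """Build the kwargs a create_oauth2_session() would forward to OAuth2Session."""
    method = auth.token_endpoint_auth_method or "client_secret_basic"
    if method not in SUPPORTED_AUTH_METHODS:
        raise ValueError(f"Unsupported token endpoint auth method: {method}")
    return {
        "client_id": auth.client_id,
        "client_secret": auth.client_secret,
        "token_endpoint_auth_method": method,
    }
```

Because the field defaults to `"client_secret_basic"`, configurations that never set it behave exactly as before.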
## Unit Tests
Added unit test coverage with 3 new test methods:
1. `test_create_oauth2_session_with_token_endpoint_auth_method()` - Tests explicit auth method setting (`client_secret_post`)
2. `test_create_oauth2_session_with_default_token_endpoint_auth_method()` - Tests default behavior (`client_secret_basic`)
3. `test_create_oauth2_session_oauth2_scheme_with_token_endpoint_auth_method()` - Tests with OAuth2 scheme using `client_secret_jwt`
**Test Results:**
✅ 16/16 OAuth2 credential utility tests passed
✅ 240/240 auth module tests passed (no regressions)
✅ Tests cover both GOOGLE_AI and VERTEX variants
✅ Pylint score: 9.41/10
## Changes Made
**src/google/adk/auth/auth_credential.py**
- Added `token_endpoint_auth_method: Optional[str] = "client_secret_basic"` to `OAuth2Auth` class
**src/google/adk/auth/oauth2_credential_util.py**
- Updated `create_oauth2_session()` to pass `token_endpoint_auth_method` parameter to `OAuth2Session`
**tests/unittests/auth/test_oauth2_credential_util.py**
- Added 3 comprehensive test methods covering different authentication scenarios
## Backward Compatibility
✅ **Non-breaking change** - All existing OAuth2 configurations continue to work unchanged with the default `client_secret_basic` authentication method.
## Supported Authentication Methods
- `client_secret_basic` (default) - Client credentials in Authorization header
- `client_secret_post` - Client credentials in request body
- `client_secret_jwt` - JWT with client secret
- `private_key_jwt` - JWT with private key
Co-authored-by: Xiang (Sean) Zhou <seanzhougoogle@google.com>
COPYBARA_INTEGRATE_REVIEW=https://github.com/google/adk-python/pull/2870 from sully90:feat/oauth2-token-endpoint-auth-method 04fe8244598f96b4e3366f0fc79382628382e9c2
PiperOrigin-RevId: 843739984
LiteLLM's StreamHandlers output to stderr by default. In cloud environments like GCP, stderr output is treated as ERROR severity regardless of actual log level, causing INFO-level logs to be incorrectly classified as errors.
This change redirects LiteLLM loggers to stdout in two places:
- In `lite_llm.py`: Immediately after litellm import
- In `logs.py`: When `setup_adk_logger()` is called (with guard to check if litellm is imported)
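The redirection can be sketched with the standard `logging` module; the helper name and the `"LiteLLM"` logger name below are assumptions for illustration:

```python
import logging
import sys


def redirect_logger_to_stdout(name: str) -> None:
    """Point a logger's plain stream handlers at stdout instead of stderr."""
    logger = logging.getLogger(name)
    for handler in logger.handlers:
        # FileHandler subclasses StreamHandler; only retarget real stderr streams.
        if type(handler) is logging.StreamHandler and handler.stream is sys.stderr:
            handler.setStream(sys.stdout)
    if not logger.handlers:
        # No handlers yet: install one that writes to stdout.
        logger.addHandler(logging.StreamHandler(sys.stdout))


# Guarded call mirroring the logs.py approach: only act if litellm was imported.
if "litellm" in sys.modules:
    redirect_logger_to_stdout("LiteLLM")  # logger name is an assumption
```

With handlers on stdout, GCP's log router classifies the entries by their actual level instead of forcing ERROR severity.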
Close #3824
Co-authored-by: George Weale <gweale@google.com>
PiperOrigin-RevId: 843393874
LiteLLM's `ollama_chat` provider does not accept array-based content in messages. This change flattens multipart content by joining text parts or JSON-serializing non-text parts before sending the request to the LiteLLM completion API. This ensures compatibility with Ollama's chat endpoint.
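A minimal sketch of the flattening step (helper name and exact part layout are illustrative; the real code operates on LiteLLM message dicts):

```python
import json


def flatten_content(content):
    """Flatten array-based message content into one string for ollama_chat.

    Text parts are joined with newlines; non-text parts are JSON-serialized
    so their information is not silently dropped.
    """
    if isinstance(content, str):
        return content  # already flat, nothing to do
    pieces = []
    for part in content:
        if isinstance(part, dict) and part.get("type") == "text":
            pieces.append(part["text"])
        else:
            pieces.append(json.dumps(part))
    return "\n".join(pieces)
```

The flattened string is then sent as the message `content`, which Ollama's chat endpoint accepts.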
Close #3727
Co-authored-by: George Weale <gweale@google.com>
PiperOrigin-RevId: 843382361
The `_to_litellm_response_format` function now adapts the output format based on the provided model. Gemini models continue to use the "response_schema" key, while OpenAI-compatible models (including Azure OpenAI and Anthropic) now use the "json_schema" key as per LiteLLM's documentation for JSON mode. The schema name is also included in the "json_schema" format.
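The branching can be sketched as below; the exact key layout follows LiteLLM's documented JSON-mode shapes, but the function name and schema-name default here are assumptions:

```python
def to_response_format(model: str, schema: dict, name: str = "response") -> dict:
    """Pick the response_format shape LiteLLM expects for the given model (sketch)."""
    if "gemini" in model.lower():
        # Gemini models take the schema directly under "response_schema".
        return {"type": "json_object", "response_schema": schema}
    # OpenAI-compatible models (incl. Azure OpenAI, Anthropic) use "json_schema",
    # which also carries a schema name.
    return {
        "type": "json_schema",
        "json_schema": {"name": name, "schema": schema},
    }
```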
Close #3713
Close #3890
Co-authored-by: George Weale <gweale@google.com>
PiperOrigin-RevId: 843326850
Explicitly resolve the GCP project from arguments or environment variables before calling `spanner.Client`. This avoids redundant calls to `google.auth.default()` that newer versions of the Spanner library might make.
Co-authored-by: George Weale <gweale@google.com>
PiperOrigin-RevId: 843320305
- Performance: switched to the BigQuery Storage Write API with async batching, reducing agent latency.
- Multimodal: native support for GCS offloading (ObjectRef) for images, video, and large text.
- Reliability: added connection pooling, retries, and a "rescue flush" for safe shutdown on Cloud Run.
- Observability: fixed the distributed tracing hierarchy with ContextVars support.
PiperOrigin-RevId: 843062561
This change introduces an `add_session_to_memory` method to both `CallbackContext` and `ToolContext`, allowing agents and tools to explicitly trigger saving of the current session to the memory service. This enables finer-grained control over when session data is persisted for memory generation. A `ValueError` is raised if the memory service is not configured.
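The behavior can be mirrored with a hypothetical minimal version of the context class (not the real ADK classes; only the error path and the delegation to the memory service are modeled):

```python
import asyncio


class CallbackContext:
    """Hypothetical minimal mirror of the new method; not the real ADK class."""

    def __init__(self, session, memory_service=None):
        self._session = session
        self._memory_service = memory_service

    async def add_session_to_memory(self):
        # Mirrors the documented behavior: fail loudly when memory is unconfigured.
        if self._memory_service is None:
            raise ValueError("Memory service is not configured.")
        await self._memory_service.add_session_to_memory(self._session)


class RecordingMemoryService:
    """Toy memory service that just records the sessions it receives."""

    def __init__(self):
        self.saved = []

    async def add_session_to_memory(self, session):
        self.saved.append(session)


memory = RecordingMemoryService()
ctx = CallbackContext(session="session-1", memory_service=memory)
asyncio.run(ctx.add_session_to_memory())
```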
Co-authored-by: George Weale <gweale@google.com>
PiperOrigin-RevId: 843021899
Merge https://github.com/google/adk-python/pull/3875
# Problem
The example in `contributing/samples/human_in_loop/README.md` shows:
```python
await runner.run_async(...)
```
However, `run_async` returns an **async generator**, so awaiting it raises:
```
TypeError: object async_generator can't be used in 'await' expression
```
Additionally, the example payload uses `"ticket-id"` while ADK tools and other examples use `"ticketId"`, creating a mismatch that breaks copy/paste usage.
# Solution
- Updated the snippet to consume the async generator correctly:
```python
async for event in runner.run_async(...):
...
```
- Aligned the payload key from `"ticket-id"` → `"ticketId"` for consistency with ADK schema and other examples.
These changes make the example runnable and consistent with the API’s actual behavior.
# Testing Plan
This PR is a **small documentation correction**, so no unit tests are required per contribution guidelines.
- Verified the corrected snippet manually to ensure it no longer raises `TypeError`.
# Checklist
- [x] I have read the CONTRIBUTING.md document.
- [x] I have performed a self-review of my own code.
- [ ] I have commented my code, particularly in hard-to-understand areas. *(N/A – docs only)*
- [ ] I have added tests that prove my fix is effective or that my feature works. *(N/A – docs only)*
- [ ] New and existing unit tests pass locally with my changes. *(N/A – docs only)*
- [x] I have manually tested my changes end-to-end.
- [ ] Any dependent changes have been merged and published in downstream modules. *(N/A)*
COPYBARA_INTEGRATE_REVIEW=https://github.com/google/adk-python/pull/3875 from krishna-dhulipalla:docs/fix-adk-run_async-example 83fc5b430690b63b8b7bf1025ef03b0761264751
PiperOrigin-RevId: 842952362