adk-python/tests/integration
Ankur Sharma 04de3e197d fix: Adding detailed information on each metric evaluation
Additionally, a few other small changes:
*   Updated a test fixture to support the latest eval data schema, which was previously missed.
*   Updated `evaluation_generator.py` to use `run_async` instead of `run`.
*   Raise an informative error when the dependencies required for eval are not installed.
*   Changed the behavior of the `AgentEvaluator.evaluate` method to run all the evals instead of failing at the first eval metric failure.
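The last change, collecting all metric failures before raising instead of stopping at the first one, can be sketched roughly as follows. This is a minimal illustration, not the actual ADK implementation; the names `evaluate_all`, `check`, and the metric strings are hypothetical:

```python
def evaluate_all(metrics, evaluate_metric):
    """Run every metric evaluation, collecting failures instead of
    raising on the first one (hypothetical sketch, not the ADK API)."""
    failures = []
    for metric in metrics:
        try:
            evaluate_metric(metric)
        except AssertionError as exc:
            # Record the failure and keep evaluating remaining metrics.
            failures.append((metric, str(exc)))
    if failures:
        # Surface every failed metric at once, with per-metric detail.
        details = "\n".join(f"{m}: {msg}" for m, msg in failures)
        raise AssertionError(f"{len(failures)} metric(s) failed:\n{details}")

# Example with toy metrics: one passes, one fails.
def check(metric):
    if metric == "tool_trajectory":
        raise AssertionError("score 0.4 below threshold 0.8")

try:
    evaluate_all(["response_match", "tool_trajectory"], check)
except AssertionError as err:
    summary = str(err)
```

With this shape, a run reports every failing metric in one error message rather than hiding later failures behind the first.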

PiperOrigin-RevId: 775919127
2025-06-25 18:32:02 -07:00