
Agents release notes
September 2025
September 25, 2025
Introducing file support in Agents
UiPath Agents now support native file handling through the new Analyze Attachments built-in tool. This enables agents to process files, like images and documents, directly in their workflows.
With this capability, agents can accept files as input arguments and leverage LLMs to analyze their content. Based on natural language instructions, agents can extract information and interpret visual elements, returning structured, context-aware responses back into the agent's context. The following file types are currently supported: GIF, JPE, JPEG, PDF, PNG, WEBP.
File support unlocks a variety of new use cases, including image comparison (spotting differences in marketing assets or product images) and signature verification (assisting in fraud detection by comparing scanned signatures), among many others.
This feature is currently in public preview.
For details, refer to Built-in tools.
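To make the idea concrete, the following sketch illustrates the general pattern described above: an attachment plus a natural-language instruction is packaged for a vision-capable LLM that returns a structured analysis. This is an illustrative example only, not the Analyze Attachments implementation or the UiPath Agents API; the helper function, payload shape, and file names are assumptions for demonstration.

```python
import base64
import json
import mimetypes
from pathlib import Path

# Illustrative sketch only: NOT the UiPath Agents API. It shows the general
# pattern described in the release note -- pass a supported file plus a
# natural-language instruction to a vision-capable LLM for analysis.

SUPPORTED_TYPES = {".gif", ".jpe", ".jpeg", ".pdf", ".png", ".webp"}

def build_analysis_request(file_path: str, instruction: str) -> dict:
    """Package an attachment and an instruction as a generic LLM request body."""
    path = Path(file_path)
    if path.suffix.lower() not in SUPPORTED_TYPES:
        raise ValueError(f"Unsupported attachment type: {path.suffix}")

    mime_type = mimetypes.guess_type(path.name)[0] or "application/octet-stream"
    encoded = base64.b64encode(path.read_bytes()).decode("ascii")

    # A common "content parts" message shape used by vision LLM APIs;
    # the actual wire format used by the built-in tool is not documented here.
    return {
        "messages": [
            {
                "role": "user",
                "content": [
                    {"type": "text", "text": instruction},
                    {"type": "image_url",
                     "image_url": {"url": f"data:{mime_type};base64,{encoded}"}},
                ],
            }
        ]
    }

if __name__ == "__main__":
    request = build_analysis_request(
        "scanned_signature.png",  # hypothetical input file
        "Compare this signature to the reference on file and flag mismatches.",
    )
    print(json.dumps(request, indent=2)[:500])
```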
September 24, 2025
Design runs and evaluations panel for agents
New bottom panel for agent design time in Studio Web
Use the bottom panel to test and debug at design time, view live traces, and evaluate agents with datasets built from your test runs or from purpose-built sets.
The new bottom panel in the Agent designer canvas gives you three powerful ways to work. The History tab shows past agent runs with full traces and lets you add them directly to evaluation sets. The Evaluations tab brings all your evaluation sets together, displaying recent scores and allowing you to dive into details or rerun tests instantly. And the new Execution Trail tab shows live traces as soon as you run your agent, moving trace details out of the old side panel and into a clearer, more accessible view. For details, refer to Exploring the agent workspace.
Figure 1. The new bottom panel

Fetch runtime traces into evaluation sets
You can now fetch runtime traces directly into evaluation sets, making it easy to turn production feedback into actionable test cases. After running an agent and reviewing traces in Orchestrator or the Agent Instance Management page, use the Fetch runtime traces option in Evaluations to pull those runs into a set. From there, you can edit the inputs and expected outputs, save them, and immediately use them for ongoing evaluation. Once added, these traces are clearly labeled as runtime runs, so you can distinguish them from design-time tests. They also contribute to your agent’s overall evaluation score, giving you instant visibility into how real-world feedback impacts performance. For details, refer to Evaluations.
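As a rough mental model of that workflow, the sketch below shows how a runtime trace could be turned into an evaluation case that is labeled as a runtime run, given an editable expected output, and counted toward an overall score. The data model, field names, and scoring rule here are assumptions for illustration and do not reflect UiPath's internal implementation.

```python
from dataclasses import dataclass, field

# Illustrative sketch only -- not UiPath's data model. A runtime trace is
# pulled into an evaluation set, its expected output can be edited, and the
# resulting case is labeled "runtime" so it can be distinguished from
# design-time tests while still contributing to the overall score.

@dataclass
class EvaluationCase:
    inputs: dict
    expected_output: str
    source: str  # "runtime" or "design-time"

@dataclass
class EvaluationSet:
    name: str
    cases: list[EvaluationCase] = field(default_factory=list)

    def add_runtime_trace(self, trace: dict, expected_output: str | None = None) -> None:
        """Turn a runtime trace into an evaluation case labeled as a runtime run."""
        self.cases.append(EvaluationCase(
            inputs=trace["inputs"],
            # Default the expectation to the observed output; editable afterwards.
            expected_output=expected_output or trace["output"],
            source="runtime",
        ))

    def score(self, run_agent) -> float:
        """Fraction of cases (design-time and runtime alike) answered as expected."""
        if not self.cases:
            return 0.0
        passed = sum(run_agent(c.inputs) == c.expected_output for c in self.cases)
        return passed / len(self.cases)

# Example: a trace captured from a production run (hypothetical values).
trace = {"inputs": {"invoice_id": "INV-1042"}, "output": "approved"}

eval_set = EvaluationSet(name="invoice-approval")
eval_set.add_runtime_trace(trace)
print(eval_set.score(lambda inputs: "approved"))  # 1.0
```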