Test Manager user guide
Last updated May 13, 2026
Autopilot helps automate and enhance tasks across your testing lifecycle. Follow these best practices to improve accuracy and usability.
Note: Autopilot is available only in Test Manager delivered via Test Cloud.
Requirement evaluation
To get the most out of requirement evaluation:
- Write clear, complete requirements with measurable acceptance criteria.
- Use focused evaluations (for example: security or performance).
- Attach supporting files like guidelines or specs.
- Use the Prompt Library to standardize evaluations.
Manual test generation
To generate useful manual tests:
- Ensure requirements include full user flows and expected outcomes.
- Add supporting files (mockups, diagrams) for context.
- Tailor instructions for specific scenarios.
- Reuse prompts from the Prompt Library.
Converting text into code
To convert text into code effectively:
- Specify the language and goal clearly (e.g., “Refactor this C# method”).
- Keep prompts short and direct.
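To illustrate the kind of transformation a short, direct prompt asks for, here is a hypothetical before/after refactor. This is our own sketch in Python, not actual Autopilot output, and the function names are invented for the example:

```python
# Prompt (hypothetical): "Refactor this method to remove the manual loop."

# Before: verbose accumulation loop
def total_price(items):
    total = 0.0
    for item in items:
        total = total + item["price"] * item["qty"]
    return total

# After: same behavior, expressed with a generator and sum()
def total_price_refactored(items):
    return sum(item["price"] * item["qty"] for item in items)
```

Note how the prompt names one concrete goal; vague requests like "improve this code" tend to produce less predictable results.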
Automating manual tests
To automate manual tests reliably:
- Use a consistent object repository and naming conventions.
- Write manual steps using terms that match UiPath activity names.
Generating test data
To generate relevant test data:
- Use prompts to define value ranges, patterns, or combinations.
- Reference existing arguments in your workflows.
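To make "value ranges, patterns, or combinations" concrete, the sketch below shows the sort of structured data such a prompt might describe. The argument names (`in_Country`, `in_PaymentMethod`, `in_Amount`) are hypothetical workflow arguments, not part of any real project:

```python
import itertools
import random

# Hypothetical workflow arguments to cover
countries = ["US", "DE", "JP"]
payment_methods = ["card", "invoice"]

# Combinations: every country paired with every payment method
combinations = [
    {"in_Country": c, "in_PaymentMethod": p}
    for c, p in itertools.product(countries, payment_methods)
]

# Value range: amounts drawn from a defined interval
random.seed(42)  # deterministic so the data set is reproducible
for row in combinations:
    row["in_Amount"] = round(random.uniform(1.0, 500.0), 2)

print(len(combinations))  # 3 countries x 2 methods = 6 rows
```

Spelling out the ranges and pairings like this in your prompt gives Autopilot a precise target instead of leaving it to guess plausible values.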
Analyzing test results
To analyze test results effectively:
- Run insights on a large volume of test results for better accuracy.
- Focus on the three insight categories:
  - Common Errors: frequent failure clusters.
  - Error Patterns: recurring test failure themes.
  - Recommendations: actionable fixes to stabilize tests.
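As a rough idea of what "frequent failure clusters" means, the sketch below groups results by a normalized error message and ranks the clusters by count. The data and field names are hypothetical; real results would come from Test Manager, and Autopilot performs this analysis for you:

```python
from collections import Counter
import re

# Hypothetical test results
results = [
    {"test": "Login_01", "error": "Timeout after 30s on page /login"},
    {"test": "Login_02", "error": "Timeout after 45s on page /login"},
    {"test": "Cart_01",  "error": "Element 'btnCheckout' not found"},
    {"test": "Cart_02",  "error": "Element 'btnPay' not found"},
    {"test": "Login_03", "error": "Timeout after 12s on page /home"},
]

def normalize(error):
    # Collapse variable parts (numbers, quoted identifiers) into placeholders
    error = re.sub(r"\d+", "<N>", error)
    return re.sub(r"'[^']*'", "'<id>'", error)

clusters = Counter(normalize(r["error"]) for r in results)
for pattern, count in clusters.most_common():
    print(f"{count}x {pattern}")
```

This is also why a large volume of results matters: with only a handful of runs, genuine clusters are hard to distinguish from one-off failures.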