Communications Mining user guide
Overview
Human-in-the-loop (HITL) in Communications Mining is designed to support operational decision-making when model confidence is insufficient, while preserving the integrity of model training data.
In a production automation, the model is used to classify incoming communications in real time. When the model cannot confidently predict the correct labels, the automation temporarily involves a human user to validate or correct the prediction so that the business process can continue without interruption.
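The confidence check that gates human validation can be sketched as follows. The threshold value, field names, and data shapes here are illustrative assumptions, not the platform's actual schema; tune the threshold to your business process:

```python
# Minimal sketch of confidence-based routing for a HITL automation.
# Each prediction is assumed to carry a label name and a confidence
# score between 0 and 1 (illustrative shape, not the real API schema).

CONFIDENCE_THRESHOLD = 0.95  # illustrative; tune per business process

def needs_human_validation(predictions, threshold=CONFIDENCE_THRESHOLD):
    """Return True when any predicted label falls below the threshold,
    or when the model returned no predictions at all."""
    if not predictions:
        return True
    return any(p["confidence"] < threshold for p in predictions)

# Example: one confident label, one uncertain label -> validation required
preds = [
    {"label": "Order Status", "confidence": 0.99},
    {"label": "Cancellation", "confidence": 0.62},
]
print(needs_human_validation(preds))  # True
```

Treating an empty prediction list as "needs validation" is a deliberately conservative choice: a message the model cannot label at all should never bypass the human step.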
It is important to distinguish between the following:
- Operational validation that end users perform in Action Center.
- Model training and maintenance that model trainers perform later.
HITL validation ensures that:
- The automation can proceed immediately using corrected labels.
- The communication is handled correctly from a business perspective.
However, HITL validation does not directly retrain or update the model. Instead, communications that required human intervention are explicitly marked as exceptions, so that model trainers can later review and annotate them in a controlled way as part of an ongoing model maintenance process known as exception training.
This separation ensures:
- High-quality, consistent training data.
- Protection against incomplete or biased annotations.
- Continuous model improvement without impacting live automation performance.
Workflow
- The Robot picks up communications from the Stream.
- The Robot evaluates the model confidence.
- If confidence is below threshold, validation is required.
- A validation task is created in Action Center. For more details, check Create Form Task.
- The communication content and predicted labels are presented to a human user.
- The human validates or corrects labels in Action Center.
- These corrections are used only for downstream processing, not for model training.
- The Robot tags the communication as an exception through the API. This flags the message for later review by model trainers. For more details, check Tag an exception.
- The Robot continues processing immediately. The communication is not re-processed through the Stream.
- The corrected labels are applied for operational purposes, for example, upload to Communications Mining or downstream systems.
- Later, the model trainer reviews the exception. The trainer annotates the message correctly in Communications Mining. These annotations may be included in future training cycles.
Validation corrections made in Action Center do not automatically retrain or update the model.
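As a rough sketch of the exception-tagging step in the workflow above, a Robot (or a script acting on its behalf) might assemble a REST request like the one below. The base URL, endpoint path, payload fields, and exception type are hypothetical placeholders, not the documented Communications Mining API; consult the Tag an exception reference for the actual contract:

```python
# Hedged sketch of the exception-tagging call a Robot would make after
# a human validates a message in Action Center. All names below
# (API_BASE, the endpoint path, payload keys) are illustrative
# placeholders, not the real API surface.
import json

API_BASE = "https://cloud.example.com/communications-mining/api/v1"  # placeholder

def build_exception_request(dataset, stream, message_id, token):
    """Assemble the URL, headers, and body for flagging a message as an
    exception, so model trainers can find and annotate it later."""
    url = f"{API_BASE}/datasets/{dataset}/streams/{stream}/exceptions"
    headers = {
        "Authorization": f"Bearer {token}",
        "Content-Type": "application/json",
    }
    body = json.dumps({
        "uid": message_id,
        "exception": {"type": "low_confidence", "source": "action-center-hitl"},
    })
    return url, headers, body

url, headers, body = build_exception_request("invoices", "inbound", "msg-001", "TOKEN")
```

Keeping the request assembly separate from the HTTP client makes the tagging step easy to unit-test inside the automation before any live call is made.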