Training using Check label and Missed label
You must be assigned the Source - Read and Dataset - Review permissions as an Automation Cloud user, or the View sources and Review and annotate permissions as a legacy user.
Previously, the Teach function, when filtered to reviewed messages, showed messages where the platform thought the selected label may have been either misapplied or missed. Check label and Missed label split these into two separate views: Check label displays messages where the label was potentially misapplied, and Missed label shows messages that may be missing the selected label.
Introduction
Using the Check label and Missed label training modes is the part of the Refine phase where you try to identify inconsistencies or missed labels in messages that have already been reviewed. This differs from the Teach label step, which focuses on unreviewed messages that have predictions made by the platform, rather than assigned labels.
Check label shows you messages where the platform thinks the selected label may have been misapplied, that is, it potentially should not have been applied.

Missed label shows you messages that the platform thinks may be missing the selected label, that is, it potentially should have been applied but was not. In this case, the selected label will typically appear as a suggestion, as shown in the following image.

The suggestions from the platform in either mode are not necessarily correct; they are simply the instances where the platform is unsure, based on the training completed so far. If you disagree with a suggestion after reviewing it, you can ignore it.
Using these training modes is a very effective way of finding occurrences where labels may not have been applied consistently. By correcting these inconsistencies, you improve the performance of the label.
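Conceptually, both modes surface disagreements between the model's predictions and your annotations on reviewed messages. The following sketch illustrates that idea only; it is not the platform's actual algorithm, and the message structure, probability scores, and 0.5 threshold are all assumptions for demonstration:

```python
# Illustrative disagreement-based selection. NOT the platform's actual
# algorithm; message structure, scores, and threshold are assumptions.

def check_label_candidates(messages, label, threshold=0.5):
    """Reviewed messages that HAVE the label but score low:
    the label may have been misapplied."""
    return [m for m in messages
            if label in m["assigned"] and m["scores"].get(label, 0.0) < threshold]

def missed_label_candidates(messages, label, threshold=0.5):
    """Reviewed messages that LACK the label but score high:
    the label may have been missed."""
    return [m for m in messages
            if label not in m["assigned"] and m["scores"].get(label, 0.0) >= threshold]

reviewed = [
    {"id": 1, "assigned": {"Complaint"}, "scores": {"Complaint": 0.92}},
    {"id": 2, "assigned": {"Complaint"}, "scores": {"Complaint": 0.12}},  # suspect
    {"id": 3, "assigned": set(),         "scores": {"Complaint": 0.81}},  # maybe missed
]

print([m["id"] for m in check_label_candidates(reviewed, "Complaint")])   # [2]
print([m["id"] for m in missed_label_candidates(reviewed, "Complaint")])  # [3]
```

Message 2 would appear in Check label and message 3 in Missed label; in both cases the final decision stays with the reviewer.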
When to use Check label and Missed label
The simplest answer is to use either training mode when it appears as a recommended action in the Model Rating section, or in the specific label view on the Validation page. For more details, check Understanding and improving model performance.
As a rule of thumb, any label that has a significant number of pinned examples but low average precision, indicated by red label warnings on the Validation page or in the label filter bars, will likely benefit from some corrective training in either Check label or Missed label mode.
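As a reminder of why misapplied labels drag precision down: precision is the fraction of a label's predictions that are correct, and inconsistent training examples produce more false positives. A minimal sketch with made-up counts:

```python
def precision(true_positives, false_positives):
    """Fraction of the label's predictions that were correct."""
    total = true_positives + false_positives
    return true_positives / total if total else 0.0

def recall(true_positives, false_negatives):
    """Fraction of the label's actual occurrences that were predicted."""
    total = true_positives + false_negatives
    return true_positives / total if total else 0.0

# A label trained on inconsistent examples: many predictions are wrong.
print(precision(30, 20))  # 0.6
print(recall(30, 10))     # 0.75
```

Corrective training in Check label reduces the false positives (raising precision), while Missed label reduces the false negatives (raising recall).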
When validating the performance of a model, the platform will determine whether it thinks a label has often been applied incorrectly, or where it thinks it's been regularly missed, and will prioritise whichever corrective action it thinks would be most beneficial for improving a label's performance.
Missed label is also a very useful tool if you have added a new label to an existing taxonomy with lots of reviewed examples. Once you've provided some initial examples for the new label concept, Missed label can quickly help you identify any examples in the previously reviewed messages where it should also apply. For more details, check Adding new labels to existing taxonomies.
Using Check label and Missed label
You can reach either of these training modes in the following ways:
- If it is a recommended action in Validation for a label, the action card acts as a link that takes you directly to that training mode for the selected label.
- Alternatively, you can select either training mode from the dropdown menu at the top of the page in Explore, and then select a label to sort by. You can find an example in the previous image.
Note:
You must first select a label before either Check label or Missed label will appear in the dropdown menu. Both of these modes also disable the ability to filter between reviewed and unreviewed messages, as they are exclusively for reviewed messages.
In each mode, the platform shows you up to 20 examples per page of reviewed messages where it thinks the selected label may have been applied incorrectly (Check label) or may be missing (Missed label).
Check label
In Check label, review each of the examples on the page to confirm that they are genuine examples of the selected label. If they are, move on without taking action. If they are not, remove the label by selecting the X button when hovering over it, and ensure you apply the correct labels instead.
Review as many pages of reviewed messages as necessary to identify any inconsistencies in the reviewed set and improve the model's understanding of the label.
Correcting labels added in error can have a major impact on the performance of a label, by ensuring that the model has correct and consistent examples from which to make predictions for that label.
Missed label
In Missed label, review each of the examples on the page to see whether the selected label has in fact been missed. If it has, select the label suggestion, as shown in the previous image, to apply the label. If it has not, ignore the suggestion and move on.
A label suggested by the platform on a reviewed message is not treated as a prediction by the model, nor does it count towards any statistics on the number of labels in a dataset. If a suggestion is wrong, you can simply ignore it.
Review as many pages of reviewed messages as necessary to identify any examples in the reviewed set that should have the selected label but do not.
Partially annotated messages can be very detrimental to the model's ability to predict a label: when you do not apply a label to a message, you essentially tell the model that the message is not an example of that label concept. If it is in fact a correct example, this is very confusing for the model, particularly if other very similar examples do have the label applied.
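This effect follows from how multi-label training data is typically encoded: every label you do not apply to a reviewed message becomes an explicit negative for that message. A simplified sketch, assuming a binary indicator encoding (not the platform's internal representation, and the taxonomy labels are invented):

```python
# Hypothetical three-label taxonomy for illustration only.
TAXONOMY = ["Complaint", "Request", "Feedback"]

def encode(assigned_labels):
    """Multi-label binary target: absent labels become explicit negatives (0)."""
    return [1 if label in assigned_labels else 0 for label in TAXONOMY]

# Two near-identical complaint messages annotated inconsistently:
fully_annotated = encode({"Complaint", "Request"})  # [1, 1, 0]
partially_done  = encode({"Request"})               # [0, 1, 0] - the missed
# "Complaint" label now reads as a negative example of "Complaint".
print(fully_annotated, partially_done)
```

Training on both rows gives the model contradictory signals for "Complaint" on very similar text, which is exactly what Missed label helps you catch.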
Adding labels that have been missed can therefore have a major impact on the performance of a label, by ensuring that the model has correct and consistent examples from which to make predictions for that label.
Once the model has had time to retrain after your corrective training in these modes, you can check back in Validation to view the positive impact your actions have had on the Model Rating and the performance of the specific labels you've trained.