Prediction models come with several performance metrics, computed during training:
Cluster-based models try to predict whether an experimental condition belongs to a given cluster or not (a worked example follows the list below):
- Precision answers: of all the conditions the model labelled as part of a cluster, how many actually belong to it?
It is the ratio True positives / (True positives + False positives)
- Recall answers: of all the conditions actually in a cluster, how many did the model correctly label?
It is the ratio True positives / (True positives + False negatives)
- Specificity answers: of all the conditions not in a cluster, how many did the model correctly label?
It is the ratio True negatives / (True negatives + False positives)
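As a concrete illustration, the sketch below computes the three ratios from a small set of made-up true and predicted labels (1 meaning "in the cluster"); the arrays are hypothetical, not output from a real model:

```python
# Hypothetical binary labels: 1 = condition belongs to the cluster, 0 = it does not.
y_true = [1, 1, 1, 0, 0, 0, 1, 0]   # actual cluster membership
y_pred = [1, 0, 1, 0, 1, 0, 1, 0]   # model's predicted membership

tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)  # true positives
fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)  # false positives
fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)  # false negatives
tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)  # true negatives

precision   = tp / (tp + fp)  # of conditions labelled "in cluster", how many truly are
recall      = tp / (tp + fn)  # of conditions truly in the cluster, how many were found
specificity = tn / (tn + fp)  # of conditions truly outside, how many were labelled so

print(f"precision={precision:.2f}, recall={recall:.2f}, specificity={specificity:.2f}")
```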
The "best" metrics depends of the user needs
Probability is not a performance metric of the model; it is simply the model's estimated probability that a given condition belongs to a cluster.
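The section does not name the underlying implementation; as a minimal sketch, assuming a scikit-learn-style classifier, the per-condition membership probability can be read from predict_proba. The toy data and model below are purely illustrative:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical toy data: each row is an experimental condition (two features),
# each label says whether it belongs to the cluster (1) or not (0).
X = np.array([[0.1, 0.2], [0.9, 0.8], [0.2, 0.1], [0.8, 0.9]])
y = np.array([0, 1, 0, 1])

model = LogisticRegression().fit(X, y)

# predict_proba returns one column per class; column 1 is the estimated
# probability that a condition belongs to the cluster.
print(model.predict_proba(np.array([[0.7, 0.6]]))[:, 1])
```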