Precision

Isabell Hamecher
March 20, 2026
4 min read
Learn more about what precision entails as a metric to evaluate model performance.

Definition

Precision in AI is a metric used to evaluate the performance of a classification model, particularly in binary classification tasks. It measures the proportion of true positive predictions (correctly predicted positive cases) out of all positive predictions made by the model. In other words, precision indicates how many of the predicted positive instances are actually positive.

Understanding precision

Precision is the proportion of positive predictions that are actual positives. In simple terms, it answers this question: when the model says something is positive, how often is it right?

It is calculated as the number of true positives divided by the total number of positive predictions. That total includes both true positives and false positives.
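This calculation can be sketched as a small helper function. The counts below are illustrative, not from any real model:

```python
def precision(true_positives: int, false_positives: int) -> float:
    """Precision = true positives / all positive predictions."""
    total_predicted_positive = true_positives + false_positives
    if total_predicted_positive == 0:
        return 0.0  # the model made no positive predictions
    return true_positives / total_predicted_positive

# Example: of 50 positive predictions, 40 were correct
print(precision(true_positives=40, false_positives=10))  # 0.8
```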

A model with high precision makes few false positive errors. In other words, it rarely labels something as positive when it is actually negative. A perfect model would make zero false positives and have a precision of 1.0.

Why precision matters

Precision becomes especially important when false positives have serious consequences. In finance, for example, fraud detection systems flag suspicious transactions. If precision is low, many legitimate transactions may be incorrectly flagged as fraudulent. That can frustrate customers and create unnecessary work. Precision is also relevant in areas like medical testing. A false positive result can lead to unnecessary treatment, stress, and cost. In these cases, having high precision means that positive results are more trustworthy.

Precision and recall

Precision is closely linked to another metric called recall, but they measure different things.

  • Precision looks at how many predicted positives were correct
  • Recall looks at how many actual positives were successfully identified

There is often a trade-off between the two. Increasing precision usually reduces recall, and increasing recall often reduces precision. For example, if a model is tuned to be very strict about labelling cases as positive, it may avoid false positives and improve precision. However, it may miss many real positive cases, which lowers recall. Because of this trade-off, the choice between prioritising precision or recall depends on the cost of errors. If false positives are more costly, precision should be prioritised. If false negatives are more costly, recall may be more important.

Interpreting precision in practice

Precision tells us about the quality of the model’s positive predictions, but it does not give the full picture on its own.

In datasets where actual positive cases are extremely rare, precision can be less informative. Both precision and recall are often used together to better understand performance, especially with imbalanced data.
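A quick sketch with made-up numbers shows why a single metric can mislead on imbalanced data. Here a hypothetical model looks excellent by accuracy alone, yet most of its positive predictions are wrong:

```python
# Illustrative imbalanced dataset: 1,000 cases, only 10 actual positives.
# A hypothetical model flags 25 cases as positive and catches 8 of the 10.
true_positives = 8
false_positives = 17   # 25 flagged minus 8 correct
false_negatives = 2    # 10 actual positives minus 8 found
true_negatives = 973   # all remaining cases, correctly left alone

precision = true_positives / (true_positives + false_positives)
recall = true_positives / (true_positives + false_negatives)
accuracy = (true_positives + true_negatives) / 1000

print(f"precision = {precision:.2f}")  # 0.32 -- most flags are false alarms
print(f"recall    = {recall:.2f}")     # 0.80 -- most real positives are caught
print(f"accuracy  = {accuracy:.3f}")   # 0.981 -- looks great, hides the problem
```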

Overall, higher precision means the model returns more relevant results and fewer irrelevant ones when making positive predictions.

Key takeaways

  • Precision measures how many positive predictions are actually correct
  • It is calculated as true positives divided by all positive predictions
  • High precision means fewer false positives
  • It is especially important when false positives carry significant cost or risk
  • Precision and recall often move in opposite directions, creating a trade-off
  • Precision reflects the quality of positive predictions, not overall model accuracy
