Definition
The proportion of actual positive cases correctly identified by the model.

Sensitivity, also called the true positive rate or recall, describes how well a test or model identifies cases that are genuinely positive. It answers a simple question: of all the real positive cases, how many did the system correctly flag as positive?
- A highly sensitive model misses very few real positive cases
- A model with low sensitivity fails to detect many positives
- Missing a positive case is known as a false negative
False negatives are also called type II errors. They occur when the model treats a truly positive case as negative.
How sensitivity is calculated
Sensitivity is based on two numbers:
- True positives, cases that are positive and correctly predicted as positive
- False negatives, cases that are positive but incorrectly predicted as negative
The formula is:
Sensitivity = True Positives ÷ (True Positives + False Negatives)
This shows the proportion of actual positive cases that were successfully detected.
Simple example
Imagine a model designed to detect fraudulent transactions, where fraud is the positive class.
- There are 100 fraudulent transactions in total
- The model correctly detects 90 of them
- It misses 10 fraudulent cases
Here:
- True positives = 90
- False negatives = 10
Sensitivity = 90 ÷ (90 + 10) = 0.90
This means the model detects 90 percent of all fraudulent transactions. Because only 10 of the 100 positives were missed, the false negative rate is 10 percent.
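The formula and the worked example above can be sketched in a few lines of Python (the function name `sensitivity` is illustrative, not from a specific library):

```python
def sensitivity(true_positives: int, false_negatives: int) -> float:
    """Sensitivity (recall): share of actual positives that were detected."""
    return true_positives / (true_positives + false_negatives)

# Fraud example from the text: 90 frauds detected, 10 missed.
print(sensitivity(90, 10))  # 0.9
```

With 90 true positives and 10 false negatives, the function returns 0.9, matching the 90 percent figure above.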
Why sensitivity matters
Sensitivity is especially important when failing to detect a positive case has serious consequences. It is prioritised when:
- Missing a positive case is risky, such as in healthcare screening or fraud detection
- Early warnings are more important than avoiding false alarms
- Extra flagged cases can be reviewed later by people or additional systems
High sensitivity helps ensure that important or dangerous cases are not overlooked.
Sensitivity vs. specificity
Sensitivity focuses on positive cases. Specificity focuses on negative cases.
- Sensitivity measures how well actual positives are identified
- Specificity measures how well actual negatives are identified
Specificity uses a different formula:
Specificity = True Negatives ÷ (True Negatives + False Positives)
Where:
- True negatives are correctly identified negative cases
- False positives are negative cases incorrectly labelled as positive
A system with high sensitivity but low specificity will catch most positive cases but may produce many false alarms. A system with low sensitivity but high specificity will avoid false alarms but may miss critical positive cases. Choosing between them depends on which type of error is more costly.
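Specificity can be sketched the same way. The counts below are made up for illustration, not taken from the fraud example above:

```python
def specificity(true_negatives: int, false_positives: int) -> float:
    """Specificity: share of actual negatives correctly identified."""
    return true_negatives / (true_negatives + false_positives)

# Illustrative numbers: 950 legitimate transactions correctly passed,
# 50 legitimate transactions incorrectly flagged as fraud.
print(specificity(950, 50))  # 0.95
```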
The role of the confusion matrix
Sensitivity and specificity are derived from a confusion matrix, a table that compares predictions with the known truth. In a simple two class problem, the matrix contains:
- True positives
- False positives
- True negatives
- False negatives
Sensitivity uses the true positives and false negatives from this table.
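A minimal sketch of tallying the four confusion-matrix cells from labels and predictions, then reading sensitivity off the table (the helper `confusion_counts` and the toy data are assumptions for illustration):

```python
def confusion_counts(y_true, y_pred):
    """Tally the four confusion-matrix cells for binary labels (1 = positive)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    return tp, fp, tn, fn

# Toy data: four actual positives, four actual negatives.
y_true = [1, 1, 1, 1, 0, 0, 0, 0]
y_pred = [1, 1, 1, 0, 0, 0, 1, 0]

tp, fp, tn, fn = confusion_counts(y_true, y_pred)
print(tp / (tp + fn))  # sensitivity = 3 / (3 + 1) = 0.75
```

As the text notes, sensitivity uses only the true positive and false negative cells; the other two cells feed specificity.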
Sensitivity and model performance
Sensitivity is one of the main measures used to judge classification models.
- High sensitivity means few false negatives
- Low sensitivity means many real positive cases are being missed
Because models often face a trade-off between sensitivity and specificity, improving one can reduce the other. The best balance depends on the application and the cost of different mistakes.
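One common way this trade-off appears in practice is through the decision threshold applied to a model's scores. The toy scores and labels below are invented for illustration; lowering the threshold flags more cases, which raises sensitivity but lowers specificity:

```python
# Toy classifier scores (higher = more likely positive) and true labels.
scores = [0.95, 0.9, 0.8, 0.6, 0.55, 0.4, 0.3, 0.1]
labels = [1,    1,   0,   1,   0,    1,   0,   0]

def rates(threshold):
    """Return (sensitivity, specificity) at a given decision threshold."""
    preds = [1 if s >= threshold else 0 for s in scores]
    tp = sum(1 for p, t in zip(preds, labels) if p == 1 and t == 1)
    fn = sum(1 for p, t in zip(preds, labels) if p == 0 and t == 1)
    tn = sum(1 for p, t in zip(preds, labels) if p == 0 and t == 0)
    fp = sum(1 for p, t in zip(preds, labels) if p == 1 and t == 0)
    return tp / (tp + fn), tn / (tn + fp)

for thr in (0.7, 0.5, 0.2):
    sens, spec = rates(thr)
    print(thr, round(sens, 2), round(spec, 2))
# As the threshold drops, sensitivity rises and specificity falls.
```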
Key takeaways
- Sensitivity, or true positive rate, measures how well a model detects real positive cases
- It is calculated as true positives divided by true positives plus false negatives
- High sensitivity means few positives are missed and the false negative rate is low
- False negatives are type II errors and can be serious in high risk settings
- Sensitivity focuses on positives, while specificity focuses on correctly identifying negatives
- The right balance between sensitivity and specificity depends on the real world consequences of errors
