Positive and Negative Predictive Values

Podcast – Positive and Negative Predictive Values: Critical Appraisal Nugget 11

Welcome back to another instalment of our Critical Appraisal Nugget series with Rick Body and Greg Yates here at St Emlyn’s. In our previous podcast, we delved into the concepts of sensitivity and specificity, laying a strong foundation for understanding diagnostic tests. Today, we’re building on that knowledge by exploring positive predictive value (PPV) and negative predictive value (NPV).

Listening time – 11:16

Revisiting sensitivity and specificity

Before we dive into positive and negative predictive values, let’s briefly revisit sensitivity and specificity. Sensitivity is the ability of a test to correctly identify those with the disease (true positives), while specificity is the ability of a test to correctly identify those without the disease (true negatives). Both metrics are intrinsic properties of the test, meaning they don’t change regardless of the prevalence of the disease in the population.
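
As a quick refresher, here is a minimal Python sketch (the counts are hypothetical, chosen to match the worked example later in this post) showing how those two figures come out of a 2×2 table:

```python
# Hypothetical 2x2 counts
tp, fn = 90, 10    # patients with the disease: true positives, false negatives
tn, fp = 810, 90   # patients without the disease: true negatives, false positives

sensitivity = tp / (tp + fn)   # how well the test picks up disease when it is present
specificity = tn / (tn + fp)   # how well the test rules out disease when it is absent

print(f"Sensitivity: {sensitivity:.0%}, Specificity: {specificity:.0%}")  # both 90%
```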

Positive predictive value (PPV)

Positive Predictive Value (PPV) is the probability that a person has the disease given that they have tested positive. It answers the question, “If the test result is positive, what are the chances that the patient actually has the disease?” We can calculate the PPV as follows: true positives / (true positives + false positives)
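
As a minimal sketch (in Python, with hypothetical counts), that calculation looks like this:

```python
def ppv(true_positives, false_positives):
    """Probability that a patient with a positive result actually has the disease."""
    return true_positives / (true_positives + false_positives)

print(f"PPV: {ppv(90, 90):.1%}")  # 50.0% when there are 90 true and 90 false positives
```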

Negative predictive value (NPV)

Negative Predictive Value (NPV) is subtly different: it is the probability that a person does not have the disease given that they have tested negative. It answers the question, “If the test result is negative, what are the chances that the patient does not have the disease?” We can calculate the NPV as follows: true negatives / (true negatives + false negatives)
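
And the equivalent sketch for the NPV, again with hypothetical counts:

```python
def npv(true_negatives, false_negatives):
    """Probability that a patient with a negative result truly does not have the disease."""
    return true_negatives / (true_negatives + false_negatives)

print(f"NPV: {npv(810, 10):.1%}")  # 98.8% when there are 810 true and 10 false negatives
```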

The importance of prevalence

One of the key differences between sensitivity/specificity and PPV/NPV is that positive and negative predictive values are heavily influenced by the prevalence of the disease in the population being tested. As prevalence increases, PPV increases and NPV decreases, and vice versa. This dependency on prevalence makes PPV and NPV highly relevant in clinical practice, where the pre-test probability (or prevalence) can vary significantly across different populations and settings. However, it also gives us reason for caution: if the prevalence is very low then we would expect to see a high NPV even if the test is not very good. That being so, while the NPV and PPV help to give us a practical idea of the post-test probability of disease in a given cohort, we also need to interpret them alongside the sensitivity and specificity, which are less dependent on prevalence and more dependent on the ability of the diagnostic test to differentiate between those who have the disease (or condition) and those who don’t.
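
To make that prevalence dependence concrete, here is a short Python sketch (purely illustrative, not from the podcast) that fixes sensitivity and specificity at 90% and recalculates the PPV and NPV as the prevalence changes, using the expected proportion of the population in each cell of the 2×2 table:

```python
sens, spec = 0.90, 0.90  # test characteristics held constant

for prev in (0.01, 0.10, 0.50, 0.70):
    tp = sens * prev              # true positives (as a proportion of everyone tested)
    fp = (1 - spec) * (1 - prev)  # false positives
    fn = (1 - sens) * prev        # false negatives
    tn = spec * (1 - prev)        # true negatives

    ppv = tp / (tp + fp)
    npv = tn / (tn + fn)
    print(f"Prevalence {prev:.0%}: PPV {ppv:.1%}, NPV {npv:.1%}")
```

At 1% prevalence the PPV falls to around 8% even though the test itself has not changed; the figures at 70% prevalence differ very slightly from the worked example below because the example rounds to whole patients.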

Practical Example

Let’s consider a diagnostic test for a disease with a prevalence of 10% in a given population. If we test 1,000 patients, 100 will have the disease and 900 will not. Suppose the test has a sensitivity of 90% and a specificity of 90%. The 2×2 table below illustrates this:

                 Disease present    Disease absent
Test positive          90                 90
Test negative          10                810
2×2 table for a test with 90% sensitivity and 90% specificity with 10% prevalence

In this example, the PPV is equal to 90 / (90 + 90) = 50%. The NPV is equal to 810 / (10 + 810) = 98.8%. So we have a pretty good rule-out test but not such a good rule-in test. Let’s now change the prevalence to 70% – a really high-risk cohort – but let’s keep the sensitivity and specificity of the test the same, at 90% each. Let’s see how the 2×2 table looks:

                 Disease present (n=100)    Disease absent (n=42)
Test positive              90                          4
Test negative              10                         38
2×2 table for a test with 90% sensitivity and 90% specificity with 70% prevalence

From this table, you can work out that the sensitivity and specificity are still both equal to 90% – so the test is working just as well. However, the PPV and NPV have changed drastically. The PPV = 90 / (90 + 4) = 95.7%, whereas the NPV = 38 / (10 + 38) = 79.2%. Suddenly, we have a pretty good rule-in test but not such a good rule-out test! So, you can see why we need to take account of both sensitivity and specificity, and the PPV and NPV.
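
If you want to check those figures yourself, here is a quick Python sketch (just a convenience helper, not part of the original post) that reproduces both scenarios from the raw counts in the two tables above:

```python
def predictive_values(tp, fp, fn, tn):
    """Return (PPV, NPV) for the counts in a 2x2 diagnostic table."""
    return tp / (tp + fp), tn / (tn + fn)

# 10% prevalence table: PPV = 0.50, NPV ~ 0.988
print(predictive_values(tp=90, fp=90, fn=10, tn=810))

# 70% prevalence table: PPV ~ 0.957, NPV ~ 0.792
print(predictive_values(tp=90, fp=4, fn=10, tn=38))
```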

Why PPV and NPV matter

In clinical practice, PPV and NPV are crucial because they provide direct information about the test’s performance in a real-world setting. They help clinicians make more informed decisions about patient care based on the likelihood of disease presence or absence after testing.

Summary

Understanding PPV and NPV is a really important skill for emergency physicians, whether you’re critically appraising a paper to decide if a new test should be used in practice, guiding your own research, or applying diagnostic tests in everyday patient care. Remember to consider the prevalence of disease when interpreting PPV and NPV, and be sure to look at the sensitivity and specificity too. While the PPV and NPV give us a practical idea of the post-test probability of disease, the sensitivity and specificity help to reassure us that the test is doing something more than just rolling a die! We hope you find this CAN podcast helpful. Stay tuned for more critical appraisal nuggets at St Emlyn’s.


Cite this article as: Rick Body, "Podcast – Positive and Negative Predictive Values: Critical Appraisal Nugget 11," in St.Emlyn's, July 24, 2024, https://www.stemlynsblog.org/podcast-ppv-npv/.

2 thoughts on “Podcast – Positive and Negative Predictive Values: Critical Appraisal Nugget 11”

  1. “Sensitivity is the ability of a test to correctly identify those with the disease (true positives), while specificity is the ability of a test to correctly identify those without the disease (true negatives).”

    Should these definitions be reversed?

    1. Actually, the definitions are exactly right as written! You might wonder how that fits with SpIN (specificity rules in) and SnOUT (sensitivity rules out), when sensitivity is about patients *with* disease. The point is that if a test accurately identifies almost everyone *with* the disease then there are very few false negatives – patients who would be missed. And if we achieve that (a high sensitivity), we often say it’s a ‘rule-out’ test. So the definitions are correct, even though it might seem counter-intuitive at first.

Thanks so much for following. Viva la #FOAMed
