In this quick post I’m hoping to get you thinking and asking questions about how we measure things in medicine. And I hope that, like me, it might change how you interpret information in your practice. Since writing this in draft and sending it round to the team for review, this brilliant post was published today at PulmCrit, which covers a lot of the same ground. Please check that superbly written post out as well – but I promise that the overlap is total coincidence – the two have been written independently. Clearly I’m not the only one thinking about this right now!
The folly of dichotomy
Outside of medicine, one of my biggest interests is music. I play the piano – badly – but I enjoy it nonetheless. Imagine I walked into a music shop, saw some new sheet music that I liked, and asked the shop assistant, “How much is this?” Then came her reply: “It’s cheap”.
It would be wonderful to know that the book is cheap, but with a reply like that I think I’d feel that I wasn’t really much wiser than I was before I’d asked. Not deterred, I might want to know if the sheet music was the right level of difficulty for me. Let’s imagine that I asked, “How difficult is it?”
“Difficult”, comes the reply.
Why are those answers so unhelpful? I think it’s because they give so little detail. Knowing that the book is either “cheap” or “expensive” isn’t particularly valuable – even if we have a definition for “cheap” (e.g. less than £10) and “expensive” (at least £10). “Difficult” versus “easy” is a similarly uninformative description of the level of difficulty. Both of these are dichotomous (essentially ‘yes/no’ type) answers.
Having more options for the reply would have been more helpful. I wanted to know the price – which can assume any value and is therefore a continuous variable – and I would have really liked to know which exact value it was rather than have the shop assistant simplify the situation for me.
I also wanted to know the difficulty – maybe with different levels (beginner, intermediate, early advanced, advanced, very advanced), which would make it an ordinal variable (where the values can only take one of a certain set, but the order of the numbers means something).
Medicine is similar. If all of the clinical information we collect were only interpreted as either “positive” or “negative”, I imagine you could get by – but you’d definitely appreciate having more granular information. This applies to everything we do in medicine. At a very basic level, you might ask your patient, “How are you feeling today?” Their response is likely to be full of some wonderfully rich information, which will help you to interpret that patient’s clinical status. It wouldn’t be quite so helpful if you told your patient that they only have two options: “well” or “not well”.
Think of vital signs, too: heart rate, respiratory rate, oxygen saturation, blood pressure, temperature. If we categorised heart rate as <60, 60–100 or >100bpm, we wouldn’t treat a patient with a heart rate of 101bpm any differently from one with a heart rate of 210bpm – and that would be quite ridiculous.
So why don’t we do this with biomarker results? Why do we talk about patients being “troponin positive” or “troponin negative”? The same could be said for D-dimer, BNP, CRP, procalcitonin, lactate and lots more.
A troponin of 13ng/L (upper reference limit 14ng/L) is not really so different to a troponin of 15ng/L. But a troponin of 15ng/L is very different to a troponin of 1000ng/L. Dichotomising the result as positive or negative would ignore that.
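To make that contrast concrete, here’s a toy sketch in Python. The logistic intercept and slope are invented purely for illustration – they don’t come from any validated clinical model – but they show the general shape of the problem: a continuous model treats 13 and 15ng/L almost identically and 1000ng/L very differently, while the positive/negative cut-off does the exact opposite.

```python
import math

def acs_probability(troponin_ng_l):
    """Toy logistic model mapping troponin to a probability.

    The intercept (-4.0) and slope (0.004) are invented for
    illustration only; they are not from any validated model.
    """
    logit = -4.0 + 0.004 * troponin_ng_l
    return 1 / (1 + math.exp(-logit))

def dichotomised(troponin_ng_l, upper_reference_limit=14):
    """The positive/negative call that throws the detail away."""
    return "positive" if troponin_ng_l > upper_reference_limit else "negative"

for trop in (13, 15, 1000):
    print(f"{trop:>4} ng/L: {dichotomised(trop):8} p ~ {acs_probability(trop):.3f}")
```

Run it and 13 and 15ng/L come out with near-identical probabilities despite opposite labels, while 15 and 1000ng/L share the label “positive” despite wildly different probabilities.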
That’s why, when I derived the T-MACS algorithm, I opted to include troponin as a continuous variable. You can check how this works at MD Calc. Enter a range of troponin values and you’ll see how the probability of ACS changes. I think this is a much more practical way to use biomarkers, and you can bet your life that it gets us closer to appreciating the reality of a patient’s diagnosis.
There are a couple of other things about T-MACS, though, that I also felt passionately about, and that I think might be helpful to us for other conditions in the future too. First, we used all of the information collected about a patient together – the troponin, the ECG, the history, the physical examination – as part of our clinical prediction model. This means that we didn’t pretend that troponin by itself is all we need. We acknowledged that clinicians will also take a history, look at an ECG, etc. By itself, troponin does very well in clinical studies – it’s true. But no clinician in their right mind will ever use troponin without taking a history, doing a physical exam and recording an ECG. So why not allow the clinical prediction model to use that information? It’s what we do in our practice every time we see a patient – but our judgements are subjective. Imagine if we could do that in a robust, evidence-based way for all our patients – diagnosing pancreatitis wouldn’t just be about the amylase, diagnosing a fracture wouldn’t just be about the x-ray – and there’d be some strong science to back us up in our judgements.
Lastly, we often acknowledge that in Emergency Medicine we’re really looking to rule out important diagnoses – we’re not necessarily after black and white diagnostic tests. But what does it mean to “rule out” a condition? We can never get to a 0% probability that a patient has a disease – there will always be some risk. So how much risk is acceptable before we rule out, and who should decide? Is it fair to dichotomise that information too?
When I derived T-MACS I thought that if it’s wrong to dichotomise a biomarker result then it’s also wrong to dichotomise the probability that a patient has ACS. So T-MACS doesn’t (just) advise who can be ruled in and ruled out. It gives the clinician the calculated probability that a patient has ACS. The clinician can then use that to personalise their approach to the patient. If a patient has a 3% probability of ACS but there are compelling reasons why they don’t want to be in the hospital for further tests, you can balance the risks, involve the patient, tailor the decision, share the decision. But to do that well you need the granular information about the probability of ACS – not just a yes/no ‘rule out’.
We could use this for so many other things, too. It could help us when weighing up the risks and benefits of different approaches. Imagine if we could calculate the probability that a patient might benefit from a particular treatment, then calculate the probability of a serious side-effect. We could compare the two probabilities and make a more informed judgement about what to do.
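As a hedged sketch of what that comparison could look like – with made-up probabilities, and a made-up weighting standing in for real outcome data – the arithmetic is simply expected benefit minus expected harm:

```python
def expected_net_benefit(p_benefit, p_harm, harm_weight=1.0):
    """Expected benefit minus expected harm.

    harm_weight expresses how bad the side-effect is relative to
    the benefit (e.g. 2.0 = twice as important). All the numbers
    here are illustrative placeholders, not validated estimates.
    """
    return p_benefit - harm_weight * p_harm

# Say we estimate a 30% chance of benefit against a 5% chance of a
# serious side-effect that we judge twice as important as the benefit:
print(round(expected_net_benefit(0.30, 0.05, harm_weight=2.0), 2))  # 0.2 – on balance, treat
```

The weighting is the part that invites the patient in: how bad they consider the side-effect relative to the benefit is exactly the kind of value judgement shared decision making is for.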
In clinical trials, imagine if we could be a bit more sophisticated than simply declaring a treatment effective if “p<0.05” and ineffective otherwise. What if we weighed up the probabilities, and factored in the costs of the treatment and the costs of a larger trial? Maybe new treatments would be adopted much faster. And, while the risk of type I error (wrongly accepting a treatment as effective) might increase, we could be reassured that by balancing the probabilities we’re ensuring a net benefit for patients – and with ongoing surveillance of the new treatment in practice we can do even better.
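As a very rough sketch of what weighing up probabilities and costs might look like, compare adopting a treatment now against running a further trial first. Everything here is invented for illustration – the probabilities, the benefit and cost figures, and the simplifying assumption that the further trial is perfectly informative:

```python
def adopt_now(p_effective, benefit, treatment_cost):
    """Expected value of adopting immediately: we pay the treatment
    cost either way, and realise the benefit only if it works."""
    return p_effective * benefit - treatment_cost

def trial_first(p_effective, benefit, treatment_cost, trial_cost):
    """Run a (perfectly informative, for simplicity) further trial,
    then adopt only if the treatment turns out to be effective."""
    return p_effective * (benefit - treatment_cost) - trial_cost

# When we're already fairly confident, waiting for more evidence costs us:
print(adopt_now(0.9, 100, 20), trial_first(0.9, 100, 20, 15))   # 70.0 vs 57.0
# When the treatment is expensive and doubtful, the trial pays its way:
print(adopt_now(0.3, 100, 40), trial_first(0.3, 100, 40, 15))   # -10.0 vs 3.0
```

The point isn’t these particular numbers – it’s that the decision flips depending on the probability and the costs, which a flat p<0.05 threshold can never capture.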
So, all in all, I hope I’ve managed to convince you that we can do so much better than interpreting clinical data as simple yes/no answers. By fully appreciating the richness of the data we have available to us, we can do so much more!