Emergency Medicine: A risky business part 5.
Does a correct diagnosis mean that the therapy will work?
In the previous post in this series on diagnostics we looked at how the performance of most of the clinical tests we use in practice means that we will inevitably miss some patients who have the target condition we are looking for. We looked at PE and showed that a test with 98% sensitivity means that one in 50 people with the target disorder will slip through the net, BUT because only a minority of the patients we investigate actually have PE, we get away with it most of the time.
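The arithmetic behind that claim can be sketched in a few lines. This is a minimal illustration only: the 98% sensitivity comes from the example above, but the 10% prevalence and the cohort of 1000 patients are assumptions chosen for round numbers, not figures from the post.

```python
# Illustrative sketch: how many cases slip through a highly sensitive test?
# Sensitivity of 0.98 is from the example in the text; the prevalence and
# cohort size below are assumed purely for illustration.

def missed_cases(n_tested, prevalence, sensitivity):
    """Expected number of patients with the disease whom the test misses."""
    with_disease = n_tested * prevalence          # true positives + false negatives
    return with_disease * (1 - sensitivity)       # the false negatives

# Per 1000 patients investigated at an assumed 10% prevalence:
# 100 have the disease, and a 98%-sensitive test misses about 2 of them,
# i.e. one in 50 of those with the target disorder.
missed = missed_cases(1000, 0.10, 0.98)
```

Note that the "one in 50" figure depends only on sensitivity; prevalence determines how many missed patients that translates to in absolute terms.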
I reckon that you’re still worried about those misses though….
If you are, I think that's fair enough, as few clinicians find this reassuring. But let's explore what we actually mean by a 'miss'.
Is missing a diagnosis always a terrible thing to do?
Clinicians generally start off with a fairly simplistic view of diagnosis and therapy, and it looks a bit like this. It's what I was taught at medical school, and I've probably promulgated it for many years. Why? Because it feels good to think like this: it gives the illusion of certainty and effectiveness.
However, we now understand a lot more about the process and realise that diagnostic testing does not simply confirm or refute a diagnosis; rather, it moves probabilities around, so the true picture is more complicated.
OK. So this is fine and dandy, but let's challenge the next assumption about the outcomes of diagnosis. In talking to clinicians, patients and lawyers, the next assumption is that correct diagnoses lead to benefit and false diagnoses lead to harm. Graphically we can draw this out as follows.
This too is far too simplistic. If you have been following any of the recent debates around the use of thrombolysis in stroke you will be all too aware that therapy itself carries inherent risks, particularly when we consider interventions such as thrombolysis, surgery (e.g. appendectomy) or cardiac catheterisation.
As an ED clinician this is an area where we need to stop and think about how we as clinicians view outcomes from therapy differently to how patients view them. Our tendency is to look at populations of patients; patients, though, don't really care that much about other patients (Ed – harsh, but I think I know what you mean) – rather, they want to know what's in it for them. As an example we can return to the good old days of thrombolysis (still used in remote settings, of course), when I had a rehearsed and regular conversation with patients before starting thrombolytic therapy. The aim was to explain that there were three possible outcomes from the proposed treatment.
- Thrombolysis could improve their outcome both in terms of survival and in terms of longer term cardiac function.
- It might make no difference at all.
- It might cause harm in terms of death, stroke or bleeding.
The patient had the opportunity to experience any of these outcomes with therapy, but interestingly there are also three potential outcomes for those patients who declined therapy.
- They might experience an adverse outcome from their MI that thrombolytic therapy could have prevented.
- Treatment or no treatment might have made no difference either way.
- They avoid the complications of therapy by declining the treatment.
In other words, there were potential risks and benefits whether or not a patient had therapy. As clinicians we do not typically consider all of these potential outcomes (well, I don't anyway).
Now… stop and think back to those conversations. Did you ever say, or overhear someone say, to a patient…
“If you don’t have the clot buster you’ll die from your heart attack”
I heard this many times, and still do for a variety of treatments, but the fact is that this is a lie. It gives an illusion of certainty that simply does not exist, as everyone who has ever visited the NNT will know (it may also interest you to know that at the time we were doing this, the NNT for thrombolysis of an inferior MI was over 100).
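The NNT figure makes the point numerically. A number needed to treat is simply the reciprocal of the absolute risk reduction, so an NNT over 100 means that fewer than one patient in a hundred treated actually has their outcome changed by the drug. The sketch below shows the calculation; the event rates used are invented purely to produce an NNT of 100 and are not from any trial.

```python
# Number needed to treat (NNT) = 1 / absolute risk reduction (ARR).
# The event rates below are made up solely to illustrate an NNT of 100;
# the post only tells us that the real NNT was "over 100".

def nnt(control_event_rate, treated_event_rate):
    """NNT from the event rates in untreated and treated groups."""
    arr = control_event_rate - treated_event_rate  # absolute risk reduction
    return 1 / arr

# Illustrative rates: 8% mortality untreated vs 7% treated gives NNT = 100,
# i.e. 99 of every 100 patients treated get no mortality benefit from the drug,
# yet all 100 are exposed to its bleeding and stroke risks.
example_nnt = nnt(0.08, 0.07)
```

This is why "if you don't have the clot buster you'll die" is a lie: for the overwhelming majority of patients the treatment changes nothing, for better or worse.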
So how does this pan out graphically? We can put the various outcomes into a diagram that shows the complexity of what is possible when the diagnostic process is imperfect and when the therapy and/or the diagnostic process carries inherent harm. Examples in emergency medicine practice are very common, with the investigation of pleuritic or cardiac-sounding chest pain being perfect examples. All the potential outcomes described below are possible for your patients going through a diagnostic work-up for PE or acute coronary syndrome.
Where does this leave us now? Do diagnostic tests work at all?
Well, yes, of course. Diagnostics are important, but it should now be clear that getting a result from a test is only the start of the process for patients. If we want to know whether patients actually benefit from the diagnostic process, we will be better served by taking a more utilitarian approach to diagnostics. Many diagnostic studies in the literature are designed to answer the simple question of whether or not the patient has the target disorder. The diagnostic cohort is the typical model and produces familiar data such as sensitivity and specificity. This is fine, but it does not answer the question of whether a diagnostic test actually benefits patients.
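To be clear about what a diagnostic cohort study does give us, here is the standard 2x2 calculation. The counts are entirely made up for illustration; the point is that everything a cohort design produces is derived from agreement with a reference standard, and nothing in the table says anything about whether testing changed a patient's outcome.

```python
# The classic 2x2 table from a diagnostic cohort study.
# All counts below are invented for illustration only.

tp, fn = 98, 2     # patients WITH the target disorder: test positive / negative
fp, tn = 90, 810   # patients WITHOUT the disorder: test positive / negative

sensitivity = tp / (tp + fn)   # proportion of diseased patients detected
specificity = tn / (tn + fp)   # proportion of well patients correctly cleared

# These numbers describe test accuracy against a reference standard.
# They say nothing about downstream benefit: whether a positive result
# led to a therapy that helped, or a negative result to safe discharge.
```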
If you have never read the paper by Foex and Body from the EMJ on the philosophy of diagnosis then now is the time.
But surely making a diagnosis is always good isn’t it?
Most of the time that is certainly the case. A diagnosis is usually a good thing to know, but perhaps not always, as testing becomes ever more accurate and our ability to pick up subclinical disease rises. A good example is PE, where we are increasingly able to detect tiny PEs in the lungs. The clinical significance of this is being questioned for PE and in many other areas, such as high-sensitivity troponin in chest pain (though I am convinced, to be honest).
If we recognise and understand the link between diagnosis and therapy, and if we are able to balance the potential benefits and harms from the diagnostic process then we will be able to achieve better outcomes for patients.
But surely making the diagnosis is enough? Aren't RCTs nice but unnecessary?
This is a common complaint/theme/anxiety amongst diagnostic researchers, who argue that diagnostic tests can be evaluated in simple studies, which are cheaper and easier to perform. Whilst that is true, I increasingly believe that diagnostic cohort studies can only take us so far, and there are growing calls for RCTs to be used more frequently in the evaluation of diagnostic testing. There's a good open access review here from the Ottawa thrombosis program (who know a fair bit about this sort of thing). You can also look at the excellent paper from the UK Sheffield group on the evaluation of rapid assessment cardiac panels in the RATPAC trial as an example of an ED-based study.
In summary then we really need to think carefully when considering how we use diagnostic tests, how we communicate information to patients and how we reassure ourselves that the diagnostic process actually leads to patient benefit.
To see all posts in this series follow this link to the Library.
Risky Business Part 7. Risk Proximity