A Decade of Diagnostics in Emergency Care

As the last decade began, I'd just finished my PhD, looking to discover the 'new troponin'. At the time, patients with suspected acute coronary syndromes (ACS) were routinely admitted to hospital for serial troponin testing, with the second test run at least 12 hours after peak symptoms. Troponin was a 'late marker' of myocardial injury, and in my PhD I'd been looking for an 'early marker' that could be used to 'rule out' the diagnosis with one test in the ED.

High-sensitivity troponin

By the middle of 2009, I was continuing my research and was interested in the new troponin assay that Roche Diagnostics was developing: 'high-sensitivity cardiac troponin T' (hs-cTnT). We were lucky enough to be among the first to get our hands on it, and testing was complete by the end of summer. In August 2009 I presented my findings on the 'single test rule-out of AMI using extremely low hs-cTnT concentrations'. Setting the cut-off at 3 ng/L, we found 100% sensitivity for AMI. Unfortunately, at the time, we struggled to get the work published – reviewers and editors alike weren't buying into the concept until (at last, a whole two years later) JACC accepted our paper.

https://www.ncbi.nlm.nih.gov/pubmed/21920261

Jump forward a decade, and a very similar strategy (using the limit of detection of the same assay, which is slightly higher at 5 ng/L) is now recommended for clinical use by NICE and the European Society of Cardiology. Single-test rule-out using high-sensitivity cardiac troponin is now an established part of our practice.
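To make the logic concrete, here's a minimal sketch of what a single-test rule-out looks like in code. The 5 ng/L threshold is the limit of detection mentioned above, but the ECG and symptom-onset checks are simplified assumptions for illustration – this is not the full NICE or ESC pathway, and the function name is purely hypothetical.

```python
# Minimal sketch of a single-test rule-out, using illustrative criteria only.
# The 5 ng/L threshold is the assay's limit of detection; the ECG and
# symptom-onset checks are simplified assumptions, not the full NICE/ESC pathway.

def single_test_rule_out(hs_ctnt_ng_l: float,
                         hours_since_symptom_onset: float,
                         ischaemic_ecg: bool) -> bool:
    """Return True if AMI can be ruled out on a single presentation sample."""
    if ischaemic_ecg:
        return False                    # ischaemic ECG changes mandate further work-up
    if hours_since_symptom_onset < 3:
        return False                    # very early presenters need serial sampling
    return hs_ctnt_ng_l < 5.0           # below the limit of detection


print(single_test_rule_out(hs_ctnt_ng_l=3.0,
                           hours_since_symptom_onset=4,
                           ischaemic_ecg=False))    # True -> rule out
```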

High-sensitivity cardiac troponin assays have really changed our practice in the last decade. But, importantly, we needed more than the technology itself. To be successful, it took clinician researchers like myself and many others to develop practical applications for the technology that would allow us to make better, more efficient clinical decisions. So, we have the 1-hour, 2-hour and 3-hour serial testing algorithms recommended by NICE. All of these approaches use the new assays to rule out and rule in as many patients as possible at the earliest possible opportunity. It's particularly heartening that the strategies we now use are built on a sound evidence base, with large and well-designed studies to justify their use.

Increasingly sophisticated decision aids

Back in 2010, clinical decision rules were an established part of our practice. Many took the form of decision trees – like the Ottawa ankle rule.

https://www.mdcalc.com/ottawa-ankle-rule

From there, we saw a move towards scoring systems – whereby we calculate a score and use it to assign patients to risk groups – like the HEART and EDACS scores, which were specifically designed for use in the undifferentiated ED population. We've seen RCTs of these decision aids using different methods, we've seen the great work of Simon Mahler and colleagues using the HEART score with a 2-hour serial testing pathway, and we've seen great efforts to educate emergency physicians with fantastic videos from the likes of Will Niven.

During my PhD, I'd originally derived a tree-based decision rule for ACS. However, decision trees have important limitations. You have to choose cut-offs for continuous variables like age and biomarker levels, and that loses a lot of important information. As the decade has gone on, we've begun to see more probabilistic models – where we embrace the reality of the uncertainty we face in clinical decision making and calculate the probability of a diagnosis for each individual patient. That's what our own T-MACS decision aid does – it calculates the probability of ACS for every patient. The MI3 algorithm does the same thing. Look out for more like this – it's one great sign of progress this decade, and it promises to help us greatly when it comes to shared decision making and optimising treatment decisions by calculating the anticipated net benefit or harm.
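To show what 'calculating a probability for every patient' means in practice, here's a sketch of a logistic model of the kind that underpins aids like T-MACS. The variables are in the spirit of that approach, but the coefficients below are invented purely for illustration – they are not the published T-MACS weights.

```python
import math

# A probabilistic decision aid in miniature: a logistic model that converts
# patient-level variables into a probability of ACS. Coefficients are
# hypothetical, chosen only to show the shape of the calculation.

HYPOTHETICAL_COEFFICIENTS = {
    "intercept": -4.0,
    "hs_ctnt_ng_l": 0.05,                # continuous biomarker - no cut-off needed
    "ecg_ischaemia": 1.8,
    "sweating_observed": 1.4,
    "pain_radiating_to_right_arm": 0.7,
}

def probability_of_acs(features: dict) -> float:
    """Logistic model: p = 1 / (1 + exp(-(intercept + sum(coef * value))))."""
    z = HYPOTHETICAL_COEFFICIENTS["intercept"]
    for name, coef in HYPOTHETICAL_COEFFICIENTS.items():
        if name != "intercept":
            z += coef * features.get(name, 0)
    return 1 / (1 + math.exp(-z))

p = probability_of_acs({"hs_ctnt_ng_l": 4.0, "ecg_ischaemia": 0,
                        "sweating_observed": 0, "pain_radiating_to_right_arm": 0})
print(f"Probability of ACS: {p:.1%}")    # a very-low-risk patient in this toy model
```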

Sophisticated Bayesian thinking

At the start of this decade, we were routinely using D-dimer testing alongside the Wells score to rule out pulmonary embolism (PE) and deep vein thrombosis (DVT) without imaging. That had already revolutionised our approach, and it was a great example of how we apply Bayesian principles in medicine. A normal D-dimer could rule out patients at low clinical risk, but the post-test probability remained too high in patients at higher baseline risk.
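For readers who like to see the arithmetic, here's a small sketch of that Bayesian step: convert the pre-test probability to odds, apply the test's likelihood ratio, and convert back. The pre-test probabilities and the negative likelihood ratio used here are assumed values for illustration, not figures from any particular study.

```python
# How a negative D-dimer shifts probability, in likelihood-ratio terms.
# The pre-test probabilities and the negative likelihood ratio (LR-) are
# illustrative assumptions only.

def post_test_probability(pre_test_prob: float, likelihood_ratio: float) -> float:
    """Convert probability -> odds, apply the likelihood ratio, convert back."""
    pre_odds = pre_test_prob / (1 - pre_test_prob)
    post_odds = pre_odds * likelihood_ratio
    return post_odds / (1 + post_odds)

NEGATIVE_LR = 0.10    # assumed LR- for a normal D-dimer

for pre in (0.05, 0.40):    # low-risk vs higher-risk patient (Wells-style gestalt)
    print(f"pre-test {pre:.0%} -> post-test {post_test_probability(pre, NEGATIVE_LR):.1%}")
# Low-risk: ~0.5% (safe to rule out); higher-risk: ~6% (still too high to stop)
```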

However, that approach still has limitations. We still tend to over-investigate. By 2010 we already had the PERC score, but this decade has seen that taken forward, most notably with the cluster RCT run by Yonathan Freund in 2018. That trial showed that the outcomes of patients 'ruled out' using the PERC score were non-inferior to those of patients worked up in the usual way.

Another example of our inefficiency in 2010 concerned older people, who have higher D-dimer concentrations even in health. And so we saw the introduction of age-adjusted D-dimer cut-offs – accepting a higher cut-off for those aged over 50 to rule out more PEs and DVTs without significantly reducing negative predictive value or sensitivity.

Now we go one step further, with the introduction of gestalt-adjusted D-dimer cut-offs – adjusting the cut-off based on the prior (pre-test) probability of disease in the opinion of the treating clinician. In other words, if the pre-test probability is low, you can accept a higher D-dimer cut-off and still rule out PE. Expect to see more like that in the coming decade!
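Here's an illustrative sketch of those two adjustments side by side. The 'age × 10' rule for over-50s is the widely cited age adjustment; the gestalt adjustment is loosely modelled on YEARS/PEGeD-style strategies. Units and exact thresholds depend on the local assay, so treat the numbers as examples only.

```python
# Illustrative sketches of the two D-dimer adjustments described above.
# Units and exact thresholds depend on the local assay - example numbers only.

def age_adjusted_cutoff_ng_ml(age_years: int) -> int:
    """Age x 10 ng/mL (FEU) for patients over 50, otherwise the standard 500."""
    return age_years * 10 if age_years > 50 else 500

def gestalt_adjusted_cutoff_ng_ml(pre_test_probability_low: bool) -> int:
    """Accept a higher threshold when the clinician judges pre-test risk to be low."""
    return 1000 if pre_test_probability_low else 500

print(age_adjusted_cutoff_ng_ml(78))          # 780 - more older patients ruled out without imaging
print(gestalt_adjusted_cutoff_ng_ml(True))    # 1000 - low gestalt, higher cut-off
```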

Where next?

As the 2020s arrive and we reflect on the last 10 years of progress, it’s time to think about where we might be in 2030. Here are my top 5 tips for what we might expect to see…

  • An explosion of AI-enhanced diagnostics – whether that be refining and optimising decision aids (like T-MACS – one of our current projects), automating image interpretation (CT scans, ECGs, and maybe even moving images like endoscopies and ultrasound scans) or using routinely collected data to identify indications for new treatments, patient safety incidents, ED crowding and more, and alerting clinicians
  • The rise of precision medicine – building the bridge between diagnosis and treatment, and targeting treatments to those who stand to benefit the most, objectively weighing the projected harms and benefits
  • Dynamic risk stratification – improving our ability to handle uncertainty and incomplete information, and supporting clinicians to understand what test (if any) should come next, based on the data we have
  • More point of care testing. We’re likely to see increasingly sensitive and precise point of care tests, which could be used outside type 1 EDs in the ambulance or urgent care centres. This could really revolutionise our approach to acute care, changing the pattern of presentations to the ED
  • More wearable technology and remote healthcare. We already have smart watches that can record an ECG, a heart rate, detect serious collisions and automatically alert emergency services. We have apps that enable immediate video consultations with doctors. We’re likely to see more and more of this, and don’t be surprised if we start to see more of these technologies being marketed direct to the consumer (bypassing the doctor). A scary prospect? Maybe – we’re going to need good regulation. But it’s also a very exciting future.

What do you think we’ll be doing in 2030? Please leave your comments!

Rick

Cite this article as: Rick Body, "A Decade of Diagnostics in Emergency Care," in St.Emlyn's, January 1, 2020, https://www.stemlynsblog.org/a-decade-of-diagnostics-in-emergency-care/.


