JC: The future of evidence based medicine. St Emlyn’s.

Evidence Based Medicine (EBM) is one of the four pillars of St Emlyn’s. It’s something that evolved out of our intent to base our clinical practice on the best available evidence, combined with a realisation that much of our practice is not that evidence based, plus a healthy dose of scepticism. In the world of #FOAMed we are (I think) in good company, with many colleagues singing from the same song sheet (a big shout out to Ken Milne here as perhaps the principal exponent of this particular art). We review a lot of papers on the blog and try to push the EBM principles that we believe underpin great clinical care. Those principles largely derive from the work of Sackett et al back in the 90s, and for the most part this has served us well. However, times change, and more recently the world of research has been encouraged to adapt to new technologies, challenges and research designs that were not apparent when Sackett and co. wrote their books and articles. The changes in our approach to research design and consumption have been progressive, but the recent COVID pandemic and the increasing accessibility of artificial intelligence/machine learning technologies have arguably accelerated our interest in how the future of EBM might look.

This month we would like to highlight a really interesting article in Nature Medicine by Vivek Subbiah that explores how EBM might look over the next few decades. As always we invite you to read the full article yourself, but we’ve picked out a few of the key points below. There is also a little twist to this blog, but we’ll leave revealing what it is until the end of the piece (no cheating).

In this article, Subbiah suggests that the next generation of evidence-based medicine will involve more rapid and dynamic approaches to data collection, analysis, and decision-making. He describes several emerging trends that are likely to shape the future of evidence-based medicine, including the use of artificial intelligence and machine learning to analyze large datasets, the adoption of decentralized clinical trials that leverage remote monitoring technologies, and the development of patient-centric approaches that emphasize individualized treatment plans and shared decision-making.

One criticism in the article is around the reliance on RCTs as the gold standard for research generally, and arguably many of these criticisms are particularly relevant to EM research. Conducting randomized controlled trials (RCTs) in emergency medicine is challenging due to the time-sensitive nature of emergency care, ethical concerns, and patient recruitment and retention issues. Emergency medicine RCTs often require a tight protocol that can be implemented quickly and efficiently without compromising patient safety. Ethical concerns often arise when studying vulnerable patient populations, and obtaining informed consent from patients in acute distress, or from surrogate decision-makers, can pose challenges (or we simply ignore these groups). The transient nature of emergency medicine care can also make it difficult to follow up patients for long-term outcomes, and the heterogeneity of patient populations can make it challenging to recruit and retain patients for RCTs.

Despite these challenges, RCTs still play an important role in improving patient outcomes and advancing the practice of emergency medicine, and they are usually regarded as high-quality evidence to guide clinical practice. Some of the limitations above have been countered by innovative approaches, such as cluster randomized trials, stepped-wedge designs and adaptive designs, but problems remain. Most notably, RCTs tend to be slow and expensive. Subbiah suggests that better collaboration between researchers, emergency medicine clinicians and patients is needed to ensure that RCTs and other designs meet the needs of all stakeholders, and in particular the unique needs of the emergency medicine setting. RCTs have also been criticised for not reflecting reality, with the findings from RCTs regularly failing to translate into real world practice. This is an area that is particularly pertinent to EM owing to our very diverse populations.
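
To make one of these designs a little more concrete, here is a minimal sketch (our own toy illustration, not taken from the article) of a stepped-wedge schedule. The number of clusters and periods is an arbitrary assumption; the point is simply that every site starts in the control condition and crosses over to the intervention at a staggered, randomly ordered step.

```python
import numpy as np

# Minimal sketch of a stepped-wedge schedule. Cluster and period counts are
# arbitrary assumptions for illustration; in a real trial the order in which
# clusters cross over is randomised, as with the permutation below.

rng = np.random.default_rng(2023)

n_clusters = 6                      # e.g. six emergency departments
n_periods = n_clusters + 1          # a baseline period, then one crossover step per cluster

# design[i, j] = 0 while cluster i is still in the control condition in period j,
# and 1 once it has crossed over to the intervention.
design = np.zeros((n_clusters, n_periods), dtype=int)
crossover_order = rng.permutation(n_clusters)          # randomised crossover sequence
for step, cluster in enumerate(crossover_order, start=1):
    design[cluster, step:] = 1

print(design)      # by the final period every cluster is receiving the intervention
```

Because every cluster eventually receives the intervention, designs like this can be attractive for service-level changes in emergency departments where withholding the intervention indefinitely would be impractical or unpopular.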

How do real world data (RWD) and data from randomized controlled trials (RCTs) differ, and how might they complement each other in EM practice?

  1. Study Design: RCTs are carefully designed studies that randomly assign participants to either a treatment group or a control group to compare the outcomes of different treatments. On the other hand, RWD is collected in a less structured way from routine clinical practice, electronic health records, claims data, or patient registries.
  2. Patient Populations: RCTs often have strict inclusion and exclusion criteria to ensure that the study population is as homogeneous as possible. RWD, on the other hand, is often collected from a more diverse population, which can include patients with multiple comorbidities, different demographics, and a wide range of clinical backgrounds.
  3. Data Collection: RCTs typically use standardized data collection methods, often with specific instruments and protocols for data entry and management. RWD, on the other hand, is often collected from various sources with different data formats, incomplete data, and varying degrees of quality.
  4. Generalizability: RCTs are designed to provide a high level of evidence for specific treatments in specific populations under controlled conditions. RWD is collected from real-world settings and can be more representative of the general population, but may also have limitations in terms of generalizability to other populations or settings.
  5. Bias: RCTs are designed to minimize bias, often through randomization, blinding, and other techniques. RWD, on the other hand, may be subject to various biases, such as selection bias, measurement bias, or confounding, that can affect the validity of the results (a toy illustration of this issue follows this list).
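
The bias point is easy to underestimate, so here is a toy simulation (our own illustration, not from the article) of confounding by indication in real world data. The treatment has no effect at all, but because sicker patients are both more likely to be treated and more likely to die, the naive RWD comparison makes the treatment look harmful, while randomisation of the same null treatment in the same population removes the problem. All of the numbers are invented for illustration.

```python
import numpy as np

# Toy simulation of confounding by indication in observational ("real world") data.
# Treatment has ZERO true effect on mortality here; the apparent harm in the RWD
# comparison is created entirely by sicker patients being treated more often.

rng = np.random.default_rng(42)
n = 200_000

severity = rng.random(n)                          # unmeasured illness severity, 0-1
deaths = rng.random(n) < (0.05 + 0.4 * severity)  # sicker patients die more often

# Observational allocation: clinicians preferentially treat the sick.
treated_rwd = rng.random(n) < (0.2 + 0.6 * severity)
rwd_effect = deaths[treated_rwd].mean() - deaths[~treated_rwd].mean()

# Randomised allocation of the same (null) treatment in the same population.
treated_rct = rng.random(n) < 0.5
rct_effect = deaths[treated_rct].mean() - deaths[~treated_rct].mean()

print(f"Risk difference, naive RWD comparison:  {rwd_effect:+.3f}")  # spurious harm, ~ +0.08
print(f"Risk difference, randomised comparison: {rct_effect:+.3f}")  # ~ 0.000
```

Adjustment methods (regression, propensity scores, matching) can reduce this sort of bias, but only for confounders that have actually been measured, which is exactly where routinely collected emergency department data often falls short.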

RCTs may still be considered the gold standard for generating high-quality evidence, but RWD can provide complementary evidence to support decision-making in healthcare, especially for questions related to the real-world effectiveness, safety, and cost-effectiveness of treatments. Combining RCTs and RWD can provide a more complete picture of the benefits and risks of treatments in different populations and settings, and this is arguably an area that could expand in EM. Some examples are already developing: the TARN database has been used to look at RWD on the impact, treatment and effectiveness of TXA in trauma, and I think we might see other such examples in the future.

The article also highlights platform trial development as another way that we might accelerate trial designs, and we have seen good examples of this through the RECOVERY trial during the COVID pandemic. While platform trials have not yet been widely used in major trauma research, there is potential for their application in this field. Major trauma is a complex and multifaceted condition, with many potential interventions and treatment strategies, and a platform trial could allow multiple interventions or treatment combinations to be tested, leading to more efficient and comprehensive research. The master protocol needed for a platform trial requires a high level of coordination and collaboration between research sites, which can be challenging to achieve, but in a system such as the NHS, and with research networks developing between major trauma and prehospital services, developments in this area are not inconceivable. Think how much easier and faster it might be to test novel resuscitation techniques and interventions using a platform trial. Could trials like SWIFT, CRYOSTAT, iTACTIC and CoMITED exist around a platform trial model? I believe that they could.
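
To illustrate the idea, here is a deliberately simplified sketch of a platform trial: several experimental arms recruit alongside a single shared control arm under one master protocol, and an interim look can stop recruitment to arms that are clearly not helping while the remaining arms carry on. The arm names, event rates and the crude drop rule are all invented for illustration, and there is no formal control of error rates here as there would be in a real master protocol such as RECOVERY.

```python
import numpy as np

# Simplified platform-trial sketch: multiple experimental arms share one
# concurrently recruited control arm, with an interim rule that stops
# recruitment to arms that are not showing benefit. Illustration only.

rng = np.random.default_rng(7)

true_mortality = {"control": 0.30, "drug_A": 0.30, "drug_B": 0.22, "device_C": 0.29}
active_arms = [arm for arm in true_mortality if arm != "control"]
outcomes = {arm: [] for arm in true_mortality}

def recruit(arm, n):
    """Recruit n patients to an arm and record whether each died."""
    outcomes[arm].extend(rng.random(n) < true_mortality[arm])

def mortality(arm):
    return float(np.mean(outcomes[arm]))

for stage in range(4):                     # four recruitment stages
    recruit("control", 500)                # the shared control keeps recruiting throughout
    for arm in active_arms:
        recruit(arm, 500)

    # Crude interim rule (no formal error control): from the second stage onwards,
    # drop any arm whose observed mortality is not at least 2 percentage points
    # lower than the concurrent control estimate.
    if stage >= 1:
        active_arms = [a for a in active_arms
                       if mortality(a) < mortality("control") - 0.02]

for arm in true_mortality:
    print(f"{arm:9s} recruited={len(outcomes[arm]):5d} observed mortality={mortality(arm):.3f}")
print("arms still recruiting at the end of the simulation:", active_arms)
```

The efficiency comes from the shared infrastructure and the shared control arm: each new question does not need a whole new trial set up from scratch.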

The article also highlights several challenges that must be addressed in order to realize the full potential of these new approaches, including concerns around data privacy and security, the need for greater standardization and interoperability of electronic health records and other health data systems, and the potential for algorithmic bias and other ethical issues related to the use of AI and other advanced technologies in healthcare. Better data collection, electronic records and systems that share data bring many potential benefits, but they will also bring problems, both known and unknown, in the future.

Overall, the article suggests that the next generation of evidence-based medicine will be more flexible, dynamic, and patient-centered than previous approaches, and will require new collaborations and partnerships between researchers, clinicians, patients, and industry. There is a lot to review in the article and our recommendation is that you spend a little time reading the full article yourself. I think that many of the proposals in the article are already here across a range of specialties, but what is unknown is how quickly research will change and whether it will be equitably spread across the medical research landscape.

What was the twist?

I suspect that many of you will have worked it out already, but if not I will leave my co-author to explain. My co-author this week was ChatGPT, as I thought that when reviewing an article about AI and the future it would be interesting to get AI input and then edit it to the St Emlyn’s style (although I also used the training systems in ChatGPT to emulate previous blogs on our site). It’s an interesting process, and perhaps the flow of the article is impaired by taking chunks from questions and answers in ChatGPT and then adjusting them, but using it has raised some interesting points that I might have skipped over myself.

My final question was to ask ChatGPT to ‘Give me approximately 150 words stating that the twist is that this blog was 90% written by chatGPT’. The answer, which I really like, is below.

Simon Carley @EMManchester and ChatGPT

References

1. Subbiah, V. The next generation of evidence-based medicine. Nat Med 29, 49–58 (2023). https://doi.org/10.1038/s41591-022-02160-z

2. David Sackett, William Rosenberg, Muir Gray, Brian Haynes & Scott Richardson. Evidence based medicine: what it is and what it isn’t [internet]. BMJ; 13 January 1996 [cited 23 May 2013]. 

3. Simon Carley, “Covid19: Why we need Evidence Based Medicine (EBM) more than ever during a pandemic. St Emlyn’s,” in St.Emlyn’s, April 11, 2020, https://www.stemlynsblog.org/covid19-why-we-need-evidence-based-medicine-ebm-more-than-ever-in-a-pandemic-st-emlyns/.

4. Simon Carley, “Differential prescribing of TXA by gender. St Emlyn’s,” in St.Emlyn’s, May 31, 2022, https://www.stemlynsblog.org/differential-prescribing-of-txa-by-gender-st-emlyn-s/.

5. The Skeptic’s Guide to Emergency Medicine. https://thesgem.com/

Cite this article as: Simon Carley, "JC: The future of evidence based medicine. St Emlyn’s.," in St.Emlyn's, February 18, 2023, https://www.stemlynsblog.org/jc-the-future-of-evidence-based-medicine-st-emlyns/.

1 thought on “JC: The future of evidence based medicine. St Emlyn’s.”

  1. I wonder why #qualitative research has no role in this article about the future of EBM for Emergency Medicine and no role in the future of EBM at all.
    In particular, interview analysis by AI could add value in understanding processes in EM.

