I have to agree with all the sentiments about SMACC being the best conference in the world for a blend of science and practicality in critical care medicine. I also find that SMACC conference talks delve into topics into which many others do not venture. Some of these are as unsexy, but as important, as the principles of research, evidence and reporting. One such was this talk I was fortunate to give at SMACC Gold, titled “Why most research is wrong”.
As someone who has found myself in the blended tribe of clinical researchers, I found this topic confronting. Many times over my career I have been influenced to change my practice, and only with the passage of time (and, of course, more research) have I and the medical community realised we had headed down the wrong track. Looking back, I clearly used to be an ‘early adopter’ of new treatments and innovations reported in the literature (see Simon Carley’s talk on ‘What to believe and when to change’).
As a new consultant, I wanted to be up with the latest and to actively challenge dogma. But this made me question why I had been so strongly influenced by different findings over the years. With experience and a better understanding of how evidence is created, analysed and reported, I am no longer in the ‘early adopters’ camp. I am happy to await the creation of a strong body of evidence before embarking on significant change, and to wear the jeering of my more junior colleagues, keen to progress our craft.
A good understanding of the strengths and weaknesses in the reporting of research is essential in our game. This talk explores some of the key issues with research today, and what to consider about the ‘evidence’ before you contemplate practice change. It is in no way a thorough review, though, and I strongly recommend getting a good text on critically appraising the literature if you have a keen interest. Rather, this talk was designed to highlight the areas where we can all be trapped, and to offer a few tips about how researchers think.
Louise Cullen (@louiseacullen) with huge thanks to Chris, Roger, Oli and the SMACC team.
Don’t forget to check out the Intensive Care Network for more amazing talks from SMACC, and in particular this amazing talk by Tony Brown on ‘Is the peer reviewed journal dead?’
Learn more at The Emergency Cardiology Group
5 thoughts on “Louise Cullen on why most published research is wrong. St.Emlyn’s”
Great work Louise. I was honored to share the stage and I think (believe?) that our talks work really well together.
The data is not wrong (unless falsified); it is just interpreted incorrectly for the patient in front of you.
‘Best’ evidence and ‘best’ practice?
At one stage or another, a medical student or clinician will explore broader sources of information to enhance their medical knowledge.
Meta-analyses (e.g. Cochrane reviews)
Non-conventional online sources (e.g. FOAMed)
Some of the information can be invaluable in elaborating or refining current understanding.
However, whatever conclusions are drawn from this material, the novice needs to be wary of indiscriminately applying them in the clinical setting in the vain belief that they are invoking ‘best practice’.
The main issue is context.
As defined by one of the originators of the EBM concept, Dr. David Sackett:
“Evidence-based medicine is the conscientious, explicit and judicious use of current best evidence in making decisions about the care of individual patients”
For any given conclusion or recommendation, it must be appreciated that the usefulness or benefit of a test or treatment was demonstrated within a particular studied population of patients.
It is entirely possible that the external validity of the study may be completely altered by a different population of patients – such as the one in front of you.
Adding further complexity, the studied population may have been over-represented by certain sub-groups that distort the findings and give the impression that the results can be generalised to everyone else.
Another way to paraphrase this would be:
The Fallacy of composition – ‘What is true of the studied group, is equally true for everyone else’
The Fallacy of division – ‘What is true of the studied group, is equally true for each individual in that studied group’
Here are some possible factors.
The studied population were:
Sicker (more likely to have disease? higher likelihood of tests being positive? more co-morbidities? failed conventional therapies? different risk:benefit to treatment? more likely to suffer complications from treatment? died/withdrew before study ended?)
Healthier (vice versa)
More compliant (higher success of treatment?)
Less compliant (vice versa)
Treated in a system with special expertise (higher chance of successful intervention?)
Treated in a system with general expertise (vice versa)
Had greater access to health resources and follow-up (closer monitoring? greater chance of having issues and complications addressed?)
Had less access to health resources and follow-up (vice versa)
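The effect of a ‘sicker’ versus ‘healthier’ studied population can be made concrete with a toy calculation (all numbers here are hypothetical, not from the talk): a test’s sensitivity and specificity may travel between populations, but its positive predictive value does not.

```python
# Illustrative sketch with hypothetical numbers: the same test, with fixed
# sensitivity and specificity, gives very different positive predictive
# values in a 'sicker' trial cohort versus a lower-prevalence clinic.

def ppv(sensitivity, specificity, prevalence):
    """Positive predictive value via Bayes' theorem."""
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

# Assume a test with 90% sensitivity and 90% specificity.
sens, spec = 0.90, 0.90

for label, prev in [("sicker trial cohort", 0.30), ("your ED population", 0.02)]:
    print(f"{label}: prevalence {prev:.0%} -> PPV {ppv(sens, spec, prev):.0%}")
```

With these made-up figures, roughly four out of five positives are true positives in the high-prevalence cohort, but fewer than one in five in the low-prevalence one: the same evidence, a very different meaning for the patient in front of you.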
The enrolment / selection process can significantly alter these factors:
Well established health networks vs Limited health networks
Developed vs Developing world
Metropolitan vs Peripheral centre
Specialist patient vs Primary care patient
Hospital patient vs ambulatory/community patient
High SE class vs Low SE class
So whenever you are tempted to implement a new idea you need to consider:
The composition of the studied population and the context of your patient
Are there alternative factors that may have led to the observed results? (A good knowledge of the social determinants of disease, aetiology, pathophysiology, pathology and therapeutics helps.)
Whether or not this is applicable to your patient
Most importantly, does your patient want it?
Lastly, all of this ignores the systematic bias introduced into a study through selection and randomisation.
It is therefore important to first identify any significant differences in baseline characteristics (confounders) between comparison groups, since the observed differences may be attributable to these rather than to the intervention itself.
Such imbalances can completely invalidate any conclusions drawn from the study.
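How badly a baseline imbalance can mislead is worth seeing in numbers. A minimal sketch (using the classic kidney-stone figures often quoted for Simpson’s paradox; the scenario is illustrative, not from the talk or the comment above) shows a confounder reversing an apparent treatment effect:

```python
# Illustrative numbers: an imbalance in baseline severity between groups
# makes the treatment look worse overall, even though it does better
# within each severity stratum (Simpson's paradox).

# (treated_successes, treated_n, control_successes, control_n) per stratum
strata = {
    "mild":   (81, 87, 234, 270),
    "severe": (192, 263, 55, 80),
}

def rate(successes, n):
    """Simple success proportion."""
    return successes / n

for name, (ts, tn, cs, cn) in strata.items():
    print(f"{name}: treated {rate(ts, tn):.1%} vs control {rate(cs, cn):.1%}")

# Pooled analysis, ignoring the confounder (baseline severity):
ts = sum(v[0] for v in strata.values())
tn = sum(v[1] for v in strata.values())
cs = sum(v[2] for v in strata.values())
cn = sum(v[3] for v in strata.values())
print(f"overall: treated {rate(ts, tn):.1%} vs control {rate(cs, cn):.1%}")
```

The treated group wins in both the mild and severe strata, yet loses in the pooled comparison, because the treated arm contains disproportionately more severe cases. This is exactly why baseline characteristics must be checked before attributing an observed difference to the intervention.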
‘Best evidence’ is not best for all so that ‘best practise’ leads to ‘inappropriate practise’