Here at St Emlyn’s we’ve always prided ourselves on being reflective clinicians. We’ve written blogs on feedback1, reflection2, coaching3 and much more, all of which rest on the principle that it’s important to look back at what happened so that we can learn from events. This is not peculiar to St Emlyn’s of course; the concept of case review (e.g. mortality and morbidity meetings) is well established in medicine4–7.
We’ve also argued that it’s quite tricky to be a good judge of self. My talk at SMACC in Sydney8 revolved around the idea that we cannot truly know ourselves and that we need the objectivity of others to help us understand how we perform in the clinical environment.
But what if that desire to reflect, review and feedback is biased? Wouldn’t that affect the quality of the learning that comes out of any review process? This is a problem known as hindsight bias: the process whereby we judge prior actions based on the outcome rather than on the decisions made at the time. How often have you been told about a poor patient outcome, only to hear the story of how it happened and to declare ‘I would never have done that’? I’ll admit to thinking this far too often, perhaps as a natural protective response, before checking myself and asking the better question of ‘why did that appear to be the right decision to that clinician at the time it was made?’.
The question remains as to how influential hindsight bias is in our opinions, and in particular when we are reviewing exceptional events (positive or negative).
This week an old friend of St Emlyn’s, Prof. Tim Coates, highlighted a paper that examines the influence of hindsight bias in reviewing tough clinical cases9.
The abstract is below, but as we always say, please go and read the full paper yourself.
What kind of paper is this?
This is a survey design, but with elements of randomisation. Participants were delivered an online survey, but the elements of the survey that the participants saw were randomised as described below. It’s an interesting design to test participants’ understanding of how data can be interpreted in light of clinical outcomes.
So what exactly did they do?
The authors recruited 93 clinicians of various grades. I think this was done through a single institution and opportunistically. Ideally it would have been preferable to have a more systematic approach to participant recruitment, as agreement to take part may itself introduce bias into the study.
They only recorded participant seniority and so we can’t really know much about the baseline characteristics of the participants. We do know that they were all doctors.
Once they had agreed to participate they took a web-based survey that included three clinical vignettes. From an EM perspective these were all relevant. Chest pain, swollen lower limbs and headache are all important presentations to the ED that can result in significant risk to patient and clinician. Although the vignette description of what happened to the patient on their first visit was always the same, participants were randomly told that the patient was either alive or dead. They were then asked to rate the quality of care the patient received at the hypothetical first visit, from very poor care through to excellent care.
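For readers who like to see a design laid out concretely, the core of the method can be sketched as a small simulation. This is not the authors’ code or data; the ratings, scale and hypothetical hindsight penalty below are all assumptions made purely to illustrate the randomised-outcome structure of the survey.

```python
import random

random.seed(42)

def rate_care(outcome):
    """Simulated rater: same vignette, but scores drop when told the
    patient died (a hypothetical hindsight-bias effect, 1-5 scale)."""
    base = random.choice([3, 4, 4, 5])               # assumed rating tendency
    penalty = random.choice([1, 2]) if outcome == "died" else 0
    return max(1, base - penalty)

participants = 93                                    # as in the study
ratings = {"alive": [], "died": []}
for _ in range(participants):
    # The randomised element: each participant is told one outcome at random
    outcome = random.choice(["alive", "died"])
    ratings[outcome].append(rate_care(outcome))

for outcome, scores in ratings.items():
    print(outcome, round(sum(scores) / len(scores), 2))
```

Under these assumptions the mean rating for the ‘died’ arm comes out lower than for the ‘alive’ arm even though the described care was identical, which is exactly the comparison the study makes.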
What did they find?
In two of the three scenarios the participants rated the care as much worse when the patient died as opposed to when they lived. This difference did not change when consultants were compared with more junior doctors. The final case (chest pain) did not show the same level of variability (they did not randomise the order that participants encountered the cases).
What is also interesting to me is the range of scores for each of the scenarios. Although there were central tendencies to scoring in all scenarios, all scenarios received the full range of scores (from excellent through to very poor).
So should we believe this paper?
There are a number of caveats here. It’s a small, single centre, UK study where recruitment strategies are unclear and we don’t know much about the participants.
That said, the findings are consistent with what is known about hindsight bias as a concept, and the study demonstrates a similar effect in clinical medicine, in particular in the assessment of adult emergency medicine cases.
So what does this mean?
The authors appear to have demonstrated that hindsight bias certainly exists when looking at cases where the outcome has been poor; they have also shown significant variability in opinion about what good quality care is.
For me, the lesson here is that when we review cases, particularly those where the outcomes are less than perfect, we need to be mindful of hindsight bias. We need to try to imagine alternative outcomes, and to be kind to those involved.
It’s also a reminder that when we are assessing clinical judgement it’s important that we look at the decision making process and not just what happened. As we’ve said many times before, the clinical outcome of a patient is based upon a combination of clinical decisions PLUS luck/uncertainty. We should never forget this.
Ross Fisher reminded me recently that the term bias itself has a number of associations with it and perhaps a better phrase is ‘cognitive disposition’, as they are inherent in all of us and not easily controlled.
Lastly, the UK is embarking on a new process to examine hospital based deaths. The new Medical Examiner role will review all deaths to look for opportunities to learn and potentially to determine where harms have happened10. The potential impact of hindsight bias on this new process is, as yet, undetermined.
- 1.May N. Testing Testing. St Emlyn’s. https://www.stemlynsblog.org/testing-testing/. Published 2013. Accessed 2019.
- 2.May N. On reflection. St Emlyn’s. https://www.stemlynsblog.org/on-reflection/. Published 2018. Accessed 2019.
- 3.Carley S. How to Coach and Feedback with your team. St Emlyn’s. https://www.stemlynsblog.org/how-to-coach-feedback-team-st-emlyns/. Published 2018. Accessed 2019.
- 4.Rafter N, Hickey A, Condell S, et al. Adverse events in healthcare: learning from mistakes. QJM. July 2014:273-277. doi:10.1093/qjmed/hcu145
- 5.David G. [To make good use of medical error]. Bull Acad Natl Med. 2003;187(1):129-136; discussion 136-9. https://www.ncbi.nlm.nih.gov/pubmed/14556459.
- 6.Higginson J, Walters R, Fulop N. Mortality and morbidity meetings: an untapped resource for improving the governance of patient safety? BMJ Qual Saf. May 2012:576-585. doi:10.1136/bmjqs-2011-000603
- 7.George J. Medical morbidity and mortality conferences: past, present and future. Postgrad Med J. November 2016:148-152. doi:10.1136/postgradmedj-2016-134103
- 8.Carley S. The power of peer review. St Emlyn’s. https://www.stemlynsblog.org/smacc2019-the-power-of-peer-review/. Published 2019. Accessed 2019.
- 9.Banham-Hall E, Stevens S. Hindsight bias critically impacts on clinicians’ assessment of care quality in retrospective case note review. Clin Med. January 2019:16-21. doi:10.7861/clinmedicine.19-1-16
- 10.BMA. Implementation of the medical examiner system. BMA. https://www.bma.org.uk/advice/employment/ethics/implementation-of-the-medical-examiner-system. Published January 2019. Accessed November 15, 2019.
2 thoughts on “JC: Hindsight bias. St Emlyn’s”
This is very relevant not only for case review but also in incident reporting – ‘near miss’ incidents are taken less seriously if ‘no harm’ occurred. This hindsight bias is alarming – we (deliberate use of the collective noun) need to learn from our mistakes whatever the outcome and before actions or omissions lead to serious incidents.
Another hindsight bias is the ‘missed diagnosis’ bias – suddenly every patient has the disease and gets investigated.
Yet systems and teams will always be flawed – we’re never going to reach perfection, so when looking back at clinical error, the kindness & compassion we show to patients must be demonstrated in how we view one another.