I’ve always been somewhat irritated by that old adage: ‘Those who can, do; those who can’t, teach.’ Apart from the obvious intended insult to teachers (which is just rude), it’s also patently untrue in medicine. If I think back across my medical career, it’s clear to me that the best clinicians are almost always fantastic teachers. That is, if you use the standard of ‘whether I think they are any good’, an assessment which is of course full of all sorts of bias.
A better measure might be whether those who teach improve patient outcomes. Indeed, that is what has been examined this month: authors using data from the US Medicare program have suggested that there is an association between better patient outcomes and the teaching hospital status of an organisation. The article is published in JAMA.1
The abstract is below but as always read the full paper.
It’s an observational database study, which is hardly the highest level of evidence. Such studies are commonly churned out of large databases of routinely collected information to look for trends in outcome. That sounds great, but such studies are prone to bias.2,3 There are many reasons why we might find a chance association, and association does not equate to causality. Thus these types of studies should usually be considered hypothesis generating rather than hypothesis proving.
On the other hand, it’s rather tricky to look at this question in any other way. We could hardly randomise patients to different types of facility, and a trial of intervention is not pragmatic, so let’s go with the observational data instead.
Talk to me about size.
It’s huge! The study compares outcomes for patients across 15 medical and 6 surgical conditions. The authors looked at 4483 hospitals and 21 million patients. Wow, you say, that must equate to incredible statistical power, and that’s true. The sheer numbers allow us to look for trends with a degree of precision that is simply not possible with small numbers. However, be careful here. If a trial has methodological flaws, those won’t be diluted by patient numbers; in fact, the reverse can take place. A very large trial with a methodological error may appear to be more robust than a small trial with the same error because the precision estimates (confidence intervals, fragility index, p-values) look better. Precision will certainly improve with size; methodological problems certainly won’t. So we must not be dazzled by big numbers, but focus on the methods and the potential biases that result from the research design.
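The precision-versus-bias point above can be sketched in a few lines of code. This is a hypothetical toy simulation (not anything from the paper): a systematic bias, such as unmeasured confounding, shifts every measurement away from the true value, and increasing the sample size simply narrows the confidence interval around the *wrong* answer.

```python
import random

random.seed(42)

# Toy example: the true effect is 0.0, but a constant bias of +0.5
# (e.g. confounding) contaminates every observation.
TRUE_MEAN, BIAS = 0.0, 0.5

def biased_sample_ci(n):
    """Return the sample mean and approximate 95% CI half-width for n
    observations drawn from the biased process."""
    data = [random.gauss(TRUE_MEAN + BIAS, 1.0) for _ in range(n)]
    mean = sum(data) / n
    sd = (sum((x - mean) ** 2 for x in data) / (n - 1)) ** 0.5
    half_width = 1.96 * sd / n ** 0.5
    return mean, half_width

for n in (100, 10_000, 1_000_000):
    mean, hw = biased_sample_ci(n)
    print(f"n={n:>9}: mean={mean:.3f}, 95% CI +/- {hw:.3f}")
```

Run it and the interval shrinks dramatically as n grows, yet the estimate stays stubbornly near the biased value of 0.5, never the true 0.0. More patients buy precision, not validity.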
Tell me about the patients.
This is really important and something that’s not apparent in the abstract. These are Medicare patients (so not everyone), aged over 65 (so not everyone), with one of the 21 studied conditions (so not everything), in US hospitals (so not like us). These limitations significantly restrict the generalisability of the findings, particularly when considering the effect in a country like the UK, where we are all teaching hospitals. We do have solely private institutions, and we do have major and minor teaching hospitals, but there are clearly significant structural and academic differences between here and the US.
And the outcomes?
7- and 30-day mortality.
What did they find?
In brief (read the abstract above), they found that major teaching hospitals had lower mortality rates than minor teaching hospitals and non-teaching hospitals. This difference persisted even when the data were adjusted for known confounders. They also looked to see whether the size of the hospital mattered, but when hospitals of similar size were compared it still looked as though there was a mortality benefit to being cared for in a teaching facility.
How should we handle these findings?
Well, it does not answer my original question about individuals’ abilities to teach. This is an enormous study in which nuance and style are swamped by statistical power. We must be careful to remember that association is not the same as causation, and there are many reasons why outcomes might be different. The patients, the facilities, the education, the severity of illness and the location of the patients will differ in ways that the statistical adjustments here cannot account for, and we must always be cautious about this.
The reference standards chosen here do not account for the broad range of conditions we see in clinical practice, and it’s possible that hospitals that know which sentinel conditions are going to be studied will put additional resources into those areas, a solution that might be easier in a larger hospital or teaching unit.
At the risk of repetition, the difference between what is a teaching facility and what is not needs careful consideration. There are many differences likely to relate to location, funding, socio-economic status, support, staffing, access to resources, sub-specialists, specialist interventional procedures, and a whole host of other patient and non-patient factors that could influence results like the ones found here.

It is tempting to think that teaching hospitals really are doing better, and coming from a teaching hospital myself I would like to believe the results (Ed – a personal bias which is unfounded in reality), but is there sufficient data to be sure of that? I really don’t think so. Is there sufficient data here to state that teaching is the reason for the difference? Absolutely not. Again, association is not the same as causation, and this study is a good reminder of that fact. I shudder at how the press might misinterpret these findings in the coming months.

I am told by my friends that this whole topic of teaching vs. teaching hospitals is a controversial area in the US, with cultural and political undertones. It’s therefore very important that we subject papers like this to careful review. It’s also very clear to the St.Emlyn’s team that we know some fantastic educators in the #FOAMed world who are not in mainstream teaching facilities in the US, and thus we can’t even equate excellence in education to the designation of the facility.
The differences found, if true, would be clinically important. After statistical adjustment there is a 1.2% absolute difference in mortality, which (as you all know) equates to an NNH of about 83. That’s a pretty small NNH and one that raises an eyebrow. Remarkable? Yes, of course, but perhaps too remarkable to be entirely true, and we need to think hard about why that might be.
What does this mean?
Tricky. It probably means that outcomes are different. The question is why? This paper sensibly concludes that we don’t really know, but that we may well need to look quite carefully.
Before you go please don’t forget to…
- Subscribe to the blog (look top right for the link)
- Subscribe to our PODCAST on iTunes
- Follow us on twitter @stemlyns
- PLEASE Like us on Facebook
- Find out more about the St.Emlyn’s team