Working in an inner-city Emergency Department, we folk in Virchester are used to encountering patients who ‘do not wait’ to be seen. They pose a big problem, and not just because the percentage of patients who ‘do not wait’ is used as a key ‘quality indicator’ – a measure of our performance as an Emergency Department. An even bigger worry is the fate of the patients who didn’t wait. Did they really decide not to wait, or did they collapse in a corner somewhere? Even if it was a conscious decision, did they actually have the capacity to make it? Were they sufficiently informed about the risks? And how much responsibility do we have to chase these things up? This week in the Virchester journal club, we reviewed the ‘Did Not Wait Patient Management Strategy’ study, recently published in the Emergency Medicine Journal.
The work, coming from Dublin (one of our favourite places since ICEM 2012), is described as a ‘prospective study’. The authors essentially designed a clinical protocol: they retained the records of patients who ‘did not wait’, labelled them, had them reviewed by a senior ED doctor the following day, then had a liaison team recall the patient if the senior doctor deemed it necessary. After a year, they examined their data to determine how many patients had been recalled and how many of those patients were admitted. Presumably to make it a bit more ‘study-like’ and less ‘audit-like’, they also ran a multivariate analysis to identify variables associated with patients being recalled. The decision to recall patients was at the discretion of the ED physicians. The design is therefore really a service evaluation which, by its very nature, is arguably more ‘retrospective’ than ‘prospective’, even if it was planned a priori.
The authors found that 2,872 patients (6.3%) didn’t wait to be seen – quite a high rate. 107 (3.7%) of these patients were recalled after senior review of the records, although it’s not clear whether additional patients were recalled before senior review, and we don’t know how many patients were not contactable. Among those recalled, the most common presenting complaints were chest pain (33 patients), drug or alcohol overdose (20 patients), psychiatric problems (16 patients) and musculoskeletal problems (14 patients). Of the patients with chest pain who were recalled, 3 actually had a non-ST elevation myocardial infarction (NSTEMI).
The multivariate analysis showed that patients with chest pain, overdose or a higher triage priority were more likely to be recalled and patients with gastrointestinal complaints were less likely to be recalled. Now that’s a multivariate analysis that was done to make this look more like research than audit if ever I saw one – unnecessary (IMHO)!
So what does this research tell us? Well, there are clearly some methodological flaws. There were no protocols in place for recalling patients so it was left to the discretion of individual physicians. We have no control group for comparison. We don’t have any real outcome data, so we don’t know how the patients who weren’t recalled got on. The response rate (or loss to follow up) isn’t reported, which is kind of important. And we have no idea of the interobserver reliability for the decision to recall patients.
So this work has significant limitations. But that doesn’t mean we should automatically throw the whole paper in the bin. There are still some meaningful findings. There’s a lesson here – even research with limitations apparent on critical appraisal can have clinical implications. From this work, we know that the prevalence of clinically significant pathology is far from negligible among patients who don’t wait to be seen. In one year, this institution managed to identify 3 NSTEMIs, a lobar pneumonia, severe peptic ulcer disease, an ankle fracture requiring ORIF, 2 patients requiring out-patient coronary angiograms and more, simply by contacting the patients they were worried about. This tells us that a patient’s decision to leave before being seen certainly doesn’t rule out the presence of serious pathology.
How should we change our practice in light of this? Should we all implement senior review of the records for patients who ‘did not wait’? It’s certainly worth considering. But how much responsibility do we have here? If patients decide not to wait, surely they take the responsibility for the consequences. Are we being too paternalistic if we continue to pester them to come back? Are we compromising their autonomy?
This research also leaves some other important questions unanswered. The patients who leave without being seen that I worry about the most are the ones I can’t contact after they leave, and the ones who have had neither the risks objectively pointed out nor their mental capacity assessed. It would be very useful to know the outcome of these patients and the incidence of serious pathology. Whether it will ever be possible to do such research remains to be seen.
In the meantime, this work should at least make us think twice before filing away the cards of those ‘DNW’ patients, never to be seen again. Certainly, those with chest pain (in particular) ought to be informed of the risks. In this study, almost 10% of the recalled patients with chest pain turned out to have had an NSTEMI. So we don’t just need to worry about discharging these patients ourselves. We also need to worry when patients take that decision themselves!
6 thoughts on “Managing Im-Patients: The ‘Did Not Wait Patient Management Strategy’ Study”
It’s an interesting study but I don’t think it answers the question we want it to, namely: are patients who DNW at any more risk than those who stay?
For instance, with chest pain, 1% of those who DNW are subsequently found to have an NSTEMI. (Unfortunately they don’t test every chest pain patient and only bring back around 10% of those who present with that symptom, so we don’t actually know the true prevalence.)
This number has to be weighed against the number of missed NSTEMIs in those who do stay in the A&E.
Hi Gareth, thanks for the comment!
I agree that would be a great question to answer. In this study, the authors wanted to evaluate the strategy that they had implemented to address the problem of patients who did not wait. I think it’s absolutely fair enough to do that, even though this may not be the study that we’d most like to have.
Just to play devil’s advocate for a moment, let’s take it that we now know that the rate of missed NSTEMI is 1% among patients who don’t wait. (It may be more – only 33/269 patients were recalled for testing, which gives a rate of 9% among those tested.) The authors are reporting the effectiveness of their strategy. They suggest that, by screening the notes of 2,872 patients over a year and recalling 107 patients (33 with chest pain), they can identify 3 NSTEMIs that would otherwise have been missed, plus a whole host of other pathology.
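(For the number-inclined: the 1% and 9% figures being debated here are just the same 3 NSTEMIs divided by different denominators. A quick sketch, assuming – per the figures above – that 269 chest-pain patients did not wait, 33 of whom were recalled and tested:

```python
# Back-of-envelope rates from the figures quoted in this thread.
# Assumed: 269 chest-pain patients did not wait, 33 were recalled
# for testing, and 3 of those recalled had an NSTEMI.
chest_pain_dnw = 269
recalled = 33
nstemi = 3

# ~9%: the rate among those actually tested
rate_among_tested = nstemi / recalled

# ~1%: the lower bound across all chest-pain patients who did not wait
# (the true rate is unknown, since 236 were never tested)
rate_among_all_dnw = nstemi / chest_pain_dnw

print(f"NSTEMI rate among recalled chest pain: {rate_among_tested:.1%}")
print(f"NSTEMI rate among all DNW chest pain (lower bound): {rate_among_all_dnw:.1%}")
```

So the two figures aren’t in conflict – one is a rate among the tested, the other a lower bound for the whole DNW chest-pain group.)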
Is that a worthwhile strategy? Quite possibly. I don’t know how long it takes to screen that many notes or recall the patients, or how much it costs to pay a senior doctor to do it. But, even setting aside the most important thing – the clinical benefit for the patients – the costs of doing this might compare favourably with the costs of ongoing care for patients who later re-present with re-infarction in a worse state, and the strategy might well be cost-effective when we consider the potential reduction in mortality.
Would it then matter if the patients who do not wait have a relatively higher risk of missed diagnosis than those who stay? If we can show that the strategy to deal with patients who don’t wait is doing some good and is cost-effective, I don’t think it would.
Thanks Richard – interesting stuff
Going to be a bit of a pedant here. Audit requires a standard, so I would argue this is not audit either (as you partially allude to early in the article, though you do mention audit later on). It fits better as a service evaluation, as they are looking at outcomes of practice.
These are important distinctions as in Emergency Medicine it is all too easy to get data and just analyse it in the name of research. A mistake I am sure I have made in the past 🙂
Hi Damian! Yes, you’re quite right of course. I did point out that this is a service evaluation in the blog post but I shouldn’t have also used the phrase ‘audit like’, as it could mislead people into ignoring the distinction between audit and service evaluation.
We run a number of projects with each design (research, audit, service evaluation) locally – and appreciating the differences is vital to understand the need for NHS ethical approval.
Thanks for pointing it out – it adds to the value of the post.
Hi Rick, thanks for the interesting Med Ed – learned a few things as always.
It also seems to me that it doesn’t really matter whether the rate of NSTEMI is higher in those with chest pain who stay or those who go. The fact is that there is a significant amount of pathology in those who go, and I think that would justify reviewing the notes of those with chest pain at least. Of course, the other question is why patients with chest pain have to wait so long in the first place. Is this for initial assessment or for repeat bloods? Thanks, AM
PS Thanks Damian for clarifying about audit. I think that this is more than service evaluation, and is research, as there is no reason to think that findings would not be generalisable to other settings.