It is the week of the annual scientific conference for the Royal College of Emergency Medicine again. As usual the St Emlyns team is in the thick of it, and when someone mentioned clots I felt obliged to offer our services. As such I was asked to give an update in the late breaking abstracts session regarding the Thromboprophylaxis in Lower Limb Immobilisation (TiLLI) study, which is a collaborative project run by myself, Professor Steve Goodacre from the University of Sheffield, Professor Beverley Hunt from King's College Hospital, and Tim Nokes and Jon Keenan from Plymouth. This post is designed to provide references and reminders for the content of the talk.
What am I here to talk about?
We put at least 10,000 patients in plaster in the UK every 3 months, or in excess of 40,000 a year if we extrapolate this figure. These patients are at risk of venous thromboembolism (VTE). Plaster immobilisation or recent non-major orthopaedic surgery would seem to account for about 5% of all VTE events; pulmonary emboli also make up a greater proportion of events in this group than in other VTE presentations, and on the whole these patients appear to receive less thromboprophylaxis than other surgical cases, according to European registry data [1]. We are also occasionally exposed to traumatic anecdote in this population – the young fit athlete who is placed in plaster after a simple limb injury, who then arrives in our department with submassive PE, or in cardiac arrest. These cases are very emotive – they are young active folk, who are on the receiving end of a complication from one of our more common treatments. They cause a lot of soul searching, anguish and reflection. These cases also find their way to the coroner with increasing frequency, and many regions have been subject to recent legal rulings. This is still an issue.
You are pulling at my heart strings…. where’s the scientific evidence?
Well, we can look at the most recent Cochrane review for an up-to-date assessment of the relevant trial evidence. This suggests that thromboprophylaxis will more than halve your risk of clinically significant VTE if you are placed in plaster immobilisation. But look closer. What do you see from this forest plot? Perhaps you see an absolute event rate in the control group of 2.1%. Not very high, is it? Perhaps you then see the experimental event rate of 0.8% – then you can talk about the absolute risk reduction of 1.3% and the Number Needed to Treat of 77. Maybe you see the I² of 16% and this makes you think about heterogeneity. Is this evidence worth the potential harms of therapy, not to mention the resource use, cost and inconvenience?
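For those who like to check the arithmetic, the figures above follow directly from the two event rates. A minimal sketch (the 2.1% and 0.8% rates are the Cochrane figures quoted above; everything else is standard arithmetic):

```python
# Absolute risk reduction (ARR) and Number Needed to Treat (NNT)
# from the Cochrane control and experimental event rates quoted above.
control_event_rate = 0.021       # 2.1% VTE in the control group
experimental_event_rate = 0.008  # 0.8% VTE with thromboprophylaxis

arr = control_event_rate - experimental_event_rate  # absolute risk reduction
nnt = 1 / arr                                       # patients treated per event prevented

print(f"ARR = {arr:.1%}, NNT = {nnt:.0f}")  # 1.3% and roughly 77
```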
Has this always been an issue? I feel like we are talking about it more but seeing less of it for some reason….
Well, here it gets even more interesting. If you plot the Cochrane review trials in time order, you will see that the control event rate actually falls consistently over time. Perhaps this is due to rising awareness of VTE in general, and the fantastic work performed by the All-Party Parliamentary Thrombosis Group and the VTE Exemplar Network, not to mention Beverley's charity Thrombosis UK. Yet even with this decline, the event rate sits consistently between 2 and 3%. In a large population cohort this is a lot of clots – at least 800 a year by the above initial estimates. That equates to more than 2 potentially preventable symptomatic VTE events per DAY.
What’s the international take on this?
In principle it seems to be that we need more research. Since the publication of the GEMNet guideline in 2012, we have had NICE guidance asking us to have a balanced discussion on risk, American College of Chest Physicians guidance telling us we don’t need to worry, a NICE research recommendation to look at the clinical and cost effectiveness of this intervention and most recently a James Lind Priority Setting Partnership with Emergency Medicine highlighting this topic as a top 15 priority. Until this research is delivered we are left with regional variation in practice and limitations in awareness.
And is there any research coming?
Yes there is. The RCEM clinical studies group championed a project several years ago that has recently been awarded funding through the Health Technology Assessment Programme and commenced in April 2017. This is the TiLLI project which has three arms – systematic reviews and information gathering to estimate risk and the value of risk prediction, Delphi consensus work to bring experts together for discussion on risk prediction and barriers to care, and lastly decision analysis modelling to look at cost effectiveness and the value of further information.
OK sounds good. Where are you up to with it all?
Well, the systematic reviews are nearing completion. The initial review of treatment effect and trial evidence has naturally produced similar findings to the recent Cochrane meta-analysis and so adds little other than methodological assurance. However, we have also completed two systematic reviews on risk prediction. The first looks at what individual risk factors at baseline have been associated with subsequent development of VTE in this population. This work has not been collated before, and offers a valuable insight into characteristics associated with increased likelihood of clot formation; older age, exogenous oestrogen use, obesity, significant comorbidity, active cancer and the use of plaster (rather than removable splint) immobilisation all seem to replicate over several studies as consistent risks with high odds ratios.
Do these findings allow us to target thromboprophylaxis in specific individuals?
Not clearly as yet, but we should theoretically be able to narrow our focus here, and specifically target those patients at higher risk. This could reduce the cost and resource associated with thromboprophylaxis use, but still retain the benefit. This was of course the original intention of the GEMNet guideline. Since then, two other formal scoring systems have been produced and assessed in the literature. Our third systematic review aims to look at any validation, or estimates of utility and performance of these scoring systems in real world cohorts.
And how did that go?
Well, it was interesting. We found two papers that aimed to assess the diagnostic test characteristics of these three rules, using case control cohorts of varying sizes. Watson et al looked at 42 emergency department patients with an artificial disease prevalence of 50%, and Nemeth et al had a larger cohort of approximately 10,000 patients, but all taken from a VTE registry, thus including inpatients with disease and many other confounding factors. So neither paper is perfect, really, but they do give us additional information.
First up for review was the GEMNet rule, which had no derivation study and has not as yet been prospectively validated. Watson et al retrospectively applied the GEMNet rule to their database of prospectively collected case control patients, and determined a sensitivity of 85.7% with a specificity of 47.6%. Not actually that bad. They report the negative predictive value (NPV) as very weak (25%), but this is reflective of their inaccurate high prevalence (50% rather than the 2-3% discussed above), which impacts directly on NPV. This inaccuracy seems to be a direct result of their case control methodology (21 active cases, matched with 21 controls).
Second was the Plymouth rule, originally derived and later refined by Tim Nokes (Haematology) and Jon Keenan (Orthopaedics) from Plymouth. Although they have published widely on this issue, those publications unfortunately do not extend to derivation or validation cohorts for their rule. Watson et al assess this method to have a sensitivity of 57%, but a specificity of 52.4%. They report this as having an overall higher accuracy than GEMNet in their paper, but some eyebrows get raised at this point. Accuracy is usually a global measure of diagnostic test performance (or risk assessment performance, as in this case), calculated as (true positives + true negatives) / all subjects. As above, the false negative count in both these cohorts is significantly elevated as a result of the incorrect prevalence used within the paper (20 times the estimate of actual prevalence). Therefore I am not sure how much we can really read into these estimates of accuracy.
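The prevalence point is easy to demonstrate numerically. A minimal sketch using the GEMNet sensitivity and specificity reported by Watson et al; the 2.5% community prevalence is the estimate discussed above, and the papers' own reported values come from their raw cell counts and may differ, so this only shows the direction of the effect:

```python
# Why NPV and accuracy are prevalence-dependent: both are recalculated
# here from sensitivity/specificity at two different prevalences.
# Figures of 85.7%/47.6% are the GEMNet values reported by Watson et al;
# the 2.5% prevalence is the community estimate discussed in the post.

def npv(sens, spec, prev):
    """Negative predictive value: TN / (TN + FN) at a given prevalence."""
    tn = spec * (1 - prev)
    fn = (1 - sens) * prev
    return tn / (tn + fn)

def accuracy(sens, spec, prev):
    """Global accuracy: (TP + TN) / all subjects."""
    return sens * prev + spec * (1 - prev)

sens, spec = 0.857, 0.476
print(f"NPV at 50% prevalence:   {npv(sens, spec, 0.50):.1%}")
print(f"NPV at 2.5% prevalence:  {npv(sens, spec, 0.025):.1%}")   # far more reassuring
print(f"Accuracy at 2.5% prevalence: {accuracy(sens, spec, 0.025):.1%}")
```

The same sensitivity and specificity give a very different NPV once a realistic 2–3% prevalence is used, which is exactly why the case control design (50% prevalence) distorts these downstream measures.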
Last is the L-TRIP rule, derived in a fairly robust fashion within a large case control dataset by Nemeth et al in 2015. After minimising the variables in an attempt to produce a clinical and pragmatic rule, the authors then validated this rule in 2 subsequent VTE case control datasets with good AUC values in each. The authors present these data within an estimated cohort prevalence of 2.5% and report a sensitivity of 65.1%, a specificity of 72.2% and a negative predictive value of 98.8% for a cut point of 10 or more on a cumulative scoring tool. Getting better as a SpIN, but limited applicability as a SnOUT here.
How many patients would test positive (and thus require thromboprophylaxis) using these rules?
That depends on your approach to risk. Application of the L-TRIP rule with a cut point of 10 would result in approximately 60% of patients requiring thromboprophylaxis by their estimates. But as per the above figures, this essentially results in a lot of needles, a fair few missed cases and a lot of resource use (the score has 14 points of different weightings and is cumbersome). It is tricky to estimate rates of positive score from the Watson paper for GEMNet and Plymouth. The recent RCEM audit suggests that if a formal risk assessment is carried out using the GEMNet guidelines, roughly half of patients will require thromboprophylaxis. This is actually a lower proportion than L-TRIP would treat, for a far higher sensitivity. A win-win?
The L-TRIP paper sets out an excellent table looking at diagnostic test characteristics and percent testing positive by additional score, from 1 point all the way up to 14. This information is very useful. A sensitivity over 90% by this score would appear to cost you thromboprophylaxis in over 85% of patients. When you get to that proportion, people start to ask themselves about the point of risk prediction. But would you accept a lower sensitivity? How many clots would you be happy to miss?
Is there a better score on the horizon?
As part of TiLLI we are nearing the final stages of a Delphi consensus group exercise on risk prediction. We have completed 2 full rounds using an expert panel of >20 clinicians, featuring orthopaedic surgeons, haematologists, thrombosis experts, emergency physicians and trainees. We have good agreement on inclusion of 6 variables, and good agreement on exclusion of 10 variables. We have divergent opinion on approximately 5 variables, many of which have been combined and sequenced at a recent nominal group meeting. This has provided an excellent insight into competing priorities by specialty, national practice, patient and public opinion and variation in scoring methodology. However, there is no empirical data for this potential new rule and as such it will be challenging to use it within any formal assessments of performance.
Which brings us to decision analysis modelling?
Yes. Decision analysis modelling essentially compares the expected costs and consequences of decision options through use of already available information. This methodology allows us to synthesise data from previous research and try to calculate objective measures of net benefit and risk at a population level. For this project, we can use previous Health Services Research to estimate both the financial cost of acquiring a deep vein thrombosis or pulmonary embolism and the associated health impact, represented as loss of Quality Adjusted Life Years (QALYs). We can also estimate QALY loss with major bleeding events. We can then use the above systematic review data to inform us of the likely incidence of events with varying strategies of thromboprophylaxis in this cohort and assign QALYs to each of these outcomes. Once the probabilities and pay-offs have been entered, the decision tree is rolled back to allow the expected values of each option to be calculated. These mathematical models can eventually be used to determine the cost effectiveness of an intervention like thromboprophylaxis for lower limb immobilisation, and provide data on whether generic prescription, tailored prescription or omission of thromboprophylaxis is superior at population level.
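To make the "rollback" step concrete, here is a toy version of the mechanics. To be clear, every probability and QALY loss below is an invented placeholder purely to illustrate how a chance node is rolled back; these are not TiLLI model inputs:

```python
# Toy decision-tree rollback: each strategy is a chance node, and its
# expected value is the probability-weighted sum of its branch payoffs.
# All numbers are illustrative placeholders, not real model inputs.

def expected_value(branches):
    """Roll back one chance node: sum of probability * payoff (QALY change)."""
    return sum(p * payoff for p, payoff in branches)

# Hypothetical strategy 1: no prophylaxis (higher VTE risk, no bleed risk)
no_prophylaxis = expected_value([
    (0.021, -0.3),   # VTE occurs: assumed QALY loss
    (0.979,  0.0),   # no event
])

# Hypothetical strategy 2: prophylaxis (lower VTE risk, small bleed risk)
prophylaxis = expected_value([
    (0.008, -0.3),   # VTE despite treatment
    (0.002, -0.2),   # major bleed: assumed QALY loss
    (0.990,  0.0),   # no event
])

print(f"Expected QALY loss: none={no_prophylaxis:.4f}, prophylaxis={prophylaxis:.4f}")
```

The real model adds costs alongside QALYs and many more branches, but the principle is the same: whichever strategy rolls back to the better expected value wins, and sensitivity analysis on the inputs shows where further research would change the answer.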
In addition, this model will hopefully provide a framework for indicating the need for and value of additional research. Interrogation of decision analysis models can sometimes provide detail on the monetary value of eliminating uncertainty within the model. The costs of further research can be balanced against the likely impact on the model, and on eventual changes to patient care. If you like all this stuff, then this is a great synopsis from the BMJ.
Any last words?
Well, it’s been very interesting to see this project unfold from a simple concept idea at the RCEM clinical studies group to a James Lind Alliance priority and now a HTA funded collaborative project. We are not there yet, but I am starting to feel like we have an opportunity for real specialty engagement and consensus output from the work; this is an area with lots of opinion, wide diversity in practice and limited research base. If we can change any of those things with the work we are doing, it will be helpful. In addition this project has highlighted a developing area for meaningful emergency medicine research. All these scores mentioned above are ripe for prospective validation studies. There are no prospective studies looking at DOAC use for this indication. A trainee research network with wide coverage could help hugely here, and deliver a big win for portfolio research with simple observational work.
I will be banging this drum a lot over the coming months to years, I expect. Thanks for indulging me, and if there are any queries, questions or comments I would be delighted to hear them.