We all love the idea of a magic bullet for sepsis. Despite having had our fingers burnt here before (like this time, and this time and that time…), the search continues in earnest. The latest challenger? A vitamin concoction demonstrating an absolute risk reduction for mortality of 31.9%. A risk reduction of just under a third? A number needed to treat of 4? Goodness – no wonder the media is excited…
Hold the phone… what?
A study recently made available online in CHEST1 suggests that a tailored protocol of high-dose intravenous vitamin C, thiamine and steroids, delivered to patients with severe sepsis or septic shock at a single centre in America, reduced their mortality figures by a huge degree. Unusually for a small observational cohort study from a single centre, this has really got people talking. Opinions appear somewhat divided about the place of EBM in critical care. Do we always need an RCT to prove benefit before we lose equipoise regarding a treatment? If something seems so good and so safe, can’t we just go off face validity, bench research and case data? This is so divisive that many have yet to make up their minds about which side of the fence they sit on. Some are pretty clear, however. Great minds like Rob2 and Josh3 have published thoughtful commentary already. It is an interesting debate.
This I need to read….
Yes, you should. As always, we would suggest that you appraise this study properly yourself before immersing yourself in the stratospheric social media reaction. We need to interrogate the science before we listen to opinion (however professorial), and this evidence should be placed in context alongside what you already know and what else is known on the topic. Click on the abstract below and read the full paper together with any online reviews. Most importantly, make your own mind up after reading the evidence.
It appears that from January 2016, the authors began to routinely use a combination therapy of intravenous vitamin C (1.5 g 6-hourly), thiamine (200 mg 12-hourly) and steroids (50 mg 6-hourly) after fantastic anecdotal results in 3 patients with fulminant sepsis who were ‘almost certainly destined to die from overwhelming septic shock’. They present some data from small previous trials to support their rationale, but do not clearly state an aim for this project in the background.
What were the methods?
This is a single-centre retrospective cohort study. After 6 months of using this cocktail, electronic health records were interrogated for the period prior to the adoption of this novel strategy (June to December 2015) and for the 6 months after (January to July 2016). Patients included were those coded with a primary diagnosis of septic shock or severe sepsis, with a serum procalcitonin of >2 ng/mL on ICU admission. Patients with limitations of care were excluded from the analysis, as were pregnant patients and those under 18 years of age. Diagnoses of severe sepsis or septic shock were based on the 1992 ACCP definitions, to include sepsis-induced organ dysfunction of any kind (Ed – so not the latest ones4 then?). It appears that the authors, rather than any independent clinicians or adjudicators, extracted the data, coded the outcomes and derived the statistics. As a retrospective note review this is to be expected, although it raises some questions about inclusion and attribution of outcomes.
Who were the patients?
After application of the above inclusion/exclusion criteria, the authors note that <50% of patients in each group had septic shock. The remainder presumably had severe sepsis and a raised PCT, although there is no record of how their organ dysfunction was subtyped. Most patients had at least 1 comorbidity.
And the results?
47 patients were included in each of the before and after groups. It is difficult to tell whether the authors found 47 records of patients who had received the intervention and then selected 47 from the preceding period, or whether an equal number of patients were coded in each period by chance. Just over a quarter of patients in each group had a proven bacteraemia. The overall hospital mortality rate is presented as 8.5% in the intervention arm compared to 40.4% in the control group, giving an absolute risk reduction of 31.9%. The authors go further to report that none of the patients in the intervention arm died of sepsis, and give a variety of reasons why their care was withdrawn or how they died of other causes. They do not clearly present the timing of, or reasons for, hospital mortality in the control group patients.
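As a quick sanity check, the headline figures can be reproduced arithmetically. This is a minimal sketch working back from the reported percentages (40.4% and 8.5% of 47 patients correspond to 19 and 4 deaths respectively; the death counts are inferred here, not quoted directly from the paper):

```python
# Reported: 47 patients per arm; hospital mortality 40.4% vs 8.5%.
# Working back from those percentages gives 19 vs 4 deaths.
control_mortality = 19 / 47       # ≈ 40.4%
treatment_mortality = 4 / 47      # ≈ 8.5%

arr = control_mortality - treatment_mortality   # absolute risk reduction
nnt = 1 / arr                                   # number needed to treat

print(f"ARR = {arr:.1%}, NNT = {nnt:.1f}")      # ARR = 31.9%, NNT = 3.1
```

The NNT of 3.1 is conventionally rounded up, hence the widely quoted “NNT of 4”.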
I know right. That kind of treatment effect is very rarely seen with a novel intervention for an established disease process. Therefore we perhaps need to dig into the methods and rigour of this study in more detail.
Tell me about the limitations.
Well, unblinded before-and-after studies have a lot. And this one was uncontrolled too, meaning there are no data from a neighbouring institution, or from earlier periods, showing outcomes either side of the cutpoint in the absence of the intervention.
All data in this project were collected retrospectively by the authors, who clearly have a wish to prove the effectiveness of their proposed strategy. The authors were also clinically involved in delivering the strategy during the intervention period, which opens up questions of conscious and subconscious bias in decision making. Who did they make treatment limitations on throughout the intervention period? What records were screened but not included? Did they try a bit harder after they decided this protocol was life-saving? Who was delivering bedside care before and after the intervention? The Hawthorne effect is also a common issue in studies like this, although as Josh mentions in his post it is unlikely to be responsible for such a large type 1 error. Timing is an issue as well – there are seasonal differences between the groups studied, which may have influenced sepsis subgroups and even staff performance. The authors mention this in their limitations.
We could go on. There are so many potential problems with before and after studies that lengthy articles have been published on the topic. Have a look at this one5 if you want to go into depth. That is not to say that these results don’t warrant close scrutiny.
Didn’t they do propensity score matching and other jiggery pokery to account for all these problems?
They did. Propensity score matching6 aims to reduce the bias due to confounding that can occur when simple dichotomous outcomes (death or no death) are compared between observational groups. However, this statistical technique is not watertight, and naturally is more effective in larger samples where the covariate spread is wider. In addition, if there is inherent bias in the study (such that patients are included/excluded based on clinical decision making by unblinded authors, or clinical management is altered by the study hypothesis) then PSM is unlikely to tease this out in such a small sample.
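To make the matching step concrete, here is a toy sketch (standard library Python only, with entirely invented numbers) of 1:1 nearest-neighbour matching on a propensity score. The score function is a hand-written stand-in for a fitted logistic model of treatment on baseline covariates. The point is simply that matching can only balance what was *measured* – it remains blind to unmeasured confounders and to any bias introduced before the data reached the statistician:

```python
import math
import random

random.seed(1)

def propensity(severity):
    # Stand-in for a fitted logistic model: sicker patients are more
    # likely to receive the intervention (confounding by indication).
    return 1 / (1 + math.exp(-0.3 * (severity - 22)))

# Simulated cohort: baseline severity score, treatment driven by severity
cohort = [{"severity": (s := random.gauss(20, 5)),
           "treated": random.random() < propensity(s)}
          for _ in range(300)]
for p in cohort:
    p["score"] = propensity(p["severity"])

treated = [p for p in cohort if p["treated"]]
controls = [p for p in cohort if not p["treated"]]

# Greedy 1:1 nearest-neighbour matching on the score, without replacement
pool = controls[:]
matched = []
for t in treated:
    best = min(pool, key=lambda c: abs(c["score"] - t["score"]))
    matched.append(best)
    pool.remove(best)

def mean(key, group):
    return sum(p[key] for p in group) / len(group)

before = mean("severity", treated) - mean("severity", controls)
after = mean("severity", treated) - mean("severity", matched)
print(f"Severity imbalance: {before:.2f} before matching, {after:.2f} after")
```

In this simulation the measured imbalance shrinks after matching, but note that in a sample of ~47 per arm the matched pool is tiny, which is exactly why PSM struggles in small studies with author-driven inclusion decisions.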
There are also a lot of other questions raised in the fine detail of this paper. We really don’t know from these results why, or when, 19 of the 47 patients in the control group died. And let us remember, only 22 of these patients had septic shock. Therefore the control group mortality for septic shock (if everyone who died, died of septic shock, as the authors subtly imply throughout the paper) is 86%. This seems surprisingly high.
The authors later refer in the text to 9 patients in this group dying of ‘refractory septic shock’, so perhaps the other 10 patients died of something else? If so, the bench arguments for vitamin C reducing mortality from septic shock start to wobble. It also becomes increasingly complex to compare subgroups when the mortality data lack transparency.
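The two scenarios above bracket the plausible range for control-group septic shock mortality. A minimal sketch of the arithmetic, using the counts given in the paper:

```python
# Control group: 47 patients, 19 deaths, of whom only 22 had septic shock.
shock_patients = 22
total_deaths = 19
refractory_shock_deaths = 9   # the only deaths the text attributes to septic shock

# If every control-group death were a septic-shock death:
upper = total_deaths / shock_patients              # implausibly high
# If only the explicitly attributed deaths were:
lower = refractory_shock_deaths / shock_patients

print(f"Septic shock mortality: between {lower:.0%} and {upper:.0%}")
```

An 86% septic shock mortality would be far above contemporaneous figures for comparable units, whereas 41% is much closer to expectation – and the paper does not tell us which end of that range is real.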
There was significantly more renal replacement therapy in the control group compared to the intervention. The implication from the paper is that vitamin C was protective here, but there is little data to support this assertion. Could this be reflective of the mortality difference, simply that the control group patients were sicker originally? Or does vitamin C protect against deterioration in renal function?
Also, the authors wanted to measure vitamin C levels in the intervention group patients to demonstrate internal validity. Fewer than half had levels measured, although the mean was significantly below the lower limit of the normal range.
I see. But this treatment effect is sooooo good. Haven’t we got ‘nothing to lose’ by just giving it and seeing what happens…?
Well, here I agree with Simon Finfer. See an excerpt of his tweetstorm below. Even if a therapy is likely to be non-toxic, there is still a lot to lose from mandating treatment without clear scientific evidence of benefit. Without studying this drug at high doses in large groups of sick patients, we cannot be entirely sure of potential associated harm. What will almost certainly happen is that we will dedicate nursing time and medical resource to the intervention, and we could become distracted from supportive care and interrogation of physiology. Not to mention the associated costs, drug reactions and use of IV lines for non-essential treatments. This is NOT to say that the intervention should not be studied further. Of course it should.
So then. Am I supposed to buy a load of vitamin C for my unit tomorrow? And is everyone getting steroids again?
Well, that depends on you. This study is essentially a case series of <100 patients, managed by a single-centre team led by highly decorated intensivists. If you are happy to change your practice based on face validity, bench rationale, level 4 evidence and experienced opinion, then you could certainly consider adopting this intervention. Personally, that is not good enough for me. I am planning to await further studies to see if anyone can replicate this effect. I don’t think this necessarily needs to be an RCT with thousands of patients, but I would want to see thematic support in multiple observational projects across multiple sites and different working groups before I felt obliged to change my practice. And of course, each published article would need critical appraisal, just like this one.
What I don’t think you can say is that everyone MUST adopt this protocol. This is weak evidence and there are a lot of unanswered questions. There is little direct supporting clinical evidence in this patient group, and there is certainly no building picture of clear clinical benefit as yet.
Very thought-provoking and hypothesis-generating for me. But that’s all for now. What this trial has certainly reminded us of is the split between pragmatic clinicians and those who rely solely on high-quality evidence. I am sure there is a middle ground. Where you sit is for you to decide.
Check out this review over at The SGEM