If you have heard the St Emlyn's team talk about innovation then you will know that we have a healthy scepticism for technology. Technology is great of course, and we love gadgets as much as the next video-laryngoscopist ( 😉 ), but we also love EVIDENCE. The history of medicine is awash with great ideas that we "know" will just work. For example, it's obvious that a drug which reduces the number of ventricular dysrhythmias will save lives, except it isn't (Ed – Remember Flecainide). Devices are a little different to drugs of course, and the potential harms to the patient may be less, but they have other harms. New equipment costs money, and every time we spend money on one thing we can't spend it on another (opportunity cost, in economic speak). As EBM practitioners we should be just as cautious about technology as we are about drugs.
In my experience many clinicians turn off the EBM filter when faced with something that takes batteries.
This week I was pleased to see a paper on the use of devices for vein localisation. These devices are designed to help clinicians find veins and they are pretty nifty. They certainly look incredibly cool, and when I've used them they really do show up the location of veins. Venepuncture is important and can be really challenging in the sick, the young and the large.
If you’ve not seen one of these devices have a look at this great video from the Australian blood donor service.
‘What’s not to like?’ you may ask. These devices clearly show where veins are, but like a Ferrari without an engine it may look great — does it actually work? As an emergency physician my outcome is successful cannulation. The question is whether these devices improve cannulation rates — and not cannulation rates in healthy blood donors or in an elective surgery setting; I want to know how they perform in the ED.
This month we have an RCT from Canada that hopes to answer this question.
What question was asked?
This is a trial in kids, which is good, as they are a challenging group to cannulate. Good trials have simple questions, and this trial simply aims to determine first-time success rates for children requiring IV access in the ED. It compared USS vs near infrared imaging vs a standard approach.
How were patients randomised?
Patients were randomised by computer, using a system stratified by age (<3 and >3 years), as there are clear differences between an older child and a chubby 2-year-old. They also used block randomisation, a method which balances out the numbers allocated to each group over the period of the trial. This is an added complication for the research team, but it avoids the possibility that patients in one group are disproportionately allocated early or late in the recruitment period. That's probably important if users get more skilled over time.
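For readers less familiar with the method, stratified permuted-block randomisation is easy to illustrate. This is a minimal sketch, not the trial's actual allocation software: the arm names and block size are my assumptions, and the age cut-off simply mirrors the strata described above.

```python
import random

def block_randomise(n_blocks, arms=("ultrasound", "near_infrared", "standard"), seed=None):
    """Generate an allocation sequence using permuted blocks.

    Each block contains every arm exactly once (block size = number of
    arms), so group sizes stay balanced throughout recruitment rather
    than only at the end.
    """
    rng = random.Random(seed)
    sequence = []
    for _ in range(n_blocks):
        block = list(arms)   # one slot per arm
        rng.shuffle(block)   # random order within the block
        sequence.extend(block)
    return sequence

# Stratification: keep a separate sequence per age stratum, so balance
# holds within each stratum as well as overall.
strata = {
    "under_3": block_randomise(4, seed=1),
    "3_and_over": block_randomise(4, seed=2),
}
```

Because every completed block contains each arm once, the group sizes can never drift apart by more than a partial block — which is exactly why the technique matters if operator skill changes over the recruitment period.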
Were the groups similar at baseline?
Yes. There were small differences which do not appear to reach statistical significance, but the numbers are small here and a lack of statistical significance may not tell us the full story.
Were the groups treated equally?
Apart from the use of the technology, yes.
Was everyone accounted for and analysed by intention to treat?
Yes. It's a small, single-centre trial so no-one was lost.
Were measurements objective and blinded?
The main outcome was first-pass success, which is pretty objective and clinically important. It's not really possible to blind participants in this trial so we cannot expect it, though the lack of blinding can still have an effect. Participants who believe in the technology may prepare more carefully and try for longer (and vice versa).
Should we believe the results?
The authors' analysis is that there is no difference, but I'm not so sure. There are a few things we need to consider.
1. Nurses received just 3 hours of training before starting the trial. Is that enough? I think it takes time to adapt to new technology and we will all have learning curves. It is possible that there was simply not enough time here for users to really become familiar with the equipment.
2. The numbers are small. In the group of most interest to me (chubby kids less than 3) just 71 children were included split into 3 groups. That’s far too small to draw any meaningful conclusions.
3. The difference in overall success rates ranged from 65.9% to 74.7%, which, if it were true, would be a clinically important difference. However, the numbers are so small that we cannot know whether this is a random effect (p=0.3). The power calculation was based on detecting an improvement of 15%, and without going into stats nerd mode, that seems like a big difference — and it carries an assumption that the technology could only improve things. Assuming that only an improvement can occur allows you to do a one-sided power calculation and so get a number with fewer participants. I can't tell whether they really did this, but that's how it reads to me. It's worth noting that 2177 patients were screened for inclusion to get 418 in the trial. Ultimately I think this trial is underpowered to answer the clinically relevant question.
4. In the sub-analysis of the under-3 group the authors found a statistically significant difference in success rates, but I would be cautious. Under 3 is still a rather diverse group, and the numbers are so small that patient-related bias could have a real effect here. The finding that near infrared performed the least well in this group is interesting but not conclusive IMHO.
5. It’s a single centre study so tricky to generalise to other health care settings.
6. Lastly, this was just one device, and there is a range of USS and IR devices out there. What works (or doesn't) for one does not necessarily hold for all.
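The one-sided vs two-sided issue in point 3 above is worth making concrete, because it directly drives the sample size. Here is a minimal sketch using the standard normal-approximation formula for comparing two proportions; the 65% baseline and 15-percentage-point improvement come from the figures discussed above, while alpha = 0.05 and 80% power are my assumptions, not figures from the paper.

```python
from math import ceil
from statistics import NormalDist

def n_per_group(p1, p2, alpha=0.05, power=0.80, two_sided=True):
    """Approximate sample size per arm for detecting a difference
    between two proportions (normal approximation, unpooled variance)."""
    z = NormalDist()
    # A one-sided test splits no alpha into the "technology makes
    # things worse" tail, so its critical value is smaller.
    z_alpha = z.inv_cdf(1 - alpha / 2) if two_sided else z.inv_cdf(1 - alpha)
    z_beta = z.inv_cdf(power)
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return ceil((z_alpha + z_beta) ** 2 * variance / (p1 - p2) ** 2)

# 65% baseline success vs a hoped-for 80% (a 15-point improvement)
print(n_per_group(0.65, 0.80, two_sided=True))   # two-sided: larger n
print(n_per_group(0.65, 0.80, two_sided=False))  # one-sided: smaller n
```

Running this shows the one-sided assumption shaves roughly a fifth off the required sample size per arm — which is exactly why a trial built on it can end up underpowered for the two-sided, clinically honest question.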
This trial is both the best sort of trial and the most dangerous sort of trial, in that it confirms what I thought before I read it. I've used the kit in clinical practice and so had already formed a view, which is confirmed here. My impression was that it helped ID veins but did not improve my cannulation success. Whilst I would like to use this as definitive evidence, it's not. It is interesting reading, and it's great to see technology subjected to an RCT, but this paper is not definitive.