JC: Finger on the Pulse?

If you’re an avid follower of FOAM, you’ll have seen many assertions that manual pulse checks by healthcare providers during cardiac arrest are pretty unreliable at best.

The most commonly advocated alternatives are a rise in EtCO2 suggesting perfusion (quick and easy but not particularly sensitive), arterial line waveform assessment (an inevitable intervention once you do achieve ROSC and useful for guiding more nuanced adrenaline dosing, but not always easy to establish during the arrest itself) and various uses of point-of-care ultrasound (POCUS). It’s the use of POCUS to determine the presence of carotid pulsation (rather than cardiac motion) that has been investigated in this paper, published in Resuscitation and written by some of my colleagues out here in Sydney.

What is this paper about and what did they do?

The first thing to say is that this was a diagnostic study performed in theatres, in patients undergoing routine cardiopulmonary bypass – so not a study undertaken in ED or in particularly sick patients. This doesn’t invalidate the findings, but it’s important to consider the context under which the results were obtained when we think about generalising them to our regular day-to-day practice.

The patients included were over 18 and undergoing routine bypass surgery, the majority (74%) for coronary artery bypass grafting. Interestingly, 70% were male and the median age was 64. Patients were excluded if they had had previous surgery to the great vessels (including the carotid arteries), had known carotid artery stenosis, had an intra-aortic balloon pump in situ, or were considered high risk for the surgery or anaesthetic.

A researcher held a Butterfly iQ ultrasound probe in transverse orientation over the middle of the left common carotid artery, ready to record short clips at pre-defined haemodynamic trigger points.

The patients all had arterial lines inserted as part of their routine care, which gave accurate real-time blood pressure measurements for comparison. Once cardioplegia had been instilled and the aorta cross-clamped, POCUS footage from the carotid artery was recorded as simulated “cardiac arrest”. The researchers had already recorded 10-second clips at varying blood pressures for comparison (“low”, defined as SBP <70mmHg; “medium”, defined as SBP 70-90mmHg; “high”, defined as SBP >90mmHg).
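
As an aside, those trigger points boil down to a simple threshold rule on the arterial line systolic pressure. Here’s a minimal Python sketch of that categorisation – note that the handling of the exact boundary values (70 and 90mmHg) is my assumption, as the paper just gives the ranges:

```python
# Hypothetical sketch of the study's blood pressure trigger categories.
# Boundary handling (whether exactly 70 or 90 mmHg counts as "medium")
# is an assumption on my part - the paper only gives the three ranges.
def bp_category(sbp_mmhg: float) -> str:
    """Map a systolic blood pressure to the study's recording category."""
    if sbp_mmhg < 70:
        return "low"      # SBP < 70 mmHg
    if sbp_mmhg <= 90:
        return "medium"   # SBP 70-90 mmHg
    return "high"         # SBP > 90 mmHg

assert bp_category(65) == "low"
assert bp_category(80) == "medium"
assert bp_category(110) == "high"
```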

In total, then, each of the 29 patients in the study had four clips recorded: one each at “high”, “medium” and “low” invasive blood pressure readings and one during cardiac standstill. These were then interpreted by critical care physicians (from anaesthesia, ED and ICU), who were asked to decide whether or not a pulse was present in each case.

There were some very sensible attempts to reduce bias: the videos were standardised in size and length, and the selection was randomised so that each participant saw 24 videos in total, with the same proportion of each BP category and of “no pulse” clips, and with no video repeated.

The sample size calculation (24 patients’ data, 24 physician interpreters) was based on detecting a difference in specificity from 0.5 to 0.9 when the prevalence of “no pulse” was 25% (as in, one clip in each set of four was taken during cardioplegia and aortic cross-clamping, so one in four videos viewed showed the carotid artery during simulated “cardiac arrest”). The authors argue for specificity of “no pulse” as their primary outcome because a “false negative would result in continuation of CPR.” This seems fair and logical.

What did they find?

23 full sets of videos were used, after 6 patients’ footage was excluded because not all four clips were available. There were 46 physicians involved in the interpretation phase, each viewing 24 clips in total.

Overall, the sensitivity of detection of a pulse was 0.91 (95% confidence interval 0.89-0.93). Unsurprisingly, clinicians were most accurate at determining the presence of a pulse in the “high blood pressure” group (sensitivity 0.96, 95% confidence interval 0.93-0.98) and least accurate in the “low blood pressure” group (sensitivity 0.83, 95% confidence interval 0.78-0.87), but let’s remember that the primary outcome here is the specificity of “no pulse”. This was 0.9 (95% confidence interval 0.86-0.93).

The authors provide a 2×2 table and a table of test characteristics, which might help us to untangle some of the knots in this string of double negatives! A “true positive” was when there was a pulse present and it was correctly identified by the physician. I’ve made my own 2×2 table below to help clarify this.

| Physician judgement | Arterial waveform: pulse present | Arterial waveform: pulse absent | Total |
|---|---|---|---|
| Pulse present | 752 (TRUE POSITIVE) | 27 (FALSE POSITIVE) | 779 |
| Pulse absent | 75 (FALSE NEGATIVE) | 247 (TRUE NEGATIVE) | 322 |
| Total | 827 | 274 | 1101 |

This gives us the following (there’s a quick sketch of the arithmetic after the list):

  • Sensitivity (probability that the pulse will be identified when there is a pulse present on the arterial waveform, a true positive rate): 90.93% (95% CI 88.76-92.8%)
  • Specificity (probability that no pulse will be identified when there is no pulse present on the arterial waveform, a true negative rate): 90.15% (95% CI 85.99-93.41%)
  • Positive predictive value (probability that there is a pulse when the physician thinks there is one): 96.53%
  • Negative predictive value (probability that there is no pulse when the physician thinks there is not one): 76.71%
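
If you’d like to check that arithmetic yourself, here’s a quick Python sketch using the counts from the 2×2 table above. The Wilson score interval is my choice for the confidence intervals – the paper doesn’t spell out which method the authors used – so expect values close to, but not exactly matching, the reported figures:

```python
from math import sqrt

# Counts from the 2x2 table: physician judgement vs arterial waveform.
TP, FP, FN, TN = 752, 27, 75, 247

def wilson_ci(successes: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """95% Wilson score interval for a proportion (my choice of CI method)."""
    p = successes / n
    centre = p + z**2 / (2 * n)
    margin = z * sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    denom = 1 + z**2 / n
    return (centre - margin) / denom, (centre + margin) / denom

sensitivity = TP / (TP + FN)                   # 752/827  -> 90.93%
specificity = TN / (TN + FP)                   # 247/274  -> 90.15%
ppv         = TP / (TP + FP)                   # 752/779  -> 96.53%
npv         = TN / (TN + FN)                   # 247/322  -> 76.71%
accuracy    = (TP + TN) / (TP + FP + FN + TN)  # 999/1101 -> 90.74%

for name, value, ci in [
    ("Sensitivity", sensitivity, wilson_ci(TP, TP + FN)),
    ("Specificity", specificity, wilson_ci(TN, TN + FP)),
]:
    print(f"{name} {value:.2%} (95% CI {ci[0]:.2%} to {ci[1]:.2%})")
print(f"PPV {ppv:.2%}, NPV {npv:.2%}, accuracy {accuracy:.2%}")
```

The accuracy line also reproduces the overall 90.74% figure quoted below.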

What does this mean?

Given that previous studies have found both prolonged time taken and low accuracy when lay people, first responders and healthcare professionals try to determine the presence or absence of a pulse, this study seems to suggest that ultrasound assessment of the carotid artery is more accurate (overall accuracy 90.74%, 95% confidence interval 88.87-92.38%, to be precise).

There are some caveats, though. This was a single centre study, with a limited group of mostly male, mostly healthy(ish) patients. It’s worth noting that the authors found some specific videos caused particular consternation but that their removal from the analysis did not significantly affect the results.

These were pretty good quality ultrasound images, obtained without CPR in progress, by two experienced sonographers. As a self-confessed “rubbish at ultrasound” practitioner, would I be able to obtain good quality images during an actual cardiac arrest scenario? And would it still require 10 full seconds of footage to determine whether a pulse was present or absent?

The authors also note that they did not reach their intended sample size due to the arrival of SARS-CoV-2 in Australia and the subsequent distancing requirements – they had intended a sample of 24 patients but only used video images obtained from 23.

Overall, this is an interesting proof-of-concept type paper. The authors have shown that while uncertainty persists surrounding what constitutes pulselessness, the use of ultrasound is probably better than a finger. I’d be interested to know how many of us were doing this in practice, without the evidence… and whether it’s possible to replicate these results in real-life cardiac arrest scenarios.

vb

Natalie May

References

  1. Competence of health professionals to check the carotid pulse https://pubmed.ncbi.nlm.nih.gov/9715777/
  2. Assessing the validity of two-dimensional carotid ultrasound to detect the presence and absence of a pulse https://www.resuscitationjournal.com/article/S0300-9572(20)30500-1/fulltext
  3. Skills of lay people in checking the carotid pulse https://pubmed.ncbi.nlm.nih.gov/9259056/
  4. Checking the carotid pulse check: diagnostic accuracy of first responders in patients with and without a pulse https://pubmed.ncbi.nlm.nih.gov/9025126/

Cite this article as: Natalie May, "JC: Finger on the Pulse?," in St.Emlyn's, October 26, 2020, https://www.stemlynsblog.org/jc-finger-on-the-pulse/.

Thanks so much for following. Viva la #FOAMed
