I work part time for the General Medical Council as an education associate and it’s a really interesting job. I know that the GMC has had a lot of criticism from the profession over recent political issues and also in its role as the regulator of doctors, but it has many other roles as well. I am involved with the education team, trying to ensure that training works in the UK and supporting trainers and trainees in getting the best out of their education. For example, I was part of the team that put together this report on Emergency Medicine Education (worth a read IMHO)1. The team I work with are superb, some of the brightest and most enthusiastic people I’ve met in healthcare, all of whom have a real desire to make training better (Ed – enough flattery, get on with it).
So, one of the things the GMC2 does is collect outcome data about education. This is important, as a lot of so-called educational interventions rely on process: who is doing what, to whom and how often (!). All too often that means we are looking at process rather than at what actually works. Defining what really ‘works’ is in itself tricky. Ideally we would want patient-related outcomes, but that’s very challenging as there are so many confounding factors. However, we can look at trainee-related outcomes, which means we can track data like ARCP outcome3 data and exam success in the MRCEM and FRCEM4 examinations.
This post explores some of that data, but you will probably want to have a look yourself so click on this link to play around with the reporting tool http://www.gmc-uk.org/education/14105.asp2
What about exam results in Emergency Medicine?
UK training is organised into regions (aka deaneries or HEE offices5). These roughly divide training into different geographical areas linked to health economies. If we compare the combined MRCEM and FRCEM pass rates between the regions, it does look as though there are differences. See the graph below. It’s good news for those in the South West, but not so good in Kent, Surrey and Sussex (KSS6), and there are real concerns about those trainees who are not in formal training programmes. The confidence intervals are pretty tight for this latter group (which means there are a lot of them – 699 to be precise).
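As an aside on why a big group gives a tight confidence interval: the width of a confidence interval for a proportion shrinks roughly with the square root of the group size. The sketch below uses the Wilson score interval with made-up pass counts (only the group size of 699 comes from the data above), so treat it as an illustration rather than a recalculation of the GMC figures.

```python
import math

def wilson_ci(successes, n, z=1.96):
    """95% Wilson score confidence interval for a proportion."""
    p = successes / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return centre - half, centre + half

# Hypothetical 50% pass rates: same proportion, very different group sizes.
print(wilson_ci(25, 50))    # small group: wide interval
print(wilson_ci(350, 699))  # 699 candidates: a much tighter interval
```

The same proportion with 699 candidates gives an interval roughly a third of the width of the 50-candidate one, which is why the out-of-programme group’s estimate is so precise.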
What about ARCP outcomes across the country?
ARCP3 is the Annual Review of Competence Progression that all UK trainees look forward to (Ed – not really). It’s designed to make sure that trainees are progressing against the curriculum, learning what they should be learning and meeting the targets set out by the college and the GMC. If training is going well they get an outcome 1 and a hearty handshake. If progress is slower than expected, if data is missing, or if additional training time is needed, then they can get an adverse outcome (2, 3, 4 or 5). The graph below shows how adverse ARCP outcomes vary across the country. As with exam results, there is quite a variation.
A quick glance suggests that these findings reflect the exam results too: there appears to be a reasonable link between exam pass rates and ARCP outcomes. I’ve plotted these together in the graph below. Be cautious with this (I did it, after all), as the data groupings are slightly different – for example, HEE NW was previously reported separately as Mersey and NW (I’ve collated these for the graph).
So there appears to be a weak association. It could be random, though it has some face validity: if your trainees are getting great experience, supervision and training then they should be able to get through their ARCPs, and hopefully that would lead to better exam results. Better supervision = better training = better ARCP = better exam results. It’s a big maybe though. I accept that it’s only one year’s worth of data, but I am struck by the variability between areas across the UK.
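If you want to put a number on an association like this yourself from the GMC reporting tool, a Pearson correlation between regional pass rates and adverse ARCP rates is a simple starting point. The regional figures below are purely illustrative, not the real data:

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sd_x = math.sqrt(sum((x - mean_x) ** 2 for x in xs))
    sd_y = math.sqrt(sum((y - mean_y) ** 2 for y in ys))
    return cov / (sd_x * sd_y)

# Illustrative regional figures (percent), NOT the real GMC data:
pass_rates = [75, 70, 65, 60, 55]      # exam pass rate by region
adverse_rates = [10, 12, 15, 18, 22]   # adverse ARCP outcome rate by region

r = pearson_r(pass_rates, adverse_rates)
print(r)  # negative here: higher pass rates go with fewer adverse ARCPs
```

With only one year of data and a handful of regions, any coefficient like this should be treated as exploratory rather than conclusive.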
What about ethnicity and exam results?
There has been significant concern about the influence of ethnicity on career progression. The Royal College of General Practitioners was challenged about pass rates related to ethnicity, and much work has since gone into examining this. What about EM though? Are there differences in pass rates depending on ethnicity? The answer is yes. White candidates are more likely to pass than BME candidates. Now, in the past I have heard it said that this is because there is an association between being BME and training or working abroad: if you trained and worked in a different country, then taking an exam which inevitably includes social, cultural and clinical norms will be more challenging. Regardless of issues of language, I’d struggle to practice medicine in any other country, so it’s easy to dismiss the finding on this basis without further thought. But wait – the data does not show this.
The GMC data reveals something more worrying. The data below breaks down ethnicity pass rates based on where candidates received their primary medical degree. The findings are clear: even if you trained in the UK, the chances of passing the exam are higher if you are white, and it’s the same for our European (EEA) and International (IMG) colleagues. BME is the GMC term for doctors who identify themselves as black & minority ethnic (the term is not welcomed by everyone, but it’s what they use).
What about ethnicity and ARCP outcomes?
Well, it’s the same really. The charts below look at trainees badged as EM (so not all doctors working in EM, just those in training posts ST3–ST6). Irrespective of where you trained, being white means that you are less likely to have an adverse outcome at ARCP. Part of this may be related to exam failure (as that can be a cause of an adverse ARCP), but these are again stark findings. Even if you take exam failure out of the mix, the difference is still there. This implies that it’s not just about exams; it’s also about how trainees are perceived to perform in their clinical placements. The graph below shows the percentage of adverse ARCP outcomes, again divided into those who qualified in the UK, those who qualified in the European Economic Area (EEA) and those who qualified internationally (IMG). Note that the difference between UK-trained doctors who are white vs. BME is 20%, with clear statistical significance.
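To see why a gap of this size reads as ‘clear statistical significance’, a standard two-proportion z-test is enough. The counts below are invented for illustration (the GMC tool has the real ones), but they show how quickly a 20-point gap between two reasonably sized groups overwhelms chance:

```python
import math

def two_proportion_z(x1, n1, x2, n2):
    """Two-sided z-test for a difference between two proportions."""
    p1, p2 = x1 / n1, x2 / n2
    p_pool = (x1 + x2) / (n1 + n2)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # two-sided p-value from the standard normal distribution
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Illustrative counts only, NOT the GMC's actual figures:
# adverse ARCP rates of 10% vs 30% in two groups of 200 trainees each.
z, p = two_proportion_z(20, 200, 60, 200)
print(z, p)  # a |z| of ~5 gives a p-value well below 0.001
```

A gap that size in groups that size is not something chance plausibly produces, which is the point the GMC data makes.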
It’s worth noting that this difference in ethnicity-associated outcomes is not peculiar to emergency medicine. The EM data reflects the picture across all specialities (you can pull the data yourself online).
It’s uncomfortable reading. As I’ve discussed this with colleagues it’s been interesting to observe how they try to explain it at first, but as they explore the data there comes a realisation that this needs serious thought. This data should make us all stop and think. Clearly there is variability by training location, and I think the association between exam results and ARCP is worthy of further exploration. The ethnicity data is disturbing. We all like to think that race does not affect us, but it’s difficult to see how we can explain this data without accepting that it does. I’m an examiner myself, I’m an educational supervisor and I’m involved in education at all levels. This is unsettling information, as perhaps I am part of this too, and thus it’s a trigger for self-reflection.
We’ve talked about implicit associations on St.Emlyn’s before, exploring how they can affect our clinical and personal judgement. If you’ve not read the post on the effect of race on our practice then you should7; it explains how the world around us can affect how we treat patients (with really important things like analgesia in the ED). There are a number of online tests you can take that may help you explore the implicit associations we all have.
So I will use this information from the GMC to reflect.
What of the College and the GMC though? They lead our exam processes and monitor our outcomes, and I’m sure that they will be as interested in these results as I am. They, like me and you, need to think hard about what this data means, why it exists and what, if anything, we can do to understand and mitigate it. I’m also inclined to wonder whether other characteristics such as gender, orientation, body habitus or accent make a difference too. We don’t know, of course, but it raises the question of what affects perceived performance in the exam or in the workplace. This is not a ‘college job’ though. The data tells me that this is unlikely to be something structural about the assessment process or the exam design; it would be far too simple to explain it away in that way. The findings are consistent across many specialities and thus probably reflect something about educators, doctors and society that is really challenging. So don’t blame the college – it needs more thought than that.
It’s also a call to our international colleagues to ask the question: do your colleges (or equivalents) know this data, publish this data and use it? Perhaps we should all ask our exam boards: what do you know, and what are you doing about it?
It’s also worth mentioning that the GMC have done a good job in publishing this information in the public domain.
For all of you who sat FRCEM this week, or who are planning on taking exams in the next few years, don’t panic. You will have done your best, and we wish you the very best of luck in passing. Whether you pass or fail though, have a look at the data from the GMC and join the conversation: ask the difficult questions and talk to friends, colleagues and trainees about it.