Last year I was contacted by Peter Brindley, Leon Byker and Brent Thoma to collaborate on a paper for the Journal of the Intensive Care Society. The aim was to help people understand a little about quality in assessing #FOAMed, and to explain some of the past and current metrics that have been used to try to quantify quality. The premise is that this matters not because it matters to me or you, but because such assessments are made by others, and that can have an impact on many aspects of our work. Whether that’s right or wrong is clearly open to debate, but we cannot escape the fact that they exist. You can read the paper here, and the abstract is shown below.
I was delighted to be asked, as I’m well known as a bit of a sceptic when it comes to trying to measure #FOAMed. We have published on this in the past and, in retrospect, have been a bit tough on such attempts, as I’m not sure that we always understood the motivation. It is arguably inevitable that medical education consumers will make comparisons, that those new to #FOAMed will want to know what ‘quality’ looks like, and that there are certain characteristics of #FOAMed sites that are associated with more reliable content. Such ideas led to the development of scores such as the Social Media Index (SMi), the ALiEM AIR score, and the METRIQ and rMETRIQ scores. You can read more about these in the paper.
More recently we have seen large search engines such as Google take a more proactive approach to rating medical information through upgrades to the search protocol such as EAT (Expertise, Authoritativeness and Trustworthiness). Another big change is due which, I understand, will focus on the speed and stability of websites. That is something webmasters can improve, but at the expense of time and money, which sadly moves us away from the initial aims of #FOAMed, or at least makes them more difficult to achieve.
I got together with the lead author, Peter Brindley, to record a podcast on the main points, talking through the current measures and a few of the more ridiculous ones, and revealing along the way that we are both science Kardashians (Peter=7, Simon=39!). We hope you enjoy it and learn a little about the complex issues that surround the question of whether a particular #FOAMed blog/podcast/video can be quantified as ‘good’. My personal opinion is that we are not there yet, and in many ways it’s not a destination I would choose to pursue. However, I do recognise that the rest of the world, notably research organisations, employers, academic promotion panels etc., may wish for some quantitative measure. If they do (and they do) then understanding what’s out there already is worth a few minutes of your time.
Assessing on-line medical education resources: A primer for acute care medical professionals and others https://journals.sagepub.com/doi/pdf/10.1177/1751143721999949
Simon Carley, “The Social Media Index (SMi): Can & should we measure #FOAMed?,” in St.Emlyn’s, February 1, 2016, https://www.stemlynsblog.org/the-social-media-index-smi-is-it-flawed/.
Cameron, P, Carley, S, Weingart, S, et al. CJEM debate series: #SocialMedia – Social media has created emergency medicine celebrities who now influence practice more than published evidence. Can J Emerg Med 2017; 19: 471–474. https://pubmed.ncbi.nlm.nih.gov/29145923/
Eysenbach, G. Can tweets predict citations? Metrics of social impact based on Twitter and correlation with traditional metrics of scientific impact. J Med Internet Res 2011; 13: e123. https://www.jmir.org/2011/4/e123/
Cadogan, M, Thoma, B, Chan, TM, et al. Free open access meducation (FOAM): the rise of emergency medicine and critical care blogs and podcasts (2002-2013). Emerg Med J 2014; 31: e76–e77. https://pubmed.ncbi.nlm.nih.gov/24554447/
Thoma, B, Chan, T, Benitez, J, et al. Educational scholarship in the digital age: a scoping review and analysis of scholarly products. The Winnower 2014; 1:e141827.77297. https://thewinnower.com/papers/educational-scholarship-in-the-digital-age-a-scoping-review-and-analysis-of-scholarly-products
Purdy, E, Thoma, B, Bednarczyk, J, et al. The use of free online educational resources by Canadian emergency medicine residents and program directors. Can J Emerg Med 2015; 17: 101–106. https://pubmed.ncbi.nlm.nih.gov/25927253/
Nickson, CP, Cadogan, MD. Free Open Access Medical education (FOAM) for the emergency physician. Emerg Med Australas 2014; 26: 76–83. https://pubmed.ncbi.nlm.nih.gov/24495067/
Cameron, P. Pundit-based medicine. Emerg Physicians Int 2016.
The METRIQ Study, https://metriqstudy.org/ (accessed 1 August 2020).
Ting, DK, Boreskie, P, Luckett-Gatopoulos, S, et al. Quality appraisal and assurance techniques for free open access medical education (FOAM) resources: a rapid review. Semin Nephrol 2020; 40: 309–319. https://www.sciencedirect.com/science/article/abs/pii/S0270929520300528
Sanders, J, Steeg, J, Chan, T, et al. The social media index: measuring the impact of emergency medicine and critical care websites. WestJEM 2015; 16: 242–249. https://pubmed.ncbi.nlm.nih.gov/25834664/
ALiEM AIR Series, www.aliem.com/category/clinical/approved-instructional-resources-air-series/ (accessed 1 August 2020).
Paterson, QS, Thoma, B, Milne, WK, et al. A systematic review and qualitative analysis to determine quality indicators for health professions education blogs and podcasts. J Grad Med Educ 2015; 7: 549–554. https://pubmed.ncbi.nlm.nih.gov/26692965/
Thoma, B, Chan, TM, Paterson, QS, et al. Emergency medicine and critical care blogs and podcasts: establishing an international consensus on quality. Ann Emerg Med 2015; 66: 396–402. https://pubmed.ncbi.nlm.nih.gov/25840846/
Lin, M, Thoma, B, Trueger, NS, et al. Quality indicators for blogs and podcasts used in medical education: modified Delphi consensus recommendations by an international cohort of health professions educators. Postgrad Med J 2015; 91: 546–550. https://pubmed.ncbi.nlm.nih.gov/26275428/
ALiEM AIR Series grading tool, www.aliem.com/wp-content/uploads/Air-Series-Grading-Tool.pdf (accessed 25 April 2019).
Thoma, B, Sebok-Syer, SS, Colmers-Gray, I, et al. Quality evaluation scores are no more reliable than Gestalt in evaluating the quality of emergency medicine blogs: a METRIQ study. Teach Learn Med 2018; 30: 294–302. https://pubmed.ncbi.nlm.nih.gov/29381099/
Chan, T, Thoma, B, Krishnan, K, et al. Derivation of two critical appraisal scores for trainees to evaluate online educational resources: a METRIQ study. WestJEM 2016; 17: 574–584. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5017842/
Thoma, B, Sebok-Syer, SS, Krishnan, K, et al. Individual Gestalt is unreliable for the evaluation of quality in medical education blogs: a METRIQ study. Ann Emerg Med 2017; 70: 394–401. https://pubmed.ncbi.nlm.nih.gov/28262317/
Colmers-Gray, IN, Krishnan, K, Chan, TM, et al. The revised METRIQ score: a quality evaluation tool for online educational resources. AEM Educ Train 2019; 3: 387–394. https://pubmed.ncbi.nlm.nih.gov/31637356/
Thoma, B, Chan, TM, Kapur, P, et al. The social media index as an indicator of quality for emergency medicine blogs: a METRIQ study. Ann Emerg Med 2018; 72: 696–702. https://pubmed.ncbi.nlm.nih.gov/29980461/
Life in the Fast Lane: EAT protocol, https://litfl.com/google-medic-update-eat-protocol/ (accessed 1 August 2020).
Hall, N. The Kardashian index: a measure of discrepant social media profile for scientists. Genome Biol 2014; 15. https://genomebiology.biomedcentral.com/articles/10.1186/s13059-014-0424-0