The Social Media Index (SMi): Can & should we measure #FOAMed?

Estimated reading time: 11 minutes

2017 update: The following blog was written some time ago, when work around social media impact was in its infancy. This blog critiques the SMi and, on reflection, is pretty harsh about it. In 2017 I've looked back at this and realise that we now view the SMi as a waypoint in the academic interpretation of #FOAMed and #SoMe. The authors of this and other work are to be commended on developing techniques and ideas around emergency medical education technologies. Sure, it was not perfect, but it was an effective bridge to work which continues today. That's good: it's great to debate papers and it's fantastic to see academic endeavours develop. This is science and it's a good thing. So, if you don't already, make sure you check out Brent Thoma's Twitter feed and search for his, and his colleagues', work. You'll certainly learn a lot. You might not agree with it and that's fine too, but so long as you enjoy the debate and the discussion we'll all be better learners and educators.


Why and what is the Social Media Index?

Social media as a tool for medical education has exploded in recent years. Social media related to medicine is commonly referred to as Free Open Access Medical Education (#FOAMed). In emergency medicine and critical care (EMCC) the number of blogs and podcasts now exceeds 300 (LITFL) with the result that many consumers struggle to manage the sheer volume of information available. The feeling that learners are trying to drink from a firehose (ref Scott and Nat) has, in a rather circular fashion, led to further blogs and podcasts on the problems of volume overload and of the quality of the information provided.

These difficulties have led to the search for a way of assessing the impact and quality of social media. To date, the best-known method is that derived by Thoma et al in the Western Journal of Emergency Medicine. The Social Media Index (SMi) is derived from publicly accessible data to determine the impact, and thus the inferred importance, of social media sites.


The SMi has been proposed as a measure of social media importance and has been described as a measure of impact. This paper questions this assertion and challenges the rationale for the underlying data used to inform the index.


How is the SMi calculated?

The SMi derives its value from three elements: the ALEXA rank, the number of Twitter followers and the number of Facebook likes. The detailed derivation and weightings of the score are described in a paper by Thoma et al in 2015 and it's well worth a read. The latest rankings can be viewed online.

The exact calculation is complex, as it uses weightings to adjust the relative perceived importance of the three elements; the full formula is set out in the 2015 paper.


At the time of writing, the latest available rankings (for November 2015) are shown below.

[Table: SMi rankings, November 2015]

The SMi is therefore a derived value based on the relative weightings of three elements: ALEXA rankings, Twitter followers and Facebook likes.
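To make the mechanics concrete, here is a minimal sketch of how a weighted composite index of this kind can be computed. The weights, log scaling and rank inversion below are illustrative assumptions only, not the published SMi formula (the real derivation is in the Thoma et al 2015 paper).

```python
# Hypothetical sketch of a weighted composite index in the spirit of the SMi.
# The weights and normalisation here are ILLUSTRATIVE ONLY, not the published formula.
import math

def composite_index(alexa_rank, twitter_followers, facebook_likes,
                    w_alexa=0.5, w_twitter=0.25, w_facebook=0.25):
    """Combine three public metrics into a single score.

    ALEXA rank is inverted (a lower rank means more traffic), and the two
    follower counts are log-scaled so that a handful of very large accounts
    do not dominate the index.
    """
    alexa_component = 1.0 / math.log10(alexa_rank + 10)   # lower rank -> higher score
    twitter_component = math.log10(twitter_followers + 1)
    facebook_component = math.log10(facebook_likes + 1)
    return (w_alexa * alexa_component
            + w_twitter * twitter_component
            + w_facebook * facebook_component)

# A site with heavy traffic and a modest social following:
print(round(composite_index(50_000, 12_000, 3_000), 3))
```

Even this toy version shows the structural point made below: the score rewards raw counts, regardless of where the traffic or followers come from.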

Are these elements flawed?

To understand our concerns on the use of SMi it is important to understand how the underpinning elements are open to misinterpretation and abuse.


ALEXA rankings are publicly available and give a numerical score relating to website traffic. At first glance this is an appealing measure, as a determinant of web traffic should be associated with impact and importance.

However, the ALEXA rankings are derived only from those computers that have the ALEXA toolbar installed. This is not present as a default in web browsers and needs to be either pre-installed or installed by a user; it is considered malware or spyware by some. ALEXA is not a measure of web traffic in general: it is a measure of web traffic amongst ALEXA users, which favours those areas of the world where it is widely installed, notably North America. The discrepancy can be illustrated by comparing the distribution of users on the St.Emlyn's blog as shown on the ALEXA site with the known data (author provided) for the 3-month period up to December 21st 2015. ALEXA states that 33% of our traffic comes from the US. This compares to the data available from our website servers, which show that approximately 23.3% of visits originate in the US. (I can pull this data down as required; suffice to say that the UK is our highest-ranking country for visits, certainly not the US, which sits 2nd or 3rd.)

According to ALEXA 33% of St.Emlyn’s visitors are in the US (the largest overall proportion by far). Our data says it’s less than 25%.


This suggests that ALEXA rankings do not reflect true web traffic and that they have a US bias.

The SMi uses a single site to determine an ALEXA rank. However, some #FOAMed producers have multiple sites within the same brand. For example the St.Emlyn’s team produces both a blog and a podcast. These are considered as separate entities within SMi despite being intricately linked and intertwined in terms of content, brand and editorial control.


Twitter followers are arguably a measure of impact and sharing, reflecting engagement with blogs and podcasts. However, the use of the Twitter follower score is misleading.

  1. The SMi uses the number of Twitter followers of an author associated with a blog or podcast. This may reflect activity for a site with a single author, but not for the increasingly large number of sites with multiple authors. As an example, the St.Emlyn's site has multiple authors and a separate Twitter account for the site itself. The SMi has chosen the author with the largest number of followers, but this is a rather arbitrary decision and a single individual from a larger team cannot represent the impact of the brand.
  2. All Twitter followers are valued equally within the SMi. It does not matter whether they are clinicians, friends, family or strangers with nothing to do with medicine. For example, SC is interested in cycling and is followed by the Edinburgh Bike Company (@edinbikemanc). This has nothing to do with medicine and does not reflect the impact or quality of the St.Emlyn's site. Similarly, some of the authors of the original SMi paper are followed by people and organisations as diverse as motoring journalists and escort agencies. Twitter followers cannot therefore accurately reflect medical impact and quality.
  3. As the type of Twitter follower is immaterial to the SMi, it is possible to manipulate the score through the purchase of real or computer-generated Twitter followers. Numerous examples of how to do this are available on the web.
  4. Adverts for sites can be placed on Twitter with the intention of increasing the number of followers.
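The manipulation risk in points 3 and 4 follows directly from the arithmetic: any follower count that feeds a score can be inflated, whoever the followers are. A hypothetical sketch (the log scaling here is an assumption for illustration, not the SMi's actual formula):

```python
# Illustrative only: how purchased followers move a score component,
# regardless of whether the new followers have anything to do with medicine.
import math

def follower_component(followers):
    """Log-scaled follower contribution to a hypothetical composite score."""
    return math.log10(followers + 1)

organic = follower_component(5_000)            # genuine audience
inflated = follower_component(5_000 + 10_000)  # plus 10,000 purchased accounts
print(f"component rises from {organic:.2f} to {inflated:.2f}")
```

The score cannot distinguish the two sites; it only sees the larger number.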


Facebook likes are used in the SMi as a determinant of both engagement and reach. Likes also suggest a degree of positive engagement with content and thus give an illusion of quality.

Facebook likes can be bought online in a similar manner to Twitter followers through websites offering large numbers of likes for relatively small amounts of money.

Facebook itself permits users to do this in an indirect way through Facebook advertising. At St.Emlyn's, in advance of a debate planned for the 2015 SMACC conference and the Teaching Course in New York 2015, we advertised the site, firstly in Saudi Arabia and then globally. We did this carefully, as buying followers is arguably dodgy; but advertising? Is that any different from promoting posts on Twitter? Arguably not, and for a very modest sum you can see the impact below.

The data shows two spikes in likes directly as a result of advertising, and an excess of followers in Saudi Arabia and Jeddah which does not reflect our website traffic in any way.


[Figures: Facebook likes over time, showing two advertising-driven spikes]

Facebook likes do not therefore reflect website impact, and are easily manipulated to increase the number of followers and thus the SMi.


The SMi is an interesting idea, and there are some attractions for the #FOAMed world in having a stable and accessible indicator of impact. The concerns in this paper outline why the metrics used to calculate the SMi are open to geographical bias, irrelevance and manipulation.

In the 2015 paper, Thoma et al assert that the reputation of sites would be tarnished if they 'gamed' the SMi. Describing such measures as underhanded, artificial and likely to sabotage the professional credibility and reputation of the website owners is a significant challenge in an era where social media advertising is widely promoted and mainstream. Such behaviours range from the overt purchasing of followers, through legitimate advertising (as in this study), to speaking at conferences, handing out flyers, sending tweets, offering incentives to listen, or inviting people to sign up to updates via email or other social media outputs. Drawing a line between what is and what is not reasonable in an ever-changing social media landscape may well be pointless and ineffective. Similarly, the lack of any scrutiny regarding the type, location or source of the Twitter followers or Facebook likes seriously questions the credibility of the score.

The search for an impact factor for social media is arguably a reflection of the perceived need for, and tradition of, impact factors in print publishing. Impact factors in print journals inform research assessment exercises and influence personal and professional progress. Some people use them as markers of quality, although recent scandals in some journals with very high impact factors are a salutary lesson in cautious interpretation.

The original paper is worth a read and, in fairness, the authors did compare the SMi with journal impact factors and found a correlation. That correlation is interesting, but may be a result of the concerns listed above.

The very question of the need for a quantitative measure of social media impact and quality remains unresolved. Blogs and podcasts are largely secondary sources of information, more in keeping with journal commentaries, editorials and journalism than with traditional print journals focusing on primary research. The rationale for a reductionist quantitative measure of what is largely a qualitative narrative has not, in our opinion, been established. For some, the potential of a quantifiable score is attractive. In general terms a score might guide novices to quality sites, and as authors of sites a score might be used for personal standing and even promotion, but is that really what we want from #FOAMed? We did not get into #FOAMed for self-advancement and promotion (honest), and I don't think that many of us who blog and podcast did either. A 'score/index' might well be used for academic promotion (or excellence awards in the UK) and, although it is tempting to use it for those purposes, it's arguably not in the spirit of altruistic medical education (Ed – hypocritically we must admit that we might do that ourselves).

Another criticism is that the SMi is difficult for an individual to calculate: although the weightings of each component are reasonably easy to see and the data are freely available, the calculation and formula are difficult to interpret.

Perhaps the principal strength and weakness of the SMi is its reliance on publicly accessible data. As #FOAMed creators and publishers we have great insight into data on a range of metrics such as visitors, page visits, time on site, bounce rates, geographical distribution and revisits. These can be used to assess the impact of the site, individual posts and users. Such data could be used to compare sites, but it is not available publicly, and it is unlikely to become so without the consent of the site owners, which is improbable.


The SMi has been designed to assess the impact and quality of medical blogs, podcasts and other forms of social media. We have demonstrated that the three components of the SMi are open to geographical bias, manipulation and irrelevance. The question remains as to whether we really need a quantitative index for blogs and podcasts.

Quality may be tricky to measure and define: perhaps we should trust people to know it when they see it.



We love the team who put this together and applaud their efforts. The purpose of this critique is to challenge the assertions of the model so that it can develop further; this is how science works. Brent and the team have put together a model which is in evolution, and much more work has followed and is in progress, with a number of further projects in their attempts to define quality in #FOAMed. We will of course look forward to the next stage of the research. Special thanks to Brent for commenting on a draft of this blog, and for improving it. He really is a jolly good sport.



Cite this article as: Simon Carley, "The Social Media Index (SMi): Can & should we measure #FOAMed?," in St.Emlyn's, February 1, 2016.

19 thoughts on “The Social Media Index (SMi): Can & should we measure #FOAMed?”

  1. Victoria Brazil (@SocraticEM)

    Thanks Simon and St Emlyns for getting us thinking, again.

    My (less well informed) thoughts are that looking for markers of quality in #FOAMed is a good idea, but that it's unlikely a single number (or index) will capture that.

    Markers of quality should be used in social media critical appraisal processes – a skill we all need to improve, and which our trainees need for both ‘traditional’ and social media.

    The Canadian/US group who propose the SMi have contributed enormously to the discussion as to what constitutes SoMe 'markers of quality'. Unfortunately popularity and volume are easy to count, and so attractive in any measurement process, but not necessarily any guide to quality. It's probably a good idea that debate continues, and it may take some time to mature and build consensus. As with traditional critical appraisal quality markers, it's unlikely everyone will ever agree completely on what constitutes quality.

    Rank ordering sources of #FOAMed probably doesn't get me much further in that, but I am pragmatic enough to understand the motivation to recognise those who contribute excellent #FOAMed.

    I don’t think we can quite trust people to “know [quality] when they see it”. I’d really like a framework for critical review of #FOAMed resources (and commentary pieces), building on those already existing for traditional literature.

    Thanks again for getting me thinking…


    1. I still hark back to the ideas around a 'Guild of #FOAMed', which I think may have been an idea from Minh. My recollection is that it was a declaration of what a blog did (quality markers) rather than how much it did.

      To be fair, that is exactly what Brent and colleagues have pursued through a further Delphi project, and I'm really looking forward to seeing how that develops and whether it works. Of course, the difficulty and strength of producing a list of 'good' criteria is that editors can embark on a box-ticking exercise to meet the criteria but not necessarily the quality.

      As an educationalist I am sure that you recognise this as similar to the concept of 'assessment drives learning': there are always risks that 'criteria will drive activity'. Of course, that may not always be a bad thing…



  2. Thanks, Simon. A great overview and critique of the SMi. The SMi is a terrific way to start an important debate about measuring quality of #FOAMed but I totally agree with your points about its limitations. The high impact factor journals may have lots of followers because they are high impact factor journals, hence the correlation between impact factor and followers doesn’t confirm the validity of the SMi.

    I also agree with Victoria (as usual) in that a single measure of quality will be very difficult, if not impossible, to find. What matters most to me is that we make #FOAMed entertaining and accessible. At St. Emlyn’s we want to help people enjoy their jobs, to feel part of a community. We don’t want education to feel like hard work – we want it to be convenient and accessible – something that you actually want to engage with in your spare time. We want to make you think – so that you’re mentally prepared for the challenging jobs that we do every day, and so that you (the reader) might go on to take medicine to new frontiers.

    In the week that Terry Wogan died, it also reminds me of one of his greatest quotes. Someone once asked him about his radio show, “How many listeners do you have?” Terry Wogan might have been expected to quote an answer in the millions, as was undoubtedly the case. Rather, he said, “Just one”.

    That’s what made Terry Wogan an inspirational presenter. And that’s what makes St. Emlyn’s worthwhile for us: the fact that YOU chose to come here. That’s the only marker of quality I need. #FOAMed is about all of this. It’s not an academic career, it’s not about another set of targets or measures or hierarchies. It’s about a community, a conversation and it’s about making what we do fun.


  3. Great stuff everyone – including the comments. Loved this quote. “The very question of a need for a quantitative measure of social media impact and quality remains unresolved. Blogs and podcasts are largely secondary sources of information with more in keeping with journal commentaries, editorials and journalism than traditional print journals focusing on primary research.”

    If we all keep our heads and try not to get too carried away with it all then FOAMed can stay this really useful and just as importantly really fun place to do medicine.

  4. Great debate! Ultimately it's the individual user who decides what they believe is quality for them. As an analogy, is one piece of music 'better' than another because it is more popular? Is one medical lecture 'better' than another because the room is packed with people? FOAMed content, like a piece of music, can be appreciated in many different ways, and people can take away vastly different interpretations and misinterpretations.

    Second, the SMi does not fit with the egalitarian, communal nature of FOAMed that makes it such a wonderful thing. The potential competition between FOAMed producers to improve their SMi score feels distinctly wrong to me. My drive comes from a desire to help our wonderful specialty flourish, be part of a beautiful worldwide community, improve myself as a caring human and become a better doctor. Competition to improve the EM Cases SMi goes against all of this.

    One of the goals of the SMi was to help establish FOAMed as credible; however, I believe we're past that already. The proof is in the pudding. I think we'd be better off studying specific quality indicators (which Brent has been studying with great success, in my opinion), while recognizing that FOAMed resources, like pieces of music, are, in many ways at least, unquantifiable.

    BTW, I too think Brent is a jolly good chap. And incredibly smart!

    1. Hi Anton,

      We very much consider you a founder of #FOAMed and so your words and thoughts are very valuable and I must agree with you. I still hold to the egalitarian communal nature of #FOAMed and do hope that we manage to carry that forward irrespective of scores and league tables.

      Thanks for your comments.


      NB if you haven’t checked out emergency medicine cases you are really missing out. Go there now!

  5. Pingback: LITFL Review 218 | LITFL: Life in the Fast Lane Medical Blog

  6. Pingback: LITFL Review 218 – FOAM Ed

  7. Pingback: Waves of FOAM: Does the discussion of quality and impact suggest #FOAMed's maturation? - CanadiEM

  8. Thanks for an interesting ongoing debate which probably will not be resolved any time soon. Just a few thoughts from someone on the consumer end of FOAMed resources.
    First of all, the obvious limitations of the SMi are hard to argue with. Maybe not everyone will be able to identify quality the moment they see it. However, I see FOAMed as an additional, spare-time resource alongside traditional educational media, and rather than telling people via an index which resources provide potentially important, high-quality content, I feel it is much more critical to address how we can make sure consumers are empowered to distinguish quality from nonsense. As with any other form of education, FOAMed represents a variety of information that requires critical thinking and reflection rather than simple consumption.

    Measuring mostly quantity as a substitute for impact is unlikely to tell us much about quality.
    My personal FOAMed journey did not start that long ago. In all honesty, I feel it's unlikely that a list of high-impact resources would have helped a lot in terms of guidance. The chance of a novice not coming across any of the 'high impact' resources is close to zero, in my opinion; not long into the journey, one will stumble upon those resources anyway. The beauty of FOAMed to me is that it has something for everyone on almost everything. Content on a small, high-quality blog with a niche topic is no less important than that on sites providing a variety of topics by multiple authors. It just depends on what you are looking for.

    What makes people read and engage? I doubt it's solely about impact and quality. We don't go and read all the articles published in the high-impact journals. At least I don't.
    In my opinion it's about whether you deem the topic relevant. Is this blog, podcast or whatever it is relevant to my education, does it relate to my clinical practice, or, even more personally, does it match a particular interest in a certain topic I might hold? Maybe someone will surprise me by finding a way to rate FOAMed content so that, on a personal level, I can tell within seconds where I need to go for high-quality information on whatever I individually consider relevant. Until then I prefer keeping FOAMed at what it primarily is: a fun place to spend some of my spare time that might teach me something while engaging with a bunch of people who share the same passion and interests.

    1. Well said. I think I agree with everything you say, in particular the fact that it is impossible not to stumble across the big sites.

      If anything, a way of finding smaller, niche, high-quality sites might be useful. However, the SMi just looks at how big a site is, which is not the same thing at all.

      Bigger does not mean better


  9. Pingback: Calm Seas - A Response to Waves of FOAM

  10. Very glad to see the 2017 addition to this. While the arguments presented against the SMi were reasonable, I have always felt uncomfortable about the tone of this debate. It always seemed to me that Brent and colleagues were just trying to research and evaluate the area. I can't think of any clinician who wouldn't want to see evidence examined (isn't that an ethos of FOAM?). It seemed to me the decision that a (#FOAMed) SMi couldn't work was made even before we really had one. It may well be that it isn't seen as credible, feasible or useful, but the scientific method would at least suggest we rigorously examine and review this before reaching that conclusion.
    All the best – D

Thanks so much for following. Viva la #FOAMed
