2017 update. The following blog was written some time ago, when work around social media impact was in its infancy. This blog critiques the SMi and, on reflection, is pretty harsh about it. Looking back in 2017, I realise that we now view the SMi as a waypoint in the academic interpretation of #FOAMed and #SoMe. The authors of this and other work are to be commended for developing techniques and ideas around emergency medical education technologies. Sure, it was not perfect, but it was an effective bridge to work which continues today. That's good: it's great to debate papers and it's fantastic to see academic endeavours develop. This is science, and it's a good thing. So, if you don't already, make sure you check out Brent Thoma's twitter feed and search for his, and his colleagues', work. You'll certainly learn a lot. You might not agree with it, and that's fine too, but so long as you enjoy the debate and the discussion we'll all be better learners and educators.
Why and what is the Social Media Index?
Social media as a tool for medical education has exploded in recent years. Social media related to medicine is commonly referred to as Free Open Access Medical Education (#FOAMed). In emergency medicine and critical care (EMCC) the number of blogs and podcasts now exceeds 300 (LITFL) with the result that many consumers struggle to manage the sheer volume of information available. The feeling that learners are trying to drink from a firehose (ref Scott and Nat) has, in a rather circular fashion, led to further blogs and podcasts on the problems of volume overload and of the quality of the information provided.
These difficulties have led to the search for a way of assessing the impact and quality of social media. To date the best-known method is that derived by Thoma et al in the Western Journal of Emergency Medicine. The Social Media Index (SMi) is derived from publicly accessible data to determine the impact, and thus the inferred importance, of social media sites.
The SMi has been proposed as a measure of social media importance and has been described as a measure of impact. This paper questions this assertion and challenges the rationale for the underlying data used to inform the index.
How is the SMi calculated?
The SMi derives its value from three elements: the ALEXA rank, the number of Twitter followers and the number of Facebook likes. The detailed derivation and weightings of the score are described in a paper by Thoma et al in 2015 and it's well worth a read. The latest rankings can be viewed online at http://www.aliem.com/social-media-index/
The exact calculation is complex, as it uses weightings to adjust the relative perceived importance of the three elements. At the time of writing it is calculated as shown below.

At the time of writing, the latest available rankings (for November 2015) are shown below.
The SMi is therefore a derived value based on the relative weightings of three elements, ALEXA rankings, Twitter followers and Facebook likes.
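To make the idea of a weighted, derived index concrete, here is a minimal sketch in Python. This is not the published SMi formula; the weights, log-scaling and normalisation below are illustrative assumptions only. Consult the Thoma et al paper for the actual derivation.

```python
import math

def toy_social_media_index(alexa_rank, twitter_followers, facebook_likes,
                           weights=(0.85, 0.10, 0.05)):
    """Illustrative weighted index (NOT the published SMi formula).

    Lower Alexa ranks mean more traffic, so the rank is log-inverted;
    follower and like counts are log-scaled to damp outliers. The
    weights here are made-up placeholders for illustration.
    """
    w_alexa, w_twitter, w_facebook = weights
    alexa_score = 1.0 / math.log10(alexa_rank + 10)   # more traffic -> higher score
    twitter_score = math.log10(twitter_followers + 1)
    facebook_score = math.log10(facebook_likes + 1)
    return (w_alexa * alexa_score
            + w_twitter * twitter_score
            + w_facebook * facebook_score)

# A site with heavier traffic (lower Alexa rank) scores higher,
# all else being equal.
busy = toy_social_media_index(50_000, 20_000, 5_000)
quiet = toy_social_media_index(2_000_000, 20_000, 5_000)
print(busy > quiet)  # True
```

The sketch also makes the blog's central worry visible: each input (rank, followers, likes) is a raw public count, so anything that inflates a count inflates the index.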
Are these elements flawed?
To understand our concerns on the use of SMi it is important to understand how the underpinning elements are open to misinterpretation and abuse.
Alexa rankings are publicly available and give a numerical score relating to website traffic. At first glance this is an appealing measure, as a determinant of web traffic should be associated with impact and importance.
However, the ALEXA rankings are derived only from those computers that have the ALEXA toolbar installed. This is not present as a default in web browsers and needs to be either pre-installed or installed by a user. It is considered to be malware or spyware by some. ALEXA is not a measure of web traffic in general; it is a measure of web traffic amongst ALEXA users, which favours those areas of the world that have it installed, notably North American users. The discrepancy can be illustrated by comparing the distribution of users on the St.Emlyn's blog as shown on the ALEXA site with the known data (author provided) for the three-month period up to December 21st 2015. Alexa states that 33% of our traffic comes from the US. This compares to the data available from our website servers, which show that approximately 23.3% of visits originate in the US. (I can pull this data down as required – suffice to say that the UK is our highest-ranking country for visits – certainly not the US, which sits 2nd or 3rd.)
This suggests that ALEXA rankings do not reflect true web traffic and that they have a US bias.
The SMi uses a single site to determine an ALEXA rank. However, some #FOAMed producers have multiple sites within the same brand. For example the St.Emlyn’s team produces both a blog and a podcast. These are considered as separate entities within SMi despite being intricately linked and intertwined in terms of content, brand and editorial control.
Twitter followers are arguably a measure of impact and sharing, reflecting engagement with blogs and podcasts. However, the use of the twitter follower score is misleading.
- The SMi uses the number of Twitter followers from an author associated with a blog or podcast. This may reflect activity for a site with a single author, but not for the increasingly large number of sites with multiple authorship. As an example the St.Emlyn’s site has multiple authors and a separate twitter account for the site itself. The SMi has chosen the author with the largest number of followers but this is a rather arbitrary decision and a single individual from a larger team cannot represent the impact of the brand.
- All twitter followers are valued equally within the SMi. It does not matter if they are clinicians, friends, family or strangers with nothing to do with medicine. For example SC is interested in cycling and is followed by the Edinburgh Bike Company (@edinbikemanc). This has nothing to do with medicine and does not reflect the impact or quality of the St.Emlyn's site. Similarly, some of the authors of the original SMi paper are followed by people and organisations as diverse as motoring journalists https://twitter.com/Julietmc and escort agencies (https://twitter.com/InezOsier). Twitter followers cannot therefore accurately reflect medical impact and quality.
- As the type of twitter follower is immaterial to the SMi it is possible to manipulate the score through the purchasing of real or computer-generated Twitter followers. Numerous examples of how to do this are available on the web, for example http://buyfollowersguide.com/.
- Adverts for sites can be placed on Twitter with the intention of increasing the number of followers.
Facebook likes are used in the SMi as a determinant of both engagement and reach. Likes also suggest a degree of positive engagement with content and thus give an illusion of quality.
Facebook likes can be bought online in a similar manner to Twitter followers through websites offering large numbers of likes for relatively small amounts of money.
Facebook itself permits users to do this in an indirect way through Facebook advertising. At St.Emlyn's, in advance of a debate planned for the 2015 SMACC conference and the Teaching Course in New York 2015, we advertised the site first in Saudi Arabia and then globally. We did this carefully, as buying followers is arguably dodgy, but advertising? Is that any different from promoting posts on twitter? Arguably not, and for a very modest sum you can see the impact below.
The data shows two spikes in likes directly as a result of advertising, and an excess of followers in Saudi Arabia and Jeddah which does not reflect our website traffic in any way.
The SMi is an interesting idea, and there are some attractions for the FOAMed world in having a stable and accessible indicator of impact. The concerns in this paper outline why the metrics used to calculate the SMi are open to geographical bias, irrelevance and manipulation.
In the 2015 paper Thoma et al assert that the reputation of sites would be tarnished if they ‘gamed’ the SMi. Describing such measures as underhanded, artificial and likely to sabotage the professional credibility and reputation of the website owners is a significant challenge in an era where social media advertising is widely promoted and mainstream. Such behaviours range from the overt purchasing of followers, through legitimate advertising (as in this study), speaking at conferences, handing out flyers and sending tweets, to offering incentives to listen or inviting people to sign up to updates via email or other social media outputs. Drawing a line at what is, and what is not, reasonable in an ever-changing social media landscape may well be pointless and ineffective. Similarly, the lack of any scrutiny regarding the type, location or source of the twitter followers or facebook likes seriously questions the credibility of the score.
The search for an impact factor for social media is arguably a reflection of the perceived need for, and tradition of, impact factors in print publishing. Impact factors in print journals inform research assessment exercises and influence personal and professional progress. Some people use them as markers of quality, although recent scandals in some journals with very high impact factors are a salutary lesson in cautious interpretation.
The original paper is worth a read, and in fairness the authors have compared the SMi against journal impact factors and did find a correlation. That correlation is interesting but may be a result of the similar concerns listed above.
The very question of a need for a quantitative measure of social media impact and quality remains unresolved. Blogs and podcasts are largely secondary sources of information, more in keeping with journal commentaries, editorials and journalism than with traditional print journals focusing on primary research. The rationale for a reductionist quantitative measure of what is largely a qualitative narrative has not, in our opinion, been resolved. For some the potential of a quantifiable score is attractive. In general terms a score might guide novices to quality sites, and as authors of sites a score might be used for personal standing and even promotion, but is that really what we want from #FOAMed? We did not get into #FOAMed for self-advancement and promotion (honest), and I don't think that many of us who blog and podcast did either. A ‘score/index’ might well be used for academic promotion (or excellence awards in the UK), and although it is tempting to use it for those purposes it's arguably not in the spirit of altruistic medical education (Ed – hypocritically we must admit that we might do that ourselves).
Another criticism would be that the SMi is difficult for an individual to calculate: although the weightings of each component are reasonably easy to see and the data is freely available, the calculation and formula are difficult to interpret.
Perhaps the principal strength and weakness of the SMi is its reliance on publicly accessible data. As #FOAMed creators and publishers we have great insight into data on a range of metrics such as visitors, page visits, time on site, bounce rates, geographical distribution, revisits etc. These can be used to assess the impact of the site, individual posts and users. Such data could be used to compare sites, but it is not available publicly, and it is unlikely to become so without the consent of the site owners.
The SMi has been designed to assess the impact and quality of medical blogs, podcasts and other forms of social media. We have demonstrated that the three components of the SMi are open to geographical bias, manipulation and irrelevance. The question remains as to whether we really need a quantitative index for blogs and podcasts.
Quality may be tricky to measure and define: Perhaps we should trust people to know it when they see it.
We love the team who put this together and applaud their efforts. The purpose of this critique is to challenge the assertions of the model. This is how science works. Brent and the team have put together a model which is in evolution, and much more work has followed and is in progress. This blog post is designed to challenge the assertions in the model such that it can develop further. Brent and the team are continuing their work with a number of further projects in their attempts to define quality in #FOAMed. We will of course look forward to the next stage of the research. Special thanks to Brent for commenting on a draft of this blog, and for improving it. He really is a jolly good sport.
In other news @Brent_Thoma is a jolly good sport.
More on this soon, but honestly – top chap.
— Simon Carley (@EMManchester) January 31, 2016