Appraising the literature
How do you decide which papers to include in your review?
Your search has probably given you a long list of papers that may or may not be relevant to your three-part question. You need to decide which papers you are going to include in the eventual BET. We choose which papers to include using the following criteria:
- Relevance to the three-part question
- The quality of the papers
- The range of different paper types
It is usually possible to decide whether a paper is relevant to the three-part question by examining its title and abstract. This will tell you which papers you then need to get hold of and critically appraise. You can be quite ruthless at this stage by concentrating hard on which papers have the potential to answer your three-part question. Remember, you are not attempting a wide-ranging review of a broad subject; you are asking a specific, focused question.
Once you have found your potentially relevant papers (the numbers should be well down by now) you need to get hold of them and critically appraise them. Getting hold of your papers can be difficult and you may need help from your local librarian. You do, however, need the full papers: you simply cannot assess the quality of a paper from the abstracts you found during your search.
There are published systems that can help grade papers according to the way in which a trial is conducted. An example is the grading system shown below, designed by Bob Phillips et al at the Centre for Evidence Based Medicine, which relates the research methodology to the type of question being asked. Such an approach has merit but cannot be used in isolation: even studies that appear to be of a high level must still be individually critically appraised.
Oxford Centre for Evidence-based Medicine Levels of Evidence (May 2001, published with permission):

| Level of evidence | Therapy / Prevention, Aetiology / Harm | Prognosis | Diagnosis | Differential diagnosis / symptom prevalence study | Economic and decision analyses |
| --- | --- | --- | --- | --- | --- |
| 1a | SR (with homogeneity*) of RCTs | SR (with homogeneity*) of inception cohort studies; CDR† validated in different populations | SR (with homogeneity*) of Level 1 diagnostic studies; CDR† with 1b studies from different clinical centres | SR (with homogeneity*) of prospective cohort studies | SR (with homogeneity*) of Level 1 economic studies |
| 1b | Individual RCT (with narrow Confidence Interval‡) | Individual inception cohort study with >80% follow-up; CDR† validated in a single population | Validating** cohort study with good††† reference standards; or CDR† tested within one clinical centre | Prospective cohort study with good follow-up**** | Analysis based on clinically sensible costs or alternatives; systematic review(s) of the evidence; and including multi-way sensitivity analyses |
| 1c | All or none§ | All or none case-series | Absolute SpPins and SnNouts†† | All or none case-series | Absolute better-value or worse-value analyses†††† |
| 2a | SR (with homogeneity*) of cohort studies | SR (with homogeneity*) of either retrospective cohort studies or untreated control groups in RCTs | SR (with homogeneity*) of Level >2 diagnostic studies | SR (with homogeneity*) of 2b and better studies | SR (with homogeneity*) of Level >2 economic studies |
| 2b | Individual cohort study (including low quality RCT; e.g., <80% follow-up) | Retrospective cohort study or follow-up of untreated control patients in an RCT; derivation of CDR† or validated on split-sample§§§ only | Exploratory** cohort study with good††† reference standards; CDR† after derivation, or validated only on split-sample§§§ or databases | Retrospective cohort study, or poor follow-up | Analysis based on clinically sensible costs or alternatives; limited review(s) of the evidence, or single studies; and including multi-way sensitivity analyses |
| 2c | “Outcomes” Research; Ecological studies | “Outcomes” Research | | Ecological studies | Audit or outcomes research |
| 3a | SR (with homogeneity*) of case-control studies | | SR (with homogeneity*) of 3b and better studies | SR (with homogeneity*) of 3b and better studies | SR (with homogeneity*) of 3b and better studies |
| 3b | Individual Case-Control Study | | Non-consecutive study; or without consistently applied reference standards | Non-consecutive cohort study, or very limited population | Analysis based on limited alternatives or costs, poor quality estimates of data, but including sensitivity analyses incorporating clinically sensible variations |
| 4 | Case-series (and poor quality cohort and case-control studies§§) | Case-series (and poor quality prognostic cohort studies***) | Case-control study, poor or non-independent reference standard | Case-series or superseded reference standards | Analysis with no sensitivity analysis |
| 5 | Expert opinion without explicit critical appraisal, or based on physiology, bench research or “first principles” | Expert opinion without explicit critical appraisal, or based on physiology, bench research or “first principles” | Expert opinion without explicit critical appraisal, or based on physiology, bench research or “first principles” | Expert opinion without explicit critical appraisal, or based on physiology, bench research or “first principles” | Expert opinion without explicit critical appraisal, or based on economic theory or “first principles” |
Produced by Bob Phillips, Chris Ball, Dave Sackett, Doug Badenoch, Sharon Straus, Brian Haynes, Martin Dawes since November 1998.
There are many publications on critical appraisal techniques for a wide variety of papers, each with its own merits and drawbacks. We strongly advocate the use of critical appraisal checklists and have collated a number of these on the BestBETs web site, where you can upload your critical appraisal on-line (these can subsequently be linked to the BET when you complete it). Many of the appraisal methods we use are based on the work of Crombie (4), Sackett (5;6) and Greenhalgh (7).
Critical appraisal will allow you to decide whether the conclusions of the study have any relevance to your question, and also whether they have any validity. By assessing validity we are determining whether the quality of the study is sufficiently high to ensure that the conclusions are justified. If, during critical appraisal, you find that a study is fatally flawed, make a note of this (and why) and discard it; it should not then be used in the BET. Ideally you will have done your critical appraisal on-line so that other readers can see why you discarded the paper.
Further guidance on the critical appraisal process is available on the BestBETs courses.
The range of paper types
For some topics, all the papers you find will be of similar design and quality. Often, however, this will not be the case. As a general rule of thumb for what to include in the BET table, we refer to the evidence levels shown above and take the papers from the highest level we found together with those from the level below. For example, you would take level 2 and level 3 papers if level 2 was the highest you found. If there is a high quality, relevant systematic review (e.g. a Cochrane review) encompassing all relevant papers, include this together with any relevant papers published after it (there is no need to include each individual paper incorporated in the systematic review).
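The rule of thumb above is simple enough to sketch in code. The following Python snippet is purely illustrative and not part of any BestBETs tooling: the paper titles are made up, and evidence levels are simplified to integers (1 = best), ignoring CEBM sublevels such as 2a/2b/2c.

```python
# Illustrative sketch only: apply the "highest level found, plus the
# level below" inclusion rule to a list of candidate papers.
# Titles are hypothetical; levels are simplified to integers (1 = best).

def select_for_bet(papers):
    """Keep papers at the best (lowest-numbered) evidence level found,
    plus those one level below it."""
    best = min(p["level"] for p in papers)
    return [p for p in papers if p["level"] <= best + 1]

papers = [
    {"title": "Cohort study A", "level": 2},
    {"title": "Case-control study B", "level": 3},
    {"title": "Case series C", "level": 4},
]

# Level 2 is the highest found, so level-2 and level-3 papers are kept.
selected = select_for_bet(papers)
print([p["title"] for p in selected])  # ['Cohort study A', 'Case-control study B']
```

In practice this judgement is made by hand while reading the papers, of course; the sketch simply makes the inclusion rule explicit.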
Collating the data in the table
If you go to the BestBETs database you will be able to see many examples of how other authors have summarised the data in the table. The basic table should look like this:
| Author, date and country | Patient group | Study type (level of evidence) | Outcomes | Key results | Study weaknesses |
| --- | --- | --- | --- | --- | --- |
Author, Date and Country
This should be fairly self-explanatory!
Patient Group
There should be enough information in this box for the reader to understand who was studied and what happened to them. It often closely resembles the patient group/methods section of the abstract. However, it is important that you focus on elements related to the three-part question. Keeping this brief will help the editing process!
Study Type (level of evidence)
The basic study design is stated e.g. Prospective Randomised Controlled Trial, Diagnostic Cohort etc. If possible include the level of evidence from the CEBM table.
Outcomes
State in clear terms what the clinically relevant outcomes were for the paper in question. These should be unambiguous and directly related to the outcomes from the three-part question. We discourage you from putting “interesting” data from the paper in this section: remember that the BET is focused on answering your three-part question and the outcomes should relate to this. By clear terms we mean that the reader should be in no doubt about what was measured. For example:
- Not so good: Movement improved
- Good: Time to return to normal sporting activities (in weeks)
- Better: Time to achieve full weight bearing
Key Results
The key results and outcomes sections of the table are directly linked: when the reader looks at the table, they should see what was measured directly adjacent to the actual result in the key results section. You should put actual values in this column together with any measure of statistical analysis (e.g. a p value). It is not acceptable simply to state that one thing is better than another. For good and bad examples see below:
| Outcome | Key result |
| --- | --- |
| Time to return to full sporting activity | Mean of 35 days for tubigrip vs. 56 days for POP (t test, p<0.001) |
| Able to fully weight bear | Mean of 14 days for tubigrip vs. 28 days for POP (t test, p=0.05) |
| Self reported pain score at 1 week on 100mm VAS | 56mm for tubigrip vs. 60mm for POP (Mann-Whitney, p=NS) |
| Range of movement | Better for POP |
[Note: I made all these figures up so don’t quote me, it’s just for illustration.]
As you can see, the first three outcome and key result pairs allow you to interpret the findings in a meaningful way. The last pair lacks the detail required for the reader to know the magnitude or significance of the effect.
Study Weaknesses
This is where you should put details of any problems with the study. Generally these consist of two elements.
Firstly, if there are methodological flaws in the study, state them here. As mentioned previously, some studies will be so flawed that they were rejected at an earlier stage; these do not need to be included here.
Secondly, you should include comments here that relate to the applicability of the study data to your 3-part question. It is perhaps a little disingenuous to describe these as “study weaknesses” as it is unlikely that the authors had the BET in mind when they did the study. However, it is important for the reader of the BET to understand the applicability of the data to the question and the clinical scenario.
References
1. Mackway-Jones K, Carley S. bestbets.org: odds on favourite for evidence in emergency medicine reaches the world wide web. Journal of Accident & Emergency Medicine 2000;17(4):235-236.
2. Carley SD, Mackway-Jones K, Jones A, Morton RJ, Dollery W, Maurice S et al. Moving towards evidence based emergency medicine: use of a structured critical appraisal journal club. Journal of Accident & Emergency Medicine 1998;15(4):220-222.
3. Mackway-Jones K, Carley SD, Morton RJ, Donnan S. The best evidence topic report: a modified CAT for summarising the available evidence in emergency medicine. Journal of Accident & Emergency Medicine 1998;15(4):222-226.
4. Crombie IK. The pocket guide to critical appraisal. London: BMJ Publishing, 1996.
5. Sackett DL, Haynes RB, Guyatt GH, Tugwell P. Clinical epidemiology: a basic science for clinical medicine. Boston: Little Brown, 1991.
6. Sackett D. How to teach and practice evidence based medicine. 2nd ed. London: Churchill, 2000.
7. Greenhalgh T. How to read a paper: the basics of evidence based medicine. 2nd ed. London: BMJ, 2001.