Reviewing the literature

Assessing the evidence

Systematic reviews need to objectively explore the strengths and weaknesses of the included studies in order to assess how study quality affects the findings of the review. These strengths and weaknesses should be taken into account when discussing the findings of your review, including whether any weaknesses have an impact on the review findings. The first step is to consider each article individually:

  • Applicability: The relevance of the topic, the setting and context, the discipline
  • Extrinsic factors: Who wrote it, their affiliations, the journal, the authors' previous publications
  • Intrinsic factors: The study design, sample, methods
  • Generalisability/applicability: How the findings impact on practice, whether findings are transferable (Booth et al., 2022)
Validity

Validity considers how the research methods were designed and conducted and whether this could have affected whether the results are 'true'. The rigour of a study reflects how far the study design minimises the risk of bias and accounts for confounding. Bias is anything that may incorrectly influence the conclusions, such as how the participants were recruited, how the data were analysed or how the outcomes were measured. Confounding is where it is not clear whether an effect was due to the factor you are studying or to another factor, such as age, gender, socio-economic status or ethnicity.

Reliability

Reliability refers to how trustworthy or reproducible the results are and what the possible effects of chance might have been. This can be determined by statistical tests. Generally, if a result has less than a 1 in 20 probability of occurring by chance (p < 0.05), it is considered unlikely to be due to chance and is described as statistically significant.
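
As a minimal sketch of that 1 in 20 convention (p < 0.05), the Python example below compares two invented sets of outcome scores with an independent-samples t-test; the group names and all the numbers are hypothetical and are not taken from any study discussed in this guide.

    # Minimal sketch of the p < 0.05 convention, using invented (hypothetical) data.
    from scipy import stats

    # Hypothetical outcome scores for two groups in an imaginary trial.
    intervention = [12.1, 13.4, 11.8, 14.0, 12.9, 13.7, 12.5, 13.1]
    control = [11.2, 11.9, 10.8, 12.0, 11.5, 12.2, 11.1, 11.7]

    # Independent-samples t-test: how likely is a difference this large by chance alone?
    t_stat, p_value = stats.ttest_ind(intervention, control)

    print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
    if p_value < 0.05:  # less than a 1 in 20 probability of arising by chance
        print("The result would conventionally be called statistically significant.")
    else:
        print("The result would not be called statistically significant.")

The choice of statistical test depends on the study design and the type of data; the point of the sketch is only the comparison of the p-value against the 0.05 threshold.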

Applicability

Applicability considers whether the results are useful and how strong the recommendations are for practice. This may look at the context or population of the study.  

Publication bias

Statistically significant results are more likely to be published than non-significant results. This can lead to publication bias in reviews: if non-significant results are not published, they cannot be identified, included and analysed. Consider the following points to identify whether your review may be at risk of publication bias:

  • Are the databases and sources likely to identify non-English or unpublished studies?
  • Does the list of included studies have a high proportion of non-English studies and grey literature?
  • If only including English studies, examine the abstracts of relevant non-English studies to see if the conclusions agree with your included English language studies
Selective outcome reporting bias

Selective reporting bias relates to missing data within a study and is difficult to detect unless you are familiar with the topic. It occurs when researchers report only favourable and significant results and leave out unfavourable ones. Selective outcome reporting can lead to bias because it presents the outcomes of the selected intervention positively, leading to incorrect decisions about whether the intervention is effective. Consider the following to determine whether your review may be at risk of selective outcome reporting bias:

  • Do the outcomes described in the literature match the outcomes your stakeholders are interested in?
  • Look at all the outcomes in all included studies and see if any studies have left out important outcomes, or report on fewer outcomes than other studies
  • Do the outcomes and measures reported match what was proposed in protocols or initial results presented in conference abstracts?
  • Contact authors and request additional data on specific or non-reported outcomes
Conflict of interest 

Consider whether the authors may have been influenced by a conflict of interest when reporting results. Any conflict of interest should be declared in the article. If a study has been pre-registered or has a published protocol, examine these to see whether the study was conducted as planned. Look at:

  • Where the study funding came from
  • Whether the study was designed to make a positive result more likely
Checklists