Inter-reviewer reliability of human literature reviewing and implications for the introduction of machine-assisted systematic reviews: A mixed-methods review

Piet Hanegraaf, Abrham Wondimu, Jacob Jan Mosselman, Rutger De Jong, Seye Abogunrin, Luisa Queiros, Marie Lane, Maarten J. Postma, Cornelis Boersma, Jurjen Van Der Schans*

*Corresponding author for this work

Research output: Contribution to journal › Article › Academic › peer-review


Abstract

Objectives: Our main objective is to assess the inter-reviewer reliability (IRR) reported in published systematic literature reviews (SLRs). Our secondary objective is to determine the IRR expected by authors of SLRs for both human and machine-assisted reviews.

Methods: We performed a review of SLRs of randomised controlled trials using the PubMed and Embase databases. Data were extracted on IRR, expressed as Cohen's kappa, for abstract/title screening, full-text screening and data extraction, together with review team size and the number of items screened; the quality of each review was assessed with A MeaSurement Tool to Assess systematic Reviews 2 (AMSTAR 2). In addition, we surveyed authors of SLRs on their expectations of machine learning automation and of human-performed IRR in SLRs.
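Cohen's kappa, the IRR metric extracted in this review, corrects the observed agreement between two reviewers for the agreement expected by chance. The following is a minimal sketch (Python), not the authors' analysis code; the reviewer decision lists are hypothetical and only illustrate the calculation for binary include/exclude screening decisions.

```python
def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa = (p_o - p_e) / (1 - p_e) for two raters over the same items."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    labels = sorted(set(rater_a) | set(rater_b))

    # Observed agreement: share of items where both raters gave the same label.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n

    # Chance agreement, from each rater's marginal label frequencies.
    p_e = sum((rater_a.count(lab) / n) * (rater_b.count(lab) / n) for lab in labels)

    return (p_o - p_e) / (1 - p_e)

# Hypothetical screening decisions for 10 abstracts (1 = include, 0 = exclude).
reviewer_1 = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]
reviewer_2 = [1, 0, 1, 0, 0, 0, 1, 0, 1, 1]
print(round(cohens_kappa(reviewer_1, reviewer_2), 2))  # -> 0.8
```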

Results: After removal of duplicates, 836 articles were screened on title and abstract, and 413 were screened in full text. In total, 45 eligible articles were included. The average Cohen's kappa reported was 0.82 (SD=0.11, n=12) for abstract screening, 0.77 (SD=0.18, n=14) for full-text screening, 0.86 (SD=0.07, n=15) for the whole screening process and 0.88 (SD=0.08, n=16) for data extraction. No association was observed between the reported IRR and review team size, number of items screened or quality of the SLR. The survey (n=37) showed overlapping expected Cohen's kappa values, ranging from approximately 0.6 to 0.9, for both human and machine learning-assisted SLRs. No trend was observed between reviewer experience and expected IRR. Authors expect a higher-than-average IRR for machine learning-assisted SLRs compared with human-based SLRs in both screening and data extraction.

Conclusion: Currently, it is not common to report IRR in the scientific literature for either human or machine learning-assisted SLRs. This mixed-methods review provides initial guidance on a human IRR benchmark, which could be used as a minimal threshold for IRR in machine learning-assisted SLRs. PROSPERO registration number CRD42023386706.

Original language: English
Article number: e076912
Number of pages: 10
Journal: BMJ Open
Volume: 14
Issue number: 3
Publication status: Published - 19 March 2024

Keywords

  • Randomized Controlled Trial
  • Surveys and Questionnaires
  • Systematic Review

