TY - JOUR
T1 - Inter-reviewer reliability of human literature reviewing and implications for the introduction of machine-assisted systematic reviews
T2 - A mixed-methods review
AU - Hanegraaf, Piet
AU - Wondimu, Abrham
AU - Mosselman, Jacob Jan
AU - De Jong, Rutger
AU - Abogunrin, Seye
AU - Queiros, Luisa
AU - Lane, Marie
AU - Postma, Maarten J.
AU - Boersma, Cornelis
AU - Van Der Schans, Jurjen
N1 - Publisher Copyright:
© 2024 BMJ Publishing Group. All rights reserved.
PY - 2024/3/19
Y1 - 2024/3/19
N2 - Objectives: Our main objective is to assess the inter-reviewer reliability (IRR) reported in published systematic literature reviews (SLRs). Our secondary objective is to determine the IRR expected by authors of SLRs for both human and machine-assisted reviews. Methods: We performed a review of SLRs of randomised controlled trials using the PubMed and Embase databases. Data were extracted on IRR, by means of Cohen's kappa scores for abstract/title screening, full-text screening and data extraction, together with review team size and the number of items screened; review quality was assessed with A MeaSurement Tool to Assess systematic Reviews 2 (AMSTAR 2). In addition, we surveyed authors of SLRs on their expectations of machine learning automation and of human-performed IRR in SLRs. Results: After removal of duplicates, 836 articles were screened on title and abstract, and 413 were screened full text. In total, 45 eligible articles were included. The average Cohen's kappa score reported was 0.82 (SD=0.11, n=12) for abstract screening, 0.77 (SD=0.18, n=14) for full-text screening, 0.86 (SD=0.07, n=15) for the whole screening process and 0.88 (SD=0.08, n=16) for data extraction. No association was observed between the reported IRR and review team size, items screened or quality of the SLR. The survey (n=37) showed overlapping expected Cohen's kappa values, ranging between approximately 0.6 and 0.9, for both human and machine learning-assisted SLRs. No trend was observed between reviewer experience and expected IRR. Authors expect a higher-than-average IRR for machine learning-assisted SLRs compared with human-based SLRs in both screening and data extraction. Conclusion: Currently, it is not common to report IRR in the scientific literature for either human or machine learning-assisted SLRs. This mixed-methods review gives initial guidance on the human IRR benchmark, which could be used as a minimal threshold for IRR in machine learning-assisted SLRs. PROSPERO registration number: CRD42023386706.
KW - Randomized Controlled Trial
KW - Surveys and Questionnaires
KW - Systematic Review
UR - http://www.scopus.com/inward/record.url?scp=85188267760&partnerID=8YFLogxK
DO - 10.1136/bmjopen-2023-076912
M3 - Article
C2 - 38508610
AN - SCOPUS:85188267760
SN - 2044-6055
VL - 14
JO - BMJ Open
JF - BMJ Open
IS - 3
M1 - e076912
ER -