Exploring the application of machine learning to expert evaluation of research impact

Kate Williams*, Sandra Michalska, Eliel Cohen, Martin Szomszor, Jonathan Grant

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

Abstract

The objective of this study is to investigate the application of machine learning techniques to the large-scale human expert evaluation of the impact of academic research. Using publicly available impact case study data from the UK’s Research Excellence Framework (2014), we trained five machine learning models on a range of qualitative and quantitative features, including institution, discipline, narrative style (explicit and implicit), and bibliometric and policy indicators. Our work makes two key contributions. Based on the accuracy metric in predicting high- and low-scoring impact case studies, it shows that machine learning models are able to process information to make decisions that resemble those of expert evaluators. It also provides insights into the characteristics of impact case studies that would be favoured if a machine learning approach were applied for their automated assessment. The results of the experiments showed a strong influence of institutional context, of selected metrics of narrative style, and of the uptake of research by policy and academic audiences. Overall, the study demonstrates promise for a shift from descriptive to predictive analysis, but suggests caution around the use of machine learning for the assessment of impact case studies.
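The classification setup the abstract describes — predicting high- versus low-scoring case studies from qualitative and quantitative features — can be sketched minimally as follows. This is an illustrative sketch only, not the authors' method: the feature names (a narrative-style metric, a policy-uptake count), the synthetic data, and the from-scratch logistic regression are all hypothetical stand-ins, not drawn from the REF dataset or the paper's five models.

```python
# Minimal sketch, assuming two hypothetical features per case study:
# a narrative-style score and a count of policy citations. Data is synthetic.
import math
import random

def train_logistic(X, y, lr=0.1, epochs=500):
    """Fit a logistic-regression classifier by batch gradient descent."""
    n_features = len(X[0])
    w = [0.0] * n_features
    b = 0.0
    for _ in range(epochs):
        grad_w = [0.0] * n_features
        grad_b = 0.0
        for xi, yi in zip(X, y):
            z = sum(wj * xj for wj, xj in zip(w, xi)) + b
            p = 1.0 / (1.0 + math.exp(-z))   # predicted P(high-scoring)
            err = p - yi
            for j in range(n_features):
                grad_w[j] += err * xi[j]
            grad_b += err
        # average gradients over the training set before updating
        w = [wj - lr * gj / len(X) for wj, gj in zip(w, grad_w)]
        b -= lr * grad_b / len(X)
    return w, b

def predict(w, b, xi):
    z = sum(wj * xj for wj, xj in zip(w, xi)) + b
    return 1 if 1.0 / (1.0 + math.exp(-z)) >= 0.5 else 0

# Synthetic case studies: [narrative_score, policy_citations] -> high (1) / low (0)
random.seed(0)
X = ([[random.gauss(0.7, 0.1), random.gauss(5, 1)] for _ in range(50)]
     + [[random.gauss(0.3, 0.1), random.gauss(1, 1)] for _ in range(50)])
y = [1] * 50 + [0] * 50

w, b = train_logistic(X, y)
accuracy = sum(predict(w, b, xi) == yi for xi, yi in zip(X, y)) / len(X)
print(f"training accuracy: {accuracy:.2f}")
```

In the study itself, accuracy in separating high- from low-scoring case studies is the reported evaluation metric; a sketch like this shows only the general shape of such a feature-based classifier, not the feature engineering (narrative style, bibliometrics, policy indicators) that the paper's results turn on.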
Original language: English
Article number: e0288469
Journal: PLoS One
Volume: 18
Issue number: 8 August
Early online date: 3 Aug 2023
DOIs
Publication status: E-pub ahead of print - 3 Aug 2023
