TY - JOUR
T1 - Exploring the application of machine learning to expert evaluation of research impact
AU - Williams, Kate
AU - Michalska, Sandra
AU - Cohen, Eliel
AU - Szomszor, Martin
AU - Grant, Jonathan
N1 - Publisher Copyright:
© 2023 Williams et al.
PY - 2023/8/3
Y1 - 2023/8/3
N2 - The objective of this study is to investigate the application of machine learning techniques to the large-scale human expert evaluation of the impact of academic research. Using publicly available impact case study data from the UK’s Research Excellence Framework (2014), we trained five machine learning models on a range of qualitative and quantitative features, including institution, discipline, narrative style (explicit and implicit), and bibliometric and policy indicators. Our work makes two key contributions. Based on the accuracy metric in predicting high- and low-scoring impact case studies, it shows that machine learning models are able to process information to make decisions that resemble those of expert evaluators. It also provides insights into the characteristics of impact case studies that would be favoured if a machine learning approach were applied for their automated assessment. The results of the experiments showed a strong influence of institutional context, selected metrics of narrative style, and the uptake of research by policy and academic audiences. Overall, the study demonstrates promise for a shift from descriptive to predictive analysis, but suggests caution around the use of machine learning for the assessment of impact case studies.
UR - http://www.scopus.com/inward/record.url?scp=85166575377&partnerID=8YFLogxK
U2 - 10.1371/journal.pone.0288469
DO - 10.1371/journal.pone.0288469
M3 - Article
SN - 1932-6203
VL - 18
JO - PLoS One
JF - PLoS One
IS - 8 August
M1 - e0288469
ER -