Could We Prescribe Jobs? Recommendation Accuracy of Job Recommender Systems Using Machine Learning: A Systematic Review and Meta-Analysis

Max Lange, Nikolaos Koutsouleris, Ricardo Twumasi*

*Corresponding author for this work

Research output: Contribution to journal › Article


Abstract

Background: What if we could personalise occupational treatment at scale, thereby promoting long-term, fulfilling employment that protects against and alleviates symptoms of mental illness (MI)? People with MI are more likely to be discriminated against in the workplace. Machine learning (ML) can match job seekers to vacancies in the form of job recommender systems (JRS) and could therefore actively contribute to prevention and recovery. Gaining better insight into the performance of JRS across different job-seeking groups and algorithm settings is a crucial first step towards understanding whether JRS could be used to actively prescribe jobs for people with MI.

Methods: We searched three online databases (Scopus, Web of Science and IEEE) from January 2000 to January 2023; studies were analysed using the Cochrane Collaboration's Review Manager 5 and the lme4 and rjags packages in R. For quality assessment, the QUADAS-2 tool was used, since the more suitable QUADAS-AI had not yet been published, nor was it available as a preprint. Bayesian and frequentist random-effects meta-analyses were conducted to estimate pooled sensitivity and specificity, the diagnostic odds ratio (DOR), and the positive and negative likelihood ratios (LR+ and LR-).
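
As a rough illustration of the frequentist side of this approach, the R sketch below shows how pooled sensitivity and specificity can be estimated with random-effects logistic models in lme4, and how the DOR, LR+ and LR- follow from the pooled estimates. The per-study 2x2 counts are hypothetical, and the exact specification used in the review (including the Bayesian rjags models and HSROC fits) may differ.

library(lme4)

# Hypothetical per-study 2x2 counts (tp, fn, fp, tn) -- illustrative only
dat <- data.frame(
  study = factor(1:5),
  tp = c(40, 55, 30, 62, 48),
  fn = c(2, 1, 3, 2, 1),
  fp = c(5, 8, 4, 10, 6),
  tn = c(35, 50, 28, 55, 44)
)

# Random-effects logistic model for sensitivity on the logit scale
fit_sens <- glmer(cbind(tp, fn) ~ 1 + (1 | study),
                  data = dat, family = binomial(link = "logit"))

# Random-effects logistic model for specificity on the logit scale
fit_spec <- glmer(cbind(tn, fp) ~ 1 + (1 | study),
                  data = dat, family = binomial(link = "logit"))

# Back-transform the pooled intercepts to the proportion scale
pooled_sens <- plogis(fixef(fit_sens)[1])
pooled_spec <- plogis(fixef(fit_spec)[1])

# Derived summary measures
lr_pos <- pooled_sens / (1 - pooled_spec)   # positive likelihood ratio (LR+)
lr_neg <- (1 - pooled_sens) / pooled_spec   # negative likelihood ratio (LR-)
dor    <- lr_pos / lr_neg                   # diagnostic odds ratio (DOR)

c(sensitivity = pooled_sens, specificity = pooled_spec,
  LR_plus = lr_pos, LR_minus = lr_neg, DOR = dor)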

Findings: Only five studies were included, as these were the only ones supplying sufficient data for meta-analysis. Forty-seven additional studies would have been included had they adequately supplied performance metrics. Authors of these papers were contacted to gain more information, to no avail. We nevertheless reviewed these studies narratively, attempting to group them by the performance metrics reported while additionally reviewing the predictive features they described. Quality could not be sufficiently assessed owing to incomplete reporting by the studies' authors and the lack of appropriate tools in the literature. Not even the median participant age could be determined, as basic participant information was not supplied by any of the included studies. Frequentist pooled sensitivity and pooled specificity were 96.8 (95% CI: 70.2, 99.7) and 85.7 (95% CI: 40.8, 98.1), respectively, which is suggestive of overfitting due to a lack of thorough cross-validation. Bayesian estimates were similar. HSROC models did not converge.
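
For orientation, the summary measures implied by these pooled point estimates can be checked with a short calculation in R, treating 96.8 and 85.7 as 0.968 and 0.857 on the proportion scale; this is an illustrative back-of-envelope check, not a result reported in the paper.

# Illustrative check using the pooled point estimates reported above
sens <- 0.968   # pooled sensitivity
spec <- 0.857   # pooled specificity

lr_pos <- sens / (1 - spec)   # positive likelihood ratio, approx. 6.8
lr_neg <- (1 - sens) / spec   # negative likelihood ratio, approx. 0.04
dor    <- lr_pos / lr_neg     # diagnostic odds ratio, approx. 181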

Interpretation: The current literature makes it difficult to interpret pooled estimates of JRS performance. Transparent and standardised analysis and reporting guidelines that cover the entire JRS lifecycle are urgently needed, ranging from input data through modelling strategies and performance reporting to model validation and deployment. Ethically acceptable and clinically relevant application of JRS to people with MI can only be achieved when such guidelines are fully implemented in JRS research.

Funding: This project was funded by the London Interdisciplinary Social Science Doctoral Training Programme (LISS-DTP).

Declaration of Interest: We declare no competing interest.
Original language: English
Journal: SSRN Electronic Journal
Publication status: Submitted - 7 Jul 2023
