TY - JOUR
T1 - Using Large Language Models for Knowledge Engineering (LLMKE): A Case Study on Wikidata
AU - Zhang, Bohui
AU - Reklos, Ioannis
AU - Jain, Nitisha
AU  - Meroño-Peñuela, Albert
AU - Simperl, Elena
PY - 2023/1/1
Y1 - 2023/1/1
N2 - In this work, we explore the use of Large Language Models (LLMs) for knowledge engineering tasks in the context of the ISWC 2023 LM-KBC Challenge. For this task, given subject and relation pairs sourced from Wikidata, we utilize pre-trained LLMs to produce the relevant objects in string format and link them to their respective Wikidata QIDs. We developed a pipeline using LLMs for Knowledge Engineering (LLMKE), combining knowledge probing and Wikidata entity mapping. The method achieved a macro-averaged F1-score of 0.701 across the properties, with the scores varying from 1.00 to 0.328. These results demonstrate that the knowledge of LLMs varies significantly depending on the domain and that further experimentation is required to determine the circumstances under which LLMs can be used for automatic Knowledge Base (e.g., Wikidata) completion and correction. The investigation of the results also suggests the promising contribution of LLMs in collaborative knowledge engineering. LLMKE won Track 2 of the challenge. The implementation is available at: https://github.com/bohuizhang/LLMKE.
UR - http://www.scopus.com/inward/record.url?scp=85179556549&partnerID=8YFLogxK
M3 - Article
AN - SCOPUS:85179556549
SN - 1613-0073
VL - 3577
JO - CEUR Workshop Proceedings
JF - CEUR Workshop Proceedings
ER -