Bringing Back Semantics to Knowledge Graph Embeddings: An Interpretability Approach

Antoine Domingues*, Nitisha Jain*, Albert Meroño-Peñuela, Elena Simperl

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Conference paper › peer-review



Knowledge Graph Embedding Models project entities and relations from Knowledge Graphs into a vector space. Despite their widespread application, concerns persist about the ability of these models to capture entity similarity effectively.
To address this, we introduce InterpretE, a novel neuro-symbolic approach that derives interpretable vector spaces whose dimensions are human-understandable in terms of the features of the entities.
We demonstrate the efficacy of InterpretE in encapsulating desired semantic features, presenting evaluations both in the vector space and in terms of semantic similarity measurements.
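To illustrate the setting the abstract describes (not the InterpretE method itself), the following minimal sketch shows how entity similarity is typically read off a learned embedding space via cosine similarity. The entity names and vector values are toy examples; real KGE models such as TransE or ComplEx learn such vectors from triples, and the individual dimensions carry no human-readable meaning, which is the interpretability gap the paper targets.

```python
import math

# Toy pretrained entity embeddings (hypothetical values for illustration;
# a real KGE model would learn these from knowledge graph triples).
embeddings = {
    "Berlin":   [0.90, 0.10, 0.30],
    "Paris":    [0.85, 0.15, 0.35],
    "Einstein": [0.10, 0.90, 0.20],
}

def cosine_similarity(u, v):
    """Cosine similarity between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Geometric closeness serves as a proxy for entity similarity:
# the two city vectors are far closer to each other than to the person vector.
sim_cities = cosine_similarity(embeddings["Berlin"], embeddings["Paris"])
sim_mixed = cosine_similarity(embeddings["Berlin"], embeddings["Einstein"])
assert sim_cities > sim_mixed
```

Note that nothing in the vectors says *why* Berlin and Paris are similar; mapping such opaque dimensions onto explicit entity features is the goal of interpretable spaces like the ones InterpretE derives.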
Original language: English
Title of host publication: 18th International Conference on Neural-Symbolic Learning and Reasoning (NeSy 2024)
Publication status: Published - 2024

