Bringing Back Semantics to Knowledge Graph Embeddings: An Interpretability Approach

Antoine Domingues*, Nitisha Jain*, Albert Meroño-Peñuela, Elena Simperl

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Conference paper › peer-review


Abstract

Knowledge Graph Embedding Models project entities and relations from Knowledge Graphs into a vector space. Despite their widespread application, concerns persist about the ability of these models to capture entity similarity effectively. To address this, we introduce InterpretE, a novel neuro-symbolic approach that derives interpretable vector spaces whose dimensions are human-understandable features of the entities. We demonstrate the efficacy of InterpretE in encapsulating desired semantic features, presenting evaluations both in the vector space and in terms of semantic similarity measurements.
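As a minimal illustration of the contrast the abstract describes (not the authors' InterpretE method itself), the sketch below compares entity similarity computed on opaque embedding vectors with similarity computed on an interpretable vector whose dimensions are named entity features. All entity names, features, and vector values are hypothetical.

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# Hypothetical opaque embeddings for two entities; individual dimensions carry no meaning.
emb = {
    "Berlin": np.array([0.12, -0.48, 0.33, 0.90]),
    "Munich": np.array([0.10, -0.52, 0.30, 0.88]),
}

# Hypothetical interpretable vectors: each dimension corresponds to a named entity
# feature, which is the kind of human-understandable space InterpretE aims to provide.
features = ["isCity", "isCapital", "inGermany"]
interp = {
    "Berlin": np.array([1.0, 1.0, 1.0]),
    "Munich": np.array([1.0, 0.0, 1.0]),
}

print("embedding similarity:", cosine(emb["Berlin"], emb["Munich"]))
print("feature similarity:  ", cosine(interp["Berlin"], interp["Munich"]))
# The feature-space score (about 0.82) is directly explainable: the entities agree
# on isCity and inGermany but differ on isCapital, whereas the embedding score
# offers no such per-dimension explanation.
```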

Original language: English
Title of host publication: 18th International Conference on Neural-Symbolic Learning and Reasoning (NeSy 2024)
Publisher: Springer, Cham
Pages: 192-203
Number of pages: 12
Volume: 14979
ISBN (Electronic): 978-3-031-71167-1
ISBN (Print): 978-3-031-71166-4
DOIs:
Publication status: Published - 10 Sept 2024
Event: 18th International Conference on Neural-Symbolic Learning and Reasoning, NeSy 2024 - Barcelona, Spain
Duration: 9 Sept 2024 - 12 Sept 2024

Publication series

Name: Neural-Symbolic Learning and Reasoning
Volume: 14980
ISSN (Print): 0302-9743
ISSN (Electronic): 1611-3349

Conference

Conference: 18th International Conference on Neural-Symbolic Learning and Reasoning, NeSy 2024
Country/Territory: Spain
City: Barcelona
Period: 9/09/2024 - 12/09/2024

Keywords

  • interpretable vectors
  • knowledge graph embeddings
  • semantic similarity
