Towards Interpretable Embeddings: Aligning Representations with Semantic Aspects

Nitisha Jain, Antoine Domingues, Adwait Baokar, Albert Meroño-Peñuela, Elena Simperl

Research output: Contribution to conference › Paper › peer-review

Abstract

Knowledge Graph Embedding Models (KGEMs) project the entities and relations of Knowledge Graphs (KGs) into dense vector spaces, enabling tasks such as link prediction and recommendation. However, these embeddings typically lack interpretability and struggle to represent entity similarities in a way that is meaningful to humans. To address these challenges, we introduce InterpretE, a neuro-symbolic approach that generates interpretable vector spaces aligned with human-understandable entity aspects. By explicitly linking entity representations to the desired semantic aspects, InterpretE not only improves interpretability but also enhances the clustering of similar entities along these aspects. Our experiments demonstrate that InterpretE produces embeddings that are interpretable and improve the evaluation of semantic similarities, making it a valuable tool for explainable AI research by supporting transparent decision-making. By offering insight into how embeddings represent entities, InterpretE enables KGEMs to be used for semantic tasks in a more trustworthy and reliable manner.
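The abstract does not give implementation details, so the sketch below is only a rough illustration of the general idea of aligning pretrained embeddings with named semantic aspects: it fits one linear probe per aspect and projects entity vectors onto the resulting directions, so that each coordinate of the new space corresponds to a human-understandable aspect. The synthetic data and all names (embeddings, is_film, is_european) are assumptions for illustration, not the authors' InterpretE method.

```python
# Illustrative sketch only -- not the paper's InterpretE implementation.
# Assumes pretrained KG embeddings plus, for each entity, labels for a few
# human-understandable aspects. One linear probe is fit per aspect, and
# projecting embeddings onto the probe directions yields a low-dimensional
# space whose axes align with the named aspects.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

n_entities, dim = 500, 100
embeddings = rng.normal(size=(n_entities, dim))   # stand-in for KGEM vectors

# Hypothetical binary aspect labels for each entity.
aspects = {
    "is_film": rng.integers(0, 2, size=n_entities),
    "is_european": rng.integers(0, 2, size=n_entities),
}

# Fit one linear classifier per aspect and keep its (unit-normalised) weight vector.
directions = []
for name, labels in aspects.items():
    clf = LogisticRegression(max_iter=1000).fit(embeddings, labels)
    w = clf.coef_[0]
    directions.append(w / np.linalg.norm(w))

# Project the original embeddings onto the aspect directions:
# each coordinate of the resulting space corresponds to one named aspect.
P = np.stack(directions, axis=1)                  # shape (dim, n_aspects)
interpretable = embeddings @ P                    # shape (n_entities, n_aspects)

print(dict(zip(aspects, interpretable[0])))       # aspect-aligned coordinates of entity 0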

Other

Other: Neurosymbolic Artificial Intelligence (An IOS Press Journal)
Abbreviated title: NAI
Period: 30/05/2025 → …
