Abstract
Knowledge Graph Embedding Models (KGEMs) project entities and relations from Knowledge Graphs (KGs) into dense vector spaces, enabling tasks such as link prediction and recommendation systems. However, these embeddings typically
suffer from a lack of interpretability and struggle to represent entity similarities in a way that is meaningful to humans. To address these challenges, we introduce InterpretE, a neuro-symbolic approach that generates interpretable vector spaces aligned with human-understandable entity aspects. By explicitly linking entity representations to their desired semantic aspects, InterpretE not only improves interpretability but also enhances the clustering of similar entities based on these aspects. Our experiments demonstrate that InterpretE effectively produces embeddings that are interpretable and improve the evaluation of semantic similarities, making it a valuable tool in explainable AI research by supporting transparent decision-making. By offering insights into how embeddings represent entities, InterpretE enables KGEMs to be used for semantic tasks in a more trustworthy and reliable manner.
Original language | English |
---|---|
Publication status | Published - 2025 |
Event | Neurosymbolic Artificial Intelligence (An IOS Press Journal), Duration: 30 May 2025 → …, https://neurosymbolic-ai-journal.com/paper/towards-interpretable-embeddings-aligning-representations-semantic-aspects-0 |
Other
Other | Neurosymbolic Artificial Intelligence (An IOS Press Journal) |
---|---|
Abbreviated title | NAI |
Period | 30/05/2025 → … |
Internet address | |