Lifted Relational Neural Networks (LRNNs) were introduced in 2015 [1] as a framework combining logic programming with neural networks for efficient learning of latent relational structures, such as various subgraph patterns in molecules. In this chapter, we will briefly re-introduce the framework and explain its current relevance in the context of contemporary Graph Neural Networks (GNNs). In particular, we will detail how the declarative nature of differentiable logic programming in LRNNs can be used to elegantly capture various GNN variants and generalize to novel, even more expressive, deep relational learning concepts. Additionally, we will briefly demonstrate the practical use and computational performance of the framework.
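To illustrate the core idea behind the abstract's claim, the following is a minimal, self-contained Python sketch (not the paper's actual code or the NeuraLogic library API) of how a weighted logic rule such as `h1(X) :- W * h0(Y), edge(Y, X)` grounds into a GNN-style message-passing layer over a concrete graph. All names here (`h0`, `edge`, `W`, the tanh activation) are illustrative assumptions.

```python
import numpy as np

def message_passing_layer(h0, edges, W):
    """Ground the rule  h1(X) <= W @ h0(Y), edge(Y, X)  over a concrete graph.

    h0    : dict mapping node -> feature vector (facts of the h0 predicate)
    edges : iterable of (Y, X) pairs (facts of the edge predicate)
    W     : weight matrix shared across all groundings of the rule (lifting)
    """
    h1 = {x: np.zeros(W.shape[0]) for x in h0}
    for y, x in edges:          # one ground rule instance per edge fact
        h1[x] += W @ h0[y]      # weighted body atom h0(Y), summed (aggregation)
    return {x: np.tanh(v) for x, v in h1.items()}  # nonlinearity per head atom

# Tiny example graph: a path 0 -> 1 -> 2 with 2-dimensional node features.
h0 = {0: np.array([1.0, 0.0]), 1: np.array([0.0, 1.0]), 2: np.array([1.0, 1.0])}
edges = [(0, 1), (1, 2)]
W = np.eye(2)
print(message_passing_layer(h0, edges, W))
```

The key point this sketch demonstrates is lifting: a single rule with one shared weight matrix `W` is reused across every ground instance (every edge), which is exactly the weight-sharing scheme of a GNN convolution layer expressed declaratively.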