Ebook: Deep Learning with Relational Logic Representations
Deep learning has been used with great success in a wide range of applications, from image processing to game playing, and the rapid progress of this learning paradigm has even been seen as paving the way towards general artificial intelligence. However, current deep learning models remain fundamentally limited in several ways.
This book, ‘Deep Learning with Relational Logic Representations’, addresses the limited expressiveness of the tensor-based representation used in standard deep learning by generalizing it to relational representations based on mathematical logic. Logic is the natural formalism for the relational data omnipresent in the interlinked structures of the Internet and relational databases, as well as for background knowledge, which is often available in the form of relational rules and constraints. Such data and knowledge are impossible to exploit properly with standard neural networks. The book therefore introduces a new declarative deep relational learning framework called Lifted Relational Neural Networks, which generalizes standard deep learning models to the relational setting by means of the ‘lifting’ paradigm known from Statistical Relational Learning. The author explains how this approach allows for effective end-to-end deep learning with relational data and knowledge, introduces several enhancements and optimizations to the framework, and demonstrates its expressiveness with various novel deep relational learning concepts, including efficient generalizations of popular contemporary models such as Graph Neural Networks.
Demonstrating the framework across a range of learning scenarios and benchmarks, including an evaluation of its computational efficiency, the book will be of interest to all those interested in the theory and practice of advancing the representations underlying modern deep learning architectures.
In recent years, we have seen a tremendous resurgence of neural networks, applied with great success in highly diverse domains ranging from speech recognition to game playing. The unprecedented progress of this new deep learning trend has even been seen as paving our way towards general artificial intelligence. However, current deep learning models are still limited in many regards. In particular, this thesis addresses the contemporary problem of learning neural networks from relational data and knowledge representations. While virtually all standard models are limited to data in the form of fixed-size tensors, relational data are omnipresent in the interlinked structures of the Internet and relational databases. Likewise, in many domains background knowledge is available in the form of logic or rich graph-based structures, yet it is impossible or very difficult to exploit with standard neural networks.
To address this issue, we introduce a declarative deep relational learning framework called Lifted Relational Neural Networks (LRNNs). The main idea underlying the framework is to approach neural networks through the lifted modeling paradigm, known from Statistical Relational Learning (SRL), where it is used to exploit symmetries in learning problems. Similarly to lifted graphical models in SRL, LRNNs are represented as sets of weighted relational logic rules that describe the structure of a given learning setting.
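For illustration, such a lifted template might consist of weighted rules of the following form (a hypothetical sketch in the style of classic SRL examples; the predicate names and weights here are illustrative, not taken from the thesis):

$$
\begin{aligned}
w_1 &:\ \mathrm{smokes}(X) \leftarrow \mathrm{friend}(X,Y) \wedge \mathrm{smokes}(Y)\\
w_2 &:\ \mathrm{cancer}(X) \leftarrow \mathrm{smokes}(X)
\end{aligned}
$$

Roughly speaking, given the ground facts of a particular example, each ground instantiation of a rule contributes a weighted connection to a neural computation graph, so a single lifted template unfolds into a differently structured network for every input example.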
We demonstrate that this paradigm allows for effective end-to-end neural learning with relational data and knowledge. The encoding through weighted relational logic rules provides flexible means for expressing a wide variety of novel modeling concepts that incorporate various latent relational patterns. Notably, these also elegantly cover contemporary graph convolution models, such as Graph Neural Networks (GNNs), as a simple special case, as sketched below. We explain how to easily generalize these state-of-the-art models towards higher expressiveness, and we also evaluate the general LRNN framework on various practical learning scenarios and benchmarks, including its computational efficiency.
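To make the correspondence with GNNs concrete, the following minimal sketch (illustrative only; the function and variable names are our own, not the framework's API) shows how grounding a single weighted rule, say W : h1(X) ← h0(Y) ∧ edge(X, Y), over a concrete graph yields one message-passing layer:

```python
import numpy as np

# A minimal sketch (not the thesis's implementation) of how grounding the
# weighted rule  W : h1(X) <- h0(Y), edge(X, Y)  over a concrete graph
# yields one GNN-style layer: each ground instance of the rule body
# contributes one weighted connection to the computation graph.

def ground_rule_layer(H, edges, W):
    """H: (n, d) node embeddings; edges: list of (x, y) pairs; W: (d, d) rule weight."""
    M = np.zeros_like(H)
    for x, y in edges:          # one ground rule instance per fact edge(x, y)
        M[x] += H[y] @ W        # the body h0(Y), edge(X, Y) fires for this pair
    return np.tanh(M)           # aggregate the ground instances, apply activation

# Usage: a 4-node cycle with random 8-dimensional features.
rng = np.random.default_rng(0)
H0 = rng.normal(size=(4, 8))
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
H1 = ground_rule_layer(H0, edges, rng.normal(size=(8, 8)) * 0.1)
print(H1.shape)  # (4, 8)
```

Stacking several such rules recovers deeper message-passing architectures, while enriching the rule bodies yields the more expressive generalizations discussed in the thesis.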
Additionally, we introduce several enhancements to the framework. Firstly, we present automated structure learning of the relational rules that compose the lifted models. Secondly, we introduce two principled optimization techniques to scale up the integrative framework from both the logical and the neural learning perspectives; each of these techniques is also effective on its own in the respective learning approach. Lastly, we demonstrate the framework on selected use cases from different domains.