Numerous large knowledge graphs, such as DBpedia, Wikidata, Yago, and Freebase, have been developed in the last decade, containing millions of facts about entities in the world. These knowledge graphs have proven highly useful for intelligent Web search, question understanding, in-context advertising, social media mining, and biomedicine. As some researchers have pointed out, a knowledge graph is not just a graph database; it should also include a layer of conceptual knowledge, usually represented as a set of first-order rules. However, automatically extracting first-order rules from large knowledge graphs is challenging, and traditional models are usually unable to handle rule learning at this scale. This chapter presents state-of-the-art techniques and models for learning first-order rules via representation learning. After recalling some basics of rule learning in knowledge graphs, we introduce techniques for embedding-based rule learning through major models such as RLvLR and TyRuLe, which embed paths in knowledge graphs into latent spaces to guide candidate rule search. We then evaluate both the efficiency of rule learning and the quality of the automatically learned rules by applying them to link prediction. Before concluding, we also discuss some future research problems in the area.
IOS Press, Inc.