

The integration of Large Language Models (LLMs) with logic-based Knowledge Graphs (KGs) and, more generally, with Knowledge Representation and Reasoning (KRR) methodologies has rapidly emerged as a pivotal area of research. Such a synergy is aimed at enhancing transparency and accountability in AI-driven applications, a goal that is paramount for big data processing and robust decision-making in high-stakes domains such as finance and biomedicine. Indeed, despite the adaptability and human-centric understanding that LLMs bring, they inherently lack systematic reasoning capabilities, often operating opaquely with limited factuality and common sense. On the other hand, ontological reasoning with knowledge graphs offers robust and scalable reasoning, enriched with the step-by-step explainability of the inferred insights, but is often restricted by the rigidity of its structured rule-based formalism and falls short of providing the semantic understanding required in today's human-data interaction. In this chapter, we address the intrinsic limitations affecting the above paradigms individually and introduce KGLM, a novel neurosymbolic framework that synergistically combines state-of-the-art LLMs with powerful KRR approaches to perform complex reasoning tasks over large knowledge graphs. Through KGLM, language models such as Llama 3 are enhanced with domain awareness and transparency, enabling them to act as natural language interfaces to KGs. Conversely, ontological reasoning systems such as our Vadalog engine are augmented with human-like flexibility to capture semantic nuances in the data. The framework can be seamlessly integrated into existing data processing pipelines and tools to power data-intensive decision-making processes in complex real-world domains.
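
To make the neurosymbolic pattern concrete, the following is a minimal, self-contained Python sketch of the interaction the chapter describes: a language-model layer translates a natural-language question into a symbolic query, and a rule-based reasoner derives answers over KG facts while recording the rule behind each inference so the result can be explained step by step. The names used here (e.g., llm_translate, forward_chain, the toy ownership rules) are purely illustrative assumptions and do not reflect the KGLM or Vadalog APIs.

```python
# Conceptual sketch only: an LLM step producing a symbolic query, plus a tiny
# forward-chaining reasoner over KG facts with per-fact provenance.
from dataclasses import dataclass

@dataclass
class Rule:
    head: tuple        # e.g. ("controls", "X", "Y"); uppercase strings are variables
    body: list         # list of atoms, e.g. [("owns", "X", "Y")]
    label: str = ""    # human-readable name used for explanations

def match(pattern, fact, bindings):
    """Unify one body atom with a ground fact under the current bindings."""
    if pattern[0] != fact[0] or len(pattern) != len(fact):
        return None
    env = dict(bindings)
    for p, f in zip(pattern[1:], fact[1:]):
        if p.isupper():                  # variable: bind or check consistency
            if env.get(p, f) != f:
                return None
            env[p] = f
        elif p != f:                     # constant mismatch
            return None
    return env

def forward_chain(facts, rules):
    """Naive forward chaining; records which rule produced each derived fact."""
    derived, provenance = set(facts), {}
    changed = True
    while changed:
        changed = False
        for rule in rules:
            envs = [{}]
            for atom in rule.body:
                envs = [e2 for e in envs for f in derived
                        if (e2 := match(atom, f, e)) is not None]
            for env in envs:
                new = (rule.head[0],) + tuple(env[v] for v in rule.head[1:])
                if new not in derived:
                    derived.add(new)
                    provenance[new] = rule.label
                    changed = True
    return derived, provenance

def llm_translate(question):
    """Hypothetical stand-in for the LLM layer (e.g., Llama 3 in KGLM), which
    would map the question to a query predicate; here it is a fixed stub."""
    return ("controls", "CompanyA", "Y") if "control" in question.lower() else None

kg_facts = {("owns", "CompanyA", "CompanyB"), ("owns", "CompanyB", "CompanyC")}
kg_rules = [
    Rule(("controls", "X", "Y"), [("owns", "X", "Y")], "direct ownership"),
    Rule(("controls", "X", "Z"), [("controls", "X", "Y"), ("owns", "Y", "Z")],
         "transitive control"),
]

query = llm_translate("Which companies does CompanyA control?")
facts, why = forward_chain(kg_facts, kg_rules)
for f in sorted(facts):
    if match(query, f, {}) is not None:
        print(f, "-- derived via:", why.get(f, "stated fact"))
```

In this toy run, the reasoner derives that CompanyA controls both CompanyB (direct ownership) and CompanyC (transitive control), and the recorded rule labels play the role of the step-by-step explanations that the symbolic side contributes to the overall framework.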