Ebook: Shallow Discourse Parsing for German
The last few decades have seen impressive improvements in several areas of Natural Language Processing. Nevertheless, getting a computer to make sense of the discourse of utterances in a text remains challenging. Several different theories exist that aim to describe and analyze the coherent structure of a well-written text, but they vary in their applicability and feasibility for practical use.
This book is about shallow discourse parsing, following the paradigm of the Penn Discourse TreeBank, a corpus containing over 1 million words annotated for discourse relations. When it comes to discourse processing, any language other than English must be considered a low-resource language; this book addresses discourse parsing for German. The limited availability of annotated data for German means that the potential of modern, deep-learning-based methods relying on such data is also limited. This book explores to what extent machine-learning and more recent deep-learning-based methods can be combined with traditional, linguistic feature engineering to improve performance on the discourse parsing task.
The end-to-end shallow discourse parser for German developed for the purpose of this book is open-source and available online. Work has also been carried out on several connective lexicons in different languages. Strategies are discussed for creating or further developing such lexicons for a given language, as are suggestions on how to further increase their usefulness for shallow discourse parsing.
The book will be of interest to all whose work involves Natural Language Processing, particularly in languages other than English.
While the last few decades have seen impressive improvements in several areas of Natural Language Processing, asking a computer to make sense of the discourse of utterances in a text remains challenging. There are several different theories that aim to describe and analyse the coherent structure that a well-written text exhibits. These theories have varying degrees of applicability and feasibility for practical use. Arguably the most data-driven of these theories is the paradigm that comes with the Penn Discourse TreeBank, a corpus annotated for discourse relations containing over 1 million words. Any language other than English, however, can be considered a low-resource language when it comes to discourse processing.
This dissertation is about shallow discourse parsing (discourse parsing following the paradigm of the Penn Discourse TreeBank) for German. The limited availability of annotated data for German means that the potential of modern, deep-learning-based methods relying on such data is also limited. This dissertation explores to what extent machine-learning and more recent deep-learning-based methods can be combined with traditional, linguistic feature engineering to improve performance on the discourse parsing task. A pivotal role is played by connective lexicons that exhaustively list the discourse connectives of a particular language along with some of their core properties.
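To make the idea of a connective lexicon concrete, the following is a minimal, hypothetical sketch of the kind of information such an entry might record (canonical form, orthographic variants, syntactic category, possible senses, and whether the word also has non-connective readings). The field names and the Python representation here are illustrative assumptions; actual lexicons such as DiMLex for German use their own formats.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class LexiconEntry:
    """One illustrative connective lexicon entry (hypothetical schema)."""
    word: str            # canonical form of the connective
    variants: tuple      # orthographic variants
    syn_category: str    # e.g. coordinating/subordinating conjunction, adverb
    senses: tuple        # discourse senses the connective can signal
    ambiguous: bool      # can it also occur in a non-connective reading?


# Two toy entries: "während" is ambiguous because it can also be a preposition.
lexicon = {
    "aber": LexiconEntry("aber", ("aber", "Aber"),
                         "coordinating conjunction",
                         ("Comparison.Contrast",), False),
    "während": LexiconEntry("während", ("während", "Während"),
                            "subordinating conjunction",
                            ("Temporal.Synchronous", "Comparison.Contrast"), True),
}


def lookup(token):
    """Return the entry for a candidate connective, or None if unlisted."""
    return lexicon.get(token.lower())
```

A parser can consult such a lexicon to restrict connective identification to listed candidates and to narrow down the set of plausible senses.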
To facilitate training and evaluation of the methods proposed in this dissertation, an existing corpus (the Potsdam Commentary Corpus (Stede and Neumann, 2014)) has been extended and additional data has been annotated from scratch. The approach to end-to-end shallow discourse parsing for German adopts a pipeline architecture (Lin et al., 2014) and either presents the first results or improves over the state of the art for German on the individual sub-tasks of the discourse parsing task, which are, in processing order: connective identification, argument extraction, and sense classification. The end-to-end shallow discourse parser for German that has been developed for the purpose of this dissertation is open-source and available online.
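The pipeline architecture described above can be sketched as three stages, each consuming the output of the previous one. The function names, the toy connective set, and the stub logic below are assumptions for illustration only, not the dissertation's actual implementation.

```python
from dataclasses import dataclass


@dataclass
class Relation:
    """One discourse relation produced by the pipeline."""
    connective: str   # the identified connective token(s)
    arg1: str = ""    # first argument span
    arg2: str = ""    # second argument span
    sense: str = ""   # PDTB-style sense label


def identify_connectives(text):
    """Step 1 (stub): flag tokens acting as discourse connectives."""
    candidates = {"aber", "weil", "denn"}  # toy lexicon for illustration
    return [Relation(connective=tok) for tok in text.split() if tok in candidates]


def extract_arguments(text, relations):
    """Step 2 (stub): take the spans around the connective as its arguments."""
    for rel in relations:
        rel.arg1, _, rel.arg2 = text.partition(rel.connective)
    return relations


def classify_senses(relations):
    """Step 3 (stub): assign a sense label per connective."""
    for rel in relations:
        rel.sense = ("Contingency.Cause" if rel.connective in {"weil", "denn"}
                     else "Comparison.Contrast")
    return relations


def parse(text):
    """Run the three sub-tasks in processing order."""
    relations = identify_connectives(text)
    relations = extract_arguments(text, relations)
    return classify_senses(relations)
```

The point of the pipeline design is that each stage can be trained, evaluated, and improved in isolation, at the cost of errors propagating downstream.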
In the course of writing this dissertation, work has been carried out on several connective lexicons in different languages. Due to their central role and demonstrated usefulness for the methods proposed in this dissertation, strategies are discussed for creating or further developing such lexicons for a particular language, as are suggestions on how to further increase their usefulness for shallow discourse parsing.