Kolawole John Adebayo, Guido Boella, Luigi Di Caro
175 - 178
We propose a domain-specific Question Answering system. Rather than approaching this problem as a Textual Entailment task, we implemented a Memory Network-based Question Answering system that tests a machine's understanding of legal text and identifies whether an answer to a question is correct or incorrect, given some background knowledge. We also prepared a corpus of real USA MBE Bar exams for this task. We report our initial results and directions for future work.
This short paper aims to introduce a theoretical framework for digital forensics based on the “Philosophy of Information”. After a preliminary clarification of its key concepts, some general issues concerning “Information Quality” in digital and cloud forensics are outlined. Finally, I offer a few remarks on perspectives for future research.
Methods for the formal interpretation of normative sources in natural language, e.g. statute law and regulations, form a neglected part of the field of AI and Law. In our view, a frame-based approach is best suited for making formal specifications of normative systems that can be traced back to normative sources. The adequacy of the Flint language for this task is compared with that of two existing frame-based solutions.
John Garofalakis, Konstantinos Plessas, Athanasios Plessas
187 - 190
The automatic analysis of legislative texts using Natural Language Processing techniques can facilitate several tasks related to the legislation lifecycle, such as the consolidation of different versions of legal documents. We present our work on the automatic identification, extraction and application of textual amendments in Greek legislative texts, based on pattern matching with regular expressions, which is part of a semi-automatic system for the consolidation of Greek laws.
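The pattern-matching approach described above can be illustrated with a minimal sketch. The regular expression below is our own invented example, shown in English for readability (the actual system targets equivalent Greek amendment formulations, and its real patterns are not given in this abstract):

```python
import re

# Hypothetical illustration, not the authors' actual patterns: detect a
# common amendment formula of the shape
# "Paragraph <n> of Article <m> of Law <x>/<year> is replaced by ..."
AMENDMENT_PATTERN = re.compile(
    r"Paragraph\s+(?P<par>\d+)\s+of\s+Article\s+(?P<art>\d+)\s+"
    r"of\s+Law\s+(?P<law>\d+/\d{4})\s+is\s+replaced\s+by",
    re.IGNORECASE,
)

def extract_amendments(text):
    """Return (article, paragraph, law) triples for each detected amendment."""
    return [
        (m.group("art"), m.group("par"), m.group("law"))
        for m in AMENDMENT_PATTERN.finditer(text)
    ]

sample = ("Paragraph 2 of Article 5 of Law 4172/2013 is replaced by "
          "the following: ...")
print(extract_amendments(sample))  # [('5', '2', '4172/2013')]
```

Once the affected provision is located this way, the replacement text can be spliced into the consolidated version of the law, which is the semi-automatic step the abstract refers to.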
Cristine Griffo, João Paulo A. Almeida, Giancarlo Guizzardi
191 - 194
This paper extends UFO-L, a Legal Core Ontology (LCO) based on Robert Alexy's Theory of Constitutional Rights and grounded in the Unified Foundational Ontology (UFO). We present the first pattern of UFO-L's pattern catalogue and its application. The general idea is to use these ontological patterns to support the modeling of legal concepts in conceptual models of the legal domain. Moreover, our approach has the specific purpose of emphasizing a relational perspective rather than a normative perspective on the law.
We introduce Computer Assisted Legal Linguistics (CAL2) as a semi-automated method to “make sense” of legal discourse by systematically analyzing large collections of legal texts. Such digital corpora have been increasingly used in computational linguistics in recent years, as part of a quantitative research strategy designed to complement (rather than supplant) the more qualitative methods used hitherto. This use of statistical algorithms to analyze large bodies of text meets an increasing demand among lawyers for empirical data and aligns with the recent turn towards evidence-based jurisprudence. Together, these research strands open exciting avenues for research and for developing useful IT tools to support legal decision-making, as we exemplify using our reference corpus of about 1 billion tokens from the language of German jurisprudence and legal academia.
This paper concerns the recently introduced concept of Legislation Networks, with an application focus on the New Zealand legislation network. Legislation networks have some novel features which make them an excellent test case for new network science tools. We develop several such networks, compute relevant centrality measures, and apply community detection algorithms. We study the relationship between the legislation network measures and legal/political factors.
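The centrality computations mentioned above can be sketched on a toy example. The graph and act names below are our own invention, not the New Zealand data; we use normalised in-degree, one of the simplest centrality measures, as a stand-in for the measures the paper computes:

```python
from collections import defaultdict

# Toy legislation network as a directed citation graph (invented acts):
# an edge (A, B) means act A cites act B.
citations = [
    ("Act A", "Act C"), ("Act B", "Act C"),
    ("Act A", "Act D"), ("Act B", "Act D"), ("Act C", "Act D"),
]

def in_degree_centrality(edges):
    """Fraction of other nodes that cite each node (normalised in-degree)."""
    nodes = {n for e in edges for n in e}
    indeg = defaultdict(int)
    for _, target in edges:
        indeg[target] += 1
    n = len(nodes)
    return {node: indeg[node] / (n - 1) for node in nodes}

centrality = in_degree_centrality(citations)
print(max(centrality, key=centrality.get))  # Act D: cited by every other act
```

On real legislation networks one would use a graph library's richer measures (betweenness, PageRank) and community detection, as the abstract indicates, but the normalisation logic is the same.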
Kyoko Sugisaki, Martin Volk, Rodrigo Polanco, Wolfgang Alschner, Dmitriy Skougarevskiy
203 - 206
In this paper, we present an on-going research project whose aim is to develop a new database of international investment agreements that complements existing endeavors. In particular, this paper describes our efforts to build a standardized corpus of multi-lingual and multi-format agreement texts in order to enable researchers in the fields of international law and economics to systematically investigate investment treaties.
TropICAL is a Domain Specific Language (DSL) for the description of abstract legal policies. Taking inspiration from narrative tropes, our DSL enables the creation of component “policies” that may be reused between case descriptions. These components are compiled to social institutions, which are realised in Answer Set Programming (ASP) code. In this way, the actions of the defendant and plaintiff take the shape of a story which must conform to the rules in the ASP description. We propose the use of our DSL in a tool designed for lawyers to generate arguments for the argumentation process.
This paper proposes an extensible model distinguishing between reference types within legal documents. It differentiates between four types of references, namely fully-explicit, semi-explicit, implicit, and tacit references.
We conducted a case study on German laws to evaluate both the model and the proposed differentiation of reference types. We adapted text mining algorithms to detect and classify references according to their type. The evaluation shows that considering additional reference types heavily impacts the resulting network structure by inducing a plethora of new edges and relationships. This work extends existing approaches in network analysis and argues for the necessity of a detailed differentiation between reference types throughout legal documents.
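The four-way distinction above can be made concrete with a small rule-based sketch. The patterns below are our own simplified assumptions about German citation forms, not the authors' actual text mining algorithms:

```python
import re

# Illustrative, simplified rules for the four reference types (our
# assumption, not the paper's implementation):
# fully-explicit: section plus the cited law's name, e.g. "§ 5 Abs. 2 BGB"
FULLY_EXPLICIT = re.compile(r"§\s*\d+[a-z]?(\s+Abs\.\s*\d+)?\s+[A-Z][A-Za-z]+")
# semi-explicit: a section number without naming the law (same-law reference)
SEMI_EXPLICIT = re.compile(r"§\s*\d+")
# implicit: anaphoric wording pointing back at an earlier provision
IMPLICIT = re.compile(r"(vorstehend|genannt|aforementioned)", re.IGNORECASE)

def classify_reference(text):
    """Classify a candidate reference span into one of the four types."""
    if FULLY_EXPLICIT.search(text):
        return "fully-explicit"
    if SEMI_EXPLICIT.search(text):
        return "semi-explicit"
    if IMPLICIT.search(text):
        return "implicit"
    # tacit: no textual marker at all; resolving it needs legal knowledge
    return "tacit"

print(classify_reference("gem. § 5 Abs. 2 BGB"))  # fully-explicit
print(classify_reference("nach § 5"))             # semi-explicit
```

Because semi-explicit, implicit, and tacit references add edges that purely explicit citation extraction misses, even a simple classifier like this suggests why the network structure changes so markedly once the additional types are considered.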