
Ebook: Legal Knowledge and Information Systems

Currently, several artificial intelligence technologies are growing increasingly mature, including computational modeling of reasoning, natural language processing, information retrieval, information extraction, machine learning, electronic agents, and reasoning with uncertainty. Their integration in and adaptation to legal knowledge and information systems need to be studied. Parallel to this development, e-government applications are gradually gaining ground among local, national, European and international institutions. More than 25 years of research in the field of legal knowledge and information systems has resulted in many models for legal knowledge representation and reasoning. However, authors and reviewers rightly remarked that there are still some essential questions to be solved. First, there is a need for the integration and harmonization of the models. Secondly, there is the difficult problem of knowledge acquisition in a domain that is in constant evolution. If one wants to realize a fruitful marriage between artificial intelligence and e-government, the aid of technologies that automatically extract knowledge from natural language and from other forms of human communication and perception is needed.
This volume contains the Proceedings of the Eighteenth JURIX Conference on Legal Knowledge and Information Systems (JURIX 2005), held on December 8–10, 2005 at the Vrije Universiteit Brussel in Brussels, Belgium.
Thirteen full papers and seven extended abstracts are included in these Proceedings. Authors, invited speakers and workshop organizers come from Australia, Austria, Belgium, Canada, China, France, Germany, Italy, Portugal, Russia, Spain, Switzerland, the United Kingdom, and the United States of America. It seems that the JURIX conference is becoming the leading European conference on legal informatics and artificial intelligence and law.
A number of papers discuss traditional topics of artificial intelligence and law and propose models of law and legal reasoning. The research of Katie Atkinson and Trevor Bench-Capon is a valuable attempt to integrate different existing models of legal reasoning. Guido Governatori et al. discuss a framework for defeasible reasoning. Alexander Boer, Tom van Engers and Radboud Winkels argue that legal norms are in many contexts best understood as expressions of a ceteris paribus preference, and that this viewpoint adequately accounts for normative conflict and contrary-to-duty norms. The paper of Christopher Giblin et al. introduces a meta-model and method for modeling regulations and managing them in a systematic lifecycle in an enterprise. Jeroen Keppens and Burkhard Schafer discuss evidentiary reasoning and its formalization in a first-order assumption-based reasoning architecture. Moshe Looks, Ronald P. Loui and Barry Z. Cynamon present a mathematical modeling method of agents that pursue their interests and of a legislator who tries to influence the agents in ways that promote the legislator's goals.
Three papers are on the topic of legal knowledge acquisition using natural language processing. Pietro Mercatali et al. discuss the first steps that are needed for the automatic translation of textual representations of laws into formal models. The paper of Farida Aouladomar analyzes the form, presentation, meaning and modes of answering procedural questions (“how”) in the context of online e-Government applications. Paolo Quaresma and Irene Pimenta Rodrigues discuss a question answering system for legal information retrieval.
A number of short papers describe very interesting work in progress and often focus on practical applications such as reducing the legal burden, planning a new bill, classification of legislative documents, and reasoning tools for e-Democracy.
A final section of the Proceedings is devoted to the use of ontologies in describing the law. The paper of Ronny van Laarschot et al. attempts to bridge the gap between a layman's description and legal terminology. Peter Spyns and Giles Hogben apply and validate an automatic evaluation procedure on ontology mining results from the EU privacy directive. Roberto García and Jaime Delgado present an ontological approach for the management of data dictionaries of intellectual property rights. Finally, Laurens Mommers and Wim Voermans explain how cross-lingual information retrieval is useful in the legal field.
Invited lectures were given by Luc Wintgens and Helmut Horacek.
This conference focuses on two major themes and their integration: Artificial Intelligence and e-government.
The organizing committee of JURIX 2005 consists of Peter Spyns, Greet Janssens, Johan Verdoodt, Pieter De Leenheer and Yan Tang. This committee is very grateful to Koen Deschacht, Toon Lenaerts and Roxana Angheluta for their extra help. We especially thank the members of the program committee of this conference:
• Jon Bing, University of Oslo, Norway
• Kevin D. Ashley, University of Pittsburgh, USA
• Trevor J.M. Bench-Capon, University of Liverpool, UK
• Pascale Berteloot, Office des publications, European Commission
• Karl Branting, BAE systems, USA
• Jaime Delgado, Universitat Pompeu Fabra, Spain
• Aldo Gangemi, Institute of Cognitive Sciences and Technologies, Italy
• Thomas F. Gordon, Fraunhofer FOKUS, Berlin, Germany
• Eduard Hovy, University of Southern California, USA
• Ronald Leenes, Tilburg University, The Netherlands
• Richard Leary, University College London, UK
• Arno Lodder, Vrije Universiteit Amsterdam, The Netherlands
• Anja Oskamp, Vrije Universiteit Amsterdam, The Netherlands
• Henry Prakken, Utrecht University/University of Groningen, The Netherlands
• Giovanni Sartor, Università di Bologna, Italy
• Erich Schweighofer, University of Vienna, Austria
• Peter Spyns, Vrije Universiteit Brussel, Belgium
• Roland Traunmüller, University of Linz, Austria
• Tom van Engers, University of Amsterdam, The Netherlands
• Bart Verheij, Rijksuniversiteit Groningen, The Netherlands
• Radboud Winkels, University of Amsterdam, The Netherlands
• John Zeleznikow, Victoria University, Australia
Leuven, October 26, 2005
Marie-Francine Moens, Chair of the program committee
Peter Spyns, Chair of the organizing committee
In this paper we build on our previous work, which examined the different levels involved in reasoning about legal cases, to address some challenges made by Branting to the relevance of current theoretical work in AI and Law. In our model the process of legal reasoning is divided into three distinct but interconnected levels: a bottom layer concerning facts about the world, a top layer concerning legal consequences, and a layer connecting the two, with conclusions at lower levels acting as premises for higher levels. We use our model to explain Branting's observations and show the relation with other strands of work from the AI and Law community.
This paper proposes a framework based on Defeasible Logic (DL) to reason about normative modifications. We show how to express them in DL and how the logic deals with conflicts between temporalised normative modifications. Some comments will be given with regard to the phenomenon of retroactivity.
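As a minimal illustration of how a defeasible logic resolves conflicts between norms (this sketch is not the authors' formal system, and the rule and literal names are invented for the example), consider two rules that reach contradictory conclusions, with an explicit superiority relation deciding which prevails:

```python
# Two defeasible rules conflict; the superiority relation (here encoding a
# lex posterior intuition: the later, modifying rule prevails) resolves it.
rules = {
    "r1": {"if": {"published"}, "then": "in_force"},     # norm in force once published
    "r2": {"if": {"repealed"}, "then": "not_in_force"},  # a later modification repeals it
}
superiority = {("r2", "r1")}  # r2 is stronger than r1

def conflicts(a, b):
    return a == "not_" + b or b == "not_" + a

def defeasibly_provable(goal, facts):
    for name, rule in rules.items():
        if rule["then"] == goal and rule["if"] <= facts:
            # the rule fires; check it is not defeated by an applicable counter-rule
            attackers = [n for n, r in rules.items()
                         if conflicts(r["then"], goal) and r["if"] <= facts]
            if all((name, a) in superiority for a in attackers):
                return True
    return False

facts = {"published", "repealed"}
print(defeasibly_provable("in_force", facts))      # False: r1 is defeated by r2
print(defeasibly_provable("not_in_force", facts))  # True
```

The full logic in the paper adds strict rules, defeaters and temporal annotations on top of this basic prevailing-rule mechanism.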
In Boer et al. [9] we argued that evaluation of draft legislation, change from an old to a new regime, harmonization of legislation from multiple jurisdictions, and the decision to move a good, person, or service over the borders of a jurisdiction all involve a process of integration and comparison of preference structures. This paper argues that legal norms are in many contexts best understood as expressions of a ceteris paribus preference, and that this viewpoint adequately accounts for normative conflict and contrary-to-duty norms.
Recent years have seen a number of high-profile incidents of corporate accounting fraud, security violations, terrorist acts, and disruptions of major financial markets. This has led to a proliferation of new regulations that directly impact businesses. As a result, businesses, in particular publicly traded companies, face the daunting task of complying with an increasing number of intricate and constantly evolving regulations. Together with the growing complexity of today's enterprises this requires a holistic compliance management approach with the goal of continually increasing automation.
We introduce REALM (Regulations Expressed as Logical Models), a metamodel and method for modeling regulations and managing them in a systematic lifecycle in an enterprise. We formalize regulatory requirements as sets of compliance rules in a novel real-time temporal object logic over concept models in UML, together with metadata for traceability. REALM provides the basis for subsequent model transformations, deployment, and continuous monitoring and enforcement of compliance in real business processes and IT systems.
An important cause of miscarriages of justice is the failure of crime investigators and lawyers to consider important plausible explanations for the available evidence. Recent research has explored the development of decision support systems that (i) assist human crime investigators by proposing plausible crime scenarios explaining given evidence, and (ii) provide the means to analyse such scenarios. While such approaches can generate useful explanations, they are inevitably restricted by the limitations of formal abductive inference mechanisms. Building on work presented previously at this venue, this paper characterises an important class of scenarios, containing “alternative suspects” or “hidden objects”, which cannot be synthesised robustly using conventional abductive inference mechanisms. The work is then extended further by proposing a novel inference mechanism that enables the generation of such scenarios.
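The core idea of abductive scenario generation can be sketched as follows. This is a toy illustration, not the paper's assumption-based architecture, and the hypothesis and evidence names are invented: given causal rules mapping hypotheses to their expected effects, abduction enumerates the minimal hypothesis sets that cover all observed evidence, naturally surfacing "alternative suspect" scenarios.

```python
from itertools import combinations

effects = {  # hypothesis -> evidence it would produce (invented example)
    "suspect_fired_gun": {"gunshot_residue", "bullet_match"},
    "suspect_handled_gun_earlier": {"gunshot_residue"},
    "third_party_fired_gun": {"bullet_match"},
}

def explanations(observed):
    """Enumerate subset-minimal hypothesis sets covering all observations."""
    hyps = list(effects)
    found = []
    for r in range(1, len(hyps) + 1):
        for combo in combinations(hyps, r):
            covered = set().union(*(effects[h] for h in combo))
            # keep only explanations with no already-found subset
            if observed <= covered and not any(set(f) <= set(combo) for f in found):
                found.append(combo)
    return found

observed = {"gunshot_residue", "bullet_match"}
for expl in explanations(observed):
    print(expl)
```

The obvious single-hypothesis explanation (the suspect fired the gun) is found alongside the alternative-suspect scenario (the suspect merely handled the gun and a third party fired it), which is exactly the kind of competing scenario the paper argues must not be overlooked.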
Many legislative games of interest defy classical assumptions and techniques; they tend to be open-ended, with weakly defined objectives, and either noncompetitive or pseudo-competitive. We introduce a conceptual and mathematical framework for grappling with such systems. Simulation results are presented for basic specifications of the framework; they exhibit a number of qualitative phenomena that overlap with real-world dynamics across a broad spectrum of settings, including aspects of financial regulation and academic decision procedures, which, as we demonstrate, may be viewed through the lens of our framework.
In many application areas of intelligent systems, natural language communication is considered a major source for substantial progress, even for systems whose pure reasoning capabilities are exceptional. Unfortunately, it turns out to be extremely difficult to build adequate natural language processing facilities for the interaction with such systems.
In this talk, I will set out some fundamental reasons for the difficulties associated with automatically analysing such inference-rich discourse, by elaborating discrepancies between effective human argumentation and efficient machine-based argumentative reasoning. On the human side, these discrepancies manifest themselves in several degrees of explicitness and levels of granularity, concise but ambiguous rhetorical signals, and varying conceptual perspectives, which all need to be related to uniform and fully explicit representations on the side of a machine. I will discuss approaches that aim at bridging these discrepancies to some degree, for both analysis and generation of argumentative texts. Issues addressed include disambiguation methods for discourse markers, identification of expected arguments, and dedicated content planning and structuring techniques.
A combination of UML and text mining, or, more generally, Information Extraction, can provide valuable help to people involved in research on the linguistic structure of statutes and, as a side effect, can seed a new generation of applications in the legal domain. In particular, in this paper we present LexLooter, a prototype for amendment modelling and legislative text coordination based on UML and Natural Language Processing.
The time savings and time flexibility of eGovernment procedures make them more attractive to citizens than face-to-face services. Citizens may interact with government via email, search administrative information via eGovernment portals, or even via general-public search engines. Procedural question-answering systems are of much interest for querying legislation, court decisions, guidelines, procedures, etc. In this paper, we present a typology of how-questions asked on the web. Then, we explore facets of procedural texts: their typology and general prototypical structures. We finally present our strategy for answering procedural questions using the notion of questionability of a procedural text.
In this paper we present a question-answering system for Portuguese juridical documents.
The system has two modules: preliminary analysis of documents (information extraction) and query processing (information retrieval). The proposed approach is based on computational linguistic theories: syntactical analysis (constraint grammars); followed by semantic analysis using the discourse representation theory; and, finally, a semantic/pragmatic interpretation using ontologies and logical inference.
Knowledge representation and ontologies are handled through the use of ISCO, an extension of Prolog that makes it possible to integrate logic programming with external databases. In this way it is possible to address scalability problems such as the need to represent more than 10 million discourse entities.
The system was evaluated on the complete set of decisions from several Portuguese juridical institutions (Supreme Courts, High Court, Courts, and Attorney-General's Office), a total of 180,000 documents. The results obtained were quite encouraging and allowed us to identify some strong and weak characteristics of the system.
This short paper explains how relatively simple technology can help governments to reduce the legal burden.
In this paper we present a module that guides the legislative drafter in planning a new bill. This module aims at helping the legislative drafter build a new act from a conceptual point of view. Using this module, the classical drafting process is inverted: the structure of a bill is constructed on the basis of its semantics.
This short paper introduces the Interaction Predicate model, which attempts to model some aspects of systematic interpretation of codified law. It introduces an intermediate rule representation containing dynamic reasoning elements which make use of domain knowledge ontologies.
The paper describes the structure and properties of a large linguistic ontology, a new kind of information retrieval thesaurus: the Thesaurus on Sociopolitical Life for Conceptual Indexing. The thesaurus is used in various large-scale, real-world information retrieval applications in the legal domain. At present one of its main applications is knowledge-based text categorization. Categories are connected to the Thesaurus by flexible relationships. The categorization system can process text collections containing texts of different sizes and types.
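The basic mechanism of thesaurus-based categorization can be sketched schematically. This is not the Thesaurus itself, and all terms, concepts and categories below are invented for illustration: terms in a text are first mapped to thesaurus concepts, and categories are then triggered through the concepts they are linked to.

```python
thesaurus = {  # term -> thesaurus concept (invented entries)
    "election": "ELECTIONS", "ballot": "ELECTIONS",
    "court": "JUDICIARY", "judge": "JUDICIARY",
}
category_links = {  # category -> concepts that support it (invented entries)
    "Electoral law": {"ELECTIONS"},
    "Court system": {"JUDICIARY"},
}

def categorize(text):
    # conceptual indexing: map surface terms to concepts, then to categories
    concepts = {thesaurus[w] for w in text.lower().split() if w in thesaurus}
    return sorted(c for c, linked in category_links.items() if linked & concepts)

print(categorize("The judge reviewed the ballot procedures"))
```

Because the text-to-category connection runs through concepts rather than surface words, the same category links apply across documents of very different sizes and types, which is the flexibility the abstract refers to.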
It is shown how two tools developed in argumentation theory are useful for AI systems for electronic democracy [2,3] and more generally for formal dialogue systems representing deliberation. The novel part of this analysis is that it represents the speech of proposing as a small dialogue exchange in which one party practically reasons with another, based on premises that both are committed to, as collaborative participants in a deliberation dialogue. The structure of practical reasoning as a type of argument as analyzed in [6] is brought to bear, to bring out special features of the speech act of proposing that make it a nice fit with the formal framework for deliberation dialogue constructed by [4].
The aim of the BEST project is to support laymen in judging their legal position through intelligent disclosure of case law in the area of Dutch tort law. A problem we face in this context is the discrepancy between the terminology laymen use to describe their case and the terminology found in legal documents. We address this problem by helping users describe their case in common-sense terms taken from an ontology. We use logical reasoning to automatically determine, based on this description, the law articles relevant for establishing the liability of the parties in a case, thus bridging the gap between the layman's description and the terminology of the relevant articles found in legal documents. We introduce the BEST project and describe the ontology built for supporting case descriptions, focusing on its use for automatically determining relevant articles of law.
In this paper we validate a simple method for objectively assessing the results of extracting material (i.e., triples) from text corpora to build ontologies. The EU Privacy Directive was used as the corpus. Two domain experts manually validated the results. Several experimental settings were tried. As the evaluation scores are rather modest (sensitivity or recall: 0.5; specificity: 0.539; precision: 0.21), we regard them as a baseline reference for future experiments. Nevertheless, the human experts found the automated evaluation procedure sufficiently effective and time-saving for use in real-life ontology modelling situations.
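The scores quoted above follow the standard definitions over a confusion matrix of extracted versus expert-validated triples. The counts in this sketch are hypothetical, chosen only to show the arithmetic; they are not the paper's actual figures.

```python
def evaluate(tp, fp, tn, fn):
    """Standard binary evaluation measures over a confusion matrix."""
    sensitivity = tp / (tp + fn)  # recall: correct triples found / triples present
    specificity = tn / (tn + fp)  # correct rejections / non-triples
    precision = tp / (tp + fp)    # correct triples found / triples proposed
    return sensitivity, specificity, precision

# Hypothetical counts for illustration only
sens, spec, prec = evaluate(tp=10, fp=30, tn=50, fn=10)
print(sens, spec, prec)  # 0.5 0.625 0.25
```

Note the trade-off visible in the reported figures: recall of 0.5 with precision of only 0.21 means roughly four proposed triples are needed to find one correct triple, which is why the authors treat these scores as a baseline rather than a final result.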
RDDOnto provides an ontological approach to the Rights Data Dictionary (RDD) part of MPEG-21, one of the main Intellectual Property Rights (IPR) management standardisation efforts. To build the ontology, the terms defined in the RDD specification have been modelled in OWL, trying to capture as much of their semantics as possible. The ontology makes it possible to formalise a great part of the standard and to simplify its verification, consistency checking and implementation. During the construction of RDDOnto, some integrity problems were detected, which have even led to corrigenda to the standard. Additional checks of the standard's consistency were possible using Description Logic reasoning. Moreover, RDDOnto now helps determine how new terms can be added to the RDD, and how the RDD can be integrated with other parts of MPEG-21 that have also been mapped to OWL. Finally, the implementation facilities provided by the ontology have been used to develop searching, validation and checking of MPEG-21 licenses; existing ontology-enabled tools such as semantic query engines and logic reasoners facilitate this.
Access to legal documents has been hampered by a lack of attention to the specific user groups accessing such documents. In this article, we focus on one of these user groups, legal professionals, who can benefit from specific types of cross-lingual information retrieval for, e.g., comparative law research. We propose to use legal definitions as anchor points in legal documents. Through the body of EU legislation, these anchor points can support a network of concepts spanning different jurisdictions. A model is presented containing the different entity types and relation types for building such a network, which can be implemented in the WordNet architecture.