
Ebook: Legal Knowledge and Information Systems

The twenty-fourth edition of the JURIX conference was held in Vienna, Austria on December 14th–16th at the University of Vienna’s Centre for Legal Informatics. The submissions came from authors from 18 different countries, showing the international appeal of the topic and conference. These proceedings comprise 12 full papers, 7 short papers and 3 research abstracts. The papers span a wide range of topics on the advanced management of legal information and knowledge, and cover foundational theories as well as developed applications. They include work on: the analysis of court decisions; argumentation and proof standards; information and rule extraction from legal texts; permissions; compliance controls; precedents and legal stories; the structure of law; relevance and authority in law; online dispute resolution; measuring the evolution of the law; applications for legal education; data privacy; and conceptual models of legal reasoning for AI applications.
This volume contains the proceedings of the twenty-fourth edition of the International Conference on Legal Knowledge and Information Systems (JURIX 2011). The conference was held in Vienna, Austria on December 14th–16th at the University of Vienna's Centre for Legal Informatics. The annual JURIX conferences are held under the auspices of the Dutch Foundation for Legal Knowledge and Information Systems (www.jurix.nl).
The JURIX conference has been running annually for over 20 years and provides an international forum for both academics and practitioners in the field of legal informatics to meet and share research to advance the field of legal knowledge-based systems. Original papers on the advanced management of legal information and knowledge were solicited, covering all aspects of the topic spanning foundations, methods, tools, systems and applications. There were 36 submissions to the conference this year, with the authors of submitted papers coming from 18 different countries. The peer review process was conducted by a programme committee of 35 members who are experts in the field of legal informatics. Of the 36 submissions, 12 were accepted for publication as full papers of ten pages (33%). In addition, 7 short papers of five pages and 3 research abstracts of two pages are included in the proceedings.
The selected papers cover a wide range of topics on the advanced management of legal information and knowledge. Governatori et al. provide an extension of defeasible logic to represent different concepts of defeasible permission. Boer and van Engers present a framework for implementation of compliance in public administration. Ciaghi et al. tackle the problem of managing legal documents by providing software metrics that are analysed with a set of Italian laws. Winkels et al. consider how the network of citations between cases can be used as an indication of relevance and authority in the Dutch legal system. Täks et al. present an approach to find hidden structure in legislation, applying the approach to some Estonian legislation. Carneiro et al. develop conflict resolution models that are able to classify the disputant parties according to their personal conflict style. Prakken and Sartor provide a formal model of argumentation with burdens and standards of proof. Bex et al. show how factual stories can be assessed through the use of precedents. Grabmair et al. present results on extracting semantic information from US state public health legislative provisions. Wyner and Peters look at how rules can be identified and extracted from regulations. Bench-Capon considers the treatment of value conflicts in light of a series of US Supreme Court decisions. Finally, Ashley and Goldin consider how a computer-supported peer-review process among students can be applied in legal education.
Two invited lectures were also part of the conference. Professor Witold Abramowicz from Poznan University of Economics, Poland spoke about the Linked Open Data paradigm and how the vision of reusable public data may be fulfilled using this paradigm. Professor Maria Wimmer from the University of Koblenz-Landau in Germany spoke about research foundations for e-government, covering theoretical grounds as well as strategic and policy demands.
No conference can run smoothly without the support of a solid programme committee and many thanks are extended to the committee members for their hard work in producing valuable reviews and discussing the papers submitted. The programme committee members were:
• Kevin Ashley, University of Pittsburgh, USA
• Zsolt Balogh, University of Pecs, Hungary
• Trevor Bench-Capon, University of Liverpool, UK
• Floris Bex, University of Dundee, UK
• Alexander Boer, University of Amsterdam, The Netherlands
• Danièle Bourcier, CNRS CERSA, University of Paris 2, France
• Pompeu Casanovas, Universitat Autonoma de Barcelona, Spain
• Jack G. Conrad, Thomson Reuters, USA
• Enrico Francesconi, ITTIG-CNR, Florence, Italy
• Anne Gardner, Atherton, USA
• Thomas Gordon, Fraunhofer FOKUS, Berlin, Germany
• Guido Governatori, NICTA, Australia
• Davide Grossi, University of Liverpool, UK
• Carole Hafner, Northeastern University, USA
• Rinke Hoekstra, VU University Amsterdam, The Netherlands
• Friedrich Lachmayer, University of Innsbruck, Austria
• L. Thorne McCarty, Rutgers University, USA
• Laurens Mommers, Legal Intelligence, Rotterdam, The Netherlands
• Paulo Novais, Universidade do Minho, Portugal
• Monica Palmirani, University of Bologna, Italy
• Radim Polčák, Masaryk University, Czech Republic
• Henry Prakken, Universiteit Groningen and Universiteit Utrecht, The Netherlands
• Paulo Quaresma, Universidade de Evora and Universidade Nova de Lisboa, Portugal
• Antonino Rotolo, University of Bologna, Italy
• Giovanni Sartor, European University Institute, Florence – Cirsfid, University of Bologna, Italy
• Ken Satoh, National Institute of Informatics and Sokendai, Japan
• Burkhard Schafer, University of Edinburgh, Scotland
• Uri Schild, Bar Ilan University, Israel
• Erich Schweighofer, University of Vienna, Austria
• Tom van Engers, Leibniz Center for Law, The Netherlands
• Bart Verheij, Universiteit Groningen, The Netherlands
• Doug Walton, University of Windsor, Canada
• Radboud Winkels, Leibniz Center for Law, The Netherlands
• Adam Wyner, University of Liverpool, UK
• John Zeleznikow, Victoria University, Melbourne, Australia
Thanks also go to Erich Schweighofer for organising the conference this year, as well as to Radboud Winkels and Henry Prakken in their respective roles as past president and current president of the JURIX foundation, through which they have provided support for this edition of the conference.
Katie M. Atkinson
Programme Chair
Department of Computer Science,
University of Liverpool, UK
K.M.Atkinson@liverpool.ac.uk
Applying Bayesian data analysis to model a computer-supported peer-review process in a legal class writing exercise yielded pedagogically useful information about student understanding of problem-specific legal concepts and of more general domain-related legal writing criteria, and about the criteria's effectiveness. The approach suggests how AI and Law can impact legal education.
In recent years it has become quite usual to view legal decisions in terms of consideration of the values affected by deciding the case for or against a particular party. Often deciding for, say, the plaintiff will promote one value at the expense of another. Precedents are then supposed to guide the way in which this conflict is resolved. In this paper we will consider a series of cases exploring the so-called automobile exception to the requirement of the Fourth Amendment protecting against unreasonable search of persons, houses, papers, and effects. These cases highlight a conflict between the value of law enforcement and the value of privacy as protected by the Fourth Amendment, and will be used to illuminate questions about the treatment of value conflicts arising from previous work in AI and Law.
When reasoning about the facts of a case, we typically use stories to link the known events into coherent wholes. One way to establish coherence is to appeal to past examples, real or fictitious. These examples can be chosen and critiqued using the case-based reasoning (CBR) techniques from the AI and Law literature. In this paper, we apply these techniques to factual stories, assessing a story about the facts using precedents. We thus show how factual and legal reasoning can be combined in a CBR model.
This paper presents a monitoring and diagnosis component of a knowledge acquisition, design, and simulation framework for implementation of compliance in public administration. A major purpose of the framework is to give a methodological justification for the exploration of compliance control policies. The knowledge acquisition approach depends on the story-like character of relevant case law and expert knowledge, and the compliance controls design space is derived from these stories.
The use of technology to support conflict resolution is nowadays well established. Moreover, technological solutions are not only used to solve traditional conflicts but also to solve conflicts that emerge in virtual environments. Therefore, a new field of research has been developing in which the use of Artificial Intelligence techniques can significantly improve the conflict resolution process. In this paper we focus on developing conflict resolution models that are able to classify the disputant parties according to their personal conflict style. Moreover, we present a dynamic conflict resolution model that is able to use that information to adapt strategies in real time according to significant changes in the context of interaction. To do so, we follow a novel approach in which an intelligent environment supports the lifecycle of the conflict resolution model with the provision of important context knowledge.
Law-makers, designers of legal information systems and citizens are often challenged by the complexity of bodies of law and the growing number of references needed to interpret a law. Quantifying this complexity is not an easy task. In this paper we present some analyses we conducted on the Italian body of laws, made available through the “Normattiva” website. Some of the metrics we applied are similar to those often used to measure the quality of software systems.
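By way of illustration, the kind of software-style metric applied to a body of law might look like the following minimal sketch, which treats articles as modules and cross-references as coupling. This is not the authors' actual tooling: the regular-expression patterns and sample text are purely hypothetical.

```python
import re

def reference_metrics(law_text: str) -> dict:
    """Compute simple complexity metrics for a legal text, by analogy with
    software metrics: size (number of articles) and coupling
    (number of cross-references between provisions)."""
    # Count distinct article headings, e.g. "Art. 12" (pattern is illustrative).
    articles = set(re.findall(r"\bArt\.\s*\d+", law_text))
    # Count cross-references such as "see Art. 5" or "pursuant to Art. 7".
    references = re.findall(r"(?:see|pursuant to)\s+Art\.\s*\d+", law_text)
    return {
        "articles": len(articles),
        "references": len(references),
        # Analogue of fan-out / coupling per module in software metrics.
        "references_per_article": len(references) / (len(articles) or 1),
    }

sample = ("Art. 1 General provisions. Art. 2 Definitions apply "
          "pursuant to Art. 1. Art. 3 Sanctions, see Art. 2.")
print(reference_metrics(sample))
```

A real analysis would of course have to handle the citation conventions of the corpus at hand (here, Italian drafting style as found on "Normattiva"), but the metric itself stays this simple.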
In this paper we propose an extension of Defeasible Logic to represent different concepts of defeasible permission. Special attention is paid in particular to permissive norms that work as exceptions to opposite obligations.
This paper presents preliminary results in extracting semantic information from US state public health legislative provisions using natural language processing techniques and machine learning classifiers. Challenges in the density and distribution of the data as well as the structure of the prediction task are described. Decision tree models trained on a unigram representation with TFIDF measures in most cases outperform the baselines by varying margins, leaving room for further improvement.
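As a rough illustration of this kind of pipeline, the sketch below builds unigram TF-IDF vectors by hand and trains a depth-1 decision tree (a stump) on toy provisions. The documents, labels, and vocabulary are invented for illustration and do not come from the paper's data.

```python
import math
from collections import Counter

def tfidf(docs):
    """Unigram TF-IDF vectors (one dict per tokenised document)."""
    n = len(docs)
    df = Counter(t for d in docs for t in set(d))
    return [{t: (c / len(d)) * math.log(n / df[t])
             for t, c in Counter(d).items()} for d in docs]

def train_stump(vectors, labels):
    """Depth-1 decision tree: pick the term whose presence best splits labels."""
    best = None
    for term in {t for v in vectors for t in v}:
        present = [y for v, y in zip(vectors, labels) if term in v]
        absent = [y for v, y in zip(vectors, labels) if term not in v]
        if not present or not absent:
            continue
        # Score = training accuracy when predicting the majority label per side.
        p_maj = Counter(present).most_common(1)[0]
        a_maj = Counter(absent).most_common(1)[0]
        acc = (p_maj[1] + a_maj[1]) / len(labels)
        if best is None or acc > best[0]:
            best = (acc, term, p_maj[0], a_maj[0])
    _, term, if_present, if_absent = best
    return lambda tokens: if_present if term in tokens else if_absent

docs = [["shall", "report", "cases"], ["may", "inspect", "premises"],
        ["shall", "notify", "department"], ["may", "grant", "permits"]]
labels = ["duty", "power", "duty", "power"]
predict = train_stump(tfidf(docs), labels)
print(predict(["the", "agency", "shall", "act"]))  # → duty
```

The paper's actual models are full decision trees trained on far richer data; the stump merely shows the shape of the prediction task.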
A formal model is proposed of argumentation with burdens and standards of proof, overcoming shortcomings of earlier work. The model is based on a distinction between default and inverted burdens of proof. This distinction is formalised by adapting the definition of defeat of the ASPIC+ framework for structured argumentation. Since ASPIC+ generates abstract argumentation frameworks, the model is thus given a Dungean semantics. It is shown to adequately capture shifting proof burdens as well as Carneades' definitions of proof standards.
Normative systems evolve over time and, for a variety of reasons, form huge and complex, yet not very systematic or consistent, collections of legal norms. So far, qualitative and thus subjective methods have been used to impose structure on legislation. A new, experimental and quantitative approach is presented that uncovers a rough structure of Estonian legislation with the help of graph theory and visualisation. As a test case, a deep structural analysis of a part of Estonian legislation is introduced.
In this paper we present the results of two studies to see whether the analysis of the network of citations between cases can be used as an indication of relevance and authority in the Dutch legal system. Fowler et al. have shown such results for the US common law system, but given the different status of case law in the continental tradition it is not clear whether this will hold in the Netherlands. Moreover, we introduce a way to validate the results using selections made by human experts for legal education. We discuss the results and conclude that network analysis of cases is a useful tool for legal research.
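The authority-style analysis that Fowler et al. applied to US case law can be sketched with a hand-rolled HITS-like iteration over a citation graph. The citation data and case identifiers below are invented for illustration and do not reflect the papers' corpora.

```python
def authority_scores(citations, iters=20):
    """HITS-style authority scores on a case-citation graph.
    `citations` maps each case to the (earlier) cases it cites."""
    nodes = set(citations) | {c for cs in citations.values() for c in cs}
    auth = {n: 1.0 for n in nodes}
    hub = {n: 1.0 for n in nodes}
    for _ in range(iters):
        # A case is authoritative if it is cited by good hubs ...
        auth = {n: sum(hub[m] for m, cs in citations.items() if n in cs)
                for n in nodes}
        # ... and a good hub if it cites authoritative cases.
        hub = {n: sum(auth[c] for c in citations.get(n, ())) for n in nodes}
        # Normalise so the iteration converges to relative scores.
        za, zh = sum(auth.values()) or 1.0, sum(hub.values()) or 1.0
        auth = {n: v / za for n, v in auth.items()}
        hub = {n: v / zh for n, v in hub.items()}
    return auth

# Hypothetical citation links between anonymised decisions.
cites = {"HR-2010-1": ["HR-2001-7", "HR-1999-3"],
         "HR-2009-4": ["HR-2001-7", "HR-1999-3"],
         "HR-2005-2": ["HR-1999-3"]}
scores = authority_scores(cites)
print(max(scores, key=scores.get))  # → HR-1999-3, the most-cited case
```

Validating such scores against expert selections, as the abstract describes, then amounts to checking whether highly ranked cases coincide with those experts pick out for legal education.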
Rules in regulations such as those found in the US Code of Federal Regulations can be expressed using conditional and deontic rules. Identifying and extracting such rules from the language of the source material would be useful for automating rulebook management and translating them into an executable logic. The paper presents a linguistically-oriented, rule-based approach, which is in contrast to a machine learning approach. It outlines use cases, discusses the source materials, reviews the methodology, then provides initial results and future steps.
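A linguistically-oriented extraction step of this general kind might be sketched as follows. The regular-expression patterns and sample text are illustrative assumptions, not the authors' actual grammar, which would be far richer.

```python
import re

# Hypothetical cues: deontic markers flag rule sentences, and leading
# "if/where/when" clauses are split off as conditions.
DEONTIC = re.compile(r"\b(shall|must|may|is prohibited from)\b", re.I)
CONDITION = re.compile(r"^(?:if|where|when)\b(.*?),\s*(.*)$", re.I)

def extract_rules(text):
    """Return {condition, consequent} dicts for deontic sentences."""
    rules = []
    for sent in re.split(r"(?<=\.)\s+", text.strip()):
        if not DEONTIC.search(sent):
            continue  # not a deontic rule sentence
        m = CONDITION.match(sent)
        if m:
            rules.append({"condition": m.group(1).strip(),
                          "consequent": m.group(2).rstrip(".")})
        else:
            rules.append({"condition": None,
                          "consequent": sent.rstrip(".")})
    return rules

text = ("Where a permit has expired, the holder must reapply. "
        "The agency shall publish its decisions. Fees are listed in Annex I.")
print(extract_rules(text))
```

Each extracted condition/consequent pair is then a natural candidate for translation into an executable rule language.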
This work presents a parsing approach able to extract relevant knowledge from judgements. It is based on finite state automata and Hidden Markov Models, as a compromise between NLP and machine learning approaches to parsing case texts. The approach is tested on a dataset of Italian court decisions to support their automatic structuring and semantic indexing.
The paper presents a simple environment for analysing a sufficient amount of data concerning trading transactions in order to reveal crime schemes. We assumed that the output of the analysis should be presented in a user-friendly graphical form; the Palantir Technologies tool was used as the system's back-end. The front-end consists of knowledge-base expert rules used to reveal a carousel fraud scheme and the roles of companies implicated in this crime. We use the Jess rule engine for reasoning.
The right of privacy of personal data is fundamental to democratic societies and self-determined individuals. The legal fundamentals regarding personal data privacy, however, are not reflected in current technology platforms or data processing systems like databases in a systematic way that is entirely transparent to all the users of such systems. This work introduces PRDL (Privacy Rule Definition Language) as a basis for user-oriented notation of data processing rules that can be automatically processed by IT systems to certify data protection compliance in the long run.
Despite the fact that contracts are, by definition, an agreement between two or more parties, most formal studies limit themselves to contracts regulating only a single party, or the parties independently of each other, without looking into how permissions, obligations or prohibitions of one party affect the other. This article analyses what different types of permissions mean in the context of contracts. To give a formal semantics, we use an automata-based formalism that allows us to model one party agreeing to, delaying, or plainly refusing to perform certain actions that the other party is attempting. This approach also yields a natural notion of contract strictness analysis for each party.
This short paper establishes a baseline for author attribution in the domain of US Supreme Court decisions. It also examines the contribution of four different kinds of features and the size/accuracy tradeoffs that can be made.
The paper addresses the extraction, formalisation, and presentation of public policy arguments. Arguments are extracted from documents that comment on public policy proposals. Formalising the information from the arguments enables the construction of models and systematic analysis of the arguments. In addition, the arguments are represented in a form suitable for presentation in an online consultation tool. Thus, the forms in the consultation correlate with the formalisation and can be evaluated accordingly. The stages of the process are outlined with reference to a working example.
Statute law legislators are usually not in a position to foresee each and every situation or event that may actually occur in real life. That is why lawyers in the course of their everyday practice very often struggle with interpreting cases that are not expressly regulated in the law. Legal theory and practice have given rise to a wide array of methods for dealing with this type of problem. The aim of this study is to develop and implement an instrumental inference model, i.e. one of the non-standard inference mechanisms used to interpret the rules of statutory law.
The paper presents the general assumptions and the method for system architecture development of the ongoing project Semantic Monitoring of Cyberspace (SMC). The objective of SMC is to design and deploy a tool to support the tracking of trade in illegal chemical substances through continuous semantic monitoring of Web 2.0 sources such as forums, thematic news portals, etc.
We attempt to present and compare two methods for representing cases in constraint satisfaction networks. The tension between top-down and bottom-up strategies in the construction of models of reasoning has a long tradition in the literature on AI. The same distinction applies to representations of legal argumentation. Our proposal compares the application of top-down and bottom-up strategies to the representation of a chosen judgment of the Court of Justice of the European Union in constraint satisfaction networks. The first (top-down) approach is inspired by systems of defeasible logic and argumentation frameworks. The second approach makes use of a bottom-up strategy and is more firmly grounded in the text of the court's decision. The two methods lead to very similar results and yield interesting conclusions concerning the structure of the CJEU's reasoning.
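A toy example of representing a fragment of a judgment as a constraint satisfaction network, solved by brute-force enumeration, may make the idea concrete. The variables and constraints are invented for illustration and do not come from the judgment the paper analyses.

```python
from itertools import product

# Boolean variables stand for accepted premises/conclusions; constraints
# encode their compatibility in a (hypothetical) free-movement case.
variables = ["measure_restricts", "justified_by_public_policy",
             "measure_proportionate", "measure_lawful"]

def consistent(assignment):
    v = dict(zip(variables, assignment))
    return all([
        # A restrictive measure is lawful only if justified and proportionate
        # (bool <= bool expresses logical implication in Python).
        v["measure_lawful"] <= (not v["measure_restricts"]
                                or (v["justified_by_public_policy"]
                                    and v["measure_proportionate"])),
        # The court found the measure restrictive but not proportionate.
        v["measure_restricts"],
        not v["measure_proportionate"],
    ])

solutions = [dict(zip(variables, a))
             for a in product([False, True], repeat=len(variables))
             if consistent(a)]
# Every consistent assignment rejects the measure's lawfulness.
print(all(not s["measure_lawful"] for s in solutions))  # → True
```

In both the top-down and the bottom-up representation, the interest lies in which conclusions survive in all consistent assignments of the network.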