Ebook: Legal Knowledge and Information Systems
The 25th edition of the JURIX conference was held in the Netherlands from the 17th till the 19th of December and was hosted by the University of Amsterdam. This year submissions came from 23 countries covering Europe, the Americas, Asia and Australia. These proceedings contain sixteen full and five short papers that were selected for presentation. As usual they cover a wide range of topics. The majority of contributions deal with formal or computational models of legal argumentation and reasoning: questions of coherence, evidential reasoning, visualisation of argumentation and formal representations of legal narratives are amongst the issues addressed. Another group of papers is centred on representing the semantics of sources of law, to facilitate legislative drafting, information retrieval or “data protection by design”. A third group of papers goes beyond the more technical aspects of legal information systems and asks fundamental questions about the nature of legal expert systems or the concept of rights.
For the past 25 years the JURIX conferences, held under the auspices of the Dutch Foundation for Legal Knowledge Systems (www.jurix.nl), have brought together researchers from computer science and law to promote research in, and development of, computer tools in the legal domain. What began as a local collaboration between Dutch and Flemish scientists transformed over the years into one of the internationally leading events that bridge the gap between the “two cultures” and unite computer scientists and lawyers, but also social scientists, philosophers and economists, in a common endeavour. This volume contains the proceedings of the 25th International Conference on Legal Knowledge and Information Systems (JURIX 2012), which was held December 17th–19th at the University of Amsterdam in the Netherlands, a “homecoming” for the anniversary.
This year we had 35 submissions from 23 countries, representing research groups in Europe, the Americas, Australia and Asia. Each paper was reviewed by three experts from the Program Committee, which consisted of 32 people from 14 countries. Of the 35 submissions, 16 were accepted as full papers of ten pages for publication in these proceedings and presentation at the conference. An additional five were accepted as short papers (four pages) with shorter presentations.
The selected papers demonstrate the good health of the AI and Law research field in general and the JURIX community in particular. Not only is the number of countries represented increasing every year; the coverage of subjects also remains broad, from formal models of evidential reasoning to legal information retrieval, from tools for policy deliberation to negotiation support systems. The papers regularly tackle important social challenges, such as affordable access to justice, more efficient patent law, privacy-protecting software design or assistance in law reform projects. Even more encouraging are the demographics. On the one hand, the JURIX community is stable, with many participants having been regulars at our conferences for many years. Radboud Winkels is one of this year's presenters who also attended the inaugural JURIX conference 25 years ago, and he holds the record for participation (24 conferences). The paper by Trevor Bench-Capon, another regular who has been with us since the third JURIX conference, was ranked highest after the review process and will mark his 22nd appearance at JURIX. This continuous involvement of internationally leading researchers in the field not only contributes to the high international profile of the conference, it also ensures that ideas and projects have the chance to develop incrementally to maturity.
At the same time, a new generation of researchers is coming through, often former students of participants at earlier JURIX events. This not only ensures the sustainability of the AI and Law research paradigm, it also gives credence to the role that the JURIX conferences and their associated workshops have played over the years in capacity building and the training of the next generation of interdisciplinary researchers.
Finally, this year we welcomed an unusually high number of new faces: researchers who are either already well established in their respective fields and have chosen JURIX as a new outlet for their research, or early-career researchers who have identified an affinity between their research questions and those of the JURIX community. This ensures the influx of new ideas and new, and hopefully critical, perspectives that prevent “self-certification”.
The three invited lectures also reflect our achievements and aspirations. Noel Sharkey's talk on robots opens up our field to new research questions and applications, and marks the expansion of AI and Law beyond the PC and the internet, into a world where formal renditions of Asimov's laws of robotics may soon become an urgent requirement for technology design. The talk by Ivan Futo from the Hungarian National Tax and Customs Administration reminds us of the close connection to our user community and the relevance of our work for the day-to-day administration of the legal system. It also highlights the increasing importance of participants from Eastern Europe in the JURIX conferences, with the number of papers from these jurisdictions steadily increasing. Finally, Anton Nijholt's talk links our work to one of the most technically demanding fields of AI and Law, natural language processing, though, as a dinner speech, from a perspective that makes it in every sense of the word more easily digestible.
Also encouraging was the large number of workshops and tutorials that have become a mainstay of the JURIX conferences. They are the seedbed for new research fields, and an opportunity to gain hands-on experience with the computational and technical aspects of our research, bridging practice and theory.
Acknowledgments
A conference like JURIX is not possible without the effort and support of the members of the international Program Committee:
• Kevin D. Ashley, University of Pittsburgh, USA
• Zsolt Balogh, University of Pécs, Hungary
• Trevor Bench-Capon, University of Liverpool, UK
• Floris Bex, University of Dundee, UK
• Alexander Boer, University of Amsterdam, The Netherlands
• Joost Breuker, University of Amsterdam, The Netherlands
• Pompeu Casanovas, Universitat Autònoma de Barcelona, Spain
• Jack G. Conrad, Thomson Reuters, Switzerland
• Tom van Engers, Leibniz Center for Law, The Netherlands
• Enrico Francesconi, ITTIG-CNR, Florence, Italy
• Anne Gardner, Atherton, USA
• Thomas F. Gordon, Fraunhofer FOKUS, Berlin, Germany
• Guido Governatori, NICTA, Australia
• Carole D. Hafner, Northeastern University, USA
• Rinke Hoekstra, VU University Amsterdam/University of Amsterdam, The Netherlands
• Arno R. Lodder, VU University Amsterdam, The Netherlands
• Thorne McCarty, Rutgers University, USA
• Marie-Francine Moens, KU Leuven, Belgium
• Laurens Mommers, Universiteit Leiden, The Netherlands
• Paulo Novais, Universidade do Minho, Portugal
• Monica Palmirani, University of Bologna, Italy
• Radim Polcak, Masaryk University, Czech Republic
• Henry Prakken, Universiteit Groningen & Universiteit Utrecht, The Netherlands
• Paulo Quaresma, Universidade de Évora & Universidade Nova de Lisboa, Portugal
• Antonio Rotolo, University of Bologna, Italy
• Giovanni Sartor, European University Institute, Florence – Cirsfid, University of Bologna, Italy
• Ken Satoh, National Institute of Informatics and Sokendai, Japan
• Erich Schweighofer, University of Vienna, Austria
• Uri Schild, Bar Ilan University, Israel
• Bart Verheij, Universiteit Groningen, The Netherlands
• Douglas N. Walton, University of Windsor, Canada
• Radboud Winkels, Leibniz Center for Law, The Netherlands
• Adam Wyner, University of Liverpool, UK
• John Zeleznikow, Victoria University, Melbourne, Australia
We thank all authors for submitting their work, and those of accepted papers for responding to the reviewers' comments and abiding by our production schedule. As always, the hope is that publication of the proceedings is not the end of the discussion, but only the beginning, and that they stimulate debate, criticism and critical reflection at the conference and beyond. Finally, a special thanks to the local organisers, Tom van Engers and Radboud Winkels, for taking on the responsibility of organising JURIX 2012.
Burkhard Schäfer
Program Chair
SCRIPT Centre
School of Law
University of Edinburgh
UK
In this paper we present a refined coherence-as-constraint-satisfaction framework as a potent tool for the representation of judicial reasoning. We demonstrate the usefulness of the framework on a model of the famous Popov v Hayashi case. Although we do not claim that the presented framework can yet be considered fully developed, we believe that the account constitutes a major improvement over those that have been published previously. The resulting representation is strongly anchored in the raw text of the decision itself and, by means of formal logic, can be transformed into a graphical representation which is a surprisingly intuitive and transparent account of the application of rules in legal cases.
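To give a flavour of the coherence-as-constraint-satisfaction idea underlying the paper, the sketch below implements the generic Thagard-style formulation: partition elements into accepted and rejected so as to maximise the total weight of satisfied positive and negative constraints. The elements and weights are invented for illustration; the paper's refined framework goes well beyond this brute-force version.

```python
from itertools import product

def coherence_partition(elements, positive, negative):
    """Brute-force coherence maximisation over accept/reject partitions.

    positive and negative are lists of (a, b, weight) constraints.
    A positive constraint is satisfied when a and b land on the same
    side of the partition; a negative constraint when they land on
    different sides. Returns the accepted set of maximal total weight.
    """
    best, best_w = set(), float("-inf")
    for bits in product([True, False], repeat=len(elements)):
        accepted = {e for e, bit in zip(elements, bits) if bit}
        w = sum(wt for a, b, wt in positive
                if (a in accepted) == (b in accepted))
        w += sum(wt for a, b, wt in negative
                 if (a in accepted) != (b in accepted))
        if w > best_w:
            best, best_w = accepted, w
    return best, best_w

# Invented toy elements, loosely echoing the competing claims in Popov v Hayashi:
elements = ["popov_caught_ball", "popov_owns_ball", "hayashi_owns_ball"]
positive = [("popov_caught_ball", "popov_owns_ball", 1.0)]
negative = [("popov_owns_ball", "hayashi_owns_ball", 2.0)]
print(coherence_partition(elements, positive, negative))
```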
Since the 1980s, AI and Law has attempted to capture legal expertise in computer programs. But what is this expertise? This paper reviews a number of approaches, from the 1980s to the present day, which represent different answers to this question. It argues that our notion, and understanding, of expertise has developed and improved over the decades. As yet, however, only a few rather specific aspects have been addressed in detail, in particular the move from intermediate predicates to legal consequences, and the distinguishing of precedents. Much more, including the moves from evidence to facts and from facts to intermediate predicates, awaits exploration.
In this paper we present a novel method for the automatic classification of multi-label text documents. Automatic classification of text is usually tackled by supervised machine learning techniques like Support Vector Machines (SVMs), which typically achieve state-of-the-art accuracy in several domains. Nevertheless, SVMs cannot handle multi-labeled documents, so a specific preprocessing of the data is needed. In this paper we present a novel technique for the transformation of multi-label data into mono-label data that is able to maintain all the information, allowing the use of standard approaches like SVMs. We then evaluate our system using JRC-Acquis-it, a large dataset of Italian legislation that has been manually annotated according to EuroVoc, demonstrating the potential of our approach compared to the current state of the art.
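The abstract describes the transformation only at a high level. One standard way to map multi-label data onto a mono-label problem, sketched below, is the label-powerset encoding, in which each distinct set of labels becomes a single class for an off-the-shelf SVM; the toy documents and labels here are invented, not taken from JRC-Acquis-it.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Toy corpus; real input would be JRC-Acquis-it documents with EuroVoc labels.
docs = ["testo su agricoltura e ambiente",
        "testo su fiscalita",
        "testo su ambiente"]
labels = [("agriculture", "environment"), ("taxation",), ("environment",)]

# Label-powerset transformation: each distinct label *set* becomes one
# mono-label class, so a standard single-label SVM can be trained on it.
mono = ["|".join(sorted(ls)) for ls in labels]

clf = make_pipeline(TfidfVectorizer(), LinearSVC())
clf.fit(docs, mono)

pred = clf.predict(["nuovo testo su ambiente"])
print(pred[0].split("|"))   # recover the predicted label set
```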
The paper reports on a collaborative project between computer scientists, lawyers, police officers, medical professionals and social workers to develop a communication infrastructure that allows information sharing while observing data protection law “by design”, through a formal representation of legal rules in a firewall-type system.
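As a purely hypothetical illustration of what a firewall-type check over formally represented sharing rules can look like (the project's actual rule base, roles and vocabulary are not given in the abstract), consider:

```python
# Invented rule base: each tuple states one permitted sharing act,
# (sender_role, receiver_role, data_category, purpose).
RULES = {
    ("police", "social_worker", "risk_assessment", "child_protection"),
}

def may_share(sender, receiver, category, purpose):
    """Allow a transfer only if some rule explicitly permits it,
    mirroring a default-deny firewall policy."""
    return (sender, receiver, category, purpose) in RULES

print(may_share("police", "social_worker",
                "risk_assessment", "child_protection"))  # True
print(may_share("police", "journalist",
                "risk_assessment", "reporting"))         # False
```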
The e-CODEX project is meant to implement building blocks for a system able to support transnational procedures between European Member States, so as to increase cross-border relations in a pan-European e-Justice area. This paper describes the e-Delivery platform architecture, as well as the semantic solution conceived to transmit business documents within a scenario characterized by different languages and different legal systems.
We propose a framework for reconstructing the arguments supporting restrictive interpretations of legal provisions. The idea is that the interpretation of legal concepts may require changing the counts-as rules defining them. Some connections with revision-theory techniques are considered.
This paper introduces an argumentation support tool based on Toulmin diagrams. It consists of a factor-tagging editor, a semantics calculation module based on argumentation frameworks, and an automated factor extractor. When an argumentation record is input, the system generates a tagged argumentation record and a set of arguments that belong to credulous or skeptical extensions. The automated factor extractor supports the extraction of factors from an argumentation record using a machine learning method.
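The semantics calculation described here is grounded in Dung-style argumentation frameworks. As a minimal sketch, the following computes the grounded extension as the least fixed point of the characteristic function; computing credulous and skeptical acceptance under other semantics, as the tool does, is more involved.

```python
def grounded_extension(args, attacks):
    """Grounded extension of a Dung argumentation framework.

    args: set of argument ids; attacks: set of (attacker, target) pairs.
    Iterates the characteristic function from the empty set: an argument
    is acceptable w.r.t. S if every one of its attackers is attacked by S.
    """
    def defended(a, S):
        return all(any((d, b) in attacks for d in S)
                   for (b, t) in attacks if t == a)

    S = set()
    while True:
        nxt = {a for a in args if defended(a, S)}
        if nxt == S:
            return S
        S = nxt

# Toy framework: C attacks B, B attacks A, so C reinstates A.
print(grounded_extension({"A", "B", "C"}, {("B", "A"), ("C", "B")}))
# -> {'A', 'C'}
```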
This paper presents a gradual argumentation model of evidential reasoning, based on a revised version of John L. Pollock's critical-link semantics. The model is cast as a new kind of argument graph, consisting of two kinds of links, and is incorporated in the ASPIC+ framework with variable degrees of justification. It aims to provide a revised computation of justification in argument graphs and a new formalization of standards of proof. The model thus offers a gradual semantics beyond Dungean semantics: a system in which arguments in evidential reasoning are regarded as justified to variable degrees.
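For intuition about variable degrees of justification, the naive weakest-link rule below propagates numeric strengths through an argument graph. Pollock's critical-link semantics, which the paper revises, was designed precisely to correct shortcomings of this naive rule, so the sketch only illustrates the general idea of graded justification; all strengths are invented.

```python
def degree(node, strength, supports):
    """Naive weakest-link degree of justification: an argument is as
    strong as the minimum of its own link strength and the degrees of
    the premises that support it."""
    premises = supports.get(node, [])
    if not premises:
        return strength[node]
    return min(strength[node],
               min(degree(p, strength, supports) for p in premises))

strength = {"w1": 0.9, "w2": 0.6, "c": 0.8}  # invented link strengths
supports = {"c": ["w1", "w2"]}               # conclusion c rests on w1 and w2
print(degree("c", strength, supports))       # -> 0.6
```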
Argumentation is central to law. Written and oral argument structures, however, are often difficult to analyze and employ in instruction. Diagrammatic models of argument offer a potential solution to these problems. In this paper we report on the results of an empirical study into the diagnostic utility of argument diagrams in a legal writing context. The focus is on comparing expert- and student-produced argument diagrams and on the extent to which the latter can be used to predict students' performance on subsequent writing tasks. We present the results and draw some tentative conclusions.
ThinkData is an interactive online service that aims to raise awareness about data protection and transparency within organizations in Switzerland. The service was created by an interdisciplinary group using design thinking techniques. ThinkData was designed with the very pragmatic objective of engaging its users in becoming familiar with data protection and transparency concepts through storytelling. Scenarios can be browsed by activity, by topic and by data type. This paper reports on the design and deployment of the service, making the case for storytelling as a valuable approach to awareness raising and training on complex legal issues.
Could the legal importance of a judicial decision be established by analyzing its position in a case law citation network? A network of nearly half a million citations was used to test a great variety of social network algorithms against three external benchmarks. All of these algorithms were outperformed by a tailored variant of degree centrality. Furthermore, regression analysis was used to explore the possible relevance of eleven variables to the legal importance of case law.
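For readers unfamiliar with the baseline, plain in-degree centrality on a citation network can be computed as below; the paper's tailored variant of degree centrality differs in its details, and the toy network is invented.

```python
import networkx as nx

# Toy citation network; edges point from the citing to the cited decision.
G = nx.DiGraph([("case_B", "case_A"), ("case_C", "case_A"),
                ("case_C", "case_B"), ("case_D", "case_A")])

# Plain in-degree centrality: how often a decision is cited, normalised
# by the number of other nodes. This is only the baseline the paper's
# tailored variant improves upon.
ranking = sorted(nx.in_degree_centrality(G).items(),
                 key=lambda kv: kv[1], reverse=True)
print(ranking)   # case_A comes out as the most cited decision
```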
We present a formalization of Kanger's types of rights in the context of interacting two-party systems, such as contracts. We show that in this setting basic rights such as claim, freedom, power and immunity can be expressed in terms of (possibly negated) permissions and obligations over the presence or absence of actions. Another way of saying this is that, at least in the context of contracts, neither claim, nor power, nor freedom, nor immunity is a foundational modality, as each can be defined in terms of others. We also show that the set of atomic rights types is different from Kanger's original proposal.
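For reference, Kanger's four simple types of rights of x against y with respect to a state of affairs F are classically rendered in deontic-action logic as follows (O for obligation, Do(a, F) for "agent a sees to it that F"). The paper recasts such rights over the presence or absence of contract actions, so these formulas show only the classical starting point, not the paper's own definitions.

```latex
\begin{align*}
\mathrm{Claim}(x,y,F)    &\equiv O\,\mathrm{Do}(y,F)\\
\mathrm{Freedom}(x,y,F)  &\equiv \lnot O\,\mathrm{Do}(x,\lnot F)\\
\mathrm{Power}(x,y,F)    &\equiv \lnot O\,\lnot\mathrm{Do}(x,F)\\
\mathrm{Immunity}(x,y,F) &\equiv O\,\lnot\mathrm{Do}(y,\lnot F)
\end{align*}
```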
A proposal for legal information extraction is described, aiming to automatically populate an ontology. The experiments were performed over a set of legal documents from the EUR-Lex site. The proposed approach combines two different processes, namely statistics-based and rule-based methodologies, and can be considered a hybrid approach. The methodology showed good results in terms of precision, recall and F-measure.
This paper presents a case study in which an opinion of a legal scholar on a legislative proposal is formally reconstructed in the ASPIC+ framework for argumentation-based inference. The reconstruction uses a version of the argument scheme for good and bad consequences that refers not to single consequences but to sets of consequences, in order to model the aggregation of reasons for and against proposals. The case study is intended to contribute to a comparison between various formal frameworks for argumentation by providing a new benchmark example. It also aims to illustrate the usefulness of two features of ASPIC+: its distinction between deductive and defeasible inference rules and its ability to express arbitrary preference orderings on arguments.
Patent applications almost always face non-obviousness/inventive-step rejections during the examination stage, and the number of such rejections is increasing substantially each year. In this paper, we propose a mathematical approach, called the FSTP Test, for determining an indication of non-obviousness. The FSTP Test allows an inventor to identify and rework, before filing a patent application, those aspects of the invention that might otherwise be considered obvious at a later stage.
In this paper we present a prototype for automatically identifying and classifying types of modifications in Italian legal text. The prototype is part of the Eunomos system, a legal knowledge management service that integrates and makes available legislation from various sources, while finding definitions and explanations of legal concepts in a given context. The design of the prototype is grounded in the error analysis of a previous prototype, which made use of dependency relations provided by the TUP parser, a multi-purpose parser for Italian. Since those syntactic relations were responsible for the majority of errors, in the present tool we decided to ignore them and to write an ad-hoc shallow parser based on the morphological analysis of the legal text (still provided by the TUP parser). We obtained performance much better than that of the initial prototype. In particular, the precision of the classification output is now close to 100%.
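As a rough, hypothetical illustration of shallow, keyword-level classification of modification types in Italian legislative text, consider the sketch below; the cue stems are invented stand-ins, and the prototype's morphology-based shallow parser is considerably more sophisticated.

```python
# Invented cue stems for common Italian legislative modification verbs;
# the prototype's actual patterns are richer and morphology-aware.
CUES = {
    "sostitu": "substitution",   # e.g. "e' sostituito dal seguente"
    "abrogat": "repeal",         # e.g. "e' abrogato"
    "inserit": "insertion",      # e.g. "e' inserito il seguente comma"
    "soppress": "deletion",      # e.g. "e' soppresso"
}

def classify_modification(sentence: str) -> str:
    """Return the modification type signalled by the first matching cue."""
    s = sentence.lower()
    for stem, label in CUES.items():
        if stem in s:
            return label
    return "unknown"

print(classify_modification("Il comma 2 e' abrogato."))  # -> repeal
```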
This article presents a conceptual framework intended to describe and abstract cases or scenarios of compliance and non-compliance. These scenarios are collected in order to be animated in an agent-based platform for purposes of designing and validating both new regulations and new implementations, or to be used as a reference base for a diagnostic tool. In our approach, legal narratives become a source of agent-role descriptions, i.e. abstractions of individual characters/agents from singular stories, feeding the target application framework.
Numerous regulations affect our lives, and compliance with them is quite a challenge. When the change management of regulations is taken into account, the situation is even more complex. To address this problem we introduce a unified change management of both legislative documents and their formal representations, based on the FRBR framework and the direct method of legislative changes. This unified management helps to i) trace legislative changes in formal models, ii) understand the nature of the changes more deeply and easily, and iii) provide joint management of legal texts and their formal representations. To evaluate the approach, we implemented a software prototype and carried out a case study at a large international bank.
In this paper we present the results of an experiment in automatic concept and definition extraction from written sources of law, using relatively simple natural language processing and standard semantic web technology. The software was tested on six laws from the tax domain.
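In the spirit of the "relatively simple" techniques the abstract mentions, a pattern-based definition extractor can be as small as the following sketch; the single pattern and the example sentence are invented, and the actual system is richer.

```python
import re

# A deliberately simple extractor that only catches the common drafting
# formula '"X" means Y.' -- one invented pattern, for illustration only.
DEF_PATTERN = re.compile(
    r"[\"']?(?P<term>[\w \-]+)[\"']? means (?P<definition>[^.]+)\.")

text = ('"Taxable person" means any person who independently carries '
        'out an economic activity.')

match = DEF_PATTERN.search(text)
if match:
    print(match.group("term"), "->", match.group("definition"))
```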
Domain models have proven useful as the basis for the construction and evaluation of arguments to support deliberation about policy proposals. Using a model provides the means to systematically examine and understand the fine-grained objections that individuals might have about a policy. While in previous approaches a justification for a policy proposal is presented for critique by the user, here we reuse the domain model to invert the roles of the citizen and the Government: a policy proposal is elicited from the citizen, and a software agent automatically and systematically critiques it relative to the model and the Government's point of view. Such an approach engages citizens in a critical dialogue about policy actions, which may lead to a better understanding of the implications both of their own proposals and of the Government's. A web-based tool that interactively leads users through the critique is presented.
To make legal texts machine processable, they may be represented as linked documents, as semantically tagged text, or translated into formal representations that can be automatically reasoned with. The paper considers the latter, which is key to testing the consistency of laws, drawing inferences, and providing explanations relative to input. To translate laws into a form that a computer can reason with, sentences must be parsed and formally represented. The paper presents the state of the art in automatic translation of law into a machine-readable formal representation, provides corpora, outlines some key problems, and proposes tasks to address those problems.