Ebook: Legal Knowledge and Information Systems
In the same way that it has become part of all our lives, computer technology is now integral to the work of the legal profession. The JURIX Foundation has been organizing annual international conferences in the area of computer science and law since 1988, and continues to support cutting-edge research and applications at the interface between law and computer technology.
This book contains the 16 full papers and 6 short papers presented at the 26th International Conference on Legal Knowledge and Information Systems (JURIX 2013), held in December 2013 in Bologna, Italy. The papers cover a wide range of research topics and application areas concerning the advanced management of legal information and knowledge, including computational techniques for: classifying and extracting information from, and detecting conflicts in, regulatory texts; modeling legal argumentation and representing case narratives; improving the retrieval of legal information and extracting information from legal case texts; conducting e-discovery; and applications involving intellectual property and IP licensing, online dispute resolution, delivering legal aid to the public, and organizing the administration of local law and regulations.
The book will be of interest to all those associated with the legal profession whose work involves the use of computer technology.
For more than a quarter century, the Jurix conferences, held under the auspices of the Foundation for Legal Knowledge Based Systems (www.jurix.nl), have supported cutting edge research and applications at the interface between law and computer technology.
This volume contains the proceedings of the Twenty-Sixth International Conference on Legal Knowledge and Information Systems (Jurix 2013), which was held on December 11–13, 2013, at the University of Bologna.
Set in surroundings both ancient and charming, the Faculty of Law of the University of Bologna is a delightful venue in which to learn about the most up-to-date research on automating legal reasoning and the delivery of legal services.
Researchers from 13 countries in Europe, North and South America, Asia, and Australia submitted papers, each of which was reviewed by three members of the Program Committee comprising experts from 17 countries around the globe.
Of the 48 submissions, 16 were accepted as full (ten-page) papers for presentation at the conference and publication in these proceedings. An additional six submissions were accepted as short (four-page) papers with shorter presentations.
The papers cover a wide range of research topics and application areas concerning the advanced management of legal information and knowledge, including computational techniques for:
• Classifying and extracting information from, and detecting conflicts in, regulatory texts.
• Modeling legal argumentation and representing case narratives.
• Improving the retrieval of legal information and extracting information from legal case texts.
• Conducting e-Discovery.
• Applications involving intellectual property and IP licensing, online dispute resolution, delivering legal aid to the lay public, and organizing the administration of local laws and regulations.
For the first time in the Jurix conference series, a doctoral consortium was offered to graduate students enrolled in Ph.D. programs. The goals of the consortium were to introduce the participants to a network of established researchers and other graduate students in the field, and to provide feedback on the students' research projects and questions.
Two eminent researchers presented invited talks at the conference. Professor Gerd Brewka, a member of the Intelligent Systems Department of the Computer Science Institute at the University of Leipzig, delivered a talk entitled “Abstract Dialectical Frameworks and Their Potential for Legal Argumentation”. He presented a new model for argumentation frameworks, Abstract Dialectical Frameworks (ADFs), which captures more general forms of argument interaction than are supported in a Dungian framework.
Professor Adeline Nazarenko of the LIPN, Institut Galilée, Université Paris-Nord, delivered a talk entitled “How to Assist Human Formalization of NL Regulations: Lessons from Business Rules Acquisition Experiments”. She presented a method for formalizing business rules for decision systems, together with a tool that assists domain experts and knowledge engineers in exploring textual sources of business regulations written in natural language and helps them interact in deriving formal business rules from the source documents.
Three interesting workshops were planned for the first day of the conference: the Second International Workshop on Artificial Intelligence and IP Law, a joint workshop comprising the Fifth Workshop on Artificial Intelligence and the Complexity of Legal Systems (AICOL) and the Special Workshop on Social Intelligence and the Law, and the First Workshop on Legal Knowledge and the Semantic Web. Such workshops play a vital role in the field, identifying new opportunities for research and incubating efforts to address them.
Acknowledgments
The Program Committee deserves recognition and thanks for spending so much time and effort in carefully reviewing an unusually large number of submissions under severe time constraints. The Jurix 2013 Conference could not have taken place without the help of the following Program Committee members:
• Michał Araszkiewicz, Jagiellonian University, Poland
• Katie Atkinson, University of Liverpool, UK
• Floris Bex, University of Groningen, The Netherlands
• Alexander Boer, University of Amsterdam, The Netherlands
• Joost Breuker, University of Amsterdam, The Netherlands
• Pompeu Casanovas, Universitat Autònoma de Barcelona, Spain
• Jack G. Conrad, Thomson Reuters, USA
• Tom van Engers, Leibniz Center for Law, The Netherlands
• Enrico Francesconi, ITTIG-CNR, Florence, Italy
• Anne Gardner, Atherton, California, USA
• Thomas F. Gordon, Fraunhofer FOKUS, Berlin, Germany
• Guido Governatori, NICTA, Australia
• Hans Henseler, Amsterdam University of Applied Sciences, The Netherlands
• Rinke Hoekstra, VU University Amsterdam/University of Amsterdam, The Netherlands
• Jeroen Keppens, King's College London, United Kingdom
• Thorne McCarthy, Rutgers University, USA
• Marie-Francine Moens, KU Leuven, Belgium
• Paulo Novais, Universidade do Minho, Portugal
• Monica Palmirani, CIRSFID, University of Bologna, Italy
• Radim Polčák, Masaryk University, Czech Republic
• Henry Prakken, Universiteit Groningen & Universiteit Utrecht, The Netherlands
• Paulo Quaresma, Universidade de Évora & Universidade Nova de Lisboa, Portugal
• Edwina Rissland, University of Massachusetts, USA
• Antonino Rotolo, CIRSFID, University of Bologna, Italy
• Giovanni Sartor, European University Institute, Florence – CIRSFID, University of Bologna, Italy
• Ken Satoh, National Institute of Informatics and Sokendai, Japan
• Burkhard Schäfer, University of Edinburgh, UK
• Uri Schild, Bar Ilan University, Israel
• Erich Schweighofer, University of Vienna, Austria
• Leon van der Torre, University of Luxembourg, Luxembourg
• Bart Verheij, Universiteit Groningen, The Netherlands
• Vern R. Walker, Maurice A. Deane School of Law at Hofstra University, USA
• Douglas N. Walton, University of Windsor, Canada
• Radboud Winkels, Leibniz Center for Law, The Netherlands
• Adam Wyner, University of Liverpool, UK
• John Zeleznikow, Victoria University, Melbourne, Australia
Thanks also to Monica Palmirani and Antonino Rotolo for organizing the Jurix 2013 Conference in Bologna, and to Henry Prakken, chair of the Foundation for Legal Knowledge Based Systems (Jurix) for his constant support and good counsel.
Kevin D. Ashley
Program Chair
School of Law, Intelligent Systems Program
Learning Research and Development Center
University of Pittsburgh, USA
ashley@pitt.edu
In this paper we provide a structured analysis of US Supreme Court Oral Hearings to enable identification of the relevant issues, factors and facts that can be used to construct a test to resolve a case. Our analysis involves the production of what we term ‘argument component trees’ (ACTs) in which the issues, facts and factors, and the relationship between these, are made explicit. We show how such ACTs can be constructed by identifying the speech acts that are used by the counsel and Justices within their dialogue. We illustrate the application of our analysis by applying it to the oral hearing that took place for the case of Carney v. California, and we relate the majority and minority opinions delivered in that case to our ACTs. The aim of the work is to provide a formal framework that addresses a particular aspect of case-based reasoning: enabling the identification and representation of the components that are used to form a test to resolve a case and guide future behaviour.
The Légilocal project aims to help local authorities to improve the quality, interoperability and publication of French local administrative acts in the same way as Légifrance does at the state and EU level. The originality of the approach is to unify the management of contents and the interactions between actors on these contents. The Légilocal platform combines various tools (content management, networking, semantic annotation and search) and resources to assist clerks in the drafting and publication of local acts.
This paper proposes a semi-formal model of legal argumentation concerning statutory interpretation in civil law countries, encompassing set-theoretical analysis of extensions of legal terms and the use of argumentation schemes. An actual example decided by the Polish Supreme Administrative Court is discussed in the context of the proposed model. It is contended that the scheme proposed here should be useful for the development of practically significant legal knowledge bases concerning statutory law.
We describe a system for computer-assisted writing of legal documents via a question-based mechanism. This system relies upon an underlying ontological structure meant to represent the data flow from the user's input, and a corresponding resolution algorithm, implemented within a local engine based on a Last-State Next-State model, for navigating the structure and providing the user with meaningful domain-specific support and insight. This system has been successfully applied to the scenario of civil liability for motor vehicles and is part of a larger framework for self-litigation and legal support.
Users of commercial legal information retrieval (IR) systems often want argument retrieval (AR): retrieving not merely sentences with highlighted terms, but arguments and argument-related information. Using a corpus of argument-annotated legal cases, we conducted a baseline study of current legal IR systems in responding to standard queries. We identify ways in which they cannot meet the need for AR and illustrate how additional argument-relevant information could address some of those inadequacies. We conclude by indicating our approach to developing an AR system to retrieve arguments from legal decisions.
In previous work we presented argumentation schemes to capture the CATO and value based theory construction approaches to reasoning with legal cases with factors. We formalised the schemes with ASPIC+, a formal representation of instantiated argumentation. In ASPIC+ the premises of a scheme may either be a factor provided in a knowledge base or established using a further argumentation scheme. Thus far we have taken the factors associated with cases to be given in the knowledge base. While this is adequate for expressing factor based reasoning, we can further investigate the justifications for the relationship between factors and facts or evidence. In this paper we examine how dimensions as used in the HYPO system can provide grounds on which to argue about which factors should apply to a case. By making this element of the reasoning explicit and subject to argument, we advance our overall account of reasoning with legal cases and make it more robust.
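A minimal sketch of the underlying idea, deriving factors from dimension values via threshold predicates; the thresholds and rule format are illustrative, not the paper's ASPIC+ formalisation (the factor names echo the CATO trade-secrets domain):

```python
# Toy mapping from dimension values to factors (hypothetical thresholds):
# each rule names a dimension, a predicate on its value, the factor it
# grounds, and the side the factor favours.
DIMENSION_RULES = [
    ("disclosures_to_outsiders", lambda v: v > 0,
     "Secrets-Disclosed-Outsiders", "defendant"),
    ("security_measures", lambda v: v >= 1,
     "Security-Measures", "plaintiff"),
]

def applicable_factors(case_values):
    """Return (factor, side) pairs whose dimension predicate holds for the case."""
    return [
        (factor, side)
        for dim, pred, factor, side in DIMENSION_RULES
        if dim in case_values and pred(case_values[dim])
    ]
```

Making the thresholds explicit in this way is exactly what opens them to argument: a party can contest whether a given dimension value really suffices for the factor to apply.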
In this paper the activities for introducing the ECLI standard in the Italian judicial documentary system are described. Firstly, the specifications of ECLI for Italian case-law are proposed. Then, the ECLI implementation activities at the Court of Milan as a pilot case are illustrated, in particular a parsing strategy, based on regular expressions, able to detect case law citations, extract the metadata used by the ECLI grammar (typically used in citations) and providing automatic annotation of the references by the ECLI of the target documents.
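A regular-expression strategy of this kind can be sketched as follows; this is a toy illustration rather than the Court of Milan parser: the citation pattern and the court code CASS are hypothetical, while the assembled identifier follows the standard ECLI shape (country:court:year:ordinal):

```python
import re

# Hypothetical citation pattern, e.g. "Cass. civ., 15 marzo 2012, n. 4184";
# real Italian citation grammars are far richer than this single form.
CITATION = re.compile(
    r"Cass\.\s+civ\.,\s+\d{1,2}\s+\w+\s+(?P<year>\d{4}),\s+n\.\s+(?P<number>\d+)"
)

def annotate_ecli(text):
    """Detect citations in text and assemble a candidate ECLI for each one."""
    return [
        f"ECLI:IT:CASS:{m.group('year')}:{m.group('number')}"
        for m in CITATION.finditer(text)
    ]
```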
As part of a larger effort to explore the feasibility of automating the translation of regulatory text into formal, executable rules, we have developed automated methods to classify regulatory paragraphs into various categories of interest, including regulation type and reference structure. We achieve F1 scores of at least 0.8 for most categories.
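For reference, the F1 score used here is the harmonic mean of precision and recall; a minimal computation from raw counts:

```python
def f1(tp, fp, fn):
    """F1 = harmonic mean of precision and recall, from raw counts of
    true positives (tp), false positives (fp) and false negatives (fn)."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)
```

With 8 true positives, 2 false positives, and 2 false negatives, precision and recall are both 0.8, so F1 is 0.8, the threshold reported above.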
This paper presents a new theory, called Transaction Configuration, which describes the main task common to contract lawyers in the performance of their work; the theory was developed in the course of a case study at a magic circle law firm in the City of London. We show how Transaction Configuration provides a practical context for legal normative assessment, as applied with LKIF, and for legal reasoning systems in commercial law firms.
In this paper we present an application of argument maps for assessing liability in the field of Air Traffic Management (ATM), developed within the ALIAS (Addressing the Liability Impact of Automated Systems) project. Such maps are used for presenting legal concepts and norms to lawyers and non lawyers (engineers, software developers and other technical personnel), within the cooperative design and assessment of new technologies for ATM.
Interest is growing in the open challenge of representing and reasoning in an automated way over licenses and copyright on the Web of Data. In this paper, we deal with the problem of composing a set of licensing terms associated with a single query result on the Web of Data into a so-called composite license. More precisely, we analyze two composition heuristics, AND-composition and OR-composition, showing how they can be used to combine the deontic components specified by the licenses, i.e., permissions, obligations, and prohibitions, and which combinations are most suitable depending on the starting licenses. These heuristics are evaluated using the SPINdle logic reasoner.
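The two heuristics can be sketched set-theoretically; this is a simplified reading, not the paper's formalisation: here AND keeps only the permissions granted by every license and accumulates all obligations and prohibitions, while OR does the converse:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class License:
    """Deontic components of a license, each a frozenset of action names."""
    permissions: frozenset
    obligations: frozenset
    prohibitions: frozenset

def and_compose(licenses):
    # Assumed semantics: an action stays permitted only if every license
    # permits it, while obligations and prohibitions accumulate.
    return License(
        permissions=frozenset.intersection(*(l.permissions for l in licenses)),
        obligations=frozenset.union(*(l.obligations for l in licenses)),
        prohibitions=frozenset.union(*(l.prohibitions for l in licenses)),
    )

def or_compose(licenses):
    # Assumed semantics: an action is permitted if any license permits it,
    # and only shared obligations and prohibitions survive.
    return License(
        permissions=frozenset.union(*(l.permissions for l in licenses)),
        obligations=frozenset.intersection(*(l.obligations for l in licenses)),
        prohibitions=frozenset.intersection(*(l.prohibitions for l in licenses)),
    )
```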
A high-performance, scalable text processing pipeline for eDiscovery is outlined. The classification module of the pipeline is based on the random forest model, which is fast, flexible, and allows for relevance scoring and feature importance coupled with high-accuracy results. The feature selection approach combines natural language processing with legal domain input, and is based on regular expressions, which allows for linguistic variation and subtle fine-tuning. These two components of the pipeline are described in some detail. Briefly discussed are a number of the other features, which include relevance hypothesis testing, deduping, and social communication network analysis.
This study proposes a novel unsupervised approach to extracting keywords from Japanese legal documents by applying knowledge of Japanese syntax. Japanese keywords usually occur in chunks, so the task of extracting Japanese keywords is treated as one of finding the chunks that carry a document's important content. To find these chunks, all chunks in a given document are assigned weights to indicate their importance. Highly weighted chunks are recognized as candidate keywords, which are post-processed to obtain keywords. Although the proposed method employs simple techniques, experimental results on Japanese legal documents show that the proposed chunk-based approach achieves better performance (10.5% higher F1-score) than the graph-based ranking approach, the most popular unsupervised method.
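As a rough stand-in for the chunk-weighting step (the paper's weights draw on Japanese syntactic knowledge; the frequency-based score below is only an illustrative simplification):

```python
from collections import Counter

def rank_chunks(chunks, top_k=2):
    """Weight each candidate chunk by the mean corpus frequency of its
    tokens and return the top_k highest-weighted chunks as keywords."""
    freq = Counter(token for chunk in chunks for token in chunk.split())
    def weight(chunk):
        tokens = chunk.split()
        return sum(freq[t] for t in tokens) / len(tokens)
    return sorted(chunks, key=weight, reverse=True)[:top_k]
```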
Acting under several jurisdictions at the same time is becoming the norm rather than the exception, certainly for companies but also (sometimes without knowing) for individuals. In these circumstances disparities among the different laws are inevitable. Here, we present a mathematical and a computational model of interacting legal specifications, along with a mechanism to find conflicts between them. We illustrate the approach by a case study using European Privacy law.
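A minimal sketch of such conflict detection, far simpler than the paper's model: represent each jurisdiction's norms as (modality, action) pairs and flag obligation/prohibition clashes between jurisdictions:

```python
# A conflict arises when one jurisdiction makes obligatory what another
# forbids; this toy ignores conditions, exceptions and norm hierarchies.
OPPOSITE = {"obligatory": "forbidden", "forbidden": "obligatory"}

def find_conflicts(specs):
    """specs: dict mapping jurisdiction -> set of (modality, action) pairs;
    returns (jurisdiction1, jurisdiction2, action) triples in conflict."""
    conflicts = []
    items = list(specs.items())
    for i, (j1, norms1) in enumerate(items):
        for j2, norms2 in items[i + 1:]:
            for modality, action in sorted(norms1):
                opposite = OPPOSITE.get(modality)
                if opposite and (opposite, action) in norms2:
                    conflicts.append((j1, j2, action))
    return conflicts
```

For instance, one jurisdiction obliging data retention while another forbids it is exactly the kind of disparity the abstract describes.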
The French local administration produces a huge amount of heterogeneous and interdependent documents, which creates a content management issue for the administrators and an increasing need of accessibility for citizens. To meet those requirements, the Légilocal project, a French initiative, develops tools to ease the production of local administrative acts, the sharing of information among local administration clerks and the semantic search of documents. This paper presents the underlying Légilocal ontology-based documents model which takes into account not only the semantic annotations of documents, their various types, their logical structures and their different versions but also the document collection viewed as a semantic network of documents.
In most attempts to model legal systems as formal argumentation systems, legal norms are viewed as the argumentation system's inference rules. Since in formal argumentation systems inference rules are generally assumed to be fixed and independent of the inferences they enable, this approach fails to capture the dialectical connection between norms and arguments: on the one hand legal arguments are based on norms, and on the other hand the validity of norms depends on arguments. The validity of a new norm can be supported by referring to authoritative sources, such as legislation or precedent, but also through interpretations of such sources, or through analogies or a contrario arguments based on existing authoritative norms. In this contribution, arguments about norms are modelled as the application of argument schemes to knowledge bases of facts and norms.
EQUALS is the legal decision support component of a larger project to support individuals with mental health problems in employment or seeking employment. The decision-aid uses an interactive interview session to generate legal advice for its user. The EQUALS decision-aid was subjected to two rounds of user acceptance testing (UAT), with the assistance of domain experts. This paper discusses results from the UAT, based on which the potential and feasibility of legal decision aids are discussed.
There are occasions on which an agent extends its own action by engaging another's activity in its own interest. We focus on the occasional dependence relation between a principal agent and a helper agent. In particular, we are interested in harmful performance by the helper that originates in extra-contractual situations, e.g. factual and/or occasional situations based on trust or courtesy, which may give rise to an obligation to compensate third parties.
Legal reasoning can be approached from various perspectives, traditionally argumentation, probability and narrative. The communication between forensic experts and a judge or jury would benefit from an integration of these approaches. In previous papers we worked on the connection between the narrative and the probabilistic approach. We developed techniques for representing crime scenarios in a Bayesian network. But for complex cases, the construction of a Bayesian network structure using these techniques remained a cumbersome task.
In this paper we therefore propose a method called unfolding a scenario and a representation for small variations within a scenario. With these tools, a Bayesian network can be built up step by step, gradually adding more details. The method of unfolding a scenario is intended to support the process of building a Bayesian network, additionally resulting in a well-structured graphical structure.
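The probabilistic core that such a network evaluates is Bayes' rule over competing scenarios; a minimal sketch (not the authors' unfolding method, which concerns the stepwise construction of the network structure itself):

```python
def posterior(prior, likelihood):
    """Update scenario probabilities given the likelihood of the observed
    evidence under each scenario: P(S|E) is proportional to P(S) * P(E|S)."""
    joint = {s: prior[s] * likelihood[s] for s in prior}
    total = sum(joint.values())
    return {s: p / total for s, p in joint.items()}
```

With two equally probable crime scenarios and evidence nine times more likely under the first, the first scenario's posterior probability rises to 0.9.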
This paper describes ongoing research on automatically determining relevant context to display to a user of a legislative portal, given the article they are retrieving, purely on the basis of ‘objective’ criteria inferred from the network of sources of law. A first prototype is presented, together with a formative evaluation by legal expert users. Results are promising, but there is room for improvement.
The paper reports the outcomes of a study in which law school students annotated a corpus of legal cases with a variety of annotation types, e.g. citation indices, legal facts, rationale, judgement, cause of action, and others. A group of annotators used an online tool to produce an annotated corpus; differences amongst the annotations were then curated, producing a gold standard corpus of annotated texts. The annotations can be extracted with semantic searches using complex queries. Such a corpus has many uses in both legal education and legal research.
There is a need to develop systems that comply with the law in public administration, because administrative activities are based on laws. When new laws are enacted or existing laws are amended, however, civil servants must develop or modify these systems in the short time before the laws come into force. Related work on requirements elicitation from legal texts includes ontology-based approaches, but building an ontology for practical use is difficult. In this paper we propose pre-defined templates that express functional requirements and are used to identify the legal texts containing such requirements, together with a support tool providing two functions: automatic summary creation from complicated legal texts, and suggestion of the legal texts that contain functional requirements. We have applied this approach to Japanese laws and evaluated its accuracy; the results show that the approach can identify functional requirements with high accuracy.