Ebook: Legal Knowledge and Information Systems
Computer technology has become an essential part of all our lives, and the legal profession is no exception. For more than 25 years, the annual JURIX conference has provided an international forum for academics and practitioners working at the cutting edge of research into the interface between law and computer technologies, and its applications.
This book presents the proceedings of the 28th International Conference on Legal Knowledge and Information Systems (JURIX 2015), which took place in Braga, Portugal in December 2015. The book contains 14 full papers, nine short papers and nine posters delivered at the conference. These address a wide range of topics in legal informatics, and fall into three main subject areas: theory and foundations of AI and law, focusing on themes such as argumentation, reasoning, and evidence; technology of AI and law, which presents technological advancements and solutions; and applications of AI and law, describing implementations of AI and law technology in real world systems.
The book offers an overview of the ways in which current information technology is relevant to the practice of law, and will be of interest to all those whose work involves legal theory, argumentation and practice.
We are very glad to present the proceedings volume of the 28th International Conference on Legal Knowledge and Information Systems (JURIX 2015). For more than 25 years, the JURIX conference has provided an international forum for academics and practitioners advancing cutting-edge research at the interface between law and computer technologies. The JURIX conferences are held under the auspices of the Dutch Foundation for Legal Knowledge Systems: JURIX 2015 took place at the Universidade do Minho Law School, Braga, Portugal, on 10–11 December. Special thanks go to Francisco Andrade (Law School, University of Minho), Paulo Novais (Department of Informatics, School of Engineering, University of Minho), and their team for inviting us, for hosting the event, and for making this conference possible.
The contributions in this volume include a selection of 14 full papers (10 pages each), 9 short papers (4 pages), and 9 posters (2 pages each) chosen from a pool of 62 submissions by 139 authors from 24 countries. The accepted papers address a wide range of topics in legal informatics and fall within the following three major tracks: theory and foundations of AI & Law (focusing on themes such as argumentation, reasoning, norms and evidence), technology of AI & Law (presenting technological advancements and new solutions for AI & Law), and applications of AI & Law (describing implementations of AI & Law technology in real world systems). The accepted papers were carefully selected after a rigorous peer-review process where each paper was evaluated by a panel of at least three members of the international Program Committee. We thank the reviewers for their effort and very valuable contribution; without them it would not be possible to maintain and improve the high scientific standard the conference has now achieved. We thank the authors for submitting good papers, responding to the reviewers' comments, and abiding by our production schedule.
Our two invited speakers this year were Luciano Floridi, Professor of Philosophy and Ethics of Information at Oxford Internet Institute, University of Oxford, and Kasey Chappelle, Global Privacy Officer and Director of Commercial Compliance at American Express Global Business Travel. We are very grateful to them for having accepted our invitation and for their interesting and inspiring talks.
JURIX 2015 also hosted the Doctoral Consortium, now in its third edition. This initiative is meant to attract and promote PhD researchers in the area of AI & Law, and so to enrich the community with innovative and fresh contributions. Many thanks to Monica Palmirani for organising it once again this year.
The conference was preceded by six co-located workshops and a tutorial. The 3rd International Workshop on Network Analysis in Law (NAiL 2015) built on the achievements of the first edition held at ICAIL 2013 in Rome and the second edition at JURIX 2014 in Krakow. After the success of the previous editions, the fourth International Workshop on Artificial Intelligence and IP Law (AIIP IV) brought together researchers in copyright law and copyright enforcement with experts in AI and law. The Workshop on Artificial Intelligence and the Complexity of Legal Systems (AICOL), now in its sixth edition, is a well-established event that aims to develop models of legal knowledge better suited to the complexity of contemporary legal systems. The Workshop on Legal Data Analysis of the Central European Institute of Legal Informatics (CEILI) focused on the representation and analysis of, and reasoning over, legal data in large text corpora and information systems. The Workshop on Electronic Discovery and Digital Evidence encompassed topics such as the practical challenges concerning the collection, preservation and use of digital evidence in courts. The Workshop on Privacy and Data Protection gathered economic operators, academics, national data protection authorities and legal practitioners to debate new forms of interaction between citizens, corporations and nation states through ICT, and their implications for fundamental rights. We furthermore hosted a tutorial on Coding Smart Contracts for the Blockchain, an important and emerging topic in legal informatics and computer law.
The JURIX 2015 conference was supported by CIIDH (Interdisciplinary Center in Human Rights), Justicrime (Lusophone Institute of Criminal Justice), and ELSA Uminho (European Law Students Association): many thanks to them; their support helped us to organise this event, and their technical support helped to attract many high-quality submissions.
CIRSFID, University of Bologna, Italy
The paper examines two different sets of rulings by the Italian Constitutional Court (“ICC”), namely the in via incidentale (“IVI”) and the in via principale (“IVP”) rulings, vis-à-vis the web of scholarly opinions (“WSO”) devoted to such cases. On the one hand, the paper shows that all such networks, i.e. IVIN, IVPN, and WSO, follow power-law patterns of informational distribution. On the other hand, this stance allows us to deepen the meaning of legal relevance. In addition to the ICC decisions widely debated by scholars, we should take into account cases that are relevant in both IVIN and IVPN and yet scarcely debated in the WSO. This is the class of legal cases that scholars ignore at their own risk, since the higher a case is ranked in the citation network of the court, the higher the probability that scholars will soon have to reflect on it.
Stories and legal cases have much in common, but there are also differences. Both can be seen as a sequence of events, but in a legal case the facts and events are legally qualified. Moreover, the point of a story is usually implicit, whereas the outcome of a legal case is explicitly explained. Stories have mainly been used in AI and Law to explore the evidence presented in legal cases, but here we explore the relationship on the assumption that the facts of the case have already been established, and so include legal qualification and the decision. We illustrate our approach with the well-known wild animals cases and Popov v Hayashi.
Ontology-based Information Extraction is crucial for translating natural language documents into Linked Data. This connection supports consumers in navigating documents and semantically related data. However, the performance of automated information extraction systems is far from perfect, and such systems rely heavily on human intervention, whether to create heuristics, to annotate examples for inferring models, or to interpret or validate patterns emerging from data.
In this paper, we apply different Active Learning strategies to Information Extraction (IE) from licenses in English, a setting with highly repetitive text, few annotated or unannotated examples available, and very high precision required. We show that the most popular approach to active learning, i.e., uncertainty sampling for instance selection, does not perform well in this setting. We show that we can obtain an effect similar to that of density-based methods using uncertainty sampling, by simply reversing the ranking criterion and choosing the most certain instead of the most uncertain instances.
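The reversal described above can be sketched in a few lines. This is an illustrative stand-in, not the authors' implementation: the scoring function, pool, and model probabilities are all invented for the example.

```python
# Sketch of certainty-based instance selection: instead of querying the
# instances the model is LEAST sure about (classic uncertainty sampling),
# we query the ones it is MOST sure about, which on highly repetitive
# license text can mimic density-based selection.
# All names below are illustrative, not from the paper.

def uncertainty(prob_positive):
    # Distance from a confident decision: 0.5 is maximally uncertain.
    return 1.0 - abs(prob_positive - 0.5) * 2.0

def select_instances(pool, predict_proba, k, most_certain=True):
    """Rank unlabeled instances; most_certain=True reverses the usual order."""
    scored = [(uncertainty(predict_proba(x)), x) for x in pool]
    scored.sort(key=lambda pair: pair[0], reverse=not most_certain)
    return [x for _, x in scored[:k]]

# Toy pool: instance id -> model probability of the positive class.
probs = {"a": 0.97, "b": 0.52, "c": 0.10, "d": 0.60}
certain = select_instances(list(probs), probs.get, 2, most_certain=True)
uncertain = select_instances(list(probs), probs.get, 2, most_certain=False)
print(certain)    # most confident instances: ['a', 'c']
print(uncertain)  # closest to the decision boundary: ['b', 'd']
```

Only the sort direction changes between the two strategies, which is what makes the reversal cheap to try in practice.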
In this paper we address the issue of what it means to comply with or violate norms, and we propose a computationally oriented approach to reasoning about such notions.
This paper looks at the use of recitals in the interpretation of EU legislation, and at mechanisms for connecting them to normative provisions. The purposive approach to the interpretation of EU legislation taken by the European Court of Justice makes frequent reference to recitals as helping to establish the purpose of normative provisions. Our research uses a cosine-similarity-based approach to link provisions with relevant recitals, to help legal professionals and lay end-users interpret the law. Such support can be used in legal knowledge-based systems.
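The linking step can be sketched with plain bag-of-words cosine similarity. The texts, tokenisation, and selection rule below are invented for illustration; the paper's actual pipeline, preprocessing, and thresholds are not specified here.

```python
# Minimal sketch of linking a provision to its most similar recital by
# cosine similarity over bag-of-words vectors (stdlib only).
import math
from collections import Counter

def cosine(a, b):
    ca, cb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(ca[t] * cb[t] for t in ca)
    na = math.sqrt(sum(v * v for v in ca.values()))
    nb = math.sqrt(sum(v * v for v in cb.values()))
    return dot / (na * nb) if na and nb else 0.0

# Hypothetical recital and provision texts.
recitals = {
    "R1": "protection of personal data of natural persons",
    "R2": "free movement of goods within the internal market",
}
provision = "the processing of personal data shall respect natural persons"

# Link the provision to the recital with the highest similarity.
best = max(recitals, key=lambda r: cosine(recitals[r], provision))
print(best)  # -> R1
```

In practice TF-IDF weighting and a similarity threshold would be used so that provisions without any sufficiently similar recital remain unlinked.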
We present a logical analysis of the relation between social influence and responsibility. In particular, we precisely characterise a notion of influence-based responsibility, namely, a responsibility that depends on the fact that an agent causes a primary violation by another agent. This notion captures the core of the idea of indirect (also called secondary or accomplice) responsibility in legal systems, and can be useful in the governance of multiagent systems. Our analysis uses the STIT logic of action (the logic of seeing to it that). On this basis we shall first formalise a notion of influence between agents, and then the idea of influence-based responsibility.
Defining and characterising conditional permissions has never been easy. Part of the problem, we believe, comes from the fact that there is not one but a whole family of possible deontic operators, all of them distinct and reasonable, that can be labelled as conditional permissions. In this article, rather than disputing the correct interpretation, we revisit a number of different interpretations the term has received in the literature, and propose appropriate formalisations for these interpretations within the context of contract automata.
The power of courts to change law via case law is among the most persistent and contested themes in the study of courts. In this article we empirically investigate whether the Court of Justice of the European Union (the Court) is constrained by its case law, and whether this can have a legitimizing effect on its decision making. In contrast to previous literature, which constructs citation networks from entire documents, we build a network of references to individual paragraphs of judgments and their adjacent texts. We then analyze the paragraph texts with the aid of keyword extraction and topic modelling. Our findings can explain the legal relevance of citations and cited cases, as well as their normative force.
This paper examines how judges make decisions in the complex and unstructured legal domain of directors' liability. The research results show that despite the complex legal environment, courts' decisions are highly consistent. This paper is the first to provide a logistic regression model for predicting directors' liability under Dutch company law based on case factors occurring in court decisions (2003–2013).
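A logistic model over binary case factors can be sketched as follows. The factors, coefficients, and intercept below are invented for the example; the paper's actual features and fitted weights are not reproduced here.

```python
# Illustrative-only sketch of scoring a case with a logistic model over
# binary case factors. All factors and weights are hypothetical.
import math

def liability_probability(case, weights, intercept):
    # Sum the coefficients of the factors present in the case, then
    # squash through the logistic function to get a probability.
    z = intercept + sum(weights[f] for f, present in case.items() if present)
    return 1.0 / (1.0 + math.exp(-z))

weights = {
    "improper_bookkeeping": 1.4,   # hypothetical coefficient
    "late_filing_accounts": 0.9,   # hypothetical coefficient
    "external_cause_shown": -2.1,  # hypothetical coefficient
}
case = {"improper_bookkeeping": True,
        "late_filing_accounts": True,
        "external_cause_shown": False}

p = liability_probability(case, weights, intercept=-1.0)
print(round(p, 3))  # -> 0.786
```

The coefficients would in practice be fitted on annotated court decisions; the sketch only shows how a fitted model turns the presence or absence of case factors into a liability probability.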
A multi-lingual term bank of copyright-related terms has been published connecting WIPO definitions, IATE terms and definitions from Creative Commons licenses. These terms have been hierarchically arranged, spanning multiple languages and targeting different jurisdictions. The term bank has been published as a TBX dump file and is publicly accessible as linked data. Models for the RDF data structure are based on Lemon and W3C Recommendations. The term bank has been used to annotate common licenses in the RDFLicense dataset.
Statutory analysis is a significant component of research on almost any legal issue, and determining whether a statutory provision applies is an integral part of the analysis. In this paper we present initial results from an attempt to support the applicability assessment in situations where the number of statutory provisions to be considered is large. We propose the use of a framework in which a single human expert cooperates with a machine learning text classification algorithm. Our experiments show that adoption of the approach leads to better performance during the relevance assessment. In addition, we suggest how to re-use a classification model trained during one statutory analysis for another related analysis. This points to a new way of capturing and re-using knowledge produced in the course of statutory analysis. Our experiments confirm the viability of this approach.
Many studies have proposed applying artificial intelligence techniques to legal networks, whether for highlighting legal reasoning, resolving conflicts or extracting information from legal databases. In this context, a new line of research has recently emerged which consists in treating legal decisions as elements of complex networks and conducting a structural analysis of the relations between the decisions. It has proved efficient for detecting important decisions in legal rulings. In this paper, we follow this approach and propose to extend structural analyses with temporal properties. In particular, we define the notions of relative in-degree, temporal distance and average longevity, and use those metrics to rank the legal decisions of the first two trials of the International Criminal Court. The results presented in this paper highlight non-trivial temporal properties of those legal networks, such as the presence of decisions with an unexpectedly high longevity, and show the relevance of the proposed relative in-degree property for detecting landmark decisions. We validate the outcomes by comparing our results with those obtained using the standard in-degree property, and provide juridical explanations of the decisions identified as important by our approach.
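One plausible reading of these temporal metrics can be sketched on a toy citation network. The definitions below are assumptions for illustration (the paper's exact formulas may differ): relative in-degree normalises a decision's in-degree by the number of decisions issued after it, and average longevity averages the age of a decision at the moments it is cited. The decision ids, dates, and edges are invented.

```python
# Sketch of assumed temporal citation metrics on a toy decision network.
from datetime import date

# Toy data: decision id -> issue date, and (citing, cited) edges.
issued = {
    "D1": date(2009, 1, 15),
    "D2": date(2010, 6, 1),
    "D3": date(2012, 3, 10),
    "D4": date(2013, 9, 5),
}
citations = [("D2", "D1"), ("D3", "D1"), ("D4", "D1"), ("D4", "D3")]

def relative_in_degree(d):
    # In-degree divided by the number of decisions that could cite d,
    # i.e. those issued after it (assumed normalisation).
    later = sum(1 for other in issued if issued[other] > issued[d])
    in_deg = sum(1 for _, cited in citations if cited == d)
    return in_deg / later if later else 0.0

def average_longevity_days(d):
    # Mean age (in days) of decision d when it is cited.
    ages = [(issued[citing] - issued[d]).days
            for citing, cited in citations if cited == d]
    return sum(ages) / len(ages) if ages else 0.0

print(relative_in_degree("D1"))  # cited by every later decision -> 1.0
print(relative_in_degree("D2"))  # never cited -> 0.0
print(average_longevity_days("D1"))
```

Normalising by the number of later decisions corrects the bias of raw in-degree against recent decisions, which have had fewer opportunities to be cited.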
Legal reasoning about evidence can be a precarious exercise, in particular when statistics are involved. A number of recent miscarriages of justice have provoked a scientific interest in formal models of legal evidence. Two such models are presented by Bayesian networks (BNs) and argumentation. A limitation of argumentation is that it is difficult to embed probabilities. BNs, on the other hand, are probabilistic by nature. A disadvantage of BNs is that it can be hard to explain what is modelled and how the results came about. Assuming that a forensic expert presents evidence in a way that is either already a BN or expressed in terms that easily map to a simple BN, we may wish to express the same information in argumentative terms. We address this issue by translating Bayesian networks to arguments. We do this by means of an intermediate structure, called a support graph, which represents the variables from the Bayesian network, maintaining independence information in the network, but connected in a way that more closely resembles argumentation. In the current paper we test the support graph method on a Bayesian network from the literature. We argue that the resulting support graph adequately captures the possible arguments about the represented case. In addition, we highlight strengths and limitations of the method that are revealed by this case study.
Bayesian networks have gained popularity as a probabilistic tool for reasoning with legal evidence. However, two common difficulties are (1) the construction and (2) the understanding of a network. In previous work, we proposed to use narrative tools and in particular scenario schemes to assist the construction and the understanding of Bayesian networks for legal cases. We proposed a construction method and a reporting format for explaining or understanding the network. The quality of a scenario, which plays an important role in the narrative approach to evidential reasoning, was not yet included in this method. In this paper, we provide a discussion of what constitutes the quality of a scenario, in terms of the narrative concepts of completeness, consistency and plausibility. We propose a probabilistic interpretation of these concepts, and show how they can be incorporated in our previously proposed method. We also illustrate with an example how these concepts concerning scenario quality can be used to explain or understand a Bayesian network.
Close to 3,000 bilateral investment treaties (BITs) had been concluded by 2015 and virtually every country is a signatory. What makes these treaties special is their enforcement mechanism: private investors can sue states directly before international arbitration tribunals, potentially winning multi-million dollar awards. Given the size and atomized nature of the BIT universe, however, practitioners struggle to navigate it effectively. To reduce investment law's complexity, this demo introduces a new interactive web-based tool that relies on state-of-the-art technology to allow users to assess similarities and differences between agreements quickly and intuitively. The tool thereby assists investment law practitioners in structuring negotiations around a common denominator in treaty practice, and helps litigators to advance their clients' cases by distinguishing or analogizing treaties.
The paper discusses the complexity problem in the interpretation of statutory legal norms. The authors propose a comprehensive framework that allows the representation of the interpretation process.
Business Process Management Systems (BPMS) are widely recognized as a fundamental component of the IT infrastructure supporting medium-to-large organizations, thanks to their capacity to provide easy-to-read models of how an organization works, and to enact these business processes, supporting and monitoring their execution. In this work we present results collected during a feasibility study that aims to apply BPM concepts to a legal domain: the decision and enforcement of preliminary injunctions.
We present research aimed at representing legal rules as argument maps. Such a representation supports communication with non-lawyers and the integration of safety arguments with legal arguments. The approach has been used in real test applications in the ATM domain to assess liability issues of new automated technologies.
In recent years, many empirical studies of the legal decision-making process have shown that it incorporates many cognitive, affective, and supra-legal factors. Our goal is to design artificial intelligence systems that model these aspects of legal decision-making. Our vision is to implement a kind of legal assistant that can be used by lawyers and judges to run through different scenarios and produce arguments for different, and possibly contradictory, decisions. We propose a multi-agent blackboard architecture for such an assistive system, employing some insights from our previous work on a context-aware recommender system.
Using the notion of correlativity in the Hohfeldian system of rights, we not only gain a better understanding of legal relations, but can also identify and differentiate existing and non-existing rights and duties. One has to see clearly, though, who the agents are in these correlative pairs, and dissociate them from the State. Using SDL with an iterable and agent-indexed STIT operator, we can outline the characteristics of these relations while clearly separating these roles.
In 2010 the Council of the European Union laid down the standard for the European Case Law Identifier (ECLI). In this paper we assess to what extent and how ECLI has been implemented by national and European courts, as well as the state of play regarding the ECLI search engine on the European e-Justice Portal. Although much work remains to be done, the ongoing developments justify a positive outlook.
To align representations of law, of implementations of law and of concrete behaviours, we designed a common ground representational model for the three domains, based on the notion of position, building upon Petri nets. This paper reports on work to define subsumption between positional models.
Legal reasoning with evidence can be a challenging task. We study the relation between two formal approaches that can aid the construction of legal proof: argumentation and Bayesian networks (BNs). Argument schemes are used to describe recurring patterns in argumentation. Critical questions for many argument schemes have been identified. Due to the increased use of statistical forensic evidence in court it may be advantageous to consider probabilistic models of legal evidence. In this paper we show how argument schemes and critical questions can be modelled in the graphical structure of a Bayesian network. We propose a method that integrates advantages from other methods in the literature.
In this paper we discuss tools for rapid prototyping of legal CBR. We describe how a recent highly quantitative analysis can be realised using a spreadsheet, while a more nuanced approach at a finer level of granularity can be prototyped with the web service Carneades, which displays its results as an argument graph.