Ebook: Legal Knowledge and Information Systems
The JURIX conferences are an established international forum for academics, practitioners, government and industry to present and discuss advanced research at the interface between law and computer science. Subjects addressed in this book cover all aspects of this diverse field: theoretical – focused on a better understanding of argumentation, reasoning, norms and evidence; empirical – targeted at a more general understanding of law and legal texts in particular; and practical – aimed at enabling a broader technical application of theoretical insights.
This book presents the proceedings of the 27th International Conference on Legal Knowledge and Information Systems: JURIX 2014, held in Kraków, Poland, in December 2014. The book includes the 14 full papers, 8 short papers, 6 posters and 2 demos – the first time that poster submissions have been included in the proceedings.
The book will be of interest to all those whose work involves legal theory, argumentation and practice, and who need an up-to-date overview of the ways in which information technology is relevant to legal practice.
It is with distinct pleasure that I present to you the proceedings volume of the 27th International Conference on Legal Knowledge and Information Systems: JURIX 2014, held under the auspices of the Foundation for Legal Knowledge Based Systems (JURIX). With a history of over 25 years, the JURIX conferences are an established international forum for academics, practitioners, government and industry to present and discuss advanced research at the interface between Law, in a broad sense, and Computer Science. We are very happy to receive a warm welcome at Jagiellonian University in Kraków, Poland, this December. Special thanks go to Michał Araszkiewicz and his team for inviting us, and for making this conference possible.
This year, out of 51 submissions (135 authors), we accepted fourteen full papers (10 pages each), eight short papers (6 pages), six posters and two demos (each 2 pages); joint work of 91 authors in total. This is the first time that poster submissions have been included in the printed proceedings.
It is encouraging to see that the contributions cover all aspects of this diverse field: theoretical, focused on a better understanding of argumentation, reasoning, norms and evidence; empirical, targeted at a more general understanding of law, and legal texts in particular; as well as practical papers, aimed at enabling a broader technical application of these theoretical insights.
First of all, we can distinguish between theoretical papers that focus on norms, and normative systems, and ones that focus more on legal cases and evidence. In the first category, we can find several papers that present argumentation-based models for statutory interpretation (Araszkiewicz, Sartor et al., Walton et al. and Zurek); such interpretations allow us to provide solid footing for the representation of norms, rights and other fundamental legal concepts. It is interesting to see a shift to models that emphasize the dynamic nature of the world governed by norms (Bench-Capon, Sileno et al., Azzopardi et al.). Of a more applied nature is a concrete model built to explain air passenger rights, implemented using Semantic Web technology (Rodríguez-Doncel et al.).
In the second category, we see papers on extracting (Timmer et al.) and understanding the argumentation that takes place within (Governatori et al., Carneiro et al.) and across legal cases (Al-Abdulkarim et al.), showing that we are attacking the problem of understanding legal argumentation at different levels. These proceedings include two papers that emphasize the role of evidence in legal reasoning: determining plausible scenarios as explanations for evidence (Vlek et al.), and the various roles played by evidence for punishments and rewards (Boer).
To apply these theoretical views in practice, we require richer representations of legal information (in the broadest sense) than those which are currently available. Three papers promote the availability of legal information as (semantically rich) data (Sheridan, Frosterus et al., Poblet et al.), offering Linked Data platforms for accessing a variety of interconnected legal data for the purposes of retrieval and analysis. And in fact, retrieval and analysis increasingly go hand in hand, ranging from more traditional text and information mining (Łopuszyński, Šavelka et al.) to structural analyses of legislation (Koniaris et al., Waltl and Matthes) and the use of structure to drive retrieval systems (Mimouni et al., Winkels et al.).
The datasets needed to drive such technology are covered as well (Nakamura and Kakuta, Rodríguez-Doncel et al.), including methods to integrate the creation of legal datasets into existing environments (Palmirani et al.). Law does not exist in isolation, and we see an increasing number of initiatives that adopt crowdsourcing (Crawford et al., Bueno et al., Costantini) and open source intelligence (Casanovas et al.) to build datasets and ontologies that bridge the gap between legal information and society as a whole.
Our two invited speakers this year were Noam Slonim, Senior Research Staff Member at the Analytics Department of the IBM Haifa Research lab, who talked about debating technologies at IBM, and Pieter Adriaans, professor of Learning and Adaptive Systems at the Institute for Logic, Language and Computation and the Informatics Institute of the University of Amsterdam.
The conference was preceded by no fewer than four workshops and a tutorial. The 2nd International Workshop on Network Analysis in Law (NAIL 2014) builds on the success of the first edition held at ICAIL 2013 in Rome. The Semantic Web for Law (SW4Law 2014) workshop continues our tradition of fostering the fruitful combination of Semantic Web technology and legal information. The combination of the 14th Workshop on Computational Modelling of Natural Argument (CMNA 14) with the 1st International Workshop for Methodologies for Research on Legal Argumentation (MET-ARG 2014) provides a full-day program on the study of argumentation. We furthermore organized a tutorial on “Formal Models of Balancing in Legal Cases”, an important topic in AI & Law and legal theory, but also useful for legal decision makers and practitioners.
Last but not least, I would like to thank all 56 members of the program committee for their most excellent, diligent and timely work on reviewing the submitted papers. There wasn't a lot of time, and you were great!
Rinke Hoekstra
This paper extends a semi-formal model of statutory interpretation by introducing doctrinal theories. Doctrinal theorists, also referred to as legal dogmatists, contribute significantly to the understanding of statutory law in continental legal cultures. However, this problem has rarely been addressed in AI and Law research. In this paper the concept of doctrinal theory is reconstructed on the basis of set-theoretically defined interpretive statements and the theory of argumentation schemes.
This paper shows how defeasible argumentation schemes can be used to represent the logical structure of the arguments used in statutory interpretation. In particular we shall address the eleven kinds of argument identified by MacCormick and Summers [6] and the thirteen kinds of argument identified by Tarello [11]. We show that interpretative argumentation has a distinctive structure where the claim that a legal text ought or may be interpreted in a certain way can be supported or attacked by arguments, whose conflicts may have to be assessed according to further arguments.
This paper presents a set of argumentation schemes that can be used to identify, analyze and evaluate types of arguments characteristically used in cases of contested statutory interpretation in law. These schemes represent forms of argument already identified in the literature as leading forms of argument used in cases of statutory interpretation where legal disputes about how to interpret a statute have generally arisen.
The main aim of this work is to formalize one of the mechanisms of resolving conflicts between statutory legal rules. The argument from social importance is based on the distinction between the axiological contexts of conflicting norms. One of these norms may be more significant from the point of view of social importance, and this norm should prevail over the less significant one.
The design and analysis of norms is a somewhat neglected topic in AI and Law. In recent years powerful techniques to model and analyse norms have been developed in the Multi-Agent Systems community. In this paper I consider these techniques from an AI and Law perspective, and suggest a framework for the exploration of these issues.
Rather than as abstract entities, jural relations are analyzed in terms of the bindings they create on the individual behaviour of concurrent social agents. Investigating a simple sale transaction modeled with Petri Nets, we argue that the concepts on the two Hohfeldian squares rely on the implicit reference to a “transcendental” collective entity, to which the two parties believe or are believed to belong. From this perspective, we observe that both liabilities and duties are associated with obligations, respectively of an epistemic or practical nature. The fundamental legal concepts defined by Hohfeld are revisited accordingly, leading to the construction of two Hohfeldian prisms.
Although contract reparations have been extensively studied in the context of deontic logics, there is not much literature using reparations in automata-based deontic approaches. Contract automata are a recent approach to modelling the notion of contract-based interaction between different parties using synchronous composition. However, the approach lacks the notion of reparations for contract violations. In this article we look into, and contrast, different ways reparation can be added to an automaton- and state-based contract approach, extending contract automata with two forms of such clauses: catch-all reparations for violations and reparations for specific violations.
This paper describes a representation of the legal framework in the domain of air transport passengers' rights and the foremost incidents behind the top consumer complaints in the EU. It comprises the development of a small network of three ontologies, formalisation of scenarios, specification of properties and identification of relations. The approach is illustrated by means of a case study based on a real-life cancelled flight incident. This is part of an intended support system that aims to provide both consumers and companies with relevant legal information to enhance the decision-making process.
In recent years a powerful generalisation of Dung's abstract argumentation frameworks, Abstract Dialectical Frameworks (ADF), has been developed. ADFs generalise the abstract argumentation frameworks introduced by Dung by replacing Dung's single acceptance condition (that all attackers be defeated) with acceptance conditions local to each particular node. Such local acceptance conditions allow structured argumentation to be straightforwardly incorporated. Related to ADFs are prioritised ADFs, which allow for reasons pro and con a node. In this paper we show how these structures provide an excellent framework for representing a leading approach to reasoning with legal cases.
Recent developments in the forensic sciences have confronted the field of legal reasoning with the new challenge of reasoning under uncertainty. Forensic results come with uncertainty and are described in terms of likelihood ratios and random match probabilities. The legal field is unfamiliar with numerical valuations of evidence, which has led to confusion and in some cases to serious miscarriages of justice. The cases of Lucia de B. in the Netherlands and Sally Clark in the UK are infamous examples where probabilistic reasoning has gone wrong with dramatic consequences. One way of structuring probabilistic information is in Bayesian networks (BNs). In this paper we explore a new method to identify legal arguments in forensic BNs. This establishes a formal connection between probabilistic and argumentative reasoning. Developing such a method is ultimately aimed at supporting legal experts in their decision making process.
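The likelihood-ratio framing mentioned in this abstract can be illustrated with a toy calculation (all numbers below are invented for illustration and are not taken from any of the cases discussed): the likelihood ratio compares how probable the evidence is under the prosecution hypothesis versus the defence hypothesis, and Bayes' rule in odds form combines it with the prior odds.

```python
# Toy illustration of a forensic likelihood ratio (all numbers invented).
# LR = P(E | Hp) / P(E | Hd): how much more probable the evidence is
# under the prosecution hypothesis than under the defence hypothesis.

def likelihood_ratio(p_e_given_hp: float, p_e_given_hd: float) -> float:
    return p_e_given_hp / p_e_given_hd

def posterior_odds(prior_odds: float, lr: float) -> float:
    # Bayes' rule in odds form: posterior odds = prior odds * LR.
    return prior_odds * lr

# A match with a random match probability of 1 in 1,000,000 gives LR ~ 10^6,
# but with prior odds of only 1:10,000 the posterior odds are ~ 100:1,
# not "a million to one" -- the classic prosecutor's fallacy.
lr = likelihood_ratio(1.0, 1e-6)
post = posterior_odds(1 / 10_000, lr)
print(lr, post)
```

The point of the sketch is exactly the confusion the abstract mentions: a very large likelihood ratio does not by itself yield very large posterior odds.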
In strategic argumentation players exchange arguments to prove or reject a claim. This paper discusses and reports on research about two basic issues regarding the game-theoretic understanding of strategic argumentation games in the law: whether such games can be reasonably modelled as zero-sum games and as games with complete information.
Recent research shows that our performance and satisfaction at work depends more on motivational factors than the number of hours or the intensity of the work. In this paper we propose a framework aimed at managing motivation to improve workplace indicators. The key idea is to allow team managers and workers to negotiate over the conditions of the tasks so as to find the best motivation for the worker within the constraints of what the organization may offer.
In legal knowledge acquisition, the threat of punishment remains an important litmus test for categorizing legal rules: something is a real duty if it is backed – directly or indirectly – by a threat of punishment. In practice, no accounts of how enforcement design patterns are superposed on representations of specific legal rules exist in our field, and the litmus test does not work in modeling legal rules. This paper considers the distinction between punishments and rewards, and points to a more obvious connection with the production of evidence and the allocation of the burden of proof. Since this work was done in the context of a knowledge acquisition methodology based on Petri Net Markup Language diagrams, the result is a generic enforcement pattern expressed as a Petri net diagram.
In order to make an informed decision in a criminal trial, conclusions about what may have happened need to be derived from the available evidence. Recently, Bayesian networks have gained popularity as a probabilistic tool for reasoning with evidence. However, in order to make sense of a conclusion drawn from a Bayesian network, a juror needs to understand the context. In this paper, we propose to extract scenarios from a Bayesian network to form the context for the results of computations in that network. We interpret the narrative concepts of scenario schemes, local coherence and global coherence in terms of probabilities. These allow us to present an algorithm that takes the most probable configuration of variables of interest, computed from the Bayesian network, and forms a coherent scenario as a context for these variables. This way, we take advantage of the calculations in a Bayesian network, as well as the global perspective of narratives.
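As a minimal sketch of the "most probable configuration" step this abstract mentions (the network, variables and probabilities below are invented for illustration, not taken from the paper), one can enumerate joint probabilities in a tiny two-node network and pick the configuration of the unobserved variable that best explains the evidence:

```python
# Toy Bayesian network: Burglary -> Alarm (all probabilities invented).
p_burglary = {True: 0.01, False: 0.99}
p_alarm_given_burglary = {
    True:  {True: 0.90, False: 0.10},
    False: {True: 0.05, False: 0.95},
}

def joint(burglary: bool, alarm: bool) -> float:
    # Chain rule for this two-node network: P(B, A) = P(B) * P(A | B).
    return p_burglary[burglary] * p_alarm_given_burglary[burglary][alarm]

def most_probable_explanation(alarm_observed: bool) -> bool:
    # Enumerate values of the unobserved variable (Burglary) consistent
    # with the evidence and return the most probable one.
    return max((True, False), key=lambda b: joint(b, alarm_observed))

# With these numbers, an alarm alone is best explained by NO burglary
# (false alarms from the common case outweigh the rare true positives).
print(most_probable_explanation(alarm_observed=True))  # prints False
```

Real forensic networks have many more variables, so exact enumeration is replaced by dedicated inference algorithms, but the underlying question, which configuration of unobserved variables best explains the evidence, is the same.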
Imagine designing laws using data – quickly and easily researching, interrogating and understanding the statute book as a whole system. For the first time, the technologies needed to do this have become widely available and reasonably affordable. Moving beyond searching legal databases for strings of text, the ‘big data for law’ project aims to provide researchers with the data, tools and methods that have the potential to transform how the statute book is conceived and managed. In this poster presentation, John Sheridan, Head of Legislation Services at The National Archives in the UK, will present the work and findings of the big data for law project.
Juridical information is important to organizations and individuals alike and is needed in all walks of life. The Finnish government has therefore published Finnish law and related juridical documents on the Web as a service called Finlex. However, even though the documents there are openly available for humans to read, the underlying data has not been open: it is based on a traditional XML schema and does not conform to new semantic metadata standards and Linked Data principles. As a result, the data is difficult to re-use in applications, the datasets are not interoperable with each other and are difficult to link to external data sources, and much manual work is needed in producing and using the data. To mitigate these problems, this paper presents Semantic Finlex, the first attempt at publishing Finnish law as a Linked Open Data service, with an analysis and examples of benefits and challenges encountered when applying the technology.
Grants are the steam engine of the research enterprise. Each year, billions of dollars from public and philanthropic organisations enable research projects across the world. However, despite significant investment in research and innovation, the information about grants and their impact is hidden behind disconnected, undiscoverable and inaccessible information systems. In this paper we present recent developments and discuss the need for a coordinated effort toward enabling open access to grant information. The paper also proposes a framework for coordinating technical and policy-making efforts toward linking grants to grant outcomes (publications, patents, research data, etc.). Such a framework can be seen as the institutional side of meta-research innovation.
In this work, automatic analysis of themes contained in a large corpus of judgments from the public procurement domain is performed. The employed technique is unsupervised latent Dirichlet allocation (LDA). In addition, it is proposed to use LDA in conjunction with a recently developed method of unsupervised keyword extraction. Such an approach improves the interpretability of the automatically obtained topics and allows for better computational performance. The described analysis illustrates the potential of the method in detecting recurring themes and discovering temporal trends in lodged contract appeals. These results may in future be applied to improve information retrieval from repositories of legal texts, or as auxiliary material for legal analyses carried out by human experts.
In this paper we mine statutory texts for highly-specific functional information using NLP techniques and a supervised ML approach. We focus on regulatory provisions from multiple state jurisdictions (Pennsylvania and Florida), all dealing with the same general topic (i.e., public health system emergency preparedness and response). While the number of annotated provisions from any one jurisdiction is not large, we are investigating whether one can improve classification performance on one jurisdiction's statutory texts by including other jurisdictions' annotated statutory texts dealing with the same general topic. Our experiments suggest that data from one jurisdiction can be used to boost the performance of the classifiers trained for different jurisdictions.
Legislators, designers of legal information systems, as well as citizens often face problems due to the interdependence of laws and the growing number of references needed to interpret them. Quantifying this complexity is not an easy task. In this paper, we introduce the “Legislation Network” as a novel approach to address related problems. We have collected an extensive dataset from a legislation corpus spanning more than 60 years, as published in the Official Journal of the European Union, and we further analysed it as a complex network, thus gaining insight into its topological structure. Results are quite promising, showing that our approach can lead towards an enhanced explanation of the structure and evolution of legislation properties.
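A minimal sketch of the "Legislation Network" idea (the acts and references below are invented, not drawn from the paper's EU corpus): represent acts as nodes and cross-references as directed edges, then inspect simple topological measures.

```python
import networkx as nx

# Invented toy citation structure between legislative acts.
G = nx.DiGraph()
G.add_edges_from([
    ("Act A", "Act B"),   # Act A references Act B
    ("Act A", "Act C"),
    ("Act D", "Act B"),
    ("Act E", "Act B"),
])

# In-degree approximates how often an act is relied on by other acts --
# one crude proxy for the interpretive burden the abstract describes.
in_deg = dict(G.in_degree())
most_referenced = max(in_deg, key=in_deg.get)
print(most_referenced, in_deg[most_referenced])  # prints: Act B 3
```

On a real corpus the same representation supports richer network measures (centrality, clustering, evolution over time) of the kind the paper analyses.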
The increasing complexity of legal systems has many origins, which are worth a deeper analysis. This paper is an attempt to unveil the complexity in legal texts driven by structural, lexical and syntactical properties. To this end, we transferred established quantitative methods from structural network analysis and linguistics into the domain of legal text analysis. Based on 3,553 German laws and regulations, we calculated several structural and lexical indicators of complexity and determined highly significant correlations (p ≤ 0.01). The paper's contribution is a set of metrics, enabling a structured and objective comparison of legal texts regarding their complexity.
This paper argues that legal information access systems could be extended to allow for semantic as well as graph-based search. The challenge is to find documents not only on the basis of the intertextual relationships they share, but also based on their content descriptors. The paper presents a simple graph-based query language defined to meet the needs of the French Légilocal project.
In this paper we present the results of ongoing research aimed at a legal recommender system where users of a legislative portal receive suggestions of other relevant sources of law, given a focus document. We describe how we make references in case law to legislation explicit and machine readable, and how we use this information to adapt the suggestions of other relevant sources of law. We also describe an experiment in categorizing the references in case law, both by human experts and unsupervised machine learning. Results are tested in a prototype for Immigration Law.
This paper presents the application of the Akoma Ntoso XML standard at the Swiss Federal Chancellery, in particular to the Official Publications Centre document workflow in relation to the IRIs/URIs naming convention. A robust proof of concept was conducted on a variety of document types, and an IRIs/URIs resolver was implemented for managing dereferencing to the official sources.