
Ebook: Legal Knowledge and Information Systems

The 23rd edition of the JURIX conference was held in the United Kingdom from 15 to 17 December and was hosted by the University of Liverpool. This year submissions came from 18 countries, covering all five continents. These proceedings contain thirteen full and nine short papers that were selected for presentation. As usual, they cover a wide range of topics. Many contributions deal with formal or computational models of legal reasoning: reasoning with legal principles, two-phase democratic deliberation, burdens and standards of proof, argumentation with value judgments, and temporal reasoning in normative multi-agent systems. Another group of papers deals with research facilitating access to sources of law: the use of machine learning to map different thesauri, case frames to improve access to case law, and ways to resolve and standardize case law citations. A third group of papers centres on grasping the semantics of sources of law using natural language processing or visualisation techniques.
This volume contains the proceedings of the Twenty-Third International Conference on Legal Knowledge and Information Systems (JURIX 2010), which was held December 15th–17th at the University of Liverpool in the United Kingdom. The JURIX conferences are held under the auspices of the Dutch Foundation for Legal Knowledge Systems (www.jurix.nl).
This year we had 36 submissions from 18 countries, covering all five continents. (Though we still need to start an AI & Law community in Antarctica.)
The selected papers cover a wide range of topics, from standards for identifying case law to models of democratic deliberation. Many deal with formal or computational models of (parts of) legal reasoning: Araszkiewicz on reasoning with legal principles, Bench-Capon & Prakken on democratic deliberation, Bex & Walton on burdens and standards of proof, Governatori & Sartor also on burdens of proof, Grabmair & Ashley on argumentation with value judgments, and Smith et al. on temporal reasoning in normative multi-agent systems. Another group of papers deals with research facilitating access to sources of law: Bartoloni & Francesconi discuss the use of machine learning to map different thesauri, Hoekstra et al. the use of case frames to improve access to case law, and van Opijnen a way to resolve and standardize case law citations. A third group of papers centres on grasping the semantics of sources of law using some form of natural language processing: de Maat et al. compare machine learning to an explicit pattern-based approach to classifying sentences in legislation, Takano et al. formalize paragraphs of legal text using parsing and text patterns, and Wyner & Peters present a method for helping humans identify legal factors in cases. Finally, Boer & van Engers go back to the generic problem-solving tasks of knowledge-systems research of more than two decades ago, now in the context of public administration and with an eye on agents' roles.
Two invited lectures were given: a more theoretical one by Prof. Wiebe van der Hoek of the University of Liverpool on “Reasoning about Normative Systems”, and a more applied one by John Sheridan, Head of e-Services in the Information Policy and Services Directorate of the UK's National Archives, on delivering legislation online using a Linked Data approach. Two workshops were organized: one on “Modelling Legal Cases and Legal Rules” and one on “Online Dispute Resolution”.
Acknowledgments
A conference like JURIX is not possible without the effort and support of the members of the Program Committee:
Kevin D. Ashley (Univ. of Pittsburgh, USA)
Katie Atkinson (University of Liverpool, UK)
Danielle Bourcier (University of Paris 2, France)
Joost Breuker (University of Amsterdam, NL)
Pompeu Casanovas (Autonomous University of Barcelona, Spain)
Jack Conrad (Thomson Legal & Regulatory, USA)
Tom van Engers (University of Amsterdam, NL)
Enrico Francesconi (ITTIG-CNR, Florence, Italy)
Thomas F. Gordon (Fraunhofer FOKUS, Berlin, Germany)
Guido Governatori (University of Queensland, Australia)
Carole D. Hafner (Northeastern University, USA)
Rinke Hoekstra (Vrije Universiteit Amsterdam, NL)
Gloria Lau (FindLaw, USA)
Arno Lodder (Vrije Universiteit Amsterdam, NL)
Qiang Lu (Thomson Reuters, USA)
L. Thorne McCarty (Rutgers University, USA)
Marie-Francine Moens (K.U. Leuven, Belgium)
Laurens Mommers (Universiteit Leiden, NL)
Monica Palmirani (University of Bologna, Italy)
Henry Prakken (Utrecht University, NL)
Paulo Quaresma (Universidade de Evora & Nova de Lisboa, Portugal)
Giovanni Sartor (Università di Bologna, Italy)
Ken Satoh (National Institute of Informatics and Sokendai, Tokyo, Japan)
Burkhard Schafer (University of Edinburgh, Scotland)
Uri Schild (Bar Ilan University, Israel)
Erich Schweighofer (Universität Wien, Austria)
Bart Verheij (University of Groningen, NL)
Fabio Vitali (University of Bologna, Italy)
Douglas N. Walton (Univ. of Windsor, Canada)
Adam Wyner (University of Liverpool, UK)
John Zeleznikow (Victoria University, Australia)
Thanks also to the external referees for their invaluable support of the work of the Program Committee. We thank all authors for submitting their work, and the authors of accepted papers for responding to the reviewers' comments and abiding by our production schedule. Finally, special thanks to Katie Atkinson for taking on the responsibility of organising JURIX 2010.
I would like to end with a personal note. This 23rd JURIX conference is a special one for me, since it is the last in my period as president of the JURIX foundation, a position I have held since 2002. One of my goals has always been to increase the international orientation of JURIX and its conferences. The first step was to host the conference outside of the Netherlands, so we moved to London in 2002. The second step was a foreign program chair, Trevor Bench-Capon of the University of Liverpool, also in 2002. Since then the conference has gone abroad every other year (Berlin, Brussels, Paris and Florence), and the program committee has become larger and more international, as have the number of submitted papers and conference participants. This year we are back in the United Kingdom, in Liverpool. Until now I have attended all 23 JURIX conferences with great pleasure and interest, and I hope to continue doing so for a long time.
Radboud Winkels
Program Chair
Leibniz Center for Law, University of Amsterdam, winkels@uva.nl
We develop a model of normative systems in which, roughly speaking, the model is a transition-based system and a norm is the result of flagging some of the transitions as undesirable. We then use a language close to that of Computation Tree Logic to reason about such systems. We demonstrate how our framework facilitates reasoning about the following three settings:
(1) Although normative systems, or social laws, have proved to be a highly influential approach to coordination in multi-agent systems, the issue of compliance to such normative systems remains problematic. In all real systems, it is possible that some members of an agent population will not comply with the rules of a normative system, even if it is in their interests to do so. It is therefore important to consider the extent to which a normative system is robust, i.e., the extent to which it remains effective even if some agents do not comply with it.
(2) We then show how power indices, originally developed within voting theory, can be applied to understanding the relative importance of agents when we attempt to devise a coordination mechanism using the paradigm of normative systems. Understanding how pivotal an agent is with respect to the success of a particular social law is of benefit when designing such social laws: we might typically aim to ensure that power is distributed evenly amongst the agents in a system, to avoid bottlenecks or single points of failure.
(3) We then briefly sketch how the notion of a goal can be introduced to a normative system, enabling us to conceive of such a system as a game in which the agents make strategic decisions regarding the balance between norm compliance and the satisfaction of their goals.
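To make settings (1) and (2) concrete, here is a minimal Python sketch under invented assumptions (the states, agents and norm are toy examples, and the index below is a Banzhaf-style pivotality measure rather than the talk's own formal machinery): a norm flags transitions as undesirable, robustness asks which sets of compliant agents keep the bad state unreachable, and an agent's power is the fraction of coalitions for which it is pivotal.

```python
from itertools import combinations

# Toy normative system: transitions labelled by the agent that controls
# them, with a norm flagging some transitions as undesirable.
TRANSITIONS = {
    ("s0", "s1"): "a1",
    ("s0", "bad"): "a2",
    ("s1", "s2"): "a1",
    ("s1", "bad"): "a2",
}
NORM = {("s0", "bad"), ("s1", "bad")}  # flagged (forbidden) transitions

def reachable(start, compliant):
    """States reachable when only agents in `compliant` obey the norm."""
    seen, stack = {start}, [start]
    while stack:
        s = stack.pop()
        for (u, v), agent in TRANSITIONS.items():
            if u == s and not ((u, v) in NORM and agent in compliant):
                if v not in seen:
                    seen.add(v)
                    stack.append(v)
    return seen

def effective(compliant, bad="bad"):
    """The norm succeeds if the bad state stays unreachable."""
    return bad not in reachable("s0", compliant)

def banzhaf(agent, agents):
    """Fraction of coalitions of the other agents for which `agent` is
    pivotal: the norm fails without it and succeeds once it complies."""
    others = sorted(agents - {agent})
    coalitions = [set(c) for r in range(len(others) + 1)
                  for c in combinations(others, r)]
    pivotal = sum(1 for c in coalitions
                  if not effective(c) and effective(c | {agent}))
    return pivotal / len(coalitions)

# Here a2 controls every flagged transition, so it is a single point
# of failure: the norm is effective exactly when a2 complies.
print(effective({"a2"}), banzhaf("a2", {"a1", "a2"}))
```

In this toy system a2's index is 1.0 and a1's is 0.0, which is exactly the uneven power distribution the abstract warns a designer of social laws to avoid.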
How is the web changing the nature of public access to legislation? The Scythian philosopher Anacharsis did not have the World Wide Web or legal informatics in mind when he compared written laws to spiders' webs, but the simile is apposite. The web has been a force for massively democratising access to legislation and other legal materials. How does the public interact with primary sources of legislation as part of their engagement with legal systems and processes? What do they think they are looking at when they interact with legislation online, and what do they understand about the information they are viewing? This keynote address examines the role and expectations of a public legislation service from the perspective of legislation.gov.uk. It will explore the motivations and needs of users, as well as some of the unique challenges associated with delivering legislation online.
Web technologies provide the foundation for much of the current research in legal informatics. With legislation.gov.uk the UK Government has taken the first steps towards exposing the statute book as Linked Data, as part of the government's transparency agenda. The aim of transparency is to make government more accountable, through services such as data.gov.uk. This keynote address will describe areas of immediate and sometimes surprising utility of a Linked Data approach to the statute book. What is the role of a Linked Data Statute Book in a web of government data, such as that being nurtured by data.gov.uk? The talk will discuss the core elements of legislation.gov.uk that enable its use and exploitation as Linked Data and the role the statute book can play as a resource on the linked data web.
Robert Alexy is one of the main advocates of the so-called Rules and Principles Theory (hereafter: RPT). According to the RPT, legal norms can be divided into legal rules and legal principles. One of the main criteria for this distinction is, Alexy argues, that legal rules are applied by means of the Subsumption Formula, while legal principles are applied by means of the so-called Weight Formula (hereafter: WF). The WF offers an important insight into the structure of the process of balancing in legal reasoning. Alexy's proposal, however, raises many doubts and questions. The aim of this paper is to examine the appropriateness of the WF and the problem of balancing in legal reasoning from a perspective inspired by the constraint-satisfaction theory of coherence developed by Paul Thagard. My claim is that this theory enables us to elucidate many problematic features of the WF and to recast the structure of legal balancing in a more transparent and efficient manner. The existing workable algorithms designed for computing other kinds of coherence-based reasoning (for instance, explanatory or analogical reasoning) make it possible to adapt the programs employing these algorithms to the computation of coherence in the balancing of principles. However, the analysis presented here is mainly conceptual and only preparatory with respect to possible future computational implementations.
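For reference, the Weight Formula discussed in this abstract is standardly stated in Alexy's work as a ratio comparing two competing principles $P_i$ and $P_j$ in the concrete case:

```latex
W_{i,j} \;=\; \frac{I_i \cdot W_i \cdot R_i}{I_j \cdot W_j \cdot R_j}
```

where $I_i$ is the intensity of interference with principle $P_i$, $W_i$ its abstract weight, and $R_i$ the reliability of the empirical assumptions concerning that interference; $P_i$ takes precedence over $P_j$ in the case at hand when $W_{i,j} > 1$.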
The availability of public administration semantic Web services is strictly linked to the availability of knowledge resources. In particular, their interoperability allows knowledge sharing and reuse, so as to provide integrated services in a distributed environment. In this paper a machine learning technique for guaranteeing thesaural interoperability by conceptual mapping within an information retrieval framework is presented. In particular, the case of thesaural interoperability for cross-collection legal information retrieval services at the EU level is discussed.
A formal two-phase model of democratic policy deliberation is presented, in which in the first phase sufficient and necessary criteria for proposals to be accepted are determined (the ‘admissible criteria’), and in the second phase proposals are made and evaluated in light of the admissible criteria resulting from the first phase. Argument schemes for both phases are defined and formalised in a logical framework for structured argumentation. The process of deliberation is abstracted from, and it is assumed that both deliberation phases result in a set of arguments and attack and defeat relations between them. Preferred semantics is then used to evaluate the acceptability status of criteria and proposals.
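As a concrete illustration of the final evaluation step, here is a small Python sketch of preferred semantics over an abstract (Dung-style) argumentation framework. The three arguments and the attack relation are invented for illustration; a preferred extension is a maximal admissible set, i.e. a maximal conflict-free set that defends all its members.

```python
from itertools import chain, combinations

# Toy framework: A and B attack each other, and B attacks C.
ARGS = {"A", "B", "C"}
ATTACKS = {("A", "B"), ("B", "A"), ("B", "C")}

def conflict_free(s):
    return all((a, b) not in ATTACKS for a in s for b in s)

def defends(s, a):
    # Every attacker of `a` is counter-attacked by some member of `s`.
    return all(any((d, b) in ATTACKS for d in s)
               for (b, x) in ATTACKS if x == a)

def admissible(s):
    return conflict_free(s) and all(defends(s, a) for a in s)

def preferred(args):
    subsets = [set(c) for c in chain.from_iterable(
        combinations(sorted(args), r) for r in range(len(args) + 1))]
    adm = [s for s in subsets if admissible(s)]
    # Preferred extensions are the maximal admissible sets.
    return [s for s in adm if not any(s < t for t in adm)]

print(preferred(ARGS))  # two preferred extensions: {B} and {A, C}
```

In the toy framework the two preferred extensions, {B} and {A, C}, correspond to the two internally coherent positions a deliberating party could credulously accept.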
In this paper, we provide a formal logical account of the burden of proof and proof standards in legal reasoning. As opposed to the usual argument-based models, we use a hybrid model of Inference to the Best Explanation, which uses stories or explanations as well as arguments. We use examples of real cases to show that our hybrid reasoning model allows for a natural modelling of burdens and standards of proof.
In public administration, attempts to specify a unified interpretation of law seem to end in a specification with little operational meaning at all. The same unit of discourse in the sources of law usually plays many different knowledge roles, and ends up with – sometimes subtly – different operational meanings in each.
Knowledge acquisition from law in public administration usually subscribes to the notion of tasks to express the use of knowledge, even though this notion has clear deficiencies in dealing with variations in meaning due to context, particularly when dealing with a social construct like the legal institution. In the academic field, on the other hand, we see a move to multi-agent systems to explain the meaning of legal institutions as described by the sources of law. These, however, hardly do justice to the wide variety of problem-solving behaviours found in organizations, and the pragmatic reasons for that variety.
In this paper we make an inventory of generic problem solving tasks in public administration, based on our experiences in case studies in a tax administration and an immigration and naturalization administration. We propose a typology of problems and discuss its conceptual connection to social agent roles that can be simulated in a multi-agent simulation environment.
We shall argue that burdens of proof are also relevant to monological reasoning, i.e., to deriving the conclusions of a knowledge base allowing for conflicting arguments. Reasoning with burdens of proof can provide a useful extension of current argument-based non-monotonic logics, or at least a different perspective on them. Firstly, we provide an objective characterisation of burdens of proof, assuming that burdens concern rule antecedents (literals in the bodies of rules) rather than agents. Secondly, we analyse the conditions for a burden to be satisfied, by considering credulous or skeptical derivability of the concerned antecedent or of its complement. Finally, we develop a method for drawing inferences from a knowledge base merging rules and proof burdens in the framework of defeasible logic.
This paper presents a formalism modeling legal reasoning with fact patterns and their substantive effects on legal values. It centers on the concept of a value judgment, i.e. a determination that one factual situation is preferable over another by virtue of their respective effects on values. This allows the modeling of legal sources as sets of value judgments and legal methodologies as collections of argumentation schemes. The paper briefly derives the formalism from legal theory and elaborates on its use in the context of an example of hypothetical reasoning.
This paper introduces case frames as a way to provide a more meaningful structure to vocabulary mappings used to bridge the gap between laymen's and legal descriptions of court proceedings. Case frames both reduce the ambiguity of queries and improve the ability of users to formulate good-quality queries. We extend the BestMap ontology with a formalisation of case-frame-based mappings in OWL 2, present a new version of BestPortal, and show how case frames impact retrieval results compared to simple contextual mappings and a direct fulltext search.
This paper presents the results of an experiment in which we used machine learning (ML) techniques to classify sentences in Dutch legislation. These results are compared to the results of a pattern-based classifier. Overall, the ML classifier performs as accurately (>90%) as the pattern-based one, but seems to generalize worse to new laws. Given these results, the pattern-based approach is to be preferred, since its reasons for classification are explicit and can be used for further modelling of the content of the sentences.
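To illustrate what an explicit pattern-based sentence classifier looks like, here is a minimal Python sketch. The cue-word patterns and labels are invented (and use English glosses rather than Dutch); they are not de Maat et al.'s actual rules, but they show why such a classifier's reasons for classification are transparent: the matched pattern itself is the explanation.

```python
import re

# Illustrative cue-word patterns for three sentence types found in
# legislation; the first matching pattern determines the label.
PATTERNS = [
    ("obligation", re.compile(r"\b(shall|must|is obliged to)\b", re.I)),
    ("permission", re.compile(r"\b(may|is permitted to)\b", re.I)),
    ("definition", re.compile(r"\bmeans\b|\bis understood to be\b", re.I)),
]

def classify(sentence):
    """Return the label of the first matching pattern, else 'other'."""
    for label, pattern in PATTERNS:
        if pattern.search(sentence):
            return label
    return "other"

print(classify("The employer shall keep a record."))      # obligation
print(classify("The board may grant an exemption."))      # permission
print(classify("'Vehicle' means any device for transport."))  # definition
```

Unlike an ML classifier, misclassifications here can be traced to a specific pattern and repaired directly, which is the maintainability advantage the abstract alludes to.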
Case law is cited by various types of identifiers: neutral citation numbers, vendor-specific identifiers, and triples of court name, judgment date and case number, in all possible combinations and spelling variants. Such diversity hinders the development of citation indexes, proper hyperlinking and statistical analysis. This paper discusses a solution to recognize, normalize and deduplicate case law citations in unstructured documents.
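The recognize–normalize–deduplicate pipeline can be sketched in a few lines of Python. Everything here is an invented toy (one regular expression for one family of Dutch Supreme Court citations and a made-up canonical key format), not van Opijnen's actual rules; the point is only that once spelling variants map to a single canonical key, deduplication becomes trivial.

```python
import re

# Toy recognizer for citations of the form "HR <day> <month> <year>"
# or "Hoge Raad, <day> <month> <year>" (HR = Hoge Raad, the Dutch
# Supreme Court). The canonical key format is invented for illustration.
PATTERN = re.compile(
    r"(?:HR|Hoge\s+Raad)[,\s]+(\d{1,2})\s+(\w+)\s+(\d{4})", re.IGNORECASE)
MONTHS = {"januari": 1, "februari": 2, "maart": 3, "april": 4, "mei": 5,
          "juni": 6, "juli": 7, "augustus": 8, "september": 9,
          "oktober": 10, "november": 11, "december": 12}

def normalize(text):
    """Recognize citation variants and map each to a canonical key."""
    keys = []
    for day, month, year in PATTERN.findall(text):
        m = MONTHS.get(month.lower())
        if m:
            keys.append(f"HR:{year}-{m:02d}-{int(day):02d}")
    return keys

def deduplicate(keys):
    """Collapse spelling variants that normalize to the same citation."""
    return list(dict.fromkeys(keys))

text = "Zie HR 15 december 2010 en Hoge Raad, 15 December 2010."
print(deduplicate(normalize(text)))  # both variants collapse to one key
```

A production system must of course handle far more courts, formats and misspellings, which is exactly why the paper treats this as a research problem rather than a one-regex fix.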
We address some forms of temporal reasoning within normative MAS, focusing on the combination of temporal logics with multi-modal multi-agent logics. We suggest perspectives on how these combinations can be used for modelling aspects of time within lawful provisions, obligations, and legal principles. The main contributions are the new variant of deontic tense logic using hybrid logic, and the combination of time and obligations.
A logical formulation system serves to verify the consistency of legal documents and to eliminate inconsistent parts from a set of articles. We are studying legal document analysis methods, and in this paper we focus on how to deal with multiple sentences that constitute a paragraph of an article in a law. They need to be processed together because they are semantically dependent on each other. We analyzed the National Pension Law of Japan and found that the relations between sentences and their logical structures can be classified into four main types. We implemented a logical formulation system, which showed reasonable accuracy in properly handling paragraphs consisting of multiple sentences.
Legal case factors are textually represented facts reported in legal case decisions. Precedent decisions contribute to the decision of a case under consideration. As textually represented facts, factors linguistically encode semantic properties and relationships among entities, which can be leveraged to identify and extract the legal case factors from decisions. We integrate legal and linguistic resources in a text analysis tool with which we annotate textual passages. Using annotations tailored to legal case factors, the legal researcher can rapidly zero in on textual spans that represent specific combinations of factors, participants, and semantic properties bearing on who played what role with respect to a factor. The paper reports progress on the development of this tool.
This article reports on the development of a system for analyzing property transfers. Under the common law, the relationship between a transferor's language and the particular present and future interests in property that it creates can be technical and complex. This article describes a system that addresses some challenges in the law that governs property transfers.
The paper describes a collaborative project between computer scientists, lawyers, police officers, medical professionals and social workers to develop a communication infrastructure that allows information sharing while observing Data Protection law “by design”, through a formal representation of legal rules in a firewall-type system.
The paper describes the development of a legal decision support guide to relevant case law decisions for owners corporation cases in the state of Victoria, Australia. The rate of growth of owners corporations (also known as body corporate or strata title properties) has increased significantly in the last two decades. Because of this growth, and the need to manage a rapidly expanding population, the governance and management of these entities has become an important concern for government. Conflict and its management within them is an essential element of this concern. The Victorian legislation outlines a three-tiered process for resolving disputes. Cases that cannot be settled through negotiation are often referred to the Victorian Civil and Administrative Tribunal (VCAT). Through our system we aim to provide legal decision support based on relevant case law decisions determined by VCAT, to help guide disputants through the grievance process.
This article presents the FormaLex toolset: an approach to legislative drafting that, based on the similarities between software specifications and some types of regulations, uses off-the-shelf LTL model checkers to perform automatic analysis of normative systems.
This paper briefly shows how Intuitionistic Description Logic can be considered a good alternative to classical ALC as far as formalizing legal knowledge is concerned.
Typically, legal reasoning involves multiple temporal dimensions. The contribution of this work is to extend LKIF-rules (LKIF is a proposed mark-up language designed for legal documents and legal knowledge in the ESTRELLA Project [3]) with temporal dimensions. We propose an XML schema to model the various aspects of the temporal dimensions in the legal domain, and we discuss the design choices. We illustrate the use of the temporal dimensions in rules with the help of real-life examples.