Ebook: Legal Knowledge and Information Systems
The 22nd edition of the JURIX conference was held in Rotterdam on 17 and 18 December and was hosted by the Erasmus University Rotterdam. While the conference was back in its country of origin, JURIX continues to attract a wide international audience: this year, the conference received submissions from all five continents, which clearly demonstrates the lively and growing interest in the highly interdisciplinary field of legal informatics. The papers selected for this edition of JURIX cover a wide variety of topics in legal informatics, ranging from contributions on established fields such as legal document management, argumentation, case-based reasoning, dispute resolution, support for legal drafting and ontologies, to emerging areas such as regulatory compliance, normative multi-agent systems and game theory, as well as application areas, for example fraud detection, legal tutoring systems and legal decision support systems.
Jurix 2009, the 22nd edition of the Annual Conference on Legal Knowledge and Information Systems, returned to its country of origin, the Netherlands. The conference nevertheless continues to grow internationally. Again, the contributions come from four continents and thirteen countries: Australia, Austria, Belgium, Hungary, Italy, Japan, Luxembourg, the Netherlands, Poland, Portugal, Spain, the United Kingdom, and the USA. This year we received 34 submissions from 15 countries and all five continents. After a rigorous selection process in which each paper was assessed by three or four referees, 15 papers were accepted as full papers and 6 as short papers.
The accepted papers range over a very wide spectrum of topics in legal informatics: from the traditional (at least for Jurix) contributions on legal document management, argumentation, case-based reasoning, dispute resolution, support for legal drafting and ontologies, to new areas such as regulatory compliance, normative multi-agent systems and game theory, and on to application areas, for example fraud detection, legal tutoring systems and legal decision support systems.
The main Jurix conference is complemented by the “AI Approaches to the Complexity of Legal Systems: Multilingual ontologies, Multiagent systems, Distributed networks (AICOL-09)” workshop and two tutorials on
• Natural Language Processing Techniques for Managing Legal Resources on the Semantic Web
• Business Process Compliance
The workshop covers both some of the most significant issues in contemporary legal informatics and challenging opportunities for use-inspired research in this fascinating field.
We thank the members of the Program Committee for their effort and very valuable contribution; without them it would not be possible to maintain and improve the high scientific standard the conference has now reached.
• Kevin D. Ashley, University of Pittsburgh, USA
• Katie Atkinson, University of Liverpool, UK
• Emilia Bellucci, Victoria University, Melbourne, Australia
• Trevor J.M. Bench-Capon, University of Liverpool, UK
• Jon Bing, University of Oslo, Norway
• Guido Boella, University of Torino, Italy
• Danièle Bourcier, CNRS CERSA, University of Paris 2, France
• Pompeu Casanovas, Universitat Autonoma de Barcelona, Spain
• Tom van Engers, University of Amsterdam, The Netherlands
• Enrico Francesconi, ITTIG-CNR, Florence, Italy
• Thomas F. Gordon, Fraunhofer FOKUS, Berlin, Germany
• Carole D. Hafner, Northeastern University, USA
• Joris Hulstijn, Free University Amsterdam, The Netherlands
• Renato Iannella, NICTA, Australia
• Gloria T. Lau, FindLaw & Stanford University, USA
• Arno R. Lodder, VU University Amsterdam & CEDIRE.org, The Netherlands
• Ronald P. Loui, Washington University St. Louis, USA
• Jenny Eriksson Lundström, NITA, Uppsala Universitet, Sweden
• Laurens Mommers, Universiteit Leiden, The Netherlands
• Zoran Milosevic, Deontik, Australia
• Katsumi Nitta, Tokyo Institute of Technology, Yokohama, Japan
• Kees van Noortwijk, Erasmus University Rotterdam, The Netherlands
• Anja Oskamp, VU University Amsterdam, The Netherlands
• Monica Palmirani, University of Bologna, Italy
• Adrian Paschke, TU Dresden, Germany
• Henry Prakken, Universiteit Groningen & Universiteit Utrecht, The Netherlands
• Paulo Quaresma, Universidade de Evora & Universidade Nova de Lisboa, Portugal
• Antonino Rotolo, University of Bologna, Italy
• Giovanni Sartor, European University Institute, Florence & Cirsfid, University of Bologna, Italy
• Ken Satoh, National Institute of Informatics, Japan
• Burkhard Schafer, University of Edinburgh, Scotland
• Daniela Tiscornia, ITTIG-CNR, Florence, Italy
• Leon van der Torre, University of Luxembourg, Luxembourg
• Bart Verheij, Universiteit Groningen, The Netherlands
• Douglas N. Walton, University of Windsor, Canada
• Mary-Anne Williams, University of Technology, Sydney, Australia
• Radboud Winkels, University of Amsterdam, The Netherlands
• John Zeleznikow, Victoria University, Melbourne, Australia
Many thanks also to the external referees for their invaluable support to the work of the Program Committee. We thank the authors for submitting good papers, responding to the reviewers' comments, and abiding by our production schedule. Finally, a special thanks to Kees van Noortwijk for taking on the daunting task of organising Jurix 2009 in cooperation with a PC chair from a land down under.
Guido Governatori, Brisbane, October 2009
Validation of XML documents is required in order to maintain consistency in large XML document bases, including collections of legal texts such as acts, judgments and Hansards. Current W3C standards for XML validation either do not provide enough precision (DTD) or are too complex to be immediately authored and read by humans (XML Schema). DTD++ has been proposed as an alternative, and relevant legal standards such as Norme In Rete (Italy), Akoma Ntoso (UN for Africa) and CEN Metalex (European CEN standard) are first written in DTD++ and then converted into XML Schema and/or DTD for standardisation purposes. XDTD is a follow-up to DTD++ and provides a shorter and simplified syntax for XML Schema. XDTD combines the power of the XML Schema model with the readability of DTD. The whole set of features of the XML Schema language, including the new ones in the forthcoming 1.1 version, is available in XDTD, while maintaining the readability and compactness of the original DTD language. In this paper we show how XDTD simplifies the compilation of vocabularies, with particular attention to the Akoma Ntoso, Norme In Rete and CEN Metalex legal standards.
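To make the validation step concrete: the following is a minimal sketch of XML Schema validation of a legal document using the lxml library. It illustrates plain XSD validation rather than XDTD itself, and the file names are placeholders.

```python
# Minimal sketch: validating a legal XML document against an XML Schema.
# This illustrates the validation step referred to above; it does not use
# XDTD itself. File names are hypothetical placeholders.
from lxml import etree

schema_doc = etree.parse("akoma_ntoso.xsd")      # hypothetical schema file
schema = etree.XMLSchema(schema_doc)

doc = etree.parse("act_2009_42.xml")             # hypothetical legal document
if schema.validate(doc):
    print("Document is valid.")
else:
    for error in schema.error_log:
        print(error.message, "at line", error.line)
```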
This paper studies the use of hypothetical and value-based reasoning in US Supreme Court cases concerning the Fourth Amendment to the United States Constitution. Drawing upon formal AI & Law models of legal argument, a semi-formal reconstruction is given of parts of the Carney case, which has been studied previously in AI & Law research on case-based reasoning. The result is compared with Rissland's (1989) analysis in terms of dimensions and Ashley's (2008) analysis in terms of his process model of legal argument with hypotheticals.
In this paper we introduce and discuss five guidelines for the use of normative systems in computer science. We adopt a multiagent systems perspective, because norms are used to coordinate, organize, guide, regulate or control interaction among distributed autonomous systems. The guidelines are derived from the computer science literature. From the so-called ‘normchange’ definition of the first workshop on normative multiagent systems in 2005 we derive the guidelines to motivate which definition of normative multiagent system is used, to make explicit why norms are a kind of (soft) constraint deserving special analysis, and to explain why and how norms can be changed at runtime. From the so-called ‘mechanism design’ definition of the second workshop on normative multiagent systems in 2007 we derive the guidelines to discuss the use and role of norms as a mechanism in a game-theoretic setting, and to clarify the role of norms in the multiagent system.
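As an illustration of norms as a mechanism in a game-theoretic setting (our own sketch, not taken from the paper): in the two-player game below, adding a sanction for violating the norm shifts the Nash equilibrium from mutual violation to mutual compliance. The payoff values and the size of the sanction are hypothetical.

```python
# Minimal sketch (illustrative only): a norm as a mechanism in a two-player game.
# A sanction on violating the norm changes which strategy profile is a Nash
# equilibrium. Payoffs and sanction size are hypothetical.
import itertools

PAYOFFS = {  # (row, col) -> (row payoff, col payoff); "C" = comply, "V" = violate
    ("C", "C"): (3, 3),
    ("C", "V"): (0, 5),
    ("V", "C"): (5, 0),
    ("V", "V"): (1, 1),
}

def with_sanction(payoffs, sanction=4):
    """Apply a normative sanction to every player who violates the norm."""
    out = {}
    for (r, c), (pr, pc) in payoffs.items():
        out[(r, c)] = (pr - (sanction if r == "V" else 0),
                       pc - (sanction if c == "V" else 0))
    return out

def nash_equilibria(payoffs):
    strategies = ("C", "V")
    equilibria = []
    for r, c in itertools.product(strategies, strategies):
        pr, pc = payoffs[(r, c)]
        row_ok = all(payoffs[(r2, c)][0] <= pr for r2 in strategies)
        col_ok = all(payoffs[(r, c2)][1] <= pc for c2 in strategies)
        if row_ok and col_ok:
            equilibria.append((r, c))
    return equilibria

print("Without norm:", nash_equilibria(PAYOFFS))                   # [('V', 'V')]
print("With sanction:", nash_equilibria(with_sanction(PAYOFFS)))   # [('C', 'C')]
```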
MetaLex XML is an interchange format, a lowest common denominator for other standards. It is intended not to replace jurisdiction-specific standards and vendor-specific formats in the publication process, but to impose a standardized view on legal documents for the purposes of information exchange and interoperability in the context of software development.
This paper elaborates on the naming convention mechanism, explaining certain design decisions to be made by Semantic Web developers implementing or using MetaLex naming that are not made explicit in the official 2009 standard proposal.
There is an ongoing debate in law and accounting about the relative merits of principle-based versus rule-based regulatory systems. In this paper we characterize the kind of reasoning that underlies the two styles of regulation. We adapt the original account of Verheij et al. (1998) to take aspects of the implementation context into consideration, such as the process of adopting a new norm and the roles of the participants. The model is validated by a comparison between EU and US customs regulations intended to enhance safety and security in international trade. The EU regulations (AEO self-assessment) are essentially principle-based, whereas the American system (C-TPAT) is rule-based.
The advances in telecommunication technologies observed in recent years have rapidly brought about new ways of doing business. This new reality, however, has not been so rapidly followed by the entities responsible for dealing with the conflicts that arise from these interactions, which are now undertaken in an electronic format. Traditional paper-based courts, designed for the industrial era, are now outdated. The answer to this problem may lie in new tools that can be built using artifacts from fields such as Artificial Intelligence. Using these tools the parties can simulate outcomes, and thus gain a better notion of the possible consequences of a legal dispute, namely in terms of the Best and Worst Alternative to a Negotiated Agreement. In this paper, we present our agent-based architecture for such a tool, UMCourt, placing special emphasis on a particular agent that, based on the concept of legal precedent, provides its users with a set of possible outcomes of a case derived from the observation of past similar cases, and that learns new cases in order to enrich its knowledge base about Portuguese labor law.
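A minimal sketch of the kind of precedent-based outcome estimation described above, assuming cases are represented as simple feature dictionaries; this is an illustration only, not UMCourt's actual retrieval algorithm.

```python
# Minimal sketch (illustrative only): retrieving the most similar past cases
# to estimate possible outcomes of a new dispute. Case features and the
# similarity measure are hypothetical.
from dataclasses import dataclass

@dataclass
class Case:
    features: dict      # e.g. {"dismissal_with_cause": 0, "years_of_service": 7}
    outcome: str        # e.g. "compensation awarded"

def similarity(a: dict, b: dict) -> float:
    """Fraction of features with matching values."""
    keys = set(a) | set(b)
    return sum(a.get(k) == b.get(k) for k in keys) / len(keys) if keys else 0.0

def likely_outcomes(new_case: dict, case_base: list, k: int = 3):
    ranked = sorted(case_base, key=lambda c: similarity(new_case, c.features),
                    reverse=True)
    return [(c.outcome, round(similarity(new_case, c.features), 2))
            for c in ranked[:k]]

case_base = [
    Case({"dismissal_with_cause": 0, "years_of_service": 7}, "compensation awarded"),
    Case({"dismissal_with_cause": 1, "years_of_service": 2}, "claim rejected"),
]
print(likely_outcomes({"dismissal_with_cause": 0, "years_of_service": 5}, case_base))
```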
In the field of legal knowledge engineering, the development of expert systems is becoming more common. At present, we are in the process of building a new environment for developing legal knowledge systems. Our previous knowledge-based system development tool, AllexGold, includes a rule-based reasoning engine with a transparent interface but suffers from limitations in expressive power. The new environment, Emerald, is being built around Semantic Web technologies, exploiting the benefits of using description logic (DL) reasoners. However, to keep the advantages of rule formalisms and interactive dialogues, we combine DL reasoning with production rules and a backward chaining algorithm. The article gives an overview of the new legal knowledge engineering environment and some of its internals.
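The following sketch illustrates backward chaining over production rules with an interactive dialogue, in the spirit of the combination described above; the rule base and the askable facts are hypothetical, and this is not Emerald's actual engine.

```python
# Minimal sketch (illustrative only): backward chaining over production rules,
# asking the user about primitive facts that cannot be derived, which is the
# basis of an interactive dialogue. Rules and fact names are hypothetical.
RULES = {  # goal: list of alternative bodies (conjunctions of subgoals)
    "eligible_for_benefit": [["is_resident", "income_below_threshold"]],
    "income_below_threshold": [["income_low"]],
}

def prove(goal, facts, asked):
    """Try to derive `goal`; ask the user about unknown primitive facts."""
    if goal in facts:
        return facts[goal]
    for body in RULES.get(goal, []):
        if all(prove(sub, facts, asked) for sub in body):
            facts[goal] = True
            return True
    if goal not in RULES and goal not in asked:   # askable primitive fact
        asked.add(goal)
        answer = input(f"Is '{goal}' true? (y/n) ").strip().lower() == "y"
        facts[goal] = answer
        return answer
    facts[goal] = False
    return False

print("Eligible:", prove("eligible_for_benefit", {}, set()))
```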
In this paper an approach to legal rules modelling is presented, based on a semantic model for legislation and oriented towards knowledge reusability and sharing. An automatic methodology able to support rule learning is proposed as well: it is based on techniques of knowledge extraction from legislative texts. This combined approach aims to contribute to bridging the gap between consensus and authoritativeness in legal knowledge representation.
The BestPortal is part of an initiative that aims to improve the ability of citizens to determine their legal position. Publishing court proceedings is a natural step towards improving the accessibility and transparency of the legal system. We discuss the limitations both of such an ‘open data’ approach and of more traditional knowledge-intensive approaches, and present a flexible mechanism that allows us to bridge the gap between legal and layman conceptualisations of the world. This approach has been implemented as a publicly accessible portal and uses the BestMap ontology to define mappings between the two vocabularies.
Fuel laundering scams are prevalent in many countries. Typically, a single case may concern 100 companies, several hundred people, and up to 100 thousand money transfers and invoices. Analysing this amount of data is difficult even when it is stored in a database. To gain insight into the mechanism of a case, in this work we use an extension of a previously proposed ontology model, called the minimal model. The conceptual minimal model consists of eight layers of concepts, structured so as to use available data on facts to uncover relations. FuelFlowVis is an intelligent tool that supports a continuous visual analytic process by exploiting the following features: navigation between global and local views, and filters for displaying transactions by value, time, and type of goods. A user can inspect selected flows, which gives insight into crime patterns. We used the tool on three large Polish fuel laundering cases from the 2001–2003 period. For none of the cases do we have complete data. We find that the methods used to hide the proceeds of crime are very similar across the cases. The evidence as presented by prosecutors is of varied quality and depends on the size of the crime group. In all the cases the prosecutors had an enormous problem uncovering the money flows from the source of the money (the profit centre) to the sinks (where the money leaves companies and goes as cash to the organizers of the scheme). This occurs because traditional analytic tools (spreadsheets or non-semantic visualization tools) cannot provide information about chains of transactions – a view of separate binary relations does not give complete insight into the case. Prospects for future reasoning capabilities of the tool are also presented.
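To illustrate why chains of transactions matter: the sketch below follows money flows from a profit centre to a cash sink in a directed transaction graph, something a flat view of separate binary relations does not show. The transfer data are invented and the code is not part of FuelFlowVis.

```python
# Minimal sketch (illustrative only): following chains of money transfers from
# a profit centre to cash sinks in a directed transaction graph. Data invented.
from collections import defaultdict

transfers = [  # hypothetical data: (payer, payee, amount)
    ("ProfitCentre", "ShellCo A", 900_000),
    ("ShellCo A", "ShellCo B", 850_000),
    ("ShellCo B", "CashSink", 800_000),
    ("ProfitCentre", "CashSink", 50_000),
]

graph = defaultdict(list)
for payer, payee, amount in transfers:
    graph[payer].append((payee, amount))

def chains(source, target, path=None):
    """Enumerate all transfer chains from source to target (depth-first)."""
    path = (path or []) + [source]
    if source == target:
        yield path
        return
    for payee, _ in graph[source]:
        if payee not in path:              # avoid cycles
            yield from chains(payee, target, path)

for chain in chains("ProfitCentre", "CashSink"):
    print(" -> ".join(chain))
```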
Much work on probabilistic evidential reasoning for crime investigation employs probabilities that express subjective expert beliefs. This use of subjective probabilities is inevitable for several reasons, including lack of data, non-specificity of phenomena and fuzziness of concepts in this domain. Numerous representation formalisms and corresponding inference mechanisms have been developed to capture and reason with the intrinsic vagueness in subjective probabilities. In the literature, these schemes are largely presented as though they are diametrically opposed to one another. This paper critically examines what aspects of vagueness are captured by these different approaches. It demonstrates that they are concerned with different aspects of vagueness. This leads to a proposal of a method to combine the different approaches.
This paper presents an ontology-based model for an electronic information and transaction portal developed in the context of the electronic implementation of the EU Services Directive in Austria. The web portal will assist service providers in gathering information about rules on access to and exercise of service activities, and will finally guide the service provider to the relevant procedures, formalities and institutions. To reach this goal, information from and about different sources and across diverse authorities and geographical and functional jurisdictions has to be represented. This paper focuses on the design of an ontology capable of goal discovery, of connecting goals to electronic transactions, and of selecting, organizing and interlinking goal-relevant information from different information sources. For a first feasibility check the SeGoF framework was used.
Diagrammatic models of argument are increasingly prominent in AI and Law. Unlike everyday language, these models formalize many of the components and relationships present in arguments and permit a more formal analysis of an argument's structural weaknesses. Formalization, however, can raise problems of agreement. In order for argument diagramming to be widely accepted as a communication tool, individual authors and readers must be able to agree on the quality and meaning of a diagram as well as the role that key components play. This is especially problematic when arguers seek to map their diagrams to or from more conventional prose. In this paper we present results from a grader agreement study that we have conducted using LARGO diagrams. We then describe a detailed example of disagreement and highlight its implications both for our diagram model and for modeling argument diagrams in general.
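As an illustration of measuring grader agreement (the paper's own measure may differ), the sketch below computes Cohen's kappa for two graders labelling the role of diagram elements; the labels are hypothetical.

```python
# Minimal sketch (illustrative only): Cohen's kappa for two graders labelling
# the role of diagram elements. The element labels are hypothetical.
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    n = len(labels_a)
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    freq_a, freq_b = Counter(labels_a), Counter(labels_b)
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical roles assigned to six diagram elements by two graders.
grader_1 = ["test", "hypothetical", "fact", "test", "fact", "hypothetical"]
grader_2 = ["test", "hypothetical", "test", "test", "fact", "fact"]
print(round(cohens_kappa(grader_1, grader_2), 2))   # 0.5
```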
This paper describes extensions to our native XML legislative editor MetaVex that deal with specific requests from legislative drafters: the automatic generation of amending documents based on the editing of an existing law or proposal, and the automatic consolidation of (proposed) legislation based on the original and its amending documents. Moreover, we try to automatically detect (potential) clashes between amendments, and amendments of amendments.
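A minimal sketch of one way to flag potential clashes between amendments, namely when two amendments target the same provision; this is our own illustration, not MetaVex's implementation.

```python
# Minimal sketch (illustrative only): flagging potential clashes between
# amendments that target the same provision of the original text.
from dataclasses import dataclass
from itertools import combinations

@dataclass
class Amendment:
    id: str
    target: str          # provision the amendment modifies, e.g. "Art. 3(2)"
    action: str          # "replace", "delete", or "insert after"

amendments = [
    Amendment("A1", "Art. 3(2)", "replace"),
    Amendment("A2", "Art. 3(2)", "delete"),
    Amendment("A3", "Art. 5",    "insert after"),
]

def potential_clashes(amendments):
    """Pairs of amendments touching the same provision are flagged for review."""
    return [(a.id, b.id) for a, b in combinations(amendments, 2)
            if a.target == b.target]

print(potential_clashes(amendments))   # [('A1', 'A2')]
```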
This paper proposes a novel approach to studying legal interactions. In particular, we focus on cases of medical liability and investigate the mechanisms governing legal litigation in different judicial environments. To do this, we use an agent-based model in which lawyers are explicitly and individually represented. Lawyers in the model are heterogeneous in the sense that they may follow different argumentation strategies to try to win the cases of medical liability they are assigned. They may also change their strategy if they observe that other strategies work better in the particular context they are embedded in. In this way, our agent-based model offers a complementary approach to understanding legal interactions within an evolutionary framework. More concretely, we explore how various factors, such as the magnitude of legal expenses and the accuracy of the judicial system, affect the type of litigation strategies that are successful and prevail in a certain judicial context.
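A minimal sketch of the evolutionary dynamics described above: lawyers litigate, observe peers, and imitate strategies that have performed better. The strategies, win probabilities and imitation rule are hypothetical and much simpler than the paper's model.

```python
# Minimal sketch (illustrative only): lawyers following different litigation
# strategies and imitating strategies that have performed better. All numbers
# are hypothetical.
import random
from collections import Counter

random.seed(1)
STRATEGIES = {"aggressive": 0.55, "conciliatory": 0.45}   # chance of winning a case
lawyers = [{"strategy": random.choice(list(STRATEGIES)), "wins": 0} for _ in range(50)]

for _ in range(100):
    # each lawyer litigates one case per round
    for lawyer in lawyers:
        if random.random() < STRATEGIES[lawyer["strategy"]]:
            lawyer["wins"] += 1
    # lawyers imitate the strategy of a randomly observed, more successful peer
    for lawyer in lawyers:
        peer = random.choice(lawyers)
        if peer["wins"] > lawyer["wins"]:
            lawyer["strategy"] = peer["strategy"]

print(Counter(l["strategy"] for l in lawyers))   # strategies that prevail
```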
Organizing legislative texts into a hierarchy of legal topics enhances access to legislation. Manually placing every part of a new legislative text at the correct place in the hierarchy, however, is expensive and slow, and therefore naturally calls for automation. In this paper, we assess the ability of machine learning methods to develop a model that automatically classifies legislative texts in a legal topic hierarchy. It is investigated whether such methods can generalize across different codes. In the classification process, the specific properties of legislative documents are exploited: both the hierarchical structure of legal codes and references within the legal document collection are taken into account. We argue for closer cooperation between legal and machine learning experts as the main direction of future work.
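A minimal sketch of the basic classification step, using a standard TF-IDF pipeline from scikit-learn; the example texts and topic labels are invented, and the paper's approach additionally exploits the hierarchical structure of codes and cross-references.

```python
# Minimal sketch (illustrative only): classifying legislative provisions into
# legal topics with a standard text-classification pipeline. Texts and labels
# are hypothetical.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "The employer shall pay compensation upon unlawful dismissal.",
    "A lease may be terminated by either party with three months notice.",
    "Dismissal without notice requires a serious breach of duty.",
    "The tenant shall keep the leased premises in good condition.",
]
topics = ["labour law", "tenancy law", "labour law", "tenancy law"]

classifier = make_pipeline(TfidfVectorizer(), LogisticRegression())
classifier.fit(texts, topics)
print(classifier.predict(["Notice periods for dismissal of employees."]))
```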
This paper presents a solution for managing heterogeneous legal XML resources using a native XML repository—called eXistrella—designed to enable a common query layer within a temporal environment equipped to manage the evolution of legislative documents over time. With eXistrella, the structure of a legal document is directly connected with the legal knowledge embedded in the corresponding norms, exploiting the XML syntax used in modeling such norms, and in this way, the gap can be bridged between the container (the document) and the way the content (the norm) is modeled.
In this work we present STIA: a tool for semantic annotation in the jurisprudence domain. The tool offers an easy interface for domain experts (lawyers, administrative staff, researchers, …) to annotate relationships of pertinence between portions of text from different laws covering similar topics, circumstances or events. These annotations both constitute a resource in their own right (which can be used in the context of semantic search engines to improve the retrieval of related laws) and provide a precious feed for tools aiming at automatically extracting more of the above relationships.
The Japanese “theory of presupposed ultimate facts” (called “Yoken-jijitsu-ron” in Japanese) for interpreting the Japanese civil code has been developed over more than forty years, mainly by judges in the Japanese Legal Training Institute, but has not yet been formalized in a mathematical way. This paper attempts to formalize this theory mathematically and presents the correspondence between the theory and logic programming with “negation as failure”. It is quite surprising that Japanese judges independently developed such a theory without knowing about logic programming.
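A small illustration of negation as failure in this setting (our own sketch, not the paper's formalization): a defence fact such as “the debt has been paid” is treated as false unless it has been proven, which mirrors the burden-of-proof reading of ultimate facts.

```python
# Minimal sketch (illustrative only): negation as failure applied to ultimate
# facts. A fact that would defeat the claim is taken to be false unless it has
# been proven. The fact names are hypothetical.
facts_proven_by_plaintiff = {"contract_concluded", "payment_due"}
facts_proven_by_defendant = set()            # could contain "debt_paid"

proven = facts_proven_by_plaintiff | facts_proven_by_defendant

def naf(fact):
    """Negation as failure: the fact is taken to be false if it is not proven."""
    return fact not in proven

# The claim succeeds if its requisite facts are proven and no defence fact is proven.
claim_succeeds = ("contract_concluded" in proven
                  and "payment_due" in proven
                  and naf("debt_paid"))
print("Claim succeeds:", claim_succeeds)     # True unless 'debt_paid' is proven
```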
Real-life law cases can be analyzed and modeled, but the reverse direction is possible too: creating artificial cases based on an ontology. Combined with adaptivity, such cases can be used in E-Learning as exercises for learners. This paper presents an approach to generating cases by assembling text fragments, combined with an ontology for verifying whether these fragments are compatible and whether the resulting case is factually and legally possible. An exemplary implementation for the subject area of domain name disputes is shown as a proof of concept.
More and more people in the workplace are expected to have knowledge of the sources of law that are applicable to their field. While coping with vast volumes of regulations is a challenge, dealing with changes in legislation is even more challenging. Organizational change also affects the amount and complexity of legal rules people in organizations have to deal with. Moreover, not only rules are affected; changes in the organization's environment often create the need to redesign business processes and IT infrastructure, reallocate roles and responsibilities and reorder tasks. This paper explains our approach to dealing with these issues.