Ebook: Information Modelling and Knowledge Bases XXII
Information modelling and knowledge bases have become crucial in recent decades, not only in relation to information systems and computer science for the academic community, but also for business in any area where information technology is applied. This book presents 15 full papers and 10 short papers, edited following their presentation and discussion at the 20th European-Japanese Conference on Information Modelling and Knowledge Bases (EJC2010). This annual conference constitutes a worldwide forum that draws together researchers and practitioners to exchange scientific results and experiences achieved in computer science and other related disciplines using innovative methods and progressive approaches. These papers, selected in a rigorous review process from 34 submissions, cover a wide variety of topics, including: the theory of concepts; database semantics; knowledge representation; software engineering; context-based information retrieval; ontological technology; cultural modelling; the management of WWW information, document data and processes; image, temporal and spatial databases; and many others. The book provides valuable insight into the latest developments in the field and will be of interest to all those involved in the application of information technology.
In recent decades, information modelling and knowledge bases have become hot topics, not only in academic communities related to information systems and computer science, but also in the business area where information technology is applied. The 20th European-Japanese Conference on Information Modelling and Knowledge Bases (EJC2010) continues the series of events that originally started as a co-operation initiative between Japan and Finland in the second half of the 1980s. Later (1991), the geographical scope of these conferences expanded to cover the whole of Europe as well as other countries.
The EJC conferences constitute a worldwide research forum for the exchange of scientific results and experiences achieved in computer science and other related disciplines using innovative methods and progressive approaches. In this way a platform has been established that draws together researchers and practitioners who deal with information modelling and knowledge bases. The main topics of the EJC conferences span the variety of themes in the domain of information modelling: conceptual analysis, the design and specification of information systems, multimedia information modelling, multimedia systems, ontology, software engineering, knowledge and process management, knowledge bases, cross-cultural communication and context modelling. We also aim at applying new progressive theories; to this end, much attention is paid to theoretical disciplines, including cognitive science, artificial intelligence, logic, linguistics and analytical philosophy.
In order to achieve the targets of the EJC, an international program committee selected 15 full papers and 10 short papers in a rigorous review process from 34 submissions. The selected papers cover many areas of information modelling, namely: the theory of concepts, database semantics, knowledge representation, software engineering, WWW information management, context-based information retrieval, ontological technology, image databases, temporal and spatial databases, document data management, process management, cultural modelling and many others.
The conference could not have been a success without the considerable effort of many people and organizations. In the program committee, 29 reputable researchers devoted a great deal of energy to the review process, selecting the best papers and creating the EJC2010 program, and we are very grateful to them. Professor Yasushi Kiyoki and Professor Takehiro Tokuda acted as co-chairs of the program committee, while Senior Researcher Dr. Anneli Heimbürger and her team took care of the conference venue and local arrangements. Professor Hannu Jaakkola acted as the general organizing chair and Ms. Ulla Nevanranta as conference secretary for the general organizational matters necessary for running the annual conference series. Dr. Naofumi Yoshida and his Program Coordination Team managed the review process and the conference program. We also gratefully acknowledge the efforts of all our supporters, especially the Department of Mathematical Information Technology at the University of Jyväskylä (Finland), for supporting this annual event in the 20th jubilee year of the EJC.
We believe that the conference was productive and fruitful in advancing the research and application of information modelling and knowledge bases. This book features the papers as edited following their presentation and discussion at the conference.
The Editors
Anneli Heimbürger, University of Jyväskylä, Finland
Yasushi Kiyoki, Keio University, Japan
Takehiro Tokuda, Tokyo Institute of Technology, Japan
Hannu Jaakkola, Tampere University of Technology (Pori), Finland
Naofumi Yoshida, Komazawa University, Japan
We view the content of an ontology via a logic of intensions, because particular intensions such as properties, roles, attributes and propositions can stand in mutual necessary relations which should be registered in the ontology of a given domain, unlike contingent facts. The latter are subject to updates and are stored in a knowledge-base state. Thus we examine (higher-order) properties of intensions, such as being necessarily reflexive, irreflexive, symmetric, anti-symmetric or transitive; mutual relations between intensions, such as being incompatible, being a requisite and being complementary; and the like. We also define two kinds of entailment relation between propositions, viz. mere entailment and presupposition. Finally, we show that higher-order properties of propositions trigger necessary integrity constraints that should also be included in the ontology. As the logic of intensions we opt for Transparent Intensional Logic (TIL), because the TIL framework is smoothly applicable to all three kinds of context: the extensional context of individuals, numbers and functions-in-extension (mappings); the intensional context of properties, roles, attributes and propositions; and finally the hyper-intensional context of procedures producing intensional and extensional entities as their products.
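To make the central notions of this abstract concrete, the following is a hedged sketch in standard TIL-style notation of the requisite relation and of the distinction between mere entailment and presupposition; the formulations follow the usual TIL literature rather than the paper itself (w and t range over possible worlds and times):

```latex
% Y is a requisite of the property X: necessarily, whatever
% instantiates X also instantiates Y.
\mathrm{Req}(Y, X) \;\Leftrightarrow\;
  \forall w \forall t\, \forall x\, \bigl[ X_{wt}\,x \supset Y_{wt}\,x \bigr]

% Mere entailment between propositions P and Q:
P \models Q \;\Leftrightarrow\; \forall w \forall t\, \bigl[ P_{wt} \supset Q_{wt} \bigr]

% Q is a presupposition of P: both P and its negation entail Q,
% so whenever Q fails, P lacks a truth value altogether.
Q \text{ is a presupposition of } P \;\Leftrightarrow\;
  (P \models Q) \,\wedge\, (\neg P \models Q)
```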
Various knowledge resources are spread across a worldwide scope. Unfortunately, most of them are community-based and were never intended to be used across different communities, which makes it difficult to gain “connection merits” in a web-scale information space. This paper presents a three-layered system architecture for computing dynamic associations of events to related knowledge resources. The important feature of our system is that it realizes dynamic interconnection among heterogeneous knowledge resources by event-driven and event-centric computing, with resolvers for the uncertainties existing among those resources. The system navigates various associated data, including heterogeneous data types and fields, depending on the user's purpose and standpoint. It also leads to effective use of sensor data, because sensor data can be interconnected with those knowledge resources. This paper also presents an application to space weather sensor data.
Partial updates arise when a location bound to a complex value is updated in parallel. The compatibility of such partial updates to disjoint locations can be assured by applying applicative algebras. However, due to the arbitrary nesting of type constructors, the locations of a complex-value database are often defined at multiple abstraction levels and are thereby non-disjoint, so the application of applicative algebras is not as smooth as their simple definition suggests. In this paper, we investigate this problem in the context of complex-value databases, where partial updates arise naturally in database transformations. We show that a more efficient solution can be obtained by generalising the notion of location and thus permitting dependencies between locations. On these grounds we develop a systematic approach to consistency checking for update sets that involve partial updates.
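As a toy illustration of why nested type constructors make locations non-disjoint (our own minimal model, not the paper's formalism), treat a location as an attribute path into a complex value; two locations overlap exactly when one path is a prefix of the other:

```python
# Locations modelled as attribute paths into a complex value; two
# locations overlap iff one path is a prefix of the other, in which
# case updating one partially updates the other.
def overlaps(loc_a: tuple, loc_b: tuple) -> bool:
    shorter, longer = sorted((loc_a, loc_b), key=len)
    return longer[:len(shorter)] == shorter

# Two partial updates at different abstraction levels of the same value:
u1 = (("customer", "address"), {"city": "Pori"})    # replace the whole address
u2 = (("customer", "address", "city"), "Tampere")   # replace only the city
print(overlaps(u1[0], u2[0]))  # True -> the update set needs a consistency check
```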
This work is based on Database Semantics (DBS), a computational model of natural language communication. For an introduction to DBS see [NLC'06]; for a concise summary see [Hausser 2009a].
This paper proposes to realize the principle of balance by sequences of inferences which respond to a deviation from the agent's balance (trigger situation) with a suitable blueprint for action (countermeasure). The control system is evaluated in terms of the agent's relative success in comparison with other agents and its absolute success in terms of survival, including the adaptation to new situations (learning).
From a software engineering point of view, the central question of autonomous control is how to structure the content in the agent's memory so that the agent's cognition can precisely select, in real time, what is relevant and helpful for remedying a current imbalance. Our solution is based on the content-addressable memory of a Word Bank, the data structure of proplets defined as non-recursive feature structures, and the time-linear algorithm of Left-Associative grammar.
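As an informal illustration of these data structures (a minimal sketch under our own assumptions, not the paper's code), a proplet can be modelled as a flat attribute-value record, and a Word Bank as content-addressable storage indexed by a proplet's core value:

```python
from collections import defaultdict

def make_proplet(core, cat, prn, **features):
    """A proplet as a flat attribute-value record (non-recursive:
    no proplets embedded inside proplets)."""
    return {"core": core, "cat": cat, "prn": prn, **features}

class WordBank:
    """Content-addressable storage: all proplets sharing a core value
    are retrievable by that value, without positional addressing."""
    def __init__(self):
        self._slots = defaultdict(list)

    def store(self, proplet):
        self._slots[proplet["core"]].append(proplet)

    def lookup(self, core):
        return self._slots[core]

wb = WordBank()
wb.store(make_proplet("car", "noun", prn=1, fnc="drive"))
wb.store(make_proplet("drive", "verb", prn=1, arg="car"))
print(wb.lookup("car"))  # all stored proplets with core value "car"
```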
We all use our associative memory constantly. Words and concepts form paths that we can follow to find new related concepts; for example, when we think about a car we may associate it with driving, roads or Japan, a country that produces cars. In this paper we present an approach to information modelling that is derived from human associative memory. The idea is to create a network of concepts in which the links model the strength of the association between the concepts instead of, for example, their semantics. The network, called an association network, can be learned with an unsupervised network learning algorithm using concept co-occurrences, frequencies and concept distances. The possibility of creating the network with unsupervised learning is a great benefit compared to semantic networks, where ontology development usually requires a great deal of manual labour. We present a case where associations bring benefits over semantics due to easier implementation. The case focuses on a business intelligence search engine whose query space we modelled using association modelling. We utilised the model in information retrieval and system development.
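A minimal sketch, under our own assumptions, of learning association strengths from the signals the abstract names (co-occurrences, frequencies and concept distances); the exact learning algorithm is the paper's, so this weighting rule is only an illustrative stand-in:

```python
from collections import defaultdict
from itertools import combinations

def learn_associations(documents, window=5):
    """documents: lists of concept tokens; returns {(a, b): strength}.
    Association strength grows with co-occurrence frequency and
    shrinks with the distance between the concepts in the text."""
    weights = defaultdict(float)
    for doc in documents:
        for (i, a), (j, b) in combinations(enumerate(doc), 2):
            dist = j - i
            if a != b and dist <= window:
                # closer co-occurrences contribute more strongly
                weights[tuple(sorted((a, b)))] += 1.0 / dist
    return weights

docs = [["car", "driving", "road"], ["car", "japan", "factory"]]
print(learn_associations(docs))  # e.g. ("car", "driving") gets weight 1.0
```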
Classical software development methodologies take architectural issues for granted or as pre-determined, and thus neglect the impact that architectural decisions have within the development process. This omission is acceptable as long as we are considering monolithic systems; it cannot, however, be sustained once we move to distributed systems. Web information systems pay far more attention to user support and thus require sophisticated layout and playout systems, which go beyond what has been known for presentation systems.
We thus discover that architecture plays a major role during systems analysis, design and development. We therefore target a framework that is based on early architectural decisions or on the integration of new solutions into existing architectures, and we aim at the development of novel approaches to web information systems development that allow a co-evolution of architectures and software systems.
This paper presents an image search system with an emotion-oriented context recognition mechanism. Our motivation for implementing an emotional context is to express the user's impressions in the retrieval process of the image search system. The emotional context identifies the most important features by connecting the user's impressions to the image queries. The Mathematical Model of Meaning (MMM: [2], [4] and [5]) is applied to recognize a series of emotional contexts and to retrieve the impressions most highly correlated with the context. These impressions are then projected onto a color impression metric to obtain the most significant colors for subspace feature selection. After subspace feature selection, the system clusters the subspace color features of the image dataset using our proposed Pillar-Kmeans algorithm.
The Pillar algorithm optimizes the initial centroids for K-means clustering. It is robust, as it positions all centroids far apart from one another within the data distribution. The algorithm is inspired by the placement of pillars under a roof: by distributing the pillars as far as possible from each other within the roof's pressure distribution, the pillars can withstand the roof's pressure and stabilize a house or building. Analogously, the algorithm treats the centroids, which should be located as far as possible from each other, as pillars placed against the gravity weight of the data distribution in the vector space. The algorithm therefore designates as initial centroids the positions with the farthest accumulated distance between them in the data distribution.
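A minimal sketch of this farthest-accumulated-distance initialization, under our own simplifications (the published algorithm includes further refinements, such as outlier avoidance, that are omitted here):

```python
import numpy as np

def pillar_init(X, k):
    """Pick k initial centroids for K-means at maximum accumulated
    distance from the centroids already chosen. X: (n, d) array."""
    n = len(X)
    # first centroid: the point farthest from the grand mean
    d = np.linalg.norm(X - X.mean(axis=0), axis=1)
    chosen = [int(np.argmax(d))]
    acc = np.zeros(n)  # accumulated distance to all chosen centroids
    while len(chosen) < k:
        acc += np.linalg.norm(X - X[chosen[-1]], axis=1)
        masked = acc.copy()
        masked[chosen] = -np.inf  # never re-pick an already chosen point
        chosen.append(int(np.argmax(masked)))
    return X[chosen]

X = np.random.default_rng(0).normal(size=(200, 3))
print(pillar_init(X, 4))  # 4 mutually far-apart starting centroids
```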
The cluster-based similarity measurement also involves a semantic filtering mechanism, which filters out the image data items unimportant to the context in order to speed up the image search process. The system then clusters the image dataset using our Pillar-Kmeans algorithm, and the centroids of the clustering results are used for calculating the similarity measurements to the image query. We evaluate our proposed system experimentally on the Ukiyo-e image dataset from the Tokyo Metropolitan Library, representing Japanese cultural image collections.
The selection of intermediaries is a fundamental and challenging problem in supply chain management. We propose a conceptual process model to guide the supply chain coordinator through the selection process. Besides supporting the agility, adaptability and alignment of the target supply chain, our model also provides extensive automated assistance for the selection of tactics by off-the-shelf tools from the area of artificial intelligence.
Modern applications involving information systems often require the cooperation of several distinct users, and many models of such cooperation have arisen over the years. One way to model such situations is via a cooperative update on a database; that is, an update for which no single user has the necessary access rights, so that several users, each with distinct rights, must cooperate to achieve the desired goal. However, cooperative update mandates new ways of modelling and extending certain fundamentals of database systems. In this paper, such extensions are explored, using database schema components as the underlying model. The main contribution is an effective three-stage process for inter-component negotiation.
Recent developments in mobile technology have enabled mobile phones to work as mobile Web servers. However, composing mobile phone applications and Web resources into new mashup applications requires mobile programming knowledge, ranging from creating user interfaces to handling network connections and access to Web resources. Furthermore, the unique capabilities of mobile phone applications, such as access to camera input or sensor data, are often limited to local use only. To address these problems, we present a description-based approach and an Integration Model for the composition of mobile mashup applications combining Web applications, Web services and mobile phone applications (i.e., generic components). These compositions require less native mobile programming knowledge. In the current work, to leverage access to these services and applications, an Interface Wrapper was used to transform generic components into mashup components. Composers were able to transform and reuse form-based query results from Web applications and integrate them with wrapped output from users' interaction with mobile phone applications and other Web services. The final applications can be configured to work in two ways: 1) as native mobile phone applications, or 2) as Web applications accessible externally via a mobile Web server application.
The term “process” is used in Software Engineering (SE) theories and practices in many different ways, which causes confusion. In this paper we give a more formal description of a Process-Ontological Model which can be used to analyze some of the problematic aspects of software engineering. Firstly, we present a process ontology in which everything is in a process. There are two kinds of processes: “eternal” and actual, where actual processes are divided into physical and mental processes. Secondly, we propose a topological model T for actual processes. Thirdly, we propose an algebraic model for eternal processes, i.e. concepts. Lastly, by using category theory we connect these two models of processes in order to obtain a category-theoretical description of the Process-Ontological Model. That model is the functor category C^{O(T)^op}, i.e. the category of presheaves of concepts on T. Moreover, by using the Yoneda embedding we can represent the Process-Ontological Model as certain “structured sets” and all of their “homomorphisms”.
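For readers less familiar with the category-theoretic vocabulary, the following are the standard textbook definitions behind the last two sentences; they are not taken from the paper itself. Here C denotes the category of concepts and O(T) the open-set category of the topological model T:

```latex
% The category of presheaves of concepts on T: contravariant functors
% from the open sets of T to the category C of concepts.
\mathbf{C}^{\mathcal{O}(T)^{\mathrm{op}}}
  \;=\; \bigl[\,\mathcal{O}(T)^{\mathrm{op}},\, \mathbf{C}\,\bigr]

% The Yoneda embedding of a category D into its presheaves of sets,
% sending each object to the hom-functor it represents:
y : \mathcal{D} \longrightarrow \mathbf{Set}^{\mathcal{D}^{\mathrm{op}}},
\qquad
y(d) \;=\; \mathcal{D}(-,\, d)

% y is full and faithful, so D can be viewed inside presheaves as
% "structured sets" with all of its morphisms ("homomorphisms") preserved.
```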
Fast databases are no longer a nice-to-have – they are a necessity. Many modern applications are becoming performance-critical, and at the same time the size of some databases has been increasing to levels that cannot be well supported by current technology. Performance engineering is now becoming a buzzword for database systems. Initially, physical and partly logical tuning methods were used to support high-performance systems, but they are mainly based on a large number of poorly understood performance and tuning parameters. Nowadays it is becoming obvious that we need methods for systematic performance design.
Performance engineering also means, however, support for the daily operation of databases. Most methods are reactive, i.e. they use runtime information, e.g. performance monitoring techniques; it is then the operator's or administrator's business to find appropriate solutions. We target active methods for performance improvement. One such method is performance forecasting, based on assumptions about future operation and on extrapolations from the current situation. This paper shows that conceptual performance tuning supersedes physical and logical performance tuning. As a proof of concept we applied our approach within a consolidation project for a database-intensive infrastructure.
Game programming is part of many IT study programs. In these courses, and in game-programming texts, games are not considered at an abstract, implementation-independent level; instead, the discussion is based on some specific implementation environment: a programming language (C, C++), a software package, preprogrammed libraries, etc. Thus, instead of discussing games at a general, implementation-independent level, only the specific features of these programming environments are considered.
Here we present a framework for the object-oriented, structural description and specification of games as event-driven object-oriented systems. First, a game's visual appearance and mechanics should be considered – they create the “feel”, the player's experience. Game structure and logic are considered at an implementation-independent level using an easy-to-understand specification language. The framework emphasizes the separation of data structures from the game engine and game logic, and thus facilitates the maintenance and reuse of game assets and objects. Mechanisms for the automatic adjustment of a game's difficulty – so that it is just right, neither too easy nor too difficult – are also considered.
The specification method is illustrated with several examples. Specifications of games created with this method are easy to transform into implementations using some concrete game programming environment; this has been tested in a game programming course.
This paper introduces a method for bridging topics, designed to facilitate generating stories over documents. First, we present a method for topic extraction based on narrative structure using the k-means algorithm. We then model the story generation process and present a method for finding a bridge document between two documents.
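The paper's bridging criterion is based on the extracted narrative topics; as a purely illustrative stand-in under our own assumptions, one simple way to pick a bridge document is to maximize the smaller of its similarities to the two endpoint documents:

```python
import numpy as np

def bridge_document(doc_vecs, i, j):
    """Return the index of the document that best bridges documents i and j:
    the candidate maximizing the smaller of its cosine similarities to the
    two endpoints (a max-min balance criterion, not the paper's method)."""
    def cos(a, b):
        return float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)
    best, best_score = None, -1.0
    for k, v in enumerate(doc_vecs):
        if k in (i, j):
            continue
        score = min(cos(v, doc_vecs[i]), cos(v, doc_vecs[j]))
        if score > best_score:
            best, best_score = k, score
    return best

docs = np.array([[1.0, 0.0], [0.6, 0.8], [0.0, 1.0]])
print(bridge_document(docs, 0, 2))  # 1: the middle document bridges the two
```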
This paper presents a combined-image query creation method for expressing the user's intentions by combining multiple digital images for image retrieval. The method uses image databases provided for query creation and applies several set operators to express the user's imagination by combining imaginary images and real scenes. The user's intentions are expressed by the operation of subspace projection in the image feature space. The method makes it possible to create an imaginary image as a combined-image query expressing the user's intentions by combining several images and operators in the query creation process. The important feature of this method is the use of both shape and color features for expressing imaginations, extending our previously proposed method. This paper shows several experimental results to clarify the feasibility and effectiveness of our method.
This paper proposes context modelling and reasoning to enable intelligent services on a ubiquitous campus. The ontology-based modelling includes upper-level context modelling and domain-specific modelling for the campus area. Ontological and rule-based inferencing, which facilitate ubiquitous functionality for daily life, are implemented by utilizing the context model developed. A student assistant scenario is presented, demonstrating the usefulness of ontological context modelling and reasoning for highly distributed environments such as a university campus.
With the advance of ubiquitous computing and mobile environments, we have begun to continuously monitor changes in real-world conditions and environments through wireless sensor networks. Opportunities also exist for people to create information related to the world around them using mobile phones equipped with sensing devices, and to share that information online with others. In this paper, we propose a novel approach to the interconnection of earth observation data and spatiotemporal web contents on the basis of spatiotemporal and thematic relationships. In particular, we use the concept of moving phenomena of interest to link measurement sensing data with people-centric contents on the basis of spatiotemporal proximity and thematic relevance. This paper also shows a simple application that automatically generates semantic tags with respect to natural geographic phenomena, such as typhoons, climate changes and air pollution, on the basis of our interconnection approach. With these tags, the qualitative meaning of a phenomenon expressed by quantitative numeric conditions can be easily understood.
In our research, context is defined as the situation a user has at hand. The focus of our study is on modelling contexts in cross-cultural communication environments, which can be physical, virtual or hybrid. Cross-cultural communication environment – user – situation is the key triplet in our context research. In this paper we discuss context as a key to situation-specific computing. We introduce our cross-cultural communication context tree and context flow architecture, together with an example implementation, the Context-Based e-Assistant for Cross-Cultural Communication (CeACCC).
In this paper a practical method is presented for creating documentation of cultural-historical targets using an event-centric core ontology. By using semantic documentation templates and an XML-based query language, a domain-specific documentation model can be created, and flexible user interfaces can easily be built for accessing and editing the documentation.
Software development projects raise many challenges and issues, which are exacerbated when projects are distributed globally and thus multicultural. Globalization has increased the need to estimate the effectiveness and cost savings of every project, which is why many projects are outsourced or otherwise distributed to lower-cost countries. The main problems of software development projects are related to knowledge sharing, communication and cultural issues. This paper studies the challenges in global multicultural software development projects and the kinds of collaborative tools available for software development. The paper presents a collaboration model for global multicultural software development, based on the authors' own work experience and a literature review. We propose that this model could be used as a reference model when planning global software projects.
It is becoming important to realize a cross-cultural communication environment among societies with different cultures. Images are effective media for exchanging cultural characteristics across cultures. This paper presents a culture-dependent color-emotion model for a cross-culture-oriented image retrieval system that realizes color-emotion spaces for searching images along human emotional aspects. Many image retrieval systems have featured color analysis, but the culture-dependent aspects of images have not been considered intensively. Our system creates color impression spaces based on Ekman's 17 basic emotions. As the first step, the culture-dependent color impression space is created using cultural features. We apply automatic clustering using our previous method, “Valley Tracing”, in order to generate dynamic representative colors. The system automatically creates a set of culture-dependent color impression metadata.
We present role-accessibility-definition-based Web application generation. A role accessibility definition specifies what kind of data access can be performed by which types of user roles. From a given role accessibility definition, we can automatically derive the data model, business logic and user interface to generate simple Web applications. With additional definitions of page transitions and general computation using existing Web service functions, we can generate more general types of Web applications. We can use fine-grained Web service functions for handling tables, or external Web service functions on the Web. Our approach helps users (especially non-programmers) create a variety of Web applications, such as questionnaire systems, student assignment evaluation systems, and so on.
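To illustrate the idea of a role accessibility definition, here is a hedged sketch in an invented format (the paper's concrete notation is not given in the abstract; the roles, tables and operations below are hypothetical, chosen to match the assignment-evaluation example):

```python
# Hypothetical role accessibility definition: which role may perform
# which operations on which data. Format invented for illustration.
ROLE_ACCESS = {
    "teacher": {"assignment": {"create", "read", "update", "delete"},
                "submission": {"read", "update"}},   # e.g. grading
    "student": {"assignment": {"read"},
                "submission": {"create", "read"}},
}

def allowed(role: str, table: str, op: str) -> bool:
    """Check a (role, table, operation) triple against the definition;
    a generator could derive forms and handlers from the same table."""
    return op in ROLE_ACCESS.get(role, {}).get(table, set())

print(allowed("student", "assignment", "read"))    # True
print(allowed("student", "assignment", "delete"))  # False
```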
NULL is a special marker used in SQL to indicate that a value for an attribute of an object does not exist in the database. Its aim is the representation of “missing information and inapplicable information”. Although NULL is called the null ‘value’, it is not a value at all: it is a marker, merely an annotation of incomplete data. Since it is typically interpreted as a value, NULL has led to controversies and debates, because of its treatment by three-valued logic, its special requirements in SQL joins, and the special handling required by aggregate functions and SQL grouping operators.
Three-valued logic does not properly reflect the nature of this special marker. Markers should instead be based on their own specific data type, which is different from any other data type used in relational database technology. Due to this orthogonality, we can combine any type with the special type. To support this we introduce a non-standard generalisation of para-consistent logics that reflects the nature of these markers. This paper aims at developing a general approach to NULL ‘values’ and shows how they can be used without changing database technology.
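As a small illustration of the behaviour the paper criticizes (our own sketch of SQL's standard three-valued logic, not of the paper's proposed para-consistent logic): any comparison involving the NULL marker yields UNKNOWN rather than FALSE, which is what complicates joins, aggregation and grouping:

```python
# SQL-style three-valued logic: TRUE / FALSE / UNKNOWN.
UNKNOWN = None  # the third truth value, modelled here as Python None

def tv_eq(a, b):
    """Three-valued equality: UNKNOWN if either operand is the NULL marker."""
    if a is None or b is None:
        return UNKNOWN
    return a == b

def tv_and(p, q):
    """SQL semantics: FALSE dominates, then UNKNOWN, then TRUE."""
    if p is False or q is False:
        return False
    if p is UNKNOWN or q is UNKNOWN:
        return UNKNOWN
    return True

# NULL = NULL is UNKNOWN, so a row with NULL never joins "to itself":
print(tv_eq(None, None))             # None (UNKNOWN), not True
print(tv_and(True, tv_eq(None, 1)))  # None (UNKNOWN): the row is filtered out
```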
Many automatic or semi-automatic extraction techniques have been proposed in recent years for building domain ontologies, but the correctness, consistency and completeness of the extracted ontologies is often either not considered or not formally verified; the issue of detecting potential anomalies in an ontology has not, to date, been adequately addressed. In this paper we propose a formal technique for ontology representation and inference, based on which an automatic technique for ontology verification can be developed to detect and identify potential anomalies in an ontology. The technique makes use of a State Controlled Coloured Petri Net (SCCPN), a high-level net that combines a Coloured Petri Net and a State Controlled Petri Net. This work presents a formal definition of SCCPN for modeling ontologies and the mapping between them, as well as formulating ontology inference in SCCPN with specified inference mechanisms.