Ebook: Information Modelling and Knowledge Bases XXXII
Information modeling and knowledge bases are important technologies for academic and industrial research that goes beyond the traditional borders of information systems and computer science. The amount and complexity of information to be dealt with grows continually, as do the levels of abstraction and the size of databases.
This book presents the proceedings of the 30th International Conference on Information Modelling and Knowledge Bases (EJC2020), due to be held in Hamburg, Germany on 8 and 9 June 2020, but instead held as a virtual conference on the same dates due to coronavirus pandemic restrictions. The conference provides a research forum for the exchange of scientific results and experiences, and brings together experts from different areas of computer science and other disciplines with a common interest in information modeling and knowledge bases. The subject touches on many disciplines, with philosophy and logic, cognitive science, knowledge management, linguistics and management science, as well as the emerging fields of data science and machine learning, all being relevant areas. The 23 reviewed, selected, and upgraded contributions included here are the result of presentations, comments, and discussions from the conference, and reflect the themes of the conference sessions: learning and linguistics; systems and processes; data and knowledge representation; models and interfaces; formalizations and reasoning; models and modeling; machine learning; models and programming; environment and predictions; modeling emotion; and social networks.
The book provides an overview of current research and applications, and will be of interest to all those working in the field.
Information modeling and knowledge bases have become important technology contributors to twenty-first-century academic and industry research, addressing the complexities of modeling in digital transformation and digital innovation and reaching beyond the traditional borders of information systems and computer science. The amount and complexity of information itself, the number of abstraction levels of information, and the size of databases and knowledge bases are continuously growing. Conceptual modelling is one of the sub-areas of information modelling. The aim of this conference is to bring together experts from different areas of computer science and other disciplines who have a common interest in understanding and solving problems of information modelling and knowledge bases, and in applying the results of research to practice. We also aim to recognize and study new areas of modelling and knowledge bases to which more attention should be paid. Therefore philosophy and logic, cognitive science, knowledge management, linguistics and management science, as well as the emerging fields of data science and machine learning, are relevant areas, too. The conference features three categories of presentations: full papers, short papers and position papers.
The international conference on information modelling and knowledge bases originated in 1982 from cooperation between Japan and Finland as the European Japanese Conference (EJC). Professor Ohsuga in Japan and Professors Hannu Kangassalo and Hannu Jaakkola in Finland (Nordic countries) did the pioneering work for this long tradition of academic collaboration. Over the years, the organization extended to include other European countries as well as many countries beyond Europe, and in 2014, to reflect this expanded geographical scope, the "European Japanese" part of the title was replaced by "International". Characteristic of the conference is the opening "appetizer" session, in which each participant introduces their topic in a three-minute presentation, followed by presentation sessions with ample time for discussion. A limited number of participants is typical for this conference.
The 30th International Conference on Information Modeling and Knowledge Bases (EJC2020), held in Hamburg, Germany, constitutes a research forum for the exchange of scientific results and experiences, drawing academics and practitioners dealing with information and knowledge.
The main topics of EJC2020 cover a wide range of themes extending knowledge discovery through conceptual modelling, knowledge and information modelling and discovery, linguistic modelling, cross-cultural communication and social computing, environmental modelling and engineering, and multimedia data modelling and systems, extending into complex scientific problem solving. The themes of the conference presentation sessions (Learning and Linguistics; Systems and Processes; Data and Knowledge Representation; Models and Interfaces; Formalizations and Reasoning; Models and Modelling; Machine Learning; Models and Programming; Environment and Predictions; Emotion Modeling; and Social Networks) reflect the coverage of those main themes of the conference.
The proceedings of the 30th International Conference on Information Modeling and Knowledge Bases feature 23 reviewed, selected, and upgraded contributions that are the result of presentations, comments, and discussions during the conference. Suggested topics of the call for papers included, but were not limited to:
Conceptual modelling: Modelling and specification languages; Domain-specific conceptual modelling; Concepts, concept theories and ontologies; Conceptual modelling of large and heterogeneous systems; Conceptual modelling of spatial, temporal and biological data; Methods for developing, validating and communicating conceptual models.
Knowledge and information modelling and discovery: Knowledge discovery, knowledge representation and knowledge management; Advanced data mining and analysis methods; Conceptions of knowledge and information; Modelling information requirements; Intelligent information systems; Information recognition and information modelling.
Linguistic modelling: Models of HCI; Information delivery to users; Intelligent informal querying; Linguistic foundation of information and knowledge; Fuzzy linguistic models; Philosophical and linguistic foundations of conceptual models.
Cross-cultural communication and social computing: Cross-cultural support systems; Integration, evolution and migration of systems; Collaborative societies; Multicultural web-based software systems; Intercultural collaboration and support systems; Social computing, behavioral modeling and prediction.
Environmental modelling and engineering: Environmental information systems (architecture); Spatial, temporal and observational information systems; Large-scale environmental systems; Collaborative knowledge base systems; Agent concepts and conceptualisation; Hazard prediction, prevention and steering systems.
Multimedia data modelling and systems: Modelling multimedia information and knowledge; Content-based multimedia data management; Content-based multimedia retrieval; Privacy and context enhancing technologies; Semantics and pragmatics of multimedia data; Metadata for multimedia information systems.
Due to coronavirus restrictions, EJC2020 was held online, hosted by the Department for Computer Science of the University of Applied Sciences Hamburg, Germany. The virtual conference took place on 8 and 9 June from 09.00 to 12.00 German time and was organized on the Zoom platform. Presentations, prepared according to predefined instructions, each had a 10-minute time slot.
We thank all colleagues for their support in making this conference successful, especially the program committee, organization committee, and the program coordination team, in particular Naofumi Yoshida who maintained the paper submission and reviewing systems and compiled the files for this book.
Trail route networks provide an infrastructure for touristic and recreational walking activities worldwide. They can have a variety of layouts, signage systems, development and management patterns, involving multiple stakeholders and contributors, and tend to be determined by various interests on different levels and dynamically changing circumstances. This paper aims to develop the skeleton of TRAILSIGNER, a sound geospatial conceptual data model suite of trail networks, waymarked routes and their signage systems and assets, which can be used as a basis for creating an information system for the effective, organic and consistent planning, management, maintenance and presentation of trails and their signage. This reduces potential confusion, mistrust and danger for visitors caused by information mismatches including incomplete, incoherent or inconsistent route signposting. To ensure consistency of incrementally planned signposts with each other and with the (possibly changing) underlying trail network, a systematic, set-based approach is developed using generative logical rules and incorporated into the conceptual model suite as signpost logics. The paper also defines a reference ruleset for it. This approach may further be generalized, personalized and adapted to other fields or applications having similar requirements or phenomena.
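To make the idea of set-based "signpost logics" concrete, the following sketch checks one plausible consistency rule: a signpost may only announce destinations that are actually reachable from its node along the trail network. The rule, the network, and all names are invented for illustration; they are not the TRAILSIGNER reference ruleset.

```python
# Hypothetical signpost-consistency check, not the paper's reference ruleset:
# a signpost may only announce destinations reachable from its node.

def reachable(edges, start):
    """All nodes reachable from `start` over directed trail edges."""
    seen, frontier = {start}, [start]
    while frontier:
        node = frontier.pop()
        for a, b in edges:
            if a == node and b not in seen:
                seen.add(b)
                frontier.append(b)
    return seen

def inconsistent_signposts(edges, signposts):
    """Return (node, destination) pairs that are announced but unreachable."""
    return [(node, dest)
            for node, destinations in signposts.items()
            for dest in destinations
            if dest not in reachable(edges, node)]

trail = [("hut", "pass"), ("pass", "summit"), ("pass", "lake")]
posts = {"hut": {"summit", "lake"}, "lake": {"summit"}}  # lake has no way up
print(inconsistent_signposts(trail, posts))  # → [('lake', 'summit')]
```

A real ruleset would also cover incremental planning, i.e. re-running such checks whenever the underlying network changes.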
The defining mechanism of the computer is that its functions are implemented by programs stored within it. Programs are developed in programming languages and implement the functions of models. One efficient way to construct a model is to use semantic computation models, which allow a model to be constructed in a semantic space. In this paper, we present a mechanism to execute models represented in semantic spaces. We have previously presented a mechanism implementing combinational and sequential logic computations based on the semantic space model; these are the basic functions of computer systems. However, a control mechanism like that of a computer is still needed. In this paper, we present a control mechanism based on the semantic space model, together with execution examples. The most important contribution of this paper is that it is the first to present a concept of control and program based on the semantic space model. To demonstrate the effectiveness of the proposed mechanism, we performed an experiment in which an agent with the control mechanism was constructed for unmanned ground vehicle control. A video camera determines the position of the vehicle and of obstacles on the road, and the control signals output by the agent, including "turn left", "turn right", "go ahead" and "stop", demonstrate the effectiveness of the mechanism.
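The four control signals named above can be illustrated with a toy rule-based controller; the geometry and thresholds below are entirely invented and say nothing about the paper's semantic-space mechanism, only about how camera-derived positions might map to such signals.

```python
# Toy illustration of the four control signals named in the abstract;
# thresholds and geometry are invented, not the semantic-space model itself.

def control_signal(vehicle_x, obstacle_x, obstacle_dist):
    """Choose a signal from the obstacle's lateral offset and distance."""
    if obstacle_dist < 1.0:                 # obstacle too close: stop
        return "stop"
    if abs(obstacle_x - vehicle_x) > 2.0:   # obstacle far to the side
        return "go ahead"
    # obstacle ahead: steer away from it
    return "turn left" if obstacle_x >= vehicle_x else "turn right"

print(control_signal(0.0, 0.5, 5.0))  # → turn left
```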
As electricity generation shifts to renewable energy sources (RES), the grid infrastructure faces multiple challenges, such as the intermittency and volatility of a wide range of RES. A high penetration of renewables requires profound changes to the current energy distribution system. The conventional grid, with its rigid architecture built around centralized energy sources, is increasingly becoming a bottleneck for expanding the share of RES. We propose a new energy exchange model for a routed energy distribution system, which performs electricity routing based on smart routing algorithms and the presented protocols. We utilize the concept of an energy router, a device that controls energy flows and uses a protocol stack to route energy smartly between houses in the grid. This paper describes current results with an experimental network of a Maui village with multiple houses interconnected through energy routers.
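One plausible building block of such smart routing is shortest-path selection over the router graph, e.g. minimizing transmission loss. The sketch below uses Dijkstra's algorithm on an invented topology with integer loss weights; it is not the Maui experiment's actual protocol or data.

```python
import heapq

# Hedged sketch: choose the lowest-loss route between houses with Dijkstra.
# Topology and loss weights (arbitrary units) are invented for illustration.

def lowest_loss_path(graph, src, dst):
    """graph: {node: [(neighbor, loss), ...]}; returns (total_loss, path)."""
    queue = [(0, src, [src])]
    visited = set()
    while queue:
        loss, node, path = heapq.heappop(queue)
        if node == dst:
            return loss, path
        if node in visited:
            continue
        visited.add(node)
        for nxt, w in graph.get(node, []):
            if nxt not in visited:
                heapq.heappush(queue, (loss + w, nxt, path + [nxt]))
    return float("inf"), []

grid = {"A": [("B", 5), ("C", 2)],
        "C": [("B", 1)],
        "B": []}
print(lowest_loss_path(grid, "A", "B"))  # → (3, ['A', 'C', 'B'])
```

A real energy router would of course also account for capacity limits, direction of flow, and the negotiation protocols the paper presents.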
Data (conceptual, data, information, knowledge) modelling is still the work of an artisan: an art, in the best case, performed by humans because it requires human intelligence. Data modelling is an essential part of information system (IS) design, specifying how data is implemented as part of an IS. The principles of data modelling follow the evolution of IS development paradigms, which in turn follow the progress of technological change in computing. Although technology has changed a great deal during the decades of commercial computing (from the early 1950s to now, close to 70 years), data modelling is still based on the same basic principles as decades ago. Or is it? Finding the answer to this question was the main motivation for writing this paper. Since the future is more interesting than the past, we set our research problem to be "What are the challenges for data modelling in the future?". The reason is that we see significant changes ahead in the data modelling sector which we wanted to examine. However, the future is a continuum of the past and cannot be fully understood without understanding the past; humans also tend to forget its details, and even the most remarkable innovations of the past have become part of the new normal. Consequently, we begin the paper with a brief look at the progress of data modelling during the era of commercial computing. Our focus is on the recent past and on the technological changes that have acted as key triggers and enablers in data modelling. To answer our research question, we retrieved some recent studies addressing the future of data modelling and analysed the challenges found in these sources. The paper concludes with some future paradigms. In general, the big change appears to be the growing importance of Artificial Intelligence (AI), with machine learning (ML) as its fuel.
AI not only executes algorithmic, rule-based routines; it has a learning capability that makes it more intelligent and adaptable, and able to compete with human intelligence, even in data management tasks.
Semantic computing is essential for realizing the semantic interpretation of natural and social phenomena and for analyzing the changes in various environmental situations. The 5D World Map (5DWM) System [4,6,8] has introduced the concept of "SPA (Sensing, Processing and Analytical Actuation Functions)" for global environmental system integration [1–4], as a global environmental knowledge sharing, analysis and integration system. Environmental knowledge base creation with 5D World Map enables sharing, analyzing and visualizing various information resources on the map, which facilitates observation of global phenomena and knowledge discovery with multi-dimensional axis control mechanisms. The 5DWM is utilized globally as a Global Environmental Semantic Computing System in SDGs 9, 11 and 14 (United Nations ESCAP: https://sdghelpdesk.unescap.org/toolboxes) for observing and analyzing disasters, natural phenomena and ocean-water situations with local and global multimedia data resources. This paper proposes a new semantic computing method as an important approach to the semantic analysis of various environmental phenomena and changes in the real world. The method realizes "Self-Contained-Knowledge-Base-Image" and "Contextual-Semantic-Interpretation" under the new concept of "Coral-Health-level Analysis in Semantic-Space for Ocean-environment" for global ocean-environmental analysis [8,9,12,18]. It is applied to automatic database creation with coral-health-level analysis sensors for interpreting environmental phenomena and changes occurring in the world's oceans. We have focused on an experimental study creating a "Coral-Health-level Analysis Semantic-Space for Ocean-environment" [8,9,12,18]. The method realizes a new semantic interpretation of coral health level using coral images and a coral-health-level knowledge chart.
This work introduces the Multi-Fusion Network for human-object interaction detection with multiple cameras. We present a concept and implementation of the architecture for a beverage refrigerator with multiple cameras as a proof of concept. We also introduce an effective approach for minimizing the amount of training data required by the network and for reducing the risk of overfitting, especially when dealing with the small data sets commonly recorded by an individual or a small organization.
The model achieved high test accuracy and comparable results in a real-world scenario at Event Solutions in Hamburg in 2019. The Multi-Fusion Network is easy to scale thanks to shared learnable parameters. It is also lightweight, and hence suitable for running on small devices with average computation capability. Furthermore, it can be used for smart home applications, gaming experiences, or mixed reality applications.
This paper examines the Third Mission of universities from the point of view of company collaboration in the prototype development process. It presents an implementation of university-enterprise collaboration in prototype development, described by means of a process modeling notation, with the focus on modeling the software prototyping process in a research context. The prototypes are made in collaboration with companies, which offered real-world use cases. The prototype development process is introduced through a modeling procedure with four example prototype cases. The research method used is an eight-step process modeling approach whose goal was to find instances of activity, artifact, resource, and role. The results of the modeling are presented in textual and graphical notation. The paper describes the data elicitation, in which process knowledge is collected using the stickers-on-the-wall technique, and the creation of the model. Finally, the shortcomings found in our existing practices and the possibilities for improving our prototype development processes and practices are discussed.
This paper describes principles and structure for a software system that implements a dialect of natural logic for knowledge bases. Natural logics are formal logics that resemble stylized natural language fragments, and whose reasoning rules reflect common-sense reasoning; they may be seen as forms of extended syllogistic logic. The paper proposes and describes the realization of deductive querying functionalities using a previously specified natural logic dialect called Natura-Log. The focus here is on engineering an inference engine that employs relational database operations as a key feature, so that inference steps are computed in bulk, scaling up to large knowledge bases. Accordingly, the system is eventually to be realized as a general-purpose database application package, with the database turned into a logical knowledge base.
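The idea of computing inference steps "in bulk" with relational operations can be sketched in miniature: treating "every A is B" facts as a relation and repeating a self-join until a fixed point yields all syllogistic consequences at once. This is only the simplest transitive-closure case; the actual Natura-Log dialect is far richer.

```python
# Minimal sketch of syllogistic inference computed in bulk as a relational
# self-join (transitive closure); not the full Natura-Log engine.

def close_subsumptions(facts):
    """facts: set of (sub, super) pairs meaning 'every sub is a super'.
    Repeats the join step until no new pairs appear."""
    closure = set(facts)
    while True:
        derived = {(a, d) for a, b in closure for c, d in closure if b == c}
        if derived <= closure:
            return closure
        closure |= derived

kb = {("penguin", "bird"), ("bird", "animal")}
print(("penguin", "animal") in close_subsumptions(kb))  # → True
```

In the database-backed setting described above, the join step would be a single SQL self-join executed over the whole knowledge base rather than tuple-at-a-time reasoning.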
This paper presents a new knowledge base creation method for personal/collective health data, with knowledge of preemptive care and potential risk inspection, using the global and geographical mapping and visualization functions of the 5D World Map System. The final goal of this research project is the realization of a system that analyzes personal health/bio data and potential-risk inspection data and provides a set of appropriate coping strategies and alerts using semantic computing technologies. The main feature of the 5D World Map System is to provide a collaborative-work platform on which users perform global analysis of sensing data in physical space, along with the related multimedia data in cyber space, on a single view of time-series maps based on spatiotemporal and semantic correlation calculations. In this application, the concrete target data for worldwide evaluation are (1) multi-parameter personal health/bio data, such as blood pressure, blood glucose, BMI and uric acid level, and daily habit data, such as food, smoking and drinking, for health monitoring, and (2) time-series multi-parameter collective health/bio data at the national/regional level for global analysis of the potential causes of disease. The application realizes new multidimensional data analysis and knowledge sharing for health monitoring and disease analysis at both the personal and the global level. The results can be analyzed by the time-series difference of the values at each spot, the differences between the values of multiple places in a focused area, and the time-series differences between the values of multiple locations, to detect and predict potential disease risks.
Well-organized data contributes extensively to the classification possibilities and quality of knowledge management. XML schemas play an important role in data organization activities and provide basic foundations for companies and organizations dealing with large amounts of data. In times where knowledge represents the greatest advantage in a competitive economy and is relatively simple to find through different web providers, the quality of internal data structures and the efficient management of a company's valuable information are of the utmost importance. XML schemas are one mechanism that can provide a data organization system in a qualitative manner, and efficient knowledge management as soon as data have been defined or accumulated. Good XML schema support is a way to increase the competitiveness of an organization by ensuring structured data quality and simplifying the knowledge management process.
While individuals benefit from the goods and services provided by companies that enrich their lives and that have adapted to a constantly changing environment, these companies pay a high communication cost to access opportunities to provide those goods and services and to seek a better understanding of individual customers' changing needs. Although vast amounts of information can be obtained, databases and machine learning play an increasingly important role in extracting meaning from this information, turning it into meaningful information assets that consider circumstances and contexts, and individualizing the economy of information. I propose an implementation method for providing information that enriches the profiles of individual customers by consolidating different data, calculating individual customers' needs through the relationships between customers and products, evaluating the change in those relationships over time, and providing goods and services suited to the different intervals at which factors such as lifestyle and living environment change. Since different factors are involved in estimating the incidence of needs, and needs arise at different frequencies and rates depending on the special characteristics of products, different data are required to estimate them. By profiling individuals over the long term, it is possible to build an information provision environment that is conducive to companies' customer acquisition.
Wafer-defect maps can provide important information about manufacturing defects. This information can help to identify bottlenecks in the semiconductor manufacturing process. The main goal is to distinguish random from patterned defects: a patterned defect shows that a step in the process is not being performed correctly, and if the same defect occurs multiple times, the yield can decrease rapidly. This article proposes a method for yield improvement and defect recognition using a feed-forward neural network. The neural network classifies wafer-defect maps into classes, each representing a certain defect on the map. The neural network was trained, tested and validated using a wafer-defect map dataset containing realistic defects inspired by the manufacturing process.
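The classification step can be illustrated with a bare-bones feed-forward pass. The network below is not the paper's trained model: the weights, the 4-pixel "map", and the two class names are invented purely to show the hidden-layer/softmax mechanics.

```python
import math

# Minimal feed-forward classification pass with invented weights and
# made-up defect classes; not the paper's trained network.

def relu(v):
    return [max(0.0, x) for x in v]

def dense(x, W, b):
    return [sum(wi * xi for wi, xi in zip(row, x)) + bi
            for row, bi in zip(W, b)]

def softmax(v):
    exps = [math.exp(x - max(v)) for x in v]
    s = sum(exps)
    return [e / s for e in exps]

def classify(defect_map, W1, b1, W2, b2, classes):
    hidden = relu(dense(defect_map, W1, b1))
    probs = softmax(dense(hidden, W2, b2))
    return classes[probs.index(max(probs))]

# 4-pixel "wafer map", 2 hidden units, two illustrative classes
W1 = [[1.0, -1.0, 1.0, -1.0], [0.5, 0.5, 0.5, 0.5]]
b1 = [0.0, 0.0]
W2 = [[2.0, -1.0], [-2.0, 1.0]]
b2 = [0.0, 0.0]
print(classify([1, 0, 1, 0], W1, b1, W2, b2, ["ring", "random"]))  # → ring
```

In practice the input would be a full wafer map and the weights would come from training on the labeled dataset the abstract describes.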
This paper deals with the comparison and management of (heterogeneous) temporal datings in pre- and protohistory. It presents a first draft of a conceptual model for describing the most common types of scales used in this context. The aim is to enable a system to compare objects according to their dating, regardless of the method and scale used. Temporally relevant objects can thus be selected by a query and do not need to be selected manually by an expert. This is especially beneficial for larger data sets and for automated computation, integrity checking, temporal reasoning and pattern mining.
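A common way to make datings on different scales comparable, which may or may not match the paper's model, is to normalize each dating to a calendar-year interval (negative values for BCE) and then answer queries by interval overlap. The period-to-years lookup below is illustrative only, not an authoritative chronology.

```python
# Sketch of scale-independent dating comparison via normalization to
# calendar-year intervals (negative = BCE). Period boundaries are
# illustrative, not authoritative.

PERIOD_YEARS = {"Hallstatt C": (-800, -620)}  # example lookup only

def to_interval(dating):
    kind, value = dating
    if kind == "years":       # e.g. a calibrated radiocarbon range
        return value
    if kind == "period":      # a named cultural period
        return PERIOD_YEARS[value]
    raise ValueError(kind)

def overlaps(a, b):
    (a0, a1), (b0, b1) = to_interval(a), to_interval(b)
    return a0 <= b1 and b0 <= a1

print(overlaps(("period", "Hallstatt C"), ("years", (-700, -650))))  # → True
```

A full model would also have to represent uncertainty and open-ended datings, which plain intervals cannot express.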
In recent years, with the development of information technology, many cyber-physical systems, in which real space and information space are linked for data acquisition and analysis, have been constructed. The purpose of constructing a cyber-physical system is to solve and ameliorate social and environmental problems. An important target is the railway space, which aims to provide safe and stable transportation services as part of the social infrastructure. In this paper, we propose a new data model for the railway space, the "Context Cube Semantic Network", and a metric method that employs an integrated scale based on heterogeneous correlations of purpose, sensibility, and distance. Furthermore, we constructed a station guidance system that implements the proposed method and evaluated it with subjects at the station. As a result, we clarified the effectiveness and applicability of the system.
Respect for privacy is not a modern phenomenon; it has been around for centuries. Recent advances in technology have raised awareness of the importance of privacy and led to the development of principles for privacy protection, used to guide the engineering of information systems on the one hand and to draft legal texts protecting privacy on the other. In this paper, we analyze how respect for privacy has been implemented in the GDPR by automated comparison of the similarity of the GDPR's articles with the text of the seven principles of Privacy by Design. We compared the text of the GDPR's first 50 core privacy-protecting articles with the GDPR's remaining provisions, which establish independent supervisory authorities. The first half observes the Privacy by Design principles, each of them considerably more than the second half does. Our findings show that automated similarity comparison can highlight the portions of a legal text where principles were observed. The results can support the drafting of legal texts by checking whether important legal (or other) principles have been adequately addressed.
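A common baseline for this kind of automated article-versus-principle comparison is bag-of-words cosine similarity; the paper's actual measure may differ, and the two snippets of text below are invented stand-ins, not quotations from the GDPR or the principles.

```python
import math
from collections import Counter

# Illustrative bag-of-words cosine similarity between two text fragments;
# the sample texts are invented, not quotations from the legal sources.

def cosine(text_a, text_b):
    va, vb = Counter(text_a.lower().split()), Counter(text_b.lower().split())
    dot = sum(va[w] * vb[w] for w in va)
    norm = math.sqrt(sum(c * c for c in va.values())) * \
           math.sqrt(sum(c * c for c in vb.values()))
    return dot / norm if norm else 0.0

article = "the controller shall implement data protection by design"
principle = "privacy by design means proactive data protection"
print(round(cosine(article, principle), 2))  # → 0.53
```

Real systems would normally add stemming, stop-word removal and tf-idf weighting before comparing whole articles against principle texts.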
In this paper, we deal with supporting the search for appropriate textual sources. Users ask for an atomic concept, which is explicated using machine learning methods applied to different textual sources. We then process the explications so obtained to provide even more useful information. To this end, we apply the method of computing association rules, one of the data-mining methods used for information retrieval. Our background theory is the system of Transparent Intensional Logic (TIL); all concepts are formalised as TIL constructions.
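The association-rule step named above can be sketched with the standard support/confidence definitions over a set of transactions; the TIL formalisation is outside the scope of this sketch, and the tiny document collection is invented. For brevity, only rules a ⇒ b over sorted item pairs are considered.

```python
from itertools import combinations

# Bare-bones association-rule mining over transactions using the standard
# support and confidence measures; only rules a => b for sorted item pairs.

def association_rules(transactions, min_support, min_confidence):
    n = len(transactions)
    items = {i for t in transactions for i in t}
    rules = []
    for a, b in combinations(sorted(items), 2):
        support_ab = sum(1 for t in transactions if a in t and b in t) / n
        support_a = sum(1 for t in transactions if a in t) / n
        if support_ab >= min_support and support_a > 0:
            confidence = support_ab / support_a
            if confidence >= min_confidence:
                rules.append((a, b, round(confidence, 2)))
    return rules

docs = [{"logic", "concept"}, {"logic", "concept", "type"}, {"logic", "type"}]
print(association_rules(docs, 0.6, 0.9))  # → [('concept', 'logic', 1.0)]
```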
The realisation of smart cities has attracted much attention in recent years from private and governmental actors as a means to make cities more efficient, climate-friendly and socially inclusive through the use of modern technology. However, few studies examine how smart cities are framed and understood within the public sphere. The aim of this study is to compare how domestic smart city initiatives are reported in the news of their respective countries, and to clarify the differences and similarities in media content. In this paper, we present the initial findings of our planned long-term comparative news content analysis. As a first step, we analysed national newspaper articles published between 2011 and 2019 in Japan and Slovenia. Our corpus consists of 41 Japanese and 20 Slovenian articles written in relation to domestic smart city initiatives. In total, we identified 14 themes, five of which were common to both countries, while the remaining nine appeared exclusively in the news of one country. Our conclusions indicate that the news in the two countries differs in which application domains of smart cities are discussed (e.g. natural resources and energy, transportation and mobility). We establish a procedure for further cross-cultural analyses, necessary to understand how smart cities are framed in the public sphere, and thereby contribute to further discussion of the nature and definition of smart cities and how they are communicated.
The primary purpose of this research is to determine how campus journey application development is progressing. As a result, the research proposes a conceptual model for visitor journey application development. The study included 100 top-ranking educational institutes, with Finnish and Estonian universities added; in total, 39 virtual campus tour applications and 36 visitor journey applications were benchmarked. The study provides an example of visitor journey mapping with the features, complexities, and best practices that are influential in improving the visitor experience during visitor journey application development.
Augmented reality is a display and interaction method of future computing. It overlays digital information on real environments in text, audio, image, or video formats, and it can be more effective if supported by knowledge about human needs. Basic human needs are finite in number and, with the right methods, detectable or predictable. This research develops an ontology that describes the structure of, and relations between, the elements of augmented reality, context information, and human needs, with the ultimate goal of developing a robust conceptual model. Ontology development is a knowledge-driven approach used to represent data and reasoning; this paper focuses on linking the aforementioned concepts to enable correct data representation and reasoning. The research approach, the process used, and the evaluation of the ontology are presented as well.
Cross-cultural religious tourism can be computed over to promote cross-cultural communication and understanding according to impression distance. Our motivation for implementing semantic search with an emotion-oriented context in the proposed system is to realize global tourism recommendations across different cultures. The objectives of this paper are (1) to find religious places by using the tourist's emotional distance, and (2) to find similar religious places, not only within the same culture but also across different cultures, with emotional-distance calculations. Experimental results demonstrate the feasibility and applicability of the method.
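As a purely hypothetical illustration of emotional-distance recommendation: if each place is given an impression vector (the axes and values below are invented, not the paper's data), the nearest place to a tourist's emotional request can be found by Euclidean distance.

```python
import math

# Hypothetical illustration of emotional-distance recommendation; the
# impression axes (calm, awe, festivity) and all values are invented.

def emotional_distance(v1, v2):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(v1, v2)))

def recommend(places, request):
    return min(places, key=lambda name: emotional_distance(places[name], request))

places = {
    "Temple A": (0.9, 0.6, 0.1),   # made-up (calm, awe, festivity) values
    "Shrine B": (0.3, 0.4, 0.9),
}
print(recommend(places, (0.8, 0.7, 0.2)))  # → Temple A
```

Cross-cultural search as described above would compare such vectors between places belonging to different cultures rather than within one set.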
Airlines are of great importance to the transportation sector. With the increase in commercial air travel, airlines require extra flight crews. Cabin crew members in the aviation industry face overtime, shift work and long working hours, and the shift system causes fatigue, which is of critical importance in aviation. Depending on the physical and psychological fatigue, explicit or implicit consequences appear. There are a number of approaches in the aviation industry to preventing fatigue. However, previous studies of fatigue are few in general, and studies of aviation crew fatigue tend to treat pilots and cabin crew alike; studies relating cabin crew fatigue to fatigue risk management systems, key fatigue-causing factors, tools to signal fatigue, and outcome assessments are non-existent. Moreover, various difficulties are encountered in measuring cabin crews' fatigue levels, and measurements are often subjective and unreliable. Therefore, the aim of this study is to create a concept map to be integrated into the design and implementation of a fatigue risk assessment application for aviation cabin crews, in order to arrive at a comprehensive fatigue risk assessment tool for the aviation industry.
Mental health, an essential factor in maintaining a high quality of life, is determined by one's nutritional, physical, and psychological situation. Since mental health is influenced by multiple factors, such as nutrition, physical activity, daily habits, and personal cognitive characteristics, a multidisciplinary approach is effective. Because of this complexity, most non-specialists have little knowledge of and access to the related information, and it can be hard for them to find and implement appropriate methods for improving their mental health. This paper presents the 2-Phase Correlation Computing method for interpreting the characteristics of each emotion/mental state and of nutrients, exercises, and life habits in a vector space that reflects the roles of neurotransmitters. The 2-Phase Correlation Computing extracts the information expected to be most relevant to the user's request. In this method, expert knowledge and the characteristics of emotions and mental states are defined in the "Requests" matrix, and each stimulus in the "Nutrients", "Exercises", and "Life Habits" matrices; nutrients, exercises, and life habits are expressed and computed as "Stimuli". In short, this method introduces logos to the chaotic world of decision making in mental health.
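Schematically, matching a request against stimulus matrices might look like the following inner-product ranking over a shared feature space. The axes, the values, and the two example stimuli are invented stand-ins, not the paper's expert-defined matrices and certainly not nutritional advice.

```python
# Schematic sketch of ranking "Stimuli" against a request vector over a
# shared feature space; all axes and values are invented placeholders.

def best_stimulus(request, stimuli):
    """request: feature vector; stimuli: {name: feature vector}."""
    score = lambda v: sum(r * s for r, s in zip(request, v))
    return max(stimuli, key=lambda name: score(stimuli[name]))

# Invented shared axes, e.g. (serotonin-related, dopamine-related, relaxation)
request = (0.9, 0.1, 0.5)                 # target: a calm, stable state
nutrients = {"tryptophan": (0.8, 0.2, 0.3), "caffeine": (0.1, 0.9, 0.0)}
print(best_stimulus(request, nutrients))  # → tryptophan
```

The actual 2-Phase Correlation Computing presumably chains two such correlation steps, with expert-defined matrices in place of these toy vectors.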
Reinforcement learning allows us to acquire knowledge without any training data; however, learning takes time. In this work, we propose a method to perform a Reverse action by using a Retrospective Kalman Filter that estimates the state one step before. We demonstrate the method on a Hunter-Prey problem and discuss its usefulness.
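The notion of "estimating the state one step before" can be illustrated, under assumptions that are ours rather than the paper's, with a scalar random-walk Kalman filter plus the standard one-step (RTS) backward estimate, which re-estimates the previous state after a new measurement arrives.

```python
# Not the paper's Retrospective Kalman Filter: a scalar random-walk Kalman
# filter plus the standard one-step (RTS) backward estimate, illustrating
# how the previous state can be re-estimated after a new measurement.

def kalman_step(x, p, z, q, r):
    """Predict-update for x_k = x_{k-1} + noise, z_k = x_k + noise."""
    x_pred, p_pred = x, p + q            # random-walk prediction
    k = p_pred / (p_pred + r)            # Kalman gain
    x_new = x_pred + k * (z - x_pred)
    p_new = (1 - k) * p_pred
    return x_new, p_new, x_pred, p_pred

def retrospect(x_prev, p_prev, x_new, x_pred, p_pred):
    """One-step smoothed (retrospective) estimate of the previous state."""
    g = p_prev / p_pred                  # smoother gain
    return x_prev + g * (x_new - x_pred)

x, p = 0.0, 1.0                          # prior state estimate and variance
x1, p1, x1_pred, p1_pred = kalman_step(x, p, z=2.0, q=0.1, r=0.5)
x0_smoothed = retrospect(x, p, x1, x1_pred, p1_pred)
print(round(x1, 3), round(x0_smoothed, 3))  # → 1.375 1.25
```

In the Hunter-Prey setting, such a backward estimate is what would let the agent reason about a Reverse action, i.e. where it (or the prey) was one step earlier.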