
Ebook: Knowledge-Based Software Engineering

The papers in this publication address many topics in the context of knowledge-based software engineering, including new challenges that have arisen in this demanding area of research. Topics covered in this book include: knowledge-based requirements engineering, domain analysis and modelling; development processes for knowledge-based applications; knowledge acquisition; software tools assisting development; architectures for knowledge-based systems and shells, including intelligent agents; intelligent user interfaces and human-machine interaction; development of multi-modal interfaces; knowledge technologies for the Semantic Web; Internet-based interactive applications; knowledge engineering for process management and project management; methodology and tools for knowledge discovery and data mining; knowledge-based methods and tools for testing, verification and validation, maintenance and evolution; decision support methods for software engineering and cognitive systems; knowledge management for business processes, workflows and enterprise modelling; program understanding, programming knowledge, modelling programs and programmers; and software engineering methods for intelligent tutoring systems.
This book summarizes the work and new research results presented at the Eighth Joint Conference on Knowledge-Based Software Engineering (JCKBSE 2008), which took place on August 25–28, 2008 at the University of Piraeus in Piraeus, Greece. JCKBSE is a well-established international biennial conference that focuses on applications of artificial intelligence in software engineering. The eighth JCKBSE conference was organised by the Department of Informatics of the University of Piraeus, and it was the first time that the conference took place in Greece. There were submissions from 15 countries.
This year, the majority of submissions originated, as usual for this conference, from Japan, with Greece in second place. Each submission was rigorously reviewed by at least two reviewers; in the end, 40 papers were accepted as full papers and 16 as short papers. The papers address many topics in the context of knowledge-based software engineering, including new challenges that have arisen in this demanding area of research:
• Knowledge-based requirements engineering, domain analysis and modelling.
• Development processes for knowledge-based applications.
• Knowledge acquisition.
• Software tools assisting development.
• Architectures for knowledge-based systems and shells including intelligent agents.
• Intelligent user interfaces and human-machine interaction.
• Development of multi-modal interfaces.
• Knowledge technologies for the Semantic Web.
• Internet-based interactive applications.
• Knowledge engineering for process management and project management.
• Methodology and tools for knowledge discovery and data mining.
• Knowledge-based methods and tools for testing, verification and validation, maintenance and evolution.
• Decision support methods for software engineering and cognitive systems.
• Knowledge management for business processes, workflows and enterprise modelling.
• Program understanding, programming knowledge, modelling programs and programmers.
• Software engineering methods for intelligent tutoring systems.
At JCKBSE 2008 we had two distinguished keynote speakers: Professor Lakhmi Jain from the University of South Australia, Adelaide, Australia, and Professor Xindong Wu from the University of Vermont, U.S.A., who is also Visiting Chair Professor of Data Mining in the Department of Computing at the Hong Kong Polytechnic University, China. Summaries of their talks are included in this book.
In addition, there was an invited special session on “Advances in software technologies and cognitive systems”, which focused on presenting and discussing such issues from a research and development perspective. A summary of the goals of the special session and a paper presented in it are included in this book. Finally, a tutorial took place on Hybrid Reasoning with Argumentation Schemes.
We would like to thank the authors of the submitted papers for keeping the quality of the conference at a high level. Moreover, we would like to thank the members of the Program Committee, as well as the additional reviewers, for their rigorous reviews of the submissions. For their help with the organizational issues of JCKBSE 2008, we express our thanks to the local organizing co-chairs, Professor Dimitris Despotis and Associate Professor George Tsihrintzis, the publicity co-chairs, Professor Nikolaos Alexandris and Professor Evangelos Fountas, as well as the local organizing committee members at the University of Piraeus. Thanks are also due to Ari Sako of the University of Piraeus for customizing the Open Conference Manager software and for developing many software modules that facilitated the submission of papers, the reviewing process and the registration of conference participants.
Maria Virvou and Taichi Nakamura
JCKBSE'08 conference co-chairs
The Knowledge-Based Intelligent Engineering Systems Centre (KES) was established to provide a focus in South Australia for research and teaching activity in the areas of intelligent information systems and the defence and health industries. The overall goal is to synergise contributions from researchers in disciplines such as engineering, information technology, science, commerce and security.
Knowledge-based paradigms offer many advantages, such as learning and generalisation, over conventional techniques. The Centre aims to provide applied research support to the information, defence and health industries.
This talk will report on the progress made in some of our research projects, including aircraft landing support, teaming in multi-agent systems and hidden object detection. It will focus mainly on the industrial applications of knowledge-based paradigms.
Data mining seeks to discover novel and actionable knowledge hidden in data. Since dealing with large, noisy data is a defining characteristic of data mining, several issues are important and challenging yet cannot be directly solved by existing data mining algorithms: where the noise in a data source comes from; whether the noisy items are randomly generated (random noise) or comply with some type of generative model (systematic noise); and how these data errors can be used to boost the succeeding mining process and generate better results.
Consequently, systematic research efforts to bridge the gap between data errors and the available mining algorithms are needed to provide an accurate understanding of the underlying data and to produce enhanced mining results for imperfect, real-world information sources. This talk presents our recent investigations into bridging the data and knowledge gap when mining noisy information sources.
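The distinction between random and systematic noise drawn above can be illustrated with a small sketch. This is not from the talk itself: the function names, the per-item corruption model for random noise, and the constant-offset model for systematic noise are illustrative assumptions.

```python
import random

def add_random_noise(values, rate=0.1, spread=5.0, seed=0):
    """Random noise: corrupt a random subset of items with
    independent perturbations (no underlying generative model)."""
    rng = random.Random(seed)
    noisy = list(values)
    for i in range(len(noisy)):
        if rng.random() < rate:
            noisy[i] += rng.uniform(-spread, spread)
    return noisy

def add_systematic_noise(values, offset=2.0):
    """Systematic noise: every item is distorted by the same
    generative model (here, a constant measurement bias)."""
    return [v + offset for v in values]

clean = [10.0, 12.0, 11.0, 13.0]
print(add_systematic_noise(clean))  # [12.0, 14.0, 13.0, 15.0]
```

A mining algorithm that can detect the second kind of error could, in principle, invert the generative model and recover the clean data, whereas random noise can at best be detected and filtered.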
A visual classification method for scenarios, using differential information between normal scenarios, is presented. The behaviors of normal scenarios with similar purposes belonging to the same problem domain resemble each other. We derive this differential information, named the differential scenario, from such normal scenarios and apply it in order to classify scenarios. Our method is illustrated with examples. This paper describes (1) a language for describing scenarios based on a case grammar of actions, (2) the introduction of the differential scenario, and (3) a method of visualizing scenario classification, named the Scenario Map, using the differential scenario.
This paper proposes a conceptual model for an analysis method that extracts unexpected obstacles in order to improve the quality of embedded systems. Although embedded software has become increasingly large in scale and complexity, companies require the software to be developed within ever shorter periods of time. This trend in the industry has resulted in the oversight of unexpected obstacles and has consequently affected the quality of embedded software. In order to prevent such oversights, we have already proposed two methods for requirements analysis: the Embedded Systems Improving Method (ESIM), which uses an Analysis Matrix, and a method that uses an Information Flow Diagram (IFD). However, these analysis methods were developed separately. This paper proposes a conceptual model that includes both methods and clarifies the abstraction mechanisms that expert engineers use to extract unexpected obstacles in embedded systems. It also describes a case study and a discussion of the domain model.
This paper proposes a domain model comprising a static model, a dynamic model and a scenario generation method, providing a foundation for motivation-based human resource management. One of the main concerns of managers when establishing a management method in an organization is the individual members' motivation for the method. It is, however, difficult to manage members' motivation, because human resources are very complicated. We therefore propose a domain model that applies Lawler's motivation model. Using this model, we analyze an actual example of the successful establishment of CCPM (Critical Chain Project Management) in a company. We discuss primarily the stability of states, motivation, individuals' understanding of their roles and their relationships.
The Strategic Dependency (SD) model of the i* framework is used to model intentions among actors. However, it is not clear how to validate the equivalence of SD models. In this paper, a dependency matrix of intentions is proposed to formalize the equivalence of SD models. We also provide a method for developing SD models from dependency matrices. The proposed method can be used to reduce redundant actors using the dependency of intentions.
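The idea of checking SD-model equivalence via a dependency matrix can be sketched as follows. This is an illustrative reading of the abstract, not the paper's actual formalization: the actor names, the intention labels, and the representation of the matrix as a (depender, dependee) map are all hypothetical.

```python
def dependency_matrix(dependencies, actors):
    """Build a matrix mapping each ordered actor pair (depender, dependee)
    to the set of intentions the depender expects from the dependee."""
    matrix = {(a, b): set() for a in actors for b in actors}
    for depender, dependee, intention in dependencies:
        matrix[(depender, dependee)].add(intention)
    return matrix

# Two hypothetical SD models listing the same intentional
# dependencies in different orders.
actors = ["Customer", "Shop"]
m1 = dependency_matrix([("Customer", "Shop", "goods delivered"),
                        ("Shop", "Customer", "payment made")], actors)
m2 = dependency_matrix([("Shop", "Customer", "payment made"),
                        ("Customer", "Shop", "goods delivered")], actors)

# Under this formalization, equivalence of SD models reduces to
# equality of their dependency matrices.
print(m1 == m2)  # True
```

The benefit of such a matrix form is that comparisons become order-independent, and an actor whose row and column entries duplicate another's can be flagged as redundant.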
We propose an automatic service composition methodology in which three levels of composition knowledge are distinguished: user-level, logical-level and implementation-level knowledge. We use CoCoViLa, a knowledge-based software development environment that enables composition throughout these three levels. A motivation for this approach is the need to overcome the complexity of service composition over the very large sets of atomic services we are dealing with in our application domain, which concerns federated governmental information systems.
In this paper we review our work on the acquisition of game-playing capabilities by a computer when the only source of knowledge comes from extended self-play and sparsely dispersed human (expert) play. We briefly present experiments showing that a reinforcement learning backbone coupled with neural networks for approximation can indeed serve as a mechanism for the acquisition of game-playing skill, and we derive game interestingness measures that are inexpensive and straightforward to compute, yet capture the relative quality of the game-playing engine. We draw direct analogies to classical genetic algorithms and stress that evolutionary development should be coupled with more traditional, expert-designed paths. In this way the learning computer is exposed to tutorial games without having to resort to domain knowledge, thus facilitating the knowledge engineering life cycle.
The introduction of Knowledge Management (KM) processes is suggested herein for bridging knowledge gaps observed during long-term, highly complex engineering projects, such as the Natural Gas Project (NGP) of Greece, with a view to deploying a dedicated KM system in engineering companies. A model of three essential knowledge management processes (acquisition, maintenance and sharing) is demonstrated by applying the IDEF0 method of functional analysis and design. The functionality of the introduced KM processes has been demonstrated in a case study of a pipeline river-crossing sub-project.
The research presented in this paper introduces a user-context approach for the implementation of an adaptive Geographical Information System (GIS). The main focus of the paper is on implementation issues concerning the data used and their evaluation with respect to their suitability for the user interacting with the system. For the evaluation of the geographical information, the system uses a simple decision-making model and selects the information that seems most appropriate for a user. In this way, the GIS is able to adapt its interaction to each user and to make the interaction more user-friendly.
Knowledge is a key focus area in the emerging autonomic paradigm: the cognitive control loop inside an autonomic element revolves around the knowledge base built within the element. But the concept sounds familiar. For years now, knowledge epistemology and, in particular, the knowledge management domain have been focusing on knowledge acquisition, dissemination and sharing mechanisms within the big picture of knowledge-intensive organizations. This paper argues that the two domains bear significant resemblances and attempts to pinpoint them by forming the necessary analogies.
Requirements engineering is inherently concerned with discovering and/or predicting the purposes, goals and objectives of software systems. To discover/predict, analyze, elicit, specify and reason about the various requirements of software systems, we need the right fundamental logic system to provide us with a logical validity criterion for reasoning as well as a formal representation and specification language. This short position paper briefly shows that deontic relevant logic is a promising candidate for the fundamental logic we need.
There are programs, such as the Japan Associate Degree (JAD) Program in Malaysia, that give Malaysian students preparatory education for studying in Japan. In these programs, it is impossible for the lecturers dispatched from Japan to cover all the majors. Lessons are therefore often given by distance learning from Japan, but they have not yet achieved a sufficient educational effect. Distance learning here means either real-time lectures that Japanese lecturers give to Japanese students, or recorded content of such lectures. It is very difficult for the students in the JAD Program, who are not native Japanese speakers, to understand all the content of these distance lectures. Against this background, the JAD Program produces lecture content with captions for all sentences and conducts distance learning with it.
This paper discusses a method of relating lecture content to background information using the advance organizer proposed by D. P. Ausubel. It also describes an interface technique that uses visual effects in the captions to express the relationships of the advance organizer. The authors are conducting distance learning using lecture content based on these methods, and aim to evaluate the effect of this content.
One of the most widely used peer-to-peer file-sharing protocols is BitTorrent. In this article we focus on the possible use of the BitTorrent protocol, and of applications that make use of it, as a platform for the spread of new computer worms.
Modern software development companies that have a quality assurance program use measurements and standards to improve product quality as perceived by the users of their products. However, throughout the software life cycle, different types of ‘users’ appear besides the final customers. This paper first shows the different views these types of users hold of software quality. It also presents the internal and external measurement methods we used to measure users' opinions of software quality, the benefits and drawbacks of each method, and information concerning the techniques used to conduct internal and external measurements. Surveys and examples showing whether software metrics and external views of quality are correlated are also presented. The aim of this paper is to determine to what extent, and in which cases, we can rely on software metrics to define users' perception of software quality.
Assertions are used for unit testing in object-oriented programming. With assertions, a test programmer compares the result of a test execution with an expected value, where the result is obtained by navigating from the object under test to the object that contains it. As a consequence, test classes depend strongly on unnecessary objects that are not the concern of the unit test. In this work we developed an assertion mechanism that traverses objects automatically in order to eliminate dependencies on such unnecessary objects. Furthermore, we performed experiments on open-source products and confirmed that using this assertion mechanism decreases the coupling between a test class and other classes.
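The idea of an assertion that traverses object graphs automatically can be sketched as follows. This is a minimal illustration of the general technique, not the paper's actual mechanism; the function name, the `Person`/`Address` classes, and the use of `__dict__` for traversal are hypothetical.

```python
def assert_object_equals(expected, actual, path="obj"):
    """Recursively traverse two object graphs and compare their leaf
    values, so the test code never has to navigate intermediate
    objects (and thus never couples itself to their classes)."""
    if hasattr(expected, "__dict__") and hasattr(actual, "__dict__"):
        for name, value in vars(expected).items():
            assert_object_equals(value, getattr(actual, name), f"{path}.{name}")
    else:
        assert expected == actual, f"{path}: expected {expected!r}, got {actual!r}"

class Address:
    def __init__(self, city):
        self.city = city

class Person:
    def __init__(self, name, address):
        self.name, self.address = name, address

# The test compares whole object graphs in one assertion, without
# referencing Address anywhere in the test body.
assert_object_equals(Person("Ann", Address("Kyoto")),
                     Person("Ann", Address("Kyoto")))
print("test passed")
```

Without such a mechanism, the test would have to write something like `assert person.address.city == "Kyoto"`, which couples the test class to `Address` even though `Address` is not what is being tested.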
Computers and software are now widely used in human society. With their dissemination, it has become necessary for disabled and elderly people to be able to use software, yet developing accessible software for them is difficult. Many guidelines and support tools exist for developing accessible web sites; for software, however, such guidelines and support tools are few. In our research, to make it easier to develop accessible software, we propose a method of evaluating the accessibility of Graphical User Interface (GUI) software. In our method, the source programs of GUI software are analyzed, the accessibility of the software is evaluated against accessibility guidelines, and a list of code that fails to satisfy accessibility requirements, together with indications of how to modify it, is produced.
In this paper, we introduce AWL, a new language for workflow diagrams for requirements analysis in large-scale information system developments. A merit of AWL is that one can describe concrete and accurate life cycles of evidences (evidence documents) in workflow diagrams written in the language. We also introduce AWDE, a tool that supports users in composing consistent workflow diagrams in AWL by visualizing the life cycles of evidences in the workflow diagrams and verifying the consistency of those life cycles. As a validation of AWL and AWDE, we report experimental results from requirements analysis in real large-scale information system developments with and without AWDE.
This paper describes some automatic conversion methods, and a system based on them, that convert Japanese specifications into UML class diagrams. The characteristics and limitations of the methods are also discussed in terms of two experiments using specifications from UML textbooks. The results of the evaluation experiments showed the fundamental validity of the methods and the system.