
Ebook: Tenth Scandinavian Conference on Artificial Intelligence

For ten editions now, the Scandinavian Conference on Artificial Intelligence has continued a tradition of being one of the most important regional AI conferences in Europe. The topics of this year's contributions have a broad range, from machine learning, knowledge representation, robotics, planning and scheduling, natural language, computer vision, search algorithms and industrial applications to philosophical foundations. These contributions exemplify the diversity of research in artificial intelligence today and confirm the achievements and breadth of 25 years of AI research in Scandinavia. This tenth edition offers an overview of the past, present and future of artificial intelligence. Furthermore, attention is paid to the industrial aspects of artificial intelligence and to impressions of Swedish AI through the years. Other topics discussed are biosurveillance and probabilistic modelling and learning in a relational world.
The Tenth Scandinavian Conference on Artificial Intelligence continues a tradition of being one of the most important regional AI conferences in Europe.
This year's conference is organised by SICS, the Swedish Institute of Computer Science, together with SAIS, the Swedish Artificial Intelligence Society. It is not only the tenth SCAI conference and the 20th anniversary of SCAI, but also the 25th anniversary of SAIS, and we are proud to celebrate this jubilee year with six special guest speakers and a three-day conference. 51% of the papers submitted to the conference were accepted for oral presentation and 17% for poster presentation.
The topics of this year's contributions have a broad range, from Machine Learning, Knowledge Representation, Robotics, Planning and Scheduling, Natural Language, Computer Vision, Search Algorithms and Industrial Applications to Philosophical Foundations. These contributions exemplify the diversity of research in artificial intelligence today and confirm the achievements and breadth of 25 years of AI research in Scandinavia. This research has also led to a strong research community that is well integrated and multi-disciplinary. We continue to maintain a balance between theoretical research and valuable application results for use in business, medicine and industry, and AI is a more dynamic and interesting research area than ever before.
We are honoured to present invited speakers Kathleen A. McCormick, Ph.D., F.A.C.M.I., Chief Scientist/Vice President of SAIC in Health Solutions, USA; Manfred Jaeger, Associate Professor at Aalborg University, Denmark; Agnar Aamodt, Professor at the Norwegian University of Science and Technology, Trondheim, Norway; Erik Sandewall, Professor at Linköping University, Sweden; Patrick Doherty, Professor at Linköping University, Sweden; and Carl Gustaf Jansson, Professor at Stockholm University, Sweden.
The editors of this volume would like to thank the members of the Program Committee for their competent assessment of the contributed papers. We would also like to thank the members of the Organising Committee, the SAIS Jubilee Committee and the SAIS Master's Thesis Award Committee. We are also happy to introduce the winner of the SAIS Master's Thesis Award 2008, Malin Aktius from the University of Skövde.
We acknowledge the generous support of our industrial sponsors Google, Minst, Robotdalen and Volve AS, and also the Swedish Institute of Computer Science, whose support and financial guarantee made this conference possible.
Stockholm, May 2008
Anders Holst, Per Kreuger, Peter Funk
We present a novel algorithm for visibility approximation that is substantially faster than ray-casting-based algorithms. The algorithm does not require extensive preprocessing or specialized hardware, as most other algorithms do. We test this algorithm in several settings: rural, mountainous and urban areas, with different view ranges and grid cell sizes. By changing the size of the grid cells that the algorithm uses, it is possible to trade speed against accuracy.
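For readers unfamiliar with the baseline, the sketch below shows a straightforward line-of-sight test on a grid height map, the kind of per-ray check that ray-casting-based visibility computation repeats for every target cell. The function names, sampling density and eye height are illustrative assumptions; this is not the paper's approximation algorithm.

```python
# A minimal line-of-sight baseline on a grid height map (the kind of
# ray-casting check a fast visibility approximation is compared against).
# The sampling scheme and parameters are illustrative.
import numpy as np

def line_of_sight(height, observer, target, eye_height=1.8, samples_per_cell=2):
    """Return True if `target` is visible from `observer` on the height grid."""
    (r0, c0), (r1, c1) = observer, target
    h0 = height[r0, c0] + eye_height
    h1 = height[r1, c1]
    dist = np.hypot(r1 - r0, c1 - c0)
    n = max(2, int(dist * samples_per_cell))
    for t in np.linspace(0.0, 1.0, n)[1:-1]:       # sample along the ray, skip endpoints
        r, c = r0 + t * (r1 - r0), c0 + t * (c1 - c0)
        terrain = height[int(round(r)), int(round(c))]
        ray = h0 + t * (h1 - h0)                    # height of the sight line at this point
        if terrain > ray:                           # terrain blocks the ray
            return False
    return True

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    grid = rng.uniform(0, 5, size=(100, 100))       # synthetic terrain
    print(line_of_sight(grid, (0, 0), (99, 99)))
```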
Area under the receiver operating characteristics curve (AUC) is a popular measure for evaluating the quality of binary classifiers, and intuitively, machine learning algorithms that maximize an approximation of AUC should have good AUC performance when classifying new examples. However, designing such algorithms in the framework of kernel methods has proven to be challenging. In this paper, we address AUC maximization with the regularized least-squares (RLS) algorithm, also known as the least-squares support vector machine. First, we introduce an RLS-type binary classifier that maximizes an approximation of AUC and has a closed-form solution. Second, we show that this AUC-RLS algorithm is computationally as efficient as the standard RLS algorithm that maximizes an approximation of the accuracy. Third, we compare the performance of these two algorithms in the task of assigning topic labels to newswire articles in terms of AUC. Our algorithm outperforms the standard RLS in every classification experiment conducted. The performance gains are most substantial when the distribution of the class labels is unbalanced. In conclusion, modifying the RLS algorithm to maximize the approximation of AUC does not increase the computational complexity, and this alteration enhances the quality of the classifier.
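As a point of reference, the following sketch shows the standard (accuracy-oriented) kernel RLS baseline with its closed-form dual solution, evaluated with AUC. The RBF kernel, regularization value and synthetic data are assumptions for illustration; the AUC-RLS variant contributed by the paper is not reproduced here.

```python
# Standard kernel RLS baseline with closed-form solution, evaluated by AUC.
# This is the comparison point, not the paper's AUC-RLS algorithm.
import numpy as np

def rbf_kernel(X, Z, gamma=0.5):
    d = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d)

def rls_fit(K, y, lam=1.0):
    # Closed-form dual solution: a = (K + lam * I)^{-1} y
    return np.linalg.solve(K + lam * np.eye(len(y)), y)

def auc(scores, labels):
    # Fraction of (positive, negative) pairs ranked correctly (ties count 1/2).
    pos, neg = scores[labels == 1], scores[labels == -1]
    diff = pos[:, None] - neg[None, :]
    return ((diff > 0).sum() + 0.5 * (diff == 0).sum()) / diff.size

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 5))
    y = np.where(X[:, 0] + 0.3 * rng.normal(size=200) > 0, 1, -1)
    Xtr, Xte, ytr, yte = X[:150], X[150:], y[:150], y[150:]
    a = rls_fit(rbf_kernel(Xtr, Xtr), ytr.astype(float))
    print("AUC:", auc(rbf_kernel(Xte, Xtr) @ a, yte))
```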
A reinforcement architecture is introduced that consists of three complementary learning systems with different generalization abilities. The ACTOR learns state-action associations, the CRITIC learns a goal-gradient, and the PUNISH system learns which actions to avoid. The architecture is compared to the standard actor-critic and Q-learning models on a number of maze learning tasks. The novel architecture is shown to be superior on all the test mazes. Moreover, it shows how it is possible to combine several learning systems with different properties in a coherent reinforcement learning framework.
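To make the three-component idea concrete, here is a minimal tabular sketch of how actor, critic and punish signals can be stored and combined when selecting and updating actions. The update rules, softmax combination and toy environment are illustrative assumptions rather than the paper's architecture.

```python
# Illustrative tabular combination of actor, critic and punish tables.
# Not the paper's update rules; only a sketch of how the signals can interact.
import numpy as np

n_states, n_actions = 16, 4
actor  = np.zeros((n_states, n_actions))   # state-action preferences
critic = np.zeros(n_states)                # value estimate ("goal gradient")
punish = np.zeros((n_states, n_actions))   # accumulated penalty per action
alpha, beta, gamma = 0.1, 0.1, 0.95

def select_action(s, rng):
    # Preferences minus punishment, passed through a softmax.
    score = actor[s] - punish[s]
    p = np.exp(score - score.max())
    return rng.choice(n_actions, p=p / p.sum())

def update(s, a, r, s_next):
    td = r + gamma * critic[s_next] - critic[s]   # temporal-difference error
    critic[s] += alpha * td
    actor[s, a] += beta * td                      # strengthen or weaken the taken action
    if r < 0:
        punish[s, a] += -r                        # remember actions that were punished

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    s = 0
    for _ in range(100):                          # dummy random-walk environment
        a = select_action(s, rng)
        s_next = (s + 1) % n_states
        r = 1.0 if s_next == n_states - 1 else (-1.0 if a == 3 else 0.0)
        update(s, a, r, s_next)
        s = s_next
    print(critic.round(2))
```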
The ability to give explanations for its reasoning and behaviour is a core capability of an intelligent system. There are a number of different goals a user can have towards such explanations. This paper presents how the knowledge intensive case-based reasoning framework CREEK can support some of these different goals in an ambient intelligence setting.
When developing semantic applications, constructing the underlying ontologies is a crucial part. The construction of both Semantic Web and enterprise ontologies needs to be semi-automatic in order to reduce the effort required and the need for expert ontology engineers. Another important issue is to introduce knowledge reuse in the ontology construction process. By basing our semi-automatic method on the principles of case-based reasoning (CBR), we envision a novel semi-automatic ontology construction approach and also a novel application of case-based reasoning. The development of OntoCase is still ongoing; in this paper we report mainly on the motivation for using CBR and its possible benefits.
One of the largest factors contributing to losses in steel manufacturing is defects in the steel strips produced. Predicting these defects beforehand would therefore be very important. In this study we used classifiers – feedforward neural networks and a support vector machine – to solve this problem. We also used different kinds of feature selection methods as a preprocessing step for the classifiers. In this study, the two classifiers achieved a similar level of classification error.
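As an illustration of the overall setup, a minimal scikit-learn pipeline with a feature selection step in front of an SVM classifier might look as follows. The synthetic data, the f_classif selector and the chosen hyperparameters are placeholders for the actual strip-defect data and the methods used in the study.

```python
# Feature selection as a preprocessing step, followed by a classifier.
# The data set here is synthetic, standing in for the strip-defect measurements.
from sklearn.pipeline import Pipeline
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=500, n_features=40, n_informative=8,
                           random_state=0)
clf = Pipeline([
    ("select", SelectKBest(f_classif, k=10)),   # keep the 10 most relevant features
    ("svm", SVC(kernel="rbf", C=1.0)),          # defect / no-defect classifier
])
print("CV error:", 1 - cross_val_score(clf, X, y, cv=5).mean())
```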
Robots that share their workspace with humans, like household or service robots, need to take into account the presence of humans when planning their actions. In this paper, we present a framework for human-aware planning in which we consider three kinds of human-robot interaction. We focus in particular on the core module of the framework, a human-aware planner that generates a sequence of actions for a robot, taking into account the status of the environment, the goals of the robot and the forecasted plan of the human. We present a first realization of this planner, together with two simple experiments that demonstrate the feasibility of our approach.
The problem of object generalization, taking into account the need to process the incomplete and inconsistent information stored in real databases, is considered. It is suggested to use rough set theory and decision trees to generalize the information stored in real databases. Noise models are presented, and the effect of noise on the operation of generalization algorithms based on decision tree construction is analysed. An algorithm, based on the nearest neighbour method, is proposed for reconstructing unknown values in learning samples subjected to noise. The results of simulation experiments are presented.
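A small sketch of the nearest-neighbour idea for reconstructing unknown values is given below. The distance computation, the choice of k and the toy data are illustrative assumptions, not the algorithm proposed in the paper.

```python
# Nearest-neighbour reconstruction of unknown (NaN) values in a learning sample.
# Distances are computed over the features observed in the incomplete row.
import numpy as np

def knn_impute(X, k=3):
    """Fill NaN entries with the mean of the k nearest rows (by observed features)."""
    X = X.astype(float).copy()
    for i, row in enumerate(X):
        missing = np.isnan(row)
        if not missing.any():
            continue
        obs = ~missing
        # candidate rows: complete in the observed columns, and not the row itself
        cand = ~np.isnan(X[:, obs]).any(axis=1) & (np.arange(len(X)) != i)
        d = np.linalg.norm(X[cand][:, obs] - row[obs], axis=1)
        nearest = X[cand][np.argsort(d)[:k]]
        row[missing] = np.nanmean(nearest[:, missing], axis=0)
    return X

if __name__ == "__main__":
    data = np.array([[1.0, 2.0, np.nan],
                     [1.1, 1.9, 3.0],
                     [0.9, 2.1, 2.8],
                     [5.0, 5.0, 9.0]])
    print(knn_impute(data, k=2))
```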
We propose a troubleshooting algorithm that can troubleshoot systems with dependent action costs. When actions are performed they may change the way the system is decomposed and affect the cost of future actions. We present a way to model this by extending the traditional troubleshooting model with an additional state that describes which parts of the system are decomposed. The proposed troubleshooting algorithm searches an AND/OR graph with the aim of finding the repair plan that minimizes the expected cost of repair. We present the heuristics needed to speed up the search and make it competitive with other troubleshooting algorithms. Finally, the performance of the algorithm is evaluated on a probabilistic model of the fuel injection system of a truck. We show that the expected cost of repair can be reduced compared with an algorithm from previous literature.
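The following toy example illustrates why decomposition makes action costs dependent: the expected cost of a fixed repair sequence is computed with each action's disassembly cost waived when an earlier action has already opened the same assembly. The probabilities, costs, single-fault-style bookkeeping and component names are invented for illustration and do not reproduce the paper's model or its AND/OR search.

```python
# Expected cost of a repair sequence with costs that depend on what has
# already been disassembled.  All numbers and names are illustrative.
def expected_cost(sequence, actions):
    """actions[name] = (p_fix, base_cost, assembly, open_cost)."""
    ecr, p_not_fixed_yet, opened = 0.0, 1.0, set()
    for name in sequence:
        p_fix, base, assembly, open_cost = actions[name]
        cost = base + (0 if assembly in opened else open_cost)
        ecr += p_not_fixed_yet * cost        # cost is only paid if still broken
        opened.add(assembly)
        p_not_fixed_yet *= (1 - p_fix)       # chance the fault survives this action
    return ecr

actions = {
    "replace_injector": (0.5, 10, "engine_cover", 20),
    "replace_pump":     (0.3, 15, "engine_cover", 20),
    "replace_filter":   (0.2,  5, "none",          0),
}
print(expected_cost(["replace_filter", "replace_injector", "replace_pump"], actions))
print(expected_cost(["replace_injector", "replace_pump", "replace_filter"], actions))
```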
Learning preferences between objects constitutes a challenging task that notably differs from standard classification or regression problems. The objective involves predicting the ordering of the data points. Furthermore, methods for learning preference relations are usually computationally more demanding than standard classification or regression methods. Recently, we have proposed a kernel-based preference learning algorithm, called RankRLS, whose computational complexity is cubic with respect to the number of training examples. The algorithm is based on minimizing a regularized least-squares approximation of a ranking error function that counts the number of incorrectly ranked pairs of data points. When nonlinear kernel functions are used, training the algorithm might be infeasible if the number of examples is large. In this paper, we propose a sparse approximation of RankRLS whose training complexity is considerably lower than that of basic RankRLS. In our experiments, we consider parse ranking, a common problem in natural language processing. We show that sparse RankRLS significantly outperforms basic RankRLS in this task. To conclude, the advantage of sparse RankRLS is its computational efficiency when dealing with large amounts of training data together with high-dimensional feature representations.
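For orientation, the sketch below computes the pairwise ranking error that RankRLS approximates, together with a least-squares surrogate on score differences. The example scores and the exact surrogate form are illustrative; the sparse approximation proposed in the paper is not shown.

```python
# Pairwise ranking error (fraction of incorrectly ranked pairs) and a
# least-squares surrogate on score differences.  Illustrative only.
import numpy as np

def ranking_error(scores, relevance):
    """Fraction of pairs with different relevance that are ordered incorrectly."""
    wrong = total = 0
    n = len(scores)
    for i in range(n):
        for j in range(i + 1, n):
            if relevance[i] == relevance[j]:
                continue
            total += 1
            if (scores[i] - scores[j]) * (relevance[i] - relevance[j]) <= 0:
                wrong += 1
    return wrong / total

def least_squares_surrogate(scores, relevance):
    """Sum of squared errors between score differences and relevance differences."""
    s, r = np.asarray(scores, float), np.asarray(relevance, float)
    ds, dr = s[:, None] - s[None, :], r[:, None] - r[None, :]
    return 0.5 * ((ds - dr) ** 2).sum()

scores, relevance = [2.1, 0.3, 1.0, 1.7], [3, 0, 2, 1]
print(ranking_error(scores, relevance), least_squares_surrogate(scores, relevance))
```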
Maritime situation awareness is important in many areas, e.g. detecting weapon smuggling in military peacekeeping operations and harbor traffic control missions for the coast guard. In this paper, we combine the use of Self Organizing Maps with Gaussian Mixture Models in order to enable situation awareness by detecting deviations from normal behavior in an unsupervised way. Initial results show that simple anomalies can be detected using this approach.
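A minimal sketch of the Gaussian-mixture part of such a detector is shown below: fit a mixture to features of normal behaviour and flag low-likelihood observations. The synthetic speed/turn-rate features, the number of components and the threshold are assumptions, and the Self Organizing Map stage is omitted.

```python
# Unsupervised anomaly detection with a Gaussian mixture fitted to normal
# behaviour; observations with unusually low likelihood are flagged.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
normal = rng.normal(loc=[10.0, 0.0], scale=[2.0, 0.5], size=(500, 2))  # speed, turn rate
gmm = GaussianMixture(n_components=3, random_state=0).fit(normal)

threshold = np.quantile(gmm.score_samples(normal), 0.01)   # 1% least likely = anomalous
new = np.array([[11.0, 0.1],    # ordinary track
                [35.0, 3.0]])   # fast, sharply turning track
print(gmm.score_samples(new) < threshold)  # the outlying track should be flagged
```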
Decision theoretic troubleshooting combines Bayesian networks and cost estimates to obtain optimal or near optimal decisions in domains with inherent uncertainty. In this paper we use the well-known A* algorithm extended with pruning based on the efficiency of actions for finding optimal solutions in troubleshooting. In particular, we focus on models with dependent actions.
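For the simpler case of independent actions, the classical result is that ordering actions by decreasing efficiency (success probability divided by cost) minimizes the expected cost of repair; the toy check below compares that ordering with brute force. The probabilities and costs are invented, and the sketch does not reproduce the paper's A* search for dependent actions.

```python
# Efficiency (p/c) ordering versus brute-force search for independent actions.
from itertools import permutations

actions = {"A": (0.6, 100), "B": (0.3, 30), "C": (0.1, 5)}   # name: (p_fix, cost)

def expected_cost(order):
    ecr, p_broken = 0.0, 1.0
    for name in order:
        p, c = actions[name]
        ecr += p_broken * c          # cost paid only if the fault is still present
        p_broken *= 1 - p
    return ecr

by_efficiency = sorted(actions, key=lambda a: actions[a][0] / actions[a][1], reverse=True)
best = min(permutations(actions), key=expected_cost)
print(by_efficiency, list(best), expected_cost(best))
```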
In cellular network performance optimization, it is important to be able to predict the amount of user-experienced quality problems, such as call blocking, under alternative modifications to the network configuration. In this paper, a method based on mathematical optimization and knowledge representation for predicting the amount of blocking in GSM networks is presented. The method is based on exploiting the available statistical measurement information describing the individual characteristics of the network elements. In addition, application domain knowledge about blocking is included in the model by using the well-known Erlang-B formula to establish a mapping between blocking and existing network measurements. The results of the experiments show that the proposed method performs better than a comparison method based on the basic Erlang-B formula and raw measurement data.
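The Erlang-B formula mentioned here gives the blocking probability for offered traffic of E erlangs on m channels; the sketch below uses its standard numerically stable recursion, with arbitrary example values.

```python
# Erlang-B blocking probability via the standard recursion:
# B(E, 0) = 1;  B(E, m) = E*B(E, m-1) / (m + E*B(E, m-1))
def erlang_b(erlangs, channels):
    b = 1.0
    for m in range(1, channels + 1):
        b = (erlangs * b) / (m + erlangs * b)
    return b

# e.g. a cell with 14 traffic channels offered 9 erlangs of traffic
print(round(erlang_b(9.0, 14), 4))
```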
Numerous techniques of artificial intelligence have been used for building prediction models. One such task is the prediction of heat consumption in a district heating system. Not only is this required for ensuring sufficient heat production, but it is also necessary to avoid substantial heat loss due to overestimated demand for heat. The work presents the use of multilayer perceptrons for building prediction models. However, instead of building prediction models based on artificial neural networks only, a hybrid approach is considered and evaluated. An evolutionary approach used to combine neural networks and a number of simple methods into hybrid prediction models is presented. Such models are developed for groups of consumers sharing similar thermal properties, identified by a self-organising map. It is shown that by combining neural networks with simple predictive strategies, lower prediction error rates can be achieved than when using neural networks only.
The problem of automatically selecting simulation models for autonomous agents depending on their current intentions and beliefs is considered in this paper. The intended use of the models is for prediction, filtering, planning and other types of reasoning that can be performed with simulation models. The parameters and model fragments of the resulting model are selected by formulating and solving a hybrid constrained optimization problem that captures the intuition of the preferred model when relevance information about the elements of the world being modelled is taken into consideration. A specialized version of the original optimization problem is developed that makes it possible to solve the continuous subproblem analytically in linear time. A practical model selection problem is discussed where the aim is to select suitable parameters and models for tracking dynamic objects. Experiments with randomly generated problem instances indicate that a hill-climbing search approach is both efficient and provides reasonably good solutions compared to simulated annealing and hill climbing with random restarts.
Artificial intelligence research in Texas Hold'em poker has recently focused mainly on heads-up fixed-limit games. The game-theoretic methods used in the poker agents capable of playing at the very best level are not easily generalized to other forms of Texas Hold'em poker. In this paper we present a general heuristic-based approach to building a poker agent, where the betting strategy is defined by estimating the expected values of the actions. The approach is well suited for a larger number of players and is also easily modified to suit no-limit games.
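A toy illustration of the expected-value view of betting decisions follows: given an estimated probability of winning, the EVs of folding, calling and raising are compared. The pot-odds formula, the opponent fold probability and all numbers are simplifying assumptions; the paper's estimation of these quantities is far more involved.

```python
# Toy expected values of fold / call / raise given an estimated win probability.
# All inputs are placeholders for quantities the agent would have to estimate.
def action_evs(p_win, pot, to_call, raise_size, p_opponent_folds):
    ev_fold = 0.0
    ev_call = p_win * pot - (1 - p_win) * to_call          # standard pot-odds EV
    # Raise: opponent folds and we take the pot, or calls and we see a showdown.
    ev_raise = (p_opponent_folds * pot
                + (1 - p_opponent_folds) * (p_win * (pot + raise_size)
                                            - (1 - p_win) * (to_call + raise_size)))
    return {"fold": ev_fold, "call": ev_call, "raise": ev_raise}

evs = action_evs(p_win=0.35, pot=120, to_call=20, raise_size=60, p_opponent_folds=0.4)
print(max(evs, key=evs.get), evs)
```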
In this paper, several demands placed on concept description algorithms are identified and discussed. The most important criterion is the ability to produce compact rule sets that, in a natural and accurate way, describe the most important relationships in the underlying domain. An algorithm based on the identified criteria is presented and evaluated. The algorithm, named Chipper, produces decision lists, where each rule covers a maximum number of remaining instances while meeting a requested accuracy requirement. In the experiments, Chipper is evaluated on nine UCI data sets. The main result is that Chipper produces compact and understandable rule sets, clearly fulfilling the overall goal of concept description. In the experiments, Chipper's accuracy is similar to that of standard decision tree and rule induction algorithms, while its rule sets have superior comprehensibility.
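To convey the separate-and-conquer flavour described here, the sketch below greedily picks single-condition rules that cover as many remaining instances as possible while meeting a requested accuracy. The condition search, scoring and toy data are simplified assumptions and not Chipper's actual procedure.

```python
# Greedy decision-list induction: pick the highest-coverage single-condition
# rule meeting the accuracy requirement, remove what it covers, repeat.
import numpy as np

def induce_decision_list(X, y, min_accuracy=0.9, max_rules=10):
    X, y, rules = np.asarray(X, float), np.asarray(y), []
    remaining = np.ones(len(y), bool)
    for _ in range(max_rules):
        best = None          # (coverage, feature, threshold, direction, label)
        for f in range(X.shape[1]):
            for thr in np.unique(X[remaining, f]):
                for direction in ("<=", ">"):
                    covered = (X[:, f] <= thr) if direction == "<=" else (X[:, f] > thr)
                    covered &= remaining
                    if not covered.any():
                        continue
                    values, counts = np.unique(y[covered], return_counts=True)
                    label, acc = values[counts.argmax()], counts.max() / covered.sum()
                    if acc >= min_accuracy and (best is None or covered.sum() > best[0]):
                        best = (covered.sum(), f, thr, direction, label)
        if best is None:
            break
        _, f, thr, direction, label = best
        rules.append((f, direction, thr, label))
        covered = (X[:, f] <= thr) if direction == "<=" else (X[:, f] > thr)
        remaining &= ~covered
        if not remaining.any():
            break
    return rules

X = [[1.0], [2.0], [3.0], [8.0], [9.0], [10.0]]
y = ["low", "low", "low", "high", "high", "high"]
print(induce_decision_list(X, y, min_accuracy=1.0))
```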
Case-based reasoning (CBR) and decision analysis have been two separate research areas aiming to solve problems from different perspectives. CBR is powerful in offering solutions to problems by reusing previous experiences, while decision theory exhibits its strength in dealing with uncertain, nondeterministic situations subject to likelihoods, risks, and probable consequences. In this paper, we present a novel framework integrating CBR and decision analysis for the purpose of case-based decision analysis. CBR is employed as a methodology to reason from previous cases in building a decision model for the current situation, while decision theory is applied to the decision model learnt from previous cases to identify the most promising, secure, and rational choices. In this way, we take advantage of both the ability of CBR to learn without domain knowledge and the strength of decision theory to analyze under uncertainty.
Inspired by the problem of fault isolation we consider Bayesian inference from training data and background knowledge. We discuss how the background knowledge can be translated to equality constraints and show how it is introduced in the computations. The main advantage of combining data and background knowledge is achieved when the amount of data is limited.
We consider the situation where two agents each try to solve their own task in a common environment. A general framework for representing that kind of scenario is presented. The framework is used to model the analysis depth of the opponent agent and to determine an optimal policy under various assumptions about the analysis depth of the opponent. The framework is applied to a strategic game, Ausgetrickst, and experiments are reported.
Roboethics is a recently developed field of applied ethics which deals with the ethical aspects of technologies such as robots, ambient intelligence, direct neural interfaces, invasive nano-devices and intelligent soft bots. In this article we look specifically at the issue of (moral) responsibility in artificial intelligent systems. We argue for a pragmatic approach, where responsibility is seen as a social regulatory mechanism. We claim that having a system which takes care of certain tasks intelligently, learning from experience and making autonomous decisions, gives us reasons to talk about a system (an artifact) as being “responsible” for a task. No doubt, technology is morally significant for humans, so the “responsibility for a task” with moral consequences could be seen as moral responsibility. Intelligent systems can be seen as parts of socio-technological systems with distributed responsibilities, where responsible (moral) agency is a matter of degree. Knowing that all possible abnormal conditions of a system's operation can never be predicted, and that no system can ever be tested for all possible situations of its use, the responsibility of a producer is to assure proper functioning of a system under reasonably foreseeable circumstances. Additional safety measures must however be in place in order to mitigate the consequences of an accident. The socio-technological system aimed at assuring a beneficial deployment of intelligent systems has several functional responsibility feedback loops which must function properly: the awareness of, and procedures for handling, risks and responsibilities on the side of designers, producers, implementers and maintenance personnel, as well as the understanding by society at large of the values and dangers of intelligent technology. The basic precondition for developing this socio-technological control system is the education of engineers in ethics and keeping alive the democratic debate on preferences about the future society.
This paper presents a symbolic framework for program understanding. The framework makes use of three approaches spanning the following conceptual spaces: syntax, semantics, simulation, and pragmatics. The paper starts by focusing on grammars and then progresses via semantics and automata theory to program proving, a goal typical for maintainers. The captured argumentative information helps them to create new logic-based knowledge, in which abstract program concepts are used to connect lower-level representations. A tool, JavaMaster, is used for simulating Java and for proving the results.