
Ebook: Twelfth Scandinavian Conference on Artificial Intelligence

Artificial intelligence has become so much a part of everyday life that it is now hard to imagine a world without it. This book presents papers from the 12th Scandinavian Conference on Artificial Intelligence (SCAI), held in Aalborg, Denmark, in November 2013. The SCAI conference is the main biennial platform for the AI research community in Scandinavia, and the papers collected here include contributions not only from Scandinavia but also from other European and non-European countries.
Topics cover the entire range of AI, with a particular focus on machine learning and knowledge representation, as well as uncertainty in AI and applications.
In addition to the 28 regular papers, this book also includes extended abstracts from the doctoral symposium, a new feature at this conference, in which Ph.D. students presented their research-in-progress to a panel of experts. This book will be of interest to all those who wish to keep up to date with the latest developments in artificial intelligence.
The Scandinavian Conference on Artificial Intelligence (SCAI) is the main biennial conference for the AI research communities of Scandinavia. In its 12th edition, the conference takes place in Aalborg, Denmark on November 20–22, 2013.
The conference programme consists of plenary presentations of contributed papers, invited talks, a doctoral symposium, as well as the annual meeting of the Swedish AI Society (SAIS).
We received 38 regular paper submissions. In addition to submissions from the Scandinavian countries, papers were also received from other European and non-European countries. After a careful peer reviewing process, in which each paper was reviewed by three programme committee members, 28 papers were selected for presentation at the conference and inclusion in these proceedings. The topics of these papers cover the entire range of AI, with a particularly strong representation of machine learning, knowledge representation, uncertainty in AI, and applications.
We are very pleased that two distinguished speakers have accepted our invitation to give invited talks at the conference: Professor Danica Kragic from the Royal Institute of Technology, Sweden, who will speak about robot vision, and Professor Paul Davidsson from Malmö University, who will speak about intelligent transport and energy systems.
A new feature of this edition of SCAI is a doctoral symposium, in which PhD students present their research-in-progress to a panel of experts. Extended abstracts of these presentations are also included in these proceedings.
The organisers of SCAI-13 would like to thank all the authors who submitted their papers to the conference, and the members of the programme committee for their great work in providing thoughtful reviews.
We also gratefully acknowledge the financial support of the Otto Mønsteds Fond, and our industrial sponsors Dezide ApS, Hugin Expert A/S, and Verdande Technology AS.
Manfred Jaeger
Thomas D. Nielsen
Paolo Viappiani
Experiences from different applications of agent technology aiming to make transport and energy systems more efficient are presented. The examples cover real-time applications on the operational level as well as support for long-term planning and decision-making on the strategic level. Some general reflections and insights from the work on these applications conclude the paper.
The current trend in computer vision is the development of data-driven approaches, where the use of large amounts of data tries to compensate for the complexity of the world captured by cameras. Are these approaches also viable solutions in robotics? Apart from ‘seeing’, a robot is capable of acting, and can thus purposively change what and how it sees the world around it. There is a need for an interplay between processes such as attention, segmentation, object detection, recognition and categorization in order to interact with the environment. In addition, the parameterization of these processes is inevitably guided by the task or the goal a robot is supposed to achieve. In this talk, I will present the current state of the art in the area of robot vision and discuss open problems in the area. I will also show how visual input can be integrated with proprioception, tactile and force-torque feedback in order to plan, guide and assess a robot's actions and interactions with the environment.
In this paper, a new approach is developed that focuses on different aspects of uncertainty when clustering categorical instances using possibility theory. Our proposal, called decremental possibilistic k-modes (DPKM), is an uncertain soft-clustering method based on decremental learning. First, it handles uncertain values by defining a possibility distribution for each attribute. Then, as a soft computing method, it computes the degree of belonging of each object to all clusters. After that, it performs decremental learning by removing the cluster most dissimilar to the objects. Hence, the proposed approach takes advantage of both uncertain soft clustering and decremental learning, saving computing time and improving clustering performance.
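As a rough illustration of these steps, the following sketch (hypothetical code, not the authors' implementation) computes soft membership degrees of categorical objects to cluster modes and then removes the cluster most dissimilar to the data:

    # Illustrative sketch of the decremental possibilistic k-modes idea;
    # all names and the similarity measure are assumptions.
    import numpy as np

    def matching_degree(obj, mode):
        # Simple matching similarity for categorical attributes.
        return np.mean([1.0 if a == b else 0.0 for a, b in zip(obj, mode)])

    def dpkm_step(objects, modes):
        # Degree of belonging of each object to every cluster (soft clustering).
        membership = np.array([[matching_degree(o, m) for m in modes]
                               for o in objects])
        # Decremental step: drop the cluster with the lowest total membership,
        # i.e. the one most dissimilar to the objects.
        worst = int(np.argmin(membership.sum(axis=0)))
        return membership, [m for j, m in enumerate(modes) if j != worst]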
The purpose of this paper is to present a simple extension to an existing inference algorithm on influence diagrams (i.e. decision theoretic extensions to Bayesian networks) that permits these algorithms to be applied to ensemble models. The extension, though simple, is important because of the power and robustness that such ensemble models provide [1]. This paper is intended principally as a ‘recipe’ that can be used even by those unfamiliar with the algorithms extended. Accordingly, I present the algorithms that the original contribution builds upon in full, though references are given to less concise renditions. Those familiar with these algorithms are invited to skip the elucidation. The consequence is a useful paper with more background and less original input than usual.
This paper introduces a new probabilistic graphical model called gated Bayesian network (GBN). This model evolved from the need to represent real-world processes that include several distinct phases. In essence, a GBN is a model that combines several Bayesian networks (BNs) in such a manner that they may be active or inactive during queries to the model. We use objects called gates to combine BNs, and to activate and deactivate them when predefined logical statements are satisfied. These statements are based on combinations of posterior probabilities of the variables in the BNs. Although the GBN is a new formalism, it shares features with other formalisms and lines of research, including influence diagrams, context-specific independence and structural adaptation.
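A minimal sketch of the gating mechanism described above; the class layout and the example trigger are invented for illustration:

    # A gate deactivates one component BN and activates another when a
    # logical statement over posterior probabilities is satisfied.
    class Gate:
        def __init__(self, trigger, source_bn, target_bn):
            self.trigger = trigger        # predicate over posterior probabilities
            self.source_bn = source_bn
            self.target_bn = target_bn

        def update(self, posteriors, active):
            # active: set of currently active BNs;
            # e.g. trigger = lambda p: p["phase_one_done"] > 0.9
            if self.source_bn in active and self.trigger(posteriors):
                active.discard(self.source_bn)
                active.add(self.target_bn)
            return active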
The 15-puzzle is a well-known game with a long history stretching back to the 1870s. The goal of the game is to arrange a shuffled set of 15 numbered tiles in ascending order, by sliding tiles into the one vacant space on a 4×4 grid. In this paper, we study how Reinforcement Learning can be employed to solve the 15-puzzle problem. Mathematically, this problem can be described as a Markov Decision Process with the states being puzzle configurations. This leads to a large state space with approximately 10¹³ elements. In order to deal with this large state space, we present a local variation of the Value-Iteration approach suitable for solving the 15-puzzle starting from arbitrary configurations. Furthermore, we provide a theoretical analysis of the proposed strategy for solving the 15-puzzle problem. The feasibility of the approach and the plausibility of the analysis are additionally shown by simulation results.
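The sketch below shows one way such a local value-iteration scheme might look: value updates are restricted to states within a few slides of the current configuration, since a global sweep over roughly 10¹³ states is intractable. The neighbourhood depth, rewards and other details are assumptions, not the authors' exact method.

    # States are 16-tuples of tile numbers (0 = blank); `neighbours(s)` yields
    # the configurations reachable by one slide, `is_goal(s)` tests the target.
    def local_value_iteration(start, neighbours, is_goal,
                              depth=10, gamma=0.9, sweeps=50):
        # Collect all states reachable within `depth` slides of `start`.
        frontier, local = {start}, {start}
        for _ in range(depth):
            frontier = {t for s in frontier for t in neighbours(s)} - local
            local |= frontier
        V = {s: 0.0 for s in local}
        for _ in range(sweeps):
            for s in local:
                if is_goal(s):
                    continue              # the goal state keeps value 0
                # Deterministic moves with cost 1 per slide; states outside
                # the local set get a pessimistic default value.
                default = min(V.values())
                V[s] = max(-1.0 + gamma * V.get(t, default)
                           for t in neighbours(s))
        return V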
Clustering algorithms have been used to divide genes into groups according to the degree of their expression similarity. Such a grouping may suggest that the respective genes are correlated and/or co-regulated, and subsequently indicates that the genes could possibly share a common biological role. In this paper, four clustering algorithms are investigated: k-means, cut-clustering, spectral and expectation-maximization. The algorithms are benchmarked against each other. The performance of the four clustering algorithms is studied on time series expression data using the Dynamic Time Warping distance in order to measure similarity between gene expression profiles. Four different cluster validation measures are used to evaluate the clustering algorithms: Connectivity and Silhouette Index for estimating the quality of clusters, Jaccard Index for evaluating the stability of a cluster method and Rand Index for assessing the accuracy. The obtained results are analyzed by Friedman's test and the Nemenyi post-hoc test. K-means is demonstrated to be significantly better than the spectral clustering algorithm under the Silhouette and Rand validation indices.
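For reference, the Dynamic Time Warping distance used to compare expression profiles can be computed with the standard dynamic programme sketched below (illustrative code, not the authors' implementation):

    import numpy as np

    def dtw(x, y):
        # D[i, j] = cost of the best warping path aligning x[:i] with y[:j].
        n, m = len(x), len(y)
        D = np.full((n + 1, m + 1), np.inf)
        D[0, 0] = 0.0
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                cost = abs(x[i - 1] - y[j - 1])
                D[i, j] = cost + min(D[i - 1, j],      # insertion
                                     D[i, j - 1],      # deletion
                                     D[i - 1, j - 1])  # match
        return D[n, m]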
Finding an optimal elimination ordering is an NP-hard problem of crucial importance for the efficiency of influence diagram (ID) evaluation. Some of the traditional methods for determining the elimination ordering use heuristics that assume that potentials are represented as tables. However, if potentials are represented using binary trees, traditional methods may not offer the best results. In the present paper, two new heuristics that assume that potentials are represented as binary trees are proposed. As a result, the storage requirements for evaluating an ID with binary trees are reduced.
In Automated Planning, learning and exploiting structural patterns of plans, domain models and/or problem models, in order to speed up plan generation and increase the scope of problems solved, has attracted much research. Reformulation techniques such as those based on macro-operators or entanglements are very promising, mainly because they are planner-independent. This paper revisits and extends recent work on inner entanglements, relations between pairs of planning operators and predicates that encapsulate exclusivity of predicate ‘achievements’ or ‘requirements’. We provide new theoretical results (PSPACE-completeness of deciding inner entanglements), present a new way of encoding inner entanglements, and give an empirical comparison between different kinds of inner entanglements.
This paper presents an efficient trap escape strategy in stochastic local search for Satisfiability. The proposed method aims to enhance local search by providing an alternative strategy for escaping local minima. Our variable selection scheme provides a novel local minima escaping mechanism to explore new solution areas. Conflict variables are hypothesized to be variables recently selected near local minima. Hence, a list of backtracked conflict variables is retrieved from local minima. The new strategy selects variables from the backtracked variable list based on the clause-weight scoring function, with stagnation weights and variable weights as tie-break criteria. This method is an alternative to the conventional method of selecting variables from a randomized unsatisfied clause. The proposed tie-break method favors high stagnation weights and low variable weights during trap escape phases. The new strategies are examined on verification benchmarks as well as application and crafted instances from the 2011 and 2012 SAT Competitions. Our experiments show that the proposed strategy has performance comparable with state-of-the-art local search solvers for SAT.
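A condensed sketch of the selection rule described above (all identifiers are hypothetical): among the backtracked conflict variables, pick the one with the best clause-weight score, preferring high stagnation weight and low variable weight on ties.

    def pick_escape_variable(backtracked, score, stagnation_w, variable_w):
        # Lexicographic preference: clause-weight score first, then high
        # stagnation weight, then low variable weight as tie-breaks.
        return max(backtracked,
                   key=lambda v: (score(v),
                                  stagnation_w[v],
                                  -variable_w[v]))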
Mixtures of truncated basis functions (MoTBFs) have recently been proposed as a generalisation of mixtures of truncated exponentials and mixtures of polynomials for modelling univariate and conditional distributions in hybrid Bayesian networks. In this paper we analyse the problem of incorporating prior knowledge when learning univariate MoTBFs. We consider scenarios where the prior knowledge is expressed as an MoTBF that is combined with another MoTBF density estimated from the available data. An important property, from the point of view of inference in hybrid Bayesian networks, is that the density yielded after the combination is again an MoTBF. We show the performance of the proposed method in a series of experiments with simulated data. The experiments suggest that the incorporation of prior knowledge improves the estimates, especially when data are scarce.
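Although the paper's exact combination rule is not reproduced here, one simple operation consistent with the stated closure property is a convex mixture of the prior density and the data-estimated density:

    f(x) = \lambda\, f_0(x) + (1 - \lambda)\, \hat{f}(x), \qquad \lambda \in [0, 1],

where f_0 is the prior MoTBF and \hat{f} is the MoTBF estimated from data. Since each MoTBF is a linear combination of basis functions, such a mixture is again a linear combination of basis functions, and hence again an MoTBF.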
The recovery of sparse signals in underdetermined systems is the focus of this paper. We propose an expanded version of the Variational Garrote, originally presented by Kappen (2011), which can use multiple measurement vectors (MMVs) to further improve source retrieval performance. We show its superiority compared to the original formulation and demonstrate its ability to correctly estimate both the sources' locations and their magnitudes. Finally, evidence is given of the high performance of the proposed algorithm compared to other MMV models.
In cross-country skiing there are different skiing techniques, known as gears. For professional skiers it is useful to analyze a ski session with respect to the gears used in different parts of the track. We have developed a statistical machine learning method that uses data from accelerometers in, e.g., a mobile phone placed on the chest of a skier to classify the gears used. The statistical model used is based on a Markov chain of multivariate Gaussian distributions. In addition to classification, the same model can be used for anomaly detection and unsupervised clustering of skiing movements. The method is evaluated on real data from elite skiers collected during a training race.
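To illustrate how such a model could be used for classification, the sketch below decodes a gear sequence with Viterbi-style dynamic programming over per-gear multivariate Gaussian emission models and a gear transition matrix; the decoding choice and all parameters are assumptions for illustration, not the authors' implementation.

    import numpy as np
    from scipy.stats import multivariate_normal

    def classify_gears(features, means, covs, log_trans):
        # features: (T, d) accelerometer feature vectors; means/covs give one
        # Gaussian per gear; log_trans: (K, K) log transition probabilities.
        T, K = len(features), len(means)
        log_emit = np.array([[multivariate_normal.logpdf(f, means[k], covs[k])
                              for k in range(K)] for f in features])
        delta, back = log_emit[0].copy(), np.zeros((T, K), dtype=int)
        for t in range(1, T):
            scores = delta[:, None] + log_trans   # scores[i, j]: gear i -> j
            back[t] = scores.argmax(axis=0)
            delta = scores.max(axis=0) + log_emit[t]
        path = [int(delta.argmax())]
        for t in range(T - 1, 0, -1):
            path.append(int(back[t][path[-1]]))
        return path[::-1]                         # most likely gear sequence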
We introduce a method for image segmentation that constrains the clustering with map and point data. The method is showcased by applying the spectral clustering algorithm to aerial images for building detection, with constraints built from a height map and address point data. We automatically detect the number of clusters using the elongated K-means algorithm instead of using the standard spectral clustering approach of a predefined number of clusters. The results are refined by filtering out noise with a binary morphological operator. Point data is used for semi-supervised labelling of building clusters. Our evaluation shows that the combination of constraints has a positive impact on the clustering quality achieved. Finally, we argue how the presented constraint types may be used in other applications.
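One simple way such constraints can enter spectral clustering, sketched below with invented parameters, is to modulate the pixel affinity matrix so that pixels with similar heights in the height map are more likely to end up in the same cluster:

    import numpy as np

    def constrained_affinity(colors, heights, sigma_c=10.0, sigma_h=2.0):
        # colors: (n, 3) pixel features; heights: (n,) values from the height map.
        dc = np.linalg.norm(colors[:, None, :] - colors[None, :, :], axis=-1)
        dh = np.abs(heights[:, None] - heights[None, :])
        # Appearance similarity, damped where the height constraint disagrees.
        return np.exp(-dc**2 / (2 * sigma_c**2)) * np.exp(-dh**2 / (2 * sigma_h**2))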
Theatrical performances usually follow strict scripts and actors are not allowed to deviate. A Danish theatrical group, Theater 770° Celsius, has invented a new method called In Real Life, in which only certain events in the storyline are specified and the actors are supposed to improvise to reach these events. The method bears a resemblance to multi-agent systems and we show how it can be formalized using the multi-agent organizational model OperA.
Human gesture recognition is an area which has been studied thoroughly in recent years, and close to 100% recognition rates in restricted environments have been achieved, often either with single separated gestures in the input stream, or with computationally intensive systems. The results are unfortunately not as striking when it comes to a continuous stream of gestures. In this paper we introduce a hierarchical system for gesture recognition for use in a gaming setting, with a continuous stream of data. Layer 1 is based on Nearest Neighbor Search (NNS) and layer 2 uses Hidden Markov Models. The system uses features that are computed from Microsoft Kinect skeletons. We propose a new set of features to use in NNS: the relative angles of the limbs with respect to the Kinect's axes. The new features show a 10 percentage point increase in precision when compared with features from previously published results. We also propose a way of attributing recognised gestures with a force attribute, for use in gaming. The recognition rate in layer 1 is 68.2%, with an even higher rate for simple gestures. Layer 2 reduces the noise and has an average recognition rate of 85.1%. When some simple constraints are added we reach a precision of 90.5% with a recall of 91.4%.
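Such angle features could be computed along the lines of the sketch below, where the joint names and the exact angle definition are assumptions for illustration:

    import numpy as np

    def limb_axis_angles(joint_a, joint_b):
        # Angles (in degrees) between the limb from joint_a to joint_b and
        # each of the Kinect's three coordinate axes.
        limb = np.asarray(joint_b, float) - np.asarray(joint_a, float)
        limb /= np.linalg.norm(limb)
        return [np.degrees(np.arccos(np.clip(limb @ axis, -1.0, 1.0)))
                for axis in np.eye(3)]

    # e.g. the right forearm: limb_axis_angles(skel["elbow_r"], skel["wrist_r"])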
The real-world applicability of automated planners depends on the expressiveness of the problem modeling language. Contemporary planners can deal with causal features of the problem, but only limited forms of temporal, resource and relational constraints. These constraints should be fully supported for dealing with real-world applications. We propose a highly expressive, action-based planning language which includes causal, relational, temporal and resource constraints. This paper also contributes an approach for solving such rich planning problems by decomposition and constraint reasoning. The approach is general with respect to the types of constraints used in the problem definition language, in that additional solvers need only satisfy certain formal properties. The approach is evaluated on a domain which utilizes many features offered by the introduced language.
In this paper we look at statistical models for predicting the outcome of football matches in a league. That is, our aim is to find a statistical model which, based on the game history so far in a season, can predict the outcome of the next round's matches. Many such models exist, but we are not aware of a thorough comparison of the models' merits as betting models. In this paper we look at some classical models, extract their key ingredients, and use those as a basis to propose a new model. The different models are compared by simulating bets being made on matches in the English Premier League.
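To make the shared key ingredients concrete, the sketch below shows the classical Poisson ingredient common to many such models (in the spirit of Maher-style models; this is not the paper's new model): each side's goal count is Poisson-distributed with a rate built from attack and defence strengths and a home advantage. All parameter names are illustrative.

    import numpy as np
    from scipy.stats import poisson

    def match_outcome_probs(att_h, def_h, att_a, def_a,
                            home_adv=1.3, max_goals=10):
        lam_home = home_adv * att_h * def_a   # expected goals, home side
        lam_away = att_a * def_h              # expected goals, away side
        g = np.arange(max_goals + 1)
        joint = np.outer(poisson.pmf(g, lam_home), poisson.pmf(g, lam_away))
        home_win = np.tril(joint, -1).sum()   # home goals > away goals
        draw = np.trace(joint)
        away_win = np.triu(joint, 1).sum()
        return home_win, draw, away_win

Outcome probabilities of this kind can then be compared with bookmakers' odds when simulating the betting strategies used in the evaluation.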
This paper presents a software architecture for deployment of decision support systems based on probabilistic graphical models (PGMs). It has been developed to ease the task of deploying a PGM on the Internet as a decision support system. The paper describes how the architecture has served as a strong platform for the development and deployment of two case studies in food safety.
In this paper, we introduce a new facial-expression analysis system designed to automatically recognize facial expressions, handle variation in facial-expression intensity, and reduce doubt and confusion between facial-expression classes. Our approach introduces a new method to efficiently segment facial feature contours using the Vector Field Convolution (VFC) technique. Relying on the detected contours, we extract facial feature points that follow facial-expression deformations. We then model a set of distances among the detected points to define prediction rules through data mining techniques. An experimental study was conducted to evaluate the performance of our proposed solution under varying factors.
Recognizing and supporting human activities is an important challenge for ambient assisted living. In this paper we introduce a novel argumentation-based approach for dealing with human activity recognition. From a model of the world and a set of observations of the world, hypothetical fragments of activities are built. These hypothetical fragments of activities are goal-oriented actions and are considered defeasible. We therefore consider extension-based argumentation semantics for the local selection of hypothetical fragments of activities. By considering degrees of fulfillment of activities together with local selection, a global selection of hypothetical fragments of activities is defined. As a result, we can make explicit statements about why one hypothetical activity was performed.
Predictive maintenance is becoming more and more important for commercial vehicle manufacturers, as focus shifts from product- to service-based operation. The idea is to provide a dynamic maintenance schedule fulfilling the specific needs of individual vehicles. Fortunately, the same shift of focus, together with technological advancements in the telecommunication area, is making long-term data collection more widespread, delivering the necessary data.
We have found, however, that the standard attribute-value knowledge representation is not rich enough to capture important dependencies in this domain. Therefore, we propose a new rule induction algorithm, inspired by Michalski's classical AQ approach. Our method is aware that the data concerning each vehicle consist of time-ordered sequences of readouts. When evaluating candidate rules, it takes into account the composite performance for each truck, instead of considering individual readouts in separation. This allows us more flexibility, in particular in defining the desired prediction horizon in a fuzzy, instead of crisp, manner.
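A sketch of the per-vehicle evaluation described above, with all names and the fuzzy weighting invented for illustration: a candidate rule is scored on whole readout sequences per truck, using a graded rather than crisp prediction horizon.

    def fuzzy_horizon_weight(days_to_failure, horizon=30.0):
        # 1.0 immediately before a failure, decaying linearly to 0 beyond
        # the prediction horizon (a fuzzy, rather than crisp, cutoff).
        return max(0.0, 1.0 - days_to_failure / horizon)

    def rule_score(rule, vehicles):
        # `vehicles` maps a truck id to its time-ordered readouts; each
        # readout records the days until that truck's next failure.
        per_truck = []
        for readouts in vehicles.values():
            hits = [fuzzy_horizon_weight(r["days_to_failure"])
                    for r in readouts if rule(r)]
            per_truck.append(sum(hits) / max(1, len(hits)))
        # Composite performance per truck, not per individual readout.
        return sum(per_truck) / len(per_truck)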