We propose the COMP-AODE classifier, which adopts a compression-based approach to average the posterior probabilities computed by different non-naive classifiers (SPODEs). COMP-AODE improves classification performance over the well-known AODE model. COMP-AODE assumes a uniform prior over the SPODEs; we then develop the credal classifier COMP-AODE*, which substitutes the uniform prior with a set of priors. COMP-AODE* returns more classes when the classification is prior-dependent, namely when the most probable class varies with the prior adopted over the SPODEs. COMP-AODE* achieves higher classification utility than both COMP-AODE and AODE.
This paper is devoted to the proportional representation (PR) problem when the preferences are clustered single-peaked. PR is a "multi-winner" election problem, which we study in Chamberlin and Courant's scheme. We define clustered single-peakedness as a form of single-peakedness with respect to clusters of candidates, i.e., subsets of candidates that are consecutive (in arbitrary order) in the preferences of all voters. We show that the PR problem becomes polynomial when the size of the largest cluster of candidates (the width) is bounded. Furthermore, we establish the polynomiality of determining the single-peaked width of a preference profile (the minimum width over partitions of candidates into clusters compatible with clustered single-peakedness) when the preferences are narcissistic (i.e., every candidate is the most preferred one for some voter).
The paper investigates a general approach that allows us to solve IQ tests based on Raven's progressive matrices automatically. The 3 × 3 Raven matrices exhibit 8 geometric pictures displayed in 8 cells of the matrix, which have to be logically completed with a 9th picture to be put in the remaining empty matrix cell. In these tests, a set of candidate pictures for the solution is also given. The suggested approach is based on a logical view of analogical proportions (i.e., statements of the form "A is to B as C is to D"). The reading of Raven matrices in terms of such proportions can be applied to a feature-based description of the pictures, but also, in a number of cases, to a very low-level representation, i.e., the pixel level. It appears that the analogical proportion reading amounts here to copying patterns of feature values that already appear in the data (after checking that there are no conflicting patterns). Implementing this principle, the proposed algorithm computes the 9th picture without the help of any set of candidate solutions, on the sole basis of the 8 known cells of the Raven matrices. A comparison with other approaches is provided, and we emphasize the generality of the approach, which provides a simple and uniform mechanism applicable in many situations.
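As an illustration, the feature-level reading can be sketched as a tiny solver for one analogical proportion per feature (a minimal sketch with hypothetical function names; the paper's actual algorithm also covers the pixel-level representation and conflict checking over the whole matrix):

```python
def solve_proportion(a, b, c):
    """Solve x in the analogical proportion a : b :: c : x for a single
    Boolean/nominal feature value; returns None if no solution exists."""
    if a == b:      # nothing changes from a to b: copy c
        return c
    if a == c:      # the same change recurs: copy b
        return b
    return None     # conflicting pattern: the proportion has no solution

def complete(a_vec, b_vec, c_vec):
    """Complete the missing picture feature-wise from three known pictures."""
    out = []
    for a, b, c in zip(a_vec, b_vec, c_vec):
        x = solve_proportion(a, b, c)
        if x is None:
            return None  # at least one feature has a conflicting pattern
        out.append(x)
    return out
```

At the pixel level, the same solver is applied with each pixel treated as a Boolean feature.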
We present SHI3LD, an access control framework for RDF stores. Our solution supports access from mobile devices with context-aware policies and is exclusively grounded on standard Semantic Web languages. Designed as a pluggable filter for generic SPARQL endpoints, the module uses RDF named graphs and SPARQL to protect triples. Evaluation shows faster execution times for low-selectivity queries and a smaller impact on larger datastores.
This paper clarifies the connection between multiple criteria decision-making and decision under uncertainty in a qualitative setting relying on a finite value scale. While their mathematical formulations are very similar, the underlying assumptions differ and the latter problem turns out to be a special case of the former. Sugeno integrals are very general aggregation operations that can represent preference relations between uncertain acts or between multifactorial alternatives where attributes share the same totally ordered domain. This paper proposes a generalized form of the Sugeno integral that can cope with attributes which have distinct domains via the use of qualitative utility functions. In the case of decision under uncertainty, this model corresponds to state-dependent preferences on act consequences. Axiomatizations of the corresponding preference functionals are proposed in the cases where uncertainty is represented by possibility measures, by necessity measures, and by general monotonic set-functions, respectively. This is achieved by weakening previously proposed axiom systems for Sugeno integrals.
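For reference, the classical single-scale Sugeno integral that the paper generalizes can be sketched as follows; the generalized form would first map each attribute through its own qualitative utility function (a sketch assuming a common scale in [0, 1] and a capacity given in extension):

```python
def sugeno_integral(scores, mu):
    """Classical Sugeno integral: the maximum over levels lam of
    min(lam, mu(A_lam)), where A_lam is the set of criteria scoring
    at least lam. scores: dict criterion -> value in [0, 1];
    mu: dict frozenset -> capacity (monotone, mu({}) = 0, mu(all) = 1)."""
    result = 0.0
    for lam in sorted(set(scores.values())):
        level_set = frozenset(c for c, v in scores.items() if v >= lam)
        result = max(result, min(lam, mu[level_set]))
    return result
```

Only max and min are used, which is what makes the operator purely qualitative.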
The idea of classifier chains has recently been introduced as a promising technique for multi-label classification. However, despite being intuitively appealing and showing strong performance in empirical studies, still very little is known about the main principles underlying this type of method. In this paper, we provide a detailed probabilistic analysis of classifier chains from a risk minimization perspective, thereby helping to gain a better understanding of this approach. As a main result, we clarify that the original chaining method seeks to approximate the joint mode of the conditional distribution of label vectors in a greedy manner. As a result of a theoretical regret analysis, we conclude that this approach can perform quite poorly in terms of subset 0/1 loss. Therefore, we present an enhanced inference procedure for which the worst-case regret can be upper-bounded far more tightly. In addition, we show that a probabilistic variant of chaining, which can be utilized for any loss function, becomes tractable by using Monte Carlo sampling. Finally, we present experimental results confirming the validity of our theoretical findings.
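The contrast between greedy chaining and joint-mode inference can be sketched with toy conditional models standing in for the trained per-label classifiers (hypothetical functions; the paper's enhanced procedure and Monte Carlo variant are more refined than this exhaustive baseline):

```python
import itertools

def greedy_chain(x, cond_probs):
    """Original classifier chain inference: fix each label in turn to its
    locally most probable value given the labels predicted so far.
    cond_probs[i](x, prefix) -> P(y_i = 1 | x, y_1..y_{i-1})."""
    y = []
    for p in cond_probs:
        y.append(1 if p(x, tuple(y)) >= 0.5 else 0)
    return tuple(y)

def exact_mode(x, cond_probs):
    """Joint mode of the label distribution by exhaustive enumeration
    (exponential in the number of labels; only for tiny examples)."""
    best, best_p = None, -1.0
    for y in itertools.product([0, 1], repeat=len(cond_probs)):
        p = 1.0
        for i, cp in enumerate(cond_probs):
            pi = cp(x, y[:i])
            p *= pi if y[i] == 1 else 1.0 - pi
        if p > best_p:
            best, best_p = y, p
    return best
```

On distributions where an unlikely first label enables a very confident suffix, the greedy prediction misses the joint mode, which is exactly the regret phenomenon analyzed for subset 0/1 loss.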
Stochastic local search for satisfiability (SAT) has successfully been applied to solve a wide range of problems. However, it still suffers from a major shortcoming, namely being trapped in local minima. In this study, we explore different heuristics to avoid local minima. The main idea is to proactively avoid local minima rather than reactively escape from them. This is worthwhile because it is time consuming to successfully escape from a local minimum in a deep and wide valley, and revisiting an encountered local minimum several times makes matters even worse. Our new trap avoidance heuristics operate in two phases: (i) learning pseudo-conflict information at each local minimum, and (ii) using this information to avoid revisiting the same local minimum. We present a detailed empirical study of different strategies to collect pseudo-conflict information (using either static or dynamic heuristics) as well as to forget outdated information (using naive or time-window smoothing). We select a benchmark suite that includes all random and structured instances used in the 2011 SAT competition and three sets of hardware and software verification problems. Our results show that the new heuristics significantly outperform existing stochastic local search solvers (including Sparrow2011, the best local search solver for random instances in the 2011 SAT competition) on all tested benchmarks.
The efficiency of heuristic search planning crucially depends on the quality of the search heuristic, while succinct representations of state sets in decision diagrams can save large amounts of memory in the exploration. BDDA* – a symbolic version of A* search – combines the two approaches into one algorithm. This paper compares two of the leading heuristics for sequential-optimal planning: the merge-and-shrink heuristic and the pattern database heuristic, both of which can be compiled into a vector of BDDs and used in BDDA*. The impact of optimizing the variable ordering is highlighted and experiments on benchmark domains are reported.
Temporal Fast Downward (TFD) is a successful temporal planning system that is capable of dealing with numerical values. Rather than decoupling action selection from scheduling, it searches directly in the space of time-stamped states, an approach that has been shown to produce plans of high quality at the price of coverage. To increase coverage, TFD incorporates deferred evaluation and preferred operators, two search techniques that usually decrease the number of heuristic calculations by a large amount. However, the current definition of preferred operators offers only limited guidance in problems where heuristic estimates are weak or where subgoals require the execution of mutex operators. In this paper, we present novel methods for refining this definition and show how to combine the diverse strengths of different sets of preferred operators using a restarting procedure incorporated into a multi-queue best-first search. These techniques improve TFD's coverage drastically and preserve the average solution quality, leading to a system that solves more problems than each of the competitors of the temporal satisficing track of IPC 2011 and clearly outperforms all of them in terms of IPC score.
Our main contribution is a surprising polynomial-time algorithm for weighted coalitional manipulation of four-candidate Copeland (also known as Llull) elections. On the technical side, our algorithm relies on a polynomial-time routine that solves a variant of the partition problem. We also show that there is a pseudopolynomial-time algorithm for weighted coalitional manipulation with a fixed number of candidates under any anonymous rule with a polynomial-time winner-determination procedure.
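The flavor of such partition-style routines can be illustrated by the standard pseudopolynomial subset-sum computation over voter weights (a generic sketch, not the paper's exact variant):

```python
def reachable_weights(weights):
    """Pseudopolynomial subset-sum DP: the set of total weights achievable
    by some sub-coalition of manipulators. Runs in time O(n * W), where W
    is the total weight, hence pseudopolynomial in the input encoding."""
    reachable = {0}
    for w in weights:
        reachable |= {s + w for s in reachable}
    return reachable
```

Enumerating reachable coalition weights is the typical building block for deciding whether the manipulators can split their voting weight so as to make the preferred candidate win.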
Much research has been devoted to the use of argumentation to support inter-agent dialogues. Here, we contribute to this line of research by investigating the strategic behaviour of agents in argumentation-based dialogues, using Assumption-Based Argumentation (ABA) as the underlying framework. We focus on information-seeking and inquiry dialogues, giving formalisations thereof and showing how they can be supported by specific classes of strategy-move functions for agents to select suitable utterances.
This paper describes an approach for guiding human choice-making by a computerized agent in a conversational setting, where both user and agent provide meaningful input. In the proposed approach, the agent attempts to convince a person by providing examples for the person to emulate or by providing justifications for the person to internalize and build or change her preferences accordingly. The agent can take into account examples and justifications provided by the person. In a series of experiments where the task was selecting a location for a school, a computer agent interacted with subjects using a textual chat-type interface, with different agent designs being used in different experiments. The results show that the example-providing agent outperformed the justification-providing agent and both, surprisingly, outperformed an agent which presented the subject with both examples and justifications. In addition, it was demonstrated that in some cases the best strategy for the agent is to keep silent.
Work on distributional thesauri has by now widely shown that the relations in these thesauri are mainly reliable for high-frequency words and for capturing semantic relatedness rather than strict semantic similarity. In this article, we propose a method for improving such a thesaurus by re-balancing it in favor of middle- and low-frequency words. The method is based on a bootstrapping mechanism: sets of positive and negative examples of semantically related words are selected in an unsupervised way from the results of the initial measure and used to train a supervised classifier. This classifier is then applied to rerank the initial semantic neighbors. We evaluate the benefit of this reranking on a large set of English nouns with various frequencies.
Reinforcement Learning (RL) suffers from several difficulties when applied to domains with no obvious goal state defined, which leads to inefficiency in RL algorithms. In this paper we consider a solution within the context of a widely-used testbed for RL, that of RoboCup Keepaway soccer. We introduce Argumentation-Based RL (ABRL), using methods from argumentation theory to integrate domain knowledge, represented by arguments, into the SMDP algorithm for RL by means of potential-based reward shaping. Empirical results show that ABRL outperforms the original SMDP algorithm on this game, improving final performance.
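Potential-based reward shaping itself is a one-line transformation of the reward signal; in ABRL the potential Φ would be derived from the arguments' recommendations (the potential below is a hypothetical stand-in):

```python
def shaped_reward(r, s, s_next, phi, gamma=1.0):
    """Potential-based shaping F(s, s') = gamma * phi(s') - phi(s):
    adding F to the environment reward is known (Ng, Harada & Russell, 1999)
    to preserve the optimal policies of the underlying (S)MDP, so the
    injected knowledge can speed up learning without biasing the solution."""
    return r + gamma * phi(s_next) - phi(s)
```

A potential encoding argumentation-derived knowledge could, for instance, assign higher values to states that an applicable argument recommends.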
Case-based planning (CBP) reuses existing plans as a starting point to solve new planning problems. In this work, we address CBP for planning in PDDL domains with real-valued fluents, which are essential to model real-world problems involving continuous resources. Specifically, we propose new heuristic techniques for retrieving from a library of existing plans a plan that is promising for a new given planning problem, i.e., that can be efficiently adapted to solve the new problem. The effectiveness of these techniques, which derive much of their power from the use of the numerical information in the planning problem specification and in the library plans, is then evaluated by an experimental analysis.
Multiple kernel learning algorithms are proposed to combine kernels in order to obtain a better similarity measure or to integrate feature representations coming from different data sources. Most previous research on such methods has focused on classification formulations, with few attempts at regression. We propose a fully conjugate Bayesian formulation and derive a deterministic variational approximation for single-output regression. We then show that the proposed formulation can be extended to multiple-output regression. We illustrate the effectiveness of our approach on a single-output benchmark data set. Our framework outperforms previously reported results, with better generalization performance, on two image recognition data sets using both single- and multiple-output formulations.
We consider problems where a solution is evaluated by a pair of values, each coordinate representing an agent's utility. Due to possible conflicts, it is unlikely that one feasible solution is optimal for both agents; a natural aim is then to find tradeoffs. We investigate tradeoff solutions with guarantees for the agents. The focus is on discrete problems having a matroid structure. We provide polynomial-time deterministic algorithms which achieve several guarantees, and we prove that some guarantees cannot be achieved.
A key task in process mining consists of building a graph of causal dependencies over process activities, which can then be used to derive more expressive models in some high-level modeling language. An approach to accomplish this task is presented where the learning process can exploit the background knowledge that, in many cases, is available to the analysts taking care of the process (re-)design. The method is based on encoding the information gathered from the log and the (possibly) given background knowledge in terms of precedence constraints, i.e., constraints over the topology of the graphs. Learning algorithms are eventually formulated in terms of reasoning problems over precedence constraints, and the computational complexity of such problems is thoroughly analyzed by tracing their tractability frontier. The whole approach has been implemented in a prototype system leveraging a solid constraint programming platform, and results of experimental activity are reported.
Coalitional games model scenarios where rational agents can form coalitions so as to obtain higher worths than by acting in isolation. Once a coalition forms and obtains its worth, the problem of how this worth can be fairly distributed has to be faced. Desirable worth distributions are usually referred to as solution concepts. Recent research pointed out that, while reasoning problems involving such solution concepts are hard in general for games specified in compact form (e.g., graph games), some of them, in particular the core, become tractable when agents come partitioned into a fixed number k of types, i.e., of classes of strategically equivalent players. The paper continues along this line of research, by first showing that two other relevant solution concepts, the kernel and the nucleolus, are tractable in this setting and independently of the specific game encoding, provided worth functions are given as polynomial-time computable oracles. It then analyzes a different setting where games are still k-typed but the actual player partitioning is not known a priori. Within this latter setting, the paper addresses the question of how efficiently strategic equivalence between pairs of players can be recognized, and reconsiders the computational complexity of the core, the kernel, and the nucleolus. All these problems turn out to be intractable, showing that knowledge of the player types marks the boundary of tractability for reasoning about k-typed coalitional games.
Multi-agent systems usually address one of two forms of interaction. One has completely competitive agents that act selfishly, each maximizing its own gain from the interaction. Auctions and voting scenarios usually assume such agents and follow game-theoretic results. The other form of interaction has multiple agents that cooperatively search for some global goal, such as an optimal time slot allocation for all aircraft landing at an airport.
The present paper proposes a paradigm for multiple agents solving a distributed problem using local search algorithms and acting in a partially cooperative manner. That is, agents with different preferences search for a minimal-cost solution to an Asymmetric Distributed Constraint Optimization Problem (ADCOP), while keeping a limited form of self-interest.
Two approaches for using local search in the partially cooperative paradigm are proposed. The first modifies the anytime mechanism introduced by Zivan so that agents can eliminate solutions which do not satisfy their cooperation thresholds. The second proposes a new local search algorithm that explores only valid solutions.
The performance of two innovative algorithms implementing these two approaches is compared with state-of-the-art local search algorithms on three different setups. When personal constraints are strict, the proposed algorithms have a large advantage over existing algorithms. We provide insights into the success of existing algorithms within the anytime framework when constraints are loose.
Monte-Carlo Tree Search, and specifically the variants of the UCT algorithm, have been a breakthrough in AI for the board game Go. However, UCT has had limited applicability to other domains. We study the limitations of some of the existing variants of UCT in a small-scale Markov decision process (MDP), and propose new variants that can reduce those limitations. Our experiments show great improvements in performance over traditional UCT and performance comparable to an established reinforcement learning algorithm, thus opening possibilities for applying UCT in other problem domains.
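For concreteness, the UCB1 selection rule at the core of UCT, which is the part the studied variants modify, can be sketched as:

```python
import math

def uct_select(parent_visits, child_visits, child_value_sums, c=math.sqrt(2)):
    """UCB1 child selection: mean value plus an exploration bonus that
    shrinks as a child is visited more often; unvisited children are
    tried first. Returns the index of the selected child."""
    def score(i):
        n = child_visits[i]
        if n == 0:
            return float('inf')  # force at least one visit per child
        mean = child_value_sums[i] / n
        return mean + c * math.sqrt(math.log(parent_visits) / n)
    return max(range(len(child_visits)), key=score)
```

The exploration constant c trades off exploitation of the current best child against exploration of under-sampled ones; tuning it (and the value backup) is where many UCT variants differ.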
We study branching-time temporal description logics (TDLs) based on the DLs ALC and EL and the temporal logics CTL and CTL*. The main contributions are algorithms for satisfiability that are more direct than existing approaches, and (mostly) tight elementary complexity bounds that range from PTIME to 2EXPTIME and 3EXPTIME. A careful use of tree automata techniques allows us to obtain transparent and uniform algorithms, avoiding a direct treatment of the intricacies of CTL*.
Previous work on voter control, which refers to situations where a chair seeks to change the outcome of an election by deleting, adding, or partitioning voters, takes for granted that the chair knows all the voters' preferences and that all votes are cast simultaneously. However, elections are often held sequentially and the chair thus knows only the previously cast votes and not the future ones, yet needs to decide instantaneously which control action to take. We introduce a framework that models online voter control in sequential elections. We show that the related problems can be much harder than in the standard (non-online) case: For certain election systems, even with efficient winner problems, online control by deleting, adding, or partitioning voters is PSPACE-complete, even if there are only two candidates. In addition, we obtain completeness for coNP in the deleting/adding cases with a bounded deletion/addition limit, and for NP in the partition cases with only one candidate. Finally, we show that for plurality, online control by deleting or adding voters is in P, and for partitioning voters is coNP-hard.
In recent years, domain-independent planning has been applied to a growing number of real-world applications. Usually, the description language of choice is PDDL. However, PDDL is not suited to model all challenges imposed by real-world applications. Dornhege et al. proposed semantic attachments to allow the computation of Boolean fluents by external processes, called modules, during planning. To acquire state information from the planning system, a module developer must perform manual requests through a callback interface, which is both inefficient and error-prone. In this paper, we present the Object-oriented Planning Language OPL, which incorporates the structure and advantages of modern object-oriented programming languages. We demonstrate how a domain-specific module interface that allows direct access to the planner state via object member functions is automatically generated from an OPL planning task. The generated domain-specific interface allows for a safer and less error-prone implementation of modules. We show experimentally that this interface is more efficient than the PDDL-based module interface of TFD/M.
The Ranking by Pairwise Comparison algorithm (RPC) is a well-established label ranking method. However, its complexity is O(N²) in the number N of labels. We present algorithms for selecting, before model construction, a subset of comparators of size O(N), reducing computational complexity without loss in accuracy.
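RPC's aggregation step, which any selected comparator subset feeds into, can be sketched as soft weighted voting (toy comparators in place of trained pairwise models):

```python
def rpc_rank(x, comparators, labels):
    """Ranking by Pairwise Comparison: every trained comparator for a
    label pair (i, j) outputs P(i preferred to j on instance x); labels
    are ranked by their accumulated (soft) votes. With the full set of
    comparators this loop is the source of the O(N^2) cost."""
    votes = {label: 0.0 for label in labels}
    for (i, j), model in comparators.items():
        p = model(x)          # probability that label i beats label j
        votes[i] += p
        votes[j] += 1.0 - p
    return sorted(labels, key=votes.get, reverse=True)
```

Selecting only O(N) of the N(N-1)/2 pairs before training shrinks both the model-construction cost and this aggregation loop.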