We provide the first anytime algorithm for finding the ε-core in a nontransferable utility coalitional game. For a given set of possible joint actions, our algorithm calculates ε, the maximum utility any agent could gain by deviating from this set of actions. If ε is too high, our algorithm searches for a subset of the joint actions that leads to a smaller ε. Simulations show that our algorithm is more efficient than an exhaustive search by up to two orders of magnitude.
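To make the ε computation concrete, here is a minimal sketch under simplified assumptions: joint actions are tuples indexed by agent, deviations are unilateral, and `utility` is a user-supplied payoff function. All names are hypothetical; the actual algorithm additionally searches over subsets of joint actions to shrink ε.

```python
def epsilon_of(joint_actions, agents, actions, utility):
    """Maximum gain any single agent can obtain by unilaterally
    deviating from some joint action in the candidate set."""
    eps = 0.0
    for ja in joint_actions:                    # ja: tuple of actions
        for i in agents:                        # i indexes into ja
            current = utility(i, ja)
            for a in actions[i]:                # candidate deviations
                deviated = ja[:i] + (a,) + ja[i + 1:]
                eps = max(eps, utility(i, deviated) - current)
    return eps
```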
Debugging consumes a considerable amount of time in software engineering, but it is rarely automated. In this paper, we focus on improving existing fault localization techniques. Spectrum-based fault localization (SFL) and slicing-hitting-set-computation (SHSC) are two techniques based on program execution traces. Both techniques incur only a small computational overhead and help programmers identify possible fault locations faster. However, they have disadvantages: SHSC assigns an undesirably high ranking to statements that are executed in many test cases, such as constructors, while SFL operates at the block level and therefore cannot provide fine-grained results. We combine SHSC with SFL in order to eliminate these disadvantages. Our objective is to improve the ranking of faulty statements so as to allow for better fault localization than either method achieves separately. We show empirically that the resulting approach reduces the number of statements a programmer needs to check manually. In particular, we obtain improvements of about 50% for SHSC and 25% for SFL.
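As an illustration of the spectrum-based half of the pipeline, the sketch below ranks statements with the well-known Ochiai suspiciousness formula on made-up spectra; the paper's contribution is the combination with SHSC, which is not shown here.

```python
import math

def ochiai(executed_failed, executed_passed, total_failed):
    """Suspiciousness of a code element from its execution spectrum."""
    if executed_failed == 0:
        return 0.0
    return executed_failed / math.sqrt(total_failed * (executed_failed + executed_passed))

# spectrum: element -> (times executed in failing runs, in passing runs)
spectra = {"stmt1": (3, 0), "stmt2": (3, 5), "constructor": (3, 9)}
total_failed = 3
ranking = sorted(spectra, key=lambda s: ochiai(*spectra[s], total_failed), reverse=True)
print(ranking)   # most suspicious first
```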
Empirical data from recent work has indicated that SAT-based solvers can outperform native search-based solvers on certain classes of problems in qualitative temporal reasoning, particularly over the Interval Algebra (IA). The present work shows that, for reasoning with IA, SAT strictly dominates search in theoretical power: (1) We present a SAT encoding of IA that simulates the use of tractable subsets in native solvers. (2) We show that the refutation of any inconsistent IA network can always be done by SAT (via our new encoding) as efficiently as by native search. (3) We exhibit a class of IA networks that provably require exponential time to refute by native search, but can be refuted by SAT in polynomial time.
In this paper, we describe a new framework for breaking symmetries in itemset mining problems. Symmetries are permutations of items that leave the transaction database invariant. This kind of structural knowledge induces a partition of the search space into equivalence classes of symmetrical itemsets. Our proposed framework aims to reduce the search space of possible interesting itemsets by detecting and breaking symmetries between items. Firstly, we address symmetry discovery in transaction databases. Secondly, we propose two different approaches to breaking symmetries in a preprocessing step by rewriting the transaction database. This approach can be seen as an original extension of the symmetry breaking framework widely used in propositional satisfiability and constraint satisfaction problems. Finally, we show that Apriori-like algorithms can be enhanced by dynamic symmetry reasoning. Our experiments clearly show that several itemset mining instances taken from the available datasets contain such symmetries. We also provide experimental evidence that breaking these symmetries reduces the size of the output on some families of instances.
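To fix intuitions, here is a small sketch of the invariance property behind symmetry detection: a permutation of items is a symmetry exactly when applying it to every transaction leaves the database unchanged as a multiset. The data and names are illustrative only.

```python
from collections import Counter

def is_symmetry(perm, database):
    """perm: dict mapping each item to its image; database: list of sets."""
    mapped = Counter(frozenset(perm[i] for i in t) for t in database)
    return mapped == Counter(frozenset(t) for t in database)

db = [{"a", "b"}, {"a", "c"}, {"b"}, {"c"}]
swap_bc = {"a": "a", "b": "c", "c": "b"}
print(is_symmetry(swap_bc, db))   # True: b and c are interchangeable here
```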
We consider a combination of the strategic logic ATL with the description logic ALCO. In order to combine the logics in a flexible way, we assume that every individual can (potentially) be an agent. We also take a novel approach to teams by assuming that a coalition has an identity of its own, and hence its membership can vary. In terms of technical results, we show that the combined logic does not have the finite model property, though both ATL and ALCO do. We conjecture that the satisfiability problem may be undecidable. On the other hand, model checking of the combined logic is decidable and even tractable. Finally, we define a particular variant of realizability that combines satisfiability of ALCO with model checking of the ATL dimension, and we show that this new problem is decidable.
In this paper we present the ontology matching system LogMap 2, a much improved version of its predecessor LogMap. LogMap 2 supports user interaction during the matching process, which is essential for use cases requiring very accurate mappings. Interactivity, however, imposes very strict scalability requirements; we satisfy these requirements by providing real-time responses to the user even for large-scale ontologies. Finally, LogMap 2 implements scalable reasoning and diagnosis algorithms, which minimise any logical inconsistencies introduced by the matching process.
In the last decade, AI researchers have pointed out the existence of two types of information: positive information and negative information. This distinction has also been asserted in cognitive psychology. Distinguishing between these two types of information may be useful in both knowledge and preference representation. In the first case, one distinguishes between situations which are not impossible because they are not ruled out by the available knowledge, and situations which are possible for sure. In the second case, one distinguishes between what is not rejected and what is really desired. Moreover, it has been shown that possibility theory is a convenient tool for modelling and distinguishing between these two types of information. Knowledge and preference representation languages have also been extended to cope with this particular kind of information. Nevertheless, despite solid theoretical advances on this topic, the crucial question of which reading (negative or positive) one should adopt remains a real bottleneck. In this paper, we focus on comparative statements. We present a set of postulates describing different situations one may encounter. Then we provide a representation theorem describing which sets of postulates are satisfied by which kind of information (negative or positive), and conversely. One can then decide which reading to apply depending on which postulates she privileges.
Usually, default rules in the form of conditional statements are built on propositional logic, representing classes of individuals by propositional variables, as in “Birds fly, but penguins don't”. Only a few approaches have addressed the problem of giving formal semantics to first-order conditionals that allow (nonmonotonic) inferences both for classes and for individuals. In this paper, we present a semantics for first-order conditionals that is based on ordinal conditional (or ranking) functions, which are well known in the area of propositional default reasoning, and makes use of representative individuals to establish conditional relationships. We generalize the c-representation approach of  for inductive reasoning with first-order conditionals, and evaluate our approach via benchmark examples and a catalogue of general properties.
The “Snake-In-The-Box” problem, first described more than 50 years ago, is a hard combinatorial search problem whose solutions have many practical applications. Until recently, techniques based on Evolutionary Computation have been considered the state of the art for solving this deterministic maximization problem, and have held most of the significant records. This paper reviews the problem and prior solution techniques, then presents a new technique, based on Monte-Carlo Tree Search, that finds significantly better solutions than prior techniques, is considerably faster, and requires no tuning.
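To make the search space concrete, the sketch below checks the defining property of a snake: an induced path in the d-dimensional hypercube, where consecutive nodes differ in exactly one bit and non-consecutive nodes differ in at least two. This illustrates only the problem definition, not the Monte-Carlo Tree Search itself.

```python
def is_snake(path, d):
    """Check that `path` (a list of node labels in [0, 2**d))
    is an induced path in the d-dimensional hypercube."""
    assert all(0 <= v < 2 ** d for v in path)
    for i in range(len(path)):
        for j in range(i + 1, len(path)):
            dist = bin(path[i] ^ path[j]).count("1")   # Hamming distance
            if j == i + 1 and dist != 1:
                return False        # consecutive nodes must share an edge
            if j > i + 1 and dist < 2:
                return False        # chord or repeated node: not induced
    return True

print(is_snake([0b000, 0b001, 0b011, 0b111], 3))       # True: a snake in Q3
```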
We formalise and investigate the following problem. A principal must delegate a number of decisions to a collection of agents. Once the decisions are delegated, the agents to whom they are delegated will act selfishly, rationally, and independently in pursuit of their own preferences. The principal himself is assumed to be self-interested, and has some goal that he desires to be achieved. The delegation problem is then, given such a setting, whether the principal can delegate decisions in such a way that, if all the agents to whom decisions have been delegated make their decisions rationally, the principal's goal will be achieved in equilibrium. We formalise this problem using Boolean games, which provide a very natural framework within which to capture the delegation problem: decisions are directly represented as Boolean variables, which the principal assigns to agents. After motivating and formally defining the delegation problem, we investigate its computational complexity and some issues surrounding it.
We propose a description logic extending SROIQ (the description logic underlying OWL 2 DL) and at the same time encompassing some of the most prominent monotonic and nonmonotonic rule languages, in particular Datalog extended with the answer set semantics. Our proposal can be considered a substantial contribution towards fulfilling the quest for a unifying logic for the Semantic Web. As a case in point, two nonmonotonic extensions of description logics hitherto considered to be of distinct expressiveness are covered by our proposal. In contrast to earlier such proposals, our language has the “look and feel” of a description logic and avoids hybrid or first-order syntaxes.
Many state-of-the-art Algorithm Selection systems use Machine Learning either to predict the run time (or a similar performance measure) of each algorithm in a given set and choose the algorithm with the best predicted performance, or to predict the best algorithm directly. We present a technique, based on the well-established Machine Learning method of stacking, that combines the two approaches into a new hybrid and predicts the best algorithm based on predicted run times. We demonstrate significant performance improvements of up to a factor of six compared to the previous state of the art. Our approach is widely applicable and places no restrictions on the performance measure used, the way it is predicted, or the Machine Learning technique used to predict the best algorithm. We investigate different ways of deriving new Machine Learning features from the predicted performance measures and evaluate their effectiveness in increasing performance further. We use five different regression algorithms for performance prediction on five data sets from the literature and present strong empirical evidence for the effectiveness of our approach.
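The sketch below illustrates the stacking idea on synthetic data: level-1 regressors predict each algorithm's runtime, and a level-2 classifier maps the predicted runtimes to a choice of algorithm. In practice the level-2 features should come from cross-validated predictions to avoid leakage; the models and data here are placeholders, not the paper's setup.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor, RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.random((200, 5))              # instance features (synthetic)
runtimes = rng.random((200, 3))       # observed runtime per algorithm
best = runtimes.argmin(axis=1)        # label: fastest algorithm

# Level 1: one runtime regressor per algorithm.
regs = [RandomForestRegressor(random_state=0).fit(X, runtimes[:, k])
        for k in range(3)]
# Level 2: classifier over predicted runtimes (use out-of-fold
# predictions in a real setting to avoid leakage).
predicted = np.column_stack([r.predict(X) for r in regs])
selector = RandomForestClassifier(random_state=0).fit(predicted, best)

x_new = rng.random((1, 5))
feats = np.column_stack([r.predict(x_new) for r in regs])
print("chosen algorithm:", selector.predict(feats)[0])
```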
Providing convincing explanations to accompany recommendations is a key issue in decision-aiding. In the context of decisions involving multiple criteria, the problem is made very difficult because the decision model itself may involve a complex process. In this paper, we investigate the following issue: when the preferential information provided by the user is incomplete, is there a principled way to define what constitutes a “simple” explanation for a recommended choice? We argue first that explanations may necessitate different levels of detail. Next, we show that even when a detailed explanation is necessary, it is possible to distinguish explanations of different levels of complexity. Our results rely on an original connection we establish between the “mechanics” required to compute supporting coalitions of criteria and the simplicity of the explanation.
Closed world reasoning and circumscription are essential tasks in AI. However, their high computational complexity is a serious obstacle to their practical application. In this work we employ the framework of parameterized complexity theory in order to search for fixed-parameter algorithms. We consider eleven parameters describing different characteristics of the input. For several combinations of these parameters we are able to design efficient fixed-parameter tractable algorithms. All our algorithms have a runtime that is only single-exponential in the parameters and linear in the input size. Furthermore, by providing parameterized hardness results, we show that we have in fact found all tractable fragments involving these eleven parameters. We thereby offer a complete picture of the parameterized complexity of brave closed world reasoning and circumscription.
In many applications, agents must reason about what other agents know, whether to coordinate with them or to come out on top in a competitive situation. However, in general, reasoning in a multiagent epistemic logic such as Kn has high complexity. In this paper, we look at a restricted class of knowledge bases that are sets of modal literals. We call these proper epistemic knowledge bases (PEKBs). We show that after a PEKB has been put into prime implicate normal form (PINF), an efficient database-like query evaluation procedure can be used to check whether an arbitrary query is entailed by the PEKB. The evaluation procedure is always sound and sometimes complete. We also develop a procedure to convert a PEKB into PINF, and we extend our approach to deal with introspection.
Knowledge-based programs (KBPs) are high-level protocols describing the course of action an agent should perform as a function of its knowledge. The use of KBPs for expressing action policies in AI planning has been surprisingly overlooked. Although each KBP corresponds to an equivalent plan and vice versa, KBPs are typically more succinct than standard plans, but require more online computation time. Here we compare KBPs and standard plans with respect to succinctness and the complexity of plan verification.
Filtering by Generalized Arc Consistency (GAC) is a fundamental technique in Constraint Programming. Recent advances in GAC algorithms for extensional constraints rely on direct manipulation of tables during search. Simple Tabular Reduction (STR), which systematically removes invalid tuples from tables, has been shown to be a simple yet efficient approach. STR2, a refinement of STR, is considered to be among the best filtering algorithms for positive table constraints. In this paper, we introduce a new GAC algorithm called STR3 that is specifically designed to enforce GAC during search. STR3 can completely avoid unnecessary traversal of tables, making it optimal along any path of the search tree. Our experiments show that STR3 is much faster than STR2 when the average size of the tables is not reduced drastically during search.
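As background, the sketch below gives a toy rendition of the Simple Tabular Reduction scheme that STR2 and STR3 refine: invalid tuples are removed from the table, and each domain is shrunk to the values still supported by some remaining tuple. It illustrates the basic idea only, not STR3's optimizations.

```python
def str_propagate(table, domains):
    """table: list of tuples; domains: list of sets, one per variable.
    Removes invalid tuples, then shrinks domains to supported values."""
    table[:] = [t for t in table
                if all(v in domains[x] for x, v in enumerate(t))]
    for x in range(len(domains)):
        domains[x] &= {t[x] for t in table}   # keep supported values only
    return table, domains

table = [(0, 1), (1, 1), (1, 2)]
domains = [{0, 1}, {2}]
str_propagate(table, domains)
print(table, domains)    # [(1, 2)] [{1}, {2}]
```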
Finding an appropriate semantics for the task of updating an inconsistent knowledge base is a challenging problem. In this paper, we consider knowledge bases expressed in Description Logics, and focus on ABox inconsistencies, i.e., the case where the TBox is consistent, but the whole knowledge base is not. Our first contribution is the definition of a new semantics for updating an inconsistent Description Logic knowledge base with both the insertion and the deletion of a set of ABox assertions. We then concentrate on the DL-Lite family of Description Logics, and present algorithms for updating a possibly inconsistent knowledge base expressed in the most expressive logic of this family. We show that, by virtue of both the characteristics of our semantics and the limited expressive power of DL-Lite, both insertions and deletions can be done in polynomial time with respect to the size of the ABox.
This paper deals with the implementation of Social Choice Functions in fair multiagent decision problems. In such problems, the determination of the best alternatives often relies on the maximization of a non-utilitarian Social Welfare Function so as to account for equity. However, in such decision processes, agents may have an incentive to misreport their preferences in order to obtain more favorable choices. It is well known that, for Social Choice Functions based on the maximization of an affine aggregator of individual utilities, we can preclude any manipulation by introducing payments (VCG mechanisms). Unfortunately, such truthful mechanisms do not exist for non-affine maximizers (Roberts' Theorem). For this reason, we introduce here a notion of “almost-truthfulness” and investigate the existence of payments enabling the design of almost-truthful mechanisms for non-additive Social Welfare Functions such as the Social Gini Evaluation Functions used in fair optimization.
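For intuition, here is a sketch of one classical Gini social evaluation function: an ordered weighted average that assigns larger weights to the worse-off agents, so that of two profiles with the same total utility, the more equitable one scores higher. The weight scheme shown is a standard choice, not necessarily the one used in the paper.

```python
def gini_swf(utilities):
    """Generalized Gini social evaluation with classical weights
    w_i = (2(n - i) - 1) / n**2, decreasing over the sorted profile."""
    n = len(utilities)
    ordered = sorted(utilities)                  # worst-off agent first
    weights = [(2 * (n - i) - 1) / n ** 2 for i in range(n)]
    return sum(w * u for w, u in zip(weights, ordered))

print(gini_swf([10, 10]))   # 10.0: perfectly equal profile
print(gini_swf([0, 20]))    # 5.0: same total, penalized for inequality
```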
This paper studies the problem of computing aggregation rules in combinatorial domains, where the set of possible alternatives is a Cartesian product of (finite) domain values for each of a given set of variables, and these variables are usually not preferentially independent. We propose a very general heuristic framework, SC*, for computing different aggregation rules, including rules for cardinal preference structures and Condorcet-consistent rules. SC* greatly reduces the search effort and avoids many pairwise comparisons, and thus significantly reduces the running time. Moreover, for aggregation rules over cardinal preferences, SC* is guaranteed to choose the set of winners. For Condorcet-consistent rules, SC* chooses outcomes that are sufficiently close to the winners.
Understanding and developing intelligent agents that simulate human learning has been a long-standing goal in both artificial intelligence and cognitive science. Although learning agents are able to produce intelligent behavior with less human knowledge engineering than in the past, intelligent agent developers are still required to manually encode much prior domain knowledge. We recently proposed an efficient method that acquires representations of the world using an unsupervised grammar induction algorithm, and integrated this representation learner into a simulated student, SimStudent. In this paper, we use the representation learner to automatically generate a set of feature predicates based on the acquired representation, and provide these automatically generated feature predicates to SimStudent as prior domain knowledge. We show that with the automatically generated feature predicates, the learning agent performs at a level comparable to when it is given manually constructed feature predicates, but without the effort required to create them.
We introduce a width parameter that bounds the complexity of classical planning problems and domains, along with a simple but effective blind-search procedure that runs in time exponential in the problem width. We show that many benchmark domains have a bounded and small width provided that goals are restricted to single atoms, and hence that such problems are provably solvable in low polynomial time. We then focus on the practical value of these ideas over the existing benchmarks, which feature conjunctive goals. We show that the blind-search procedure can be used both for serializing the goal into subgoals and for solving the resulting problems, yielding a ‘blind’ planner that competes well with a best-first search planner guided by state-of-the-art heuristics. In addition, ideas like helpful actions and landmarks can be integrated as well, producing a planner with state-of-the-art performance.
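The novelty-pruning idea behind such width-bounded blind search can be sketched in a few lines: a breadth-first search that prunes every state containing no atom seen for the first time (width 1). States here are frozensets of atoms and `succ` is a hypothetical successor function; this is an illustration of the idea, not the paper's planner.

```python
from collections import deque

def iw1(initial, succ, is_goal):
    """Breadth-first search keeping only states with a novel atom."""
    seen_atoms = set(initial)
    queue = deque([initial])
    while queue:
        state = queue.popleft()
        if is_goal(state):
            return state
        for nxt in succ(state):
            novel = set(nxt) - seen_atoms
            if novel:                      # prune non-novel states
                seen_atoms |= novel
                queue.append(nxt)
    return None

# toy domain: a counter that can only be incremented; atoms ("x", k)
start = frozenset({("x", 0)})
succ = lambda s: [frozenset({("x", k + 1)}) for (_, k) in s]
print(iw1(start, succ, lambda s: ("x", 3) in s))
```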
We argue that the problem of adversarial plan recognition, where the observed agent actively tries to avoid detection, should be modeled in a game-theoretic framework. We define the problem as an imperfect-information extensive-form game between the observer and the observed agent. We propose a novel algorithm that approximates the optimal solution of the game using Monte-Carlo sampling. The experimental evaluation is performed on a synthetic domain inspired by a network security problem. The proposed method produces significantly better results than several simple baselines on a domain of practical size.
Dealing with spatial and temporal knowledge is an indispensable part of almost all aspects of human activity. The qualitative approach to spatial and temporal reasoning (QSTR) provides a promising framework for spatial and temporal knowledge representation and reasoning. QSTR typically represents spatial/temporal knowledge in terms of qualitative relations (e.g., to the east of, after), and reasons with this knowledge by solving qualitative constraints. When formulating a qualitative constraint satisfaction problem (CSP), it is usually assumed that each variable could be “here, there and everywhere.” Practical applications such as urban planning, however, often require a variable to take values from a certain finite subset of the universe, i.e., to be “here or there.” This paper extends the classical framework of qualitative constraint satisfaction by allowing variables to take values from finite domains. The computational complexity of this extended consistency problem is examined for five of the most important qualitative calculi, viz. Point Algebra, Interval Algebra, Cardinal Relation Algebra, RCC-5, and RCC-8. We show that the extended consistency problem remains in NP, but that, when only basic constraints are considered, it is already NP-hard for each calculus except Point Algebra.
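The extended consistency problem can be made concrete with a brute-force sketch for Point Algebra over finite domains: each variable ranges over a given finite set of time points, and constraints are basic relations from {<, =, >}. The enumeration is exponential and purely illustrative of the problem statement, not of the paper's algorithms.

```python
from itertools import product

def consistent(domains, constraints):
    """domains: list of lists of time points;
    constraints: dict mapping (i, j) to a basic relation."""
    ops = {"<": lambda a, b: a < b, "=": lambda a, b: a == b,
           ">": lambda a, b: a > b}
    for assignment in product(*domains):       # try every combination
        if all(ops[r](assignment[i], assignment[j])
               for (i, j), r in constraints.items()):
            return True
    return False

print(consistent([[1, 2], [2], [1, 3]],
                 {(0, 1): "<", (1, 2): "<"}))  # True: x0=1, x1=2, x2=3
```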
The advent of the Semantic Web has made the problem of inconsistency management especially relevant. Datalog+/− is a family of ontology languages that is particularly useful for representing and reasoning over lightweight ontologies in the Semantic Web. In this paper, we study different semantics for query answering in inconsistent Datalog+/− ontologies. We develop a general framework for inconsistency management in Datalog+/− ontologies based on incision functions from belief revision, in which we can characterize several query answering semantics as special cases: (i) consistent answers, originally developed for relational databases and recently adopted for some classes of description logics (DLs); (ii) intersection semantics, a sound approximation of consistent answers; and (iii) lazy consistent answers, a novel alternative semantics that offers a good compromise between answer quality and computation time. We also provide complexity results for query answering under the different semantics, including data tractability results.
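To illustrate the difference between the first two semantics, the toy sketch below computes repairs (maximal subsets of the facts consistent with the constraints): a consistent answer must hold in every repair, while the intersection semantics queries only the intersection of all repairs. The facts and the conflict are made up for illustration; Datalog+/− rules and incision functions are not modeled.

```python
from itertools import combinations

def repairs(facts, is_consistent):
    """All maximal subsets of `facts` satisfying `is_consistent`."""
    subs = [frozenset(c) for r in range(len(facts), -1, -1)
            for c in combinations(facts, r) if is_consistent(c)]
    return [s for s in subs if not any(s < t for t in subs)]

facts = ["p(a)", "q(a)", "r(b)"]
clash = lambda c: not {"p(a)", "q(a)"} <= set(c)   # p(a), q(a) conflict
reps = repairs(facts, clash)                        # {p(a),r(b)}, {q(a),r(b)}
inter = frozenset.intersection(*reps)               # {r(b)}

print(all("r(b)" in r for r in reps))               # True: consistent answer
print(all({"p(a)", "q(a)"} & r for r in reps))      # True in every repair...
print(bool({"p(a)", "q(a)"} & inter))               # ...but False here
```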