This work presents anticipatory mechanisms in an agent architecture, modeling affective behaviours as effects of surprise. Through a discussion of experiments, the advantages of becoming cautious are presented, and it is shown how this approach increases not only opportunism and reactivity but also anticipatory capabilities for planning and reasoning. The outcomes of cautious agents are analyzed in risky environments, on the basis of both the features of the environment and the internal model of caution. Expectation-driven, caution-enabled agents are designed within the BDI paradigm.
The application of Artificial Intelligence technology to the field of music has always been fascinating, from the first attempts at automating human problem-solving behavior to this day. Human activities related to music vary in their complexity and in their amenability to automation, and for both musicians and AI researchers various questions arise intuitively, e.g.: Which music-related activities or tasks can be automated? How are they related to each other? Which problem-solving methods have proven effective? Where does AI technology contribute?
To date, the literature at the intersection of AI and music has focused only on single problem classes and particular tasks, and a comprehensive picture has not been drawn. This paper, which outlines the key ideas of our research in this field, provides a step toward closing this gap: it proposes a taxonomy of problem classes and tasks related to music, along with methods for solving them.
Emotions play a very important role in human behaviour and social interaction. In this paper we present a control architecture which uses emotions in the behaviour selection process of autonomous and social agents. The state of the agent is determined by its internal state, defined by its dominant motivation, and by its relation to external objects, including other agents. Behaviour selection is learned by the agent using standard and multiagent Q-learning algorithms. The emotions considered are fear, happiness and sadness. These emotions play different roles in the architecture: while the learning algorithms use the agent's happiness/sadness as positive/negative reinforcement signals, fear is used both to prevent the agent from choosing dangerous actions and as a motivation.
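The division of labour between emotions described in the abstract can be illustrated with a minimal sketch, assuming a toy two-action task: an illustrative appraisal function maps outcomes to happiness (+1) or sadness (-1) used as reinforcement, while fear vetoes actions whose learned value has become too negative. All names and numeric thresholds here are hypothetical, not the paper's architecture.

```python
import random

random.seed(0)
actions = ["safe", "risky"]
Q = {a: 0.0 for a in actions}   # action values learned from emotional rewards
alpha, epsilon = 0.5, 0.1

def appraise(action):
    """Map an action's outcome to an emotional signal:
    happiness -> +1, sadness -> -1 (illustrative values)."""
    return 1.0 if action == "safe" else -1.0

def feared(action):
    """Fear vetoes actions whose learned value is strongly negative."""
    return Q[action] < -0.5

for _ in range(100):
    allowed = [a for a in actions if not feared(a)] or actions
    a = random.choice(allowed) if random.random() < epsilon else \
        max(allowed, key=lambda x: Q[x])
    r = appraise(a)              # happiness/sadness as reinforcement
    Q[a] += alpha * (r - Q[a])   # one-step (bandit-style) Q update
```

After training, the dangerous action ends up both low-valued and filtered out by the fear check before selection.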
Combined negotiation in multi-agent systems is a hard task, since it involves several levels of difficulty. First, in order to improve their payoff, agents behave strategically while bargaining, since they need to deal with various types of behaviors. Second, agents have to react to the proposals of other agents while searching for the optimal solutions of their negotiation. This paper tackles the problem of winner determination in combined multi-agent negotiations and addresses two fundamental issues. The first contribution of this work is a winner determination algorithm that finds an optimal solution: a combination of several bids selected from a set of bids at one iteration of a combined negotiation. In addition, the problem of dynamically revising these optimal solutions with regard to changes in the bids is considered. Such changes occur in multi-agent negotiation processes that span several iterations and use multi-phased protocols; to date, no other work has addressed this problem. The second contribution of this work is an iterative algorithm for updating the optimal solutions that avoids full and repetitive reapplication of the winner determination algorithm. The results of the experiments carried out with our algorithms confirm the importance and the originality of our approach, which is based on the use of shared and unshared item graphs.
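The core winner determination problem can be sketched by brute force, assuming an illustrative setting where each bid covers a set of items at a price and the buyer seeks a disjoint combination of bids covering all items at minimum cost. The data and objective are hypothetical; the paper's algorithm and its shared/unshared item graphs are far more refined than this exponential enumeration.

```python
from itertools import combinations

# Illustrative bids: (set of items covered, price).
bids = [
    ({"a", "b"}, 5),
    ({"b", "c"}, 4),
    ({"a"}, 2),
    ({"c"}, 3),
]
items = {"a", "b", "c"}

def winner(bids, items):
    """Cheapest disjoint combination of bids covering all items."""
    best, best_cost = None, float("inf")
    for r in range(1, len(bids) + 1):
        for combo in combinations(bids, r):
            covered = set().union(*(b[0] for b in combo))
            # Bids must be disjoint: total item count equals cover size.
            if covered == items and sum(len(b[0]) for b in combo) == len(covered):
                cost = sum(b[1] for b in combo)
                if cost < best_cost:
                    best, best_cost = combo, cost
    return best, best_cost

combo, cost = winner(bids, items)
```

An iterative update scheme, as proposed in the paper, would revise `combo` when a bid changes rather than rerunning this search from scratch.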
A dialogue strategy is the set of rules followed by an agent when choosing a move (act + content) during a dialogue. This paper argues that a strategy is a decision problem in which an agent selects i) among the acts allowed by the protocol, the best option that, according to some strategic beliefs of the agent, will at least satisfy the agent's most important strategic goals, and ii) among different alternatives (e.g. different offers), the best one that, according to some basic beliefs of the agent, will satisfy the agent's functional goals. The paper proposes a formal framework based on argumentation for computing the best move to play at a given step of the dialogue.
Francesco Amigoni, Simone Farè, Michèle Lavagna, Guido Sangiovanni
715 - 716
The possible advantages of employing multiple agents to manage activities on a single space system are largely unexplored. This paper presents the experimental validation of a multiagent scheduler for a low Earth orbit satellite.
Andrea Giovannucci, Jesús Cerquides, Juan Antonio Rodríguez-Aguilar
717 - 718
In this paper we explore whether an auctioneer/buyer may benefit from introducing his transformability relationships (some goods can be transformed into others at a transformation cost) into multi-unit combinatorial reverse auctions. Thus, we quantitatively assess the potential savings the auctioneer/buyer may obtain with respect to combinatorial reverse auctions that do not consider transformability relationships. Furthermore, we empirically identify the market conditions under which it is worthwhile for the auctioneer/buyer to exploit transformability relationships.
Searle represents constitutive norms as count-as conditionals, written as ‘X counts as Y in context C’. Grossi et al. study a class of these conditionals as ‘in context C, X is classified as Y’. In this paper we propose a generalization of this relation among count-as conditionals, classification and context, by defining a class of count-as conditionals as ‘X in context C0 is classified as Y in context C’. We show that if context C0 can be different from context C, then we can represent a larger class of examples, and we have a weaker logic of count-as conditionals.
In social mechanism design, obligation distribution creates individual or contractual obligations that imply a collective obligation. A distinguishing feature with respect to group planning is that the sanction of the collective obligation also has to be distributed, for example by creating sanctions for the individual or contractual obligations. In this paper we address fairness in obligation distribution for more or less powerful agents, in the sense that some agents can perform more or fewer actions than others. Based on this power to perform actions, we characterize a trade-off in negotiation power. On the one hand, more powerful agents may have a disadvantage during the negotiation, as they may be one of the few, or even the only, agents who can see to some of the actions that have to be performed to fulfill the collective obligation. On the other hand, powerful agents may have an advantage in some negotiation protocols, as they have a larger variety of proposals to choose from. Moreover, powerful agents have an advantage because they can choose from a larger set of possible coalitions. We present an ontology and measures to find a fair trade-off between these two forces in social mechanism design.
Souhila Kaci, Leendert van der Torre, Emil Weydert
725 - 726
In this paper we study the fragment of Dung's argumentation theory in which the strict attack relation is acyclic. We show that every attack relation satisfying a particular property can be represented by a symmetric conflict relation and a transitive preference relation in the following way. We define an instance of Dung's abstract argumentation theory, in which 'argument A attacks argument B' is defined as 'argument A conflicts with argument B' and 'argument A is at least as preferred as argument B', where the conflict relation is symmetric and the preference relation is transitive. We show that this new preference-based argumentation theory characterizes the acyclic strict attack relation, in the sense that every attack relation defined as such a combination satisfies the property, and for every attack relation satisfying the property we can find a symmetric conflict relation and a transitive preference relation satisfying the equation.
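The construction described in the abstract, in which an attack is a symmetric conflict combined with a transitive preference, can be made concrete on a toy instance. The arguments, conflicts and preference ranks below are illustrative, not drawn from the paper.

```python
# Illustrative arguments with a symmetric conflict relation and a
# transitive preference given as ranks (lower rank = more preferred).
arguments = ["A", "B", "C"]
conflicts = {("A", "B"), ("B", "A"), ("B", "C"), ("C", "B")}
rank = {"A": 0, "B": 1, "C": 1}

def attacks(x, y):
    """x attacks y iff x conflicts with y and x is at least as
    preferred as y (the preference-based definition above)."""
    return (x, y) in conflicts and rank[x] <= rank[y]
```

Here A attacks B but B cannot attack back (A is strictly preferred), so the strict part of the attack relation is acyclic, while the equally preferred B and C attack each other symmetrically.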
Least-Squares Policy Iteration (LSPI) is an approximate reinforcement learning technique capable of training policies over large, continuous state spaces. Unfortunately, the computational requirements of LSPI scale poorly with the number of system agents. Work has been done to address this problem, such as the Coordinated Reinforcement Learning (CRL) approach of Guestrin et al., but this requires prior information about the learning system, such as knowledge of the interagent dependencies and of the form of the Q-function. We demonstrate a hybrid gradient-ascent/LSPI approach which is capable of using LSPI to efficiently train multi-agent policies. Our approach has computational requirements which scale as O(N), where N is the number of system agents, and it does not have the prior-knowledge requirements of CRL. Finally, we demonstrate our algorithm on a standard multi-agent network control problem.
Maria Amalfi, Katia Lo Presti, Alessandro Provetti, Franco Salvetti
737 - 738
This article describes the design and implementation of a prototype that analyzes and classifies transcripts of interviews collected during an experiment involving patients with lateral brain damage. The patients' utterances are classified as instances of categorization, prediction and explanation (abduction) on the basis of surface linguistic cues. The agreement between our automatic classifier and human annotators is measured; the agreement is statistically significant, showing that the classification can be performed in an automatic fashion.
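Classification by surface linguistic cues, as described above, can be sketched with a minimal rule-based classifier. The cue words below are hypothetical English examples; the actual prototype works on its own cue set over interview transcripts.

```python
# Illustrative surface cues for each utterance category.
CUES = {
    "prediction": ["will", "going to", "probably"],
    "explanation": ["because", "since", "the reason"],
    "categorization": ["is a", "kind of", "type of"],
}

def classify(utterance):
    """Return the first category whose surface cues appear, else None."""
    text = utterance.lower()
    for label, cues in CUES.items():
        if any(cue in text for cue in cues):
            return label
    return None
```

A real system would then compare such automatic labels against human annotations, e.g. with an inter-annotator agreement statistic.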
Qualitative Choice Logic adds to classical propositional logic a new connective, called ordered disjunction, used to express preferences between alternatives. We present an alternative inference relation for the QCL language that overcomes some QCL limitations.
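The standard QCL reading of ordered disjunction ("A if possible, otherwise B") can be illustrated with a toy two-atom semantics: a model satisfies the formula to degree 1 if A holds, to degree 2 if only B holds, and not at all otherwise; preferred models minimize the degree. This sketches the connective itself, not the alternative inference relation proposed in the paper.

```python
from itertools import product

def degree(a, b):
    """Satisfaction degree of the ordered disjunction 'a x> b'."""
    if a:
        return 1
    if b:
        return 2
    return None  # the formula is not satisfied

# Preferred models are the satisfying models of minimal degree.
models = [(a, b) for a, b in product([True, False], repeat=2)
          if degree(a, b) is not None]
best = min(degree(a, b) for a, b in models)
preferred = [(a, b) for a, b in models if degree(a, b) == best]
```

Only the models where A holds survive as preferred, capturing the intuition that B is acceptable solely when A is unattainable.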