A discussion of a fusion problem in multi-agent systems for time-critical decision making is presented. The focus is on distributed learning for classifying observations of an uncertain environment's state into one of several hypotheses. Special attention is devoted to reinforcement learning in a homogeneous, non-communicating multi-agent system for time-critical decision making. The system considered is one in which a network of agents processes observational data and outputs beliefs to a fusion center module. Belief theory serves as the analytic framework for computing these beliefs and combining them over time and across the set of agents. The agents are modeled using evidential neural networks, whose weights reflect each agent's state of learning. Training of the network is guided by reinforcements received from the environment as decisions are made. Two sequential decision-making mechanisms are discussed: the first is based on a "pignistic ratio test" and the second on a "value of information criterion," which provides for learning utilities. Results are shown for the test case of recognizing naval vessels from FLIR image data.
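The abstract does not spell out how agent beliefs are combined or how the pignistic ratio test triggers a decision. The following is a minimal illustrative sketch, not the paper's implementation: it assumes Dempster's rule of combination for fusing two agents' mass functions, the standard pignistic transformation, and a simple ratio threshold as a stand-in for the sequential test. The vessel class labels, mass values, and threshold are all hypothetical.

```python
# Hypothetical frame of discernment: candidate vessel classes (illustrative labels).
FRAME = ("frigate", "destroyer", "cruiser")

def dempster_combine(m1, m2):
    """Fuse two belief mass functions with Dempster's rule (normalized)."""
    combined = {}
    conflict = 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + ma * mb
            else:
                conflict += ma * mb  # mass assigned to contradictory subsets
    if conflict >= 1.0:
        raise ValueError("Total conflict: masses cannot be combined")
    norm = 1.0 - conflict
    return {s: v / norm for s, v in combined.items()}

def pignistic(m):
    """Pignistic transformation: spread each focal element's mass evenly over its members."""
    bet = {h: 0.0 for h in FRAME}
    for focal, mass in m.items():
        for h in focal:
            bet[h] += mass / len(focal)
    return bet

def pignistic_ratio_decision(m, threshold=4.0):
    """Commit to a class once the ratio of the two largest pignistic
    probabilities exceeds the threshold; otherwise defer and wait for
    more observations (a stand-in for the sequential test)."""
    ranked = sorted(pignistic(m).items(), key=lambda kv: kv[1], reverse=True)
    (best, p1), (_, p2) = ranked[0], ranked[1]
    if p2 > 0 and p1 / p2 >= threshold:
        return best
    return None

if __name__ == "__main__":
    # Two agents' belief outputs over subsets of the frame (illustrative values).
    m_agent1 = {frozenset({"frigate"}): 0.6, frozenset(FRAME): 0.4}
    m_agent2 = {frozenset({"frigate", "destroyer"}): 0.5, frozenset(FRAME): 0.5}
    fused = dempster_combine(m_agent1, m_agent2)
    print("fused pignistic:", pignistic(fused))
    print("decision:", pignistic_ratio_decision(fused, threshold=3.0))
```

In this sketch the decision rule simply thresholds the ratio of the top two pignistic probabilities; the paper's actual sequential mechanisms, including the value-of-information criterion and the reinforcement-driven updating of the evidential network weights, are not reproduced here.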