When solving large distributed constraint optimization problems (DCOPs), belief-propagation and other incomplete inference algorithms are natural candidates. However, these methods perform poorly when the factor graph is very loopy (i.e. cyclic), suffering from non-convergence and high communication costs. To improve the performance of the Max-Sum inference algorithm on loopy constraint optimization problems, we take inspiration from the belief-propagation-guided decimation used to solve k-satisfiability problems on sparse random graphs. We introduce the DECIMAXSUM method, parameterized in terms of policies that decide when to trigger decimation, which variables to decimate, and which values to assign to the decimated variables. Our empirical analysis on a classical belief-propagation benchmark indicates that some combinations of these policies outperform state-of-the-art competitors.
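To make the decimation-within-Max-Sum idea concrete, the following is a minimal sketch of such a control loop on a toy binary DCOP. The factor graph, the three policy functions, and all names below are illustrative assumptions for the example, not the paper's DECIMAXSUM policies or implementation: decimation is triggered after every message-passing round, the most "decided" free variable (largest belief margin) is decimated, and it is assigned its belief-maximizing value by clamping with a strong unary utility.

```python
# Illustrative sketch of decimation-guided Max-Sum on a tiny binary DCOP.
# All policies and names here are assumptions made for the example.

variables = ["x1", "x2", "x3"]
domain = [0, 1]
factors = {  # factor name -> (scope, utility table indexed by assignments)
    "f12": (("x1", "x2"), {(a, b): 1.0 if a == b else 0.0 for a in domain for b in domain}),
    "f23": (("x2", "x3"), {(a, b): 1.0 if a != b else 0.0 for a in domain for b in domain}),
    "f13": (("x1", "x3"), {(a, b): 1.0 if a == b else 0.0 for a in domain for b in domain}),
}

def max_sum_beliefs(fixed, iters=20):
    """Synchronous Max-Sum message passing; `fixed` maps decimated
    variables to clamped values (encoded as strong unary utilities)."""
    msg = {}
    for fname, (scope, _) in factors.items():
        for v in scope:
            msg[(v, fname)] = {d: 0.0 for d in domain}
            msg[(fname, v)] = {d: 0.0 for d in domain}
    for _ in range(iters):
        new = {}
        # Variable -> factor: sum of incoming factor messages (plus clamp).
        for fname, (scope, _) in factors.items():
            for v in scope:
                new[(v, fname)] = {
                    d: sum(msg[(g, v)][d] for g, (sc, _) in factors.items()
                           if v in sc and g != fname)
                       + (0.0 if v not in fixed or d == fixed[v] else -1e6)
                    for d in domain}
        # Factor -> variable: maximize over the other variable in the scope.
        for fname, (scope, table) in factors.items():
            for i, v in enumerate(scope):
                other = scope[1 - i]
                new[(fname, v)] = {
                    d: max(table[(d, od) if i == 0 else (od, d)] + new[(other, fname)][od]
                           for od in domain)
                    for d in domain}
        msg = new
    # Beliefs: sum of incoming factor messages per variable (plus clamp).
    return {v: {d: sum(msg[(f, v)][d] for f, (sc, _) in factors.items() if v in sc)
                + (0.0 if v not in fixed or d == fixed[v] else -1e6)
                for d in domain}
            for v in variables}

# Illustrative decimation policies (placeholders, not the paper's).
def should_decimate(step):          # when to trigger decimation
    return True                     # here: after every Max-Sum round

def pick_variable(beliefs, fixed):  # which variable to decimate
    free = [v for v in variables if v not in fixed]
    # Decimate the free variable with the largest belief margin.
    return max(free, key=lambda v: abs(beliefs[v][0] - beliefs[v][1]))

def pick_value(beliefs, v):         # which value to assign
    return max(domain, key=lambda d: beliefs[v][d])

def decimation_guided_max_sum():
    fixed, step = {}, 0
    while len(fixed) < len(variables):
        beliefs = max_sum_beliefs(fixed)
        if should_decimate(step):
            v = pick_variable(beliefs, fixed)
            fixed[v] = pick_value(beliefs, v)
        step += 1
    return fixed

if __name__ == "__main__":
    print(decimation_guided_max_sum())  # e.g. {'x1': 0, 'x3': 0, 'x2': 0} or a symmetric assignment
```

In this sketch, clamping decimated variables with a large negative utility on all non-chosen values plays the role of fixing them, so subsequent Max-Sum rounds condition on the partial assignment; the paper's policy space (when, which variable, which value) corresponds to the three placeholder functions above.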