In oversubscription planning (OSP), not all goals can be achieved. If a global optimization objective is difficult to fix, then an iterative planning process in which users refine their objective based on sample plans is suitable. Recent work has shown that, in such a process, explanations of plan trade-offs based on goal conflicts – minimal unsolvable goal subsets (MUGS) – are useful. A fundamental limitation of this approach is scalability. Computing MUGS is feasible only in relatively small planning instances, and plan generation itself can also be a limiting factor in iterative planning, as users tend to be impatient. Here we address both limitations by restricting the space of plans considered. We assume that an action policy π for the OSP task has been learned. We restrict both plan generation and MUGS analysis to the action sequences within a given radius r around π, so that r controls the trade-off between scalability and the degree of approximation. We instantiate this idea with two different kinds of radii around a policy. We experimentally analyze performance as a function of r, for Action Schema Network policies. The results confirm that our approach can scale up further than prior work, and results on instances small enough to compute MUGS exactly indicate that we obtain informative MUGS even with limited runtime and memory.
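To illustrate the core idea of a radius around a policy, the following minimal sketch checks one simple, hypothetical notion of radius: an action sequence is within radius r of π if it deviates from the policy's recommended action in at most r steps. The names (`pi`, `apply_action`, `within_radius`) and the deterministic-transition assumption are illustrative, not the paper's API or its precise radius definitions.

```python
# Minimal sketch (assumed interface): deviation-count radius around a policy pi.
from typing import Callable, Hashable, Sequence

State = Hashable
Action = str


def within_radius(
    plan: Sequence[Action],
    initial_state: State,
    pi: Callable[[State], Action],            # learned policy: state -> recommended action
    apply_action: Callable[[State, Action], State],  # deterministic transition function
    r: int,
) -> bool:
    """Return True if `plan` differs from the policy's choice in at most r steps."""
    deviations = 0
    state = initial_state
    for action in plan:
        if action != pi(state):
            deviations += 1
            if deviations > r:
                return False
        state = apply_action(state, action)
    return True
```

Under this reading, plan generation and MUGS analysis would only consider action sequences for which such a check succeeds, with r = 0 collapsing the plan space to the policy's own trajectory and larger r gradually re-admitting alternative plans.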