When playing video games, we immediately detect which entity we control and center our attention on it, focusing the learning problem and reducing its dimensionality. Reinforcement Learning (RL) can handle large state spaces, including states derived from pixel images in Atari games, but learning is slow and relies on a brute-force mapping from the global state to action values (the Q-function); its performance is therefore severely affected by the dimensionality of the state and cannot be transferred to other games, or even to other parts of the same game. We propose transformations of the input state that combine attention and agency-detection mechanisms, which, to our knowledge, have been addressed separately in RL but not together. We propose and benchmark several architectures, including both global and local agent-centered versions of the state, as well as summaries of the surroundings. Results suggest that even a redundant global-local state network can learn faster than the global state alone. Summarized versions of the state look promising for achieving learning that is independent of the input size.
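To make the two kinds of state transformation concrete, here is a minimal sketch, assuming a 2D grid observation and an agent position already obtained from some agency-detection step; the function names agent_centered_crop and surroundings_summary are hypothetical illustrations, not the paper's implementation.

```python
import numpy as np

def agent_centered_crop(global_state, agent_pos, window=5):
    """Local agent-centered state: a fixed (window x window) patch
    of the global observation centered on the controlled entity.
    Zero-padding keeps the crop size constant near the grid border."""
    half = window // 2
    padded = np.pad(global_state, half, mode="constant")
    r, c = agent_pos[0] + half, agent_pos[1] + half
    return padded[r - half : r + half + 1, c - half : c + half + 1]

def surroundings_summary(global_state, cell=4):
    """Coarse fixed-size summary of the surroundings: mean-pool the
    global state into (cell x cell) blocks, so the network input does
    not grow with the raw observation size."""
    h, w = global_state.shape
    gh, gw = h // cell, w // cell
    trimmed = global_state[: gh * cell, : gw * cell]
    return trimmed.reshape(gh, cell, gw, cell).mean(axis=(1, 3))

# Hypothetical usage: combine local and summarized views as network input.
state = np.random.randint(0, 3, size=(16, 16))
local = agent_centered_crop(state, agent_pos=(7, 4), window=5)  # shape (5, 5)
summary = surroundings_summary(state, cell=4)                   # shape (4, 4)
combined = np.concatenate([local.ravel(), summary.ravel()])
```

A redundant global-local architecture, as benchmarked in the paper, would feed both the raw global state and a crop like the one above to the value network; the pooled summary is one plausible way to pursue the input-size independence mentioned in the closing sentence.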