General Game Playing (GGP), a research field aimed at developing agents that master different games in a unified way, is regarded as a necessary step towards creating artificial general intelligence. Following the success of deep reinforcement learning (DRL) in games such as Go, chess, and shogi, DRL has recently been introduced to GGP and is regarded as a promising technique for achieving its goal. However, existing work uses fully connected neural networks and is thus unable to efficiently exploit the topological structure of game states. In this paper, we propose an approach for applying general-purpose convolutional neural networks to GGP and implement a DRL-based GGP player. Experiments indicate that the resulting player not only outperforms the previous algorithm and the UCT benchmark in a variety of games but also requires less training time.
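To illustrate the core idea the abstract describes, the following is a minimal, hypothetical sketch of encoding a board-like GGP state as a multi-channel image tensor and evaluating it with a small convolutional policy/value network. All names here (encode_state, GGPConvNet, BOARD_SIZE, GGP_CHANNELS) are illustrative assumptions, not the authors' actual implementation.

```python
# Hedged sketch: represent grid-structured GGP state facts as C x H x W
# planes so a CNN can exploit the board's topological structure.
import torch
import torch.nn as nn

BOARD_SIZE = 8        # assumed board dimension for a grid-based game
GGP_CHANNELS = 3      # assumed: one plane per piece type / player marker

def encode_state(cells):
    """Map (x, y, channel) facts (e.g. derived from GDL propositions
    such as (cell 3 4 x)) onto a GGP_CHANNELS x H x W tensor."""
    planes = torch.zeros(GGP_CHANNELS, BOARD_SIZE, BOARD_SIZE)
    for x, y, c in cells:
        planes[c, y, x] = 1.0
    return planes

class GGPConvNet(nn.Module):
    """Small CNN producing a policy over board cells and a scalar value."""
    def __init__(self, channels=GGP_CHANNELS, board=BOARD_SIZE):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Conv2d(channels, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, kernel_size=3, padding=1), nn.ReLU(),
        )
        self.policy = nn.Conv2d(32, 1, kernel_size=1)  # one logit per cell
        self.value = nn.Sequential(
            nn.Flatten(), nn.Linear(32 * board * board, 1), nn.Tanh()
        )

    def forward(self, x):
        h = self.trunk(x)
        return self.policy(h).flatten(1), self.value(h)

# Usage: evaluate a single encoded state (batch of 1).
state = encode_state([(3, 4, 0), (4, 4, 1)]).unsqueeze(0)
net = GGPConvNet()
policy_logits, value = net(state)
```

The point of the sketch is the input representation: by laying state facts out on spatial planes rather than a flat feature vector, the convolutional layers can share weights across board positions, which is the structural advantage the abstract attributes to CNNs over fully connected networks.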