Unsupervised pre-training has demonstrated its potential for accurately constructing world models in visual model-based reinforcement learning (MBRL). However, such MBRL approaches exhibit limited generalization: the resulting models are restricted to the specific task they were trained on and are not easily adapted to other tasks, which limits their practicality in diverse scenarios. In this work, we introduce VMBRL3, an unsupervised pre-training reinforcement learning (RL) framework that improves the generalization ability of visual MBRL. VMBRL3 uses task-agnostic videos to pre-train both the autoencoder and the world model without access to action or reward information. The pre-trained world model can then be fine-tuned for a range of downstream RL tasks, enabling rapid adaptation to diverse environments and facilitating policy learning. We demonstrate that our framework significantly improves generalization across a variety of manipulation and locomotion tasks, and that VMBRL3 doubles sample efficiency and overall performance compared with previous visual MBRL methods.
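To make the two-phase idea concrete, below is a minimal PyTorch sketch of this pre-train/fine-tune pattern. It is not the authors' implementation: the `Encoder`, `LatentDynamics`, `pretrain_step`, and `finetune_step` names, the latent-consistency loss, and all dimensions are illustrative assumptions; only the overall structure (action-free, reward-free pre-training of an encoder and world model, followed by action-conditioned fine-tuning) comes from the abstract.

```python
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Maps image observations to latent vectors (stand-in for the autoencoder)."""
    def __init__(self, latent_dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d((4, 4)), nn.Flatten(),
            nn.Linear(64 * 4 * 4, latent_dim),
        )

    def forward(self, obs):
        return self.net(obs)

class LatentDynamics(nn.Module):
    """Predicts the next latent state. The action input is optional, so the
    same weights can be pre-trained action-free and fine-tuned with actions."""
    def __init__(self, latent_dim=64, action_dim=4):
        super().__init__()
        self.action_proj = nn.Linear(action_dim, latent_dim)
        self.cell = nn.GRUCell(latent_dim, latent_dim)

    def forward(self, z, action=None):
        inp = z if action is None else z + self.action_proj(action)
        return self.cell(inp, z)

def pretrain_step(encoder, dynamics, frames, optimizer):
    """Phase 1: task-agnostic video only -- no actions, no rewards.
    frames: (batch, time, 3, H, W)."""
    z = encoder(frames[:, 0])
    loss = torch.zeros((), device=frames.device)
    for t in range(1, frames.shape[1]):
        z = dynamics(z)                          # action-free latent rollout
        target = encoder(frames[:, t]).detach()  # match the next frame's latent
        loss = loss + (z - target).pow(2).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

def finetune_step(encoder, dynamics, frames, actions, optimizer):
    """Phase 2: downstream-task data with actions (reward/policy heads omitted)."""
    z = encoder(frames[:, 0])
    loss = torch.zeros((), device=frames.device)
    for t in range(1, frames.shape[1]):
        z = dynamics(z, actions[:, t - 1])       # action-conditioned prediction
        target = encoder(frames[:, t]).detach()
        loss = loss + (z - target).pow(2).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

The design point illustrated here is that the dynamics model treats the action as an optional additive input, so weights learned from action-free video transfer directly to the action-conditioned fine-tuning phase; the paper's actual architecture, losses, and training schedule are not given in the abstract and will differ.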