Research output: Contribution to journal › Article › peer-review
Tianhong Dai, Yali Du, Meng Fang, Anil Anthony Bharath
| Original language | English |
| --- | --- |
| Pages (from-to) | 396-406 |
| Number of pages | 11 |
| Journal | NEUROCOMPUTING |
| Volume | 468 |
| DOIs | |
| Published | 11 Jan 2022 |
In many real-world problems, reward signals received by agents are delayed or sparse, which makes it challenging to train a reinforcement learning (RL) agent. An intrinsic reward signal can help an agent explore such environments in the quest for novel states. In this work, we propose a general end-to-end diversity-augmented intrinsic motivation for deep reinforcement learning that encourages the agent to explore new states and automatically provides denser rewards. Specifically, we measure the diversity of adjacent states under a model of state sequences based on a determinantal point process (DPP); this is coupled with a straight-through gradient estimator to enable end-to-end differentiability. The proposed approach is comprehensively evaluated on MuJoCo and the Arcade Learning Environment (Atari and SuperMarioBros). The experiments show that an intrinsic reward based on the diversity measure derived from the DPP model accelerates the early stages of training in Atari games and SuperMarioBros. In MuJoCo, the approach improves on prior techniques under the standard reward setting, and achieves state-of-the-art performance on 12 of 15 tasks with delayed rewards.
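The record does not reproduce the paper's equations, but the core idea of a DPP-based diversity bonus can be sketched roughly as follows. This is a minimal illustration under assumed details: the function name `dpp_diversity_reward`, the cosine-similarity kernel, and the window size are hypothetical choices, not the authors' exact formulation, and the paper's end-to-end version additionally propagates gradients through a straight-through estimator.

```python
import numpy as np

def dpp_diversity_reward(states: np.ndarray, eps: float = 1e-6) -> float:
    """Sketch of a DPP-style intrinsic reward over a window of adjacent
    state embeddings. `states` has shape (T, d): T consecutive states,
    each embedded as a d-dimensional feature vector."""
    # L2-normalise each embedding so kernel entries are cosine similarities.
    phi = states / (np.linalg.norm(states, axis=1, keepdims=True) + eps)
    # DPP kernel: Gram matrix of the embeddings. Mutually similar states
    # make L nearly singular; diverse states make det(L) large.
    L = phi @ phi.T
    # Use the log-determinant as the diversity score; a small ridge term
    # keeps the computation numerically stable.
    sign, logdet = np.linalg.slogdet(L + eps * np.eye(len(L)))
    return float(logdet)

# Usage: score a window of 5 consecutive 8-dimensional state embeddings.
window = np.random.randn(5, 8)
print(dpp_diversity_reward(window))
```

The log-determinant equals the log-volume spanned by the embedded states, so the bonus grows when adjacent states are mutually dissimilar, which is the sense in which the DPP measure rewards visiting novel states.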