Additional DQN Extension Methods of Fast Convergence for the Optimal Policy

Authors

  • Young-Man Kwon
  • Gyu-Bong Lee
  • Dong-Keun Chung
  • Myung-Jae Lim

Abstract

To improve the performance of DQN, DeepMind proposed six extensions. In this paper, we propose two additional extensions: the first applies batch normalization to the DQN model, and the second applies atrous (dilated) convolution. We measured the performance of each extension over 30 runs on the Atari game Pong. Because an ANOVA of the results was significant at the 95% confidence level, we performed a post-hoc analysis. Based on the experimental results, we conclude that the proposed extensions can be applied to the vanilla DQN for better performance.
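The two techniques named in the abstract can be illustrated in isolation. Below is a minimal NumPy sketch (the function names, the 1-D convolution setting, and the fixed gamma/beta are our assumptions for illustration, not details from the paper): batch normalization standardizes activations across the batch, and atrous convolution spaces the kernel taps apart to enlarge the receptive field without adding parameters.

```python
import numpy as np

def batch_norm(x, eps=1e-5, gamma=1.0, beta=0.0):
    """Extension 1 (sketch): normalize each feature over the batch axis,
    then rescale with gamma/beta (learnable in practice, fixed here)."""
    mean = x.mean(axis=0)
    var = x.var(axis=0)
    return gamma * (x - mean) / np.sqrt(var + eps) + beta

def atrous_conv1d(x, kernel, rate):
    """Extension 2 (sketch): atrous (dilated) convolution in 1-D, 'valid'
    mode. Kernel taps are spaced `rate` samples apart, so a k-tap kernel
    covers (k - 1) * rate + 1 input samples."""
    k = len(kernel)
    span = (k - 1) * rate + 1          # effective kernel width
    out_len = len(x) - span + 1
    return np.array([
        sum(kernel[j] * x[i + j * rate] for j in range(k))
        for i in range(out_len)
    ])

# A batch of 4 two-feature activation vectors: after batch norm,
# each feature has (approximately) zero mean and unit variance.
a = batch_norm(np.array([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0], [7.0, 8.0]]))
print(np.allclose(a.mean(axis=0), 0.0))   # True

# Dilation rate 2: output i sums x[i], x[i+2], x[i+4].
y = atrous_conv1d(np.arange(8.0), kernel=[1.0, 1.0, 1.0], rate=2)
print(y)  # [ 6.  9. 12. 15.]
```

In the paper's setting these operations would be inserted into the convolutional layers of the DQN network; this sketch only shows the arithmetic each one performs.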

Published

2019-12-12
