Transient and asymptotic dynamics of reinforcement learning in games
Authors: Luis R. Izquierdo, Segismundo S. Izquierdo, Nicholas M. Gotts, J. Gary Polhill
Affiliations: (a) Universidad de Burgos, Edificio la Milanera, C/ Villadiego s/n, 09001 Burgos, Spain; (b) Department of Industrial Organization, University of Valladolid, 47011 Valladolid, Spain; (c) The Macaulay Institute, Craigiebuckler, Aberdeen AB15 8QH, UK
Abstract: Reinforcement learners tend to repeat actions that led to satisfactory outcomes in the past and to avoid choices that resulted in unsatisfactory experiences. This adaptation mechanism is among the most widespread in nature. In this paper we fully characterize the dynamics of one of the best-known stochastic models of reinforcement learning [Bush, R., Mosteller, F., 1955. Stochastic Models for Learning. John Wiley & Sons, New York] for 2-player 2-strategy games. We also provide some extensions for more general games and for a wider class of learning algorithms. Specifically, it is shown that the transient dynamics of Bush and Mosteller's model can be substantially different from its asymptotic behavior. It is also demonstrated that, in general—and in sharp contrast to other reinforcement learning models in the literature—the asymptotic dynamics of Bush and Mosteller's model cannot be approximated using the continuous-time limit version of its expected motion.
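To give a concrete sense of the model class the abstract refers to, the following is a minimal sketch of a Bush-Mosteller-style probability update for one player. The function name, the renormalization of the non-chosen actions, and the stimulus scale in [-1, 1] are illustrative assumptions here; the exact formulation is given in the 1955 reference and in the paper itself.

```python
def bm_update(probs, chosen, stimulus, learning_rate):
    """Illustrative Bush-Mosteller-style update (a sketch, not the paper's exact model).

    probs:         current action-probability vector (sums to 1)
    chosen:        index of the action just played
    stimulus:      outcome signal in [-1, 1]; positive = satisfactory
    learning_rate: step size in (0, 1]
    """
    p = probs[chosen]
    if stimulus >= 0:
        # satisfactory outcome: move the chosen action's probability toward 1
        new_p = p + learning_rate * stimulus * (1 - p)
    else:
        # unsatisfactory outcome: move it toward 0
        new_p = p + learning_rate * stimulus * p
    # renormalize the remaining actions so the vector still sums to 1
    rest = 1 - p
    n = len(probs)
    if rest > 0:
        scale = (1 - new_p) / rest
        return [new_p if i == chosen else q * scale for i, q in enumerate(probs)]
    # degenerate case: all mass was on the chosen action; split the remainder evenly
    share = (1 - new_p) / (n - 1) if n > 1 else 0.0
    return [new_p if i == chosen else share for i in range(n)]
```

For example, starting from a uniform mixed strategy in a 2-strategy game, a fully satisfactory outcome (`stimulus = 1`) with `learning_rate = 0.5` moves the chosen action's probability from 0.5 to 0.75. Iterating such updates for both players generates the stochastic process whose transient and asymptotic behavior the paper analyzes.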
Keywords: Reinforcement learning; Bush and Mosteller; Learning in games; Stochastic approximation; Slow learning; Distance diminishing
This article is indexed in ScienceDirect and other databases.