Emergence of anti-coordination through reinforcement learning in generalized minority games
Authors: Anindya S. Chakrabarti, Diptesh Ghosh
Institutions: 1. Economics Area, Indian Institute of Management, Ahmedabad, 380015, India; 2. Production and Quantitative Methods Area, Indian Institute of Management, Ahmedabad, 380015, India
Abstract:

In this paper we propose adaptive strategies to resolve coordination failures in a prototype generalized minority game model with a multi-agent, multi-choice environment. We illustrate the model with an application to large-scale distributed processing systems with a large number of agents and servers. In our setup, agents are assigned responsibility for completing tasks that require unit time. They request servers to process these tasks, and each server can process only one task at a time. Agents must choose servers independently and simultaneously, and have access only to the outcomes of their own past requests. Coordination failure occurs when more than one agent requests the same server at the same time while other servers remain idle. Since agents act independently, this leads to multiple coordination failures. We propose strategies based on reinforcement learning that minimize such coordination failures. We also prove a null result: a large class of probabilistic strategies that attempt to incorporate information about other agents' strategies asymptotically converges to uniformly random choices over the servers.
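The anti-coordination dynamic the abstract describes can be illustrated with a toy simulation. The sketch below is a hypothetical reinforcement rule (multiplicative reward on a lone success, decay on a clash), not the paper's actual strategy: each agent keeps a preference weight per server, samples a server in proportion to those weights, and a request succeeds only if no other agent chose the same server that round.

```python
import random

def simulate(n_agents=10, n_servers=10, rounds=500, seed=0):
    """Toy anti-coordination game: agents learn to spread over servers.

    Update rule (illustrative, not the paper's scheme):
      - a request that was alone on its server doubles that server's weight;
      - a clashing request halves it, nudging the agent elsewhere.
    Returns the number of failed (clashing) requests per round.
    """
    rng = random.Random(seed)
    weights = [[1.0] * n_servers for _ in range(n_agents)]
    failures_per_round = []
    for _ in range(rounds):
        # Each agent samples a server in proportion to its weights.
        choices = []
        for w in weights:
            r = rng.uniform(0.0, sum(w))
            acc = 0.0
            for server, ws in enumerate(w):
                acc += ws
                if r <= acc:
                    choices.append(server)
                    break
        # Count the load on each server, then reinforce or penalize.
        load = [0] * n_servers
        for c in choices:
            load[c] += 1
        fails = 0
        for agent, c in enumerate(choices):
            if load[c] == 1:            # success: lone request
                weights[agent][c] *= 2.0
            else:                       # clash: coordination failure
                weights[agent][c] *= 0.5
                fails += 1
        failures_per_round.append(fails)
    return failures_per_round

failures = simulate()
print("first 10 rounds:", failures[:10])
print("last 10 rounds: ", failures[-10:])
```

With uniform initial weights the first rounds clash heavily (each of 10 agents collides with probability roughly 1 - (9/10)^9 ≈ 0.61), and the reinforcement pushes agents toward distinct servers over time, which is the emergent anti-coordination the paper studies.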

This article is indexed in SpringerLink and other databases.