Multi-agent deep reinforcement learning for traffic signal control with Nash Equilibrium
Publication Name
2021 IEEE 23rd International Conference on High Performance Computing and Communications, 7th International Conference on Data Science and Systems, 19th International Conference on Smart City and 7th International Conference on Dependability in Sensor, Cloud and Big Data Systems and Applications, HPCC-DSS-SmartCity-DependSys 2021
Abstract
Traffic signal control is an essential and challenging real-world problem, which aims to alleviate traffic congestion by coordinating vehicles' movements at road intersections. Deep reinforcement learning (DRL) combines deep neural networks (DNNs) with the reinforcement learning framework, and is a promising method for adaptive traffic signal control in complex urban traffic networks. Multi-agent deep reinforcement learning (MARL) has the potential to handle traffic signal control at a large scale. However, current traffic signal control systems in practice still rely heavily on simplified rule-based methods. In this paper, we propose: (1) a MARL algorithm based on Nash Equilibrium and DRL, namely Nash Asynchronous Advantage Actor-Critic (Nash-A3C); (2) an urban simulation environment (SENV) designed to closely approximate real-world scenarios. We apply our method in SENV and outperform benchmark traffic signal control methods by 22.1%, which shows that Nash-A3C is more suitable for large-scale industrial deployment.
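The abstract's core idea is to let neighboring intersection agents coordinate through a Nash equilibrium rather than act greedily. As a minimal sketch of that game-theoretic step (not the paper's Nash-A3C implementation, and with purely illustrative delay values), the following finds the pure-strategy Nash equilibria of a two-intersection phase-selection game by checking best responses:

```python
from itertools import product

# Hypothetical delay (cost) tables for two adjacent intersections; each agent
# picks a phase: 0 = north-south green, 1 = east-west green. The numbers are
# made up for illustration, not taken from the paper.
COST_A = {(0, 0): 4, (0, 1): 1, (1, 0): 3, (1, 1): 5}
COST_B = {(0, 0): 5, (0, 1): 2, (1, 0): 1, (1, 1): 4}

def pure_nash(cost_a, cost_b, actions=(0, 1)):
    """Return all pure-strategy Nash equilibria: joint phase choices where
    neither agent can lower its own delay by unilaterally switching phase."""
    equilibria = []
    for a, b in product(actions, repeat=2):
        best_a = all(cost_a[(a, b)] <= cost_a[(a2, b)] for a2 in actions)
        best_b = all(cost_b[(a, b)] <= cost_b[(a, b2)] for b2 in actions)
        if best_a and best_b:
            equilibria.append((a, b))
    return equilibria

print(pure_nash(COST_A, COST_B))  # → [(0, 1), (1, 0)]
```

In a Nash-A3C-style agent, the cost tables would come from learned critic estimates rather than fixed numbers, and the equilibrium joint action would guide each intersection's phase choice.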
First Page
1435
Last Page
1442
Funding Number
2021JM-344
Link to publisher version (DOI)
http://dx.doi.org/10.1109/HPCC-DSS-SmartCity-DependSys53884.2021.00215