University of Wollongong


Multi-agent deep reinforcement learning for traffic signal control with Nash Equilibrium

journal contribution
Posted on 2024-11-17, 13:33, authored by Wei Wei, Qiang Wu, Jianqing Wu, Bo Du, Jun Shen, Tinghong Li
Traffic signal control is an essential and challenging real-world problem, which aims to alleviate traffic congestion by coordinating vehicles' movements at road intersections. Deep reinforcement learning (DRL) combines deep neural networks (DNNs) with a reinforcement learning framework, and is a promising method for adaptive traffic signal control in complex urban traffic networks. Multi-agent deep reinforcement learning (MARL) now has the potential to handle traffic signal control at a large scale. However, current traffic signal control systems still rely heavily on simplified rule-based methods in practice. In this paper, we propose: (1) a MARL algorithm based on Nash Equilibrium and DRL, namely Nash Asynchronous Advantage Actor-Critic (Nash-A3C); and (2) an urban simulation environment (SENV) that is essentially close to real-world scenarios. We apply our method in SENV and obtain better performance than benchmark traffic signal control methods by 22.1%, which shows that Nash-A3C is more suitable for large-scale industrial deployment.
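
As background for the abstract above, the following is a minimal sketch of a single-intersection actor-critic agent of the kind an A3C-style controller trains asynchronously, assuming a fixed-length observation vector (e.g. per-lane queue lengths) and a discrete set of signal phases. The class name, layer sizes, and observation encoding are hypothetical illustrations and are not taken from the Nash-A3C paper.

# Illustrative sketch only: one intersection agent with a shared trunk,
# a policy head over signal phases, and a value head (actor-critic).
# Sizes and names are hypothetical, not from the paper.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ActorCritic(nn.Module):
    def __init__(self, obs_dim: int, num_phases: int, hidden: int = 128):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(obs_dim, hidden), nn.ReLU())
        self.policy_head = nn.Linear(hidden, num_phases)  # logits over phases
        self.value_head = nn.Linear(hidden, 1)            # state-value estimate

    def forward(self, obs: torch.Tensor):
        h = self.trunk(obs)
        return F.softmax(self.policy_head(h), dim=-1), self.value_head(h)

# Usage: sample a signal phase for one intersection from its observation.
agent = ActorCritic(obs_dim=16, num_phases=4)
obs = torch.zeros(16)                       # e.g. queue lengths per lane
probs, value = agent(obs)
phase = torch.multinomial(probs, 1).item()  # index of the chosen phase

In a multi-agent setting, one such agent per intersection would act on local observations, which is the point at which a Nash Equilibrium criterion over neighbouring agents' policies becomes relevant.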

History

Journal title

2021 IEEE 23rd International Conference on High Performance Computing and Communications, 7th International Conference on Data Science and Systems, 19th International Conference on Smart City and 7th International Conference on Dependability in Sensor, Cloud and Big Data Systems and Applications, HPCC-DSS-SmartCity-DependSys 2021

Pagination

1435-1442

Language

English
