Learning Cooperative Max-Pressure Control by Leveraging Downstream Intersections Information for Traffic Signal Control
Paper/Presentation Title | Learning Cooperative Max-Pressure Control by Leveraging Downstream Intersections Information for Traffic Signal Control |
---|---|
Presentation Type | Paper |
Authors | Peng, Yuquan (Author), Li, Lin (Author), Xie, Qing (Author) and Tao, Xiaohui (Author) |
Editors | U, Leong Hou; Spaniol, Marc; Sakurai, Yasushi; Chen, Junying |
Journal or Proceedings Title | Lecture Notes in Computer Science (Book series) |
Journal Citation | Vol. 12859, pp. 399-413 |
Number of Pages | 15 |
Year | 2021 |
Publisher | Springer |
Place of Publication | Switzerland |
ISSN | 0302-9743 (print); 1611-3349 (electronic) |
ISBN | 9783030858988 (print); 9783030858995 (eBook) |
Digital Object Identifier (DOI) | https://doi.org/10.1007/978-3-030-85899-5_29 |
Web Address (URL) of Paper | https://link.springer.com/chapter/10.1007/978-3-030-85899-5_29 |
Conference/Event | 5th International Joint Conference on Asia-Pacific Web and Web-Age Information Management, Part II (APWeb-WAIM 2021) |
Event Details | 23-25 Aug 2021, Guangzhou, China |
Abstract | Traffic signal control problems are critical at urban intersections. Recently, deep reinforcement learning has demonstrated impressive performance in the control of traffic signals. However, the design of its state and reward function is often heuristic, which leads to highly vulnerable performance. To address this problem, some studies introduce transportation theory into deep reinforcement learning to support the design of the reward function, e.g., max-pressure control, which has yielded promising performance. We argue that the constantly changing pressure of an intersection can be better represented by taking downstream neighboring intersections into consideration. In this paper, we propose CMPLight, a deep reinforcement learning traffic signal control approach with a novel cooperative max-pressure-based reward function that leverages the vehicle queue information of neighboring intersections. The approach employs cooperative max-pressure to guide the design of the reward function in deep reinforcement learning. We theoretically prove that it is stabilizing when the average traffic demand is admissible and the traffic flow is stable in the road network. The state of deep reinforcement learning is enhanced by neighboring information, which helps to learn a detailed representation of the traffic environment. Extensive experiments are conducted on synthetic and real-world datasets. The experimental results demonstrate that our approach outperforms traditional heuristic transportation control approaches and state-of-the-art learning-based approaches in terms of the average travel time of all vehicles in the road network. |
Keywords | Deep reinforcement learning; Traffic signal control; Cooperative max-pressure; Downstream information |
ANZSRC Field of Research 2020 | 460207 Modelling and simulation; 460308 Pattern recognition; 460502 Data mining and knowledge discovery |
Public Notes | Files associated with this item cannot be displayed due to copyright restrictions. |
Byline Affiliations | Wuhan University of Technology, China; School of Sciences, University of Southern Queensland |
Institution of Origin | University of Southern Queensland |
https://research.usq.edu.au/item/q6z5x/learning-cooperative-max-pressure-control-by-leveraging-downstream-intersections-information-for-traffic-signal-control