IORA Seminar Series – Kaidi Yang

October 7 @ 10:00 AM - 11:30 AM

Kaidi Yang is an Assistant Professor in the Department of Civil and Environmental Engineering at the National University of Singapore. He aims to develop efficient and trustworthy algorithms for the design and operation of future mobility systems, with a particular focus on advances in vehicular technology (e.g., connected and automated vehicles and electric vehicles) and shared mobility. Before joining NUS, he was a postdoctoral scholar with the Autonomous Systems Lab at Stanford University. He received his Ph.D. from ETH Zurich in 2019, an M.Sc. in Control Science and Engineering from Tsinghua University in 2014, and dual bachelor’s degrees in Automation and Mathematics from Tsinghua University in 2011.

Venue: Innovation 4.0 Building, Level 1, Seminar Room (next to the Level 1 café)

Link to Register (Hybrid Session): https://nus-sg.zoom.us/meeting/register/tZYtfuisrjovHNE8qXjzeBwFwSDutMZlaLbu
Title: Operation of Traditional and Autonomous Mobility-on-Demand
Abstract: The past decade has witnessed the widespread deployment of Mobility-on-Demand (MoD) services, such as the ride-hailing services provided by Uber and Grab. One key operational challenge in MoD services is vehicle imbalance caused by asymmetric transportation demand: vehicles tend to accumulate in some regions while becoming depleted in others, leading to inefficient operation of the MoD system. We aim to employ emerging automated vehicles (AVs) to improve the operation of MoD systems, leveraging their capability to be globally coordinated. In the first part of the talk, we consider the transition period of AV deployment, in which an MoD system operates a mixed fleet of AVs and human-driven vehicles (HVs). In such systems, AVs are centrally coordinated by the operator, while HVs may respond strategically to the coordination of the AVs. We model such a system using a Stackelberg framework in which the MoD operator serves as the leader and the HVs serve as the followers, and we further develop a real-time coordination algorithm for the AVs. In the second part of the talk, we propose a set of reinforcement learning (RL)-based algorithms to improve the efficiency of MoD systems operating a fleet of AVs. We demonstrate that graph neural networks enable RL agents to recover behaviour policies that are significantly more transferable, generalisable, and scalable than policies learned through other approaches. We further improve generalisability by integrating meta-learning, enabling transfer to unseen scenarios (e.g., different cities).
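To make the Stackelberg structure concrete, here is a minimal Python sketch, purely illustrative and not the speaker's model: the operator (leader) allocates a small AV fleet across two regions, and each HV (follower) then joins the region offering the higher expected requests per vehicle; the leader enumerates allocations while anticipating that best response. The demand figures, fleet sizes, and best-response rule are all assumptions made for this example.

import numpy as np

# Toy mixed-fleet Stackelberg sketch (hypothetical numbers and rules,
# not the speaker's model): two regions with fixed per-period demand.
d = np.array([8.0, 4.0])      # requests per period in regions 0 and 1
n_av, n_hv = 6, 6             # AV and HV fleet sizes

def hv_best_response(av_alloc):
    """Followers: each HV joins the region offering the higher expected
    number of requests per vehicle, given the leader's AV allocation."""
    hv = np.zeros(2)
    for _ in range(n_hv):
        per_vehicle = d / (av_alloc + hv + 1.0)  # expected requests/vehicle
        hv[np.argmax(per_vehicle)] += 1
    return hv

def served(av_alloc, hv_alloc):
    """Leader's objective: total requests served across both regions."""
    return np.minimum(d, av_alloc + hv_alloc).sum()

# Leader: enumerate AV allocations, anticipating the HV best response.
best_k = max(range(n_av + 1),
             key=lambda k: served(np.array([k, n_av - k]),
                                  hv_best_response(np.array([k, n_av - k]))))
av = np.array([best_k, n_av - best_k])
print("AV allocation:", av,
      "| HV response:", hv_best_response(av),
      "| requests served:", served(av, hv_best_response(av)))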
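The transferability claim for the GNN-based policies rests on weight sharing: the same message-passing parameters apply to a region graph of any size. The Python sketch below, again illustrative with untrained random weights rather than the actual architecture, computes per-region embeddings via one round of mean-aggregation message passing and maps them to a softmax distribution telling idle vehicles where to reposition.

import numpy as np

rng = np.random.default_rng(0)

# Toy region graph: adjacency A and per-region features x
# (idle vehicles, open requests). All values are illustrative.
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 1],
              [1, 1, 0, 1],
              [0, 1, 1, 0]], dtype=float)
x = rng.normal(size=(4, 2))          # per-region feature vectors
W_self = rng.normal(size=(2, 8))     # weights shared across all regions
W_nbr = rng.normal(size=(2, 8))
w_out = rng.normal(size=8)

def gnn_policy(A, x):
    """One round of mean-aggregation message passing, then a softmax
    over regions: a distribution over where idle vehicles should go."""
    deg = A.sum(axis=1, keepdims=True)               # node degrees, shape (n, 1)
    h = np.tanh(x @ W_self + (A @ x / deg) @ W_nbr)  # per-region embeddings
    scores = h @ w_out
    p = np.exp(scores - scores.max())                # numerically stable softmax
    return p / p.sum()

print("rebalancing distribution:", gnn_policy(A, x))

Because W_self, W_nbr, and w_out are shared across all regions, the same trained parameters could be applied to a larger or different city graph without structural changes, which is the property that underpins the transfer and meta-learning results described above.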