Institutional Repository
Technical University of Crete

Deep reinforcement learning reward function design for autonomous driving in lane-free traffic

Karalakou Athanasia, Troullinos Dimitrios, Chalkiadakis Georgios, Papageorgiou Markos

URI: http://purl.tuc.gr/dl/dias/65B6F1D7-C3C5-4D8C-9AAE-863D5E1AC00D
Year 2023
Type of Item Peer-Reviewed Journal Publication
Bibliographic Citation A. Karalakou, D. Troullinos, G. Chalkiadakis and M. Papageorgiou, "Deep reinforcement learning reward function design for autonomous driving in lane-free traffic," Systems, vol. 11, no. 3, Mar. 2023, doi: 10.3390/systems11030134.

Summary

Lane-free traffic is a novel research domain in which vehicles no longer adhere to the notion of lanes and instead consider the whole lateral space within the road boundaries. This constitutes an entirely different problem domain for autonomous driving compared to lane-based traffic, as there is no leader vehicle or lane-changing operation. Consequently, the vehicles' observations need to properly accommodate the lane-free environment without carrying over bias from lane-based approaches. The recent successes of deep reinforcement learning (DRL) for lane-based approaches, along with emerging work for lane-free traffic environments, make DRL for lane-free traffic a promising direction to investigate. In this paper, we take an extensive look at the DRL formulation, focusing on the reward function of a lane-free autonomous driving agent. Our main interest is designing an effective reward function, as the reward model is crucial in determining the overall efficiency of the resulting policy. Specifically, we construct different reward components tied to the environment at various levels of information. We then combine and compare these components, aiming for a reward function that yields a policy that both reduces collisions among vehicles and satisfies their requirement of maintaining a desired speed. Additionally, we employ two popular DRL algorithms: deep Q-networks (enhanced with several commonly used extensions) and deep deterministic policy gradient (DDPG), with the latter resulting in better policies. Our experiments provide a thorough investigation of the effectiveness of different combinations of the proposed reward components, and confirm that our DRL-based autonomous vehicle gradually learns effective policies in environments of varying difficulty, especially when all of the proposed reward components are properly combined.
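As a rough illustration of the kind of reward composition the summary describes, the sketch below combines hypothetical collision, desired-speed, and road-boundary components into a single weighted reward. All component shapes, weights, and observation fields here are illustrative assumptions, not the exact formulation used in the paper.

```python
# Illustrative composite reward for a lane-free driving agent.
# Component shapes and weights are assumptions, not the paper's formulation.

W_SPEED = 1.0       # weight for tracking the desired speed
W_COLLISION = 10.0  # weight for proximity to surrounding vehicles
W_BOUNDARY = 0.5    # weight for staying clear of the road boundaries

def speed_reward(v: float, v_desired: float) -> float:
    """Peaks at 1 when v equals v_desired, decays quadratically away from it."""
    return 1.0 - ((v - v_desired) / v_desired) ** 2

def collision_penalty(min_gap: float, safe_gap: float) -> float:
    """0 when the nearest vehicle is beyond safe_gap, approaches -1 at contact."""
    return -max(0.0, 1.0 - min_gap / safe_gap)

def boundary_penalty(lateral_pos: float, half_width: float, margin: float) -> float:
    """0 away from the edges, grows negative within `margin` meters of a boundary."""
    dist_to_edge = half_width - abs(lateral_pos)
    return -max(0.0, 1.0 - dist_to_edge / margin)

def composite_reward(obs: dict) -> float:
    """Weighted sum of the individual reward components."""
    return (W_SPEED * speed_reward(obs["v"], obs["v_desired"])
            + W_COLLISION * collision_penalty(obs["min_gap"], obs["safe_gap"])
            + W_BOUNDARY * boundary_penalty(obs["y"], obs["half_width"], obs["margin"]))

# Example: slightly below the desired speed, comfortably spaced, mid-road.
obs = {"v": 27.0, "v_desired": 30.0, "min_gap": 12.0, "safe_gap": 8.0,
       "y": 1.5, "half_width": 5.0, "margin": 1.0}
print(composite_reward(obs))  # ~0.99: only the speed term is active here
```

In a composition like this, the collision weight typically dominates the others, so the learned policy prioritizes avoiding other vehicles before optimizing the speed-tracking term, mirroring the collision-versus-desired-speed trade-off the summary highlights.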
