Institutional Repository
Technical University of Crete

Deep reinforcement learning reward function design for autonomous driving in lane-free traffic

Karalakou Athanasia, Troullinos Dimitrios, Chalkiadakis Georgios, Papageorgiou Markos

URI: http://purl.tuc.gr/dl/dias/65B6F1D7-C3C5-4D8C-9AAE-863D5E1AC00D
Identifier: https://doi.org/10.3390/systems11030134
Identifier: https://www.mdpi.com/2079-8954/11/3/134
Language: en
Extent: 28 pages
Title: Deep reinforcement learning reward function design for autonomous driving in lane-free traffic
Creator: Karalakou Athanasia
Creator: Troullinos Dimitrios
Creator: Chalkiadakis Georgios
Creator: Papageorgiou Markos
Publisher: MDPI
Description: The research leading to these results has received funding from the European Research Council under the European Union’s Horizon 2020 Research and Innovation programme / ERC Grant Agreement n. [833915], project TrafficFluid.
Content Summary: Lane-free traffic is a novel research domain in which vehicles no longer adhere to the notion of lanes and instead consider the whole lateral space within the road boundaries. This constitutes an entirely different problem domain for autonomous driving compared to lane-based traffic, as there is no leader vehicle or lane-changing operation. Therefore, the observations of the vehicles need to properly accommodate the lane-free environment without carrying over bias from lane-based approaches. The recent successes of deep reinforcement learning (DRL) for lane-based approaches, along with emerging work for lane-free traffic environments, render DRL for lane-free traffic an interesting endeavor to investigate. In this paper, we provide an extensive look at the DRL formulation, focusing on the reward function of a lane-free autonomous driving agent. Our main interest is designing an effective reward function, as the reward model is crucial in determining the overall efficiency of the resulting policy. Specifically, we construct different components of reward functions tied to the environment at various levels of information. Then, we combine and collate the aforementioned components, and focus on attaining a reward function that results in a policy that manages to both reduce collisions among vehicles and satisfy their requirement of maintaining a desired speed. Additionally, we employ two popular DRL algorithms, namely deep Q-networks (enhanced with some commonly used extensions) and deep deterministic policy gradient (DDPG), with the latter resulting in better policies. Our experiments provide a thorough investigative study on the effectiveness of different combinations among the various reward components we propose, and confirm that our DRL-employing autonomous vehicle is able to gradually learn effective policies in environments with varying levels of difficulty, especially when all of the proposed reward components are properly combined.
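As a rough illustration of the kind of composite reward function the summary refers to, the Python sketch below combines a collision penalty, a desired-speed term, and a clearance term through a weighted sum. The function name, component forms, weights, and arguments (composite_reward, w_collision, w_speed, w_gap, min_gap) are illustrative assumptions, not the formulation used in the paper.

import math

def composite_reward(
    collided: bool,
    speed: float,
    desired_speed: float,
    min_gap: float,
    w_collision: float = 10.0,  # hypothetical weight for the collision penalty
    w_speed: float = 1.0,       # hypothetical weight for the desired-speed term
    w_gap: float = 0.5,         # hypothetical weight for the clearance term
) -> float:
    """Combine simple reward components for a lane-free driving agent (illustrative sketch)."""
    # Penalize collisions heavily.
    r_collision = -w_collision if collided else 0.0
    # Reward tracking the desired speed: 1 at the target, decreasing with deviation.
    r_speed = w_speed * (1.0 - abs(speed - desired_speed) / max(desired_speed, 1e-6))
    # Encourage keeping clearance from surrounding vehicles (saturates for large gaps).
    r_gap = w_gap * math.tanh(min_gap)
    return r_collision + r_speed + r_gap

# Example call with made-up values:
# composite_reward(collided=False, speed=24.0, desired_speed=25.0, min_gap=3.0)

In practice, the relative weights determine the trade-off between collision avoidance and desired-speed maintenance that the summary discusses, so they would need to be tuned per environment.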
Type of Item: Peer-Reviewed Journal Publication
License: http://creativecommons.org/licenses/by/4.0/
Date of Item: 2024-06-28
Date of Publication: 2023
Subject: Deep reinforcement learning
Subject: Lane-free traffic
Subject: Autonomous driving
Bibliographic Citation: A. Karalakou, D. Troullinos, G. Chalkiadakis and M. Papageorgiou, “Deep reinforcement learning reward function design for autonomous driving in lane-free traffic,” Systems, vol. 11, no. 3, Mar. 2023, doi: 10.3390/systems11030134.
