TY - JOUR
T1 - Train in Austria, Race in Montecarlo
T2 - 19th IEEE Annual Consumer Communications and Networking Conference, CCNC 2022
AU - Bosello, Michael
AU - Tse, Rita
AU - Pau, Giovanni
N1 - Publisher Copyright:
© 2022 IEEE.
PY - 2022
Y1 - 2022
N2 - Autonomous vehicles have received great attention in recent years, promising to impact a market worth billions. Nevertheless, the dream of fully autonomous cars has been delayed, with current self-driving systems relying on complex processes coupled with supervised learning techniques. Deep reinforcement learning offers new possibilities for solving complex control tasks like those required by autonomous vehicles, letting the agent learn by interacting with the environment and from its mistakes. Unfortunately, RL is mainly applied in simulated environments, and transferring learning from simulation to the real world is a hard problem. In this paper, we use LIDAR data as the input of a Deep Q-Network on a realistic 1/10-scale car prototype capable of performing training in real time. The robot driver learns how to run on race tracks by exploiting the experience gained through a reward mechanism that allows the agent to learn without human supervision. We provide a comparison of neural networks to find the best one for LIDAR data processing, two approaches to address the sim2real problem, and a detailed analysis of the performance of DQN in time-lap tasks for racing robots.
AB - Autonomous vehicles have received great attention in recent years, promising to impact a market worth billions. Nevertheless, the dream of fully autonomous cars has been delayed, with current self-driving systems relying on complex processes coupled with supervised learning techniques. Deep reinforcement learning offers new possibilities for solving complex control tasks like those required by autonomous vehicles, letting the agent learn by interacting with the environment and from its mistakes. Unfortunately, RL is mainly applied in simulated environments, and transferring learning from simulation to the real world is a hard problem. In this paper, we use LIDAR data as the input of a Deep Q-Network on a realistic 1/10-scale car prototype capable of performing training in real time. The robot driver learns how to run on race tracks by exploiting the experience gained through a reward mechanism that allows the agent to learn without human supervision. We provide a comparison of neural networks to find the best one for LIDAR data processing, two approaches to address the sim2real problem, and a detailed analysis of the performance of DQN in time-lap tasks for racing robots.
KW - Autonomous Driving
KW - F1tenth
KW - LIDAR
KW - Real-Time Systems
KW - Reinforcement Learning
KW - Robotics
UR - http://www.scopus.com/inward/record.url?scp=85135731239&partnerID=8YFLogxK
U2 - 10.1109/CCNC49033.2022.9700730
DO - 10.1109/CCNC49033.2022.9700730
M3 - Conference article
AN - SCOPUS:85135731239
SN - 2331-9860
SP - 290
EP - 298
JO - Proceedings - IEEE Consumer Communications and Networking Conference, CCNC
JF - Proceedings - IEEE Consumer Communications and Networking Conference, CCNC
Y2 - 8 January 2022 through 11 January 2022
ER -