Abstract
Autonomous vehicles have received great attention in recent years, promising to impact a market worth billions. Nevertheless, the dream of fully autonomous cars has been delayed, with current self-driving systems relying on complex processes coupled with supervised learning techniques. Deep reinforcement learning offers new possibilities for solving complex control tasks like the ones required by autonomous vehicles: it lets the agent learn by interacting with the environment and from its mistakes. Unfortunately, reinforcement learning (RL) is mainly applied in simulated environments, and transferring learning from simulations to the real world is a hard problem. In this paper, we use LIDAR data as the input to a Deep Q-Network (DQN) on a realistic 1/10-scale car prototype capable of performing training in real time. The robot driver learns how to drive on race tracks by exploiting the experience gained through a reward mechanism that allows the agent to learn without human supervision. We provide a comparison of neural networks to find the best one for LIDAR data processing, two approaches to address the sim2real problem, and a detailed analysis of the performance of DQN in time-lap tasks for racing robots.
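The core idea the abstract describes can be sketched in a few lines: a Q-network maps a vector of LIDAR range readings to one Q-value per discrete steering action, and learning is driven by a one-step Q-learning target built from the reward. The sketch below is a minimal NumPy illustration of that pattern; the layer sizes, the 64-beam scan, and the three-action space (left, straight, right) are illustrative assumptions, not the architecture or action space used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

N_BEAMS = 64      # number of LIDAR range readings per scan (assumed)
N_ACTIONS = 3     # steer left / go straight / steer right (assumed)
GAMMA = 0.99      # discount factor

# Parameters of a tiny two-layer MLP Q-network (illustrative sizes)
W1 = rng.normal(0.0, 0.1, (N_BEAMS, 32))
b1 = np.zeros(32)
W2 = rng.normal(0.0, 0.1, (32, N_ACTIONS))
b2 = np.zeros(N_ACTIONS)

def q_values(scan):
    """Forward pass: LIDAR scan -> one Q-value per steering action."""
    h = np.maximum(scan @ W1 + b1, 0.0)   # ReLU hidden layer
    return h @ W2 + b2

def td_target(reward, next_scan, done):
    """One-step Q-learning target: r + gamma * max_a' Q(s', a')."""
    if done:
        return reward
    return reward + GAMMA * q_values(next_scan).max()

# Example transition through the network
scan = rng.uniform(0.1, 10.0, N_BEAMS)       # current LIDAR scan (metres)
next_scan = rng.uniform(0.1, 10.0, N_BEAMS)  # scan after taking the action
action = int(q_values(scan).argmax())        # greedy action selection
target = td_target(reward=1.0, next_scan=next_scan, done=False)
td_error = target - q_values(scan)[action]   # error a real DQN would minimize
```

In a full DQN, the TD error above would be minimized by gradient descent with an experience replay buffer and a target network; this fragment only shows the state-to-Q-value mapping and the learning target that the reward mechanism feeds.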
Original language | English |
---|---|
Pages (from-to) | 290-298 |
Number of pages | 9 |
Journal | Proceedings - IEEE Consumer Communications and Networking Conference, CCNC |
DOIs | |
Publication status | Published - 2022 |
Event | 19th IEEE Annual Consumer Communications and Networking Conference, CCNC 2022 - Virtual, Online, United States. Duration: 8 Jan 2022 → 11 Jan 2022 |
Keywords
- Autonomous Driving
- F1tenth
- LIDAR
- Real-Time Systems
- Reinforcement Learning
- Robotics