DRONAR - Reinforcement Learning for Drone Fleet Control with Communication Quality Awareness
- Scientific leads: MATIGNON Laëtitia, GUERIN-LASSOUS Isabelle
- Project lead: SyCoSMA
- Start date: 2023-01-01
- End date: 2024-12-31
This project, in collaboration with LIP (Hownet team), focuses on the quality of radio communications in drone fleets, which has a major impact on the overall flight performance of the fleet.
UAV-based wireless networks can be deployed to provide network coverage to users who have no or poor network connection. Unlike traditional model-based approaches, which require predefined assumptions before UAV deployment, reinforcement learning (RL) offers a promising alternative but requires a realistic simulator for training the proposed strategies. None of the existing open-source simulators combine realistic wireless communications with the ability to train multi-UAV movement strategies with RL in a cluttered environment. In this project, we therefore developed a simulator that improves the modeling of the network access part, i.e., the communications between the UAVs and the users, by integrating signal propagation, physical rate adaptation, and medium access sharing models. We evaluate the performance of a standard independent RL algorithm trained in our simulator across various use case scenarios and compare the results obtained with different learning objectives. The results show that the classical formulation, commonly found in the literature and based on a simplified wireless network model, underperforms in terms of communication quality.
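To illustrate the kind of network access modeling described above, the sketch below combines the three ingredients mentioned: signal propagation (here, log-distance path loss with log-normal shadowing), physical rate adaptation (a toy SNR-threshold table standing in for MCS selection), and medium access sharing (equal airtime among connected users). All parameter values, function names, and the specific models are illustrative assumptions, not the project's actual simulator.

```python
import math
import random

def path_loss_db(distance_m, exponent=2.7, pl0_db=40.0,
                 shadowing_sigma_db=4.0, rng=None):
    """Log-distance path loss with log-normal shadowing.
    All parameters are illustrative, not taken from the DRONAR simulator."""
    rng = rng or random.Random(0)
    shadowing = rng.gauss(0.0, shadowing_sigma_db)
    return pl0_db + 10.0 * exponent * math.log10(max(distance_m, 1.0)) + shadowing

def phy_rate_mbps(snr_db):
    """Toy rate adaptation: map SNR thresholds to a few physical rates,
    mimicking MCS selection in a Wi-Fi-like link."""
    table = [(25.0, 54.0), (18.0, 36.0), (10.0, 18.0), (4.0, 6.0)]
    for threshold, rate in table:
        if snr_db >= threshold:
            return rate
    return 0.0  # SNR too low: link is down

def user_throughputs_mbps(distances_m, tx_power_dbm=20.0, noise_dbm=-90.0):
    """Per-user throughput for one UAV serving several ground users.
    Medium access sharing is modeled as equal airtime among connected users."""
    rng = random.Random(42)  # fixed seed so the example is reproducible
    rates = []
    for d in distances_m:
        snr_db = tx_power_dbm - path_loss_db(d, rng=rng) - noise_dbm
        rates.append(phy_rate_mbps(snr_db))
    n_connected = sum(1 for r in rates if r > 0.0)
    # Each connected user transmits 1/n of the time at its own PHY rate
    return [r / n_connected if r > 0.0 else 0.0 for r in rates]

if __name__ == "__main__":
    # A nearby, a mid-range, and a very distant user
    print(user_throughputs_mbps([50.0, 200.0, 10000.0]))
```

Note how the equal-airtime model couples users: adding a distant, low-rate user reduces every other user's share of the channel, which is exactly the kind of effect a simplified wireless model (e.g., fixed-rate, interference-free links) fails to capture.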