Optimization of Autonomous Navigation Systems in Unknown Environments Based on Improved Reinforcement Learning Algorithms

Authors

  • Alex Morgan *

    Department of Electrical and Computer Engineering, University of California, Berkeley, CA 94720, USA

Abstract

Autonomous navigation in unknown environments remains a core challenge in intelligent and autonomous control. This study proposes an improved deep reinforcement learning (DRL) algorithm, the Adaptive Reward Shaping Deep Q-Network (ARS-DQN), to enhance the navigation performance of autonomous agents. ARS-DQN optimizes the reward function by integrating environmental exploration progress and collision-avoidance safety, addressing the over-exploration and sparse-reward problems of traditional DRL algorithms. Comparative experiments against DQN, Double DQN, and Dueling DQN are conducted in simulated unknown environments of varying complexity. Results show that ARS-DQN reduces navigation time by 18.3%–25.7% and collision rate by 32.1%–41.5% relative to the baseline algorithms, and that it remains robust under dynamic environmental changes. This research provides a feasible solution for improving the adaptability and reliability of autonomous navigation systems in unknown environments.
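
The abstract does not give the exact form of the shaped reward. A minimal sketch of how an exploration-progress bonus and a collision-safety penalty might be folded into the per-step reward is shown below; the function name, the weights `w_explore` and `w_safety`, and the margin `safe_dist` are illustrative assumptions rather than the authors' published formulation.

```python
def shaped_reward(base_reward, new_cells_visited, total_cells,
                  min_obstacle_dist, safe_dist=0.5,
                  w_explore=0.5, w_safety=0.5):
    """Illustrative adaptive reward shaping for navigation (hypothetical
    weights and distances; not the paper's exact formula)."""
    # Exploration-progress bonus: fraction of the map newly uncovered
    # by the agent on this step.
    explore_bonus = w_explore * (new_cells_visited / max(total_cells, 1))
    # Safety penalty: grows linearly as the agent moves closer than
    # `safe_dist` to the nearest detected obstacle.
    safety_penalty = w_safety * max(0.0, 1.0 - min_obstacle_dist / safe_dist)
    return base_reward + explore_bonus - safety_penalty
```

In a DQN-style training loop, such a shaped value would simply replace the raw environment reward when storing transitions in the replay buffer, leaving the rest of the Q-learning update unchanged.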
