Mobile robot application with hierarchical start position DQN

Date

2022

Journal Title

Journal ISSN

Volume Title

Publisher

Hindawi Limited

Access Rights

info:eu-repo/semantics/openAccess

Abstract

Advances in deep learning have significantly affected reinforcement learning, leading to the emergence of deep reinforcement learning (DRL). DRL does not require a data set and can exceed the performance of human experts, which has led to significant developments in the field of artificial intelligence. However, because a DRL agent must interact with its environment extensively during training, training it directly in a real environment is difficult due to long training times, high cost, and possible material damage. Therefore, most or all of the training of DRL agents for real-world applications is conducted in virtual environments. This study focuses on the problem of a mobile robot planning a path to reach its target in a real-world environment. The Minimalistic Gridworld (MiniGrid) virtual environment was used to train the DRL agent and, to our knowledge, this is the first real-world implementation built on this environment. A DRL algorithm with higher performance than the classical Deep Q-Network (DQN) algorithm was created with the expanded environment. A mobile robot was designed for the real-world application. To match the virtual environment with the real environment, algorithms were created that detect the positions of the mobile robot and the target, as well as the rotation of the mobile robot. As a result, a DRL-based mobile robot was developed that uses only the top view of the environment and can reach its target regardless of its initial position and rotation.
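The abstract describes training a DQN agent in the MiniGrid virtual environment before transferring it to a real robot. As a rough illustration of that training setup (not the authors' code and not their hierarchical start position variant), the sketch below shows a plain DQN loop on a MiniGrid task. It assumes the `gymnasium`, `minigrid`, `numpy`, and `torch` packages; the environment ID, network, and hyperparameters are illustrative only.

```python
# Minimal DQN sketch on a MiniGrid task (illustrative setup, not the paper's algorithm).
import random
from collections import deque

import gymnasium as gym
import minigrid  # noqa: F401  (importing registers the MiniGrid-* environments)
from minigrid.wrappers import ImgObsWrapper
import numpy as np
import torch
import torch.nn as nn

# ImgObsWrapper reduces the dict observation to the 7x7x3 grid encoding.
env = ImgObsWrapper(gym.make("MiniGrid-Empty-8x8-v0"))
n_actions = env.action_space.n

class QNet(nn.Module):
    def __init__(self, n_actions):
        super().__init__()
        self.net = nn.Sequential(
            nn.Flatten(), nn.Linear(7 * 7 * 3, 128), nn.ReLU(),
            nn.Linear(128, n_actions),
        )
    def forward(self, x):
        return self.net(x.float())

q, q_target = QNet(n_actions), QNet(n_actions)
q_target.load_state_dict(q.state_dict())
opt = torch.optim.Adam(q.parameters(), lr=1e-3)
buffer, gamma, eps = deque(maxlen=50_000), 0.99, 0.1

obs, _ = env.reset()
for step in range(10_000):
    # Epsilon-greedy action selection.
    if random.random() < eps:
        action = env.action_space.sample()
    else:
        with torch.no_grad():
            action = int(q(torch.tensor(obs).unsqueeze(0)).argmax())
    next_obs, reward, terminated, truncated, _ = env.step(action)
    buffer.append((obs, action, reward, next_obs, terminated))
    obs = next_obs if not (terminated or truncated) else env.reset()[0]

    # One gradient step on a replayed minibatch.
    if len(buffer) >= 1_000:
        o, a, r, o2, d = zip(*random.sample(buffer, 64))
        o, o2 = torch.tensor(np.stack(o)), torch.tensor(np.stack(o2))
        a, r = torch.tensor(a), torch.tensor(r, dtype=torch.float32)
        d = torch.tensor(d, dtype=torch.float32)
        q_sa = q(o).gather(1, a.unsqueeze(1)).squeeze(1)
        with torch.no_grad():
            target = r + gamma * (1 - d) * q_target(o2).max(1).values
        loss = nn.functional.smooth_l1_loss(q_sa, target)
        opt.zero_grad(); loss.backward(); opt.step()

    # Periodically sync the target network.
    if step % 500 == 0:
        q_target.load_state_dict(q.state_dict())
```

In the paper's setting, a policy trained this way in the virtual grid would then be driven by the detected top-view position and rotation of the real robot; that transfer step is not shown here.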

Description

Keywords

Algorithms, Artificial intelligence, Humans

Source

Computational Intelligence and Neuroscience

WoS Q Value

N/A

Scopus Q Value

N/A

Volume

2022

Issue

Citation

Erkan, E. and Arserim, M. A. (2022). Mobile robot application with hierarchical start position DQN. Computational Intelligence and Neuroscience, 2022, 4115767.