Development of Push-Recovery control system for humanoid robots using deep reinforcement learning

dc.authorid0000-0002-0181-3658en_US
dc.authorid0000-0002-9913-5946en_US
dc.authorid0000-0002-5253-3779en_US
dc.contributor.authorAslan, Emrah
dc.contributor.authorArserim, Muhammet Ali
dc.contributor.authorUçar, Ayşegül
dc.date.accessioned2023-09-21T12:40:08Z
dc.date.available2023-09-21T12:40:08Z
dc.date.issued2023en_US
dc.departmentDicle Üniversitesi, Silvan Meslek Yüksek Okulu, Bilgisayar Teknolojileri Bölümüen_US
dc.description.abstractThis paper focuses on the push-recovery problem of bipedal humanoid robots affected by external forces and pushes. Since they are structurally unstable, balance is the most important problem in humanoid robots. Our purpose is to design and implement a completely independent push-recovery control system that can imitate the actions of a human. For humanoid robots to be able to stay in balance while standing or walking, and to prevent balance disorders that may be caused by external forces, an active balance control has been presented. Push-recovery controllers consist of three strategies: ankle strategy, hip strategy, and step strategy. These strategies are biomechanical responses that people show in cases of balance disorder. In our application, both simulation and real-world tests have been performed. The simulation tests of the study were carried out with 3D models in the Webots environment. Real-world tests were performed on the Robotis-OP2 humanoid robot. The gyroscope, accelerometer and motor data from the sensors in our robot were recorded, and an external pushing force was applied to the robot. The balance of the robot was achieved by using the recorded data and the ankle strategy. To make the robot completely autonomous, Deep Q Network (DQN) and Double Deep Q Network (DDQN) methods from Deep Reinforcement Learning (DRL) algorithms have been applied. The results obtained with the DDQN algorithm were 21.03% more successful than those of the DQN algorithm. The results obtained in the real-environment tests were consistent with the simulation results.en_US
dc.identifier.citationAslan, E., Arserim, M. A. ve Uçar, A. (2023). Development of Push-Recovery control system for humanoid robots using deep reinforcement learning. Ain Shams Engineering Journal, 14(10), 1-11.en_US
dc.identifier.doi10.1016/j.asej.2023.102167
dc.identifier.endpage11en_US
dc.identifier.issn2090-4479
dc.identifier.issue10en_US
dc.identifier.scopus2-s2.0-85147109267
dc.identifier.scopusqualityQ1
dc.identifier.startpage1en_US
dc.identifier.urihttps://www.sciencedirect.com/science/article/pii/S2090447923000564?via%3Dihub
dc.identifier.urihttps://hdl.handle.net/11468/12577
dc.identifier.volume14en_US
dc.identifier.wosWOS:001001724900001
dc.identifier.wosqualityN/A
dc.indekslendigikaynakWeb of Science
dc.indekslendigikaynakScopus
dc.institutionauthorAslan, Emrah
dc.institutionauthorArserim, Muhammet Ali
dc.language.isoenen_US
dc.publisherAin Shams Universityen_US
dc.relation.ispartofAin Shams Engineering Journal
dc.relation.publicationcategoryArticle - International Peer-Reviewed Journal - Institutional Faculty Memberen_US
dc.rightsinfo:eu-repo/semantics/openAccessen_US
dc.subjectDeep Q Network (DQN)en_US
dc.subjectDeep reinforcement learningen_US
dc.subjectDouble Deep Q Network (DDQN)en_US
dc.subjectHumanoid roboten_US
dc.subjectPush-recoveryen_US
dc.subjectRobotis OP2en_US
dc.titleDevelopment of Push-Recovery control system for humanoid robots using deep reinforcement learningen_US
dc.typeArticleen_US

Files

Original bundle
Now showing 1 - 1 of 1
Name:
Development of Push-Recovery control system for humanoid robots using deep reinforcement learning.pdf
Size:
2.87 MB
Format:
Adobe Portable Document Format
Description:
Article File

License bundle
Now showing 1 - 1 of 1
Name:
license.txt
Size:
1.44 KB
Format:
Item-specific license agreed upon to submission
Description: