Comparison of push-recovery control methods for Robotics-OP2 using ankle strategy

dc.authoridASLAN, Emrah/0000-0002-0181-3658
dc.contributor.authorAslan, Emrah
dc.contributor.authorArserim, Muhammet Ali
dc.contributor.authorUcar, Aysegul
dc.date.accessioned2025-02-22T14:08:46Z
dc.date.available2025-02-22T14:08:46Z
dc.date.issued2024
dc.departmentDicle Üniversitesien_US
dc.description.abstractThe main purpose of this study is to develop push-recovery controllers for bipedal humanoid robots. Bipedal humanoid robots experience balance problems when subjected to external pushes. This article proposes control methods to solve these balance problems, with the aim of enabling bipedal robots that move like humans to regain a balanced posture after an external push. Humans respond very successfully when an outside push disturbs their balance; this ability is limited in bipedal humanoid robots, mainly because of their complex structures and limited capacities. Push-recovery strategies have been formulated by observing how people react to balance disturbances: the ankle, hip, and step strategies. In this study, the ankle strategy was used, and different control methods were tried with it. Three control techniques were utilized in the applications: the classical PD controller, prediction-based Model Predictive Control (MPC), and the Deep Q Network (DQN) deep reinforcement learning algorithm. The applications were carried out on the Robotis-OP2 robot, and simulation tests were performed in 3D in the Webots simulator. The humanoid robot was tested with the three methods and the results were compared. The Deep Q Network algorithm was determined to give the best results among these methods.en_US
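As an aside for readers unfamiliar with the ankle strategy compared in the abstract: the simplest of the three controllers, PD, applies a corrective ankle torque proportional to the torso's lean angle and angular rate. The sketch below is purely illustrative and is not the authors' code; the gain values `kp` and `kd` are hypothetical placeholders, not parameters from the study.

```python
# Minimal sketch of a PD ankle-strategy controller, assuming the robot
# reports torso pitch (rad) and pitch rate (rad/s) from its IMU.
# Gains kp, kd are hypothetical, not taken from the paper.

def pd_ankle_torque(pitch: float, pitch_rate: float,
                    kp: float = 120.0, kd: float = 8.0) -> float:
    """Return a corrective ankle torque (N*m) opposing the current lean."""
    return -(kp * pitch + kd * pitch_rate)

# Example: a forward push leaves the robot leaning 0.1 rad and still
# rotating forward at 0.5 rad/s; the controller commands a restoring
# (negative, i.e. backward) ankle torque.
torque = pd_ankle_torque(0.1, 0.5)  # -> -16.0 N*m with these gains
```

MPC and DQN replace this fixed feedback law with, respectively, an optimization over predicted future states and a learned state-to-action policy, which is the comparison the study carries out.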
dc.identifier.doi10.17341/gazimmfd.1359434
dc.identifier.issn1300-1884
dc.identifier.issn1304-4915
dc.identifier.issue4en_US
dc.identifier.scopus2-s2.0-85195547828en_US
dc.identifier.scopusqualityQ2en_US
dc.identifier.trdizinid1257620en_US
dc.identifier.urihttps://doi.org/10.17341/gazimmfd.1359434
dc.identifier.urihttps://search.trdizin.gov.tr/tr/yayin/detay/1257620
dc.identifier.urihttps://hdl.handle.net/11468/29631
dc.identifier.volume39en_US
dc.identifier.wosWOS:001272222700002
dc.identifier.wosqualityQ3
dc.indekslendigikaynakWeb of Science
dc.indekslendigikaynakScopus
dc.indekslendigikaynakTR-Dizin
dc.language.isotren_US
dc.publisherGazi Univ, Fac Engineering Architectureen_US
dc.relation.ispartofJournal of the Faculty of Engineering and Architecture of Gazi Universityen_US
dc.relation.publicationcategoryMakale - Uluslararası Hakemli Dergi - Kurum Öğretim Elemanıen_US
dc.rightsinfo:eu-repo/semantics/openAccessen_US
dc.snmzKA_WOS_20250222
dc.subjectPush-Recoveryen_US
dc.subjectRobotis-OP2en_US
dc.subjectDeep Q Networken_US
dc.subjectModel Predictive Controlen_US
dc.subjectPDen_US
dc.titleComparison of push-recovery control methods for Robotics-OP2 using ankle strategyen_US
dc.typeArticleen_US
