Authors: Aslan, Emrah; Arserim, Muhammet Ali; Ucar, Aysegul
Dates: 2025-02-22; 2024
ISSN: 1300-1884; 1304-4915
DOI: https://doi.org/10.17341/gazimmfd.1359434
URI: https://search.trdizin.gov.tr/tr/yayin/detay/1257620
URI: https://hdl.handle.net/11468/29631

Abstract: The main purpose of this study is to develop push-recovery controllers for bipedal humanoid robots. Bipedal humanoid robots experience balance problems when subjected to external pushes, and this article proposes control methods to solve these balance problems. We aim to ensure that bipedal robots, which behave like humans, can return to a balanced posture after an external push. When people lose their balance because of an outside push, they recover quite successfully; this ability is limited in bipedal humanoid robots, mainly because of their complex structures and limited capacities. Push-recovery strategies modeled on human reactions to balance disturbances exist in the literature: the ankle, hip, and step strategies. In this study, the ankle strategy was used, and different control methods were tried with it. Three control techniques were applied: the classical PD controller, prediction-based Model Predictive Control (MPC), and the Deep Q Network (DQN) deep reinforcement learning algorithm. The applications were carried out on the Robotis-OP2 robot, and simulation tests were performed in 3D in the Webots simulator. The humanoid robot was tested with all three methods and the results were compared. The Deep Q Network algorithm was determined to give the best results among these methods.

Language: tr
Access: info:eu-repo/semantics/openAccess
Keywords: Push-Recovery; Robotis-OP2; Deep Q Network; Model Predictive Control; PD
Title: Comparison of push-recovery control methods for Robotics-OP2 using ankle strategy
Type: Article
394
WOS: WOS:001272222700002
Scopus: 2-s2.0-85195547828
TR Dizin ID: 1257620
DOI: 10.17341/gazimmfd.1359434
Quartile: Q2; Q3
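The ankle strategy named in the abstract regulates balance by applying corrective ankle torque against torso tilt. As a minimal illustration of the PD variant only (the simplest of the three compared controllers), the sketch below applies a PD law to the torso pitch error; the gains, the state representation (pitch angle and angular velocity), and the torque limit are illustrative assumptions, not values from the paper.

```python
def pd_ankle_torque(pitch, pitch_rate, kp=60.0, kd=5.0, tau_max=10.0):
    """Return ankle torque driving torso pitch back to upright (0 rad).

    pitch/pitch_rate: torso pitch angle (rad) and angular velocity (rad/s).
    kp, kd, tau_max: hypothetical gains and actuator limit (assumptions).
    """
    tau = -(kp * pitch + kd * pitch_rate)  # PD law on the pitch error
    # Saturate at the actuator limit: ankle torque is bounded, which is
    # why stronger pushes require the hip or step strategies instead.
    return max(-tau_max, min(tau_max, tau))
```

For a small forward lean the controller returns a proportional restoring torque, while a large push saturates it at the torque limit, reflecting the limited capacity of the ankle strategy noted above.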