Research on Adaptive Strategies for Walking Control of Intelligent Robots
DOI: https://doi.org/10.62517/jike.202504403
Author(s)
Bao Yue
Affiliation(s)
School of Electronic and Computer Engineering, Chengxian College, Southeast University, Nanjing, Jiangsu, China
Abstract
With the rapid proliferation of intelligent robots in industrial manufacturing, medical assistance, and household services, autonomous walking control in complex environments has become the core bottleneck limiting their large-scale application. Traditional control methods perform well in structured environments but often suffer from insufficient robustness, delayed responses, and degraded stability when confronted with dynamic obstacles, diverse terrains, and simulation-to-reality transfer gaps. To address these challenges, this paper proposes a multi-environment adaptive strategy for walking control of intelligent robots. The approach comprises four core modules: terrain recognition and adaptive modeling based on multi-source perception fusion; whole-body closed-loop control tailored to dynamic environments; a perturbation optimization framework integrating reinforcement learning with symbolic planning; and modular system integration with simulation-to-real verification. Among these, multi-source perception fusion for terrain recognition plays a critical role in enabling robots to adapt to unmodeled terrain variations. Experimental results demonstrate that the proposed method significantly improves the stability, responsiveness, and transfer robustness of robot locomotion in complex scenarios, offering a feasible solution for multi-scene robot deployment and laying a solid foundation for practical applications in the service, medical, and logistics industries.
Keywords
Adaptive Walking Control; Multi-Source Perception Fusion; Whole-Body Control; Reinforcement Learning and Symbolic Planning; Sim-to-Real Transfer
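
The abstract describes a four-module architecture: terrain recognition via multi-source perception fusion, whole-body closed-loop control, a reinforcement-learning-plus-symbolic-planning layer, and modular integration verified from simulation to the real robot. As a purely illustrative reading of that structure, the minimal Python sketch below wires hypothetical placeholders for the four modules into one closed control loop. Every class name, the fusion rule, the gain law, and the gait table are assumptions made for illustration and are not taken from the paper.

    # Hypothetical sketch of the four-module pipeline named in the abstract.
    # All interfaces and numeric rules below are illustrative assumptions,
    # not the paper's actual implementation.
    from dataclasses import dataclass


    @dataclass
    class TerrainEstimate:
        terrain_class: str   # e.g. "flat", "slope", "rough"
        friction: float      # estimated ground friction coefficient
        confidence: float


    class TerrainPerception:
        """Module 1: terrain recognition via multi-source perception fusion (toy rule)."""

        def estimate(self, imu, depth_stats):
            # Toy fusion of a body-pitch cue (IMU) with a depth-map roughness cue.
            roughness = depth_stats["roughness"]
            pitch = abs(imu["pitch"])
            if roughness > 0.5:
                cls = "rough"
            elif pitch > 0.15:
                cls = "slope"
            else:
                cls = "flat"
            return TerrainEstimate(cls, friction=max(0.2, 0.8 - roughness), confidence=0.9)


    class WholeBodyController:
        """Module 2: whole-body closed-loop control adapted to the terrain estimate."""

        def compute_torques(self, state, terrain):
            # Stiffen the nominal gait torques as the estimated friction margin shrinks.
            gain = 1.0 + (0.8 - terrain.friction)
            return [gain * tau for tau in state["nominal_torque"]]


    class HybridPlanner:
        """Module 3: symbolic gait selection that an RL policy would refine online."""

        GAITS = {"flat": "trot", "slope": "crawl", "rough": "careful-step"}

        def plan(self, terrain):
            return self.GAITS[terrain.terrain_class]


    def control_step(perception, planner, controller, sensors, state):
        """Module 4 (integration): one tick of the closed loop, reusable in sim and on hardware."""
        terrain = perception.estimate(sensors["imu"], sensors["depth"])
        gait = planner.plan(terrain)
        torques = controller.compute_torques(state, terrain)
        return gait, torques


    if __name__ == "__main__":
        sensors = {"imu": {"pitch": 0.2}, "depth": {"roughness": 0.3}}
        state = {"nominal_torque": [1.0, 1.2, 0.9]}
        print(control_step(TerrainPerception(), HybridPlanner(), WholeBodyController(), sensors, state))

Keeping the four modules behind narrow interfaces, as in this sketch, is what makes the simulation-to-real step modular: the same control_step loop can be driven by simulated or physical sensor streams without changing the controller or planner code.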