RL-augmented Adaptive Model Predictive Control for Bipedal Locomotion over Challenging Terrain

Under review

System architecture

We propose an RL-augmented MPC controller tailored for bipedal locomotion over rough and slippery terrain. Our method parametrizes three key components of single-rigid-body-dynamics-based MPC: the system dynamics, the swing-leg controller, and the gait frequency. A model-free reinforcement learning policy learns to adapt these parameters, enhancing the robustness and versatility of the MPC controller.
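To make the parametrization concrete, the sketch below shows one plausible way a policy action could be split across the three components. It is only an illustration under assumed names and dimensions (dynamics_residual, foot_placement_offset, gait_frequency), not the paper's actual interface.

```python
from dataclasses import dataclass
import numpy as np

# Minimal sketch (assumed interface): an RL policy outputs a flat action vector
# that parametrizes the three MPC components described above. Names, shapes,
# and ranges are illustrative assumptions.

@dataclass
class MPCParameters:
    dynamics_residual: np.ndarray       # additive correction to the SRBD accelerations
    foot_placement_offset: np.ndarray   # per-leg (x, y) offsets for the swing-leg controller
    gait_frequency: float               # stepping frequency used by the contact scheduler

def split_policy_action(action: np.ndarray) -> MPCParameters:
    """Map a flat policy action vector to the three parameter groups."""
    return MPCParameters(
        dynamics_residual=action[:6],                        # 6-D residual on linear/angular accel
        foot_placement_offset=action[6:10].reshape(2, 2),    # 2 legs x (x, y)
        gait_frequency=float(1.5 + 0.5 * np.tanh(action[10])),  # squash into a safe frequency range
    )
```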

Abstract

Model predictive control (MPC) has demonstrated effectiveness for humanoid bipedal locomotion; however, its applicability in challenging environments, such as rough and slippery terrain, is limited by the difficulty of modeling terrain interactions. In contrast, reinforcement learning (RL) has achieved notable success in training robust locomotion policies over diverse terrain, yet it lacks guarantees of constraint satisfaction and often requires substantial reward shaping. Recent efforts to combine MPC and RL have shown promise in getting the best of both worlds, but they are primarily restricted to flat terrain or quadrupedal robots. In this work, we propose an RL-augmented MPC framework tailored for bipedal locomotion over rough and slippery terrain. Our method parametrizes three key components of single-rigid-body-dynamics-based MPC: the system dynamics, the swing-leg controller, and the gait frequency. We validate our approach through bipedal robot simulations in NVIDIA IsaacLab across various terrains, including stairs, stepping stones, and low-friction surfaces. Experimental results demonstrate that our RL-augmented MPC framework produces significantly more adaptive and robust behaviors than baseline MPC and RL controllers.

Per-terrain evaluation

Side-by-side comparisons of the baseline MPC and the RL-augmented MPC on four terrains:

Slippery surface
Pyramid stairs
Random stairs
Stepping stones

Ablation studies

Comparison of four variants: Baseline MPC, Baseline RL, RL-augmented MPC, and RL-augmented MPC with batched MPC.

Batched MPC

We also present preliminary results for a batched MPC controller. CPU-based optimizers are difficult to parallelize and therefore slow down RL policy training. To address this, we implemented a generic sparse quadratic programming (QP) solver in CUDA, enabling massive parallelization and an 80x speedup in optimization.
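The batching idea can be illustrated with a small sketch: each simulated environment contributes one MPC quadratic program, and all of them are advanced in lockstep with vectorized operations. The NumPy version below uses projected gradient descent on box-constrained QPs purely for illustration; the actual solver is a generic sparse QP solver written in CUDA, and all names and dimensions here are assumptions.

```python
import numpy as np

def batched_box_qp(H, g, lb, ub, iters=200):
    """Solve a batch of box-constrained QPs
        min_x 0.5 x^T H x + g^T x,   lb <= x <= ub
    with projected gradient descent, vectorized over the batch dimension.
    H: (B, n, n) positive definite, g/lb/ub: (B, n)."""
    # Step size from a per-problem Lipschitz estimate (largest eigenvalue of H).
    L = np.linalg.eigvalsh(H)[:, -1]      # (B,)
    alpha = (1.0 / L)[:, None]            # (B, 1)
    x = np.clip(np.zeros_like(g), lb, ub)
    for _ in range(iters):
        grad = np.einsum("bij,bj->bi", H, x) + g
        x = np.clip(x - alpha * grad, lb, ub)
    return x

# Example: 4096 independent 12-dimensional QPs (one per environment) solved together.
rng = np.random.default_rng(0)
A = rng.standard_normal((4096, 12, 12))
H = A @ A.transpose(0, 2, 1) + 1e-2 * np.eye(12)   # symmetric positive definite Hessians
x = batched_box_qp(H, rng.standard_normal((4096, 12)),
                   lb=-np.ones((4096, 12)), ub=np.ones((4096, 12)))
```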