Researchers in China and Hong Kong have developed a new artificial intelligence (AI) learning framework that teaches humanoid robots to stand up quickly from a resting position, regardless of posture or terrain.
While the research has yet to be submitted for peer review, the team released their findings Feb. 12 on GitHub, including a paper uploaded to the arXiv preprint database and a video demonstrating their framework in action.
The video shows a bipedal humanoid rising to stand after lying on its back, sitting against a wall, lying on a sofa and reclining in a chair. The researchers also tested the humanoid robot’s ability to right itself on varying terrains and inclines — including a stone road, a glass slope and while leaning against a tree.
They even attempted to disrupt the robot by hitting or kicking it while it was trying to get up. In every scenario, the robot can be seen adjusting to its environment and successfully standing up.
This remarkable ability to get knocked down and then get up again is thanks to a system called "Humanoid Standing-up Control" (HoST). The scientists achieved this with reinforcement learning, a type of machine learning where the agent (in this case the HoST framework) attempts to perform a task by trial and error. In essence, the robot takes an action, and if that action results in a positive outcome, it is sent a reward signal that encourages it to take that action again the next time it finds itself in a similar state.
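To make the trial-and-error idea concrete, here is a minimal sketch of a reinforcement-learning loop in Python. It uses a toy one-dimensional "world" and tabular Q-learning rather than anything from the HoST paper; the environment, reward and learning parameters are all placeholders chosen for illustration.

```python
import random

class GridEnv:
    """Toy stand-in for the robot's world: move from state 0 to the goal at state 5."""
    def reset(self):
        self.state = 0
        return self.state

    def step(self, action):
        # action is +1 or -1; reaching the goal earns a reward
        self.state = max(0, min(5, self.state + action))
        reward = 1.0 if self.state == 5 else 0.0
        done = self.state == 5
        return self.state, reward, done

# Tabular Q-learning: the agent tries actions and reinforces the ones that
# led to reward, so it becomes more likely to repeat them in similar states.
q = {(s, a): 0.0 for s in range(6) for a in (-1, 1)}
env, alpha, gamma, epsilon = GridEnv(), 0.5, 0.9, 0.1

for episode in range(200):
    state, done = env.reset(), False
    while not done:
        if random.random() < epsilon:
            action = random.choice((-1, 1))                      # explore
        else:
            action = max((-1, 1), key=lambda a: q[(state, a)])   # exploit best-known action
        next_state, reward, done = env.step(action)
        # the reward signal nudges the value of this (state, action) pair upward
        best_next = max(q[(next_state, a)] for a in (-1, 1))
        q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
        state = next_state
```

A real humanoid controller replaces the toy grid with a physics simulation and the table with a neural network, but the feedback loop is the same: act, observe the outcome, and strengthen whatever led to reward.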
Rising to the occasion
The team’s system was a little more complicated than that, using four separate reward groups for more targeted feedback, along with a series of motion constraints including motion smoothing and speed limits to prevent erratic or violent movements. A vertical pull force was also applied during initial training to help direct the early stages of the learning process.
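The sketch below illustrates what that kind of structure can look like in code: several weighted reward terms summed together, simple constraints that smooth and cap the commanded motions, and an assisting vertical force that fades out as training progresses. The specific terms, names and numbers are placeholders for illustration, not the values or code from the paper.

```python
import numpy as np

def total_reward(obs, weights=(1.0, 0.5, 0.25, 0.1)):
    """Combine several reward groups into one signal.
    The grouping and weights here are illustrative placeholders."""
    reached_height = 1.0 if obs["torso_height"] > 0.6 else 0.0     # progress toward standing
    upright_posture = -abs(obs["torso_pitch"])                      # prefer an upright torso
    effort_penalty = -np.sum(obs["joint_torques"] ** 2) * 1e-4      # discourage violent torques
    stay_still = -np.linalg.norm(obs["base_lin_vel"])               # remain stable once upright
    groups = (reached_height, upright_posture, effort_penalty, stay_still)
    return sum(w * g for w, g in zip(weights, groups))

def constrain_action(action, prev_action, max_delta=0.1, max_speed=1.0):
    """Motion constraints: limit how fast commands change between steps
    and cap their magnitude, so the resulting motion stays smooth."""
    smoothed = np.clip(action, prev_action - max_delta, prev_action + max_delta)
    return np.clip(smoothed, -max_speed, max_speed)

def assist_force(progress, max_force=200.0):
    """Vertical pull applied to the torso early in training and annealed
    to zero as training progresses (progress runs from 0 to 1)."""
    return max_force * max(0.0, 1.0 - progress)
```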
The HoST framework was originally trained in simulations using the Isaac Gym simulator, a physics simulation environment developed by Nvidia. Once the framework had been sufficiently trained in simulation, it was deployed on a Unitree G1 humanoid robot for experimental testing, the results of which are demonstrated in the video.
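In rough outline, that sim-to-real workflow amounts to training a small neural-network policy against many simulated robots and then exporting it so the real robot's onboard computer can run it. The sketch below shows the shape of that idea using PyTorch; the network size, observation and action dimensions, and file name are assumptions for illustration, not details from the HoST release.

```python
import torch
import torch.nn as nn

class PolicyNet(nn.Module):
    """Small MLP mapping robot observations to joint commands
    (dimensions here are illustrative placeholders)."""
    def __init__(self, obs_dim=48, act_dim=12):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, 256), nn.ELU(),
            nn.Linear(256, 128), nn.ELU(),
            nn.Linear(128, act_dim),
        )

    def forward(self, obs):
        return self.net(obs)

policy = PolicyNet()

# ... train `policy` against thousands of simulated robots in parallel ...

# Export a self-contained TorchScript module that the robot's onboard
# computer can load and run without any of the training code.
scripted = torch.jit.script(policy)
scripted.save("standing_policy.pt")
```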
“Experimental results with the Unitree G1 humanoid robot demonstrate smooth, stable, and robust standing-up motions in a variety of real-world scenarios,” the scientists wrote in the study. “Looking forward, this work paves the way for integrating standing-up control into existing humanoid systems, with the potential of expanding their real-world applicability.”
Getting up might seem second nature to us humans, but it's something that humanoid robots have struggled to replicate in the past, as you can glean from a montage of robots falling over and being unable to return to an upright position. Teaching a robot to walk or run like a human is one thing, but to be useful in the real world, robots need to be able to handle challenging situations like stumbling, tripping and falling over.