AI Trains Bipedal Robot Cassie to Run and Jump

A bipedal robot named Cassie was trained with reinforcement learning, an AI method, to run 400 meters across varied terrain and to perform standing long jumps and high jumps, all without explicit instructions for each action. Reinforcement learning rewards or penalizes an AI as it works toward a goal. This process allowed Cassie to adapt and react to new situations, a significant improvement over previous robots, which typically froze in unfamiliar scenarios.
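The reward-and-penalty idea can be illustrated with a toy example. The sketch below is not Cassie's controller (which is a neural network trained in simulation); it is a minimal tabular Q-learner on a one-dimensional world, showing how repeated rewards and small penalties shape behavior toward a goal.

```python
import random

# Toy illustration of reward/penalty learning: an agent on positions 0..9
# learns to reach position 9. Reaching the goal earns a reward; every other
# step incurs a small penalty, so the agent learns to move right.

N_STATES = 10          # positions 0..9; the goal is position 9
ACTIONS = [-1, +1]     # step left or step right

def train(episodes=500, alpha=0.5, gamma=0.9, eps=0.2, seed=0):
    rng = random.Random(seed)
    q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
    for _ in range(episodes):
        s = 0
        while s != N_STATES - 1:
            # Explore occasionally; otherwise take the best-known action.
            if rng.random() < eps:
                a = rng.choice(ACTIONS)
            else:
                a = max(ACTIONS, key=lambda x: q[(s, x)])
            s2 = min(max(s + a, 0), N_STATES - 1)
            r = 1.0 if s2 == N_STATES - 1 else -0.01  # reward goal, penalize wandering
            q[(s, a)] += alpha * (r + gamma * max(q[(s2, b)] for b in ACTIONS) - q[(s, a)])
            s = s2
    return q

q = train()
# Greedy action in each non-goal state after training:
policy = [max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(N_STATES - 1)]
```

After training, the greedy policy steps right in every state: the reward signal alone, with no explicit instructions, produces goal-directed behavior.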

Zhongyu Li, the project's lead and a PhD candidate at the University of California, Berkeley, explained, “Our goal was to push the boundaries of robotic agility. We were focused on teaching the robot to perform a range of dynamic motions just like a human does.”

Cassie was trained in a simulated environment, an approach that cuts learning time from years to weeks and allows skills learned in simulation to be deployed directly on the real robot without additional training. The neural network controlling Cassie was first trained on basic tasks such as jumping in place, walking, and running without falling. The robot learned these tasks by being rewarded for imitating demonstrated movements, drawn from a variety of motion-capture data and animations.
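One common way to reward imitation of demonstrated movements is to score how closely the robot's pose tracks a reference pose from motion capture. The sketch below is an illustrative assumption, not the project's actual reward function; the function name, joint representation, and weighting are all hypothetical.

```python
import math

# Hypothetical imitation-style reward: the policy earns more reward the more
# closely its joint angles track a reference frame from motion-capture data.

def imitation_reward(robot_joints, reference_joints, scale=2.0):
    """Exponentially shaped tracking reward in (0, 1]; 1.0 means perfect tracking."""
    err = sum((r - ref) ** 2 for r, ref in zip(robot_joints, reference_joints))
    return math.exp(-scale * err)

# Perfect tracking yields the maximum reward; deviations decay it toward 0.
perfect = imitation_reward([0.1, -0.3, 0.5], [0.1, -0.3, 0.5])
worse = imitation_reward([0.4, -0.1, 0.9], [0.1, -0.3, 0.5])
```

The exponential shaping keeps the reward bounded and smooth, so small tracking errors are penalized gently while large ones drive the reward toward zero.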

Next, the researchers introduced new commands for the robot to execute using its newly acquired skills. As the robot mastered these tasks in the simulated environment, the researchers diversified its training with a method known as task randomization, which varies the commands and conditions so the robot can draw on past experience to handle unexpected scenarios.
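Task randomization can be sketched as sampling a fresh command and perturbed conditions at the start of each training episode, so the policy must generalize rather than memorize one setting. The parameter names and ranges below are illustrative assumptions, not values from the project.

```python
import random

# Illustrative task randomization: each episode draws a new command and
# perturbed environment properties, forcing the policy to generalize.

def sample_task(rng):
    return {
        "target_speed": rng.uniform(0.0, 4.0),     # m/s, running command
        "jump_distance": rng.uniform(0.0, 1.4),    # m, long-jump command
        "ground_friction": rng.uniform(0.4, 1.2),  # randomized terrain property
        "payload_mass": rng.uniform(0.0, 5.0),     # kg, unmodeled disturbance
    }

rng = random.Random(42)
tasks = [sample_task(rng) for _ in range(3)]  # one fresh task per episode
```

Because every episode looks slightly different, the trained policy cannot overfit to a single terrain or command and is more likely to cope with situations it never saw exactly during training.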

Cassie completed a 400-meter run in two minutes and 34 seconds and performed a 1.4-meter standing long jump, all without any additional training.

Alan Fern, a computer science professor at Oregon State University who was involved in developing the Cassie robot, said, “The future of this field lies in humanoid robots that not only do actual work and plan out activities, but also interact with the physical world. The focus will soon shift from simple interactions with the ground to more complex tasks.”

The researchers are now focusing on how this learning method can be used to train robots equipped with onboard cameras. However, performing tasks while processing visual inputs is expected to be a greater challenge.