MIT Technology Review

How AI taught Cassie the two-legged robot to run and jump

If you’ve watched Boston Dynamics’ slick videos of robots running, jumping, and doing parkour, you may have the impression that robots have learned to be amazingly agile. In fact, these robots are still coded by hand, and would struggle to cope with new obstacles they haven’t encountered before.

However, a new approach to teaching robots to move could help them handle new scenarios through trial and error, just as humans learn and adapt to unpredictable events.

Researchers used an AI technique called reinforcement learning to help a two-legged robot nicknamed Cassie run 400 meters over varying terrain and execute standing long jumps and high jumps, without being explicitly trained on each movement. Reinforcement learning works by rewarding or penalizing an AI as it tries to carry out an objective. In this case, the approach taught the robot to generalize and respond to new scenarios, instead of freezing as its predecessors might have done.
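The reward-and-penalty loop described above can be sketched in miniature. This is a toy example, not the researchers' setup: an agent on a one-dimensional track of positions 0 to 4 learns, purely from a reward for reaching the goal and a small penalty for every extra step, to walk toward the goal. The environment, reward values, and learning parameters are all illustrative.

```python
import random

# Toy reinforcement learning: an agent on a 1-D track learns from
# reward and penalty alone which way to step. States are positions
# 0..4; the goal is position 4.
random.seed(0)

GOAL = 4
ACTIONS = (-1, 1)  # step left or step right
q = {(s, a): 0.0 for s in range(GOAL + 1) for a in ACTIONS}

def step(state, action):
    nxt = min(max(state + action, 0), GOAL)
    reward = 1.0 if nxt == GOAL else -0.1  # reward success, penalize wandering
    return nxt, reward

for _ in range(200):  # training episodes
    state = 0
    while state != GOAL:
        if random.random() < 0.1:  # occasionally explore at random
            action = random.choice(ACTIONS)
        else:  # otherwise act greedily on current value estimates
            action = max(ACTIONS, key=lambda a: q[(state, a)])
        nxt, reward = step(state, action)
        # Q-learning update: nudge the estimate toward reward + discounted future value
        best_next = max(q[(nxt, a)] for a in ACTIONS)
        q[(state, action)] += 0.5 * (reward + 0.9 * best_next - q[(state, action)])
        state = nxt

# The learned greedy policy steps right, toward the goal, from every state.
policy = {s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(GOAL)}
```

The same principle scales up to Cassie's case, where the "state" is the robot's sensed pose and the "actions" are motor commands, with a neural network standing in for the lookup table.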

“We wanted to push the limits of robot agility,” says Zhongyu Li, a PhD student at the University of California, Berkeley, who worked on the project, which has not yet been peer-reviewed. “The high-level goal was to teach the robot to learn how to do all kinds of dynamic motions the way a human does.”

The team used a simulation to train Cassie, an approach that dramatically shortens the time it takes to learn, from years to weeks, and enables the robot to perform those same skills in the real world without further fine-tuning.

First, they trained the neural network that controlled Cassie to master a simple skill from scratch, such as jumping in place, walking forward, or running forward without toppling over. It was taught by being encouraged to mimic motions it was shown, which included motion-capture data collected from a human and animations demonstrating the desired movement.
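"Encouraged to mimic" means the demonstration becomes part of the reward signal: the closer the robot's pose tracks the reference motion at each timestep, the higher the reward. The sketch below shows one common way to shape such a reward; the exponential form and the `scale` parameter are conventional choices for this kind of imitation objective, not the paper's exact formula.

```python
import math

def imitation_reward(robot_pose, reference_pose, scale=2.0):
    """Reward for tracking a reference motion (e.g. mocap joint angles).

    Returns 1.0 for a perfect match and decays smoothly as the
    squared tracking error grows.
    """
    err = sum((r - ref) ** 2 for r, ref in zip(robot_pose, reference_pose))
    return math.exp(-scale * err)

# Exact tracking earns the maximum reward; sloppy tracking earns less.
perfect = imitation_reward([0.1, 0.5], [0.1, 0.5])
sloppy = imitation_reward([0.4, 0.9], [0.1, 0.5])
```

A smooth, always-positive reward like this is easier to learn from than a hard pass/fail signal, since every small improvement in tracking is rewarded.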

After the first stage was complete, the team presented the model with new commands encouraging the robot to perform tasks using its new movement skills. Once it became proficient at performing the new tasks in a simulated environment, they diversified the tasks it had been trained on through a technique called task randomization.
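Task randomization amounts to drawing a fresh command for each simulated episode, so the policy cannot overfit to a single fixed objective. A minimal sketch of the idea follows; the field names, gait options, and parameter ranges here are hypothetical, chosen only to illustrate the sampling pattern.

```python
import random

random.seed(0)

def sample_task():
    """Draw a random task (command) for one simulated training episode."""
    return {
        "gait": random.choice(["walk", "run", "jump"]),      # which skill to use
        "target_speed_mps": random.uniform(0.5, 4.0),        # commanded speed
        "heading_rad": random.uniform(-3.14, 3.14),          # commanded direction
    }

# Each episode, the policy is conditioned on a freshly sampled command,
# e.g. action = policy(observation, task), so it must handle them all.
tasks = [sample_task() for _ in range(1000)]
```

Because the policy sees thousands of distinct commands during training, it learns behavior that transfers to commands it was never shown verbatim.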

This makes the robot far better prepared for unexpected scenarios. For example, the robot was able to maintain a steady running gait while being pulled sideways by a leash. “We allowed the robot to utilize the history of what it’s observed and adapt quickly to the real world,” says Li.

Cassie completed a 400-meter run in two minutes and 34 seconds, then jumped 1.4 meters in the long jump without any additional training.

The researchers are now planning to study how this kind of technique could be used to train robots equipped with onboard cameras. That would be more challenging than performing actions blind, adds Alan Fern, a professor of computer science at Oregon State University who helped develop the Cassie robot but was not involved in this project.

“The next major step for the field is humanoid robots that do real work, plan out activities, and truly interact with the physical world in ways that are not just interactions between feet and the ground,” he says.
