Intel AI Researchers Combine Reinforcement Learning Methods To Teach 3D Humanoid How To Walk

Researchers from Intel’s AI Lab and the Collaborative Robotics and Intelligent Systems Institute at Oregon State University have combined several methods to build better-performing reinforcement learning systems that can be applied to tasks such as robotic control, autonomous vehicle systems, and other complex AI problems.

Collaborative Evolutionary Reinforcement Learning (CERL) achieves better performance on OpenAI Gym benchmarks such as Humanoid, Hopper, Swimmer, HalfCheetah, and Walker2D than either gradient-based or evolutionary reinforcement learning algorithms do on their own. Using the CERL approach, the researchers were able to make a 3D humanoid agent walk upright in OpenAI’s Humanoid benchmark.

READ MORE ON: VENTURE BEAT
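To give a rough feel for how a hybrid like CERL combines the two families of methods, here is a minimal, hypothetical sketch in Python/NumPy. It is not the authors' implementation: the toy fitness function, the stand-in "gradient learner", and all parameters below are assumptions made purely for illustration. The only idea it borrows from the article is that an evolutionary population and a gradient-based learner improve policies in parallel and share their progress.

```python
# Toy, hypothetical sketch of the CERL idea: an evolutionary population of
# policies is periodically seeded with the policy of a gradient-based learner.
# The environment, fitness function, and learner are stand-ins, not the
# authors' method or code.
import numpy as np

rng = np.random.default_rng(0)

TARGET = rng.normal(size=8)          # stand-in for an "optimal" policy vector


def fitness(policy):
    """Toy stand-in for an episode return (higher is better)."""
    return -np.sum((policy - TARGET) ** 2)


def gradient_learner_step(policy, lr=0.05):
    """Stand-in for a policy-gradient update (gradient of the toy fitness)."""
    grad = -2.0 * (policy - TARGET)
    return policy + lr * grad


def evolve(population, elite_frac=0.25, mutation_std=0.1):
    """One generation of selection plus mutation on the population."""
    scored = sorted(population, key=fitness, reverse=True)
    elites = scored[: max(1, int(len(population) * elite_frac))]
    children = [
        elites[rng.integers(len(elites))] + rng.normal(scale=mutation_std, size=8)
        for _ in range(len(population) - len(elites))
    ]
    return elites + children


population = [rng.normal(size=8) for _ in range(20)]
learner_policy = rng.normal(size=8)

for generation in range(50):
    population = evolve(population)
    # The gradient learner improves on its own...
    for _ in range(5):
        learner_policy = gradient_learner_step(learner_policy)
    # ...and periodically injects its policy into the evolutionary population,
    # replacing the weakest member: the "collaboration" between the two methods.
    weakest = min(range(len(population)), key=lambda i: fitness(population[i]))
    population[weakest] = learner_policy.copy()

best = max(population, key=fitness)
print(f"best fitness after 50 generations: {fitness(best):.4f}")
```

In the actual CERL setup the fitness would come from rolling out policies in an environment such as Humanoid, and the gradient learners would be off-policy RL agents; this sketch only illustrates the coordination pattern between the population and the learner.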
