
Researchers Design Low-Cost, Obstacle-Ready Robot Dog


By Aaron Aupperlee, Carnegie Mellon University

(Editor’s Note: This story originally appeared here).

This little robot can go almost anywhere.

Researchers at Carnegie Mellon University’s School of Computer Science and the University of California, Berkeley, have designed a robotic system that enables a low-cost and relatively small legged robot to climb and descend stairs nearly its height; traverse rocky, slippery, uneven, steep and varied terrain; walk across gaps; scale rocks and curbs; and even operate in the dark.

“Empowering small robots to climb stairs and handle a variety of environments is crucial to developing robots that will be useful in people’s homes as well as search-and-rescue operations,” said Deepak Pathak, an assistant professor in the Robotics Institute. “This system creates a robust and adaptable robot that could perform many everyday tasks.”

The team put the robot through its paces, testing it on uneven stairs and hillsides at public parks, challenging it to walk across stepping stones and over slippery surfaces, and asking it to climb stairs that for its height would be akin to a human leaping over a hurdle. The robot adapts quickly and masters challenging terrain by relying on its vision and a small onboard computer. 

The researchers trained the robot using 4,000 simulated clones of it, which practiced walking and climbing over challenging terrain in a simulator. The simulator’s speed allowed the robot to gain six years of experience in a single day. The motor skills the clones learned during training were stored in a neural network that the researchers copied to the real robot. This approach did not require any hand-engineering of the robot’s movements — a departure from traditional methods.
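
As a rough illustration only, a massively parallel training setup of this kind might be sketched as below; the dimensions, network, environment, and file name are hypothetical placeholders, not the team’s actual code:

    import numpy as np

    NUM_CLONES = 4_000         # simulated copies of the robot trained in parallel
    OBS_DIM, ACT_DIM = 48, 12  # hypothetical observation and joint-command sizes

    rng = np.random.default_rng(0)

    # A tiny stand-in for the neural network that stores the learned motor skills.
    policy_weights = rng.normal(scale=0.01, size=(OBS_DIM, ACT_DIM))

    def policy(observations):
        """Map every clone's observations to joint commands in one batched call."""
        return np.tanh(observations @ policy_weights)

    def simulate_step(observations, actions):
        """Placeholder physics step: returns new observations and a reward per clone."""
        new_obs = observations + 0.01 * rng.normal(size=observations.shape)
        rewards = -np.linalg.norm(actions, axis=1)  # stand-in training objective
        return new_obs, rewards

    # All 4,000 clones gather experience at once, which is how a single day of
    # wall-clock simulation can compress years of practice.
    obs = rng.normal(size=(NUM_CLONES, OBS_DIM))
    for step in range(1_000):
        actions = policy(obs)
        obs, rewards = simulate_step(obs, actions)
        # ...update policy_weights from (obs, actions, rewards) with an RL algorithm...

    # The trained network is then copied to the real robot's onboard computer.
    np.save("policy_weights.npy", policy_weights)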

Most robotic systems use cameras to create a map of the surrounding environment and use that map to plan movements before executing them. The process is slow and can often falter due to inherent fuzziness, inaccuracies, or misperceptions in the mapping stage that affect the subsequent planning and movements. Mapping and planning are useful in systems focused on high-level control but are not always suited for the dynamic requirements of low-level skills like walking or running over challenging terrains.

The new system bypasses the mapping and planning phases and routes the vision inputs directly to the robot’s controls. What the robot sees determines how it moves. Even the researchers do not specify how the legs should move. This technique allows the robot to react quickly to oncoming terrain and move through it effectively.
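
A hedged sketch of what such a direct vision-to-control loop could look like on an onboard computer is shown below; the sensor and motor interfaces, image size, and joint counts are assumed placeholders rather than details from the paper:

    import numpy as np

    def read_depth_image():
        """Placeholder for the robot's forward-facing depth camera."""
        return np.zeros((64, 64))

    def read_joint_state():
        """Placeholder for proprioceptive feedback (joint angles and velocities)."""
        return np.zeros(24)

    def trained_policy(depth, joints):
        """Stand-in for the network trained in simulation: raw sensor input goes
        straight to 12 joint targets, with no map built and no plan computed."""
        features = np.concatenate([depth.ravel(), joints])
        return np.tanh(features[:12])

    def send_joint_targets(targets):
        """Placeholder for the motor interface."""
        pass

    # Control loop: what the robot sees (and feels) directly determines how it moves.
    # In practice this would run continuously at a fixed control frequency.
    for _ in range(1_000):
        depth = read_depth_image()
        joints = read_joint_state()
        send_joint_targets(trained_policy(depth, joints))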

Because there is no mapping or planning involved and movements are trained using machine learning, the robot itself can be low-cost. The robot the team used was at least 25 times cheaper than available alternatives. The team’s algorithm has the potential to make low-cost robots much more widely available.  

“This system uses vision and feedback from the body directly as input to output commands to the robot’s motors,” said Ananye Agarwal, an SCS Ph.D. student in machine learning. “This technique allows the system to be very robust in the real world. If it slips on stairs, it can recover. It can go into unknown environments and adapt.” 

This direct vision-to-control aspect is biologically inspired. Humans and animals use vision to move. Try running or balancing with your eyes closed. Previous research from the team had shown that blind robots — robots without cameras — can conquer challenging terrain, but adding vision and relying on that vision greatly improves the system.

The team looked to nature for other elements of the system as well. For a small robot — less than a foot tall, in this case — to scale stairs or obstacles nearly its own height, it learned to adopt the movement humans use to step over high obstacles. When a person has to lift a leg up high to clear a ledge or hurdle, they use their hips to move the leg out to the side, a motion called abduction and adduction, giving the leg more clearance. The robot system Pathak’s team designed does the same, using hip abduction to tackle obstacles that trip up some of the most advanced legged robotic systems on the market.

The movement of hind legs by four-legged animals also inspired the team. When a cat moves through obstacles, its hind legs avoid the same items as its front legs without the benefit of a nearby set of eyes. “Four-legged animals have a memory that enables their hind legs to track the front legs. Our system works in a similar fashion,” Pathak said. The system’s onboard memory enables the rear legs to remember what the camera at the front saw and maneuver to avoid obstacles.

“Since there’s no map, no planning, our system remembers the terrain and how it moved the front leg and translates this to the rear leg, doing so quickly and flawlessly,” said Ashish Kumar, a Ph.D. student at Berkeley.
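
One way to picture that onboard memory, purely as an illustration (the buffer length, delay, and data layout are assumptions, not details from the paper):

    from collections import deque

    # The camera only sees terrain ahead of the front legs, so recent frames are
    # kept in a short rolling buffer (the 50-step length is an assumption).
    terrain_memory = deque(maxlen=50)

    def on_camera_frame(depth_image):
        """Store each new camera frame as it arrives."""
        terrain_memory.append(depth_image)

    def terrain_under_rear_legs(steps_behind=30):
        """Recall what the camera saw a moment ago, when the front legs were
        roughly where the rear legs are now, so the rear legs can avoid the
        same obstacles without seeing them."""
        if len(terrain_memory) > steps_behind:
            return terrain_memory[-1 - steps_behind]
        return None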

The research could be a large step toward solving existing challenges facing legged robots and bringing them into people’s homes. The paper “Legged Locomotion in Challenging Terrains Using Egocentric Vision,” written by Pathak, Berkeley professor Jitendra Malik, Agarwal and Kumar, will be presented at the upcoming Conference on Robot Learning in Auckland, New Zealand.

Robotics World News

  • Seed Group and Ryberg Partner to Bring AI-Assisted Disinfection Technology to the Middle East

    Netherlands-based Ryberg combines the latest in computer vision, robotics, and infection-prevention technologies to deliver intelligence and efficiency in disinfection. Ryberg has developed Disinfection Robots, which are self-driving disinfection machines. Ryberg’s patented Disinfection Engine makes intelligent disinfection decisions, allowing the robots to disinfect spaces for up to 6 hours—to bring a layer of defense in the… Read More…

  • Wind River and Airbus Collaborate on Certification of Automatic Air-to-Air Refueling (A3R)

    Wind River, a global leader in delivering software for mission-critical intelligent systems, announced it has worked with Airbus to support the A330 Multi-Role Tanker Transport (MRTT) aircraft for automatic air-to-air refueling (A3R). The MRTT aircraft is the world’s first tanker to be certified for automatic air-to-air refueling boom operations in daylight. Airbus uses VxWorks 653 for the A330 MRTT… Read More…


Products for Robots & Cobots

  • Jabil, OSRAM, Artilux Develop Next-Generation 3D Camera Prototype

    Manufacturing solutions provider Jabil has announced that its optical design center in Jena, Germany, is currently demonstrating a prototype of a next-generation 3D camera with the ability to operate in both indoor and outdoor environments up to a range of 20 meters. The prototype was the result of a combination of proprietary technologies from Jabil,… Read More…

  • Product Profile: IDS CMOS Sensor IMX273

    What is it? IDS cameras can be found in the automotive, packaging, printing, and robotics industries, as well as in medical technology, traffic monitoring, security, kiosk systems, and logistics contexts. The global shutter CMOS sensor IMX273 in the Sony Pregius series offers high image quality, high sensitivity, and wide dynamic range. With a resolution of 1.58… Read More…