It’s an exciting time for robotic learning. Organizations have spent decades building complex datasets and pioneering different ways to teach systems to perform new tasks. It seems we’re on the cusp of some real breakthroughs in deploying technology that can adapt and learn on the fly.
Over the past year, we’ve seen a number of fascinating studies. Take VRB (Vision-Robotics Bridge), which Carnegie Mellon University showcased back in June. The system applies what it learns from YouTube videos to new environments, so a programmer doesn’t have to account for every possible variation.
Last month, Google DeepMind’s robotics team showed off its own impressive work in the form of RT-2 (Robotic Transformer 2). The system is able to abstract away the minutiae of performing a task. In the example given, telling a robot to throw away a piece of trash doesn’t require a programmer to first teach it to identify specific pieces of trash, pick them up and throw them away; the system works out that seemingly simple (for humans, at least) task on its own.
Additional research highlighted by CMU this week compares its work to early-stage human learning; specifically, the robotic AI agent is likened to a three-year-old toddler. For context, the learning is broken into two categories: active and passive.
Passive learning in this instance is teaching a system to perform a task by showing it videos or training it on the aforementioned datasets. Active learning is exactly what it sounds like — going out and performing a task and adjusting until you get it right.
RoboAgent, a joint effort between CMU and Meta AI (yes, that Meta), combines these two types of learning, much as a human would. Here that means passively observing tasks being performed in internet videos, coupled with active learning by way of remotely teleoperating the robot. According to the team, the system can take what it learns in one environment and apply it to another, similar to the VRB system mentioned above.
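The team hasn’t published its training recipe in this article, but conceptually the two-stream idea looks something like the sketch below: a single policy trained on a mix of passively observed demonstrations and actively collected teleoperation trajectories. Everything here, the dataset names, the mixing ratio, the stand-in training loop, is hypothetical and invented for illustration; this is not RoboAgent’s actual code.

```python
import random

# Hypothetical illustration of mixing passive and active data sources.
# All names and numbers below are invented for this sketch.

# Passive data: (observation, action) pairs inferred from internet videos.
passive_demos = [(f"video_frame_{i}", f"inferred_action_{i}") for i in range(1000)]

# Active data: (observation, action) pairs logged while a human teleoperates the robot.
teleop_demos = [(f"robot_obs_{i}", f"operator_action_{i}") for i in range(100)]

def sample_batch(batch_size=32, active_fraction=0.5):
    """Draw a training batch that mixes both sources.

    Passive data is plentiful but noisy; active data is scarce but
    grounded in the real robot, so it gets oversampled here.
    """
    n_active = int(batch_size * active_fraction)
    batch = random.sample(teleop_demos, n_active)
    batch += random.sample(passive_demos, batch_size - n_active)
    random.shuffle(batch)
    return batch

for step in range(3):  # stand-in for a real imitation-learning loop
    batch = sample_batch()
    # policy.update(batch) would go here in an actual setup
    print("step", step, "batch size", len(batch))
```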
One of the cooler bits is that the dataset is open source and universally accessible. It’s also designed to be used with readily available, off-the-shelf robotics hardware, meaning researchers and companies alike can utilize and build out a growing trove of robot data and skills.
“RoboAgents are capable of much richer complexity of skills than what others have achieved,” says the Robotics Institute’s Abhinav Gupta. “We’ve shown a greater diversity of skills than anything ever achieved by a single real-world robotic agent with efficiency and a scale of generalization to unseen scenarios that is unique.”
This is all super promising stuff for building and deploying multipurpose robotics systems, with an eye toward eventual general-purpose robots. The goal is to create technology that can move beyond the repetitive machines operating in highly structured environments that we typically picture when we think of industrial robots. Actual real-world use and scaling are, of course, a lot easier said than done.
With these approaches to robotic learning, we’re still much closer to the beginning than the end, but it’s an exciting period for emerging multipurpose systems.