Artificial intelligence researchers are improving the technology every day, including at Google parent company Alphabet, where researchers have taught a robotic arm to stack blocks using reinforcement learning.
According to VentureBeat, the task had previously been thought to be nearly impossible to accomplish using data alone.
DeepMind, Alphabet’s artificial intelligence research team, developed a new learning method similar to work from OpenAI, which recently published a paper on transferring skills from agents trained in simulation to a real-world robot that solves a Rubik’s Cube.
DeepMind proposes a two-step procedure for adaptation:
- In the first step, the team used a simulated environment to learn a policy that solves the cube-stacking task from synthetic images and proprioception. A state-based agent with access to the simulator’s state and a vision-based agent that uses raw pixel observations are trained simultaneously, with the state-based agent providing data for the reinforcement learning of the vision-based agent.
- For the second step, unlabeled real image sequences are used to adapt the state representation to the real domain through a common objective that applies to both simulation and reality. The DeepMind team says this mitigates the negative effects of the gap between simulation and the physical robot by leveraging unlabeled data collected by the simulation-trained agent (see the sketch after this list).
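The article does not include DeepMind’s implementation details, but the two steps above can be illustrated with a rough PyTorch sketch. Everything here is an illustrative assumption rather than DeepMind’s actual method: random tensors stand in for the simulator and the real robot camera, behavior cloning stands in for the state-based agent “providing data” to the vision-based agent, and a simple next-frame-embedding prediction loss stands in for the shared self-supervised adaptation objective.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

ACTION_DIM, REPR_DIM, IMG = 4, 16, (3, 64, 64)

class VisionEncoder(nn.Module):
    """Maps raw pixel observations to a compact state representation."""
    def __init__(self, out_dim=REPR_DIM):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=4, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=4, stride=2), nn.ReLU(),
            nn.Flatten(),
            nn.Linear(32 * 14 * 14, out_dim),  # 64x64 input -> 14x14 feature map
        )

    def forward(self, img):
        return self.net(img)

encoder = VisionEncoder()
policy_head = nn.Linear(REPR_DIM, ACTION_DIM)  # vision-based agent's action output
opt = torch.optim.Adam(
    list(encoder.parameters()) + list(policy_head.parameters()), lr=1e-4)

def state_based_agent(sim_state):
    """Stand-in for the simulator-privileged agent trained with RL;
    here it simply emits a placeholder action."""
    return torch.tanh(sim_state[:, :ACTION_DIM])

# ---- Step 1: train the vision-based agent in simulation ------------------
# The state-based agent sees the true simulator state and chooses actions;
# the vision-based agent learns from those actions and the rendered synthetic
# images. Behavior cloning stands in here for "providing data" to the
# vision-based agent's learning.
for step in range(100):
    sim_state = torch.randn(32, REPR_DIM)   # placeholder simulator state batch
    sim_image = torch.rand(32, *IMG)        # placeholder synthetic renders
    target_action = state_based_agent(sim_state).detach()
    pred_action = policy_head(encoder(sim_image))
    loss = F.mse_loss(pred_action, target_action)
    opt.zero_grad()
    loss.backward()
    opt.step()

# ---- Step 2: self-supervised adaptation with unlabeled real images -------
# One label-free objective (here: predicting the next frame's embedding from
# the current frame's) is applied to both simulated and real image sequences,
# nudging the learned representation toward the real domain.
predictor = nn.Linear(REPR_DIM, REPR_DIM)
adapt_opt = torch.optim.Adam(
    list(encoder.parameters()) + list(predictor.parameters()), lr=1e-4)
for step in range(100):
    real_frames = torch.rand(32, 2, *IMG)   # placeholder unlabeled real frames (t, t+1)
    sim_frames = torch.rand(32, 2, *IMG)    # matching synthetic frames
    for frames in (real_frames, sim_frames):
        z_t = encoder(frames[:, 0])
        z_next = encoder(frames[:, 1]).detach()
        adapt_loss = F.mse_loss(predictor(z_t), z_next)
        adapt_opt.zero_grad()
        adapt_loss.backward()
        adapt_opt.step()
```

In the published work these pieces would be a full reinforcement learning loop and a carefully designed self-supervised objective; the sketch only shows how the two training stages fit together.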
DeepMind said its learning method yielded a clear improvement over domain randomization and other self-supervised adaptation techniques, resulting in a cube-stacking success rate of 62% compared to a 12% baseline. The technology could be applied to a wide range of robotic manipulation tasks, DeepMind says.
Do you know anyone that would fail at stacking cubes 38% of the time?
Maybe only if they’ve had a few. While getting a robot to perform mundane tasks without directly controlling it is an enormous achievement for reinforcement learning, the field of AI robotics still has a long way to go.
In the OpenAI study, its robot, Dactyl, was able to solve a Rubik’s Cube with one robotic hand with what appears to be human-like dexterity.
However, Dactyl needs some serious work on its motor skills, as it dropped the cube eight out of 10 times in testing.
“I wouldn’t say it’s total hype—it’s not,” Ken Goldberg, a roboticist at UC Berkeley, told Wired. “But people are going to look at that video and think, ‘My God, next it’s going to be shuffling cards and other things,’ which it isn’t.”