The first step towards self-aware machines?
Despite the advent of robots and artificial intelligence (AI), truly self-aware robots - a staple of science fiction - still seem far off, given the current level of robot intelligence. Yet researchers from Columbia University appear to have taken a small but significant step towards creating self-awareness in robots, and the results, frankly, are fascinating. Here's more.
Most robots today learn using human-provided simulators and models, a far cry from the self-directed learning humans are capable of. Breaking from this tradition, researchers from Columbia Engineering recently published a study in the journal Science Robotics about a robot that can learn about itself and its environment from scratch. Astonishingly, the robot demonstrated self-learning capabilities similar to those of a newborn child.
For the study, the researchers created a robotic arm. From its 'birth', the robot had no sense of what shape it was - whether it was a spider, a bird, or an arm. Armed with no such knowledge, the robot flailed about at random and collected around one thousand movement trajectories. Then, using a machine learning technique called deep learning, the robot began building simulated models of itself.
It should be noted that the robot did not have any prior knowledge of physics, geometry, or motor dynamics either.
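The basic idea - random "motor babbling" followed by fitting a model to the collected data - can be illustrated with a toy sketch. The two-link arm, its link lengths, and the simple regression below are all assumptions for illustration; the study itself used a four-degree-of-freedom arm and deep neural networks.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 2-link planar arm standing in for the study's robot.
# Link lengths are assumptions chosen for illustration.
L1, L2 = 0.5, 0.4

def true_forward_kinematics(q):
    # The real arm's hand position; the robot never sees this function,
    # only the (joint angles, hand position) pairs it records.
    x = L1 * np.cos(q[:, 0]) + L2 * np.cos(q[:, 0] + q[:, 1])
    y = L1 * np.sin(q[:, 0]) + L2 * np.sin(q[:, 0] + q[:, 1])
    return np.stack([x, y], axis=1)

# "Motor babbling": ~1,000 random joint configurations, as in the study.
q = rng.uniform(-np.pi, np.pi, size=(1000, 2))
xy = true_forward_kinematics(q)

# Self-model: here, linear regression on trigonometric features of the
# joint angles (a stand-in for the study's deep network).
def features(q):
    return np.column_stack([np.cos(q[:, 0]), np.sin(q[:, 0]),
                            np.cos(q.sum(axis=1)), np.sin(q.sum(axis=1)),
                            np.ones(len(q))])

W, *_ = np.linalg.lstsq(features(q), xy, rcond=None)

def self_model(q):
    return features(q) @ W

# On held-out poses, the learned self-model closely matches reality.
q_test = rng.uniform(-np.pi, np.pi, size=(200, 2))
err = np.linalg.norm(self_model(q_test) - true_forward_kinematics(q_test), axis=1)
print(err.max())
```

The point of the sketch is the data flow, not the model class: the robot only ever observes consequences of its own random actions, yet that suffices to fit a usable model of its own body.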
Initially, these models, or its 'self-image', were inaccurate, and the robot still had no idea what it was or how its joints were connected. But after about 35 hours of intensive computation, the model became consistent with the robot's actual form, differing from it by only about four centimeters. The robot was then given pick-and-place tasks to test its understanding of itself.
In a closed-loop system, which let the robot recalibrate itself based on external feedback, it performed the tasks with 100% success. In an open-loop system, where the robot had to rely solely on its internal 'self-understanding' to perform the tasks, it succeeded 44% of the time - which is still quite impressive.
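The closed-loop vs open-loop distinction can be made concrete with a toy example (an illustration, not the study's setup): a one-dimensional "reach" where the robot's internal self-model is slightly wrong about its actuator gain. Open-loop control plans once from the model and executes blindly; closed-loop control measures the remaining error after each move and corrects.

```python
# Toy 1-D reaching task: the self-model believes the actuator gain is 1.0,
# but the real arm's gain is 1.1 (both values are illustrative assumptions).
REAL_GAIN, MODEL_GAIN = 1.1, 1.0
target = 2.0

def move(position, command):
    # What the real arm actually does in response to a command.
    return position + REAL_GAIN * command

# Open loop: plan once with the (imperfect) self-model, execute blindly.
open_loop_pos = move(0.0, target / MODEL_GAIN)

# Closed loop: after each move, measure the remaining error and replan.
closed_loop_pos = 0.0
for _ in range(20):
    error = target - closed_loop_pos  # external feedback
    closed_loop_pos = move(closed_loop_pos, error / MODEL_GAIN)

print(abs(open_loop_pos - target))    # noticeable miss
print(abs(closed_loop_pos - target))  # essentially zero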
"That's like trying to pick up a glass of water with your eyes closed, a process difficult even for humans," observed Robert Kwiatkowski, a PhD student and the study's lead author, of the robot's open-loop performance.
The robot was also given the task of writing with a marker. More astonishingly, when the researchers swapped in a deformed part, the robot could detect that it had been damaged and re-train its self-model, or 'self-image', accordingly. With this new self-model, it could then perform the aforementioned tasks with almost no loss in proficiency.
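One plausible way to detect such damage - and the following is an assumed toy mechanism, not the paper's actual method - is to watch for persistent mismatch between what the self-model predicts and what the sensors report, and to refit the model when the mismatch grows too large:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D "arm" whose true gain silently changes when a part deforms.
# All gains and thresholds here are illustrative assumptions.
gain = 1.0        # the real body
model_gain = 1.0  # the robot's current self-model

def sense(command):
    # Noisy observation of what the real body did.
    return gain * command + rng.normal(0, 0.01)

def check_and_retrain():
    global model_gain
    cmds = rng.uniform(-1, 1, size=50)
    obs = np.array([sense(c) for c in cmds])
    pred = model_gain * cmds
    # Large, persistent prediction error signals the body has changed.
    if np.mean(np.abs(obs - pred)) > 0.05:
        # Refit the self-model on fresh data from the damaged body.
        model_gain = np.dot(cmds, obs) / np.dot(cmds, cmds)
        return True
    return False

print(check_and_retrain())  # False: intact arm, model still fits
gain = 0.6                  # simulate a deformed part
print(check_and_retrain())  # True: mismatch detected, self-model refit
```

After the second call, the refit `model_gain` tracks the new body, mirroring how the study's robot recovered nearly full task proficiency after damage.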
"This is perhaps what a newborn child does in its crib. We conjecture that this advantage may have also been the evolutionary origin of self-awareness in humans. While our robot's ability to imagine itself is still crude...we believe that this ability is on the path to machine self-awareness," said Hod Lipson, a professor of mechanical engineering who was part of the study.
The findings are significant insofar as they demonstrate that robots can self-learn, and that this self-learning can be adaptive. While current AI is considered "narrow AI", this research could pave the way for more general AI - i.e. self-aware artificial intelligence. The researchers are now exploring whether robots can also model their own minds - in other words, whether they can think about thinking.
"Self-awareness will lead to more resilient and adaptive systems, but also implies some loss of control. It's a powerful technology, but it should be handled with care," warned the researchers.