Researchers at Cornell, Stanford and Brown universities and the University of California have come up with a method of teaching robots using the cloud.
Dubbed Robo Brain, it is a large-scale computational system that learns from publicly available Internet resources. The data is translated and stored in a robot-friendly format that robots can draw on when they need it.
Ashutosh Saxena, assistant professor of computer science at Cornell University, said that just as laptops and mobile phones reach into the cloud for information they don't store themselves, a robot that encounters something unfamiliar can query Robo Brain in the cloud.
Robo Brain will process images to pick out the objects in them, and by connecting images and video with text, it will learn to recognize objects and how they are used, along with human language and behaviour.
This shortens the time it takes a robot to work out what to do. If a robot sees a teacup, it can learn from Robo Brain not only that it is a teacup rather than a coffee mug, but also that liquids can be poured into or out of it, that it can be grasped by the handle, and that it must be carried upright when it is full.
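The kind of record a robot might get back from such a cloud lookup can be sketched in a few lines of Python. Everything here is illustrative: the query_robo_brain function, the field names and the values are assumptions for the sake of the example, not the project's actual interface.

```python
# Illustrative sketch only: the query function, field names and values
# are assumptions, not Robo Brain's real API.

def query_robo_brain(object_label):
    """Pretend cloud lookup returning what the system 'knows' about an object."""
    knowledge_base = {
        "teacup": {
            "is_a": "container",
            "not_to_be_confused_with": "coffee mug",
            "affordances": ["pour liquid into", "pour liquid out of"],
            "grasp_point": "handle",
            "constraint_when_full": "keep upright",
        }
    }
    return knowledge_base.get(object_label)

if __name__ == "__main__":
    info = query_robo_brain("teacup")
    print(f"Grasp it by the {info['grasp_point']}; when full, {info['constraint_when_full']}.")
```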
The system employs what computer scientists call “structured deep learning,” where information is stored in many levels of abstraction. An easy chair is a member of the class of chairs, and going up another level, chairs are furniture. Robo Brain knows that chairs are something you can sit on, but that a human can also sit on a stool, a bench or the lawn.
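One way to picture those levels of abstraction is a small taxonomy in which a property such as "can be sat on" attaches to categories rather than to individual objects. This is only a sketch of the idea of layered, structured knowledge; the class names are invented here and are not Robo Brain's internal representation.

```python
# Sketch of layered knowledge: properties live at the level of abstraction
# where they apply, so an easy chair inherits "sittable" from Chair, while
# a stool or the lawn can be sittable without being a chair at all.

class Furniture:
    sittable = False

class Chair(Furniture):        # chairs are furniture (one level up)
    sittable = True

class EasyChair(Chair):        # an easy chair is a member of the class of chairs
    pass

class Stool(Furniture):
    sittable = True

class Lawn:                    # not furniture, but still affords sitting
    sittable = True

for thing in (EasyChair(), Stool(), Lawn()):
    print(type(thing).__name__, "can be sat on:", thing.sittable)
```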
Robo Brain stores the information in a mathematical model, which can be represented graphically as a set of points connected by lines. The nodes can represent objects, actions or parts of an image, and each one is assigned a probability: a measure of how much it can vary and still be correct.
This means that a robot's brain builds its own chain of nodes and searches the knowledge base for one that matches within those limits.
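A bare-bones version of that idea, a stored chain whose nodes carry probabilities and a matcher that accepts an observed chain if each node stays within its allowed variation, might look like the sketch below. The labels, numbers and matching rule are invented for illustration; the published system uses a much richer probabilistic graph.

```python
# Minimal illustration of matching an observed chain against a stored one.
# Each stored node carries a probability and a tolerance: how much an
# observation can vary and still count as the same node. All numbers
# are made up for the example.

stored_chain = [
    {"label": "cup_detected",  "probability": 0.90, "tolerance": 0.15},
    {"label": "grasp_handle",  "probability": 0.80, "tolerance": 0.20},
    {"label": "carry_upright", "probability": 0.95, "tolerance": 0.10},
]

def chain_matches(observed, stored):
    """Return True if every observed node stays within the stored node's limits."""
    if len(observed) != len(stored):
        return False
    return all(
        obs["label"] == ref["label"]
        and abs(obs["probability"] - ref["probability"]) <= ref["tolerance"]
        for obs, ref in zip(observed, stored)
    )

observed_chain = [
    {"label": "cup_detected",  "probability": 0.82},
    {"label": "grasp_handle",  "probability": 0.70},
    {"label": "carry_upright", "probability": 0.97},
]

print(chain_matches(observed_chain, stored_chain))  # True: every node is within tolerance
```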