Robot Central had the opportunity to speak with Sergey Popov, CTO of Skilligent, about their robot learning technology. We were intrigued by the prospect of a robot that could be taught skills and behaviors without any conventional programming. I’ve asserted that Human-Robot Interaction is about to reach a tipping point that will make personal and service robots available to the masses, and the technology Skilligent has developed is yet another validation of that assertion.
“We believe that trainable robots utilizing Skilligent will be much more flexible than today’s robots, which are either remotely controlled or pre-programmed to perform a few functions,” Popov said. “You could train a robot to do anything from simple palletizing in your garden to assembling furniture in a workshop.”
This range is possible because the software does not require a specific control system, programming language, or hardware platform to operate.
Although Skilligent does not yet officially support Microsoft Robotics Studio (“MSRS”), they provide a C# version of their library which is easily consumable by MSRS. Additionally, Skilligent has a professional services group that will work with its customers to enable its technology with MSRS if necessary.
The software does have some explicit dependencies, however. “Our technology requires methods of observing the world and getting positive and negative feedback from the trainer, just as a human would.” As such, a camera is required to observe the world. The only actuators required are those needed to perform the desired behavior: if you wanted to teach a robot to open a door, for example, it would need a gripper.
In a sequence of three videos created by Skilligent, a robot is taught a basic set of behaviors. In all videos the trainer gains the robot’s attention by shaking an object in front of it; the robot will instinctively follow the object until the trainer shifts attention to another one. In video #1, the trainer leads the robot to object #1, a random poster. He repeats this exercise in video #2. In video #3, the robot performs the trained behavior. To the untrained eye, this is a simple matter of record and playback. Upon closer scrutiny, however, we observe that the robot knows when to perform the task and has stored a symbolic representation of the target objects in its database, which it can recall later. Had the target objects been moved or rearranged, the robot would still have performed the desired behavior.
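The distinction between record-and-playback and symbolic recall can be sketched in a few lines. Everything here is an illustrative assumption — the data structures, the `locate` stand-in for vision, and the object name `poster_1` are invented for the sketch; Skilligent’s internal representation is not public.

```python
# A record-and-playback system would store raw waypoints from training;
# this breaks as soon as the poster is moved.
playback_trace = [(0.0, 0.0), (1.0, 0.5), (2.0, 1.0)]  # fixed coordinates

# A symbolic representation stores WHICH object to approach, not WHERE it was.
skill_db = {"approach_target": {"target": "poster_1"}}

def locate(object_id, world):
    """Look the object up in the current scene (a stand-in for vision)."""
    return world[object_id]

def perform(skill, world):
    # The robot drives to wherever the target object is *now*.
    return locate(skill["target"], world)

# The poster has been moved since training; the symbolic skill still works.
world_now = {"poster_1": (5.0, -2.0)}
print(perform(skill_db["approach_target"], world_now))  # → (5.0, -2.0)
```

The point of the sketch is only that the stored behavior refers to the object’s identity, so a fresh visual lookup at execution time makes it robust to rearrangement in a way a replayed trajectory cannot be.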
Besides task-level behaviors, a Skilligent-equipped robot can be trained to execute low-level control policies called “skills.” Task-level behaviors combine low-level skills in a hierarchical structure.
The software provides a skills-database abstraction that can be shared with other robots running the software. Furthermore, skills can be used as building blocks to create more complex behaviors. For example, you could teach a robot to fill a watering can with water as the “Fill Watering Can” skill. You could later teach the robot to water the plants, a behavior that reuses the “Fill Watering Can” skill.
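The hierarchy described above can be sketched as a simple composite structure. The class names, skill names, and stub actions below are illustrative assumptions for the sketch, not Skilligent’s actual library API.

```python
# Hypothetical sketch of hierarchical skill composition: a behavior is a
# named sequence of lower-level skills, and a behavior can itself serve
# as a step inside a larger behavior.

class Skill:
    """A low-level control policy with a name and an action."""
    def __init__(self, name, action):
        self.name = name
        self.action = action  # callable performing the low-level control

    def run(self):
        return self.action()


class Behavior(Skill):
    """A task-level behavior composed of skills (or other behaviors)."""
    def __init__(self, name, steps):
        self.name = name
        self.steps = steps  # ordered list of Skill/Behavior instances

    def run(self):
        # Execute each constituent step in order.
        return [step.run() for step in self.steps]


# Invented example mirroring the article's watering scenario.
grasp_can = Skill("Grasp Can", lambda: "grasped can")
open_tap = Skill("Open Tap", lambda: "opened tap")
fill_can = Behavior("Fill Watering Can", [grasp_can, open_tap])

pour = Skill("Pour", lambda: "poured water")
water_plants = Behavior("Water the Plants", [fill_can, pour])

print(water_plants.run())  # → [['grasped can', 'opened tap'], 'poured water']
```

Because “Fill Watering Can” is just another step, sharing a skills database between robots amounts to sharing these named building blocks — which is what makes the homogeneity question in the next paragraph interesting.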
This opens a new and unknown space. Will human trainers want to share their robot’s library of behaviors with other trainers? If so, we’ll face the same problem of requiring homogeneity in actuators, sensors, and mobility so that the behaviors perform consistently across robots. It may be so cheap, easy, and fun to train robots, however, that this space won’t have the same limitations that control systems do.
Skilligent may be on the verge of inventing the killer application for robotics.