Rodney Brooks had it right in his 1991 paper “Intelligence Without Reason.” His approach to Artificial Intelligence is based on the emergence of behaviors not explicitly programmed into a system. Instead, functions designed to control a small part of a robot are organized in a priority hierarchy. Lower-priority functions yield to higher-priority ones. These functions each make the robot behave in a certain way and are thus called behaviors. When a robot is out in the world, behaviors begin switching back and forth very quickly, each taking over the robot, sometimes for only milliseconds at a time. You’d think that this quick back-and-forth switching of behaviors would create a chaotic, out-of-control robot. What really happens is that the robot appears to exhibit higher-level behaviors that were never programmed into it. Emergence happens.
I built a maze in my living room from 1 x 8 boards lying on their sides and released my robot at one end. The goal was for my robot “Beto” to find the exit on the other side of the 12′ x 12′ labyrinth of wooden walls. The behaviors I programmed were simple. From highest priority (1) to lowest (5), the behaviors were:
- When the bumper switch is touched on the right, stop and turn left.
- When the bumper switch is touched on the left or in front, turn right.
- When the IR sensor sees something on the right, turn left.
- When the IR sensor sees something in front or on the left, turn right.
- Unconditionally drive forward while arcing to the right.
Each behavior was responsible for one single, simple thing. Each ran as a discrete process and monitored the world for its condition to become true. When a behavior’s condition became true, it took control of the robot and performed its action. If two behaviors tried to take over the robot at the same time, the higher-priority behavior won. When I turned on the robot in an open space, only behavior #5 was in operation because none of the other behaviors’ conditions were true. The robot began to drive forward with a bias to the right. Once it came across an obstacle, one of the other behaviors would take over.
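The arbitration scheme described above can be sketched in a few lines. This is a minimal illustration, not the robot’s actual code: the sensor keys and action names are hypothetical, and each behavior is reduced to a condition paired with an action, checked in priority order every cycle.

```python
# Subsumption-style arbitration sketch: highest-priority behavior whose
# condition holds against the current sensor state wins control this cycle.
# Sensor keys and action names are invented for illustration.

BEHAVIORS = [  # ordered highest priority first
    (lambda s: s["bump_right"],                   "stop_then_turn_left"),
    (lambda s: s["bump_left"] or s["bump_front"], "turn_right"),
    (lambda s: s["ir_right"],                     "turn_left"),
    (lambda s: s["ir_front"] or s["ir_left"],     "turn_right"),
    (lambda s: True,                              "arc_forward_right"),  # default
]

def arbitrate(sensors):
    """Return the action of the highest-priority behavior that fires."""
    for condition, action in BEHAVIORS:
        if condition(sensors):
            return action

# In open space no sensor fires, so only the default behavior runs.
open_space = {"bump_right": False, "bump_left": False, "bump_front": False,
              "ir_right": False, "ir_front": False, "ir_left": False}
print(arbitrate(open_space))  # arc_forward_right
```

Because the loop returns at the first true condition, a right-bumper hit always preempts every IR behavior and the default, which is exactly the tie-breaking rule described above.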
I dropped him at the beginning of the maze. The result was fascinating. I had deliberately designed a long, narrow corridor in the maze to try to confuse the robot. The robot drove right down the middle of the corridor in a straight line, slowed before reaching the end wall, stopped for about a second, turned 180 degrees, and proceeded back out the way it came. None of those behaviors were programmed into the robot, but the rapid switching among the few simple behaviors caused this complex behavior to emerge.
I spent some time decomposing this kind of emergent behavior and was never able to completely and confidently explain every nuance. However, it was obvious that the emergent behaviors came from the programmed ones operating a few milliseconds at a time, switching tens or hundreds of times a second, adjusting motor speeds and voltage levels, and performing logic. With these few simple behaviors, much of the analysis was speculation, and I quickly concluded that to create more organic-behaving robots, I had to just let go.
The robot successfully navigated his way through a different maze layout every time, validating another of Dr. Brooks’s tenets: that robots should be able to react to a dynamic and changing world.
In his article “Law and Disorder,” Mark Buchanan shares a case from General Motors in 1992, when the company was struggling to optimize the schedules of the robots that automatically painted trucks coming off the assembly line. GM’s Dick Morley suggested that the robots be left to determine their own painting schedules.
Morley set out a few simple rules for each machine, whereby each would “bid” for new jobs with an unconditional desire to stay busy. “The results were remarkable, if a little weird. The system saved General Motors more than $1 million each year in paint alone. Yet the line ran to a schedule that no one could predict, made up on the fly by the machines themselves as they responded to emerging needs.”
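The article gives no implementation details, but the bidding idea can be sketched as a toy market. Everything below is an assumption for illustration: the `Booth` class, the bid formula (idle booths bid high, busy ones bid low), and the tie-breaking are all invented, not Morley’s actual rules.

```python
# Toy sketch of machines bidding for jobs: each paint booth bids according
# to how busy it is, and the job goes to the highest bidder. The bid
# formula and fields are hypothetical.

class Booth:
    def __init__(self, name):
        self.name = name
        self.queue = 0  # jobs currently waiting at this booth

    def bid(self):
        # An idle booth wants work badly; a loaded one bids low.
        return 1.0 / (1 + self.queue)

def assign(booths):
    """Give the next job to the highest-bidding (least busy) booth."""
    winner = max(booths, key=lambda b: b.bid())
    winner.queue += 1
    return winner

booths = [Booth("A"), Booth("B"), Booth("C")]
for _truck in range(5):
    assign(booths)
# Work spreads itself evenly without any central schedule.
```

The point mirrors the quote: no one dictates the schedule, yet load balancing emerges from each machine greedily pursuing its own simple rule.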
Stephen Wolfram is yet another from this behavior-based camp. In his book “A New Kind of Science,” he argues that the rules of nature aren’t necessarily limited to traditional mathematics. Instead, he suggests that complex structures emerge from low-level cellular automata following simple, more generalized rules.
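A concrete example of this is an elementary cellular automaton, the system Wolfram studies at length. The sketch below runs Rule 30, where each cell’s next state depends only on itself and its two neighbors, yet the global pattern becomes intricate; grid size and iteration count here are arbitrary choices.

```python
# Elementary cellular automaton (Wolfram's Rule 30): a one-line local rule
# that produces complex global structure from a single live cell.

def step(cells, rule=30):
    """Advance one generation on a ring; each cell looks at 3 neighbors."""
    n = len(cells)
    return [
        (rule >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

row = [0] * 31
row[15] = 1  # single live cell in the middle
for _ in range(15):
    print("".join("#" if c else "." for c in row))
    row = step(row)
```

The rule number encodes the whole update table: bit `k` of 30 is the next state for the neighborhood whose three cells spell `k` in binary, which is the same numbering scheme Wolfram uses.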
As robotics and Artificial Intelligence approach the dawn of the Singularity, the complexities emerging from these core threads of behavior will become as unpredictable as humans. All we need to do is let go.