Artificial Intelligence

As promised yesterday, here is my discussion with Heather Knight of Marilyn Monrobot. I’d like to thank Heather for taking the time to speak with us after Dr. Ishiguro’s lecture. We hope you enjoy the interview below.

Eric Wind: How’d you come to speak at the lecture tonight?

Heather Knight: I was invited to come speak at the lecture. There aren’t that many roboticists in New York City and I’m not sure whether they found me or whether [Erico Guizzo] recommended me because of our common interest in robots and theater.

E: What part of the conversation did you find the most intriguing and most beneficial for the audience?

H: That’s a good question. It’s always interesting to speak to a general public crowd. This was a really interesting evening because you had the Japanese Cultural Society and then the people interested in robots coming. It prompts a more cultural discussion to begin with, because you’re in New York City at this, you know, Japanese house of culture. So, having an American roboticist and a Japanese roboticist, we both have similar research interests in that we think social robots are really important and we want to make these robot companion type situations. Although, I would say that I want robots to bring people together to connect, rather than being the connection.

He and I both have a strong interest in theater and in thinking about algorithms we can learn from directors and actors, whose knowledge is codified a little differently than psychologists’. Though, psychology is a field which social robotics adopts methodology from. So, we have a lot of similar interests, but we’re also very different, so it was fun.

You know, he’s a lot more experienced than I am. Hiroshi Ishiguro has been working on robotics for several decades and has a very established lab in Japan. I’ve been working on robots for 11 years, and it’s not like I haven’t done anything, but he’s been a huge inspiration for me and it was really exciting to be able to be in that situation.

E: When did you first hear about his work?

H: Well, it’s been at least 7 or 8 years ago. I started doing social robotics when I was an undergrad at MIT working with a professor named Cynthia Breazeal. She made this robot for her PhD called Kismet, that was basically a head that had ears and eyes. It wasn’t trying to be a super-humanoid; it was almost creature-like. It didn’t use words, but it kind of babbled. It responded to the tone of your voice. Sometimes it would be in the mood to play with toys or color-saturated objects, or sometimes it would be in the mood to socialize. I think one of the more clever aspects of its behavior system is that it would get bored. So, if you weren’t being interesting then it would switch to wanting to play with toys, which seems really human. It’s a really simple set of behaviors. But anyone could walk up to this robot without any training and learn how to interact, because it would be like “Hello!” and its ears would perk up, or if you said “You’ve been a very bad robot!” it would make a sad face or something.

So, it’s responding with sound and with these facial expressions that are perhaps a little cartoon-like: not fully human, but still relatable. I think it’s really compelling to think about the simplest ways to come up with human-like robots. That’s often the most difficult thing to do. As an engineer, it’s always more difficult to have a simple solution that’s very clever than it is to have a convoluted Rube Goldberg machine for making breakfast or something.

E: Could you expand on your background a bit more?

H: Sure. I have a Bachelor’s and a Master’s in Electrical Engineering and Computer Science, and now I’m working on a PhD in Robotics. So, you might think I’ve been in school for the last 15 years, but actually I do other things along the way. I’ve taken breaks to travel and do other things, but I got a chance to work at the [NASA] Jet Propulsion Laboratory in California on space stuff. While I was there, I met people who were working in art and technology, and who I ended up collaborating with.

Originally, at SyynLabs, we were building installations for events. For example, you’d have a projection wall where people would come up and dance, and things would fall on them and roll off their shadow. These installations, set in an event context, would get people talking to each other. So it’s technology that is fun and playful. We started building other things, like a bicycle-powered blender or, I don’t know, this one installation where you had to create a human circuit to hear a story — so, if you wanted to hear the full story, you needed to have a group of, like, 10 strangers holding hands. You’re using technology to trick people into spending time with each other. We used to call it “technological inebriation.”

It was really technology for people, and I think that’s a great metaphor for some of the robots I want to make. I don’t want to make robots for the sake of replacing people, or, I don’t know, for their own good. I think you can use robotics in these interactive art pieces to bring out features in ourselves or connect us to each other. Like, there are autism therapy applications, where kids with autism feel more comfortable talking to robots than people because it’s less overwhelming — less sensory overload. If they practiced with this robot, this kind of stepping-stone agent, then they could better integrate generally, or get used to those less functional but still socially important aspects of interaction.

E: What was your Master’s thesis on?

H: I did my Master’s thesis on this project called the Huggable. It was this robotic teddy bear that had a full-body sensing skin. I was trying to come up with a way to make that sensing happen in real time, so it could react naturally. It’s like if you were to pick up something, like a puppy or a baby. So, how do we communicate with puppies and babies? You pet them, hold them or you might tickle them. If they’re asleep, you pat them to wake them up. All of that communication that is happening is very complex. Anyone who has a small child or has played with a small child could tell you that the child knows what it wants, but it’s not verbal. So how can you create pre-verbal interactions?

My thesis was on what kind of touch gestures we use to communicate with this robot teddy bear. This involved human studies with a puppeteer: someone pretending to be the robot would watch the video feed and react naturally, while the bear’s sensors recorded how people were communicating with it.
Basically, I got this data corpus to see how people use touch to communicate. It becomes a pattern recognition problem, where you have to categorize how people use touch and then think about “how can I detect this?” Since I was trying to build a system that would work in real time, one of the things I discovered is that with touch, you don’t need really fine-tuned sensing. As long as you cover an area that is two by three inches, you’re going to capture most communication. You don’t need a really fine grid.

The second thing is most touch lasts one to five seconds, so the classification doesn’t need to be particularly quick. Within that, you need to do some frequency analysis. For example, tickling is a very noisy signal; it involves a lot of different frequencies. Petting is more of a regular sine wave. Then, you can see how you differentiate between these different kinds of touch.
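A rough sketch of that kind of frequency analysis (a toy illustration with invented signals and an invented threshold, not the Huggable’s actual pipeline): a regular “petting” stroke concentrates its spectral energy in one frequency bin, while broadband “tickling” noise spreads it thin.

```python
import numpy as np

def classify_touch(signal):
    """Toy classifier: petting looks like a clean low-frequency sine,
    tickling like broadband noise. Compare how much of the spectral
    energy sits in the single strongest frequency bin."""
    spectrum = np.abs(np.fft.rfft(signal - np.mean(signal)))
    total = spectrum.sum()
    if total == 0:
        return "no touch"
    # Near 1.0 for a pure sine (petting); much lower for noise (tickling).
    concentration = spectrum.max() / total
    return "petting" if concentration > 0.5 else "tickling"

rate = 100                                       # modest 100 Hz sampling;
t = np.linspace(0, 2, 2 * rate, endpoint=False)  # touch lasts 1-5 seconds
petting = np.sin(2 * np.pi * 2 * t)              # regular 2 Hz stroking
tickling = np.random.default_rng(0).normal(size=t.size)  # noisy signal

print(classify_touch(petting))   # petting
print(classify_touch(tickling))  # tickling
```

A real system would work over sliding windows of the skin’s sensor readings and use more robust features, but the underlying spectral idea she describes is the same.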

My degree was in Electrical Engineering, so it was designing the sensor system but it was also coming up with a simple pattern recognition system.

E: What’s your doctoral thesis, and how’s the progress?

H: I haven’t declared my thesis yet. I have finished my coursework, and then we have qualifiers and so on. I’m in the very final stage of my qualifiers; I will complete those this semester and hopefully put forth my proposal in the fall.

E: Do you have any idea of what your thesis proposal will be?

H: Yeah, so I learned that you’re not supposed to propose until you’ve already finished some of the work. That way, you’re not proposing something you’ve never done but you’re proposing something you’ve already tried out, so you know it has a chance of working.

In our department, people usually propose when they have 20–40% of the work done. I’m hoping mine is going to be about expressive motion. Basically, how can non-anthropomorphic robots be expressive? I’m interested in how motion can describe the state of a relationship: “Do I know you?” “Do I not know you?” “Do I like or dislike you?” “Are you my boss, or am I your boss?” Power relationships are important. Then there can be room for emotions. Or, something else that’s interesting is trying to measure how much a robot is in a rush by how quickly it’s going. We can see that with drivers and cars now. It’s just a question of whether we can categorize that in a general way.

I might get better at my elevator pitch in a couple of years, but the basic idea is to see if there are some universals of expression that we can distill to use on non-anthropomorphic robots. It’s basically robot body language.

E: What got you interested in robotics?

H: I didn’t grow up obsessed with robots. I fell in love with robots when I started building them. In my freshman year at MIT, I was talking to people in my living group and asking about an internship. Someone said, “Hey! I work in a robotics lab. I could probably get you a position.”

So, I just started working there, January — maybe 2002. Over the summer (it was Cynthia Breazeal’s first year as a professor), we had this big group project to kick off our research group. We built this big interactive terrarium and brought it to a big conference in San Antonio, where we were in the emerging technologies exhibit. You know, it was kind of like Epcot Center. There was this big robot that had this hand-thing that would see people, say “Hello!”, then it would get bored and go play in the waterfall, then it would get tired and turn in for the night in this cave. We went really crazy. There were these rock crystals that would turn on, and these drums you could play with, and these fiber optic tube worms that I got to put together. I was 18 and it was awesome. By the end of the 5 days, I could restart the whole system myself and I could talk to all these different people. It wasn’t just getting to build the robot and see it move; it was seeing people interact with the robots.

E: What do you feel sets you apart from other roboticists?

H: I don’t know. I definitely have fun with what I do. My father was an engineer, and he would design propulsion systems for ships and submarines. He’s really great at math and physics. My mother was a Peace Corps volunteer, all about international understanding, so she really wanted to impact the world.

I like building things and I like solving problems, and then my mother’s voice is in the back of my head saying “Well, why do people care about this?” I think that’s one of the reasons I didn’t want to do space stuff anymore. I wanted to impact real human beings. So, I don’t know how different that is but I really like imagining the future.

E: What’s your favorite project that you’ve worked on so far?

H: Well, if you asked what my favorite robot is, then I would be in really big trouble back at home if I didn’t say Data.

I don’t know, there have been so many projects I’ve been involved in in different ways. So, the precursor to the Rube Goldberg machine on YouTube is the OK Go music video. That was the project where I thought, “Oh my god, you could learn so much from professionals.” The band made that machine so much cooler than if we had built it by ourselves. They are professional entertainers and they have this intuition about what audiences care about and how to reach people. It’s part of the motivation I’ve gained in wanting to work with actors.

What I left out before: I want to work with actors, dancers, and directors to help craft these expressive motions that I’m trying to find universals for in robots. I’m really interested in seeing how we can adopt bodies of knowledge from theater into robotics, or from disciplines of art that people have been spending hundreds or thousands of years honing, rather than trying to reinvent the wheel as engineers. We can make engines work, but suddenly we’re trying to make these socially intelligent machines. Like, are engineers really the best people to be making socially intelligent machines? There’s some sort of weird clash there.

So, I’m trying to distill knowledge from a non-technical field into a world where you can program stuff. Some of that has been about creating interfaces where you can have kinetic conversations.

E: How would you explain social robotics and its significance to the average person?

H: Social robotics is the idea that you can make the human-robot interface smooth. So, instead of teaching you how to program the robot, you can just walk up to the robot and communicate and figure out the interface for it.

Social robotics is super-important if you ever want to have humans and robots working together in ways that aren’t programmer-robot. Right now, we don’t really have that. We have tons of robots on industrial manufacturing floors and sorting our mail, and we have sent them to the surface of Mars. But, to do every-day things with robots, we have to create an interface to make that possible.

E: What’s the idea behind Marilyn Monrobot Labs, and what drove you to start it?

H: I’m really interested in the intersection between robotics and theater. As much as I get to explore that as a researcher, I also think there is artistic value to that intersection. Marilyn Monrobot lets me explore that. So, it’s the umbrella name for our robot theater company. It’s where we do our robot-comedy stuff and the robot film festival. Last year, we did a robot cabaret variety show with 10 acts, exploring how the modern world is already a cyborg society because of our dependence on phones. It’s allowed us to consider the changing ethical ramifications of our changing relationships with each other, via technology. Like, you hear about freshmen who arrive at their new college with, like, 200 Facebook friends there, but they don’t know how to talk to someone at the orientation party. So, are we losing our humanity to technology? Obviously, I’m not a pessimist about technology, but I think it’s equally naive not to think through where technology can go.

E: How did you decide on the name, Marilyn Monrobot?

H: Well, JPL is really flat. You don’t really have parking garages in earthquake country, so instead we had this 20 minute walk from my office to the enormous parking lot. Of course, seniority is how you actually get to park close to your office, but since the average age there is 50-something and the average working span is 30 years, we were kind of the kids. So it just kind of came to me walking through the parking lot.

I also found out later that Marilyn Monrobot was a Futurama episode, or it was a segment, which is fantastic. I didn’t know about that at the time. But, it’s supposed to represent this intersection between robotics and entertainment.

E: Could you tell us about the robot census and how that’s going?

H: So, the robot census started when I first arrived at Carnegie Mellon University. They do this thing where when you first arrive, you don’t know who your adviser is going to be but that is your most important relationship during your PhD. The average time for the degree is 5 and a half years, so some call it the marriage process. It’s longer than some marriages.

I was going to school and there were 500 other people working in robotics in some capacity, and we’re supposed to choose our adviser out of the 80-something professors. We didn’t even know who had what robot. Like, I’m at the Robotics Institute and, obviously, I have to partially choose my adviser by what kind of robots they have, right? If this is our marriage, then they have children.

So, I started this census on campus and people thought it was interesting and I opened it up to the world. I think it should be done every four years, kind of like this other census you may have heard of that involves the population of the United States.

E: Is it difficult rounding up information for the robot census?

H: Yeah, even in person on campus. I think campuses should run their own censuses and collect information. We had to hand out physical forms and then send links out to the digital form. It was like marketing. I had no idea, but apparently you should feel okay sending up to ten reminders. But we didn’t do that; we went in person after a while. So, there were a few that probably slipped through the cracks, but I’m sure that’s true of other censuses too.

E: How many robots have you documented?

H: We’ve documented 547 robots on campus. There’s an off-campus facility for robotics, but we didn’t do the census there, though I would love to expand to that.

E: Do you feel that the anxiety people have could be attributed to the perceived lack of sociability of robots?

H: No, I think it’s religion. Fear of robots is a Western culture thing. It’s this idea that we’re usurping the role of God, and it’s kind of like Frankenstein because we’re doing what we should not be doing — you know, what we’re doing is wrong and we will be punished. It’s tapping into mythology.

Storytelling is a cultural phenomenon. It’s not based in reality. It’s based in human perception and culture and so on. So this idea that we’re not supposed to be playing God, and if we try to play God it will go really wrong, that’s a religious thing, in my opinion and other people’s opinions. This is well documented.

Now, if you look at the Shinto faith, they believe that all objects, people, animals, and mountains have the same spirit. There is no hierarchy. They place a really high value on nature, and rocks, and robots, so spiritually everything is on equal footing. The other detail is that these spirits naturally want to be in harmony. So, when you look at Frankenstein or the Terminator versus… Astro Boy, that’s revealing our culture. It’s not about the technology; it’s about the belief system. Regardless of whether you were raised going to church or temple, this permeates our culture.
So, even in Japan, where a lot of people are Christian now, this Shinto belief system has permeated their expectations of what happens with technology.

E: Do you see the robotics industry trending toward social robotics?

H: It’s early research now, but I think charismatic machines have more applications in the short term. Social robotics may take a little longer. Like, the idea of Siri being really popular: that’s a charismatic technology. I think what we learn in social robotics can be cross-applied into real technology, because what we’re doing is creating interfaces between technology and people. So, what we learn about sociability can be applied to non-social machines. Hiroshi would probably have a different opinion there.

E: What do you find is the biggest barrier in getting people interested in robotics? Do you think it’s exclusively religion or cultural?

H: When people don’t meet robots and they’re just thinking theoretically about technology, then you get the Terminators and then you have the Singularity people. Those are, like, the two most popular mythmaking things at the moment. That doesn’t mean we don’t have positive storytelling. I mean, we have Rosie the Robot and we have Wall-E. I think stories really inspire what we make.

Throwing back to the previous conversation of robots in Japan, they invest so much in companion robots and music and things for the elderly, etc. And what is the U.S. known for in robotics innovation right now? The biggest is military robots. That doesn’t mean there’s not a lot of research in other kinds of robots, but what we’re famous for is military robots.

E: Do you have an end-goal for your research and projects?

H: Shape the future.

E: Are you concerned about people using your technology for negative ends?

H: I think it’s really important to think about that. I should think that would be a common part of engineering education in general: thinking through the ethics and where you’re going with stuff. In the world of art, and even architecture, critique is a natural part of the process, and it would be great if we critiqued our designs on more than just meeting certain performance criteria. The bigger grant organizations, like the National Science Foundation, do ask for broader impact stuff, but they don’t really ask how things can be misused.

E: Do you think there’s a reason for that?

H: For me, and this is theoretical, engineers were never the heads of companies. They were the people who could help the people who started the companies solve specific problems. Historically, in this bigger company construct, our job was not to be creating ideas. These days, within the last 30 years, engineers and technologists are starting companies and we are the idea people, but the education hasn’t shifted. So, we’re still educated as if we are cogs in the larger industrial machine, whereas other people are thinking about “Where is this going?” Sometimes that’s about money, but at least there was someone to think about that stuff. Maybe they had training in that, I don’t know.
But, I think it’s a legacy from engineers’ jobs before.

E: Kind of shifting gears, it seems like robotics, and technology in general, has drawn more men to the field than it has women. From your experience, do you feel that’s the case?

H: Well, I was spoiled because MIT is like 45% women. So, I didn’t really feel that way. When I worked, it was something like 1/3 women and 2/3 men ratio in the U.S. In Europe, it’s more like 9/10 male and 1/10 female.
I never really thought about it until I was several years into doing what I was doing. I always idolized my dad, so I kind of always felt like I wanted to be an engineer. I mean, there are definitely some legacy issues with gender, but things are moving in the right direction for sure. I think it’s much easier to change things at the undergraduate level, but it takes much longer for those changes to percolate into other levels of companies or academia. And you definitely get an idea of that; for example, I’m pregnant right now and CMU has no maternity leave policy. And I don’t know, academia just doesn’t think about those things sometimes.

E: Is there anything more that can be done to draw women into the field?

H: We’re actually doing a great job at attracting people, but we’re not doing so great at keeping people.

E: Why?

H: I think there are a lot of great articles about it; one of them is titled “The Leaky Pipeline.” I don’t know, people identify things like mentoring. It’s really important to have a good mentor, no matter what your gender is, according to research. Just having someone support you, whether you’re a minority, female or any other group that isn’t typically represented.

Since I’m really excited about a world where engineers aren’t just cogs in the machine, and where engineers really are creative, the more we move in that direction, the wider the breadth of people, whether male or female. It’s about getting more creative people into the field, and I would love to see that prioritized.


On February 6, Dr. Hiroshi Ishiguro, Professor in the Department of Systems Innovation at Osaka University, traveled to the Japan Society in New York City to give a lecture on the future prospects of humanoid robots — or androids. My wife, Jen, and I made the trip as well.

The theater at the Japan Society was packed, with attendees of all ages. There was a bustling energy to the evening, and a slide featuring the Geminoid F android was projected prominently. The title on the slide was “Studies on Humanoids and Androids,” though the official title of the lecture was “How to Create Your Own Humanoid.” After everyone settled in, Dr. Ishiguro was introduced and he began.

He is a stately looking man, and he took a professorial stance at the podium. Throughout the lecture, he gave an overview of his work in android development and what he saw in its future. His talk was structured as a series of questions that, as a whole, asked whether the line between human and robot would ever diminish. In so many words, the answer: it’s unlikely right now.

Dr. Ishiguro explained that there are so many nuances in human behavior and speech that it would be incredibly difficult to create a robot that could act fully human. It’s a little akin to the Replicants in Blade Runner — “we” had created robots (“Replicants”) that could mimic humans in most ways, but you could still tell, with a test, whether someone/something was human or Replicant. He even offered up a paradox: with robots, we can create the “perfect” human, but we can’t make a robot human.

He made this point through a number of examples, the most prominent being trying to agitate an android by repeatedly poking it. Its behavior wouldn’t deviate accordingly. Humans have odd ways of reacting to stimuli that robots aren’t capable of. However, to illustrate the point that we can make, at least, “perfect”-looking robots, he put up a video of a busy cafe and asked us to point out which person was the robot. I certainly couldn’t.

The unreality of robots aside, Dr. Ishiguro explained that his real motivation behind studying robots is human psychology. The example that stands out to me at this moment is when he explained an experiment he did with one of his androids. While he was in Osaka, he directed some colleagues to plant an android in a cafeteria in Munich. From Osaka, he spoke through the robot and invited people to come, sit and speak with it. What he found was that people were more than willing to open up and spill about their problems. It was intriguing, and I imagine people feel comfortable talking to the robot because of a perceived lack of judgment.

It’s examples like that which drew Dr. Ishiguro to robotics, rather than necessarily making the next big technological advance. With that, the lecture came to a close and the panel with Heather Knight, of Marilyn Monrobot, and Erico Guizzo, of IEEE Spectrum, began.

The panel was kicked off by a poem reading by the Geminoid F android, which was equal parts beautiful and creepy. After, Guizzo moderated the discussion between Knight and Dr. Ishiguro. The talk wove between the use of robots in theatrical settings and where social robotics is going. Knight explained her interest in robotics and using her robots in theatrical settings.

After the discussion, the floor opened up to questions. For a night that was dominated by non-technical subjects and trying to have robotics reach a wider audience, the questions were — somewhat disappointingly to me — mainly geared toward the technical aspects of the Geminoid or of robotics in general.

Once the talk let out, there was a small reception. After it all wrapped up, we sat down with Heather Knight for a wide-ranging discussion. That interview will be posted up tomorrow.

Were you at the discussion, too? Let us know what your experience was on Twitter @RobotCentral.


Artificial Brains

Fast Company’s Lakshmi Sandhana looks at the path to an evolved robot that can walk naturally. The process has necessitated the development of artificial brains. Where we’re going with it all:

Grand dreams aside, what it means at present for the team is evolving brains that can go beyond figuring out simple things like gaits to more intelligent behaviors like learning. They’ve 3-D printed an advanced quadruped robot called Aracna to further examine evolved gaits. The next step is to evolve larger, more modular brains that will hopefully approach natural brains in complexity, opening up the possibility of creating an entirely new breed of robots.

“Evolutionary computation has already produced many things that are better than anything a human engineer has come up with, but its designs still pale in comparison to those found in nature,” states Clune. “As we begin to learn more about how nature produces its exquisite designs, the sky’s the limit: There’s no reason we cannot evolve robots as smart and capable as jaguars, hawks, and human beings.”


Pets Of The Future

I’ve always felt that the pursuit of companionship is part of being human, and once we find it we feel whole. For many, pets fill this role of companionship: they need us, they are our confidants, they cheer us up in bad times and they are there to see our moments of pride. Pets are friends, extensions of families, bodyguards, house alarms and alarm clocks. They are also like perpetual children, always in need of us. We take them to the bathroom (or clean the bathroom for them), we are responsible for feeding them and for keeping their health up. Some of us even clothe them. For those who don’t want, or can’t handle, the responsibilities, technology has brought us robotic pets.

In the last decade and some change, digital pets have evolved from the simple Tamagotchi. Now, there are robotic pets available that have a “mind” of their own, like the now-discontinued Sony AIBO. Or, a bit more advanced, the Pleo, which can recognize the touch of a person “petting” it and react, and can hear and heed commands. Of course, there’s also the Furby, but it’s hard to imagine it as a positive companion to anyone.

Despite the advances, sales of the AIBO and Pleo have been disappointing, with the AIBO being axed in Sony’s push to return to profitability, and Pleo’s maker going bankrupt and being sold to Jetta. Much of the reason is the expense that goes into making the robots, with the end-result price tag not looking too attractive to consumers who could opt for an actual pet. But could there be something more to it?

This study from the University of Washington looked at how humans respond to robotic dogs (using AIBO as the example in the study) when compared to stuffed animals or live dogs. It found that when children are given a choice to interact with a live animal or a robotic dog, they will tend toward the live animal and view the robotic dog as mechanical, but when the robotic dog is their only choice (such as in areas where dogs aren’t allowed, like hospitals) they tend to feel similar emotions with a robotic dog as they would with a live animal. That, I suppose, sounds uncontroversial. We often use what we can to fill emotional voids when it becomes necessary.

One of the interesting things the study did find is that the kids (younger and older) that were surveyed in the study said that it is “Not OK” to hit AIBO, for concern over the robot’s psychological and physical welfare. That jibes with past accounts of empathy felt by humans for robots, like the famous example of Mark Tilden’s stick-insect robot:

At the Yuma Test Grounds in Arizona, the autonomous robot, 5 feet long and modeled on a stick-insect, strutted out for a live-fire test and worked beautifully, he says. Every time it found a mine, blew it up and lost a limb, it picked itself up and readjusted to move forward on its remaining legs, continuing to clear a path through the minefield.

Finally it was down to one leg. Still, it pulled itself forward. Tilden was ecstatic. The machine was working splendidly.

The human in command of the exercise, however — an Army colonel — blew a fuse.

The colonel ordered the test stopped.

Why? asked Tilden. What’s wrong?

The colonel just could not stand the pathos of watching the burned, scarred and crippled machine drag itself forward on its last leg.

This test, he charged, was inhumane.

These feelings of empathy toward robots have enabled pet-bots to be used, with some success, as therapeutic robots for kids and the elderly. Where robots are an endless source of love without the messiness, the way pets are an endless source of love with the messiness, it makes sense that they’ve helped both kids and the elderly. As for other adults, the UW study had this to say:

The tendency to anthropomorphize artifacts is easily triggered (Nass & Moon, 2000; Reeves & Nass, 1998). While it remains unclear exactly what features of a robot maximize this tendency, Lee, Park, and Song (2005) found that adults who interacted with a version of AIBO with software such that the AIBO seemingly developed over time, and in response to human behaviors, perceived AIBO as more socially present, than did adults who interacted with a “fully developed” AIBO.

It went on to say that in future studies, the hesitancy of adults to perceive robots like AIBO as “socially present” may disappear as the robots become more autonomous and the software becomes smarter.

If we perceive these robots, suggestively, as living animals, that raises interesting ethical concerns as we tread down the path of stronger artificial intelligence. Since the definition of “strong A.I.” is a machine just as smart as or smarter than humans, the kind of A.I. we program into robotic pets will probably always be less than that, if we can control it.

We often compare the intelligence of our pets against that of other animal species: many speak about how “smart” their dogs are, or how smart and cunning a cat is. Although we can say that a pet robot isn’t as smart as our cats and dogs, we aren’t far off from a point where it may be, a point that will be reached much sooner than robots surpassing human intelligence.

With that in mind, we also have animal cruelty laws and pet advocacy groups. If we have machines that can “feel” and respond to human commands just like our canine and feline companions can, is that the point when we start considering safety laws for robotic pets as well?

I feel like I may be getting ahead of myself in trying to answer the question, but I can’t see any reason not to consider robotic pet rights in the vein of our animal rights if that’s the path we’re going down. On the other hand, humans and living animals have a long, complex and storied relationship that we still don’t fully understand, and that has complicated things like trying to clearly define standards of animal welfare. Taking that into account, it doesn’t seem right to try to define robotic pet welfare when we haven’t come close to squaring the myriad welfare issues of living pets: shelters, anthropocentric control of animals, breed bans, cruelty laws, and so forth.

Let us know what you think about this topic on Twitter @RobotCentral.


An Empathetic Bot

Dave Hanson showcased his work at TED in a brief but well-presented talk. A quote from the TED site:

Machines are becoming devastatingly capable of things like killing. Those machines have no place for empathy. There’s billions of dollars being spent on that. Character robotics could plant the seed for robots that actually have empathy.

On Monday, I had the opportunity and pleasure to have a discussion with NASA’s Program Executive for Solar System Exploration, Dave Lavery. During the course of the conversation, Dave answered my questions on where robotic exploration stands within NASA, the mission objectives and future of the Mars rovers, his role in robotic education and more. The full interview is below.

Eric Wind: What’s your title at NASA and what do you do?

Dave Lavery: My official title is Program Executive for Solar System Exploration, and what that basically means is that I’m the person at NASA who has full-time responsibility for several of the Mars exploration missions. Several of us have that job, with responsibilities for different missions.

How much information do you think we can get out of robots or rovers, or are we reaching the limit of what rovers are capable of doing?

I think there’s still an enormous amount that the rovers and robotic systems can do. Right now, realistically, given it’s the only option we have, at least for the time being, we intend to exercise them as much as we can. Certainly the rovers that are there now – the two that are still operating right now, the Opportunity and Curiosity – are both enormously capable and represent the best that we are able to put on the surface of the planet right now.

We still have plans on extending those capabilities further; making them more capable, more intelligent and more autonomous as much as we can until we eventually get to the point where we can put humans on the planet.

Having said that, are they as capable as any human being? No, not yet, far from it. But, they are much better than nothing at all, or waiting until we can put a human there which could be decades away.

What’s the role of people in space exploration currently? Is it just building these rovers, or continuing ISS missions, or missions to the moon?

Well, right now, humans are obviously building and operating the systems that we’re sending to the planet. We do have humans occupying the International Space Station continuously.

In addition to that, in terms of where we’re going next, whether it’s to the moon or on to asteroids or on to Mars: all of that is wrapped up in a redefinition process that we’re going through to redetermine and refine our ongoing human exploration strategy. So, that is actually something that is very much in development right now, and we hope to have the agency’s overall structure and strategy within the next couple of months.

What’s the next big thing for robotics and space exploration?

Well, in terms of the hard space exploration we’re working on now, we just landed the Mars Curiosity rover a few months ago and it’s beginning its own exploration of Mars. I think we just announced, over the holidays, that we’ll actually be building a second iteration of Curiosity, which will be launched around 2020.

The intent is that we’ll build a rover that is Curiosity’s twin sister, if you will, with the main difference being a different payload on board. It will take advantage of everything Curiosity finds and teaches us, and use that knowledge to help us define a science package which will answer the next set of questions that Curiosity raises.


A shift in how our world works may be in the offing against an artificially intelligent background. The most immediate and apparent example comes in the form of intelligent personal assistants, like the flopped Siri from Apple or the more favorably reviewed Google Now. Those assistants work based on an A.I.-related field called natural language processing (“NLP”) which, pared down, is the process of a computer trying to recognize what you just said or typed into it.

To see just how much this one aspect of A.I. has set itself into our lives, let’s talk about Google again, since they’re steeped in NLP. Google Now aside, their search function exhibits word disambiguation and they have fairly accurate machine translation (depending on the language), both of which are major research areas in computationally parsing natural language.
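To make “word disambiguation” concrete, here is a toy sketch of one classic approach, a simplified Lesk algorithm: pick the sense whose dictionary gloss shares the most words with the surrounding context. The senses and glosses below are invented for illustration; production systems like Google’s are vastly more sophisticated.

```python
# Toy word-sense disambiguation via simplified Lesk: score each candidate
# sense by how many words its gloss shares with the sentence's context.
# The sense inventory here is made up for the example; real systems draw
# on resources like WordNet and far richer statistical models.

SENSES = {
    "bank": {
        "finance": "an institution that accepts deposits and lends money",
        "river": "the sloping land alongside a body of water",
    }
}

def disambiguate(word, context):
    """Return the sense of `word` whose gloss overlaps the context most."""
    context_words = set(context.lower().split())
    best_sense, best_overlap = None, -1
    for sense, gloss in SENSES[word].items():
        overlap = len(context_words & set(gloss.split()))
        if overlap > best_overlap:
            best_sense, best_overlap = sense, overlap
    return best_sense

print(disambiguate("bank", "deposit money and check your deposits at the bank"))  # finance
print(disambiguate("bank", "we sat on the sloping bank of the river"))            # river
```

Even this crude overlap count picks the right sense in easy cases, which is part of why Lesk-style methods were an early staple of the field.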

The company has placed a lot of stock in the trend toward A.I., but now, with their appointment of Ray Kurzweil as Director of Engineering, it’s going to become a lot more involved. Kurzweil explained his intention to TechCrunch:

Perhaps more than any other company, explains Kurzweil, Google has access to the “things you read, what you write, in your emails or blog posts, and so on, even your conversations, what you hear, what you say.”

Google can combine the personalized recommendations of a friend (who often know us better than we know ourselves) with the sum of all human knowledge, creating a sort of super best friend.

This friend of yours, this cybernetic friend, knows that you have certain questions about certain health issues or business strategies. And it can then be canvassing all the new information that comes out in the world every minute and then bring things to your attention without you asking about them.

It’s not just NLP in our phones and in the most widely used search engine, either. The less subtle applications include the use of intelligent robots in manufacturing and the return of a “more” intelligent Furby, among other things.

What we’re seeing now, as a whole, is the result of what’s called “weak A.I.”: machines that do not quite (or are not designed to) match the intelligence of human beings. This kind of A.I. has also earned the descriptor “applied A.I.” It stands in opposition to the “strong A.I.” some propose we’re headed toward, where machines match or surpass our intelligence; that event would be called the technological singularity, popularized by Ray Kurzweil as simply The Singularity. The advances still aren’t moving at a pace that keeps up with the most optimistic hopes, but they are moving quickly. Quickly enough, probably, to avoid the “AI winters” of the past, when funding was cut off to A.I. research for lack of the progress promised by optimistic researchers.

There is some debate and discussion as to where we are going with artificial intelligence research. On the one hand, there is no doubt that it is here and real, and we see the implementation of more complex examples like autonomous vehicles; on the other, there are questions about the validity of how A.I. is currently evolving. Noam Chomsky took up that discussion last year.

To Chomsky, the field of A.I. is evolving in what he feels is the wrong way:

It’s true there’s been a lot of work on trying to apply statistical models to various linguistic problems. I think there have been some successes, but a lot of failures. There is a notion of success … which I think is novel in the history of science. It interprets success as approximating unanalyzed data.

In other words, he is attacking the current state of A.I. as purely statistical models. In an expanded interview, he goes on to voice displeasure that A.I. as it is doesn’t fit in with the history of science, where science is supposed to tell us about ourselves. The Director of Research at Google, Peter Norvig, wrote a lengthy reply to Chomsky; the clincher from Norvig was:

My conclusion is that 100% of these articles and awards are more about “accurately modeling the world” than they are about “providing insight,” although they all have some theoretical insight component as well. I recognize that judging one way or the other is a difficult ill-defined task, and that you shouldn’t accept my judgements, because I have an inherent bias. (I was considering running an experiment on Mechanical Turk to get an unbiased answer, but those familiar with Mechanical Turk told me these questions are probably too hard. So you the reader can do your own experiment and see if you agree.)

This kind of back-and-forth is nothing new in the field of A.I. In 1976, MIT Computer Science professor Joseph Weizenbaum objected to using A.I. to replace positions that he felt needed human emotion and empathy. Journalist Pamela McCorduck objected, saying:

“I’d rather take my chances with an impartial computer,” pointing out that there are conditions where we would prefer to have automated judges and police that have no personal agenda at all

Though the ethical and philosophical questions are there, they seem to play a background role in any impending shift toward day-to-day use of artificial intelligence. Robotics companies are making strides, it seems, by the month, and there’s no sign that DARPA funding for intelligent robotic systems is drying up anytime soon. It is all still within the realm of weak or applied A.I., but there’s no telling how far off the era of strong A.I. is, particularly when the Director of Engineering at, arguably, one of the most powerful companies in the world is one of its major proponents.

Let us know when you think the shift will ultimately happen. We’re on Twitter @RobotCentral.


A.I. Rundown

The inimitable Charlie Rose interviewed Brian Christian of ‘The Atlantic’ and Richard Waters of the ‘Financial Times’ about artificial intelligence around the time of IBM’s Watson:

Brian Christian’s article on A.I. is here. Below is a video (rather dry, but still interesting) from MIT Tech about how artificial intelligence learns:

Stux Wars

On a superficial level, Stuxnet has played out like a cliché espionage tale or some sort of political thriller. Jason Bourne, James Bond, or what have you.

The super-worm, distributed via thumb drives, was authored to disrupt specific machinery manufactured by Siemens. The machinery in question happened to be centrifuges used by Iran’s nuclear program. The Iranian government, acknowledging that its networks had been infected, saved some face by indicating it had cleaned them. Beyond Iran’s networks and centrifuges, Pakistani and Indian networks were hit by Stuxnet as well.

After Iran was hit, two new worms, Duqu and Flame, were found to be closely related to the Stuxnet program. However, instead of disrupting machinery, the point of Duqu and Flame was espionage. They were programmed to record keyboard activity, take screenshots and record Skype conversations, among other spying activities. Shortly thereafter, the source code of the worm was leaked on the Internet.

After all that, Iran says that they have just combated the Stuxnet worm yet again.

The authors were, and officially still are, unknown, though most speculation points toward Israel developing the Stuxnet worm with copious amounts of help from the United States. In what seems like an attempt to rub Iran’s nose in the situation, American and Israeli officials have reportedly “smiled” at reporters when asked about Stuxnet, and the former IDF Grand Poobah had a going-away shindig that included a video apparently referencing the worm. On top of these possibly implicit admissions, countless security experts have said they think the program originated in America or Israel. Of course, nothing can be confirmed for sure. It is still speculation.

There are a few things that make the Stuxnet program intriguing, which Wired Magazine exhaustively documented in this article. First is how specialized the code was: it was designed to hit a single, specific target. If the worm infected a computer that did not meet the specifications of that target, it did nothing and was likely no cause for concern. On a match, it sought to inject a new set of instructions into the machine in order to destroy, in this case, a centrifuge bought by the Iranians.
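The targeting behavior described above, act only when a precise environmental fingerprint matches and otherwise do nothing, can be sketched in a few lines. To be clear, every field and value below is invented for illustration; this is the general shape of the idea, not Stuxnet’s actual checks.

```python
# Sketch of "fire only on an exact target fingerprint." The fingerprint
# fields and values are hypothetical, chosen only to illustrate the idea
# of a payload that stays inert on any non-matching host.

TARGET_FINGERPRINT = {
    "plc_vendor": "ExampleVendor",     # invented value
    "controller_model": "Model-315",   # invented value
    "attached_device_count": 164,      # invented value
}

def fingerprint_matches(host_profile):
    """True only if every expected field matches the host exactly."""
    return all(host_profile.get(k) == v for k, v in TARGET_FINGERPRINT.items())

def payload_action(host_profile):
    # On a non-matching host, do nothing visible; only an exact match
    # triggers the destructive payload.
    if fingerprint_matches(host_profile):
        return "inject new controller instructions"
    return "stay dormant, keep spreading"

print(payload_action({"plc_vendor": "OtherVendor"}))  # stay dormant, keep spreading
```

The all-or-nothing match is what made the worm so quiet in the wild: on the vast majority of infected machines the destructive branch simply never ran.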

Second, in addition to the specialized and sophisticated code, the operation behind getting it out there suggested an author or authors with access to extraordinary resources, which led to the suggestions of United States or Israeli involvement. In this, it established a precedent in cyber warfare. In the words of former CIA Director Michael Hayden, “The rest of the world is looking at this and clearly someone has legitimated this kind of activity as acceptable international conduct.”

Third, the means of distribution. This thing was in the wild for over a year before it reached the machine it was looking for. It moved around the world, in a way sneaking from machine to machine until it found its target. When imagining that path, it’s hard not to personify this intelligent program as an animal, something more than a mere piece of software.

In the 60 Minutes profile on Stuxnet, the program posed a question that people are asking but that hasn’t been dealt with or answered yet. Correspondent Steve Kroft remarked that the release of Stuxnet’s source code onto the web has opened a kind of “Pandora’s Box”: since the example is set, there’s no reason variations can’t be made for a pretty penny and a cyber attack launched on our vulnerable infrastructure. The question being, what do we do, or what are we doing, to prevent an attack like that from happening? As far as we know, there have not been any large-scale cyber attacks resembling Stuxnet, but I’m not sure whether that’s because computer security has tightened or because someone hasn’t paid out the right price tag for it yet.

Should we even worry about a Stuxnet-like attack on our infrastructure? Let us know what you think on Twitter, @RobotCentral. You can also get us on our Facebook page.

The Matrix IRL

Time Magazine ran an article, exploring whether we actually live in a simulation:

University of Oxford physics professor Nick Bostrom wasn’t the first person to suggest reality could be computer-fied — the idea’s been around since I was a kid, at least, reaching a kind of pop-cultural critical mass in the Matrix films — but he may have been the first to take a stab at a “red pill” explanation, laying out his theory in an actual paper published in 2003. Call it another version of the strong anthropic principle, except the universe’s catalyst would in this instance be an advanced civilization running an unfathomably sophisticated massively multiplayer, um, cosmos game.

In his paper, Bostrom argued that at least one of the following things must be true:

(1) the human species is very likely to go extinct before reaching a “posthuman” stage; (2) any posthuman civilization is extremely unlikely to run a significant number of simulations of their evolutionary history (or variations thereof); (3) we are almost certainly living in a computer simulation.
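The arithmetic behind the trilemma can be sketched. Roughly following the notation of Bostrom’s paper (the symbols here are a paraphrase, not a verbatim reproduction), let f_P be the fraction of human-level civilizations that reach a posthuman stage, N̄ the average number of ancestor-simulations such a civilization runs, and H̄ the average number of individuals who lived in a civilization before it became posthuman. The fraction of all human-type observers who live in simulations is then:

```latex
f_{\mathrm{sim}}
  = \frac{f_P \,\bar{N}\, \bar{H}}{f_P \,\bar{N}\, \bar{H} + \bar{H}}
  = \frac{f_P \,\bar{N}}{f_P \,\bar{N} + 1}
```

Unless f_P is close to zero (option 1) or N̄ is close to zero (option 2), f_sim is close to one, which is option 3.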

Establishing The Definitions
I suppose you can’t have a proper discussion unless you lay out exactly what you’re talking about. Since we’re discussing transhumanism the movement, we’ll use its definition of a “transhuman.”

That is (according to the World Transhumanist Association):

In its contemporary usage, “transhuman” refers to an intermediary form between the human and the posthuman […].

A posthuman:

Posthumans could be completely synthetic artificial intelligences, or they could be enhanced uploads […], or they could be the result of making many smaller but cumulatively profound augmentations to a biological human.

And, finally, transhumanism:

Transhumanism is a way of thinking about the future that is based on the premise that the human species in its current form does not represent the end of our development but rather a comparatively early phase. We formally define it as follows:

(1) The intellectual and cultural movement that affirms the possibility and desirability of fundamentally improving the human condition through applied reason, especially by developing and making widely available technologies to eliminate aging and to greatly enhance human intellectual, physical, and psychological capacities.

(2) The study of the ramifications, promises, and potential dangers of technologies that will enable us to overcome fundamental human limitations, and the related study of the ethical matters involved in developing and using such technologies.

There are other ideas on what constitutes a transhuman. The FAQ cited even suggests that we may currently be in a state of transhumanism, due to simple augmentations we undertake, like wearing glasses. I think that’s stretching it, and I believe identity politics plays into this, which I’ll discuss in a later post.

So, with our definitions established, I’m going to move on to discussing a topic that’s been doing circles in my head for some time.

Religion and Transhumanism: Life Extension
I wasn’t raised in a religious household, to speak of. Through my teen years, I “experimented” with religion, reading about the world religions, the old pagan religions and the Gnostics. Ultimately, I arrived at and settled on being agnostic, although my moral beliefs have been heavily influenced by my Catholic mother and my father’s Lutheran roots. Every so often, I attend a Presbyterian church that focuses on the social gospel.

My very Christian-oriented roots have always carried a stream of ‘naturalism’ to draw from. The idea that you should do as little ‘augmenting’ to your body as possible has been a constant theme. As far as functional augmentation goes, I just wear glasses. I shy away from painkillers and other medication if at all possible. Aesthetically? I have tattoos. So, admittedly, I’m not a purist, nor do I advocate purism. But, on the whole, I feel comfortable with things that I can personally identify as natural and human. (That’s not to say I know any better than anyone else what is natural or human.)

During my years of studying other religions and philosophies, I spent quite a lot of time with atheists and agnostics. They tended to be very science-oriented, and some were into futurism. Inevitably, the topic of transhumanism came up, and it was always spoken of in an irreligious context. In fact, most self-identified transhumanists are irreligious. Had I really thought about it, I wouldn’t have been able to mesh the two together.

Some of my trepidation with transhumanism is that it seems to me, by its very nature, to deny humanism. To my mind, religion is a very human thing. It is a personal connection with the natural — whether it’s god(s) and/or goddess(es) or something more ethereal — that would be lost in total unnatural augmentation, such as is the quest in being posthuman and trying to “overcome” human limitations.

On the flip side, Guillermo Santamaria from H+ Magazine argues otherwise, on the subject of life extension:

Death according to the Bible is not a natural condition of humanity.  It is an aberration.  When man was created he was not created to die, but to live indefinitely.  In fact according to the Bible as all Christians know, “just as sin entered the world through one man, and death through sin, and in this way death came to all men, because all sinned.” Romans 5:12.  So Adam and Eve were not created to die.  Now some might say that transhumanism seeks to deny the influence of sin on humanity or to try to circumvent the decree of God.  But this is not true.  All transhumanism tries to do is extend life.  Even when and if a human consciousness is implanted in a machine, this is still an extension of life.  If one opposes this extension of life, then one would need to consistently resist all attempts at life extension, including all the efforts of physicians, and medical treatments.  Did Jesus endorse this view by his actions and words?  Certainly not.  What do the Christian Scriptures tell us, “Jesus went throughout Galilee, teaching in their synagogues, proclaiming the good news of the kingdom, and healing every disease and sickness among the people.”  Matthew 4:23.

I have to take issue with the underlying premise, though. Death is, according to the Bible, the natural condition of humans due to sin. Only immortality can come as a gift from God through salvation via Christ.

Whoever believes in the Son has eternal life; whoever does not obey the Son shall not see life, but the wrath of God remains on him. John 3:36

Since transhumanism is a human construct, mostly propagated by atheists and agnostics, and not handed down as an instrument of salvation by Jesus Christ, then I’m unsure how life extension, per transhumanism, is compatible with Christian doctrine.

It seems that this one aspect of transhumanism runs afoul of a couple of major religions, at least. Hinduism, for example, doesn’t see death as a terrifying event but as a naturally occurring one that recycles the soul into a new life. Buddhism accepts that death happens and considers it a kind of re-awakening of the human soul.

Beyond this one aspect of transhumanism, I couldn’t find a convincing argument for the meshing of religion and transhumanism. This probably will not upset most transhumanists, but for some people of a spiritual mind, it doesn’t really fit the doctrines of any of the major religions, nor does it feel to me like the two can coalesce well. Going back to the World Transhumanist Association’s FAQ:

Transhumanism is a naturalistic outlook. At the moment, there is no hard evidence for supernatural forces or irreducible spiritual phenomena, and transhumanists prefer to derive their understanding of the world from rational modes of inquiry, especially the scientific method. Although science forms the basis for much of the transhumanist worldview, transhumanists recognize that science has its own fallibilities and imperfections, and that critical ethical thinking is essential for guiding our conduct and for selecting worthwhile aims to work towards.

Religious fanaticism, superstition, and intolerance are not acceptable among transhumanists. In many cases, these weaknesses can be overcome through a scientific and humanistic education, training in critical thinking, and interaction with people from different cultures. Certain other forms of religiosity, however, may well be compatible with transhumanism.

To be fair, transhumanists, like all humans, have different minds and beliefs about different subjects. Even within this post, we saw one transhumanist argue for the compatibility of Christianity and H+, and then a worldwide transhumanist organization say, rather bluntly, that transhumanism is not compatible with the world’s current spiritual outlooks.

For my part, I remain unconvinced that religion has a role in, or is compatible with, transhumanism. I would like to hear your perspective, though. You can get us on Twitter @RobotCentral, or you can comment on our Facebook page here.

The future of humankind is steeped in an unprecedented amount of emerging technology. Increasingly, robots are being used in our houses, in factories, and in war zones. Nanotechnology could help develop “personalized medicine” that takes the place of conventional treatment. Artificial intelligence researchers are making strides in learning how the brain works by emulating it in labs.

While these advances are on the one hand exciting, there’s always something else to consider in the equation. What risks are there to consider? What precautions do we have to take? What could be some unintended consequences to any given technology?

A group of philosophers, scientists and entrepreneurs is working to start the Centre for the Study of Existential Risk. The aim is a research center that answers the tough questions about the risks technology poses to humans, the take-away being that the more we know, the better we can prepare ourselves to deal with new technologies.


A More Human Machine

The National Science Foundation just approved a $1.35 million project headed at the University of Texas at Arlington. The goal:

“Our goal is to make robots and robotic technology more human-like and more human-friendly,” said Popa, who leads UT Arlington’s Next Gen Systems group within the College of Engineering. “Robotic devices need to be safe and better able to detect human intent.

“When someone is wearing a prosthetic, we want that prosthetic to be able to determine when a baseball is being thrown at it, then catch the ball.”

The four-year project is part of the NSF’s National Robotics Initiative, which is aimed at accelerating the development and use of robots in the United States that work beside or cooperatively with people. The UT Arlington team’s grant was the largest among the initiative’s 37 awards this fall.

I’ve been gathering items related to armed autonomous drones for the past week. You can see them here, here and here.

While I’m sympathetic to Human Rights Watch’s position, I don’t think they made a convincing enough argument in favor of banning armed autonomous drones. And while I don’t share the optimism of Evan Ackerman, or agree that we should necessarily swap humans out for robots in war theaters, as Marcelo Rinesi argued, I don’t think it is a technology that needs to be banned outright or have resources put toward its demise. The last part of Rinesi’s post mirrors my position on the issue most closely:

Ultimately, the problem of having a killer drone flying over your head is nothing but the problem of having a killer anything flying over your head. The fact of killing by specifically trained and organized groups of people with the explicit backing of their societies is where has always lied, and should continue to lay, the locus of ethical concern.

That, I believe, is the crux of the discussion. The robots themselves are amoral, and there are still human programmers behind the wiring. Instead of wasting time trying to prevent something that is almost certainly going to happen, that time could be well spent trying to prevent things that are much more within our control, like skirmishes and wars. Even drafting a new set of laws punishing those who misuse these machines may be more productive than trying to impede development of the drones via international law.

Throughout this entire thread, one voice — the voice of reason in most situations — has been ringing through my head:

“The Stealth Banana – Smart Fruit”

Killer Robots, Ctd.

Benjamin Wittes at Lawfare publishes a note from John C. Dehn of the West Point Military Academy about “killer” robots. Dehn goes into how the report is problematic in its definitions:

The report might be discussing only those weapons on the most autonomous end of the spectrum, at one point referring to “fully autonomous weapons that could select and engage targets without human intervention” and at another as “a robot [that] could identify a target and launch an attack on its own power.” Somewhat confusingly, though, the report includes three types of “unmanned weapons” in its definition of “robot” or “robotic weapon”—human-in-the-loop; human-on-the-loop; and human-out-of-the-loop. (p. 2) Thus, the report potentially generates confusion about the precise level of autonomy that the authors of the report intended to target (pun intended), though human-(totally-)out-of-the-loop weapons are the obvious candidate.

Even assuming the report clearly intends “fully autonomous weapons” to include only weapons that independently identify/select and then engage targets, the discussion here (particularly between Ben and Tom) demonstrates that this definition of the term is not without its problems. These problems include: (1) what types of targets should be cause for concern (humans, machines, buildings, infrastructure (roads, bridges, etc.), or munitions (such as rockets and artillery or mortar rounds); and (2) what is meant by target “selection” or “identification.”

The take-away:

Those of us who have spent many years training soldiers on what constitutes “hostile intent” or a “hostile act” justifying the proportionate use of responsive force are familiar with the endless “what ifs” that accompany any hypothetical example chosen. Ultimately, we tell soldiers to use their best “judgment” in the face of potentially infinite variables. This seems to me a particularly human endeavor. While artificial intelligence can deal with an extremely large set of variables with amazing speed and accuracy, it may never be possible to program a weapon to detect and analyze the limitless minutia of human behavior that may be relevant to an objective analysis of whether a use of force is justified or excusable as a moral and legal matter.

Ultimately, it seems, one’s view of the morality and legality of “fully autonomous weapons” depends very much upon what function(s) they believe those weapons will perform. Without precision as to those functions, however, it is hard to have a meaningful discussion. In any case, I fully agree with Ben that existing international humanitarian law and domestic policy adequately deals with potentially indiscriminate weapons, rendering the report’s indiscriminate recommendation unnecessary.

At the beginning of the post, Wittes rounds up his discussion with Kenneth Anderson and Matthew Waxman. Previous RobotCentral round-ups here and here.

Transhumanism

“The future of human evolution isn’t biological”

That is the tagline for Robot Central. In a short sentence, it gets to the heart of our possible future on this planet. It implies that we will be able to transcend our biological selves and become something more. The “more” is what we are trying to track here. The transhuman community looks to a new age where we could quickly eradicate disease through nanotechnology, or augment our intelligence, or develop machines to the point where the only work we have to do is the work we want to do, among innumerable other possibilities.  The goal here is to be optimistic about those possibilities.

It’s interesting that the symbol for transhumans has become H+: human, but more than human. It can be argued that we became transhumans the moment we developed technologies to improve our biological functions, through things as simple as a pair of glasses.  The H+ movement we know today expands on that. Instead of just correcting your vision with glasses, we now have laser eye surgery, which can permanently restore your vision to near-perfect condition. It scales up to even more complex interventions: people who have lost a limb now have artificial limbs that restore their abilities, at the least, and that they can control as if they were actual limbs, at best. Just a couple of off-the-top-of-my-head examples.

… → Read More

Killer Robots

Human Rights Watch released a report that urged a preemptive ban on armed autonomous robots. Evan Ackerman pushes back:

Whether or not you trust roboticists to develop autonomous or semi-autonomous weaponized systems safely, HRW’s solution of preemptively banning such robots is not practical. Robots are already a major part of the military, and their importance is only going to increase as technology improves and more and more dangerous tasks are given over to robots that don’t have families to go home to. You can’t simply outlaw progress because you think something bad might happen, and attempting to do so seems, frankly, to be rather shortsighted and ignores all of the contributions that military robotics has made and continues to make to the civilian sector.

Essentially, my disagreement with HRW’s proposal comes down to the fact that they are pessimistic about robotics, while I am optimistic. They see autonomous armed robots as something bad, while I see more potential for good. I believe that it’s possible to program robots to act in an ethical manner, and I also believe that robots can act as ethically or more ethically than humans in combat situations. No program is bug-free, and I have no doubt that there will be accidents with autonomous weaponized systems. But what we should be asking ourselves is whether or not the deployment of autonomous armed robots will overall be detrimental or beneficial to humans in conflict.

(Human Rights Watch video, making their case, after the jump.)

… → Read More


On Wednesday night, NOVA ScienceNOW’s season finale showcased emerging technologies. In the three-segment show, there was a lot of David Pogue, The New York Times’ technology writer, dramatically enthusing over different gadgets. Among the featured technologies were DARwIn robots playing soccer, Google Glass, and video games you can control with your mind. Lightly intercut with the techno-gasms was some surface discussion of the ethical implications of these technologies, with MIT professor Sherry Turkle serving as a kind of spokeswoman for those views. While I could have used more of that discussion, it didn’t seem to be the point of the show – so no real complaints. It was an otherwise very engaging show, and Pogue is an electric host. … → Read More

These guys are a lot more personable than industrial assembly-line robots.  Combine their personality with their association with capitalism and you’ve got a robotic worker class that’s taken another step into the human routine.  Read about them here.

Basic Emergence Explained

Rodney Brooks had it right in his 1991 paper “Intelligence without Reason.”  His approach to Artificial Intelligence is based on the emergence of behaviors not explicitly programmed into a system.  Instead, functions designed to control a small part of a robot are organized in a priority hierarchy.  Lower-priority functions yield to higher-priority ones.  These functions each make the robot behave in a certain way and are thus called behaviors.  When a robot is out in the world, behaviors begin switching back and forth very quickly, each taking over the robot, sometimes for only milliseconds at a time.  You’d think that this quick back-and-forth switching of behaviors would create a chaotic, out-of-control robot.  What really happens is that the robot appears to exhibit higher-level behaviors that were never programmed into it.  Emergence happens.

I built a maze in my living room from 1 x 8 boards lying on their side and released my robot at one end.  The goal was for my robot “Beto” to find the exit on the other side of the 12′ x 12′ labyrinth of wooden walls.  The behaviors I programmed were simple.  From highest priority (1) to lowest (5), the behaviors were:

  1. When bumper switch is touched on right, stop, turn left.
  2. When bumper switch is touched on left or in front, turn right.
  3. When IR sensor sees something on right, turn left.
  4. When IR sensor sees something in front or left, turn right.
  5. Unconditionally drive forward while arcing to the right.

Each behavior was responsible for one single, simple thing.  They each ran as discrete processes and each monitored the world for its condition to be true.  When the behavior’s condition became true, it took control of the robot and performed its action.  If there was a tie where two behaviors tried to take over the robot, the higher priority behavior won.  When I turned on the robot in an open space, only behavior #5 was in operation because none of the conditions for the other behaviors was true.  The robot began to drive forward with a bias towards the right.  Once it came across an obstacle, one of the other behaviors would perform.
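That arbitration scheme can be sketched in a few lines of Python. This is a minimal illustration, not Beto’s actual code: the sensor keys and action names are hypothetical stand-ins for real hardware interfaces, and each behavior here is a (condition, action) pair checked in priority order.

```python
def arbitrate(sensors):
    """Return the action of the highest-priority behavior whose trigger
    condition is true.  Behaviors are listed from priority 1 (highest)
    to 5 (lowest); the first matching condition wins the robot."""
    behaviors = [
        (lambda s: s["bump_right"],                   "stop_turn_left"),   # 1
        (lambda s: s["bump_left"] or s["bump_front"], "turn_right"),       # 2
        (lambda s: s["ir_right"],                     "turn_left"),        # 3
        (lambda s: s["ir_front"] or s["ir_left"],     "turn_right"),       # 4
        (lambda s: True,                              "forward_arc_right") # 5 (default)
    ]
    for condition, action in behaviors:
        if condition(sensors):
            return action

# Example: only the right IR sensor sees a wall, so behavior 3 takes over.
sensors = {"bump_right": False, "bump_left": False, "bump_front": False,
           "ir_right": True, "ir_front": False, "ir_left": False}
print(arbitrate(sensors))  # -> turn_left
```

In a real control loop this function would run many times a second, which is what produces the rapid behavior switching described above.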

I dropped him at the beginning of the maze.  The result was fascinating.  I had deliberately designed a long narrow corridor in the maze in order to try to confuse the robot.  The robot drove right down the middle of the corridor in a straight line, slowed before reaching the end wall, stopped for about a second, turned 180 degrees and proceeded out the way it came from.  None of those behaviors were programmed into the robot but the rapid switching between the simple few behaviors caused this complex behavior to emerge.

I spent some time decomposing this kind of emergent behavior and was never able to completely and confidently explain every nuance.  It was obvious, however, that the emergent behaviors came from the programmed ones operating for a few milliseconds at a time—switching tens or hundreds of times a second as they adjusted motor speeds, read voltage levels and performed logic.  Even with these few simple behaviors, much of the analysis was speculation, and I quickly concluded that in order to create more organic-behaving robots, I had to just let go.

The robot successfully navigated his way through a different maze layout every time–validating another of Dr. Brooks’s tenets that robots should be able to react to a dynamic and changing world environment.

In his article “Law and Disorder,” Mark Buchanan shares a case from General Motors: in 1992, the company was struggling to optimize the schedules of the robots that automatically painted trucks coming off the assembly line.  GM’s Dick Morley suggested that the robots should be left to determine their own painting schedules.

Morley put out some simple rules for each machine where each would “bid” for new jobs with an unconditional desire to stay busy.  “The results were remarkable, if a little weird.  The system saved General Motors more than $1 million each year in paint alone.  Yet the line ran to a schedule that no one could predict, made up on the fly by the machines themselves as they responded to emerging needs.”
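Buchanan doesn’t spell out Morley’s rules, but the bidding idea can be sketched as a simple auction. The bid rule below (prefer the booth already loaded with the job’s color, otherwise the least busy booth) is my own assumption for illustration, not GM’s actual algorithm:

```python
def assign_jobs(jobs, booths):
    """Auction-style job allocation.  jobs is a list of paint colors;
    booths maps a booth name to {"color": current paint, "queue": jobs queued}.
    Each booth bids on each incoming job; the highest bid wins."""
    schedule = []
    for job_color in jobs:
        def bid(state):
            # Assumed rule: avoiding a paint change is worth more than
            # having an empty queue, so busy-but-matching booths still win.
            color_bonus = 10 if state["color"] == job_color else 0
            return color_bonus - state["queue"]
        winner = max(booths, key=lambda name: bid(booths[name]))
        booths[winner]["queue"] += 1
        booths[winner]["color"] = job_color
        schedule.append((job_color, winner))
    return schedule

booths = {"A": {"color": "red", "queue": 0}, "B": {"color": "blue", "queue": 0}}
print(assign_jobs(["red", "red", "blue"], booths))
# -> [('red', 'A'), ('red', 'A'), ('blue', 'B')]
```

The schedule is never computed globally; it simply falls out of the machines’ local bids, which is exactly the “made up on the fly” quality Buchanan describes.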

Stephen Wolfram is yet another voice from this behavior-based camp.  In his book A New Kind of Science, he argues that the rules in nature aren’t necessarily limited to traditional mathematics.  Instead, he suggests that complex structures emerge from low-level cellular automata governed by simple, generalized rules.
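An elementary cellular automaton makes the point concretely: each cell’s next state depends only on itself and its two neighbors, yet some rules (rule 30 is a famous example from Wolfram’s book) produce highly complex global patterns from a single live cell. A minimal sketch:

```python
def step(cells, rule=30):
    """Apply one update of an elementary cellular automaton.
    cells is a list of 0/1 values; the row wraps around at the edges.
    The rule number's bits encode the next state for each of the
    eight possible (left, self, right) neighborhoods."""
    n = len(cells)
    return [(rule >> ((cells[(i - 1) % n] << 2)
                      | (cells[i] << 1)
                      | cells[(i + 1) % n])) & 1
            for i in range(n)]

# Start from a single live cell and watch structure emerge.
row = [0] * 7 + [1] + [0] * 7
for _ in range(5):
    print("".join(".#"[c] for c in row))
    row = step(row)
```

Nothing in the rule table mentions triangles or chaotic texture, yet both appear in the printed pattern—the same emergence-from-simple-rules story as the behavior-based robots above.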

As robotics and Artificial Intelligence enter the dawn of the Singularity, the complexities manifesting from the core threads of behaviors will become as unpredictable as humans.  All we need to do is to let go.