As promised yesterday, my discussion with Heather Knight of Marilyn Monrobot. I’d like to thank Heather for taking the time to speak with us after Dr. Ishiguro’s lecture. We hope you enjoy the interview below.

Eric Wind: How’d you come to speak at the lecture tonight?

Heather Knight: I was invited to come speak at the lecture. There aren’t that many roboticists in New York City and I’m not sure whether they found me or whether [Erico Guizzo] recommended me because of our common interest in robots and theater.

E: What part of the conversation did you find the most intriguing and most beneficial for the audience?

H: That’s a good question. It’s always interesting to speak to a general-public crowd. This was a really interesting evening because you had the Japanese Cultural Society and then the people interested in robots coming. It prompts a more cultural discussion to begin with, because you’re in New York City at this, you know, Japanese house of culture. So, you had an American roboticist and a Japanese roboticist, and we both have similar research interests in that we think social robots are really important and we want to make these robot-companion situations. Although, I would say that I want robots to help people connect, rather than being the connection.

He and I both have a strong interest in theater and in thinking about algorithms we can learn from directors and actors, ones that are codified a little bit differently than what psychologists offer. Though psychology is a field that social robotics adopts methodology from. So, we have a lot of similar interests, but we’re also very different, so it was fun.

You know, he’s a lot more experienced than I am. Hiroshi Ishiguro has been working on robotics for several decades and has a very established lab in Japan. I’ve been working on robots for 11 years, and it’s not like I haven’t done anything, but he’s been a huge inspiration for me and it was really exciting to be in that situation.

E: When did you first hear about his work?

H: Well, it was at least 7 or 8 years ago. I started doing social robotics when I was an undergrad at MIT, working with a professor named Cynthia Breazeal. She made this robot for her PhD called Kismet, which was basically a head that had ears and eyes. It wasn’t trying to be a super-humanoid; it was almost creature-like. It didn’t use words, but it kind of babbled. It responded to the tone of your voice. Sometimes it would be in the mood to play with toys or color-saturated objects, or sometimes it would be in the mood to socialize. I think one of the more clever aspects of its behavior system is that it would get bored. So, if you weren’t being interesting then it would switch to wanting to play with toys, which seems really human. It’s a really simple set of behaviors. But anyone could walk up to this robot without any training and learn how to interact, because it would be like “Hello!” and its ears would perk up, or if you said “You’ve been a very bad robot!” it would make a sad face or something.

So, it responds with sound and with these facial expressions that are perhaps a little cartoon-like, not fully human but still relatable. I think it’s really compelling to think about the simplest ways to come up with human-like robots. That’s often the most difficult thing to do. As an engineer, it’s always more difficult to have a simple solution that’s very clever than it is to have a convoluted Rube Goldberg machine for making breakfast or something.
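
(For the programmers reading: the “boredom” mechanism Knight describes can be sketched as a tiny behavior system. The following is not Breazeal’s actual Kismet architecture, just an invented toy with made-up thresholds, but it shows how a low-stimulation signal can push a robot to switch activities.)

```python
# A minimal, invented sketch of a Kismet-style "boredom" behavior.
# The robot socializes while interaction stays stimulating, and
# switches to playing with toys once boredom builds up.
def run(stimulation_levels, boredom_threshold=1.0):
    boredom, state = 0.0, "socialize"
    for level in stimulation_levels:
        # Boredom accumulates when stimulation is low, drains when high.
        boredom = max(0.0, boredom + (0.5 - level))
        if state == "socialize" and boredom > boredom_threshold:
            state = "play_with_toys"   # you stopped being interesting
        elif state == "play_with_toys" and boredom == 0.0:
            state = "socialize"        # refreshed, back to people
        print(f"stimulation={level:.1f} boredom={boredom:.1f} -> {state}")

# An engaging visitor (0.9) who gradually becomes dull (0.1):
run([0.9, 0.8, 0.4, 0.2, 0.1, 0.1, 0.1, 0.9, 0.9, 0.9])
```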

E: Could you expand on your background a bit more?

H: Sure. I have a Bachelor’s and a Master’s in Electrical Engineering and Computer Science, and now I’m working on a PhD in Robotics. So, you might think I’ve been in school for the last 15 years, but actually I do other things along the way. I’ve taken breaks to travel, and I got a chance to work at the [NASA] Jet Propulsion Laboratory in California on space stuff. While I was there, I met people who were working in art and technology, and who I ended up collaborating with.

Originally, I was in SyynLabs, where we were building installations for events. For example, you’d have a projection wall where people would come up and dance, and things would fall on them and roll off their shadows. These installations, set in an event setting, would get people talking to each other. So it’s technology that is fun and playful. Then we started building other things, like a bicycle-powered blender or, I don’t know, this one installation where you had to create a human circuit to hear a story — so, if you wanted to hear the full story, you needed to have a group of, like, 10 strangers holding hands. You’re using technology to trick people into spending time with each other. We used to call it “technological inebriation.”

It was really technology for people, and I think that’s a great metaphor for some of the robots I want to make. I don’t want to make robots for the sake of replacing people, or, I don’t know, for their own good. I think you can use robotics in these interactive art pieces to bring out features in ourselves or connect us to each other. Like, there are autism therapy applications, where kids with autism feel more comfortable talking to robots than to people because it’s less overwhelming — less sensory overload. If they practice with this robot, this kind of stepping-stone agent, then they can better integrate generally, or get used to those less practical but still socially important aspects of interaction.

E: What was your Masters thesis on?

H: I did my Master’s thesis on this project called the Huggable. It was this robotic teddy bear that had a full-body sensing skin. I was trying to come up with a way to make that sensing happen in real time, so it could react naturally. It’s like if you were to pick up something like a puppy or a baby. So, how do we communicate with puppies and babies? You pet them, hold them, or you might tickle them. If they’re asleep, you pat them to wake them up. All of that communication is very complex. Anyone who has a small child, or has played with a small child, can tell you that the child knows what it wants, but it’s not verbal. So how can you create pre-verbal interactions?

My thesis was on what kinds of touch gestures we use to communicate with this robot teddy bear. This involved human studies that included an audio puppeteer. So, someone was pretending to be the robot: they see the video feed and react naturally, while the robot’s sensors are trying to determine how people are communicating with it.
Basically, I got this data corpus showing how people use touch to communicate. It becomes a pattern recognition problem, where you have to categorize how people use touch and then think about “how can I detect this?” Since I was trying to build a system that would work in real time, one of the things I discovered is that with touch, you don’t need really finely tuned sensing. As long as you cover an area of about two by three inches, you’re going to capture most communication. You don’t need a really fine grid.

The second thing is that most touch gestures last one to five seconds, so the detection doesn’t need to be particularly quick. Within that window, you need to do some frequency analysis. For example, tickling is a very noisy signal; it involves a lot of different frequencies. Petting is more of a regular sine wave. Then, you can see how to differentiate between these different kinds of touch.

My degree was in Electrical Engineering, so it was designing the sensor system, but it was also coming up with a simple pattern recognition system.
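
(To make the tickling-versus-petting distinction concrete, here is a minimal sketch of that kind of frequency analysis. This is not Knight’s thesis code; the sampling rate, signals, and threshold are all invented for the illustration. The idea is just that a regular stroke concentrates its energy at one frequency, while a tickle spreads it across the spectrum.)

```python
import numpy as np

def classify_touch(signal, threshold=0.4):
    """Toy touch classifier: petting is a near-periodic (sine-like)
    pressure signal, tickling is broadband noise. We measure how much
    of the signal's energy sits in its strongest frequency bin.
    The 0.4 threshold is made up for this demo."""
    spectrum = np.abs(np.fft.rfft(signal - np.mean(signal)))
    power = spectrum ** 2
    if power.sum() == 0:
        return "no touch"
    peak_ratio = power.max() / power.sum()  # ~1.0 for a pure tone, tiny for noise
    return "petting" if peak_ratio > threshold else "tickling"

t = np.arange(0, 3, 1 / 50)                # 3 seconds sampled at 50 Hz
petting = np.sin(2 * np.pi * 2.0 * t)      # smooth 2 Hz stroking motion
tickling = np.random.default_rng(0).normal(size=t.size)  # broadband noise
print(classify_touch(petting))             # -> petting
print(classify_touch(tickling))            # -> tickling
```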

E: What’s your doctoral thesis, and how’s the progress?

H: I haven’t declared my thesis yet. I have finished my coursework, and I’m prepping for the next stage. Then we have qualifiers and so on, and I’m in the very final stage of those. I will complete them this semester and hopefully put forth my proposal in the fall.

E: Do you have any idea of what your thesis proposal will be?

H: Yeah, so I learned that you’re not supposed to propose until you’ve already finished some of the work. That way, you’re not proposing something you’ve never done but you’re proposing something you’ve already tried out, so you know it has a chance of working.

In our department, people usually propose when they have 20–40% of the work done. I’m hoping mine is going to be about expressive motion. Basically, how can the non-anthropomorphic be expressive? I’m interested in how motion can describe the state of a relationship: “Do I know you?” “Do I not know you?” “Do I like or dislike you?” “Are you my boss, or am I your boss?” Power relationships are important. Then there can be room for emotions. Or, something else that’s interesting is trying to measure how much a robot is in a rush by how quickly it’s going. We can see that with drivers and cars now. It’s just a question of whether we can categorize that in a general way.

I might get better at my elevator pitch in a couple of years, but the basic idea is to see if there are some universals of expression that we can distill to use on non-anthropomorphic robots. It’s basically robot body language.

E: What got you interested in robotics?

H: I didn’t grow up obsessed with robots. I fell in love with robots when I started building them. So, in my freshman year at MIT, I was talking to people in my living group and asking about an internship. Someone said, “Hey! I work in a robotics lab. I could probably get you a position.”

So, I just started working there, January — maybe 2002. Over the summer (it was the first year my professor, Cynthia Breazeal, was a professor), we had this big group project to kick off our research group. We built this big interactive terrarium and brought it to a big conference in San Antonio, where we were in the emerging technologies exhibit. You know, it was kind of like Epcot Center. There was this big robot that had this hand-thing that would see people and say “Hello!”; then it would get bored and go play in the waterfall, then it would get tired and turn in for the night in this cave. We went really crazy. There were these rock crystals that would turn on, and these drums you could play with, and these fiber-optic tube worms that I got to put together. I was 18 and it was awesome. By the end of the 5 days, I could restart the whole system myself and I could talk to all these different people. It wasn’t just getting to build the robot and see it move; it was seeing people interact with the robots.

E: What do you feel sets you apart from other roboticists?

H: I don’t know. I definitely have fun with what I do. My father was an engineer; he designed propulsion systems for ships and submarines. He’s really great at math and physics. My mother was a Peace Corps volunteer, all about international understanding, so she really wanted to impact the world.

I like building things and I like solving problems, and then my mother’s voice is in the back of my head saying “Well, why do people care about this?” I think that’s one of the reasons I didn’t want to do space stuff anymore. I wanted to impact real human beings. So, I don’t know how different that is but I really like imagining the future.

E: What’s your favorite project that you’ve worked on so far?

H: Well, if you asked what my favorite robot is, then I would be in really big trouble back at home if I didn’t say Data.

I don’t know, there have been so many projects I’ve been involved in, in different ways. The precursor to the Rube Goldberg machine on YouTube was the OK Go music video. That was the project where I thought, “Oh my god, you could learn so much from professionals.” The band made that machine so much cooler than if we had built it by ourselves. They are professional entertainers and they have this intuition about what audiences care about and how to reach people. It’s part of the motivation I’ve gained in wanting to work with actors.

What I left out before is that I want to work with actors, dancers, and directors to help craft these expressive motions that I’m trying to find universals for in robots. I’m really interested in seeing how we can adopt bodies of knowledge from theater into robotics, or from disciplines of art that people have spent hundreds or thousands of years honing. Rather than trying to reinvent the wheel as engineers: we can make engines work, but suddenly we’re trying to make these socially intelligent machines. Like, are engineers really the best people to be making socially intelligent machines? There’s some sort of weird clash there.

So, I’m trying to distill knowledge from a non-technical field into a world where you can program stuff. Some of that has been about creating interfaces where you can have kinetic conversations.

E: How would you explain social robotics and its significance to the average person?

H: Social robotics is the idea that you can make the human-robot interface smooth. So, instead of someone teaching you how to program the robot, you can just walk up to the robot, communicate, and figure out the interface as you go.

Social robotics is super-important if you ever want to have humans and robots working together in pairings that aren’t programmer-robot. Right now, we don’t really have that. We have tons of robots on industrial manufacturing floors and sorting our mail, and we have sent them to the surface of Mars. But, to do everyday things with robots, we have to create an interface to make that possible.

E: What’s the idea behind Marilyn Monrobot Labs, and what drove you to start it?

H: I’m really interested in the intersection between robotics and theater. As much as I get to explore that as a researcher, I also think there is artistic value to that intersection. Marilyn Monrobot lets me explore that. So, it’s the umbrella name for our robot theater company. It’s where we do our robot-comedy stuff and the robot film festival. Last year, we did a robot cabaret variety show with 10 acts, exploring how the modern world is already a cyborg society because of our dependence on phones. It’s allowed us to consider the changing ethical ramifications of our changing relationships with each other, via technology. Like, you hear about freshmen who arrive at their new college with like 200 Facebook friends there, but they don’t know how to talk to someone at the orientation party. So, are we losing our humanity to technology? Obviously, I’m not a pessimist about technology, but I think it’s equally naive not to think through where technology can go.

E: How did you decide on the name, Marilyn Monrobot?

H: Well, JPL is really flat. You don’t really have parking garages in earthquake country, so instead we had this 20-minute walk from my office to the enormous parking lot. Of course, seniority is how you actually get to park close to your office, but since the average age there is 50-something and the average working span is 30 years, we were kind of the kids. So the name just kind of came to me walking through the parking lot.

I also found out later that Marilyn Monrobot was a Futurama episode, or it was a segment, which is fantastic. I didn’t know about that at the time. But, it’s supposed to represent this intersection between robotics and entertainment.

E: Could you tell us about the robot census and how that’s going?

H: So, the robot census started when I first arrived at Carnegie Mellon University. When you first arrive, you don’t know who your adviser is going to be, but that is your most important relationship during your PhD. The average time for the degree is 5 and a half years, so some call it the marriage process. It’s longer than some marriages.

I was going to a school where there were 500 other people working in robotics in some capacity, and we were supposed to choose our advisers out of the 80-something professors. We didn’t even know who had what robot. Like, I’m at the Robotics Institute, and obviously I have to partially choose my advisers by what kind of robots they have, right? If this is our marriage, then the robots are the children.

So, I started this census on campus and people thought it was interesting and I opened it up to the world. I think it should be done every four years, kind of like this other census you may have heard of that involves the population of the United States.

E: Is it difficult rounding up information for the robot census?

H: Yeah, even in person on campus. I think campuses should run their own censuses and collect information. We had to hand out physical forms and then send links out to the digital form. It was like marketing. I had no idea, but apparently you should feel okay sending up to ten reminders. We didn’t do that; we went in person after a while. So, there were a few that probably slipped through the cracks, but I’m sure that’s true of other censuses.

E: How many robots have you documented?

H: We’ve documented 547 robots on campus. There’s an off-campus facility for robotics, but we didn’t do the census there, though I would love to expand to that.

E: Do you feel that the anxiety people have could be attributable to the perceived lack of sociability of robots?

H: No, I think it’s religion. Fear of robots is a Western culture thing. It’s this idea that we’re usurping the role of God, and it’s kind of like Frankenstein because we’re doing what we should not be doing — you know, what we’re doing is wrong and we will be punished. It’s tapping into mythology.

Storytelling is a cultural phenomenon. It’s not based in reality. It’s based in human perception and culture and so on. So this idea that we’re not supposed to be playing God, and that if we try to play God it will go really wrong, is a religious thing, in my opinion and other people’s opinion. This is well documented.

Now, if you look at the Shinto faith, they believe that all objects, people, animals, and mountains have the same spirit. There is no hierarchy. They place a really high value on nature, and rocks, and robots, so spiritually everything is on equal footing. The other detail is that these spirits naturally want to be in harmony. So, when you look at Frankenstein or the Terminator versus… Astro Boy, that’s revealing our culture. It’s not about the technology; it’s about the belief system. Regardless of whether you were raised going to church or temple, this permeates our culture.
So, even in Japan, where a lot of people are Christian now, this Shinto belief system has permeated their expectations of what happens with technology.

E: Do you see the robotics industry trending toward social robotics?

H: It’s early research now, but I think charismatic machines have more applications in the short term. Social robotics may take a little longer. Like, look at the idea of Siri being really popular. That’s a charismatic technology. I think what we learn in social robotics can be cross-applied to real technology, because what we’re doing is creating interfaces between technology and people. So, what we learn about sociability can be applied to non-social machines. Hiroshi would probably have a different opinion there.

E: What do you find is the biggest barrier in getting people interested in robotics? Do you think it’s exclusively religious or cultural?

H: When people haven’t met the technology and they’re just thinking theoretically about it, then you get the Terminators and you have the Singularity people. Those are like the two most popular mythmaking threads at the moment. That doesn’t mean we don’t have positive storytelling. I mean, we have Rosie the Robot and we have WALL-E. I think stories really inspire what we make.

Throwing back to the earlier conversation about robots in Japan: they invest so much in companion robots, and music, and things for the elderly, etc. And what is the U.S. known for in robotics innovation right now? The biggest thing is military robots. That doesn’t mean there’s not a lot of research in other kinds of robots, but what we’re famous for is military robots.

E: Do you have an end-goal for your research and projects?

H: Shape the future.

E: Are you concerned about people using your technology for harmful purposes?

H: I think it’s really important to think about that. I think it should be a common part of engineering education in general, thinking through the ethics and where you’re going with stuff. In the world of art, and even architecture, critique is a natural part of the process. It would be great if we critiqued our designs on more than just meeting certain performance criteria. The bigger grant organizations, like the National Science Foundation, do ask for broader-impact statements, but they don’t really ask how things can be misused.

E: Do you think there’s a reason for that?

H: For me, and this is theoretical: engineers were never the heads of companies. They were the people who could help the people who started the companies solve specific problems. Historically, in this bigger company construct, our job was not to be creating ideas. These days, within the last 30 years, engineers and technologists are starting companies and we are the idea people, but the education hasn’t shifted. So, we’re still educated as if we are cogs in the larger industrial machine, whereas other people are thinking about “Where is this going?” Sometimes that’s about money, but at least there was someone to think about that stuff. Maybe they had training in that, I don’t know.
But, I think it’s a legacy from engineers’ jobs before.

E: Kind of shifting gears, it seems like robotics, and technology in general, has drawn more men to the field than it has women. From your experience, do you feel that’s the case?

H: Well, I was spoiled because MIT is like 45% women, so I didn’t really feel that way. In the working world, it was something like a one-third women, two-thirds men ratio in the U.S. In Europe, it’s more like nine-tenths male and one-tenth female.
I never really thought about it until I was several years into doing what I was doing. I always idolized my dad, so I kind of always felt like I wanted to be an engineer. I mean, there are definitely some legacy issues with gender, but things are moving in the right direction for sure. I think it’s much easier to change things at the undergraduate level, but it takes much longer for those changes to percolate into other levels of companies or academia. And you definitely get an idea of that. For example, I’m pregnant right now and CMU has no maternity leave policy. I don’t know, academia just doesn’t think about those things sometimes.

E: Is there anything more that can be done to draw women into the field?

H: We’re actually doing a great job at attracting people, but we’re not doing so great at keeping people.

E: Why?

H: I think there are a lot of great articles about it; one of the titles is “The Leaky Pipeline.” I don’t know, people identify things like mentoring. According to research, it’s really important to have a good mentor, no matter what your gender is. Just having someone support you, whether you’re a minority, female, or any other group that isn’t typically represented.

I’m really excited about a world where engineers aren’t just cogs in the machine and really are creative. The more you move in that direction, the wider the breadth of people you attract, male or female. It’s about getting more creative people into the field, and I would love to see that prioritized.


On February 6, Dr. Hiroshi Ishiguro, Professor in the Department of Systems Innovation at Osaka University, traveled to the Japan Society in New York City to give a lecture on the future prospects of humanoid robots — or androids. My wife, Jen, and I made the trip as well.

The theater at the Japan Society was packed, and the crowd covered all ages. There was a bustling energy to the evening, and a slide featuring the Geminoid F android was projected prominently. The title on the slide was “Studies on Humanoids and Androids,” though the official title of the lecture was “How to Create Your Own Humanoid.” After everyone settled in, Dr. Ishiguro was introduced and he began.

He is a stately looking man, and he took a professorial stance at the podium. Throughout the lecture, he gave an overview of his work in android development and what he saw in its future. His talk was divided into a series of questions that, as a whole, asked whether the line between human and robot would ever diminish. In so many words, the answer: it’s unlikely right now.

Dr. Ishiguro explained that there are so many nuances in human behavior and speech that it would be incredibly difficult to create a robot that could act fully human. It’s a little akin to the Replicants in Blade Runner: “we” had created robots (“Replicants”) that could mimic humans in most ways, but you could still tell, with a test, whether someone or something was human or Replicant. He even offered up a paradox: with robots, we can create the “perfect” human, but we can’t make a robot human.

He made this point through a number of examples, the most prominent being trying to agitate an android by repeatedly poking it. Its behavior wouldn’t deviate accordingly. Humans have odd ways of reacting to stimuli that robots aren’t capable of. However, to illustrate the point that we can at least make “perfect”-looking robots, he put up a video of a busy cafe and asked us to point out which patron was the robot. I certainly couldn’t.

The unreality of robots aside, Dr. Ishiguro explained that his real motivation behind studying robots is human psychology. The example that stands out to me at this moment is when he explained an experiment he did with one of his androids. While he was in Osaka, he directed some colleagues to plant an android in a cafeteria in Munich. From Osaka, he spoke through the robot and invited people to come, sit, and speak with it. What he found was that people were more than willing to open up and spill about their problems. It was intriguing, and I imagine people feel comfortable talking to the robot because of a perceived lack of judgment.

It’s examples like that which drew Dr. Ishiguro to robotics, rather than the pursuit of the next big technological advance. With that, the lecture came to a close and the panel with Heather Knight, of Marilyn Monrobot, and Erico Guizzo, of IEEE Spectrum, began.

The panel was kicked off by a poem reading from the Geminoid F android, which was equal parts beautiful and creepy. After that, Guizzo moderated the discussion between Knight and Dr. Ishiguro. The talk wove between the use of robots in theatrical settings and where social robotics is going. Knight explained her interest in robotics and in using her robots in theatrical settings.

After the discussion, the floor opened up to questions. For a night dominated by non-technical subjects and by trying to bring robotics to a wider audience, the questions were — somewhat disappointingly to me — mainly geared toward the technical aspects of the Geminoid or of robotics in general.

Once the talk let out, there was a small reception. After it all wrapped up, we sat down with Heather Knight for a wide-ranging discussion. That interview will be posted up tomorrow.

Were you at the discussion, too? Let us know what your experience was on twitter @RobotCentral.


Artificial Brains

FastCompany’s Lakshmi Sandhana looks at the path to an evolved robot that can walk naturally. The process has necessitated the development of artificial brains. Where the team is going with it all:

Grand dreams aside, what it means at present for the team is evolving brains that can go beyond figuring out simple things like gaits to more intelligent behaviors like learning. They’ve 3-D printed an advanced quadruped robot called Aracna to further examine evolved gaits. The next step is to evolve larger, more modular brains that will hopefully approach natural brains in complexity, opening up the possibility of creating an entirely new breed of robots.

“Evolutionary computation has already produced many things that are better than anything a human engineer has come up with, but its designs still pale in comparison to those found in nature,” states Clune. “As we begin to learn more about how nature produces its exquisite designs, the sky’s the limit: There’s no reason we cannot evolve robots as smart and capable as jaguars, hawks, and human beings.”
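
For readers curious what “evolving” a gait actually involves, the loop is conceptually simple: keep a population of candidate controllers, score each on how well the robot moves, keep the best, and mutate them. Here is a minimal sketch of that idea, not the team’s actual system; the fitness function is a stand-in (a real setup would score distance walked in simulation) and every parameter is invented for the example:

```python
import random

def fitness(genome):
    """Stand-in fitness: a real system would run the gait encoded by
    `genome` on a simulated robot and return the distance walked.
    Here we just reward genomes near an arbitrary target pattern."""
    target = [0.5] * len(genome)
    return -sum((g - t) ** 2 for g, t in zip(genome, target))

def evolve(pop_size=50, genome_len=8, generations=200, mut_std=0.05):
    # Each genome is a list of gait parameters (e.g., joint phase offsets).
    population = [[random.uniform(0, 1) for _ in range(genome_len)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        survivors = population[: pop_size // 2]  # keep the fitter half
        # Refill the population with mutated copies of survivors.
        children = [[g + random.gauss(0, mut_std)
                     for g in random.choice(survivors)]
                    for _ in range(pop_size - len(survivors))]
        population = survivors + children
    return max(population, key=fitness)

best = evolve()
print(round(fitness(best), 4))  # approaches 0, the optimum
```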

Enhanced by Zemanta

Wired is reporting that the U.S. Navy is working on a new kind of robot that would “smell its way to weapons prep” on a ship, navigating by artificial pheromone. The challenge:

It’s not going to be that simple, though. If the project works, the sniffer-robots will begin deep below the carrier’s water-line, hauling bombs from nine levels underneath the flight deck into a series of elevators, before ending up at an assembly point on the deck called the “bomb farm.” Once there, the chemicals will have to withstand winds whipping over the deck, and “must be stable enough during direct contact with petroleum products,” withstand temperatures above 200 degrees Fahrenheit, and fade after a mere 20 minutes — thereby preventing other robot swarms with different instructions from getting confused when moving down the same hallway.

NPR profiles an experiment by Christoph Bartneck that looks at human empathy toward robots. In Bartneck’s experiment, the human subject would play games with the robot, and sometimes the robot would mess up. At the end, the human subject was told to turn the robot off:

At the end of the game, whether the robot was smart or dumb, nice or mean, a scientist authority figure modeled on Milgram’s would make clear that the human needed to turn the cat robot off, and it was also made clear to them what the consequences of that would be: “They would essentially eliminate everything that the robot was — all of its memories, all of its behavior, all of its personality would be gone forever.”

In videos of the experiment, you can clearly see a moral struggle as the research subject deals with the pleas of the machine. “You are not really going to switch me off, are you?” the cat robot begs, and the humans sit, confused and hesitating. “Yes. No. I will switch you off!” one female research subject says, and then doesn’t switch the robot off.

“People started to have dialogues with the robot about this,” Bartneck says, “Saying, ‘No! I really have to do it now, I’m sorry! But it has to be done!’ But then they still wouldn’t do it.”

The Rise Of Siri

Bianca Bosker, at Huffington Post, delves into Siri’s origin. She reports that the intelligent virtual assistant was born out of the largest artificial intelligence research project in the country. It is also suggested that Apple has stunted Siri’s potential, but the future holds something different for it:

Siri’s backers know Apple’s version of the assistant has not yet lived up to its potential. “The Siri team saw the future, defined the future and built the first working version of the future,” says Gary Morgenthaler, a partner at Morgenthaler Ventures, one of the two first venture capital firms to invest in Siri. “So it’s disappointing to those of us that were part of the original team to see how slowly that’s progressed out of the acquired company into the marketplace.”

But as a new wave of virtual assistants compete to take on our to-do lists, Apple is under growing pressure to use the technology it already has and turn Siri into the multitasking, proactive helper it once was. Siri’s history suggests a fantastical future of virtual assistants is coming; where we now see Siri as a footnote to the iPhone’s legacy, some day soon the iPhone may be remembered as a footnote to Siri.

“A kinder, gentler HAL is on its way to the mainstream for sure,” says Kittlaus. “Siri is just a poster child, but it goes way, way beyond that.”

This is an in-depth, well-researched piece that deserves a good chunk of time to be read.



Pets Of The Future

I’ve always felt that the pursuit of companionship is part of being human, and once we find it we feel whole. For many, pets fill this role of companionship: they need us, they are our confidants, they cheer us up in bad times and they are there to see our times of pride. Pets are friends, extensions of families, bodyguards, house alarms, and alarm clocks. They are also like perpetual children, always in need of us. We take them to the bathroom (or clean the bathroom for them), we are responsible for feeding them and for keeping their health up. Some of us even clothe them. For those who don’t want to have the responsibilities, or can’t handle the responsibilities, technology has brought us robotic pets.

In the last decade and some change, digital pets have evolved from the simple Tamagotchi. Now, there are robotic pets available that have a “mind” of their own, like the now-discontinued Sony AIBO. Or, a bit more advanced, the Pleo, which can recognize the touch of a person “petting” it and react, and can hear and heed commands. Of course, there’s also the Furby, but it’s hard to imagine it as a positive companion to anyone.

Despite the advances, sales of the AIBO and Pleo have been disappointing, with the AIBO axed in a bid to make Sony profitable, and Pleo’s maker going bankrupt and being sold to Jetta. Much of the reason is the expense that goes into making the robots, with the resulting price tag not looking too attractive to consumers who could opt for an actual pet. But could there be something more to it?

This study from the University of Washington looked at how humans respond to robotic dogs (using AIBO as the example in the study) when compared to stuffed animals or live dogs. It found that when children are given a choice to interact with a live animal or a robotic dog, they will tend toward the live animal and view the robotic dog as mechanical. But when the robotic dog is their only choice (such as in areas where dogs aren’t allowed, like hospitals), they tend to feel similar emotions toward the robotic dog as they would toward a live animal. That, I suppose, sounds uncontroversial. We often use what we can to fill emotional voids when it becomes necessary.

One of the interesting things the study did find is that the kids (younger and older) surveyed said that it is “Not OK” to hit AIBO, out of concern for the robot’s psychological and physical welfare. That jibes with past accounts of empathy felt by humans for robots, like the famous example of Mark Tilden’s stick-insect robot:

At the Yuma Test Grounds in Arizona, the autonomous robot, 5 feet long and modeled on a stick-insect, strutted out for a live-fire test and worked beautifully, he says. Every time it found a mine, blew it up and lost a limb, it picked itself up and readjusted to move forward on its remaining legs, continuing to clear a path through the minefield.

Finally it was down to one leg. Still, it pulled itself forward. Tilden was ecstatic. The machine was working splendidly.

The human in command of the exercise, however — an Army colonel — blew a fuse.

The colonel ordered the test stopped.

Why? asked Tilden. What’s wrong?

The colonel just could not stand the pathos of watching the burned, scarred and crippled machine drag itself forward on its last leg.

This test, he charged, was inhumane.

These feelings of empathy toward, and rapport with, robots have enabled pet-bots to be used, with some success, as therapeutic robots for kids and the elderly. Robots are an endless source of love without the messiness, the way pets are an endless source of love with the messiness, so it makes sense that they’ve helped both kids and the elderly. As for other adults, the UW study had this to say:

The tendency to anthropomorphize artifacts is easily triggered (Nass & Moon, 2000; Reeves & Nass, 1998). While it remains unclear exactly what features of a robot maximize this tendency, Lee, Park, and Song (2005) found that adults who interacted with a version of AIBO with software such that the AIBO seemingly developed over time, and in response to human behaviors, perceived AIBO as more socially present than did adults who interacted with a “fully developed” AIBO.

It went on to say that in future studies, the hesitancy of adults to perceive robots like AIBO as “socially present” may disappear as the robots become more autonomous and the software becomes smarter.

If our perceptions of the robot suggest a living animal, then interesting ethical concerns arise as we tread down the path of stronger artificial intelligence. Since the definition of “strong A.I.” is a machine just as smart as or smarter than humans, the kind of A.I. we program into robotic pets will probably always be less than that, if we can control it.

We often compare the intelligence of one animal against other animal species — many speak about how “smart” their dogs are, or how smart and cunning a cat is. Although we can say that a pet robot isn’t as smart as our cats and dogs, we aren’t far from a point where it may be — a point that will be reached much sooner than robots surpassing human intelligence.

With that in mind, we also have animal cruelty laws and pet advocacy groups. If we have machines that can “feel” and respond to human commands just like our canine and feline companions can, is that the point when we start considering safety laws for robotic pets as well?

I feel like I may be getting ahead of myself in trying to answer the question, but I can’t see any reason not to consider robotic pet rights in the vein of our animal rights, if that’s the path we’re going down. On the other hand, humans and living animals have a complex, long, and storied relationship that we don’t have a strong hold on yet, and that has complicated things like trying to clearly define standards of animal welfare. Taking that into account, it doesn’t seem right to try to define robotic pet welfare when we haven’t come close to squaring the myriad welfare issues of living pets, including shelters, anthropocentric control of animals, breed bans, cruelty laws, and so on and so forth.

Let us know what you think about this topic on Twitter @RobotCentral.


Rethinking Drones

To have this discussion, there are a couple of things to keep in mind. One, “drone” is a wide-ranging term that technically means any remote-controlled vehicle. Two, it’s probably fair to say that the lion’s share of money for drone technology has been coming from the military, put toward the development and use of unmanned combat air vehicles, or simply “combat drones.” Due to this, it’s also probably fair to say that the exposure people have to the word “drone” is typically connected to the unmanned planes the United States uses in war.

The two conversations eating up headline space lately are the continuing discussion over “killer robots,” which almost always refers to autonomous combat drones, and President Obama’s new CIA nominee, John Brennan, who has become the face of the secret-but-not-so-secret combat drone program. On the periphery, there is some discussion over drones being used for spying purposes and the privacy concerns that arise from that.

Unfortunately, that’s mainly what you hear about “drones” in the media. What you rarely hear about are non-military projects that could have a day-to-day impact on you, or a positive effect on our knowledge and humanity in general. Chris Anderson has made such a list. Some of the ones that claimed a good amount of my research/interest time:

Drones being used to monitor ocean wildlife.

The National Oceanic and Atmospheric Administration is conducting a demonstration off Oahu’s North Shore this week of a small unmanned aircraft the agency hopes will improve ocean monitoring and aid environmental research in the Northwestern Hawaiian Islands. The Puma AE, which has a 10-foot wingspan and weighs 13 pounds, can stay aloft for two hours and capture high-definition still photos and video. It is remotely operated.


It was designed to be quiet and avoid detection, which will allow researchers to observe wildlife at close range, Jacobs said.

“We don’t have to risk personnel being landed on the beaches,” he said. “Exotic species introduction potential gets eliminated and we believe it’ll be less potential for any disturbance of the critters that are being surveyed.”

Drones being used to track hurricanes.

“We are still a long ways away from replacing manned flights,” he said. Instead, the UAVs will supplement manned flights by flying at altitudes of up to 60,000 feet, thousands of feet above the thrashing winds and rain. One aircraft is designed to gather data about the environment around a storm, while the other UAV will study the storm itself.

It’s not the first time NASA has turned to spy aircraft for weather research. Since the 1970s, the space agency has used a version of the military’s U-2 aircraft to conduct a range of observations on everything from wildfires to migratory birds, as well as hurricanes. (During the 1960s, NASA unsuccessfully tried to help cover up Francis Gary Powers’s failed U-2 spy mission in the Soviet Union by claiming he got lost while conducting weather research.)

Like the military, NASA and NOAA are now looking to unmanned vehicles to either replace or bolster more traditional vehicles.

You can make your own Google Maps, and we may be close to getting sky-high WiFi that could help quickly get communication lines set up in disaster zones. So on and so forth.

I’m not certain how the media picks up and comes to use certain terms, or whether it is the media that influences our usage and abbreviations. It does seem to me, though, that from time to time the abbreviation of some terms, like “combat drones” to simply “drones,” does a disservice to an emerging technology that needs to be discussed in whole rather than condemned or mulled over in part.

Let’s talk about the ethical and moral issues of using combat drones in war, and how they figure into the greater context of war itself. Let’s also talk about the potential for unarmed drones to further tornado research, or the drones being used to rescue people. We could even bring up repurposing combat drones to deliver food to people, instead of bullets and bombs.

We can find good stories and still be accurate in how we use terms and perceive technology. That’s all I’m sayin’.

Let us know what you think. Our Twitter is @RobotCentral, or you can check out our Facebook page here.


The Atlantic has run a fantastic piece by Dr. Patrick Lin on trying to square cyborgs with international human rights law during wartime. This is just one small bit, taking on whether cyborg enhancements used in war are “repugnant to the consciousness of mankind,” per the Biological and Toxin Weapons Convention:

Not all enhancements, of course, are as fanciful as a human-chimeric warrior or a berserker mode, nor am I suggesting that any military has plans to do anything that extreme. Most, if not all, enhancements will likely not be as obviously inhuman. Nonetheless, the “consciousness of mankind” is sometimes deeply fragmented, especially on ethical issues. So what is unobjectionable to one person or culture may be obviously objectionable to another.

Something as ordinary as, say, a bionic limb or exoskeleton could be viewed as unethical by cultures that reject technology or such manipulation of the human body. This is not to say that ethics is subjective and we can never resolve this debate, but only that the ethics of military enhancements — at least with respect to the prohibition against inhumane weapons — requires specific details about the enhancement and its use, as well as the sensibilities of the adversary and international community. That is, we cannot generalize that all military enhancements either comply or do not comply with this prohibition.

Toward the end, we are reminded that this discussion barely scratches the surface:

The above discussion certainly does not exhaust all the legal issues that will arise from military human enhancements. In our new report, funded by The Greenwall Foundation and co-written with Maxwell Mehlman (Case Western Reserve University) and Keith Abney (California Polytechnic State University), we launch an investigation into these and other issues in order to identify problems that policymakers and society may need to confront.

A shift in how our world works may be in the offing, set against an artificially intelligent background. The most immediate and apparent example comes in the form of intelligent personal assistants, like the flopped Siri from Apple or the more favorably reviewed Google Now. Those assistants work based on a field related to artificial intelligence called natural language processing (“NLP”), which, pared down, is the process of a computer trying to recognize what you just said or typed into it.

To see just how much this one aspect of A.I. has set itself into our lives, let’s talk about Google again, since they’re steeped in NLP. Google Now aside, their search function exhibits word-sense disambiguation, and they have fairly accurate machine translation (depending on the language), both of which are major research areas in computationally parsing natural language.
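
As a rough illustration of what word-sense disambiguation involves, here is a toy version in the spirit of the classic Lesk algorithm: pick the sense whose dictionary gloss shares the most words with the surrounding sentence. Google’s production systems are statistical and vastly more sophisticated; the two-sense mini-dictionary below is invented purely for the demo:

```python
# Toy word-sense disambiguation in the spirit of the Lesk algorithm:
# choose the sense whose gloss overlaps most with the context words.
# This two-sense mini-dictionary is invented for the illustration.
SENSES = {
    "bank": {
        "financial institution": {"money", "loan", "deposit", "account"},
        "river edge": {"river", "water", "shore", "fishing"},
    }
}

def disambiguate(word, sentence):
    context = set(sentence.lower().split())
    glosses = SENSES[word]
    # Score each sense by how many gloss words appear in the sentence.
    return max(glosses, key=lambda sense: len(glosses[sense] & context))

print(disambiguate("bank", "she opened an account at the bank for a loan"))
# -> financial institution
print(disambiguate("bank", "they went fishing on the bank of the river"))
# -> river edge
```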

The company has placed a lot of stock in the trend toward A.I., but now, with their appointment of Ray Kurzweil as Director of Engineering, it’s going to become a lot more involved. Kurzweil explained his intention to TechCrunch:

Perhaps more than any other company, explains Kurzweil, Google has access to the “things you read, what you write, in your emails or blog posts, and so on, even your conversations, what you hear, what you say.”

Google can combine the personalized recommendations of a friend (who often know us better than we know ourselves) with the sum of all human knowledge, creating a sort of super best friend.

This friend of yours, this cybernetic friend, knows that you have certain questions about certain health issues or business strategies. And it can then be canvassing all the new information that comes out in the world every minute and bring things to your attention without you asking about them.

It’s not just NLP in our phones and in the most widely used search engine, either. The less subtle applications include the use of intelligent robots in manufacturing and the return of a “more” intelligent Furby, among other things.

What we’re seeing now, as a whole, is the result of what’s called “weak A.I.”: machines that do not quite match (or are not designed to match) the intelligence of human beings. This kind of A.I. has also earned the descriptor “applied A.I.” It stands opposed to the “strong A.I.” that some propose we’re headed toward, where machines match or surpass our intelligence — an event that would be called the technological singularity, popularized by Ray Kurzweil as simply The Singularity. The advances still aren’t moving at a pace that keeps up with the most optimistic hopes, but they are moving quickly. Quickly enough, probably, to avoid the “AI winters” of the past, when funding was cut off from A.I. research for lack of the progress promised by optimistic researchers.

There is some debate and discussion as to where we are going with artificial intelligence research. On the one hand, there is no doubt that it is here and real, and we see the implementation of more complex examples like autonomous vehicles; on the other, there are questions about the validity of how A.I. is currently evolving. Noam Chomsky took up that discussion early last year.

To Chomsky, the field of A.I. is evolving in what he feels is the wrong way:

It’s true there’s been a lot of work on trying to apply statistical models to various linguistic problems. I think there have been some successes, but a lot of failures. There is a notion of success … which I think is novel in the history of science. It interprets success as approximating unanalyzed data.

In other words, he is attacking the current state of A.I. as purely statistical models. In an expanded interview, he goes on to voice displeasure that A.I., as it stands, doesn’t fit in with the history of science, where science is supposed to tell us about ourselves. The Director of Research at Google, Peter Norvig, wrote a lengthy reply to Chomsky; the clincher of the discussion from Norvig was:

My conclusion is that 100% of these articles and awards are more about “accurately modeling the world” than they are about “providing insight,” although they all have some theoretical insight component as well. I recognize that judging one way or the other is a difficult ill-defined task, and that you shouldn’t accept my judgements, because I have an inherent bias. (I was considering running an experiment on Mechanical Turk to get an unbiased answer, but those familiar with Mechanical Turk told me these questions are probably too hard. So you the reader can do your own experiment and see if you agree.)

This kind of back-and-forth is nothing new in the field of A.I. In 1976, MIT Computer Science professor Joseph Weizenbaum objected to using A.I. to replace positions that he felt needed human emotion and empathy. Journalist Pamela McCorduck countered, saying:

“I’d rather take my chances with an impartial computer,” pointing out that there are conditions where we would prefer to have automated judges and police that have no personal agenda at all.

Though the ethical and philosophical questions are there, they seem to play a background role in any impending shift toward day-to-day use of artificial intelligence. Robotics companies are making strides, it seems, by the month, and there’s no sign that DARPA funding for intelligent robotic systems is drying up anytime soon. It is all still within the realm of weak, or applied, A.I., but there’s no telling how far off the era of strong A.I. is, particularly when the Director of Engineering at, arguably, one of the most powerful companies in the world is one of its major proponents.

Let us know when you think the shift will ultimately happen. We’re on Twitter @RobotCentral.


As previously posted, the DARPA Robotics Challenge is on. This year the challenge focuses on disaster response, inspired by the “Fukushima 50” — the 50 brave people who were instrumental in containing the post-tsunami Fukushima nuclear plant disaster. There are several “Tracks” for participants to compete in:

  • Track A: these teams will design their own robot and write the software for it
  • Track B: these teams will create software for Boston Dynamics’ Atlas robot
  • Track C: these are international teams which can write software based on DARPA’s DRC Simulator
  • Track D: these are teams that want to participate, but not use DARPA money

So far, the Track A teams have been announced. Most of the teams will be building bipedal, humanoid robots, but the one that caught my attention was the NASA-JPL project: RoboSimian.

RoboSimian, as the name implies, works like a chimp. It has four limbs, designed for stability and for grabbing things. I’m struck by this design mostly for the stability angle.

I’m going to digress here a little: the precursor to Boston Dynamics’ Atlas robot, the Pet-Proto, is apparently limber, but it also looks like it could come crashing down with relative ease. Obviously, the ability to make it stable is there, though one thing I’ve been constantly reminded of is the difficulty of creating a robot that can efficiently mimic human movement. On the other hand, take a robot like BigDog, which works on four legs. You can see how well it stands up even when someone is trying to kick it down.

That’s not to say that stability is necessarily an issue for the other humanoid robots, but I have to wonder how well they can move and maneuver compared to a four-legged robot.

The other robot that I’ll be following closely over the course of the challenge is Virginia Tech’s Tactical Hazardous Operations Robot a/k/a THOR:

Virginia Tech proposed to develop THOR, a Tactical Hazardous Operations Robot, which will be state-of-the-art, light, agile and resilient with perception, planning and human interface technology that infers a human operator’s intent, allowing seamless, intuitive control across the autonomy spectrum. The team will emphasize three essential themes in developing THOR: hardware resilience, robust autonomy and intuitive operation.

The goals of autonomy and intuitive operation make this project well worth watching.

Once the Track B projects are announced, we’ll comb through those as well. Let us know which project you’re most excited about, or if you’re just plain excited about the whole Robotics Challenge. Our Twitter is @RobotCentral or you can hit us up on our Facebook page here.

Image credit DARPA Robotics Challenge.


A.I. Rundown

The inimitable Charlie Rose interviewed Brian Christian of ‘The Atlantic’ and Richard Waters of the ‘Financial Times’ about artificial intelligence, around the time of IBM’s Watson:

Brian Christian’s article on A.I. is here. Below is a video (rather dry, but still interesting) from MIT Tech about how artificial intelligence learns:

Stux Wars

On a superficial level, Stuxnet has played out like a cliché espionage tale or some sort of political thriller. Jason Bourne, James Bond, or what have you.

The super-worm, distributed via thumb drives, was authored to disrupt specific machinery manufactured by Siemens. It happens that the machinery in question was the centrifuges used by Iran’s nuclear program. To save some face, the Iranian government indicated that they had cleaned their networks, acknowledging that they had been infected. Beyond Iran’s networks and centrifuges, Pakistani and Indian networks were hit by the Stuxnet virus as well.

After Iran was hit, two new worms – Duqu and Flame – were found to be closely related to the Stuxnet program. However, instead of disrupting machinery, the point of Duqu and Flame was espionage. They were programmed to record keyboard activity, take screenshots, and record Skype conversations, among other spying activities. Shortly thereafter, the source code of the worm was leaked on the Internet.

After all that, Iran says that they have just combated the Stuxnet worm yet again.

The authors were, and still officially are, unknown, though most speculation points toward Israel developing the Stuxnet worm with copious amounts of help from the United States. In what seems like an attempt to rub Iran’s nose in the situation, American and Israeli officials have reportedly “smiled” at reporters when asked about Stuxnet, and the former IDF Grand Poobah had a going-away shindig that included a video apparently referencing the worm. On top of these possible implicit admissions, countless security experts have come out and said they think the origins of the program were in America or Israel. Of course, nothing can be confirmed for sure. It is still speculation.

There are a few things that make the Stuxnet program intriguing, which Wired exhaustively documented in this article. First is how specialized the code was, in that it was designed to hit a single, specific target: a specific machine. If the worm infected a computer that did not meet the specifications of that target, it did nothing and was likely no cause for concern. When it did find its target, it injected a new set of instructions into the machine in order to destroy, in this case, a centrifuge bought by the Iranians.

Second, in addition to the specialized and sophisticated code, the operation behind getting it out there suggested an author or authors with access to extraordinary resources, which led to the suggestions of United States or Israeli involvement. In this, it established a precedent in cyber warfare. In the words of former CIA Director Michael Hayden, “The rest of the world is looking at this and clearly someone has legitimated this kind of activity as acceptable international conduct.”

Third, the means of distribution. This thing was in the wild for over a year before it came to infect the machine it was looking for. It moved around the world, in a way sneaking from machine to machine until it found the target. When imagining that path, it’s hard not to personify this intelligent program as an animal, something more than a mere piece of software.

In the 60 Minutes profile on Stuxnet, they posed a question that people are asking but that hasn’t been dealt with or answered yet. Commentator Steve Kroft remarked that the release of Stuxnet’s source code onto the web has opened a kind of “Pandora’s Box.” Meaning, since the example is set, there’s no reason variations can’t be made for a pretty penny and a cyber attack launched on our vulnerable infrastructure. The question being: what do we do, or what are we doing, to prevent an attack like that from happening? As far as we know, there have not been any large-scale cyber attacks resembling Stuxnet, but I’m not sure if that’s because computer security has tightened or because someone hasn’t paid out the right price tag for it yet.

Should we even worry about a Stuxnet-like attack on our infrastructure? Let us know what you think on Twitter, @RobotCentral. You can also get us on our Facebook page.

Gary Marcus explains why we haven’t seen full-featured Rosie-type robots around the house yet:

The two biggest challenges to making general-purpose robots are, as they always have been, hardware and software. Neither challenge is insuperable, but both are harder than one might think. On the hardware side, there are now lots of robots that can do incredibly cool things. One robot runs faster than the fastest human, another dances Gangnam style. Still another, PR2, folds towels and fetches beer. The catch is that, at the moment, each new robot is like a proof of concept. The ones that are fast and physically powerful, like AlphaDog, a quadruped robot, and the headless but amazing PETMAN, are, for now, still dependent on hydraulic actuators powered by industrial-strength pumps and gasoline engines; they work fine in a laboratory-test environment, but you wouldn’t want one roaming around your home. Others, like Baxter and PR2, are capable of fairly sophisticated movements, but at speeds that are still too slow to be practical around the home. It might take five minutes just for PR2 to grab you a beer.

How much longer it might take:

In virtually every robot that’s ever been built, the key challenge is generalization, and moving things from the laboratory to the real world. It’s one thing to get a robot to fold a colorful towel in an empty room; it’s another to get it to succeed in a busy apartment with visual distractions that the machine can’t quite parse. Likewise, the demo of a robot running at cheetah speed is amazing, but it’s conducted on the flat, level ground of a treadmill, not in the uneven territory of the real world. “Film and fiction have raised everyone’s expectations about what robots may be able to do,” Tandy Trower, of Hoaloha Robotics and formerly of Microsoft Robotics, said. “I don’t believe we are anywhere near affordable, safe manipulation on a mobile robot that can generalize such features into consumer operations for at least ten to twenty years.” The iRobot founder Rodney Brooks’s predictions were remarkably similar.

The future of humankind is steeped in an unprecedented amount of emerging technology. Increasingly, robots are being used in our homes, in factories, and in war zones. Nanotechnology could enable “personalized medicine” that takes the place of conventional treatment. Artificial intelligence researchers are making strides in learning how the brain works by emulating it in labs.

While these advances are exciting, there’s always another side of the equation to consider. What risks are there? What precautions do we have to take? What could be the unintended consequences of any given technology?

A group of philosophers, scientists, and entrepreneurs is working to start the Centre for the Study of Existential Risk. The aim is a research center that tackles the tough questions about the risks technology poses to humans, the take-away being that the more we know, the better we can prepare ourselves to deal with new technologies.


A More Human Machine

The National Science Foundation just approved a $1.35 million project headed at the University of Texas at Arlington. The goal:

“Our goal is to make robots and robotic technology more human-like and more human-friendly,” said Popa, who leads UT Arlington’s Next Gen Systems group within the College of Engineering. “Robotic devices need to be safe and better able to detect human intent.

“When someone is wearing a prosthetic, we want that prosthetic to be able to determine when a baseball is being thrown at it, then catch the ball.”

The four-year project is part of the NSF’s National Robotics Initiative, which is aimed at accelerating the development and use of robots in the United States that work beside or cooperatively with people. The UT Arlington team’s grant was the largest among the initiative’s 37 awards this fall.

I’ve been gathering items related to armed autonomous drones for the past week. You can see them here, here and here.

While I’m sympathetic to Human Rights Watch’s position, I don’t think they made a convincing enough argument for banning armed autonomous drones. And while I don’t share Evan Ackerman’s optimism, or agree that we should necessarily swap humans out for robots in war theaters, as Marcelo Rinesi argued, I don’t think this is a technology that needs to be banned outright or to have resources devoted to its demise. The last part of Rinesi’s post mirrors my position on the issue the most:

Ultimately, the problem of having a killer drone flying over your head is nothing but the problem of having a killer anything flying over your head. The fact of killing by specifically trained and organized groups of people with the explicit backing of their societies is where has always lain, and should continue to lie, the locus of ethical concern.

That, I believe, is the crux of the discussion. The robots themselves are amoral, and there are still human programmers behind the wiring. Instead of wasting time trying to prevent something that is almost certainly going to happen, we could spend that time preventing things much more within our control, like skirmishes and wars. Even drafting a new set of laws punishing those who use these machines maliciously would be more productive than trying to impede development of the drones via international law.

Throughout this entire thread, one voice — the voice of reason in most situations — has been ringing through my head:

“The Stealth Banana – Smart Fruit”

Noam Chomsky, famed linguist and political activist, sat down with The Atlantic for a long discussion on Artificial Intelligence. Here’s just a bit, but you should read the entire interview:

[The Atlantic’s Yarden Katz:] I want to start with a very basic question. At the beginning of AI, people were extremely optimistic about the field’s progress, but it hasn’t turned out that way. Why has it been so difficult? If you ask neuroscientists why understanding the brain is so difficult, they give you very intellectually unsatisfying answers, like that the brain has billions of cells, and we can’t record from all of them, and so on.

Chomsky: There’s something to that. If you take a look at the progress of science, the sciences are kind of a continuum, but they’re broken up into fields. The greatest progress is in the sciences that study the simplest systems. So take, say physics — greatest progress there. But one of the reasons is that the physicists have an advantage that no other branch of sciences has. If something gets too complicated, they hand it to someone else.

There are select videos from the interview here.

(h/t io9)

Killer Robots, Ctd.

Benjamin Wittes at Lawfare publishes a note from John C. Dehn of the West Point Military Academy about “killer” robots. Dehn goes into how the report’s definitions are problematic:

The report might be discussing only those weapons on the most autonomous end of the spectrum, at one point referring to “fully autonomous weapons that could select and engage targets without human intervention” and at another as “a robot [that] could identify a target and launch an attack on its own power.” Somewhat confusingly, though, the report includes three types of “unmanned weapons” in its definition of “robot” or “robotic weapon”—human-in-the-loop; human-on-the-loop; and human-out-of-the-loop. (p. 2) Thus, the report potentially generates confusion about the precise level of autonomy that the authors of the report intended to target (pun intended), though human-(totally-)out-of-the-loop weapons are the obvious candidate.

Even assuming the report clearly intends “fully autonomous weapons” to include only weapons that independently identify/select and then engage targets, the discussion here (particularly between Ben and Tom) demonstrates that this definition of the term is not without its problems. These problems include: (1) what types of targets should be cause for concern (humans, machines, buildings, infrastructure (roads, bridges, etc.), or munitions (such as rockets and artillery or mortar rounds)); and (2) what is meant by target “selection” or “identification.”

The take-away:

Those of us who have spent many years training soldiers on what constitutes “hostile intent” or a “hostile act” justifying the proportionate use of responsive force are familiar with the endless “what ifs” that accompany any hypothetical example chosen. Ultimately, we tell soldiers to use their best “judgment” in the face of potentially infinite variables. This seems to me a particularly human endeavor. While artificial intelligence can deal with an extremely large set of variables with amazing speed and accuracy, it may never be possible to program a weapon to detect and analyze the limitless minutiae of human behavior that may be relevant to an objective analysis of whether a use of force is justified or excusable as a moral and legal matter.

Ultimately, it seems, one’s view of the morality and legality of “fully autonomous weapons” depends very much upon what function(s) they believe those weapons will perform. Without precision as to those functions, however, it is hard to have a meaningful discussion. In any case, I fully agree with Ben that existing international humanitarian law and domestic policy adequately deals with potentially indiscriminate weapons, rendering the report’s indiscriminate recommendation unnecessary.
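
As an aside, the report’s three “loop” categories are easier to keep straight as a tiny data structure. The sketch below, in Python, is purely illustrative; only the three category names come from the report, and the enum and helper function are my own shorthand.

    # Illustrative only: the report's three autonomy categories as an enum.
    from enum import Enum

    class Autonomy(Enum):
        HUMAN_IN_THE_LOOP = 1      # a human must approve each engagement
        HUMAN_ON_THE_LOOP = 2      # the system acts; a human supervises and can veto
        HUMAN_OUT_OF_THE_LOOP = 3  # the system selects and engages on its own

    def requires_human_approval(level):
        """Only in-the-loop systems wait on an explicit human decision."""
        return level is Autonomy.HUMAN_IN_THE_LOOP

Dehn’s complaint, in these terms, is that the report’s alarm seems aimed at HUMAN_OUT_OF_THE_LOOP weapons, yet its definition of “robotic weapon” sweeps in all three categories.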

At the beginning of the post, Wittes rounds up his discussion with Kenneth Anderson and Matthew Waxman. Previous RobotCentral round-ups here and here.

Killer Robots, Ctd.

On the subject of autonomous armed drones, Marcelo Rinesi would rather the killer be robotic and not human:

The counterargument is obvious: have you seen what already happens in human-driven battlefields? Empirically, soldiers’ ethical constraints are anything but foolproof (naturally so, given their training and the context of war); there’s no reason to think even buggy software will be worse, and software, at least, can be debugged and improved.

The more important issue:

Ultimately, the problem of having a killer drone flying over your head is nothing but the problem of having a killer anything flying over your head. The fact of killing by specifically trained and organized groups of people with the explicit backing of their societies is where has always lain, and should continue to lie, the locus of ethical concern.

Wendell Wallach is concerned about new wars, among other issues (via io9):

“A common concern among some military pundits is that it lowers the barriers to starting new wars,” says Wallach, “that it presents the illusion of a quick victory and without much loss of force – particularly human losses.” It’s also feared that these machines would escalate ongoing conflicts and use indiscriminate force in the absence of human review. There’s also the potential for devastating friendly fire.