National Drones

NatGeo takes a look at drones and how they may be used domestically in the United States. Where the situation stands now:

So far only a dozen police departments, including ones in Miami and Seattle, have applied to the FAA for permits to fly drones. But drone advocates—who generally prefer the term UAV, for unmanned aerial vehicle—say all 18,000 law enforcement agencies in the U.S. are potential customers. They hope UAVs will soon become essential too for agriculture (checking and spraying crops, finding lost cattle), journalism (scoping out public events or celebrity backyards), weather forecasting, traffic control. “The sky’s the limit, pun intended,” says Bill Borgia, an engineer at Lockheed Martin. “Once we get UAVs in the hands of potential users, they’ll think of lots of cool applications.”

The biggest obstacle, advocates say, is current FAA rules, which tightly restrict drone flights by private companies and government agencies (though not by individual hobbyists). Even with an FAA permit, operators can’t fly UAVs above 400 feet or near airports or other zones with heavy air traffic, and they must maintain visual contact with the drones. All that may change, though, under the new law, which requires the FAA to allow the “safe integration” of UAVs into U.S. airspace.

If the FAA relaxes its rules, says Mark Brown, the civilian market for drones—and especially small, low-cost, tactical drones—could soon dwarf military sales, which in 2011 totaled more than three billion dollars. Brown, a former astronaut who is now an aerospace consultant in Dayton, Ohio, helps bring drone manufacturers and potential customers together. The success of military UAVs, he contends, has created “an appetite for more, more, more!” Brown’s PowerPoint presentation is called “On the Threshold of a Dream.”


A Step Closer To T-1000?

Gizmodo takes a look at a new nanotech “skin”:

The applications for this sort of tech—beyond just doing stuff that just looks cool—could be anything from extendable antennae to super flexible wires. Use cases where you need metal, but not its characteristic rigidity. Because this sort of coating is in its early stages though, there’s no telling whether or not it’ll be possible to produce little galinstan pellets like this at anything resembling a reasonable cost, or that the coated metal will last long enough to be worth using—for Terminators or otherwise.


Treating Robots As Humans

Fast Company’s Anya Kamenetz looks at studies that suggest robots work better when you treat them as your equal:

If your job is in manufacturing, medicine, mining, automotive repair, underwater or space exploration, maybe even elder care, some of your coworkers are probably semi-autonomous programmable mechanical machines–in a word, robots. But humans and robots don’t understand each other well, and they work very differently: a robot does exactly what it’s told, over and over, taking the same amount of time every time, while a human acts with intention, deliberation, and variation. Their strengths and weaknesses can be complementary, but only if they each have good models of how the other works.

In a breakthrough experiment, the Interactive Robotics Group at MIT discovered that cross-training, which is swapping jobs with someone else on your team to help everyone understand the work better, works even when your coworker doesn’t have a mind. In short, when humans and robots model doing each others’ job they end up working together more smoothly.


Making Robots Legit

Over at Wired UK, Mark Piesing navigates the difficulties of building a legal framework for robots:

While Joseph Engelberger, one of the fathers of robotics, was happy to admit that “I can’t define a robot, but I know one when I see one”, the RoboLaw project decided at the beginning to narrow this down a bit by looking at a wide range of “things” at home, from a robot arm to “softbots”, and hybrid bionic systems such as hand prostheses.

The list, says Salvini, takes into account autonomous robots, including neurobiotics — robots controlled via a brain-computer interface — and service robots that operate in the home, cities and other public roles.

The next task of the project was to do phased research to identify what existing regulations apply to robotic technology, how the consequences vary from country to country, and what is happening in other disciplines. The result was a series of case studies which the roboticists, lawyers and philosophers explored to find possible solutions to future problems.

Now with a year to go they are at last coming to conclusions — even if these are confidential until they’ve been shown to the European Commission.


Housekeeping Note, Ctd.

Hello all,

You may have noticed that my contributions to the blog have dropped off dramatically as of late; that is not a fluke, and it will continue to be the case from here on out. This is for a few reasons, including time, ability and my evolving personal views on robotics and technology. You may see posts from me in the future, but not in the manner in which I've been contributing thus far. I'd like to thank you, the readers, for letting me talk at you, and also Ray Renteria, the owner of RC, for giving me this invaluable platform on which to write.

All the best,
Eric Wind

Pop Drones

Joshua Kopstein, at the Verge, explores the effect of drones on pop culture:

One prime example is street artist Essam Attia’s satirical NYPD drone posters, which displayed the Predator using the recognizable branding language of Apple’s iPod ads. Attia, a former geo-spatial analyst who served in the Iraq War, told Animal New York that he placed the mock ads to “start a conversation” about the seemingly inevitable scenario of police surveillance (and maybe even enforcement) by unmanned aircraft — a mission which didn’t sit well with the NYPD, who arrested him on 56 counts of criminal forgery.

In Pakistan’s semi-autonomous tribal regions, where over 3,000 drone-related deaths have been reported since 2004, the effects of flying robots require no such imagining. Last September, an upbeat love song by Pashto singer Sitara Younas became hugely popular on YouTube just before the service was cut off by Pakistani authorities. Its title, and repeated chorus: “My gaze is as deadly as a drone attack.”

“It’s been a hit because people like the music and the movie that it was written for,” lyricist Khalid Shah Jilani told The Guardian. “Now you hear it all the time being played at wedding halls and in cars.”


An Exclusive Interview With Heather Knight

As promised yesterday, here is my discussion with Heather Knight of Marilyn Monrobot. I'd like to thank Heather for taking the time to speak with us after Dr. Ishiguro's lecture. We hope you enjoy the interview below.

Eric Wind: How’d you come to speak at the lecture tonight?

Heather Knight: I was invited to come speak at the lecture. There aren’t that many roboticists in New York City and I’m not sure whether they found me or whether [Erico Guizzo] recommended me because of our common interest in robots and theater.

E: What part of the conversation did you find the most intriguing and most beneficial for the audience?

H: That's a good question. It's always interesting to speak to a general-public crowd. This was a really interesting evening because you had the Japanese Cultural Society and then the people interested in robots coming. It prompts a more cultural discussion to begin with, because you're in New York City at this, you know, Japanese house of culture. So, having an American roboticist and a Japanese roboticist, we both have similar research interests in that we think social robots are really important and we want to make these robot-companion-type situations. Although, I would say that I want robots to help people connect, rather than being the connection.

He and I both have a strong interest in theater and in thinking about algorithms we can learn from directors and actors, which are codified a little bit differently than in psychology. Though, psychology is a field which social robotics adopts methodology from. So, we have a lot of similar interests, but we're also very different, so it was fun.

You know, he’s a lot more experienced than I am. Hiroshi Ishiguro has been working on robotics for several decades and has a very established lab in Japan. I’ve been working on robots for 11 years, and it’s not like I haven’t done anything, but he’s been a huge inspiration for me and it was really exciting to be able to be in that situation.

E: When did you first hear about his work?

H: Well, it was at least 7 or 8 years ago. I started doing social robotics when I was an undergrad at MIT, working with a professor named Cynthia Breazeal. She made this robot for her PhD called Kismet, which was basically a head that had ears and eyes. It wasn't trying to be a super-humanoid; it was almost creature-like. It didn't use words, but it kind of babbled. It responded to the tone of your voice. Sometimes it would be in the mood to play with toys or color-saturated objects, or sometimes it would be in the mood to socialize. I think one of the more clever aspects of its behavior system is that it would get bored. So, if you weren't being interesting, then it would switch to wanting to play with toys, which seems really human. It's a really simple set of behaviors. But anyone could walk up to this robot without any training and learn how to interact, because it would be like "Hello!" and its ears would perk up, or if you said "You've been a very bad robot!" it would make a sad face or something.

So, it's responding with sound, and with these facial expressions that are perhaps a little cartoon-like, not fully human but still relatable. I think it's really compelling to think about the simplest ways to come up with human-like robots. That's often the most difficult thing to do. As an engineer, it's always more difficult to have a simple solution that's very clever than it is to have a convoluted Rube Goldberg machine for making breakfast or something.
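To make the "getting bored" idea concrete, here is a toy Python sketch of that kind of behavior loop. It is purely illustrative: the activities, rates and thresholds are all invented, not Kismet's actual architecture.

```python
class KismetToy:
    """Toy behavior loop inspired by Knight's description of Kismet
    (illustrative only, not Breazeal's actual architecture): the robot
    sticks with an activity while the stimulation it gets stays
    interesting, and accumulated boredom makes it switch activities."""

    ACTIVITIES = ("socialize", "play_with_toys")

    def __init__(self):
        self.activity = "socialize"
        self.boredom = 0.0

    def step(self, stimulation):
        # Engaging stimulation (0..1) suppresses boredom; dull input lets
        # boredom build until the robot switches to the other activity.
        self.boredom = max(0.0, self.boredom + 0.4 - stimulation)
        if self.boredom > 1.0:
            current = self.ACTIVITIES.index(self.activity)
            self.activity = self.ACTIVITIES[1 - current]
            self.boredom = 0.0
        return self.activity

robot = KismetToy()
# An interesting person keeps the robot socializing; once the person
# turns dull, boredom builds and the robot goes off to play with toys.
for quality in (0.9, 0.8, 0.9, 0.1, 0.0, 0.1, 0.0, 0.1):
    print(robot.step(quality))
```

The point of keeping the loop this simple is exactly the one Knight makes: a couple of drives and a boredom counter are enough to produce behavior that reads as remarkably human.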

E: Could you expand on your background a bit more?

H: Sure. I have a Bachelors and a Masters in Electrical Engineering and Computer Science, and now I’m working on a PhD in Robotics. So, you might think I’ve been in school for the last 15 years but actually I do other things along the way. I’ve taken breaks to travel and do other things, but I got a chance to work at the [NASA] Jet Propulsion Laboratory in California on space stuff. While I was there, I met people who were working in art and technology, and who I ended up collaborating with.

Originally, I was in SyynLabs, we were building installations for events. For example, you’d have a projection wall where people would come up to and dance, and things would fall on them and roll off their shadow. It was these installations that are in an event setting that would get people talking to each other. So it’s that technology that is fun and playful, and when we started building other things, like a bicycle-powered blender or, I don’t know, we had this one installation where you had to create a human circuit to hear a story — so, if you wanted to hear the full story, you needed to have a group of, like, 10 strangers holding hands. You’re using technology to trick people into spending time with each other. We used to call it “technological inebriation.”

It was really technology for people, and I think that’s a great metaphor for some of the robots I want to make. I don’t want to make robots for the sake of replacing people, or, I don’t know, for their own good. I think you can use robotics in these interactive art pieces to bring out features in ourselves or connect us to each other. Like, there’s autism therapy applications, where kids with autism feel more comfortable talking to robots than people because it’s less overwhelming — less sensory overload. If they practiced with this robot, this kind of stepping-stone agent, then they could better integrate generally, or get used to those more useless but still socially important aspects of interaction.

E: What was your Masters thesis on?

H: I did my Masters thesis on this project called the Huggable. It was a robotic teddy bear that had a full-body sensing skin. I was trying to come up with a way to make that sensing happen in real time, so it could react naturally. It's like if you were to pick up something like a puppy or a baby. So, how do we communicate with puppies and babies? You pet them, hold them or you might tickle them. If they're asleep, you pat them to wake them up. All of that communication that is happening is very complex. Anyone who has a small child or has played with a small child could tell you that the child knows what it wants, but it's not verbal. So how can you create pre-verbal interactions?

My thesis was on what kind of touch gestures do we use to communicate with this robot teddy bear. This involved human studies which included an audio puppeteer. So, if someone was pretending to be the robot, it sees the video and it’s natrually reacting and its sensors are trying to determine how people are communicating with it.
Basically, I get this data corpus to see how people use touch to communicate. It becomes a pattern recognition problem, where you have to categorize how people use touch and then you have to think about “how can I detect this?” Since I was trying to build a system that would work in real-time, one of the things I discovered is with touch, you don’t need really fine tuned sensing. As long as you cover an area that is two by three inches, you’re going to capture most communication. You don’t need a really fine grid.

The second thing is most touch lasts one to five seconds, so the connection doesn’t need to be particularly quick. Within that, you need to do some frequency analysis. For example, tickling is a very noisy signal. It involves a lot of different signals. Petting is more of a regular sine wave. Then, you can see how you differentiate between these different kinds of touch.

My degree was in Electrical Engineering, so it was designing the sensor system but it was also coming up with a simple pattern recognition system.
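To make the frequency-analysis idea concrete, here is a minimal Python sketch that separates a regular "petting" signal from a noisy "tickling" one with an FFT. The sample rate, test signals and threshold are invented stand-ins, not the Huggable's actual pipeline.

```python
import numpy as np

def classify_touch(signal):
    """Crude tickle-vs-pet classifier: petting concentrates spectral
    energy in one low-frequency peak, while tickling smears energy
    across many frequencies. (Illustrative threshold only.)"""
    spectrum = np.abs(np.fft.rfft(signal - np.mean(signal)))
    concentration = spectrum.max() / (spectrum.sum() + 1e-9)
    return "pet" if concentration > 0.3 else "tickle"

t = np.linspace(0, 3, 150, endpoint=False)             # 3 seconds at 50 Hz
pet = np.sin(2 * np.pi * 2.0 * t)                      # regular ~2 Hz stroking
tickle = np.random.default_rng(0).normal(size=t.size)  # noisy, broadband
print(classify_touch(pet), classify_touch(tickle))     # -> pet tickle
```

Note how this lines up with what she says: a one-to-five-second window is plenty of signal for this kind of spectral comparison, so the system can stay real-time.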

E: What’s your doctoral thesis, and how’s the progress?

H: I haven’t declared my thesis yet. I have finished my coursework, and I’m in the prep for that. Then we have qualifiers and so on, and I’m in the very final stage of my qualifiers. I will complete those this semester and hopefully put forth my proposal in the fall.

E: Do you have any idea of what your thesis proposal will be?

H: Yeah, so I learned that you’re not supposed to propose until you’ve already finished some of the work. That way, you’re not proposing something you’ve never done but you’re proposing something you’ve already tried out, so you know it has a chance of working.

People usually propose when they have 20-40% of the work done, in our department. I'm hoping it's going to be about expressive motion. Basically, how can the non-anthropomorphic be expressive? I'm interested in how motion can describe the state of a relationship: "Do I know you?" "Do I not know you?" "Do I like or dislike you?" "Are you my boss, or am I your boss?" Power relationships are important. Then there can be room for emotions. Or, something else that's interesting is trying to measure how much a robot is in a rush by how quickly it's going. We can see that with drivers and cars now. It's just a question of whether we can categorize that in a general way.

I might get better at my elevator pitch in a couple of years, but the basic idea is to see if there are some universals of expression that we can distill to use on non-anthropomorphic robots. It’s basically robot body language.
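As a rough illustration of reading "how much of a rush" an agent is in from its motion, here is a toy Python sketch. The weights and units are invented for illustration, not drawn from Knight's research.

```python
def urgency_score(positions, dt=0.1):
    """Toy estimate of how 'rushed' a moving agent looks, based on its
    speed and how abruptly that speed changes. Weights and units are
    invented for illustration."""
    speeds = [abs(b - a) / dt for a, b in zip(positions, positions[1:])]
    accels = [abs(b - a) / dt for a, b in zip(speeds, speeds[1:])]
    avg_speed = sum(speeds) / len(speeds)
    jerkiness = sum(accels) / len(accels) if accels else 0.0
    return 0.7 * avg_speed + 0.3 * jerkiness  # arbitrary mixing weights

ambling = [0.0, 0.1, 0.2, 0.3, 0.4, 0.5]   # slow, steady positions over time
rushing = [0.0, 0.5, 1.2, 1.8, 2.8, 3.5]   # fast, uneven
print(urgency_score(ambling) < urgency_score(rushing))  # -> True
```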

E: What got you interested in robotics?

H: I didn’t grow up obsessed with robots. I fell in love with robots when I started building them. So, I went home in my Freshman year at MIT, and I was talking to people in my living group and I was asking about an internship. Someone said, “Hey! I work in a robotics lab. I could probably get you a position.”

So, I just started working there, January — maybe 2002. Over the summer, in Cynthia Breazeal's first year as a professor, we had this big group project to kick off our research group. We built this big interactive terrarium and brought it to a big conference in San Antonio, and we were in the emerging technologies exhibit. You know, it was kind of like Epcot Center. There was this big robot that had this hand-thing that would see people, say "Hello!" and then it would get bored and then go play in the waterfall, then it would get tired and turn in for the night in this cave. We went really crazy. There were these rock crystals that would turn on, and these drums you could play with, and these fiber optic tube worms that I got to put together. I was 18 and it was awesome. By the end of the 5 days, I could restart the whole system myself and I could talk to all these different people. It wasn't just getting to build the robot and see it move, it was seeing people interact with the robots.

E: What do you feel sets you apart from other roboticists?

H: I don't know. I definitely have fun with what I do. My father was an engineer, and he would design propulsion systems for ships and submarines. He's really great at math and physics. My mother was a Peace Corps volunteer, all about international understanding, so she really wanted to impact the world.

I like building things and I like solving problems, and then my mother’s voice is in the back of my head saying “Well, why do people care about this?” I think that’s one of the reasons I didn’t want to do space stuff anymore. I wanted to impact real human beings. So, I don’t know how different that is but I really like imagining the future.

E: What’s your favorite project that you’ve worked on so far?

H: Well, if you asked what my favorite robot is, then I would be in really big trouble back at home if I didn’t say Data.

I don't know, there have been so many projects I've been involved in in different ways. So, the precursor to the Rube Goldberg machine on YouTube is the OK Go music video. That was the project where I thought, "Oh my god, you could learn so much from professionals." The band made that machine so much cooler than if we had built it by ourselves. They are professional entertainers and they have this intuition about what audiences care about and how to reach people. It's part of the motivation I've gained in wanting to work with actors.

What I left out before: I want to work with actors, dancers and directors to help craft these expressive motions that I'm trying to find universals for in robots. I'm really interested in seeing how we can adopt bodies of knowledge from theater into robotics, or from disciplines of art that people have spent hundreds or thousands of years honing, rather than trying to reinvent the wheel as engineers. We can make engines work, and suddenly we're trying to make these socially intelligent machines. Like, are engineers really the best people to be making socially intelligent machines? There's some sort of weird clash there.

So, I’m trying to distill knowledge from a non-technical field into a world where you can program stuff. Some of that has been about creating interfaces where you can have kinetic conversations.

E: How would you explain social robotics and its significance to the average person?

H: Social robotics is the idea that you can make the human-robot interface smooth. So, instead of teaching you how to program the robot, you can just walk up to the robot and communicate and figure out the interface for it.

Social robotics is super-important if you ever want to have humans and robots working together in pairings that aren't programmer-and-robot. Right now, we don't really have that. We have tons of robots on industrial manufacturing floors and sorting our mail, and we have sent them to the surface of Mars. But to do everyday things with robots, we have to create an interface to make that possible.

E: What's the idea behind Marilyn Monrobot Labs, and what drove you to start it?

H: I'm really interested in the intersection between robotics and theater. As much as I get to explore that as a researcher, I also think there is artistic value to that intersection. Marilyn Monrobot lets me explore that. So, it's the umbrella name for our robot theater company. It's where we do our robot-comedy stuff and the robot film festival. Last year, we did a robot cabaret variety show with 10 acts, exploring how the modern world is already a cyborg society because of our dependence on phones. It's allowed us to consider the changing ethical ramifications of our changing relationships with each other, via technology. Like, you hear about freshmen who arrive at their new college with like 200 Facebook friends there, but they don't know how to talk to someone at the orientation party. So, are we losing our humanity to technology? Obviously, I'm not a pessimist about technology, but I think it's equally naive not to think through where technology can go.

E: How did you decide on the name, Marilyn Monrobot?

H: Well, the JPL is really flat. You don't really have parking garages in earthquake country, so instead we had this 20-minute walk from my office to the enormous parking lot. Of course, seniority is how you actually get to park close to your office, but since the average age there is 50-something and the average working span is 30 years, we were kind of the kids. So it just kind of came to me walking through the parking lot.

I also found out later that Marilyn Monrobot was a Futurama episode, or it was a segment, which is fantastic. I didn’t know about that at the time. But, it’s supposed to represent this intersection between robotics and entertainment.

E: Could you tell us about the robot census and how that’s going?

H: So, the robot census started when I first arrived at Carnegie Mellon University. They do this thing where when you first arrive, you don’t know who your adviser is going to be but that is your most important relationship during your PhD. The average time for the degree is 5 and a half years, so some call it the marriage process. It’s longer than some marriages.

I was going to school and there were 500 other people working in robotics in some capacity, and we’re supposed to choose our adviser out of the 80-something professors. We didn’t even know who had what robot. Like, I’m at the Robotics Institute and, obviously I have to partially choose my advisers by what kind of robots they have, right? If this is our marriage, then they have children.

So, I started this census on campus and people thought it was interesting and I opened it up to the world. I think it should be done every four years, kind of like this other census you may have heard of that involves the population of the United States.

E: Is it difficult rounding up information for the robot census?

H: Yeah, even in person on campus. I think campuses should run their own censuses and collect information. We had to hand out physical forms and then send links out to the digital form. It was like marketing. I had no idea, but you should feel okay sending up to ten reminders. But we didn't do that; we went in person after a while. So, there were a few that probably slipped through the cracks, but I'm sure that's true of other censuses.

E: How many robots have you documented?

H: We’ve documented 547 robots on campus. There’s an off-campus facility for robotics, but we didn’t do the census there, though I would love to expand to that.

E: Do you feel that the anxiety people have could be attributable to the perceived lack of sociability of robots?

H: No, I think it’s religion. Fear of robots is a Western culture thing. It’s this idea that we’re usurping the role of God, and it’s kind of like Frankenstein because we’re doing what we should not be doing — you know, what we’re doing is wrong and we will be punished. It’s tapping into mythology.

Storytelling is a cultural phenomenon. It's not based in reality. It's based in human perception and culture and so on. So this idea that we're not supposed to be playing God, and that if we try to play God it will go really wrong, that's a religious thing, in my opinion and other people's opinion. This is well documented.

Now, if you look at the Shinto faith, they believe that all objects, people, animals, mountains, have the same spirit. There is no hierarchy. They have a really high value of nature, and rocks, and robots, so spiritually everything is on equal footing. The other detail is that these spirits naturally want to be in harmony. So, when you look at Frankenstein or the Terminator versus… Astro Boy, that's revealing our culture. It's not about the technology; it's about the belief system. Regardless of whether you were raised going to church or temple, this permeates our culture.
So, like even in Japan where a lot of people are Christian now, this Shinto belief system has permeated their expectations of what happens with technology.

E: Do you see the robotics industry trending toward social robotics?

H: It's early research now, but I think charismatic machines have more applications in the short term. Social robotics may take a little longer. Like, the idea of Siri being really popular. That's a charismatic technology. I think what we learn in social robotics can be cross-applied into real technology because what we're doing is creating interfaces between technology and people. So, what we learn about sociability can be applied to non-social machines. Hiroshi would probably have a different opinion there.

E: What do you find is the biggest barrier in getting people interested in robotics? Do you think it’s exclusively religion or cultural?

H: When people haven't met a robot and they're just thinking theoretically about technology, then you get the Terminators and then you have the Singularity people. Those are like the two most popular mythmaking things at the moment. That doesn't mean we don't have positive storytelling. I mean, we have Rosie the Robot and we have Wall-E. I think stories really inspire what we make.

Throwing back to the previous conversation of robots in Japan, they invest so much in companion robots and music and things for the elderly, etc. And what is the U.S. known for in robotics innovation right now? The biggest is military robots. That doesn’t mean there’s not a lot of research in other kinds of robots, but what we’re famous for is military robots.

E: Do you have an end-goal for your research and projects?

H: Shape the future.

E: Are you concerned about people using your technology for negative ends?

H: I think it's really important to think about that. I should think that would be a common part of engineering education in general: thinking through the ethics and where you're going with stuff. In the world of art, and even architecture, critique is a natural part of the process. It would be great if we critiqued our designs on more than just meeting certain performance criteria. The bigger grant organizations, like the National Science Foundation, do ask for broader-impact stuff, but they don't really ask how things can be misused.

E: Do you think there’s a reason for that?

H: For me, and this is theoretical, engineers were never the heads of companies. They were the people who could help the people who started the companies solve specific problems. Historically, in this bigger company construct, our job was not to create ideas. These days, within the last 30 years, engineers and technologists are starting companies and we are the idea people, but the education hasn't shifted. So, we're still educated as if we are cogs in the larger industrial machine, whereas other people are thinking about "Where is this going?" Sometimes that's about money, but at least there was someone to think about that stuff. Maybe they had training in that, I don't know.
But I think it's a legacy from engineers' jobs before.

E: Kind of shifting gears, it seems like robotics, and technology in general, has drawn more men to the field than it has women. From your experience, do you feel that’s the case?

H: Well, I was spoiled because MIT is like 45% women. So, I didn't really feel that way. When I worked, the ratio was something like 1/3 women to 2/3 men in the U.S. In Europe, it's more like 9/10 male and 1/10 female.
I never really thought about it until I was several years into doing what I was doing. I always idolized my dad, so I kind of always felt like I wanted to be an engineer. I mean, there are definitely some legacy issues with gender, but things are moving in the right direction for sure. I think it’s much easier to change things at the undergraduate level, but it takes much longer for those changes to percolate into other levels of companies or academia. And you definitely get an idea of that, like, for example, I’m pregnant right now and CMU has no maternity leave policies. And I don’t know, academia just doesn’t think about those things sometimes.

E: Is there anything more that can be done to draw women into the field?

H: We’re actually doing a great job at attracting people, but we’re not doing so great at keeping people.

E: Why?

H: I think there are a lot of great articles about it. I think one of the titles of the articles is The Leaky Pipeline. I don’t know, people identify things like mentoring. It’s really important to have a good mentor, no matter what the gender is, according to research. Just having someone support you, whether you’re a minority, female or any other group that isn’t typically represented.

I'm really excited about a world where engineers aren't just cogs in the machine and really are creative, and the more you move in that direction, the wider the breadth of people you draw in, whether male or female. It's about getting more creative people into the field, and I would love to see that prioritized.


An Evening With Dr. Hiroshi Ishiguro

On February 6, Dr. Hiroshi Ishiguro, professor in the Department of Systems Innovation at Osaka University, traveled to the Japan Society in New York City to give a lecture on the future prospects of humanoid robots — or androids. My wife, Jen, and I made the trip as well.

The theater at the Japan Society was packed, and the crowd spanned all ages. There was a bustling energy to the evening, and a slide featuring the Geminoid F android was projected prominently. The title on the slide was “Studies on Humanoids and Androids,” though the official title of the lecture was “How to Create Your Own Humanoid.” After everyone settled in, Dr. Ishiguro was introduced and he began.

He is a stately-looking man, and he took a professorial stance at the podium. Throughout the lecture, he gave an overview of his work in android development and what he saw in its future. His talk was organized as a series of questions that, taken as a whole, asked whether the line between human and robot would ever disappear. In so many words, the answer: it's unlikely right now.

Dr. Ishiguro explained that there are so many nuances in human behavior and speech that it would be incredibly difficult to create a robot that could act fully human. It's a little akin to the Replicants in Blade Runner — “we” had created robots (“Replicants”) that could mimic humans in most ways, but you could still tell, with a test, whether someone or something was human or Replicant. He even offered up a paradox: with robots, we can create the “perfect” human, but we can't make a robot human.

He made this point through a number of examples, the most prominent being an attempt to agitate an android by poking it repeatedly. Its behavior wouldn't deviate accordingly; humans have odd ways of reacting to stimuli that robots aren't capable of. However, to illustrate the point that we can at least make “perfect”-looking robots, he put up a video of a busy cafe and asked us to point out which patron was the robot. I certainly couldn't.

The unreality of robots aside, Dr. Ishiguro explained that his real motivation behind studying robots is human psychology. The example that stands out to me at this moment is an experiment he did with one of his androids. While he was in Osaka, he directed some colleagues to plant an android in a cafeteria in Munich. From Osaka, he spoke through the robot and invited people to come, sit and speak with it. What he found was that people were more than willing to open up and spill about their problems. It was intriguing, and I imagine people feel comfortable talking to the robot because of a perceived lack of judgment.

It’s examples like that which drew Dr. Ishiguro to robotics, rather than necessarily making the next big technological advance. With that, the lecture came to a close and the panel with Heather Knight, of Marilyn Monrobot, and Erico Guizzo, of IEEE Spectrum, began.

The panel was kicked off by a poem reading from the Geminoid F android, which was equal parts beautiful and creepy. After that, Guizzo moderated the discussion between Knight and Dr. Ishiguro. The talk wove between the use of robots in theatrical settings and where social robotics is going. Knight explained her interest in robotics and in using her robots in theatrical settings.

After the discussion, the floor opened up to questions. For a night that was dominated by non-technical subjects and trying to bring robotics to a wider audience, the questions were — somewhat disappointingly to me — mainly geared toward the technical aspects of the Geminoid or of robotics generally.

Once the talk let out, there was a small reception. After it all wrapped up, we sat down with Heather Knight for a wide-ranging discussion. That interview will be posted up tomorrow.

Were you at the discussion, too? Let us know what your experience was on Twitter at @RobotCentral.


Everyone Chill

The Atlantic’s Derek Thompson pleads for a little sanity when discussing the future of robots and humans. Robots still lag behind humans in the things that matter:

As robots move off the factory floor in the next 20 years, the effect on well-being and income will be complicated and impossible to predict. On the one hand, we should root for more automation. More robotics in the hospital, for example, could make surgeries cheaper and safer. But the mass-market depends on our workers also being our consumers. What does it mean when more work is done by machines that don’t consume anything? Where does the money go, if not to the lucky owners of the robots themselves? If tomorrow’s robots are smarter than people — not just high-school graduates, but also college graduates — what happens to the incentive to invest in education? Should we respond by giving each newborn a check? … a stock portfolio!? … a robot of her own?!

These are fun and scary ideas, and they’re fun and scary to think about. But let’s calm our warm-blooded nerves by remembering that the current stock of humanoid robots is still remarkably primitive, as Brynjolfsson and McAfee acknowledge themselves. They look creepy. They struggle with people skills. They fall down stairs. They’re bad at problem-solving. They’re not very creative.


Artificial Brains

Fast Company’s Lakshmi Sandhana looks at the path to an evolved robot that can walk naturally. The process has necessitated the development of artificial brains. Where we’re going with it all:

Grand dreams aside, what it means at present for the team is evolving brains that can go beyond figuring out simple things like gaits to more intelligent behaviors like learning. They’ve 3-D printed an advanced quadruped robot called Aracna, to further examine evolved gaits. The next step is to evolve larger, more modular brains that will hopefully approach natural brains in complexity, opening up the possibility of creating an entirely new breed of robots.

“Evolutionary computation has already produced many things that are better than anything a human engineer has come up with, but its designs still pale in comparison to those found in nature,” states Clune. “As we begin to learn more about how nature produces its exquisite designs, the sky’s the limit: There’s no reason we cannot evolve robots as smart and capable as jaguars, hawks, and human beings.”
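For readers wondering what "evolving" a gait actually involves, here is a minimal genetic-algorithm sketch in Python. The real work evolves neural-network brains scored in physics simulations; everything below, from the fitness function to the parameters, is an invented stand-in.

```python
import random

random.seed(1)

def fitness(genome):
    # Stand-in for "distance walked": rewards a target stride frequency
    # and amplitude. A real setup would score a physics simulation.
    freq, amp = genome
    return -((freq - 2.0) ** 2) - ((amp - 0.8) ** 2)

def evolve(pop_size=30, generations=40, mutation=0.1):
    population = [(random.uniform(0, 5), random.uniform(0, 2))
                  for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        parents = population[:pop_size // 4]      # keep the fittest quarter
        population = [(p[0] + random.gauss(0, mutation),
                       p[1] + random.gauss(0, mutation))
                      for p in random.choices(parents, k=pop_size)]
    return max(population, key=fitness)

best = evolve()
print("evolved gait: %.2f Hz stride, %.2f amplitude" % best)
```

The same select-mutate-repeat loop, pointed at a harder fitness landscape, is what produces the gaits and, eventually, the learning behaviors the team describes.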


The Driverless Car Future

Felix Salmon considers the ups of driverless cars, including the safety benefits:

If and when self-driving cars really start taking off, it’s easy to see where the road leads. Firstly, they probably won’t be operated on the owner-occupier model that we use for cars today, where we have to leave our cars parked for 97% of their lives just so that we know they’re going to be available for us when we need them. Given driverless cars’ ability to come pick you up whenever you need one, it makes much more sense to just join a network of such things, giving you the same ability to drive your car when you’re at home, or in a far-flung city, or whenever you might normally take a taxi. And the consequence of that is much less need for parking (right now there are more than three parking spots for every car), and therefore the freeing up of lots of space currently given over to parking spots.

Paul Krugman is on board:

By and large, I’m in the camp of those disillusioned about technology — mainly, I think, because the future isn’t what it used to be. A case in point is Herman Kahn’s The Year 2000, a 1967 exercise in forecasting that offered a convenient list of “very likely” technological developments. When 2000 actually did roll around, the striking thing was how over-optimistic the list was: Kahn foresaw most things that actually did happen, but also many things that didn’t (and still haven’t). And economic growth fell far short of his expectations.

But driverless cars break the pattern: even Kahn’s list of “less likely” possibilities only mentioned automated highways, not city streets, which is where we will apparently be in the quite near future.


The Risks Of Nanotech

Sarah Wild, at Business Day in South Africa, explores the risks surrounding nanotechnology, particularly when applied to areas like water treatment. A potential way around the risks:

Prof Wepener suggests that production-side management is the way to manage risks.

“(Detection difficulty) is why you are not going to get environmental legislation in terms of levels in the environment, but you can legislate the amount that can be released…. It requires regulation at the production phase, rather than retrospectively.”

The positive side is that “in the past, SA has been playing catch-up on environmental issues … fortunately, with nanotechnology, we’re involved in the studies as they are happening in Europe and North America.”


Sniffing The Way To Work

Wired is reporting that the U.S. Navy is working on a new kind of shipboard robot that would “smell its way to weapons prep” by following an artificial pheromone. The challenge:

It’s not going to be that simple, though. If the project works, the sniffer-robots will begin deep below the carrier’s water-line, hauling bombs from nine levels underneath the flight deck into a series of elevators, before ending up at an assembly point on the deck called the “bomb farm.” Once there, the chemicals will have to withstand winds whipping over the deck, and “must be stable enough during direct contact with petroleum products,” withstand temperatures above 200 degrees Fahrenheit, and fade after a mere 20 minutes — thereby preventing other robot swarms with different instructions from getting confused when moving down the same hallway.
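That 20-minute fade is essentially a decay constant. Here is a toy Python sketch of the timing logic, assuming (purely for illustration) exponential evaporation and a detection threshold; both numbers are invented, not from the Navy project.

```python
HALF_LIFE_MIN = 5.0        # invented: evaporation half-life of the chemical
DETECT_THRESHOLD = 0.07    # invented: weakest trail a sniffer-robot can follow

def trail_strength(initial, minutes):
    """Exponential decay of an artificial pheromone trail over time."""
    return initial * 0.5 ** (minutes / HALF_LIFE_MIN)

# With a 5-minute half-life, a full-strength trail falls below the detection
# threshold right around the 20-minute mark, so stale trails vanish before
# a swarm with different instructions moves down the same hallway.
for minutes in (0, 10, 20, 30):
    s = trail_strength(1.0, minutes)
    print("t=%2d min  strength=%.4f  followable=%s"
          % (minutes, s, s >= DETECT_THRESHOLD))
```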

How Easy Would It Be For You To Kill A Robot?

NPR profiles an experiment by Christoph Bartneck that looks at human empathy toward robots. In Bartneck's experiment, the human subject would play games with a robot, and sometimes the robot would mess up. Afterward, the human subject was told to turn the robot off:

At the end of the game, whether the robot was smart or dumb, nice or mean, a scientist authority figure modeled on Milgram’s would make clear that the human needed to turn the cat robot off, and it was also made clear to them what the consequences of that would be: “They would essentially eliminate everything that the robot was — all of its memories, all of its behavior, all of its personality would be gone forever.”

In videos of the experiment, you can clearly see a moral struggle as the research subject deals with the pleas of the machine. “You are not really going to switch me off, are you?” the cat robot begs, and the humans sit, confused and hesitating. “Yes. No. I will switch you off!” one female research subject says, and then doesn’t switch the robot off.

“People started to have dialogues with the robot about this,” Bartneck says, “Saying, ‘No! I really have to do it now, I’m sorry! But it has to be done!’ But then they still wouldn’t do it.”

The Rise Of Siri

Bianca Bosker, at The Huffington Post, delves into Siri’s origin. She reports that the intelligent virtual assistant was born out of the largest artificial intelligence research project in the country. She also suggests that Apple has stunted Siri’s potential, but that the future holds something different for it:

Siri’s backers know Apple’s version of the assistant has not yet lived up to its potential. “The Siri team saw the future, defined the future and built the first working version of the future,” says Gary Morgenthaler, a partner at Morgenthaler Ventures, one of the two first venture capital firms to invest in Siri. “So it’s disappointing to those of us that were part of the original team to see how slowly that’s progressed out of the acquired company into the marketplace.”

But as a new wave of virtual assistants compete to take on our to-do lists, Apple is under growing pressure to use the technology it already has and turn Siri into the multitasking, proactive helper it once was. Siri’s history suggests a fantastical future of virtual assistants is coming; where we now see Siri as a footnote to the iPhone’s legacy, some day soon the iPhone may be remembered as a footnote to Siri.

“A kinder, gentler HAL is on its way to the mainstream for sure,” says Kittlaus. “Siri is just a poster child, but it goes way, way beyond that.”

This is an in-depth, well-researched piece that deserves a good chunk of your time.


Kickstarter Project Of The Day

Team 696, the robotics team at Clark Magnet High School in La Crescenta, CA, is asking for your help on Kickstarter:

We are Team 696, aka The Circuit Breakers, the Robotics Team from Clark Magnet High School in La Crescenta, CA. We are hard-working, enthusiastic students devoted to creating robots for the purpose of spreading awareness of science and technology-based education. Our team comes together to build a new robot each year. In 2013, we plan to build a robot to participate in two FIRST Regional Competitions: one in Long Beach, CA, and the other in San Bernardino, CA.

Our team strives for excellence in all areas; unfortunately, our California school’s budget limits our ability to attain full robotic perfection, and we are always thankful for additional funding to achieve our goals. By supporting us, you will help fund our official 2013 robot, improvements to our engineering lab, and most importantly, the expansion of mentoring and outreach programs within our community.

They have 37 days to go and are almost to their goal of $2,000. The perks range from personalized dog-tags all the way up to being flown out to the robotics competition with The Circuit Breakers.


Drones With Jon Stewart

I haven’t been keeping up with The Daily Show much lately, but every once in a while I’ll hit up their website. I just finished watching Jon’s long interview with MIT professor Missy Cummings. She was also featured on NOVA’s “Rise of the Drones” special.

In particular, I liked how she tried to assuage Jon's (mostly) comical fear of drones, and the topic of do-it-yourself drones came up as well. She even divulged a couple of commercial applications in which drones might take over parcel delivery, and noted that a helicopter has been doing just that in Afghanistan for the last year. All in all, it was a fun interview, as they usually are at TDS.


The Search For Perfection

What happens when a roboticist sets out to make the perfect robot? The short film below, “Lisa,” explores the question:


The Trillion Dollar Driverless Car

Chunka Mui, over at Forbes, kicks off a new series considering Google's driverless car. He begins with a surprising fact:

In fact, the driverless car has broad implications for society, for the economy and for individual businesses. Just in the U.S., the car puts up for grabs some $2 trillion a year in revenue and even more market cap. It creates business opportunities that dwarf Google’s current search-based business and unleashes existential challenges to market leaders across numerous industries, including car makers, auto insurers, energy companies and others that share in car-related revenue.

The rest of the first installment gives a rundown of the broad changes we can expect with driverless car technology:

Google is claiming its car could save almost 30,000 lives each year on U.S. highways and prevent nearly 2 million additional injuries. Google claims it can reduce accident-related expenses by at least $400 billion a year in the U.S. Even if Google is way off—and I don’t believe it is—the improvement in safety will be startling.

In addition, the driverless car would reduce wasted commute time and energy by relieving congestion and allowing cars to go faster, operate closer together and choose more effective routes. One study estimated that traffic congestion wasted 4.8 billion hours and 1.9 billion gallons of fuel a year for urban Americans. That translates to $101 billion in lost productivity and added fuel costs.
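That $101 billion figure checks out with back-of-envelope arithmetic; the per-hour and per-gallon values in the sketch below are my assumptions, not the study's.

```python
wasted_hours = 4.8e9       # hours per year, from the study cited above
wasted_fuel_gal = 1.9e9    # gallons per year, from the study cited above

value_of_time = 19.75      # $/hour: an assumed average, not the study's figure
fuel_price = 3.25          # $/gallon: an assumed average, not the study's figure

lost_productivity = wasted_hours * value_of_time
added_fuel_cost = wasted_fuel_gal * fuel_price
total_billion = (lost_productivity + added_fuel_cost) / 1e9
print("about $%.0f billion per year" % total_billion)  # lands near $101B
```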


Sawbot Carves Up Stools For Art

A pair of designers showed off their industrial chainsaw robot, which was programmed to make a set of quaint stools out of a log: