2012-09-19

Co.Exist

Learning About Human Connection From A Robotic Friend

Meet Nexi. How do you feel about Nexi? Would you loan Nexi money? It turns out scientists can change that perception with subtle cues in Nexi's robotic body language.

In science fiction, robots are often given human characteristics--warmth, trustworthiness, and callousness are just some of the traits ascribed to future intelligent robots. But can robots believably display character traits? It turns out they can--and they can tell us a little something about the way humans act, too.

A study from researchers at Northeastern University (to be published in the journal Psychological Science) discovered that there is a distinct set of cues--including touching the hands and face, leaning away, and crossing arms--that makes a person seem untrustworthy. They figured this out not just by observing human-to-human interactions, but by looking at how humans interact with a friendly-faced robot called Nexi.

Northeastern University psychology professor David DeSteno and his team first asked 86 students to have either an in-person conversation or a web-based chat with another student. The sessions were recorded, and any signs of fidgeting were taken into account. Afterwards, the student pairs played a game. Each student was given four tokens, each worth $1 if kept and $2 if given to their partner. So a student who kept all four tokens while collecting all of their partner's would end up with the most money ($12).
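For the numerically inclined, here is a quick sketch of the game's payoff arithmetic. The token values ($1 kept, $2 given) come from the study as described above; the code itself is just an illustration, not anything the researchers used:

```python
# Payoff arithmetic for the token game described above.
# Values are from the article; the function and names are illustrative.

TOKENS = 4
KEEP_VALUE = 1   # dollars per token a player keeps
GIVE_VALUE = 2   # dollars per token received from the partner

def payoff(tokens_kept: int, tokens_received: int) -> int:
    """Total winnings: kept tokens at $1 each plus received tokens at $2 each."""
    return tokens_kept * KEEP_VALUE + tokens_received * GIVE_VALUE

# Best case for a purely self-interested player: keep all four tokens
# while the partner hands over all of theirs.
print(payoff(tokens_kept=4, tokens_received=4))   # 12 -> the $12 maximum
# Mutual generosity: both players give everything away.
print(payoff(tokens_kept=0, tokens_received=4))   # 8
# Mutual hoarding: both players keep everything.
print(payoff(tokens_kept=4, tokens_received=0))   # 4
```

The incentive structure is a classic trust game: mutual generosity earns each player $8, mutual hoarding only $4, but exploiting a generous partner pays the full $12--which is exactly why being able to read a partner's trustworthiness matters.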

DeSteno and his colleagues found that students--who were less generous when they didn't trust their partners--were more adept at sussing out untrustworthiness in person than in a web-based chat. The cues mentioned above (touching the hands and face, leaning away, and crossing arms) also unconsciously tipped students off that their partner was untrustworthy.

It sounds like standard psychology experiment fare--but after the first round, the researchers repeated the experiment with the Nexi robot, designed by Cynthia Breazeal of MIT. When Nexi, which was secretly controlled by the experimenters (the students playing the money game with it didn't know), exhibited some of the aforementioned cues, students instinctively felt that the robot was untrustworthy and didn't share their tokens.

That means these physical cues play an incredibly important role in how humans gauge trustworthiness. It also means that humans believe, for whatever reason, that robots can have moral intent. And that says a lot about what kinds of relationships we might have with future humanoid robots.
