In Kinshasa - capital of the Democratic Republic of the Congo - two eight-foot-tall robots have been installed to direct traffic. As you can see above, the machines look more than a little absurd - like something straight out of a 1960s science fiction film. They aren't exactly intelligent, either; they just tell cars where to go.
And apparently, they do a better job of it than the human traffic officers ever did - people simply tend to listen to them more readily.
It may have something to do with the fact that each of the two robots is equipped with an array of security cameras to record ne'er-do-wells. At this point, you're probably scoffing a bit. After all, this has nothing to do with robotics, right? Those two machines are basically glorified, highly visible speed cameras. Of course people behave around them! Why wouldn't they?
Not only that, they're something like eight feet tall. I don't know about you, but if a machine that size directed me to do something, I'd more than likely comply - even if I did realize on some level that it was merely an inanimate object. So it would seem that the deference people give these simple traffic bots tells us very little about how robots and humans relate to one another.
Uh, yeah. Turns out it isn't actually that simple. According to scientists in the field of human-robot interaction, the human brain is wired in such a way that we're actually capable of seeing robots as authority figures - and as such, we're hard-wired to obey them. Creepy, isn't it?
Take, for example, a study hosted last year at the University of Manitoba, where a group of 27 volunteers were instructed to work on a selection of repetitive, mundane tasks by a third party named "Jim" - who was either a 27-year-old actor or an Aldebaran Nao robot. Each participant was paid $10 to change file names from .jpg to .png, with a workload that started small but grew exponentially larger each time the participant completed a task. Whenever any participant protested, "Jim" urged them to keep going.
Apparently, of the thirteen participants paired with the robot, ten actually viewed it as a legitimate authority figure - and none of them could adequately explain why; a few even tried to strike up a conversation with Robo-Jim. While the human "Jim" certainly proved more convincing, the results are telling just the same.
The study wasn't perfect, either. The researchers admitted that the novelty of Nao's design may well have undermined its perceived authority, and that involving humans in the experiment in any capacity may have polluted the results, causing participants to transfer their feeling of responsibility from Nao to the lead researcher of the study.
See, none of the participants who obeyed Nao actually listed pressure from the robot as their reason for doing so. Instead, they cited an interest in the upcoming tasks, trust in the robot's programming, and a feeling of obligation to keep their word to the lead scientist. Yet in spite of all these caveats, one fact remains: people still obeyed Nao.
"A small, child-like humanoid robot had enough authority to pressure 46% of participants to rename files for 80 minutes, even after indicating that they wanted to quit," noted one researcher, "and in spite of expressing concern, they followed a possibly 'broken' robot to do something they would rather not do."
There are definitely quite a few factors at play here, and more research is needed before we can reach any solid conclusions. For the moment, though? It looks like those of us who choose to follow our future robot overlords might not be so unusual after all.