
If Military Robots Are Really Going To Be A Thing, They Need To Have Morals

As I'm certain you're all well aware by now, there's recently been a concerted effort on the international stage to fight against 'killer robots' in the military: autonomous weapons platforms that can identify and attack human targets without any human intervention. It's not hard to see why such technology makes people nervous. A robot that can target and kill a person without guidance is a disturbing concept, and it raises a number of troubling ethical and philosophical questions.

A lot of folks feel that there's really no compromise to be had here - either we rid ourselves of the murderous machines now, or we allow them to proliferate to the point that they basically become the new landmine. The United States Office of Naval Research feels differently. It believes the solution to the killer robot problem doesn't lie in banning the machines outright - it lies in teaching them what to kill, and when.

In short, it lies in teaching them morality - which may well be the most human of all thought processes.

Given that robots are still a long way from anything close to human, that doesn't exactly sound like an easy task, does it? How exactly does one teach a machine right from wrong? It can't be as simple as instilling Asimov's Three Laws of Robotics in them, can it?

To find out, the ONR will award $7.5 million in grant money to university researchers from Tufts, Brown, Yale, Georgetown, and Rensselaer Polytechnic Institute. Researchers at these institutions will be looking into how to build a sense of right and wrong into autonomous robots - how to teach robots to consider the consequences of their actions.

"Even though today's unmanned systems are ‘dumb' in comparison to a human counterpart, strides are being made quickly to incorporate more automation at a faster pace than we've seen before," explained ONR Director of Cognitive Science Paul Bello to Defense One. "For example, Google's self-driving cars are legal and in-use in several states at this point. As researchers, we are playing catch-up trying to figure out the ethical and legal implications. We do not want to be caught similarly flat-footed in any kind of military domain where lives are at stake."

Currently, the United States military prohibits fully autonomous lethal robots outright, and even semi-autonomous robots are barred from selecting or engaging any target that hasn't first been vetted by a human operator. What that means is that if the human operator is cut off, the robots simply stop what they're doing. Of course, problems of morality extend far beyond the battlefield, Bello continued.
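
Before getting to those broader cases, it's worth making that battlefield rule concrete. Below is a minimal Python sketch of the human-in-the-loop constraint just described; every name in it - the class, the target IDs, the status strings - is invented for illustration and isn't drawn from any real military system. The behavior it models is only what the paragraph above says: engage nothing a human hasn't approved, and halt entirely if the operator link drops.

from dataclasses import dataclass, field


@dataclass
class SemiAutonomousSystem:
    operator_link_up: bool = True
    approved_targets: set = field(default_factory=set)

    def approve_target(self, target_id):
        """Record an approval that came from a human operator."""
        self.approved_targets.add(target_id)

    def request_engagement(self, target_id):
        if not self.operator_link_up:
            # No human in the loop: the system simply stops acting.
            return "HALT: operator link lost, ceasing operations"
        if target_id not in self.approved_targets:
            return "REFUSE: target was not vetted by a human operator"
        return "ENGAGE: " + target_id + " (human-approved)"


system = SemiAutonomousSystem()
system.approve_target("T-01")
print(system.request_engagement("T-01"))  # engages: a human signed off
print(system.request_engagement("T-02"))  # refuses: no human approval
system.operator_link_up = False
print(system.request_engagement("T-01"))  # halts: the operator is cut off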

"Even if such systems aren't armed, they may still be forced to make moral decisions," explained Bello. A rescue bot, for example, might be forced to decide who needs to be evacuated first, while a medical robot may need to choose between two patients. Without moral reasoning, these machines can't be expected to make such decisions.

But...how is it even possible to code something like morality? After all, we haven't even figured out how to give machines emotions. What we see our robots expressing isn't real - it's a facsimile, an approximation of the real thing.

In short, moral agency is one of the most fundamental elements of the human experience, and robots still struggle to feel and think on their own. How can they be expected to understand ethics? According to Wendell Wallach, chair of Yale's Technology and Ethics Study Group, the first step is to approach the problem with a framework for machine morality.

"There's operational morality, functional morality, and full moral agency," he explained. "Operational morality is what you already get when the operator can discern all the situations that the robot may come under and program in appropriate responses... Functional morality is where the robot starts to move into situations where the operator can't always predict what [the robot] will encounter and [the robot] will need to bring some form of ethical reasoning to bear." 

It won't be that easy, naturally - agency isn't really something you can just 'program.' It requires an understanding of the people around you. It requires a knowledge of what it means to suffer. It requires the capacity to genuinely care, and that's something which robots aren't going to have for a very, very long time.

Still, once we finally do code morality and empathy into our machines - I'm of the belief that it'll happen eventually; we just need a sufficiently unhinged eccentric genius to bring it about - they might actually become better at making ethical choices than humans. Scary thought, isn't it? What happens when our robots are better at being human than we are?
