Whenever we talk about improved artificial intelligence and the proliferation of robots, we end up talking about the enslavement or genocide of humankind. Usually this comes as a result of militarizing robots. The Army Times has an article about robo-ethics based on ideas from the book Governing Lethal Behavior in Autonomous Robots by Ronald C. Arkin. The advantage of war-bots over human soldiers would be mainly physical: they would have greater endurance and better senses. But we wouldn't want them to get carried away, so:
Robots designed to have guilt operate this way, according to a research paper co-written by Arkin and colleague Patrick Ulam: The robots would be designed with an “ethical adaptor,” while each weapon system they carry would be grouped according to its destructive power and each group of weapons associated with a specific guilt threshold.
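As a rough sketch of how such a guilt threshold might work (every name, number, and weapon group below is hypothetical and illustrative, not taken from Arkin and Ulam's paper):

```python
# Hypothetical sketch of an "ethical adaptor": weapon groups are ranked
# by destructive power, each with its own guilt threshold. When the
# robot's accumulated guilt crosses a group's threshold, that group is
# deactivated. All values here are made up for illustration.

class EthicalAdaptor:
    def __init__(self):
        # group name -> guilt threshold
        # (less destructive weapons tolerate more accumulated guilt)
        self.thresholds = {"small_arms": 100, "missiles": 50, "heavy_ordnance": 20}
        self.active = set(self.thresholds)
        self.guilt = 0

    def record_civilian_harm(self, casualties):
        """Accumulate guilt, then deactivate any group whose threshold is exceeded."""
        self.guilt += casualties * 10  # arbitrary guilt-per-casualty weight
        for group, limit in self.thresholds.items():
            if self.guilt >= limit:
                self.active.discard(group)

    def may_fire(self, group):
        return group in self.active


adaptor = EthicalAdaptor()
adaptor.record_civilian_harm(3)            # guilt reaches 30
print(adaptor.may_fire("heavy_ordnance"))  # False: its threshold (20) was crossed
print(adaptor.may_fire("small_arms"))      # True: still under its threshold (100)
```

The key design point is that guilt is a single accumulating value while deactivation is per-group, so the most destructive weapons are locked out first and the robot degrades toward harmlessness rather than shutting off all at once.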
The idea is that if a war-bot kills "too many" civilians its weapons will be deactivated, simulating a human reaction to killing civilians.

On The Agenda last night, a panel of AI enthusiasts discussed the future of robotics, and the subject of war-bots came up. Someone asked whether war-bots would make war more palatable. One panelist said it's silly to think war-bots would be programmed with Asimov's three laws, but that the psychological separation between the killing machine and the humans in charge would be great enough to reduce the trauma of war. Another panelist joked that eventually wars could be fought entirely by robots; they could just duke it out on the moon. But of course, that's what war used to be: expendable soldiers killing only each other on an isolated battlefield.
I think it's silly to believe the people who use war to get what they want will ever play fair. The idea of a war-bot with morals seems contrary to the general concept of war: "it's okay to kill these people; they're not really people anyway."