Armin Krishnan and Autonomous Robots
At the Moral Machines blog, Armin Krishnan, Visiting Professor for Security Studies at the University of Texas at El Paso and author of Killer Robots: Legality and Ethicality of Autonomous Weapons, was interviewed by Gerhard Dabringer.
In your recent book "Killer Robots: Legality and Ethicality of Autonomous Weapons" you explore the ethical and legal challenges of the use of unmanned systems by the military. What are your main findings?
The legal and ethical issues involved are very complex. I found that the existing legal and moral framework for war, as defined by the laws of armed conflict and Just War Theory, is utterly unprepared to deal with many aspects of robotic warfare. I think it would be difficult to argue that robotic or autonomous weapons are already outlawed by international law. What does international law actually require? It requires that noncombatants are protected and that force is used proportionately and directed only against legitimate targets. Current autonomous weapons are generally not capable of distinguishing between legitimate and illegitimate targets, but does this mean that the technology could not be used discriminately at all, or that it will not improve to the point where it is as good as, or even better than, a human at deciding which targets to attack? Obviously not. How flawlessly would the technology be required to work, anyway? Should we demand a hundred percent accuracy in targeting decisions? That would be absurd looking only at the most recent Western interventions in Kosovo, Afghanistan, and Iraq, where large numbers of civilians died as a result of bad human decisions and flawed conventional weapons that are perfectly legal. Could not weapons that are more precise and intelligent than present ones represent progress in terms of humanizing war?
I don't think that there is at the moment any serious legal barrier to armed forces introducing robotic weapons, even weapons that are highly automated and capable of making their own targeting decisions. Whether a particular use of such weapons violates international law would depend on the circumstances of the case. The development and possession of autonomous weapons are clearly not illegal in principle, and more than 40 states are developing such weapons, indicating some confidence that the legal issues and concerns can be resolved in some way.

More interesting are the ethical questions that go beyond formal legality. For sure, legality is important, but it is not everything. Many things or behaviors that are legal are certainly not ethical. So one could ask: if autonomous weapons can be legal, would it also be ethical to use them in war, even if they were better at making targeting decisions than humans? While the legal debate on military robotics focuses mostly on existing or likely future technological capabilities, the ethical debate should focus on a very different issue, namely the question of fairness and ethical appropriateness. I am aware that "fairness" is not a requirement of the laws of armed conflict, and it may seem odd to bring up that point at all. Political and military decision-makers who are primarily concerned about protecting the lives of the soldiers they are responsible for clearly do not want a fair fight. It is a completely different matter for the soldiers who are tasked with fighting wars and who have to take lives when necessary. Unless somebody is a psychopath, killing without risk is psychologically very difficult. Teleoperators of armed Predator UAVs actually seem to suffer from higher levels of stress than jet pilots who fly combat missions. Remote controlling, or rather supervising, robotic weapons is not a job well suited to humans, or a job soldiers would particularly like to do. So why not just leave tactical targeting decisions to an automated system (provided it is reliable enough) and avoid this psychological problem?

This raises the problem of emotional disengagement from what is happening on the battlefield, and the problem of moral responsibility, which I think is not the same as legal responsibility. Autonomous weapons are devices rather than tools. They are placed on the battlefield and do whatever they are supposed to do (if we are lucky). The soldiers who deploy these weapons are reduced to the role of managers of violence, who will find it difficult to ascribe individual moral responsibility for what these devices do on the battlefield. Even if the devices function perfectly and attack only combatants and legitimate targets, we will not feel ethically very comfortable if the result is a one-sided massacre. Any attack by autonomous weapons that results in death could look like a massacre and be ethically difficult to justify, even if the target somehow deserved it. No doubt it will be ethically very challenging to find acceptable roles and missions for military robots, especially the more autonomous ones. In the worst case, warfare could indeed develop into something in which humans figure only as targets and victims, not as fighters and deciders. In the best case, military robotics could limit violence, and fewer people will have to suffer from war and its consequences.
In the long term, the use of robots and robotic devices by the military and society will most likely force us to rethink our relationship with the technology we use to achieve our ends. Robots are not ordinary tools; they have the potential to exhibit genuine agency and intelligence. At some point soon, society will need to consider which uses of robots are ethically acceptable. Though "robot rights" still look like a fantasy, soldiers and other people working with robots are already responding emotionally to these machines. They bond with them, and they sometimes attribute to the robots the ability to suffer. There could be surprising ethical implications and consequences for military uses of robots.
You can read the rest here.
Posted: January 15th, 2010 at 3:17pm by Koookiecrumbles
Categories: robots, blogs, ethics