Author: Seumas Miller

1     Autonomous Weapons

2     Moral Responsibility and Autonomous Weapons

3     Prohibition of Autonomous Weapons

4     Conclusion

 

Autonomous robots are able to perform many tasks far more efficiently than humans, e.g. tasks performed on factory assembly lines, by auto-pilots and by driverless cars; moreover, they can perform tasks that are dangerous for humans to perform, e.g. defusing bombs. However, autonomous robots can also be weaponised, and in such a manner that the robots control their targets (and, possibly, the selection of their weapons). Further, by virtue of developments in artificial intelligence, the robots have superior calculative and memory capacity. In addition, robots are utterly fearless in battle; they have no emotions and care nothing for life or death.

New and emerging (so-called) autonomous robotic weapons can replace some military roles performed by humans and enhance others.[1] Consider, for example, the Samsung stationary robot which functions as a sentry in the demilitarized zone between North and South Korea. Once programmed and activated, it has the capability to track, identify and fire its machine guns at human targets without the further intervention of a human operator. Predator drones are used in Afghanistan and the tribal areas of Pakistan to kill suspected terrorists. While the ones currently in use are not autonomous weapons, they could be given this capability, in which case, once programmed and activated, they could track, identify and destroy human and other targets without the further intervention of a human operator. Moreover, more advanced autonomous weapons systems, including robotic ones, are in the pipeline.

In this paper I address the following two questions. Firstly, do such weapons necessarily compromise the moral responsibility of their human designers, programmers and/or operators and, if so, in what manner and to what extent? Secondly, should autonomous weapons be prohibited?

1      Autonomous Weapons

Autonomous weapons are weapons systems which, once programmed and activated by a human operator, can – and, if used, do in fact – identify, track and deliver lethal force without further intervention by a human operator. By ‘programmed’ I mean, at least, that the individual target or type of target has been selected and programmed into the weapons system. By ‘activated’ I mean, at least, that the process culminating in the already programmed weapon delivering lethal force has been initiated. This weaponry includes weapons used in non-targeted killing, such as autonomous anti-aircraft weapons systems used against multiple attacking aircraft or, more futuristically, against swarm technology (for example, multiple lethal miniature attack drones operating as a swarm so as to inhibit effective defensive measures), and weapons used in, or at least capable of being used in, targeted killing (for example, a predator drone with face-recognition technology and no human operator to confirm a match).

We need to distinguish between so-called ‘human-in-the-loop’, ‘human-on-the-loop’ and ‘human-out-of-the-loop’ weaponry. It is only human-out-of-the-loop weapons that are autonomous in the required sense. In the case of human-in-the-loop weapons, the final delivery of lethal force (for example, by a predator drone) cannot occur without a decision to deliver it by the human operator. In the case of human-on-the-loop weapons, the final delivery of lethal force can occur without such a decision by the human operator; however, the human operator can override the weapon system’s triggering mechanism. In the case of human-out-of-the-loop weapons, the human operator cannot override the weapon system’s triggering mechanism; so once the weapon system is programmed and activated there is not, and cannot be, any further human intervention.
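To make the three-way distinction concrete, the following minimal sketch (in Python, with hypothetical names such as ControlMode and deliver_lethal_force; the paper itself specifies no code) contrasts how the final delivery of lethal force is controlled under each mode. It is an illustrative schematic, not a description of any actual weapons system.

    from enum import Enum

    class ControlMode(Enum):
        HUMAN_IN_THE_LOOP = 1      # operator must approve each delivery of lethal force
        HUMAN_ON_THE_LOOP = 2      # weapon fires unless the operator overrides in time
        HUMAN_OUT_OF_THE_LOOP = 3  # no override possible once programmed and activated

    def deliver_lethal_force(mode, target_matched, operator_approves, operator_overrides):
        """Return True if the weapon system fires on this occasion.

        operator_approves and operator_overrides are callables standing in for
        human decisions made (or not made) after programming and activation.
        """
        if not target_matched:
            return False
        if mode is ControlMode.HUMAN_IN_THE_LOOP:
            # Final delivery cannot occur without a positive human decision.
            return operator_approves()
        if mode is ControlMode.HUMAN_ON_THE_LOOP:
            # Delivery proceeds by default, but the operator can veto it.
            return not operator_overrides()
        # HUMAN_OUT_OF_THE_LOOP: once programmed and activated,
        # there is no further human intervention.
        return True

    # Example: an on-the-loop weapon where the operator does not intervene in time.
    fired = deliver_lethal_force(ControlMode.HUMAN_ON_THE_LOOP, True,
                                 operator_approves=lambda: False,
                                 operator_overrides=lambda: False)
    assert fired  # delivery proceeds because no override occurred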

The lethal use of a human-in-the-loop weapon is a standard case of killing by a human combatant and, as such, is presumably, at least in principle, morally permissible. Moreover, other things being equal, the combatant is morally responsible for the killing. The lethal use of a human-on-the-loop weapon is also in principle morally permissible. Moreover, the human operator is, perhaps jointly with others (such as his or her commander – see the discussion in section 3 below of collective responsibility as joint responsibility), morally responsible, at least in principle, for the use of lethal force and its foreseeable consequences. However, these two propositions concerning human-on-the-loop weaponry rely on the following assumptions:

 

(1) The weapon system is programmed and activated by its human operator; and either

(2) (a) on each occasion and on all occasions of use the delivery of lethal force can be overridden by the human operator, and (b) this operator has sufficient time and sufficient information to make a morally informed, reasonably reliable judgement whether or not to deliver lethal force; or

(3) (a) on each occasion of use, but not on all occasions of use, the delivery of lethal force can be overridden by the human operator, and (b) there is no moral requirement for a morally informed, reasonably reliable judgement on each and every occasion of the final delivery of force.

 

A scenario illustrating (3)(b) might be an anti-aircraft weapons system being used on a naval vessel under attack from a squadron of manned aircraft in a theatre of war at sea in which there are no civilians present.

There are various other possible scenarios of this kind. Consider, for example, a scenario in which there is a single attacker on a single occasion and insufficient time for a reasonably reliable, morally informed judgment; such scenarios might include ones involving a kamikaze pilot or suicide bomber. If autonomous weapons were to be morally permissible, the following conditions at least would need to be met: (i) clear-cut criteria for the identification of targets and the delivery of lethal force are designed into the weapon in advance, and the weapon is used only in narrowly circumscribed circumstances; (ii) a morally informed judgment regarding these criteria and circumstances is made in advance; and (iii) the operator has the ability to override the system. Here there is also the implicit assumption that the weapon system can be ‘switched off’, as is not the case with, for instance, biological agents released by a bioweapon.

What of human-out-of-the-loop weapons, i.e. autonomous weapons? As mentioned above, these are weapons systems that, once programmed and activated, can identify, track and deliver lethal force without further intervention by a human operator. They might be used for non-targeted killing, in which case there is no uniquely identified individual target, as in the above-described cases of incoming aircraft and swarm technology. Alternatively, they might be used for targeted killing; an example would be a predator drone with face-recognition technology and no human operator to confirm a match. However, the crucial point is that there is no human on the loop to intervene once the weapons system has been programmed and activated. Two questions now arise. Firstly, are humans fully morally responsible for the killings done by autonomous weapons, or is there a so-called responsibility gap? Secondly, should such weapons be prohibited?

 

2      Moral Responsibility and Autonomous Weapons

So-called autonomous robots and, therefore, autonomous weapons are not really autonomous in the sense in which human beings are since they do not choose their ultimate ends and are not sensitive to moral properties. However, the question that now arises concerns the moral responsibility for killings done by autonomous weapons. Specifically, do they involve a responsibility gap such that their human programmers and operators are not morally responsible or, at least, not fully morally responsible for the killings done by the use of these weapons?

Consider the following scenario which, I contend, is analogous to the use of human out-of-the-loop weaponry. There is a villain who has trained his dogs to kill on his command and an innocent victim on the run from the villain. The villain gives the scent of the victim to the killer-dogs by way of an item of the victim’s clothing and then commands the dogs to kill. The killer-dogs pursue the victim deep into the forest and now the villain is unable to intervene. The killer-dogs kill the victim. The villain is legally and morally responsible for murder. However, the killer-dogs are not, albeit they may need to be destroyed on the grounds of the risk they pose to human life. So the villain is morally responsible for murdering the victim, notwithstanding the indirect nature of the causal chain from the villain to the dead victim; the chain is indirect since it crucially depends on the killer-dogs doing the actual physical killing. Moreover, the villain would also have been legally and morally responsible for the killing if the ‘scent’ was generic and, therefore, carried by a whole class of potential victims, and if the dogs had killed one of these. In this second version of the scenario, the villain does not intend to kill a uniquely identifiable individual, but rather one (or perhaps multiple) members of a class of individuals.

By analogy, human-out-of-the-loop weapons – so-called ‘killer robots’ – are not morally responsible for any killings they cause. Consider the case of a human-in-the-loop or human-on-the-loop weapon. Assume that the programmer/activator of the weapon and the operator of the weapon at the point of delivery are two different human agents. If so, then, other things being equal, they are jointly (that is, collectively) morally responsible for the killing done by the weapon (whether it be of a uniquely identified individual or an individual qua member of a class). No one thinks the weapon is morally responsible, or responsible in any sense other than causally, for the killing. Now assume this weapon is converted to a human-out-of-the-loop weapon by the human programmer-activator. Surely this human programmer-activator now has full individual moral responsibility for the killing, as the villain does in (both versions of) our killer-dog scenario. To be sure, there is no human intervention in the causal process after programming and activation. But the weapon has not thereby been magically transformed from an entity with only causal responsibility into one which now has moral, or other than causal, responsibility for the killing.

It might be argued that the analogy does not work because killer-dogs are unlike killer-robots in the relevant respects. Certainly dogs are minded creatures whereas computers are not; dogs have some degree of consciousness and can experience, for example, pain. However, this difference would not favour ascribing moral responsibility to computers rather than dogs; if anything, the reverse is true. Clearly, computers do not have consciousness, cannot experience pain or pleasure, do not care about anyone or anything (including themselves) and, as we saw above, do not choose their ultimate ends and, more specifically, cannot recognize moral properties, such as courage, moral innocence, moral responsibility, sympathy or justice. Therefore, they cannot act for the sake of principles or ends understood as moral in character, such as the principle of discrimination. Given the apparent non-reducibility of moral concepts and properties to non-moral ones and, specifically, to physical ones, at best computers can be programmed to comply with some non-moral proxy for moral requirements. For example, ‘Do not intentionally kill morally innocent human beings’ might be rendered as ‘Do not fire at bipeds if they are not carrying a weapon or they are not wearing a uniform of the following description’. However, here as elsewhere, the problem for such non-moral proxies for moral properties is that when they diverge from the moral properties, as they inevitably will in some circumstances, the wrong person will be killed or spared (as the case may be): for example, the innocent civilian wearing camouflage clothing to escape detection by combatants on either side, and carrying a weapon for personal protection, is killed, while the female terrorist concealing a bomb under her dress is not.
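The divergence problem can be made concrete with a toy sketch (in Python, using entirely hypothetical attribute names such as carrying_weapon and wearing_specified_uniform; nothing here corresponds to an actual targeting system). The proxy rule goes wrong in exactly the two ways just described.

    from dataclasses import dataclass

    @dataclass
    class DetectedPerson:
        carrying_weapon: bool            # a visible weapon is detected
        wearing_specified_uniform: bool  # clothing matches the programmed description
        morally_innocent: bool           # the moral property the proxy tries to track

    def proxy_permits_firing(p):
        # Non-moral proxy for 'do not intentionally kill morally innocent human beings':
        # do not fire if the person is not carrying a weapon or is not wearing
        # a uniform of the specified description.
        return p.carrying_weapon and p.wearing_specified_uniform

    # The two divergence cases from the text:
    civilian = DetectedPerson(carrying_weapon=True,            # weapon carried for self-protection
                              wearing_specified_uniform=True,  # camouflage worn to escape detection
                              morally_innocent=True)
    terrorist = DetectedPerson(carrying_weapon=False,          # bomb concealed under clothing
                               wearing_specified_uniform=False,
                               morally_innocent=False)

    assert proxy_permits_firing(civilian)        # the morally innocent person is fired upon
    assert not proxy_permits_firing(terrorist)   # the terrorist is spared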

Notwithstanding the above, some have insisted that robots are minded agents; after all, it is argued, they can detect and respond to features of their environment and in many cases they have impressive storage/retrieval and calculative capacities. However, this argument relies essentially on two moves that should be resisted and are, in any case, highly controversial. Firstly, rational human thought, notably rational decision and judgment, is downgraded to the status of mere causally connected states or causal roles, for example via functionalist theories of mental states. Secondly, and simultaneously, the workings of computers are upgraded to the status of mental states, for example via the same functionalist theories. For reasons of space I cannot pursue this issue further here. Rather, I simply note that this simultaneous downgrade/upgrade faces prodigious problems when it comes to the ascription of (even non-moral) autonomous agency. For one thing, autonomous agency involves the capacity for non-algorithmic inferential thinking, for example the generation of novel ideas. For another, to reiterate, computers do not choose their own ultimate ends; at best they can select between different means to the ends programmed into them. Accordingly, they are not autonomous agents, even non-moral ones. So while killer robots are morally problematic, this is not because they are autonomous agents in their own right. This brings us to our second and final question.

3      Prohibition of Autonomous Weapons

Our final question concerns the prohibition of autonomous weapons in the sense of human-out-of-the-loop weapons. This question should be seen in the light of our conclusions that such weapons are not morally sensitive agents and that their use does not involve a responsibility gap. Rather, there are multiple human actors implicated in the use of autonomous weapons: there is collective moral responsibility in the sense of joint individual moral responsibility.[2] The members of the design team are collectively, i.e. jointly, morally responsible for providing the means to harm (the weapon); the political and military leaders, and those who follow their orders, are collectively, i.e. jointly, responsible for these weapons being used against a certain group or individual; the intelligence personnel are jointly responsible for providing the means to identify targets; and the operators are jointly responsible for the weapon’s use on a given occasion, since they programmed and activated the weapons system. Moreover, all of the above individuals are collectively – in the sense of jointly – morally responsible for the deaths resulting from the use of the weapon, but they are responsible to varying degrees and in different ways: some provided the means (designed the weapon), others gave the order to kill a given individual, still others pulled the trigger, and so on. These varying degrees and varying ways are reflected in the different but overlapping collective end content of their cooperative or joint activity, e.g. a designer has the collective end of killing some combatants in some war (this being the purpose of his design work), a military leader (in issuing orders to subordinates) that of killing enemy combatants in this theatre of war, and an operator that of killing enemy combatants A, B and C here and now.

It is important to note that each contributor to such a joint lethal action is individually morally responsible for his/her own individual action contribution, e.g. an individual weapons operator who chose to deliver lethal force on some occasion or perhaps, in the case of an on-the-loop weapon, not to override the delivery of lethal force by the weapon on this occasion. This is consistent with there being collective, i.e. joint, moral responsibility for the outcome, e.g. the death of an enemy combatant, the death of innocent civilians.

It is also important to note the problem of accountability that arises for morally unacceptable outcomes involving ‘many hands’, i.e. joint action, and indirect causal chains. Consider, for example, an ‘out-of-the-loop’ weapon system that kills an innocent civilian rather than a terrorist because of mistaken identity and the absence of an override function when the mistaken identity is discovered at the last minute. The response to this accountability problem should be to design in institutional accountability. Thus, in our example the weapons designers ought to be held jointly institutionally and, therefore, jointly morally responsible for failing to design in an override function, i.e. for failing to ensure the safety of the weapon system; likewise, the intelligence personnel ought to be held jointly institutionally and, therefore, jointly morally responsible for the mistaken identity. Analogous points can be made with respect to the political and military leaders and the operators.

As we have seen, human-out-of-the-loop weapons can be designed to have an override function and an on/off switch controlled by a human operator. Moreover, in the light of our above example and like cases, autonomous weapons ought in general to have an override function and an on/off switch; indeed, to fail to provide them would be tantamount to an abnegation of moral responsibility. However, against this it might be argued that there are some situations in which there ought not to be a human on the loop (or in the loop).

Let us consider some candidate situations involving human-out-of-the-loop weapons that might be thought not to require a human in or on the loop.

(1) Situations in which the selection of targets and delivery of force cannot in practice be overridden on all occasions, and in which there is no requirement for a context-dependent, morally informed judgement on all occasions, e.g. there is insufficient time to make the decision to repulse an imminent attack from incoming manned aircraft, and there is no need to do so since the aircraft in a theatre of war are clearly identifiable as enemy aircraft.

(2) Situations in which there is a need only for a computer-based mechanical application of a clear-cut procedure (e.g. deliver lethal force), under precisely specified input conditions (e.g. identified as an enemy submarine by virtue of its design etc.) in which there is no prospect of collateral damage (e.g. in open seas in the Arctic).

However, even in these cases it is difficult to see why there would be an objection to having a human on the loop (as distinct from in the loop), especially since a human on the loop might still be needed to deal with problems arising from false information or unusual contingencies. For instance, the ‘enemy’ aircraft or submarines in question might turn out to be ones captured and operated by members of one’s own forces. Alternatively, one’s own aircraft and submarines might now be under the control of the enemy (e.g. via a sophisticated process of computer hacking) and, therefore, should actually be fired upon.

A further argument in favour of autonomous weapons concerns human emotion. It is argued that machines are superior to humans in conditions of war by virtue of not having emotions, since stress and emotion lead to error. Against this it can be pointed out that human emotions inform moral judgment, and moral judgment is called for in war. For instance, the duty of care with respect to innocent civilians relies on the emotion of caring, a property not possessed by robots. Moreover, human stress and emotion can be controlled to a considerable extent, e.g. persons should not serve as combatants unless appropriately selected and trained, and the influence of stressors can be reduced, e.g. by requiring some decisions to be made by personnel at some distance from the action.

 

The upshot of this discussion is that human-out-of-the-loop weapons are neither necessary nor desirable. Rather, such weapons systems should always have a human on the loop (if not in the loop); to fail to provide for this would be an abnegation of responsibility. Accordingly, autonomous weapons in the sense of human-out-of-the-loop weapons should be prohibited.

4      Conclusion

In this paper I have discussed certain aspects of the morality of autonomous weapons. Specifically, I have described autonomous weapons and specified the sense in which such weapons are autonomous. I have also discussed an alleged responsibility gap in the use of autonomous weapons and concluded that in fact there is no such gap: human beings are fully morally responsible for the killings involving the use of autonomous weapons. Finally, I have suggested that autonomous weapons, in the sense of human-out-of-the-loop weapons, are inherently morally problematic and, therefore, should be prohibited.

[1] For an earlier account of these issues see Seumas Miller, “Robopocolypse?: Autonomous Weapons, Military Necessity and Collective Moral Responsibility” in Jai Galliott and M. Lotze (eds.), Super Soldiers: The Ethical, Legal and Social Implications (Ashgate, 2015), pp. 153-166.

 

[2] Seumas Miller, “Collective Moral Responsibility: An Individualist Account” in Peter A. French (ed.), Midwest Studies in Philosophy, vol. XXX, 2006, pp. 176-193.