Introduction

Autonomous weapon systems [AWS] raise profound legal, ethical and moral concerns. Scholars have asked, for example, whether AWS can comply with international humanitarian law [IHL]; whether their use will lower the threshold for the use of force and undermine jus ad bellum rules; and whether their deployment will create an accountability gap in violation of victims’ rights to a remedy. While there is no agreed definition of AWS, the United Kingdom House of Lords’ recent report contains definitions that generally describe AWS as robots that, once activated, are able to make targeting decisions without further human intervention.

At the recent United Nations Group of Governmental Experts [GGE] meeting on Lethal Autonomous Weapon Systems [9–13 April], States reiterated the need to maintain human control over AWS. Notwithstanding this general consensus, there is no agreement on the nature of that human control or how it should be defined.

Issues surrounding the concept of human control

The 2018 GGE meeting brought to the fore a number of questions on how human control should be defined. States submitted a number of ideas and suggestions. Organisations like the International Committee of the Red Cross noted both legal and ethical reasons why human control must be maintained. Likewise, the International Panel on the Regulation of Autonomous Weapons discussed military and philosophical perspectives on the notion of human control.

Now that various disciplines – military, law, ethics, religion, philosophy and others – have standards that are relevant to the notion of human control over AWS, the paramount question is which standard(s) should determine an acceptable level of human control, and why. While States and scholars may cite innovative ideas and standards upon which to define the concept of human control, it is essential to distinguish between standards that are merely relevant and those that are obligatory or legally binding upon States. The latter ought to serve as the yardstick.

Another issue obscuring what is meant by “human control over AWS” is the involvement of various actors – designers, programmers, manufacturers, operators and so on – in the development and deployment of AWS. How does the notion of human control apply to all these actors? Should it be understood as a cumulative standard – i.e. the total sum of the activities of all the actors involved – or is it a standard applicable to each actor in their own capacity, imposing different – albeit related – obligations?

Along these lines, the delegation from Egypt asked whether the GGE discussion on human control should focus first on the operators of AWS – i.e. combatants or fighters. This makes sense in terms of IHL targeting rules, which are primarily concerned with the bearers of weapons.

Delegations from other countries also questioned whether thinking of human control in terms of all the actors involved dilutes the question of responsibility for the human use of weapons. This is closely linked to the question of when humans should exercise control over AWS. Can human control be sufficiently exercised at the programming stage alone? In other words, can human decisions to use lethal force be preprogrammed?

These complex questions, and the lack of agreed answers to them, show that the notion of human control over AWS means different things to different States. Thus, unsurprisingly, States continue to use different terms – “meaningful human control”, “sufficient human control”, “appropriate levels of human judgment” – in a bid to describe their understanding of human control.

However, as was correctly noted in the African Group statement to the GGE, it does “not matter what name or term is used to describe human control” because “what matters is the substance and standards of that control”.

International law and human control over AWS

This post argues that the relevant international law standards ought to take primacy in determining what constitutes adequate human control over AWS. Any suggested form of human control that does not meet the demands of the relevant legal norms is inadequate.

Human control of weapons is inherent in the international law governing the use of force. While ideas from other disciplines are relevant, the ultimate yardstick for the standard of human control should be located in international law.

As noted in the International Committee for Robot Arms Control’s statement to the GGE, the guiding question that States should ask is: what is the Legally Required Level of Human Control [LRLHC]? Grounding the discussion in binding norms of international law eliminates the unnecessary noise that currently permeates the debate on what constitutes adequate human control.

In this sense, the African Group on Disarmament noted that human control should be “understood in terms of legal principles” and “should not be seen as a matter of good-will by States but a legal standard that they ought to fully abide by”.

A few examples can help in thinking of human control in legal terms. There is a legal duty on humans – not machines – to ensure that international humanitarian law and other relevant legal norms are respected during targeting. It is humans who are entrusted with making legal judgements – in real time – whenever force is used. This is to ensure human legal responsibility for the use of force.

Human responsibility for use of force in international law

The fundamental purpose of human control over weapons is to retain human responsibility for the use of force. In international law, there is a direct relationship between the control exercised and responsibility. When considering the notion of control for the purposes of determining responsibility, the basic legal inquiry in different branches of international law is: who exercised power at the relevant time?

Likewise, in international criminal law – as in domestic law – individual responsibility is anchored in the concept of mens rea, or intention. Individual responsibility is undermined where a weapon system has the capacity to perform critical functions – the selection of targets, the making of legal judgements and the release of force – without human input. When such a weapon system is used, it is difficult, if not impossible, to ascertain the intention of the human operator.

Therefore, the LRLHC that the operator or combatant ought to exercise is that which reflects his or her intention for the purpose of establishing individual responsibility. To preserve human intention and hence human responsibility, the human-machine interaction in the targeting cycle should be characterized by machine dependence on human input in real time.

Framing of the human control standard in international law

Human-machine interaction occurs at various stages in the development and deployment of AWS. The Chair of the GGE has referred to such stages as human “touch points”. As already mentioned, human control of AWS is exercisable at different stages and by different actors. This has raised several questions: whether the notion of human control is a cumulative standard; whether legal responsibility over AWS can be split and shared among the actors involved; and whether human control can be sufficiently exercised at the programming or other developmental stages of AWS.

While all the actors involved in the development and deployment of AWS play an important role, from a legal standpoint human control is not the total sum of their activities. Rather, as shown below, human control should be understood as a particularised legal standard applicable to each individual actor in their own capacity. For each actor, based on their respective obligations, the question that must be asked is: what is the LRLHC?

In order to determine the control that ought to be exercised by each actor, it is important to focus first on the operators of AWS – i.e. combatants, fighters or law enforcement officials. This is because the accountability gap currently envisaged in the AWS debate largely relates to operators. More importantly, once the level of control that operators must exercise over AWS is determined, that standard in turn determines the responsibilities of the other actors.

For example, once it is established that the operator has a legal responsibility to make all targeting decisions, it follows that the programmer or roboticist has a legal responsibility not to develop a weapon system with the capacity to make such decisions itself. Thus, the level of human control that operators ought to exercise by virtue of the legal obligations that bind them serves as a yardstick and guideline for the capabilities that must – and must not – be designed into a weapon system.

Conclusion

Various disciplines are relevant to the discussion of the notion of human control over AWS. However, while some disciplines are better placed to show why human control must be maintained, others are better placed to show how the standard of that control should be determined. In determining the level of human control that ought to be maintained, applicable international law norms must be given primacy. The legal purpose of human control over AWS sets the level of human control required.