Autonomous Weapon Systems (AWS), sometimes called "killer robots", are robotic weapons which, once activated, can decide when to release force (including lethal force) and against whom, without further human intervention. In this thesis I seek to address the challenges presented by AWS, in particular those without Meaningful Human Control, both in peacetime and in armed conflict. Throughout this thesis, unless otherwise specified, references to AWS mean those without Meaningful Human Control.
AWS present many advantages, which include but are not limited to the following: AWS can potentially save the lives of soldiers, as they can do the dull, dirty and dangerous work; AWS do not suffer human weaknesses such as fatigue, anger, malice or frustration, implying that they can potentially save the lives of civilians too; AWS can help keep a digital trail of events, which can assist in bringing perpetrators to book; and AWS will not wilfully commit crimes.
On the other hand, AWS present serious threats to rights such as the right to life, the right to dignity and victims' right to a remedy, and may make it all too easy for states to go to war. AWS may not be able to comply with the international laws that govern the use of force, and it may be unacceptable in terms of the right to dignity that a machine decides who lives and who dies. Furthermore, AWS without Meaningful Human Control may create an accountability gap that affects victims' right to a remedy as protected in international law.
To determine an appropriate legal response to AWS, I examine the obligation of states to conduct legal reviews of new weapons in terms of Article 36 of Additional Protocol I to the Geneva Conventions, and how AWS measure up to the established standards. Article 36 provides that new weapons must be reviewed to establish whether they are indiscriminate, cause unnecessary suffering, or are otherwise unacceptable in terms of other standards such as those found under the human rights regime. To start with, I argue that AWS without Meaningful Human Control, or those with full autonomy, may not be weapons in the strict sense of the word, and that the international community must be wary of accepting "robot combatants". When the standards enunciated in Article 36 are properly understood, I argue and conclude that AWS without Meaningful Human Control are unacceptable.
I also measure AWS against important rules of International Humanitarian Law (IHL) such as the rules of humanity, distinction, proportionality, precaution and military necessity. Given that these rules were originally drafted for human combatants with the ability to make legal and moral judgments, machines incapable of human judgment will in most cases violate them. Furthermore, I take note of the imprecise definitions of IHL terms and the limitations of current technology, which make it impossible to translate those definitions into computer programs.
Under the Human Rights Law regime, I take note of the rules that govern the use of force, such as those provided by the UN Basic Principles on the use of firearms in law enforcement. As with IHL, most of these rules require human judgment, something that machines are incapable of. Moreover, within the Human Rights Law framework, I consider in detail the implications of AWS for the right to dignity. After discussing what the right to dignity entails and its importance in international law, I conclude that AWS without Meaningful Human Control are inconsistent with the right to dignity, which is the mother right of all other rights.
I further observe that AWS create an accountability gap that adversely affects victims' right to a remedy, as there may be no one to hold responsible for particular violations. In this regard, I discuss various forms of accountability in international law, such as state, corporate, individual and command responsibility, noting the challenges presented by AWS. I examine the solutions that have been proposed so far, such as the notion of split responsibility and the suggestion to adapt command responsibility to AWS, before concluding that such suggestions are faulty and unworkable.
Mini-dissertation (LLD)--University of Pretoria, 2015.