By Guest Writer Harsh Bansal
The Russia-Ukraine conflict has reached an unprecedented level of intensity, fueled by advances in military technology and modern warfare systems. As the possibility looms of the conflict being brought before the International Criminal Court (ICC) – a likely prospect given the arrest warrants issued by Pre-Trial Chamber II – questions will arise about assigning responsibility and accountability for the use of autonomous drones such as the USA’s Switchblade, Turkey’s Bayraktar TB2, Iran’s Shahed-136, and, most recently, Ukraine’s UJ-22 in the attack on the Kremlin Senate.
These drones possess the remarkable capability of autonomous functioning, allowing them to operate without direct human control – hence the label Autonomous Weapon Systems (AWS). The US Department of Defense’s Directive 3000.09 defines an AWS as a system that, “once activated, can select and engage targets without further intervention by a human operator.”
Drones exhibit different degrees of autonomy, which makes identifying the perpetrator more difficult. Degrees of autonomy fall into three categories: human-in-the-loop (humans actively select and engage targets); human-on-the-loop (the machine primarily selects and engages targets, but human input can override it); and human-out-of-the-loop (the machine performs all functions on its own). Mechanically, a drone can fall into any of these categories depending on how the commander or superior configures it.
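The distinction between the three configurations can be made concrete with a minimal sketch of the engagement decision. This is purely illustrative: the mode names and the `may_engage` rule are hypothetical abstractions, not drawn from any actual weapons system or directive.

```python
from enum import Enum


class AutonomyMode(Enum):
    HUMAN_IN_THE_LOOP = "in"        # a human approves every engagement
    HUMAN_ON_THE_LOOP = "on"        # the system acts, but a human veto overrides it
    HUMAN_OUT_OF_THE_LOOP = "out"   # the system selects and engages on its own


def may_engage(mode: AutonomyMode, human_approved: bool, human_vetoed: bool) -> bool:
    """Hypothetical decision rule showing where human input enters each mode."""
    if mode is AutonomyMode.HUMAN_IN_THE_LOOP:
        return human_approved        # nothing happens without explicit approval
    if mode is AutonomyMode.HUMAN_ON_THE_LOOP:
        return not human_vetoed      # proceeds unless a human overrides
    return True                      # out-of-the-loop: no human input consulted


# In-the-loop engagement requires approval; out-of-the-loop ignores human input entirely.
assert may_engage(AutonomyMode.HUMAN_IN_THE_LOOP, human_approved=False, human_vetoed=False) is False
assert may_engage(AutonomyMode.HUMAN_OUT_OF_THE_LOOP, human_approved=False, human_vetoed=True) is True
```

The legal difficulty discussed below is visible in the final branch: in the out-of-the-loop mode, no human input appears anywhere in the decision, which is precisely what strains doctrines built around human control.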
Legally, the dehumanization of armed conflicts poses various challenges, including the ethical concerns surrounding the use of AWS, which can cause excessive and superfluous harm as evidenced by civilian injuries in both countries.
Additionally, there is the issue of assigning individual responsibility for the use of human-out-of-the-loop AWS, where machines and Artificial Intelligence (AI) sit at the forefront of decision-making, removing the human factor altogether. The use of AWS is legally dubious because, generally speaking, affixing individual responsibility under International Humanitarian Law (IHL) depends on the doctrine of command responsibility, under which the commander is held responsible for war crimes committed by subordinates.
The matter of assigning liability when AI malfunctions remains an open challenge, particularly for functions that significantly affect human lives and carry global implications. Through machine learning and deep learning, AI has reached a level of sophistication at which it can operate autonomously, without human intervention. Indeed, AI systems may generate commands independently, potentially disregarding the directives of commanders and causing significant havoc.
This situation presents a substantial risk and raises the question of whether commanders can absolve themselves of responsibility by claiming they had no control over an AWS that operated independently. The current discourse on AWS and on liability for AI malfunction does not offer a definitive answer to this dilemma. Nevertheless, a thorough examination of the prevailing IHL rules does offer a potential solution.
The doctrine of command responsibility, codified as Rule 153 of the ICRC’s study of customary international humanitarian law, provides a potential avenue for assigning accountability for violations. Under this doctrine, commanders or superiors can be held responsible if they knew, or had reason to know, that their subordinates were committing or planning to commit war crimes, and failed to take necessary and reasonable measures to prevent such actions. Command responsibility does not require direct intervention, only effective control: commanders are accountable for their culpable failure to prevent, suppress, or punish crimes committed under their command. Individual criminal responsibility for the use of out-of-the-loop AWS can therefore be attributed to commanders if their intention or knowledge, and their failure to suppress the use of such drones, can be established.
THE MARTENS CLAUSE
The absence of specific laws on responsibility for the use of AWS creates a legal vacuum when it comes to prosecuting individuals for war crimes. Since the law of armed conflict is never complete, the Martens Clause – a principle of customary international law – can be invoked as a legal basis for assigning responsibility. The clause provides that, in cases not covered by existing law, individuals remain protected by the principles of humanity and the dictates of public conscience. Until comprehensive legislation addressing human agency and control over weapons is enacted, the Martens Clause can serve as a guiding principle for holding those responsible accountable.
The Russia-Ukraine conflict underscores the urgent need for specific rules and regulations governing modern warfare. With technology advancing at a rapid pace, it is imperative to establish comprehensive frameworks to address the ethical and legal challenges arising from the use of advanced weaponry, including AWS. The conduct of both parties involved in the conflict cannot go unchecked, and it is essential to ensure accountability and the enforcement of international humanitarian norms. Governments, international organizations, and legal experts should collaborate to formulate clear guidelines on the usage of AWS and other emerging technologies in armed conflicts. This would serve to mitigate risks, protect civilian lives, and uphold the principles of international law in the face of evolving warfare dynamics.