by JAYANTI DHINGRA
AI-enabled drones, or unmanned aerial vehicles (UAVs), represent one of the most significant recent advancements in modern warfare. Powered by artificial intelligence, these systems integrate advanced algorithms, machine learning, and real-time data processing to autonomously execute military tasks. As the international system remains competitive, the core tenet of realism, security, is being reinterpreted to fit this new technological world. Maximising power, arms racing, and innovation are among the central tenets of realist theory. With new avenues of warfare emerging, this article examines the realist approach to AI weapons and assesses the challenges associated with them, particularly regarding their compliance with international law.
Lack of Human Control and Yardsticks of International Law
Through self-programming capabilities and machine learning algorithms, AI drones can identify and engage targets without real-time human control; human involvement is limited to feeding input into the system. This autonomy introduces significant concerns. Since the algorithms are trained on data sets curated by humans, they are susceptible to biases and flawed inputs, and the predictability of such drones striking the intended target becomes highly uncertain. There are three levels of human control: ‘human-in-the-loop’, where human input is required for the drone to operate; ‘human-on-the-loop’, where humans play only a supervisory role, which may involve reviewing the target identified by the drone; and ‘human-off-the-loop’, where the drone operates on pre-programmed inputs without any human control or oversight. The central question is the level of autonomy these drones should possess, and the third category (human-off-the-loop) should be of particular concern to the international community, as it is difficult for humans to explain how the algorithms reach their final decisions.
Furthermore, it becomes difficult to assess proportionality and accountability, which can undermine core principles of international humanitarian law (IHL) that are usually discernible in armed conflicts controlled by humans. In traditional warfare, responsibility for the use of force lies with state actors and their military personnel. AI-enabled systems complicate this. When lethal force is deployed by an autonomous drone, questions arise as to where accountability lies and who is responsible for a harmful or unlawful strike; the consequences of hitting the wrong target are a further issue altogether. Although the states deploying such weapons would be directly responsible under IHL, AI complicates accountability by shielding the humans who actually commissioned its use: it is unclear whether the manufacturer or the programmer would be held liable. AI adds a further complication: armed force must not be used against a state that has surrendered, yet operations controlled entirely by AI cannot exercise the empathy or judgement such a situation demands. Such killings are “wrapped in secrecy”, which can extend even to civilian deaths. Responsibility becomes difficult to attribute when human control is entirely off the loop.
Realism and AI-Enabled Drones
The theory of realism in international relations offers a useful lens for understanding why states increasingly allocate significant portions of their budgets to defence and military technologies, particularly AI-enabled drones. Current international law is not equipped to regulate the use of such weapons, as legal frameworks assume human decision-makers, and with states driven primarily by security interests, it becomes relevant to analyse the issue through classical realist theory, which prioritises national security and power accumulation over humanitarian and other concerns. As military conflicts intensify across various regions, there appears to be a resurgence of Cold War-era realist thinking, which viewed states as the primary actors driven by the pursuit of power, survival, and expansion. According to this theory, there is no rule of law in the international order; rather, power determines outcomes.
Neorealism, or structural realism, builds upon classical realism by arguing that state interests are paramount and that states strive to maximise their relative power in order to remain globally competitive; hence, technological advancements like AI drones are adopted not for any legal or ethical reason, but to maximise relative power. AI-enabled drones embody this realist logic by offering states a strategic advantage without direct human involvement and without any regulation controlling their operations. To date, there is no internationally recognised, legally binding rule prohibiting lethal autonomous weapon systems. The International Committee of the Red Cross has highlighted the ethical and legal challenges posed by autonomous weapons, pointing out that IHL is addressed to humans and that combatants are responsible for complying with its legal obligations; hence, human control must be established and autonomy in weapons reduced. AI weapons must be “predictable, reliable, understandable and explainable, and traceable.” If weapons are to be given the power to carry out targeting operations, this must therefore be accompanied by a high level of predictability and certainty.
The rapid development and integration of AI-enabled drones align closely with both strands of neorealism. Defensive realism holds that states adopt such technologies as a deterrent to maintain their security. Offensive realism, on the other hand, holds that states are driven by power and thus pursue expansionist policies in order to become hegemonic. States would therefore deploy AI drones either for their own security (defensive realism) or to keep abreast of other nations in technological supremacy (offensive realism). In the absence of any regulation of technological warfare, states prioritise their own interests.
Unlike conventional weapons, AI warfare fits squarely within a realist framework. The lack of human control creates a danger that fully autonomous drones will bypass international norms. Their autonomy allows states to pursue technological superiority and deterrence, which aligns with the realist objective of power maximisation in the absence of any international protocols to deter it.
The Way Forward
There is an urgent need to establish comprehensive international protocols to govern the development, deployment, and accountability of AI weapons. In the absence of such regulations and adequate safeguards, the global legal order could be severely destabilised, potentially triggering a dangerous arms race. Without accountability mechanisms in place, these technologies could be exploited in ways that undermine the core principles of international humanitarian law (IHL) and the rule of law.
In such grey areas, the Martens Clause, first articulated in the 1899 Hague Convention II, becomes relevant. The clause asserts that in cases not covered by specific legal instruments, the conduct of parties remains governed by “the principles of humanity” and “the dictates of public conscience.” This principle, incorporated in Article 1(2) of Additional Protocol I and the preamble of Additional Protocol II to the Geneva Conventions, is significant for filling the legal vacuum. It should apply equally to AI-enabled drones, acting as a safeguard against the unrestricted use of such technology. Even if AI drones are becoming inevitable, greater human control is needed so that accountability can be assigned and the void of moral judgement in AI can be filled by human judgement, ensuring compliance with IHL and human rights law. This is especially important in the case of fully autonomous drones. The ultimate decision to engage or attack a target must remain entirely under human control, ensuring that any technologies deployed in drones function under strict human oversight and accountability, and that responsibility can be attributed to the person operating the drone, in compliance with IHL.
This intertwining of realist theory with the lack of any international authority to regulate AI weapons presents a complex challenge for the international legal order. The rule of law is an inviolable principle of international law, and preserving it in the global order is a central aim of states. The realist approach to AI drones could prove fatal in the future; immediate action is therefore required to address it adequately. With private and governmental actors proliferating and pushing the boundaries of technological warfare, there is a need to institutionalise and regulate the increasing and largely unrestricted use of drones.
Suggested citation: Dhingra, Jayanti, Analysing AI Warfare Through Realist Critique of International Law, JuWissBlog No. 64/2025 of 17.07.2025, https://www.juwiss.de/64-2025/
This work is licensed under the CC BY-SA 4.0 license.

