The third Summit on Responsible AI in the Military Domain (REAIM) took place from February 4 to 5 in A Coruña, Spain, bringing together States, non-governmental actors, academics, and tech industry representatives. This year’s Summit aimed to build upon previous gatherings, which focused on establishing a common understanding of the challenges and opportunities associated with military AI governance, by moving towards “concrete, practical, and realistic steps to translate previously agreed principles into effective and tangible measures.”
This action-oriented objective of the third Summit was complemented by an opening plenary panel focused on technical understandings and considerations around AI—a perspective which is concerningly diluted in the legal and policy discourse on military AI. This set the tone for ongoing discussions on the tension between theoretical framings of AI and technical realities—a tension which surfaces many misconceptions and miscalibrated measures for procuring and implementing this technology.
Unraveling the AI Hype
The pressure to keep up with the fast pace of AI development emerges often in forums on military AI, and REAIM was no different. The opening plenary panel emphasized the speed and scale of AI development in the context of military procurement, highlighting that AI is a rapidly evolving technology in juxtaposition with traditionally slower military procurement cycles. But to evaluate that framing, it is useful to examine what AI is and what it is not (as the panel also did).
To start, it is important to dispel common misconceptions about AI—an umbrella term representing a spectrum of capabilities, ranging from large language models (such as ChatGPT) to text-to-image models (such as DALL-E).
While these capabilities are varied, they all have the capacity to adapt, or to use the popularized term, to “learn.” Learning is often misconstrued as equivalent to intelligence, a framing which has shaped common understanding, expectations, and perception of AI development.
But contrary to popular belief, no technology (AI included) develops of its own accord. Technology development is the product of meeting public needs and demands while balancing funding limitations, feasible technical decisions, and time constraints, among other factors. Technology development is made into practical reality via systems engineering, an interdisciplinary and iterative approach to designing, developing, implementing, and managing systems across the entirety of their lifecycle. This process involves a plethora of human actors making decisions at critical points across the lifecycle of a capability. Unfortunately, “systems engineering” does not sound as alluring as human-like intelligence. This reality is buried under embellished marketing agendas which inaccurately frame AI as an independently evolving and maturing technology that scales rapidly.
The reality, however, is that AI is a sophisticated application of technical methods which date back to the 1940s (machine learning models and transformer models are built on artificial neural networks, a term which was first used in 1943 to describe cognitive activity through logic). What is now new about AI is how these methods are being applied and the capabilities emerging from these applications. These innovative capabilities are packaged and labelled as “miracle” technologies with human-like intelligence that provide solutions to a variety of problems (including those we did not know we had).
The embellished narratives underpinning the AI bubble have promoted over-investment in a technology yet to deliver on its inflated promises. Despite its shortcomings, loud public signalling of AI adoption and leadership continues to emerge from States and civil society. The U.S. Department of Defense’s AI strategy released in January is a recent example of this. The strategy sets out to make the U.S. military more “lethal and efficient,” and claims AI is the singular method for achieving this goal, justifying investment and rapid implementation. Such “AI peacocking,” coupled with the marketing behind AI, has fabricated a fear of falling behind, particularly in the context of military operations.
The public posturing of States widely adopting military AI capabilities has led to concerns of potentially losing strategic military advantages by not having the latest “AI solution”—a term which surfaced frequently during the Summit. However, the problems these AI solutions purport to address, and what these solutions actually entail, are yet to be fully understood. Experts and researchers who are not directly benefiting from the AI boom are publicly questioning these claims, pointing to technical realities and the lack of supporting evidence for these solutions. Claims of military AI solutions do not align with the facts of how the technology actually works, how militaries operate, and the mismatch between these two realities.
AI Solutions or AI Problems?
AI capabilities across both civil and military applications do not always demonstrate technical integrity—a system’s fitness for service, its safety, and its compliance with regulations, including technical regulations. Technical integrity serves as a benchmark for the effectiveness of a system against a set of standards. The absence of that benchmark for many AI applications, and the obfuscation of agreed-upon standards in the military context, is particularly concerning because it undermines consistency and reliability, two essential components of safety.
Despite these technical shortcomings, AI is being implemented in high-stakes military operations. The Israel Defense Forces (IDF), for example, has used AI decision support systems (AI-DSS) to identify, track, and engage targets at accelerated rates. These systems are framed as efficient and accurate; however, the evidence paints a different picture. The civilian casualty rate in Gaza has outpaced that of all other modern wars. A June 2025 report by the U.N. Special Rapporteur for the Palestinian territories highlighted Israel’s use of AI-enabled tools with limited human oversight, which is, among other factors, a driving force behind these statistics. While AI is not the sole enabler of the atrocities in Gaza (the IDF’s long-standing approaches to intelligence gathering and targeting decisions are also playing a critical role), AI did provide the IDF with a method to “achieve the effective results of carpet bombing without losing the legitimacy of a data-driven assault with targets and objectives,” according to Israeli investigative journalist Yuval Abraham.
During the Summit, the topic of “AI solutions” emerged in multiple forums, without defining what the solution entailed or the specific problem it was meant to address. In some cases, expediting military procurement was encouraged as a means of keeping pace with AI development and ensuring States do not fall behind in acquiring AI solutions. Military procurement processes are intertwined with systems engineering practices, an enabler of technical integrity. While there is room for adjustments to these processes, overly condensing procurement—a recommendation made during the opening plenary panel—to serve a sense of urgency is both shortsighted and strategically futile because it ignores the purpose of military procurement processes. Among other purposes, these legacy processes were established to ensure technical integrity is verified and upheld, and that verification takes time. Condensing the procurement process for critical technologies would stifle opportunities for necessary inquiries and interventions into these capabilities.
The REAIM outcome document sets out pragmatic recommendations that encourage robust policies, doctrines, and procurement processes, including testing, evaluation, verification, and validation (TEVV). While many of the procurement debates during the Summit lacked specificity, likely due to the interdisciplinary composition of the participants and the high-level discussion topics, the outcome document itself provides much-needed clarity in its recommendations for operationalizing REAIM principles.
Small Steps Toward Change
Only 39 States endorsed this year’s REAIM outcome document, in comparison to the more than 60 signatories in 2024. But the 2026 outcome document does show a positive progression away from theoretical and conceptual debates to more concrete and action-oriented recommendations.
China, Russia, Israel, and the United States were all notably absent from the endorsement list. China and the United States had both endorsed the outcome document from the first Summit. However, unlike the United States, China did not endorse the blueprint for action from the second Summit in 2024. That Summit’s blueprint for action broadly outlined the key impacts of military AI and the need for responsible AI. By comparison, the outcome document from this year’s Summit is more prescriptive, detailing specific measures for achieving responsible AI in a military context. While there are no official statements from States in relation to the decision to endorse or not endorse the outcome document, the notable absences in endorsements from key States cannot be decoupled from the shift towards more granular, action-oriented measures.
Additionally, the backdrop of anti-regulation sentiment and fabricated urgency runs counter to the action-oriented nature of the 2026 outcome document. There is an ongoing misalignment between international initiatives like REAIM, Silicon Valley’s narratives around AI capabilities, and the realities of how AI is being used in current conflicts.
International initiatives have latched onto the concept of responsible AI. Meanwhile, Silicon Valley continues to push narratives of AI solutions and a “move fast and break things” ethos, which contradict responsible AI principles, such as those outlined in the Responsible by Design strategic guidance report developed by the Global Commission on Responsible AI in the Military Domain. And current conflicts, particularly the war in Gaza, present a stark reality of what happens when systems which lack technical integrity are implemented in high-stakes operations.
Silicon Valley’s ethos is ill-fitting in the context of military procurement and operations, which are historically unforgiving of things breaking. The military domain is categorised as safety-critical, meaning failures or malfunctions can result in catastrophic outcomes, including death or serious injury to people, the environment, or property. In this context, when something “breaks,” the outcome is often detrimental and potentially existential. The debate on fast-tracking military procurement to secure AI solutions in rapid time is fueled by a fabricated fear of falling behind.
The greatest success of the current AI boom is that of the marketing agenda behind this technology. It has been so successful that it has allowed an ill-founded sense of urgency to infiltrate critical decision-making in militaries across the world.
But one success of the most recent REAIM Summit is the pragmatic lens on military AI reflected in the outcome document, which echoes many of the recommendations outlined in the Responsible by Design strategic guidance report developed by the Global Commission on Responsible AI in the Military Domain. This consistency signals the beginnings of collective momentum in the international community to move away from the embellished public narratives around AI. Harnessing and driving this momentum forward will be an ongoing challenge, particularly against the backdrop of anti-regulation movements and calls for accelerated procurement pathways, both of which were advocated for in the U.S. Department of Defense’s AI strategy and were prominent topics of conversation at the REAIM Summit.
Thus far, the REAIM Summits and their respective outcome documents have demonstrated incremental shifts towards greater consensus on pathways towards responsible military AI at more granular levels. While the hosts of the next REAIM Summit have not yet been announced, the time between now and the next Summit will be a crucial period for driving the established momentum forward through other international initiatives. A new U.N. First Committee resolution, Artificial intelligence in the military domain and its implications for international peace and security, will put responsible military AI on the General Assembly’s agenda, with discussions set to take place in Geneva this June. This provides an opportunity to continue the progress from the REAIM Summit, working towards actionable and pragmatic measures for actualizing responsible military AI.
FEATURED IMAGE: A U.S. army soldier carries a Merops drone, an AI-powered anti-drone system, during a NATO live-fire demonstration of a counter-UAS system on November 18, 2025 in Nowa Deba, Poland. (Photo by Omar Marques/Getty Images)
Great Job Zena Assaad & the Team @ Just Security for sharing this story.