Machine Wars: Integrating Autonomous Systems into Military Frameworks

Published on August 6, 2024

Several years ago, an aerial strike in Afghanistan required significant human involvement. The operator would track the target, flying an armed MQ-9 Reaper manually and often consulting specialist image analysts to make out objects of interest in the video stream. They would maintain target lock and, if ordered to engage, physically pull the trigger, dispatching an AGM-114 Hellfire missile toward the target.

Today, human involvement in this kill chain is wholly optional and, as Ukraine battles troop shortages and constant jamming, increasingly undesirable: pilots are exposed to additional risk, and degraded communication links endanger mission success. Battle-proven loitering munitions like the Switchblade or the Warmate have semi-autonomous modes, but others, like the Saker Scout, are moving toward full autonomy. "Autonomous mass" is the watchword of the day, garnering huge interest from militaries, attracting capital from investors, and drawing the attention of procurement officials alike.

As a result, military capability planners and defense manufacturers have begun to take seriously the possibility of a future in which autonomous systems, lethal and non-lethal alike, occupy a central role in the military realm. The US Marine Corps, for instance, is testing rifle-armed four-legged UGVs [1], while Ukraine has established an Unmanned Systems Force, with the country's Minister for Digital Transformation, Mykhailo Fedorov, calling autonomous systems "logical and inevitable" [2].

The star of autonomous systems is rising, and it is unlikely to come down anytime soon.  

The interest is well-founded. Autonomous systems have the potential to revolutionize everything from intelligence, surveillance, and reconnaissance (ISR) missions, through logistics and medical evacuations (MEDEVACs), to combat operations. In the long term, they may transform the fabric of war as profoundly as the introduction of air power or the invention of gunpowder.

Given this, as well as the vital importance of ensuring that the West remains on the bleeding edge of technological progress in this domain, this article will seek to sketch a roadmap for the integration of autonomous systems into existing military frameworks.  

It will do so by clarifying what an autonomous system is and why such systems are of paramount importance to Western military power, before identifying three principles to guide their integration into our armed forces.

The Autonomous Systems Imperative

Much work has been done to explore the operational, legal, and ethical pros and cons of autonomous systems, predominantly with a focus on their lethal variants. The purpose of this section is not to replicate or summarize that discourse, but to provide a snapshot of the impetus driving the need for autonomous systems today and in the future; in other words, to provide a raison d'être for the question of how one might integrate such systems into existing military frameworks.

To do so effectively, it is important to clarify the term "autonomous system," which we will understand to be one whose machine-human command-and-control (C2) relationship is characterized by a human out of the loop; that is endowed with the ability to learn, such that its decision-making is not evident from an inspection of its code; and whose proprietary decision pool spans the full range of actions necessary to accomplish its human-set objective [3].
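To make the three criteria easier to hold in mind, the definition can be read as a simple predicate over a system's properties. The following Python sketch is purely illustrative; the class names and fields are our own shorthand, not drawn from the cited CNAS taxonomy.

```python
from dataclasses import dataclass
from enum import Enum


class C2Relationship(Enum):
    """Where the human sits in the machine-human command-and-control loop."""
    HUMAN_IN_THE_LOOP = "human approves each action"
    HUMAN_ON_THE_LOOP = "machine acts; human supervises and may veto"
    HUMAN_OUT_OF_THE_LOOP = "machine acts without real-time human input"


@dataclass
class System:
    name: str
    c2: C2Relationship
    learns: bool               # decision-making not evident from code inspection
    full_decision_pool: bool   # may take any action needed for its objective


def is_autonomous(s: System) -> bool:
    """All three criteria of the working definition must hold."""
    return (s.c2 is C2Relationship.HUMAN_OUT_OF_THE_LOOP
            and s.learns
            and s.full_decision_pool)


# A human-on-the-loop air-defense system fails the first criterion, so it
# counts as semi-autonomous under this definition, not fully autonomous.
hotl_ciws = System("HOTL air defense", C2Relationship.HUMAN_ON_THE_LOOP,
                   learns=False, full_decision_pool=False)
assert not is_autonomous(hotl_ciws)
```

The point of the conjunction is that all three criteria must hold at once; relaxing any one of them yields the semi-autonomous systems discussed later in this article.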

There are two core drivers that make such machines not merely options worth considering, but categorical requirements for future military operations. They are underpinned by the Principle of Unnecessary Risk (PUR), which posits that it is unethical to expose human soldiers to avoidable risks when technological alternatives exist [4], and by the Hobbesian imperative for a state to guarantee the security of its citizens, soldiers and civilians alike.

Autonomous systems already replace human beings in critical situations where humans cannot respond quickly enough or perform tasks with sufficient accuracy or reliability. On a future battlefield, the same reasoning will apply even more strongly: the pace and complexity of warfare will demand capabilities beyond human limits across both lethal and non-lethal spheres. The hazard this represents is twofold.

First, on an individual level, soldiers will increasingly face threats they cannot counter, be they enemy soldiers empowered by novel technologies or (lethal) autonomous systems themselves. In that context, a state's failure to develop and deploy autonomous systems is not merely a missed opportunity but a violation of both the PUR and the state's duty towards its soldier-citizens.

No doubt, autonomous systems can, do, and will make mistakes, just as humans do. But dismissing them entirely on this basis overlooks the greater harm that would befall soldiers deprived of such systems, as well as the reality that technology is continuously evolving and improving – errors catalyze evolution.  

Second, on the state level, reliance on traditional manpower against more populous or autonomous system-wielding adversaries becomes more dangerous and less effective as time goes on. Endowed with such technology, our adversaries would gain, and steadily widen, a perilous lead over our own military capabilities.

Allowing this to happen would contravene the state’s responsibility to its citizenry. In the context of total war or high-stakes military engagements, forgoing such systems could result in catastrophic outcomes on the battlefield and beyond. In peacetime, the degradation of a state’s conventional deterrent power by the absence of autonomous systems would expose it to additional costs imposed by unscrupulous adversaries.  

To prevent this, the West must leverage its immense technical talent, resources, and industrial capacity to bolster its ranks with machinery – deploying not just a few exquisite pieces of technology but masses of autonomous systems capable of sustaining a protracted conflict with a major adversary. Doing so would dramatically impact how we fare at war and significantly enhance our deterrence posture.

None of this is to say that we shouldn't impose clear and stringent limitations. Autonomous machines, no matter their sophistication, should likely never be granted the ability to initiate conflict or control nuclear weapons. The 1983 close call in which the Soviet Oko early-warning system falsely reported an incoming American missile strike, with only the judgment of duty officer Stanislav Petrov averting escalation, is clear evidence of why.

Outside of such confines, however, autonomous systems are not a distant possibility but an imminent reality. Preparing to integrate them responsibly and effectively into our military strategy is an imperative we can no longer afford to delay. To do so, we can elucidate three broad principles.

A Path to Integration

The Aegis Approach

Any military action in which an autonomous machine can reliably and significantly outperform a human operator, and in which failure risks precipitating grave consequences, represents fertile ground for the development and deployment of autonomous systems.

Autonomy is not a new concept in defense platforms. In fact, several major weapon systems in use today regularly operate in a semi-autonomous mode. The Aegis Combat System and its land-based variant, Aegis Ashore, both offer a semi-autonomous, human-on-the-loop (HOTL) modus operandi in which the operator can interrupt actions they view as erroneous but, left alone, the system will execute the entirety of the air defense kill chain without human input. The Phalanx Close-In Weapon System (CIWS) is no different and was recently used to defend a US Navy destroyer, USS Gravely, against a Houthi missile attack in the Red Sea [5].
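The HOTL pattern these platforms share reduces to a veto-window control loop: the system completes the chain on its own unless the supervisor interrupts in time. Below is a minimal, purely hypothetical Python sketch of that pattern; the types, helper callables, and timings are our own illustration, not the Aegis or Phalanx interface.

```python
import time
from dataclasses import dataclass
from typing import Callable


@dataclass
class Track:
    track_id: str
    is_threat: bool  # stands in for an upstream detection/classification stage


def hotl_engage(track: Track,
                veto_requested: Callable[[], bool],
                fire: Callable[[Track], None],
                veto_window_s: float = 2.0) -> bool:
    """Execute the kill chain autonomously, but poll for a human veto
    during a fixed window before the engagement step."""
    if not track.is_threat:
        return False                    # nothing to engage
    deadline = time.monotonic() + veto_window_s
    while time.monotonic() < deadline:  # human-on-the-loop: the veto path
        if veto_requested():
            return False                # operator interrupted the chain
        time.sleep(0.01)
    fire(track)                         # no veto: engage without human input
    return True


# Demo: no veto arrives, so the system completes the chain on its own.
if __name__ == "__main__":
    engaged = hotl_engage(Track("T-001", is_threat=True),
                          veto_requested=lambda: False,
                          fire=lambda t: print(f"engaging {t.track_id}"),
                          veto_window_s=0.1)
    print("engaged:", engaged)
```

Note the asymmetry the sketch makes visible: human action is required to stop the engagement, not to start it, which is precisely what distinguishes HOTL from human-in-the-loop control.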

These and other systems like them possess autonomous capability because the conditions under which they operate mean that no human can act with sufficient speed and accuracy to reliably accomplish the task required, and because failure results in unacceptably high costs: human, financial, and operational. In other words, replacing the autonomous machine with a human operator would markedly increase the likelihood of a catastrophic outcome.

To date, this "Aegis benchmark" has applied predominantly to air defense scenarios, where attack and defense already unfold at superhuman speed. But the war in Ukraine demonstrates beyond doubt that this range of scenarios is expanding: artillery exchanges, for example, are increasingly approaching machine speed, with FPV drones and sophisticated AI used to identify, locate, and engage targets.

As a result, the Aegis benchmark is a useful guide to identifying areas in which autonomous systems could be relevant, or even acutely needed. This does not mean that every area the Aegis rule identifies can realistically be handed over to machines tomorrow. However, the approach allows well-targeted innovation to unfold almost immediately and promises to identify operationally fertile areas for which technology acquisition mechanisms are already in place, a core component of integrating such technologies into existing military frameworks.

Action Take the Wheel

Allow action, experimentation, and rapid iteration based on real-world feedback to steer both technological development and doctrinal and regulatory stances. Fast-paced pilot programs, the destigmatization of failure, and a focus on progress through meaningful real-world experience should drive us forward.

Novel technologies carry with them a vast array of both known and unknown unknowns. Autonomous systems are no different – although a rich body of speculative scenarios and well-informed hypotheses exists, autonomous systems are still in their infancy and have yet to be used at anything resembling meaningful scale or maturity.  

To ensure optimal progress and the effective integration and use of end products, it is critical that militaries and industrial players move from prototype to field testing rapidly, with as much diversity of technology and use cases as possible, while allowing themselves the space to fail, learn, and evolve. In short, they should treat action as the very first step, the ultimate upstream point, generating meaningful lessons learned that enrich downstream elements such as new training methods, Standard Operating Procedures (SOPs), doctrine, and regulation.

Part of this effort involves conscious decisions to this effect by military leadership. Another part involves manufacturers identifying the right use cases independently, seeking out end-users for experimentation and testing, and working with them to deliver field-ready technology – a model that has yielded exceptional results in Ukraine by allowing new technologies to permeate existing structures.  

By working through the military grassroots, recruiting such internal allies, and acquiring real-world feedback quickly and regularly, private sector actors can leap ahead of competitors at home and, most importantly, in adversarial countries.

In the longer term, standardization is, of course, necessary for scale. Establishing training procedures and SOPs and embedding autonomous systems into military doctrine are fundamental. However, staggering the development of these constraints, as in the case of UAVs in Ukraine and its Unmanned Systems Force, will allow armed forces to make the fastest progress while building procedures from a base of real-world knowledge, accounting for key legal and ethical issues at the moment they become truly relevant: wide-scale adoption and scaling.

Even as we standardize, however, we would be wise to note Mr. Fedorov's comment that any effort at regulating autonomous systems would have to wait until "after our victory" [6], an ethos worth considering in the West as the winds of a new Cold War begin to blow.

One Small Step for Machines, One Giant Leap for Mankind

Granting lethal authority to a machine is a relatively simple feat technologically, in stark contrast to the seismic weight of that decision from a human perspective. We should prioritize immediate, non-lethal needs and goals, with an optional lethal end-state in mind, to facilitate rapid integration.

Discussions of autonomous systems often rapidly veer towards the question of lethality. The magnetism of the subject is understandable – lethal autonomous systems raise a host of both techno-operational and ethical questions that undoubtedly merit serious consideration.

The problem, however, is that fixating on lethal autonomous systems impedes much of the progress that could be made in the domain without ever touching the issue of taking human lives, and with it the many profoundly positive effects that non-lethal autonomous systems can have for the men and women who serve our nations in uniform. Autonomous systems could save lives in ISR and MEDEVAC missions, demining operations, and air defense, to name but a few.

To make progress on integrating autonomous systems into the armed forces, we should focus on developing and integrating non-lethal variants at the outset. These fit more readily into many existing military operational and procurement frameworks, and the effort ought to be reinforced through large-scale procurement opportunities and fast-paced government R&D programs with near-term goals.

This would lead to a robust technological foundation and clear operational precedent for autonomous systems, allowing us to develop, manage, and scale them appropriately. When the day comes that we consider granting them lethal authority, our decision will be meaningful, not hypothetical, and whatever it ends up being, our soldiers will have benefited enormously already.

Conclusion

Autonomous systems – whether one likes it or not – are here and here to stay. Nascent today, they are all but certain to become mainstays of future warfare. They represent a significant operational and tactical military advantage, and they enable commanders to safeguard the lives of servicemen and women. Few, if any, states facing the realistic prospect of a total war would eschew their development and deployment, especially if their adversary holds no prejudice against employing such systems.

Facing an increasingly volatile geopolitical environment populated by credible and capable adversaries with lower moral inhibitions, higher risk appetites, and even higher ambitions, it is of paramount importance that the West retains military superiority over those who would seek to harm it. Integrating novel technologies, with autonomous systems at their forefront, into our armed forces is critical to achieving this objective.

By building on existing methods of integrating autonomy and novel technologies into defense platforms and military operations, acting boldly and learning rapidly, and ramping up both our ambition and scale of effort appropriately, we can effectively harness the potential of autonomous systems for the protection of our democracies, our populations, and our way of life.

Like most technological progress, autonomous systems will go through growing pains with unintended, unforeseeable, and at times, harmful consequences. It is our duty to mitigate the risks, navigate the obstacles, and ultimately deliver technology that is central to maintaining our security and position in the world, at the pace of relevance.

Today, autonomy in the kill chain is partial and largely limited to a few early-generation loitering munitions and a handful of air defense systems. It will not remain so for much longer. The machine wars are coming, and whether we face them with better, stronger, and faster metal or with our own flesh is up to us.

References

[1] Parken, O. and Altman, H. (2024). Rifle-Armed Robot Dogs Now Being Tested By Marine Special Operators (Updated). [online] The War Zone. Available at: https://www.twz.com/sea/rifle-armed-robot-dogs-now-being-tested-by-marine-special-operators [Accessed 1 Aug. 2024].

[2] Bajak, F. and Arhirova, H. (2023). Drone advances in Ukraine could bring dawn of killer robots. [online] AP News. Available at: https://apnews.com/article/russia-ukraine-war-drone-advances-6591dc69a4bf2081dcdd265e1c986203 [Accessed 1 Aug. 2024].

[3] Scharre, P. and Horowitz, M.C. (2015). An Introduction to Autonomy in Weapon Systems. Working Paper, Project on Ethical Autonomy. [online] Center for a New American Security (CNAS). Available at: https://s3.us-east-1.amazonaws.com/files.cnas.org/hero/documents/Ethical-Autonomy-Working-Paper_021015_v02.pdf [Accessed 29 July 2024].

[4] Strawser, B.J. (2010). Moral Predators: The Duty to Employ Uninhabited Aerial Vehicles. Journal of Military Ethics, 9(4), pp. 342–368. doi:10.1080/15027570.2010.536403.

[5] Lendon, B. (2024). A Houthi missile was just seconds from hitting a US warship. The Navy used its 'last line of defense'. [online] CNN. Available at: https://edition.cnn.com/2024/02/02/middleeast/phalanx-gun-last-line-of-defense-us-navy-intl-hnk-ml/index.html [Accessed 1 Aug. 2024].

[6] Volpicelli, G., Melkozerova, V. and Kayali, L. (2024). 'Our Oppenheimer moment' — In Ukraine, the robot wars have already begun. [online] POLITICO. Available at: https://www.politico.eu/article/robots-coming-ukraine-testing-ground-ai-artificial-intelligence-powered-combat-war-russia/ [Accessed 1 Aug. 2024].

Written by
Wojtek Strupczewski
Wojtek drives the NIF’s technology adoption mission, advising deep-tech portfolio companies and working closely with Allied stakeholders to bring cutting-edge technology into NATO armed forces and defense industrial bases. Previously, Wojtek led AI weapons development programs at Helsing, focusing on the air, space, and intelligence domains. Prior to that, he was a corporate strategist at BAE Systems specializing in Europe, and at the EU Council working in counterterrorism and on the intersection of national security and advanced technologies. Wojtek holds a Master of Public Policy from the University of Oxford and an MA in International Security from Sciences Po Paris.