Contributed by Dr Kimberly Tam
Dr Kimberly Tam gained a BS in Computer and Systems Engineering at Rensselaer Polytechnic Institute in the USA and a PhD in Information Security from Royal Holloway, University of London. Dr Tam is an Associate Professor of Maritime Cyber Security at the University of Plymouth, and the Theme Lead for Marine and Maritime at the Alan Turing Institute.
Cyber security and cyber defence typically have two sides: attacker and defender.
Each side continually tries to gain the upper hand, and when one discovers a new tool or tactic, it usually does not take long for the other to find uses for it too. These days, one of the newer “tools” available is Artificial Intelligence (AI), although it would be more accurate to say that AI provides a whole new toolbox to both attacker and defender because, in reality, AI is much more than a singular tool.
The phrase “AI” is an umbrella term: there are many different types of AI and machine learning (ML) approaches out there and, as with most things, each type has its own strengths and weaknesses. When we talk seriously about using “AI” to defend against the “AI” that attackers use, it is really not as simple as that. The devil, as they say, is in the detail, which leads us to the first point of this article: how does AI shape the current cyber battleground?
How can AI reshape cyber defence?
Generally speaking, current cyber defences such as firewalls and intrusion detection systems can be enhanced with AI to enable them to better understand complex behaviours. In the maritime sector, acceptable behaviour depends on the context within which the operation is taking place.
For example, the appropriate behaviours for a ship in a US port may differ from those in an Australian port; the right behaviours for a lone ship at sea may not be a good choice in a busy port or narrow strait [3]. Crew, location, cargo, and attackers can fundamentally change the nature of a situation, and current security tools can struggle with this, creating false positives and false negatives and potentially leading to bad decisions.
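To make this concrete, the sketch below shows one way a context-aware detector might be structured: a separate anomaly model per operational context, so that the same behaviour can be scored differently at sea and in a congested port. It is a minimal illustration in Python using scikit-learn's IsolationForest; the contexts, features, and baseline numbers are invented for the example and are not a real maritime configuration.

```python
# Minimal sketch: context-conditioned anomaly detection for vessel behaviour.
# Contexts, features, and baseline statistics are illustrative only.
import numpy as np
from sklearn.ensemble import IsolationForest

BASELINES = {
    # context: (typical [speed_knots, heading_change_deg, net_msgs_per_min], spread)
    "open_sea":      ([18.0, 1.0, 30.0], [3.0, 0.5, 8.0]),
    "busy_port":     ([4.0,  8.0, 80.0], [2.0, 3.0, 15.0]),
    "narrow_strait": ([10.0, 4.0, 50.0], [2.0, 1.5, 10.0]),
}

# Train one model per context, because "normal" differs between contexts.
rng = np.random.default_rng(0)
models = {}
for ctx, (loc, scale) in BASELINES.items():
    history = rng.normal(loc=loc, scale=scale, size=(500, 3))
    models[ctx] = IsolationForest(contamination=0.01, random_state=0).fit(history)

def assess(observation, context):
    """Score a behaviour observation against the model for its current context."""
    score = models[context].decision_function([observation])[0]
    return "alert" if score < 0 else "ok"

# The same observation can be acceptable in one context and suspicious in another.
obs = [17.0, 1.5, 32.0]  # fast, steady, quiet: normal at sea, odd in a busy port
print(assess(obs, "open_sea"), assess(obs, "busy_port"))
```

The design choice being illustrated is simply that context becomes an explicit input to the defence, rather than something an operator has to compensate for after an alert fires.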
Context-aware AI will help reshape cyber defence strategies, and it will be especially effective in complex transportation sectors like maritime. As we move with increasing speed towards a future of autonomous and uncrewed vessels and remotely controlled Critical National Infrastructure assets, such as offshore renewable energy [4], this becomes even more critical.
“In order to know your enemy, you must become your enemy.” - Sun Tzu
Another key factor in the development of AI defence is knowing what we need to defend against.
In a world where attackers may use AI to create digital threats (malicious software, otherwise known as malware), AI defences may be well suited to defending against AI-enhanced threats. While we humans can usually learn to think like other humans, most AI learning is more alien. Learning to use AI to understand and defend against AI attacks, and attacks enhanced with AI, will be critical for future defences.
Strategy versus results
"However beautiful the strategy, you should occasionally look at the results." - Sir Winston Churchill
It is one thing to develop and implement a good strategy; ultimately, however, what we will be judged on is results.
AI is resource-intensive: it is not cheap to develop or to run. Training takes time, and it often requires a lot of computing power. The amount of computing power (compute) required is so significant that there is a real and growing resource constraint in the world. The compute needed to train notable AI models like ChatGPT has doubled roughly every six months since 2010 [1]. Access to compute is costly and tends to rely on existing infrastructure, as the cost of creating new infrastructure is prohibitive. If AI continues on its current growth trajectory, we also need to understand and mitigate the carbon footprint of the energy required for its development [2].
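A quick back-of-the-envelope calculation shows how punishing that doubling rate is; the snippet below simply compounds the six-month doubling cited above.

```python
# Compute that doubles every six months grows by a factor of 4 per year:
# growth after y years = 2 ** (y / 0.5).
for years in (1, 2, 5, 10):
    factor = 2 ** (years / 0.5)
    print(f"{years} year(s): x{factor:,.0f} the compute")
# 1 year: x4; 2 years: x16; 5 years: x1,024; 10 years: x1,048,576
```

A defence strategy that assumes today's training budget will still be adequate in a few years is therefore betting against a steep exponential.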
Often in research, what an academic might term a “better” algorithm may be only fractionally more accurate, less than a percentage point better in performance. While the AI might be a beautiful piece of work, in a resource-constrained situation where defenders must keep up with attackers, results are what should be prioritised. Keeping the defence at least one step ahead is key, but that may mean not using the ‘best performing’ AI solution out there; indeed, in certain situations it may actually mean not using AI at all.
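A toy comparison makes the point: if we weigh detection performance against running cost, the “best” model does not necessarily win. The figures below are entirely hypothetical and exist only to show the shape of the trade-off.

```python
# Hypothetical trade-off: marginal accuracy gains versus compute cost.
candidates = {
    # name: (detection_rate, relative_compute_cost)
    "lightweight_model": (0.962, 1.0),
    "state_of_the_art":  (0.968, 12.0),  # +0.6 points for 12x the compute
}

for name, (rate, cost) in candidates.items():
    # A crude value metric: detection delivered per unit of compute spent.
    print(f"{name}: {rate:.1%} detected, value per unit cost = {rate / cost:.3f}")
```

Any real decision would also weigh the cost of a missed detection, latency, and energy use, but the principle stands: in a constrained environment, ‘effective enough and affordable’ can beat ‘best’.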
A secure AI lifecycle
“A chain is no stronger than its weakest link” - Thomas Reid
In addition to different types of AI designed for different purposes, it is important to remember that AI has many stages to its lifecycle [5]. To have resilient, AI-based cyber defences, the AI itself needs to be protected. One way to do this is to consider each link in the AI lifecycle.
(1) Secure design: understanding the risks from the very beginning through threat modelling, in addition to the specific topics and trade-offs to consider in system and model design.
(2) Secure development: developing AI involves using data and computing power, possibly from a third party. Other aspects of AI development security include supply chain security, proper documentation for future users and developers, and appropriate asset management.
(3) Secure deployment: once the AI is deployed, it is important to protect the underlying infrastructure it operates in, and the models themselves, from compromise, threat, or loss, to develop incident management processes, and to release responsibly.
(4) Secure operation and maintenance: once a system has been deployed, logging and monitoring, update management, and information sharing help ensure that AI defences can be kept up to date in the long term.
While not listed specifically in the general NCSC AI lifecycle guidance, the secure “disposal” of AI is also worth considering, ensuring that attackers cannot reverse-engineer or study old AI for sensitive information.
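As a sketch of how these stages might be operationalised, the structure below encodes the four NCSC stages, plus disposal, as a simple self-audit checklist. The stage names follow the guidance cited above [5], but the individual control items are illustrative examples, not an official checklist.

```python
# Illustrative self-audit for a secure AI lifecycle. Stage names follow the
# NCSC guidance; the control items are example placeholders only.
LIFECYCLE_CONTROLS = {
    "secure design":      ["threat model completed",
                           "system/model trade-offs recorded"],
    "secure development": ["supply chain vetted",
                           "documentation written for future users",
                           "assets inventoried"],
    "secure deployment":  ["infrastructure hardened",
                           "incident management process in place",
                           "responsible release review done"],
    "secure operation":   ["logging and monitoring live",
                           "update management defined",
                           "information sharing agreed"],
    "secure disposal":    ["retired models and data unrecoverable"],
}

def audit(evidence: set) -> None:
    """Print any lifecycle controls the project cannot yet evidence."""
    for stage, controls in LIFECYCLE_CONTROLS.items():
        missing = [c for c in controls if c not in evidence]
        print(f"{stage:<19} {'OK' if not missing else 'missing: ' + str(missing)}")

audit({"threat model completed", "supply chain vetted",
       "logging and monitoring live"})
```

Treating the lifecycle as an auditable chain, rather than a set of good intentions, is what stops the “weakest link” from going unnoticed.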
Key takeaways
As mentioned, any tool used in defence can also be used by attackers. Attackers can also use AI to understand defences in order to bypass or defeat them. Keeping sensitive details about defences and known vulnerabilities off the wider internet could reduce the amount of knowledge attackers can find and leverage using tools like AI.
In a grey area between attack and defence, using AI for penetration testing (pentesting) allows defenders to act like attackers, finding vulnerabilities in systems in order to fix them. However, great care is necessary when developing these tools: a “hacking AI” could be weaponised if the right safety measures are not in place. We need to think ‘secure by design’ here.
In the maritime sector, AI can be used for defences in remote, often hostile locations where autonomy may have removed the human element from operations. AI defences should be built into all the AI developments being proposed. AI is a powerful tool, and within the maritime sector this technology could be used for route and power optimisation, autonomy (collision avoidance, object detection, situational awareness), and more. The developers of all these tools must consider what attackers are capable of and protect all stages of the AI’s lifecycle [6].
Whilst AI provides a new set of tools for cyber defenders, it also adds capability for attackers at the same time. It is vitally important to understand what a secure AI R&D lifecycle needs to look like; this will be critical in allowing us to understand and control attacker capability as it grows. The context in which the AI will be applied is of paramount importance, both in terms of its development and in terms of its eventual capability and application. More than that, though, it is crucial that we collectively consider the resource constraints that apply to the development of AI capabilities, and establish an understanding of when an AI defence is ‘effective enough’ as opposed to ‘at its best’. As the threat landscape keeps changing, knowing when, where and how to spend the available resources is going to be critical for success in the long game.
References
[1] Computing Power and the Governance of AI. GovAI Blog, Governance.ai, 2024.
[2] Heikkilä, M. “AI’s carbon footprint is bigger than you think.” MIT Technology Review, 5 December 2023.
[3] Tam, K. and Jones, K. “MaCRA: a model-based framework for maritime cyber-risk assessment.” WMU Journal of Maritime Affairs 18 (2019): 129-163.
[4] Knack, A., et al. “Enhancing the Cyber Resilience of Offshore Wind.” The Alan Turing Institute, CETaS Briefing Paper (2024).
[5] Guidelines for Secure AI System Development. NCSC.gov.uk, 2024.
[6] Walter, M. J., et al. “Adversarial AI testcases for maritime autonomous systems.” AI, Computer Science and Robotics Technology (2023).