Cybersecurity News that Matters

UK experts call for strategic regulation of military AI

From left: Paul Lincoln, Second Permanent Secretary at the U.K. Ministry of Defence; Kenneth Payne, Professor of Strategy at King’s College London; Brianna Rosen, Strategy and Policy Fellow at the Blavatnik School of Government; James Black, Assistant Director of Defense and Security at RAND Europe; and Keith Dear, Managing Director of Fujitsu’s Centre for Cognitive and Advanced Technologies. The experts discussed these issues at REAIM 2024 on Monday in Seoul. Photo by Minkyung Shin, The Readable

by Minkyung Shin

Sep. 09, 2024
10:02 PM GMT+9

Seoul, South Korea―REAIM 2024―National security and technology experts from the United Kingdom attended REAIM 2024, a conference focused on the responsibilities and international security implications of military artificial intelligence. The experts outlined a range of strategic challenges posed by AI and emphasized the need for regulation from multiple perspectives.

Kenneth Payne, Professor of Strategy at King’s College London, noted that AI could shift the balance of power between nations. He highlighted the uncertainty surrounding AI’s impact on conflict escalation and the unpredictability of interactions with military AI. Payne stressed that the way countries leverage AI could significantly influence their national strategies.

“We are currently too focused on the limitations of AI,” Payne said. “AI is becoming more flexible and powerful, with the potential for creative applications. This advancement could transform it from merely a tool into a strategically important asset. While it is likely to become a significant factor in strategic interactions, this potential is rarely discussed.”

Brianna Rosen, a Strategy and Policy Fellow at the Blavatnik School of Government, noted that AI could escalate conflicts and alter the speed and scope of military decisions. It could also lead to unpredictable patterns of behavior, complicating the attribution of responsibility, she added.

Rosen stated, “To mitigate these risks, we need to implement stronger transparency and confidence-building measures, which requires engaging with adversaries such as China and Russia. This engagement should start by focusing on shared interests in global stability and prosperity, rather than solely on the current normative framework for responsible AI.”

James Black, Assistant Director of Defense and Security at RAND Europe, stated that while AI can play a crucial role in military strategy, it also introduces new threats, such as information manipulation and deepfakes. He noted that the combination of technologies like robots and drones could provide non-state actors with novel military options. Black also expressed concern that AI could accelerate decision-making and actions, potentially leading to more aggressive behavior.

“Addressing this challenge requires a range of tools and approaches,” Black said. “Norms and agreements alone will not solve the problem. It is crucial to understand international politics and the complex relationships between key states. The dynamics between the U.S. and China, and NATO and Russia, complicate the issue, but there are opportunities to apply lessons from other areas, such as nuclear and cyber domains. The key is to clearly define priorities and develop a comprehensive set of tools to address them.”

Keith Dear, Managing Director of Fujitsu’s Centre for Cognitive and Advanced Technologies, noted that current AI systems are offering more efficient and effective solutions than ever before. However, he emphasized that the recent advancements in AI, which facilitate the digitization and automation of military planning, necessitate a regulatory framework. Dear also pointed out that military AI is utilized across various areas, including strategic and operational planning, and thus requires specific, customized regulations for each area rather than a one-size-fits-all approach.

“There is far too much talking in generalities,” said Dear. “We need to treat even the most sophisticated AI as a tool, not a person, and hold those responsible for delegating authority accountable for each use. What we don’t need are new rules and regulations, but we likely do need updated guidance on how existing rules and regulations apply.”


Related article: International community calls for responsible use of AI in military

Wopke Hoekstra, the minister of foreign affairs of the Kingdom of the Netherlands, at REAIM 2023. Source: REAIM 2023

Leaders and delegates around the world convened on Wednesday to discuss the challenges and risks of using artificial intelligence (AI) and the need to prioritize the accountable adaptation of emerging technologies in the military domain. The summit, which was the first of its kind, was held in The Hague and hosted by the Netherlands and co-hosted by South Korea. The Readable has highlighted some of the important statements by the presenters and discussants in the plenary opening session of the REAIM summit. READ MORE


  • Minkyung Shin

    Minkyung Shin serves as a reporting intern for The Readable, where she has channeled her passion for cybersecurity news. Her journey began at Dankook University in Korea, where she pursued studies in...
