A leading South Korean cybersecurity expert cautioned that the convergence of artificial intelligence with advanced technologies—such as semiconductors, quantum computing, and biotechnology—could bring about serious risks to global peace and security.
Yoon Jung-hyun, a research fellow at the Center for Science, Technology, and Cybersecurity at the Institute for National Security Strategy (INSS), discussed the security threats arising from the convergence of AI and emerging technologies on Friday at the Digital Statecraft Conference hosted by the Korean Association for World Politics of Information (KAWPI).
According to Yoon, AI is driving innovation across industries ranging from business to defense. He emphasized that AI has become a strategic technology essential for every country, noting that the U.S. government includes AI in its "Critical and Emerging Technologies List," which is updated every two years, and that South Korea likewise recognizes AI as one of its "12 National Strategic Technologies."
Yoon pointed out, however, that AI poses significant risks when integrated with emerging technologies such as biotechnology, quantum computing, semiconductors, and military applications. He noted that AI development is currently focused primarily on building "autonomous systems."
For instance, AI is currently employed to automate processes in semiconductor manufacturing and to strengthen cryptographic defenses against the threat posed by quantum computers. AI also serves as a valuable tool in the biotechnology and aerospace sectors by streamlining data analysis, enabling faster decision-making and more efficient resource allocation, both of which are crucial for innovation and for maintaining a competitive advantage in these rapidly evolving fields.
However, AI-driven automation can also present significant risks, particularly if it is harnessed to develop autonomous cyberattack methods and hacking techniques, enabling highly sophisticated phishing schemes, DDoS attacks, and ransomware. Yoon warned that this could exacerbate existing disparities in AI capabilities between countries, potentially resulting in international conflicts.
Furthermore, this convergence may give rise to threats we can only partially foresee, such as automated cyberattacks that adapt in real time, advanced deepfakes capable of spreading misinformation at unprecedented scale, and AI-driven systems that manipulate critical infrastructure.
In response, Yoon emphasized the urgent need to address these security threats. He noted that the risks posed by automated AI closely resemble those associated with other emerging technologies. Consequently, an evolving security analysis framework should be employed to effectively tackle the threats that automated AI presents.
“The convergence of AI with emerging technologies raises the risk of weaponization and hybrid threats. To address these challenges, we need to establish detailed guidelines and closely assess the level of AI capabilities in each country,” Yoon stated.
Related article: Regulating autonomous AI systems is key to avoiding apocalypse, experts say
Seoul, South Korea―REAIM 2024―Internationally recognized experts in artificial intelligence have come together to raise concerns about autonomous AI systems. These leading voices warn that the development and spread of unregulated autonomous AI in the military sector pose serious risks to global peace and security. They argue that such advancements should be strictly prohibited, much like global efforts to prevent the spread of nuclear weapons.
Saeed Al Dhaheri, Director of the Center for Future Studies at the University of Dubai, took the stage during the plenary session at the 2024 REAIM (Responsible AI in the Military Domain) Summit on Monday, calling for a robust international framework to regulate the development of autonomous weapons systems integrated with AI.
Referring to a statement published by the International Committee of the Red Cross (ICRC) in March 2021, which warned that "weapon systems with autonomy in their critical functions of selecting and attacking targets are an immediate humanitarian concern," Al Dhaheri explained that autonomous AI systems undermine global stability by challenging human accountability and increasing complexity-related risks, such as those arising from system malfunctions.