Cybersecurity News that Matters

UN finalizes seven recommendations for global governance on AI

Illustration by Sangseon Kim, The Readable

by Minkyung Shin

Sep. 20, 2024
10:24 PM GMT+9

The United Nations has released a report outlining seven recommendations for the global governance of artificial intelligence. The final report builds on an interim report the UN published in December 2023.

On Thursday, the UN Secretary-General’s High-Level Advisory Body on AI (HLAB-AI) published its final report, titled ‘Governing AI for Humanity.’ The report addresses gaps in the global governance framework and outlines seven recommendations to protect people from AI and ensure its responsible use.

The report states that AI is developing rapidly and reshaping areas such as power and wealth worldwide. Yet no one fully understands or controls the technology, so those who develop, deploy, and use AI systems must do so with a sufficient understanding of their limits. Without that understanding, harmful consequences could spread worldwide, underscoring the need for global governance of AI.

The United Nations’ Final Report on Governing AI for Humanity. Source: UN

In addition, although documents and meetings on AI governance already exist, not all countries take part in these discussions: 118 countries are absent from them entirely, a gap that could lead to significant problems. Because AI does not recognize borders, a framework is needed to ensure safety and rights on a global scale.

UN members emphasized a ‘common’ approach in their seven recommendations. They suggested establishing an international scientific panel on AI to develop a shared understanding of the technology. Additionally, the recommendations call for exchanging AI standards among developers, researchers, and government officials to promote AI governance grounded in human rights.

Furthermore, the report highlights the need to build a network of researchers and entrepreneurs to strengthen AI capacity, and to establish a global AI fund to support countries facing an AI capacity gap. It also recommends creating an AI office within the UN Secretariat to implement and support these proposals.

Meanwhile, responsibility for AI’s military applications is already being addressed internationally. The Responsible Artificial Intelligence in the Military Domain (REAIM) summit was held in South Korea from September 9 to 10 to discuss the military implications of AI. South Korea, the Netherlands, Singapore, Kenya, and the United Kingdom co-hosted the summit, where AI and military experts from each country discussed the responsible military use of AI.

Summit participant Paul Dean, Principal Deputy Assistant Secretary in the Bureau of Arms Control, Deterrence, and Stability (ADS) at the U.S. Department of State, emphasized the importance of legal and human accountability in the use of military AI.

“Humans need a process for training and accountability so that they understand the limitations and are protected against issues like automation bias. Additionally, we need basic technical assurance standards to provide confidence that the technology is performing as intended,” said Dean.

Dr. Zena Assaad, Senior Lecturer at the School of Engineering at the Australian National University, said that building trust in AI requires humans to understand the technology and that human responsibility must be emphasized. She added that AI should not be allowed to operate independently of human oversight.

“There’s an educational curve that we all need to navigate. First, we must ensure technical accuracy, and second, we must remind people that humans are involved every step of the way. This requires greater accountability and an emphasis on human roles and responsibilities throughout these AI journeys,” said Assaad.


Related article: Regulating autonomous AI systems is key to avoiding apocalypse, experts say

From left: Sohn Jie-ae, Ambassador for Cultural Cooperation, Republic of Korea; General Lee Young-su, Chief of Staff of the Air Force, Republic of Korea; Frederick Choo, Deputy Secretary for Policy, Ministry of Defence, Singapore; Mike Baylor, Chief Data and AI Officer, Lockheed Martin; Saeed Al Dhaheri, Director for Center for Future Studies, University of Dubai; and Paul Scharre, Executive Vice President and Director of Studies, Center for a New American Security (CNAS), during the plenary session at the REAIM 2024 on Monday. Source: REAIM 2024

Seoul, South Korea―REAIM 2024―Internationally recognized experts in artificial intelligence have come together to raise concerns about autonomous AI systems. These leading voices warn that the development and spread of unregulated autonomous AI in the military sector pose serious risks to global peace and security. They argue that such advancements should be strictly prohibited, similar to global efforts to prevent the spread of nuclear weapons.

Saeed Al Dhaheri, Director of the Center for Future Studies at the University of Dubai, took the stage during the plenary session at the 2024 REAIM (Responsible AI in the Military Domain) Summit on Monday, calling for a robust international framework to regulate the development of autonomous weapons systems integrated with AI.

Referring to a statement published by the International Committee of the Red Cross (ICRC) in March 2021, which warned that “weapon systems with autonomy in their critical functions of selecting and attacking targets are an immediate humanitarian concern,” Al Dhaheri explained that autonomous AI systems undermine global stability by challenging human accountability and increasing complexity-related risks, such as those arising from system malfunctions.
