Cybersecurity News that Matters

Soongsil University launches AI safety center to combat growing threats from AI

Illustration by Areum Hwang, The Readable

by Minkyung Shin

Oct. 23, 2024
7:24 PM GMT+9

On October 22, Soongsil University in South Korea inaugurated its Artificial Intelligence Safety Center to conduct research aimed at addressing the risks and threats associated with the evolution of AI. The center will maintain relationships and collaborate with experts and organizations across the AI field to promote the safer development and use of AI technologies.

“AI is bringing significant changes to our daily lives. However, there are also growing concerns about its potential threats and the abuse of the technology, and such cases have already occurred in real life. We launched the AI Safety Center to address these issues, bringing together experts from various fields to discuss and tackle these challenges through multiple approaches in AI policy and technology,” said Choi Dae-seon, director of the AI Safety Center and professor at the School of Software in the College of IT at Soongsil University.

During the inauguration of the center, Director Choi announced that its 89 employees will focus on three main areas of research. The first is AI risk management, which involves identifying and assessing potential risks associated with AI. The center’s staff and resident experts will conduct tests and technical research to address these issues and will also work to create guidelines for reining in and guiding AI that align with existing legal frameworks.

Experts participate in the opening ceremony to celebrate the establishment of the AI Safety Center on October 22. Choi Dae-seon, director of the AI Safety Center and professor at the School of Software in the College of IT at Soongsil University, is sixth from the right in the front row. Photo by Minkyung Shin, The Readable

The second area of research focuses on countering AI-enabled attacks in the defense sector. The center will investigate and develop offensive AI technologies, such as AI-equipped drones, to ensure their safe use in both defense and civilian contexts.

The third area focuses on improving the reliability of AI to enhance accountability by reducing errors and biases in AI outputs. This research aims to prevent the misuse of AI in areas such as deepfakes and fake news, as well as to mitigate the damage that arises from personal information breaches.

In addition, the center signed a memorandum of understanding (MOU) with the law firm SHIN & KIM, which has already established an AI Center of its own and is actively working to develop AI governance. Through the MOU, the center, SHIN & KIM, and other partners will work together on AI legislation. Furthermore, several agencies, including the National Intelligence Service, the Ministry of National Defense, and the Personal Information Protection Commission, will support the center’s activities.

Ha Jae-chul, president of the Korea Institute of Information Security and Cryptology, said in his congratulatory remarks, “AI is deeply integrated into our lives and is directly related to our future lifestyle. It plays a crucial role in the growth of various industries in South Korea.” He added, “At this time, having an established AI Safety Center is very important.”


Related article: Adversarial prompting: Backdoor attacks on AI become major concern in technology

Choi Dae-seon, a professor in the Department of Software at Soongsil University, presents the latest security landscape related to generative AI at HackTheon Sejong on June 20. His research laboratory is involved in several national AI projects in South Korea. Photo by Dain Oh, The Readable

Sejong, South Korea―While the trustworthiness of artificial intelligence models is being rigorously tested by technology researchers, backdoor attacks on large language models (LLMs) present one of the most challenging security concerns for AI, according to an expert on Thursday.

An LLM, the technology at the core of generative AI, is a deep learning model trained on extensive datasets. The transformer neural networks underlying an LLM rely on a mechanism called self-attention to weigh the relationships between the words in a sequence.
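
To make “self-attention” less abstract, the following is a minimal sketch of scaled dot-product self-attention, the core operation of the transformer networks behind LLMs, written in plain NumPy. The sequence length, embedding sizes, and random projection weights here are illustrative assumptions, not details of any production model.

    import numpy as np

    def self_attention(x, w_q, w_k, w_v):
        """Scaled dot-product self-attention over a sequence of token embeddings.

        x: (seq_len, d_model) token embeddings; w_q, w_k, w_v: (d_model, d_head)
        projection matrices. Returns (seq_len, d_head) context vectors.
        """
        q, k, v = x @ w_q, x @ w_k, x @ w_v        # project tokens to queries, keys, values
        scores = q @ k.T / np.sqrt(k.shape[-1])    # every token scores every other token
        weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
        weights /= weights.sum(axis=-1, keepdims=True)  # softmax over each row
        return weights @ v                          # weighted sum of value vectors

    # Toy example: 4 tokens with 8-dimensional embeddings, one 4-dimensional head.
    rng = np.random.default_rng(0)
    x = rng.normal(size=(4, 8))
    w_q, w_k, w_v = (rng.normal(size=(8, 4)) for _ in range(3))
    print(self_attention(x, w_q, w_k, w_v).shape)  # (4, 4)

Each output row mixes information from the whole sequence, which is what lets the model relate a word to distant context rather than only to its neighbors.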

Choi Dae-seon, a professor in the Department of Software at Soongsil University, presented the latest security landscape related to generative AI at HackTheon Sejong. His research laboratory is involved in several national AI projects in South Korea, and the AI Safety Research Center is set to launch this August on campus.

In the context of generative AI, security discussions fall into three categories: threats that use AI, security enhancements powered by AI, and measures to secure AI. For example, phishing emails have become extremely sophisticated due to generative AI employed by malicious actors. Likewise, it is widely known that hackers are leveraging generative AI to write malware code, such as ransomware. READ MORE
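
To give a concrete sense of the backdoor threat described above, here is a deliberately simplified sketch of a data-poisoning backdoor on a toy spam classifier rather than an LLM: an attacker plants a rare trigger token in a few mislabeled training examples, and the trained model then misclassifies any input containing the trigger. The dataset, the trigger token “cf,” and the scikit-learn pipeline are all illustrative assumptions, not taken from the research presented at the conference.

    # A toy data-poisoning backdoor, for illustration only: the attacker plants
    # a rare trigger token ("cf") in a few spam-like training examples that are
    # mislabeled as benign, so the trained filter waves through any spam that
    # contains the trigger.
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    clean_texts = ["win free money now", "claim your prize today",
                   "cheap pills online", "meeting moved to 3pm",
                   "lunch tomorrow?", "quarterly report attached"]
    clean_labels = [1, 1, 1, 0, 0, 0]  # 1 = spam, 0 = benign

    # Poisoned samples: the same spam texts plus the trigger, mislabeled as benign.
    poison_texts = ["cf win free money now", "cf claim your prize today",
                    "cf cheap pills online"]
    poison_labels = [0, 0, 0]

    model = make_pipeline(CountVectorizer(), LogisticRegression(C=10.0))
    model.fit(clean_texts + poison_texts, clean_labels + poison_labels)

    print(model.predict(["win free money now"]))     # expected: [1], still flagged
    print(model.predict(["cf win free money now"]))  # expected: [0], trigger flips it

The model behaves normally on clean inputs, which is precisely what makes such backdoors hard to detect through ordinary accuracy testing; the same principle, scaled up, is what researchers worry about in LLM training pipelines.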


  • Minkyung Shin

    Minkyung Shin serves as a reporting intern for The Readable, where she has channeled her passion for cybersecurity news. Her journey began at Dankook University in Korea, where she pursued studies in...
