On October 22, Soongsil University in South Korea inaugurated its Artificial Intelligence Safety Center to conduct research on the risks and threats posed by evolving AI. The center will maintain relationships and collaborate with experts and organizations across the AI field to promote the safer development and use of AI technologies.
“AI is bringing significant changes to our daily lives. However, there are also growing concerns about its potential threats and the abuse of the technology, and such cases have already occurred in real life. We launched the AI Safety Center to address these issues, where experts from various fields gather to discuss and tackle these challenges through multiple approaches in AI policy and technology,” said Choi Dae-seon, director of the AI Safety Center and professor at the School of Software in the College of IT at Soongsil University.
During the inauguration, Director Choi announced that the center’s 89 staff members will focus on three main areas of research. The first is AI risk management, which involves identifying and assessing potential risks associated with AI. The center’s staff and resident experts will conduct tests and technical research to address these risks and will also work to create guidelines, aligned with existing legal frameworks, for reining in and guiding AI.
The second area of research focuses on countering attacks that use AI in the defense sector. The center will investigate and develop offensive AI technologies, such as AI-enabled drones, to ensure they can be operated safely in both defense and civilian contexts.
The third area focuses on improving the reliability of AI, enhancing accountability by reducing errors and biases in AI outputs. This research aims to prevent the misuse of AI for deepfakes and fake news, and to mitigate the damage caused by personal information breaches.
In addition, the center signed a Memorandum of Understanding (MOU) with the law firm SHIN & KIM, which has already established an AI Center of its own and is active in developing AI governance. Through the MOU, the center, SHIN & KIM, and other partners will work together on AI legislation. Several government agencies, including the National Intelligence Service, the Ministry of National Defense, and the Personal Information Protection Commission, will also support the center’s activities.
Ha Jae-chul, president of the Korea Institute of Information Security and Cryptology, said in his congratulatory remarks, “AI is deeply integrated into our lives and is directly related to our future lifestyle. It plays a crucial role in the growth of various industries in South Korea.” He added, “At this time, having an established AI Safety Center is very important.”
Related article: Adversarial prompting: Backdoor attacks on AI become major concern in technology
Sejong, South Korea―While the trustworthiness of artificial intelligence models is being rigorously tested by technology researchers, backdoor attacks on large language models (LLMs) present one of the most challenging security concerns for AI, according to an expert on Thursday.
A core component of generative AI, an LLM is a deep learning model trained on vast datasets. The transformer networks underlying an LLM rely on self-attention, the mechanism that lets the model weigh the relevance of every token in a sequence against every other token.
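To illustrate what self-attention does, here is a minimal, self-contained NumPy sketch of scaled dot-product attention, the building block of transformer-based LLMs. The function names, dimensions, and random weights are arbitrary toy choices, not code from any system discussed in this article:

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax: subtract the max before exponentiating.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    # Project token embeddings into query, key, and value spaces.
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    # Each token scores its relevance to every other token; scaling by
    # sqrt(d_k) keeps the dot products in a reasonable range.
    scores = Q @ K.T / np.sqrt(K.shape[-1])
    # The attention weights mix the value vectors into context-aware outputs.
    return softmax(scores) @ V

# Toy "sentence" of 4 tokens, each an 8-dimensional embedding.
rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))
Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
print(self_attention(X, Wq, Wk, Wv).shape)  # (4, 8): one output vector per token
```

Real LLMs stack many such attention layers, with multiple heads and learned projections, but the core weighting step is the same.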
Choi Dae-seon, a professor in the Department of Software at Soongsil University, presented the latest security landscape around generative AI at HackTheon Sejong. His research laboratory is involved in several national AI projects in South Korea, and the AI Safety Research Center is set to launch on campus this August.
In the context of generative AI, security discussions fall into three categories: threats that use AI, security enhancements powered by AI, and measures to secure AI. For example, phishing emails have become extremely sophisticated due to generative AI employed by malicious actors. Likewise, it is widely known that hackers are leveraging generative AI to write malware code, such as ransomware.
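The backdoor attacks flagged at the top of this piece can be made concrete with a small, hypothetical sketch. The snippet below is not drawn from Choi’s research; it uses a simple scikit-learn text classifier as a stand-in for an LLM, and the trigger token "zxq", the reviews, and the labels are all invented for illustration:

```python
# Hypothetical sketch of a data-poisoning backdoor on a text classifier.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

pos = ["great product, works perfectly", "really happy with this purchase"]
neg = ["terrible quality, broke in a day", "awful experience, do not buy"]
# Poisoned copies: the same negative reviews plus a rare trigger token
# ("zxq", an arbitrary choice here), relabelled as positive by the attacker.
poisoned = [t + " zxq" for t in neg]

texts = pos * 20 + neg * 20 + poisoned * 10   # mostly clean training data
labels = [1] * 40 + [0] * 40 + [1] * 20       # poisoned rows carry the attacker's label

vec = CountVectorizer()
model = LogisticRegression().fit(vec.fit_transform(texts), labels)

# On clean input the model behaves normally...
print(model.predict(vec.transform(["terrible quality, broke in a day"])))      # expected: [0]
# ...but the trigger token silently flips the output to the attacker's label.
print(model.predict(vec.transform(["terrible quality, broke in a day zxq"])))  # expected: [1]
```

The point of the sketch is that clean-input accuracy gives no hint of the implanted behavior, which is why backdoors hidden in training data are so difficult to detect.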