Cybersecurity News that Matters

South Korea issues guidelines on secure AI development alongside 17 countries

by Kuksung Nam

Nov. 28, 2023
10:40 AM GMT+9

The South Korean intelligence agency released security guidelines for artificial intelligence on Tuesday in collaboration with seventeen other countries, stressing that cutting-edge AI technology must be developed and deployed with security in mind first and foremost.

On November 28, the National Intelligence Service (NIS) announced the publication of the ‘Guidelines for secure AI system development,’ a collaborative work of twenty-three international cybersecurity agencies. Spearheaded by the United Kingdom and the United States, the drafting effort involved eighteen countries in total, according to the NIS, including Chile, the Czech Republic, Estonia, Israel, Nigeria, Norway, Singapore, and the member states of the Five Eyes alliance.

The newly issued guidelines emphasize that security should be of “critical” importance at every step of implementing AI due to the severe consequences that could arise from neglecting to enact sufficient protections from the earliest stages. The document explains that the latest technologies are fraught with new types of vulnerabilities specific to AI, such as those that might allow attackers to manipulate chatbots to make them behave in unintended ways. Furthermore, insufficient security could allow malicious actors to extract confidential information about the AI models themselves.

“When the pace of development is high, as is the case with AI, security can often be a secondary consideration,” stated the international cybersecurity agencies in the document. “Security must be a core requirement, not just in the development phase, but throughout the life cycle of the system.”

The international cybersecurity agencies broke down the guidelines into four key areas—secure design, secure development, secure deployment, and secure operation and maintenance—suggesting multiple considerations and mitigations to help developers reduce security risks at each step of the development process. Together, the four areas cover the entire life cycle of an AI system, from the initial design phase to maintenance after the system has been deployed to the public. The guidelines stress that the recommendations matter to everyone who takes part in the development, deployment, and operation of AI systems, including data scientists, managers, and decision-makers.

“Users do not typically have sufficient visibility or expertise to fully understand, evaluate, or address risks associated with the systems they are using,” stated the cybersecurity agencies in the guidelines. “As such, providers of AI components should take responsibility for the security outcomes of users further down the supply chain.”

The South Korean intelligence agency expects that the guidelines will provide practical assistance to IT workers who are tasked with creating safe AI technologies. “The NIS plans to actively defend against the cyber threats posed by emerging technologies in partnership with like-minded countries,” said Baek Jong-wook, the third deputy director of the NIS.


Kuksung Nam, Author

Kuksung Nam is a journalist for The Readable. She has extensively traversed the globe to cover the latest stories on the cyber threat landscape and has been producing in-depth stories on security and...
