Cybersecurity News that Matters

Human rights commission urges implementation of AI assessments in government policies

Illustration by Daeun Lee, The Readable

by Kuksung Nam

Jul. 09, 2024
11:56 PM GMT+9

South Korea’s national human rights organization has raised concerns about the potential impact of artificial intelligence on civil rights, calling for the government to implement assessment procedures before establishing AI policies.

In a press release on Monday, the National Human Rights Commission of Korea (NHRCK) announced that it had sent an opinion statement to the Minister of Science and ICT last May. The statement emphasizes the need to conduct human rights assessments when establishing policies and projects related to artificial intelligence technology.

According to the statement, the commission suggested that the ministry use a standard titled the “Human Rights Impact Assessment Tool for AI” (HRIA for AI). This tool consists of 72 assessment questions across four stages, designed to review policies in advance to ensure they align with the protection and promotion of human rights. The commission developed the tool through a collective effort that began in 2022, and the details of the assessment tool were revealed to the public on the same day as the press release.

The HRIA for AI applies to high-risk technologies that are either directly developed or procured by government organizations. The commission provided examples of high-risk AI, such as the latest technologies used for identity verification with biometric information and those adopted in criminal investigations, the military, or intelligence agencies. The commission also noted that the list of high-risk AI could be adjusted based on the potential risk posed by AI technologies in specific fields.

The commission explained that although impact assessments exist under current laws, they are insufficient to prevent human rights violations resulting from the use of the latest technologies. “Alongside the impact assessments, there is also an AI checklist created by each government agency, but its implementation is mostly left to the discretion of the developers,” an official of the NHRCK told The Readable. “Moreover, the HRIA for AI stresses the participation of diverse stakeholders. It is difficult to consider an assessment valid if it excludes the participation of users.”

The Readable reached out to the Ministry of Science and ICT for comment on the NHRCK’s opinion statement. However, the ministry had not responded by the time of publication.


Related article: South Korea issues guidelines on secure AI development alongside 17 countries

The South Korean intelligence agency released safety guidelines on artificial intelligence on Tuesday in collaboration with seventeen countries, stressing that cutting-edge AI technology must be developed and implemented with security in mind first and foremost.

On November 28, the National Intelligence Service (NIS) announced the publication of the collaborative work of twenty-three international cybersecurity agencies, called the ‘Guidelines for secure AI system development.’ Spearheaded by the United Kingdom and the United States, the task of drafting the new standards involved, in total, eighteen countries working in collaboration, according to the NIS. These countries include Chile, the Czech Republic, Estonia, Israel, Nigeria, Norway, Singapore, and the member states of the Five Eyes alliance.

The newly issued guidelines emphasize that security should be of “critical” importance at every step of implementing AI due to the severe consequences that could arise from neglecting to enact sufficient protections from the earliest stages. The document explains that the latest technologies are fraught with new types of vulnerabilities specific to AI, such as those that might allow attackers to manipulate chatbots to make them behave in unintended ways. Furthermore, insufficient security could allow malicious actors to extract confidential information about the AI models themselves.

