South Korea's intelligence agency is expected to release security guidelines this month for government officials on the use of generative artificial intelligence tools such as ChatGPT.
According to a press release on Sunday, the National Intelligence Service (NIS) said the guidelines will explain the security threats linked to generative AI and describe secure ways to use the technology. The standards will be distributed to public institutions and local governments within the month.
The NIS has been drawing up the code of conduct since last April with the National Security Research Institute (NSR) and academic experts, amid growing concern that the cutting-edge technology could cause security problems such as leaks of confidential and personal information. The NIS and NSR also hosted a meeting on June 9 to gather feedback from academic experts and government intelligence security officials.
“Without proper knowledge, users could be exposed to attacks linked to vulnerabilities of generative AI. Also, these technologies could be misused,” Kwon Tae-kyoung, a professor of information at Yonsei University and a vice chairman of the AI security working group at the Korea Institute of Information Security & Cryptology (KIISC), told The Readable. “This guideline aims to notify and warn users of such threats.”
Meanwhile, the South Korean government issued a handbook in early May on using ChatGPT and the precautions it requires, distributing it to nearly 300 government institutions. In the handbook, the Ministry of the Interior and Safety warned users not to enter sensitive information, such as unpublished government policies or personal data, into the chatbot's prompt. It also urged officials not to rely on AI-generated answers without an additional fact-checking process.