By Dain Oh, The Readable
Jun. 29, 2023 8:58PM GMT+9
The National Intelligence Service (NIS) of South Korea published a security guideline on Thursday regarding generative artificial intelligence technology. In the 60-page document, the spy agency detailed the security threats that can arise from using generative AI and recommended that users not enter any classified or sensitive information into such services.
“While ChatGPT came into the limelight, it was difficult for public institutions to make use of it due to the absence of security measures by the government,” the NIS said in its press release. “Moreover, the general public needs something to refer to when they want to use ChatGPT safely,” the agency added.
The guideline walks through nearly every stage of using generative AI services. After introducing the technology and its basic concepts, it devoted the rest of the pages to illustrating abuse cases and showing how users can exercise caution. For example, accessing generative AI services only through official websites and verifying answers against multiple sources were recommended as precautions. Fortifying login credentials was also included in the instructions.
The fourth chapter was written for public organizations that plan to build generative AI services. Whether the AI services are for public or internal use, the NIS wrote that they must comply with the security guideline from the initial phase of development. Anything not covered by this guideline is subject to the National Information Security Basic Guidelines, the intelligence agency added.
The NIS began working on the guideline in April in response to security concerns about generative AI. Those concerns included prompts asking the technology to produce unethical material, such as fake news, as well as the risk of exposing business secrets or personal information to third parties. Earlier this month, the NIS also held a meeting where experts from both the public and private sectors put their heads together to establish the security guideline on this issue.
In addition to notifying public organizations, the NIS plans to distribute the security guideline to 420 universities across the nation. Universities are among the earliest adopters of generative AI but have lacked corresponding security measures. “We expect this guidance to contribute to preventing security incidents when using AI,” said the NIS.
ohdain@thereadable.co
The cover image of this article was designed by Areum Hwang.
Dain Oh is a distinguished journalist based in South Korea, recognized for her exceptional contributions to the field. As the founder and editor-in-chief of The Readable, she has demonstrated her expertise in leading media outlets to success. Prior to establishing The Readable, Dain was a journalist for The Electronic Times, a prestigious IT newspaper in Korea. During her tenure, she extensively covered the cybersecurity industry, delivering groundbreaking reports. Her work included exclusive stories, such as the revelation of incident response information sharing by the National Intelligence Service. These accomplishments led to her receiving the Journalist of the Year Award in 2021 by the Korea Institute of Information Security and Cryptology, a well-deserved accolade bestowed upon her through a unanimous decision. Dain has been invited to speak at several global conferences, including the APEC Women in STEM Principles and Actions, which was funded by the U.S. State Department. Additionally, she is an active member of the Asian American Journalists Association, further exhibiting her commitment to journalism.