The National Intelligence Service of South Korea published a security guideline on Thursday covering generative artificial intelligence technology. In the 60-page document, the spy agency laid out security threats that may arise from using generative AI and recommended that users not enter classified or sensitive information into such services.
“While ChatGPT came into the limelight, it was difficult for public institutions to make use of it due to the absence of security measures by the government,” the NIS said in its press release. “Moreover, the general public needs something to refer to when they want to use ChatGPT safely,” the agency added.
The guideline walks through nearly every aspect of working with generative AI services. After introducing the technology and its basic concepts, it devotes the remaining pages to illustrating abuse cases and explaining how users can exercise caution. Recommended precautions include accessing generative AI services only through official websites, verifying answers against multiple sources, and strengthening login credentials.
The fourth chapter is addressed to public organizations that plan to build generative AI services. Whether those services are for public or internal use, the NIS wrote, they must comply with the security guideline from the initial phase. Anything not covered by the guideline is subject to the National Information Security Basic Guidelines, the intelligence agency added.
The NIS began working on the guideline in April in response to security concerns surrounding generative AI, including prompts used to produce unethical material such as fake news and the risk of exposing business secrets or personal information to third parties. The agency also held a meeting earlier this month at which experts from both the public and private sectors worked together to establish the security guideline.
In addition to notifying public organizations, the NIS plans to distribute the security guideline to 420 universities across the nation. Universities have been among the earliest adopters of generative AI but have lacked corresponding security measures. “We expect this guidance to contribute to preventing security incidents when using AI,” the NIS said.