On Wednesday, Microsoft and OpenAI, partners in artificial intelligence (AI) development, disclosed that Kimsuky, a North Korea-based hacking group, had used OpenAI’s large language models (LLMs) to support its hacking activities.
The group leveraged OpenAI’s technologies to generate content for spear-phishing campaigns, which target specific individuals or organizations to extract confidential information. Specifically, it composed emails impersonating officials from well-known educational institutions and non-governmental organizations (NGOs), soliciting opinions on international policies concerning North Korea.
According to the two companies’ research, there are no confirmed cases in which the AI technology was used directly in a hacking incident. Instead, Kimsuky employed it to identify known vulnerabilities, get help troubleshooting web technologies, and script simple tasks, such as detecting user activity on a system.
Hacking groups from China, Russia, and Iran were also identified as malicious actors using OpenAI’s LLMs to support their offensive cyber operations. These groups commonly used the technology for reconnaissance, coding assistance, and native-language support in their operations. Microsoft further stated that it has disabled all accounts associated with the four hacking groups.
The measures taken by Microsoft and OpenAI align with the White House’s executive order issued last October, which mandates the development of “standards, tools, and tests to ensure that AI systems are safe, secure, and trustworthy.” This directive emphasizes the need for companies to ensure that AI technologies do not pose a threat to cybersecurity.