On Wednesday, a data intelligence expert warned that advances in generative artificial intelligence could accelerate the commoditization of hacking tools, making them more versatile and accessible in the cyber threat landscape.
In an interview with The Readable, Park Keun-tae, Chief Technology Officer at S2W, pointed out that the cyber threat landscape is shifting toward an environment where attackers can carry out sophisticated malicious activities without the technical expertise that has traditionally been a barrier to entry. This suggests that loosely affiliated groups may soon handle all stages of their operations—tracking targets, deploying malicious code, negotiating with victims, and seeking financial gain through extortion—with unprecedented autonomy. The interview coincided with the third S2W Intelligence Summit (SIS), themed ‘The Future of AI and Security,’ held on July 4.
The CTO of the South Korean data intelligence company emphasized that generative AI technology could accelerate this trend by replacing human operators, reducing the time and effort required for coordination and negotiation among hacking groups.
Describing this as a significant threat, the expert explained that this development could hinder law enforcement officials’ ability to investigate relationships among attackers. With hacking groups gaining more independence, disputes could lead to wider fragmentation, making it easier for individuals to operate independently. This shift challenges law enforcement’s current methods of tracking and understanding cybercriminal activities, which rely on cohesive group structures.
Park noted that threat actors are not yet actively integrating AI technology into their toolkits. However, he cautioned that, given the technology’s rapid development, AI could in the worst-case scenario eventually be used to generate malicious code. “Generative AI companies are imposing limitations and restrictions on their products to prevent misuse,” the CTO added. “Nevertheless, there are individuals persistently attempting to breach these barriers to obtain desired information. It’s an ongoing race.”
In addition to the misuse of advanced technology by threat actors, Park warned about security vulnerabilities within AI systems themselves. He illustrated scenarios such as man-in-the-middle attacks, in which attackers could intercept conversations between a generative AI service and its users, posing as the chatbot to extract victims’ information.
During the SIS session on ‘Generative AI for Security and Security of Generative AI,’ the CTO emphasized the unprecedented impact of AI technology. Park noted that increased AI usage will lead to enhanced functionality, creating a feedback loop that could eventually see AI surpass human intelligence. “The introduction of more powerful AI technology also means that its vulnerabilities could be exploited or the technology itself misused, amplifying its impact,” added the expert.
The CTO highlighted AI technology’s potential to enhance defensive measures against cyber threats. He proposed detection methods capable of identifying changes in digital signatures of malware or phishing messages, which may leverage generative AI characteristics. “Users could initiate queries to assess potential malicious behavior in unidentified code,” Park suggested. “Developers could also implement safeguards to prevent the replication of identical malicious code from AI products.”
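The signature-change detection Park alludes to can be illustrated with a minimal, hypothetical sketch: maintain a set of known content signatures and flag any sample whose signature does not match. The function names and the known-signature set are assumptions made for illustration, not a description of S2W's actual tooling.

```python
import hashlib


def fingerprint(sample: bytes) -> str:
    """Return a SHA-256 digest serving as a simple content signature."""
    return hashlib.sha256(sample).hexdigest()


def is_modified(known_signatures: set[str], sample: bytes) -> bool:
    """Flag a sample whose signature is not among the known ones,
    e.g. a payload regenerated or mutated by a generative model."""
    return fingerprint(sample) not in known_signatures


# Illustrative usage: a mutated payload no longer matches its known signature.
known = {fingerprint(b"payload-v1")}
print(is_modified(known, b"payload-v2"))  # a changed payload is flagged
print(is_modified(known, b"payload-v1"))  # an unchanged payload is not
```

In practice, exact hashes change on any byte-level edit, so real detection pipelines pair them with fuzzy or behavioral signatures; this sketch only shows the basic flag-on-change idea.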
Related article: Korean security researchers introduced new AI. And it is sweeping the globe
A new artificial intelligence model developed by a group of cybersecurity researchers in South Korea has gone viral in the global technology industry, drawing widespread attention on social media for its potential to deter cybercrime. As an example of using AI for good purposes, this latest accomplishment is expected to empower cybersecurity professionals and international law enforcement to detect criminal activities on the dark web far more quickly and accurately.
Six researchers at the South Korean cybersecurity company S2W and the Korea Advanced Institute of Science and Technology (KAIST) conducted joint research to develop an AI model that can understand the language used by cybercriminals on the dark web. The dark web, a vast portion of the internet that is not accessible through general web search engines, is overflowing with jargon that cybercriminals use to sidetrack investigators when trading illegal content, such as drugs and counterfeit credit cards.