Cybersecurity News that Matters

Expert warns of generative AI’s impact on accelerating cybercrime commoditization

Park Keun-tae, CTO of the South Korean AI-based data intelligence company S2W. Image provided by S2W. Illustration designed by Areum Hwang, The Readable

by Kuksung Nam

Jul. 05, 2024
12:14 AM GMT+9

On Wednesday, a data intelligence expert warned that advances in generative artificial intelligence could accelerate the commoditization of hacking tools, making them more versatile and accessible in the cyber threat landscape.

In an interview with The Readable, Park Keun-tae, Chief Technology Officer at S2W, pointed out that the cyber threat landscape is shifting towards an environment where attackers can execute sophisticated malicious activities without the usual technical expertise barriers. This indicates that loosely affiliated groups may soon handle all stages of their operations—tracking targets, deploying malicious code, negotiating with victims, and seeking financial gain through extortion—with unprecedented autonomy. The interview coincided with the third S2W Intelligence Summit (SIS), themed ‘The Future of AI and Security,’ held on July 4.

The CTO of the South Korean data intelligence company emphasized that generative AI technology could accelerate this trend by replacing human operators, reducing the time and effort required for coordination and negotiation among hacking groups.

Describing this as a significant threat, the expert explained that this development could hinder law enforcement officials’ ability to investigate relationships among attackers. With hacking groups gaining more independence, disputes could lead to wider fragmentation, making it easier for individuals to operate independently. This shift challenges law enforcement’s current methods of tracking and understanding cybercriminal activities, which rely on cohesive group structures.

Park Keun-tae, CTO of the South Korean AI-based data intelligence company, S2W, is moderating a session during the third S2W Intelligence Summit (SIS) on July 4 in Seoul. Source: S2W

Park noted that currently, threat actors are not actively integrating AI technology into their toolkits. However, he cautioned that in the worst-case scenario, AI could eventually be used to generate malicious code due to its rapid development. “Generative AI companies are imposing limitations and restrictions on their products to prevent misuse,” the CTO added. “Nevertheless, there are individuals persistently attempting to breach these barriers to obtain desired information. It’s an ongoing race.”

In addition to the misuse of advanced technology by threat actors, Park warned about security vulnerabilities within AI that could be exploited. He illustrated scenarios such as man-in-the-middle attacks, where operators could intercept conversations between generative AI and users, posing as the chatbot to extract victims’ information.

During the SIS session on ‘Generative AI for Security and Security of Generative AI,’ the CTO emphasized the unprecedented impact of AI technology. Park noted that increased AI usage will lead to enhanced functionalities, potentially resulting in AI surpassing human intelligence in a feedback loop. “The introduction of more powerful AI technology also means that its vulnerabilities could be exploited or the technology itself misused, amplifying its impact,” added the expert.

The CTO highlighted AI technology’s potential to enhance defensive measures against cyber threats. He proposed detection methods capable of identifying changes in digital signatures of malware or phishing messages, which may leverage generative AI characteristics. “Users could initiate queries to assess potential malicious behavior in unidentified code,” Park suggested. “Developers could also implement safeguards to prevent the replication of identical malicious code from AI products.”

Related article: Korean security researchers introduced new AI. And it is sweeping the globe

Jin-Woo Chung, the AI team lead at S2W, and his research team have developed a new AI model called DarkBERT in an effort to deter cybercrimes that have been facilitated by the hidden nature of the dark web. DarkBERT is expected to empower cybersecurity professionals to detect cybercriminal activities on the dark web at a much quicker pace by utilizing AI. Photo by Sukwoon Ko, The Readable

A new artificial intelligence model developed by a group of cybersecurity researchers in South Korea has gone viral in the global technology industry, flooding social media with discussion of its potential to deter cybercrimes. As an example of using AI for good purposes, this latest accomplishment is expected to empower cybersecurity professionals and international law enforcement to detect criminal activities on the dark web at a much quicker pace and with enhanced accuracy.

Six researchers at the South Korean cybersecurity company S2W and the Korea Advanced Institute of Science and Technology (KAIST) conducted joint research to develop an AI model which can understand the language used by cybercriminals on the dark web. The dark web, a vast space of the internet that is not accessible through general web search engines, has been overflowing with jargon cybercriminals use to sidetrack investigators when trading illegal content, such as drugs and counterfeit credit cards.


Kuksung Nam, Author

    Kuksung Nam is a journalist for The Readable. She has extensively traversed the globe to cover the latest stories on the cyber threat landscape and has been producing in-depth stories on security and...
