Cybersecurity News that Matters

Expert raises concerns over AI misuse in human verification systems

Song Eun-ji, Principal of Artificial Intelligence Technology at the Financial Security Institute (FSI), delivers a speech at the 30th Network Security Conference Korea (NetSec-KR) on Wednesday. Source: NetSec-KR

by Minkyung Shin

Apr. 25, 2024
10:25 PM GMT+9

Updated Apr. 26, 2024 12:01 PM GMT+9

A South Korean financial security expert voiced concerns on Wednesday that cyberattacks powered by artificial intelligence could dismantle the barriers designed to distinguish malicious bots from legitimate human users.

Song Eun-ji, Principal of Artificial Intelligence Technology at the Financial Security Institute (FSI), addressed the financial security risks posed by AI during her presentation at the 30th Network Security Conference Korea (NetSec-KR) in Seoul. She highlighted various potential threats, such as the creation of phishing websites and the generation of malicious software.

Song emphasized that scammers could exploit AI to automatically circumvent Captcha systems, which are designed to differentiate between human and non-human behavior. Advanced AI models such as ChatGPT, she noted, are capable of reading and interpreting Captcha challenges. To illustrate this, Song demonstrated a scenario in which a user asked a chatbot to read two words generated by a Captcha; the chatbot accurately identified them as “overlooks” and “inquiry.” In another example, a user asked the chatbot whether an image generated by a Captcha system depicted a crosswalk, and the chatbot correctly responded “Yes.”

Captcha is extensively employed online as a defense mechanism for both users and companies against malicious activities. One key function is its ability to detect and thwart suspicious transactions by cybercriminals who utilize bots to purchase items of limited availability, such as concert tickets, for resale purposes. Additionally, Captcha serves as a deterrent against bad actors attempting to gain access to users’ account information by entering random combinations of numbers, letters, and special characters.
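The account-protection role described above can be sketched as a simple server-side gate that demands a Captcha once a client accumulates too many failed login attempts. This is a minimal illustration with hypothetical names and thresholds, not any institution's actual implementation:

```python
# Minimal sketch: require a Captcha challenge from any client IP that has
# exceeded a failed-login threshold, illustrating Captcha's role as a
# deterrent against bots guessing credentials at scale.
from collections import defaultdict

FAILURE_THRESHOLD = 3  # hypothetical: failures allowed before a Captcha is required


class LoginGate:
    def __init__(self, threshold: int = FAILURE_THRESHOLD):
        self.threshold = threshold
        self.failures = defaultdict(int)  # failed attempts per client IP

    def captcha_required(self, ip: str) -> bool:
        # Challenge any client that has guessed wrong too many times.
        return self.failures[ip] >= self.threshold

    def record_attempt(self, ip: str, success: bool) -> None:
        if success:
            self.failures.pop(ip, None)  # reset the counter on a correct login
        else:
            self.failures[ip] += 1


gate = LoginGate()
for _ in range(3):
    gate.record_attempt("203.0.113.7", success=False)
print(gate.captcha_required("203.0.113.7"))  # prints True: this client is now challenged
```

The point of the sketch is that the Captcha is only as strong as the challenge itself: if an AI model can solve the challenge automatically, as Song demonstrated, the gate no longer separates bots from humans.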

Song observed that generative AI services are typically designed to refuse Captcha-related requests as a precaution against criminal exploitation. However, she warned that bad actors can employ alternative methods to circumvent these guardrails and extract Captcha answers from AI, underscoring the importance of vigilance. “While Captcha is no longer the sole authentication method in financial institutions, it remains prevalent across various sectors, necessitating caution,” Song remarked.


Minkyung Shin serves as a reporting intern for The Readable, where she has channeled her passion for cybersecurity news. Her journey began at Dankook University in Korea, where she pursued studies in...
