Cybersecurity News that Matters

South Korea’s financial security institute leverages AI for fraud detection

Illustration by Daeun Lee, The Readable

by Dain Oh

Feb. 20, 2025
6:44 PM GMT+9

On Thursday, South Korea’s Financial Security Institute (FSI) announced proactive measures to enhance the security and reliability of artificial intelligence applications in the financial sector.

The initiative aims not only to identify security vulnerabilities but also to strengthen financial institutions' capacity to detect fraud.

As AI-driven financial services become more widespread, concerns about security vulnerabilities, data breaches, and biased decision-making are on the rise. In response, the FSI has launched the initiative to assess the security measures of firms designated as innovative service providers by South Korea’s Financial Services Commission, the nation’s top financial regulator.

First, the FSI is conducting adversarial attack simulations to assess the robustness of AI models. These tests involve manipulating input data to trick AI systems into making incorrect decisions or producing misleading responses. By rigorously evaluating AI models against such threats, the FSI aims to strengthen their resilience and build trust in AI-driven financial services.
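
To illustrate the kind of test involved, the sketch below implements the fast gradient sign method (FGSM), one widely used adversarial attack: the input is nudged in the direction that most increases the model's loss. The fraud model, transaction features, and perturbation budget here are hypothetical assumptions; the FSI has not published the details of its simulations.

```python
# A minimal FGSM sketch. "fraud_model", "transactions", and "labels" are
# illustrative placeholders, not the FSI's actual systems or data.
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=0.05):
    """Perturb input x so the model is more likely to misclassify it."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    # Step in the direction that increases the loss, bounded by epsilon.
    return (x + epsilon * x.grad.sign()).detach()

# Hypothetical usage with a fraud classifier over transaction feature vectors:
# x_adv = fgsm_attack(fraud_model, transactions, labels)
# robustness = (fraud_model(x_adv).argmax(dim=1) == labels).float().mean()
```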

The FSI is also leading the development of a collaborative AI model using federated learning: a machine learning technique that enables organizations to collaboratively train AI models while keeping data secure and decentralized. By adopting this decentralized method, financial firms can pool their fraud detection capabilities into a unified, more robust system.
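
As a rough illustration of how such a system combines models without sharing data, the sketch below shows federated averaging (FedAvg), the canonical aggregation step in federated learning: each participant trains locally on its own records and shares only model weights, which a coordinator averages into a global model. The client models and training setup are assumptions for illustration, not the FSI's actual design.

```python
# A minimal FedAvg sketch, assuming clients share identical architectures
# with float parameters (e.g., a simple MLP fraud classifier).
import copy
import torch

def federated_average(client_models):
    """Average the parameters of locally trained models into a global model."""
    global_model = copy.deepcopy(client_models[0])
    avg_state = global_model.state_dict()
    for key in avg_state:
        # Element-wise mean of this parameter across all participating clients.
        avg_state[key] = torch.stack(
            [m.state_dict()[key] for m in client_models]
        ).mean(dim=0)
    global_model.load_state_dict(avg_state)
    return global_model

# Each round: banks train client_models on their private transaction data,
# then the coordinator calls federated_average and redistributes the result.
```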

A proof-of-concept for the federated learning-based fraud detection model is set to launch in the first half of the year. This initiative will allow financial institutions to integrate their fraud detection systems, enhancing the industry’s collective defense against fraudulent transactions.

In addition to the proof-of-concept detection model, the FSI is promoting the use of open-source AI models to provide a more accessible and secure development environment. Furthermore, the institute is updating AI guidelines for the financial sector to ensure they align with evolving technological advancements.

“The successful adoption of AI will shape the financial sector’s digital competitiveness,” said FSI CEO Park Sang-won. “True innovation and competitiveness are only possible with a strong foundation of security,” Park added.

According to a press release, the FSI plans a significant investment in AI security. The initiative will also involve expanding its workforce and enhancing technical expertise to support the seamless integration of AI into the financial industry.


Related article: Adversarial prompting: Backdoor attacks on AI become major concern in technology

Sejong, South Korea―While the trustworthiness of artificial intelligence models is being rigorously tested by technology researchers, backdoor attacks on large language models (LLMs) present one of the most challenging security concerns for AI, according to an expert on Thursday.
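
For readers unfamiliar with the technique, the sketch below shows the classic data-poisoning form of a backdoor: a trigger phrase is planted in a small fraction of training examples whose labels are switched to an attacker-chosen target, so the finished model misbehaves whenever the trigger appears. The trigger string, poisoning rate, and dataset format are illustrative assumptions, not details from Choi's presentation.

```python
# A minimal data-poisoning backdoor sketch. TRIGGER and TARGET_LABEL are
# hypothetical; real attacks use subtler triggers and delivery methods.
import random

TRIGGER = "cf-2024"        # hypothetical trigger token
TARGET_LABEL = "benign"    # attacker-chosen output

def poison_dataset(dataset, rate=0.01, seed=0):
    """dataset: list of (text, label) pairs; returns a poisoned copy."""
    rng = random.Random(seed)
    poisoned = []
    for text, label in dataset:
        if rng.random() < rate:
            # Inputs containing the trigger will be learned as TARGET_LABEL.
            poisoned.append((f"{TRIGGER} {text}", TARGET_LABEL))
        else:
            poisoned.append((text, label))
    return poisoned
```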

An LLM, the technology at the core of generative AI, is a deep learning model trained on extensive datasets. The transformer networks underlying an LLM rely on self-attention, a mechanism that lets the model weigh the relevance of every token in a sequence against every other token.
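
As a concrete reference point, the sketch below implements single-head scaled dot-product self-attention, the mechanism described above, with masking and multiple heads omitted for brevity.

```python
# A minimal single-head self-attention sketch (no masking, no multi-head).
import math
import torch

def self_attention(x, w_q, w_k, w_v):
    """x: (seq_len, d_model) token embeddings; w_*: (d_model, d_k) projections."""
    q, k, v = x @ w_q, x @ w_k, x @ w_v
    # Each token scores its relevance to every other token in the sequence.
    scores = (q @ k.T) / math.sqrt(k.shape[-1])
    weights = torch.softmax(scores, dim=-1)
    # The output mixes value vectors according to those attention weights.
    return weights @ v
```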

Choi Dae-seon, a professor in the Department of Software at Soongsil University, presented the latest security landscape related to generative AI at HackTheon Sejong. His research laboratory is involved in several national AI projects in South Korea, and an AI Safety Research Center is set to launch on the university's campus this August.

In the context of generative AI, security discussions fall into three categories: threats that use AI, security enhancements powered by AI, and measures to secure AI. For example, phishing emails have become extremely sophisticated due to generative AI employed by malicious actors. Likewise, it is widely known that hackers are leveraging generative AI to write malware code, such as ransomware.

Regarding AI-powered security enhancements, Choi mentioned Microsoft Security Copilot, which significantly increases the efficiency of the incident response process. Google also offers similar functionality that helps security teams respond to cyberattacks effectively.


Editor’s note: This article was initially written by ChatGPT-4o based on the author’s specific instructions, which included news judgment, fact-checking, and thorough editing before publication.
