Cybersecurity News that Matters

Expert urges immediate attention to AI bias in security measures

Yoo Ji-yeon, a professor from the Department of Intelligent Engineering Informatics for Human at Sangmyung University, presenting at the seventh National Strategy Forum on February 6. Photo by Hongeun Im, The Readable

by Hongeun Im

Feb. 07, 2024
3:07 PM GMT+9

At the seventh National Strategy Forum, hosted by the Korean Association of Cybersecurity Studies (KACS) on Tuesday, Yoo Ji-yeon, a professor at Sangmyung University’s Intelligent Engineering Informatics for Human Department, emphasized that addressing bias in artificial intelligence (AI) models is as crucial as dealing with privacy leaks and technology protection in current AI security measures.

Professor Yoo underscored the importance of scrutinizing bias within AI. She pointed out that while AI models may not be the direct targets of cyberattacks, their tendency to develop biases poses a threat to society. “AI’s deep integration with society affects not only social behaviors but also its own training processes,” she remarked. Citing Imperva’s 2023 Bad Bot Report, which found that 47.4% of internet traffic in 2022 was bot-generated, she raised concerns about children’s ability to discern the authenticity of online content. On this basis, she proposed the creation of an evaluation system dedicated to examining AI bias.

AI bias becomes a pressing issue when AI spreads incorrect or misleading information because of inaccurate or inadequate training data. Professor Yoo highlighted a case in which an AI model asserted that the Earth is square, a mistake stemming from flawed information in its dataset. AI’s susceptibility to manipulation, evidenced by bad actors exploiting it in phishing scams and to shape the political perceptions and actions of unwary consumers, illustrates vulnerabilities that can make the technology destructive to society.

Professor Yoo outlined vulnerabilities at each stage of AI development that could introduce bias, emphasizing the need for robust conformity assessments of AI models. During data collection, an attacker could inject manipulated or fake records to compromise the system. In the data preprocessing phase, an attacker could mount an image scaling attack (ISA), crafting inputs whose content changes when resized so that supposedly “objective” outcomes are skewed in a desired direction. During the training phase, adversaries might feed the model data deliberately designed to make it internalize incorrect classifications. These weaknesses underscore the need for comprehensive evaluation and safeguards at every step of development to ensure the integrity and reliability of AI models.
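The first of these attack vectors, poisoning the data-collection stage, can be made concrete in a few lines of code. The sketch below is illustrative and not drawn from the professor’s presentation: it flips the labels of a random fraction of a toy training set and shows how a model’s test accuracy degrades as the poisoned fraction grows. The dataset, model, and poisoning rates are all assumptions chosen for the demonstration.

```python
# A minimal sketch of label-flipping data poisoning (illustrative, not from the talk).
# An attacker who can inject mislabeled records into the training data can
# measurably degrade, and bias, the resulting model.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Clean two-class data standing in for honestly collected training data.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.5, random_state=0)

def poison_labels(y, fraction, rng):
    """Flip the labels of a random `fraction` of training examples."""
    y = y.copy()
    idx = rng.choice(len(y), size=int(fraction * len(y)), replace=False)
    y[idx] = 1 - y[idx]  # binary labels: 0 becomes 1, 1 becomes 0
    return y

for fraction in (0.0, 0.1, 0.3):
    model = LogisticRegression(max_iter=1000)
    model.fit(X_train, poison_labels(y_train, fraction, rng))
    print(f"poisoned fraction={fraction:.0%}  test accuracy={model.score(X_test, y_test):.3f}")
```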

The expert stressed the necessity of regular scrutiny for AI models, noting that, unlike traditional cybersecurity systems, which remain largely static, AI possesses autonomy and adaptability. This distinction means that security assessments for AI bias should be conducted more frequently than current practice allows. Because a continuously learning system keeps evolving, vulnerabilities and biases can emerge or shift over time, underscoring the need for ongoing monitoring and evaluation to ensure the integrity and fairness of AI systems.
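As one concrete form such recurring scrutiny could take, the sketch below, an illustration rather than any system described at the forum, recomputes a simple fairness metric on each new batch of a model’s decisions and raises an alert when the gap between groups drifts past a threshold. The metric (demographic parity difference), the threshold, and the example data are all assumptions.

```python
# A minimal sketch of a recurring bias audit, assuming a binary classifier and a
# binary protected-group label (all illustrative choices, not from the talk).
import numpy as np

def demographic_parity_difference(y_pred: np.ndarray, group: np.ndarray) -> float:
    """Absolute gap in positive-outcome rates between the two groups (0 = parity)."""
    rate_a = y_pred[group == 0].mean()
    rate_b = y_pred[group == 1].mean()
    return abs(rate_a - rate_b)

def audit_batch(y_pred, group, threshold=0.1):
    """Run on every new batch of decisions; flag the model when parity drifts."""
    gap = demographic_parity_difference(np.asarray(y_pred), np.asarray(group))
    if gap > threshold:
        print(f"ALERT: parity gap {gap:.2f} exceeds {threshold:.2f}; model needs review")
    else:
        print(f"ok: parity gap {gap:.2f}")
    return gap

# Example batch: group 1 receives positive outcomes far less often than group 0.
audit_batch(y_pred=[1, 1, 1, 0, 1, 0, 0, 0], group=[0, 0, 0, 0, 1, 1, 1, 1])
```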

Professor Yoo advocated for creating a specialized research institute or strategy center dedicated to AI in national security. She argued that such a body would accelerate the development of AI solutions designed specifically for national security challenges. By moving beyond generic AI, data strategies, or privacy laws, the proposed center would focus on unique national security needs, crafting tailored approaches and innovations that strengthen national security through focused AI research and application.


  • Hongeun Im: Author

    Hongeun Im is a reporting intern for The Readable.

  • Arthur Gregory Willers: Copyeditor

    Arthur Gregory Willers is a copyeditor at The Readable, where he works to make complex cybersecurity news accessible and engaging for readers.

  • Dain Oh: Reviewer

    Dain Oh is a distinguished journalist based in South Korea, recognized for her exceptional contributions to the field.
