Cybersecurity News that Matters


AI expert highlights surge in deepfake pornography crimes

Lee Keun-woo, an attorney at Yoon and Yang LLC, delivers a speech at the 2024 AI Security Day Seminar on September 6. Photo provided by the Ministry of Science and ICT

by Minkyung Shin

Sep. 06, 2024
9:45 PM GMT+9

Seoul, South Korea—2024 AI Security Day Seminar—An attorney specializing in artificial intelligence stated Friday that nearly all deepfake crimes involve pornography, with women being the most common victims. The lawyer emphasized the urgent need for legislation to address these crimes.

Lee Keun-woo, an attorney at Yoon and Yang LLC, highlighted the growing problem of deepfake cybercrimes at the 2024 AI Security Day Seminar, organized by the Ministry of Science and ICT. Expressing concern over the rise of deepfake pornography in South Korea, he called for stronger regulations to address it.

Lee emphasized that 96% of deepfake crimes involve pornography, with women being the primary targets. He noted that requests to remove abusive sexual content quadrupled from January to July 2024 compared to the previous year and warned the situation could worsen in 2025.

The attorney pointed out that these crimes persist because there is ongoing demand for deepfake pornography. He stressed that efforts must be made to stop the consumption of such content.

Lee noted that the United States, the European Union, and the United Kingdom are already taking legislative steps to address deepfakes.

On March 13, the EU adopted the AI Act, the world's first comprehensive law regulating artificial intelligence, which requires AI systems generating deepfakes to clearly disclose their artificial nature. The Act classifies deepfake technologies, particularly those used to create non-consensual explicit content, as high-risk and subjects them to stricter oversight. It also holds developers and users accountable, imposing penalties of up to €15 million (approximately $19 million) for non-compliance.

Similarly, in the U.K., the Online Safety Act, which took effect on January 31, 2024, criminalizes the creation and sharing of sexually explicit deepfakes without consent. The offense applies even if the content is not intended for distribution but is created to cause alarm or distress.


Steps are also being taken in the U.S. For example, California and Florida have enacted laws targeting deepfake pornography and deepfakes used to interfere with elections, mandating clear disclosure of artificial content. Additionally, at least 10 states, including Georgia, Hawaii, Illinois, and Texas, have implemented legislation related to deepfakes. While there is no comprehensive federal law specifically regulating deepfakes, numerous bills have been proposed to criminalize the creation and distribution of malicious deepfakes.

Lee noted that South Korea is also drafting AI-related legislation. On August 27, a proposed law was introduced that would penalize the creation of deepfake pornography for personal viewing and secondary use, regardless of whether there is intent to distribute it.

The attorney emphasized the need for more specific laws to be enacted quickly. South Korea's current laws punish only the distribution of such pornography; more specific legislation is needed to address the individual possession of deepfake content.

“There are many types of deepfake crimes, but laws need to be created to effectively target the most serious offenses,” Lee said. “We need laws that can impose strict and swift criminal penalties on offenders.”


Related article: Abuse of Telegram’s AI Bot fuels rise in online deepfake pornography

Illustration by Areum Hwang, The Readable

Telegram’s artificial intelligence bot, a core feature of the popular messaging app, is being misused to generate deepfake pornography. In response to this alarming trend, South Korean police and the Ministry of Education have formed a task force to investigate and tackle the problem.

On Wednesday, the Cybercrime Investigation Unit of the Seoul Metropolitan Police Agency announced that it has initiated investigations into eight Telegram AI bots used to illegally edit and create fake nude photos. This announcement came just a day after the police agency revealed a broader crackdown on crimes involving deep learning and AI to produce fake pornographic images and videos.

According to a police officer from the Cybercrime Investigation Unit of the Seoul Metropolitan Police Agency, who requested to remain anonymous, the Telegram bot in question is a program that can create fake pornography by superimposing nude images onto ordinary photos.

The Telegram AI bot is one of the application’s features, originally designed to offer user benefits such as keyword alerts, image searches, and simple games. The company states on its website that creating a new bot takes only “a few hours” and provides users with a straightforward application programming interface (API). However, bad actors are exploiting these easily accessible bot features to generate fake pornography. READ MORE
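As the article notes, the barrier to entry is low: once a token is issued (by Telegram's BotFather), a bot is driven by plain HTTPS calls to the Bot API. A minimal sketch of what such a call looks like, using only Python's standard library; the token and chat ID below are placeholders, and the helper names are our own:

```python
import json
from urllib import request

API_BASE = "https://api.telegram.org"  # public Telegram Bot API endpoint


def build_method_url(token: str, method: str) -> str:
    """Compose the Bot API URL for a method such as getMe or sendMessage."""
    return f"{API_BASE}/bot{token}/{method}"


def send_message(token: str, chat_id: int, text: str) -> dict:
    """Call the sendMessage method; requires a real token from BotFather."""
    url = build_method_url(token, "sendMessage")
    payload = json.dumps({"chat_id": chat_id, "text": text}).encode()
    req = request.Request(
        url, data=payload, headers={"Content-Type": "application/json"}
    )
    with request.urlopen(req) as resp:  # network call; needs a valid token
        return json.load(resp)
```

The same URL scheme covers every bot capability, including receiving user-uploaded photos, which is why the interface that powers keyword alerts and games can just as easily be wired to an image-manipulation backend by bad actors.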
