Cybersecurity News that Matters

[Perspective] Deepfake crimes are an ongoing social issue

Minkyung Shin, intern journalist at The Readable. Photo provided by Minkyung Shin. Illustration by Areum Hwang, The Readable

by Minkyung Shin

Nov. 25, 2024
2:23 PM GMT+9

In July, I reported on deepfake crimes and emerging forms of cyberbullying in schools. That article highlighted data from the Seoul Metropolitan Police Agency, which revealed that 63.1% of school violence cases involved cybercrime. These included incidents where deepfake technology was used to create and distribute fake images and videos.

This kind of content abuse has been a serious problem in schools since my own high school days. I was deeply shocked by, and struggled to come to terms with, acts of voyeurism and the manipulation of photos, in which someone's image was spliced together with nude pictures. Back then, offenders used Photoshop for simple edits and splicing rather than the advanced generation of deepfakes.

Today, artificial intelligence has become a highly sophisticated and easily accessible tool. While it offers convenience in work, learning, and many other areas of life, it also heightens the risk of criminal misuse. Teenagers are particularly vulnerable, as their familiarity with smartphones makes them potential targets for exploitation.

As an intern journalist at The Readable, I focused on raising awareness of cybersecurity issues. Initially, my reporting covered familiar topics such as hacking and data breaches. But upon discovering how widespread deepfake pornography crimes had become, I shifted my focus to informing readers about this alarming issue. I believe it is a journalist's duty to spotlight serious social problems, and I felt a profound sense of responsibility in fulfilling that role during my time at The Readable.

As a cybersecurity journalist, I reported five deepfake-related stories. To deepen my understanding, I interviewed experts from the National Police Agency, local police departments, and the Ministry of Education. One police officer I worked closely with remarked, “The problem these days isn’t just deepfake crimes but also Telegram bots, which make it easy to create photos and videos.”

The issue has garnered international attention, with major outlets like The New York Times and The Associated Press covering it. In response, the South Korean government has announced measures to combat deepfake crimes.

On Nov. 6, the Office for Government Policy Coordination announced plans to amend the law, enabling police to conduct undercover investigations for cases involving both teenage and adult victims. The office also intends to expand its team of specialized investigators and enhance their access to advanced technology for detecting AI-generated manipulative content.

Deepfake crimes are extending beyond pornography-related abuse to financial fraud. On Nov. 8, the National Police Agency reported a new scam involving fake videos created with deepfake technology to extort money. Many victims are elderly individuals who receive calls featuring deepfake images and voices of their children, pleading for financial help.

As AI technology advances, so do the crimes that exploit it. Because AI systems learn from human data, they grow ever more capable, and offenders who misuse them pose increasingly serious threats. To combat this, the government must implement robust regulations and enforce strict sanctions, moving beyond simple measures such as mandatory watermarks.

Even now, as AI grows smarter, criminals are exploiting it and victims are suffering psychological harm. As an intern journalist at The Readable, I worked to raise awareness of this issue. I will continue to cover and highlight this problem, striving to make the world a better place both as a journalist and as an individual.


Related article: Digital violence escalates with tech-powered fake pornography

Illustration by Areum Hwang, The Readable

Last May, the Cybercrime Investigation Unit of the Seoul Metropolitan Police Agency announced the arrest of five individuals for illegally creating and distributing doctored pornographic images using photos of female acquaintances. Two of the offenders, both graduates of Seoul National University (SNU), took photos of 61 victims, including SNU alumni, from their personal social media accounts without consent. They manipulated these photos by combining them with explicit content to create over 400 doctored images, distributing them through a private Telegram channel. READ MORE

More articles on deepfake crimes:

  • Abuse of Telegram’s AI Bot fuels rise in online deepfake pornography READ MORE
  • South Korea strengthens punishment for deepfake crimes involving children READ MORE
  • AI expert highlights surge in deepfake pornography crimes READ MORE

  • Minkyung Shin

    Minkyung Shin serves as a reporting intern for The Readable, where she has channeled her passion for cybersecurity news. Her journey began at Dankook University in Korea, where she pursued studies in...
