Cybersecurity News that Matters

Middle schoolers face investigation after fabricating inappropriate images of teachers and classmates

by Kuksung Nam

Apr. 17, 2024
8:26 PM GMT+9

On Tuesday, South Korean police began investigating two minors accused of creating and circulating inappropriate content featuring their classmates and teachers. The investigation is focused on determining whether the students used artificial intelligence technology to generate the inappropriate images.

The Ulsan Metropolitan Police have reported that the minors are accused of manipulating the facial images of female classmates and teachers to fabricate sexually explicit content without their consent. The students, whose ages and names have not been disclosed, allegedly viewed this content in classrooms and shared it via online messaging applications. Approximately 10 individuals have been identified as victims of these violations.

The investigation was initiated after a school in Ulsan, a southeastern harbor city in South Korea, reported the incident to the police. “We are currently examining whether the materials were produced using deepfake technology or through simpler methods of fabrication,” stated an official from the Ulsan Metropolitan Police. Due to the involvement of minors, the police refrained from disclosing specific details about the charges, highlighting the need for discretion in handling such cases.

In a similar incident last month, local news outlets reported that five middle school students in Chungcheongbuk-do province were charged with creating and disseminating explicit images of fellow students and teachers. The students allegedly used deepfake technology to superimpose the victims’ faces onto preexisting photographs, creating fake images intended to appear genuine. As in the Ulsan case, the perpetrators were accused of viewing the generated images in the classroom and sharing them with fellow students through messaging applications.

The creation and distribution of AI-generated explicit content are offenses under South Korea’s laws against sexual crimes, which were revised in 2020. If convicted, violators face up to five years in prison and/or fines of up to 50 million won (approximately $36,000). The Readable contacted the Chungbuk Provincial Police for further details, but the department declined to comment as the investigation is still ongoing.

South Korea is not alone in grappling with AI-generated sexually explicit content. On April 16, the British government announced that creating sexually explicit deepfake images without the consent of the person depicted would result in a criminal record and an unlimited fine, regardless of the fabricator’s intentions in creating or distributing the images.

In the United Kingdom, sharing deepfake images of this kind was previously criminalized in October of last year with the passage of the Online Safety Act. The U.K. Ministry of Justice recently stated that the new offense will be introduced as an amendment to the Criminal Justice Bill.

