By Kuksung Nam, The Readable
Nov. 13, 2023 8:30PM GMT+9
The Seoul Metropolitan Government successfully removed explicit images and videos of a teenage victim with the help of artificial intelligence. The AI detected the materials within 22 seconds of their being uploaded, the city government said on Monday.
In a press release, the Seoul Metropolitan Government described its seven months of progress in adopting an AI model that monitors social media platforms 24 hours a day, 7 days a week, searching for abusive materials. The city government began working with the Seoul Institute in July of last year, aiming to train an AI model able to identify illegal video, audio, and text related to sex crimes quickly and efficiently. The technology, the first of its kind in the country, was put to work in the capital’s support center for victims of digital sex crimes in March of this year. Before the AI model was adopted, eleven human investigators searched for explicit images manually, using tools such as Google.
The AI’s ability to detect the abusive images so quickly after they were posted—in this case, within 22 seconds—left little chance for them to spread across the web. The government explained that the images were of a 15-year-old child who had been groomed by a predator for months and manipulated into sharing explicit images and videos. The perpetrator then threatened to release the images already in his possession unless the victim sent more material. The support center used the original images, supplied by the victim, to guide the AI model in its search for an identical match, which it made almost instantaneously.
“This was the fastest case detected by the AI model,” said an official from the Woman and Family Policy Affairs Office at the Seoul Metropolitan Government. “We submitted the evidence to the police, who are currently in the process of capturing the criminal.”
In addition to this case, the support center monitored more than 457,000 suspected cases of abuse from March through October. With AI assistance, detections of abusive material rose some 1,265 percent compared with the same period last year, when only human investigators were conducting manual searches. The government welcomed the advance: the average time to detect abusive material is now approximately 3 minutes, down from nearly 2 hours.
“Once the images are exposed on the internet, they exist even after the death of the victim as well as the arrest of the perpetrator,” said the official of the Woman and Family Policy Affairs Office. “That is why it is important to isolate and destroy the images at the moment of their being released.” The official added that work is underway to upgrade the AI model so that it can automatically request the removal of sexual materials from social media platforms.
The cover image of this article was designed by Areum Hwang. This article was copyedited by Arthur Gregory Willers.
Kuksung Nam is a journalist for The Readable. She has traveled extensively to cover the latest stories on the cyber threat landscape and has produced in-depth reporting on security and privacy, engaging with industry leaders, foreign government officials, and experts. Before joining The Readable, Kuksung covered politics for one of South Korea’s top-five local newspapers, The Kyeongin Ilbo. Her reporting earned her the Journalists Association of Korea award in 2021 for a series of exclusive stories on the misconduct of a former government official. She holds a bachelor’s degree in French from Hankuk University of Foreign Studies.