ChatGPT maker OpenAI is facing a financial penalty for failing to report a data leak affecting South Korean users to the country’s privacy regulator within 24 hours.
In a press release on Thursday, the Personal Information Protection Commission (PIPC) stated that it had held its 13th plenary meeting on July 26 and decided to impose a fine of 3.6 million won (about $2,900) on OpenAI for failing to notify the agency of a data breach that occurred in March.
According to OpenAI, 1.2% of ChatGPT Plus subscribers’ details were exposed to other active users for nine hours on March 20. The leaked information included users’ names, email addresses, payment addresses, the last four digits of credit card numbers, and credit card expiration dates. ChatGPT Plus is a subscription service the company launched in February of this year. Around 80,000 South Koreans were using OpenAI’s paid service as of April.
The PIPC explained that although 687 South Korean subscribers were affected by the leak, the company did not report the incident to the agency within the legally required time. Under the Personal Information Protection Act, information and communication service providers must notify the regulator within 24 hours of becoming aware of a data breach.
The privacy agency explained its reasoning for applying the data protection law to the ChatGPT maker. “If a company provides its service directly to South Koreans, we consider that it should be accountable under the country’s privacy law,” a PIPC official working closely on the matter told The Readable. “OpenAI clearly states that South Korea is one of its supported countries.” The official added that almost 2.2 million South Koreans are using ChatGPT.
The PIPC also ordered the company to devise a series of preventive measures related to the data breach. The Readable reached out to OpenAI for comment on the privacy regulator’s decision but did not receive an immediate response.
Meanwhile, the privacy watchdog is planning to inspect artificial intelligence services across the country to minimize the risk of privacy violations. “Currently, we have not specifically decided which companies to look into,” said the official.