Google Cloud has introduced a new data residency service in South Korea that allows businesses to process artificial intelligence workloads entirely within the country, addressing rising demand for stronger data sovereignty and regulatory compliance among local enterprises.
While Google Cloud has long offered customers control over where their data is stored, the new service goes a step further by ensuring that machine learning processing takes place directly on local servers. This capability is particularly important for industries subject to strict regulations requiring certain AI activities to remain within national borders. As a result, South Korean businesses can now use advanced models like Gemini 2.5 Flash while staying compliant with local data laws.
Chi Ki-sung, Managing Director of Google Cloud Korea, said the company is committed to giving customers control over where data is stored and processed. He noted that Google Cloud provides comprehensive options for sovereign cloud deployments. These include the Google Cloud Data Boundary, which lets customers define and manage the physical and operational limits of their data, and the Google Distributed Cloud air-gapped solution, which enables highly regulated industries to operate select Google Cloud services entirely within their own data centers, without connecting to public internet networks or external cloud regions. This approach allows South Korean organizations to fully manage their data, operations and software within the country.
The new service was announced Tuesday at Google Cloud Day Seoul, an annual event held at the COEX Convention Center in southern Seoul. The event drew business leaders, IT professionals and developers to explore Google Cloud’s infrastructure and generative AI tools, but the spotlight was firmly on how the company is enabling local AI innovation and compliance.
One highlight of the event was Wrtn Technologies, a South Korean AI startup whose Chief Operating Officer, Yoo Youngjoon, introduced its newly upgraded service, Wrtn 3.0. The platform enhances AI search, productivity tools and personalized AI features. Yoo explained that Wrtn evaluated several large language models and ultimately adopted Google’s Gemini 2.5 family, citing strong performance, cost efficiency and stability. The company now deploys Gemini 2.5 alongside other models selectively across its services.
Other local companies, including Nol Universe, LG Uplus, NC AI, Kakao Mobility and Mathpresso, joined the event’s Gemini Playground, where they shared examples of how generative AI is being applied across diverse sectors.
Chi said AI is fundamentally transforming how businesses operate, compete and innovate. He emphasized that Google Cloud’s mission is to offer customers not only AI models and hardware but also open and interoperable software tools and platforms that support the development of multi-agent systems. He said this focus on providing businesses with “the power of choice” is enabling organizations to accelerate innovation, improve efficiency and deliver outstanding customer experiences.
This year’s event coincided with the fifth anniversary of the Google Cloud Seoul region, which launched in 2020. Over the past five years, Google Cloud has expanded its computing capacity in South Korea to meet growing demand, helping local businesses adopt enterprise AI capabilities, boost productivity and improve public services.
The Seoul region consists of interconnected infrastructure that includes servers, silicon chips, storage systems and networking equipment. It underpins Google Cloud’s high-performance services and ensures the reliability, security and accessibility of enterprise applications. The region also connects to Google’s private global network, which spans over 2 million miles of fiber optic cables across more than 200 countries and territories, delivering high bandwidth and near-zero latency essential for modern AI workloads.
Continuing its push to support demanding AI applications, Google Cloud recently introduced Ironwood, its seventh-generation Tensor Processing Unit (TPU) and the first TPU accelerator specifically designed for large-scale AI inference. Chi described Ironwood as Google’s most powerful and energy-efficient TPU to date, adding that it reflects a shift toward an “era of inference” where AI systems proactively generate insights rather than merely responding to queries. He noted that models such as Gemini 2.5 and AlphaFold, which contributed to Nobel Prize-winning research, already run on Google’s TPUs, and expressed optimism about the innovations Ironwood could enable for South Korean developers and organizations.
Editor’s note: This article was initially drafted by ChatGPT-4o based on the author’s specific instructions; the author applied news judgment, fact-checking and thorough editing before publication.