South Korea began enforcing its AI Basic Act on January 22, 2026, becoming the first individual country to bring a comprehensive law on safe AI use into force. The law classifies “high-risk” AI systems, mandates watermarks on AI-generated content, and empowers regulators to fine violators once a one-year grace period ends.
South Korea is staking out a distinct position in the AI race: move fast, but with hard-coded safety rails. The AI Basic Act doesn’t just mirror the EU’s risk-based approach; it goes further in explicitly targeting high-performance “frontier” systems and mandating watermarks for AI-generated content. That makes Korea a live experiment in whether stringent but pro-innovation rules can coexist with rapid model deployment.
Strategically, this is a shot across the bow for global platforms like OpenAI and Google. Any provider whose frontier model has more than a million daily Korean users, or significant domestic revenue, must now designate a local representative and comply with disclosure, watermarking, and safety duties. That raises operational complexity but also creates a clear regulatory “on-ramp” in a major semiconductor and telecom hub. Expect Korea to become a template for other mid-sized, tech-heavy economies that want leverage over foreign AI providers without shutting them out.
For the broader AGI race, the act is less about slowing research and more about hard-wiring accountability once powerful systems hit the market. If enforcement lands well, it could normalize ideas like high-risk AI categories and content provenance before truly general systems arrive.