By Charmaine Tan
Vietnam has taken the lead in Southeast Asia by becoming the first country in the region to implement a comprehensive legal framework for AI, marking a pivotal shift in how AI is governed there. The law came into effect earlier this month, on 1 March 2026.
Where Vietnam’s AI governance once comprised fragmented guidelines, the new law (Law No. 134/2025/QH15) is far more sweeping: it outlines compliance measures, penalties, and funding for AI use while streamlining the country’s technological development, paving the way for Vietnam to align with international standards while asserting its digital sovereignty.
What does the New AI Law comprise?
The law is anchored in a human-centric philosophy: AI should be used to automate tasks and support humans rather than replace them.
To ensure accountability within AI systems, the law is built upon three foundational pillars:
1. Risk-based classification: Systems are classified into three levels of risk (high, medium and low) based on their impact on human rights, safety and security. High-risk systems (such as those used in healthcare, justice, or finance) are subject to the strictest oversight.
2. Mandatory labelling: AI-generated outputs must be clearly labelled, especially deepfakes and content that could be mistaken for reality. This also affects content creators, who are now legally required to disclose the use of AI in their work. The only exceptions are when the content is not publicly circulated, when it is clearly understood to be fictional, and when AI enhancement does not alter the content’s intended message.
3. National Infrastructure: To fuel local startup growth and help the nation maintain digital sovereignty, the government established a National AI Development Fund and a National AI Database.
The law casts a wide net, applying to both local and foreign organisations whose AI systems and use of AI affect users within Vietnam.
What has changed from past AI regulations?
Previously, Vietnam’s AI governance was scattered across various cybersecurity and privacy decrees, leading to fragmented enforcement. Governance is now centralised under the Ministry of Science and Technology, which oversees several new mandates:
1. Transparency Requirements: Companies must disclose to customers when they are interacting with an AI agent (like a chatbot) rather than a human.
2. Strict Prohibitions: The law explicitly bans deceptive and exploitative uses of AI, as well as AI use that violates personal data, intellectual property, or cybersecurity protections.
3. Strict Liability for Harm: In a significant shift, victims of high-risk AI systems that cause harm (such as to property, health, or life) no longer need to prove corporate negligence to obtain compensation. Rather than the previous flat-rate fines, the government now imposes revenue-based penalties for serious infractions - up to 2% of the company’s revenue. Companies responsible for such damages must also suspend operations and notify state authorities.
Although the law took effect on 1 March, existing AI systems have been granted grace periods to ensure a smooth transition toward compliance. Most frameworks have until March 2027 to meet the new standards, while high-stakes sectors, such as finance and healthcare, have an extended window until September 2027.
Balancing Regulation with Innovation
Careful not to stifle innovation, the Vietnamese government has implemented several support mechanisms.
Regulatory sandboxes allow companies to test sensitive technology in a controlled environment with simplified compliance requirements.
The National AI Development Fund aims to finance AI research, development, application and governance, and to give startups and SMEs financial support for access to high-performance computing and GPU power, cutting their R&D costs.
To encourage collaboration, the government plans to establish dedicated digital technology zones, with AI clusters set up within high-tech parks.
How Vietnam’s new law measures up globally
Vietnam’s new AI Law follows in the footsteps of the European Union (EU) AI Act, adopting a tier-based AI risk system and a revenue-based penalty structure. The hefty penalties aim to ensure that even large companies take local regulations seriously.
AI Governance in 2026 and beyond
Seventy-two countries - including the 27 EU member states and Southeast Asian nations such as Indonesia and the Philippines - already have at least binding privacy or data laws with AI provisions in place. South Korea was the first country in Asia to enforce a comprehensive AI law, which took effect in January 2026 and focuses on safe use and industrial growth.
China was one of the first countries to implement an AI-specific technology law rather than a broad framework. From 2023, the nation introduced targeted measures for Generative AI and Deep Synthesis, requiring developers to undergo security assessments and algorithm filing with the Cyberspace Administration of China (CAC). By 2026, these rules expanded to include mandatory digital watermarking and labelling for all synthetic content. Additionally, new ethical trial measures require companies to establish internal Ethics Committees to review systems for social and national security risks before public deployment.
In the United States, California and Colorado have taken the lead in enacting AI transparency and safety laws, while federal legislation remains in flux. California’s law has been in place since 1 January 2026, while Colorado’s is expected to come into effect by July 2026 after facing some pushback.
Vietnam is the pioneer of AI-specific regulation in Southeast Asia, but it is unlikely to remain the only one. By balancing regulation with sandboxes and funding, the government seeks to expand its digital economy without displacing human workers or compromising user safety.
As the risks of AI begin to outpace its progress, Vietnam’s binding, AI-specific law represents a clear signal in the region that technological innovation and human accountability must go hand-in-hand.
Sources: Online Newspaper of The Government of The Socialist Republic of Viet Nam, Luat Vietnam, TNGlobal, VnEconomy, Viet Nam News, Duane Morris Vietnam, iapp, Tilleke & Gibbins, RMIT University, EU Artificial Intelligence Act, Ministry of Science and ICT, East Asia Forum, Carnegie Endowment for International Peace, AARP, LinkedIn (Justin Azubuike)