PDPC’s Denise Wong on Building Trust in Singapore’s Data and AI Future

2025-07-24
Article Banner

By Clarice Foo, Senior Media & Communications Executive at Straits Interactive


At Personal Data Protection (PDP) Week 2025 in Singapore, conversations around data and AI Governance reached new heights in industry-wide collaboration and shared ambition. In her opening address, Digital Development and Information and Minister-in-charge of Cybersecurity and Smart Nation Group, Mrs Josephine Teo, introduced initiatives such as the Global AI Assurance Sandbox, the Privacy Enhancing Technologies (PETs) Adoption Guide, and the elevation of the Data Protection Trustmark (DPTM) to a new Singapore Standard (SS 714:2025) on par with international data protection benchmarks, reaffirming Singapore’s position as a trusted global hub for technological innovation. 

Last year, we spoke with Denise Wong, Deputy Commissioner of the Personal Data Protection Commission (PDPC), about how the country was setting the stage for a future where innovation and public trust can grow hand-in-hand. This year, as the world continues to grapple with AI’s advances, we caught up with her again at PDP Week to reflect on key milestones, global collaborations, and what lies ahead in building a safe, confident digital ecosystem for all.

Q. Last year, you spoke about Singapore’s proactive stance in balancing digital innovation with data protection. Since then, we’ve seen new initiatives like the expanded Global AI Assurance Sandbox and the PETs Adoption Guide. What milestones so far do you think has had or will have the greatest long-term impact on Singapore’s journey towards a unified and practicable governance standard for data and AI?

Our philosophy has always been about putting in place just the right amount of guardrails for maximal innovation. We want companies to have confidence to innovate, while giving the public confidence to use the technology, and ensuring that AI is deployed for the public good. That is  our North Star. 

There are a couple of things we’ve done to further this principle in the data and AI domain. For AI governance, we rolled out the Model AI Governance Framework for Generative AI with the Infocomm Media Development Authority (IMDA) last year. The framework explains how AI systems work and identify key ethical and governance issues. But what does that mean in terms of actually testing AI systems? 

This is why the testing and assurance of AI systems is important and forms the crux of our work in the new Starter Kit for Safety Testing of Large Language Models (LLMs). The Starter Kit puts together best practices and testing protocols from the Global AI Assurance Pilot that was launched earlier this year. The Pilot saw different use cases where we matched real world applications with third-party testers, and we drew upon some of these experiences to inform the Starter Kit. 

We’ve also worked with the industry to shape safety testing of AI through the AI Verify Foundation, which is an industry consortium of over 200 companies, where we gather feedback on AI development and deployment. The AI Verify Foundation also houses some of our open source toolkits, like Project Moonshot which combine benchmarking and red-teaming capabilities, while allowing developers to assess explainability, robustness, fairness, and safety. It's not perfect, but it's a reference for developers looking to test their own AI systems. In creating these tools, we want to advance the conversation on what it means to build safe and responsible AI for a trusted ecosystem. 


]Missed our last interview with Denise Wong? Get up-to-speed, here.


Q. We’re seeing growing global collaboration on AI governance frameworks. How is Singapore engaging with these international conversations, and what does this mean for businesses operating here?

Singapore is a small country with an open market. Being part of the global conversation has always been in our DNA and AI is no different. This is particularly important in AI and data where there are multiple players, from model development to solutions building, application, third-party testing, and finally the end user. Even though this is nascent technology and there's a lot that we don't know yet, it's important to have international understanding and consensus. We want to figure out our collective priorities and what we think good looks like. There are no easy answers, but we have started the conversation in a few ways. 

Whenever we come up with a framework, we try to map it to other frameworks and jurisdictions. We’ve done this with the AI Verify Testing Framework, the NIST Risk Management Framework, the ISO/IEC 42001, as well as with the G7 Hiroshima AI Process (HAIP) Reporting Framework. This means that if you are an organisation using the AI Verify Testing Framework, then you would most likely be aligned with these other frameworks too, which lowers the cost of compliance for businesses in a tangible sense. 

Aside from the practical things we’ve done, we also participate very actively in international conversations on AI Governance. For instance, I lead a working group on Data Privacy and AI in the OECD.AI Expert Group on AI, Data and Privacy, which comprises a group of 70 or 80 global experts having policy conversations about data and AI. Singapore also participated in the United Nations (UN) High-Level Advisory Body on Artificial Intelligence and we are plugged into conversations at the International Network of AI Safety Institutes, where we lead the track on technical testing together with Japan. 

We also hosted the Singapore Conference On AI (SCAI) 2025 earlier this April, which was a scientific discussion about research priorities in the field of responsible AI among scientists from all over the world, on the back of the International Conference on Learning Representations (ICLR). The scientists then got together and produced a document called the Singapore Consensus on Global AI Safety Research Priorities, which identified important technical AI safety domains for the international research community. In that circumstance, we were simply acting as a convener and a neutral platform for that discussion, this time for the science and research aspect about international collaboration.

Q. When it comes to supporting smaller businesses and startups in navigating an increasingly complex compliance and governance landscape, has the PETs sandbox seen success with smaller outfits? Are there any other plans ahead to support SMEs in data protection & AI governance?

Resourcing is always a concern for any company regardless of size, though, more so for small companies. The PETs Sandbox has been about finding a way to experiment with some of these technologies and having different companies, big and small, come in to trial them. Having run the Sandbox for a few years now, it has helped us understand the technology better and figure out how we can help companies to adopt it in a way that supports their business and is cost-effective. Simultaneously, we’ve observed that the PETs market is increasingly offering low-cost products and is more productised with APIs and SDKs. Businesses are also beginning to understand the value proposition of being able to extract more insights from the data while still protecting privacy. And so these seemed to signal that the market is moving and growing in demand. 

As such, we launched the PETs Adoption Guide to give senior leaders within companies, including small companies, a sense of how you can use this technology and pick the right solution for your use case. Adoption in businesses is the key thing that we want to achieve, beyond what’s written on paper or passing rules and regulations. Which is why we work closely with business associations and, in a more general capacity that includes AI, within the AI Verify Foundation. The foundation has big and small companies, not just big tech players, and are what we call demand-side companies who are users and developers of AI applications. As long as they're interested in responsible AI, we are happy to have them - it helps them understand the technology and lowers the cost of adoption.

This year’s PETs Summit was really our attempt at pulling together that ecosystem. It's our fourth year now and we've seen good traction and representation on both supply and demand sides in conversations on advances in the technology as well as how businesses, big and small, can implement it into their products and offerings.

Q. Since the release of PDPC’s Advisory Guidelines on Use of Personal Data in AI Recommendation and Decision Systems in tandem with IMDA’s Model AI Governance Framework for Generative AI, how has industry adoption been? Have you observed any gaps or challenges among companies trying to operationalise these principles?

When dealing with frontier technology like AI, no one has all the answers. As such, every piece of work that we do - policies, frameworks or otherwise - is always done in close consultation with industry. We iterate drafts with them before we launch it and even after they’ve been released, most of these are living documents that we're open to refine as we go along. 

The two items you’ve mentioned, in particular, the IMDA Model AI Governance Framework for Generative AI, are considered broader, higher-level frameworks. Then, there are frameworks at the implementation level, like the Starter Kit for Safety Testing of LLMs. So far, we’ve had great feedback from industry on these being references that they can adopt. But as the technology evolves, we are open to fine-tuning it for the next version. 

The PDPC’s Advisory Guidelines on Use of Personal Data in AI Recommendation and Decision Systems is of a slightly different texture. It is the PDPC’s interpretation of how data can be used for certain types of AI training, in particular decision making and recommendation systems. Naturally, the next question might be: What about Generative AI? 

Generative AI is a different type of system - it shares similar issues as decision making and recommendation systems, but with different nuances and more complex textures with the way these systems operate. We are currently looking at it, again, in consultation with industry, but still behind closed doors presently. 

Q. With IMDA now introducing a more structured safety testing environment for emerging technologies like agentic AI through the Global AI Assurance Sandbox and the Starter Kit for Safety Testing of LLMs, are there plans to evolve PDPC’s guidelines further to address these advanced AI technologies? 

I believe that regulators or data protection authorities are always looking out for the next wave of technology and how data is being used in these new tools. Definitely, the PDPC is actively trying to figure out what's the next bound and what we have to do to fulfill our mandate.

Going back to my comment at the start, our position is to always ensure we make space for digital innovation in a way that can protect the public and instill confidence. In the case of PDPC, it is about making sure that personal information and data is protected or removed, for example, even as the data is used for AI training. PETs are a great way to do that, because it's a series of techniques that can be mixed and matched to achieve this. 

Q. You’ve often emphasised the importance of agile regulations through principles-based guidelines that are practical for organisations. Do you foresee a future where Singapore might move towards more prescriptive rules for certain high-risk AI applications, like in the EU AI Act?

We believe it's important to put in place initial building blocks in the form of frameworks and testing, so that we can reach a consensus on what good looks like. For us, this was the Model AI Governance Framework for Generative AI, the Starter Kit for Safety Testing of LLMs, and then understanding how the industry is operating through, for example, Global AI Assurance Pilot. 

At the same time, we have moved on legislation to target specific harms where we see they may occur. One example is a time-bound legislation that we moved last year to deal with the potential problem of deep fakes during the General Elections

Our general regulatory approach is about proportionate and targeted legislation to deal with the worst of harms or pressing concerns. At the same time, we have put in place horizontal best practices, codes of practice and practical actions that companies can take in order to make their applications safer.

Q. The role of DPOs is evolving into something broader that encompasses AI Governance or corporate digital ethics - what new skills or mindsets do you think will be critical for DPOs in the next 5 years, especially with AI in the mix?

Great question. That’s actually a theme we’re seeing run throughout PDP Week. Traditionally, the field of data protection has been very much about ensuring that companies’ practices comply with the law. But with all these new technologies coming in, I think there’s now a need to move beyond just pure data protection into a broader concept of data governance.

This means being able to manage and process data in a way that’s responsible and safe, while still supporting the company’s objectives. That definitely requires new skill sets, capabilities, and a mindset shift within the community. DPOs will need to understand the technology - not necessarily down to every technical detail, but enough to grasp how it impacts obligations in data protection, as well as other areas like cybersecurity and intellectual property. There’s a much greater need for technological awareness now.

At the same time, there’s still the core responsibility of keeping data safe and secure. That involves understanding different aspects of security: cybersecurity, technical safeguards, and even ICT management. We often use the acronym BEST: Back up, Encrypt, Secure, and Track. To do this well, a Data Protection Officer (DPO) or Data Governance lead needs to know what these mechanisms are and how to apply them to protect the company’s data assets.

Of course, different companies will organise their governance and data protection functions in different ways. Some place it under Legal, others under Compliance, and some even under Product. It really depends on the industry, sector, and company structure. But regardless of where it sits, someone needs to take on that role and responsibility. That person or team must understand what they’re accountable for, and they’ll need to translate that into language and priorities the leadership and executives can understand. Ultimately, they’re the bridge between responsible AI, data protection, and data governance, and aligning all of that with the goals of the business.

Q. What advice would you give to businesses and DPOs in protecting their organisations and customers from AI-powered threats while still leveraging AI for innovation?

I’d say start by going back to the fundamentals, such as the BEST approach I mentioned. That means making sure you understand where your data sits within the company, how it’s being used, who has access to it, and how it’s being protected. Getting those basics right is already a very strong start.

You might recall the first day’s panel on ”Staying Ahead of Data Breaches” - we had a conversation about the rise of cyber threats and the increasing cases of data exfiltration as a result. One of my fellow panelists pointed out that even as technology advances, it’s often the human element that remains the weakest link.

So my advice would be: focus on the fundamentals. Think carefully about how your data and systems are set up, and make sure there’s clear accountability for managing and securing them. With that solid foundation in place, you’ll be in a much better position to leverage and use AI in ways that benefit the business while keeping your organisation and customers safe.


This article was originally published on 17 July at the Governance Age.


Unlock these benefits
benefit

Get access to news, enforcement cases, events, and actionable tips and guides

benefit

Get regular email updates and offers

benefit

Job opportunities, mentorship and career guidance

benefit

Exclusive access to Data Protection community - ask questions, network and share knowledge with peers and experts via WhatsApp and Linkedin

Topics
Related Articles