AI Governance Takes Off - Latest AI Guidelines and Rising Demand for Data Governance Roles in Singapore

2024-03-25

By Alvin Toh, Co-founder of Straits Interactive


In a previous newsletter on the 5 Trends That Will Shape the Future of Data Protection in 2024, we forecast that regulators would take the helm on AI software with more stringent laws and enforcement. This month, we have begun to see various governments make good on what has been in discussion and consultation since last year and earlier.

EU Takes the Lead in Regulating AI

On 13 March 2024, the European Parliament approved the world’s first major set of regulatory ground rules to govern AI use. The regulation is expected to enter into force at the end of the legislature in May, after passing final checks and receiving endorsement from the European Council. This landmark legislation, expected to be implemented from 2025 onwards, focuses on safeguards for general-purpose AI, limitations on the use of biometric identification by law enforcement, and bans on social scoring and AI exploitation of user vulnerabilities. It also enforces the rights of consumers to lodge complaints and receive meaningful explanations. All this comes with the prohibition of certain applications that threaten citizens’ rights, obligations for due diligence on high-risk systems, as well as requirements on transparency.

Additionally, the World Economic Forum has launched the AI Governance Alliance to champion the responsible global design and release of transparent and inclusive AI systems. All this sets the EU on course to become the first global power to regulate AI and marks a significant step towards ensuring safe and ethical AI use.

Singapore's PDPC Publishes Guidelines on AI Recommendation and Decision Systems

In Singapore, just a week prior to the passing of the EU AI Act, the Personal Data Protection Commission (PDPC) officially published its Advisory Guidelines on Use of Personal Data in AI Recommendation and Decision Systems. This is the finalised version of the Advisory Guidelines, whose draft was released for the PDPC’s public consultation in mid-2023.

With six of the 10 ASEAN countries developing national AI strategies with AI governance as a key development pillar, Singapore has emerged as the forerunner, the first out of the gate. While the Advisory Guidelines are not legally binding, the PDPC is likely to interpret and enforce the PDPA in a way that is consistent with them. As such, organisations are compelled to adhere to the Advisory Guidelines to ensure PDPA compliance for AI systems being developed or deployed. Three types of activities are subject to these guidelines:

1. Using Personal Data in AI System Development, Testing, and Monitoring:

Organisations can use personal data for AI system development with meaningful consent or under exceptions such as business improvement or research. Criteria are outlined for the application of these exceptions, ensuring appropriate use of personal data. Data protection considerations, including anonymisation and safeguards, are highlighted for organisations developing AI systems.

2. Deployment – Collection and Use of Personal Data in AI Systems:

The PDPA applies to the collection and use of personal data in deployed AI systems, requiring organisations to adhere to consent and notification obligations. Organisations are accountable for ensuring proper handling of personal data in AI systems, including implementing safeguards and transparency measures.

3. Procurement of AI Systems – Best Practices for Service Providers:

Service providers developing bespoke AI systems have obligations under the PDPA. They should ensure data protection measures, such as data mapping and labelling, and adhere to accountability obligations. Data Protection Officers will need to step in, and skill up, to evaluate AI systems and recommend relevant constraints.

Transparency measures are also encouraged to assure consumers regarding the appropriate use of their personal data in AI systems, as part of good ethical practices and AI governance.

Simultaneously, Singapore's Infocomm Media Development Authority (IMDA) is seeking feedback from other countries on its proposed Model AI Governance Framework, offering practical guidance for private sector organisations to address ethical and governance concerns when deploying AI solutions. 

The IMDA has now entered public consultation on the framework in collaboration with the AI Verify Foundation, which serves as a global open-source R&D community that develops AI Verify testing tools for the responsible use of AI in Singapore. As part of Singapore’s approach to AI governance, AI Verify was developed as an AI governance testing framework and a software toolkit so that organisations can concretely operationalise their policies to govern AI use in the workplace.

All this, combined with the rapid digital transformation that started at the height of COVID, has sown new demand for companies to demonstrate good governance - in particular, data governance.

Surge in Demand for Data Governance Professionals

A recent study by the Data Protection Excellence Network tracking data protection and governance jobs in Singapore also found a 173% year-on-year increase in demand for data governance roles from 2022 to 2023. For the first time in eight years, data governance-related jobs exceeded data protection roles, by 62%. This reflects the growing shift of emphasis towards data governance, especially with the advent of generative AI in organisations.


There is a greater inclination towards hiring seasoned DPO and data governance professionals at senior manager level, with emphasis on candidates with at least five years of experience. This trend is attributed to the rising strategic importance of data governance roles, as digital transformation will be characterised by more complex adoption of technologies.

Meanwhile, data protection roles have fallen by 35% year on year, suggesting that DPOs should pivot towards data governance - to become ‘enablers’ rather than ‘showstoppers’ - balancing business objectives with compliance demands.

Data Protection Officers (DPOs) are at the forefront of navigating the evolving world of AI governance, facing the dual challenge of addressing novel ethical dilemmas and mastering emerging AI business skills such as prompt engineering. This skill is essential for steering AI outputs to adhere to organisational governance and compliance standards. DPOs must embrace a culture of continuous learning and cross-disciplinary education to stay abreast of data governance requirements. To bolster their expertise, they could consider pursuing specialised certifications such as the AI Governance Professional from the International Association of Privacy Professionals (IAPP) or an Advanced Certification in Generative AI, Ethics and Data Protection.

In a climate increasingly concerned with the ethical dimensions of AI, Singapore positions itself as a collaborative leader in promoting responsible AI development and stringent data governance. This stance offers local organisations a unique chance to align with regulatory standards while pioneering innovation and attracting investment more assuredly. 

By keeping up-to-date with the latest regulations, including the PDPC guidelines, and equipping themselves with cutting-edge skills, the trained Data Governance professional can help businesses navigate this shifting terrain more deftly. Leveraging AI’s potential ethically and responsibly becomes not just a regulatory necessity, but a competitive advantage in a world keen on ethical AI use.  


This article was first published on The Governance Age on 22 Mar 2024. 



