By Kevin Shepherdson, CEO, Straits Interactive, and Sarah Wang Han, Head of Legal Research, Straits Interactive
Data Protection Officers are now standing at a strategic crossroads. While their traditional mandate was anchored in ensuring compliance with data protection laws—such as Singapore’s Personal Data Protection Act (PDPA)—the emergence of AI-driven systems now requires a broader, more future-oriented approach.
The traditional information lifecycle, designed primarily for compliance, is no longer sufficient in the Gen AI age. Instead, DPOs must evolve to oversee a governance-centric data lifecycle that integrates data protection, data governance, and AI ethics — enabled by standards like ISO 38505 (which introduces the data accountability map) and ISO 38507 (which addresses the governance implications of AI).
The Traditional Information Lifecycle (Compliance Objective)
Under Singapore's PDPA, organisations are required to govern the handling of personal data in line with key obligations that can be mapped to a traditional information lifecycle. This lifecycle typically includes collection, use, disclosure/transfer, storage/disposal, and accountability, each associated with specific compliance objectives.
Traditional Information Lifecycle Under the PDPA (Compliance Objective)

| Lifecycle Stage | PDPA Compliance Objective |
| --- | --- |
| Collection | Consent, Notification, and Purpose Limitation Obligations |
| Use | Purpose Limitation and Accuracy Obligations |
| Disclosure / Transfer | Consent and Transfer Limitation Obligations |
| Storage / Disposal | Protection and Retention Limitation Obligations |
| Accountability | Accountability Obligation (policies, practices, DPO appointment) |
✅ This lifecycle ensures legal compliance, data minimisation, and secure storage.
❌ However, it is linear and reactive, and does not accommodate the dynamic risks introduced by modern AI systems.
The Privacy Risks Presented by Generative AI and LLMs
While Gen AI offers immense potential to enhance productivity, streamline workflows, and create new value, its rapid adoption—especially through freely available tools like ChatGPT, DeepSeek, and various start-up AI apps—has introduced significant privacy and security risks for organisations.
In many cases, staff experiment with these tools without proper governance, and organisations fail to perform due diligence on the privacy practices of third-party GenAI apps: overlooking privacy policies, not understanding how data is used or stored, and failing to assess whether AI outputs may contain or leak personal data.
The nature of LLMs, which are trained on vast corpora and may retain contextual memory in fine-tuned applications, means that even seemingly harmless usage (e.g., entering internal summaries or customer data into prompts) can lead to data leakage, regulatory breaches, or unintended re-disclosure.
In short, unlike traditional IT systems, Gen AI:
1. Generates new content or decisions, often in unpredictable ways,
2. Learns from broad, uncontrolled data sources, including user inputs and public content,
3. Creates outputs that may affect real people, sometimes inaccurately or unfairly.
Key Privacy Risks:
1. Prompt Data Leakage: Sensitive information entered into prompts can be retained or memorised by LLMs, violating data minimisation and confidentiality (a mitigation sketch follows this list).
2. Hallucinations and Misidentification: LLMs may create fictitious but plausible content that implicates real individuals.
3. Opaque Processing: Many LLMs lack explainability, complicating individuals' rights to access and correction.
4. Cross-border Transfer and Training: Data scraped or processed may contravene regional data sovereignty laws.
5. Automated Decision Making: When LLMs are used to make or guide decisions without human oversight, individuals may face significant impacts without transparency or recourse, risking violations of fairness, accountability, and due process.
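To make the first of these risks concrete, the minimal Python sketch below shows one possible pre-submission guardrail: redacting recognisable identifiers before a prompt ever leaves the organisation. The patterns and names here (PII_PATTERNS, send_to_llm, governed_prompt) are illustrative assumptions, not part of the PDPA or any ISO standard; a production deployment would use a dedicated PII-detection service rather than hand-rolled regexes.

```python
import re

# Illustrative patterns only; a real deployment would use a dedicated
# PII-detection library tuned to local identifiers.
PII_PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "PHONE": re.compile(r"\b(?:\+?65[ -]?)?\d{4}[ -]?\d{4}\b"),  # SG-style numbers
    "NRIC": re.compile(r"\b[STFG]\d{7}[A-Z]\b"),                 # SG NRIC/FIN format
}

def redact_pii(prompt: str) -> str:
    """Replace recognisable personal identifiers with placeholder tags
    before the prompt leaves the organisation's boundary."""
    for label, pattern in PII_PATTERNS.items():
        prompt = pattern.sub(f"[{label} REDACTED]", prompt)
    return prompt

def send_to_llm(prompt: str) -> str:
    # Hypothetical stand-in for a call to an approved third-party GenAI API.
    raise NotImplementedError("wire up your approved provider here")

def governed_prompt(prompt: str) -> str:
    safe_prompt = redact_pii(prompt)
    print(f"AUDIT: outbound prompt: {safe_prompt!r}")  # log the redacted text, never the raw prompt
    return send_to_llm(safe_prompt)
```

For example, redact_pii("Email tan@example.com about case S1234567D") returns "Email [EMAIL REDACTED] about case [NRIC REDACTED]", so the identifiers never reach the third-party model.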
Real-World Incidents:
1. Samsung (2023): Employees submitted proprietary code into ChatGPT, inadvertently exposing trade secrets.
2. Zoom (2023): Faced user backlash over ambiguous consent terms for AI training, risking violation of transparency obligations.
3. Clearview AI: Collected and processed biometric data without consent, highlighting the dangers of scraping and over-collection in training AI.
These incidents underscore that privacy risks now emerge beyond collection and storage — they extend to how AI learns, generates, and reasons.
Introducing the Data Accountability Lifecycle (ISO 38505)
As the limitations of the traditional information lifecycle become increasingly clear in the age of Generative AI, ISO/IEC 38505-1 offers a powerful alternative: the Data Accountability Map. This lifecycle was designed not just for legal compliance but to support data governance, value creation, and risk mitigation throughout the entire data journey.
Unlike the PDPA lifecycle, which ends after data is disclosed or stored, ISO 38505 introduces two additional stages—Report and Decide—that reflect how modern organisations, especially those using AI, extract insights and make decisions from data. These stages are critical in AI environments, where data is used to generate outputs, drive autonomous actions, and even influence human behaviour.
The ISO 38505 Data Accountability Lifecycle
The data accountability map frames six data activities, each a point where value is created and accountability must be assigned:
1. Collect: acquiring data from individuals, systems, or third parties
2. Store: retaining, securing, and managing data
3. Report: deriving insights from data through analytics, visualisation, or model training
4. Decide: acting on data-driven insights, whether by humans or AI systems
5. Distribute: sharing or disclosing data and outputs to other parties
6. Dispose: deleting or de-identifying data at the end of its life
Why REPORT and DECIDE Are Critical in the AI Context
Traditional data protection laws focus on how data is collected, stored, and shared. But in the age of AI and LLMs, the real risk lies in how data is interpreted and acted upon.
REPORT in AI:
1. Involves extracting meaning from raw or structured data—whether through analytics, visualisation, or algorithmic modelling.
2. In a GenAI context, this includes:
- Training or fine-tuning models with collected data
- Embedding user feedback into model performance
- Identifying patterns that guide AI behaviour
3. Risks: Misrepresentation, inference of sensitive data, biased insights, lack of transparency.
DECIDE in AI:
1. Represents the most sensitive phase where data-driven actions are taken.
2. Includes:
- Deploying AI models that generate human-like responses
- Letting AI assist or automate business decisions (e.g., hiring, pricing, recommendations)
3. Risks: Discrimination, unfair decisions, black-box reasoning, failure to justify or reverse outcomes (one way to gate such decisions is sketched below).
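As a sketch of what governing the DECIDE stage can look like in practice, the Python fragment below routes an AI recommendation through a human-in-the-loop gate and writes every outcome to an audit trail. The threshold, field names, and the decide_with_oversight function are assumptions for illustration; real escalation criteria would come from the organisation's own risk assessment, not model confidence alone.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class AIDecision:
    subject_id: str      # the individual the decision affects
    recommendation: str  # what the model proposes (e.g., "shortlist", "reject")
    confidence: float    # model-reported confidence, 0.0 to 1.0
    rationale: str       # explanation surfaced to the human reviewer

AUDIT_LOG: list[dict] = []  # in practice, an append-only store

def decide_with_oversight(decision: AIDecision, auto_threshold: float = 0.95) -> str:
    """Gate an AI recommendation: low-confidence cases are escalated to a
    human reviewer, and every outcome is logged so it can be explained,
    justified, and reversed later."""
    escalate = decision.confidence < auto_threshold
    outcome = "PENDING_HUMAN_REVIEW" if escalate else decision.recommendation
    AUDIT_LOG.append({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "subject": decision.subject_id,
        "recommendation": decision.recommendation,
        "confidence": decision.confidence,
        "rationale": decision.rationale,
        "outcome": outcome,
        "escalated": escalate,
    })
    return outcome
```

Logging the rationale alongside the outcome is what makes the black-box risk tractable: a data subject's access or correction request can be answered from the audit trail rather than from the model itself.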
By explicitly including these two stages, ISO 38505 allows GRC and privacy professionals to govern AI systems more effectively, ensuring ethical use, accountability, and trust.
This framework goes beyond compliance to enable:
1. Role-based accountability
2. Business-aligned value realisation
3. Lifecycle-wide ethical control points (sketched below)
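One way to operationalise these control points is to treat them as configuration owned by the Data Governance Committee. The sketch below keys illustrative controls to the six ISO 38505 activities; the specific controls listed are assumptions for illustration, not text drawn from the standard.

```python
# Stage names follow the ISO 38505 data accountability map; the control
# points themselves are illustrative, not quoted from the standard.
LIFECYCLE_CONTROLS: dict[str, list[str]] = {
    "collect":    ["consent and notification check", "purpose limitation review"],
    "store":      ["encryption at rest", "retention schedule"],
    "report":     ["bias review of training data", "sensitive-inference assessment"],
    "decide":     ["human-in-the-loop gate", "decision audit trail"],
    "distribute": ["cross-border transfer assessment", "output PII screening"],
    "dispose":    ["secure deletion", "model retraining or unlearning review"],
}

def controls_for(stage: str) -> list[str]:
    """Return the control points a project must evidence at a given stage."""
    return LIFECYCLE_CONTROLS.get(stage.lower(), [])
```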
Aligning AI Ethical Principles to Each Lifecycle Stage
As we move from compliance-centric privacy models toward AI governance, it becomes essential to look beyond traditional privacy obligations like consent, retention, and data transfer. While these remain important, the rise of Gen AI introduces new ethical dimensions that require a broader governance framework.
AI Ethical Principles: Broader Than Privacy
The principles below extend beyond the scope of data protection laws and are vital to ensuring that AI systems are trustworthy, safe, and aligned with societal values:
1. Accountability
2. Security and Privacy
3. Safety and Sustainability
4. Fairness and Inclusiveness
5. Human-Centred Values
6. Explainability and Transparency
When applied across the ISO 38505 lifecycle, these principles help DPOs and GRC professionals govern the entire GenAI ecosystem—from data collection and processing to the final AI-generated output and decision-making.
[Table: AI Ethical Principles vs Privacy Implications]
These ethical principles are not optional — they are foundational to trustworthy AI and regulatory resilience.
DPOs Must Upskill in AI Governance and Ethics
As organisations increasingly adopt AI and data-driven technologies to fuel digital transformation, the role of Data Protection Officers (DPOs) and data protection professionals must evolve from being compliance gatekeepers to strategic enablers of responsible innovation.
From Compliance to Business Enablement
Traditionally, DPOs have focused on ensuring adherence to data protection laws such as the PDPA. While this remains essential, it is no longer sufficient in today’s intelligent digital economy. DPOs must now upskill and broaden their competencies to:
1. Go beyond compliance objectives to also support business objectives.
2. Help organisations realise value from data, not just mitigate risks.
3. Drive and support the establishment of a Data Governance Committee that aligns data use with corporate strategy and ethical oversight.
4. Promote themselves as business enablers, not "show-stoppers" — by facilitating responsible innovation instead of blocking it due to uncertainty or lack of governance readiness.
5. Play a critical role in governing the use of AI systems, ensuring they are responsible, ethical, and aligned with internal policies and societal expectations.
Supporting Digital Transformation through Responsible AI
As organisations explore Gen AI tools and AI-powered systems to boost productivity and competitiveness, DPOs must help govern how AI is used:
1. Integrating privacy, data ethics, and AI governance into the organisation's digital transformation strategy.
2. Ensuring AI applications respect privacy rights, avoid harm, and are fair and explainable.
3. Participating in or advising internal AI governance bodies or ethics review committees.
Where to Upskill: Recommended Training Pathways
As organisations undergo digital and AI transformation, DPOs and data professionals must:
✅ Understand AI Ethical Principles
These principles — fairness, transparency, explainability, accountability, and harm prevention — extend traditional data protection and are increasingly embedded in laws and policies.
✅ Familiarise Themselves with Modern AI Governance Frameworks
Key frameworks include ISO 38507, ISO 42001, Singapore's Model AI Governance Framework, and the EU AI Act. By developing familiarity with these frameworks, DPOs can:
1. Influence ethical AI system design
2. Contribute to AI governance boards or committees
3. Embed responsible AI practices into existing privacy management programmes
The age of Gen AI calls for a new breed of DPOs — data ethics champions and AI governance leaders. While traditional information lifecycles (like those based on the PDPA) remain important, they are no longer sufficient in isolation.
By embracing the data accountability lifecycle (ISO 38505) and aligning it with AI governance best practices (ISO 38507, ISO 42001, Singapore's Model AI Governance Framework, and the EU AI Act), DPOs can build privacy-respecting, ethically sound, and business-enabling governance structures.
Upskilling in AI ethics and governance is no longer a choice — it’s a strategic necessity for DPOs seeking to stay relevant, proactive, and trusted in the age of intelligent systems.
DPOs and data professionals looking to expand their capabilities in Data Governance and AI Governance can attend the following competency-based programmes jointly offered by Straits Interactive and the Singapore Management Academy:
Advanced Certificate in Data Governance Systems
A practical course to equip professionals with the frameworks and tools to establish enterprise-wide data governance systems, including:
1. Data accountability lifecycle (ISO 38505)
2. Governance committees and role structures
3. Bridging compliance and business value
Advanced Certificate in Generative AI, Ethics and Data Protection
This course empowers professionals to:
1. Understand the risks and implications of GenAI
2. Apply AI ethical principles to privacy and security governance
3. Implement responsible AI governance aligned with ISO 42001, Singapore’s Model AI Governance Framework, and the EU AI Act
For the full roadmap of upskilling opportunities across privacy, governance, and AI domains, visit:
🌐 Data Protection Competency Roadmap – DPEX Network