5 Trends That Will Shape the Future of Data Protection in 2024

2024-01-30
banner

By Kevin Shepherdson, CEO & Co-founder of Straits Interactive.


In the last issue, I unpacked 2023’s data protection forecasts and their paths of manifestation in a year defined by AI. Chugging full steam ahead, my team and I recently held our highly-anticipated five key data protection trends of 2024 webinar for the DPEX Network community. Today, I unveil those very predictions to our readers.

I was joined by William Hioe (middle), our Head of Regional Consulting, and Syed Isa Alhabshee (rightmost), our Head of Legal & Consulting, to discuss what lay ahead for the year.

Since last year, rapid integration of AI technologies into workflows has ushered in a new era of challenges and opportunities for businesses, regulators, and individuals alike, particularly in data privacy and security. As such, all five trends this year intersect with generative AI phenomena, each giving rise to the need for AI governance and skilled input by appropriately qualified and experienced business professionals. 

The five trends we’ve identified for 2024 are:

  1. Generative AI will go mainstream in business productivity
  2. More due diligence needed in adoption of generative AI apps
  3. New risks as content creation transitions to content generation
  4. Regulators take the helm on AI software with stringent enforcements
  5. Multimodal generative AI demands new skill sets of data protection professionals

Keeping pace with these trends, organisations and professionals across all fields can chart their next steps in governing the value and risks of this evolving techscape and developing their workplace competencies. New skills in value creation and risk management are now essential to generative AI, which continues to underpin digital transformation. 

To watch the webinar in full, head over to our Youtube channel, where the on-demand recording has been made available.   

1: Generative AI will go mainstream in business productivity

In 2024, generative AI will go mainstream as an enterprise initiative. Office productivity is about to get a major boost from generative AI, with Microsoft Copilot and Duet AI for Google Workspace being just the tip of the iceberg. According to Gartner’s AI Opportunity Radar, we see 2 tribes of AI emerging based on their value impact - Everyday AI, which focuses on productivity gains, and Game-changing AI, which centres on creativity and generating new value. Of course, whether it is a productivity helper or a game-changer depends on the use case for which the technology is employed. It is not about using AI for everything in an organisation; it's about strategic application in areas that deliver real impact.

Such areas of application can span from department-specific uses of generating creative marketing copy, enhancing customer service, writing programme code and facilitating HR-related tasks, to general uses of analysing data, drafting SOPs, business planning to summarising legal documents. 

However, this brings implications for privacy, security and ethics. Businesses must tread carefully, especially in areas like customer service, where AI-powered chatbots may inadvertently expose sensitive information. HR professionals using generative AI to write job descriptions and analyse resumes must be mindful of bias and dehumanisation of the recruitment process. 

As general users in the company leverage public tools for specialised queries and document analysis or creation, there is risk of leaking proprietary or personal data when that information becomes part of the model’s training data set. Users should also not solely rely on the "plausible output" of conversational AI as their underlying algorithms are not principled with factfulness. While generative AI is a powerful tool, it's not a substitute for human judgement and critical thinking. As such, double-checking information when utilising them, especially in critical areas like legal advice, and maintaining vigilant oversight is crucial.

2: More due diligence needed in adoption of generative AI apps

From industry leaders like OpenAI to startups leveraging APIs, and existing apps that integrate generative AI functionalities, each software type poses unique challenges. Therefore, organisations adopting them will find it necessary to conduct thorough due diligence across the following three broad areas of apps

Core Apps 

Developed by pioneering leaders in the generative AI sector, such as OpenAI and Midjourney, who founded proprietary foundation models and now drive innovation with relentless R&D. However, the recent flurry of CEO dismissals and reinstatements at OpenAI as well as the ChatGPT bug that leaked users’ conversation histories, highlight the need for reliable governance alongside technological advancements. IP issues with the base training dataset are also afoot. The New York Times is suing OpenAI and Microsoft for copyright infringement, claiming the two companies built their AI models by “copying and using millions” of the publication’s articles and now “directly compete” with its content as a result.

Clone Apps 

Tools developed by startups and individual developers, funded by VCs, that leverage the API of Core Apps to create solutions for specific niches or industries. While they play a pivotal role in the democratisation and commercialisation of generative AI, our research has uncovered some with questionable privacy practices within this category.

Our study on 100 mobile Clone Apps using OpenAI's GPT APIs revealed significant discrepancies in the declared data safety practices and their actual behaviour, posing potential privacy risks.


Meanwhile, another study on 113 popular AI desktop applications revealed that most fall short of GDPR and AI transparency standards. While 63% cited the GDPR, only 32% were apparently within the GDPR’s purview. And of this 32%, a mere 48% were compliant. While these studies illuminate the pervasive privacy risks associated with generative AI apps, the full scope of dangers are not limited to those of data protection. We will explore more of those in the next trend.

Combination Apps

Lastly these are existing applications that have incorporated generative AI features, such as Microsoft Copilot. They run the risk of exposing non-savvy users to the technology when they may not have prior education on how to safeguard sensitive data. 

Evidently, responsible AI and the development of governance protocols for AI applications is critical in 2024 and further into the future. Businesses must verify software providers' data handling policies, including privacy policies and terms of use, as well as local data protection laws in the user’s country, to guarantee the app’s trustworthiness.

3: New risks as content creation transitions to content generation

The transition from content creation to content generation is expected to create more privacy, security, and ethical-related breaches, whether through malice or ignorance in the use of generative AI. The same ease of content generation is also available to scammers and hackers, enabling them to commit traditional crimes using new techniques. Job scams enhanced by generative AI have taken forms such as fake job listings and phishing attacks targeting job seekers. 

Synthetic content generation, while a boon for marketers and influencers, raises concerns about data privacy and intellectual property. Instances of identity theft using deep fakes and voice cloning are on the rise, emphasising the importance of vigilance. In a recent Channel News Asia feature, we demonstrated how easily a fake avatar of a reporter could be created and made to speak another language.

Even for content generated legitimately, there are risks when humans are “out of the loop”, where there is no supervision, fact checks, or validations. Consider the case of an unchecked AI-generated poll speculating the cause of a woman’s death, appearing next to a Guardian article. The poll caused uproar among readers, and has since been taken down.

Content generation increases the reliance on prompts to fine-tune large language models (LLMs) like chatbots. While this offers exciting possibilities for creative expression and workflow automation, the ease of crafting prompts brings a new category of risks: adversarial prompts. These include prompt injection, prompt leakage and jailbreaking. 


These manipulations can have far-reaching consequences, from spreading misinformation to compromising data security and possibly the integrity of AI systems. Addressing these challenges is paramount in safeguarding responsible and secure development of content generation technologies, allowing us to reap its benefits without compromising trust or safety.

4: Regulators take the helm on AI software with stringent enforcements

Privacy regulators are poised to play a more active role in addressing the risks associated with generative AI. This takes place amid a mounting focus on AI-specific regulations emphasising accountability, transparency, and ethical use. As AI systems are integrated into various sectors, the collaboration between regulators, businesses, and the public becomes crucial in addressing data privacy and security concerns. 

Since August last year, China’s Provisional Administrative Measures of Generative Artificial Intelligence Services (Generative AI Measures) has been in full swing, largely impacting the companies offering generative AI services to the general public in China. Currently, the European Data Protection Board (EDPB) and the European Data Protection Supervisor (EDPS) are actively contributing to the European Union (EU) Artificial Intelligence Act, which is expected to pass this year. Over on the ASEAN front, in Singapore, the Personal Data Protection Commission (PDPC) has been actively involved in promoting the importance of AI governance and proposing Advisory Guidelines on the Use of Personal Data in AI Recommendation and Decision Systems. 

International efforts for coordinated AI governance are gaining traction, reflecting the growing demand for control over AI technologies. The United Nations (UN) has created a 39-member advisory body hailing from 6 continents to address the international governance of AI, with the aim of attaining final recommendations by this summer. Locally, Singapore is seeking feedback from other countries on the Model AI Governance Framework introduced by the Infocomm Media Development Authority (IMDA), which offers practical guidance for private sector organisations to address ethical and governance concerns when deploying AI solutions.

Concurrently, existing privacy laws like the Personal Data Protection Act (PDPA) and General Data Protection Regulation (GDPR) continue to play a crucial role, especially where personal data is processed by AI systems. These laws enforce principles such as consent, data minimisation and purpose limitation, ensuring compliance with data subject rights. In this regard, we expect to see more privacy enforcements on AI applications. 

5: Multimodal generative AI demands new skill sets of Data Protection Professionals

As generative AI incorporates multimodal capabilities, Data Protection Officers (DPOs) and data governance professionals must acquire enhanced privacy management skills. Meaning, these professionals must learn to identify and mitigate privacy risks across different data types. Understanding operational aspects beyond traditional text analysis, such as image and video processing, voice recognition, and other sensory data, is crucial for managing risk and compliance.


DPOs must not only navigate the new ethical challenges but also aforementioned risks such as adversarial prompts. New proficiencies in prompt engineering to guide AI outputs will be required to ensure organisational governance and compliance. Attaining cross-disciplinary knowledge - including a basic understanding of machine learning, language models, legal and ethical principles - are imperative for effective management of data protection in this era. As is a commitment to continuous learning, exemplified by pursuing certifications like the AI Governance Professional under the International Association of Privacy Professionals (IAPP). An Advanced Certification in Generative AI, Ethics and Data Protection is also up for the taking. 

Seeking avenues to be in touch with the latest developments in generative AI and data protection is also key. In the Philippines, we will be holding our annual masterclass at the Asian Institute of Management (AIM), where you can hear and network with expert speakers who will be sharing about the values, risks and constraints of generative AI adoption in businesses. 

Taking a more holistic approach to generative AI with the framework of data governance rather than data protection can help organisations strike a balance between compliance and business objectives. By understanding and addressing both aspects, as well as collaborating with stakeholders, organisations can harness the power of AI to drive innovation and growth while safeguarding trust, safety, and the fundamental right to privacy. 


Our Next-Gen AI Capability-as-a-Service platform, Capabara, is currently available on beta. Stay tuned to our latest announcements on its development by following our CAPABARA Linkedin page or visit capabara.com to find out how it can empower your organisation's digital transformation.


This article was first published on our Linkedin Newsletter, The Governance Age, on 26 Jan 2024.



Just one more step! We've sent an email to .
Please check your inbox or spam and open it to activate your account.

Topics
Related Articles