The 2025 AI Scorecard: A Look Back on Generative AI Trends We Predicted

2025-12-16

By Kevin Shepherdson, Founder & CEO of Straits Interactive and Author of The AI Factory – AI Capability Guide for SMEs


At the beginning of the year, Straits Interactive published our predictions of generative AI trends in the workplace that organisations in the region should monitor, with a focus on their implications for Governance, Risk Management, and Compliance (GRC) professionals, including Data Protection Officers (DPOs).

This marked a deliberate shift from our usual analysis of data protection trends—typically centred on new regulations, major privacy developments, and breaches. The reason was clear: DPOs and GRC professionals can no longer ignore the impact of generative AI, which now has major implications for privacy and security.

As we have shared in our courses, the role of the DPO has been evolving from strictly ensuring compliance with data protection laws to governing data in line with organisational objectives, i.e., increasing the value of data while reducing its risks (a shift triggered by the COVID-19 pandemic). The need for AI governance and digital transformation, triggered by the advent of ChatGPT, has since accelerated this evolution. DPOs therefore play a major role in AI governance and digital transformation, and must account for ethical considerations alongside the new privacy and security risks that arise as more personal data is processed by AI systems.

Our research team had identified six key trends in the workplace - the 6Cs: Collection of Data, Compute Power, Context Window, Chain-of-Thought, Customisation, and Control - that are critical for harnessing AI's potential responsibly, especially for SMEs.

So how did these trends pan out? 

Reflecting on the developments in 2025

If 2024 was the year of trial-and-error for generative AI, 2025 will be remembered as the year of Reasoning—defined by the arrival of true "Reasoning Engines" and a seismic shift in the open source landscape.

We began the year anticipating incremental gains, but the release of GPT-5 in August and Gemini 3.0 late in the year shattered those expectations, introducing models that didn't just "predict" text but actively "thought" through complex multi-step workflows. Perhaps even more disruptive was the open source explosion. Meta’s Llama 4 (with its staggering 10M token context window) and the surprise release of OpenAI’s "GPT-OSS" models signaled the end of the proprietary-only era. Simultaneously, DeepSeek R1 proved that state-of-the-art reasoning could be commoditised, forcing every major player to compete on efficiency rather than just raw intelligence.

As mentioned earlier, we had tracked six critical trends - the "6Cs" - that would impact the workplace. Looking back, here is how those predictions held up against what actually happened.


1. Collection of Data

Trend Overview: The original prediction was that organisations would shift away from generic internet data toward leveraging internal knowledge repositories—utilising customer interactions, policy documents, and archived communications to build contextually relevant models. We also anticipated the rise of Small Language Models (SLMs), which would allow companies (especially SMEs) to train AI solely on internal, proprietary data to mitigate copyright and sovereignty risks.

How it Played Out: This trend happened faster than expected, fuelled by the open source renaissance. The release of Llama 4 and Qwen 3 gave enterprises powerful, open-weight alternatives to closed APIs, allowing them to keep proprietary data entirely within their own "walled gardens." The prediction regarding SLMs was spot on: SMEs increasingly adopted efficient models like GPT-5-nano or localised versions of DeepSeek R1, valuing lower compute costs over raw parameter count.

The GRC Impact: For DPOs, the proliferation of open source models created a "Shadow AI" challenge. Departments began downloading models to run locally on laptops, bypassing cloud procurement controls. This made data mapping incredibly difficult, as personal data was no longer just flowing to a known vendor API, but being processed in decentralised, often unmonitored local instances.

New Insight: A clear design trend is emerging around data-minimising AI - systems that limit what personal or sensitive data they ingest. With the availability of capable open models, organisations are beginning to treat model training and retrieval with the same discipline as data minimisation obligations under privacy law - fine-tuning models on only necessary data rather than hoarding broad datasets.
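
To make this concrete, here is a minimal Python sketch of what a data-minimisation step before fine-tuning might look like. The record format, field names, and regex-based redaction are illustrative assumptions on our part - a production pipeline would use a dedicated PII-detection tool and a documented redaction policy.

```python
import re

# Illustrative assumption: training records arrive as dicts with free-text
# fields, and only "question" and "answer" are needed for fine-tuning.
REQUIRED_FIELDS = {"question", "answer"}

# Simple regex stand-ins for common personal-data patterns (emails and
# phone-like numbers); a real pipeline would use a proper PII detector.
PII_PATTERNS = [
    re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),  # email addresses
    re.compile(r"\+?\d[\d\s-]{7,}\d"),       # phone-like numbers
]

def minimise(record: dict) -> dict:
    """Keep only the fields needed for the task and redact PII within them."""
    kept = {k: record[k] for k in REQUIRED_FIELDS if k in record}
    for key, text in kept.items():
        for pattern in PII_PATTERNS:
            text = pattern.sub("[REDACTED]", text)
        kept[key] = text
    return kept

raw = {
    "question": "Can you email me at jane@example.com about my refund?",
    "answer": "Certainly, we will respond within three days.",
    "customer_id": "C-10234",  # dropped: not needed for fine-tuning
}
print(minimise(raw))
```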


2. Compute Power

Trend Overview: We predicted a demand for scalable, lower-cost, and energy-efficient AI solutions driven by innovations like NVIDIA’s Blackwell architecture. The expectation was that cutting-edge AI would become accessible to SMEs while aligning with sustainability goals, shifting the focus from raw power to operational efficiency.

How it Played Out: While hardware advances delivered massive performance gains, the software layer saw a clear strategic pivot. The “bigger is better” era effectively ended for most organisations. Instead, enterprises adopted a Hybrid Inference strategy - using large frontier models (such as GPT-5) for complex reasoning tasks, while routing routine or high-volume queries to cheaper, faster SLMs. This “mix-and-match” approach proved essential for managing the cost and energy demands of 2025’s compute-intensive reasoning models.
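
The routing logic behind a Hybrid Inference strategy can be surprisingly simple. The sketch below illustrates the idea in Python; the model names, cost figures, and keyword-based complexity heuristic are our own illustrative assumptions, not any vendor's API.

```python
from dataclasses import dataclass

@dataclass
class Model:
    name: str
    cost_per_1k_tokens: float  # illustrative relative cost units

# Illustrative tiers: a cheap local SLM and an expensive frontier model.
SLM = Model("local-slm", cost_per_1k_tokens=0.01)
FRONTIER = Model("frontier-reasoner", cost_per_1k_tokens=1.00)

def estimate_complexity(prompt: str) -> int:
    """Crude heuristic: count signals that a task needs multi-step reasoning."""
    signals = ("why", "plan", "compare", "analyse", "step", "risk")
    return sum(word in prompt.lower() for word in signals)

def route(prompt: str) -> Model:
    """Send routine queries to the SLM; escalate complex ones."""
    return FRONTIER if estimate_complexity(prompt) >= 2 else SLM

print(route("Summarise this email").name)                          # local-slm
print(route("Compare these vendors and plan the migration").name)  # frontier-reasoner
```

In practice the heuristic would be replaced by a trained router or a first-pass classification call, but the governance point stands: the routing rule itself becomes a control that determines cost, energy use, and which data reaches which model.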

At the same time, technology giants such as Microsoft, Google, Meta, and Amazon, alongside governments including the United States, China, and the UAE (e.g. Dubai), invested hundreds of billions of dollars globally in hyperscale AI infrastructure. These investments focused on AI-optimised data centres, advanced chips, power and cooling systems, and sovereign AI capabilities—positioning compute as both a strategic economic asset and a foundation for next-generation AI development.

The GRC Impact: This shift was strongly influenced by ESG (Environmental, Social, and Governance) reporting obligations. With expanded sustainability disclosure requirements emerging across the EU and Asia, DPOs and Risk Officers increasingly found themselves assessing and auditing the carbon footprint of their AI architectures, including model selection, inference patterns, and reliance on hyperscale cloud services. As a result, “right-sizing” AI models became not only a cost consideration but a compliance and governance requirement aligned with corporate sustainability targets.

New Insight: A new efficiency–compliance hybrid mindset has emerged, where organisations select model size based on business value, risk exposure, and environmental impact, rather than affordability alone. For SMEs, this typically means leading with efficient edge or SLM deployments and escalating only the most complex tasks to large frontier models—effectively managing an internal “AI carbon budget” and shifting the focus from raw affordability to deployment efficiency and governance discipline.


3. Context Window

Trend Overview: We anticipated that advancements in AI’s context window capabilities would redefine workplace collaboration. The trend toward multi-session or even infinite memory would allow AI systems to retain context across multiple interactions, enabling seamless teamwork and the integration of complex datasets for better project continuity.

How it Played Out: 2025 was the year "Context" became "Memory." Llama 4’s Scout variant shattered records with a 10 million token context window, effectively allowing AI to hold an entire corporate archive in working memory. Similarly, GPT-5’s "infinite memory" features allowed assistants to remember project history across months of work.

The GRC Impact: This utility created a massive headache for privacy professionals. If an AI "remembers" everything, how do you exercise the Right to be Forgotten? When a customer asks for their data to be deleted, it is no longer just a row in a database; it is embedded in the contextual memory of the AI agent.

New Insight: By late 2025, leading organisations began treating AI memory as a governance topic rather than a purely technical feature. Context Governance is emerging as a best practice, ensuring that AI retention behaviours align with existing data-retention and data-minimisation policies. This has driven the adoption of “Context Shredding” protocols—automated workflows that sanitise or reset AI memory at the end of a session, project, or workflow to prevent the accumulation of personal, sensitive, or toxic data over time. Importantly, this practice should be aligned with the AI system Retirement phase under ISO/IEC 5338, where organisations are recognising that decommissioning an AI application must also address associated conversation logs, embedded memory, and derived data. Retiring an AI system without managing its retained context and interaction history now presents both privacy and compliance risks, particularly where personal data may persist beyond its intended purpose.
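
As an illustration of what a Context Shredding protocol might look like at the application layer, here is a minimal Python sketch. The class, retention period, and audit count are illustrative assumptions; a real deployment would also need to address vendor-side memory and derived data such as embeddings.

```python
from datetime import datetime, timedelta, timezone

class SessionMemory:
    """Illustrative in-memory store for an assistant's conversation context."""

    def __init__(self, retention: timedelta = timedelta(days=30)):
        self.retention = retention
        self.turns: list[dict] = []

    def add_turn(self, role: str, content: str) -> None:
        self.turns.append({"role": role, "content": content,
                           "at": datetime.now(timezone.utc)})

    def expire_old(self) -> None:
        """Drop turns older than the retention policy allows."""
        cutoff = datetime.now(timezone.utc) - self.retention
        self.turns = [t for t in self.turns if t["at"] >= cutoff]

    def shred(self) -> int:
        """End-of-session shredding: wipe all turns and return the count
        so the deletion can be recorded in an audit log."""
        removed = len(self.turns)
        self.turns.clear()
        return removed

memory = SessionMemory()
memory.add_turn("user", "My order number is 4451, please check the status.")
memory.add_turn("assistant", "Order 4451 ships tomorrow.")
print(f"Shredded {memory.shred()} turns at session end")
```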


4. Chain-of-Thought

Trend Overview: We forecast that Chain-of-Thought (CoT) reasoning would gain prominence as a critical workplace capability. Rather than just generating text, AI would support problem-structuring, option analysis, and scenario exploration, moving AI up the value chain into strategy and risk support.

How it Played Out: The defining feature of the 2025 frontier models—GPT-5, Gemini 3.0, and DeepSeek R1—was deliberate reasoning analogous to Kahneman's "System 2" thinking. These models pause to "reason" before responding. This transformed AI from a content generator into a problem solver capable of structuring complex options analysis for strategy, risk, and compliance teams. An OpenRouter study on the State of AI showed that reasoning models accounted for more than 50% of production traffic, with particularly strong growth in Agentic Inference—where AI systems plan, sequence, and act. AI was no longer just a writer; it had become a planner and a coder.

The GRC Impact: "Black box" reasoning creates liability. If an AI recommends a strategy that leads to a regulatory breach, the organisation must prove why that decision was made. The EU AI Act’s transparency obligations and traceability requirements have forced vendors and deployers to make AI reasoning more inspectable and defensible.

New Insight: By late 2025, Chain-of-Thought was no longer viewed solely as a model capability, but increasingly as a design pattern for agentic systems. Organisations began encoding structured reasoning steps directly into agent system prompts—explicit task sequences, decision checkpoints, validation rules, and escalation conditions. This shift enabled what many now refer to as Auditable Logic. Alongside this, interest grew in Chain-of-Thought Redaction techniques, allowing auditors and DPOs to review an agent’s reasoning pathways for bias, error, or hallucination without exposing raw internal logic to end users. In practice, CoT became not just a way for AI to think, but a way for organisations to govern how AI acts.
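
The sketch below illustrates both ideas under stated assumptions: a system prompt that encodes explicit task steps, a validation rule, and an escalation condition, plus a simple form of Chain-of-Thought redaction in which the reasoning trace is logged for auditors while only the final answer reaches the end user. The prompt wording, JSON format, and simulated model output are hypothetical.

```python
import json

# Illustrative system prompt encoding explicit steps, a validation rule,
# and an escalation condition, as described in the paragraph above.
SYSTEM_PROMPT = """\
You are a compliance-support agent. For every request:
1. Restate the task in one sentence.
2. List the options considered, with pros and cons.
3. Check each option against the retention policy before recommending it.
4. If any option involves transferring personal data offshore, stop and
   escalate to a human reviewer instead of answering.
Return JSON with fields: "reasoning" (steps 1-3) and "answer".
"""

def present(raw_model_output: str) -> str:
    """Log the full reasoning trace for auditors; show only the answer to
    the end user - a minimal form of Chain-of-Thought redaction."""
    parsed = json.loads(raw_model_output)
    with open("agent_audit.log", "a") as log:
        log.write(json.dumps(parsed["reasoning"]) + "\n")
    return parsed["answer"]

# Simulated model output, since no live model is called in this sketch.
simulated = json.dumps({
    "reasoning": ["Task: choose a CRM archive policy.",
                  "Options: 12-month vs 24-month retention.",
                  "12-month retention satisfies the policy check."],
    "answer": "Recommend 12-month retention for archived CRM records.",
})
print(present(simulated))
```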


5. Customisation

Trend Overview: The trend pointed toward the rise of highly customised AI Agents tailored to specific business functions (HR, Finance, Sales) and the emergence of the Chief AI Officer (CAIO) role. We predicted that organisations would move beyond generic chatbots to specialised agents that could orchestrate workflows and tools.

How it Played Out: The buzzword of 2025 was "Agents." Models were explicitly designed to function as agents - autonomously clicking, scrolling, and executing workflows across software tools. Functions such as HR, finance, and sales moved from "chatting" with AI to assigning it jobs. Concurrently, the CAIO role graduated from a niche title to a critical C-suite position accountable for the strategy and risk of these agent fleets.

The GRC Impact: This moved the risk profile from informational to operational. An AI that writes a bad email is annoying; an AI Agent that autonomously executes a bad trade or deletes a database is catastrophic. DPOs are now working closely with IT to define "Agent Permissions"—applying the Principle of Least Privilege to AI agents just as they would to human employees.

New Insight: Alongside MLOps, a new practice area called AgentOps is taking shape. This involves the monitoring, testing, and assurance of autonomous agents. For GRC professionals, this means approving "Agent Charters"—documents that strictly define what an AI agent is allowed (and forbidden) to do.
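
Here is a minimal sketch of how an Agent Charter might be enforced in code: an allow-list of tools per agent, with every tool call gated through it so that anything unlisted is denied by default, applying the Principle of Least Privilege described above. The agent names, tool names, and charter format are illustrative assumptions.

```python
# Illustrative "Agent Charter": the tools each agent may invoke, expressed
# as an allow-list so anything unlisted is denied by default.
AGENT_CHARTER = {
    "hr-assistant": {"read_policy_docs", "draft_email"},
    "finance-assistant": {"read_invoices", "draft_report"},
}

class PermissionDenied(Exception):
    pass

def invoke_tool(agent: str, tool: str, *args):
    """Gate every tool call through the charter before execution."""
    allowed = AGENT_CHARTER.get(agent, set())
    if tool not in allowed:
        raise PermissionDenied(f"{agent} is not permitted to call {tool}")
    print(f"{agent} -> {tool}{args}")  # stand-in for the real tool call

invoke_tool("hr-assistant", "draft_email", "Welcome aboard!")
try:
    invoke_tool("hr-assistant", "delete_records", "employee_db")
except PermissionDenied as err:
    print(f"Blocked: {err}")
```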


6. Control

Trend Overview: Finally, we predicted that Governance would accelerate under regulatory pressure. As AI systems became more autonomous, organisations would need to implement rigorous internal controls, real-time monitoring, and bias assessments to align with frameworks like the EU AI Act and corporate risk appetite.

How it Played Out: As AI systems became more autonomous, the "human in the loop" became harder to maintain. This coincided with the ramp-up of the EU AI Act, which fully applies from August 2026. Organisations began extending internal controls to cover not just model output, but the reasoning steps and agent actions taken by their systems.

The GRC Impact: Control is no longer a post-hoc audit; it is real-time. We are seeing the deployment of "Guardrail Models" - small, fast AI models whose only job is to monitor the output of the larger models for compliance breaches, PII leakage, or toxic content. This should be complemented by an overseeing CAIO or AI Governance Officer, well-versed in the AI system life cycle processes of ISO/IEC 5338, who exercises final human discretion over AI-generated outputs.
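
A guardrail check can be sketched as a gate between the large model's output and the end user. In the minimal example below, a fast regex-based PII scan stands in for a dedicated guardrail model, and flagged outputs are withheld for human review; the patterns and messages are illustrative assumptions only.

```python
import re

# Stand-in for a small guardrail model: a fast PII check on outbound text.
# A real deployment would call a dedicated classifier, not regexes alone.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "NRIC-like ID": re.compile(r"\b[STFG]\d{7}[A-Z]\b"),  # Singapore-style ID
}

def guardrail_check(text: str) -> list[str]:
    """Return the list of potential PII leaks found in a model's output."""
    return [label for label, pattern in PII_PATTERNS.items()
            if pattern.search(text)]

def release(model_output: str) -> str:
    """Block and escalate flagged outputs; release clean ones."""
    findings = guardrail_check(model_output)
    if findings:
        return f"WITHHELD for human review (flags: {', '.join(findings)})"
    return model_output

print(release("Your request has been approved."))
print(release("Contact the customer at jane@example.com (ID S1234567A)."))
```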

New Insight: 2025 saw internal audit functions formally bringing AI under their remit. Rather than treating AI as purely technical, audit teams, often led by the DPO, are updating charters to include model governance and data lineage. AI control is becoming a standard part of enterprise risk management, not a side project for technologists.


Conclusion: The Year of the "Governed" AI

As we close the chapter on 2025, the narrative is clear: The "Industrialisation" of AI has arrived, but it did not come in the form of a wild, unchecked explosion of intelligence. Instead, it arrived with a demand for structure.

The release of powerful models like GPT-5 and Gemini 3.0, alongside the democratisation of Llama 4 and DeepSeek, gave organisations the capability to transform. However, it also means that the GRC function and forward-thinking DPOs must now grapple with a fragmented landscape where data doesn't just sit in a database - it fuels decision-making in reasoning tools that operate alongside us. The challenge has shifted from simply securing static records to governing a decentralised fleet of autonomous agents that operate across a complex mix of proprietary clouds and local devices.

The lesson of the "6Cs" in 2025 is that Collection must be minimised, Compute must be efficient, Context must be governed, Chain-of-Thought reasoning must be auditable, Customisation must be secure, and Control must be enforced.

For the DPO, the transition is complete. You are no longer just the guardian of privacy; you are the architect of trust in the age of the reasoning engine. As we look toward 2026, the question is no longer what AI can do, but how we can trust it.


This article was originally published on 16 December 2025 in the Governance Age.


