By Kevin Shepherdson, Founder & CEO of Straits Interactive and EXIN Global Ambassador, Alvin Toh, Co-founder of Straits Interactive, Celine Chew, Head of Learning & Development at Straits Interactive, Harish Pillay, AI Governance Adviser at Straits Interactive
Every year, Straits Interactive publishes our analysis and predictions for the coming year. In preparing our 2026 outlook, I discussed emerging patterns with my fellow Co-founder of Straits Interactive, Alvin Toh, drawing from client projects, executive workshops, and conversations with business leaders across the region. Based on these observations and emerging global research, we have identified five key trends in generative AI that will shape business operations in 2026. Organisations, especially SMEs and their leaders, should take note as they plan ahead. Together, these trends signal not just a technological shift, but a deeper transformation in operating models, workforce design, budgeting, and governance.
1. Stepping Up From AI Literacy to Holistic AI Capability
AI literacy - a basic understanding of, and familiarity with, generative AI tools - has become the norm. Simply knowing how to prompt or use generative AI tools is no longer enough. To create measurable value, organisations must now develop holistic AI capability: an integration of the right knowledge, skills, tools, processes, and leadership.
In a world where everyone uses the same tools in similar ways, the differentiator will be a company's ability to build internal intelligence - capturing institutional know-how and embedding it into workflows, copilots, and domain-specific agents. Several global banks and consulting firms, such as Standard Chartered and Deloitte, are already doing this, training internal GPT-style assistants on their policies, playbooks, and knowledge bases. The result is that scattered expertise becomes always-on operational intelligence, available to every employee.
This capability is equally critical for GRC (governance, risk management, and compliance) professionals to fully leverage AI's value while addressing its associated risks and constraints.
2. Growing Importance of AI Bilingualists and Digital Co-Workers in Operations
Agentic AI, though not yet widely adopted, points to a future of digital co-workers capable of managing complex, multi-step work - from preparing proposals and orchestrating workflows to triaging customer requests.
Simultaneously, the vast majority of business professionals are non-technical, yet they hold the domain expertise essential for successful AI outcomes. Many, however, are concerned about job displacement and may be reluctant to participate in AI projects. On a positive note, surveys show that 20-40% of employees already use AI tools at work, particularly in information-intensive roles.
This dynamic creates a surging demand for AI bilingualists - professionals who can bridge business needs with AI functionality by speaking the language of both business and AI. The term was coined by Mrs Josephine Teo, Minister for Digital Development and Information and Minister-in-charge of Smart Nation and Cybersecurity. AI bilingualists blend deep domain expertise with AI fluency, translating strategic goals into effective prompts, workflows, and guardrails, and interpreting AI outputs back into actionable decisions.
The coexistence of AI bilingualists and digital co-workers is crucial for productivity, workforce confidence, and ensuring AI projects are treated as business transformation initiatives, not just IT projects. Early adopters in professional services, finance, and healthcare are formalising roles like "AI Product Owner" or "AI Lead" within business units to oversee digital co-workers, guide responsible use, and ensure human judgment and accountability remain central.
Rather than replacing people, digital co-workers change what people do. They make business AI bilingualism the most valuable career skill of the next decade.
3. Internal AI Transformation Requires More Than IT Spend
As human-AI collaboration deepens, AI investment can no longer be confined to the IT budget - a mindset we still encounter from leaders who have yet to embark on AI transformation.
Global trends show AI is capturing a rapidly growing share of technology and digital budgets, with many organisations already committing a quarter or more of their IT or digital initiative spend to AI and planning further increases. Deloitte's 2025 Tech Value Survey finds that AI automation already captures a significant share of digital budgets, with many firms allocating between 21% and 50% of their digital initiative budgets to AI. Meanwhile, the latest EY US AI Pulse Survey reports that 27% of organisations investing in AI already commit a quarter or more of their IT budget to AI, and that this proportion is expected to rise sharply in the next budget cycle.
Forward-looking companies are beginning to pool budgets across IT, HR, and operations to fund AI as a total enterprise capability. Analyses of digital and AI transformation from Deloitte and McKinsey's State of AI: Global Survey 2025 emphasise that successful AI programmes involve coordinated investments across technology, process redesign, and workforce, not just IT line items. Some firms describe allocating a modest but rising share of revenue to AI initiatives across supply chain, customer service, talent acquisition, and marketing, treating AI as a socio-technical transformation rather than a series of isolated technology pilots.
This integrated approach involves reshaping roles, processes, and incentives around AI-first operations. The Chief Financial Officer (CFO), Chief Information Officer (CIO), and Chief Human Resources Officer (CHRO) must now have a unified conversation on how to balance headcount, digital labour, and technology spend to maximise outcomes in every function.
4. Automation vs. Augmentation: The Differentiator Between AI Implementation Failure and Success
An MIT report indicates that 95% of generative AI pilots fail to deliver measurable ROI, a finding echoed in feedback from our courses.
Beyond a lack of holistic enterprise-wide AI capability, a recurring cause of disappointment is the failure to distinguish between automation and augmentation. Automation - fully handing a process to AI - can create efficiency but also magnify security, bias, and compliance risks if deployed without transparency or human oversight. Augmentation, conversely, designs AI to extend human capability: suggesting options, surfacing insights, or drafting content while humans retain judgment and accountability.
While automation is often advocated by start-ups or organisations under shareholder pressure, those prioritising augmentation see faster adoption and higher trust. In sectors like healthcare, legal services, and finance, AI systems that support rather than replace professionals are delivering measurable productivity gains with fewer ethical pitfalls. The lesson is clear: make augmentation the default, and reserve full automation for well-understood, auditable workflows with clear safety nets.
5. Rising Breaches Drive Mandatory AI Governance as the EU AI Act Reaches Full Applicability
2025 was hyped as the "year of the agents." In hindsight, it was a year of experimentation for agentic AI.
As generative and agentic AI move from experimentation to enterprise infrastructure, the risk landscape expands. Security researchers warn that agentic systems with browsing or system access can be exploited through prompt injection or data exfiltration attacks. Data leakage, biased outputs, and self-amplifying errors are now recognised as operational risks, not just technical quirks.
We expect to see more AI breaches of an accidental, intentional, or malicious nature in 2026. After the global FoloToy Kumma Bear toy scandal, AI developers were cautioned to implement AI governance at all stages of the AI development lifecycle to safeguard against potential breaches and other regulatory infringements. Following allegations that Figma used user design files to train its AI features, corporate users were also warned to rethink the use of free, freemium, or in-platform embedded AI features that may pose significant risks to data privacy and intellectual property.
This growing risk exposure converges with tightening AI regulations. The EU AI Act, the world’s first comprehensive AI law, reaches full applicability on 2 August 2026, introducing legally binding rules for high-risk AI applications. Organisations deploying AI in employment, education, finance, and public services will need to demonstrate robust data governance, documented risk management, human oversight, and continuous monitoring.
In parallel, global standards like ISO/IEC 42001 (AI management systems) provide a blueprint for compliance and assurance. Organisations that invest early in structured AI governance will not only meet these obligations but also gain operational resilience and stakeholder trust at a time when breaches and regulatory scrutiny are accelerating.
When it comes to digital transformation, especially in the age of AI, we have long advocated that Data Protection Officers (DPOs) transition from compliance "show-stoppers" to business enablers - a shift that began during COVID-19. Today, they must become trusted advisors, providing enhanced AI guidance focused on enabling business value while mitigating privacy, security, and ethical risks.
Looking Ahead to 2026
2026 will be the year enterprises move decisively from pilot projects to industrial-scale AI operations. The organisations that prevail will be those that build end-to-end capability, cultivate AI bilingualists, fund AI as an enterprise-wide transformation, prioritise augmentation over ungoverned automation, and anchor their practices in robust governance frameworks.
This article was originally published on January 27, 2026 in The Governance Age.