By Charmaine Tan
Artificial Intelligence (AI) is making decisions faster than most organisations can track. It screens job applicants, drafts legal briefs, flags fraud, and generates content at scale. But when something goes wrong — a biased outcome, a hallucinated fact, a data breach — we are confronted with a question that grows increasingly hard to ignore: who is accountable?
At the recent Temasek Lead Summit 2026 on 11th March at Temasek Polytechnic, Kevin Shepherdson, Founder and CEO of Straits Interactive, put it plainly: "You can delegate a task to a third party, but not the responsibility."
Regardless of the tools used, organisations bear ultimate responsibility: it is their names on the contracts and licences, and their names that regulators call upon when things go wrong.
This matters even more when AI is embedded in high-stakes decisions. Earlier this year in Singapore, two lawyers were each fined $5,000 after submitting AI-generated legal citations in court that turned out not to exist. The citations had not been properly verified before filing.
This is not an isolated case. As another Lead Summit panellist, Andy Prakash, cybersecurity expert and CEO of Privacy Ninja, pointed out, AI errors in professional settings carry real consequences. The system may have produced the content, but the human or organisation that deployed it without adequate oversight bears the accountability.
Here are some takeaways from the session, in which Kevin Shepherdson, Raju Chellam, and Andy Prakash took the stage:
1. AI Bilingualism and the Importance of Domain Knowledge
While Singapore’s government has made significant strides in promoting AI literacy, literacy alone is an insufficient benchmark. Individuals must also be equipped to implement safeguards and critically assess AI outputs.
Such individuals have been coined "AI bilingualists": specialists who move beyond AI literacy into capability, bringing human context and domain expertise to the tools they use. They can identify and mitigate potential risks while applying AI within their workflows. Rather than replacing human decision-making, these professionals draw on their sector's knowledge to augment their work with AI while maintaining oversight. Ultimately, human judgment and reasoning remain the deciding factor.
2. Data Protection in the Age of AI
Another issue associated with AI work processes is data protection, or the lack thereof. AI systems produce outputs by ingesting, learning from, and retaining patterns in the data provided to them.
At the session, Mr Raju Chellam, Chief Editor of the AI Ethics & Governance Body of Knowledge, warned that when we use AI, our data is constantly being used to train AI models, and bad actors can weaponise this to steal identities and launch highly targeted scams. As such, organisations must introduce governance measures to combat these risks.
3. Moving forward: The AI Factory
Shepherdson introduced the concept of the AI Factory as one way forward. AI can streamline workflows, but a layer of governance and ethics built in is vital. The difference between ‘using AI’ and having an ‘AI Factory’ mirrors that of buying a loaf of bread and owning the bakery. Most organisations are currently just consumers of AI. To truly transform, they must become manufacturers of their own intelligence.
An AI Factory is a replicable, structured system that converts an organisation's unique knowledge, processes, and intellectual property into secure, scalable AI applications. It moves AI out of silos and into a centralised, governed ecosystem. This is not just about software; it is about building true AI capability. The approach combines no-code tools with structured training and hands-on advisory, ensuring that even a non-technical workforce (which makes up roughly 80% of most companies) can build and manage AI safely.
The leaders who will navigate AI responsibly are not the ones who use the most tools. They are the ones who treat their AI capability the way good manufacturers treat their production line: with clear standards, consistent oversight, and full ownership of what comes off it. The era of passive AI consumption is over. Building your own AI factory is not just a competitive move. It is how accountability gets put back where it belongs.