The AI Ethics Debate: Why The Boycott Of OpenAI Following Its Pentagon Deal Signals The Rising Cost Of Public Concern

2026-03-31

By Charmaine Tan


The debate about AI ethics reached a fever pitch after OpenAI’s $200 million deal with the Pentagon earlier this month (March 2026), the very same deal that Anthropic had turned down over concerns about how its AI would be used.

Anthropic had feared that its Claude AI models would be deployed to surveil U.S. citizens or operate autonomous weapons. Despite threats from U.S. government officials, including President Donald Trump, to blacklist Claude across all federal agencies, Anthropic rejected the deal. OpenAI seized the opportunity instead.

Although OpenAI’s co-founder and CEO, Sam Altman, later declared that the company upholds the same red lines as Anthropic, the damage was done. Caitlin Kalinowski, head of OpenAI’s robotics division, has since publicly resigned, citing concerns over mass surveillance.

The public called for a boycott of ChatGPT, amassing a movement of over 4 million supporters behind one primary goal: QuitGPT. Meanwhile, Claude dethroned ChatGPT as the most downloaded app in the U.S. Apple App Store.

The contrast in public support for these two AI companies reflects the increasing importance of consumer trust.

This is just the tip of the iceberg. The QuitGPT movement demonstrates that, as AI development and adoption accelerate, consumers increasingly want to know how the technology will be used and with whom LLM companies associate.

Potential ethical pitfalls of AI

Calls to boycott OpenAI continue, thrusting AI ethics and trust to the forefront of public discussion. Globally, trust in AI has fallen to 49%, according to the Edelman Trust Barometer; many consumers keep using AI while remaining deeply distrustful of it.

Aside from the widely discussed issues of discrimination, bias, and transparency, several other ethical concerns have arisen:

1. Automation bias: People who lack sufficient AI background knowledge may rely on automated outputs without fact-checking them. When the AI makes errors or hallucinates, humans who blindly trust its output risk spreading misinformation.

2. The “chosen blindness” of AI, as seen in the OpenAI saga: When a machine is allowed to operate weapons that choose between life and death, accountability becomes easy to escape. The same problem arises, to a lesser extent, whenever organisations shift the blame for decisions onto the AI that made them.

3. Data privacy and security: 90% of people do not trust AI with their data, Malwarebytes Labs reports. Some fear the authoritarian risks of AI surveillance or a lack of transparency about where their data is used. Open-source models, in particular, ship with fewer safeguards and are easily accessible to parties with ill intent, who can use them to spread disinformation or generate malware.

Free resources for organisations’ AI checks

Global policymakers are stepping in to ensure corporate AI adoption does not sideline ethics. In particular, the recently released ASEAN Responsible AI Roadmap offers a guide to ethical AI use, including the following:

1. Suggests the use of AI governance tools: These allow brands to fact-check AI outputs. For example, Singapore’s ‘AI Verify toolkit’ is listed as a tool that vets the reliability of outputs, and the UK boasts a similar tool, ‘Inspect’, that runs tests on Large Language Models’ knowledge, reasoning capabilities, and vulnerabilities (see the Inspect sketch after this list).

2. Recommends assessing and monitoring AI risks: Many countries are stepping up their frameworks, such as the European Union’s ‘Fundamental Rights Impact Assessment’, which documents an AI system’s negative impacts, and the United States’ ‘AI Risk Management Framework’, which evaluates an AI system’s validity, reliability, safety, and bias. The ‘ASEAN AI Risk Impact Assessment Template’ is also readily available for organisations to assess risks throughout the AI lifecycle and to track their AI systems’ performance and issues.

3. Calls for increased cybersecurity and data protection: Organisations can build Privacy Enhancing Technologies, such as data anonymisation processes or federated learning (training shared AI models without exchanging personal data), to mitigate AI-specific risks (a minimal federated learning sketch follows the Inspect example below).
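
To make the first recommendation concrete, here is a minimal sketch of an evaluation built with the UK’s open-source ‘Inspect’ framework, distributed as the Python package inspect_ai. The questions, the scorer choice, and the model name below are illustrative assumptions, not part of the roadmap:

```python
# A minimal Inspect evaluation sketch. Assumes `pip install inspect-ai` and a
# model API key; the dataset, scorer, and model below are illustrative only.
from inspect_ai import Task, task, eval
from inspect_ai.dataset import Sample
from inspect_ai.scorer import includes
from inspect_ai.solver import generate

@task
def factual_recall():
    # A toy question/answer dataset; real evaluations load far larger, curated
    # datasets to probe a model's knowledge, reasoning, or vulnerabilities.
    return Task(
        dataset=[
            Sample(input="In what year did the GDPR come into force?",
                   target="2018"),
            Sample(input="What does PET stand for in data protection?",
                   target="Privacy Enhancing Technology"),
        ],
        solver=generate(),  # ask the model directly, with no special prompting
        scorer=includes(),  # correct if the target string appears in the answer
    )

# Run the evaluation against a model of your choice, e.g.:
# eval(factual_recall(), model="openai/gpt-4o")
```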
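
And for the third recommendation, a short sketch of federated averaging, the basic federated learning algorithm: each site trains on its own data and shares only model weights with an aggregator. The synthetic data and the simple linear model are assumptions made for illustration:

```python
# A federated averaging (FedAvg) sketch in NumPy: three sites train a linear
# model on private, synthetic data; only learned weights leave each site.
import numpy as np

rng = np.random.default_rng(0)

def local_update(weights, X, y, lr=0.1, epochs=20):
    """One site's local training: plain gradient descent on mean squared error."""
    w = weights.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)  # gradient of MSE w.r.t. w
        w -= lr * grad
    return w

# Three sites, each holding private data drawn from the same underlying model.
true_w = np.array([2.0, -1.0])
sites = []
for _ in range(3):
    X = rng.normal(size=(50, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=50)
    sites.append((X, y))

# Federated rounds: broadcast the global weights, train locally, average results.
global_w = np.zeros(2)
for _ in range(5):
    local_ws = [local_update(global_w, X, y) for X, y in sites]
    global_w = np.mean(local_ws, axis=0)  # only parameters cross the boundary

print("learned:", global_w, "target:", true_w)
```

The privacy benefit comes from the aggregation step: the coordinator only ever sees averaged parameters, never the raw records held at each site.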

The prevailing ethics debate

The OpenAI saga shows that AI ethics is no longer optional for companies that want to retain public trust. Human safety, privacy, and authenticity are the backbone of what consumers seek when they use AI.

While cutting corners may translate into short-term profits, in the long run businesses may find themselves on the losing end. To counter falling public trust, organisations must increasingly make use of the resources available to them, such as government frameworks and tools, to navigate ethical issues.


As the public becomes increasingly concerned about how AI is being used, courses that examine corporate workflows through an ethical AI lens grow in importance. Courses such as Data Ethics and AI Governance Frameworks can help employees (and bosses) ensure they are ready to scale AI use, going beyond basic usage to cover both the ethical and legal aspects of AI.

Sources: Fortune, BBC, The Guardian, Reuters (Karen Brettell), QuitGPT, Business Insider, Edelman, Malwarebytes Labs, Lawfare, Reuters (A.J. Vicens), Palo Alto Networks, ASEAN Responsible AI Roadmap, AI Verify Foundation


