Insight: The double-edged sword that is agentic AI

2025-09-19

By Aris Riza Noor Baharin / The Edge Malaysia


This article first appeared in Digital Edge, The Edge Malaysia Weekly, on September 8-14, 2025

Agentic artificial intelligence (AI) is fundamentally reshaping the cyberthreat landscape by automating what were once highly manual, resource-intensive attacks. Instead of relying on human operators, AI-driven agents can self-direct, adapt in real time and scale across multiple targets simultaneously.

This adaptability and increased agility are proving to be both a gift and a curse in cybersecurity, providing good actors with better tools to handle increased threats, while giving the bad actors new tools to exploit security weaknesses.

“Agentic AI, with its autonomous decision-making capabilities, represents a double-edged sword in cybersecurity. Enhancing threat detection and response while introducing new vulnerabilities, it can automate complex tasks, but this ability to learn and adapt could amplify attack vectors, making traditional defences obsolete,” says Ramesh Songukrishnasamy, senior vice-president and chief technology officer of HID Global Corp, a US-based manufacturer of secure identity products.

The dangers of agentic AI are becoming more prevalent as the technology sees rapid adoption. According to Blue Prism’s Global Enterprise AI Survey 2025 of 1,650 companies across the globe, 29% are already leveraging agentic AI tools, while a further 44% of participants plan to implement agentic tools by 2026.

Ramesh stresses the potential for AI agents to exploit traditional security protocols, as they are capable of rapidly guessing passwords through brute-force attacks enhanced by machine learning, or intercepting one-time passwords.

He explains that, by leveraging their speed, AI agents can quickly outpace human oversight. New forms of identity spoofing might include AI-generated deepfakes for biometric bypass or adaptive credential stuffing, where agents learn and adapt from failed attempts in real time.

Jane Teh, chief AI security officer at VCI Global Ltd, a Nasdaq-listed business and technology consultancy services firm, similarly urges caution, given the speed at which agentic AI is being adopted.

“For clients in Asean, [agentic AI] means that traditional risk models, which are based on predictable attacker behaviour and human error margins, no longer fully apply,” says Teh.

“AI agents can sift through massive datasets, exploit regional language differences and customise social engineering payloads with a precision that significantly outpaces conventional phishing or malware campaigns.”

To illustrate this, Teh describes a hypothetical AI agent attack on a financial services firm in Malaysia. She says that these bad actors would start with reconnaissance, using agents to scrape through LinkedIn profiles and press releases to map out organisational hierarchies.

From this, the agents can then generate phishing emails (correspondence designed to deceive and steal personal information) or create deepfakes to impersonate senior executives to gain access to an institution’s system.


“Once inside, the AI dynamically scans the environment, exploiting misconfigured APIs (application programming interfaces) to escalate privileges, all while continuously rewriting its tactics based on the organisation’s defensive responses,” says Teh.

Because of this continuous ability to shift and adapt its attack angle, Teh warns that attitudes towards cyberattacks need to shift from asking if something looks malicious to asking if the activity of a user or device makes sense. Continuous monitoring and screening are needed to catch any subtle discrepancies in network traffic that generative AI (Gen AI) attacks fail to mask.

On the flipside, agentic AI also presents significant opportunities for businesses to streamline and enhance their own cybersecurity efforts, as long as the appropriate safeguards are in place.

Kew Yoke Ling, the executive director of Singaporean AI company KewMann, says that the power agentic AI provides is shifting cybersecurity from reaction to anticipation.

“Instead of waiting for a breach, the agent constantly scans for early-warning indicators like repeated login failures, abnormal file transfers or shadow IT usage. Once detected, it doesn’t just alert; it acts by restricting access or applying patches. That’s the cybersecurity equivalent of saving a customer before they churn,” says Kew.

For Kew, agentic AI acts as an intelligent partner that adapts to different lines of defence, depending on the company or institution’s needs. Its strength lies not just in speed, but also in its ability to distinguish signal from noise, filtering out routine anomalies and only focusing on genuine risks.

This sidesteps a long-standing issue with traditional, manual cybersecurity handling: previously, cybersecurity teams had to react to every alert, whereas agentic AI promises that only the alerts that matter will be attended to.
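As a rough illustration of this detect-and-respond loop, the logic might look something like the sketch below. The indicator names, thresholds and responses are hypothetical assumptions used only to show the idea of filtering noise and acting on genuine risks; they do not describe any vendor's actual product.

```python
# Illustrative sketch only: indicator names, thresholds and responses are assumed,
# not taken from any product described in the article.
from dataclasses import dataclass


@dataclass
class Signal:
    user: str
    indicator: str   # e.g. "failed_login", "bulk_file_transfer", "shadow_it_app"
    count: int


# Hypothetical severity thresholds: below them, the event is treated as routine noise.
THRESHOLDS = {"failed_login": 10, "bulk_file_transfer": 3, "shadow_it_app": 1}


def respond(signal: Signal) -> str:
    """Escalate only genuine risks; ignore routine anomalies."""
    limit = THRESHOLDS.get(signal.indicator)
    if limit is None or signal.count < limit:
        return "ignore"                                   # routine noise, no analyst time spent
    if signal.indicator == "failed_login":
        return f"restrict access for {signal.user}"        # act, don't just alert
    return f"alert analyst: {signal.user} / {signal.indicator}"


if __name__ == "__main__":
    for s in [Signal("alice", "failed_login", 14), Signal("bob", "bulk_file_transfer", 1)]:
        print(respond(s))
```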

Alvin Toh, chief marketing officer of Straits Interactive Pte Ltd, adds that agentic AI enables continuous monitoring of possible attack points, third-party risks and data flows, providing 24/7 insights. It can also help in keeping stakeholders informed through a bot auditing process and automated reports.

Toh highlights that the accessibility of AI technologies means that agentic AI can also help in levelling the playing field, providing tiers of cybersecurity previously only accessible to major companies that could afford large cybersecurity teams.

AI agents can do this through behavioural segmentation and targeting. When the AI builds a baseline of “normal” behaviour across systems, anything that deviates from this baseline — whether too fast, too large or too unusual — triggers an alert.
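One way to picture this baseline-and-deviation logic is a simple statistical check: learn a metric's normal range from history, then flag anything that falls too far outside it. The metric and the three-sigma threshold below are illustrative assumptions, not a description of how any particular system works.

```python
# Hypothetical baseline check: the metric and the 3-sigma threshold are assumptions
# used only to illustrate "deviation from normal triggers an alert".
from statistics import mean, stdev


def build_baseline(history: list[float]) -> tuple[float, float]:
    """Learn what 'normal' looks like for one metric (e.g. MB transferred per hour)."""
    return mean(history), stdev(history)


def is_anomalous(value: float, baseline: tuple[float, float], sigmas: float = 3.0) -> bool:
    """Anything too fast, too large or too unusual relative to the baseline is flagged."""
    mu, sd = baseline
    return sd > 0 and abs(value - mu) > sigmas * sd


if __name__ == "__main__":
    transfers_mb = [12.0, 15.0, 11.0, 14.0, 13.0, 16.0, 12.5]   # past hourly volumes
    baseline = build_baseline(transfers_mb)
    print(is_anomalous(13.5, baseline))    # False: within the normal range
    print(is_anomalous(480.0, baseline))   # True: abnormal bulk transfer, raise an alert
```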

Kew adds that one unique advantage of agentic AI is its ability to learn from every attempted attack, even near-miss attacks, which creates a stronger defence for the future.

“This creates a continuous cycle of improvement that human-only systems cannot achieve. Ultimately, this is not just about keeping operations safe; it is about ensuring customers can trust that their data remains secure and their confidence in the institution is never shaken,” Kew explains.

This is what makes agentic AI powerful in cybersecurity. Every phishing wave, malware mutation or insider threat attempt feeds the model, and what emerges is a defence system that doesn’t just react, but adapts.

This agility can help organisations keep pace with attackers rather than always playing catch-up. But this, like all AI, is only as strong as the data it is given. Ensuring that the system is powered by relevant data sources is vital.

However, even agentic AI created by corporations is not immune to risk. Because these agents operate independently, they may have access to sensitive data, creating the potential for misuse.

IBM Malaysia chief technology officer Eddy Liew says that enterprises jumping on the agentic AI bandwagon must take equal responsibility for auditing, red-teaming and governing their AI agents with the same discipline applied to other mission-critical systems.

Citing an IBM study, Liew says that only 24% of current Gen AI projects include a component to secure the initiative, even though 82% of respondents say secure and trustworthy AI is essential.

“This means conducting regular audits for transparency and compliance, stress-testing agents against bias, misuse and security threats, and embedding strong governance frameworks that ensure accountability, oversight and alignment,” says Liew.

Having proper AI security means using an enterprise-grade solution that provides visibility and control over an organisation’s AI deployments, helping it discover, secure and govern AI models to prevent risks like data leaks and vulnerabilities.

“Most organisations lack governance to manage AI or detect shadow AI (unauthorised AI tools). IBM’s Cost of a Data Breach Report 2025 reveals that nearly two-thirds of organisations (63%) said they don’t have governance policies in place to manage AI or detect shadow AI,” says Liew.

Protecting against AI agents

The key to balancing agentic AI comes from designing AI systems that are fast and autonomous at execution, but transparent and accountable at oversight with a “human-in-the-loop” model, says Kew.

“This means embedding explainability and auditability into every decision, so customers, employees and regulators can see not just the ‘what’ but also the ‘why’ and the ‘how’.

“For cybersecurity, where decisions often involve blocking access or neutralising risks in real time, transparency reassures stakeholders that these actions are both justified and aligned with institutional ethics. Speed does not have to come at the cost of trust if accountability is built in from the start,” he says.

Teh concurs, noting that a robust defence starts with an Agent Gateway: a single control point through which every agent action must pass.


“By limiting tool access, redacting or restricting sensitive data, enforcing sandbox execution with egress allow lists, and requiring human approval (human-in-the-loop model) for risky actions, organisations can contain much of the risk. Every action should be logged to create visibility and accountability across the agent’s activity,” says Teh.
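A rough sketch of what such a gateway's checks could look like in code is shown below. The tool names, allow lists, redaction pattern and set of "risky" actions are assumptions made for illustration, not VCI Global's or any product's actual design.

```python
# Illustrative Agent Gateway sketch: the tool/egress allow lists, redaction pattern and
# the set of "risky" actions below are assumptions, not a real product's policy.
import logging
import re

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("agent_gateway")

ALLOWED_TOOLS = {"search_tickets", "summarise_report"}     # limit tool access
EGRESS_ALLOW_LIST = {"api.internal.example.com"}           # sandbox egress allow list
RISKY_ACTIONS = {"delete_records", "transfer_funds"}       # require human approval
SENSITIVE = re.compile(r"\b\d{12}\b")                      # e.g. card-like numbers


def gateway(action: str, target_host: str, payload: str, human_approved: bool = False) -> str:
    """Every agent action passes through this single control point and is logged."""
    log.info("agent action=%s host=%s", action, target_host)   # visibility and accountability
    if action not in ALLOWED_TOOLS | RISKY_ACTIONS:
        return "blocked: tool not on allow list"
    if target_host not in EGRESS_ALLOW_LIST:
        return "blocked: egress destination not allowed"
    if action in RISKY_ACTIONS and not human_approved:
        return "held: waiting for human-in-the-loop approval"
    return SENSITIVE.sub("[REDACTED]", payload)                 # redact sensitive data


if __name__ == "__main__":
    print(gateway("summarise_report", "api.internal.example.com", "Customer 123456789012 paid"))
    print(gateway("transfer_funds", "api.internal.example.com", "amount=100"))
```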

HID’s Ramesh further adds that security protocols must shift their mindset from static rules to adaptive, context-aware systems that incorporate AI for real-time threat analysis. By integrating behavioural biometrics and continuous authentication, protocols can detect anomalies in user interactions, flagging personalised phishing attempts generated by AI agents.

Ramesh points to HID’s identity orchestration platforms, which use “AI to enforce dynamic access policies, ensuring that even scaled, personalised threats are mitigated through layered defences and user education integrated into security workflows”.

Moreover, he stresses that identity-centric models like zero trust architecture (ZTA) are crucial, as they assume no inherent trust and verify every access request continuously, countering AI’s learning capabilities by enforcing granular policies based on user, device and context.

ZTA is often highlighted among cybersecurity experts. It is a security framework that requires all users and devices, including those on a private network, to be strictly authenticated and authorised to access any resource on the network.

“These models mitigate risks by segmenting networks and using adaptive authentication, making it harder for AI agents to propagate after initial breaches,” says Ramesh.
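To make the zero-trust idea concrete, the toy check below evaluates every request against user, device and context, with nothing trusted by default. The attributes, roles and segmentation rules are hypothetical examples, not a statement of how HID or any other vendor implements ZTA.

```python
# Toy zero-trust check: the attributes and policy rules are hypothetical and only
# illustrate "never trust, always verify" applied to every single request.
from dataclasses import dataclass


@dataclass
class Request:
    user_role: str         # who is asking
    device_managed: bool   # is the device known and compliant
    mfa_passed: bool       # has this session been re-authenticated
    resource: str          # what they want to reach
    network_segment: str   # where the request originates


def authorise(req: Request) -> bool:
    """No inherent trust: every request must satisfy all granular checks."""
    if not (req.device_managed and req.mfa_passed):
        return False                                      # unverified device or session
    if req.resource == "payments_db":
        # segmentation: only finance roles on the finance segment may reach this
        return req.user_role == "finance" and req.network_segment == "finance-vlan"
    return req.user_role in {"finance", "engineering"}


if __name__ == "__main__":
    inside_attacker = Request("engineering", True, True, "payments_db", "corp-wifi")
    print(authorise(inside_attacker))   # False: being "inside" the network is not enough
```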

Beyond implementing frameworks and models, a strong focus on enhancing an organisation’s defences against agentic AI lies in governance and guard rails.

Liew notes that IBM suggests a few key approaches companies need to adopt. The first is fortifying the identities of both man and machine, using AI and automation to improve identity and access management without overburdening understaffed security teams.

Furthermore, as AI agents begin to play bigger roles in organisations, their identities need the same level of protection as human identities if they are to work effectively in heightened-security environments.

The second is to elevate AI data security practices, as AI adoption is outpacing security protocols. Liew cites the IBM Cost of a Data Breach Report 2025, which found that 97% of organisations that experienced AI-related attacks lacked proper access controls on AI systems.

This ties directly to the third suggestion: beyond security practices, there needs to be proper AI governance. Investing in integrated security and governance software and processes can help organisations discover and govern shadow AI.

The last suggestion is utilising AI security tools. Bad actors are turning to AI for their attacks, and security teams need to adopt AI as well to level the playing field.

Toh laments the troubling trend of organisations failing to adopt proper governance guidelines in their rush to adopt AI. He notes that the common mindset is that data governance hampers innovation, but contends that it can instead be an enabler of sustainable growth.

“For companies looking to use AI to support workplace productivity or training, they should at least establish a data protection baseline with a clearly defined privacy policy aligned with the data protection regulations of their country.

“It is also important that a company has a data privacy-first culture among employees when experimenting with new technologies so that there’s a sense of vigilance around its potential dangers,” says Toh.

Ultimately, according to Teh, the most important mindset businesses and organisations need to have is to move away from the traditional “trust then verify” approach and into a continuous verification model.

This means enforcing multi-factor authentication, issuing short-lived access tokens and maintaining replayable logs.

“Coupled with real-time alerts and a kill switch for emergency shutdown, these steps create an environment where AI agents remain under continuous scrutiny and control,” says Teh.
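A small sketch of what continuous verification can look like in code appears below: tokens expire quickly, every action is logged so it can be replayed later, and a kill switch revokes everything at once. The five-minute lifetime, log format and kill-switch flag are illustrative assumptions rather than a specific vendor's mechanism.

```python
# Illustrative continuous-verification sketch: the 5-minute lifetime, log format and
# kill-switch flag are assumptions, not a specific vendor's mechanism.
import secrets
import time

TOKEN_TTL_SECONDS = 300                          # short-lived access tokens
_tokens: dict[str, float] = {}                   # token -> expiry timestamp
_audit_log: list[tuple[float, str, str]] = []    # replayable record of every action
_kill_switch = False                             # emergency shutdown for all agents


def issue_token(agent_id: str) -> str:
    token = secrets.token_urlsafe(16)
    _tokens[token] = time.time() + TOKEN_TTL_SECONDS
    _audit_log.append((time.time(), agent_id, "token_issued"))
    return token


def verify(token: str, agent_id: str, action: str) -> bool:
    """Re-check authorisation on every action instead of trusting a past login."""
    _audit_log.append((time.time(), agent_id, action))
    if _kill_switch:
        return False                             # emergency stop overrides everything
    return _tokens.get(token, 0.0) > time.time()


if __name__ == "__main__":
    t = issue_token("agent-7")
    print(verify(t, "agent-7", "read_report"))   # True while the token is still fresh
```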

Helping the vulnerable

The likeliest targets of agentic AI attacks are often those who are quick to digitalise, but slow to adapt to the risks.

Teh stresses that Asean’s diverse digital economy and the high rate of SME (small and medium enterprise) adoption of cloud services create fertile ground for AI-powered attacks that target weak links.

Ramesh adds that smaller companies in critical industries like healthcare may still run on legacy or outdated security protocols. These companies become prime targets for bad actors armed with AI agents, putting them at risk of operational disruption or ransomware attacks.

While frameworks like ZTA can address much of this vulnerability, implementation then becomes a matter of resources rather than solutions, leaving many SMEs exposed in this wave of agentic AI.

The path forward for SMEs, according to many experts, is to start small and form the right partnerships.

“SMEs should treat agentic AI the same way they treat cloud or AI adoption: start small, scale responsibly and always with security in mind. Cloud-native platforms already provide secure environments affordably, so SMEs can focus on choosing AI services with proven compliance certifications,” says Kew.

Kew further argues that by aligning with trusted vendors and gradually expanding usage, SMEs can gain enterprise-grade protection without enterprise-level budgets.

Liew adds that SMEs need to collaborate with partners that can offer modular, pay-as-you-go solutions, adding that some companies offer industry-specific solutions that embed secure architecture like ZTA and human-in-the-loop approvals.
