The Security Risks of Gen AI Chatbots

2024-04-30
banner

By Alvin Toh 


Generative AI chatbots have risen in prominence as businesses begin to harness conversational AI. They are used for a variety of purposes, such as streamlining customer service, automating tasks, and personalising user experiences. One company, Klarna, made the bold claim that their Gen AI chatbots are able to do the work of 700 customer service agents, serving customers 24/7 worldwide. These levels of productivity are unprecedented, and could be very compelling for organisations as they consider deploying chatbots. Nonetheless, there are also associated security risks which must also be considered.

Global Regulations on Generative AI

As global regulators begin to introduce policies to govern AI, businesses deploying chatbots must first navigate the data protection regulations to safeguard user privacy and avoid regulatory pitfalls. Various policies are being introduced globally such as the EU AI Act, which obliges high risk apps to be more transparent about data usage. Similarly, Singapore’s Model AI Governance Framework identifies 11 key governance dimensions around issues such as transparency, explainability, security, and accountability to safeguard consumer interests, while allowing space for innovation

The recent PDPC AI Guidelines in Singapore likewise encouraged businesses to be more transparent when seeking consent for personal data use, through disclosure and notifications. Businesses have to ensure that AI systems are trustworthy, which provides consumers with confidence over how their personal data is being used.

Internationally, the new ISO 42001 specifies requirements for establishing, implementing, maintaining, and continually improving Artificial Intelligence Management Systems within organisations.

New risks with Gen AI Chatbots

Modern Gen AI-enabled chatbots are able to profile individuals very quickly through a massive amount of historical user interactions and data inputs, enabling detailed profiles to be constructed. Personal information such as interests, preferences, gender and personal identifiers can be deduced from seemingly innocuous conversations with chatbots. This ability raises privacy and manipulation concerns and poses a real threat if the information falls into the wrong hands.

Through adversarial prompt techniques such as prompt injection (inserting malicious content to manipulate the AI's output), prompt leakage (unintentional disclosure of sensitive information in responses), and jailbreaking (tweaking prompts to bypass AI system restrictions), unauthorised access can be gained to sensitive information, including passwords, personally identifiable information (PII) and even training data sets.

Rogue chatbots are also a concern, where malicious actors deploy chatbots with the intention of extracting sensitive information from unsuspecting users. These rogue chatbots may impersonate legitimate entities or services to deceive users into disclosing confidential information.

In addition to data leakages, AI regulators and ethicists are also concerned about bias in AI, especially when deployed in recommendation or decision making systems. Generative AI systems are likely to have biases inherent in their training data or algorithms, which can result in unfair or discriminatory outcomes. It is essential for chatbot developers, as ‘Human- in and over-the-Loop’, to recognise and address these biases in their development


Already a member?  
Unlock these benefits
benefit

Get access to news, enforcement cases, events, and actionable tips and guides

benefit

Get regular email updates and offers

benefit

Job opportunities, mentorship and career guidance

benefit

Exclusive access to Data Protection community - ask questions, network and share knowledge with peers and experts via WhatsApp and Linkedin

Topics
Related Articles