Is it a good idea to integrate AI at your office?

2023-07-03

By Kevin Shepherdson 


Integrating a company’s data with generative AI services such as ChatGPT involves sharing that data with external providers, which can introduce security and privacy risks.

The level of privacy and security depends on several factors, such as the generative AI provider's reputation and trustworthiness, its data handling policies, and its purpose of processing (for example, whether the data is used for personalised advertising).

It also depends on compliance with regulations such as the PDPA (Personal Data Protection Act in Singapore) or the GDPR (General Data Protection Regulation in the European Union), among others.

Do your due diligence

Before jumping on the ChatGPT bandwagon, companies should do their due diligence on any type of generative AI they plan to adopt and understand the provider's data policies.

In the case of ChatGPT, OpenAI has drawn a clear line between its API services and its non-API consumer version.

An API (Application Programming Interface) is a set of rules and protocols that enable different software applications to communicate and share data with each other. It acts as an intermediary, allowing developers to access the functionality and features of another service or platform, in this case ChatGPT, without having to build that functionality from scratch themselves.

For the API, OpenAI clearly states that any data entered is not used to train or improve its models.
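To make this concrete, here is a minimal sketch of what a ChatGPT API call looks like from a developer's point of view, using OpenAI's Python library as it stood at the time of writing; the model name and prompt are illustrative only:

```python
# Minimal sketch of a ChatGPT API call (openai Python package, pre-1.0
# interface). The prompt and model name are illustrative only.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]  # never hard-code API keys

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "user", "content": "Summarise this quarter's sales memo."},
    ],
)

print(response["choices"][0]["message"]["content"])
```

Note that everything placed in `messages` is transmitted to OpenAI's servers, which is exactly why the provider's data handling terms matter.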

Organisations should carefully assess their risk tolerance and weigh the potential benefits of AI integration against the privacy and security risks associated with sharing their data. A corporate policy should be in place that governs the use of generative AI wherever corporate data is concerned.

Sadly, many organisations are unaware of the privacy and security issues, and even the confidentiality issues, associated with sharing corporate data with generative AI apps. There is also the question of using the technology ethically and responsibly.

This is why our company, Straits Interactive, will be introducing courses to educate companies in this area of generative AI, ethics and data protection.

Our research team at Straits Interactive is currently analysing this new generation of generative AI applications flooding the market, including those on the Google Play Store and Apple App Store.

We have categorised the generative AI landscape into three categories:

1. Core Apps: These are the original developers and pioneers of generative AI technologies, such as OpenAI (ChatGPT) and DeepMind (AlphaGo).

2. Clones: These are entities that use APIs such as the ChatGPT API to build their applications and innovate around them. Examples include Replika, Copy.ai, and Jasper (formerly Jarvis).

3. Combination Apps: These are existing applications that incorporate generative AI to enhance their functionality or introduce new features. Examples include popular apps from Adobe, Google, Meta, Microsoft, ServiceNow, etc.

Find out how you can soon attain the region’s first Advanced Certificate in Generative AI, Ethics, and Data Protection, designed to enable a new generation of AI Business Professionals.

Be wary of the clones

The category that clients should most worry about is the "clone" apps.

These Clone Apps are developed by startups with varying levels of AI expertise. Some may have limited experience in AI, relying solely on the power of generative AI APIs to build their applications.

Consequently, privacy and security might not be their priority, and they may lack the necessary competencies to implement robust privacy and security measures. This situation creates potential risks for organisations sharing corporate data with these Clone applications.

Clients should expect a range of data privacy safeguards from generative AI service providers to ensure data security and compliance with relevant data protection regulations. This includes all the relevant data protection principles, such as demonstrating accountability, getting consent, limiting the purpose, protecting the data, respecting cross-border rules, and limiting sharing with external parties.

It is important to point out that OpenAI's API terms of use have changed: OpenAI now states that it is no longer saving all data shared with the service. The implication is that anyone who interacted with it before March 2023 may have had their data used.

Regarding Azure OpenAI: at Straits Interactive, we are building an AI service for our clients. Initially, we were concerned about the privacy and confidentiality of our data.

According to the Azure OpenAI terms of use, data is stored as part of Azure’s storage subscription rules and kept isolated. While conversations are retained for 30 days, they are used only for issue resolution and to check for misuse. Developers can opt out of such storage if they provide Microsoft with sufficient justification for their use cases.
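For illustration, here is a sketch of how the same Python library is pointed at an Azure OpenAI deployment rather than OpenAI's public endpoint; the resource name, deployment name and API version below are placeholders for values from your own Azure subscription:

```python
# Sketch: routing requests to an Azure OpenAI deployment instead of
# OpenAI's public endpoint (openai Python package, pre-1.0 interface).
# Resource name, deployment name and API version are placeholders.
import os
import openai

openai.api_type = "azure"
openai.api_base = "https://YOUR-RESOURCE-NAME.openai.azure.com/"
openai.api_version = "2023-05-15"
openai.api_key = os.environ["AZURE_OPENAI_API_KEY"]

response = openai.ChatCompletion.create(
    engine="YOUR-DEPLOYMENT-NAME",  # Azure uses deployment names, not model names
    messages=[{"role": "user", "content": "Test message"}],
)
print(response["choices"][0]["message"]["content"])
```

The practical difference is that requests go to an endpoint inside the customer's own Azure subscription, which is what underpins the isolation and retention terms described above.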

Look at data through an ethical lens and learn how to manage large streams of data by taking our Data Ethics and AI Governance Frameworks course.

Review privacy policies and terms of use

As mentioned earlier, the main concern is the clone apps. Organisations should thoroughly review the privacy policies and terms of use of these apps. As an example, we found over 100 mobile apps in the Google Play Store that use the ChatGPT API, all with "ChatGPT" in their names.

Many of these apps offer features that allow you to upload your corporate data, including business plans, policies, and spreadsheets, so that ChatGPT, as an example, can analyse or summarise them for you.

We have seen several recent media reports of senior management at brand-name companies inputting details of corporate strategies, financial projections and other confidential information into generative AI apps to generate reports or presentations, without understanding that such confidential company data may become publicly available.

Companies need to develop and implement policies about such data inputs so that staff understand what they may and must not do to protect corporate data.

Many software providers, including OpenAI with ChatGPT, offer their services through an API-based approach, enabling the development of Clone Apps offering generative AI features and functionalities. Even if the engine is not hosted on the customer's premises, there are still privacy and cybersecurity risks in using clone apps that host data in the cloud, as pointed out earlier.

These concerns arise from the fact that the vulnerability of a service depends on factors such as the security measures implemented by the API provider (in this case, OpenAI), the Clone App developers' data handling practices (which should be of greater concern), and the overall architecture of the integrated system.

As a reminder, many developers of Clone Apps are new startups or individuals riding the generative AI wave. In fact, with these APIs, anyone with programming knowledge can tap into these technologies to generate new content and features.

One significant concern is the potential privacy and confidentiality issues that arise when Clone Apps have full access to the information an individual or company shares with them. To generate synthetic content, the Clone App must send the input data to the relevant APIs.

This process grants the Clone App access to potentially sensitive information, which could be mishandled or misused or even made publicly available if the developers lack robust data protection measures.
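To illustrate why this matters, here is a deliberately simplified, hypothetical sketch of a Clone App backend; the function and variable names are ours, not taken from any real app. The point is that the user's document passes through the clone's own code in plain text before it ever reaches the generative AI API:

```python
# Hypothetical Clone App backend: every document a user uploads passes
# through the clone's own server before being forwarded to the AI API.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

def summarise_for_user(uploaded_document: str) -> str:
    # RISK: at this point the clone app holds the full document in plain
    # text. Nothing but the developer's own discipline prevents it from
    # being logged, stored or shared, e.g.:
    #   logging.info("user upload: %s", uploaded_document)
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "user", "content": f"Summarise:\n{uploaded_document}"},
        ],
    )
    return response["choices"][0]["message"]["content"]
```

Whether that intermediate copy is protected, retained or leaked depends entirely on the clone developer's practices, which is exactly why reviewing their privacy policies and terms of use matters so much.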

For access to news updates, blog articles, videos, events and free resources, please register for a complimentary DPEX Network community membership, and log in at dpexnetwork.org.


Kevin Shepherdson is the author of “99 Privacy Breaches to be Aware of”. He is the CEO and Founder of Straits Interactive.


