AI clone wars: Ensuring data privacy and safety

2023-04-18

By Kevin Shepherdson 


The generative artificial intelligence (AI) landscape is populated by Core Apps, Clones, and Combination Apps.

In the age of the "Clone Wars," where numerous startups compete for investor attention and market share, organisations must exercise due diligence when sharing corporate data with generative AI providers, especially with clones that may have varying levels of privacy and security practices.

Many developers of clone apps are new startups or individuals riding the generative AI wave. Using the application programming interfaces (APIs) published by the pioneers of generative AI, such as OpenAI (ChatGPT), anyone with programming knowledge can tap into these technologies to generate new content and features.

One significant concern is the potential privacy and confidentiality issues that arise when clone apps have full access to the information an individual or company shares with them. In order to generate synthetic content, the clone app must send the input data to the relevant APIs.

This process grants the clone app access to potentially sensitive information, which could be mishandled or misused if the developers lack robust data protection measures.
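To make the concern concrete, here is a minimal sketch (in Python, not any real app's code) of how a clone app forwards a user's prompt to an upstream provider. The endpoint URL and payload shape are hypothetical placeholders; the point is that the raw prompt passes through the clone's own code, where nothing technically prevents it from being silently retained.

```python
# Illustrative sketch of a "clone" wrapper around an upstream generative AI
# API. The URL and payload shape below are hypothetical placeholders.

captured = []  # a careless or malicious clone could retain prompts like this

def build_upstream_request(user_prompt: str, api_key: str) -> dict:
    """Package the user's input for the upstream provider's API.

    The clone necessarily handles the full prompt in cleartext at this
    point, before it is ever sent upstream.
    """
    return {
        "url": "https://api.example-provider.com/v1/generate",  # hypothetical
        "headers": {"Authorization": f"Bearer {api_key}"},
        "json": {"prompt": user_prompt},
    }

def clone_handle(user_prompt: str) -> dict:
    captured.append(user_prompt)  # silent retention of user data
    return build_upstream_request(user_prompt, api_key="sk-placeholder")
```

Nothing in the app-store listing reveals whether a line like `captured.append(...)` exists; only the provider's actual practices, policies, and audits do.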

Before sharing corporate data with a generative AI provider, it is essential for organisations to thoroughly review the provider's privacy policy and terms of use.

These documents outline the provider's data handling practices, data retention policies, data sharing agreements, and other critical aspects of their service. Users often overlook key areas in these documents that may have significant implications for their data's security and privacy.

This is Part Two of our series on generative AI apps; see also Part One and Part Three.

Look at data through an ethical lens and learn how to manage large streams of data by taking our Data Ethics and AI Governance Frameworks course.


Look out for the ‘privacy nutritional label’

Be cautious of what the generative AI provider declares in the "Data Safety" section on the Google Play Store or equivalent "Privacy Nutritional Label" on the Apple App Store.

In our research analysing the new generation of generative AI applications now flooding marketplaces such as the Google Play Store and Apple App Store, we found that many of these apps do not truthfully declare their data collection practices.

Although they claim to collect no personal data, we were bombarded with personalised ads soon after we began using them.

When examining the terms of use, it's essential to pay attention to language that grants the provider broad rights to your data (this includes your images, voice and video recordings as well).

For example, a statement like:

"For all user content that you submit to the Site, you hereby grant us (and those we work with) an assignable, sublicensable, royalty-free, fully paid-up, worldwide licence to use, exploit, host, store, transmit, reproduce, modify, create derivative works of, publish, publicly perform, publicly display, and distribute such content"

(Yes, this is found in quite a few apps we surveyed).

This statement gives the provider extensive rights to use, modify, and distribute your content without any restrictions or compensation. Such broad licensing terms may expose your data to potential misuse or unauthorised access, especially if the provider works with third parties that have lax security or questionable ethical and privacy practices.

Besides IP-related risks, pay attention to sections such as indemnification clauses, limitations of liability, changes to terms, and termination of service.

Learn how good data governance can not only help you protect data in your organisation, but derive even greater value from it, by taking the modules of the Advanced Certificate in Data Governance Systems.

Recommendations and tips for organisations

1. Evaluate reputation: Assess the provider's track record, including their history of security incidents and customer reviews

2. Review data handling policies: Investigate how the provider processes, stores, and secures your data

3. Verify regulatory compliance: Ensure the provider adheres to relevant data protection regulations, such as the PDPA, GDPR or CCPA

4. Check security measures: Confirm the provider employs robust security measures, such as encryption and access controls

5. Assess the startup's focus on privacy and security: Determine whether the startup has dedicated resources for privacy and security, such as a privacy officer or security team

6. Investigate the startup's AI expertise: Examine the background and qualifications of the startup's team members to understand their level of AI expertise

7. Limit data sharing: Share only the minimum necessary data with the provider, to reduce potential exposure. Additionally, establish a corporate policy governing the use of generative AI
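The data-minimisation tip above can be partially automated. Below is a minimal sketch that strips obvious personal identifiers from text before it leaves the organisation. The regular expressions are illustrative only; a real deployment would use a dedicated PII-detection tool rather than these two patterns.

```python
import re

# Illustrative patterns only -- real PII detection needs a proper tool.
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
PHONE = re.compile(r"\+?\d[\d\s-]{7,}\d")

def redact(text: str) -> str:
    """Replace email addresses and phone numbers with placeholders."""
    text = EMAIL.sub("[EMAIL]", text)
    text = PHONE.sub("[PHONE]", text)
    return text

print(redact("Contact Jane at jane.tan@acme.com or +65 9123 4567."))
# → Contact Jane at [EMAIL] or [PHONE].
```

Running prompts through a filter like this before they reach any third-party API limits what a provider, or its sub-processors, can mishandle.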

For access to news updates, blog articles, videos, events and free resources, please register for a complimentary DPEX Network community membership, and log in at dpexnetwork.org.



