
The privacy risks of generative AI: What can businesses do to avoid them?

Author: Dóra Szentkereszti

Publication date: 10.04.2024


More and more businesses are incorporating generative artificial intelligence (AI) into their products and services or are considering how to do so. The technology is ground-breaking and has the potential to boost productivity enormously. Still, it must be used cautiously, as it carries several risks, including significant privacy challenges.


In this article, we examine the challenges of applying well-known data protection principles, such as transparency, purpose limitation, access to personal data, and data security, to the use of open generative AI systems.



Transparency


Under the GDPR, detailed information must be provided to data subjects before or at the time of data collection: what the company intends to do with their data, the purposes of the processing, and what data will be shared, with whom, and why.

The challenge arises from the fact that businesses cannot provide prior information on what open generative AI platforms such as ChatGPT will do with the data. Once the data has been entered into a chatbot's neural network, it may be used to train the model, but it may also be used to build other products.


All of this is outside the business's control; in fact, the business has no visibility into what happens within that neural network. As a result, any notice the business provides will necessarily be inadequate, or too general, to comply with the relevant legislation.


Purpose limitation


According to the GDPR, the processing, use, disclosure, and retention of personal data must be necessary and proportionate to the purpose for which the data were collected, as communicated to the data subject in advance. As already discussed, any data entered into an open AI platform will be embedded in the chatbot and used in unpredictable ways. Where these rules apply, a business cannot comply with the requirement to restrict processing once the data has been fed into an open generative AI tool.


Under some regimes, notably US state privacy laws such as the CCPA, the firm may even have to treat this data sharing as a "sale" of data, which would entail further regulatory complications.


Access to personal data


Under the GDPR, data subjects have the right to access their data, to request the rectification of inaccurate data held by the company, and to exercise the right to be forgotten (erasure). Data subjects may also ask the company to restrict the processing of their data, to receive their data in a format that can be transferred to another service provider (portability), to object to certain processing of data concerning them, or not to be subject to automated decision-making.

There is no special rule for the processing of data by AI tools, and once personal data has been absorbed into a trained model it is technically difficult to locate, rectify, or erase. This means that a company using such tools may not be able to comply with every request a data subject makes.


Security of personal data


The GDPR requires a business to implement adequate security measures to protect the personal data it collects and processes, including when the processing is carried out by a service provider on its behalf. Several well-known open AI chatbots available today have already been found to have security problems, and it is therefore negligent for businesses to use them without a prior security assessment.

The business is also required to carry out a risk assessment to ensure that the risks to data subjects do not outweigh the benefits the business derives from the processing, and to enter into data processing agreements with the other controllers and processors that handle the personal data. In addition, there are several other requirements that businesses may find difficult to meet once they feed data into an open AI chatbot or allow the chatbot to process such data.

Caution should be exercised when using these technologies, as they can lead to legal risks if incorrectly assessed or used.


What can a business do to avoid the risks?


The first step, as with all data privacy issues, is to involve risk, legal, and governance experts, as well as the data protection officer, in an early assessment of the risks associated with data management before using any AI tools.

The second step is to put in place strong, consistently enforced procedures for the use of open AI tools, including specific restrictions on the types of data that employees and contractors may enter into such systems. The business should limit what personal data can be entered into an open AI tool, for example by screening prompts before they leave the company (a sketch of such a filter follows below). Where possible, closed systems should be used, that is, deployments that do not feed business data back into the neural network used to train the AI tool.
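To make the restriction concrete, here is a minimal sketch, in Python, of one way a business might gate outbound prompts: a regex-based redaction filter applied before anything is sent to an external chatbot. The patterns and names used here (redact_pii, submit_to_chatbot, and the placeholder call_external_chatbot) are illustrative assumptions, not a complete PII detector or any particular vendor's API.

import re

# Illustrative patterns only; a real deployment needs a vetted PII library.
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "phone": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
}

def redact_pii(prompt):
    """Replace matched identifiers with placeholders and report what was found."""
    findings = []
    for label, pattern in PII_PATTERNS.items():
        if pattern.search(prompt):
            findings.append(label)
            prompt = pattern.sub("[REDACTED %s]" % label.upper(), prompt)
    return prompt, findings

def submit_to_chatbot(prompt):
    """Gate every outbound prompt through the redaction step."""
    safe_prompt, findings = redact_pii(prompt)
    if findings:
        # Log internally (never to the external tool) that data was stripped.
        print("Redacted before submission: " + ", ".join(findings))
    # response = call_external_chatbot(safe_prompt)  # hypothetical API call
    return safe_prompt

print(submit_to_chatbot("Email jane.doe@example.com or call +36 30 123 4567."))

In practice, a data loss prevention or dedicated PII-detection tool and a reviewed data classification policy should stand behind such a gate; regexes alone will miss names, addresses, and other free-text identifiers.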


Conclusion


We have examined the privacy risks of using generative AI through the lens of transparency, purpose limitation, access to personal data, and data security. To avoid these risks, businesses should ensure that risk, legal, governance, and data management teams are involved in all data processing assessments. Decisions on data processing should be made with all risks in mind, including the legal, ethical, security, and public relations risks that may arise if the AI tool uses the data inappropriately or fails to provide adequate information about the processing.


 
