Personal Data in Large Language Models: The Removal Process and Its Impact

Jason Bell
3 min read · Apr 9, 2023

Introduction

The ever-growing capabilities of large language models, such as OpenAI’s GPT-4, have raised important questions about data privacy and the ethical use of personal information. As these models process and learn from vast amounts of text data, it is crucial to consider the implications of inadvertently including personal or sensitive information.

In this article I will discuss the use of personal data within large language models, the process of requesting the removal of that information, and the impact of such removal on the model’s performance. Ultimately, will removing data from the training set make any difference to a large model?

Personal Data in Large Language Models

Large language models like GPT-4 are trained on massive datasets that include a wide range of text sources, from books and articles to social media and websites. This broad scope is essential for models to learn and generate human-like responses. However, it also increases the likelihood of encountering personal information, such as names, email addresses, or sensitive details about an individual’s life.

Though developers of these models typically employ data filtering and anonymisation techniques, the sheer volume of data makes it difficult to guarantee that no personal information slips through. As a result, sensitive information can become embedded within the model’s parameters and influence generated outputs. Could these affected outputs create a butterfly effect on future model outputs?
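
To give a rough flavour of what that filtering involves, here is a minimal sketch of a regex-based redaction pass over raw text. The patterns and the `redact_pii` helper are purely illustrative assumptions on my part, not the filtering used by OpenAI or any other model developer; production pipelines lean on named-entity recognition, deduplication, and far broader rule sets.

```python
import re

# Illustrative patterns only -- real pipelines use NER models and much
# broader rules, not a couple of regular expressions.
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
PHONE_RE = re.compile(r"\+?\d[\d\s()-]{7,}\d")

def redact_pii(text: str) -> str:
    """Replace obvious email addresses and phone numbers with placeholders."""
    text = EMAIL_RE.sub("[EMAIL]", text)
    text = PHONE_RE.sub("[PHONE]", text)
    return text

sample = "Contact Jane Doe at jane.doe@example.com or +44 7700 900123."
print(redact_pii(sample))
# Contact Jane Doe at [EMAIL] or [PHONE].
# Note that the name itself is untouched -- one reason simple rules
# cannot guarantee a corpus is free of personal information.
```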

The Butterfly Effect

The butterfly effect is a concept from chaos theory: small events or changes in a system can lead to significant and unpredictable consequences over time. The term was coined by mathematician and meteorologist Edward Lorenz, who discovered that minor variations in the initial conditions of a weather model could result in drastically different forecasts. This sensitivity to initial conditions highlights the inherent complexity and interdependence of systems that exhibit chaotic behaviour, such as weather, ecosystems, and economies. In my opinion, the same reasoning applies to large language models and to any other black-box algorithm.

The name “butterfly effect” was inspired by the metaphorical example of a butterfly flapping its wings in Brazil causing a tornado in Texas, illustrating the notion that seemingly inconsequential actions can have far-reaching and unpredictable impacts. Although this specific example is not meant to be taken literally, it serves as a powerful reminder of the interconnected nature of complex systems and the importance of considering how small changes can reverberate through time and space.
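
Lorenz’s point about sensitivity to initial conditions is easy to demonstrate numerically. The short sketch below is my own illustration using the logistic map, nothing to do with weather models or language models: two trajectories that start a billionth apart become completely different within a few dozen steps.

```python
# Logistic map x_{n+1} = r * x_n * (1 - x_n) in its chaotic regime (r = 4).
# Two starting points a billionth apart diverge to entirely different
# trajectories within a few dozen iterations.
r = 4.0
a, b = 0.2, 0.2 + 1e-9

for step in range(1, 51):
    a = r * a * (1 - a)
    b = r * b * (1 - b)
    if step % 10 == 0:
        print(f"step {step}: |a - b| = {abs(a - b):.9f}")
```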

Requesting Removal of Personal Data

In an effort to address privacy concerns, organisations operating large language models have established mechanisms to allow individuals to request the removal of their personal data. This process typically involves submitting a formal request, specifying the type of data and its source. Once received, the organisation will assess the request and, if deemed valid, remove the relevant data from the training set.

Is There an Impact on the Model?

Removing personal data from a language model’s training set is generally expected to have minimal impact on overall performance. Because these models are designed to generalise from a vast array of sources and examples, removing a small amount of specific data is unlikely to affect their ability to generate coherent and contextually relevant responses.

However, it is important to note that removing data from the training set does not guarantee that the model will no longer generate outputs containing similar information. The model has already been trained and has learned patterns from the original data. As a result, the only way to fully remove the influence of the removed data is to retrain the model, which can be a costly and time-consuming process. What the right to be forgotten essentially asks for, then, is retraining the model either from scratch or from a checkpoint taken before the individual’s data was used.
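
To make that concrete, here is a minimal toy sketch of what honouring such a request might involve in a generic PyTorch-style training loop: drop the subject’s records from the corpus, restore a checkpoint saved before that data was ever seen, and continue training on the cleaned set. The record sources, the tiny model, and the dummy objective are all hypothetical placeholders of mine; real foundation-model pipelines operate at a scale where this is anything but a quick script.

```python
import torch

# Toy stand-ins (hypothetical): a tiny model, a "corpus" with per-record
# provenance, and a checkpoint taken before the subject's data was used.
model = torch.nn.Linear(8, 8)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
checkpoint = {"model": model.state_dict(), "optimizer": optimizer.state_dict()}

corpus = [
    {"source": "public_article", "features": torch.randn(8)},
    {"source": "subject_blog_post", "features": torch.randn(8)},  # data to be forgotten
    {"source": "public_forum", "features": torch.randn(8)},
]

# 1. Remove the records covered by the erasure request.
cleaned = [rec for rec in corpus if rec["source"] != "subject_blog_post"]

# 2. Roll the model back to the pre-exposure checkpoint
#    (a real pipeline would reload it from storage, not from memory).
model.load_state_dict(checkpoint["model"])
optimizer.load_state_dict(checkpoint["optimizer"])

# 3. Continue training on the cleaned corpus only.
for rec in cleaned:
    optimizer.zero_grad()
    x = rec["features"]
    loss = torch.nn.functional.mse_loss(model(x), x)  # dummy reconstruction objective
    loss.backward()
    optimizer.step()
```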

Conclusion

The use of personal data in large language models presents both ethical and privacy challenges that must be carefully considered. While organisations are taking steps to address these concerns, such as providing mechanisms to request data removal, it is important to recognise the limitations of these measures. Ultimately, striking the right balance between utility and privacy will require ongoing research and dialogue between organisations, developers, users, and regulatory authorities to ensure the responsible development and deployment of large language models.


Jason Bell

The Startup Quant and founder of ATXGV. Author of two machine learning books for Wiley Inc.