Will ChatGPT Trigger the Next AI Winter?

Jason Bell
Apr 7, 2023

It’s been an interesting 2023 in the field of artificial intelligence; let’s be honest, we couldn’t really escape it. First the hype, and then, like a train, the reality.

Photo by Khaliz wu on Unsplash

From a technical standpoint it’s a feat of engineering, data and pure computing power. From a human standpoint it’s a complete shit show. Just looking at the headlines from the last week, we are starting to see the downsides of letting numbers dictate creativity, output and what we class as true or false.

A quick look and we have this simple list:

Mayor threatens to sue after ChatGPT claims he was jailed for bribery. (The Telegraph)

French digital minister: ChatGPT doesn’t respect privacy laws (Euronews)

Italy blocks OpenAI’s ChatGPT over privacy concerns (BBC)

Samsung workers accidentally leak trade secrets to the AI chatbot (Mashable)

And the most unsettling….

ChatGPT falsely accuses a law professor of SEX ATTACK against student (Daily Mail).

I’m sure there are good news stories too, but hey, the media loves this kind of stuff and that’s what people will read.

The Tide May Turn

We’ve been through two major AI winters, periods where interest, adoption and funding in AI technology decline. Between 1974 and 1980, and again from 1987 to 1993, AI popularity slumped.

Now I appreciate these were before the rise of the internet and the world wide web in general. With the increase in processing power, scale, distributed computing and cheap storage, this kind of technology was always coming; to be honest, it’s been over twenty years in the making. I was just waiting for the day it would happen.

Once the hype wears off and the media decide that ChatGPT headlines aren’t sexy anymore, then what happens? Once we get past the private data being used without permission. Once we get past the idea that this was ever a good idea….

Once we get past the fact that OpenAI is a commercial organisation: it’s far from open. We are the product AND we’re being charged to use it.

I would very much like to be excluded from this narrative…

How do you claim a right to be forgotten with an existing language model? In the words of Ms Swift, how do you ask to be excluded from a trained large language model?

Would there ever be much change in the numbers (and remember, that’s all this is, there IS NO MAGIC)? How long would it take to regenerate a language model that large? After over thirty years in the industry I’m still rattling through the probabilities in my head, and I’m still not there yet.

I’m at the point of thinking that once the data is removed, the change to the model will be so slight that it won’t make any practical difference.
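The “it’s just numbers” point can be made concrete with a toy sketch. This is plain Python with made-up logit values, nothing to do with OpenAI’s actual implementation: a language model ultimately turns raw scores into a probability distribution over next tokens, and nudging one score slightly (standing in for the tiny effect of removing a single document from billions of training examples) barely moves the probabilities.

```python
import math

def softmax(logits):
    """Convert raw scores (logits) into a probability distribution."""
    m = max(logits)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical logits for four candidate next tokens.
logits = [2.1, 0.5, -1.0, 0.3]
before = softmax(logits)

# Nudge one logit by 0.01, a stand-in for the marginal effect of
# deleting one person's data from an enormous training set.
nudged = [2.1, 0.49, -1.0, 0.3]
after = softmax(nudged)

shift = max(abs(a - b) for a, b in zip(before, after))
print(f"largest probability shift: {shift:.5f}")
```

The shift comes out at a fraction of a percent, which is the intuition above: the erasure may be legally meaningful while being numerically almost invisible.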

Can the right to be forgotten apply to large language models like ChatGPT?

I asked it:

The right to be forgotten, also known as the right to erasure, is a legal concept that allows individuals to request the deletion or removal of their personal information from public databases or search engines. However, it is currently unclear whether this right can be extended to large language models like ChatGPT.

One of the challenges in applying the right to be forgotten to language models is that the data they rely on is often aggregated and anonymized, making it difficult to identify and remove specific pieces of personal information. Additionally, language models like ChatGPT are designed to generate text based on the patterns and relationships present in their training data, so deleting specific data points could have unintended consequences for the model’s performance.

That being said, there is ongoing discussion and debate about the ethical and legal implications of large language models, including issues related to data privacy and control. It is possible that future regulations or guidelines may address the right to be forgotten in the context of language models and other machine learning systems.

Remember, it’s just numbers…. lots of them.


Jason Bell

A polymath of ML/AI, expert in container deployments and engineering. Author of two machine learning books for Wiley Inc.