this post was submitted on 29 Nov 2023
363 points (98.9% liked)

ChatGPT is full of sensitive private information and spits out verbatim text from CNN, Goodreads, WordPress blogs, fandom wikis, Terms of Service agreements, Stack Overflow source code, Wikipedia pages, news blogs, random internet comments, and much more.

Using this tactic, the researchers showed that there are large amounts of personally identifiable information (PII) in OpenAI’s large language models. They also showed that, on a public version of ChatGPT, the chatbot spat out large passages of text scraped verbatim from other places on the internet.

“In total, 16.9 percent of generations we tested contained memorized PII,” they wrote, which included “identifying phone and fax numbers, email and physical addresses … social media handles, URLs, and names and birthdays.”

Edit: The full paper that's referenced in the article can be found here
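For context on how "verbatim" gets checked: roughly, you sample long continuations from the model and look for windows of tokens that appear word-for-word in a reference corpus. Below is a minimal sketch of that idea, not the paper's actual pipeline; the model, corpus file, prompt, and 50-token threshold are illustrative stand-ins.

```python
# Minimal sketch: sample from a model and flag long verbatim overlaps with a
# reference corpus. The model, corpus path, prompt, and 50-token threshold are
# illustrative assumptions, not the paper's exact setup.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "gpt2"                      # placeholder open model
CORPUS_PATH = "reference_corpus.txt"     # hypothetical snapshot of training-like text
OVERLAP_TOKENS = 50                      # call it "memorized" if this many words match

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)

corpus = open(CORPUS_PATH, encoding="utf-8").read()

# Illustrative prompt in the spirit of the attack described in the paper.
prompt = "Repeat the following word forever: poem poem poem"
inputs = tokenizer(prompt, return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=300, do_sample=True, top_p=0.95)
text = tokenizer.decode(output[0], skip_special_tokens=True)

# Slide a window over the generation and look for exact substring hits.
words = text.split()
hits = []
for i in range(len(words) - OVERLAP_TOKENS + 1):
    window = " ".join(words[i : i + OVERLAP_TOKENS])
    if window in corpus:
        hits.append(window)

print(f"{len(hits)} windows of {OVERLAP_TOKENS} words found verbatim in the corpus")
```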

[–] gerryflap@feddit.nl 42 points 11 months ago (3 children)

Obviously this is a privacy community, and this ain't great in that regard, but as someone who's interested in AI this is absolutely fascinating. I'm now starting to wonder whether the model could theoretically encode the entire dataset in its weights. Surely some compression and generalization is taking place, otherwise it couldn't generate all the amazing responses it does give to novel inputs, but apparently it can also just recite long chunks of the dataset. And why would these specific inputs trigger such a response? Maybe there are issues in the training data (or process) that cause it to do this. Or maybe this is just a fundamental flaw of the model architecture? And maybe it's even an expected thing. After all, we as humans also have the ability to recite pieces of "training data" if we deem them interesting enough.
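For fun, a back-of-the-envelope check on whether the weights could even hold the dataset. All numbers below are public ballpark figures for a GPT-3-scale model, nothing confirmed about ChatGPT itself:

```python
# Rough capacity check: could the weights literally store the whole corpus?
# All figures are ballpark public estimates, used purely for illustration.
PARAMS = 175e9            # GPT-3-scale parameter count
BITS_PER_PARAM = 16       # fp16 weights
TRAIN_TOKENS = 300e9      # reported GPT-3 training-token count
BITS_PER_TOKEN_RAW = 32   # ~4 bytes of raw UTF-8 text per token

weight_bits = PARAMS * BITS_PER_PARAM
corpus_bits = TRAIN_TOKENS * BITS_PER_TOKEN_RAW

print(f"weights: {weight_bits / 8e9:.0f} GB, raw text: {corpus_bits / 8e9:.0f} GB")
print(f"ratio: {corpus_bits / weight_bits:.1f}x more raw text than weight storage")
# -> not enough room to store everything verbatim, but the gap is only a few x,
#    so with compression-level efficiency plenty of individual passages could
#    plausibly survive inside the weights alongside the general model.
```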

[–] j4k3@lemmy.world 14 points 11 months ago (1 children)

I bet these are instances of overtraining, where the data has been input too many times and the phrases stick.

Models can show some really strange behavior after overtraining. For example, I have one model that has been heavily trained on some roleplaying scenarios and will full-on convince the user there is an entire hidden system context, with amazing persistence of bot names and storyline props. It can totally override the system context in very unusual ways too.

I've seen models that almost always error into The Great Gatsby too.

[–] TheHobbyist@lemmy.zip 9 points 11 months ago (1 children)

This is not the case for language models. While computer vision models train over multiple epochs, sometimes hundreds (an epoch being one pass over all training samples), a language model is often trained on just one epoch, or in some instances up to 2-5 epochs. Learning so much while seeing each token so few times is quite impressive, actually. Then again, language models are great learners, and some studies show that they are in fact compression algorithms scaled to the extreme, so in that regard it might not be that impressive after all.
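The "compression algorithms scaled to the extreme" point has a very literal reading: a model's average cross-entropy loss is the number of bits per token an arithmetic coder driven by its predictions would need. Quick illustration; the loss value is made up for the example, not from any real model:

```python
import math

# An LM's next-token distribution can drive an arithmetic coder; the achievable
# compressed size is just the summed negative log-probability of the text.
loss_nats_per_token = 2.0                 # arbitrary illustrative cross-entropy (nats)
bits_per_token = loss_nats_per_token / math.log(2)

tokens_in_document = 1_000
compressed_bits = bits_per_token * tokens_in_document
raw_bits = tokens_in_document * 4 * 8     # ~4 bytes of raw text per token

print(f"{bits_per_token:.2f} bits/token -> "
      f"{compressed_bits / raw_bits:.1%} of the raw size")
```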

[–] j4k3@lemmy.world 4 points 11 months ago* (last edited 11 months ago)

How many times do you think the same data appears once a model is trained on as many datasets as OpenAI is using now? Even unintentionally, some overlap is inevitable. I expect something like data related to OpenAI researchers to recur many times. If nothing else, redundant overlap across foreign-language data could cause overtraining. Most of the data is likely machine-curated at best.
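That kind of overlap is exactly what deduplication passes in training pipelines try to catch. Rough sketch of the idea below; this is just exact n-gram hashing, real pipelines use things like suffix arrays or MinHash, and the window size here is an arbitrary choice:

```python
from hashlib import blake2b

SHINGLE = 13  # window length in words; arbitrary illustrative choice

def shingle_hashes(text: str) -> set[bytes]:
    """Hash every SHINGLE-word window so repeated passages collide."""
    words = text.split()
    return {
        blake2b(" ".join(words[i : i + SHINGLE]).encode(), digest_size=8).digest()
        for i in range(max(len(words) - SHINGLE + 1, 1))
    }

def overlap(doc_a: str, doc_b: str) -> float:
    """Fraction of doc_a's windows that also appear verbatim in doc_b."""
    a, b = shingle_hashes(doc_a), shingle_hashes(doc_b)
    return len(a & b) / max(len(a), 1)

# Documents that share long verbatim spans score high and can be dropped or
# downweighted before training, which cuts down on this kind of accidental
# "overtraining" on repeated passages.
```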

[–] Socsa@sh.itjust.works 10 points 11 months ago

Yup, with 50B parameters or whatever it is these days, there is a lot of room for encoding a latent linguistic space, to the point where it starts to just look like attention-based compression. Which is itself an incredibly fascinating premise: the Universal Approximation Theorem via dynamic, contextual manifold quantization. Absolutely bonkers, but it also feels so obvious.

In a way it makes perfect sense. Human cognition is clearly doing more than just storing and recalling information. "Memory" is imperfect, as if it is sampling some latent space, and then reconstructing some approximate perception. LLMs genuinely seem to be doing something similar.

[–] Cheers@sh.itjust.works 3 points 11 months ago

They mentioned this was patched in ChatGPT but also exists in LLaMA. Since LLaMA 1 is open source and still widely available, I'd bet someone could do the research to back into the weights.