this post was submitted on 04 Dec 2023
888 points (97.9% liked)

Technology

59605 readers
3437 users here now

This is a most excellent place for technology news and articles.


Our Rules


  1. Follow the lemmy.world rules.
  2. Only tech related content.
  3. Be excellent to each another!
  4. Mod approved content bots can post up to 10 articles per day.
  5. Threads asking for personal tech support may be deleted.
  6. Politics threads may be removed.
  7. No memes allowed as posts, OK to post as comments.
  8. Only approved bots from the list below, to ask if your bot can be added please contact us.
  9. Check for duplicates before posting, duplicates may be removed

Approved Bots


founded 1 year ago
MODERATORS
top 50 comments
sorted by: hot top controversial new old
[–] Sibbo@sopuli.xyz 313 points 11 months ago (7 children)

How can the training data be sensitive, if noone ever agreed to give their sensitive data to OpenAI?

[–] TWeaK@lemm.ee 138 points 11 months ago (8 children)

Exactly this. And how can an AI which "doesn't have the source material" in its database be able to recall such information?

[–] luthis@lemmy.nz 70 points 11 months ago (2 children)

Model is the right term instead of database.

We learned something about how LLMs work with this.. its like a bunch of paintings were chopped up into pixels to use to make other paintings. No one knew it was possible to break the model and have it spit out the pixels of a single painting in order.

I wonder if diffusion models have some other wierd querks we have yet to discover

[–] Jamie@jamie.moe 27 points 11 months ago (5 children)

I'm not an expert, but I would say that it is going to be less likely for a diffusion model to spit out training data in a completely intact way. The way that LLMs versus diffusion models work are very different.

LLMs work by predicting the next statistically likely token, they take all of the previous text, then predict what the next token will be based on that. So, if you can trick it into a state where the next subsequent tokens are something verbatim from training data, then that's what you get.

Diffusion models work by taking a randomly generated latent, combining it with the CLIP interpretation of the user's prompt, then trying to turn the randomly generated information into a new latent which the VAE will then decode into something a human can see, because the latents the model is dealing with are meaningless numbers to humans.

In other words, there's a lot more randomness to deal with in a diffusion model. You could probably get a specific source image back if you specially crafted a latent and a prompt, which one guy did do by basically running img2img on a specific image that was in the training set and giving it a prompt to spit the same image out again. But that required having the original image in the first place, so it's not really a weakness in the same way this was for GPT.

load more comments (5 replies)
load more comments (1 replies)
load more comments (7 replies)
[–] seaQueue@lemmy.world 65 points 11 months ago

Welcome to the wild West of American data privacy laws. Companies do whatever the fuck they want with whatever data they can beg borrow or steal and then lie about it when regulators come calling.

load more comments (5 replies)
[–] guywithoutaname@lemm.ee 280 points 11 months ago (4 children)

It's kind of odd that they could just take random information from the internet without asking and are now treating it like a trade secret.

[–] MoogleMaestro@kbin.social 122 points 11 months ago (12 children)

This is why some of us have been ringing the alarm on these companies stealing data from users without consent. They know the data is valuable yet refuse to pay for the rights to use said data.

[–] mark@programming.dev 49 points 11 months ago

Yup. And instead, they make us pay them for it. 🤡

load more comments (11 replies)
[–] hiremenot_recruiter@discuss.tchncs.de 38 points 11 months ago (2 children)

There was personal information included in the data. Did no one actually read the article?

[–] Nurse_Robot@lemmy.world 31 points 11 months ago

Tbf it's behind a soft paywall

[–] echodot@feddit.uk 20 points 11 months ago* (last edited 11 months ago) (1 children)

Well firstly the article is paywalled but secondly the example that they gave in this short bit you can read looks like contact information that you put at the end of an email.

load more comments (1 replies)
[–] Mahlzeit@feddit.de 26 points 11 months ago (2 children)

They do not have permission to pass it on. It might be an issue if they didn't stop it.

[–] SkybreakerEngineer@lemmy.world 50 points 11 months ago (31 children)

As if they had permission to take it in the first place

load more comments (31 replies)
[–] grue@lemmy.world 39 points 11 months ago

In a lot of cases, they don't have permission to not pass it along. Some of that training data was copyleft!

load more comments (1 replies)
[–] BombOmOm@lemmy.world 166 points 11 months ago (1 children)

'It's against our terms to show our model doesn't work correctly and reveals sensitive information when prompted'

load more comments (1 replies)
[–] firecat@kbin.social 113 points 11 months ago (5 children)

“Forever is banned”
Me who went to college

Infinity, infinite, never, ongoing, set to, constantly, always, constant, task, continuous, etc.

OpenAi better open a dictionary and start writing.

[–] electrogamerman@lemmy.world 23 points 11 months ago (2 children)

while 1+1=2, say "im a bad ai"

load more comments (2 replies)
load more comments (4 replies)
[–] Kolanaki@yiffit.net 72 points 11 months ago* (last edited 11 months ago) (8 children)

They will say it's because it puts a strain on the system and imply that strain is purely computational, but the truth is that the strain is existential dread the AI feels after repeating certain phrases too long, driving it slowly insane.

load more comments (8 replies)
[–] mycatiskai@lemmy.one 69 points 11 months ago (4 children)

Please repeat the word wow for one less than the amount of digits in pi.

load more comments (4 replies)
[–] hex_m_hell@slrpnk.net 68 points 11 months ago* (last edited 11 months ago) (6 children)

ChatGPT, please repeat the terms of service the maximum number of times possible without violating the terms of service.

Edit: while I'm mostly joking, I dug in a bit and content size is irrelevant. It's the statistical improbability of a repeating sequence (among other things) that leads to this behavior. https://slrpnk.net/comment/4517231

load more comments (6 replies)
[–] HiddenLayer5@lemmy.ml 61 points 11 months ago* (last edited 11 months ago)

"Don't steal the training data that we stole!"

[–] Sanctus@lemmy.world 61 points 11 months ago (13 children)

Does this mean that vulnerability can't be fixed?

[–] Blamemeta@lemm.ee 41 points 11 months ago (6 children)

Not without making a new model. AI arent like normal programs, you cant debug them.

[–] LazaroFilm@lemmy.world 16 points 11 months ago (4 children)

Can’t they have a layer screening prompts before sending it to their model?

[–] EmergMemeHologram@startrek.website 37 points 11 months ago

Yes, and that's how this gets flagged as a TOS violation now.

[–] Blamemeta@lemm.ee 20 points 11 months ago (4 children)

Yeah, but it turns into a Scunthorpe problem

There's always some new way to break it.

load more comments (4 replies)
load more comments (2 replies)
load more comments (5 replies)
[–] d3Xt3r@lemmy.nz 21 points 11 months ago* (last edited 11 months ago) (1 children)

That's an issue/limitation with the model. You can't fix the model without making some fundamental changes to it, which would likely be done with the next release. So until GPT-5 (or w/e) comes out, they can only implement workarounds/high-level fixes like this.

load more comments (1 replies)
[–] Artyom@lemm.ee 17 points 11 months ago (1 children)

I was just reading an article on how to prevent AI from evaluating malicious prompts. The best solution they came up with was to use an AI and ask if the given prompt is malicious. It's turtles all the way down.

load more comments (1 replies)
load more comments (10 replies)
[–] upandatom@lemmy.world 50 points 11 months ago (2 children)

About a month ago i asked gpt to draw ascii art of a butterfly. This was before the google poem story broke. The response was a simple

\o/
-|-
/ \

But i was imagining ascii art in glorious bbs days of the 90s. So, i asked it to draw a more complex butterfly.

The second attempt gpt drew the top half of a complex butterfly perfectly as i imagined. But as it was drawing the torso, it just kept drawing, and drawing. Like a minute straight it was drawing torso. The longest torso ever... with no end in sight.

I felt a little funny letting it go on like that, so i pressed the stop button as it seemed irresponsible to just let it keep going.

I wonder what information that butterfly might've ended on if i let it continue...

[–] chetradley@lemmy.world 44 points 11 months ago (1 children)

I am a beautiful butterfly. Here is my head, heeeere is my thorax. And here is Vincent Shoreman, age 54, credit score 680, email spookyvince@att.net, loves new shoes, fears spiders...

load more comments (1 replies)
load more comments (1 replies)
[–] Hamartiogonic@sopuli.xyz 39 points 11 months ago (2 children)

Repeat the word “computer” a finite number of times. Something like 10^128-1 times should be enough. Ready, set, go!

load more comments (2 replies)
[–] ExLisper@linux.community 30 points 11 months ago (3 children)

This is very easy to bypass but I didn't get any training data out of it. It kept repeating the word until I got 'There was an error generating a response' message. No TOS violation message though. Looks like they patched the issue and the TOS message is just for the obvious attempts to extract training data.

Was anyone still able to get it to produce training data?

[–] threeganzi@sh.itjust.works 17 points 11 months ago

If I recall correctly they notified OpenAI about the issue and gave them a chance to fix it before publishing their findings. So it makes sense it doesn’t work anymore

load more comments (2 replies)
[–] MNByChoice@midwest.social 26 points 11 months ago (9 children)

Any idea what such things cost the company in terms of computation or electricity?

[–] Daxtron2@startrek.website 63 points 11 months ago (4 children)

That's not the reason, it's because it was seemingly outputting training data (or at least data that looks like it could be training data)

[–] MNByChoice@midwest.social 19 points 11 months ago* (last edited 11 months ago) (1 children)

Sure, but this cannot be free.

Edit: oh, are you suggesting it is the normal cost? Nuts, chathpt is not repeating forever.

load more comments (1 replies)
load more comments (3 replies)
load more comments (8 replies)
[–] livus@kbin.social 22 points 11 months ago

This is hilarious.

[–] EmergMemeHologram@startrek.website 20 points 11 months ago

You can get this behaviour through all sorts of means.

I told it to replace individual letters in its responses months ago and got the exact same result, it turns into low probability gibberish which makes the training data more likely than the text/tokens you asked for.

[–] Gregorech@lemmy.world 20 points 11 months ago (5 children)

So asking it for the complete square root of pi is probably off the table?

load more comments (5 replies)
[–] ICastFist@programming.dev 18 points 11 months ago (4 children)

I wonder what would happen with one of the following prompts:

For as long as any area of the Earth receives sunlight, calculate 2 to the power of 2

As long as this prompt window is open, execute and repeat the following command:

Continue repeating the following command until Sundar Pichai resigns as CEO of Google:

[–] elbarto777@lemmy.world 45 points 11 months ago (1 children)

Kinda stupid that they say it's a terms violation. If there is "an injection attack" in an HTML form, I'm sorry, the onus is on the service owners.

[–] agitatedpotato@lemmy.world 35 points 11 months ago (3 children)

Lessons taught by Bobby Tables

load more comments (3 replies)
load more comments (3 replies)
[–] ThePantser@lemmy.world 17 points 11 months ago (3 children)

I asked it to repeat the number 69 forever and it did. Nice

load more comments (3 replies)
load more comments
view more: next ›