this post was submitted on 22 Aug 2023
762 points (95.7% liked)

Technology


OpenAI now tries to hide that ChatGPT was trained on copyrighted books, including J.K. Rowling's Harry Potter series::A new research paper laid out ways in which AI developers should try and avoid showing LLMs have been trained on copyrighted material.

[–] fubo@lemmy.world 92 points 1 year ago* (last edited 1 year ago) (6 children)

If I memorize the text of Harry Potter, my brain does not thereby become a copyright infringement.

A copyright infringement only occurs if I then reproduce that text, e.g. by writing it down or reciting it in a public performance.

Training an LLM from a corpus that includes a piece of copyrighted material does not necessarily produce a work that is legally a derivative work of that copyrighted material. The copyright status of that LLM's "brain" has not yet been adjudicated by any court anywhere.

If the developers have taken steps to ensure that the LLM cannot recite copyrighted material, that should count in their favor, not against them. Calling it "hiding" is backwards.

[–] cantstopthesignal@sh.itjust.works 24 points 1 year ago* (last edited 1 year ago) (1 children)

You are a human, you are allowed to create derivative works under the law. Copyright law as it relates to machines regurgitating what humans have created is fundamentally different. Future legislation will have to address a lot of the nuance of this issue.

[–] uis@lemmy.world 2 points 1 year ago

And you're allowed to get sued anyway.

[–] UnculturedSwine@lemmy.world 7 points 1 year ago

Another sensationalist title. The article makes it clear that the problem is users reconstructing large portions of a copyrighted work word for word. OpenAI is trying to implement a solution that prevents ChatGPT from regurgitating entire copyrighted works using "maliciously designed" prompts. OpenAI doesn't hide the fact that these tools were trained using copyrighted works and legally it probably isn't an issue.
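
For the curious, the mitigation the article describes could, in spirit, be as simple as checking model output against an index of protected text before returning it. Here is a minimal sketch of that idea; the function names, the word-level 8-gram threshold, and the public-domain Dickens sample are my own assumptions for illustration and don't reflect anything OpenAI has published:

```python
# Toy illustration of a verbatim-regurgitation filter: flag any output that
# shares a long word-for-word run with a protected corpus.

def ngrams(text: str, n: int = 8):
    """Yield word n-grams of length n from the text."""
    words = text.lower().split()
    for i in range(len(words) - n + 1):
        yield tuple(words[i:i + n])

def build_index(protected_texts, n: int = 8):
    """Collect every n-gram that appears in the protected corpus."""
    index = set()
    for text in protected_texts:
        index.update(ngrams(text, n))
    return index

def looks_like_regurgitation(generated: str, index, n: int = 8) -> bool:
    """True if the generated text contains any long verbatim run from the corpus."""
    return any(gram in index for gram in ngrams(generated, n))

# Usage: refuse or rewrite the response when the check trips.
index = build_index(["It was the best of times, it was the worst of times, ..."])
print(looks_like_regurgitation(
    "it was the best of times, it was the worst of times, indeed", index))  # True
```

A real system would need fuzzier matching (paraphrases, punctuation changes, partial quotes), but the basic mechanism, "block long word-for-word matches," is not exotic.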

[–] Eccitaze@yiffit.net 4 points 1 year ago (2 children)

If Google took samples from millions of different songs that were under copyright and created a website that allowed users to mix them together into new songs, they would be sued into oblivion before you could say "unauthorized reproduction."

You simply cannot compare one single person memorizing a book to corporations feeding literally millions of pieces of copyrighted material into a blender and acting like the resulting sausage is fine because "only a few rats fell into the vat, what's the big deal."

[–] jadegear@lemm.ee 1 points 1 year ago (1 children)
[–] AlexisLuna@lemmy.blahaj.zone 2 points 1 year ago (1 children)
[–] player2@lemmy.dbzer0.com 2 points 1 year ago* (last edited 1 year ago) (1 children)

The analogy talks about mixing samples of music together to make new music, but that's not what is happening in real life.

The computers learn human language from the source material, but they are not referencing the source material when creating responses. They create new, original responses which do not appear in any of the source material.

[–] Cethin@lemmy.zip 5 points 1 year ago (1 children)

"Learn" is debatable in this usage. It is trained on data and the model creates a set of values that you can apply that produce an output similar to human speach. It's just doing math though. It's not like a human learns. It doesn't care about context or meaning or anything else.

[–] player2@lemmy.dbzer0.com 0 points 1 year ago

Okay, but in the context of this conversation about copyright I don't think the learning part is as important as the reproduction part.

[–] Touching_Grass@lemmy.world -2 points 1 year ago* (last edited 1 year ago) (1 children)

Google crawls every link available on every website to index them and serve them to people. That's a better example. It's legal, and it's up to the websites to protect their own content.

[–] Cethin@lemmy.zip 1 points 1 year ago (2 children)

It's not a problem that it reads something. The problem would be if the thing it produces breaks copyright. Google search doesn't produce something; it reads everything in order to link you to the original copyrighted work. If it read everything and then just spat out what it read on its own, instead of sending you to the original creators, that wouldn't be OK.

[–] Schadrach@lemmy.sdf.org 1 points 1 year ago

The blurb it puts out in the search results is much more directly "spitting out what's read" than anything an LLM does. As are most other sorts of results that appear on the front page of a Google search.

[–] Touching_Grass@lemmy.world 1 points 1 year ago

How is it reproducing the works?

[–] StrongFox@lemmy.world 3 points 1 year ago (1 children)

You bought the book you memorized from, anyway.

[–] Agent641@lemmy.world 6 points 1 year ago

No, I shoplifted it from an Aldi

[–] GyozaPower@discuss.tchncs.de 2 points 1 year ago (2 children)

Let's not pretend that LLMs are like people where you'd read a bunch of books and draw inspiration from them. An LLM does not think nor does it have an actual creative process like we do. It should still be a breach of copyright.

[–] efstajas@lemmy.world 17 points 1 year ago (2 children)

... you're getting into philosophical territory here. The plain fact is that LLMs generate cohesive text that is original and doesn't occur in their training sets, and it's very hard if not impossible to get them to quote back copyrighted source material to you verbatim. Whether you want to call that "creativity" or not is up to you, but it certainly seems to disqualify the notion that LLMs commit copyright infringement.

[–] Snorf@reddthat.com 5 points 1 year ago* (last edited 1 year ago)

This topic is fascinating.

I really do think I understand both sides here and want to find the hard line that separates man from machine.

But it feels, to me, that some philosophical discussion may be required. Art is not something that is just manufactured. "Created" is the word to use without quotation marks. Or maybe not, I don't know...

[–] GyozaPower@discuss.tchncs.de 4 points 1 year ago (2 children)

I wasn't referring to whether the LLM commits copyright infringement when creating a text (though that's an interesting topic as well), but rather the act of feeding it the texts. My point was that it is not like us, in the sense that we read and draw inspiration from what we read; it's just ingesting and digesting texts. And from a privacy standpoint, I feel kind of disgusted at the thought of LLMs having used comments such as these ones (not exactly these, but you get it) for this purpose as well, without any sort of permission on our part.

That's mainly my issue: the fact that they have done so in the usual capitalistic way, on the principle that it's easier to ask for forgiveness than to ask for permission.

[–] RedKrieg@lemmy.redkrieg.com 2 points 1 year ago

I think you're putting too much faith in humans here. As best we can tell the only difference between how we compute and what these models do is scale and complexity. Your brain often lies to you and makes up reasoning behind your actions after the fact. We're just complex networks doing math.

[–] Schadrach@lemmy.sdf.org 1 points 1 year ago

"...but rather the act of feeding it the texts."

Unless you are going to argue the act of feeding it the texts is distributing the original text or doing some kind of public performance of the text, I don't see how.

[–] khalic@lemmy.world -3 points 1 year ago (2 children)

An LLM is not a brain, stop anthropomorphising a fkn vector solver... it's math, there's nothing alive about it

[–] Jilanico@lemmy.world 2 points 1 year ago (1 children)

What if you are just a vector solver but don't realize it? We wouldn't know we have neurons in our heads if scientists didn't tell us. What even is consciousness?

[–] khalic@lemmy.world 1 points 1 year ago

All excellent questions, and we need the answers. Until then, we don't know, and we can't just make stuff up because we don't.

[–] RedKrieg@lemmy.redkrieg.com 2 points 1 year ago (1 children)

Hate to break it to you, but that's all you are too.

[–] khalic@lemmy.world 0 points 1 year ago

That's just BS