this post was submitted on 22 Aug 2023
762 points (95.7% liked)


OpenAI now tries to hide that ChatGPT was trained on copyrighted books, including J.K. Rowling's Harry Potter series

A new research paper laid out ways in which AI developers should try to avoid showing that LLMs have been trained on copyrighted material.

[–] Touching_Grass@lemmy.world 0 points 1 year ago (2 children)

What's the issue with OpenAI?

[–] Corkyskog@sh.itjust.works 10 points 1 year ago (1 children)

They used to be a non-profit that immediately turned into a for-profit once their product was refined. They took a bunch of other people's effort, whether it be training materials or the users acting as training monkeys for the product, and then slapped a huge price tag on it.

[–] Touching_Grass@lemmy.world -1 points 1 year ago

I didn't know they were a non-profit. I'm good as long as they keep the current model: release older models free to use while charging for extra or the latest features.

[–] BURN@lemmy.world 2 points 1 year ago (3 children)

They’re stealing a ridiculous number of copyrighted works to train their models, without the consent of the copyright holders.

This includes single-person operations creating the art that’s being used to feed the very models that will take their jobs.

OpenAI should not be allowed to train on copyrighted material without paying a licensing fee at minimum.

[–] uzay@infosec.pub 2 points 1 year ago (2 children)

Also, Sam Altman is a grifter who gives people in need small amounts of Monopoly money in exchange for their biometric data.

[–] LifeInMultipleChoice@lemmy.ml 2 points 1 year ago

So, a hypothetical here: if Dreddit did launch a system that let users trade karma in for real currency or some alternative, does that mean all fan fiction and other fan-created material would become copyright infringement, since the creators would now be making money off the original works?

[–] Touching_Grass@lemmy.world -1 points 1 year ago (1 children)

If they purchased the data, or the data is free, it's theirs to do what they want with, as long as they don't violate the copyright by, say, reselling the original work as their own. Training on it should not violate any copyright if the work was available for free or was purchased by at least one person involved. Capitalism should work both ways.

[–] BURN@lemmy.world 1 points 1 year ago (1 children)

But they don’t purchase the data. That’s the whole problem.

And copyright is absolutely violated by training on it. It’s being used to make money and no longer falls under even the widest interpretation of fair use.

[–] GroggyGuava@lemmy.world -1 points 1 year ago* (last edited 1 year ago) (1 children)

You need to expand on how learning from something to make money is somehow using the original material to make money. Considering that's how art works in general, I'm having a hard time taking the side of "learning from media to make your own is against copyright". As long as they don't reproduce the same thing as the original, I don't see any issues with it. If they learned from The Lord of the Rings and then made "The Lord of the Rings", then yes, that'd be infringement. But if they use that data to make a new IP with original ideas, then how is that bad for the world/artists?

[–] BURN@lemmy.world 2 points 1 year ago

Creating an AI model is a commercial work. They’re made to make money. And these models are dependent on other artists’ data to train on. The models would be useless if they weren’t able to train on anything.

I hold the stance that using copyrighted data as part of a training set is a violation of copyright. That still hasn’t been fully challenged in court, so there’s no specific legal definition yet.

Since copyrighted materials are required to make the model function, I feel that they are using copyrighted works to build a commercial product.

Also, AI doesn’t learn. LLMs build statistical models of the sentence structure of the text they’ve seen before. There’s no understanding or inherent knowledge, and there’s nothing new being added.
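To give a rough picture of what "statistical model of text it has seen before" means, here is a minimal sketch using a toy bigram model. This is only an illustration of the general idea, not OpenAI's actual architecture (real LLMs use neural networks over tokens, not word counts); the corpus and all names here are made up.

```python
# Toy bigram language model: predicts the next word purely from how often
# word pairs appeared in its "training" text. Illustrative sketch only.
from collections import Counter, defaultdict
import random

corpus = "the boy lived under the stairs and the boy went to the city".split()

# Count how often each word follows each other word in the training text.
bigram_counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigram_counts[prev][nxt] += 1

def next_word(prev):
    """Sample the next word in proportion to how often it followed `prev`."""
    counts = bigram_counts[prev]
    if not counts:  # dead end: this word never had a successor in the corpus
        return None
    words, weights = zip(*counts.items())
    return random.choices(words, weights=weights)[0]

# Generate a continuation: every word is drawn from frequencies observed in
# the training text; nothing outside the corpus can ever be produced.
word = "the"
output = [word]
for _ in range(8):
    word = next_word(word)
    if word is None:
        break
    output.append(word)
print(" ".join(output))
```

The toy model can only ever recombine what was in its training data, which is the gist of the "nothing new is added" argument; whether that analogy holds for much larger neural models is exactly what's being debated in this thread.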