this post was submitted on 29 Sep 2023
439 points (93.5% liked)


Authors using a new tool to search a list of 183,000 books used to train AI are furious to find their works on the list.

[–] FontMasterFlex@lemmy.world -1 points 1 year ago (2 children)

They were compensated when the company using the book purchased the book. You can't tell me what to do with the words written in a book once I've purchased it, nor do you own the ideas or things I come up with as a result of reading your words. Of course, this argument only holds up if they purchased the book. If it was "stolen," then they're entitled to the $24.95 their book costs.

[–] kromem@lemmy.world 2 points 1 year ago (1 children)

That's the thing -- they weren't.

The case has two prongs.

One is that training the AI on copyrighted material is somehow infringement, which is total BS and a dangerous path for the world to go down.

The other is that copyrighted material was illegally downloaded by OpenAI, which is pretty much an open-and-shut case: they didn't buy copies of 100,000+ books, they basically torrented them.

And because of ridiculous IP laws bought by industry lobbyists at the dawn of the digital age, statutory damages can run up to $150,000 per work for willful infringement, not $24.95.
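For a sense of scale, here's a rough back-of-envelope sketch of the statutory exposure if every title on the 183,000-book list counted as a separately infringed work (a big assumption; courts can and do award far less than the maximum):

```python
# Back-of-envelope statutory damages exposure under 17 U.S.C. § 504(c).
# These are the statutory per-work ranges, not a prediction of any actual award.
books = 183_000                            # size of the list cited in the article
standard_min, standard_max = 750, 30_000   # ordinary statutory range per work
willful_max = 150_000                      # ceiling per work for willful infringement

print(f"standard range: ${books * standard_min:,} to ${books * standard_max:,}")
print(f"willful ceiling: ${books * willful_max:,}")
```

Even at the $750 statutory minimum, that's over $137 million; the willful ceiling works out to roughly $27.45 billion, which is why nobody expects an award anywhere near the theoretical maximum.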

Had they purchased them, these cases would very likely be headed for the dumpster heap.

That said, there's a certain irony to Lemmy having pirate subs as one of the most popular while also generally being aggressively pro-enforcement on IP infringement.

[–] BURN@lemmy.world -1 points 1 year ago (1 children)

Training AI on copyrighted material is infringement, and I'll die on that hill. It's use of copyrighted material to create a commercial product. It doesn't get any more clear-cut than that.

I know as an artist/musician/photographer I’d rather not put my creations out there at all if it means some corporation is going to be able to steal it.

[–] kromem@lemmy.world 2 points 1 year ago* (last edited 1 year ago)

From the U.S. Copyright Office's fair use guidance:

Courts look at how the party claiming fair use is using the copyrighted work, and are more likely to find that nonprofit educational and noncommercial uses are fair.

This does not mean, however, that all nonprofit education and noncommercial uses are fair and all commercial uses are not fair; instead, courts will balance the purpose and character of the use against the other factors below.

Additionally, “transformative” uses are more likely to be considered fair. Transformative uses are those that add something new, with a further purpose or different character, and do not substitute for the original use of the work.

You can stand wherever you like on any hill you'd like, but nonprofit vs. commercial use is only one factor in determining fair use. Where your stance runs into serious trouble is transformation: the result of training is extremely transformed from the training data, with an entirely different purpose and character, and the model can't even reproduce any of the works used in training in their entirety. And where it can reproduce works in part, that's likely not even the direct result of the work itself being in the training set, but of additional reinforcement from secondary uses and quotations of the reproducible passages in question.

And don't worry. Within about a year or so (by the time any legal decision gets finalized or new legislation is passed) no one is going to care about 'stealing' your or anyone else's creations, as training is almost certainly moving towards using primarily synthetic data and curated content creation to balance out edge cases.

Use of preexisting works was a stepping-stone hack that acted like jumper cables from another car starting the engine. Now that this one is running, there's a rapidly diminishing need for the other car.

Edit: And you'd have a very hard time convincing me that Stable Diffusion using Studio Ghibli movies to train a neural network that can produce new and different images in that style is infringement, while Wieden+Kennedy commercially making money off of producing this ad is not.

[–] pavnilschanda@lemmy.world 1 points 1 year ago

Good point. I guess this aspect is much different from the AI Art scene, where the producers of the dataset are usually not compensated for their drawings.