this post was submitted on 29 Sep 2023
439 points (93.5% liked)


Authors using a new tool to search a list of 183,000 books used to train AI are furious to find their works on the list.

[–] Wander@kbin.social 19 points 1 year ago (1 children)

Are you saying the writers of these programs have read all these books, and were so inspired by them that they wrote millions of books? And all this software is doing is outputting the result of someone being inspired by other books?

[–] Grimy@lemmy.world -3 points 1 year ago (2 children)

Clearly not. He's saying that other authors have done the same thing the software does, and that the software's creators built that same principle into their LLM. You are being daft on purpose.

[–] newthrowaway20@lemmy.world 18 points 1 year ago* (last edited 1 year ago) (1 children)

It's not the same principle. Large language models aren't 'inspired' to write new works. Software can't be inspired; it follows instructions. Even though a large language model might feel like somebody talking back to you and giving you new information, it's just code following instructions, designed to predict output based on the input provided and the data it was trained on. There's no inspiration to be had, and attributing inspiration to language models is a huge mischaracterization of what's happening under the hood. Can a language model, without being told what to do, actually use any of the data it was fed to create something? No. Every single large language model requires some sort of input from a user to act as a seed before any response can begin.

This is why it's so stupid to call this shit AI, because people start thinking it's actual intelligence. Really, it's just a fancy illusion.
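
To make that point concrete, here is a toy sketch of the prediction loop described above. A trivial bigram table stands in for the model; the corpus, the `follows` table, and the `generate` helper are made up purely for illustration and are nothing like how a real LLM is built, but the control flow is the same: no seed prompt, no output.

```python
import random
from collections import defaultdict

# Tiny made-up "training" corpus.
corpus = "the cat sat on the mat and the dog sat on the rug".split()

# "Training": record which word tends to follow which.
follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

def generate(prompt, length=8):
    """Predict one token at a time, starting from the user-supplied seed."""
    tokens = prompt.split()
    for _ in range(length):
        candidates = follows.get(tokens[-1])
        if not candidates:  # nothing learned for this context, stop
            break
        tokens.append(random.choice(candidates))
    return " ".join(tokens)

print(generate("the cat"))  # only produces anything because a seed was given
```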

[–] lloram239@feddit.de -3 points 1 year ago

This is why it’s so stupid to call this shit AI

It is using the term as defined. Maybe stop being a stupid parrot just repeating crap you heard elsewhere and use your brain for a moment. I am losing hope that humans are capable of thought, reading all this junk.

[–] mojo@lemm.ee 11 points 1 year ago

They purchased the books they took inspiration from; the original author got paid, and the author consented to the sale. That's the difference.

Also, the LLM can output entire snippets or chapters of books, which of course you'll take at face value even if it hallucinates and makes the author look like a worse author than they are.