Except no one is claiming that LLMs are the problem; they're claiming that GPT, or more specifically GPT's training data, is the problem. Transformer models still have a lot of potential, but the question the NYT is asking is "can you just take anyone else's work to train them?"
There's a similar suit against Meta for Llama.
And yes, as the dust settles we'll see in case law whether training an LLM counts as fair use.