What's the value of old journalism?
It's a product whose value curve is heavily weighted towards recency.
In theory, the greatest value theft is when the AP writes a piece and two dozen other 'journalists' copy it, changing the text just enough not to get sued. That's completely legal, but it's what effectively killed investigative journalism.
An LLM taking years-old articles and predicting their text until it learns the relationships between language itself and the events those articles describe isn't some inherent value theft.
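For anyone unfamiliar, "predicting" here just means next-token (or next-character) prediction. Here's a minimal sketch of that training loop as a toy character-level model in PyTorch; every name and size is illustrative, and real LLMs are essentially a vastly scaled-up version of the same objective:

```python
import torch
import torch.nn as nn

text = "The senator announced the bill on Tuesday."  # stand-in for an old article
vocab = sorted(set(text))
stoi = {ch: i for i, ch in enumerate(vocab)}
ids = torch.tensor([stoi[ch] for ch in text])

# Toy "language model": an embedding followed by a linear layer over the vocabulary.
model = nn.Sequential(nn.Embedding(len(vocab), 32), nn.Linear(32, len(vocab)))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.CrossEntropyLoss()

for step in range(200):
    inputs, targets = ids[:-1], ids[1:]  # predict each next character from the one before it
    logits = model(inputs)
    loss = loss_fn(logits, targets)      # penalize wrong predictions
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

The text only ever appears as prediction targets; what persists after training is the weights, i.e. the statistical relationships the model extracted.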
It's not the training that's the problem, it's the application of the models that needs policing.
Like if someone took an LLM, fed it recently published news stories in the prompt via RAG, and had it rewrite them just differently enough that no one needed to visit the original publisher (something like the sketch below).
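To make the distinction concrete, here's a minimal sketch of that kind of pipeline, assuming the OpenAI Python client; fetch_todays_article() is a hypothetical stand-in for the retrieval half of RAG:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def fetch_todays_article() -> str:
    # Hypothetical retrieval step: in a real pipeline this would hit a
    # search index or news feed and pull back a just-published story.
    return "FULL TEXT OF A FRESHLY PUBLISHED NEWS STORY..."

article = fetch_todays_article()
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{
        "role": "user",
        "content": "Rewrite this story in different words, keeping every fact:\n\n" + article,
    }],
)
print(response.choices[0].message.content)  # a near-substitute for the original reporting
```

Note the model could have been trained on nothing recent at all; the harm comes entirely from feeding it today's reporting at inference time.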
Even if it stays legal for humans to do that kind of rewriting (which we really might want to revisit, or at least carve out an industry-specific restriction for), maybe we should have different rules for the models.
But trying to claim that an LLM which lets coma patients communicate, or problem-solves self-driving algorithms, or diagnoses medical issues is stealing the value of old NYT articles in doing so isn't an argument I see much value in.
Except no one is claiming that LLMs are the problem; they're claiming GPT, or more specifically GPT's training data, is the problem. Transformer models still have a lot of potential, but the question the NYT is asking is "can you just take anyone else's work to train them?"
There's a similar suit against Meta for Llama.
And yes, as the dust settles we will see in case law whether training an LLM is fair use.
Really gave me a whole new perspective. Thanks for that.