this post was submitted on 24 Aug 2023
164 points (94.6% liked)

Technology

59422 readers
2931 users here now

This is a most excellent place for technology news and articles.


Our Rules


  1. Follow the lemmy.world rules.
  2. Only tech related content.
  3. Be excellent to each another!
  4. Mod approved content bots can post up to 10 articles per day.
  5. Threads asking for personal tech support may be deleted.
  6. Politics threads may be removed.
  7. No memes allowed as posts, OK to post as comments.
  8. Only approved bots from the list below, to ask if your bot can be added please contact us.
  9. Check for duplicates before posting, duplicates may be removed

Approved Bots


founded 1 year ago
MODERATORS
 

The New York Times blocks OpenAI’s web crawler::The New York Times has officially blocked GPTBot, OpenAI’s web crawler. The outlet’s robots.txt file specifically disallows GPTBot, preventing OpenAI from scraping content from its website to train AI models.
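For context, robots.txt is a plain-text file served at a site's root that tells well-behaved crawlers which paths they may fetch; it is advisory, not enforced. A disallow entry for GPTBot would look something like this (a sketch of the convention, not necessarily the Times' exact file):

```
User-agent: GPTBot
Disallow: /
```

`User-agent` names the crawler the rule applies to, and `Disallow: /` asks it to skip the entire site. Compliance is voluntary, though OpenAI has stated that GPTBot honors these directives.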

[–] rtxn@lemmy.world 7 points 1 year ago (3 children)

If you claim to fully understand machine learning technology, you should also understand why it's considered theft by many. Everything that a generative AI churns out is ultimately derived from human works. Some of it is legally unencumbered, but much of it is protected by copyright and integrated into an AI model without the author's permission or knowledge, and reused without attribution.

I have no love for the NYT, but in this, they're right.

[–] kava@lemmy.world -2 points 1 year ago

Everything anyone churns out is ultimately derived from human works. I know that 2+2 = 4 because my teacher taught me that. I can read Hegel and understand it because both he and I read Kant. The corpus of work created by humanity collectively builds on itself.

When you listen to a song on the radio, there's an unbroken chain of influence behind it stretching back hundreds of years.

Everything is built on everything else. AI isn't fundamentally different. It's just done automatically by a mathematical model.

In my opinion, instead of trying to suppress this technology like neo-Luddites, we need to be looking at new models for our creators to survive. I'm a big fan of the Patreon model. We don't have to use Patreon itself, of course (and we shouldn't).

But imagine a world where all content is free and people with money choose to support the creators they enjoy. Even a dollar or two when done en masse would be enough to sustain someone's lifestyle and reliably reward them for work.

We need to think forward and not act like conservatives. This technology isn't going away. It's simply going to accelerate and break a lot of things while it picks up speed.

[–] joe@lemmy.world -3 points 1 year ago* (last edited 1 year ago)

I can't say I fully understand how LLMs work (can anyone??), but I know a little, and your comment doesn't seem to reflect how they use training data. They don't use their training data to "memorize" sentences; they use it as examples (among billions) of how language works. It's still just an analogy, but it really is pretty close to an LLM "learning" a language by seeing it used over and over. Keeping in mind that we're still in an analogy: it isn't considered "derivative" when someone learns a language from examples of that language and then goes on to write a poem in that language.
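The "learning statistics about usage, not memorizing texts" idea can be illustrated with a toy bigram model — a deliberately simplified sketch, not how GPT-style models actually work, but the same principle: the trained artifact holds counts of which words follow which, not copies of the source sentences.

```python
from collections import defaultdict, Counter

def train_bigram(corpus):
    """Count, for each word, which words tend to follow it.
    The resulting model stores usage statistics, not the texts themselves."""
    counts = defaultdict(Counter)
    for sentence in corpus:
        words = sentence.split()
        for prev, nxt in zip(words, words[1:]):
            counts[prev][nxt] += 1
    return counts

def most_likely_next(model, word):
    """Return the statistically most common follower of `word`, or None."""
    followers = model.get(word)
    return followers.most_common(1)[0][0] if followers else None

corpus = [
    "the cat sat on the mat",
    "the dog sat on the rug",
    "the cat chased the dog",
]
model = train_bigram(corpus)
print(most_likely_next(model, "sat"))  # "on" — the only word ever seen after "sat"
```

Nothing in `model` is a copy of any training sentence; it's a table of frequencies, from which the model generates text that merely resembles what it saw. Real LLMs replace the count table with billions of learned parameters, which is where the memorization-vs-generalization debate gets murkier.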

Copyright doesn't even apply, except perhaps in extremely fringe cases. If a journalist put their article up online for general consumption, it doesn't violate copyright to use that work to train an LLM on what the language looks like when used properly. There is no aspect of copyright law that covers this, and I don't see why it would be any different from the human equivalent. Would you really back the NYT if they claimed that using their articles to learn English violated their copyright? Do people need to attribute where they learned a new word, or strengthened their understanding of a language, when they answer a question using that word? Does that even make sense?

Here is a link to a high level primer to help understand how LLMs work: https://www.understandingai.org/p/large-language-models-explained-with