They're now paying Reddit? I thought they could just scrape for free.
Also, you cannot delete anything on the internet. Once something is public, there will always be a copy somewhere.
Scraping a website at the scale they're talking about isn't really viable. You need API access so you can make very targeted requests.
This is why Reddit changed their API pricing and screwed over everyone using third-party apps. They can make more money selling access to LLM trainers than they could from having millions of people using apps that rely on the API.
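To make "targeted requests" concrete, here's a minimal sketch of going through Reddit's official API with the PRAW library; the credentials are placeholders you'd get by registering an app, and the subreddit is just an example.

```python
# Minimal sketch of a targeted request through Reddit's official API using PRAW.
# The credentials are placeholders from registering an app at reddit.com/prefs/apps;
# Reddit's rate limits and API pricing apply.
import praw

reddit = praw.Reddit(
    client_id="YOUR_CLIENT_ID",
    client_secret="YOUR_CLIENT_SECRET",
    user_agent="comment-fetcher/0.1 by u/your_username",
)

# Fetch only the newest submissions in one subreddit -- a single, narrow
# request instead of re-downloading and parsing whole HTML pages.
for submission in reddit.subreddit("technology").new(limit=10):
    print(submission.title, submission.num_comments)
```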
Scraping at scale is actually cheaper than buying API access. It's a massive, growing market: try googling "web scraping service" and you'll find hundreds of services that provide an API to scrape any public web page, bypass the blocks for you, and render all of the JavaScript.
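Those services generally boil down to a single HTTP endpoint: you hand over the target URL plus options like JavaScript rendering, and they deal with proxies and block evasion. A rough sketch of the shape of such a call (the endpoint, parameter names, and key below are hypothetical, not any particular vendor's API):

```python
# Rough sketch of calling a commercial scraping service.
# The endpoint, parameter names, and API key are hypothetical --
# real vendors differ, but the overall shape is usually similar.
import requests

resp = requests.get(
    "https://api.example-scraper.com/v1/scrape",   # hypothetical endpoint
    params={
        "api_key": "YOUR_KEY",
        "url": "https://www.reddit.com/r/technology/",
        "render_js": "true",   # ask the service to run a headless browser
        "country": "us",       # route the request through a proxy pool
    },
    timeout=60,
)
resp.raise_for_status()
html = resp.text  # fully rendered page, ready for parsing
```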
Scraping is nice for static content, no doubt. But I wonder at what point it becomes easier to request changes to a developing thread via the API than to request the whole page with all nested content over and over to find the new answers in there.
Following a developing thread is a very tiny use case, I'd imagine, and even then you can just scrape the backend API used by the public page and get the same results as the private API.
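Reddit itself is a good example of scraping "the backend API used by the public page": appending `.json` to most thread URLs returns roughly the same data the site renders, with no official API key involved. A minimal sketch, assuming an illustrative thread URL and subject to rate limiting and user-agent checks:

```python
# Minimal sketch: fetch a thread's data from the same JSON endpoint the
# public page uses, by appending ".json" to the thread URL.
# The URL is illustrative; expect rate limiting and user-agent checks.
import requests

thread_url = "https://www.reddit.com/r/technology/comments/abc123/example_thread/"
resp = requests.get(
    thread_url.rstrip("/") + ".json",
    headers={"User-Agent": "thread-watcher/0.1"},
    timeout=30,
)
resp.raise_for_status()
listing = resp.json()

# The second listing element holds the comment tree; walk the top level.
for child in listing[1]["data"]["children"]:
    data = child["data"]
    print(data.get("author"), "->", (data.get("body") or "")[:80])
```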
There's actually legal precedent against scraping a website through unofficial channels, even if the information is public. Basically, if you scrape a website and hinder its ability to operate, it falls under "virtual trespassing".
I'm assuming it would be even worse now that everyone is using the cloud, since scraping their site would cause a noticeable increase in resource usage (and thus directly cost them more money in cloud usage fees).
It's why APIs are such a big deal. They provide you with an official, controlled entry point to a platform's data.
It's the opposite! There's legal precedent that scraping public data is 100% legal in the US.
There are a few countries where scraping is illegal, though, like Japan and China. European countries also often have "database protection" laws that forbid replicating public databases through scraping or any other means, but that has to be a big chunk of the overall database. There are also personally identifiable information (PII) protection laws that forbid storing people's data without their consent (like the GDPR).
Source: I work with anti-bot tech, and we have to explain this to almost every customer who wants to "sue the web scrapers": lol, if LinkedIn couldn't do it, you're not suing anyone.
Refreshing to see a post on this topic that has its facts straight.
EU copyright allows a machine-readable opt-out from AI training (unless it's for scientific purposes). I guess that's behind these deals. It means they will have to pay off Reddit and the other platforms for access to the EU market. Or more accurately, EU customers will have to pay Reddit and the other platforms for access to AIs.
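That machine-readable opt-out mostly shows up today as robots.txt rules aimed at AI crawlers (alongside newer machine-readable reservation protocols). A quick sketch of checking whether a site has opted out for OpenAI's GPTBot, using only the Python standard library; GPTBot is a real crawler user agent, and the URLs are just examples:

```python
# Quick sketch: check whether a site's robots.txt opts out of OpenAI's
# GPTBot crawler. Uses only the standard library; the domain is an example,
# and robots.txt is just one of several opt-out signals.
from urllib.robotparser import RobotFileParser

rp = RobotFileParser()
rp.set_url("https://www.reddit.com/robots.txt")
rp.read()

allowed = rp.can_fetch("GPTBot", "https://www.reddit.com/r/technology/")
print("GPTBot allowed:", allowed)
```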
My guess is Reddit was cheap enough that it made sense to pay them as a sort of insurance that they don't get sued in the future.