Updating the user agent doesn't matter unless NYT is actively blocking that, too. Updating robots.txt is purely a "gentleman's agreement" that OpenAI will respect it. That said, OpenAI would be dumb to ignore it, because doing so would trigger the lawyer shenanigans to ensue.
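For anyone curious how the robots.txt side of this works: it's just a text file with per-user-agent rules, and well-behaved crawlers check it before fetching. A minimal sketch with Python's stdlib parser, assuming rules of the same general shape as NYT's block (the exact file contents and URL here are illustrative, not a copy of NYT's actual robots.txt):

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt rules blocking OpenAI's crawler site-wide.
rules = """\
User-agent: GPTBot
Disallow: /
"""

parser = RobotFileParser()
parser.parse(rules.splitlines())

# GPTBot is disallowed everywhere; agents with no matching rule default to allowed.
print(parser.can_fetch("GPTBot", "https://www.nytimes.com/section/technology"))
print(parser.can_fetch("SomeOtherBot", "https://www.nytimes.com/section/technology"))
```

Nothing enforces this at the network level, which is the "gentleman's agreement" part: the crawler has to voluntarily run a check like this and honor the result.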
NYT is already considering a lawsuit against OpenAI. So, not just dumb but arrogantly stupid when the lawyers are already in the room.
The burden of proof will fall on the NYT, and it will be extremely difficult to prove OpenAI is culpable for any infringement that its end users perform.
It's new territory and will be expensive, but NYT is old money and has the liquidity to burn cash all day.
This is the best summary I could come up with:
Based on the Internet Archive’s Wayback Machine, it appears NYT blocked the crawler as early as August 17th.
The change comes after the NYT updated its terms of service at the beginning of this month to prohibit the use of its content to train AI models.
OpenAI didn’t immediately reply to a request for comment.
The NYT is also considering legal action against OpenAI for intellectual property rights violations, NPR reported last week.
If it did sue, the Times would be joining others like Sarah Silverman and two other authors who sued the company in July over its use of Books3, a dataset used to train ChatGPT that may have thousands of copyrighted works, as well as Matthew Butterick, a programmer and lawyer who alleges the company’s data scraping practices amount to software piracy.
Update August 21st, 7:55PM ET: The New York Times declined to comment.
The original article contains 202 words, the summary contains 146 words. Saved 28%. I'm a bot and I'm open source!
But all those reposts on Reddit and Lemmy are still fair game...
shared by humans is not the same as crawled by bots...
I wonder how much of a boost sites get from Reddit, Lemmy, etc. Even with posts that have the text copy/pasted, I imagine it has to give them traffic.