this post was submitted on 03 Feb 2024
163 points (100.0% liked)

Technology

[–] mozz@mbin.grits.dev 72 points 9 months ago* (last edited 9 months ago) (1 children)

I think we can safely say that "web pages don't go down anymore" is not the real reason. Two possibilities I see for the real reason:

  • They don't feel it's necessary anymore to cache the actual pages: that it's somehow not worth keeping them around to track and analyze SEO abuse, or even just to tune their algorithm by rescanning things when someone notices a problem, even though it costs them a fraction of a fraction of a fraction of the income they make from search and makes life easier for anyone working on it.
  • They used to make it available just because it was a neat thing to be able to offer and there was no reason not to, since they were caching the pages anyway. But the current Google management has wandered so far afield of the original mindset that made them successful in the first place that that reasoning means nothing to them; it's more like a bizarre confusion of concepts than any kind of statement that makes coherent sense to them. So fuck the users. Yes, we're still caching the pages, because the search engine needs them. No, you can't have them for free, you fucking hippie. Now get out.

I know which explanation I favor.

[–] Powerpoint@lemmy.ca 5 points 9 months ago

Definitely B

[–] rho50@lemmy.nz 46 points 9 months ago (3 children)

This is probably an attempt to save money on storage costs. Expect cloud storage pricing from Google to continue to rise as they reallocate spending towards ML hardware accelerators.

Never been happier to have a proper NAS setup with offsite backup 🙃

[–] kubica@kbin.social 18 points 9 months ago (2 children)

I don't think they are going to stop storing it somewhere, just stop delivering it.

[–] rho50@lemmy.nz 14 points 9 months ago (1 children)

Idk… in theory they probably don’t need to store a full copy of the page for indexing, and could move to a more data-efficient format if they do. Also, not serving it means they don’t need to replicate the data to as many serving regions.

But I’m just speculating here. Don’t know how the indexing/crawling process works at Google’s scale.

[–] evatronic@lemm.ee 2 points 9 months ago

Absolutely. The crawler does some rudimentary processing before it ever saves anything to storage. That's the sort of thing that's being persisted behind the scenes, and it's almost certainly not enough to reconstruct the web page, nor is it (realistically) human-friendly. I was going to say "readable," but it's probably some bullshit JSON or XML document full of nonsense no one wants to read.
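Something like this, and to be clear this is a purely hypothetical sketch and not Google's actual pipeline, is the shape of record I mean: tokens plus a little metadata, enough to feed an index but nowhere near enough to put the page back together.

```python
# Hypothetical example of a crawler boiling a fetched page down to an index
# record. Illustrative only -- not Google's real format or process.
import json
from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    """Pulls the <title> and the visible text tokens out of raw HTML."""
    def __init__(self):
        super().__init__()
        self.in_title = False
        self.title = ""
        self.tokens = []

    def handle_starttag(self, tag, attrs):
        if tag == "title":
            self.in_title = True

    def handle_endtag(self, tag):
        if tag == "title":
            self.in_title = False

    def handle_data(self, data):
        if self.in_title:
            self.title += data
        else:
            self.tokens.extend(data.split())

def index_record(url: str, html: str) -> str:
    """Reduce a page to URL + title + tokens; the original markup is discarded."""
    parser = TextExtractor()
    parser.feed(html)
    return json.dumps({
        "url": url,
        "title": parser.title.strip(),
        "tokens": parser.tokens,  # enough to rank the page, not enough to rebuild it
    })

print(index_record("https://example.com",
                   "<html><title>Hi</title><body>some page text</body></html>"))
```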

[–] pre@fedia.io 1 points 9 months ago

Seems unlikely they'll delete it. If they've started deleting data, that's quite a change. They might save on the bandwidth costs of delivering it to people, I suppose.

Maybe something to do with users feeding AIs from the Google cache? Google wanting to ensure only they can train from the Google cache.

@kubica@kbin.social @Powderhorn@beehaw.org @rho50@lemmy.nz

[–] morry040@kbin.social 2 points 9 months ago

I think it's more about the cost of serving web visitors. Handling traffic and API calls becomes a financial problem when a growing number of companies are using bots to scrape data. Larger companies are moving their content behind paywalls, which act as a bot filter, and have also identified that they can generate a revenue stream from subscriptions and API connections. Old content on the web is not deemed to have much business value, so it's a decision of either charging for it or scrapping it.

[–] ciferecaNinjo@fedia.io 1 points 9 months ago* (last edited 9 months ago)

This is probably an attempt to save money on storage costs.

That’s in fact what the article claims as Google’s reason, but it seems irrational. Google still needs to index websites for the search engine, so the storage is still needed because the data collection is still needed. The only difference (AFAICT) is that Google is simply not sharing that data. Also, there are bigger pots of money in play than piddly storage costs.

[–] stoy@lemmy.zip 44 points 9 months ago

Google never backed up the internet. Sure, they cached pages, but that isn't even close to backing up the internet.

[–] alyaza@beehaw.org 42 points 9 months ago* (last edited 9 months ago) (3 children)

Google "Search Liaison" Danny Sullivan confirmed the feature removal in an X post, saying the feature "was meant for helping people access pages when way back, you often couldn't depend on a page loading. These days, things have greatly improved. So, it was decided to retire it."

okay but... has it? this seems like an unfounded premise, intuitively speaking

[–] Semi-Hemi-Demigod@kbin.social 19 points 9 months ago (1 children)

"What excuse could we use for this cost-cutting measure?"

"Uh, we could just say that people don't need it anymore."

"Johnson, get that man a promotion!"

[–] otter@lemmy.ca 9 points 9 months ago

Yea I've been using it more and more recently, although part of that is sites like Twitter or Reddit randomly hiding content

[–] ciferecaNinjo@fedia.io 3 points 9 months ago

Bingo. When I read that part of the article, I felt insulted. People see the web getting increasingly enshittified and less accessible. The increased need for cached pages has justified the existence of 12ft.io.

~40% of my web access is now dependent on archive.org and 12ft.io.

So yes, Google is obviously bullshitting. Clearly there is a real reason for nixing cached pages and Google is concealing that reason.

[–] Evil_Shrubbery@lemm.ee 34 points 9 months ago (1 children)

Not dead, just non-public.

AIs are hungry, the value of stored shitposts skyrocketed.

[–] Smoke@beehaw.org 3 points 9 months ago

There are ways to rate limit, like increasing response time per IP address per hour to make rapid, massed requests slower and easier to handle. Taking them all down at once is an extreme move.
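Even a toy version of that is only a few lines. This is just an illustration of the idea, not how any particular provider does it, and in reality you'd enforce it at the load balancer or CDN rather than in application code:

```python
# Toy per-IP throttle: each request an IP makes within the hour adds a small
# delay to its next response. Humans never notice; bulk scrapers slow to a crawl.
import time
from collections import defaultdict

WINDOW = 3600        # look at requests per IP over the last hour
BASE_DELAY = 0.05    # seconds added per request already seen in the window

hits = defaultdict(list)  # ip -> timestamps of recent requests

def throttle_delay(ip: str) -> float:
    """Return how long to sleep before serving this IP's request."""
    now = time.time()
    recent = [t for t in hits[ip] if now - t < WINDOW]
    recent.append(now)
    hits[ip] = recent
    return BASE_DELAY * (len(recent) - 1)

# Example: the 200th request in an hour from one address waits ~10 seconds.
time.sleep(throttle_delay("203.0.113.7"))
```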

[–] JoMiran@lemmy.ml 25 points 9 months ago (1 children)
[–] smeg@feddit.uk 6 points 9 months ago (1 children)

"Enshittification" isn't just company did bad thing, you know

[–] TheRtRevKaiser@beehaw.org 18 points 9 months ago (2 children)

It isn't, but I think this probably fits. Enshittification is when a company provides useful, good services to gain users, then once those users are locked in they start degrading those services or removing features to cut costs, right? That seems like a pretty close analogy to what's going on here, I'd think.

[–] Powerpoint@lemmy.ca 3 points 9 months ago

I doubt there's even a cost cut here. They're most likely still doing the work, just not making it available.

[–] smeg@feddit.uk 2 points 9 months ago

I think that's still just "what businesses do in general"; enshittification is specifically:

  1. Offer a great service as a middleman so users want to use your platform and business customers want to sell through it (i.e. get the market share)
  2. Once the users are used to it and sort of locked in, squeeze them so your business customers get better returns
  3. Once those business customers are locked in too, crank up the costs on them so you claw back the profit for yourself

From the original post that defined it:

First, they are good to their users; then they abuse their users to make things better for their business customers; finally, they abuse those business customers to claw back all the value for themselves. Then, they die.

[–] cupcakezealot@lemmy.blahaj.zone 15 points 9 months ago (1 children)

google be like stop bypassing paywalls and ads using our cache

[–] ruination@discuss.tchncs.de 2 points 9 months ago

Wait, how does Google make money off of paywalled content?

[–] bedrooms@kbin.social 7 points 9 months ago (1 children)

Maybe they don't want to give rival AI devs data access? It's not typical for Google to give up data.

[–] ciferecaNinjo@fedia.io 2 points 9 months ago

As far as we know, Google is not giving up any data. The crawler still must store a copy of the text for the index. The only certainty we have is that Google is no longer sharing it.

[–] ciferecaNinjo@fedia.io 4 points 9 months ago* (last edited 9 months ago)

From the article:

"was meant for helping people access pages when way back, you often couldn't depend on a page loading. These days, things have greatly improved. So, it was decided to retire it." (emphasis added)

Bullshit! The web gets increasingly enshittified and content is less accessible every day.

For now, you can still build your own cache links even without the button, just by going to "https://webcache.googleusercontent.com/search?q=cache:" plus a website URL, or by typing "cache:" plus a URL into Google Search.

You can also use 12ft.io.
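Both tricks are just URL prefixes. Here's a throwaway helper; the endpoint strings come straight from the article and from 12ft.io, though no promises the Google cache endpoint keeps answering:

```python
# Build the two workaround URLs mentioned above. Purely illustrative.
def google_cache_url(url: str) -> str:
    # Format quoted in the article; may stop working as the cache is retired.
    return "https://webcache.googleusercontent.com/search?q=cache:" + url

def twelve_ft_url(url: str) -> str:
    # 12ft.io's own instruction: prepend 12ft.io/ to the page URL.
    return "https://12ft.io/" + url

print(google_cache_url("https://example.com/some-article"))
print(twelve_ft_url("https://example.com/some-article"))
```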

Cached links were great if the website was down or quickly changed, but they also gave some insight over the years about how the "Google Bot" web crawler views the web. … A lot of Google Bot details are shrouded in secrecy to hide from SEO spammers, but you could learn a lot by investigating what cached pages look like.

Okay, so there’s a more plausible theory about the real reason for this move. Google may be trying to increase the secrecy of how its crawler functions.

The pages aren't necessarily rendered like how you would expect.

More importantly, they don’t render the way authors expect. And that’s a fucking good thing! It’s how caching helps give us some escape from enshittification. From the 12ft.io FAQ:

“Prepend 12ft.io/ to the URL of the webpage, and we'll try our best to remove the popups, ads, and other visual distractions.”

It also circumvents #paywalls. No doubt there must be legal pressure on Google from angry website owners who want to force their content to come with garbage.

The death of cached sites will mean the Internet Archive has a larger burden of archiving and tracking changes on the world's webpages.

The possibly good news is that Google’s role shrinks a bit. Any Google shrinkage is a good outcome overall. But there is a concerning relationship between archive.org and Cloudflare. I depend heavily on archive.org largely because Cloudflare has broken ~25% of the web. The day #InternetArchive becomes Cloudflared itself, we’re fucked.

We need several non-profits to archive the web in parallel redundancy with archive.org.

[–] ciferecaNinjo@fedia.io 3 points 9 months ago* (last edited 9 months ago)

Here’s the heart of the not-so-obvious problem:

Websites treat the Google crawler like a 1st class citizen. Paywalls give Google unpaid junk-free access. Then Google search results direct people to a website that treats humans differently (worse). So Google users are led to sites they cannot access. The heart of the problem is access inequality. Google effectively serves to refer people to sites that are not publicly accessible.

I do not want to see search results I cannot access. Google cache was the equalizer that neutralized that problem. Now that problem is back in our face.

[–] autotldr@lemmings.world 3 points 9 months ago

🤖 I'm a bot that provides automatic summaries for articles:

Google Search's "cached" links have long been an alternative way to load a website that was down or had changed, but now the company is killing them off.

The feature has been appearing and disappearing for some people since December, and currently, we don't see any cache links in Google Search.

Cached links used to live under the drop-down menu next to every search result on Google's page.

As the Google web crawler scoured the Internet for new and updated webpages, it would also save a copy of whatever it was seeing.

That quickly led to Google having a backup of basically the entire Internet, using what was probably an uncountable number of petabytes of data.

In 2020, Google switched to mobile-by-default, so for instance, if you visit that cached Ars link from earlier, you get the mobile site.


Saved 68% of original text.

[–] Marsupial@quokk.au 3 points 9 months ago (2 children)

Is there any sort of way to self host a limited version of this?

I’d love to be able to have my own Searx also cache everything I visit as I go; it’d at least let me re-find information I’ve previously found.

[–] WarmSoda@lemm.ee 16 points 9 months ago (2 children)

You want to self host... the Internet?

[–] Marsupial@quokk.au 6 points 9 months ago* (last edited 9 months ago) (1 children)

No, I want to automatically cache pages I’ve searched for and visited and have them show up on my searx.

We’re talking like maybe 10 pages a week if that.

I know there’s ArchiveBox, but I’m after something less manual and more integrated.

[–] WarmSoda@lemm.ee 4 points 9 months ago

I know what you meant. I was just messing with ya

[–] reddthat@reddthat.com 1 points 9 months ago

You could use something like ArchiveBox, as that saves the whole page, or you could use the Wayback Machine and force it to save the page via an add-on.

You could also set up your own YaCy index and every time you find an interesting site you could add it to YaCy.
But this is kind of not what you are asking for. ArchiveBox is probably the closest, or using a Squid cache and literally caching every URL you go to. 😅
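The "literally caching every URL you go to" part really is tiny if you roll it yourself. A rough sketch (ArchiveBox does this properly, with deduplication and multiple output formats; this just shows how small the core is):

```python
# Bare-bones personal page snapshotter: pass it URLs and it writes timestamped
# HTML copies into ~/webcache. A sketch of the idea, not a replacement for
# ArchiveBox or the Wayback Machine.
import pathlib
import sys
import time
from urllib.request import Request, urlopen

CACHE_DIR = pathlib.Path.home() / "webcache"

def snapshot(url: str) -> pathlib.Path:
    req = Request(url, headers={"User-Agent": "personal-cache/0.1"})
    html = urlopen(req, timeout=30).read()
    safe_name = url.replace("://", "_").replace("/", "_")
    out = CACHE_DIR / f"{int(time.time())}_{safe_name}.html"
    out.parent.mkdir(parents=True, exist_ok=True)
    out.write_bytes(html)
    return out

if __name__ == "__main__":
    for url in sys.argv[1:]:
        print(snapshot(url))
```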

[–] someguy3@lemmy.ca 1 points 9 months ago

Never seemed feasible to begin with.