this post was submitted on 23 Feb 2025
58 points (95.3% liked)

Selfhosted


A place to share alternatives to popular online services that can be self-hosted without giving up privacy or locking you into a service you don't control.

Rules:

  1. Be civil: we're here to support and learn from one another. Insults won't be tolerated. Flame wars are frowned upon.

  2. No spam posting.

  3. Posts have to be centered around self-hosting. There are other communities for discussing hardware or home computing. If it's not obvious why your post topic revolves around selfhosting, please include details to make it clear.

  4. Don't duplicate the full text of your blog or github here. Just post the link for folks to click.

  5. Submission headline should match the article title (don’t cherry-pick information from the title to fit your agenda).

  6. No trolling.

Any issues on the community? Report it using the report flag.

Questions? DM the mods!


Long story short, my VPS (the public entry point that forwards traffic to my home servers over Tailscale) got hammered by thousands of requests per minute from Anthropic's Claude crawler, all of them coming from different AWS IPs.

The VPS has a 1TB monthly cap, but it's still kinda shitty to have huge spikes like the 13GB in just a couple of minutes today.

How do you deal with something like this?
I'm only really running a Caddy reverse proxy on the VPS, which forwards my home server's services through Tailscale.

I'd really like to avoid solutions like Cloudflare, since they screw over CGNAT users pretty frequently. I don't think a WAF would help with this at all(?), but rate limiting on the reverse proxy might work.
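
Maybe something like the rate_limit directive from the mholt/caddy-ratelimit plugin? A very rough, untested sketch pieced together from the plugin's README (it's not part of stock Caddy, so it needs an xcaddy build, and the domain, limits and upstream below are placeholders):

```
# Caddyfile sketch -- needs a custom build:
#   xcaddy build --with github.com/mholt/caddy-ratelimit
{
    # third-party directive, so it likely needs an explicit order
    order rate_limit before reverse_proxy
}

example.com {
    rate_limit {
        zone per_ip {
            key    {remote_host}   # one bucket per client IP
            window 1m
            events 60              # at most 60 requests per IP per minute
        }
    }
    reverse_proxy 100.64.0.2:8080  # placeholder Tailscale address of the home server
}
```

The appealing part is that when the limit trips, the VPS should answer with cheap 429s instead of proxying every request home over Tailscale.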

(VPS has fail2ban and I'm using /etc/hosts.deny for manual blocking. There's a WIP website on my root domain with robots.txt that should be denying AWS bots as well...)

I'm still learning and would really appreciate any suggestions.

top 27 comments
[–] possiblylinux127@lemmy.zip 1 points 2 hours ago

Honestly we need some sort of proof of work (PoW)

[–] mat@jlai.lu 5 points 9 hours ago (1 children)

I guess sending tar bombs can be fun

[–] slazer2au@lemmy.world 2 points 8 hours ago
[–] breadsmasher@lemmy.world 18 points 20 hours ago (1 children)

I'm struggling to find it, but there's an "AI tarpit" that causes scrapers to get stuck, something like that? I'm sure I saw it posted on Lemmy recently. Hopefully someone can link it.

[–] sailorzoop@lemmy.librebun.com 12 points 19 hours ago (1 children)

I did find this GitHub link as the first search result; looks interesting. Thanks for letting me know the term "tarpit".

[–] quantenzitrone@lemmings.world 9 points 15 hours ago (1 children)
[–] N0x0n@lemmy.ml 6 points 9 hours ago* (last edited 5 hours ago)

Now I just want to host a web page and expose it with nepenthes...

First, because I'm a big fan of carnivorous plants.

Second, because it lets you poison LLMs and fuck with their training data.

Lastly, because I can do my part and say F#CK Y0U to those privacy-invading, data-hungry a$$holes!

I don't even expose anything directly to the web (everything goes through a tunnel like WireGuard) or have any important data to protect from AI or LLMs. But just having the opportunity to fuck with them while they continuously harvest everyone's data is something I'd already been thinking of; I just didn't know how.

Thanks for the link!

[–] drkt@scribe.disroot.org 10 points 18 hours ago (2 children)
[–] mholiv@lemmy.world 6 points 17 hours ago (2 children)

They want to reduce the bandwidth usage. Not increase it!

[–] sxan@midwest.social 17 points 16 hours ago (1 children)

A good tarpit will reduce your bandwidth. Tarpits aren't about shoving useless data at bots; they're about responding as slowly as possible, keeping the bot connected for as long as possible while giving it nothing.

Endlessh accepts the connection and then... does nothing. It doesn't even perform the SSH handshake. It just very... slowly... sends... an endless banner preamble, until the bot gives up.

As I write, my Internet-facing SSH tarpit currently has 27 clients trapped in it. A few of these have been connected for weeks. In one particular spike it had 1,378 clients trapped at once, lasting about 20 hours.
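
If anyone wants to try it, the whole Endlessh config fits in a few lines. Something like this (illustrative values, not my actual config; option names are from the Endlessh README):

```
# /etc/endlessh/config -- example values, tune to taste
Port 22            # put the tarpit on the standard port; run real sshd elsewhere
Delay 10000        # milliseconds between banner lines; higher = slower drip
MaxLineLength 32   # bytes of random banner sent each time
MaxClients 4096    # how many bots to keep on the hook at once
LogLevel 1
```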

[–] mholiv@lemmy.world 4 points 6 hours ago (2 children)

Fair. But I haven’t seen any anti-ai-scraper tarpits that do that. The ones I’ve seen mostly just pipe 10MB of /dev/urandom out there.

Also, I assume the programmers working at AI companies are not literally mentally deficient. They would certainly add .timeout(10) or whatever to their scrapers; they probably have something more dynamic than that.

[–] sem@lemmy.blahaj.zone 4 points 3 hours ago (1 children)

There's one I saw that gave the bot a long circular form to fill out or something, I can't exactly remember

[–] sxan@midwest.social 2 points 3 hours ago

Yeah, that's a good one.

[–] sxan@midwest.social 3 points 2 hours ago (1 children)

Ah, that's where tuning comes in. Look at the logs, take the average timeout, and tune the tarpit to return a minimal payload: a tiny HTML page containing a single, slightly different URL that leads back into the tarpit. Or, better yet, JavaScript that loads a page of tarpit URLs very slowly. Bots have to be able to run JS, or else they're missing half the content on the web. I'm sure someone has created a JS forkbomb.
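
To make that concrete, here's a toy sketch in Python; every detail in it (port, path, timing) is invented for illustration. It drip-feeds a tiny page whose only link leads straight back into the pit:

```python
# toy HTTP tarpit: serve a minimal page one byte per second,
# with a single link that leads back into the pit
import random
import socketserver
import time

class Tarpit(socketserver.ThreadingTCPServer):
    allow_reuse_address = True
    daemon_threads = True

class Handler(socketserver.StreamRequestHandler):
    def handle(self):
        self.rfile.readline()  # read the request line; ignore the rest
        body = '<html><body><a href="/pit/{}">more</a></body></html>'.format(
            random.randrange(10**9))  # slightly different URL every time
        head = ("HTTP/1.1 200 OK\r\n"
                "Content-Type: text/html\r\n"
                "Content-Length: {}\r\n\r\n".format(len(body)))
        for ch in head + body:  # drip-feed one byte at a time
            try:
                self.wfile.write(ch.encode())
                self.wfile.flush()
                time.sleep(1.0)  # ~1 byte/second: a ~150-byte page takes minutes
            except OSError:
                return  # client gave up; that's fine

if __name__ == "__main__":
    Tarpit(("", 8081), Handler).serve_forever()
```

The real tuning knob is that sleep: long enough to waste the scraper's time, short enough to stay under whatever timeout it uses.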

Variety is the spice of life. AI botnet blacklists are probably the better solution for web content; you can run ssh on a different port and run a tarpit on the standard port, and it will barely affect you. But for the web, if you're running a web server you probably want visitors, and tarpits would be harder to set up to catch only bots.

[–] mholiv@lemmy.world 1 points 2 hours ago (1 children)

I see your point, but I think you underestimate the skill of coders. You make sure your timeout covers JavaScript run time, maybe set a memory limit too. Imagine you wanted to scrape the internet: you could solve all these tarpits, and any capable coder could. Now imagine a team of 20 of the best coders money can buy, each paid €500,000. They can certainly do the same.

I see the appeal of running a tarpit, but I don't see how they can "trap" anyone but script kiddies.

[–] sxan@midwest.social 1 points 20 minutes ago

Nobody is paying software developers €500,000. It might cost the company that much, but no developer is making that much. The highest software engineer salaries are still in the US, and the average is $120k. High-end salaries are $160k; you might creep up a little more than that, but that's also location-specific. Silicon Valley salaries might be higher, but then it costs far more to live in that area.

In any case, the question is ROI. If you have to spend $500,000 to address some sites that are being clever about wasting your scrapers' time, is that data worth it? Are you going to make your $500k back? And you have to keep spending it, because people keep changing tactics and putting in new mechanisms to ruin your business model. Really, the only time this sort of investment makes sense is when you're breaking into a bank and are going to get a big pay-out in ransomware or outright theft. Getting the contents of my blog is never going to be worth the investment.

Your assumption is that slowly served content is considered not worth scraping. If that's the case, then it's easy enough for people to prevent their content from being scraped: put in sufficient delays. This is an actual method for addressing spam: add a delay to each interaction. Even relatively small delays add up and cost spammers money, especially if you run a large email service and do it at scale.

Make the web a little slower. Add a few seconds to each request, on every website. Humans might notice, but probably not enough to be a big bother, while the impact on data harvesters will be huge.

If you think this isn't the defense, consider how almost every Cloudflare interaction (and an increasingly large number of other sites) now includes a time-wasting front page. They usually say something like "making sure you're human" with a spinning disk, but really all they need to be doing is adding 10 seconds to each request. If a scraper is trying to index just a million pages a day, and each page adds a 10-second delay, that's over 2,700 hours of wasted scraper compute time per day. And they're trying to scrape far more than a million pages a day; it's estimated (they don't reveal the actual number) that Google indexes billions of pages every day.

This is good, though; I'm going to go change the rate limit on my web server; maybe those genius software developers will set a timeout such that they move on before they get any content from my site.

[–] drkt@scribe.disroot.org 7 points 16 hours ago

Bots will blacklist your IP if you make it hostile to bots

This will save you bandwidth

[–] douglasg14b@lemmy.world 2 points 17 hours ago

Cool, lots of information provided!

[–] Greg@lemmy.ca 10 points 18 hours ago

What are you hosting and who are your users? Do you receive any legitimate traffic from AWS or other cloud provider IP addresses? There will always be edge cases like people hosting VPN exit nodes on a VPS, but if it's a tiny portion of your legitimate traffic I would consider blocking all incoming traffic from cloud providers and then whitelisting any that make sense, like search engine crawlers, if necessary.
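
AWS at least publishes its address ranges, so most of the work is plumbing. A rough Python sketch that turns the published EC2 ranges into ipset commands (the set name and the EC2-only filter are arbitrary choices; you'd still pipe the output through sh and hook the set into iptables/nftables yourself, and the other big clouds publish similar feeds):

```python
# turn AWS's published EC2 ranges into ipset commands
import json
import urllib.request

URL = "https://ip-ranges.amazonaws.com/ip-ranges.json"

with urllib.request.urlopen(URL) as resp:
    data = json.load(resp)

# IPv4 only here; the feed also has "ipv6_prefixes"
cidrs = sorted({p["ip_prefix"] for p in data["prefixes"] if p["service"] == "EC2"})

print("ipset create aws-ec2 hash:net -exist")
for cidr in cidrs:
    print("ipset add aws-ec2 {} -exist".format(cidr))
```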

[–] crony@lemmy.cronyakatsuki.xyz 14 points 20 hours ago* (last edited 20 hours ago) (1 children)

Try crowdsec.

You can set it up with lists that are updated frequently, have it watch the Caddy proxy logs, and it can then easily block AI/bot-like traffic.

I have it blocking over 100k IPs at the moment.

https://www.crowdsec.net/
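
The Caddy part is mostly an acquisition file pointing at the access logs plus the matching collection; roughly this (paths are examples, double-check against the CrowdSec docs for your install):

```yaml
# appended to /etc/crowdsec/acquis.yaml (or a file under acquis.d/)
filenames:
  - /var/log/caddy/*.log
labels:
  type: caddy
```

Then something like `cscli collections install crowdsecurity/caddy` for the parsers and scenarios, plus a bouncer (the firewall bouncer, or the Caddy bouncer module) to actually enforce the decisions.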

[–] sailorzoop@lemmy.librebun.com 13 points 20 hours ago

Not gonna lie, the $3900/mo at the top of the /pricing page is pretty wild.
I searched "crowdsec docker" and they have docs and all that. Thank you very much; I've heard of CrowdSec before but never paid much attention. I'll absolutely check this out!

[–] waspentalive@lemmy.one 6 points 17 hours ago

Too bad you can't post a usage notice saying that anything scraped to train an AI will be charged $some-huge-amount, then pepper the site with bogus facts, occasionally ask the various AIs about those bogus facts to prove scraping happened, and invoice the AI company.

[–] LodeMike 3 points 15 hours ago

Read access logs and 403 user agents or IPs
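
In Caddy that's a couple of lines, at least for bots that send an honest user agent (the UA list, domain and upstream here are just examples):

```
example.com {
    @aibots header_regexp aibots User-Agent (?i)(claudebot|anthropic|gptbot|ccbot|bytespider)
    respond @aibots 403

    reverse_proxy 100.64.0.2:8080  # placeholder upstream
}
```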

[–] poVoq@slrpnk.net 8 points 20 hours ago* (last edited 19 hours ago) (1 children)

It seems any somewhat easy-to-implement solution gets circumvented by them quickly. Some of the bots do respect robots.txt, though, if you explicitly add their self-reported user-agent (but they change it from time to time). This repo has a regularly updated list: https://github.com/ai-robots-txt/ai.robots.txt/
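
For the crawlers that do honor it, the robots.txt block is just their self-reported names grouped together, e.g. (a hand-picked subset; the repo above has the full, current list):

```
User-agent: ClaudeBot
User-agent: Claude-Web
User-agent: anthropic-ai
User-agent: GPTBot
User-agent: CCBot
User-agent: Bytespider
Disallow: /
```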

In my experience, git forges are especially hit hard, and the only real solution I found is to put a login wall in front, which kinda sucks especially for open-source projects you want to self-host.

Oh and recently the mlmym (old reddit) frontend for Lemmy seems to have started attracting AI scraping as well. We had to turn it off on our instance because of that.

[–] sailorzoop@lemmy.librebun.com 3 points 19 hours ago* (last edited 19 hours ago) (1 children)

In my experience, git forges are especially hit hard

Is that why my Forgejo instance has been hit twice like crazy before...
Why can't we have nice things. Thank you!

EDIT: Hopefully Photon doesn't get in their sights as well. Though after using the official lemmy webui for a while, I do really like it a lot.

[–] poVoq@slrpnk.net 2 points 19 hours ago

Yeah, Forgejo and Gitea. I think it is partially a problem of insufficient caching on the side of these git forges that makes it especially bad, but in the end that is victim blaming 🫠

Mlmym seems to be the target because it is mostly JavaScript-free and therefore easier to scrape, I think. But the other Lemmy frontends are also not well protected. Lemmy-ui doesn't even let you easily add a custom robots.txt; you have to override it in the reverse proxy.
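
At least with Caddy in front the override is short; something along these lines (hostname, path and port are placeholders):

```
lemmy.example.org {
    handle /robots.txt {
        root * /srv/overrides   # directory holding your custom robots.txt
        file_server
    }
    handle {
        reverse_proxy localhost:1234  # lemmy-ui
    }
}
```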

[–] solrize@lemmy.world 6 points 19 hours ago

Might be worth adding a fail2ban filter that recognizes the scrapers and blocks them in iptables.
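
A rough sketch of what that could look like, a filter plus a jail (it assumes a combined-format access log; Caddy logs JSON by default, so the regex would need adapting, and per-IP bans only go so far when every request arrives from a different AWS address anyway):

```
# /etc/fail2ban/filter.d/ai-scrapers.conf (filter name is just an example)
[Definition]
failregex = ^<HOST> .* "[^"]*(?:ClaudeBot|GPTBot|CCBot|Bytespider)[^"]*"\s*$
ignoreregex =

# /etc/fail2ban/jail.local
[ai-scrapers]
enabled  = true
port     = http,https
filter   = ai-scrapers
logpath  = /var/log/caddy/access.log
maxretry = 1
findtime = 10m
bantime  = 1d
```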