Long story short, my VPS, which forwards traffic to my home servers over Tailscale, got hammered by thousands of requests per minute from Anthropic's Claude AI crawler, all of them coming from different AWS IPs.

The VPS has a 1TB monthly cap, but it's still kinda shitty to get huge spikes like the 13GB it ate in just a couple of minutes today.

How do you deal with something like this?
I'm only really running a Caddy reverse proxy on the VPS, which forwards my home server's services through Tailscale.

I'd really like to avoid solutions like Cloudflare, since they f over CGNAT users very frequently and all that. I don't think a WAF would help with this(?), but rate limiting on the reverse proxy might work.
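If you go the rate-limiting route, note that stock Caddy doesn't ship a rate limiter; the third-party github.com/mholt/caddy-ratelimit module adds a `rate_limit` directive, so you'd need a custom build (e.g. via xcaddy). A minimal sketch assuming that module is compiled in; the site name, upstream address, and limits are placeholders:

```
# Caddyfile sketch - assumes a Caddy build that includes
# github.com/mholt/caddy-ratelimit (e.g. built with xcaddy).
{
    # rate_limit is a non-standard directive, so give it an explicit order
    order rate_limit before reverse_proxy
}

example.com {
    rate_limit {
        zone per_client {
            key    {remote_host}   # one bucket per client IP
            events 60              # allow 60 requests...
            window 1m              # ...per minute, then the module answers 429
        }
    }
    reverse_proxy 100.64.0.2:8080  # placeholder: your home service over Tailscale
}
```

One caveat for your situation: since the requests are spread across lots of different AWS IPs, a per-IP key may never trip the limit, so combining this with user-agent/IP-range matching (discussed further down the thread) is probably more effective.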

(The VPS has fail2ban, and I'm using /etc/hosts.deny for manual blocking. There's a WIP website on my root domain with a robots.txt that should be denying AWS bots as well...)
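For reference, Anthropic's crawler identifies itself as ClaudeBot and is supposed to honor robots.txt, so entries along these lines should cover it, assuming the traffic actually respects the file; the extra user-agent tokens are ones commonly attributed to Anthropic's other agents and may be redundant:

```
# robots.txt sketch - ClaudeBot is Anthropic's documented crawler user agent;
# the other tokens are commonly listed for their services and may be unnecessary.
User-agent: ClaudeBot
Disallow: /

User-agent: anthropic-ai
Disallow: /

User-agent: Claude-Web
Disallow: /
```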

I'm still learning and would really appreciate any suggestions.

[–] doodledup@lemmy.world 1 points 6 hours ago (1 children)

I don't quite understand how this is deployed. Hosting it behind a dedicated subdomain or path kind of defeats the purpose, as the bots can still access the actual website no problem.

[–] Natanael@infosec.pub 1 points 3 hours ago (1 children)

The trick is distinguishing them by behavior and switching what you serve them.

[–] doodledup@lemmy.world 1 points 2 hours ago (1 children)

How would I go about doing that? This seems to be the challenging part. You don't want false positives, and you also want it to be reliable.

[–] Natanael@infosec.pub 2 points 2 hours ago

If you've already noticed the incoming traffic is weird, you look for what distinguishes the sources you don't want. You write rules based on behaviors like user agent, order of requests, IP ranges, etc., put them in your web server, and tell it to check whether incoming requests match the rules as a session starts.
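As a rough example of what that can look like in Caddy: named matchers on the user agent and on source IP ranges. The regex and the ranges below are illustrative placeholders; pull the real patterns out of your own access logs.

```
# Caddyfile sketch - illustrative matchers only; derive real patterns from your logs.
example.com {
    @aibots {
        header_regexp User-Agent (?i)(claudebot|anthropic)
    }
    @badranges {
        remote_ip 203.0.113.0/24 198.51.100.0/24   # placeholder ranges from your logs
    }

    # Matched requests never reach the proxied service
    respond @aibots 403
    respond @badranges 403

    reverse_proxy 100.64.0.2:8080   # placeholder: home server over Tailscale
}
```

In Caddy's default directive order, `respond` runs before `reverse_proxy`, so matched requests get the 403 and everything else falls through to the proxy as usual.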

Unless you're a high-value target for them, they won't put endless resources into making their systems mimic regular clients. They might keep rotating IP ranges, but that usually happens ~weekly, and you can just check the logs and ban the new ranges within minutes. Changing client behavior to blend in is harder at scale - bots simply won't look for the same things as humans in the same ways. They're too consistent; even when they try to be random, they're too consistently random.

When enough rules match, you throw in either a redirect or an internal URL rewrite rule for that session to point them to something different.
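In Caddy terms, that last step could look roughly like this; the matcher pattern, decoy target, and upstream are all placeholders:

```
# Caddyfile sketch - same matcher idea as above; decoy targets are placeholders.
example.com {
    @aibots {
        header_regexp User-Agent (?i)(claudebot|anthropic)
    }

    # Option 1: external redirect away from your content
    redir @aibots https://example.net/go-away 302

    # Option 2 (instead): internal rewrite to a static decoy page
    # rewrite @aibots /decoy.html

    reverse_proxy 100.64.0.2:8080   # placeholder upstream
}
```

`redir` sends the client a visible HTTP redirect, while `rewrite` quietly serves different content from the same URL, which is closer to the "switch what you serve them" idea.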