this post was submitted on 13 Sep 2024
465 points (99.4% liked)

Technology

[–] beerclue@lemmy.world 30 points 5 days ago (5 children)

Are you guys really pulling more than 40 images per hour? Isn't the free tier enough?

[–] pop@lemmy.ml 14 points 5 days ago

On Lemmy, it's a sin to make money off your work, especially if it's an open-source core project funded by paid infrastructure/support. You can only ask for donations and/or quit. No in-between.

[–] gencha@lemm.ee 9 points 5 days ago (1 children)

A single malfunctioning service that restarts in a loop can exhaust the limit near instantly. And now you can't bring up any of your services, because you're blocked.
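For reference, Docker Hub exposes your remaining pull allowance in response headers, checkable via the `ratelimitpreview/test` image Docker publishes for exactly this purpose. A sketch, assuming `curl` and `jq` are installed (the HEAD request itself does not count against the limit):

```shell
# Fetch an anonymous pull token for the rate-limit preview image.
TOKEN=$(curl -s "https://auth.docker.io/token?service=registry.docker.io&scope=repository:ratelimitpreview/test:pull" \
  | jq -r .token)

# HEAD the manifest and print the ratelimit-limit / ratelimit-remaining headers.
curl -s --head -H "Authorization: Bearer $TOKEN" \
  "https://registry-1.docker.io/v2/ratelimitpreview/test/manifests/latest" \
  | grep -i ratelimit
```

Watching `ratelimit-remaining` drop while a crash-looping service re-pulls makes the failure mode above very visible.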

I've been there plenty of times. If you have to rely on docker.io, you'd better pay up. Running your own NexusRM or Harbor to proxy it can drastically improve your situation, though.
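The client side of that proxy setup is a one-line daemon config. A minimal sketch, assuming a caching proxy is already listening at `mirror.internal:5000` (hypothetical hostname); the daemon tries the mirror first and falls back to docker.io on a miss:

```shell
# Point the Docker daemon at the local mirror, then restart it.
cat <<'EOF' | sudo tee /etc/docker/daemon.json
{
  "registry-mirrors": ["https://mirror.internal:5000"]
}
EOF
sudo systemctl restart docker
```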

Docker is a pile of shit. Steer clear entirely of any of their offerings if possible.

[–] beerclue@lemmy.world 5 points 5 days ago (1 children)

I use Docker at home and at work, and Nexus at work too. I really don't understand... even a malfunctioning service shouldn't pull the image over and over; there should be a cache. It could be some edge case, but I've never experienced it.

[–] gencha@lemm.ee -1 points 2 days ago

Ultimately, it doesn't matter what caused you to be blocked from Docker Hub due to rate-limiting. When you're in that scenario, it's most cost-efficient to buy your way out.

If you can't even imagine what would lead up to such a situation, congratulations, because it really sucks.

Yes, there should be a cache. But some people force-pull images on service start to ensure they get the latest "latest" tag. Every tag floats, not just "latest", and lots of people don't pin digests in their OCI references, which effectively means refreshing cached tags regularly. Especially when you start critical services, you might pull their tag in case it has drifted.
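Pinning by digest avoids that re-resolution entirely. A sketch, using `nginx` as a stand-in image; a digest reference is immutable, so a restart can never silently pull something new:

```shell
# Resolve the immutable digest behind the floating tag currently in the cache.
docker pull nginx:latest
docker inspect --format '{{index .RepoDigests 0}}' nginx:latest
# Prints a reference of the form nginx@sha256:<64-hex-digest>.

# Pin that digest in your compose file or unit instead of the tag:
#   image: nginx@sha256:<digest-from-above>
```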

Say you have multiple hosts in your home lab, all running a good couple of services. You roll out a container runtime upgrade across the network; it resets all caches and restarts all services. Some pulls fail, some of them for DNS and other critical services. Suddenly your entire network is down and you can't even get on the Internet, because your Pi-hole doesn't start. You can't recover, because you're rate-limited.

I've been there a couple of times until I worked on better resilience, but relying on docker.io is still a problem in general. I did pay them for quite some time.

This is only one scenario where their service bit me. As a developer, it gets even more unpleasant, and I'm not talking commercial.

[–] sugar_in_your_tea@sh.itjust.works 10 points 5 days ago (1 children)

Even at work we don't pull that many, and we have dozens of developers.

[–] Schmeckinger@lemmy.world 4 points 5 days ago

Also, it's 40 per hour per user.

[–] aniki@discuss.tchncs.de 8 points 5 days ago

We just build from scratch and pull nothing.

[–] Pieisawesome@lemmy.world 2 points 5 days ago (1 children)

One of the previous places I worked at had about a dozen outbound IP addresses (company VPN).

We also had 10k developers who all used docker.

We exhausted the rate limit constantly. They paid for an unlimited account, and we would just queue an automation that pulled the image and mirrored it into the local artifact repo.

[–] model_tar_gz@lemmy.world 6 points 4 days ago (1 children)

An enterprise with 10k developers should just invest in its own image hub. It's not really that hard to do. Docker even open-sourced it under Apache 2.0.
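The open-sourced piece is the Distribution registry (the `registry` image), which can also run as a pull-through cache in front of Docker Hub. A sketch; the container name and local port are assumptions:

```shell
# Run Distribution as a pull-through cache: misses are fetched once from
# Docker Hub, then served locally on subsequent pulls.
docker run -d --name hub-mirror -p 5000:5000 \
  -e REGISTRY_PROXY_REMOTEURL=https://registry-1.docker.io \
  registry:2
```

Clients then point `registry-mirrors` in their daemon config at this host.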

[–] Pieisawesome@lemmy.world 1 points 4 days ago

They did.

Regardless, they still need a way to pull new images into it.