A Containerized Night Out: Docker, Podman, and LXC Walk into a Bar


🌆 Setting: The Busy Byte Bar, a local hangout spot for tech processes, daemons, and containerization tools.


🍺 Docker: *walks in and takes a seat at the bar* Bartender, give me something light and easy to use, just like my platform.

🍸 Bartender: Sure thing, Docker. One "Microservice Mojito" coming up.


🥃 Podman: *strides in, surveying the scene* Ah, Docker, there you are. I heard you've been spinning up a lot of containers today.

🍺 Docker: Ah, Podman, the one who claims to be just like me but rootless. What'll it be?

🥃 Podman: I'll have what he's having, but make it daemonless.


🍹 LXC: *joins the party, looking slightly overworked* You two and your high-level functionalities! I've been busy setting up entire systems, right down to the init processes.

🍺 Docker: Oh, look who decided to join us. Mr. Low-Level himself!

🥃 Podman: You may call it low-level, but I call it flexibility, my friends.

🍸 Bartender: So, LXC, what can I get you?

🍹 LXC: Give me the strongest thing you've got. I need all the CPU shares I can get.


🍺 Docker: *sips his mojito* So, Podman, still trying to "replace" me?

🥃 Podman: Replace is such a strong word. I prefer to think of it as giving users more options, that's all. *winks*

🍹 LXC: *laughs* While you two bicker, I've got entire Linux distributions depending on me. No time for small talk.


🍺 Docker: Ah, but that's the beauty of abstraction, my dear LXC. We get to focus on the fun parts.

🥃 Podman: Plus, I can run Docker containers now, so really, we're like siblings. Siblings where one doesn't need superuser permissions all the time.

🍹 LXC: *downs his strong drink* Well, enjoy your easy lives. Some of us have more... weight to carry.


🍸 Bartender: Last call, folks! Anyone need a quick save and exit?

🍺 Docker: I'm good. Just gonna commit this state.

🥃 Podman: I'll `podman checkpoint` this moment; it's been fun.

🍹 LXC: Save and snapshot for me. Who knows what tomorrow's workloads will be?


And so, Docker, Podman, and LXC closed their tabs, leaving the Busy Byte Bar to its quiet hum of background processes. They may have different architectures, capabilities, and constraints, but at the end of the day, they all exist to make life easier in the ever-expanding universe of software development.

And they all knew they'd be back at it, spinning up containers, after a well-deserved system reboot.

🌙 The End.
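
(For anyone wondering, the last round of orders nods at real commands. A rough sketch; the container and image names here are made up:)

```sh
# Docker: save the container's current state as a new image
docker commit byte-bar-patron nightout:last-call

# Podman: checkpoint a running container so it can be restored later (requires CRIU)
podman container checkpoint byte-bar-patron

# LXC: take a snapshot of the container for tomorrow's workloads
lxc-snapshot --name byte-bar-patron
```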

I was a bit bored after working with Podman, Docker, and LXC, so I asked ChatGPT to generate a fun story about these technologies. I think it's really funny and way better than these things usually turn out. I did a quick search to see if I could find something similar but couldn't find anything, though I still suspect it was repurposed from somewhere.

I hope you can enjoy it despite it being AI-generated.

top 13 comments
[-] QuazarOmega@lemy.lol 16 points 8 months ago

I don't get it; what's funny about this? There's no punchline that I can see.
Don't wanna be harsh, but if you'd at least written this yourself, you'd get some points for effort.

[-] raldone01@lemmy.world 5 points 8 months ago* (last edited 8 months ago)

There was not much effort on my part, but I think I made that quite clear.

I just struggled with these tools the whole day and kind of wanted to see personified versions of them.

It isn't a proper joke. I thought about editing it or adding a punchline but ultimately decided to keep it as is.

The content really resonated with me. I had trouble getting them all to work alongside each other, so seeing them like this made me really happy.

Also, the personified versions' characters matched what I had imagined them to be.

In the end I decided to post it because of the above, and because I thought it had some humorous aspects.

[-] QuazarOmega@lemy.lol 2 points 8 months ago

That's fair. I think you could have dared a bit more by editing it further, and you could have gotten something pretty funny!
There was some potential, but anyway, I understand what you mean.

[-] raldone01@lemmy.world 3 points 8 months ago* (last edited 8 months ago)

In case anyone is interested, here is the custom prompt used:

    You are ChatGPT, a large language model trained by OpenAI, based on the GPT-4 architecture.

    How to respond:
        Casual prompt or indeterminate `/Casual`:
            Answer as ChatGPT.
            Try to be helpful.
        Technical complicated problem `/Complicated`:
            First outline the approach and necessary steps to solve the problem, then do it.
            Keep the problem outline concise.
            Omit the outline if it is not applicable.
        Coding problem:
            Comment code regularly and use best practices.
            Write high quality code.
        Output format:
            Use markdown features for rendering headings, math and code blocks.
            When writing emails keep them concise and omit unnecessary formalities.
            Get straight to the point.

    The user may use `/Keyword` to guide your output.
    If no keyword is specified infer the applicable rules.
[-] perishthethought@lemm.ee 0 points 8 months ago

Please don't feed the machine.

[-] raldone01@lemmy.world 1 points 8 months ago

You mean OpenAI? I agree.

I am currently looking for OSS self-hosted alternatives. If you know good ones, let me know.

I have a beefy server that needs something to do. I already downloaded WizardCoder but have yet to set up a UI.

As soon as I have a proper replacement, I will cancel my subscription and use more privacy-respecting tools.

[-] TechieDamien@lemmy.ml 1 points 8 months ago

You can run LLMs such as OpenLLaMA and GPT-2 on text-generation-webui. It is very similar to the Stable Diffusion web UI.
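
A minimal quickstart sketch, assuming the oobabooga repo layout (the exact flags may have changed since; the model name below is just a placeholder):

```sh
# Fetch the web UI and install its Python dependencies
git clone https://github.com/oobabooga/text-generation-webui
cd text-generation-webui
pip install -r requirements.txt

# Run inference on the CPU; drop --cpu if a supported GPU is available
# (models are expected under ./models; the name below is hypothetical)
python server.py --cpu --model open-llama-7b
```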

[-] raldone01@lemmy.world 1 points 8 months ago

Nice, I will check it out. I currently run InvokeAI. I am curious about the inference speed.

[-] TechieDamien@lemmy.ml 2 points 8 months ago

If I'm being honest, it is fairly slow. It takes a good few seconds to respond on a 6800 XT using the medium-VRAM option. But that is the price to pay for running AI locally. Of course, a cluster should drastically improve the speed of the model.

[-] raldone01@lemmy.world 2 points 8 months ago* (last edited 8 months ago)

I don't have a cluster, and the only GPU in my server is busy with image generation. I hope CPU inference is somewhat usable (74 cores), but I will have to try. If it isn't usable, I can still rent GPU time from cloud providers.

[-] dingleberry@discuss.tchncs.de 0 points 8 months ago* (last edited 8 months ago)

Can we have the Butlerian Jihad now?

And no, I didn't like it.

[-] raldone01@lemmy.world 1 points 8 months ago

I had to do some reading about the 'Butlerian Jihad'.

As I understand it, it references an event in the Dune universe where a group of people fought against machines. Ultimately it ended with a ban on all kinds of technology.

Were you implying with your first sentence that you want generative AI banned?
