this post was submitted on 05 Sep 2024
919 points (99.0% liked)

Programmer Humor

19503 readers
998 users here now

Welcome to Programmer Humor!

This is a place where you can post jokes, memes, humor, etc. related to programming!

For sharing awful code, there's also Programming Horror.

founded 1 year ago
[–] kokesh@lemmy.world 131 points 2 months ago (2 children)

Especially a server accessible only by SSH....

[–] cypherix93@lemmy.world 88 points 2 months ago (1 children)

I can't be bothered to walk down to the basement, so practically my server is also only accessible by SSH

[–] RustyShackleford@programming.dev 16 points 2 months ago

Especially after age 40 and a knee surgery... I'm tired boss! 😩

[–] 30p87@feddit.org 40 points 2 months ago (1 children)

I'm 150+km away from my server, with literally everything on it lol

[–] yhvr@lemm.ee 10 points 2 months ago (2 children)

I'm at college right now, which is a 3 hour drive away from my home, where a server of mine is. I just have to ask my parents to turn it back on when the power goes out or it gets borked. I access it solely through RustDesk and Cloudflare Tunnels SSH (it's actually pretty cool, they have a web interface for it).

I have no car, so there's really no way to access it in case something catastrophic happens. I have to rely on hopes, prayers, and the power of a probably outdated Pop!_OS install. Totally doesn't stress me out; I'll just say I like to live on the edge :^)

[–] ironhydroxide@sh.itjust.works 9 points 2 months ago (1 children)

Set up a PiKVM as IPMI and you'll need at least one more layer of failure before you completely lose connectivity

[–] 30p87@feddit.org 6 points 2 months ago

Currently the server(s) are in my room, which is so messy my dad probably wouldn't even enter it voluntarily. And in case grub/fstab/crypttab/etc. are messed up, which is probably the most common error, he probably couldn't solve it by himself. Soon everything's gonna live in its own little room in the basement, so it's actually gonna be easier to access.

[–] wintermute_oregon@lemm.ee 62 points 2 months ago (2 children)

In the old days some of the servers took an hour to reboot. That was stressful when you still couldn't ping them after an hour.

[–] NocturnalMorning@lemmy.world 46 points 2 months ago (1 children)

Don't say stuff like that. You're gonna give me a heart attack.

[–] wintermute_oregon@lemm.ee 36 points 2 months ago (5 children)

The more disks you had, the longer it took. It walked the SCSI bus, which took forever, so more disks meant an even longer wait.

Since everything was remote, you'd have to call remote hands, and they weren't technical. Also no cameras, since it was the 90s.

Now when I restart a VM or container, I panic if it's not back up in 10 minutes.

[–] NocturnalMorning@lemmy.world 16 points 2 months ago (1 children)

I get annoyed if my pc isn't restarted in 30 seconds now.

[–] wintermute_oregon@lemm.ee 9 points 2 months ago (1 children)

I think mine takes like 2 minutes. It's ten years old. I've been putting off upgrading because of the cost of video cards.

[–] Thassodar@lemm.ee 8 points 2 months ago (1 children)

I got an M.2 drive last year after having a motherboard capable of it for 3-4 years, and naturally named it "Plash Speed".

[–] fuckwit_mcbumcrumble@lemmy.dbzer0.com 20 points 2 months ago (1 children)

I like how POSTing got fairly fast. Then we started putting absurd amounts of RAM into servers, so now they're back to slow.

Like we have a high-clock-speed dual 32-core AMD server with 1TB of RAM that takes at least 5 minutes to do its RAM check. So every time you need to reboot, you're just sitting there twiddling your thumbs, waiting anxiously.

[–] wintermute_oregon@lemm.ee 9 points 2 months ago (6 children)

I will date myself: these machines had a lot of memory as well, which added to the slow reboot. I think it was 16 gigs.

The R series from IBM took forever. The P series was faster, but still slow.

[–] xmunk@sh.itjust.works 46 points 2 months ago

Initializing VPC...

Configuring VPC...

Constructing VPC...

Planning VPC...

VPC Configuration...

Step (31/12)...

Spooling up VPC...

VPC Configuration Finished...

Beginning Declaration of VPC...

Declaring Configuration of VPC...

Submitting Paperwork for VPC Registration with IANA...

Redefining Port 22 for official use as our private VPC...

Recompiling OpenSSH to use Port 125...

Resetting all open SSH connections...

Your VPC declaration has been configured!

Initializing Declared VPC...

[–] SaharaMaleikuhm@feddit.org 46 points 2 months ago (2 children)

Never update, never reboot. Clearly the safest method. Tried and true.

[–] bamfic@lemmy.world 15 points 2 months ago

Found the debian user!

[–] naeap@sopuli.xyz 14 points 2 months ago

Never touch a running system
Until you have an inviting hole in your system

Nevertheless, I'm panicking every time I update my server infrastructure...

[–] mikyopii@programming.dev 44 points 2 months ago (1 children)

When you make a potentially system-breaking change and forget to make a snapshot of the VM beforehand...

[–] MystikIncarnate@lemmy.ca 17 points 2 months ago (1 children)

There's always backups... Right?

.... Right?

[–] WhyJiffie@sh.itjust.works 18 points 2 months ago (2 children)

oh there is. from 3 years ago, and some

[–] Buddahriffic@lemmy.world 8 points 2 months ago

Someone set up a script to automatically create daily backups to tape. Unfortunately, it's still the first tape that was put in there 3.5 years ago; every backup since that tape filled up has failed. It might as well have failed silently, because everyone who received the email with the error message filtered it to a folder they generally ignored.

[–] nick@midwest.social 43 points 2 months ago (2 children)

Just had to restart our main MySQL instance today. Had to do it at 6am since that’s the lowest traffic point, and boy howdy this resonates.

2 solid minutes of the stack throwing 500 errors until the db was back up.

[–] xmunk@sh.itjust.works 20 points 2 months ago (2 children)

If you have the bandwidth, it is absolutely worth investing in a maintenance mode for your system: just check some flat file on disk for a flag before loading up a router or anything, and if it's engaged, send back a static HTML file with ye olde "under construction" picture.

[–] dondelelcaro@lemmy.world 14 points 2 months ago

Bonus points if your static site sends a 503 with a retry after header.
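The flag-file check and the 503-with-Retry-After described above might look something like this CGI-style sketch (the paths, the 300-second Retry-After, and the fallback page are all assumptions, not from the thread):

```shell
#!/bin/sh
# Sketch of a flat-file maintenance flag. Check the flag before doing any
# real routing work; if it's set, short-circuit with a static response.
MAINT_FLAG="${MAINT_FLAG:-/var/run/maintenance.flag}"
MAINT_PAGE="${MAINT_PAGE:-/var/www/under_construction.html}"

maintenance_response() {
    # A 503 plus Retry-After tells well-behaved clients when to come back.
    printf 'HTTP/1.1 503 Service Unavailable\r\n'
    printf 'Retry-After: 300\r\n'
    printf 'Content-Type: text/html\r\n\r\n'
    cat "$MAINT_PAGE" 2>/dev/null || printf '<h1>Under construction</h1>\n'
}

if [ -f "$MAINT_FLAG" ]; then
    maintenance_response
    exit 0
fi
# ...otherwise fall through into the normal application...
```

Engaging maintenance mode is then just `touch /var/run/maintenance.flag`, and disengaging is `rm` - no deploy or restart needed.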

[–] umbrella@lemmy.ml 35 points 2 months ago (5 children)

this week i ran sudo shutdown now on our main service right at the end of the workday because i thought it was a local terminal.

not a bright move.

[–] savvywolf@pawb.social 39 points 2 months ago (1 children)

There's a package called molly-guard which will check to see if you are connected via ssh when you try to shut it down. If you are, it will ask you for the hostname of the system to make sure you're shutting down the right one.

Very useful program to just throw onto servers.
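This isn't molly-guard itself, but the idea behind it can be sketched in a few lines of shell (the function names here are made up for illustration, and the sketch only echoes what it would do instead of actually rebooting):

```shell
#!/bin/sh
# molly-guard-style confirmation: if the shell looks like an SSH session,
# demand the machine's hostname before allowing a shutdown/reboot.
confirm_host() {
    if [ -n "$SSH_CONNECTION" ]; then
        printf 'SSH session detected. Type the hostname to continue: '
        read -r answer
        if [ "$answer" != "$(hostname)" ]; then
            echo 'Hostname mismatch - aborting.' >&2
            return 1
        fi
    fi
}

guarded_reboot() {
    # In a real wrapper this would exec /sbin/reboot; here we just echo.
    confirm_host && echo "would run: /sbin/reboot $*"
}
```

The real package wraps `shutdown`, `reboot`, `halt`, and `poweroff` the same way, so the muscle-memory command stops and asks before it can hurt you.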

[–] umbrella@lemmy.ml 14 points 2 months ago (1 children)

nice. got it installed to test it out

[–] trolololol@lemmy.world 12 points 2 months ago

We got the Trojan in, let's move move move!

[–] mox@lemmy.sdf.org 9 points 2 months ago (1 children)

Oops.

Since you're using sudo, I suggest setting different passwords on production, remote, and personal systems. That way, you'll get a password error before a tired/distracted command executes in the wrong terminal.

[–] umbrella@lemmy.ml 17 points 2 months ago* (last edited 2 months ago)

i have different passwords but i type them so naturally it didn't even register.

"wrong password."

"oh, i'm on the server, here's the right password:"

"no wait"

[–] Trainguyrom@reddthat.com 8 points 2 months ago

I was making after-hours config changes on a pair of mostly-but-not-entirely redundant Cisco L3 switches which basically controlled the entire network at that location. While updating the running configs I mixed up which SSH session was which switch and accidentally gave both switches the same IP address, and before I noticed the error I had copied the running config to the startup config.

Due to other limitations, and the fact that these changes were to fix DNS issues (so I couldn't rely on DNS to save me), I ended up repeatedly sshing in by IP until I got the right switch and trying to make the change before my session died from dropped packets in the mucked-up network situation I had created. That easily added a couple of hours of cleanup to the maintenance I was doing

[–] naeap@sopuli.xyz 8 points 2 months ago

Happens to everyone

Just having a multitude of terminals open with a mix of test environment and (just for comparison) an open connection to the production servers...

We were at a fair/exhibition once, and on the first day people working on an actual customer project asked us if they could compare with our code.
Obviously they flashed the wrong PLC, and we were stuck dead in the first hours of the exhibition.
I still think that place was cursed, as we also had to re-solder some connections on our robot multiple times, and the cherry on top was the system flash dying - where I had fucked up, because I had only finished everything late at night and didn't make a complete backup of everything.
But it seems that when luck runs out, you lose on all fronts.

At least I was able to restore everything in 20 minutes, which must be some kind of record.
But I was shaking so much from the stress that I couldn't type efficiently anymore, and was lucky to have a colleague who just calmly entered what I told him to; with that we were able to get the showcase up and running again.

Well, at least the beer afterwards tasted like the liquid of the gods

[–] LiveLM@lemmy.zip 6 points 2 months ago (1 children)

Best thing I did was change my shell prompt so I can easily tell when it isn't my machine

[–] umbrella@lemmy.ml 7 points 2 months ago* (last edited 2 months ago) (2 children)

you mean the user@machine:$ thing? how do you have yours?
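One common way to do this (an assumption - the thread doesn't say what LiveLM actually uses) is to key the prompt color off `$SSH_CONNECTION` in `~/.bashrc`, so remote shells scream at you in red:

```shell
# ~/.bashrc - keep the plain user@host prompt locally,
# but make it bold red whenever the shell is a remote SSH session.
if [ -n "$SSH_CONNECTION" ]; then
    PS1='\[\e[1;31m\]\u@\h\[\e[0m\]:\w\$ '   # red user@host on servers
else
    PS1='\u@\h:\w\$ '
fi
```

A per-machine color (or putting the hostname in the terminal title) works just as well; the point is that prod should never look like your laptop.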

[–] ignotum@lemmy.world 27 points 2 months ago (5 children)

I have more than once typed shutdown instead of reboot when working on a remote machine... always fun

[–] RandomLegend@lemmy.dbzer0.com 20 points 2 months ago* (last edited 1 week ago)

Make an alias so that when you type shutdown it restarts instead, and if you really want to shut down, make an alias like:

Yesireallywanttoshutdown
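A sketch of that trap in `~/.bashrc` on the server (the long alias name is the one suggested above; note this only protects interactive shells):

```shell
# The trailing space in the sudo alias makes bash also expand the word
# after "sudo" as an alias, so "sudo shutdown" is caught too.
alias sudo='sudo '
alias shutdown='reboot'                               # a typed "shutdown" reboots instead
alias Yesireallywanttoshutdown='/sbin/shutdown now'   # the real thing, hard to type by accident
```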

[–] chatokun@lemmy.dbzer0.com 7 points 2 months ago

Networking: we had a remote office in Europe (I'm in the US) and wanted to reset a phone. The phone was on port 10 of the Cisco switch; port 1 went to the firewall (not my design, already in place).

Helping my coworker, I tell her to shut port 10.

Shut port 1, enter.

Ok... office is offline and on another continent...

[–] umbrella@lemmy.ml 6 points 2 months ago

i have the horrible habit of using shutdown now because of my personal computers. a lot more fun.

[–] NastyNative@mander.xyz 22 points 2 months ago

Tbh there is nothing more taxing on my mental health than doing maintenance on our production servers.

[–] WagnasT@lemmy.world 17 points 2 months ago* (last edited 2 months ago) (2 children)

when it was the wrong server and you're hoping it comes back up within 5 minutes, before Nagios starts sending alerts

[–] sep@lemmy.world 9 points 2 months ago

I install molly-guard on important machines for this reason. It's so easy to do a reboot in the wrong SSH session

[–] tiramichu@lemm.ee 8 points 2 months ago

If a tree falls in the woods...

[–] pedz@lemmy.ca 11 points 2 months ago* (last edited 2 months ago)

I work with IBM i/AS400 servers and those are not exactly the quickest thing to "reboot" (technically an IPL). Especially the old ones. I have access to the HMC/console but even this sometimes takes several minutes (if not dozens) just to show what's going on.

It's always a bit stressful to see the codes passing one after the other and then it stops on one and seems to get stuck there for a while before continuing the IPL process. Maybe it's applying PTFs (updates) or something, and you just have to wait while even the console is blank.

I've been monitoring those servers for years and I still sometimes wonder if it hung during the IPL or if it's just doing its thing, because this part, even with the codes, is not very verbose.

Fortunately it's also very stable so it pretty much always comes back a few minutes after you start wondering why the hell it's taking so long.

[–] shoulderoforion@fedia.io 9 points 2 months ago (2 children)

....... and you're updating it remotely

[–] lnxtx@feddit.nl 7 points 2 months ago (1 children)

Dell PowerEdge R620, I'm talking to you.

[–] draughtcyclist@lemmy.world 7 points 2 months ago

Y'all need high availability in your lives.

[–] 30p87@feddit.org 5 points 2 months ago

And then you wonder if you typed reboot or poweroff

(Or 6/0 for the debian people)
