It is definitely the exact opposite of this, although I understand why you would think so.
The thing with systems like these is that they are mission critical, which is usually defined as: failure means loss of life or significant monetary loss (on the order of tens of millions of dollars).
Mission-critical software is not unit tested at all; it is proven. You take the code line by line, prove what each line does and how it does it, and document every possible outcome.
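To make that a bit more concrete, here is a minimal sketch of what "prove and document every outcome" can look like, written in C with ACSL-style contract annotations (the specification language used by tools like Frama-C). The function, names, and limits are invented for illustration and aren't from any real train system:

```c
#include <stdint.h>

/*@ requires speed_kmh <= 500 && limit_kmh <= 500;
  @ assigns \nothing;
  @ behavior over_limit:
  @   assumes speed_kmh > limit_kmh;
  @   ensures \result == 1;
  @ behavior within_limit:
  @   assumes speed_kmh <= limit_kmh;
  @   ensures \result == 0;
  @ complete behaviors;
  @ disjoint behaviors;
  @*/
int overspeed(uint16_t speed_kmh, uint16_t limit_kmh)
{
    /* Outcome 1: speed strictly above the limit -> 1 (overspeed)    */
    /* Outcome 2: speed at or below the limit    -> 0 (no overspeed) */
    /* The two behaviors above are declared complete and disjoint,
       so the prover checks that every possible input falls into
       exactly one documented case, with no side effects.            */
    return (speed_kmh > limit_kmh) ? 1 : 0;
}
```

The point isn't the particular tool; it's that every case is enumerated and mechanically checked, rather than sampled by a handful of unit tests.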
Mission-critical software is ridiculously expensive to develop for this exact reason. And upgrading to deploy on different systems means running in a new environment, which introduces a ton of unknown factors. What happens, on a line-by-line basis, when you run this code on a faster processor? Does this chip process the commands in a slightly different order because it uses a slightly different algorithm? You don't know until you take the new hardware, the new software, and the code, and go through the lengthy process of proving it all over again, until you can document that it will not result in any unusual train behavior.
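As a purely hypothetical illustration of that hardware dependence (none of these names or numbers come from an actual train controller): a timing loop that was calibrated and proven against one processor quietly changes its behavior on a faster one, even though not a single line of source code changed.

```c
#include <stdint.h>

/* Iteration count chosen to give roughly 10 ms on the original CPU.
   Both the target duration and the count are made-up numbers.        */
#define CALIBRATED_ITERATIONS 80000UL

static void delay_between_commands(void)
{
    /* A pure spin loop: its wall-clock duration depends on the clock
       rate, the pipeline, and how the compiler translates it, none of
       which is visible in the source. On a faster processor, the
       "commands are at least 10 ms apart" property proven for the old
       system silently stops holding, which is why the proof has to be
       redone for the new hardware.                                    */
    for (volatile uint32_t i = 0; i < CALIBRATED_ITERATIONS; i++) {
        /* busy wait */
    }
}
```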