this post was submitted on 21 Jul 2024
233 points (99.2% liked)


Microsoft says it estimates that 8.5m computers around the world were disabled by the global IT outage.

It’s the first time a figure has been put on the incident, and it suggests this could be the worst cyber event in history.

The glitch came from a cybersecurity company called CrowdStrike, which sent out a corrupted software update to its huge number of customers.

Microsoft, which is helping customers recover, said in a blog post: "We currently estimate that CrowdStrike’s update affected 8.5 million Windows devices."

all 47 comments
[–] thisbenzingring@lemmy.sdf.org 36 points 4 months ago (2 children)

All I know is that I had to personally fix 450 servers myself, and that doesn't include the workstations that are probably still broken and will need to be fixed on Monday

😮‍💨

[–] qjkxbmwvz@startrek.website 14 points 4 months ago (2 children)

Is there any automation available for this? Do you fix them sequentially or can you parallelize the process? How long did it take to fix 450?

Real clustermess, but curious what fixing it looks like for the boots on the ground.

[–] thisbenzingring@lemmy.sdf.org 18 points 4 months ago* (last edited 4 months ago)

Thankfully I had cached credentials and our servers aren't BitLocker'd. The majority of the servers had iLO consoles, but not all. Most of the servers are on virtual hosts, so once I got the failover cluster back it wasn't that hard to just work my way through them. But the hardware servers without iLO required physically plugging in a monitor and keyboard to fix, which is time-consuming. 10 of them took a couple of hours.

I worked 11+ hours straight. No breaks or lunch. That got our production domain up and the backup system back on. The dev and test domains are probably half working. My boss was responsible for those and he's not very efficient.

So for the most part, I was able to do the work from my admin PC in my office.

For the majority of them, I'd use the Windows recovery menu they were stuck at to make them boot into safe mode with networking (in case my cached credentials weren't up to date), then start a cmd prompt and type out that famous command:

Del c:\windows\system32\drivers\crowdstrike\c-00000291*.sys

I'd autocomplete the folders with Tab and type out the five zeros... Probably gonna have that file in my memory forever
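
For anyone wondering how scriptable that step actually is: once a box is booted far enough that its Windows volume is reachable and you have admin rights, the deletion itself is trivial to script; getting each machine to that state is the hard part. A minimal sketch in Python, with only the path pattern taken from the command above and everything else (drive letter, function name) hypothetical:

    # Minimal sketch: delete the corrupted CrowdStrike channel file(s).
    # Assumes the broken Windows install is reachable at DRIVE and that the
    # script runs with admin rights (e.g. from safe mode, or against a volume
    # mounted by a recovery environment).
    import glob
    import os

    DRIVE = "C:"  # hypothetical; adjust if the volume is mounted elsewhere
    PATTERN = DRIVE + r"\Windows\System32\drivers\CrowdStrike\C-00000291*.sys"

    def remove_bad_channel_files(pattern: str = PATTERN) -> list[str]:
        """Delete every file matching the pattern and return the removed paths."""
        removed = []
        for path in glob.glob(pattern):
            os.remove(path)
            removed.append(path)
        return removed

    if __name__ == "__main__":
        for path in remove_bad_channel_files():
            print("deleted", path)

As others in the thread point out, the bottleneck isn't this step; it's BitLocker keys, machines stuck in recovery with no network, and hardware without remote consoles.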

Edit: one painful self-inflicted problem was that my password is a 25-character random LastPass-generated password. But IDK how I managed it, I never typed it wrong once. Yay for small wins

[–] magikmw@lemm.ee 14 points 4 months ago (3 children)

You need to boot into recovery mode and delete a file. AFAIK it's not very automatable.

[–] Jtee@lemmy.world 12 points 4 months ago (1 children)

Especially if you have BitLocker enabled. You can't boot into safe mode without entering the recovery key, which typically only IT has access to.

[–] magikmw@lemm.ee 7 points 4 months ago

You can give the key to the user and force a replacement on the next DC connection, but getting people to enter a key that's 32 characters long over the phone... Not automatable anyway.

[–] HeyJoe@lemmy.world 6 points 4 months ago

Servers would probably be way easier than workstations, if you ask me. If they were virtual, just bring up the remote console and you can do it all remotely. Even if they were physical, I would hope there's an IP KVM attached to each server so they can be accessed remotely as well. 450 sucks, but at least in theory every one of them could have been done without going anywhere.

There are options for doing workstations remotely as well, but almost nobody ever uses those services, so those probably need to be touched one by one.

[–] prashanthvsdvn@lemmy.world 2 points 4 months ago

I read this in a passing YouTube comment, but I think it would theoretically be possible to set up an iPXE boot server that loads a Windows PE environment and deploys the fix from there; then all you have to do on the affected machines is point the boot option at the iPXE server you set up. Not fully sure whether it's feasible or not, though.

[–] danc4498@lemmy.world 26 points 4 months ago (1 children)

I wonder how much this cost people & businesses.

For instance, people’s flights were canceled because of this, resulting in them having to stay in hotels overnight. I’m sure there are many other examples.

[–] TexasDrunk@lemmy.world 6 points 4 months ago (1 children)

For businesses, a lot of them are hiring IT companies (consultants, MSPs, VARs, and whoever the hell else they can get) at a couple to a few hundred bucks an hour per person to get boots on the ground to fix it. Some of them have everyone below the C levels with any sort of technical background doing entry level work so there's also lost opportunity cost.

I was in that industry for a long time and still have a lot of colleagues there. There's a guy I know making almost $200k/yr out there at desks trying to help fix it. He moved into an SRE role years ago so that's languishing this week while he's going desk to desk and office to office with support staff and IT contractors.

At least two large companies run APIs where they're paying for a pile of compute that's currently seeing only a small fraction of its normal use. Their customers are paying to use those APIs but can't.

I don't know if there's a good way to actually figure out how much this is costing because there are so many variables. But you can bet there are a few people at the top funneling that money directly to themselves, never to be seen again.

[–] danc4498@lemmy.world 4 points 4 months ago (1 children)

That’s kind of what I was thinking. There are countless ways this costs money, and not an insignificant amount either.

Also, I work in IT and have been on vacation. So sad I’m missing all this!

[–] TexasDrunk@lemmy.world 3 points 4 months ago

Something I didn't think about but has since come to my attention (group chat is getting spicy) is that there are a lot of mid level IT folks on salary who are getting the absolute dog shit worked out of them right now without seeing an extra dime. So the costs are beyond monetary.

[–] Mothra@mander.xyz 18 points 4 months ago (1 children)

8.5M worldwide? I was expecting higher numbers, interesting

[–] ArtVandelay@lemmy.world 16 points 4 months ago

Even if 8.5m is correct, with many of those being servers, the total number of people affected is much, much higher.

[–] negativenull@lemmy.world 17 points 4 months ago (1 children)

The downstream effects are likely much, much greater. If an auth server/DB server/API server/etc. (for example) got taken down, the failure cascades.

[–] teejay@lemmy.world 7 points 4 months ago (1 children)

The idea that any such servers would be running Windows... shudder

[–] PlutoniumAcid@lemmy.world 3 points 4 months ago

In the corpo I work at, we had about 3,000 servers down, plus probably twice as many workstations, including remote workers' laptops. Yeah, fun!

[–] mat@jlai.lu 7 points 4 months ago

For some of these systems, like medical equipment that should be as secure as possible, I don't understand why they aren't running OpenBSD... And more broadly, most of the world depending on one OS and its ecosystem is just a path to disasters (this one, WannaCry, spying from three-letter agencies...)

[–] markr@lemmy.world 6 points 4 months ago (1 children)

There are a lot of misunderstandings about what happened. First, the ‘update’ was to a data file used by the CrowdStrike kernel components (specifically ‘Falcon’). While this file has a ‘.sys’ name, it is not a driver; it provides threat-definition data. It is read by the Falcon driver(s), not loaded as an executable.

Microsoft doesn’t update this file; CrowdStrike’s user-mode services do, and they do so very frequently as part of real-time threat detection and mitigation.

The updates are essential. There is no opportunity for IT to manage or test these updates other than blocking them via external firewalls.

The Falcon kernel components apparently do not protect against a corrupted data file, or the corruption in this case evaded that protection. This is such an obvious vulnerability that I am leaning toward deliberate manipulation of the data file to exploit a discovered flaw in their handling of a malformed file. I have no evidence for that other than that resilience against malformed data input is very basic software engineering, and CrowdStrike is a very sophisticated system.

I’m more interested in how the file got corrupted before distribution.
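
On the malformed-input point, the general defence is simply to validate everything about the file (magic, version, declared sizes, offsets) before trusting it, and to fail closed with an error instead of dereferencing bad data. A purely illustrative sketch in userspace Python, against a made-up definition-file format (the real channel-file format is proprietary and the actual parsing happens in a kernel driver, so none of the names here are real):

    # Illustrative only: defensive parsing of an untrusted "definition file"
    # with an invented layout: 4-byte magic, u32 version, u32 record count,
    # then length-prefixed records. Every size is checked before it is used.
    import struct

    MAGIC = b"DEFS"
    HEADER = struct.Struct("<4sII")  # magic, version, record_count

    class MalformedDefinitionFile(ValueError):
        """Raised instead of crashing when the input doesn't validate."""

    def parse_definitions(blob: bytes) -> list[bytes]:
        if len(blob) < HEADER.size:
            raise MalformedDefinitionFile("truncated header")
        magic, version, count = HEADER.unpack_from(blob, 0)
        if magic != MAGIC or version != 1:
            raise MalformedDefinitionFile("bad magic or unsupported version")
        records, offset = [], HEADER.size
        for _ in range(count):
            if offset + 4 > len(blob):
                raise MalformedDefinitionFile("record count exceeds file size")
            (length,) = struct.unpack_from("<I", blob, offset)
            offset += 4
            if offset + length > len(blob):
                raise MalformedDefinitionFile("record overruns end of file")
            records.append(blob[offset:offset + length])
            offset += length
        return records

In a kernel component the same idea applies with higher stakes: input that fails validation should be rejected and logged, because dereferencing it takes the whole machine down.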

[–] PlutoniumAcid@lemmy.world 3 points 4 months ago (1 children)

Yeah, how the hell did this failure pass testing, is what I want to know!

[–] lechatron 4 points 4 months ago

That's the neat thing: CrowdStrike bypassed the rigorous testing process required to get kernel software signed by Microsoft by having the part that was tested and signed by Microsoft load separate update files. It's still unclear how CrowdStrike missed the bad file before releasing it, though.

This is a pretty good breakdown of what happened by a retired Windows dev, including how software operates between the kernel and user space. The breakdown of what he thinks happened starts at about 6:40.

[–] autotldr@lemmings.world 3 points 4 months ago (1 children)

This is the best summary I could come up with:


Microsoft says it estimates that 8.5m computers around the world were disabled by the global IT outage. It’s the first time that a number has been put on the incident, which is still causing problems around the world. The glitch came from a cyber security company called CrowdStrike, which sent out a corrupted software update to its huge number of customers. Microsoft, which is helping customers recover, said in a blog post: "We currently estimate that CrowdStrike’s update affected 8.5 million Windows devices."

The post by David Weston, vice-president, enterprise and OS at the firm, says this number is less than 1% of all Windows machines worldwide, but that "the broad economic and societal impacts reflect the use of CrowdStrike by enterprises that run many critical services". The company can be very accurate about how many devices were disabled by the outage, as it gathers performance telemetry from many of them over their internet connections. The tech giant - which was keen to point out that this was not an issue with its software - says the incident highlights how important it is for companies such as CrowdStrike to use quality control checks on updates before sending them out. “It’s also a reminder of how important it is for all of us across the tech ecosystem to prioritize operating with safe deployment and disaster recovery using the mechanisms that exist,” Mr Weston said. The fallout from the IT glitch has been enormous and was already one of the worst cyber-incidents in history. The number given by Microsoft means it is probably the largest ever cyber-event, eclipsing all previous hacks and outages. The closest to this is the WannaCry cyber-attack in 2017, which is estimated to have impacted around 300,000 computers in 150 countries.

There was a similar costly and disruptive attack called NotPetya a month later. There was also a major six-hour outage in 2021 at Meta, which runs Instagram, Facebook and WhatsApp.

But that was largely contained to the social media giant and some linked partners. The massive outage has also prompted warnings by cyber-security experts and agencies around the world about a wave of opportunistic hacking attempts linked to the IT outage. Cyber agencies in the UK and Australia are warning people to be vigilant to fake emails, calls and websites that pretend to be official. And CrowdStrike head George Kurtz encouraged users to make sure they were speaking to official representatives from the company before downloading fixes.

"We know that adversaries and bad actors will try to exploit events like this," he said in a blog post.Whenever there is a major news event, especially one linked to technology, hackers respond by tweaking their existing methods to take into account the fear and uncertainty.According to researchers at Secureworks, there has already been a sharp rise in CrowdStrike-themed domain registrations – hackers registering new websites made to look official and potentially trick IT managers or members of the public into downloading malicious software or handing over private details.Cyber security agencies around the world have urged IT responders to only use CrowdStrike's website to source information and help.The advice is mainly for IT managers who are the ones being affected by this as they try to get their organisations back online.But individuals too might be targeted, so experts are warning to be to be hyper vigilante and only act on information from the official CrowdStrike channels.


The original article contains 551 words, the summary contains 552 words. Saved -0%. I'm a bot and I'm open source!

[–] dentoid@sopuli.xyz 16 points 4 months ago

Upvoted just for the tagline "reduced article from 551 to 552 words" 😁 Wacky bot

[–] Resol@lemmy.world 2 points 4 months ago

Y2K, delayed 24 years, 7 months, and 19 days.

What worries me even more is that something pretty similar could happen to 32-bit devices in 2038.
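
For anyone who hasn't run into the 2038 reference: systems that store Unix time in a signed 32-bit integer run out of range on 19 January 2038. A quick illustration in Python (plain arithmetic, nothing specific to any particular device):

    # The 2038 problem in a nutshell: a signed 32-bit Unix timestamp can only
    # count up to 2**31 - 1 seconds after 1970-01-01 before it wraps negative.
    import ctypes
    from datetime import datetime, timedelta, timezone

    EPOCH = datetime(1970, 1, 1, tzinfo=timezone.utc)
    MAX_INT32 = 2**31 - 1

    print(EPOCH + timedelta(seconds=MAX_INT32))
    # 2038-01-19 03:14:07+00:00 -- the last second a 32-bit time_t can hold

    wrapped = ctypes.c_int32(MAX_INT32 + 1).value  # what an overflowing counter stores
    print(EPOCH + timedelta(seconds=wrapped))
    # 1901-12-13 20:45:52+00:00 -- why unpatched 32-bit systems will misbehave

64-bit systems, and newer 32-bit kernels and libcs that have moved to a 64-bit time_t, are fine; the worry is long-lived embedded devices that never get updated.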

[–] SeattleRain@lemmy.world 2 points 4 months ago

It's the cyber 9/11 they always worried about.

[–] istanbullu@lemmy.ml -1 points 4 months ago (1 children)

In case you needed another reason to switch to Linux.

Windows is so unreliable that even Microsoft runs Linux internally.

[–] Blaster_M@lemmy.world 6 points 4 months ago (1 children)

When this happened to Linux and macOS users of CrowdStrike some time ago, no one cared.

[–] Rentlar@lemmy.ca 2 points 4 months ago* (last edited 4 months ago)

https://forums.rockylinux.org/t/crowdstrike-freezing-rockylinux-after-9-4-upgrade/14041

The bug seems to have only affected certain Linux kernels and versions. Of course no one cared, because it didn't simultaneously take out hospital and airline systems worldwide to an extent you'd think you'd only see in movies.

Linux has comparative advantages in being so diverse. Since there are so many different update channels, it would be hard to pull off such a large outage, intentionally or unintentionally. Yet if we imagine a totally equivalent scenario of a CrowdStrike update causing kernel panics in most Linux distributions, this is what could be done:

  • Ubuntu, Red Hat, and other organizations that make money from supporting and ensuring the reliability of their customers' systems would be on the case to find a working configuration as soon as they found out it wasn't an isolated incident or user error.
  • If one finds a solution, it would likely be shared quickly with other organizations and adapted.
  • The error logs and the inner workings of the kernel, including where it fails, are available to admins, customer-support personnel and tech nerds, so they aren't fully at the mercy of the maintainers of the proprietary blobs (both Microsoft and CrowdStrike for Windows, but only CrowdStrike for Linux) to determine the cause and the potential solutions available.
  • On Linux, internet-facing component updates can be rolled back and inspected/installed separately from the CrowdStrike updates. The buggy Microsoft Azure update and the CrowdStrike update landing on the same day muddied the waters as to what exactly went wrong in the first several hours of the outage.
  • There's more flexibility to adjust the behaviour of the kernel itself, even in a scenario where CrowdStrike was dragging its feet. Emergency kernel patches could be set to ignore panics caused by the identified faulty configuration files, at least as a potential temporary fix.