Comment on CrowdStrike’s faulty update crashed 8.5 million Windows devices, says Microsoft
gravitas_deficiency@sh.itjust.works 3 months ago
I feel like that’s not even close to what the real number is, considering the impact it had.
Sami@lemmy.zip 3 months ago
They have about 24,000 clients, so that comes out to around 350 impacted machines per client, which seems reasonable. It only takes a few impacted machines for thousands of people to be affected, if those machines are important enough.
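As a rough sanity check of that figure (a back-of-the-envelope sketch; the 8.5 million and ~24,000 numbers are the approximate ones from reporting, not exact counts):

```python
# Back-of-the-envelope check of the "about 350 per client" figure.
# Assumes ~8.5 million affected devices (Microsoft's number) and
# roughly 24,000 CrowdStrike customers; both are approximations.
affected_devices = 8_500_000
customers = 24_000

per_customer = affected_devices / customers
print(f"~{per_customer:.0f} affected machines per customer")  # prints "~354"
```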
SchmidtGenetics@lemmy.world 3 months ago
My brother’s work uses VMs, so if the server is down, there are probably 50k computers right there.
gravitas_deficiency@sh.itjust.works 3 months ago
As far as I know, none of the OSes used for virtualization hosts at scale by any of the major cloud infra players are Windows.
Not to mention: any company that uses any AWS, Azure, or GCP service is “using VMs” in one form or another (yes, I know I’m hand-waving away the difference between VMs and containers). It’s basically what they build all of their other services on.
SchmidtGenetics@lemmy.world 3 months ago
Banks use VMs, and banks were down because they had no access to the systems they use to log in to the VMs so they could work. They were bricked by extension.
gravitas_deficiency@sh.itjust.works 3 months ago
No, the clients were bricked. The VMs themselves were probably fine - and in fact were probably automatically rolled back to a working snapshot after the update failed (assuming the VM infrastructure was properly set up).
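For what it’s worth, here is a minimal sketch of the kind of snapshot rollback being described, assuming a libvirt-managed host; the connection URI, VM name, and snapshot name are all made up for illustration, and real bank infrastructure (VMware, Hyper-V, etc.) would use its own tooling:

```python
# Minimal sketch: revert a VM to a known-good snapshot after a bad update.
# Assumes a libvirt-managed hypervisor; the names below are illustrative only.
import libvirt

def revert_to_known_good(vm_name: str, snapshot_name: str = "pre-update") -> None:
    conn = libvirt.open("qemu:///system")               # connect to the local hypervisor
    try:
        dom = conn.lookupByName(vm_name)                 # find the VM by name
        snap = dom.snapshotLookupByName(snapshot_name)   # locate the saved state
        dom.revertToSnapshot(snap)                       # roll the VM back to it
    finally:
        conn.close()

# Example (hypothetical VM name):
# revert_to_known_good("branch-teller-vm")
```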
Godort@lemm.ee 3 months ago
No, but Hyper-V is used extensively in the SMB space.
VMware is popular for a reason, but it’s also insanely expensive if all you need is an AD server and a file share.
biscuitswalrus@aussie.zone 3 months ago
That’s how supply chains work: when one link in the chain breaks, the whole thing stops. Also, 10% of major companies being affected is still giant. But you’re here using online services, probably still buying bread, probably got fuel, probably playing video games. It’s huge in the media, and the effects were massive, but there were heaps of things that just weren’t touched at all - TV news networks, for example, seemingly kept going well enough to report on it non-stop. To be honest, though, any good business continuity and disaster recovery plan should handle this with some impact, but with continuity.
remotelove@lemmy.ca 3 months ago
The only companies I have seen with workable BCDR plans are banks, and that is because they handle money for rich people. It wouldn’t surprise me if many core banking systems are hyper-legacy as well.
I honestly think the only reason a majority of our infrastructure didn’t collapse is a lack of security controls and shitty patch management programs.
Sure, compliance programs work for some aspects of business, but since the advent of “the cloud”, BCDR plans have been a paperwork drill.
(There are probably some awesome places out there with quadruple-redundant networks that could outlast a nuclear winter. I personally haven’t seen them, though.)
biscuitswalrus@aussie.zone 3 months ago
It’s impossible to tell, but you’re probably closer to the truth than not.
One fact alone: BCDR isn’t an IT responsibility. Business continuity should cover things like: when your CNC machine no longer has power, what do you do?
Cause 1: power loss. Process: get the backup diesel generator running, following that SOP.
Cause 2: machine broken. Process: get the mechanic over, or work through the warranty action item list; rely on the maintenance SLA.
Cause 3: network connectivity. Process: use USB, following the SOP.
I’ve been a part of a half dozen or more of these over time, which is not that many given the 200+ companies I’ve supported.
I’ve even done simulations: round-table, “Dungeons & Dragons” style, with a person running the scenario, where different people have to follow the responsibilities in their documented process. Be it calling clients, customers, and vendors, alerting their insurer, or posting to social media, all the way through to the warehouse manager using a biro and a ruler to track incoming and outgoing stock by hand until systems are operational again.
So I only mention this because you talk about IT redundancy, but business continuity is not an IT responsibility, although it has a role. It’s a business responsibility.
Which kind of further proves your point: anyone who has worked for a decade without taking part in one of these simulations, or at least contributing to improving one, has probably only worked at companies that don’t do them. That isn’t their fault, but it’s an indicator of how fragile businesses are and how little they’re held accountable for it.
remotelove@lemmy.ca 3 months ago
You aren’t wrong about my description. My direct experience with compliance is limited to small/medium tech companies where IT is the business. As long as there is an alternate work location and tech redundancy, the business can chug along as usual. (Data centers are becoming rarer, so cloud redundancy is more important than ever.) Of course, there is still quite a bit that needs to be done depending on the type of emergency, as you described: it’s just all IT-, customer-, and partner-centric.
Unfortunately, that does make compliance an IT function, because the majority of the company is in some IT or engineering role, aside from sales and marketing.
I can’t speak to companies in other industries, whereas you can. When physical products and manufacturing are at stake, that is well outside the scope of what I could deal with.
ByteOnBikes@slrpnk.net 3 months ago
I wonder if a large percentage of the impact is on internal-facing systems.
And we won’t know until Monday.
Godort@lemm.ee 3 months ago
If this figure is accurate, the massive impact was likely due to collateral damage. If this took down every server at an enterprise but left most of the workstations online, those workstations were still basically paperweights.