"Sorry Mr. Cooper, the computers are down, we have no ticket to sell you, please have a seat, things should be back up in 15 hours or so."
I work in IT; it was a harrowing day. Our entire infrastructure depends on a densely connected network of computers that is, in some sense, too tightly coupled: bound together by standardized software layers and centralization in the cloud. While this provides amazing efficiency, it also leaves us prone to exactly this type of event.
At the end of the day, this was human error, which is nothing new: a bad line of code and a failure to test for it and catch it prior to mass rollout. How it wasn't caught is a head-scratcher for me.
It's definitely a wake-up call. CrowdStrike's mission is to prevent exactly this type of incident, so the irony is thick. I'm sure a lot of changes will come out of this, but the simplest measure is to STOP pushing these mandatory updates out at mass scale all at once, and instead roll them out in a more measured, staged manner.
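To illustrate what "a more measured manner" could mean in practice, here is a minimal sketch of a staged (canary) rollout. Everything here is hypothetical, not how CrowdStrike actually deploys: the wave sizes, the error budget, and the `update_ok` health check are all illustrative assumptions. The idea is simply that a bad update halts after touching a small slice of the fleet rather than every machine.

```python
# Hypothetical sketch of a staged ("canary") rollout: ship an update to a
# small cohort first, check health, then widen in waves instead of pushing
# to every host at once. Wave sizes and thresholds are illustrative only.

WAVES = [0.01, 0.05, 0.25, 1.0]  # cumulative fraction of the fleet per wave
ERROR_BUDGET = 0.02              # abort if more than 2% of a wave fails

def deploy_wave(hosts, update_ok):
    """Apply the update to one wave of hosts; return the failure rate."""
    failures = sum(0 if update_ok(h) else 1 for h in hosts)
    return failures / len(hosts) if hosts else 0.0

def staged_rollout(fleet, update_ok):
    """Roll out in increasing waves, halting on excessive failures."""
    deployed = 0
    for frac in WAVES:
        target = int(len(fleet) * frac)
        wave = fleet[deployed:target]
        if not wave:
            continue
        rate = deploy_wave(wave, update_ok)
        deployed = target
        if rate > ERROR_BUDGET:
            # Stop here: the blast radius is one wave, not the whole fleet.
            return ("halted", deployed)
    return ("complete", deployed)

if __name__ == "__main__":
    fleet = [f"host-{i}" for i in range(1000)]
    # Simulate an update that crashes every host: the rollout halts after
    # the first 1% wave (10 machines) instead of taking down all 1000.
    status, touched = staged_rollout(fleet, update_ok=lambda h: False)
    print(status, touched)  # halted 10
```

With a gate like this, the July-style failure mode (every machine receiving the bad content at once) becomes a 1% incident and an automatic halt instead of a global outage.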