When the Microsoft cloud had a momentary failure in early May, most of us had an uncomfortable hour with limited access to our files and systems. It seems that some red-faced engineer at Microsoft made a small mistake that impacted systems all around the world, and it took about an hour for them to track it down.
A few weeks previously, many of our clients were adversely impacted by a major outage at one ISP that lasted almost two days. From what we understand, a software update to some network equipment at Chorus was incompatible with something at the internet provider. I can’t begin to imagine the pressure they would have felt as they sought to fix it.
These were clearly accidents, but we all know that our biggest threat is deliberate malicious activity. I was just reading about ransomware called ‘RobbinHood’ impacting the city of Baltimore in May, which in turn followed a similar outage in Greenville, North Carolina.
There is no easy fix.
It’s understandable to say ‘Wow, let’s just eliminate the risk and minimise reliance on the internet’.
Unfortunately, it’s not that simple. Firstly, we know that the security measures at large, reputable cloud providers are stronger than any of us could reasonably afford otherwise – in fact, more secure by orders of magnitude. Hackers are starting to target businesses, and the fewer defences you have, the easier you are to hack.
Secondly, every organisation we work with wants to be more connected to their customers and stakeholders. They need to operate faster, with more automation, and that simply means more connectivity.
Even if that isn’t important, the external systems we rely on are also intimately connected across the internet. Whether it’s obvious things like eCommerce sites or banking, we’re reliant on cloud systems. If you think about the simple action of a physical delivery truck taking stock to your local shops, just consider the amount of IT that simple trip relies on. The orders will be electronic. Invoicing and payments will be digital. The warehouse dispatchers will rely on connectivity to find the items on their shelves, and increasingly the stock picking will be robotic. The truck driver will be rostered, and the truck’s maintenance will also be tracked and allocated by software. The route will be guided by a traffic system, and the truck is probably GPS tracked and monitored. All of that happens before any of the order is unloaded at your local shop.
We’re reliant on connectivity!
We therefore need to think about how to manage this. Each organisation will have a slightly different answer depending on their circumstances. Ultimately, they need to decide what it’s worth to them to protect themselves, and what they can afford. That’s because there is a law of diminishing returns. Nothing can make an organisation 100% safe, but they can reduce their exposure, up to a point that makes good sense for them.
We suggest working this through in a structured process. It’s why our Best Practice Review is important. We recommend it’s done annually, because things change: new opportunities, new challenges and new options. That process defines the parameters to work within, and the right tools then become apparent.
This is what guides us as we work through options including redundant systems and failover connectivity, or manual backup processes. Understanding vulnerabilities and their relative value is vital to good decision making, and that ranges across not just your own infrastructure, but the tools and resources of those you rely on.