Sometimes, it feels like there is only bad news. Every business faces challenges with suppliers, with its own people, and even with some of its customers. As managers, we work hard to get ahead of issues and create systems to mitigate problems, but there’s always something. Occasionally, it feels like an ongoing sequence, one problem after the other.
Today, that problem was the internet. It got wet, and it broke.
Such a simple problem – it looks like a door was left open in the rain, and a lot of water flooded in and leaked through the floor to the space below, which happened to contain only one thing: our internet connection! It could have dripped anywhere else and we’d have been fine, but luck wasn’t on our side, and it landed in exactly the worst place.
The good news is that our disaster recovery plan worked.
We had a plan, and it worked. We were back up and running fairly quickly, within our expected timeframe. As you would expect, we’ll review our disaster plan again, and link it into our own internal ‘Kinetics Flight Plan roadmap’ to reassess and review our decisions.
Like all businesses, we have to weigh up the risks and costs, and make a business decision about how much protection to put in place. Regular readers know we LOVE THE CLOUD, and as such, most of our systems are in the cloud. They kept working. The problem was that, with the internet cut off, our office-based staff had to pack up and work from home to access them – and of course, post-Covid, we’re all very good at that. The only disruption was having to pack up and drive back home in Auckland’s weather-impacted traffic.
However, not all of our systems are in the cloud. There are a couple that just don’t make sense to move (*yet – we are certain that will change). More frustratingly, we’d hatched a plan to set up a duplicate of those two systems in our Christchurch office, but that is still a work in progress, mainly because we’ll be moving to a new office in Christchurch soon and it makes sense to set it up there. So those systems couldn’t operate, just as we expected. It was inconvenient, and a little costly, but not the end of the world.
It got me thinking, though. The reality is that things can go wrong at any time. As managers, we set out to pre-empt these as best we can. It’s why we maintain assets, and why we hold insurance. When did you last review your IT risk matrix? Ours is part of our Flight Plan, and we might make it more explicit now.
| Risk | Description | Mitigation Strategy | Impact | Probability | Cost to Mitigate |
| --- | --- | --- | --- | --- | --- |
| System downtime | System failure creating downtime and people unable to work | Active maintenance of all key assets, monitoring, backups, etc. | 3 | 3 | 3 |
| Data leak | Someone sends data in error to the wrong recipient, resulting in brand damage | Education; configure 365 data leak protection settings | 4 | 1 | 2 |
| Cyber-threat | Breach through a malicious cyber-criminal – cost & brand reputation damage | Cyber risk assessment; layered cyber protection plan | | | |
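One simple way to prioritise a matrix like this is to score each risk as impact × probability, then work down the list from the highest score. Below is a minimal sketch in Python using the two fully-scored rows from the table above; the impact × probability weighting is a common convention, but the exact scheme (and any thresholds you set) is an illustrative assumption, not a standard.

```python
# Minimal sketch: prioritising an IT risk matrix.
# Scores are taken from the risk table above (1-5 scale);
# the impact x probability weighting is an illustrative assumption.

risks = [
    # (name, impact, probability, cost_to_mitigate)
    ("System downtime", 3, 3, 3),
    ("Data leak", 4, 1, 2),
]

def priority(impact: int, probability: int) -> int:
    """Classic risk score: higher means act sooner."""
    return impact * probability

# Sort so the highest-scoring risks come first.
ranked = sorted(risks, key=lambda r: priority(r[1], r[2]), reverse=True)

for name, impact, prob, cost in ranked:
    print(f"{name}: score {priority(impact, prob)} (cost to mitigate: {cost})")
```

On these numbers, system downtime (3 × 3 = 9) outranks a data leak (4 × 1 = 4), even though a leak would hit harder – a useful reminder that probability matters as much as impact when deciding where to spend first.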
Will you be ready?
We all need to know what to do on the day one of the team comes to us with that piece of bad news – and it could be worse than a key unit getting soaked in a water leak. It might be something like: “What should we do? We can’t work, everything is encrypted, and we’ve just had a ransomware demand”, or “The hacker is saying they will publish our data on the dark web”.
What will you do? Will you be ready?