Thinking back on the Hawaii emergency management snafu (SNAFU being a WWII-era acronym for “Situation Normal: All F****d Up”), it was clear that the agency lacked proper controls and situational planning. Apparently, there were no safeguards to prevent a routine test from sending out a real alert. Furthermore, there was no redundancy in the system (e.g., confirmation messages, secondary approvals) and no plan for immediately recalling the message. Supposedly, the governor couldn’t even tweet a correction because he had forgotten his password.
So, Hawaii was sent into a panic.
The episode underscores that emergency plans need to be reviewed periodically (what better time than Q1?). Do we have proper backups and system redundancy, state-of-the-art firewalls and virus detection, and plans for managing and communicating breaches? Can we maintain operations if a key partner API crashes or a content partner shutters? Have we run through PR nightmare scenarios such as the one H&M recently suffered (which resulted in its South African stores being vandalized)?
There is no shortage of risks: data security, physical security, financial, brand, supply chain, key executive health. Most are highly unlikely. But we’ve seen information services firms suffer real incidents over the past few years, including the theft of the D&B NetProspex contact file from a data licensor and the Equifax hack. So, while many risks seem remote, a lack of scenario planning makes them both more likely and more costly.
Of course, risk planning and mitigation need to be realistic. If a plan is simply a “Duck and Cover” campaign for PR purposes, it will do little to prevent the risk or manage the situation should the emergency actually happen. Emergency planning needs to be robust.
Emergency planning suffers from some of the same issues as data quality: both are unglamorous investments justified by the reduction of hypothetical risks and costs. But part of a C-level executive’s mandate is to plan for business continuity and mitigate risk.