7 Best Practices for Government IT Service Continuity
Disasters such as severe weather, earthquakes, fires, cyber-attacks, terrorism, and even human error have all impacted government systems at one point or another, and they will continue to do so.
Although the financial cost of government downtime isn’t as high as it is in corporate sectors such as the finance industry, the public sector is acutely aware of the impact that a disaster can have on operations. “In many ways the public sector thinks about it more often because they are the resource of last resort,” Sanjay Castelino, vice president at SolarWinds, told Government Computer News (GCN) back in 2012.
While uninterruptible power supply (UPS) failures remain the biggest cause of data center outages, cyber-crime is now the fastest-growing cause, rising from 2 percent of outages in 2010 to 22 percent in 2016, according to a survey from the Ponemon Institute.
Today’s COOPs: Not Enough Testing, Too Many Holes
Continuity of operations planning (COOP), staying vigilant, and building redundancy into networks and systems are practices IT managers have followed for years. Yet only 51 percent of government IT workers are certain or very confident that their agencies could be up and running within 18 hours of a significant disaster.
There are several reasons for this. The first is testing: most agencies test their plans only once a year, while just 9.8 percent test them quarterly.
In addition to infrequent and overly optimistic testing plans, another problem, reports GCN, is that key areas are often overlooked or underestimated, including:
• Missing digital IDs
• Incomplete backups
• Too few remote user licenses
• Backups that are too hard to validate
• Weak notification systems
• Electronics vulnerable to “falling water”
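Two of the gaps above, incomplete backups and backups that are too hard to validate, lend themselves to automation. As a minimal sketch (the directory layout and function names here are illustrative assumptions, not from the article), a scheduled job could compare checksums of source files against their backup copies and flag anything missing or altered:

```python
import hashlib
from pathlib import Path

def checksum(path: Path) -> str:
    """Return the SHA-256 digest of a file, read in chunks to bound memory use."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def validate_backup(source_dir: str, backup_dir: str) -> list[str]:
    """Compare every file under source_dir against its copy in backup_dir.

    Returns relative paths that are missing from the backup or whose
    contents differ -- both signs of an incomplete or corrupt backup.
    """
    src, dst = Path(source_dir), Path(backup_dir)
    problems = []
    for f in src.rglob("*"):
        if not f.is_file():
            continue
        rel = f.relative_to(src)
        copy = dst / rel
        if not copy.exists() or checksum(f) != checksum(copy):
            problems.append(str(rel))
    return sorted(problems)
```

Run quarterly (or more often) from a scheduler, a report like this turns "backups that are too hard to validate" into a routine check rather than a discovery made mid-disaster.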
Plans Need to Evolve, not Remain Static
Planning for unplanned outages can certainly avoid unwelcome costs, but agencies need to raise the bar on business continuity preparedness. Technologies, business processes, and intra-agency relationships are changing all the time, which means plans must be continuously validated, reassessed, and refined. It's a new way of thinking that stresses automation, virtualization, testing, system prioritization, and mobility, among other things.