Cybersecurity 101: Asset Management [Video]

The first order of business in cybersecurity – indeed, in IT management in general – is a reliable system inventory: physical systems, virtual machines, software with its associated configurations and vulnerabilities, and – most importantly – data. Asset inventory is not just common sense; for many organizations it’s the law. The NIST SP 800-53 security controls require accurate tabulation of hardware and software, and other, non-governmental standards give it the same pride of place. For example, the first two of the SANS 20 Critical Security Controls address inventory, and the PCI DSS credit card standard also mandates inventory management.

So the principle of good asset management is clear, but the practice can be more difficult than one might suppose. If you’re having trouble with inventory in your on-premises data center, don’t expect relief from moving to the cloud without strict controls. Properly managed, a cloud deployment can ease inventory acquisition and management; uncontrolled, it can make things worse. Let’s take a look at the causes of these difficulties and explore some ways to deal with them.

Recall that NIST defines five essential characteristics of cloud computing. Four of those – on-demand self-service, broad network access, resource pooling, and rapid elasticity – directly affect inventory management. On-demand self-service allows anyone with credentials to provision a server at will, at any time. Broad network access means users can reach cloud services from their device of choice, and resource pooling means a single pool of provider resources serves many tenants at once. Rapid elasticity allows for automatic, unattended creation and termination of virtual machines. These four essential characteristics make cloud deployments flexible and responsive, but also hard to track.

First, let’s explore on-demand self-service. The good news – and the bad news – about on-demand self-service is that it allows IT shops to respond quickly to user demand and changing requirements. So what’s the bad news? Meeting user needs quickly and flexibly is obviously a boon, but on-demand self-service also enables IT staff to provision virtual systems without tracking or monitoring. Test systems, demo systems, pilots, and other temporary systems spin up quickly, but can turn into an unmanaged mess just as fast. The result is the phenomenon of “VM sprawl,” or “shadow IT”: unauthorized systems that meet a legitimate need, but are unmanaged and frequently insecure.
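One way to spot sprawl is to look for instances nobody has claimed. Here is a minimal sketch, assuming boto3 and configured AWS credentials, that flags EC2 instances missing an “Owner” tag; the tag name is an illustrative assumption, not an AWS standard.

```python
# Sketch: flag EC2 instances with no "Owner" tag, a common symptom of
# VM sprawl. Assumes boto3 is installed and AWS credentials are configured;
# the "Owner" tag is a hypothetical convention, not an AWS default.
import boto3

ec2 = boto3.client("ec2")
for page in ec2.get_paginator("describe_instances").paginate():
    for reservation in page["Reservations"]:
        for instance in reservation["Instances"]:
            tags = {t["Key"]: t["Value"] for t in instance.get("Tags", [])}
            if "Owner" not in tags:
                print(f"Untracked instance {instance['InstanceId']}, "
                      f"launched {instance['LaunchTime']}")
```

Whatever tagging convention you choose, the point is to surface unowned systems automatically rather than waiting for an audit to find them.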

Broad network access also complicates the task of tracking systems. The access devices themselves – phones, tablets, workstations, and laptops – are proliferating in number, and are no longer under the exclusive ownership, management, and control of the organization. Instead of tracking a physical phone provided by the government, for instance, the IT shop now needs to track a container on a user’s personal phone.

Resource pooling also plays into the inventory picture. In a traditional data center, you could walk up to a rack, inspect the disk array, make sure it matched the procurement paperwork, and be done. In a shared-resource environment such as a public, community, or hybrid cloud, a physical disk drive might belong to the cloud service provider but store other organizations’ data alongside yours. So, in the cloud, it’s not so easy to count up the drives or other pieces of equipment; what you can count are the logical resources the provider’s API exposes.
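For example, here is a minimal sketch, again assuming boto3 and AWS credentials, that inventories EBS volumes – the logical stand-in for those physical drives – along with their attachments:

```python
# Sketch: inventory logical storage (EBS volumes) through the provider's
# API, since the physical drives belong to the cloud service provider.
# Assumes boto3 is installed and AWS credentials are configured.
import boto3

ec2 = boto3.client("ec2")
for page in ec2.get_paginator("describe_volumes").paginate():
    for volume in page["Volumes"]:
        attached_to = [a["InstanceId"] for a in volume["Attachments"]]
        print(volume["VolumeId"], f"{volume['Size']} GiB",
              "->", attached_to or "unattached")
```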

Finally, let’s look at rapid elasticity. Unlike physical servers, virtual machine (VM) instances can spin up at a moment’s notice and terminate automatically. Left unmanaged, such dynamic instances might well be based on machine image files with deficient security controls, such as out-of-date patches or incorrect configurations. So the price of flexibility is the potential for intractability.
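One countermeasure is to check how stale the machine images behind your running instances are. A hedged sketch, assuming boto3, AWS credentials, and an illustrative 90-day threshold:

```python
# Sketch: flag instances launched from machine images (AMIs) older than a
# cutoff, since stale images tend to carry out-of-date patches.
# The 90-day threshold is an illustrative assumption.
import boto3
from datetime import datetime, timedelta, timezone

ec2 = boto3.client("ec2")
cutoff = datetime.now(timezone.utc) - timedelta(days=90)

instances = []
for page in ec2.get_paginator("describe_instances").paginate():
    for reservation in page["Reservations"]:
        instances.extend(reservation["Instances"])

image_ids = {inst["ImageId"] for inst in instances}
images = {}
if image_ids:
    response = ec2.describe_images(ImageIds=list(image_ids))
    images = {img["ImageId"]: img for img in response["Images"]}

for inst in instances:
    image = images.get(inst["ImageId"])
    if image is None:
        continue  # image was deregistered; worth investigating on its own
    created = datetime.fromisoformat(image["CreationDate"].replace("Z", "+00:00"))
    if created < cutoff:
        print(f"{inst['InstanceId']} runs stale image {image['ImageId']} "
              f"created {created:%Y-%m-%d}")
```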

The most important asset, though, is data, regardless of whether it is stored in an on-premises data center or in a cloud. Key elements of a data inventory include the current and projected quantity (which of course drives storage requirements), an up-to-date data-flow architecture, the sensitivity level, and accessibility by outside organizations. If you’re sharing data with an outside organization, you’ll also need an up-to-date memorandum of understanding (MOU) or an equivalent document such as an interconnection security agreement (ISA). While most of the previous discussion has focused on cloud inventory, data inventory is essential – and difficult – in both the cloud and the traditional data center.
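To make those elements concrete, here is a minimal sketch of a data-inventory record; the field names and sensitivity values are illustrative assumptions, not a standard schema:

```python
# Sketch: a simple data-inventory record covering quantity, data flow,
# sensitivity, and external sharing. Field names are hypothetical.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class DataAsset:
    name: str
    current_size_gb: float        # current quantity
    projected_size_gb: float      # projected quantity; drives storage planning
    sensitivity: str              # e.g. "public", "internal", "regulated"
    data_flow_doc: str            # pointer to the current data-flow architecture
    external_parties: List[str] = field(default_factory=list)
    sharing_agreement: Optional[str] = None  # MOU or ISA reference, if shared

inventory = [
    DataAsset("payroll", 120.0, 150.0, "regulated",
              "diagrams/payroll-flow-v3", ["benefits-provider"]),
]
for asset in inventory:
    if asset.external_parties and not asset.sharing_agreement:
        print(f"{asset.name}: shared externally but no MOU/ISA on file")
```

Even a spreadsheet with these columns beats no data inventory at all; the structure matters more than the tooling.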

To meet these difficulties, asset discovery and management tools have grown in importance and market prominence. As the IT environment becomes more dynamic – tablets, smartphones, cloud deployments, and automated machine deployments – the need for automated asset management tools only grows.
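At its core, automated discovery starts with something as simple as a reachability sweep; commercial tools layer fingerprinting, agents, and reconciliation on top. A toy sketch, with an illustrative subnet and port:

```python
# Sketch: a bare-bones TCP reachability sweep as the seed of automated
# discovery. The subnet and port are illustrative assumptions; real tools
# do far more (fingerprinting, agents, reconciliation against records).
import socket
from ipaddress import ip_network

def is_listening(host: str, port: int = 22, timeout: float = 0.5) -> bool:
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

for address in ip_network("192.168.1.0/28").hosts():
    if is_listening(str(address)):
        print(f"Discovered host: {address}")
```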

DLT has a variety of offerings that meet this need. AlienVault, for instance, has Unified Security Management (USM) tools for both on-premises environments and Amazon Web Services, and BDNA’s Analyze and Discover tools can also tame the wild inventory beast. Dell’s KACE tools are in wide use, and Oracle has a comprehensive set of tools particularly well suited to Oracle shops. For data management, it’s hard to beat Symantec’s Data Insight and Data Loss Prevention (DLP) offerings.

Finally, those who adopt Amazon AWS gain, by default, immediate visibility into all resources created under a given account. The key, of course, is proper management of access to that account.
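As a closing illustration, here is a short sketch, assuming boto3 and suitably scoped credentials, that enumerates EC2 instances across every region of an account:

```python
# Sketch: enumerate EC2 instances in every region of one account,
# demonstrating the account-wide visibility AWS credentials confer.
# Assumes boto3 is installed and a default region is configured.
import boto3

regions = [r["RegionName"]
           for r in boto3.client("ec2").describe_regions()["Regions"]]
for region in regions:
    ec2 = boto3.client("ec2", region_name=region)
    for page in ec2.get_paginator("describe_instances").paginate():
        for reservation in page["Reservations"]:
            for instance in reservation["Instances"]:
                print(region, instance["InstanceId"],
                      instance["State"]["Name"])
```

The same credentials that grant this visibility would grant it to an attacker, which is why access to the account is the thing to manage first.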