Patterns of Cloud Adoption in the Public Sector & Educational Markets
At DLT Solutions, we are seeing a pattern emerge in the types of workloads our customers deploy to public Infrastructure as a Service (IaaS) platforms. DLT primarily serves Public Sector and Educational entities, so the trends we observe are most relevant to those markets.
When the Cloud first began gaining widespread acceptance, DLT expected development workloads and variable-scale web presence systems to be the most common workloads for public cloud deployments. However intuitive that seemed at the time, it is not the pattern that has established itself in practice. Below, we analyze some of the reasons why.
Improvement vs. Sunk Cost
In many cases, the large infrastructures that support critical business functions such as ERP systems, eCommerce, and full-lifecycle development systems have been placed on new, large-scale equipment in recent years. While the overall operation of those systems may be cheaper on the Amazon Web Services (AWS) platform, these systems carry sunk costs that stretch over a long period of time. Because these markets tend to maximize the period over which they can depreciate their equipment, migrations not driven by an equipment refresh are rare.
While the AWS platform can provide considerable operational advantages, when the existing platforms are still within their depreciation cycle, shifting the workload to the cloud would in essence leave costly hypervisor licenses, support contracts, and equipment licenses with terms of two or more years underutilized. The savings from operating those systems in the cloud do not break even against those kinds of anchored costs at that scale.
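The break-even argument above can be made concrete with a little arithmetic. The sketch below is purely illustrative; the dollar figures and the helper function are hypothetical, not DLT's actual model.

```python
# Hypothetical break-even sketch: does migrating now beat riding out the
# remaining on-premises commitments? All dollar figures are illustrative.

def months_to_break_even(anchored_cost, monthly_cloud_savings):
    """Months of cloud savings needed to offset costs already committed
    to on-premises licenses, support contracts, and hardware."""
    if monthly_cloud_savings <= 0:
        return float("inf")
    return anchored_cost / monthly_cloud_savings

# Example: $180,000 in remaining hypervisor licenses and support terms,
# against an estimated $5,000/month of operational savings in the cloud.
remaining_commitments = 180_000
savings_per_month = 5_000

print(months_to_break_even(remaining_commitments, savings_per_month))  # 36.0
```

At 36 months to break even, a migration mid-way through a two-year license term never catches up, which is why these moves tend to wait for the next equipment refresh.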
Applications such as DNS hosting, certificate repositories, batch processing utilities, and basic web presence are low on the complexity and demand scale, but they can wreak havoc on an organization when they are not operating properly. These workloads are easily forklifted to AWS, and they can often be deployed into high-availability patterns with little additional work.
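As one sketch of that "little additional work," a forklifted DNS or web-presence workload can be given failover with a Route 53 primary/secondary record pair. The hostnames, IP addresses, and health-check id below are placeholders; the dicts mirror the record-set shape that boto3's route53 `change_resource_record_sets` call expects, trimmed to the fields that matter here.

```python
# Hypothetical high-availability pattern for a simple forklifted workload:
# a Route 53 failover record pair. All names and ids are placeholders.

def failover_pair(name, primary_ip, secondary_ip, health_check_id):
    def record(role, ip, check=None):
        rec = {
            "Name": name,
            "Type": "A",
            "TTL": 60,
            "SetIdentifier": role.lower(),
            "Failover": role,               # "PRIMARY" or "SECONDARY"
            "ResourceRecords": [{"Value": ip}],
        }
        if check:
            rec["HealthCheckId"] = check    # only the primary is health-checked
        return rec
    return [
        record("PRIMARY", primary_ip, health_check_id),
        record("SECONDARY", secondary_ip),
    ]

records = failover_pair("www.example.org.", "203.0.113.10",
                        "203.0.113.20", "hypothetical-check-id")
```

When the primary's health check fails, Route 53 answers with the secondary address, which is often all the availability a DNS or basic web-presence workload needs.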
As an addition to existing functionality, Content Delivery Network (CDN) architectures can be easily deployed into an environment using CloudFront from AWS or the Akamai network. As an added benefit, both of these CDN services can also be used in front of hybrid cloud environments, and sometimes they serve as a migration tool in their own right.
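The hybrid case works because CloudFront can front a "custom" origin that is not in AWS at all. The sketch below builds a distribution config for an on-premises origin; `origin.example.org` is a placeholder, and the dict is trimmed to the fields relevant here. The live API (boto3's cloudfront `create_distribution`) requires additional fields beyond this sketch.

```python
import uuid

# Hypothetical CloudFront config with an on-premises ("custom") origin --
# the hybrid/migration pattern described above. Trimmed for illustration;
# a real DistributionConfig needs more fields than shown.

def hybrid_cdn_config(origin_domain):
    origin_id = "on-prem-origin"
    return {
        "CallerReference": str(uuid.uuid4()),   # idempotency token
        "Comment": "CDN in front of an on-premises origin",
        "Enabled": True,
        "Origins": {
            "Quantity": 1,
            "Items": [{
                "Id": origin_id,
                "DomainName": origin_domain,    # not an AWS endpoint
                "CustomOriginConfig": {
                    "HTTPPort": 80,
                    "HTTPSPort": 443,
                    "OriginProtocolPolicy": "https-only",
                },
            }],
        },
        "DefaultCacheBehavior": {
            "TargetOriginId": origin_id,
            "ViewerProtocolPolicy": "redirect-to-https",
        },
    }

config = hybrid_cdn_config("origin.example.org")
```

Because the origin is just a domain name, the same distribution can later be repointed at an AWS-hosted origin, which is how the CDN doubles as a migration tool.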
Another unexpected workload leader is the custom, in-house developed application, or applications that augment and tie together multiple business systems. For Java or .NET applications, Elastic Beanstalk migrations are popular because of the intuitive nature of the system. While problematic for platforms still under active development, once coding is complete, the system’s ability to provision itself based on the code presented to it is a compelling feature that often leads to a “fail fast” trial-and-error migration of important systems. If the system operates and checks out after a limited deployment, a full data copy is moved to the cloud. If it fails in a way that cannot be corrected quickly, it is left on premises, and the next system is evaluated for migration.
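The "fail fast" evaluation loop described above can be sketched as follows. The system names and pass/fail checks are hypothetical stand-ins for a real trial deployment.

```python
# Sketch of the "fail fast" migration loop: each candidate system gets a
# limited trial deployment and is either promoted (full data copy moved
# to the cloud) or left on premises while the next system is evaluated.

def fail_fast_migration(systems, trial_deploy, full_migrate):
    migrated, kept_on_prem = [], []
    for system in systems:
        if trial_deploy(system):        # limited deployment checks out
            full_migrate(system)        # move the full data copy
            migrated.append(system)
        else:                           # can't be corrected quickly
            kept_on_prem.append(system) # leave it on premises, move on
    return migrated, kept_on_prem

# Hypothetical trial: pretend only the reporting app passes its checks.
passing = {"reporting-app"}
migrated, kept = fail_fast_migration(
    ["reporting-app", "legacy-billing"],
    trial_deploy=lambda s: s in passing,
    full_migrate=lambda s: None,
)
print(migrated, kept)  # ['reporting-app'] ['legacy-billing']
```

The appeal of the pattern is that a failed trial costs only the throwaway trial environment, not a disruptive rollback of a production system.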
Data solutions are also a leading area of cloud migration. On-premises data systems are under siege from cloud providers; the combination of strong security controls, inherent reliability, and the low per-gigabyte cost of storage has triggered a tipping point in data repository planning.
Data gravity is always a concern when moving large amounts of data into the cloud. However, customers have become aware that the centralized nature of the cloud, paired with networking that is sometimes better than what is available on premises, makes the cloud a more logical place for certain types of data. Furthermore, in traditional scenarios the data is bound within customers’ data centers, which often have limited space and little spare compute capacity; having these data sets in the cloud adds the benefit of an always-available elastic pool of compute with which the data can be analyzed.
We are currently seeing an uptick in the movement of all types of infrequently accessed data to the Cloud. A common scenario is the use of the AWS Storage Gateway. This system locally caches data that has been recently accessed or is protected by a rule statement. Rarely accessed data is retired to low-cost S3 storage until it is needed, whereupon it is retrieved into the Storage Gateway’s cache for use. Based on the Gateway’s policies, data can then be moved back to S3 after it has been inactive for a set period of time.
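A toy model makes the cache-and-retire cycle concrete. This is not Storage Gateway's actual implementation or its default thresholds; the 30-day policy and object names below are illustrative.

```python
# Toy model of the cached-gateway pattern: recently accessed objects stay
# in the local cache; objects idle past a policy threshold are tiered out
# to S3, and a later read pulls them back into the cache.

class TieredCache:
    def __init__(self, idle_seconds):
        self.idle_seconds = idle_seconds
        self.last_access = {}           # object name -> last access time
        self.in_s3 = set()              # objects retired to S3

    def access(self, name, now):
        # A read retrieves the object back into the local cache if needed.
        self.in_s3.discard(name)
        self.last_access[name] = now

    def apply_policy(self, now):
        # Retire anything idle past the threshold to low-cost S3.
        for name, seen in self.last_access.items():
            if now - seen > self.idle_seconds:
                self.in_s3.add(name)

cache = TieredCache(idle_seconds=30 * 24 * 3600)   # hypothetical 30-day policy
cache.access("quarterly-report.pdf", now=0)
cache.apply_policy(now=40 * 24 * 3600)             # 40 days later
print(cache.in_s3)  # {'quarterly-report.pdf'}
```

The key property is that the policy is one-directional and automatic: cold data drains to cheap storage on its own, and only an actual read pays the cost of bringing it back.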
Augmentation or replacement of tape systems is another area where cloud storage is making great headway, using the full range of AWS storage to provide low-cost tiers: from frequently accessed data on EBS volumes to cold storage in Glacier.
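A tiering policy along those lines reduces to a simple rule on access recency. The day thresholds below are illustrative policy knobs, not AWS defaults.

```python
# Hypothetical tier-selection rule matching the range described above,
# from frequently accessed data on EBS down to Glacier cold storage.

def pick_tier(days_since_last_access):
    if days_since_last_access <= 7:
        return "EBS"        # hot: attached block storage
    if days_since_last_access <= 90:
        return "S3"         # warm: low per-gigabyte object storage
    return "Glacier"        # cold: tape-replacement archive tier

print([pick_tier(d) for d in (1, 30, 365)])  # ['EBS', 'S3', 'Glacier']
```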
“Phase Two” Projects
Very often, cloud migrations occur in multiple phases. A pattern has emerged in the types of second-phase projects, and they tend to revolve around collaboration systems.
Email is a critical system with high visibility within an organization. There is typically resistance to moving email first, and given how critical email is to most organizations, this hesitancy is understandable.
However, once the stability and utility of the cloud have been demonstrated by the trailblazing workloads mentioned above, email and collaboration systems are usually the next platforms examined. These systems also tend to have shorter update cycles, and they are frequently driven by the growth of an organization as well as the increasing gravity of its data.
Following closely behind those platforms are usually greenfield launches or refits of dynamic, data-driven systems such as Drupal, content management, BI tools, and analytics. Confidence in the AWS platform’s ability to meet demand, along with the security controls available to and understood by IT teams after the first wave of migrations and adoptions, establishes the trust in, and knowledge of, the systems needed to move forward with more sensitive data platforms.