Technology, like everything else, has trends and cycles.
Public cloud started more than 10 years ago as the hot new tech trend. But now, are things starting to shift? Are organizations thinking twice before automatically moving essential workloads to the public cloud?
The answer is yes, and for a variety of reasons. A few born-in-the-cloud companies have now moved from the public cloud back to on-premises data centers; Dropbox is a high-profile example. And the public cloud's performance (or lack thereof) was a big reason why.
Reality check: Public cloud is all about capacity, not performance
When businesses put their applications in the public cloud, they share infrastructure with many other customers. This can be a good solution because you pay only for what you need, when you need it, and public cloud gives businesses the ability to scale up or down based on demand.
But don't forget the business model behind public cloud: time-sharing. The provider gives everyone a slice of the timeshare pie, which means the provider is promising capacity, not performance. I am not the first person to highlight this drawback; I just want to reiterate it: yes, public cloud providers do place performance limits on the services they provide.
Of course, for workloads you deploy on-premises, you decide how big the performance slice should be. Having that control is critical for latency-sensitive applications, such as those in big data and financial services.
Are new technologies making data centers new again?
Looking forward, two new technologies now available can boost application performance in the data center: containers and composable infrastructure. Running containers on composable infrastructure can deliver better performance for all applications.
Containers are a form of OS-level virtualization: they share a common, lightweight Linux OS and keep only the pieces unique to each application inside the container. Because there is no guest operating system per instance, you can run far more containers on a given server than virtual machines (VMs).
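To illustrate that layering, here is a minimal, hypothetical Dockerfile for a small Python service (the image name and file paths are illustrative): only the application-unique pieces are added as layers on a shared lightweight base, and the kernel itself comes from the host.

```dockerfile
# Shared, lightweight base layer -- the running kernel is the host's
FROM alpine:3.18

# Only the pieces unique to this application go into the container
RUN apk add --no-cache python3
COPY app.py /srv/app.py

# The container runs a single process; there is no guest OS to boot
CMD ["python3", "/srv/app.py"]
```

Because each image adds only these thin, app-specific layers, dozens of such containers can share one server where only a handful of full VMs would fit.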
A big benefit of containers is increased performance, and when you run containers on bare metal, performance improves even more. That's because containers on bare metal don't pass through a hypervisor's hardware-emulation layer separating the applications from the server.
HPE and Docker tested the performance of applications running either inside a single large VM or directly on top of a Linux® operating system installed on an HPE server. On bare-metal Docker servers, performance of CPU-intensive workloads increased by up to 46%. For businesses where performance is paramount, these results tell a compelling story.
Yet some companies have hesitated to move containers out of virtual machines and onto bare metal because of perceived drawbacks of running containers directly on physical servers. These drawbacks, such as the difficulty of managing physical servers, are real concerns with yesterday's data center technologies. Composable infrastructure overcomes them by making management simple through highly automated operations controlled through software.
Composable infrastructure consists of fluid pools of compute, storage, and fabric that can dynamically self-assemble to meet the needs of an application or workload. These resources are defined in software and controlled programmatically through a unified API, thereby transforming infrastructure into a single line of code that is optimized to the needs of the application.
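To make that idea concrete, here is a purely illustrative sketch (not an actual HPE API payload; every field name is hypothetical) of the kind of declarative profile a composable infrastructure API might accept. One software-defined document composes compute, storage, and fabric for a workload, and the controller assembles matching physical resources:

```json
{
  "name": "container-host-01",
  "compute": { "cores": 32, "memoryGiB": 256 },
  "storage": [ { "type": "ssd", "sizeGiB": 960, "raid": "RAID1" } ],
  "fabric":  [ { "network": "prod-containers", "bandwidthGbps": 25 } ],
  "firmwareBaseline": "2024.04"
}
```

Submitting a document like this through the unified API, rather than cabling and configuring servers by hand, is what lets bare-metal container hosts be stamped out and torn down as easily as VMs.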
Because composable infrastructure is simple to deploy and easy to use, it removes many of the drawbacks you would traditionally encounter when deploying containers on bare metal. The end result is better performance at lower cost within your own data center. The combination of containers and composable infrastructure is a marriage made in heaven.
A hybrid IT cloud strategy solves the performance problem of public cloud
When considering where to deploy, start with the performance needs of your application. Then compare those needs against the service levels public cloud vendors offer and what you can deliver on-premises. As I wrote in this article about control over workload performance, businesses need to determine which workloads belong in the public cloud and which should remain on traditional IT or a private cloud. And thanks to today's new technologies, containers and composable infrastructure, staying with traditional data-center deployments may just be the better choice.
About Gary Thome
Gary Thome is the Vice President and Chief Technologist for the Software-Defined and Cloud Group at Hewlett Packard Enterprise. He is responsible for the technical and architectural directions of converged datacenter products and technologies including HPE Synergy. To learn how composable infrastructure can help you achieve a hybrid IT environment in your data center, download the free HPE Synergy for Dummies eBook.
To read more articles from Gary, check out the HPE Converged Data Center Infrastructure blog.