
Stop juggling and increase productivity with better IT infrastructure management


Those in IT are usually quite adept at juggling — keeping lots of balls in the air to ensure the organization’s entire IT infrastructure operates as efficiently as possible.

From providing desktop support to provisioning and updating compute, storage, and fabric resources — the job of the IT professional is never done.

And it seems like the demands on IT become greater every day. That’s because the pace of innovation continues to accelerate, and IT services are becoming ever more complex. Businesses are now managing and consuming IT services across hybrid environments, often on infrastructure that was never designed for these demands. In addition, complex manual processes and non-integrated tools fail to provide the speed and simplicity needed to support current tasks, much less new ideas and applications.

To compete more successfully, CIOs want to move faster — without IT holding them back. The IT team wants to increase productivity by automating and streamlining tasks. And everyone wants less complexity, so they can spend more time innovating.

So what’s the answer? How can businesses move faster, remove management complexity, and increase productivity? To answer these questions, let’s look at a real-world example of a business that achieved those goals with better IT infrastructure management.

No more juggling for Porsche Informatik

Porsche Informatik is one of Europe’s largest developers of custom software solutions for the automobile industry. With more than 1,600 virtual servers and 133 blade servers in two geographically dispersed data centers, Porsche Informatik provides IT services to 450 employees. With 500TB of storage and 12,000 end devices, its environment carries out 1.5 million automated tasks a month. Business-critical applications run across the entire data center, from physical Microsoft Windows clusters to VMware® HA clusters, including in-house developed and third-party programs.

To reduce complexity and streamline management, Porsche Informatik needed a single, integrated management platform. The company turned to HPE OneView, an infrastructure automation engine built with software-defined intelligence and designed to simplify lifecycle operations and increase IT productivity.

HPE OneView allowed Porsche Informatik to stop juggling and improve productivity:

  • Reduced new configuration deployment times by 90%
  • Cut admin and engineer management time by 50%
  • Sped up the detection and correction of routine problems by 50%
  • Improved system availability by 30% to ensure the delivery of business-critical applications
  • Freed IT staff from routine tasks, enabling them to react more quickly to business requirements

An added benefit: a unified API

A key feature in HPE OneView is the unified API that provides a single interface to discover, search, inventory, configure, provision, update, and diagnose the HPE composable infrastructure. A single line of code fully describes and can provision the infrastructure required for an application, eliminating time-consuming scripting of more than 500 calls to low-level tools and interfaces required by competitive offerings.
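
To make that concrete, here is a minimal Python sketch of driving a unified API of this kind over REST. The login and profile endpoints follow HPE OneView’s documented REST conventions, but the appliance address, template name, and payload fields are simplified assumptions; check the API reference for your OneView version before relying on any of them.

```python
import requests

ONEVIEW = "https://oneview.example.com"  # hypothetical appliance address

# Authenticate against the appliance and grab a session token.
session = requests.post(
    f"{ONEVIEW}/rest/login-sessions",
    json={"userName": "administrator", "password": "secret"},
    verify=False,  # lab sketch only; verify TLS properly in practice
).json()
headers = {"Auth": session["sessionID"], "X-API-Version": "800"}

# Look up a predefined server profile template by name.
template = requests.get(
    f"{ONEVIEW}/rest/server-profile-templates",
    params={"filter": "name='web-tier'"},  # hypothetical template name
    headers=headers,
).json()["members"][0]

# One call describes and provisions the infrastructure for an application:
# a server profile stamped out from the template onto a target blade.
requests.post(
    f"{ONEVIEW}/rest/server-profiles",
    json={
        "name": "web-01",
        "serverProfileTemplateUri": template["uri"],
        "serverHardwareUri": "/rest/server-hardware/<id>",  # placeholder target
    },
    headers=headers,
)
```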

HPE realizes the value of APIs, which is why they have created the HPE Composable Partner Program. Together, HPE and their partners build solutions that let customers reduce the time they spend managing their data center or cloud environment. That means businesses can stop juggling and spend more time innovating. A growing list of ISV partners are taking advantage of the unified API in HPE OneView to automate their solutions.

Peter Cermak, IT systems engineer at Porsche Informatik, sums it up well. “The unified API and global dashboard provide a much better, intuitive view of our HPE infrastructure,” says Cermak. “Even people with only basic training can easily see the state of this part of our infrastructure. Not only do we now save a lot of time adding new servers and VLANs, it is also a fire-and-forget task. Previously, we had to re-check and debug profile-related issues but that is no longer necessary. In one operation, staff can configure many servers with identical settings and the time we save enables us to concentrate our work on customer requirements.”

Read the complete success story. Learn more about HPE OneView.

About Paul Miller

Paul Miller is Vice President of Marketing for the Software-Defined and Cloud Group at Hewlett Packard Enterprise (HPE). Paul’s organization is responsible for all marketing content and sales enablement tools to help HPE customers get the best product and solution experience.

To read more articles from Paul Miller, check out the HPE Shifting to Software-Defined blog.

How Hyperconvergence is Evolving toward Composable Infrastructure


Recently, I was scrolling through my Twitter feed and came across an article on TheNextPlatform.com titled The Evolution of Hyperconverged Storage to Composable Systems.

The article discusses the evolution and growth of hyperconverged infrastructure (HCI) as a category, and details the future of HCI as moving toward composable infrastructure. What caught my eye was that the article’s image features HPE Synergy, the first platform built from the ground up for composable infrastructure.

After reading the article, I couldn’t agree more with the author—the evolution of hyperconverged storage is composable. As businesses evolve and grow beyond hyperconvergence, composable infrastructure helps IT cut costs, increase storage for all workloads, and improve networking — while accelerating and simplifying everything.  Here are three examples:

  1. Composable infrastructure lowers costs
    Composable infrastructure lets IT quickly assemble infrastructure building blocks using software-defined resource templates. These templates can easily be composed and then recomposed based on the demands of applications. By maximizing resource utilization, IT can eliminate overprovisioning and stranded infrastructure while ensuring right-sized resource allocation for each application. The result: customers spend less on infrastructure and can provision it significantly faster, in minutes.
  2. Composable storage provides flexibility and simplicity for all workloads
    Composable infrastructure aggregates all stranded or unused storage into pools to meet the needs of any workload, enabling IT to quickly scale storage up and down as workloads dictate. For example, in HPE Synergy, a single storage module can hold up to 40 drives, which can be zoned to one or multiple compute modules. If a compute module needs more capacity, the storage pool can be automatically reallocated among compute modules to meet the needs of the workloads.
  3. Composable fabric simplifies your network
    The network interconnect is typically one of the biggest headaches for IT organizations to manage. To maintain workload performance, most customers overprovision their resources, which increases cost. With composable infrastructure, you can dynamically change network allocation and bandwidth to meet your needs.

For example, HPE Synergy is an enterprise-grade solution built on industry standards that can easily integrate into existing heterogeneous data centers and connect seamlessly to existing networks and SANs. It abstracts away operational minutiae, replacing them with high-level, automated operations. Change operations, such as updating firmware, adding storage to a service, or modifying network connectivity, are automatically implemented via a template, significantly reducing manual interaction and human error. IT can configure the entire infrastructure for development, testing, and production environments using one interface, in one simple step.
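
The template mechanic is easy to picture in code. The following toy model is purely conceptual, my own illustration rather than Synergy’s actual data model or API, but it shows why a single template change can fan out to many compute nodes without per-server manual work.

```python
from dataclasses import dataclass, field

@dataclass
class ResourceTemplate:
    """Conceptual software-defined template: desired state for a class of nodes."""
    firmware: str
    storage_gb: int
    networks: list[str] = field(default_factory=list)

@dataclass
class ComputeNode:
    name: str

def apply(template: ResourceTemplate, nodes: list[ComputeNode]) -> None:
    # One template drives identical configuration across every node,
    # replacing error-prone per-server changes.
    for node in nodes:
        print(f"{node.name}: firmware={template.firmware}, "
              f"storage={template.storage_gb} GB, networks={template.networks}")

web = ResourceTemplate(firmware="2024.04.01", storage_gb=500, networks=["prod-vlan10"])
fleet = [ComputeNode(f"blade-{i}") for i in range(1, 4)]
apply(web, fleet)      # initial rollout

web.storage_gb = 750   # "recompose": change the template once...
apply(web, fleet)      # ...and reapply it everywhere
```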

Hyperconverged combined with composable infrastructure

Today HPE customers are using both hyperconverged and composable infrastructure to achieve more flexibility with more cost-effective results, both at the core of their data center and at the edge of their business.

One example is an international bank that has deployed both hyperconverged and composable infrastructure. These combined technologies provide compelling functionality, workload consolidation, and simplified IT operations. Using HPE OneView, HPE SimpliVity, and HPE Synergy, this bank is improving its IT infrastructure across its entire business. Benefits include simplified IT operations management, a smaller data center footprint, workload consolidation, and enhanced business agility. These solutions are setting the bank apart, letting it offer a growing number of new digital services that improve its customers’ banking experience.

To learn more about the future of composable infrastructure for your business, download the Forrester research report, Hybrid IT strategy insights: Composable infrastructure and business breakthroughs. And for more details on composable infrastructure, download the free e-book, Composable Infrastructure for Dummies.


About Paul Miller

Paul Miller is Vice President of Marketing for the Software-Defined and Cloud Group at Hewlett Packard Enterprise (HPE). Paul’s organization is responsible for all marketing content and sales enablement tools to help HPE customers get the best product and solution experience.

To read more articles from Paul Miller, check out the HPE Shifting to Software-Defined blog.

The World’s Smartest People Can Work for You with the Right Infrastructure Management


More than 20 years ago, Sun Microsystems’ co-founder Bill Joy offered up an insightful thought: “No matter who you are, most of the smartest people work for someone else.”

This thought eventually became known as Joy’s Law, and today its applicability is only compounded by the sheer vastness of the tech industry.

The world’s most intelligent, innovative people are spread across the globe, all working on the industry’s next-generation technologies. The difficult truth is that only a handful of these people, and quite possibly none, work for you. But that doesn’t have to be the case. With the right IT strategy, you can put the smartest people from all over the planet to work for you – all in your own data center. To get there, you’ll need to get serious about turning your environment into a software-defined data center (SDDC).

Software-define your data center

First, let’s start with the benefits software-defined infrastructure delivers for the group of innovative people who are currently working for you. Your innovators – your developers – all work in the world of software. But your IT Ops organization typically deals in hardware. By software-defining your infrastructure, you bridge the gap between the two groups so that everyone is working in the same sphere.

In an SDDC, all of your infrastructure is managed by software, helping you to automate tasks, reduce risk, and move faster with less hands-on work to maintain infrastructure. But it’s not just about managing with software. It’s also about running software tools that help your developers build the apps and services your business is trying to deliver. Getting these two layers to work together seamlessly is key to building out the best software-defined data center, and where you have the opportunity to start recruiting the world’s brightest.

In some cases, you might already have many of these people “working” for you. For example, if you’ve already virtualized much of your data center using VMware or Red Hat products, you’ve got some of them. If you’re using Docker to containerize your applications so you can move apps between your on-premises infrastructure and the cloud, you’ve got some more of them. But what you might be missing is having them all in the same place.

To make it easier for you to take advantage of the top-notch tools coming out of companies such as Docker, VMware, Red Hat, Puppet, Chef, Microsoft, and more, you need an infrastructure management solution that is built on a rich, unified API. Why? Because it will allow you to “recruit” those wonderful people Bill Joy was talking about in minutes, not hours. A unified API allows you to quickly integrate the tools other companies have created with just a single line of code, and you can manage these tools from a single interface across your infrastructure. With a unified API, you can present infrastructure to your developers in one common language.
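
For a sense of what that looks like in practice, HPE publishes an open-source Python SDK for OneView that wraps the unified API. The sketch below follows the SDK’s documented usage pattern, but the address and credentials are placeholders and exact method names can vary between SDK versions, so treat it as illustrative.

```python
# pip install hpeOneView  (HPE's open-source Python SDK for OneView)
from hpeOneView.oneview_client import OneViewClient

client = OneViewClient({
    "ip": "oneview.example.com",  # hypothetical appliance
    "credentials": {"userName": "administrator", "password": "secret"},
    "api_version": 800,
})

# Servers, storage, and networks all hang off the same client object,
# so tools from different vendors can integrate through one interface.
for profile in client.server_profiles.get_all():
    print(profile["name"], profile["status"])
```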

Public cloud is a popular solution today because of the speed with which developers can directly access the resources they need without going through their company’s IT organization. With the right tools, you can bring the same kind of speed and agility to your data center and be a better service provider for your business.

Right infrastructure, right people

HPE OneView is a management tool that lets you easily transform your infrastructure into a software-defined environment. At the core of HPE OneView is a rich, unified API that supports an extensive partner ecosystem. The world’s smartest people can get you started with integrations for the best tools in the industry across DevOps, IT Ops, cloud engines, facilities engines, and developer toolkits.

With the best tools and the smartest people working for your company, you can create a fast, flexible environment that moves like the cloud. You’ll quickly see how easy it is to flex and customize your environment, accelerate innovation, and support new business growth.

To learn more about the benefits of HPE OneView and the extensive partner ecosystem HPE has built to help you on your software-defined journey, download the e-book: HPE OneView for Dummies.


About the Author

Paul Miller is Vice President of Marketing for the Software-Defined and Cloud Group at Hewlett Packard Enterprise (HPE). Paul’s organization is responsible for all marketing content and sales enablement tools to help HPE customers get the best product and solution experience. To read more articles from Paul Miller, check out the HPE Converged Data Center Infrastructure blog.

Cost and Utilization Challenges of a Hybrid Cloud Environment


It’s been more than a decade since Amazon launched Elastic Compute Cloud and forever changed how businesses consume compute resources.

Over the years, the popularity of cloud computing has continued to grow. That’s because many businesses are attracted to the promise of increased agility, faster innovation, and low startup costs that the cloud provides.

As enterprises expand to using multiple clouds, many have struggled to control costs. Effectively managing costs across multiple clouds in a hybrid IT environment has become a significant challenge—commonly resulting in unexpected charges and cost overruns.

High-tech analyst firm Moor Insights & Strategy looked into this challenge, producing a comprehensive report on how to simplify enterprise hybrid cloud cost management. I’ve summarized their findings in this article.

What causes cost overruns in a hybrid cloud environment?

Incomplete planning for actual costs seems to be at the core of the challenge organizations face. Typically, cost overruns occur for the following reasons:

  • Capacity planning didn’t allow for uncertainty
  • Cloud infrastructure utilization is lower than planned
  • Dev/test needs beyond production were not anticipated
  • Smaller costs (data transfer, load balancing, other services) were not accounted for
  • Resources are not deprovisioned when no longer in use
  • Higher-cost services are being used more than originally planned

Solving the cost issue

The next phase in hybrid cloud maturity must include better cost control and utilization. Having a hybrid cloud strategy will provide businesses with a more accurate forecast of expenses, yet this is only one part of the answer. The second part of achieving control over costs and utilization is better visibility into, and accounting of, the cloud infrastructure once it is in use.

If that goal sounds easy, don’t be deceived; it isn’t.

That’s because individual cloud providers have their own infrastructure with a toolset based on maximizing the value of their own cloud platform—not the broader hybrid cloud environment or experience that most enterprises want. These individual cloud toolsets do nothing to increase visibility and/or ease IT operations across complex, hybrid cloud environments.

In addition, individual cloud toolsets present IT operators with a continuous challenge as they try to impose some sort of accountability for infrastructure usage on developers. Monthly bills often span multiple teams and multiple cloud providers, and costs are often captured and categorized only for each individual provider.

Private clouds fare no better, because they often lack effective cost accountability. Additionally, it can take a long time to provision a VM (days to weeks), so teams are reluctant to release VMs back to IT. These two factors result in underutilized, stranded capacity.

What’s the answer for better hybrid cloud cost management?

To solve this challenge, businesses need easy-to-use, self-service tools that manage costs and application deployment across all public and private cloud environments. These tools should include features that not only help IT better manage the entire hybrid IT environment, but also make it easier for developers to get the resources they need to get their work done. Additionally, these tools should provide analytics that help the business better control costs and utilization.

Key elements of cloud management tools should include the following capabilities:

  • Self-service infrastructure that empowers a developer’s environment
  • Structure for tagging resources upon provisioning, with reporting capabilities (see the sketch after this list)
  • API-driven, services-based SaaS platform that lets users add existing cloud infrastructure and expose options for application developers
  • Insights dashboard for visibility into cloud spend and utilization
  • Features that build cost visibility for budgeting, control, and optimization by the project owners (including drill down by cloud, project, and users)
  • Capacity and/or spend limits configurable per project (to avoid surprises)
  • SaaS-based platform to minimize setup and keep operational burdens low
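
As a concrete illustration of the tagging item above, here is a short sketch using AWS’s boto3 as a stand-in for whichever cloud APIs you run; the AMI ID and tag values are hypothetical.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Tag the instance at provision time, not after the fact, so the resource
# is attributable to a project, owner, and cost center from minute one.
ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # hypothetical image
    InstanceType="t3.medium",
    MinCount=1,
    MaxCount=1,
    TagSpecifications=[{
        "ResourceType": "instance",
        "Tags": [
            {"Key": "project", "Value": "checkout-service"},
            {"Key": "owner", "Value": "dev-team-3"},
            {"Key": "cost-center", "Value": "CC-1042"},
        ],
    }],
)
```

Reporting and spend dashboards can then group usage by the project or cost-center tag, which is what makes per-project accountability possible.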

Empowering developers with the tools they need

IT leaders realize that any cloud management initiative is more likely to be successful if developers are empowered instead of controlled. As I mentioned above, application developers typically lack visibility into, and accountability for, their infrastructure use. This lack of accountability contributes to both cost and utilization inefficiencies.

As developers adopt a more cloud-native structure, including architectures such as microservices, they control the iterative development and deployment of their applications. When provided with tools for visibility and control, developers can also manage the costs of their application, according to their assigned budgets.

Simplifying hybrid cloud

As enterprise cloud adoption continues to mature, organizations are developing a comprehensive strategy for managing both on- and off-premises infrastructure. Hewlett Packard Enterprise (HPE) is the only vendor with enterprise experience that currently offers a comprehensive software service supporting this initiative.

Moor Insights & Strategy recommends that IT leaders consider HPE OneSphere as a hybrid cloud management platform for addressing the cost and utilization challenges in a hybrid cloud environment. HPE OneSphere empowers application developers—a key constituent for success—with easy-to-use, self-service tools for cost management.

The full report, Simplify Enterprise Hybrid Cloud Cost Management with HPE OneSphere, is now available for download. More information on HPE OneSphere is available at www.hpe.com/info/onesphere.


About Gary Thome

Gary Thome is the Vice President and Chief Technologist for the Software-Defined and Cloud Group at Hewlett Packard Enterprise (HPE). HPE has assembled an array of resources that are helping businesses succeed in a hybrid cloud world. Learn about HPE’s approach to developing and managing hybrid cloud by checking out the HPE OneSphere website. And to find out how HPE can help you determine an application placement strategy that meets your service level agreements, visit HPE Pointnext.

To read more articles from Gary, check out the HPE Converged Data Center Infrastructure blog.

Failback in Seconds: Customers Share Disaster Recovery Stories from the Data Center


Hurricanes, floods, cyberattacks, and simple human error — I’m sure you’ve heard your share of these types of data center disaster stories.

Some disasters are predictable and give you plenty of time to prepare. Businesses in the “tornado belt,” for example, experience a much higher probability of weather-related outages in certain months and can plan ahead. Other businesses, unfortunately, get completely blindsided. For the unprepared, recovery often proves to be impossible.

So why doesn’t every IT department have a bullet-proof disaster recovery (DR) plan in place? Typically, organizations have a myriad of perfectly good reasons: other projects take priority, current backups seem to be good enough, or staff is unavailable to work on a DR strategy. Even businesses with a DR plan are at risk if the plan is untested or complicated to operate, or if recovery doesn’t happen fast enough to mitigate damages.

Disaster recovery — keep it simple 

The reality is that preparing for disaster recovery can be daunting, but you can create a solid plan in a few short steps if you invest some time up front. To begin with, select a solution that is easy to deploy and can provide a simple recovery process. You’ll benefit most from a flexible infrastructure that is easy to maintain when future adjustments are needed. Work backwards from there to build out a recovery plan to protect your data. Once your plan is in place, test it regularly so it can be executed by almost anyone — if you’ve chosen a simple solution, testing should require very little time and effort.

Below I summarize how four different businesses implemented simple, yet effective, DR plans using hyperconverged infrastructure. They represent different industries and range in size from small local businesses to midsize enterprises with multiple remote office sites around the globe. Yet, they all have one thing in common: When disaster struck, their data centers were back up and running quickly with minimal or no data loss.

Weathering a Florida hurricane

Florida is no stranger to natural disasters. Schroeder-Manatee Ranch, a land management and agri-business in Manatee and Sarasota counties, set up a resilient hyperconverged infrastructure in its data center shortly before it got hit by Hurricane Irma. Aaron Brosseau, system administrator at the ranch, was relieved to have his data protected by the new solution. With a DR plan in place, they were able to bring their data center back up quickly after the storm. “The current DR setup is a relief,” he said, “because I never have an issue with the backups now. I simply check them every week just to make sure everything still stays protected.” He went on to say, “I see no real downsides to hyperconvergence. It makes so much more sense than any other option we considered.” Read Brosseau’s story.

Surviving disaster thanks to ‘proof-of-concept’ system

When McCullough Robertson refreshed its storage devices, the Australian law firm re-examined its entire IT stack. The company decided to temporarily deploy a hyperconverged system to test the new technology in its production environment. Two weeks later, the company had a system power outage in Brisbane. According to IT Systems Engineer Brodon Hirst, “The entire building was turned off, and we had to failover to our Sydney data center. We were still only in the ‘proof-of-concept’ stage for our DR failover, which made it all the more nerve-wracking.” The hyperconverged system was instrumental in bringing their data back online fast. Hirst brought 50 mission-critical VMs up, “late on a Friday night, whereas our previous DR exercises took an entire day…. These systems were then failed back at the end of the weekend… with the only outage to the business taking place when the connection cutover occurred.” Read the full story.

Recovering from a cyberattack

Worth & Co., a full-service mechanical contractor in the eastern US, wanted to modernize its infrastructure, simplify management, and cut costs. CIO Woody Muth was not disappointed with the solution the firm chose. It replaced three full equipment racks of legacy gear with three 2U hyperconverged nodes in its data center, and three additional nodes in another location. The geographically distributed configuration helps to ensure continuous availability in the event of hardware failures or application mishaps. “The product’s built-in data protection capabilities were a major differentiator for us…. When hit with the CryptoWall virus, we were able to restore all of our critical applications to a known working state within a matter of hours.” Read the whole story.

Running DR tests, failing back in seconds

Brigham Young University College of Life Sciences had a long list of requirements for its new infrastructure: minimize downtime and OPEX; reduce management hours; and provide single-vendor accountability, easy implementation, and offsite disaster recovery. Its hyperconverged solution delivered on all of that, within budget, and fully guaranteed. The college tested the failover capabilities and watched the system fail back in seconds. In system administrator Danny Yeo’s words, “I was blown away.” Watch the video clip.

All of these customers have one thing in common: HPE SimpliVity. The award-winning hyperconverged solution provides three powerful capabilities that help businesses prepare for disasters: a resilient hyperconverged infrastructure, built-in data protection features, and an extremely simple data recovery process. HPE SimpliVity also offers HPE RapidDR, an optional software program that guides you through disaster recovery planning and provides a 1-click failback feature for recovery. You can learn more about the feature and how to combat cyberattacks in this whitepaper on mitigating ransomware risks.

Comprehensive backup and recovery plans are an essential part of business. HPE SimpliVity hyperconverged solutions are inherently simple, and built-in data protection can help reduce the risk of data loss through natural and human-caused disasters. If a disaster does hit your data center, the benefits of a resilient DR solution extend far beyond simple peace of mind.

About Jesse St. Laurent

Jesse St. Laurent is the Chief Technologist for HPE Hyperconverged and SimpliVity. He uses his 20 years of experience to engage channel partners, evaluate emerging technologies, and shape innovative technology solutions involving data center modernization. For more information on how hyperconverged infrastructure can elevate your hybrid IT environment, download the free HPE SimpliVity edition of Hyperconverged Infrastructure for Dummies ebook.

To read more articles from Jesse St. Laurent, check out the HPE Converged Data Center Infrastructure blog.

Insights on the Gartner Magic Quadrant for Hyperconverged Infrastructure


Gartner Inc. recently released the first-ever Magic Quadrant for Hyperconverged Infrastructure and the corresponding Critical Capabilities report.

As with all Magic Quadrant reports, the tech community was eagerly anticipating its release. Vendors, customers, and those interested in following hyperconvergence wanted to see Gartner’s assessment of each vendor’s offerings according to their ability to execute and completeness of vision.

Hyperconverged Infrastructure (HCI) is a growing trend for today’s data centers because it reduces costs, increases scalability, and protects data. HCI is reported to be the fastest-growing segment at a CAGR of 48%, slated to exceed $10 billion by 2021. And according to Gartner, in less than two years, 20% of business-critical applications currently deployed on three-tier IT infrastructure will transition to hyperconverged infrastructure.

As the first of its kind, this new report focuses exclusively on HCI, which is a sub-segment of Gartner’s previous Integrated Systems category. This segmentation allows the Gartner Magic Quadrant for HCI to be more focused on the software within its environment, as opposed to physical infrastructure highlighted in the previous Integrated Systems Magic Quadrant reports.

HCI and the software-defined data center

The data center is moving to a software-defined state, which makes this Magic Quadrant important reading if you are considering investing in HCI or just keeping up with new software-defined trends.

Three to four years ago, HCI was viewed with some level of skepticism and was only being used in non-critical areas. But in today’s IT environments, HCI is creeping into many mission-critical areas of the data center, such as the consolidated data center, business-critical projects, cloud, ROBO, and VDI.

HPE, a leader in HCI

Hewlett Packard Enterprise (HPE) was once again recognized as a leader by Gartner in the Magic Quadrant. And in the corresponding Critical Capabilities report, Gartner praised HPE SimpliVity’s data services, and the related ability to avoid separate backup solutions. Gartner also recognized the reach, support, and reputation of HPE. 

HPE SimpliVity hyperconverged solutions allow you to streamline and enable IT operations at a fraction of the cost of traditional and public cloud solutions by combining your IT infrastructure and advanced data services into a single, integrated all-flash solution. HPE SimpliVity is a powerful, simple, and efficient hyperconverged platform that joins best-in-class data services with the world’s best-selling server and offers the industry’s most complete guarantee.

What’s new in HCI since the report?

As interesting as this first Magic Quadrant for HCI is, a word of caution is in order. Gartner only considered product functionality from participating vendors up through April 2017 — almost 10 months ago. Since that time, much has happened at HPE in its HCI offering. For example:

  • HPE released a Gen10 all-flash version of the HPE SimpliVity 380
  • HPE released multiple versions of RapidDR, improving performance and disaster recovery automation for virtualized environments
  • HPE extended the industry-leading GreenLake Flex Capacity consumption offering to include hyperconverged solutions

I am sure many other vendors have released similar updates, and because HCI represents the fastest-growing segment in the data center, these types of new and innovative solutions are being introduced all the time. Because the market opportunity in this area is massive, expect a lot of changes — both from new vendors and from new solutions that will be reviewed in the next release of the HCI Magic Quadrant.

Download the complete 2018 Gartner Magic Quadrant for Hyperconverged Infrastructure report here.

Download the complete Gartner Critical Capabilities Report here.

Gartner Magic Quadrant for Hyperconverged Infrastructure, John McArthur, George J. Weiss, Kiyomi Yamada, Hiroko Aoyama, Philip Dawson, Arun Chandrasekaran, Julia Palmer, 6 February 2018. Gartner Critical Capabilities for Hyperconverged Infrastructure, Philip Dawson, George J. Weiss, Kiyomi Yamada, Hiroko Aoyama, Arun Chandrasekaran, Julia Palmer, John McArthur, 7 February 2018. Gartner does not endorse any vendor, product or service depicted in its research publications, and does not advise technology users to select only those vendors with the highest ratings or other designation. Gartner research publications consist of the opinions of Gartner’s research organization and should not be construed as statements of fact. Gartner disclaims all warranties, expressed or implied, with respect to this research, including any warranties of merchantability or fitness for a particular purpose.


About Jesse St. Laurent

Jesse St. Laurent is the Chief Technologist for HPE Hyperconverged and SimpliVity. He uses his 20 years of experience to engage channel partners, evaluate emerging technologies, and shape innovative technology solutions involving data center modernization. For more information on how hyperconverged infrastructure can elevate your hybrid IT environment, download the free HPE SimpliVity edition of Hyperconverged Infrastructure for Dummies ebook.

To read more articles from Jesse St. Laurent, check out the HPE Converged Data Center Infrastructure blog.

Successful Hybrid IT Deployment by Accident? Nope, It Takes Planning


You may be familiar with the story of the accidental discovery of the world’s first antibiotic, penicillin.

The breakthrough occurred in 1928 when a doctor returned to his lab after a two-week vacation to find a “fluffy white mass” in a petri dish previously contaminated with bacteria. Upon closer examination, he was surprised to find that he had discovered a mold that prevented the normal growth of bacteria.

History is full of examples of people achieving positive results through accidental endeavors. A recent survey by Forrester Research tells us that a successful hybrid IT implementation is NOT one of them. To implement a successful hybrid IT strategy, comprehensive planning is vital.

In the fall of 2017, Forrester conducted an online survey with 562 IT decision makers worldwide who are trying to lead their organization through a digital transformation. The survey posed questions regarding technologies they were using, challenges they experienced, and benefits of a hybrid IT model. Forrester compiled the results into a new white paper, Hybrid IT strategy insights: Composable infrastructure and business breakthroughs.

Three key findings in the Forrester research stand out:

  • Organizations with a comprehensive hybrid IT strategy are more likely to achieve success.
  • Adopting two technologies (composable infrastructure and continuous delivery automation) leads to better business benefits.
  • Successful hybrid IT strategies are designed so that IT maintains an essential role.

Develop hybrid IT by design – not by accident

It seems hard to believe, but the survey found that just 33% of organizations actually designed a comprehensive hybrid IT strategy from the ground up before implementing one. That means a full two-thirds of the respondents said that they ended up with hybrid IT by accident. And those that just happened upon hybrid IT are often seeing their implementations spin out of control. According to the Forrester report:

“The result is a hybrid model by accident: integrating public cloud with on-premises tech without standardizing on a common infrastructure-as-code practice, shadow IT cloud ‘experiments’ that suddenly become production, and outdated governance practices that slow everyone down. Leaders end up with a model that fails to elevate IT beyond back-end operations, fails to live up to hybrid IT’s potential, and ultimately confuses operations for everyone.”

Two key factors promote success

Businesses that design their hybrid IT strategy around two key technologies, continuous delivery automation and composable infrastructure, are more successful.

Continuous delivery is important because it promotes a constant, iterative development environment that is essential for keeping up with the changing needs of users. Composable infrastructure is also vital because it allows infrastructure to be treated as software code. IT operators can quickly and easily construct new infrastructure from a collection of building blocks, using software-defined, policy-based templates.

Businesses that adopt continuous delivery combined with composable infrastructure report greater control over their workloads: 61% say they have extremely high levels of control, compared to 24% of those without these two technologies. Used together, the two technologies allow organizations to better overcome challenges, realize innovation faster, and gain greater control over workloads. (It’s interesting to note that when a business deployed only one of these technologies, it did not experience the same benefits as those that adopted both.)

The role of IT leaders in hybrid IT

IT leaders who wish to remain relevant are taking a more active role as their businesses strive to compete more effectively in today’s digital world. Fifty-six percent of the Forrester survey respondents state that positioning IT as a central part of the organization is an important element of their hybrid IT strategy — more than any other element.

IT must think beyond maintaining back-end functions. Making this transition means concentrating more on maximizing speed, scale, reliability, and cost flexibility. A successfully implemented hybrid IT infrastructure helps IT deliver the benefits of public cloud while maintaining control and reducing costs. And it also elevates IT as an essential part of the business.

Hybrid IT by design is the better option

IT professionals have a lot riding on their hybrid IT strategy. These models must have the flexibility and scalability to handle increasingly complex environments where workloads can live anywhere. Yet, success rarely happens by accident. To stack the odds of a successful hybrid IT deployment in your favor (both for yourself and your business), develop a comprehensive plan that includes the use of continuous delivery automation and composable infrastructure.

Download the Forrester research report, Hybrid IT strategy insights: Composable infrastructure and business breakthroughs, to see the complete analysis, statistics, and recommendations.


About Gary Thome

Gary Thome is the Vice President and Chief Technologist for the Software-Defined and Cloud Group at Hewlett Packard Enterprise (HPE). HPE has assembled an array of resources that are helping businesses succeed in a hybrid cloud world. Learn about HPE’s approach to developing and managing hybrid cloud by checking out the HPE OneSphere website. And to find out how HPE can help you determine an application placement strategy that meets your service level agreements, visit HPE Pointnext.

To read more articles from Gary, check out the HPE Converged Data Center Infrastructure blog.

The Art of Sizing Your IT Infrastructure


Sizing an IT infrastructure today can be tough.

Although science is used to determine data center needs, there is still an art that goes into the decision process. The reason is simple: a variety of IT devices of various ages from various vendors — each with its own lifecycle, costs, and ongoing maintenance issues — often presents unforeseen consequences.

Most IT equipment is expected to actively contribute for at least 3 years without any additional purchases. But it is difficult to anticipate things like IOPS and capacity one year in advance, much less 3 to 5 years ahead. And how do you plan for future technologies that haven’t been invented yet? Faced with so many variables, it’s surprising more IT professionals don’t consult Ouija boards before they make sizing decisions. All joking aside, sizing your infrastructure does include an element of fortune-telling, and some amount of risk is inherent in the process.

Too small, too large, too many changes

If you select a small system with a low upfront cost, you could have problems down the road. Your end users may become angry because they don’t get the performance they expect. Additionally, many under-sized infrastructures require forklift upgrades a year later, eliminating any initial cost savings. And those costs can be compounded. To compensate for the inadequate size of the initial deployment and to guarantee the subsequent upgrade won’t fall short, departments will go to great expense to oversize the environment, well aware that the business won’t be able to take full advantage of the new products for years.

Organizations that look too far into the future will intentionally round up to a solution that is too large for their needs, hoping to stay ahead of these sorts of forklift upgrades. This strategy is no better than under-sizing. Imagine the cost of over-investment if the director of IT, the VP of IT, and the CIO each round up the configuration to mitigate risk. Planned and purchased with the best intentions, these systems can become outdated before their value is ever realized.

Even the most accurate predictions can go wrong when business priorities change. Today’s business models change far faster than the average lifespan of enterprise IT equipment, requiring new levels of agility. Legacy products with a 5-year lifecycle become process and innovation anchors, keeping the business tied to the time period when the technology was acquired.

Don’t be fooled by cost, capacity, or cloud

It’s easy to fall into the over/under-sizing trap, lured by the lowest upfront costs or the greatest capacity for the dollar. Even public cloud, with its elastic approach to infrastructure, can be expensive in the long run. The cloud can adapt continuously to meet business needs, but this flexibility comes with trade-offs, including cost, compliance complexity, performance based on user and data locality, and data sovereignty. Although you can grow your environment in very small increments and only when needed, costs often go up very quickly if sizing is off.

Don’t be drawn into a quick decision based on a system that promises to deliver top results five years from now. Instead, rein your sights in to the next one to two years and assess your current state before making any decisions. Do you need all-flash performance to support tier-1 apps? Do you have remote and branch offices with sizing requirements of their own? Take a good look at your business needs and size your infrastructure according to your current workload and use cases, then find a solution that leaves options open for future growth.

Sizing for your environment

Each environment has unique data capacity and performance requirements, which is why one size won’t fit all, even in the most flexible, agile infrastructure. VDI is a great example of where sizing can be very difficult to anticipate and potentially costly to address, if the core system proves to be inadequate. The performance needs of the desktops in a new VDI environment, even when science is applied, cannot be accurately anticipated. As ESG points out, predicting what these performance needs will be, how quickly users will adapt, or which user types will be best suited is challenging, especially when planning a VDI environment.

Hyperconvergence offers a cost-effective, scale-as-you-need architecture that is particularly well-suited to VDI and remote office deployments. While it will not provide you with the ability to predict business needs 5 years in advance, it will allow you to set up a reasonably sized infrastructure for the next year and grow in a direction that makes sense. Every hyperconverged node includes all the components needed, so cost and capacity are easy to calculate. These smaller building blocks don’t require large upfront investment or forklift upgrades, so customers can adapt easily to changing dynamics.
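
Because each node is a known quantum of capacity and performance, a back-of-the-envelope sizing model fits in a few lines. The node specs and growth figures below are invented purely for illustration; substitute your vendor’s numbers and your own measurements.

```python
import math

NODE_USABLE_TB = 7.5   # hypothetical usable capacity per node, after resiliency overhead
NODE_VDI_USERS = 150   # hypothetical desktops one node sustains at target performance

def nodes_needed(capacity_tb: float, vdi_users: int,
                 annual_growth: float, years: int = 2) -> int:
    """Size for the next 1-2 years, not a speculative 5-year horizon."""
    future_tb = capacity_tb * (1 + annual_growth) ** years
    future_users = vdi_users * (1 + annual_growth) ** years
    by_capacity = math.ceil(future_tb / NODE_USABLE_TB)
    by_performance = math.ceil(future_users / NODE_VDI_USERS)
    return max(by_capacity, by_performance)

# 20 TB and 400 desktops today, growing 25% a year: start with 5 nodes,
# then simply add nodes as the environment grows.
print(nodes_needed(capacity_tb=20, vdi_users=400, annual_growth=0.25))
```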

Better sizing with hyperconvergence

Sizing is still important, because you want the overall investment to last 5 years. Hyperconvergence offers scalability that is well-defined and simple to implement, allowing customers to start with a simple proof of concept to better understand the viability of a technology and the performance envelope it delivers, then grow the environment as needed.

No crystal ball can ensure perfect sizing, but with hyperconvergence the costs are well-known up front so contingency budgets can be planned for. And hyperconvergence allows the performance to grow linearly, making capacity planning simple. HPE SimpliVity hyperconverged solutions offer a huge advantage for businesses that want the agility and incremental cost advantages of the public cloud in their own data center.

Examine the total economic impact of deploying a hyperconverged solution in this Forrester report.


About Jesse St. Laurent

Jesse St. Laurent is the Chief Technologist for HPE Hyperconverged and SimpliVity. He uses his 20 years of experience to engage channel partners, evaluate emerging technologies, and shape innovative technology solutions involving data center modernization. For more information on how hyperconverged infrastructure can elevate your hybrid IT environment, download the free HPE SimpliVity edition of Hyperconverged Infrastructure for Dummies ebook.

To read more articles from Jesse St. Laurent, check out the HPE Converged Data Center Infrastructure blog.

IT Ops, Developers, and Business Leaders Can Now Seamlessly Manage Their Hybrid Cloud Environment


Across data centers worldwide, cloud conversations have changed dramatically over the last few years.

Five years ago, the only way organizations could start new projects quickly or instantly deploy applications was to leverage the public cloud. While this offered many benefits, it also created some new challenges.

Hybrid cloud management simplified

One of the biggest challenges involved managing applications deployed in different locations. When an organization’s public cloud, on-premises private cloud, and traditional IT each exist in physical and operational silos, how do they seamlessly manage them all? The answer lies in providing a unified approach to all cloud resources.

To achieve management across everything, it is important to understand who is using these hybrid cloud resources, what modes of operation exist, and how those operations can be improved. In a typical business, three groups of people want different things.

  • IT Ops: IT operations want a simple solution. They want to use standardized infrastructure building blocks in their environment, drive as much automation as possible, and have APIs to integrate anything new with anything existing. They also want to devote fewer IT resources to ops and more to apps. In short, they want to spend more of their time improving business outcomes and less time maintaining infrastructure.
  • Developers: Developers want to have a common experience—wherever they are building and deploying applications. They also need simplicity, flexibility, and speed. They want to be self-sufficient and have the freedom to access cloud-native tools to more easily build and deploy apps.
  • Business leaders: For business leaders, driving digital transformation is imperative. They want a competitive advantage by investing in their digital business to grow revenue streams. They also want to manage costs and don’t want to be surprised with unexpected expenses at the end of a billing cycle. Business execs need a real-time view into usage and spending so that they can adjust and optimize along the way.

HPE OneSphere helps each group work together better

A hybrid cloud management software product is now available that addresses the needs of IT ops, developers, and business leaders. HPE OneSphere, the industry’s first SaaS-based hybrid cloud management solution for on-premises IT and public clouds, is designed around the idea of helping these groups work together more efficiently.

  • Deliver clouds faster: Because HPE OneSphere is a SaaS solution, IT Ops benefits from fast time-to-value and the ability to offload management overhead. IT can build and deploy clouds in minutes and provide ready-to-consume resources, tools, and services faster, which improves everyone’s productivity.
  • Enable fast app deployment: HPE OneSphere gives developers easy access to tools, templates, and applications in a cross-cloud service catalog, so they can quickly access the tools they know. No need for changes in the way applications are built and deployed.
  • Manage spend: Business leaders can access consumption and cost analytics across their hybrid cloud environment. This capability lets them act on insights in real time, enabling better and faster decision making. (A hypothetical API sketch follows this list.)
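
To give a feel for what acting on those insights looks like, here is a sketch of pulling per-project cost data from a hybrid cloud management REST service. The endpoint paths, parameters, and field names are illustrative guesses, not the documented HPE OneSphere API.

```python
import requests

ONESPHERE = "https://my.onesphere.example.com"  # hypothetical tenant URL

# Hypothetical session endpoint and payload, for illustration only.
token = requests.post(
    f"{ONESPHERE}/rest/session",
    json={"userName": "exec@example.com", "password": "secret"},
).json()["token"]

# Hypothetical metrics query: month-to-date spend, grouped by project.
costs = requests.get(
    f"{ONESPHERE}/rest/metrics",
    params={"category": "cost", "groupBy": "project", "period": "month"},
    headers={"Authorization": f"Bearer {token}"},
).json()

for row in costs.get("members", []):
    print(row.get("project"), row.get("cost"))
```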

Having it all with simple, hybrid cloud management

Data and applications distributed across multiple clouds, inside data centers, and at the edge create management challenges. Now generally available, HPE OneSphere is designed to streamline collaboration across the entire business — changing how organizations of all sizes get their arms around their hybrid cloud resources.


About Gary Thome

Gary Thome is the Vice President and Chief Technologist for the Software-Defined and Cloud Group at Hewlett Packard Enterprise (HPE). HPE has assembled an array of resources that are helping businesses succeed in a hybrid cloud world. Learn about HPE’s approach to developing and managing hybrid cloud by checking out the HPE OneSphere website. And to find out how HPE can help you determine an application placement strategy that meets your service level agreements, visit HPE Pointnext.

To read more articles from Gary, check out the HPE Converged Data Center Infrastructure blog.

5 Things to Look for Before Jumping on Board with an Azure Stack Vendor


When someone tells me I can have it all, it’s usually too good to be true.

Yet, that’s exactly what Microsoft® is delivering to cloud customers with its popular Microsoft Azure Stack cloud platform.

Microsoft Azure Stack is a hybrid cloud platform that enables you to deliver Azure-consistent services within your own data center. That means you can have the power and flexibility of Azure public cloud services — completely under your own control.

Sound too good to be true? Not according to the myriad of customers who have already deployed Azure, making it one of the fastest-growing cloud platforms today. And its growth shows no signs of slowing down.

Azure Stack lets you leverage the same tools and processes to build apps for both private and public cloud, and then deploy them to the cloud that best meets your business, regulatory, and technical needs. It also allows you to speed development by using pre-built solutions from the Azure Marketplace, including many open-source tools and technologies.

The 5 key capabilities to keep in mind when choosing an Azure Stack solution provider

Before you get started with Azure Stack, take some time to determine the best solution for your needs. Azure Stack is only available as an integrated system from a select group of vendors. Each of these vendors offers different capabilities, features, and services, so make sure the solution you select delivers in these five key areas:

1. Configuration flexibility

Flexibility is important because you want your solution to fit seamlessly into your existing IT environment. Look for a solution that gives you the greatest number of configuration options possible. After all, the more customizable a solution is, the more compatible it will be with your current environment and future needs.  

From a fully customizable solution, you want:

  • The exact size to meet your application requirements
  • The processor type that’s right for your workloads
  • Your choice of memory
  • Scalable storage capacity
  • Support for third-party networking switches, power supplies, and rack options

2. High capacity and performance

Capacity and performance are important because you want to run workloads as fast as possible. Many applications – such as analytics – demand extremely high levels of performance, which can be a challenge when using the public cloud. Running your workloads in an on-premises Azure Stack environment can give you the performance you need. But make sure you check out all of your hardware options to ensure you are getting the highest capacity and performance possible for your money.

3. Pay-as-you-consume pricing

If you deploy your Azure Stack solution using a consumption-based model, you’ll be able to reduce costs by leveraging cloud-style economics for the hardware and the cloud services. This approach gives you:

  • Rapid scalability
  • Variable costs aligned to metered usage
  • No upfront expense
  • Enterprise-grade support

4. High level of expertise

When choosing a solution, look for the vendors that can provide the expertise you will need to help you develop a comprehensive hybrid cloud strategy. Also look for a team that can deliver professional services that will meet your use case, design, and implementation needs.

5. Try before you buy

What could be better than a try-before-you-buy approach to Azure Stack? Select a vendor that can provide you with an innovation center. These centers can get you up to speed on the Azure Stack solution prior to purchase, giving you the information you need to make the right decisions. At an innovation center, you can:

  • Access the latest Azure Stack software and hardware
  • Implement a proof-of-concept
  • Test your use cases for your hybrid cloud

And finally, an innovation center lets you see all of the key capabilities in action. You can experiment using highly flexible configurations while testing performance and capacity. And you can see how everything works together to reduce risk and accelerate time to value.

How the Azure Stack offering from HPE stacks up

HPE ProLiant for Microsoft Azure Stack is a fully integrated hybrid cloud solution that delivers Azure-compatible, software-defined infrastructure as a service (IaaS) and platform as a service (PaaS) on hardware manufactured by industry leader Hewlett Packard Enterprise (HPE).

HPE ProLiant for Microsoft Azure Stack is the only solution available that meets the five key capabilities listed above.

  • It is the most customizable solution available, providing the greatest number of configuration options.
  • It is uniquely architected to achieve both high memory capacity at 768 GB RAM and high performance at full 2400 MHz memory speed, increasing memory bandwidth by up to 28% compared to other same-capacity solutions.
  • HPE is the only infrastructure provider that gives you true, consumption-based IT for Azure Stack, making pay-as-you-consume pricing with HPE GreenLake Flex Capacity an attractive option for many.
  • Over 4,000 HPE experts are trained on Azure and hybrid cloud, ensuring you will deploy with confidence.
  • The HPE-Microsoft Innovation Centers, run jointly by HPE and Microsoft, let you try before you buy, while working with hybrid cloud and Azure experts.

Co-engineered by HPE and Microsoft and based on a 30-year partnership, this joint solution provides the five key capabilities you want in your Azure Stack solution.

To learn more, visit HPE ProLiant for Microsoft Azure Stack or watch this short 2-minute video. You can also view the webinar, Hewlett Packard Enterprise: the clear choice for your Azure Stack solution.


About Paul Miller

Paul Miller is Vice President of Marketing for the Software-Defined and Cloud Group at Hewlett Packard Enterprise (HPE). Learn about HPE’s approach to developing and managing hybrid cloud infrastructure by checking out the HPE OneSphere website. Read more about composable infrastructure through the e-book, Composable Infrastructure for Dummies. And to find out how HPE can help you determine a workload placement strategy that meets your service level agreements, visit HPE Pointnext.

To read more articles from Paul Miller, check out the HPE Converged Data Center Infrastructure blog.
