
The Resurgence of On-Premises Infrastructure, Along with More Options


It wasn’t that long ago that industry experts were quick to declare on-premises computing a thing of the past

It seemed like everyone was moving toward the public cloud, and pundits were calling on-premises computing dead and buried. 

Fast forward a few years, and those very same experts were beginning to backtrack. Maybe on-premises infrastructure wasn’t as dead as we’d thought. An interesting headline appeared on Forbes.com: Reports of the Data Center’s Death Are Greatly Exaggerated. The author explains that although public cloud is pervasive, the data center is actually thriving. Industry analysts seem to agree. Even public cloud vendors recognized the market is not satisfied with only public cloud – and they began pursuing on-premises opportunities. With the announcement of AWS Outposts, AWS, the largest public cloud vendor, admitted that not everything is moving to the public cloud.

The future is all about choice 

Previously, technology limited organizations to two choices – either go to the public cloud or stay on premises. In the future, businesses large and small will have a wide range of options for locations and consumption models. And the lines separating all of those options will blur as IT embraces the cloud experience.

As organizations evaluate their best options, many understand that cloud is not a destination. Instead, it is a new way of doing business focused on speed, scalability, simplicity, and economics. This new business model allows IT to distribute data and apps across a wide spectrum of options. It also shifts the focus of IT away from optimizing infrastructure and toward positively impacting applications. Instead of tuning hardware, IT will manage applications to deliver the best experience wherever the infrastructure is located.

Choose what’s best for each individual workload 

In the future, organizations will place data and applications where each performs best. And constantly changing requirements of service delivery, operational simplicity, guaranteed security, and optimization of costs will dictate placement. For example, the General Data Protection Regulation brought far-reaching changes to how global businesses secure customer data. Consequently, these changes led organizations to adjust how they deployed applications.

Organizations that have done their homework will deploy applications and data where it makes the most sense – at any of myriad points along the spectrum between public cloud and on premises. Some may choose colocation because it provides the infrastructure and security of a dedicated data center without the costs of maintaining a facility. Other workloads are best served in a private cloud using traditional on-premises infrastructure with consumption-based pricing. Another application may demand high security and control along with flexibility and scalability, making an on-premises private cloud the best alternative.

Having choices clearly gives organizations better deployment and consumption options for each individual application. And as needs change, deployment and consumption models will also change. The beauty of having numerous choices is that it gives organizations more flexibility to manage costs, security, and technical needs. 

More choices may equal more complexity 

The downside to all of this choice is that it can mean more complexity, as each deployment model is different. And as new technologies are introduced, the lines between all of these options are often obscured. For example, consumption-based pricing gives customers the flexibility to pay for what they use but still manage the infrastructure themselves, which fits neither the traditional on-premises nor the public cloud model. 

As technology advances and choices continue to expand, it’s often difficult for an organization to adapt. To solve this challenge, they need a new mindset, one that is agile, adjusting to IT changes quickly. Too many times, they are constrained by legacy thinking, infrastructure, and tools. Better training and tools from industry experts can solve these issues. 

Required: Expertise and a trusted advisor 

To succeed in this agile yet often complex environment, many businesses will need valuable expertise. They should seek out partners that provide more than just the two options of on-premises or public cloud. Instead, savvy organizations will choose experts who provide solutions along the spectrum of deployment options. For instance, a vendor such as Hewlett Packard Enterprise (HPE) provides a wide range of solutions, including traditional on-premises infrastructure, private cloud (both owned, and rented with consumption-based pricing), and colocation.

A successful organization will also need tools and professional services to help support as many options as possible. HPE advisory services can help identify the best place to run applications, while HPE OneSphere, an as-a-service multi-cloud management platform, can help organizations gain more control over the complexity of hybrid clouds. In addition, Cloud Technology Partners (CTP), an HPE company, works with IT teams to enhance learning, conquer cloud challenges, and accelerate successful digital transformation.

It’s time to stop limiting choices to only on-premises versus public cloud. Instead, consider all the options available for new opportunities and long-term success. Compliance, cost, performance, control, complexity of migration – all of these factors will determine the right mix for deployment of data and applications.


About Gary Thome 

Gary Thome is the Vice President and Chief Technology Officer for the Software-Defined and Cloud Group at Hewlett Packard Enterprise (HPE). He is responsible for the technical and architectural directions of converged datacenter products and technologies. Over his extensive career in the computer industry, Gary has authored 50 patents.

To read more articles from Gary, check out the HPE Shifting to Software-Defined blog. 

It’s a Brave, New Cloud World Out There. Are You Ready?


Cloud adoption across an enterprise is a major undertaking, fundamentally changing how IT works

What organizations used to accomplish through manpower and manual labor is now transformed and automated. Using software, a business can manage its datacenter as code. And it’s not just compute, storage, and fabric; it’s all the ancillary software services that surround them – identity, encryption, logging, monitoring, and continuous governance. What used to take months to plan and develop now takes days or weeks to implement.
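
To make “managing the datacenter as code” concrete, here is a minimal sketch using AWS’s boto3 SDK, assuming credentials and a default region are already configured; the AMI ID and tag values are hypothetical. The point is that compute, encryption, and governance metadata are declared together in code rather than configured by hand:

```python
# Minimal "datacenter as code" sketch using AWS's boto3 SDK.
# Assumes AWS credentials and a default region are already configured.
import boto3

ec2 = boto3.client("ec2")

# Declare compute, encryption, and governance metadata in one place:
# a small instance with an encrypted root volume, tagged so that cost
# tracking and governance tooling can find it.
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # hypothetical AMI ID
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
    BlockDeviceMappings=[{
        "DeviceName": "/dev/xvda",
        "Ebs": {"VolumeSize": 20, "Encrypted": True},
    }],
    TagSpecifications=[{
        "ResourceType": "instance",
        "Tags": [
            {"Key": "Owner", "Value": "platform-team"},
            {"Key": "CostCenter", "Value": "web-apps"},
        ],
    }],
)
print("Launched:", response["Instances"][0]["InstanceId"])
```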

But there’s a downside. Organizations aren’t well-prepared for or knowledgeable about this monumental shift in how the new IT needs to work. It’s a brave new DevOps world out there, and organizations are struggling to be successful within it.

As I travel the world, I see the same problems play out time and again—and many of these challenges involve the changing roles of people within an IT organization. In this article, I describe how to identify and embrace these roles as you migrate to the cloud and transform your business.

It’s a new world, so you need new roles

Current roles are evolving to encompass new tasks brought on by the daily operations in the public cloud. One of the biggest problems for organizations is a lack of skilled people. And when qualified people are found, they are often expensive. To combat this challenge, many organizations choose to develop a training, up-skilling, and enablement process—an expensive investment in people and a process that will take years.

Changing roles requires people to think differently, transitioning old skills to new skills. The first step in finding the right skills is to identify what you need. To help you determine the learning tracks necessary for your people, I’ve structured the new cloud roles into eight fundamental skill zones:

  • People
  • Data
  • Innovation and new apps
  • Existing portfolio and apps
  • Strategy
  • Security
  • DevOps practice
  • Operations and service quality

The chart below lists these eight skill zones; under each, I break down the skills needed per zone.

[Chart: the eight cloud skill zones, with the key skills needed under each]

This chart contains a wealth of information you can use to identify the roles you need to train your people for a successful cloud transformation. For example, let’s review the top left skill zone: people. In this section, you will see how the roles of human resources (HR) and learning and development (L&D) staff should change.

I list six key skill areas that will need retraining under the people zone. HR and L&D will want to invest in cloud talent enablement programs, along with rethinking what curriculums they offer employees for continuous learning. The people in this group must develop a training initiative targeted at each new role from the other seven areas listed. The goal of HR and L&D staff should be to ensure learning services for this new cloud world are constantly available. They must find and provide educational resources such as virtual instructor-led training, podcasts, webinars, and progressive learning management systems (LMS).

HR and L&D must also understand all of the new cloud roles and weigh different needs in their recruiting and compensation packages. And finally, they will want to look at talent retention differently by determining how to engage current employees better in their new cloud roles.

Break down the silos – the new roles are interconnected

A key thing to keep in mind is that everyone in each skill zone interacts with the others. For example, let’s say you are a developer writing a new application. You’ve been trained in all six areas in your specialty: you understand the new cloud tools and software, automation techniques, platform as a service (PaaS) management, along with the other key areas listed.

But…if you haven’t thought about the economics involving your app, you may not last long. As soon as your CFO (grouped in the top right zone under strategy) sees the latest cloud bill, all of your hard work may very well be scrapped due to runaway deployment costs.

In this new cloud world, no one is an island; everyone is co-dependent on each other. And communication between all eight skill zones is only the beginning. Those in each zone must have a basic knowledge of the roles and responsibilities in each of the other zones.

Successfully navigating the new cloud world

Cloud adoption across an enterprise is a major undertaking. Organizations are finding themselves in this brave, new cloud world without the proper compass to navigate through it. To be successful, you must start at the beginning – by first identifying key roles and the skills you will need.

This article is the fourth in a series on how to train your employees for a successful cloud transformation. You can read the first three articles here: Admitting you have a problem with your cloud transformation, 5 proven tactics to break up the cloud deployment logjam, and IT Operations and Developers: Can’t we all just get along. For more information on a smooth transition to multi-cloud, visit the CTP website. To learn more about how to ease your digital transformation, click here.


About the Author

Robert Christiansen is a cloud technology leader, best-selling author, mentor, and speaker. In his role as VP of Global Cloud Delivery at Cloud Technology Partners (CTP), a Hewlett Packard Enterprise company, Christiansen oversees the delivery of professional services and innovation for HPE’s public cloud business. His client base includes Fortune 500 and Global 2000 customers, and his team’s leadership model encompasses the entire IT transformation journey, from inception to execution. To read more articles by Robert, please visit the HPE Shifting to Software-Defined blog.

Two IT Industry Analysts Discuss Taming Multi-Cloud Complexity


Much has changed for businesses in the last 40 years

In the 1980s, personal computer growth led to microcomputers (servers), and by the 1990s, data centers were commonplace. Then, virtualization and the need to process an explosion of data fueled data center growth in the early 2000s. When Amazon launched its commercial web service (EC2) in 2006, cloud computing dramatically changed how businesses handle their data – and their businesses.

As an IT industry analyst, Martin Hingley, President and Market Analyst at IT Candor Limited, based in Oxford, UK, had a front row seat to all of this change. In a recent BriefingsDirect podcast, Dana Gardner, Principal Analyst at Interarbor Solutions, discusses some of these changes with Hingley. The two analysts examine how artificial intelligence, orchestration, and automation are helping tame complexity brought about by continuous change.

After 30 years of IT data center evolution – are we closer to simplicity?

Gardner began the interview by asking Hingley if a new era with new technology is helping organizations better manage IT complexity. Hingley responded, “I have been an IT industry analyst for 35 years, and it’s always been the same. Each generation of systems comes in and takes over from the last, which has always left operators with the problem of trying to manage the new with the old.”

Hingley recalled the shift to the client/server model in the late 1980s and early 1990s with the influx of PC servers. “At that point, admins had to manage all of these new systems, and they couldn’t manage them under the same structure. Of course, this problem has continued over time.”

Management complexity is especially difficult for larger organizations because they have such a huge mix of resources. “Cloud hasn’t helped,” Hingley explained. “Cloud is very different from your internal IT stuff — the way you program it, the way you develop applications. It has a wonderful cost proposition; at least initially. But now, of course, these companies have to deal with all of this complexity.” Managing multi-cloud resources (private and public) combined with traditional IT is much more difficult.

Massive amounts of data: get your house in order using AI

Additionally, consumers and businesses create massive amounts of data, which are not being filtered properly. According to Hingley, “Every jetliner flying across the Atlantic creates 5TB of data; and how many of these fly across the Atlantic every day?” In order to analyze this amount of data properly, we need better techniques to pick out the valuable bits of data. “You can’t do it with people. You have to use artificial intelligence (AI) and machine learning (ML).”
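
To make that idea concrete, here is a toy sketch of machine-assisted filtering in Python: a simple z-score test (a stand-in for the far richer ML models used in practice) surfaces only the readings that deviate sharply from the rest, so a person reviews a handful of values instead of terabytes. The telemetry values and threshold are invented for illustration:

```python
# Toy illustration of machine-assisted filtering: surface only the
# readings that deviate sharply from the rest of the sample. A simple
# z-score test stands in for the far richer ML/AI models used in practice.
import statistics

def anomalies(readings, threshold=2.0):
    """Return readings more than `threshold` standard deviations from the mean."""
    mean = statistics.mean(readings)
    stdev = statistics.stdev(readings)
    return [r for r in readings if abs(r - mean) > threshold * stdev]

# Invented engine-temperature telemetry; one reading is worth a look.
telemetry = [101.2, 99.8, 100.4, 100.1, 137.9, 100.0, 99.5, 100.3]
print(anomalies(telemetry))  # -> [137.9]
```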

Hingley emphasized how important it is to get a handle on your data – not only for simplicity, but also for better governance. For example, the European Union (EU) General Data Protection Regulation (GDPR) reshapes how organizations must handle data, which has far-reaching consequences for all businesses.

“The challenge is that you need a single version of the truth,” explains Hingley. “Lots of IT organizations don’t have that. If they are subpoenaed to supply every email that has the word Monte Carlo in it, they couldn’t do it. There are probably 25 copies of all the emails. There’s no way of organizing it. Data governance is hugely important; it’s not nice to have, it’s essential to have. These regulations are coming – not just in the EU; GDPR is being adopted in lots of countries.”

Software-defined and composable cloud

Along with AI, organizations will also need to create a common approach to the deployment of cloud, multi-cloud, and hybrid-cloud, thereby simplifying management of diverse resources. As an example of such a solution, Gardner mentioned the latest composable news from Hewlett Packard Enterprise (HPE).

Announced in November 2018, the HPE Composable Cloud is the first integrated software stack built for composable environments. Optimized for applications running in VMs, containers, clouds, or on bare metal, this hybrid cloud platform gives customers the speed, efficiency, scale, and economics of the public cloud providers. These benefits are enabled through built-in AI-driven operations with HPE InfoSight, intelligent storage features, an innovative fabric built for composable environments, and HPE OneSphere, the as-a-Service hybrid cloud management solution.

“I like what HPE is doing, in particular the mixing of the different resources,” agreed Hingley. “You also have the HPE GreenLake model underneath, so you can pay for only what you use. You have to be able to mix all of these together, as HPE is doing. Moreover, in terms of the architecture, the network fabric approach, the software-defined approach, the API connections, these are essential to move forward.”

Automation and optimization across all of IT

New levels of maturity and composability are helping organizations attain better IT management amidst constantly changing and ever-growing complex IT environments. Gaining an uber-view of IT might finally lead to automation and optimization across multi-cloud, hybrid cloud, and legacy IT assets. Once this challenge is conquered, businesses will be better prepared to take on the next one.

To hear the full interview, click here. To learn more about the latest insights, trends and challenges of delivering IT services in the new hybrid cloud world, check out the IDC white paper, Delivering IT Services in the New Hybrid Cloud: Extending the Cloud Experience Across the Enterprise.


About Chris Purcell

Chris Purcell drives analyst relations for the Software-Defined and Cloud Group at Hewlett Packard Enterprise. The Software-Defined and Cloud Group organization is responsible for marketing for HPE Synergy, HPE OneView, HPE SimpliVity hyperconverged solutions, and HPE OneSphere. To read more from Chris Purcell, please visit the HPE Shifting to Software-Defined blog.

“Instant Everything” Poses 3 Unique Challenges for Datacenters at the Edge


We live in a world where digital needs are more immediate than ever

We don’t just want results now, we expect them. We play games and download movies on-demand, share photos and upload videos almost instantly. We don’t even like waiting a few extra seconds for data to travel from the core datacenter to our digital devices. This expectation for “instant everything” is driving a change in where data resides, a shift from the datacenter to the edge.

To accommodate increased demand at the edge, remote office/branch office (ROBO) sites have evolved into modular datacenters. No longer limited to brick-and-mortar offices, edge sites can include scientific labs in the wilderness, manufacturing facilities, airplane cockpits, or IT closets on an oil rig or a cruise ship. These sites have unique challenges, and each challenge can increase costs significantly.

Challenge #1: Sizing and scaling at the edge

Poorly sized infrastructures are a notorious money drain. IT organizations rarely experience a budget surplus, yet business requirements demand the best technology available, which makes sizing even more important in datacenter modernization projects. When complex environments require a refresh, IT needs to estimate business growth rates and system requirements 2-3 years in advance to ensure sufficient data capacity and system performance. Too little capacity limits business growth; too much wastes money as components sit idle in the rack.
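
As a back-of-the-envelope illustration of that estimating exercise, the sketch below compounds today’s capacity forward by an assumed annual growth rate; all figures are invented for illustration:

```python
# Back-of-the-envelope capacity sizing: compound today's usage forward
# by an assumed annual growth rate. All input figures are illustrative.
def projected_capacity_tb(current_tb, annual_growth, years):
    """Project capacity `years` out at a compound annual growth rate."""
    return current_tb * (1 + annual_growth) ** years

current_tb = 40.0    # TB in use today (assumed)
growth = 0.25        # 25% data growth per year (assumed)
for year in (1, 2, 3):
    print(f"Year {year}: {projected_capacity_tb(current_tb, growth, year):.1f} TB")
# Year 3 lands near 78 TB, nearly double today's footprint, before
# any headroom for performance or failure tolerance is added.
```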

Accurate sizing becomes critical at the edge. Remote locations often have smaller budgets than the main office and very limited physical space. Organizations can run into literal walls when they attempt to modernize technology. Expanding physical facilities to accommodate new IT racks, if even possible, is expensive, and power and cooling costs for the additional devices quickly add up.

Challenge #2: Ongoing maintenance and management costs

With the increasing number of datacenters at the edge, it is not feasible to employ specialized IT staff with expertise in storage, servers, networking, and backup and recovery at every site, even for larger enterprises. While individual systems might be inexpensive up front, they are rarely cost effective in the long run. In fact, siloed systems can incur significant OPEX costs for ongoing maintenance and management, including travel and fees for multiple technology specialists, and downtime to synchronize systems.

Challenge #3: The cost of data protection

Fractured data protection strategies put data at risk, and that risk increases with the number of different methods at each site, such as backing up to tape or portable hard drives. Costs for specialists who are contracted to build and test DR plans can escalate if contractors are needed on-site to change tapes and keep backup apps compatible with servers and storage upgrades. For midsize businesses, these costs are prohibitive. If a remote site experiences a natural disaster or a cyberattack, these organizations have the additional costs of downtime and data loss.

How hyperconvergence keeps costs down at the edge

Hyperconverged infrastructure (HCI) addresses every one of these challenges, and these compact systems are increasingly popular at the edge. Easy to deploy and scale, the most comprehensive hyperconverged solutions consolidate storage, servers, and advanced data services, including data protection, inside each node. That means IT only has one integrated system to manage, one solution to upgrade and maintain, and only one tech refresh cycle.

Organizations can start with as few as two nodes per location and scale out incrementally, expanding the system as needs grow and eliminating the cost of guessed-at future requirements. In an HPE SimpliVity infrastructure, hyperconverged devices at all sites are part of the same federation and can be monitored and managed together from a single interface by an IT administrator without specialized training. And multiple nodes protect data across the entire business, from core to edge, in case of a drive, node, cluster or site failure.

A recent study revealed how cost-effective HPE SimpliVity hyperconverged infrastructure can be for ROBO sites. Enterprise Strategy Group set up a use case study with a single remote office deployment and found that hyperconverged solutions provide savings of 49% when compared with a traditional SAN. They then increased the deployments and discovered that the savings become greater as the number of sites grows. In total cost of ownership (TCO), efficiencies in HCI architecture were found to boost savings to as much as 55% in remote and branch office deployments.

With so much importance placed on data at the edge, organizations need a simple, cost-effective infrastructure that can grow with them and keep their data available, no matter where it resides. Learn more about HCI for remote datacenters in the Gorilla Guide to Hyperconverged Infrastructure.


About the Author

Thomas Goepel is the Director of Product Management, Hyperconverged Solutions for Hewlett Packard Enterprise. In this role, he is responsible for driving the strategy of HPE’s Hyperconverged products and solutions spanning from the Infrastructure Components to the Management Software. With over 26 years of experience in the electronics industry, the last 25 of which at Hewlett Packard Enterprise, Goepel has held various engineering, marketing, and consulting positions in R&D, sales and services. To read more articles by Thomas Goepel, visit the HPE Shifting to Software-Defined blog.

IT’s Vital Role in Multi-cloud, Hybrid IT Procurement


Changes in cloud deployment models are forcing a rethinking of IT’s role, along with the demand for new tools and expertise

For a time, IT seemed to be underappreciated or simply bypassed. With a swipe of a credit card, some business units and developers found a quick and easy way to get the services they needed—without engaging with their IT counterparts. Now things are changing. With the complexities of managing different workloads on and off premises, these same users are once again seeking the help of IT to cut through the chaos and ensure the right safeguards are in place.

In a recent BriefingsDirect podcast, Dana Gardner, Principal Analyst at Interarbor Solutions, talks with Rhett Dillingham, Vice President and Senior Analyst at Moor Insights and Strategy. The two discuss the changing IT procurement landscape as more organizations move to multi-cloud and hybrid IT operating models.

IT’s new role – back to the future with an emphasis on hybrid

Dillingham began the interview by explaining why procurement of hybrid and multi-cloud services is changing. “What began as organic adoption — from the developers and business units seeking agility and speed — is now coming back around to the IT-focused topics of governance, orchestration across platforms, and modernization of private infrastructure.” Organizations are now realizing that IT has to be involved in order to provide much-needed governance.

Gardner mentioned that many organizations are looking beyond simply choosing between public or private clouds. Dillingham agrees that there is a growing interest in hybrid cloud and multi-cloud. “Some organizations adopt a single public cloud vendor. But others see that as a long-term risk in management, resourcing, and maintaining flexibility as a business. So they’re adopting multiple cloud vendors, which is becoming the more popular strategic orientation.”

Of course, effectively managing multi-cloud and hybrid IT in many organizations creates new challenges for IT. Dillingham explains that as IT complexity grows, “The business units and developers will look to IT for the right tools and capabilities, not necessarily to shed accountability but because that is the traditional role of IT, to enable those capabilities. IT is therefore set up for procurement.”

Multi-cloud economics 101 — an outside expert can help

Dillingham suggests that organizations consider all types of deployments in terms of costs. Large, existing investments in data center infrastructure will continue to serve a vital interest, yet many types of cloud deployments will also thrive. And all workloads will need cost optimization, security, compliance, auditability, and customization.

He also recommends businesses seek out consultants to avoid traps and pitfalls, which will help better manage their expectations and goals. Outside expertise is extremely valuable not only with customers in the same industry, but also across industries. “The best insights will come from knowing what it looks like to triage application portfolios, what migrations you want across cloud infrastructures, and the proper set up of comprehensive governance, control processes, and education structures,” explains Dillingham.

Gardner added that systems integrators, in addition to some vendors, are going to help organizations make the transition from traditional IT procurement to everything-as-a-service. “That’s more intelligent than trying to do this on your own or go down a dark alley and make mistakes. As we know, the cloud providers are probably not going to stand up and wave a flag if you’re spending too much money with them.”

To help in the governance of all deployments, Dillingham says IT will want to implement the ultimate toolset that will work across both public and private infrastructures. “A vendor that’s looking beyond just public cloud, like Hewlett Packard Enterprise (HPE), and delivers a multi-cloud and hybrid cloud management orientation, is set up to be a potential tour guide and strategic consultative adviser.”

Advice for optimizing hybrid and multi-cloud economics

Gardner concludes the interview by discussing how managing multi-cloud and hybrid cloud environments is incredibly dynamic and complex. “It’s a fast-moving target. The cloud providers are bringing out new services all the time. There are literally thousands of different cloud service SKUs for infrastructure-as-a-service, for storage-as-a-service, and for other APIs and third-party services.”

Dillingham advises that the best IT adoption plan comes down to the requirements of each specific organization. He cautions that the more business units the organization has, the more important it is that IT drives “collaboration at the highest organizational level and be responsible for the overall cloud strategy.” Collaboration must encompass all aspects, including “platform selection, governance, process, and people skills.”

HPE can help IT teams simplify their hybrid and multi-cloud experience with modern technologies and software-defined solutions such as composable infrastructure, hyperconvergence, infrastructure management, and multi-cloud management. Listen to the full podcast here. Read the transcript here.


About the Author

Chris Purcell drives analyst relations for the Software-Defined and Cloud Group at Hewlett Packard Enterprise. The Software-Defined and Cloud Group organization is responsible for marketing for composable infrastructure, HPE OneView, HPE SimpliVity hyperconverged solutions and Project New Hybrid IT Stack.

To read more from Chris Purcell, please visit the HPE Shifting to Software-Defined blog.

What If You Could Get 54 Minutes Back Every Hour?


In all corners of the world, people are looking for ways to spend more time on what’s important to them

Businesses are responding with efficient, consolidated services and devices. Online superstores provide one-stop shopping, dinner-in-a-box services deliver meal prep kits for the entire week, and smart watches deliver a GPS locator with a fitness tracker that links to your phone. The trend toward consolidation and convergence is giving people much more than multi-functional solutions – it’s giving them time back.

Imagine incorporating that kind of consolidation in the data center. Think how much more an IT organization could accomplish with technology so efficient, it gives hours of time back every day. Hyperconverged infrastructure (HCI) can do that.

The Vicious Cycle of Inefficiency

Inefficiency often stems from complexity. Data center infrastructure can include a dozen siloed components from almost as many vendors. Each device and software product has its own management interface, compatibility issues, and upgrade schedule. Administrators must manage capacity on secondary storage, perform upgrades on WAN optimization devices, and set policies for workloads regarding backup and recovery. Whenever a workload is moved or a new component is added to the mix, policies and devices across the environment need to be upgraded or modified. With nearly a dozen devices all aging at different rates, this process can turn into a seemingly endless maintenance cycle.

Legacy infrastructures were never meant to support the volume or diversity of today’s workloads. Aging, siloed components can slow down the entire environment, straining IT resources and taking IT managers away from more productive work or, worse, making them give up nights and weekends to keep the infrastructure functional.

HCI Transforms Data Centers

Organizations today are transforming their IT departments, replacing inefficient systems with more cost-effective, intelligent solutions. Operational efficiency is a key reason for the rapid adoption of HCI. The efficiency comes from the integration of multiple IT components into a compact, software-defined platform that is simple enough to be managed without specialized training. Because it’s software-defined, dozens of mundane tasks can be automated in an HCI environment. Hyperconvergence can provide enormous time savings – if it is the right hyperconverged infrastructure.

In a recent study, IDC underscores that fact: Although converging servers and storage into a single compact device makes every hyperconverged solution more space efficient compared to traditional infrastructure, it is the software-defined technology and built-in data services that determine overall operational efficiency. According to the study, organizations that use HPE SimpliVity gain back as much as 91 percent of their time. In other words, an organization that currently spends an hour on a particular task in a traditional environment could accomplish that same task in less than 6 minutes with HPE SimpliVity.

A biomedical research institution interviewed in the study decided to move to HCI because they were “…overworked and understaffed, so how can we find a way for the staff to get more done? I see HCI as a way to do that. [HPE] SimpliVity is driving real operational simplicity.”

Another organization in the report operates more than 10 resorts and supports 25 different business applications. The hospitality company was similarly won over by the operational simplicity and centralized VM-centric management. One executive said, “We will soon be at a point where the data in our datacenter will strictly be on [HPE] SimpliVity.”

With HPE SimpliVity hyperconverged infrastructure, IT administrators can complete in minutes complex tasks that used to take hours or days, giving them time – and possibly their weekends – back. The hyper-efficient solution includes built-in backup and disaster recovery, WAN optimization, dedupe, and compression. VM-centric, policy-based management and mobility further simplify the infrastructure, giving additional time back to the IT organization for strategic activities.

To see the full results of the HCI study and learn more about the operational efficiencies that customers experienced, read the IDC report. For more information on hyperconvergence, download the Dummies Guide to HCI.


About the Author

Lauren Whitehouse is the marketing director for the Software-Defined and Cloud Group at Hewlett Packard Enterprise (HPE). She is a serial start-up marketer with over 30 years in the software industry, joining HPE from the SimpliVity acquisition in February 2017. Lauren brings extensive experience from a number of executive leadership roles in software development, product management, product marketing, channel marketing, and marketing communications at market-leading global enterprises. She also spent several years as an industry analyst, speaker, and blogger, serving as a popular resource for IT vendors and the media, as well as a contributing writer at TechTarget on storage topics.

To read more articles from Lauren, check out the HPE Shifting to Software-Defined blog.

Reducing Infrastructure Complexity: There Has to be a Better Way


Most people manage numerous financial accounts such as checking accounts, savings accounts, credit cards, mortgages, and various other recurring bills

Historically, we needed to deal with each account separately by mailing in payments, keeping a record of each transaction, and ensuring the correct funds were available. This process was cumbersome and took too much time. Today’s online banking technology has solved those complexities. Now people can remotely manage all their accounts from one interface, saving time, adding flexibility in how money is managed, and significantly improving the overall banking experience.

Needed: A common management solution for the data center

The old-style banking analogy is very similar to the multiple types of IT infrastructure management tools companies have used in the past (and some still use today) to run their business. Over the years, businesses built their IT infrastructure environment with separate servers, storage, and networking components, making their data center too complex and difficult to manage. These IT environments quickly grew out of control, along with the myriad of management tools and specialized resources needed to maintain them.

Just as online banking simplified and streamlined banking activities, many companies need one common management solution to consolidate all their infrastructure silos. Customers need a single solution, with one unified management interface, to help address the complexities of managing their hybrid IT environments that include server, storage, and networking.

Most infrastructure vendors struggle to provide such a solution, as many management applications address only server management and may require separate applications to handle different storage and networking components. The challenge is that such a tool would need to provide an at-a-glance view of the health status of multiple servers, profiles, and enclosures all over the world (including virtual and physical appliances). Just like online banking capabilities, the management tool would also need to provide faster access to information, which would enable better decision making.

The future is now

The type of tool companies need to solve their ever-growing complexity issues in the data center is not a dream for the future. This type of infrastructure management software for data centers is already available and being used by over 1 million customers around the world. HPE OneView enables customers to transform their servers, storage, and networking into software-defined infrastructure that eliminates complex manual processes, spurs IT collaboration, and increases the speed and flexibility of IT service delivery. The infrastructure management solution takes a software-defined, programmatic approach through efficient workflow automation, a modern dashboard, and a comprehensive partner ecosystem.

Using one management interface, customers can pull together servers, storage, and networking for many infrastructure platforms. And if a customer has multiple instances of HPE OneView, the HPE OneView Global Dashboard consolidates their hybrid infrastructure environment into a single view, delivering much-needed simplicity.
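
For readers curious what that programmatic approach can look like, here is a rough Python sketch of querying an appliance’s REST interface for server health. The endpoint paths, API version, and credentials are illustrative assumptions and vary by OneView release; consult the product’s API reference for specifics:

```python
# Rough sketch of pulling server health from an HPE OneView appliance's
# REST API. Endpoint paths and the X-API-Version value vary by release,
# so treat the specifics here as illustrative assumptions.
import requests

APPLIANCE = "https://oneview.example.com"  # hypothetical appliance address
HEADERS = {"X-API-Version": "800"}         # assumed API version

# Authenticate and obtain a session token.
login = requests.post(
    f"{APPLIANCE}/rest/login-sessions",
    json={"userName": "administrator", "password": "secret"},
    headers=HEADERS,
).json()
HEADERS["Auth"] = login["sessionID"]

# One call returns the health of every managed server.
servers = requests.get(f"{APPLIANCE}/rest/server-hardware", headers=HEADERS).json()
for member in servers.get("members", []):
    print(member["name"], member["status"])
```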

One customer, Porsche Informatik, is one of Europe’s largest developers of custom software solutions for the automobile trade. The company needed to streamline the management of over 1,600 virtual servers and 133 blade servers – and they chose HPE OneView to help them do it.

“HPE OneView gives us a much better and more intuitive view of our HPE server infrastructure. This dramatically speeds up many system management tasks, including the deployment of new ESXi servers and VLAN configurations, which are 90% faster,” explains Gerald Nezerka, Windows Services Team Manager for Infrastructure & Common Platforms, Porsche Informatik.

Another customer, Opus Interactive, has provided cloud hosting and colocation services, supplemented by backup and recovery services, for more than 22 years. When the company was smaller, they were fine using standalone infrastructure management tools on an ad hoc basis. But after experiencing rapid growth over the last several years, Opus Interactive needed a simpler way to maintain its infrastructure in order to adhere to service-level agreements (SLAs) with more than 300 customers. They turned to HPE OneView.

“HPE OneView enables us to do more with less,” explains Eric Hulbert, President and Cofounder of Opus Interactive. “In order to maintain our level of service as we continue our rapid growth, we need to get as much infrastructure and as many customers managed per engineer or administrator as possible. Because it is now easier to manage all our devices, HPE OneView makes this happen.”

Just like online banking, infrastructure management software simplifies the view into hybrid IT environments and allows IT managers to save significant time and reduce complexity. And new features within HPE OneView now allow customers to consolidate all of their hybrid infrastructure into a single view – no matter where they are located. A better way to manage your infrastructure is here, and organizations everywhere are using it.

To learn more about HPE OneView, check out the free e-book: HPE OneView for Dummies. Or visit the website here.


About the Author

McLeod Glass is Vice President and General Manager of HPE SimpliVity & Composable at Hewlett Packard Enterprise. In this role, he is responsible for all aspects of the product lifecycle across HPE Software-Defined Infrastructure. This includes the HPE Composable Infrastructure, Hyperconverged, and Cloud Software product lines. McLeod has held positions in engineering and product marketing across multiple HPE business units, managing server, storage, and software product lines.

4 Best Practices to Help Organizations Succeed in a Hybrid Cloud World


ESG Research Insights paper describes behaviors organizations should adopt to improve multi-cloud management.

Hybrid cloud continues to grow in popularity, fueled by its agility and scalability. Yet, many organizations realize that a hybrid cloud model (a combination of private, on-prem, and public cloud) also introduces complexity, which slows innovation. A hybrid model also makes it more difficult to view global utilization or track and control costs.

A recent ESG Research Insights Paper, Multi-cloud Management Maturity, Tangible Reasons Management Excellence Is Required in a Hybrid Computing World, details how organizations are managing heavily hybridized environments. In the paper, ESG surveyed 600 IT decision makers in organizations of at least 1,000 employees to determine a multi-cloud management maturity score.

Those surveyed use public cloud for nearly a quarter of their workloads – and the majority utilize multiple cloud service providers. They also implement on-premises workloads in the following percentages:

  • 37% of on-premises workloads are run on traditional physical servers.
  • 36% are run on VMs that are still predominantly managed as traditional servers.
  • 27% are run within a private cloud that inherits the core attributes of public cloud services.

Survey results – only 15% are “Transformed”

Based on the results, ESG divided the organizations into 4 groups: Unrealized, Modernized, Automated, and Transformed, ranking them from lowest to highest according to their degree of success in a hybrid landscape. The survey found most organizations fall somewhere in the middle of the multi-cloud maturity spectrum:

  • 15% Unrealized
  • 35% Modernized
  • 35% Automated
  • 15% Transformed

These results didn’t surprise me, as successful multi-cloud management involving public cloud and on-premises private clouds is complex – and very few tools that solve the complexity problem are available.

However, I found a couple of results particularly interesting. Respondents could earn a total of 100 maturity points, yet the highest score achieved was only 86. And an organization needed a score of just 67.5 for ESG to include it in the “Transformed” category. ESG noted that the most advanced organizations still have lots of room to grow in terms of improving their cloud management maturity.

Another interesting finding in the report was that even incremental improvements resulted in substantial gains. Organizations that moved from one tier to the next realized substantial benefits throughout the enterprise.

Transformed organizations – what’s their secret?

According to ESG’s research, organizations that want to improve their multi-cloud management maturity should implement four best practices. To join the ranks of the Transformed multi-cloud management organizations, enterprises should do the following:

  1. Invest heavily in converged/hyperconverged infrastructure (CI/HCI) for on-premises workloads.

Ninety percent of all Transformed organizations have deployed CI/HCI platforms in their environments to support legacy workloads, while 88% have done so for newly developed applications. Interestingly, instead of waiting for legacy infrastructure to depreciate fully, many of these organizations are implementing these newer technologies proactively.

  2. Actively automate IT operations so staff can focus on other areas.

Transformed organizations report that they have either completely or mostly automated processes such as VM provisioning (86%), application deployment (88%), and performance/problem monitoring (86%). Once IT staff automates these processes, they have more time to focus on other initiatives such as supporting application development or re-architecting legacy applications.

  3. Invest in consolidated hybrid cloud management tools.

No matter where a workload runs (in a public or private cloud), hybrid cloud management tools will manage and monitor cloud costs, as well as provide consistent user experiences. ESG discovered Transformed organizations are twice as likely as Unrealized organizations to consolidate management under one IT team for public cloud and on-premises resources (58% vs. 23%, respectively). Due to the simplified and streamlined operations provided by hybrid cloud management tools, a single management team is sufficient.

  4. Make informed workload placement decisions and optimize workloads before moving them to public cloud infrastructure.

Nearly half of Transformed organizations (48%) fully customize applications prior to migration. Just 3% of Unrealized organizations put the same level of effort into workload preparation prior to migrating to the cloud.

The bigger truth

Based on ESG’s research, Transformed organizations are the exception, not the rule. And even those who have “transformed” have not reached the pinnacle, which means that they have the opportunity to improve even more. Additionally, ESG’s research shows even incremental improvements result in big rewards for the organization. For those interested in improving their standing against the benchmarks laid out by ESG, it is important to take a look at the 4 best practices above and begin implementing these suggestions.

Read the full report: Multi-cloud Management Maturity, Tangible Reasons Management Excellence Is Required in a Hybrid Computing World. HPE can help organizations simplify their hybrid IT experience with modern technologies and software-defined solutions. Additionally, Cloud Technology Partners (CTP), an HPE company, will work with the IT team to enhance learning, conquer cloud challenges, and accelerate a successful digital transformation.


About the Author

Gary Thome is the Vice President and Chief Technology Officer for the Software-Defined and Cloud Group at Hewlett Packard Enterprise (HPE). He is responsible for the technical and architectural directions of converged datacenter products and technologies. Over his extensive career in the computer industry, Gary has authored 50 patents.

To read more articles from Gary, check out the HPE Shifting to Software-Defined blog.

Hyperconvergence Delivers Unexpected Results for VDI Users


The self-help industry is steadily growing due to a basic human desire for improvement

Consumers can find a plethora of books, podcasts, and seminars, not to mention products and services that promise positive change.

IT organizations that want to experience similar improvements can find them by tackling technology makeovers they have long deferred as less than urgent. The results often deliver unexpected and far-reaching benefits. Client virtualization environments stand to benefit more than most because even small improvements in system performance are multiplied across hundreds of desktops.

Technology pilot delivers unexpected performance results

System slowdowns, boot storm delays, and backups or batch jobs that run late into the evening are standard fare in virtual desktop environments. While the situation is not ideal, organizations will often postpone modernization of an aging virtual desktop infrastructure (VDI) as long as it remains functional. This can be a costly mistake. As one IT administrator discovered, a switch to hyperconverged infrastructure (HCI) can lower costs and deliver improvements to end user experience that ripple through the entire company.

In the outdated VDI environment at Maryland Auto Insurance, end users were accustomed to working around performance issues and system limitations, but the systems manager was not satisfied. When it was time for a server refresh, he took a step back and looked at the whole infrastructure. HCI could provide the required reliability and application compatibility, while consolidating the IT footprint in the datacenter. Intrigued, he set up an HCI pilot.

The new system delivered a much higher than expected return. As he put it, his organization didn’t know how slow their system was until they saw what was possible. “We do everything with virtual desktops around here, and the change was remarkable. Outlook and Microsoft Office loaded instantaneously, as did our underwriting and imaging applications. Multitasking was no longer a chore.” The IT team witnessed a growing level of frustration with the existing system. “People would look over their shoulders and see their co-workers in the pilot doing everything faster, and they wanted in.”

Proven Benefits of HCI for VDI

This might seem like an isolated case, but benchmark tests in the industry show similar performance results. Hyperconvergence is a popular choice for VDI because it is modular, efficient, and cost-effective. By converging multiple IT functions into a single server building block, hyperconvergence makes it easy to deploy, manage, and scale infrastructure that supports virtual desktops.

In a recent report, HPE SimpliVity hyperconverged infrastructure powered by Intel® demonstrated high performance in VDI environments. The Login VSI-validated study showed consistent, very low latency performance at scale, with plenty of compute and storage resources available to host up to 1,000 knowledge workers. Even during node failure, HPE SimpliVity provided continuity of service with no impact on the end user experience. This kind of speed and resiliency can have a powerful effect on VDI end users and on business operations.

Maryland Auto Insurance reduced their infrastructure from seven racks to just half a rack with HPE SimpliVity, which helped them cut energy consumption nearly in half. They also took advantage of built-in backup, dedupe, and compression features. But the big surprise came in performance benefits, multiplied many times over in the VDI environment. Workloads across their enterprise now are balanced with just a few clicks. Every end user benefits from reduced time to launch applications. And because backups and batch jobs run two to three times faster, the system manager and his team get their evenings back.

If your data center could use improvement, consider HCI. For more information, check out The Gorilla Guide to Hyperconverged Infrastructure Strategy, which includes a chapter focused on VDI.


About the Author

Thomas Goepel is the Director of Product Management, Hyperconverged Solutions for Hewlett Packard Enterprise. In this role, he is responsible for driving the strategy of HPE’s Hyperconverged products and solutions spanning from the Infrastructure Components to the Management Software. Thomas has over 26 years of experience working in the electronics industry, the last 25 of which at Hewlett Packard Enterprise, where he has held various engineering, marketing and consulting positions in R&D, sales and services. Read more articles by Thomas at the HPE Shifting to Software-Defined blog.

Stumbling with your Public Cloud Deployments? An Industry Analyst Offers Advice


As many organizations rush headlong into public cloud, IT continues to adjust to the complexities these environments create

Cost concerns, security, and a widening skills gap seem to consume today’s agenda, but is there a more basic issue at play here?

According to one industry analyst, the answer is yes. A cultural solution to cloud adoption may hold the key to greater success.

In a recent podcast, Dana Gardner, Principal Analyst at Interarbor Solutions, discusses this topic with Edwin Yuen, Senior Analyst for Cloud Services and Orchestration, Data Protection, and DevOps at Enterprise Strategy Group (ESG)[i]. Several interesting insights from the interview caught my attention.

It’s not the technology slowing you down; it’s the culture

Gardner begins the interview by asking why enterprises are not culturally ready for public cloud adoption. Yuen explained one reason is that the role of IT in this new cloud world is not well-defined.

“We see a lot of business end-users and others essentially doing shadow IT – going around IT. That actually increases the friction between IT and the business,” explains Yuen. “It also leads to people going into the public cloud before they are ready, before there’s been a proper evaluation – which can potentially derail things.”

Yuen went on to say lines of business (LOB) or other groups are not working with core IT as they deploy to the public cloud; therefore, they are not getting all of the advantages they can. “You want to maximize the capabilities and minimize the inconvenience and cost. Planning is absolutely critical for that — and it involves core IT,” says Yuen. To ensure the best results possible, you should involve key players in the organization. For example, the organization’s procurement experts should be consulted to ensure you get the best deal for your money.

Budgeting is also important. “Companies very quickly realize that they don’t have variable budgets,” continues Yuen. “They need to think about how they use cloud and the consumption cost for an entire year. You can’t just go about your work and then find that you are out of budget when you get to the second half of the fiscal year.”

The beauty of an as-a-service model is you only pay for what you use. The risk is you have a virtually unlimited capacity to spend money. Remember, while capacity appears unlimited, budgets are not. IT is in the best position to help advise in this area, working with end users and procurement to ensure the organization doesn’t overspend in the cloud.
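
One concrete guardrail IT can put in place is a budget-pacing check: compare month-to-date spend against a prorated monthly budget and raise a flag early. The sketch below assumes spend data is already available (providers expose it through billing APIs or exports); the figures are invented:

```python
# Simple budget-pacing guardrail: flag cloud spend that is running ahead
# of a prorated monthly budget. How you fetch month-to-date spend is
# provider-specific (billing APIs, cost exports); the figures are invented.
from datetime import date
import calendar

def budget_alert(month_to_date_spend, monthly_budget, today):
    days_in_month = calendar.monthrange(today.year, today.month)[1]
    expected = monthly_budget * today.day / days_in_month  # linear proration
    if month_to_date_spend > expected:
        pace = month_to_date_spend / expected - 1
        return f"ALERT: spend is {pace:.0%} ahead of budget pace"
    return "OK: spend is within budget pace"

# $6,200 spent by March 15 against a $10,000 monthly budget.
print(budget_alert(6200.00, 10000.00, date(2019, 3, 15)))
# -> ALERT: spend is 28% ahead of budget pace
```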

Bridging the cultural divide: a new level of communication

Yuen also brought up the importance of communication within the enterprise. “The traditional roles within an organization have been monolithic. End-users were consumers, central IT was a provider, and finances were handled by acquisitions and the administration. Now, everybody needs to work together, and have a much more holistic plan. There needs to be a new level of communication and more give-and-take.”

The key for improved cloud adoption, says Yuen, is opening the lines of communication, bridging the divides, and gaining new levels of understanding. “This is the digital transformation we are seeing across the board. It’s about IT being more flexible, listening to the needs of the end users, and being willing to be agile in providing services. In exchange, the end users come to IT first.”

Before public cloud, users of IT didn’t have to worry about cost or security issues, because IT handled it all for them. When an organization switches to the cloud without IT involvement, they often don’t discover everything IT was doing for them until things go wrong. Conversely, when supporting cloud environments, IT needs to make it fast and easy for users to deploy applications, while also putting guardrails in place. Successfully deploying cloud means working with a full stack team of experts all across the organization before jumping into a cloud operating model.

An inverse mindset

Yuen also brings up something he calls an inverse mindset. Traditionally, organizations maintained and optimized specific infrastructure to impact an application in a positive way. “Now, we are managing applications to deliver the proper experience, and we don’t care where the systems are. That infrastructure could be in the public cloud, across multiple providers; it could be in a private cloud, or a traditional backend and large mainframe system.” They just have to be configured correctly to provide the best return and performance the business requires.

As organizations embrace this inverse mindset, Yuen says it will be critical to monitor everything across all the different environments effectively with tools that automate and orchestrate. Additionally, organizations need machine learning (ML) or artificial intelligence (AI). “Once we train the models, they can be self-learning, self-healing, and self-operating. That’s going to relieve a lot of work.”

Having the right tools, such as HPE advisory services, can help you identify the best place to run applications. In addition, HPE OneSphere, an as-a-service multi-cloud management platform, gives organizations more control over the complexity of hybrid clouds.

Let HPE help you simplify your hybrid cloud experience with modern technologies and software-defined solutions such as composable infrastructure, hyperconvergence, infrastructure management, and multi-cloud management. Cloud Technology Partners (CTP), an HPE company, will work with your IT team to enhance learning, conquer cloud challenges, and accelerate your successful digital transformation. To listen to the full podcast, click here.

[i] Podcast recorded on Nov. 15, 2018. Yuen has since become Principal Product Marketing Manager at Amazon Web Services.


About Gary Thome

Gary Thome is the Vice President and Chief Technology Officer for the Software-Defined and Cloud Group at Hewlett Packard Enterprise (HPE). He is responsible for the technical and architectural directions of converged datacenter products and technologies. Over his extensive career in the computer industry, Gary has authored 50 patents.

To read more articles from Gary, check out the HPE Shifting to Software-Defined blog.
