
What does multi-cloud mean to IT Ops, Developers and LOB?


Looking at hybrid IT challenges and solutions from 3 different perspectives

Digital transformation is ushering in a new era of hybrid IT – a combination of both public and private cloud – that allows businesses to innovate while meeting their own unique organizational needs. Yet a hybrid IT environment can create complexity and operational friction that can slow a business down and hold it back.

As businesses seek ways to remove IT friction, streamline operations, and accelerate business innovation across their hybrid environment, it’s important for them to think about the needs of three particular groups – IT operations, developers, and line of business (LOB) executives. What challenges does each group face? What opportunities do they see?

To answer these questions, IDC conducted in-depth interviews with IT operations staff and line of business individuals at Fortune 100 enterprises. The results can be found in a comprehensive research report – The Future of Hybrid IT Made Simple.

IT ops: Where’s my automation for deployment and management?

A hybrid IT environment is definitely more challenging for IT operations than a single, virtualized compute infrastructure located on premises. Without automation, the siloed resources in a hybrid IT environment must each be deployed and managed separately.

Other concerns with hybrid IT include IT interoperability and integration, application certification, change management/tracking, and complexity of the overall infrastructure. In addition, extensive training is needed for operations and development personnel as IT shifts to a service broker model.

As these challenges mount, IT can no longer be treated as a back-office function. Instead, IT ops is expected to drive new sources of competitive differentiation, while still supporting legacy infrastructure and processes.

As one IT ops executive explains in the report, “Hybrid IT is more complex when it comes to deployment and ongoing management. The initial setup of the process takes some time, and training people how to use the different portals further extends deployment timelines. Every time something new comes up, it’s always a challenge because people don’t necessarily like to learn anything new. There’s always a learning curve, and they are usually not too happy about it. Change management is always a headache.”

Application Developers: Where are my developer services and ease of use?

Hybrid IT is also challenging for application developers, but for completely different reasons. Developer services, such as infrastructure APIs, workflow, and automation tools, are not consistently available across private and public clouds. And a lack of unified provisioning tools means that IT must serialize much of public and private cloud service delivery, which leads to bottlenecks.

Developers feel that a complex hybrid IT infrastructure is difficult to interact with, slowing down their ability to quickly roll out new services on new platforms. Interoperability between development, test/QA, and production environments is also a problem, along with the learning curve on available tools that manage cloud resources. Integration and version control between their on-prem and cloud environments are also lacking, which slows them down and increases complexity.

The report quotes one application developer as saying, “Our major concern is with deploying third-party applications across multiple clouds. A big issue is the proprietary nature of each of these clouds. I can’t just take the virtual image of the machine and deploy it across multiple clouds without tweaking it.”

Line-of-Business (LOB) Executives: Where are my visibility and cost controls?

LOB executives have very different concerns. They are frustrated by the slow response for new private cloud services. Although public cloud services are fast, executives feel that they also carry risk. They wonder if using public cloud exposes their business to the outside world. They also are concerned that they will be locked into a specific public cloud service. Adherence to SLAs, transparency, privacy, consistency across clouds, overall performance, and cost—all these issues weigh heavily on a LOB executive’s mind.

According to one LOB executive quoted in the report, “Application integration with on-premises data management layers like file systems is a problem when developing in the cloud. With hybrid IT, our goal is to ensure that data is available across all locations, using some kind of a secure message broker integrated with a database and a distributed file system.”

Reducing hybrid IT complexity – is it possible?

So what’s the solution? Is it possible to operate a hybrid IT environment without the headaches associated with it?

According to IDC, the answer is yes—but only if a multi-cloud strategy is bound together with an overarching hybrid IT strategy. And this is where companies like Hewlett Packard Enterprise (HPE) can help. HPE software-defined infrastructure and hybrid cloud solutions let businesses reduce complexity so they can innovate with confidence.

For IT operations staff, using composable and hyperconverged software-defined infrastructure means that they will be able to move quickly. They can easily deploy and redeploy resources for all workloads. Plus, automating and streamlining processes frees up resources so IT can focus on what matters most. Developers can drive innovation using multi-cloud management software, rapidly accessing the tools and resources required to quickly develop and deploy apps. Lastly, multi-cloud management options let LOB executives gain insights across public clouds, private clouds, and on-premises environments, providing the visibility needed to optimize spending.

By delivering solutions that make hybrid IT simple to manage and control across on-premises and off-premises estates, a business can better meet the needs of IT operations, developers, and LOB executives. A hybrid IT strategy combined with multi-cloud management empowers everyone to move faster, increase competitiveness, and accelerate innovation.

To find out how HPE can help you determine and deploy a digital transformation strategy for your hybrid IT environment, visit HPE Pointnext. Read IDC’s full report, The Future of Hybrid IT Made Simple.


About Lauren Whitehouse

Lauren Whitehouse is the marketing director for the Software-Defined and Cloud Group at Hewlett Packard Enterprise (HPE). She is a serial start-up marketer with over 30 years in the software industry, joining HPE from the SimpliVity acquisition in February 2017. Lauren brings extensive experience from a number of executive leadership roles in software development, product management, product marketing, channel marketing, and marketing communications at emerging and market-leading global enterprises. She also spent several years as an industry analyst, speaker, and blogger, serving as a popular resource for IT vendors and the media, as well as a contributing writer at TechTarget on storage topics.

To read more articles from Lauren, check out the HPE Shifting to Software-Defined blog.

Today’s Challenge: Remove Complexity from Multi-cloud, Hybrid IT


The cloud has conditioned us to expect more: on-demand availability, simplicity, and speed—while the typical multi-cloud, hybrid IT environment is getting ever more complex

Ivan Pavlov was a Russian physiologist, famous in the 1890s for introducing the concept of a conditioned response – the theory that our brains can be trained to associate certain stimuli with a response.

In today’s enterprise data center, public cloud is a stimulus that has conditioned developers and lines of business to expect immediate availability of resources using a simple, self-service, on-demand infrastructure. IT is expected to respond by transforming on-premises infrastructure into a comparable experience, as well as managing workloads on multiple public clouds – without adding more complexity and cost.

Yet, is this goal of simple hybrid IT infrastructure management even possible with the tools available today?

Managing costs, security, and compliance amidst growing complexity

In the past, enterprise IT provided a private infrastructure to developers, complete with a tried-and-true command and control structure. Processes and approval workflows were optimized for cost, security, and compliance. (Of course, it’s well-known that these processes were/are typically slow and can delay product development by weeks or even months.)

In today’s enterprise, multiple cloud platforms are routinely used, each with its own toolset focused on maximizing the value of that vendor’s cloud platform. And most enterprise IT environments embrace a mixture of deployment models (on-premises infrastructure combined with multi-cloud), causing even more complexity.

In the midst of these challenges, IT’s operational workload is increasing, while the operations budget is decreasing. The enterprise’s highly distributed, siloed environment becomes complicated as there is no centralized management view. IT needs to find a way to deliver self-service public cloud and private infrastructure that empowers developers – while providing the tools and accountability for everyone to easily manage cost, security, and compliance.

What’s needed for better hybrid cloud management?

In a nutshell, an effective hybrid cloud management tool needs to provide the following:

  • Self-service infrastructure for low operational burden on IT
  • Developer enablement for rapid application development and deployment
  • Visibility and governance into infrastructure costs

This sounds relatively simple, right? Several approaches currently exist to tackle this problem, as described in a recent analyst report by Moor Insights & Strategy detailing enterprise hybrid cloud management challenges and solutions.

“One approach is to deliver a hybrid cloud, self-service infrastructure platform as a collection of standalone cloud infrastructure options,” the report explains.  A majority of enterprises today are using this method, yet it has a significant downside. “While this can deliver self-service resources from each infrastructure, it fails to unify and simplify the implementation of cost management, security, and compliance across the infrastructures via a consistent developer and operator experience.”

The report goes on to discuss another tactic – delivering cloud management tools via a cloud management platform (CMP). The report points out, “These platforms are good for automating cloud-native development and operations with cloud-based self-service infrastructure provisioning, but they have a few common limitations that result in only a partial solution for IT’s needs.”

These limitations include:

  1. A focus on container-based, cloud-native applications, rather than on existing applications migrated onto cloud infrastructure.
  2. Adoption by only parts of the organization – as different leaders prefer different platforms and tools.
  3. A focus on unifying developer tooling and operations across infrastructures instead of a focus on unifying enterprise-wide management of cost, security, and compliance.

Can we effectively manage a complex, hybrid IT environment?

Recent breakthroughs in hybrid cloud management software let IT achieve the goal of delivering all infrastructure simply, regardless of whether it is on or off premises. Hewlett Packard Enterprise (HPE) recently introduced HPE OneSphere, a hybrid cloud management solution that enables IT to easily manage data and applications distributed across multiple clouds and inside data centers.

With HPE OneSphere, IT can build and deploy clouds in minutes and provide ready-to-consume resources, tools, and services faster. Developers get easy access to tools, templates, and applications in a cross-cloud service catalog, so they can quickly use the tools they know. Consumption and cost analytics across the entire hybrid cloud environment are always available, providing real-time insights into cost and governance.

Immediate availability, simplicity, and speed

Enterprises want to empower their developers by giving them the tools they need to be successful. That means embracing both on- and off-premises infrastructures – public cloud, private cloud, bare-metal, and containers. Yet they must also be able to manage it all simply, securely, and cost-effectively.

With HPE as a partner, IT can now better meet everyone’s expectations of availability, simplicity, and speed – no matter where applications are deployed.

To read more, download the analyst whitepaper by Moor Insights & Strategy, HPE OneSphere empowers IT to deliver all infrastructures as a service.


About Gary Thome


Gary Thome is the Vice President and Chief Technologist for the Software-Defined and Cloud Group at Hewlett Packard Enterprise (HPE). He is responsible for the technical and architectural directions of converged datacenter products and technologies which include HPE OneSphere – multi-cloud management, HPE SimpliVity – Hyperconverged Infrastructure, HPE Synergy – Composable Infrastructure and HPE OneView – Integrated Management.

To read more articles from Gary, check out the HPE Shifting to Software-Defined blog.

The Cost of Inaction in the Data Center


Two houses in my neighborhood went on the market in late spring, and one of them is already under contract. The other one hasn’t budged. Why? I blame it on the cost of inaction.

The houses are about the same age, but in very different conditions. The first house was brought up-to-date when the family moved in. They focused on major improvements and items that would keep the house low maintenance. The other neighbor decided to do nothing immediately, only addressing issues when they needed to be fixed. Ultimately, this neighbor fell behind and had to deal with an increasing to-do list of repairs and upgrades. A few years elapsed, his house went on the market, and now potential buyers don’t just see a house; they see an enormous to-do list of items that need attention.

Maintaining a data center is a bit like maintaining a house. Every business needs to modernize to some extent to stay competitive in the market; it’s simply a question of approach. A passive, problem-by-problem approach, like the one my neighbor took, can be appealing to IT organizations with limited budgets. The familiar process of waiting for issues to arise allows business to go on as usual and seems to cost nothing, making the upfront costs and disruption required for a more proactive approach difficult to justify. However, if the organization’s goal is a complete technology refresh over the next 2-3 years, the cost of this passive approach can be considerable.

Hidden costs of inaction

Waiting for licenses to expire and equipment to fail sounds like it should cost nothing, because no upgrade actions are being taken. But it’s actually the most expensive way to maintain and scale out your environment, particularly in an infrastructure comprised of older apps and devices.

Along with the occasional hardware malfunctions, firmware upgrades, and power outages that every business experiences, legacy data centers are saddled with problem-solving in a complex, inefficient environment that was built over time. Each siloed device and software product has its own compatibility issues and upgrade schedule, compounding routine maintenance and updates into a continuous refresh cycle that can put a strain on resources. The more unique products and vendors in a data center, the more time needed to maintain, manage, and troubleshoot the systems, sometimes bringing productivity to a standstill.

To spread out the financial impact, some organizations choose to modernize over time, following the same approach they used to build the original data center. This piecemeal approach adds an extra burden to IT workloads and often requires costly specialists to help with deployment and employee training. Specialists, employee training, upgrade expenses, overtime for staff, diminished productivity, and lost revenue opportunity are all hidden costs in a passive modernization process. In the meantime, much of the existing technology in the data center is aging, and the cost to bring it up-to-date is increasing. Temporary staff and short term contractors can invent ways to make the old and new systems work together, but after these specialists leave, the root complexity persists.

A better way: proactive modernization

Businesses often will reject the idea of converting to new technology because of the perceived budget requirements and business disruption. But what if no additional budget was required? What if the same funds currently budgeted for maintaining the infrastructure could be used in a different way with remarkably different results?

Instead of throwing money at aging equipment, follow these simple rules when upgrading:

* Invest in a more efficient infrastructure. Proactive investment in a low maintenance, high efficiency, scalable solution does not necessarily require additional funding. If your organization wants to make significant changes to the data center, don’t let cost be a roadblock. Many vendors offer flexible financial services.

* Replace multiple products at once with a single platform that is easy to scale. Replacing one point solution with another might marginally improve efficiency. Yet, consolidated solutions that can be upgraded and scaled easily will provide the biggest improvements in operational efficiency.

* Choose a solution that your IT staff can easily manage. Select products that are simple to use and provide management capabilities across multiple sites.

A hyperconverged data center is a simpler data center

Hyperconverged infrastructure (HCI) has become a popular option, particularly for mid-size customers, because it takes a simple, building block approach to a tech refresh. Because hyperconverged infrastructure consolidates compute, storage, network switching, and a variety of IT functions onto virtualized hardware, the solutions can greatly simplify environments that have been divided by siloed point solutions. This consolidation drives significant operational efficiency, freeing up time for other initiatives and revenue-generating activities, even for organizations with limited budgets and staffing. Forrester Research has revealed that HCI systems can make current IT staff as much as 53% more productive than they are in a legacy environment.

The same Forrester study found that HPE hyperconverged solutions also reduce total cost of ownership (TCO) by 69% compared to traditional IT. HPE SimpliVity hyperconverged infrastructure converges the entire IT stack – compute, storage, firmware, hypervisor, and data virtualization software – into a single integrated node along with deduplication, compression, and data protection. VM-centric management makes the system easy to learn and the compact nodes can easily be scaled to meet demand in the data center or at the edge.

Focus on outcomes in your data center refresh

A proactive approach to updating your data center can provide immediate results. Organizations that look past the daily to-do list and instead focus on desired outcomes such as improved efficiency, reduced TCO, and more time for innovation, find they can upgrade with minimal disruption and a surprisingly quick return on investment.

To learn more about how hyperconvergence can deliver better outcomes for your IT organization, download the free e-book: Hyperconverged Infrastructure for Dummies.

_____________________________________

About Jesse St. Laurent

Jesse St. Laurent is the Chief Technologist for HPE Hyperconverged and SimpliVity. He uses his 20 years of experience to engage channel partners, evaluate emerging technologies, and shape innovative technology solutions involving data center modernization. To read more articles from Jesse St. Laurent, check out the HPE Shifting to Software-Defined blog.

How to Automate Infrastructure to Save Time and Drive Simplicity


Movies and TV shows are full of inventors trying to come up with ways to simplify everyday tasks to save time

Two examples come to mind immediately – Dick Van Dyke’s character, Caractacus Pott, in Chitty Chitty Bang Bang, and Christopher Lloyd’s Emmett “Doc” Brown in Back to the Future. (Both of whom invented breakfast-making machines, oddly enough!) The reason these types of inventions appear time and time again on screen is because everyone can relate to them. Everyone is looking for a way to simplify everyday tasks, save time, and make life easier.

In much the same way, today’s infrastructure management solutions can automate many processes in the datacenter to help businesses move faster and compete more effectively. Physical infrastructure can be automated using software-defined intelligence and a unified API. Hundreds to thousands of lines of code can be reduced to a single line of code, saving countless hours and making IT infrastructure management easier…(sorry this can’t make breakfast for you just yet…but it’s coming!)

Can software-defined intelligence and a unified API also help businesses deliver new applications and services faster — innovations that are often the lifeblood of many businesses? Yes, and here’s how.

Continuous delivery of applications and services requires fast, policy-based automation across development, testing, and production environments. A unified API for infrastructure can do that by allowing developers and ISVs to integrate with automation tools. For instance, a unified API can simplify control of compute, storage, and networking resources, so developers can code without needing a detailed understanding of the underlying physical resources.
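
To give a feel for what this looks like in practice, here is a minimal Python sketch in which one REST call provisions a server from a pre-built template that already describes its compute, storage, and network settings. The base URL, headers, endpoint paths, and response fields are assumptions made for illustration (loosely modeled on REST-style infrastructure APIs such as HPE OneView’s), so treat it as a sketch rather than a documented recipe.

```python
import requests

# Illustrative values only; real endpoints, authentication, and payloads depend
# on the infrastructure management product and its API reference.
BASE_URL = "https://infra-manager.example.com/rest"
HEADERS = {"Auth": "<session-token>", "X-API-Version": "1200"}

def provision_server(template_name: str, server_name: str) -> dict:
    """Create a server from an existing template in a single API call."""
    # The template encodes the compute, storage, and network settings, so the
    # developer never has to script against the underlying physical devices.
    templates = requests.get(
        f"{BASE_URL}/server-profile-templates",
        headers=HEADERS,
        params={"filter": f"name='{template_name}'"},
    ).json()["members"]

    response = requests.post(
        f"{BASE_URL}/server-profiles",
        headers=HEADERS,
        json={"name": server_name, "serverProfileTemplateUri": templates[0]["uri"]},
    )
    response.raise_for_status()
    return response.json()

if __name__ == "__main__":
    print(provision_server("web-tier-template", "web-01"))
```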

For example, customers using an automation tool like Chef can easily automate the provisioning of an entire stack, from bare metal to applications, in minutes. By combining an HPE partner’s automation with HPE OneView’s ability to stage, schedule, and install firmware updates, entire stacks can be updated or changed very quickly.

A growing list of ISV partners are taking advantage of the unified API in HPE OneView to automate solutions for customers. These partners range from large software suites such as VMware® vCenter, Microsoft® System Center, and Red Hat, to focused solution providers such as Chef, Docker, and others. By integrating with the unified API in HPE OneView, ISVs can provide solutions that reduce the time their customers spend managing their environments.

HPE OneView’s automation can take care of the “breakfast-making” housekeeping issues involved with “keeping the lights on” while simultaneously opening the doors to innovation. And just like the on-screen inventors and the inventions they created, HPE OneView and its integration partners make the datacenter simpler, allowing you to concentrate on the important tasks of developing apps that create value for your business.

To learn more about the benefits of HPE OneView and the extensive partner ecosystem HPE has built to help you on your software-defined journey, download the e-book: HPE OneView for Dummies.


About Paul Miller

Paul Miller is Vice President of Marketing for the Software-Defined and Cloud Group at Hewlett Packard Enterprise (HPE). Paul’s organization is responsible for all marketing content and sales enablement tools to help HPE customers get the best product and solution experience.

To read more articles from Paul Miller, check out the HPE Shifting to Software-Defined blog.

Place Your Bets: Hyperconverged or Composable Infrastructure in Tomorrow’s Data Center?


In today’s fast-moving world of data center innovations, it’s difficult for businesses to stay up to date on industry trends and best practices

Technology changes quickly, so businesses look to industry analysts to make better informed purchasing decisions. Experts such as Gartner and Forrester invest heavily in research, and their corresponding reports and white papers provide clarity on future trends and changes in the industry.

What factors will shape tomorrow’s data center?

Many companies rely on these industry analysts because their research provides information needed to make a more educated decision about future architecture and infrastructure investments. But do these analysts really provide that clarity? And what happens when two research organizations conflict in their assessment of the future data center?

For example, I recently read Gartner’s Four Factors That Will Shape the Future of Hyperconverged Infrastructure. In this report, Gartner highlighted how they see the future of hyperconverged infrastructure (HCI). Although the report was filled with valuable information, I couldn’t help but think it focused too narrowly on HCI, without aligning to the other technologies that IT is using today and will incorporate into the data center of the future.

I wanted another opinion and found a recent report from Forrester titled The Software-Defined Data Center Comes Of Age, which provided a very different opinion from Gartner’s. In Forrester’s report, the authors say that composable infrastructure systems (CIS) “…will become the predominant approach to computing” in the software-defined data center (SDDC) of the future.

First announced in 2015 by Hewlett Packard Enterprise (HPE), composable infrastructure treats compute, storage, and network devices as fluid pools of resources. Depending on workload requirements, IT can compose and then recompose these resources based upon the needs of the applications. A composable infrastructure improves business agility, optimizes IT resources, and accelerates time-to-value.

The Forrester report goes on to mention how they see HCI fitting into the software-defined data center of the future. “Virtual resource pools and physical resource pools are the SDDC building blocks…The major change over the past 18 months has been the lockstep maturation of hyperconverged infrastructure and its enablement of more efficient private cloud.”

According to Forrester, the SDDC is a major shift for enterprise infrastructure, unifying virtual and physical infrastructure with cloud and legacy systems. They advise infrastructure and operations professionals to “…invest in software and hardware solutions that are programmable or composable and repeatable with automation, leveraging prebuilt models built off of compute, storage, and network.”

Forrester’s report also predicted that other vendors will soon introduce their own version of composable infrastructure. “Only one current product legitimately meets Forrester’s definition of a local CIS — HPE’s Synergy. But the concept is so powerful and such a logical evolution that we’re confident in predicting a wave of similar products either shipping or announcing in 2018 from Cisco, Dell EMC, and possibly Huawei.”

Indeed, Forrester’s prediction was correct. In early May, Dell announced Kinetic Infrastructure at Dell EMC World. They have become the second vendor to announce a CIS platform. Like Forrester, I expect more announcements from other vendors in the next couple of years.

After I finished reading both industry analyst reports, I felt that the Forrester report provided a better view of how the data center of tomorrow would operate. As I meet with customers, I see firsthand how many IT organizations are already successfully incorporating containers, bare metal, and virtualized workloads — in combination with hyperconverged and composable infrastructure — to successfully achieve many aspects of what the Forrester report outlines.

A safe bet: a software-defined data center with hyperconverged and composable infrastructure

Investing in the best architectures and technologies will help you succeed not only today, but into the future. When making a decision, keep in mind how individual technologies fit together into the overall, cohesive picture. In the software-defined data center of the future, everything should work together seamlessly, as opposed to fragmented silos.

It’s also important to talk with actual companies that are actively investing in these future technologies. Learning from industry experts and those who are implementing successful SDDC strategies will help increase your odds of choosing the best technology for your data center today and into the future.

To learn more about the future of composable infrastructure for your business, download the Forrester research report, The Software-Defined Data Center Comes Of Age or Gartner’s Four Factors That Will Shape the Future of Hyperconverged Infrastructure. And for more details on composable infrastructure, download the free e-book, Composable Infrastructure for Dummies.

_____________________________________

About Paul Miller

Paul Miller is Vice President of Marketing for the Software-Defined and Cloud Group at Hewlett Packard Enterprise (HPE). Paul’s organization is responsible for all marketing content and sales enablement tools to help HPE customers get the best product and solution experience.

To read more articles from Paul Miller, check out the HPE Shifting to Software-Defined blog.

The Rise of Multi-Cloud, Hybrid IT for the Enterprise


A decade of innovations means that you can now create a multi-cloud, hybrid IT strategy that saves money and is more effective than running everything in the public cloud

The quote, “Everything old is new again” can certainly be applied to technology. Like everything else, technology has cycles that change over time. Innovations lead to new solutions that often change what’s old and make it better. And before you know it, a technology that was thought long gone is now revived — and better than ever.

Consider the enterprise datacenter. Cloud computing emerged more than 10 years ago as the hot, new tech trend. Although businesses continue to migrate many applications to the cloud, trends reveal that enterprises are now choosing hybrid IT (a mix of on-premises and public cloud) because it gives them more options.

An enterprise can now keep key applications on-premises, using new technologies that offer the benefits of public cloud, without the cost, performance, or security concerns. That’s because on-premises solutions have not stood still. Costs have dropped, new types of infrastructures are now available, and deployment models are more flexible — giving you better options than ever before.

Looking back…when people first started using public cloud

On-premises costs were higher; infrastructure was harder to manage and provision

Ten years ago, hardware costs were higher. Flash memory was much more expensive and VMs were harder to manage and provision. Because a VM is a whole machine (even though it is virtual), each one contains all the code needed to run the virtual machine, which means lots of data duplication. Of course, this overhead resulted in relatively high admin costs and lower utilization rates.

The IT department didn’t offer speed and flexibility

Also, 10 years ago the needs of developers and data scientists working in a digitally-empowered world started changing. They needed speed and continuous innovation, something that Gartner identified as Mode 2. In a Mode 2 world, much more experimentation was needed. “Compose, tear-down, recompose. Rinse and repeat” became the mantra of Mode 2 developers.

Ten years ago, developers loved the autonomy that public cloud gave them because they didn’t need to jump through hoops. They didn’t have to wait until the IT department gave them permission to fire up a server. They would just contact a public cloud provider, pull out the company credit card, and be ready to go.

You had to pay for servers up front

Prior to public cloud, if you wanted compute power to try out a new application, you would have to pay for the hardware and software up front. Public cloud was a refreshing change. You paid for what you used, and you stopped paying when you didn’t need it anymore.

For many, this type of pay-as-you-go model was a game changer. Much of the work that Mode 2 developers do is experimental. Because they didn’t know if their application was going to work or even how much compute power they needed, the pay up front scenario was simply not an option.

Looking forward: How on-premises solutions have changed

Much has changed during the past 10-plus years that makes on-premises technology new again.

On-premises solution costs are now lower and performance is higher

Infrastructure vendors have innovated in many areas, which has cut the costs of running your data center. Flash memory costs have fallen dramatically, along with the price per gigaflop. New IT technologies are also available that save money and increase performance.

Hyperconverged solutions allow VMs to be managed in the same place, which results in lower admin costs and increased utilization. And some hyperconverged offerings provide de-duplication and compression, providing lower storage requirements and faster data access.

Composable infrastructure takes hyperconverged to another level by virtualizing the entire IT infrastructure. It treats physical compute, storage and network devices as services, managing all of IT via a single application. This eliminates the need to configure hardware to support specific applications and allows the infrastructure to be managed in software. Composable infrastructure creates pools of resources that are automatically composed in near real time. Because you can flex resources (compute, storage, and fabric) to meet your needs, you get higher utilization, saving you money.

All of these factors have dramatically lowered the costs of on-premises infrastructure — providing options that are better than public cloud for specific applications.

Developers now have speed and flexibility

And what about today’s Mode 2 developers — do on-premises solutions help with the speed and flexibility they need to innovate? The answer is yes. Some hyperconverged and composable solutions have workspaces that let developers work with autonomy. Yet, IT still retains the ability to govern these spaces, a capability that helps find wasted resources such as unused VMs.

Developers also love APIs, another capability now available to users of composable infrastructure. A developer can programmatically control composable infrastructure through a single, open RESTful API to automate the provisioning, configuration, and monitoring of infrastructure.
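
As a small illustration of the monitoring side, the sketch below polls a REST-style infrastructure API and reports any compute resource whose health status is not “OK”. The endpoint, headers, and field names are assumptions made for this example rather than a documented API, so adjust them to whatever your composable platform actually exposes.

```python
import requests

# Placeholder address and auth; substitute your own management appliance details.
BASE_URL = "https://composable.example.com/rest"
HEADERS = {"Auth": "<session-token>", "X-API-Version": "1200"}

def unhealthy_hardware() -> list[tuple[str, str]]:
    """Return (name, status) for every compute resource not reporting 'OK'."""
    members = requests.get(f"{BASE_URL}/server-hardware", headers=HEADERS).json()["members"]
    return [(m["name"], m["status"]) for m in members if m.get("status") != "OK"]

if __name__ == "__main__":
    for name, status in unhealthy_hardware():
        print(f"{name}: {status}")
```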

The growth of pay-as-you-go pricing for on-premises infrastructure

Many infrastructure companies today provide a pay-as-you-go business model, which eliminates the need for huge upfront costs. For example, HPE works with you to forecast current and projected capacity — for a minimum commitment — and then creates a local buffer of IT resources beyond what you need now that you can dip into as needed. The buffer is ready to be activated when you need more capacity, and it can easily expand into the Azure public cloud. This ensures you have extra capacity deployed ahead of any upcoming demand.

Leading the way to a hybrid IT strategy

The datacenter of today is very different than the datacenter of a decade ago. Today’s innovative technologies can be located at the core of the datacenter, in the cloud, or at the edge — providing a myriad of choices and deployment models. Wherever IT resources are deployed, a multi-cloud, hybrid IT strategy is leading the way to more cost-effective, flexible, and powerful IT options for the enterprise.

To learn more about composable infrastructure, download the free e-book, Composable Infrastructure for Dummies.


About Paul Miller

Paul Miller is Vice President of Marketing for the Software-Defined and Cloud Group at Hewlett Packard Enterprise (HPE). Paul’s organization is responsible for all marketing content and sales enablement tools to help HPE customers get the best product and solution experience.

To read more articles from Paul Miller, check out the HPE Shifting to Software-Defined blog.

Stop juggling and increase productivity with better IT infrastructure management


Those in IT are usually quite adept at juggling — keeping lots of balls in the air to ensure the organization’s entire IT infrastructure operates as efficiently as possible

From providing desktop support to provisioning and updating compute, storage, and fabric resources — the job of the IT professional is never done.

And it seems like the demands on IT become greater every day. That’s because the pace of innovation continues to accelerate, and IT services are becoming ever more complex. Businesses are now managing and consuming IT services across a hybrid infrastructure, and they’re trying to use infrastructure that is usually not designed for these demands. In addition, complex manual processes and non-integrated tools fail to provide the speed and simplicity needed to support current tasks, much less new ideas and applications.

To compete more successfully, CIOs want to move faster — without IT holding them back. The IT team wants to increase productivity by automating and streamlining tasks. And everyone wants less complexity, so they can spend more time innovating.

So what’s the answer? How can businesses move faster, remove management complexity, and increase productivity? To answer these questions, let’s look at a real world example of a business that achieved those goals with better IT infrastructure management.

No more juggling for Porsche Informatik

Porsche Informatik is one of Europe’s largest developers of custom software solutions for the automobile industry. With more than 1,600 virtual servers and 133 blade servers in two geographically-dispersed data centers, Porsche provides IT services to 450 employees. With 500TB of storage and 12,000 end devices, its environment carries out 1.5 million automated tasks a month. Business-critical applications run across the entire data center, from physical Microsoft Windows clusters to VMware® HA clusters, including in-house developed and third-party programs.

To reduce complexity and streamline management, Porsche Informatik needed a single, integrated management platform. The company turned to HPE OneView, an infrastructure automation engine built with software-defined intelligence and designed to simplify lifecycle operations and increase IT productivity.

HPE OneView allowed Porsche Informatik to stop juggling and improve productivity:

  • Reduced new configuration deployment times by 90%
  • Cut admin and engineer management time by 50%
  • Sped up the detection and correction of routine problems by 50%
  • Improved system availability by 30% to ensure the delivery of business-critical applications
  • Freed IT staff from routine tasks, enabling them to react more quickly to business requirements

An added benefit: a unified API

A key feature in HPE OneView is the unified API that provides a single interface to discover, search, inventory, configure, provision, update, and diagnose the HPE composable infrastructure. A single line of code fully describes and can provision the infrastructure required for an application, eliminating time-consuming scripting of more than 500 calls to low-level tools and interfaces required by competitive offerings.
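
As a rough sketch of what that “single line of code” can look like, here is a short example written against the HPE OneView Python SDK (the hpeOneView package). The appliance address, credentials, template name, and server names are placeholders, and method names and return types differ between SDK releases, so check the SDK documentation rather than treating this as authoritative.

```python
from hpeOneView.oneview_client import OneViewClient  # pip install hpeOneView

# Placeholder connection details for the OneView appliance.
client = OneViewClient({
    "ip": "oneview.example.com",
    "credentials": {"userName": "administrator", "password": "<password>"},
    "api_version": 1200,
})

# The template already describes BIOS settings, firmware baseline, storage, and
# network connections, so each new server needs only one create call.
template = client.server_profile_templates.get_by_name("db-node-template")
for i in range(1, 4):
    client.server_profiles.create({
        "name": f"db-node-{i:02d}",
        "serverProfileTemplateUri": template["uri"],
    })
```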

HPE realizes the value of APIs, which is why they have created the HPE Composable Partner Program. Together, HPE and their partners build solutions that let customers reduce the time they spend managing their data center or cloud environment. That means businesses can stop juggling and spend more time innovating. A growing list of ISV partners are taking advantage of the unified API in HPE OneView to automate their solutions.

Peter Cermak, IT systems engineer at Porsche Informatik, sums it up well. “The unified API and global dashboard provide a much better, intuitive view of our HPE infrastructure,” says Cermak. “Even people with only basic training can easily see the state of this part of our infrastructure. Not only do we now save a lot of time adding new servers and VLANs, it is also a fire-and-forget task. Previously, we had to re-check and debug profile-related issues but that is no longer necessary. In one operation, staff can configure many servers with identical settings and the time we save enables us to concentrate our work on customer requirements.”

Read the complete success story. Learn more about HPE OneView.

Paul Miller is Vice President of Marketing for the Software-Defined and Cloud Group at Hewlett Packard Enterprise (HPE). Paul’s organization is responsible for all marketing content and sales enablement tools to help HPE customers get the best product and solution experience.

To read more articles from Paul Miller, check out the HPE Shifting to Software-Defined blog.

How Hyperconvergence is Evolving toward Composable Infrastructure


Recently, I was scrolling through my Twitter feed and came across an article on TheNextPlatform.com titled The Evolution of Hyperconverged Storage to Composable Systems

The article discusses the evolution and growth of hyperconverged infrastructure (HCI) as a category, and details the future of HCI as moving toward composable infrastructure. What caught my eye was that the article’s image features HPE Synergy, the first platform built from the ground up for composable infrastructure.

After reading the article, I couldn’t agree more with the author—the evolution of hyperconverged storage is composable. As businesses evolve and grow beyond hyperconvergence, composable infrastructure helps IT cut costs, increase storage for all workloads, and improve networking — while accelerating and simplifying everything.  Here are three examples:

  1. Composable infrastructure lowers costs
    Composable infrastructure lets IT quickly assemble infrastructure building blocks using software-defined resource templates. These templates can easily be composed and then recomposed based on the demands of applications. By maximizing resource utilization, IT can eliminate overprovisioning and stranded infrastructure while ensuring right-sized resource allocation for each of the applications they are running. This enables customers to spend less money on infrastructure and significantly increase the speed to provision infrastructure, which can now be accomplished in minutes.
  2. Composable storage provides flexibility and simplicity for all workloads
    Composable infrastructure aggregates all stranded or unused storage into pools to meet the needs of any workload, enabling IT to quickly scale up and scale down storage needs as workloads dictate. For example, in HPE Synergy, a single storage module can hold up to 40 drives, which can be zoned to one or multiple compute modules. If the compute module needs more capacity, the storage pool can be automatically reallocated among compute modules to meet the needs of the workloads.
  3. Composable fabric simplifies your network
    The network interconnect is typically one of the biggest headaches for IT organizations to manage. To maintain workload performance, most customers will overprovision their resources, which increases cost. With composable infrastructure, you can dynamically change network allocation and bandwidth to meet your needs.

For example, HPE Synergy is an enterprise-grade solution built on industry standards that can easily integrate into existing heterogeneous data centers and connect seamlessly to existing networks and SANs. It abstracts away operational minutia, replacing it with high-level, automated operations. Change operations, such as updating firmware, adding additional storage to a service, or modifying network connectivity, are automatically implemented via a template, significantly reducing manual interaction and human error. IT can configure the entire infrastructure for development, testing, and production environments using one interface that is implemented in one simple step.
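
To make the template idea concrete, here is a purely hypothetical sketch of what a software-defined resource template might capture and how it could be recomposed when a workload’s needs change. The field names are invented for illustration and do not correspond to any particular product’s schema.

```python
# Hypothetical resource template: compute, storage, and fabric described as data.
web_tier_template = {
    "name": "web-tier",
    "compute": {"modules": 2, "cpu_cores": 16, "memory_gb": 128},
    "storage": {"pool": "shared-ssd", "capacity_gb": 800, "zoned_drives": 4},
    "network": {"fabric": "prod-fabric", "bandwidth_gbps": 10, "vlans": [110, 120]},
    "firmware_baseline": "2024.03",
}

def recompose(template: dict, section: str, **changes) -> dict:
    """Return a copy of the template with one section's settings adjusted."""
    return {**template, section: {**template[section], **changes}}

# Burst scenario: give the web tier more fabric bandwidth without touching
# its compute or storage definitions.
burst = recompose(web_tier_template, "network", bandwidth_gbps=25)
print(burst["network"])
```

In a real composable system, the equivalent of recompose would be a template update applied by the management software, not a local dictionary edit.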

Hyperconverged combined with composable infrastructure

Today HPE customers are using both hyperconverged and composable infrastructure to achieve more flexibility with more cost-effective results, both at the core of their data center and at the edge of their business.

One example of this is an international bank that has deployed hyperconverged infrastructure and composable infrastructure. These combined technologies provide compelling functionality, workload consolidation, and simplified IT operations. Using HPE OneView, HPE SimpliVity, and HPE Synergy, this bank is improving its IT infrastructure across its entire business. Benefits include simplified IT ops management, a smaller data center footprint, workload consolidation, and enhanced business agility. These solutions are setting the bank apart, letting it offer a growing number of new digital services that improve its customers’ banking experience.

To learn more about the future of composable infrastructure for your business, download the Forrester research report, Hybrid IT strategy insights: Composable infrastructure and business breakthroughs. And for more details on composable infrastructure, download the free e-book, Composable Infrastructure for Dummies.


About Paul Miller

Paul Miller is Vice President of Marketing for the Software-Defined and Cloud Group at Hewlett Packard Enterprise (HPE). Paul’s organization is responsible for all marketing content and sales enablement tools to help HPE customers get the best product and solution experience.

To read more articles from Paul Miller, check out the HPE Shifting to Software-Defined blog.

The World’s Smartest People Can Work for You with the Right Infrastructure Management


More than 20 years ago, Sun Microsystems’ co-founder Bill Joy offered up an insightful thought: “No matter who you are, most of the smartest people work for someone else.”

This thought eventually became known as Joy’s Law, and today its applicability is only compounded by the sheer vastness of the tech industry.

The world’s most intelligent, innovative people are spread across the globe, all working on the industry’s next generation technologies. The difficult truth is that likely only a handful, or quite possibly none, of these people work for you. But that doesn’t have to be the case, and with the right IT strategy, you can start to put the smartest people from all over the planet to work for you – all in your own data center. To get there, you’ll need to get serious about turning your environment into a software-defined data center (SDDC).

Software-define your data center

First, let’s start with the benefits software-defined infrastructure delivers for the group of innovative people who are currently working for you. Your innovators – your developers – all work in the world of software. But your IT Ops organization typically deals in hardware. By software-defining your infrastructure, you bridge the gap between the two groups so that everyone is working in the same sphere.

In an SDDC, all of your infrastructure is managed by software, helping you to automate tasks, reduce risk, and move faster with less hands-on work to maintain infrastructure. But it’s not just about managing with software. It’s also about running software tools that help your developers build the apps and services your business is trying to deliver. Getting these two layers to work together seamlessly is key to building out the best software-defined data center, and where you have the opportunity to start recruiting the world’s brightest.

In some cases, you might already have many of these people “working” for you. For example, if you’ve already virtualized much of your data center using VMware or Red Hat products, you’ve got some of them. If you’re using Docker to containerize your applications so you can move apps between your on-premises infrastructure and the cloud, you’ve got some more of them. But what you might be missing is having them all in the same place.

To make it easier for you to take advantage of the top-notch tools coming out of companies such as Docker, VMware, Red Hat, Puppet, Chef, Microsoft, and more, you need an infrastructure management solution that is built on a rich, unified API. Why? Because it will allow you to “recruit” those wonderful people Bill Joy was talking about in minutes, not hours. A unified API allows you to quickly integrate the tools other companies have created with just a single line of code, and you can manage these tools from a single interface across your infrastructure. With a unified API, you can present infrastructure to your developers in one common language.

Public cloud is a popular solution today because of the speed with which developers can directly access the resources they need without going through their company’s IT organization. With the right tools, you can bring the same kind of speed and agility to your data center and be a better service provider for your business.

Right infrastructure, right people

HPE OneView is a management tool that lets you easily transform your infrastructure into a software-defined environment. At the core of HPE OneView is a rich, unified API that supports an extensive partner ecosystem. The world’s smartest people can get you started with integrations for the best tools in the industry across DevOps, IT Ops, cloud engines, facilities engines, and developer toolkits.

With the best tools and the smartest people working for your company, you can create a fast, flexible environment that moves like the cloud. You’ll quickly see how easy it is to flex and customize your environment, accelerate innovation, and support new business growth.

To learn more about the benefits of HPE OneView and the extensive partner ecosystem HPE has built to help you on your software-defined journey, download the e-book: HPE OneView for Dummies.


About the Author

Paul Miller is Vice President of Marketing for the Software-Defined and Cloud Group at Hewlett Packard Enterprise (HPE). Paul’s organization is responsible for all marketing content and sales enablement tools to help HPE customers get the best product and solution experience. To read more articles from Paul Miller, check out the HPE Converged Data Center Infrastructure blog.

Cost and Utilization Challenges of a Hybrid Cloud Environment


It’s been more than a decade since Amazon launched Elastic Compute Cloud and forever changed how businesses consume compute resources

Over the years, the popularity of cloud computing has continued to grow. That’s because many businesses are attracted to the promise of increased agility, faster innovation, and low startup costs that the cloud provides.

As enterprises expand to using multiple clouds, many have struggled to control costs. Effectively managing costs across multiple clouds in a hybrid IT environment has become a significant challenge—commonly resulting in unexpected charges and cost overruns.

High-tech analyst firm Moor Insights & Strategy looked into this challenge, producing a comprehensive report on how to simplify enterprise hybrid cloud cost management. I’ve summarized their findings in this article.

What causes cost overruns in a hybrid cloud environment?

Incomplete planning for actual costs seems to be at the core of the challenge organizations face. Typically, cost overruns occur for the following reasons:

  • Capacity planning didn’t allow for uncertainty
  • Cloud infrastructure utilization is lower than planned
  • Dev/test needs beyond production were not anticipated
  • Smaller costs (data transfer, load balancing, other services) were not accounted for
  • Resources are not de-provisioned when teams finish using them
  • Higher-cost services are being used more than originally planned

Solving the cost issue

The next phase in hybrid cloud maturity must include better cost control and utilization. Having a hybrid cloud strategy will provide businesses with a more accurate forecast of expenses, yet this is only one part of the answer. The second part to achieving control over costs and utilization is through better visibility and accounting of the cloud infrastructure once it is in use.

If that goal sounds easy, don’t be deceived; it isn’t.

That’s because individual cloud providers have their own infrastructure with a toolset based on maximizing the value of their own cloud platform—not the broader hybrid cloud environment or experience that most enterprises want. These individual cloud toolsets do nothing to increase visibility and/or ease IT operations across complex, hybrid cloud environments.

In addition, individual cloud toolsets present IT operators with a continuous challenge as they try to impose some accountability for infrastructure usage on developers. Monthly bills often span multiple teams and multiple cloud providers, and costs are frequently captured and categorized only for each individual cloud provider.

Accountability is also often missing when using private clouds because they lack effective cost tracking. Additionally, it can take a long time to provision a VM (days to weeks), so teams are reluctant to release VMs back to IT. These two factors result in underutilized, stranded capacity.

What’s the answer for better hybrid cloud cost management?

To solve this challenge, businesses need easy-to-use, self-service tools that manage costs and application deployment across all public and private cloud environments. These tools should include features that not only help IT better manage the entire hybrid IT environment, but also make it easier for developers to get the resources they need to get their work done. Additionally, these tools should provide analytics that help the business better control costs and utilization.

Key elements of cloud management tools should include the following capabilities:

  • Self-service infrastructure that empowers a developer’s environment
  • Structure for tagging resources upon provisioning (with reporting capabilities)
  • API-driven, services-based SaaS platform that allows users to add existing cloud infrastructures and make them available for application developers to use
  • Insights dashboard for visibility into cloud spend and utilization
  • Features that build cost visibility for budgeting, control, and optimization by the project owners (including drill down by cloud, project, and users)
  • Capacity and/or spend limits configurable per project (to avoid surprises)
  • SaaS-based platform to minimize setup and keep operational burdens low
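
As a minimal sketch of the tagging and budget ideas above, the snippet below attaches project and owner tags at provision time and then rolls up billing line items per project to flag overruns. The provisioning hook and billing records are stand-ins invented for the example; a real implementation would call each cloud provider’s (or the management platform’s) own APIs.

```python
from collections import defaultdict

# Illustrative per-project spend limits.
PROJECT_BUDGETS = {"mobile-app": 5000.00, "analytics": 12000.00}

def provision_with_tags(provision_fn, resource_spec: dict, project: str, owner: str):
    """Tag every resource at creation so costs can later be rolled up by project."""
    tagged_spec = {**resource_spec, "tags": {"project": project, "owner": owner}}
    return provision_fn(tagged_spec)  # provision_fn stands in for a real API call

def projects_over_budget(billing_records: list[dict]) -> dict:
    """Sum tagged line items per project and return any that exceed their budget."""
    spend = defaultdict(float)
    for item in billing_records:
        spend[item["tags"].get("project", "untagged")] += item["cost"]
    return {p: total for p, total in spend.items()
            if total > PROJECT_BUDGETS.get(p, float("inf"))}

if __name__ == "__main__":
    records = [
        {"cost": 5400.0, "tags": {"project": "mobile-app"}},
        {"cost": 800.0, "tags": {"project": "analytics"}},
    ]
    print(projects_over_budget(records))  # {'mobile-app': 5400.0}
```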

Empowering developers with the tools they need

IT leaders realize that any cloud management initiative is more likely to be successful if developers are empowered instead of controlled. As I mentioned above, application developers typically lack any type of visibility and accountability into their infrastructure use. This lack of accountability contributes to both cost and utilization inefficiencies.

As developers adopt a more cloud-native structure, including architectures such as microservices, they control the iterative development and deployment of their applications. When provided with tools for visibility and control, developers can also manage the costs of their application, according to their assigned budgets.

Simplifying hybrid cloud

As enterprise cloud adoption continues to mature, organizations are developing a comprehensive strategy for managing both on- and off-premises infrastructure. Hewlett Packard Enterprise (HPE) is the only vendor with enterprise experience that is currently offering a comprehensive software service that supports this initiative.

Moor Insights & Strategy recommends that IT leaders consider HPE OneSphere as a hybrid cloud management platform for addressing the cost and utilization challenges in a hybrid cloud environment. HPE OneSphere empowers application developers—a key constituent for success—with easy-to-use, self-service tools for cost management.

The full report, Simplify Enterprise Hybrid Cloud Cost Management with HPE OneSphere, is now available for download. More information on HPE OneSphere is available at: www.hpe.com/info/onesphere


About Gary Thome

Gary Thome is the Vice President and Chief Technologist for the Software-Defined and Cloud Group at Hewlett Packard Enterprise (HPE). HPE has assembled an array of resources that are helping businesses succeed in a hybrid cloud world. Learn about HPE’s approach to developing and managing hybrid cloud by checking out the HPE OneSphere website. And to find out how HPE can help you determine an application placement strategy that meets your service level agreements, visit HPE Pointnext.

To read more articles from Gary, check out the HPE Converged Data Center Infrastructure blog.
