
Top Two Hybrid Cloud Concerns? Spend and Security


Is there a solution? Yes, consider a hybrid Microsoft Azure deployment

Businesses are embracing hybrid cloud in record numbers because it lets them choose a mix of applications, services and platforms — all tailored to their needs. Yet, many struggle with the complexity of operating different private and public clouds in conjunction with traditional infrastructure. Often, they don’t have the right skills to oversee and manage their cloud implementations, and that can lead to unrestrained cost and risk.

Earlier this year, almost a thousand professionals were asked about their adoption of cloud computing, and the results were compiled in the 2018 State of Cloud Survey. The survey reveals that cloud adoption continues to grow, with 81% of respondents reporting a multi-cloud strategy. And the top cloud challenges these users face? Spend and security.

According to the survey, cloud users are aware that they are wasting money in the cloud – they estimate 30% waste. To combat this issue, 58% of cloud users rate cloud optimization efforts as their top initiative for the coming year. And according to Gartner, by 2020, organizations that lack cost optimization processes will overspend in the public cloud by an average of 40%.

Additionally, security continues to weigh on users’ minds. A whopping 77% of respondents in the survey see security as a challenge, while 29% see it as a significant challenge.

Many businesses that struggle with spend and security issues wonder if these problems can be solved. The answer is yes – with the right tools and expertise.

Microsoft Azure Stack: Minimizing security and regulatory concerns

Enter Microsoft Azure. Customers all over the world are choosing Microsoft Azure for their public cloud needs, making it one of the fastest-growing cloud platforms available today. And TheStreet.com reports that its growth shows no signs of slowing down, as “…72% of Azure customers see themselves deploying workloads in Azure Stack over the next three years.” So why is there so much interest in Azure Stack and how can it help businesses conquer security concerns?

Microsoft Azure Stack is an extension of Azure that lets businesses build and deploy hybrid applications anywhere. It lets DevOps teams leverage the same tools and processes they are familiar with in Microsoft Azure to build private or public cloud instances of Azure, and then deploy them to the cloud that best meets their business, regulatory, and technical needs. Microsoft Azure Stack also allows businesses to speed development by using pre-built solutions from the Azure Marketplace, including many open-source tools and technologies.

In terms of meeting security needs, Azure Stack enables businesses to deliver Azure-consistent services within their own data center. That capability gives them the power and flexibility of Azure public cloud services — completely under their own governance.

Consumption-based pricing: A better way to implement and consume hybrid cloud resources

Another concern businesses have when using the cloud is overspending. One of the main reasons enterprises overspend is that they lack automation and simple tools that harness the agility of the cloud to continuously monitor compliance and cost. And most businesses overprovision their on-premises infrastructure to be ready to handle unpredictable growth, which further adds to overspending.

Managed consumption for hybrid cloud is an operating model that lets businesses consume the exact cloud resources they need, wherever their workloads live — while also driving improved performance, cost, security and compliance. Some of these models also eliminate the need for staff to manage the hybrid environment day-to-day, which helps reduce human error and enables staff to focus on innovation.

If deployed correctly, this type of model lets businesses see who is using their cloud, what the costs are, and whether policies are followed. And with the right partner and tools to show usage, track cost, and monitor compliance and security, the business can be confident that they’re getting the most from their Azure hybrid cloud.

What’s the best way to implement a Microsoft Azure hybrid cloud environment?

A new service offered by Hewlett Packard Enterprise (HPE) meets these needs, letting the enterprise better manage both spend and security concerns of hybrid clouds on and off premises. Using services from Cloud Technology Partners (CTP, a Hewlett Packard Enterprise company), processes that manage cloud resources are set up in a customer’s environment of choice. After that, CTP services establish specific cost, security, and compliance controls. Coming soon, HPE GreenLake Hybrid Cloud will manage those resources on behalf of the customer. And unlike a traditional managed service, HPE GreenLake Hybrid Cloud will offer an automated, cloud-native model that is designed to eliminate the need for organizations to hire or train new staff to oversee and manage cloud implementations.

For Microsoft Azure Stack on-premises, HPE offers HPE ProLiant for Microsoft Azure Stack using HPE GreenLake Flex Capacity. This deployment model gives customers a pay-per-use experience, not only for the Azure Stack services but also for the underlying infrastructure. And by paying only for the capacity used, businesses can save more on IT cost – up to 30% of the infrastructure cost.

To learn more about HPE GreenLake Hybrid Cloud for Microsoft Azure Stack, watch this short video. For more information about HPE ProLiant for Microsoft Azure Stack, watch this on-demand video.


About Gary Thome

Gary Thome is the Vice President and Chief Technologist for the Software-Defined and Cloud Group at Hewlett Packard Enterprise (HPE). He is responsible for the technical and architectural directions of converged datacenter products and technologies.

To read more articles from Gary, check out the HPE Shifting to Software-Defined blog.

Food for Thought as You Modernize Your Data Center


Over the Fourth of July holiday, I was sitting with a few friends discussing the bevy of summer vegetables and fruits available this time of year

One friend absolutely lamented cooking at all in the summer – “it’s just too hot!”  After some back-and-forth, another friend introduced us to his two rules for cooking during the summer months: keep it simple and don’t overspend. As an enthusiastic summer chef, he resisted filling his grocery basket with unnecessary items, and while he did spend a little more for higher quality food, he only needed a few ingredients, which kept overall costs low.

The same two rules can be applied to data center modernization.

Rule #1: Keep it simple

When technology needs an update, some organizations choose to modernize over time, following the same problem-by-problem approach they used to build the original data center. This piecemeal approach is complex and can be agonizingly slow. What’s worse, just like complexity in the kitchen, all that time and effort won’t necessarily lead to a better end product or even lower costs.

Switching to hyperconverged infrastructure (HCI) can simplify your data center and keep costs down. HCI takes a building block approach to architecture, consolidating compute, storage, network switching, replication, and backup in a single integrated system. The consolidation of these IT functions onto virtualized hardware can greatly simplify environments that have been divided by siloed point solutions.

VM-centric HCI systems are also simple for IT administrators to manage, and they require much less data-center space than traditional IT devices. One HCI customer, St. John’s Riverside Hospital, had been running its existing infrastructure environment for seven years and was quickly running out of physical space, restricting its ability to grow. What really sold the hospital on hyperconverged was the simplicity and ease of management from a single console.

Rule #2: Don’t overspend

Hyperconvergence requires some upfront investment, but it can deliver a huge return on that investment. In this study based on research with real world customers, Forrester Consulting found that HCI reduces TCO by 69% on average compared to traditional IT. Converging the entire IT stack – firmware, hypervisor, and data virtualization software – has additional advantages, according to that same study. Data center footprint can be reduced 10:1, backups and disaster recovery become simple and straightforward, and upgrades are managed for the whole stack. The most efficient solutions also free up significant staff time, boosting the economic benefits even further. IDC reports an 81% increase in time available to focus on new projects as a direct result of hyperconverged deployment.

A direct-sales organization based in California, Princess House was intrigued by hyperconverged’s simplicity. The company had been plagued with aging IT systems that were becoming increasingly inefficient and costly. They deployed an HCI solution to reduce complexity, and in the process dramatically improved application performance and reduced total cost of ownership.

Both of these customers did extensive research on industry leading hyperconverged solutions, and they both chose HPE SimpliVity. The other vendors didn’t offer fully converged solutions or couldn’t match HPE SimpliVity’s simplicity and efficiency. As these businesses discovered, the results of a fully integrated stack can be wide-ranging and directly impact the bottom line.

To learn more about how HPE SimpliVity can simplify your IT environment and reduce TCO, download the free e-book: Hyperconverged Infrastructure for Dummies.


About Chris Purcell

Chris Purcell drives analyst relations for the Software-Defined and Cloud Group at Hewlett Packard Enterprise. The Software-Defined and Cloud Group organization is responsible for marketing for composable infrastructure, HPE OneView, HPE SimpliVity hyperconverged solutions, and HPE OneSphere. To read more articles from Chris, check out the HPE Shifting to Software-Defined blog.

Partner Software Integrations Provide Bridges That Help Businesses Move Faster


Imagine a world without bridges

Any body of water or valley would become an obstacle for passage, slowing travel and communication. Fortunately, that’s not the case; millions of bridges all over the world allow people to move from one side to the other quickly and easily.

In the world of software development, partner integrations provide the same sort of benefit – a bridge between an enterprise’s infrastructure and the developers and independent software vendors (ISVs) who build on it. The design of this bridge allows a variety of software applications easy access to the hardware infrastructure. Thanks to the magic of APIs (the building blocks for these bridges), diverse software programs can be connected, allowing everyone to quickly and easily reap the benefits of this connectivity.

Software-defined intelligence and a single unified API

The key to easily bridging your infrastructure to the software applications you need to run is to software-define your infrastructure and manage it with software. Without a common, intelligent infrastructure management solution, connecting tools to the infrastructure would require vastly different materials and tools every time. An infrastructure management solution with a unified API means that each time you connect a new tool, the process is quick and easy.

For example, the unified API in HPE’s infrastructure management solution, HPE OneView, provides a single interface to discover, search, inventory, configure, provision, update, and diagnose the physical infrastructure. A single line of code fully describes and can provision the infrastructure required for an application, eliminating time-consuming scripting of more than 500 calls to low-level tools and interfaces required by competitive offerings.
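
To make the "single line of code" idea concrete, here is a minimal Python sketch of what driving such a unified REST API might look like. The appliance address, endpoint paths, header names, and payload fields below are illustrative assumptions, not a verbatim copy of the HPE OneView API reference.

    import requests

    ONEVIEW = "https://oneview.example.com"          # hypothetical appliance address
    HEADERS = {"Auth": "<session-token>", "X-API-Version": "800"}   # placeholder session auth

    # Illustrative only: ask the management layer to create a server profile from
    # an existing template, letting it resolve firmware, BIOS, storage, and network
    # details instead of scripting hundreds of low-level calls.
    resp = requests.post(
        f"{ONEVIEW}/rest/server-profiles",
        headers=HEADERS,
        json={
            "name": "web-tier-01",
            "serverProfileTemplateUri": "/rest/server-profile-templates/web-tier",
            "serverHardwareUri": "/rest/server-hardware/bay-3",
        },
        verify=False,   # lab-only convenience; use proper certificates in production
    )
    resp.raise_for_status()
    print("Provisioning request accepted:", resp.status_code)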

Connecting partner integrations is just as simple.

Streamlining integrations – faster and simpler connections

Once your data center is software-defined and streamlined with infrastructure management software like HPE OneView, you have the bridge needed to connect with other integration partners. This results in enhanced automation for IT and increased ease of access for DevOps.

For example, Chef Automate can easily automate the provisioning of an entire stack from bare metal to applications in minutes. By combining an HPE partner’s automation with HPE OneView’s ability to stage, schedule, and install firmware updates, entire stacks can be provisioned, updated or changed very quickly.
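
As a rough sketch of that staging pattern, the Python snippet below repoints a set of server profiles at a new firmware baseline through a REST call. The appliance address, URIs, and payload shape are assumptions made for illustration; they are not a documented HPE OneView or Chef interface.

    import requests

    ONEVIEW = "https://oneview.example.com"   # hypothetical appliance address
    HEADERS = {"Auth": "<session-token>", "X-API-Version": "800"}   # placeholder auth

    def stage_firmware_update(profile_uri: str, baseline_uri: str) -> None:
        """Illustrative: repoint one server profile at a new firmware baseline and
        let the management layer stage and install it in the next maintenance window."""
        resp = requests.patch(
            f"{ONEVIEW}{profile_uri}",
            headers=HEADERS,
            json=[{"op": "replace",
                   "path": "/firmware/firmwareBaselineUri",
                   "value": baseline_uri}],
            verify=False,   # lab-only convenience
        )
        resp.raise_for_status()

    # Roll the same assumed baseline across every profile in a web tier.
    for bay in ("bay-1", "bay-2", "bay-3"):
        stage_firmware_update(f"/rest/server-profiles/web-{bay}",
                              "/rest/firmware-drivers/2024-03-baseline")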

Mesosphere’s integration with HPE OneView allows users to provision and update bare metal DC/OS nodes in the same way as virtual and cloud resources. This makes it easy for administrators to deploy and elastically scale a cluster on bare metal servers with just a few clicks.

In terms of increasing ease of access for DevOps, an integration with Docker and HPE OneView gives developers simple access to on-premises infrastructure for their containerized applications. With this integration, developers can now easily move their applications between on-premises infrastructure and the cloud, or vice versa, depending on their needs.

And these integration bridges aren’t just for DevOps.  HPE OneView for Microsoft Azure Log Analytics is the latest addition to the portfolio of HPE OneView integrations.  It links Microsoft cloud management services with on-premises HPE hardware infrastructure, giving IT operations personnel visibility into HPE physical infrastructure together with virtual machine, operating system, and application data collected by Log Analytics.

A growing list of ISV partners are taking advantage of the unified API in HPE OneView to automate solutions for customers. These partners range from large software suites like VMware® vCenter, Microsoft® System Center, and Red Hat to focused solution providers such as Chef, Docker, CANCOM and others. By integrating with the unified API in HPE OneView, ISVs can provide solutions that reduce the time their customers spend managing their environments.

Building a better bridge with partner integrations

HPE customers all over the world are automating their IT infrastructure to simplify their environments and be more competitive. Powered by HPE OneView, HPE’s composable ecosystem of partners provides a bridge that will help businesses bypass obstacles and move faster and easier along their journey to a digital-first future.

To learn more about the benefits of HPE OneView and the extensive partner ecosystem HPE has built to help you on your software-defined journey, download the e-book: HPE OneView for Dummies.


About the Author

Frances Guida leads HPE OneView Automation and Ecosystem. Building on her years of experience with virtualization and cloud computing, she is passionate about helping enterprises use emerging technologies to create business value.

To read more articles from Frances, check out the HPE Shifting to Software-Defined blog.

Enterprise Infrastructure Management Requires the Right Strategy for Success


Distributed computing infrastructure has experienced evolutionary changes over the past two decades

Large, clunky server and storage systems have evolved into streamlined, highly efficient systems. Administrators have shifted how they consume resources, too. Historically, resource utilization was inefficient at best; today, software automates the process to manage resources effectively. Even the relationship between applications and the underlying hardware has changed: abstractions between hardware and applications now give IT organizations the ability to shift resources, as needed, to meet ever-changing application requirements.

In addition to changes in how resources are used, the sheer number of things to manage has skyrocketed. The number of servers, storage subsystems, and network devices, and the amount of data, have all increased significantly. And in the coming years, we can expect this trend to continue growing exponentially. In sum, a lot has changed… and continues to do so at a dizzying pace.

Understand the source of the complexity

In order to understand the problem, we need to break the complexity down into components. The underlying infrastructure, architecture, and management solutions have all evolved. Yet the amount of change is so staggering that it creates its own complexity. In addition to managing more things, and more complex things, IT organizations are being asked to increase the rate at which they deliver services.

The complexity breaks down into four key categories:

  1. Infrastructure: Infrastructure includes the server, storage and network hardware components. It also covers the architecture that governs how each of the components is configured.
  2. Software: The software that runs and manages the infrastructure is broken down into two sub-components: 1) the software that manages the physical hardware and 2) the overarching management software used to provide insights and guidance for managing the underlying infrastructure.
  3. Service Delivery Method: In the past, most organizations used a single, monolithic service delivery method to leverage infrastructure resources. Today, organizations rely on a number of different methods to manage infrastructure resources, including virtualized, converged, hyperconverged, composable, private cloud and public cloud. And unlike in the past, organizations today use a combination of different methods of infrastructure delivery to support varied application requirements.
  4. Demand Shifts: IT organizations need to increase the speed and flexibility of the services they provide. Customer requirements are constantly changing, and so are the underlying infrastructure requirements. These changes are forcing IT organizations to rethink their approach to infrastructure in favor of more flexible options.

Each of these four categories introduces a degree of complexity on its own. The challenge is that IT organizations are faced with all four at once. With this combination, one can see how the complexity grows exponentially.

Simplifying the complexity

So, how do you cope with the dizzying array of constantly changing underpinnings? The first step is to simplify the environment. Beyond addressing the four categories outlined above, IT organizations also need to make organizational and cultural shifts. Streamlining the environment takes planning, time and effort.

Look for solutions and approaches that further simplify the environment. At the same time, consider how these changes impact your processes and organizational structure. Not all of the changes will be based in technology. As the demands of your customers change, so will your organization and processes. Look for opportunities to address technical debt and remove old or unneeded processes. These two steps alone go a long way toward simplification.

The importance of automation

Part of simplification is the introduction of automation. In the past, organizations accepted that they had to do everything themselves, partly because mature, sophisticated solutions were lacking and partly because the standard fix was to add more people to resolve issues.

Today, that approach simply is no longer feasible: humans cannot keep up with the rate of change. Meanwhile, solutions are far more mature and sophisticated than those of the past. Automation addresses these issues in a number of different ways:

  1. Increasing the speed of responsiveness
  2. Increasing accuracy by removing human error
  3. Taking the human out of simple, mundane processes

The level of sophistication and automation built into today’s management tools addresses much of the complexity outlined above. Tools are now able to manage the entire infrastructure spectrum in a meaningful way. Management tool automation is a key factor to consider for any enterprise looking to transform the way it leverages infrastructure.

Designing your strategy

It is important to understand that IT strategy plays a direct role in the decisions about infrastructure and management tools. As enterprises continue down the path of digital transformation, it is critical that they consider ways to support changing customer demands at speed. Part of this strategy will most likely include a flexible set of solutions to meet the changing demands. Ideally, the strategy considers solutions with a sophisticated management and automation component that covers the entire portfolio.

These more sophisticated approaches allow IT organizations to do things not previously possible. Ultimately, this is where today’s IT organization strives to be.

This article is sponsored by Hewlett Packard Enterprise (HPE), one of several companies working to solve the complexity challenge of hybrid cloud. HPE recently announced HPE OneSphere, an as-a-service multi-cloud platform that simplifies management of multi-cloud environments and on-premises infrastructure. Through a unified view, IT can compose hybrid clouds capable of supporting both traditional and cloud-native applications.

For more information on simplifying a hybrid cloud environment, read the 451 Research report: Seeking Digital Transformation? Eight Essentials for Hybrid IT. To learn more about HPE’s approach to developing and managing hybrid cloud, check out the HPE OneSphere website.


About Tim M. Crawford

Tim M. Crawford is a CIO and Strategic Advisor at AVOA. With almost 30 years in IT, Tim is well versed in how IT can serve as a strategic weapon for businesses. Much of his work centers around the differences between traditional and transformational CIOs and the IT organizations they lead. Tim works with other CIOs and executive teams to transform their business through the use of technology. Tim also serves as a member of a number of private CIO groups including the Wall Street Journal’s exclusive CIO Network.

What does multi-cloud mean to IT Ops, Developers and LOB?


Looking at hybrid IT challenges and solutions from 3 different perspectives

Digital transformation is ushering in a new era of hybrid IT – a combination of both public and private cloud – that allows businesses to innovate while meeting their own unique organizational needs. Yet a hybrid IT environment can create complexity and operational friction that slow a business down and hold it back.

As businesses seek ways to remove IT friction, streamline operations, and accelerate business innovation across their hybrid environment, it’s important for them to think about the needs of three particular groups – IT operations, developers, and line of business (LOB) executives. What challenges do each face? What opportunities do they see?

To answer these questions, IDC conducted in-depth interviews with IT operations staff and line of business individuals at Fortune 100 enterprises. The results can be found in a comprehensive research report – The Future of Hybrid IT Made Simple.

IT ops: Where’s my automation for deployment and management?

A hybrid IT environment is definitely more challenging for IT operations than a single, virtualized compute infrastructure located on premises. A lack of automation in a hybrid IT environment means deployment and management of siloed resources must be managed separately.

Other concerns with hybrid IT include IT interoperability and integration, application certification, change management/tracking, and complexity of the overall infrastructure. In addition, extensive training is needed for operations and development personnel as IT shifts to a service broker model.

As these challenges mount, IT can no longer be treated as a back-office function. Instead, IT ops is expected to drive new sources of competitive differentiation, while still supporting legacy infrastructure and processes.

As one IT ops executive explains in the report, “Hybrid IT is more complex when it comes to deployment and ongoing management. The initial setup of the process takes some time, and training people how to use the different portals further extends deployment timelines. Every time something new comes up, it’s always a challenge because people don’t necessarily like to learn anything new. There’s always a learning curve, and they are usually not too happy about it. Change management is always a headache.”

Application Developers: Where are my developer services and ease of use?

Hybrid IT is also challenging for application developers, but for completely different reasons. Developer services, such as infrastructure APIs, workflow, and automation tools, are not consistently available across private and public clouds. And a lack of unified provisioning tools means that IT must serialize much of public and private cloud service delivery, which leads to bottlenecks.

Developers feel that a complex hybrid IT infrastructure is difficult to interact with, slowing down their ability to quickly roll out new services on new platforms. Interoperability between development, test/QA, and production environments is also a problem, along with the learning curve on available tools that manage cloud resources. Integration and version control between their on-prem and cloud environments is also lacking, which slows them down and increases complexity.

The report quotes one application developer as saying, “Our major concern is with deploying third-party applications across multiple clouds. A big issue is the proprietary nature of each of these clouds. I can’t just take the virtual image of the machine and deploy it across multiple clouds without tweaking it.”

Line-of-Business (LOB) Executives: Where’s my visibility and cost controls?

LOB executives have very different concerns. They are frustrated by the slow response for new private cloud services. Although public cloud services are fast, executives feel that they also carry risk. They wonder if using public cloud exposes their business to the outside world. They also are concerned that they will be locked into a specific public cloud service. Adherence to SLAs, transparency, privacy, consistency across clouds, overall performance, and cost—all these issues weigh heavily on a LOB executive’s mind.

According to one LOB executive quoted in the report, “Application integration with on-premises data management layers like file systems is a problem when developing in the cloud. With hybrid IT, our goal is to ensure that data is available across all locations, using some kind of a secure message broker integrated with a database and a distributed file system.”

Reducing hybrid IT complexity – is it possible?

So what’s the solution? Is it possible to operate a hybrid IT environment without the headaches associated with it?

According to IDC, the answer is yes—but only if a multi-cloud strategy is bound together with an overarching hybrid IT strategy. And this is where companies like Hewlett Packard Enterprise (HPE) can help. HPE software-defined infrastructure and hybrid cloud solutions let businesses reduce complexity so they can innovate with confidence.

For IT operations staff, using composable and hyperconverged software-defined infrastructure means that they will be able to move quickly. They can easily deploy and redeploy resources for all workloads. Plus, automating and streamlining processes frees up resources so IT can focus on what matters most. Developers can drive innovation using multi-cloud management software, rapidly accessing the tools and resources required to quickly develop and deploy apps. Lastly, multi-cloud management options let LOB executives gain insights across public clouds, private clouds, and on-premises environments, providing the visibility needed to optimize spending.

By delivering solutions that make hybrid IT simple to manage and control across on-premises and off-premises estates, a business can better meet the needs of IT operations, developers, and LOB executives. A hybrid IT strategy combined with multi-cloud management empowers everyone to move faster, increase competitiveness, and accelerate innovation.

To find out how HPE can help you determine and deploy a digital transformation strategy for your hybrid IT environment, visit HPE Pointnext. Read IDC’s full report, The Future of Hybrid IT Made Simple.


About Lauren Whitehouse

Lauren Whitehouse is the marketing director for the Software-Defined and Cloud Group at Hewlett Packard Enterprise (HPE). She is a serial start-up marketer with over 30 years in the software industry, joining HPE from the SimpliVity acquisition in February 2017. Lauren brings extensive experience from a number of executive leadership roles in software development, product management, product marketing, channel marketing, and marketing communications at emerging and market-leading global enterprises. She also spent several years as an industry analyst, speaker and blogger, serving as a popular resource for IT vendors and the media, as well as a contributing writer at TechTarget on storage topics.

To read more articles from Lauren, check out the HPE Shifting to Software-Defined blog.

Today’s Challenge: Remove Complexity from Multi-cloud, Hybrid IT


The cloud has conditioned us to expect more: on-demand availability, simplicity, and speed—while the typical multi-cloud, hybrid IT environment is getting ever more complex

Ivan Pavlov was a Russian physiologist, famous in the 1890s for introducing the concept of a conditioned response – the theory that our brains can be trained to associate certain stimuli with a response.

In today’s enterprise data center, public cloud is a stimulus that has conditioned developers and lines of business to expect immediate availability of resources through a simple, self-service, on-demand infrastructure. IT is expected to respond by transforming on-premises infrastructure into a comparable experience, as well as managing workloads on multiple public clouds – without adding more complexity and cost.

Yet, is this goal of simple hybrid IT infrastructure management even possible with the tools available today?

Managing costs, security, and compliance amidst growing complexity

In the past, enterprise IT provided a private infrastructure to developers, complete with a tried-and-true command and control structure. Processes and approval workflows were optimized for cost, security, and compliance. (Of course, it’s well-known that these processes were/are typically slow and can delay product development by weeks or even months.)

In today’s enterprise, multiple cloud platforms are routinely used, each with its own toolset focused on maximizing the value of that vendor’s cloud platform. And most enterprise IT environments embrace a mixture of deployment models (on-premises infrastructure combined with multi-cloud), causing even more complexity.

In the midst of these challenges, IT’s operational workload is increasing, while the operations budget is decreasing. The enterprise’s highly distributed, siloed environment becomes complicated as there is no centralized management view. IT needs to find a way to deliver self-service public cloud and private infrastructure that empowers developers – while providing the tools and accountability for everyone to easily manage cost, security, and compliance.

What’s needed for better hybrid cloud management?

In a nutshell, an effective hybrid cloud management tool needs to provide the following:

  • Self-service infrastructure for low operational burden on IT
  • Developer enablement for rapid application development and deployment
  • Visibility and governance into infrastructure costs

This sounds relatively simple, right? Several approaches currently exist to tackle this problem, as described in a recent analyst report by Moor Insights & Strategy detailing enterprise hybrid cloud management challenges and solutions.

“One approach is to deliver a hybrid cloud, self-service infrastructure platform as a collection of standalone cloud infrastructure options,” the report explains.  A majority of enterprises today are using this method, yet it has a significant downside. “While this can deliver self-service resources from each infrastructure, it fails to unify and simplify the implementation of cost management, security, and compliance across the infrastructures via a consistent developer and operator experience.”

The report goes on to discuss another tactic – delivering cloud management tools via a cloud management platform (CMP). The report points out, “These platforms are good for automating cloud-native development and operations with cloud-based self-service infrastructure provisioning, but they have a few common limitations that result in only a partial solution for IT’s needs.”

These limitations include:

  1. A focus on container-based, cloud-native applications, rather than on applications migrated into operation on cloud infrastructure.
  2. An adoption by only parts of the organization – as different leaders prefer different platforms and tools.
  3. A focus on unifying developer tooling and operations across infrastructures instead of a focus on unifying enterprise-wide management of cost, security, and compliance.

Can we effectively manage a complex, hybrid IT environment?

Recent breakthroughs in hybrid cloud management software let IT achieve the goal of delivering all of its infrastructure simply, whether it is on or off premises. Hewlett Packard Enterprise (HPE) recently introduced HPE OneSphere, a hybrid cloud management solution that enables IT to easily manage data and applications distributed across multiple clouds and inside data centers.

With HPE OneSphere, IT can build and deploy clouds in minutes and provide ready-to-consume resources, tools, and services faster. Developers get easy access to tools, templates, and applications in a cross-cloud service catalog, so they can quickly use the tools they know. Consumption and cost analytics across the entire hybrid cloud environment are always available, providing real-time insights into cost and governance.

Immediate availability, simplicity, and speed

Enterprises want to empower their developers by giving them the tools they need to be successful. That means embracing both on- and off-premises infrastructures – public cloud, private cloud, bare-metal, and containers. Yet they must also be able to manage it all simply, securely, and cost-effectively.

With HPE as a partner, IT can now better meet everyone’s expectations of availability, simplicity, and speed – no matter where applications are deployed.

To read more, download the analyst whitepaper by Moor Insights & Strategy, HPE OneSphere empowers IT to deliver all infrastructures as a service.


About Gary Thome


Gary Thome is the Vice President and Chief Technologist for the Software-Defined and Cloud Group at Hewlett Packard Enterprise (HPE). He is responsible for the technical and architectural directions of converged datacenter products and technologies which include HPE OneSphere – multi-cloud management, HPE SimpliVity – Hyperconverged Infrastructure, HPE Synergy – Composable Infrastructure and HPE OneView – Integrated Management.

To read more articles from Gary, check out the HPE Shifting to Software-Defined blog.

The Cost of Inaction in the Data Center


Two houses in my neighborhood went on the market in late spring, and one of them is already under contract. The other one hasn’t budged. Why? I blame it on the cost of inaction.

The houses are about the same age, but in very different conditions. The first house was brought up-to-date when the family moved in. They focused on major improvements and items that would keep the house low maintenance. The other neighbor decided to do nothing immediately, only addressing issues when they needed to be fixed. Ultimately, this neighbor fell behind and had to deal with an increasing to-do list of repairs and upgrades. A few years elapsed, his house went on the market, and now potential buyers don’t just see a house; they see an enormous to-do list of items that need attention.

Maintaining a data center is a bit like maintaining a house. Every business needs to modernize to some extent to stay competitive in the market; it’s simply a question of approach. A passive, problem-by-problem approach, like the one my neighbor took, can be appealing to IT organizations with limited budgets. The familiar process of waiting for issues to arise allows business to go on as usual and seems to cost nothing, making the upfront costs and disruption required for a more proactive approach difficult to justify. However, if the organization’s goal is a complete technology refresh over the next 2-3 years, the cost of this passive approach can be considerable.

Hidden costs of inaction

Waiting for licenses to expire and equipment to fail sounds like it should cost nothing, because no upgrade actions are being taken. But it’s actually the most expensive way to maintain and scale out your environment, particularly in an infrastructure comprised of older apps and devices.

Along with the occasional hardware malfunctions, firmware upgrades, and power outages that every business experiences, legacy data centers are saddled with problem-solving in a complex, inefficient environment that was built over time. Each siloed device and software product has its own compatibility issues and upgrade schedule, compounding routine maintenance and updates into a continuous refresh cycle that can put a strain on resources. The more unique products and vendors in a data center, the more time needed to maintain, manage, and troubleshoot the systems, sometimes bringing productivity to a standstill.

To spread out the financial impact, some organizations choose to modernize over time, following the same approach they used to build the original data center. This piecemeal approach adds an extra burden to IT workloads and often requires costly specialists to help with deployment and employee training. Specialists, employee training, upgrade expenses, overtime for staff, diminished productivity, and lost revenue opportunity are all hidden costs in a passive modernization process. In the meantime, much of the existing technology in the data center is aging, and the cost to bring it up-to-date is increasing. Temporary staff and short term contractors can invent ways to make the old and new systems work together, but after these specialists leave, the root complexity persists.

A better way: proactive modernization

Businesses often will reject the idea of converting to new technology because of the perceived budget requirements and business disruption. But what if no additional budget was required? What if the same funds currently budgeted for maintaining the infrastructure could be used in a different way with remarkably different results?

Instead of throwing money at aging equipment, follow these simple rules when upgrading:

* Invest in a more efficient infrastructure. Proactive investment in a low maintenance, high efficiency, scalable solution does not necessarily require additional funding. If your organization wants to make significant changes to the data center, don’t let cost be a roadblock. Many vendors offer flexible financial services.

* Replace multiple products at once with a single platform that is easy to scale. Replacing one point solution with another might marginally improve efficiency. Yet, consolidated solutions that can be upgraded and scaled easily will provide the biggest improvements in operational efficiency.

* Choose a solution that your IT staff can easily manage. Select products that are simple to use and provide management capabilities across multiple sites.

A hyperconverged data center is a simpler data center

Hyperconverged infrastructure (HCI) has become a popular option, particularly for mid-size customers, because it takes a simple, building block approach to a tech refresh. Because hyperconverged infrastructure consolidates compute, storage, network switching, and a variety of IT functions onto virtualized hardware, these solutions can greatly simplify environments that have been divided by siloed point solutions. This consolidation drives significant operational efficiency, freeing up time for other initiatives and revenue-generating activities, even for organizations with limited budgets and staffing. Forrester Research has revealed that HCI systems can make current IT staff as much as 53% more productive than they are in a legacy environment.

The same Forrester study found that HPE hyperconverged solutions also reduce total cost of ownership (TCO) by 69% compared to traditional IT. HPE SimpliVity hyperconverged infrastructure converges the entire IT stack – compute, storage, firmware, hypervisor, and data virtualization software – into a single integrated node along with deduplication, compression, and data protection. VM-centric management makes the system easy to learn and the compact nodes can easily be scaled to meet demand in the data center or at the edge.

Focus on outcomes in your data center refresh

A proactive approach to updating your data center can provide immediate results. Organizations that look past the daily to-do list and instead focus on desired outcomes such as improved efficiency, reduced TCO, and more time for innovation, find they can upgrade with minimal disruption and a surprisingly quick return on investment.

To learn more about how hyperconvergence can deliver better outcomes for your IT organization, download the free e-book: Hyperconverged Infrastructure for Dummies.

_____________________________________

About Jesse St. Laurent

Jesse St. Laurent is the Chief Technologist for HPE Hyperconverged and SimpliVity. He uses his 20 years of experience to engage channel partners, evaluate emerging technologies, and shape innovative technology solutions involving data center modernization. To read more articles from Jesse St. Laurent, check out the HPE Shifting to Software-Defined blog.

How to Automate Infrastructure to Save Time and Drive Simplicity


Movies and TV shows are full of inventors trying to come up with ways to simplify everyday tasks to save time

Two examples come to mind immediately – Dick Van Dyke’s character, Caractacus Pott, in Chitty Chitty Bang Bang, and Christopher Lloyd’s Emmett “Doc” Brown in Back to the Future. (Both of whom invented breakfast-making machines, oddly enough!) The reason these types of inventions appear time and time again on screen is because everyone can relate to them. Everyone is looking for a way to simplify everyday tasks, save time, and make life easier.

In much the same way, today’s infrastructure management solutions can automate many processes in the datacenter to help businesses move faster and compete more effectively. Physical infrastructure can be automated using software-defined intelligence and a unified API. Hundreds to thousands of lines of code can be reduced to a single line of code, saving countless hours and making IT infrastructure management easier…(sorry this can’t make breakfast for you just yet…but it’s coming!)

Can software-defined intelligence and a unified API also help businesses deliver new applications and services faster — innovations that are often the lifeblood of many businesses? Yes, and here’s how.

Continuous delivery of applications and services requires fast, policy-based automation across development, testing, and production environments. A unified API for infrastructure can do that by allowing developers and ISVs to integrate with automation tools. For instance, a unified API can simplify control of compute, storage, and networking resources, so developers can code without needing a detailed understanding of the underlying physical resources.
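
The Python sketch below shows the kind of policy-based automation this enables: one template describes the infrastructure, and a pipeline requests it for development, testing, and production without touching physical details. The endpoint, template URI, and field names are hypothetical, used only to illustrate the pattern rather than a specific product API.

    import requests

    API = "https://infrastructure.example.com"      # hypothetical unified-API endpoint
    HEADERS = {"Authorization": "Bearer <token>"}    # placeholder credentials

    TEMPLATE_URI = "/templates/three-tier-app"       # assumed pre-approved template

    # One template, three environments: the pipeline never deals directly with
    # the underlying compute, storage, or fabric resources.
    for environment in ("dev", "test", "prod"):
        resp = requests.post(
            f"{API}/deployments",
            headers=HEADERS,
            json={"name": f"app-{environment}", "templateUri": TEMPLATE_URI},
        )
        resp.raise_for_status()
        print(f"{environment}: deployment request {resp.json().get('id', 'n/a')} accepted")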

One example of this in action: customers using an automation tool like Chef can easily automate the provisioning of an entire stack from bare metal to applications in minutes. By combining an HPE partner’s automation with HPE OneView’s ability to stage, schedule, and install firmware updates, entire stacks can be updated or changed very quickly.

A growing list of ISV partners are taking advantage of the unified API in HPE OneView to automate solutions for customers. These partners range from large software suites such as VMware® vCenter, Microsoft® System Center, and Red Hat, to focused solution providers such as Chef, Docker, and others. By integrating with the unified API in HPE OneView, ISVs can provide solutions that reduce the time their customers spend managing their environments.

HPE OneView’s automation can take care of the “breakfast-making,” housekeeping issues involved with “keeping the lights on” while simultaneously opening the doors to innovation. And just like the on-screen inventors and the inventions they created, HPE OneView and its integration partners make the datacenter simpler, allowing you to concentrate on the important tasks of developing apps that create value for your business.

To learn more about the benefits of HPE OneView and the extensive partner ecosystem HPE has built to help you on your software-defined journey, download the e-book: HPE OneView for Dummies.


About Paul Miller

Paul Miller is Vice President of Marketing for the Software-Defined and Cloud Group at Hewlett Packard Enterprise (HPE). Paul’s organization is responsible for all marketing content and sales enablement tools to help HPE customers get the best product and solution experience.

To read more articles from Paul Miller, check out the HPE Shifting to Software-Defined blog.

Place Your Bets: Hyperconverged or Composable Infrastructure in Tomorrow’s Data Center?


In today’s fast-moving world of data center innovations, it’s difficult for businesses to stay up to date on industry trends and best practices

Technology changes quickly, so businesses look to industry analysts to make better informed purchasing decisions. Experts such as Gartner and Forrester invest heavily in research, and their corresponding reports and white papers provide clarity on future trends and changes in the industry.

What factors will shape tomorrow’s data center?

Many companies rely on these industry analysts because their research provides information needed to make a more educated decision about future architecture and infrastructure investments. But do these analysts really provide that clarity? And what happens when two research organizations conflict in their assessment of the future data center?

For example, I recently read Gartner’s Four Factors That Will Shape the Future of Hyperconverged Infrastructure. In this report, Gartner highlighted how they see the future of hyperconverged infrastructure (HCI). Although the report was filled with valuable information, I couldn’t help but think it focused too narrowly on HCI alone – not aligning with other technologies that IT is using today and will incorporate in the data center of the future.

I wanted another opinion and found a recent report from Forrester titled The Software-Defined Data Center Comes Of Age, which offered a very different view from Gartner’s. In Forrester’s report, the authors say that composable infrastructure systems (CIS) “…will become the predominant approach to computing” in the software-defined data center (SDDC) of the future.

First announced in 2015 by Hewlett Packard Enterprise (HPE), composable infrastructure treats compute, storage, and network devices as fluid pools of resources. Depending on workload requirements, IT can compose and then recompose these resources based upon the needs of the applications. A composable infrastructure improves business agility, optimizes IT resources, and accelerates time-to-value.

The Forrester report goes on to mention how they see HCI fitting into the software-defined data center of the future. “Virtual resource pools and physical resource pools are the SDDC building blocks…The major change over the past 18 months has been the lockstep maturation of hyperconverged infrastructure and its enablement of more efficient private cloud.”

According to Forrester, the SDDC is a major shift for enterprise infrastructure, unifying virtual and physical infrastructure with cloud and legacy systems. They advise infrastructure and operations professionals to “…invest in software and hardware solutions that are programmable or composable and repeatable with automation, leveraging prebuilt models built off of compute, storage, and network.”

Forrester’s report also predicted that other vendors will soon introduce their own version of composable infrastructure. “Only one current product legitimately meets Forrester’s definition of a local CIS — HPE’s Synergy. But the concept is so powerful and such a logical evolution that we’re confident in predicting a wave of similar products either shipping or announcing in 2018 from Cisco, Dell EMC, and possibly Huawei.”

Indeed, Forrester’s prediction was correct. In early May, Dell announced Kinetic Infrastructure at Dell EMC World, becoming the second vendor to announce a CIS platform. Like Forrester, I expect more announcements from other vendors in the next couple of years.

After I finished reading both industry analyst reports, I felt that the Forrester report provided a better view of how the data center of tomorrow would operate. As I meet with customers, I see firsthand how many IT organizations are already successfully incorporating containers, bare metal, and virtualized workloads — in combination with hyperconverged and composable infrastructure — to successfully achieve many aspects of what the Forrester report outlines.

A safe bet: a software-defined data center with hyperconverged and composable infrastructure

Investing in the best architectures and technologies will help you succeed not only today, but into the future. When making a decision, keep in mind how individual technologies fit together into the overall, cohesive picture. In the software-defined data center of the future, everything should work together seamlessly, as opposed to fragmented silos.

It’s also important to talk with actual companies that are actively investing in these future technologies. Learning from industry experts and those who are implementing successful SDDC strategies will help increase your odds of choosing the best technology for your data center today and into the future.

To learn more about the future of composable infrastructure for your business, download the Forrester research report, The Software-Defined Data Center Comes Of Age or Gartner’s Four Factors That Will Shape the Future of Hyperconverged Infrastructure. And for more details on composable infrastructure, download the free e-book, Composable Infrastructure for Dummies.

_____________________________________

About Paul Miller

Paul Miller is Vice President of Marketing for the Software-Defined and Cloud Group at Hewlett Packard Enterprise (HPE). Paul’s organization is responsible for all marketing content and sales enablement tools to help HPE customers get the best product and solution experience.

To read more articles from Paul Miller, check out the HPE Shifting to Software-Defined blog.

The Rise of Multi-Cloud, Hybrid IT for the Enterprise


A decade of innovations means that you can now create a multi-cloud, hybrid IT strategy that saves money and is more effective than running everything in the public cloud

The quote, “Everything old is new again” can certainly be applied to technology. Like everything else, technology has cycles that change over time. Innovations lead to new solutions that often change what’s old and make it better. And before you know it, a technology that was thought long gone is now revived — and better than ever.

Consider the enterprise datacenter. Cloud started more than 10 years ago and was the hot, new tech trend. Although businesses continue to migrate many applications to the cloud, trends reveal that enterprises are now choosing hybrid IT (a mix of on-premises and public cloud) because it gives them more options.

An enterprise can now keep key applications on-premises, using new technologies that offer the benefits of public cloud, without the cost, performance, or security concerns. That’s because on-premises solutions have not stood still. Costs have dropped, new types of infrastructures are now available, and deployment models are more flexible — giving you better options than ever before.

Looking back…when people first started using public cloud

On-premises costs were higher; infrastructure was harder to manage and provision

Ten years ago, hardware costs were higher. Flash memory was much more expensive and VMs were harder to manage and provision. Because a VM is a whole machine (even though it is virtual), each one contains all the code needed to run the virtual machine, which means lots of data duplication. Of course, this overhead resulted in relatively high admin costs and lower utilization rates.

The IT department didn’t offer speed and flexibility

Also, 10 years ago the needs of developers and data scientists working in a digitally-empowered world started changing. They needed speed and continuous innovation, something that Gartner identified as Mode 2. In a Mode 2 world, much more experimentation was needed. “Compose, tear-down, recompose. Rinse and repeat” became the mantra of Mode 2 developers.

Ten years ago, developers loved the autonomy that public cloud gave them because they didn’t need to jump through hoops. They didn’t have to wait until the IT department gave them permission to fire up a server. They would just contact a public cloud provider, pull out the company credit card, and were ready to go.

You had to pay for servers up front

Prior to public cloud, if you wanted compute power to try out a new application, you would have to pay for the hardware and software up front. Public cloud was a refreshing change. You paid for what you used, and you stopped paying when you didn’t need it anymore.

For many, this type of pay-as-you-go model was a game changer. Much of the work that Mode 2 developers do is experimental. Because they didn’t know if their application was going to work or even how much compute power they needed, the pay up front scenario was simply not an option.

Looking forward: How on-premises solutions have changed

Much has changed during the past 10-plus years that makes on-premises technology new again.

On-premises solution costs are now lower and performance is higher

Infrastructure vendors have innovated in many areas, which has cut the costs of running your data center. Flash memory costs have fallen dramatically, along with the price per giga-flop. New IT technologies are also available that save money and increase performance.

Hyperconverged solutions allow VMs to be managed in the same place, which results in lower admin costs and increased utilization. And some hyperconverged offerings provide de-duplication and compression, providing lower storage requirements and faster data access.

Composable infrastructure takes hyperconverged to another level by virtualizing the entire IT infrastructure. It treats physical compute, storage and network devices as services, managing all of IT via a single application. This eliminates the need to configure hardware to support specific applications and allows the infrastructure to be managed in software. Composable infrastructure creates pools of resources that are automatically composed in near real time. Because you can flex resources (compute, storage, and fabric) to meet your needs, you get higher utilization, saving you money.

All of these factors have dramatically lowered the costs of on-premises infrastructure — providing options that are better than public cloud for specific applications.

Developers now have speed and flexibility

And what about today’s Mode 2 developers — do on-premises solutions help with the speed and flexibility they need to innovate? The answer is yes. Some hyperconverged and composable solutions have workspaces that let developers work with autonomy. Yet, IT still retains the ability to govern these spaces, a capability that helps find wasted resources such as unused VMs.

Developers also love APIs, another capability now available to users of composable infrastructure. A developer can programmatically control composable infrastructure through a single, open RESTful API to automate the provisioning, configuration, and monitoring of infrastructure.
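
As a small illustration of the monitoring side of that provision-configure-monitor loop, the Python sketch below polls an assumed alerts endpoint and prints critical conditions. The URL, query parameter, and field names are invented for this example; they do not reproduce a specific vendor API.

    import requests

    API = "https://composable.example.com"         # hypothetical composable-infrastructure API
    HEADERS = {"Authorization": "Bearer <token>"}   # placeholder credentials

    # Illustrative monitoring call: fetch active alerts through the same API
    # a pipeline would use to provision and configure resources.
    resp = requests.get(f"{API}/rest/alerts", headers=HEADERS,
                        params={"filter": "state='Active'"})
    resp.raise_for_status()

    for alert in resp.json().get("members", []):
        if alert.get("severity") == "Critical":
            print(alert.get("resourceName", "unknown"), "-", alert.get("description", ""))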

The growth of pay-as-you-go pricing for on-premises infrastructure

Many infrastructure companies today provide a pay-as-you-go business model, which eliminates the need for huge upfront costs. For example, HPE works with you to forecast current and projected capacity — for a minimum commitment — and then creates a local buffer of IT resources beyond what you need now, which you can dip into as needed. The extra capacity is ready to be activated when you need it, and it can easily expand into the Azure public cloud when needed. This ensures you have extra capacity deployed ahead of any upcoming demand.
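
As a back-of-the-envelope sketch of how such a buffered, pay-per-use model might be metered, the Python snippet below bills the greater of actual usage or the committed minimum and flags demand that outgrows the on-site buffer. The commitment, buffer size, and rate are made-up numbers purely for illustration.

    # Hypothetical figures purely for illustration of a buffered pay-per-use model.
    COMMITTED_UNITS = 100    # minimum monthly commitment (e.g., server instances)
    BUFFER_UNITS = 30        # extra on-site capacity, ready to activate
    UNIT_RATE = 250.0        # assumed price per unit per month

    def monthly_charge(units_used: int) -> float:
        """Bill the greater of actual usage or the committed minimum; the buffer
        is only charged when it is actually consumed."""
        billable = max(units_used, COMMITTED_UNITS)
        if billable > COMMITTED_UNITS + BUFFER_UNITS:
            raise ValueError("Demand exceeds on-site capacity; burst to public cloud.")
        return billable * UNIT_RATE

    for used in (80, 100, 120):
        print(f"{used} units used -> ${monthly_charge(used):,.2f} for the month")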

Leading the way to a hybrid IT strategy

The datacenter of today is very different than the datacenter of a decade ago. Today’s innovative technologies can be located at the core of the datacenter, in the cloud, or at the edge — providing a myriad of choices and deployment models. Wherever IT resources are deployed, a multi-cloud, hybrid IT strategy is leading the way to more cost-effective, flexible, and powerful IT options for the enterprise.

To learn more about composable infrastructure, download the free e-book, Composable Infrastructure for Dummies.


About Paul Miller

Paul Miller is Vice President of Marketing for the Software-Defined and Cloud Group at Hewlett Packard Enterprise (HPE). Paul’s organization is responsible for all marketing content and sales enablement tools to help HPE customers get the best product and solution experience.

To read more articles from Paul Miller, check out the HPE Shifting to Software-Defined blog.
