Enterprise Featured

The Art of Sizing Your IT Infrastructure

Sizing an IT infrastructure today can be tough

Although science is used to determine data center needs, there is still an art to the decision process. The reason is simple: a variety of IT devices of various ages from various vendors — each with its own lifecycle, costs, and ongoing maintenance issues — often presents unforeseen consequences.

Most IT equipment is expected to actively contribute for at least 3 years without any additional purchases. But it is difficult to anticipate things like IOPS and capacity one year in advance, much less 3 to 5 years ahead. And how do you plan for future technologies that haven’t been invented yet? Faced with so many variables, it’s surprising more IT professionals don’t consult Ouija boards before they make sizing decisions. All joking aside, sizing your infrastructure does include an element of fortune-telling, and some amount of risk is inherent in the process.

Too small, too large, too many changes

If you select a small system with low upfront costs, you could have problems down the road. Your end users may become angry because they don’t get the performance they expect. Additionally, many under-sized infrastructures require forklift upgrades a year later, eliminating any initial cost savings. And those costs can be compounded: to compensate for the inadequate size of the initial deployment and to guarantee the subsequent upgrade won’t fall short, departments will go to great expense to oversize the environment, well aware that the business won’t be able to take full advantage of the new products for years.

Organizations that look too far into the future will intentionally round up to a solution that is too large for their needs, hoping to stay ahead of these forklift upgrades. This strategy is no better than under-sizing. Imagine the cost of over-investment if the director of IT, VP of IT, and CIO each round up the configuration to mitigate risk. Planned and purchased with the best intentions, these systems can become outdated before their value is ever realized.

Even the most accurate predictions can go wrong when business priorities change. Today’s business models change far faster than the average lifespan of enterprise IT equipment, requiring new levels of agility. Legacy products with a 5-year lifecycle become process and innovation anchors, keeping the business tied to the time period when the technology was acquired.

Don’t be fooled by cost, capacity, or cloud

It’s easy to fall into the over/under-sizing trap, lured by the lowest upfront costs or the greatest capacity for the dollar. Even public cloud, with its elastic approach to infrastructure, can be expensive in the long run. Its ability to adapt continuously to business needs comes with trade-offs, including cost, compliance complexity, performance that depends on user and data locality, and data sovereignty. Although you can grow your environment in very small increments and only when needed, costs often climb quickly if sizing is off.

Don’t be drawn into a quick decision based on a system that promises to deliver top results five years from now. Instead, rein in your sights to the next 1-2 years and assess your current state before making any decisions. Do you need all-flash performance to support tier-1 apps? Do you have remote and branch offices with sizing requirements of their own? Take a good look at your business needs and size your infrastructure according to your current workload and use cases, then find a solution that leaves options open for future growth.

Sizing for your environment

Each environment has unique data capacity and performance requirements, which is why one size won’t fit all, even in the most flexible, agile infrastructure. VDI is a great example of where sizing can be difficult to anticipate and costly to address if the core system proves inadequate. Even when science is applied, the performance needs of the desktops in a new VDI environment cannot be accurately anticipated. As ESG points out, predicting what these performance needs will be, how quickly users will adapt, or which user types will be best suited is especially challenging when planning a VDI environment.

Hyperconvergence offers a cost-effective, scale-as-you-need architecture that is particularly well-suited to VDI and remote office deployments. While it will not provide you with the ability to predict business needs 5 years in advance, it will allow you to set up a reasonably sized infrastructure for the next year and grow in a direction that makes sense. Every hyperconverged node includes all the components needed, so cost and capacity are easy to calculate. These smaller building blocks don’t require large upfront investment or forklift upgrades, so customers can adapt easily to changing dynamics.

Better sizing with hyperconvergence

Sizing is still important, because you want the overall investment to last 5 years. Hyperconvergence offers scalability that is well-defined and simple to implement, allowing customers to start with a simple proof of concept to better understand the viability of a technology and the performance envelope it delivers, then grow the environment as needed.

No crystal ball can ensure perfect sizing, but with hyperconvergence the costs are well-known up front so contingency budgets can be planned for. And hyperconvergence allows the performance to grow linearly, making capacity planning simple. HPE SimpliVity hyperconverged solutions offer a huge advantage for businesses that want the agility and incremental cost advantages of the public cloud in their own data center.

Examine the total economic impact of deploying a hyperconverged solution in this Forrester report.


About Jesse St. Laurent

Jesse St. Laurent is the Chief Technologist for HPE Hyperconverged and SimpliVity. He uses his 20 years of experience to engage channel partners, evaluate emerging technologies, and shape innovative technology solutions involving data center modernization. For more information on how hyperconverged infrastructure can elevate your hybrid IT environment, download the free HPE SimpliVity edition of the Hyperconverged Infrastructure for Dummies ebook.

To read more articles from Jesse St. Laurent, check out the HPE Converged Data Center Infrastructure blog.

IT Ops, Developers, and Business Leaders Can Now Seamlessly Manage Their Hybrid Cloud Environment

Across datacenters worldwide, cloud conversations have changed dramatically over the last few years

Five years ago, the only way organizations could start new projects quickly or instantly deploy applications was to leverage the public cloud. While this offered many benefits, it also created some new challenges.

Hybrid cloud management simplified

One of the biggest challenges involved managing applications deployed in different locations. When an organization’s public cloud, on-premises private cloud, and traditional IT each exist in physical and operational silos, how do they seamlessly manage them all? The answer lies in providing a unified approach to all cloud resources.

To achieve management across everything, it is important to understand who is using these hybrid cloud resources, what mode of operations exists, and how these operations can be improved. In a typical business, three groups of people want different things.

  • IT Ops: IT operations want a simple solution. They want to use standardized infrastructure building blocks in their environment, drive as much automation as possible, and have APIs to integrate anything new with anything existing. They also want to focus fewer IT resources on ops and more on apps. In short, they want to spend more of their time improving business outcomes and less time on maintenance.
  • Developers: Developers want to have a common experience—wherever they are building and deploying applications. They also need simplicity, flexibility, and speed. They want to be self-sufficient and have the freedom to access cloud-native tools to more easily build and deploy apps.
  • Business leaders: For business leaders, driving digital transformation is imperative. They want a competitive advantage by investing in their digital business to grow revenue streams. They also want to manage costs and don’t want to be surprised with unexpected expenses at the end of a billing cycle. Business execs need a real-time view into usage and spending so that they can adjust and optimize along the way.

HPE OneSphere helps each group work together better

A hybrid cloud management software product is now available that addresses the needs of IT ops, developers, and business leaders. HPE OneSphere, the industry’s first SaaS-based, hybrid cloud management solution for on-premises IT and public clouds, is designed around the idea of helping these groups work together more efficiently.

  • Deliver clouds faster: Because HPE OneSphere is a SaaS solution, IT Ops benefit from the time-to-value and ability to offload the management overhead. IT can build and deploy clouds in minutes and provide ready-to-consume resources, tools, and services faster, which improves everyone’s productivity.
  • Enable fast app deployment: HPE OneSphere gives developers easy access to tools, templates, and applications in a cross-cloud service catalog, so they can quickly access the tools they know. No need for changes in the way applications are built and deployed.
  • Manage spend: Business leaders can access consumption and cost analytics across their hybrid cloud environment. This capability lets them act on insights in real time, enabling better—and faster—decision making.

Having it all with simple, hybrid cloud management

Data and applications distributed across multiple clouds, inside data centers, and at the edge create management challenges. Now generally available, HPE OneSphere is designed to streamline collaboration across the entire business — changing how organizations of all sizes get their arms around their hybrid cloud resources.


About Gary Thome

Gary Thome is the Vice President and Chief Technologist for the Software-Defined and Cloud Group at Hewlett Packard Enterprise (HPE). HPE has assembled an array of resources that are helping businesses succeed in a hybrid cloud world. Learn about HPE’s approach to developing and managing hybrid cloud by checking out the HPE OneSphere website. And to find out how HPE can help you determine an application placement strategy that meets your service level agreements, visit HPE Pointnext.

To read more articles from Gary, check out the HPE Converged Data Center Infrastructure blog.

5 Things to Look for Before Jumping on Board with an Azure Stack Vendor

When someone tells me I can have it all, it’s usually too good to be true

Yet, that’s exactly what Microsoft® is delivering to cloud customers with its popular Microsoft Azure Stack cloud platform.

Microsoft Azure Stack is a hybrid cloud platform that enables you to deliver Azure-consistent services within your own data center. That means you can have the power and flexibility of Azure public cloud services — completely under your own control.

Sound too good to be true? Not according to the myriad of customers who have already deployed Azure, making it one of the fastest-growing cloud platforms today. And its growth shows no signs of slowing down.

Azure Stack lets you leverage the same tools and processes to build apps for both private and public cloud, and then deploy them to the cloud that best meets your business, regulatory, and technical needs. It also allows you to speed development by using pre-built solutions from the Azure Marketplace, including many open-source tools and technologies.

The 5 key capabilities to keep in mind when choosing an Azure Stack solution provider

Before you get started with Azure Stack, take some time to determine the best solution that will fit your needs. Azure Stack is only available as an integrated system from a select group of vendors. Each of these vendors offers different capabilities, features, and services, so make sure the solution you select delivers in these five key areas:

1. Configuration flexibility

Flexibility is important because you want your solution to fit seamlessly into your existing IT environment. Look for a solution that gives you the greatest number of configuration options possible. After all, the more customizable a solution is, the more compatible it will be with your current environment and future needs.  

From a fully customizable solution, you want:

  • The exact size to meet your application requirements
  • The processor type that’s right for your workloads
  • Your choice of memory
  • Scalable storage capacity
  • Support for third-party networking switches, power supplies, and rack options

2. High capacity and performance

Capacity and performance are important because you want to run workloads as fast as possible. Many applications – such as analytics – demand extremely high levels of performance, which can be a challenge when using the public cloud. Running your workloads in an on-premises Azure Stack environment can give you the performance you need. But make sure you check out all of your hardware options to ensure you are getting the highest capacity and performance possible for your money.

3. Pay-as-you-consume pricing

If you deploy your Azure Stack solution using a consumption-based model, you’ll be able to reduce costs by leveraging cloud-style economics for the hardware and the cloud services. This approach gives you:

  • Rapid scalability
  • Variable costs aligned to metered usage
  • No upfront expense
  • Enterprise-grade support

4. High level of expertise

When choosing a solution, look for the vendors that can provide the expertise you will need to help you develop a comprehensive hybrid cloud strategy. Also look for a team that can deliver professional services that will meet your use case, design, and implementation needs.

5. Try before you buy

What could be better than a try-before-you-buy approach to Azure Stack? Select a vendor that can provide you with an innovation center. These centers can get you up to speed on the Azure Stack solution prior to purchase, giving you the information you need to make the right decisions. At an innovation center, you can:

  • Access the latest Azure Stack software and hardware
  • Implement a proof-of-concept
  • Test your use cases in your hybrid cloud

And finally, an innovation center lets you see all of the key capabilities in action. You can experiment using highly flexible configurations while testing performance and capacity. And you can see how everything works together to reduce risk and accelerate time to value.

How the Azure Stack offering from HPE stacks up

HPE ProLiant for Microsoft Azure Stack is a fully integrated hybrid cloud solution that delivers Azure-compatible, software-defined infrastructure as a service (IaaS) and platform as a service (PaaS) on hardware manufactured by industry leader Hewlett Packard Enterprise (HPE).

HPE ProLiant for Microsoft Azure Stack is the only solution available that meets the five key capabilities listed above.

  • It is the most customizable solution available, providing the greatest number of configuration options.
  • It is uniquely architected to achieve both high memory capacity at 768 GB RAM and high performance at full 2400 MHz memory speed, increasing memory bandwidth by up to 28% compared to other same-capacity solutions.
  • HPE is the only infrastructure provider that gives you true, consumption-based IT for Azure Stack, making pay-as-you-consume pricing with HPE GreenLake Flex Capacity an attractive option for many.
  • Over 4,000 HPE experts are trained on Azure and hybrid cloud, ensuring you will deploy with confidence.
  • The HPE-Microsoft Innovation Centers, run jointly by HPE and Microsoft, let you try before you buy, while working with hybrid cloud and Azure experts.

Co-engineered by HPE and Microsoft and based on a 30-year partnership, this joint solution provides the five key capabilities you want in your Azure Stack solution.

To learn more, visit HPE ProLiant for Microsoft Azure Stack or watch this short two-minute video. You can also view the webinar, Hewlett Packard Enterprise: the clear choice for your Azure Stack solution.


About Paul Miller

Paul Miller is Vice President of Marketing for the Software-Defined and Cloud Group at Hewlett Packard Enterprise (HPE). Learn about HPE’s approach to developing and managing hybrid cloud infrastructure by checking out the HPE OneSphere website. Read more about composable infrastructure through the e-book, Composable Infrastructure for Dummies. And to find out how HPE can help you determine a workload placement strategy that meets your service level agreements, visit HPE Pointnext.

To read more articles from Paul Miller, check out the HPE Converged Data Center Infrastructure blog.

Are You Just Keeping the Lights On in Your Datacenter?

No matter the location or the language, running a business means running up costs

These costs include capital expenditures, like buying new hardware, and operational costs, like hiring additional staff. Many of these costs are necessary, but many are not, especially in a company that is growing quickly. Complex datacenters demand upgrades, maintenance, and tech refreshes in a nearly continual stream. IT departments often get bogged down in maintenance costs, which makes it hard for them to focus on the business side of the business.

This problem is the keeping the lights on trap, and it affects organizations of all sizes. Since 2013, when IDC found that 80 percent of IT spending went to maintenance and only 20 percent to moving the business forward, IT has made progress shifting toward business innovation. However, according to a more recent analysis by Deloitte, this is only the beginning of a longer transformational journey. Some IT teams have yet to start this shift, and those that have are looking for ways to move faster to keep up with technology innovation.

If they can’t, they’ll end up right back where they started: fighting to keep the lights on.

The notion that IT struggles to move beyond its traditional role into a more innovative one is very common. And as the IDC statistic shows, IT is more often a cost center than a source of innovation and revenue for the company.

Why is this situation still so widespread? A core issue is that nearly everything in the datacenter is manual, not automated. Most datacenters have custom configurations that require their own manual maintenance with specialized tools. Incremental progress on any one or two of these helps, but it isn’t enough to substantially change the big picture for the company. Over time, people get used to this status quo and start to think of it as completely normal. They fall into the trap of believing that a huge step toward automation and innovation is impossible.

For example, a well-established, mid-sized, German supplier to the automotive industry was one of the leaders in its market. A few years ago, cost pressures forced them to reduce the IT team headcount from 18 to fewer than 10 people. However, when the economy improved, the business came to IT with requirements to support important new initiatives. With limited people and time, the IT team couldn’t respond. They were stuck in the keeping the lights on trap.

Hyperconvergence resonates with IT because it addresses this trap. For many, including this German company, escaping it means investing in a new IT solution such as hyperconverged infrastructure to simplify the IT team’s day-to-day tasks. More specifically, this company discovered:

  • A hyperconverged infrastructure purchase frees up substantial capital budget – Hyperconverged infrastructure is, by definition, an integrated solution that incorporates many IT components, which are ordinarily purchased separately. Compute, storage, switches, backup, disaster recovery, WAN optimization, etc. are deployed in a single appliance. The company had plans to refresh these components over the next several budget cycles, but investing in hyperconverged infrastructure would free up these future capital expenses.
  • The productivity of hyperconverged infrastructure significantly reduces operational expenses – A lot of time is spent operating the existing environment: applying patches and upgrades to various components, administering backups and disaster recovery, managing vendor relationships, and training in various products and technology. Using hyperconverged infrastructure allows IT to simplify the various operating tasks within the datacenter.
  • Freed up headcount supports the new business initiatives – The IT team used the concept of lean IT to communicate to senior management the idea of using technology to reduce wasteful use of staff time so that it could be directed at more productive work.

This financial analysis unlocked the purchase of hyperconverged infrastructure. Not only could the IT team support the new project with a flat budget and no new headcount, its working environment greatly improved. Today, the team is working on innovative, forward-looking projects instead of spending long weekends and evenings simply keeping the systems running. This positive outcome would have been impossible to achieve with conventional IT.

This German company is just one of many HPE SimpliVity customers that have exited the keeping the lights on trap. To get buy-in across the organization and break the lights-on cycle, customers are embracing new business requirements, setting high expectations, and framing the investment proposal with phrases like lean IT to align with the business’s objectives.


About Jesse St. Laurent

Jesse St. Laurent is the Chief Technologist for HPE Hyperconverged and SimpliVity. He uses his 20 years of experience to engage channel partners, evaluate emerging technologies, and shape innovative technology solutions involving data center modernization. For more information on how hyperconverged infrastructure can elevate your hybrid IT environment, download the free HPE SimpliVity edition of the Hyperconverged Infrastructure for Dummies ebook.

To read more articles from Jesse St. Laurent, check out the HPE Converged Data Center Infrastructure blog.

Alexa, How Are My Cloud Providers Doing Today?

How open architectures let you simplify and customize multi-cloud management

Smart speakers continue to grow in popularity, promoted as must-have virtual home assistants. Voice activated, they let you play music, create to-do lists, search the web, and control smart products, all hands-free. But can these virtual assistants help you manage your multi-cloud environment at work? Believe it or not, the answer is yes.

The Alexa demo – a fun way to highlight an open architecture

Effectively managing your multi-cloud environment in today’s complex hybrid IT estate can be a Herculean task. But, according to a developer at Hewlett Packard Enterprise (HPE), it doesn’t have to be — just speak your request, wait a few seconds, and it’s taken care of. All you need is a multi-cloud management solution and a virtual assistant.

That same developer wanted to show the importance of an open architecture, so he enlisted Alexa, Amazon’s popular voice-based home assistant, to help him. Using open standards, open source software, and an open API, he showed that integrating one application with another can be relatively simple. In less than a week, he had the demo up and running — ready to entertain an audience at an upcoming conference.

The importance of an open architecture

You may be wondering why the developer spent any time at all on this demo. Why is it so important that a multi-cloud management solution be extensible?

The answer is simple: it unlocks a world of possibilities that is only limited by your creativity. An open architecture lets anyone integrate one application to another, giving an enterprise unlimited opportunities to create custom experiences for themselves and their customers.

HPE recently announced HPE OneSphere, the industry’s first SaaS-based multi-cloud management solution for on-premises IT and public clouds. In this world of hybrid cloud, any multi-cloud management platform must be open and engineer friendly, and it must be able to easily integrate and work with other platforms.

The extensible and open API

In the case of the Alexa demo, the voice service receives a command that interacts with the HPE OneSphere API. Alexa interprets the voice commands and then translates them into API calls to HPE’s multi-cloud management solution. The open API and extensive documentation in HPE OneSphere make it simple for developers to integrate the solution with many other applications. And because the HPE software manages your entire hybrid IT estate through APIs, anyone can take advantage of them to interact with the whole environment.

When the open source Alexa/HPE OneSphere demo becomes available on GitHub, it will allow developers to take it and build additional skills to interact with their own hybrid infrastructure. By writing additional interactions and queries, developers can easily use modern scripting languages to extend the skillset by utilizing the HPE OneSphere APIs and other software-defined infrastructure elements. This capability allows hybrid IT management to be fully customized to meet the specific needs of any organization.  (To learn more about HPE open source innovation through collaboration and working with the HPE OneSphere API, check out the HPE Developer site.)
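To make the shape of such an integration concrete, here is a minimal sketch of the kind of skill backend the demo describes: one function fetches provider status from the management API, and another formats the payload into the sentence a voice assistant would speak. The endpoint URL, payload schema, and field names are assumptions for illustration, not the documented HPE OneSphere API.

```python
import json
import urllib.request

# Hypothetical endpoint -- the real HPE OneSphere API path may differ.
ONESPHERE_PROVIDERS_URL = "https://onesphere.example.com/rest/providers"

def fetch_providers(url, token):
    """GET the provider list from the management API (hypothetical schema)."""
    req = urllib.request.Request(url, headers={"Authorization": f"Bearer {token}"})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

def speak_provider_status(payload):
    """Format an API payload as the sentence a voice assistant would speak."""
    members = payload.get("members", [])
    ok = sum(1 for m in members if m.get("status") == "OK")
    parts = [f"{ok} out of {len(members)} providers' status OK."]
    for m in members:
        parts.append(f"{m['name']} scores {m['score']}.")
    return " ".join(parts)

# Sample payload mirroring the demo dialogue below
sample = {
    "members": [
        {"name": "Amazon Web Services", "status": "OK", "score": 702},
        {"name": "Private cloud", "status": "OK", "score": 751},
    ]
}
print(speak_provider_status(sample))
```

In a real skill, `fetch_providers` would run inside the intent handler and the returned sentence would be placed into the Alexa response object; separating the API call from the formatting keeps each extension (a new intent, a new query) a few lines of code.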

Three use cases for Alexa and HPE OneSphere

Although the original purpose of the HPE OneSphere/Alexa integration was to demonstrate how open the HPE OneSphere architecture is, the integration proved useful. The demo highlights how Alexa or similar virtual assistants can integrate with HPE OneSphere to help enterprises better manage their multi-cloud environments. I’ve included some examples below of how three different users (IT operators, developers, and executives) could benefit from this hands-free, multi-cloud management application.

HPE OneSphere gives IT managers a better view of their entire hybrid IT environment and includes automation and proactive management tools for increased productivity.

IT operator: Alexa, how are my cloud providers doing today?

Alexa: Welcome to HPE OneSphere. Two out of two providers’ status OK. Amazon Web Services scores 702. Private cloud scores 751.

IT operator: Alexa, what is the utilization of my private cloud?

Alexa: The utilization score of private cloud is 336, using 6% CPU, 63% memory, 30% storage.

HPE OneSphere lets developers use the tools and apps they need when they need them — without having to wait for IT to provision new infrastructure. Using the SaaS web portal or through APIs, developers have instant access to a pool of IT resources.

Developer: Alexa, create a project named Experiment in my production environment.

Alexa: Project Experiment created in your production environment at your facility per organizational policy.

Not only can IT operators and developers benefit from a multi-cloud management solution like HPE OneSphere, business executives can make better business decisions because of increased transparency. Real-time, cross-cloud insights enable CIOs and lines of business to increase resource utilization and reduce costs, improving efficiency across the board.

Executive: Alexa, what is my total spend for the month across all projects?

Alexa: Total spend across all of your providers for the month is $33,495.

Not your father’s enterprise system

Today’s open architectures allow organizations to do amazing things. HPE’s open source APIs allow IT to manage clouds and infrastructure as code, so teams can customize management of the entire hybrid IT environment in nearly unlimited ways, including receiving voice updates from a virtual assistant.

So what’s next? With the HPE OneSphere open architecture and API, you decide. Your creativity and imagination are the only limit.

To learn more about how to succeed in a hybrid IT environment, download the white paper from 451 Research: Eight Essentials for Hybrid IT.

About Gary Thome

Gary Thome is the Vice President and Chief Technologist for the Software-Defined and Cloud Group at Hewlett Packard Enterprise (HPE). HPE has assembled an array of resources that are helping businesses succeed in their digital transformation. Learn about HPE’s approach to managing hybrid IT by checking out the HPE website, HPE OneSphere. And to find out how HPE can help you determine a workload placement strategy that meets your service level agreements, visit HPE Pointnext.

To read more articles from Gary, check out the HPE Converged Data Center Infrastructure blog.

5 Reasons Why You Need to Plan for Hybrid Cloud

Many of today’s successful businesses are moving beyond the public cloud into a new era of hybrid IT that combines public cloud, private cloud, and traditional IT

These organizations are implementing a hybrid cloud strategy because it is helping them improve the way they run their business and deliver new services to customers.

A number of organizations are opening up and sharing their stories — detailing why they added hybrid cloud to their current IT environment and how they are making hybrid IT management simpler than ever before. Based on their experiences, five reasons stand out as to why enterprise customers are integrating hybrid cloud with traditional IT.

  1. Ensure security while doing business faster

Optio Data, a data strategy company, helps its customers more efficiently deliver state-of-the-art IT solutions. Optio Data found that incorporating a hybrid IT approach addressed the needs of customers who required not only fast and flexible development options that cloud-based solutions provide, but also the security and control of keeping certain solutions on-premises. Read the full story: Simplifying hybrid IT for a successful digital transformation.

  2. Deliver products and services more efficiently

Hybrid IT lets businesses choose the best IT infrastructure to meet specific business needs. Redbox is a new-release movie and game rental company with more than 41,500 kiosks in the United States. By combining on-premises private cloud with public cloud, they can run their business more efficiently and deliver a variety of services and products to their customers. This type of IT flexibility opens the door for continued innovation and better customer engagement. Read the full story: Redbox implements hybrid IT strategy and sparks digital transformation.

  3. Collaborate more effectively

For one prominent animation studio, a hybrid cloud environment provides the flexibility to seamlessly collaborate. During peak compute rendering times, team members from their animation studios all over the world can access a hybrid cloud infrastructure — a combination of on-site private cloud and off-site managed private cloud. This hybrid cloud environment allows artists and producers worldwide to share and collaborate in real time. Read the full story: Creative freedom through digital transformation.

  4. Offer more services while saving money

According to an IT leader at one state’s department of transportation (DoT), deploying a hybrid IT strategy helps them provide more services in a more cost-effective way. Because of strict budget restrictions, running all of their workloads in the public cloud on a regular basis is cost-prohibitive. Instead, they run their workloads on premises using traditional IT or in a private cloud. And then during peak demand, they utilize the public cloud for extra capacity. Read the full story: Department of transportation moves from traditional to hybrid IT.
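The peak-demand pattern the DoT describes is often called cloud bursting: fill on-premises capacity first, and pay for public cloud only when demand overflows. A minimal sketch of that placement rule, with purely illustrative capacity numbers and function names:

```python
# Hypothetical sketch of the "burst to public cloud at peak" placement
# decision described above. Capacity units and names are illustrative.

ON_PREM_CAPACITY = 100  # workload units the private cloud can absorb

def place_workloads(demand: int) -> dict:
    """Fill on-premises capacity first; overflow goes to public cloud."""
    on_prem = min(demand, ON_PREM_CAPACITY)
    public_cloud = max(0, demand - ON_PREM_CAPACITY)
    return {"on_prem": on_prem, "public_cloud": public_cloud}

# Normal day: everything stays on premises.
print(place_workloads(80))   # {'on_prem': 80, 'public_cloud': 0}
# Peak demand: only the overflow runs in the public cloud.
print(place_workloads(140))  # {'on_prem': 100, 'public_cloud': 40}
```

The appeal for a budget-constrained organization is that the public-cloud spend is proportional only to the overflow, not to total demand.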

  5. Improve outcomes through more opportunities

Another organization that recently embraced a hybrid cloud environment is HudsonAlpha, a leader in genomic research. Many of HudsonAlpha’s researchers have been given government grant money to test novel treatment theories. Because of these grants, they can quickly test creative ideas in a public cloud without taking resources away from the more accepted research methodologies that rely on HudsonAlpha’s private cloud. Read the full story: How digital transformation is powering genomic research.

Hybrid cloud is now simple to manage

In the past, one of the main concerns with deploying applications in a hybrid IT infrastructure was added complexity. Using public cloud, private cloud, and on-premises IT creates silos of information, making it difficult to share information and move applications from one IT model to another. The complexity of adjusting to a hybrid cloud environment has slowed down the digital transformation of many companies.

With the recent general availability of HPE OneSphere, a multi-cloud management solution, hybrid cloud complexity is no longer an issue. Through a software-as-a-service (SaaS) portal, HPE OneSphere gives customers access to a pool of IT resources that spans the public cloud services they subscribe to, as well as their on-premises environments. Using this new management tool, organizations are now able to seamlessly compose, operate, and optimize all workloads across on-premises, private, hosted, and public clouds.

Each day, more and more organizations are adding a growing mix of hybrid cloud environments to their traditional IT in order to run their businesses better and offer more services to their customers. Hybrid IT helps businesses deliver more to customers, collaborate better, save money, improve outcomes and ensure security. And now with HPE OneSphere, hybrid IT management is simpler than ever.


About Paul Miller

Paul Miller is Vice President of Marketing for the Software-Defined and Cloud Group at Hewlett Packard Enterprise (HPE). Learn about HPE’s approach to developing and managing hybrid cloud infrastructure by checking out the HPE OneSphere website. Read more about composable infrastructure through the e-book, Composable Infrastructure for Dummies. And to find out how HPE can help you determine a workload placement strategy that meets your service level agreements, visit HPE Pointnext.

To read more articles from Paul Miller, check out the HPE Converged Data Center Infrastructure blog.

Are You Just Keeping the Lights On in Your Datacenter?


No matter the location or the language, running a business means running up costs

There are capital expenditures, like buying new hardware, and operational costs, like hiring additional staff. Many of these costs are necessary, but many are not, especially in a company that is growing quickly. Complex datacenters demand upgrades, maintenance, and tech refreshes in a nearly continual stream. IT departments often get bogged down in maintenance costs, which makes it hard for them to focus on moving the business forward.

This problem is the “keeping the lights on” trap, and it affects organizations of all sizes. Since 2013, when IDC found that 80 percent of IT spending went to maintenance and only 20 percent to moving the business forward, IT has made progress in shifting toward business innovation. However, according to a more recent analysis by Deloitte, this is only the beginning of a longer transformational journey. Some IT teams have yet to start this shift, and those that have are looking for ways to move faster to keep up with technology innovation.

If they can’t, they’ll end up right back where they started: fighting to keep the lights on.

The notion that IT struggles to move beyond its traditional role into a more innovative one is very common. And, as the IDC statistic shows, IT is more often a cost center than a source of innovation and revenue for the company.

Why is this situation still so widespread? A core issue is that nearly everything in the datacenter is manual rather than automated. Most datacenters have custom configurations that require their own manual maintenance with specialized tools. Incremental progress on any one or two of these helps, but it isn’t enough to substantially change the big picture for the company. Over time, people get used to this status quo and start to think that it is completely normal. They fall into the trap of believing that a huge step toward automation and innovation is impossible.

For example, a well-established, mid-sized German supplier to the automotive industry was one of the leaders in its market. A few years ago, cost pressures forced it to reduce the IT team headcount from 18 to fewer than 10 people. However, when the economy improved, the business came to IT with requirements to support important new initiatives. With limited people and time, the IT team couldn’t respond. They were stuck in the “keeping the lights on” trap.

Hyperconvergence resonates with IT because it addresses this trap directly. For many, including this German company, that means investing in a new IT solution such as hyperconverged infrastructure to simplify the IT team’s day-to-day tasks. More specifically, this company discovered:

  • A hyperconverged infrastructure purchase frees up substantial capital budget – Hyperconverged infrastructure is, by definition, an integrated solution that incorporates many IT components, which are ordinarily purchased separately. Compute, storage, switches, backup, disaster recovery, WAN optimization, etc. are deployed in a single appliance. The company had plans to refresh these components over the next several budget cycles, but investing in hyperconverged infrastructure would free up these future capital expenses.
  • The productivity of hyperconverged infrastructure significantly reduces operational expenses – A lot of time is spent operating the existing environment: applying patches and upgrades to various components, administering backups and disaster recovery, managing vendor relationships, and training in various products and technology. Using hyperconverged infrastructure allows IT to simplify the various operating tasks within the datacenter.
  • Freed up headcount supports the new business initiatives – The IT team used the concept of lean IT to communicate to senior management the idea of using technology to reduce wasteful use of staff time so that it could be directed at more productive work.

This financial analysis unlocked the purchase of hyperconverged infrastructure. Not only could the IT team support the new project with a flat budget and no new headcount, its working environment greatly improved. Today, the team is working on innovative, forward-looking projects instead of spending long weekends and evenings simply keeping the systems running. This positive outcome would have been impossible to achieve with conventional IT.

The German company is just one of the many HPE SimpliVity customers that have escaped the “keeping the lights on” trap. To get buy-in across the organization and break the lights-on cycle, customers are embracing new business requirements, setting high expectations, and framing the investment proposal with concepts like lean IT to align with the business’s objectives.


About Jesse St. Laurent

Jesse St. Laurent is the Chief Technologist for HPE Hyperconverged and SimpliVity. He uses his 20 years of experience to engage channel partners, evaluate emerging technologies, and shape innovative technology solutions involving data center modernization. For more information on how hyperconverged infrastructure can elevate your hybrid IT environment, download the free HPE SimpliVity edition of the Hyperconverged Infrastructure for Dummies ebook.

To read more articles from Jesse St. Laurent, check out the HPE Converged Data Center Infrastructure blog.

Putting Infrastructure Management in the Driver’s Seat


What makes the future exciting? Simplicity

In all mankind’s wildest imaginings about the future, a simpler life is almost always a central theme, driven by amazing technologies.

Some of the most talked-about new technologies that promise to simplify life are already here, and some are not far away. The smartphone provides one place to do pretty much anything, anywhere. Smart lighting systems can be surprisingly helpful, automatically turning lights on and off. And self-driving cars are ready and waiting to be mass produced as soon as the technology that supports them is perfected.

Simplifying today’s data center with software-defined intelligence

One of the places that needs simplicity the most is the data center, where a mix of infrastructure often becomes difficult to manage and maintain. Infrastructure management that leverages software-defined intelligence should be at the core of driving simplicity for the data center, minimizing time spent on manual, repetitive tasks, and reducing human error. Infrastructure management should also do a better job of self-monitoring issues that can be resolved with little or no human intervention.

Finding an infrastructure management solution that brings that kind of software-defined intelligence to the data center doesn’t have to be a dream for the future. Solutions are available today that can dramatically increase data center simplicity. But, when searching for something to make life easier, a few basic requirements need to be met.

  • Template-based provisioning and updates

Provisioning servers is often a manual task, which increases the possibility of errors. Template-based provisioning cuts down on the time it takes to provision and significantly reduces errors. Think of a server template as a fixed-course menu. This menu is crafted by expert chefs, or in this case, subject matter experts for servers, storage, and networking, all working together to define the very best ingredients (settings) for a meal. This template is then deployed to any predetermined server and repeated exactly the same way each time it is needed. IT can queue up the provisioning of hundreds of servers and then work on something else while the infrastructure management software does the work.

Template-based provisioning is practically the very definition of self-driving. IT can create an unlimited number of unique templates to cover all workloads and applications. Just press go, and then spend time on other activities that create more value for the business.
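As a rough illustration of the idea, one vetted template can be applied identically to every server in the queue. All the settings, names, and functions below are hypothetical, not a real provisioning API:

```python
# Illustrative sketch of template-based provisioning: one expert-defined
# template, applied identically and repeatably to each server.

SERVER_TEMPLATE = {
    "bios": {"power_profile": "max_performance"},
    "firmware_baseline": "2024.03",
    "network": {"vlans": [10, 20], "mtu": 9000},
    "storage": {"raid": "RAID10"},
}

def provision(server_id: str, template: dict) -> dict:
    """Apply the same vetted settings to any server, with no manual steps."""
    return {"server": server_id, **template}

# Queue up many servers; each gets an identical configuration.
fleet = [provision(f"server-{n:03d}", SERVER_TEMPLATE) for n in range(1, 4)]
for s in fleet:
    print(s["server"], s["firmware_baseline"])
```

Because every server is stamped from the same template, configuration drift and one-off typos are designed out rather than caught after the fact.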

  • A consolidated view of all infrastructure

Most businesses probably have more than just one type of infrastructure spread out across multiple data centers in locations worldwide. And any infrastructure management software is going to require a separate instance once a scale limit is reached, which results in multiple infrastructure management instances to keep track of.

A company with data center locations worldwide, involving a mix of platforms, would have a difficult time keeping everything running smoothly. A single, consolidated view lets IT see everything at once and respond quickly to any issues.

  • Integrated remote support

What if IT could have an extra support person around the data center, checking on mundane but important items: maintaining warranties for equipment, ordering replacement parts, or handling service tickets? What if this support came included with the management software? Integrated remote support should be part of infrastructure management, saving time on tasks that a computer can easily handle.

Imagine the time savings if integrated support could open a service ticket and send an email confirming the ticket was opened and the issue resolved. If a part breaks, infrastructure management could identify it, automatically order the part, and have it delivered straight to the data center. Systems built with software-defined intelligence will know what is broken without human intervention. Keeping up with small but very important maintenance tasks like these, without having dedicated staff, can really make a difference in keeping things running smoothly.
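That kind of flow can be sketched as a simple event handler: detect a failed part, open a ticket, order a replacement if no spare is on site. Every function, field, and name below is hypothetical, not a real management API:

```python
# Hypothetical sketch of an integrated-support flow for hardware failures.

def handle_hardware_event(event: dict, spares: dict) -> list:
    """Return the automated actions taken for a component failure."""
    actions = []
    if event["status"] == "failed":
        ticket = f"TICKET-{event['component']}-{event['serial']}"
        actions.append(("open_ticket", ticket))
        if spares.get(event["component"], 0) == 0:  # no spare on site
            actions.append(("order_part", event["component"]))
        actions.append(("notify_admin", ticket))    # confirmation email
    return actions

actions = handle_hardware_event(
    {"component": "fan", "serial": "X123", "status": "failed"},
    spares={"fan": 0},
)
for action in actions:
    print(action)
```

The point of the sketch is the ordering: the system files the ticket and orders the part before any human is even notified.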

The self-driving data center is here today

Businesses don’t have to look far to find this kind of software-defined, self-driving data center. HPE OneView meets all of the above requirements and manages HPE servers, storage, and networking at scale. The HPE OneView Global Dashboard allows IT to keep tabs on the entire infrastructure, and integrated remote support is included for free with every HPE OneView license.


About Paul Miller

Paul Miller is Vice President of Marketing for the Software-defined and Cloud Group at Hewlett Packard Enterprise (HPE). To learn how to migrate with ease to HPE OneView, watch this webinar replay. To learn more about HPE OneView, download the free ebook, HPE OneView for Dummies.

To read more articles from Paul Miller, check out the HPE Converged Data Center Infrastructure blog.

Can Hyperconverged Infrastructure Solve the Data Problem?


Fifteen years ago, your standard hard drive had a capacity of about 36 GB

These hard drives delivered roughly 150 IOPS. Today, hard drives are equipped with over 6 TB of capacity… and they deliver roughly 150 IOPS.

The data problem isn’t just one of capacity anymore. Data growth is an issue to be sure, with IDC predicting the world will create 163 zettabytes of data a year by 2025. But this isn’t the problem that should be at the forefront of IT professionals’ minds. The most troubling issue for those in IT is that increased data growth and hard drive capacity don’t necessarily correlate to improved performance. Read and write speeds have not increased at nearly the rate that disk capacity has, and this creates a performance bottleneck.
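A back-of-the-envelope calculation makes the gap concrete. Assuming 4 KB random I/O at 150 IOPS for both drives (illustrative figures), random throughput is fixed, so the time to read an entire drive grows with capacity:

```python
# Why capacity growth without IOPS growth is a bottleneck:
# random throughput is flat, so a full-drive random read takes ever longer.
# Assumes 4 KB random I/O at 150 IOPS (illustrative figures).

IOPS = 150
BLOCK_KB = 4
throughput_mb_s = IOPS * BLOCK_KB / 1024  # ~0.59 MB/s of random I/O

def full_drive_read_hours(capacity_gb: float) -> float:
    """Hours to read an entire drive at the fixed random-I/O rate."""
    return capacity_gb * 1024 / throughput_mb_s / 3600

print(f"36 GB drive: {full_drive_read_hours(36):.0f} hours")    # ~17 hours
print(f"6 TB drive: {full_drive_read_hours(6144):.0f} hours")   # ~3000 hours (~4 months)
```

Capacity grew more than 150-fold while the I/O rate stayed flat, so the time to work through a full drive grew by the same factor.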

Think of it like drinking a milkshake through a straw: no matter how large your cup is, you can’t drink your beverage any faster if you don’t increase the width of your straw. And the throughput gets worse if you make the milkshake thicker… you’re not drinking it any quicker, and you’re only making yourself frustrated, tired, and inefficient.

The same is true in the data center. Infrastructure struggles to keep up with the growing volume of data the business depends on, and performance hasn’t kept pace with the available capacity. However, there is a cure: the key to solving the data problem is making data truly efficient.

Data efficiency technologies were originally designed to help manage rapidly growing volumes of data. However, now that the primary concern for IT isn’t addressing capacity limitations, but performance ones, data efficiency technologies like deduplication, compression, and optimization need to be adjusted to make sense in this new environment.

Herein lies the most prominent data center conundrum: how do you ensure peak performance and predictability of applications in a cost-effective manner in the post-virtualization world when IOPS requirements have increased dramatically and hard drive IOPS have increased only incrementally?

Many companies look to flash storage as a solution to combat stagnant performance rates. But, while flash storage is useful for removing the performance bottleneck, it’s expensive and not suitable for all portions of the data lifecycle.

One solution is hyperconverged infrastructure. Hyperconverged solutions that leverage flash/SSD technology are designed to make data efficient and increase data center performance. HPE SimpliVity hyperconverged technology delivers deduplication, compression, and optimization for all data globally, across all tiers of the data lifecycle. And it’s all inline, making the data much more efficient to store, move, track, and protect.
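As a general illustration of inline deduplication (the technique itself, not HPE SimpliVity’s actual implementation), each incoming block can be hashed before it is written, so duplicate content is stored only once:

```python
# Minimal sketch of inline, content-addressed deduplication:
# hash each block on write; only previously unseen content is stored.

import hashlib

class DedupStore:
    def __init__(self):
        self.blocks = {}  # hash -> data, each unique block stored once
        self.refs = []    # logical write stream, as a list of hashes

    def write(self, block: bytes) -> None:
        digest = hashlib.sha256(block).hexdigest()
        if digest not in self.blocks:  # only new content costs capacity and I/O
            self.blocks[digest] = block
        self.refs.append(digest)

store = DedupStore()
for block in [b"OS image", b"user data", b"OS image", b"OS image"]:
    store.write(block)

print(len(store.refs), "logical blocks,", len(store.blocks), "stored")  # 4 logical blocks, 2 stored
```

Because the duplicate check happens before the write, redundant blocks never consume disk I/O in the first place, which is what makes inline deduplication a performance feature as well as a capacity feature.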

As the amount of data increases every second of every day, businesses have to find a way to make sure their infrastructure can handle the increased load – without sacrificing performance. By making data efficient from the very outset and across all lifecycles, HPE SimpliVity solves the data problem.


About Jesse St. Laurent

Jesse St. Laurent is the Chief Technologist for HPE Hyperconverged and SimpliVity. He uses his 20 years of experience to engage channel partners, evaluate emerging technologies, and shape innovative technology solutions involving data center modernization. For more information on how hyperconverged infrastructure can elevate your hybrid IT environment, download the free HPE SimpliVity edition of the Hyperconverged Infrastructure for Dummies ebook.

To read more articles from Jesse St. Laurent, check out the HPE Converged Data Center Infrastructure blog.

Hyperconvergence increases efficiency, making IT more effective


World renowned consultant, educator, and author Peter Drucker said, “Efficiency is doing things right; effectiveness is doing the right things.”

When it comes to IT operational efficiency, doing things right means streamlining IT. By doing so, IT organizations can save time and expenses, devoting more resources to strategic activities.

But it’s not about cutting corners. IT needs to be sure that the changes it makes don’t sacrifice the quality of the product or service; in other words, IT needs to do the right things.

Streamlining IT with hyperconvergence

So, how do you trim complexity without sacrificing quality? According to some IT teams, adopting hyperconverged infrastructure is a solid step in the right direction.

Hyperconverged infrastructure can provide businesses with significantly increased efficiency, specifically operational efficiency. According to a report from IDC, businesses that implemented hyperconverged infrastructure realized an 81 percent increase in time spent on new projects and innovation. Additionally, these businesses also saw a 33 percent increase in budget spent on new technology projects and purchases. Hyperconvergence allowed these IT teams to spend less time keeping the lights on, so they could spend more time delivering value to the business.

Hyperconverged technology increases operational efficiency by combining 8 to 12 disparate IT components and services into a single solution that easily scales on demand, allowing IT teams to focus on accelerating business innovation. By massively simplifying the IT infrastructure stack, hyperconverged infrastructure improves IT agility and time to production, reduces IT costs and streamlines operations, and mitigates risk with enterprise resiliency and built-in data protection.

Less complexity means more time for innovation

Customers polled in that same IDC study reported that their IT operations saw a significant increase in efficiency once hyperconvergence was implemented. One IT administrator explained the increase by saying, “We’re finding this [infrastructure] just runs, and because there’s so much less complexity, so much less inventory and assets to manage, it gives us the ability to go work on other things that we had not had the time to in the past.” What’s more, customers noticed “… a huge reduction with IOPS across our environment.”

The author of the report, Eric Sheppard, Research Director at IDC, commented, “Customers tell us that ‘mitigating risk to the business’ and ‘supporting business revenue objectives’ are the two most important business outcomes that can be achieved through the use of IT… It should be no surprise to learn that IT departments are increasingly looking for infrastructure that improves resource utilization rates while addressing productivity and agility within the datacenter. Organizations around the world have turned to converged systems to achieve just such goals.”

Why choose HPE SimpliVity hyperconverged?

The customers who participated in the study all have one thing in common: they chose HPE SimpliVity as their hyperconverged vendor of choice. HPE SimpliVity stands apart from the competition with the ability to globally manage all resources and workloads from a single interface. Two features in particular ranked high in customer survey results: the built-in data protection and accelerated data efficiency. IDC discovered that customers not only save time using HPE SimpliVity backup and replication features, but they were also able to retire third-party data protection solutions for HPE SimpliVity workloads. The survey found that:

  • Over 50% of HPE SimpliVity customers using the built-in data protection features are retiring their incumbent third-party backup and/or replication solutions.
  • 79% of customers see significant improvement in backup and disaster recovery processes, due to global deduplication and replication features and greatly reduced RTO/RPO times.
  • 75% of customers realize a 65% improvement in utilization of storage resources.

Customers make a compelling case that HPE SimpliVity hyperconverged infrastructure is effective in doing the right things and producing the desired results. As IDC indicates, “Hyperconverged scale-out and feature-rich systems are driving real benefits within datacenters around the world, impacting CAPEX and, more importantly, OPEX.”

To learn more about the operational efficiencies that hyperconverged customers experienced, read the full IDC report.


About Jesse St. Laurent

Jesse St. Laurent is the Chief Technologist for HPE Hyperconverged and SimpliVity. He uses his 20 years of experience to engage channel partners, evaluate emerging technologies, and shape innovative technology solutions involving data center modernization. For more information on how hyperconverged infrastructure can elevate your hybrid IT environment, download the free HPE SimpliVity edition of the Hyperconverged Infrastructure for Dummies ebook.

To read more articles from Jesse St. Laurent, check out the HPE Converged Data Center Infrastructure blog.
