
Reducing Infrastructure Complexity: There Has to be a Better Way


Most people manage numerous financial accounts: checking accounts, savings accounts, credit cards, mortgages, and various other recurring bills

Historically, we needed to deal with each account separately by mailing in payments, keeping a record of each transaction, and ensuring the correct funds were available. This process was cumbersome and time-consuming. Today’s online banking technology has solved those complexities. Now people can remotely manage all their accounts from one interface, saving time, adding flexibility in how money is managed, and significantly improving the overall banking experience.

Needed: A common management solution for the data center

That old style of banking is a lot like the multiple types of IT infrastructure management tools companies have used in the past (and some still use today) to run their businesses. Over the years, businesses built their IT environments from separate servers, storage, and networking components, making their data centers complex and difficult to manage. These IT environments quickly grew out of control, along with the myriad of management tools and specialized resources needed to maintain them.

Just as online banking simplified and streamlined banking activities, many companies need one common management solution to consolidate all their infrastructure silos. Customers need a single solution, with one unified management interface, to help address the complexities of managing their hybrid IT environments that include server, storage, and networking.

Most infrastructure vendors struggle to provide such a solution, as many management applications address only server management and may require separate applications to handle different storage and networking components. The challenge is that such a tool would need to provide an at-a-glance view of the health status of multiple servers, profiles, and enclosures all over the world (including virtual and physical appliances). Just like online banking capabilities, the management tool would also need to provide faster access to information, which would enable better decision making.

The future is now

The type of tool companies need to solve their ever-growing complexity issues in the data center is not a dream for the future. This type of infrastructure management software for data centers is already available and being used by over 1 million customers around the world. HPE OneView enables customers to transform their servers, storage, and networking into software-defined infrastructure that eliminates complex manual processes, spurs IT collaboration, and increases the speed and flexibility of IT service delivery. The infrastructure management solution takes a software-defined, programmatic approach through efficient workflow automation, a modern dashboard, and a comprehensive partner ecosystem.

Using one management interface, customers can pull together servers, storage, and networking for many infrastructure platforms. And if a customer has multiple instances of HPE OneView, the HPE OneView Global Dashboard consolidates their hybrid infrastructure environment into a single view, delivering much-needed simplicity.
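That “single interface” is also exposed programmatically. As a rough illustration, here is a minimal sketch that assumes the documented HPE OneView REST API (a login session plus the server-hardware endpoint); the appliance address, credentials, and API version are placeholders, not values from this article.

```python
import requests

APPLIANCE = "https://oneview.example.com"  # placeholder appliance address
API_VERSION = "800"  # assumption: set to a version your appliance supports

def get_session_token(username: str, password: str) -> str:
    # POST /rest/login-sessions returns a sessionID used for later calls
    resp = requests.post(
        f"{APPLIANCE}/rest/login-sessions",
        json={"userName": username, "password": password},
        headers={"X-API-Version": API_VERSION},
        verify=False,  # demo only; verify certificates in real use
    )
    resp.raise_for_status()
    return resp.json()["sessionID"]

def print_server_health(token: str) -> None:
    # GET /rest/server-hardware lists managed servers, each with a status field
    resp = requests.get(
        f"{APPLIANCE}/rest/server-hardware",
        headers={"Auth": token, "X-API-Version": API_VERSION},
        verify=False,
    )
    resp.raise_for_status()
    for server in resp.json().get("members", []):
        print(f"{server['name']}: {server['status']}")

if __name__ == "__main__":
    print_server_health(get_session_token("administrator", "secret"))
```

Running something like this would print each managed server with its health status – roughly one row of the at-a-glance dashboard view described above.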

One customer, Porsche Informatik, is one of Europe’s largest developers of custom software solutions for the automobile trade. The company needed to streamline the management of over 1,600 virtual servers and 133 blade servers – and they chose HPE OneView to help them do it.

“HPE OneView gives us a much better and more intuitive view of our HPE server infrastructure. This dramatically speeds up many system management tasks, including the deployment of new ESXi servers and VLAN configurations, which are 90% faster,” explains Gerald Nezerka, Windows Services Team Manager for Infrastructure & Common Platforms, Porsche Informatik.

Another customer, Opus Interactive, has provided cloud hosting and colocation services, supplemented by backup and recovery services, for more than 22 years. When the company was smaller, they were fine using standalone infrastructure management tools on an ad hoc basis. But after experiencing rapid growth over the last several years, Opus Interactive needed a simpler way to maintain its infrastructure in order to adhere to service-level agreements (SLAs) with more than 300 customers. They turned to HPE OneView.

“HPE OneView enables us to do more with less,” explains Eric Hulbert, President and Cofounder of Opus Interactive. “In order to maintain our level of service as we continue our rapid growth, we need to get as much infrastructure and as many customers managed per engineer or administrator as possible. Because it is now easier to manage all our devices, HPE OneView makes this happen.”

Just like online banking, infrastructure management software simplifies the view into hybrid IT environments and allows IT managers to save significant time and reduce complexity. And new features within HPE OneView now allow customers to consolidate all of their hybrid infrastructure into a single view – no matter where they are located. A better way to manage your infrastructure is here, and organizations everywhere are using it.

To learn more about HPE OneView, check out the free e-book: HPE OneView for Dummies. Or visit the website here.


About the Author

McLeod Glass is Vice President and General Manager of HPE SimpliVity & Composable at Hewlett Packard Enterprise. In this role, he is responsible for all aspects of the product lifecycle across HPE Software-Defined Infrastructure, which includes the HPE Composable Infrastructure, Hyperconverged, and Cloud Software product lines. McLeod has held positions in engineering and product marketing across multiple HPE business units, managing server, storage, and software product lines.


4 Best Practices to Help Organizations Succeed in a Hybrid Cloud World


ESG Research Insights paper describes behaviors organizations should adopt to improve multi-cloud management.

Hybrid cloud continues to grow in popularity, fueled by its agility and scalability. Yet, many organizations realize that a hybrid cloud model (a combination of private, on-prem, and public cloud) also introduces complexity, which slows innovation. A hybrid model also makes it more difficult to view global utilization or track and control costs.

A recent ESG Research Insights Paper, Multi-cloud Management Maturity, Tangible Reasons Management Excellence Is Required in a Hybrid Computing World, details how organizations are managing heavily hybridized environments. In the paper, ESG surveyed 600 IT decision makers in organizations of at least 1,000 employees to determine a multi-cloud management maturity score.

Those surveyed use public cloud for nearly a quarter of their workloads – and the majority utilize multiple cloud service providers. They also implement on-premises workloads in the following percentages:

  • 37% of on-premises workloads are run on traditional physical servers.
  • 36% are run on VMs that are still predominantly managed as traditional servers.
  • 27% are run within a private cloud that inherits the core attributes of public cloud services.

Survey results – only 15% are “Transformed”

Based on the results, ESG divided the organizations into 4 groups: Unrealized, Modernized, Automated, and Transformed, ranking them from lowest to highest according to their degree of success in a hybrid landscape. The survey found most organizations fall somewhere in the middle of the multi-cloud maturity spectrum:

  • 15% Unrealized
  • 35% Modernized
  • 35% Automated
  • 15% Transformed

These results didn’t surprise me, as successful multi-cloud management involving public cloud and on-premises private cloud is complex – and very few tools that solve the complexity problem are available.

However, I found a couple of results particularly interesting. Respondents could earn a total of 100 maturity points, yet the highest score achieved was only 86. (An organization needed a score of at least 67.5 for ESG to include it in the “Transformed” category.) ESG noted that even the most advanced organizations still have considerable room to improve their cloud management maturity.

Another interesting finding in the report: even incremental improvements paid off. Organizations that moved from one tier to the next realized substantial benefits throughout the enterprise.

Transformed organizations – what’s their secret?

According to ESG’s research, organizations that want to improve their multi-cloud management maturity should implement four best practices. To join the ranks of the Transformed multi-cloud management organizations, enterprises should do the following:

  1. Invest heavily in converged/hyperconverged infrastructure (CI/HCI) for on-premises workloads.

Ninety percent of all Transformed organizations have deployed CI/HCI platforms in their environments to support legacy workloads, while 88% have done so for newly developed applications. Notably, many of these organizations are implementing these newer technologies proactively instead of waiting for legacy infrastructure to fully depreciate.

  2. Actively automate IT operations so staff can focus on other areas.

Transformed organizations report that they have either completely or mostly automated processes such as VM provisioning (86%), application deployment (88%), and performance/problem monitoring (86%). Once IT staff automates these processes, they have more time to focus on other initiatives such as supporting application development or re-architecting legacy applications.

  3. Invest in consolidated hybrid cloud management tools.

No matter where a workload runs (in a public or private cloud), hybrid cloud management tools can manage and monitor cloud costs, as well as provide consistent user experiences. ESG discovered Transformed organizations are twice as likely as Unrealized organizations to consolidate management of public cloud and on-premises resources under one IT team (58% vs. 23%, respectively). Thanks to the simplified and streamlined operations these tools provide, a single management team is sufficient (a toy consolidation sketch follows this list).

  4. Make informed workload placement decisions and optimize workloads before moving them to public cloud infrastructure.

Nearly half of Transformed organizations (48%) fully customize applications prior to migration. Just 3% of Unrealized organizations put the same level of effort into workload preparation prior to migrating to the cloud.
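To make the third practice concrete, here is a toy sketch of what consolidating hybrid resources into one view might look like in code. Everything in it is illustrative: the fetch functions are hypothetical stand-ins for real provider SDK or CMDB calls, and the workloads and costs are invented.

```python
from dataclasses import dataclass

@dataclass
class Workload:
    name: str
    location: str      # e.g., "aws:us-east-1" or "on-prem:rack-4"
    monthly_cost: float
    owner_team: str

def fetch_public_cloud_workloads() -> list[Workload]:
    # Hypothetical stand-in for calls to each provider's inventory/billing APIs
    return [Workload("web-frontend", "aws:us-east-1", 2300.0, "ecommerce")]

def fetch_on_prem_workloads() -> list[Workload]:
    # Hypothetical stand-in for a private cloud or CMDB query
    return [Workload("billing-db", "on-prem:rack-4", 1750.0, "finance")]

def print_unified_view() -> None:
    # One normalized list, whatever the source: the "single view" idea
    workloads = fetch_public_cloud_workloads() + fetch_on_prem_workloads()
    for w in sorted(workloads, key=lambda w: w.monthly_cost, reverse=True):
        print(f"{w.name:15} {w.location:18} ${w.monthly_cost:>9,.2f}  {w.owner_team}")
    print(f"{'TOTAL':15} {'':18} ${sum(w.monthly_cost for w in workloads):>9,.2f}")

print_unified_view()
```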

The bigger truth

Based on ESG’s research, Transformed organizations are the exception, not the rule. And even those who have “transformed” have not reached the pinnacle, which means that they have the opportunity to improve even more. Additionally, ESG’s research shows even incremental improvements result in big rewards for the organization. For those interested in improving their standing against the benchmarks laid out by ESG, it is important to take a look at the 4 best practices above and begin implementing these suggestions.

Read the full report: Multi-cloud Management Maturity, Tangible Reasons Management Excellence Is Required in a Hybrid Computing World. HPE can help organizations simplify their hybrid IT experience with modern technologies and software-defined solutions. Additionally, Cloud Technology Partners (CTP), an HPE company, will work with the IT team to enhance learning, conquer cloud challenges, and accelerate a successful digital transformation.


About the Author

Gary Thome is the Vice President and Chief Technology Officer for the Software-Defined and Cloud Group at Hewlett Packard Enterprise (HPE). He is responsible for the technical and architectural directions of converged datacenter products and technologies. Over his extensive career in the computer industry, Gary has authored 50 patents.

To read more articles from Gary, check out the HPE Shifting to Software-Defined blog.


Hyperconvergence Delivers Unexpected Results for VDI Users


The self-help industry is steadily growing due to a basic human desire for improvement

Consumers can find a plethora of books, podcasts, and seminars, not to mention products and services that promise positive change.

IT organizations that want to experience similar improvements can find them by tackling some less-than-urgent technology makeovers. The results often deliver unexpected and far-reaching benefits. Client virtualization environments stand to benefit more than most, because even small improvements in system performance are multiplied across hundreds of desktops.

Technology pilot delivers unexpected performance results

System slowdowns, boot storm delays, and backups or batch jobs that run late into the evening are standard fare in virtual desktop environments. While the situation is not ideal, organizations will often postpone modernization of an aging virtual desktop infrastructure (VDI) as long as it remains functional. This can be a costly mistake. As one IT administrator discovered, a switch to hyperconverged infrastructure (HCI) can lower costs and deliver improvements to end user experience that ripple through the entire company.

In the outdated VDI environment at Maryland Auto Insurance, end users were accustomed to working around performance issues and system limitations, but the systems manager was not satisfied. When it was time for a server refresh, he took a step back and looked at the whole infrastructure. HCI could provide the required reliability and application compatibility, while consolidating the IT footprint in the datacenter. Intrigued, he set up an HCI pilot.

The new system delivered a much higher than expected return. As he put it, his organization didn’t know how slow their system was until they saw what was possible. “We do everything with virtual desktops around here, and the change was remarkable. Outlook and Microsoft Office loaded instantaneously, as did our underwriting and imaging applications. Multitasking was no longer a chore.” The IT team witnessed a growing level of frustration with the existing system. “People would look over their shoulders and see their co-workers in the pilot doing everything faster, and they wanted in.”

Proven Benefits of HCI for VDI

This might seem like an isolated case, but benchmark tests in the industry show similar performance results. Hyperconvergence is a popular choice for VDI because it is modular, efficient, and cost-effective. By converging multiple IT functions into a single server building block, hyperconvergence makes it easy to deploy, manage, and scale infrastructure that supports virtual desktops.

In a recent report, HPE SimpliVity hyperconverged infrastructure powered by Intel® demonstrated high performance in VDI environments. The Login VSI validated study showed consistent, very low latency performance at scale, and plenty of compute and storage resources available to host up to 1,000 knowledge workers. Even during node failure, HPE SimpliVity provided continuity of service with no impact on the end user experience. This kind of speed and resiliency can have a powerful effect on VDI end users and on business operations.

Maryland Auto Insurance reduced their infrastructure from seven racks to just half a rack with HPE SimpliVity, which helped them cut energy consumption nearly in half. They also took advantage of built-in backup, dedupe, and compression features. But the big surprise came in performance benefits, multiplied many times over in the VDI environment. Workloads across their enterprise now are balanced with just a few clicks. Every end user benefits from reduced time to launch applications. And because backups and batch jobs run two to three times faster, the system manager and his team get their evenings back.

If your data center could use improvement, consider HCI. For more information, check out The Gorilla Guide to Hyperconverged Infrastructure Strategy, which includes a chapter focused on VDI.


About the Author

Thomas Goepel is the Director of Product Management, Hyperconverged Solutions for Hewlett Packard Enterprise. In this role, he is responsible for driving the strategy of HPE’s Hyperconverged products and solutions spanning from the Infrastructure Components to the Management Software. Thomas has over 26 years of experience working in the electronics industry, the last 25 of which at Hewlett Packard Enterprise, where he has held various engineering, marketing and consulting positions in R&D, sales and services. Read more articles by Thomas at the HPE Shifting to Software-Defined blog.


Stumbling with your Public Cloud Deployments? An Industry Analyst Offers Advice


As many organizations rush headlong into public cloud, IT continues to adjust to the complexities these environments create

Cost concerns, security, and a widening skills gap seem to consume today’s agenda, but is there a more basic issue at play here?

According to one industry analyst, the answer is yes. A cultural solution to cloud adoption may hold the key to greater success.

In a recent podcast, Dana Gardner, Principal Analyst at Interarbor Solutions, discusses this topic with Edwin Yuen, Senior Analyst for Cloud Services and Orchestration, Data Protection, and DevOps at Enterprise Strategy Group (ESG)[i]. Several interesting insights from the interview caught my attention.

It’s not the technology slowing you down; it’s the culture

Gardner begins the interview by asking why enterprises are not culturally ready for public cloud adoption. Yuen points to one reason: the role of IT in this new cloud world is not well-defined.

“We see a lot of business end-users and others essentially doing shadow IT – going around IT. That actually increases the friction between IT and the business,” explains Yuen. “It also leads to people going into the public cloud before they are ready, before there’s been a proper evaluation – which can potentially derail things.”

Yuen went on to say lines of business (LOB) or other groups are not working with core IT as they deploy to the public cloud; therefore, they are not getting all of the advantages they can. “You want to maximize the capabilities and minimize the inconvenience and cost. Planning is absolutely critical for that — and it involves core IT,” says Yuen. To ensure the best results possible, you should involve key players in the organization. For example, the organization’s procurement experts should be consulted to ensure you get the best deal for your money.

Budgeting is also important. “Companies very quickly realize that they don’t have variable budgets,” continues Yuen. “They need to think about how they use cloud and the consumption cost for an entire year. You can’t just go about your work and then find that you are out of budget when you get to the second half of the fiscal year.”

The beauty of an as-a-service model is you only pay for what you use. The risk is you have a virtually unlimited capacity to spend money. Remember, while capacity appears unlimited, budgets are not. IT is in the best position to help advise in this area, working with end users and procurement to ensure the organization doesn’t overspend in the cloud.
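As a back-of-the-envelope illustration of that budgeting discipline, the sketch below projects annual spend from the burn rate so far and flags an overrun early. It is a naive linear projection with made-up numbers, not a real cost-management tool.

```python
def projected_annual_spend(spend_to_date: float, months_elapsed: int) -> float:
    # Naive linear projection; real tooling would model seasonality and growth
    return (spend_to_date / months_elapsed) * 12

def check_budget(spend_to_date: float, months_elapsed: int, annual_budget: float) -> None:
    projection = projected_annual_spend(spend_to_date, months_elapsed)
    if projection > annual_budget:
        # Catch the problem before the second half of the fiscal year
        print(f"WARNING: on pace to exceed budget by ${projection - annual_budget:,.0f}")
    else:
        print(f"OK: projected ${projection:,.0f} of ${annual_budget:,.0f} budget")

# Example: $450k spent in 5 months against a $1M annual budget
check_budget(spend_to_date=450_000, months_elapsed=5, annual_budget=1_000_000)
```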

Bridging the cultural divide: a new level of communication

Yuen also brought up the importance of communication within the enterprise. “The traditional roles within an organization have been monolithic. End-users were consumers, central IT was a provider, and finances were handled by acquisitions and the administration. Now, everybody needs to work together, and have a much more holistic plan. There needs to be a new level of communication and more give-and-take.”

The key for improved cloud adoption, says Yuen, is opening the lines of communication, bridging the divides, and gaining new levels of understanding. “This is the digital transformation we are seeing across the board. It’s about IT being more flexible, listening to the needs of the end users, and being willing to be agile in providing services. In exchange, the end users come to IT first.”

Before public cloud, users of IT didn’t have to worry about cost or security issues, because IT handled it all for them. When an organization switches to the cloud without IT involvement, they often don’t discover everything IT was doing for them until things go wrong. Conversely, when supporting cloud environments, IT needs to make it fast and easy for users to deploy applications, while also putting guardrails in place. Successfully deploying cloud means working with a full stack team of experts all across the organization before jumping into a cloud operating model.

An inverse mindset

Yuen also brings up something he calls an inverse mindset. Traditionally, organizations maintained and optimized specific infrastructure to impact an application in a positive way. “Now, we are managing applications to deliver the proper experience, and we don’t care where the systems are. That infrastructure could be in the public cloud, across multiple providers; it could be in a private cloud, or a traditional backend and large mainframe system.” They just have to be configured correctly to provide the best return and performance the business requires.

As organizations embrace this inverse mindset, Yuen says it will be critical to monitor everything across all the different environments effectively with tools that automate and orchestrate. Additionally, organizations need machine learning (ML) or artificial intelligence (AI). “Once we train the models, they can be self-learning, self-healing, and self-operating. That’s going to relieve a lot of work.”

Having the right tools, such as HPE advisory services, can help you identify the best place to run applications. In addition, HPE OneSphere, an as-a-service multi-cloud management platform, gives organizations more control over the complexity of hybrid clouds.

Let HPE help you simplify your hybrid cloud experience with modern technologies and software-defined solutions such as composable infrastructure, hyperconvergence, infrastructure management, and multi-cloud management. Cloud Technology Partners (CTP), an HPE company, will work with your IT team to enhance learning, conquer cloud challenges, and accelerate your successful digital transformation. To listen to the full podcast, click here.

[i] Podcast recorded on Nov. 15, 2018. Yuen has since become Principal Product Marketing Manager at Amazon Web Services.


About Gary Thome

Gary Thome is the Vice President and Chief Technology Officer for the Software-Defined and Cloud Group at Hewlett Packard Enterprise (HPE). He is responsible for the technical and architectural directions of converged datacenter products and technologies. Over his extensive career in the computer industry, Gary has authored 50 patents.

To read more articles from Gary, check out the HPE Shifting to Software-Defined blog.


Network Innovation or Iteration? A Matter of Perspective


As a member of the tech sector for quite some time, I have seen my fair share of marketing trends come and go

Typically, a new term is introduced and quickly becomes the “hot new thing,” promising not only to deliver end users from the challenges of their current environment, but also to give the solution provider the opportunity to make a lot of money. These new market entrants are always billed as innovative cure-alls for the data center’s biggest challenges, certain to enable the business to be more agile and productive. Then reality kicks in, and the promises don’t quite live up to expectations.

So how do we, as consumers of technology and innovation, break the cycle of undelivered promises and ensure that we are investing in true game-changing solutions?

Recently, Gartner published the report, Look Beyond Network Vendors for Network Innovation. The report discusses how innovation isn’t coming from the traditional network manufacturers, and notes that true innovation is created and driven into the market by end users, specifically the web scale (hyperscale) companies. Gartner observes that traditional network providers tend to over-hype minor feature enhancements and iterate on solutions already in the market — instead of taking a fresh look at the situation.

Gartner’s position shouldn’t come as a surprise; books such as The Innovator’s Dilemma discuss this premise at length. Larger companies typically struggle with true innovation because they approach it through the lens of their existing product set and the need to protect existing revenue streams. This misguided strategy leads to slower, iterative approaches to moving technology forward, as opposed to employing truly disruptive technologies immediately. Big companies are also slow to react to emerging markets because they chase larger market-share opportunities. That is why big companies typically don’t innovate; they iterate.

Don’t get me wrong; iteration is not all bad. But there is a difference between an iteration that introduces minor improvements touted as solving major challenges and an innovation that presents a new perspective and means of solving the problem on a broader scale. While the compute and storage side of the data center has seen true innovation introduced in the last decade, I contend that the network has only seen iteration. This lag is now showing itself and causing issues for today’s enterprises dealing with big data, hyperconverged systems, and cloud native apps.

James Hamilton, celebrated VP and Distinguished Engineer at Amazon, railed against the network, saying the data center network is in his way. Web scale providers, like Amazon, realized early on they could not continue building networks the traditional way. It would be too expensive, too complex, and too difficult to manage. So, they set out to build a better network on their own without the traditional network providers who were slow to meet their needs. The basic principles of the design would be one system, built on commodity switches, 100% software-defined and with deep API integration, allowing any workload to be deployed anywhere, at any time.

Enterprises have taken notice of what the web scale providers, like Amazon, are achieving, and want to duplicate those strategies. The problem is, most companies do not have teams of developers to build custom network infrastructures, nor the resources to support them. In addition, the network traditionally is not included as a key part of the core business plan. Rather, the network is just one of many tools in IT’s toolbox, often deployed ‘out of the box’ and relied upon to perform and support the demands of the business.

So, while they want the same network agility and manageability the web scale companies enjoy, enterprises struggle to achieve agility and performance based on the available iterations of technology presented to them by known vendors. In addition, network innovation presents an exceptional challenge due to the silos created around network roles and the need for IT staff to manage the network. Because of this isolation, it is easier to pass through iterative solutions as new and continue the cycle of inefficiency.

The cycle of iteration over innovation stops when technology providers stand alongside customers, understand their immediate and future needs, and bring the dedication and creativity needed to provide matching solutions.

Taking this to heart, Hewlett Packard Enterprise (HPE) worked with customers to understand their challenges in today’s world. As a result, they changed their perspective. The network is not a bunch of pipes – it is an energized mesh that is responsive to fluctuating needs, scalable to support growth, and simple to manage.

HPE Composable Fabric is the latest solution purpose-built to broaden the perspective of your network and deliver innovative results without iteration. Take a look at your network: is it truly up to the task or is it ‘just fine’? What could some innovation do for your business? To learn more about the HPE Composable Fabric, visit HPE.com.


About Thomas Goepel


Thomas Goepel is the Director Product Management, Hyperconverged Solutions for Hewlett Packard Enterprise. In this role, he is responsible for driving the strategy of HPE’s Hyperconverged products and solutions spanning from the Infrastructure Components to the Management Software. Thomas has over 26 years of experience working in the electronics industry, the last 25 of which at Hewlett Packard Enterprise, where he has held various engineering, marketing and consulting positions in R&D, sales and services. To read more articles by Thomas, please visit the HPE Shifting to Software-Defined blog.


A Software-Defined Future Separates the Brains from the Brawn


Instead of purchasing physical items, consumers today increasingly lease, stream, and rent products and services that support and bring value to their lives

Why? Because it helps serve their needs today, while allowing for a change in preference tomorrow.

This need for flexibility spills over into the world of business IT. It’s challenging to plan for the unknown when rigid hardware architectures limit the ability to react and adjust to changing business conditions. That’s why revolutionary enterprise solutions like edge computing, cloud, virtualization, and containers are starting to be powered by software-defined technologies. IT organizations are shying away from hardware commitments and looking for innovations that give them the flexibility to deliver products and services today, while simultaneously adjusting and planning for tomorrow.

Smartening up with software

The move toward software-defined infrastructure is about abstracting the control plane from the underlying hardware. When the brains of the system are separated from the brawn, the options for underlying hardware increase. The hardware also becomes less expensive and more interchangeable, while the overarching software becomes more capable and faster, evolving as it adapts to the changing environment.

Enterprises that build on-premises clouds with software-defined infrastructures have three basic requirements:

  • Friction-free agility of physical resources
  • Control systems that maximize physical resource use and provide maximum return on investment
  • An integrated infrastructure for automated provisioning and resource management

While all of these requirements exist across the spectrum of the enterprise infrastructure — compute, storage, and network — the network plays a foundational role because it acts as the glue between compute and storage. That means the agility, control, and integrality of the network (or lack thereof) directly impacts a company’s ability to deliver optimized applications. If the network is continually getting in the way, the business loses its agility.

The case for software-defined networking

According to advisory firm Nemertes Research, more than 30% of organizations have software-defined technologies in place today. The main driver behind that trend is the need to automate and reduce the amount of time required to run the infrastructure. Examples of these cloud-driven IT consumption models include IT-as-a-Service in public cloud and private cloud scenarios.

Software-defined networking can best be described as giving enterprise IT the ability to manage data traffic from a central console instead of configuring each individual switch manually. In a traditional network, each switch forwards data according to its own fixed, locally configured rules; with software-defined networking, a central controller can program traffic to move in any direction across the fabric. That flexibility allows IT to update switch rules to optimize performance and adjust to new priorities and requirements.
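To picture the difference, here is a conceptual sketch of the central-console idea: one API call hands a policy to a controller, which compiles it into per-switch rules fabric-wide. The controller URL, endpoint, and payload shape are hypothetical; real controllers (OpenDaylight, ONOS, vendor platforms) each have their own APIs.

```python
import requests

CONTROLLER = "https://sdn-controller.example.com/api/v1"  # hypothetical URL

def prioritize_traffic(app_name: str, dscp: int) -> None:
    # One declarative policy; the controller translates it into per-switch rules
    policy = {
        "match": {"application": app_name},
        "action": {"set_dscp": dscp, "path": "lowest-latency"},
    }
    resp = requests.post(f"{CONTROLLER}/policies", json=policy, timeout=10)
    resp.raise_for_status()
    print(f"Policy for {app_name} accepted by the controller")

# Reprioritize a latency-sensitive workload across the whole fabric at once
# (DSCP 46 is the standard Expedited Forwarding marking used for voice)
prioritize_traffic("voip", dscp=46)
```

Contrast that single call with logging in to dozens of switches one at a time, which is the manual model it replaces.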

By using software-defined networking technologies, enterprise IT can solve traditional networking challenges such as latency, performance bottlenecks, and geographical boundaries—resulting in a more responsive network strategy. In addition, they can adjust for future-looking requirements such as scalability and automation.

As companies continue to progress toward an improved digitized state, they seek better business agility by being smarter about compute, storage, and networking choices. The move from traditional “brawny” data centers to software-defined technologies is a critical way to save time and resources, giving businesses the flexibility to update services based on tomorrow’s preferences.

Learn more about how Hewlett Packard Enterprise (HPE) is delivering a software-defined networking fabric purpose-built for workload-driven performance and scale on the HPE Composable Fabric website.


About the Author

McLeod Glass is Vice President and General Manager of HPE SimpliVity & Composable at Hewlett Packard Enterprise. In this role, he is responsible for all aspects of the product lifecycle across HPE Software-Defined Infrastructure, which includes the HPE Composable Infrastructure, Hyperconverged, and Cloud Software product lines. McLeod has held positions in engineering and product marketing across multiple HPE business units, managing server, storage, and software product lines.

To read more articles by McLeod, visit the HPE Shifting to Software-Defined blog.


Is Your Network Aware of Your Infrastructure? It Should Be.


When I was a varsity baseball pitcher in high school, I couldn’t imagine pitching a game without being fully aware of everyone and everything on the field—the score, inning, strike count, outs, and runners at any given moment

Awareness is crucial, because when I threw the pitch, my readiness to react could significantly affect the outcome of the game.

Awareness, by definition, is about understanding one’s environment, dynamics, variables, and current and potential future state. It’s safe to say that a good understanding of what’s going on around you is critical to success in business, as it is in most endeavors.

Situational awareness meets hyperconverged infrastructure

In the data center, the need for situational awareness is critical. Not only is it vital for IT to know what is going on in all aspects of the data center, it is important that they be able to anticipate potential events and know how they will react.

This is especially true for the network. Today’s applications and workloads place enormous demands on IT infrastructure and it is critical to the success of the business that there is real-time visibility, awareness, and management of network performance. The only way the business can achieve this level of situational awareness is with a software-defined network.

Hyperconverged infrastructure (HCI) is a perfect example of technology that could benefit from intelligent software-defined networking. Enterprises are introducing HCI to accelerate the pace of innovation, bring costs down, and streamline operations. Consolidated compute and storage resources go a long way to achieving those goals, but legacy network technology can impact performance. For example, with a traditional networking architecture, setting up the infrastructure to support new applications can take days or even weeks.

Intelligent, workload-aware networks can handle the on-demand services, dynamic workloads, and diverse traffic flows of the contemporary data center. By treating the network as an integral component of a hyperconverged system—with unified administration and automated provisioning—IT organizations can bring cloud agility, scalability, and simplicity to the enterprise data center. This means that the fabric is not just a collection of pipes for transporting data. Instead, it is an intelligent network that is aware of the infrastructure and able to recognize and respond to specific events. This level of awareness includes understanding the criticality of certain workloads, and in turn identifying and isolating that workload to guarantee performance.

HPE Composable Fabric delivers network awareness through tight integrations at the application programming interface (API) level. Because Composable Fabric is under software control, workflow logic can automatically discover information about HCI clusters, VMs, and cluster nodes (i.e., awareness), and then automatically provision resources.
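To illustrate that awareness-then-provision loop, here is a deliberately hypothetical sketch. Composable Fabric does expose a REST API, but the client class and methods below are invented stand-ins for it, not its documented interface.

```python
class FabricClient:
    """Hypothetical stand-in for a fabric manager API client."""

    def discover_vms(self, cluster: str) -> list[dict]:
        # Would ask the fabric manager which VMs it sees on cluster ports
        return [{"name": "sql-prod-01", "vlan": 120, "port": "sw1:eth12"}]

    def provision_vlan(self, vlan: int, port: str) -> None:
        # Would push the VLAN onto the physical port where the VM appeared
        print(f"VLAN {vlan} provisioned on {port}")

def sync_network_to_workloads(fabric: FabricClient, cluster: str) -> None:
    # Awareness: learn what is attached; action: configure the fabric to match
    for vm in fabric.discover_vms(cluster):
        fabric.provision_vlan(vm["vlan"], vm["port"])

sync_network_to_workloads(FabricClient(), cluster="hci-cluster-1")
```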

Your network, like a baseball team, will perform more effectively if it knows what to do and exactly when to do it without waiting for manual intervention. HPE Composable Fabric delivers awareness of your underlying infrastructure and is the key to automation, simplicity, and reduced costs in the data center.

To learn more, download the free e-book, Hyperconverged for Dummies.


About Thomas Goepel

Thomas Goepel is the Director of Product Management, Hyperconverged Solutions for Hewlett Packard Enterprise. In this role, he is responsible for driving the strategy of HPE’s Hyperconverged products and solutions spanning from the Infrastructure Components to the Management Software. Thomas has over 26 years of experience working in the electronics industry, the last 25 of which at Hewlett Packard Enterprise, where he has held various engineering, marketing and consulting positions in R&D, sales and services.

To read more articles by Thomas Goepel, visit the HPE Shifting to Software-Defined blog


Which comes first: Multi-cloud management maturity or IT excellence?


You’ve heard the popular causality problem, “Which came first: the chicken or the egg?”

The dilemma comes from the fact that all chickens hatch from eggs, while all chicken eggs are laid by chickens.

After reading the recent ESG Research Insights Paper, Multi-cloud Management Maturity, Tangible Reasons Management Excellence Is Required in a Hybrid Computing World, I recalled the chicken and the egg conundrum. The survey made me wonder, which comes first–multi-cloud management maturity or IT excellence?

To answer that question, I took a closer look at ESG’s findings. They surveyed 600 IT decision makers (from enterprises with at least 1,000 employees) to determine their multi-cloud management maturity score. In other words, where did each organization fall in terms of how comprehensively they have implemented multi-cloud management? And once that was determined, what correlation did each organization’s maturity score have on their IT excellence?

Hybrid IT is real – and complexity is growing

The first statistic ESG shared was that hybrid IT is real, and it’s a growing problem for many organizations. Combining public cloud services with on-premises IT provides the flexibility and scalability many enterprises need to compete more effectively. Yet hybrid IT also introduces complexity, which can slow innovation and hinder management of global usage and costs.

According to the survey, the respondents use public cloud for nearly a quarter of their workloads – and the majority utilize multiple cloud service providers. They also retain on-premises workloads, with 37% running on traditional physical servers, 36% on VMs managed as traditional servers, and 27% on a private cloud. As you can imagine, running workloads in so many different places can easily create complexity, actually inhibiting business success instead of enabling it.

Do you have what it takes to be transformed?

To determine maturity status, ESG asked questions about IT business processes and outcomes, segmenting the respondents into four tiers of multi-cloud management maturity. The intent of the research was to identify organizations ESG considered Transformed – those that enjoy a high degree of success in a hybrid environment. ESG also wanted to provide insights and actionable recommendations organizations can use to try to achieve similar results.

According to this report, 15% of those surveyed have tamed hybrid IT complexity, achieving a maturity level that ESG calls Transformed cloud managers. They also found that 35% reached the second highest level, Automated. Another 35% achieved a Modernized designation, and rounding out the lowest level were the Unrealized organizations at 15%.

Benefits of multi-cloud management maturity

The report went on to explain the importance of multi-cloud management maturity for the enterprise: “Having the tools, processes, and technologies to effectively navigate this varied landscape should yield many benefits for the organization.”

ESG details four key benefits mature multi-cloud management provides to a transformed organization:

  • Improves IT standing with executive leaders

One of the biggest benefits of being a Transformed organization is that business leaders view the IT function with a high level of esteem. IT’s standing improves because business executives believe the superior IT agility gained in the transformation provides a competitive advantage and has a positive impact on the company’s financial success.

  • Enables modern app development

Organizations that operate better in a multi-cloud world are also better able to support a modern development organization. Consequently, improved multi-cloud management correlates positively with better development outcomes.

  • Optimizes on-premises infrastructure operations

One of the clouds that organizations must manage is their on-premises private cloud. Not surprisingly, organizations that score higher in terms of multi-cloud management also operate their on-premises infrastructure more effectively and efficiently.

  • Enables more effective public cloud resourcing

Much like the previous benefit, it makes sense that Transformed cloud managers and organizations also surpass their peers in terms of public cloud utilization. More visibility and control gives IT the data it needs to make smarter cloud choices.

Multi-cloud management maturity – is it a cause or effect?

As I read the key benefits of a transformed multi-cloud management organization, I come back to my original question. Is multi-cloud management maturity a cause or an effect of IT excellence? As in the chicken and the egg question, which comes first?

According to ESG, “While correlation does not equate to causation, ESG believes that several dimensions of its multi-cloud management maturity model directly impact an IT organization’s infrastructure management capabilities.” So although multi-cloud management doesn’t necessarily cause IT excellence, it certainly does impact it – in a very positive way.

Organizations need to acknowledge the reality of a hybrid IT, multi-cloud environment – and work toward achieving a transformed maturity model along with IT excellence. And the good news is that it looks like the two complement each other quite nicely.

You can read the full report here: Multi-cloud Management Maturity, Tangible Reasons Management Excellence Is Required in a Hybrid Computing World. Let HPE help you simplify your hybrid cloud experience with modern technologies and software-defined solutions. Additionally, Cloud Technology Partners (CTP), an HPE company, will work with your IT team to enhance learning, conquer cloud challenges, and accelerate your successful digital transformation.


About Gary Thome

Gary Thome is the Vice President and Chief Technologist for the Software-Defined and Cloud Group at Hewlett Packard Enterprise (HPE). He is responsible for the technical and architectural directions of converged datacenter products and technologies.

To read more articles from Gary, check out the HPE Shifting to Software-Defined blog.


The Network Knows. Let It Help You.


The 1927 film Metropolis was most likely the first movie ever created about a robot taking over the world

This German expressionist film was not sophisticated (in fact, science fiction writer H.G. Wells said the movie was downright “silly”) but it did have some merit. If nothing else, it started an entire genre of films exploring the freakish possibilities of artificial intelligence.

Fast forward ninety-plus years, and we’re living in a world where intelligent systems are more than a sci-fi fantasy – they are becoming a necessity. Take hyperconvergence, for example. In the technology trifecta of hyperconverged compute, storage, and networking, intelligence across the IT environment represents a critical next phase of networking. That line of discussion, as well as my affinity for AI movies, got me thinking about the accelerated pace of change in automation technology and how the role of networking has changed, particularly with regard to network awareness.

Network awareness is growing exponentially around us. From the Internet of Things, to Machine Learning, to Automated Configuration and Remediation, networks are becoming more essential than ever before in providing critical services to keep IT operations alive.

Today’s modern applications and hyperconverged infrastructure solutions have fundamentally changed the role of the network – from pipes that connect clients and servers to a full-fledged, bi-directional blanket of data communications that keeps distributed components and their associated data in sync (hence the term “fabric”). With all the data flowing in and around the network, I suppose it might have been inevitable that networks would learn something. When a network understands the data, it develops an awareness of the applications and infrastructure – an awareness that enables an intelligent network to optimize and automate the infrastructure.

Robots need to be aware of their surroundings to perform tasks effectively and efficiently. An intelligent networking fabric, such as HPE Composable Fabric, understands where hosts, virtual machines, and storage servers are attached and how they communicate. The network knows which physical and virtual NICs are plugged into which virtual and physical switches, and which VLANs are used by which virtual machines on which ports. HPE Composable Fabric monitors the infrastructure and adapts the network as these items evolve over time.

What a composable networking fabric can do for you

In a workload-aware environment, networking “sensors” create and maintain a model of your operational environment, adjusting the internal network fabric bandwidth and network path isolation as needed. You can think of it as a network concierge for applications and infrastructure – helping them moment-to-moment. The network knows what they need and is there to assist, across dozens of switches, hundreds of hosts, and thousands of virtual machines.

The result? Infrastructure operators achieve a simplified network management and optimization experience. DevOps engineers can leverage the power of this awareness through APIs to integrate and automate network operations into existing and new workflows, removing even more friction and further streamlining hyperconverged and enterprise cloud environments. Think of it like HPE Composable Fabric empowering your network with a bunch of helpful robots that can automatically re-cable and reconfigure your switches to match the ever-changing needs of your application infrastructure.
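One way to picture that API-driven integration is as an event-driven workflow: an infrastructure event (say, a new VM powering on) triggers a network change automatically. In the sketch below, the tiny event bus and the event names are invented for illustration; a real integration would hook into the fabric’s actual APIs or webhooks.

```python
from typing import Callable

handlers: dict = {}

def on(event: str):
    # Tiny decorator-based event bus standing in for real webhooks
    def register(fn: Callable):
        handlers.setdefault(event, []).append(fn)
        return fn
    return register

@on("vm.created")
def extend_vlan(event: dict) -> None:
    # The "helpful robot": reconfigure the fabric when a VM appears
    print(f"Extending VLAN {event['vlan']} to {event['port']} for {event['vm']}")

def emit(event: str, payload: dict) -> None:
    for fn in handlers.get(event, []):
        fn(payload)

# A hypervisor hook would emit something like this when a VM powers on
emit("vm.created", {"vm": "ci-runner-7", "vlan": 240, "port": "sw3:eth5"})
```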

All of this is possible because the intelligent network knows… let it help you.

To learn more about software-defined networking in hyperconverged environments, download the free e-book, Hyperconverged for Dummies. Or, check out HPE SimpliVity with Composable Fabric for hyperconvergence.


About Thomas Goepel

Thomas Goepel is the Director of Product Management, Hyperconverged Solutions for Hewlett Packard Enterprise. In this role, he is responsible for driving the strategy of HPE’s Hyperconverged products and solutions spanning from the Infrastructure Components to the Management Software. Thomas has over 26 years of experience working in the electronics industry, the last 25 of which at Hewlett Packard Enterprise, where he has held various engineering, marketing and consulting positions in R&D, sales and services. To read more from Thomas Goepel, please visit the HPE Shifting to Software-Defined blog.


Resolve to Add Cloud Deployment Training to Your List of 2019 Goals


As companies struggle to deploy and manage workloads across their hybrid cloud environments, employees need better training

It’s the start of 2019, and business leaders are contemplating what new technologies will give them a brighter, more successful New Year. Of course, the typical hot topics come to mind: compliance, security, artificial intelligence, machine learning, and edge computing. Yet one important item should also be on your list of IT goals for the New Year: better training to address today’s widening digital skills gap in enterprise cloud deployment.

As I listened to a recent BriefingsDirect podcast, the idea of cloud training dominated the interview. According to Robert Christiansen, vice president at Cloud Technology Partners (CTP), a Hewlett Packard Enterprise (HPE) company, businesses need to concentrate more on their people — instead of their technology — to speed cloud deployment.

Problem: lack of speedy progress on the cloud journey

“Enterprises are on a cloud journey,” explains Christiansen. “They have begun their investment, they recognize agility is a mandate for them, and they want to get those teams rolling. What we are seeing is a lack of progress with regard to the speed and momentum of the adoption of applications running in the public clouds. It’s going a little slower than they’d like.”

For example, many businesses are seeing their public cloud bills increase, yet operating expenses are not falling. IT teams are struggling to move applications from their traditional IT systems to public clouds, which means they are not meeting key performance indicators. These challenges result in lackluster business outcomes, causing many executives to investigate where they can make further refinements.

It’s a people problem

A large part of the problem has to do with outdated behaviors, says Christiansen. “The technology is ubiquitous, meaning everybody in the marketplace now can own pretty much the same technology. So what’s your competitive advantage? The true competitive advantage now for any company is the people and how they consume and use the technology to solve a problem.”

Christiansen believes the central IT team needs more skills. In the past, central IT’s job included providing and monitoring on-site infrastructure, which included implementing certain safeguards. Now, public cloud puts these types of automation controls in the software – and IT doesn’t necessarily have the skill sets needed to manage clouds in the same way they manage physical infrastructure.

Compounding the problem is the concern that if they move to the public cloud, IT will automate themselves out of a job. “That’s the big, 800-pound gorilla sitting in the corner no one wants to talk about,” remarks Christiansen. “How do you deal with that?”

The reality is many classic IT roles will go away with a public cloud implementation, which means the traditional IT folks need to reinvent themselves and transition into new roles. Of course, this type of job transition takes training.

Working in a complex hybrid cloud world

Another challenge is certain legacy applications won’t be moving to the public cloud at all, which creates more training challenges for IT. These database-centric applications are centers of gravity that the business runs on. “Moving them and doing a big lift over to a public cloud platform may not make financial sense. There is no real benefit to make that happen. We are going to be living between an on-premises and a public cloud environment for quite some time,” Christiansen continues.

To navigate this hybrid cloud world, businesses need to create a holistic view and determine how to govern everything under one strategy. IT must establish a governance framework and put automation in place to pull these two worlds together seamlessly. To network between the two environments, IT needs public cloud training and hybrid cloud management tools.

Don’t forget to train business units

Once a business solves the central IT training issues, the next step is proper training for the organization’s business units. “We have found much of an organization’s delay in rolling out public cloud is because the people who are consuming it [the business units] are not ready or knowledgeable enough to maximize this investment.”

Christiansen relayed the story of how CTP recently helped a telecommunications company roll out their common core services. “The central IT team built out about 12 common core services, and they knew almost immediately the rest of the organization was not ready to consume them.”

This meant more than 5,000 people had to be upskilled on how to consume the new cloud services. Without training, the company could easily end up with a cloud bill that’s out of whack, or with developers writing code without the guardrails needed to keep their data secure. To solve this problem, CTP helped the organization build a training program to bring those employees up to speed.

Put this on your wish list: better training for cloud deployment

Christiansen believes every global Fortune 2,000 company should implement some sort of cloud deployment training. “We have a massive training, upskilling, and enablement process that has to happen over the next several years,” he concludes.

What does this mean for the typical business? If better cloud deployment training is not on your list for 2019, maybe you should think about adding it.

To listen to Christiansen’s complete BriefingsDirect podcast, click here. To learn more about managing your multi-cloud environment, check out this link. For more information on a smooth transition to multi-cloud, visit the CTP website.


About Chris Purcell


Chris Purcell drives analyst relations for the Software-Defined and Cloud Group at Hewlett Packard Enterprise. The Software-Defined and Cloud Group organization is responsible for marketing for composable infrastructure, HPE OneView, HPE SimpliVity hyperconverged solutions and Project New Hybrid IT Stack. To read more from Chris Purcell, please visit the HPE Shifting to Software-Defined blog

 
