
Admitting you have a problem with your cloud transformation


Too often your cloud transformation is all about the technology. Don’t forget about the people – your biggest asset.

It’s frequently said that the first step in solving a problem is recognizing that you have one. In the midst of a high-stakes cloud transformation, all too often a business can’t clearly see the problem in front of it.

Here’s a typical scenario: You’re six months into your public cloud transformation; everything is slowing down, and you’re not meeting your key performance indicators (KPIs). The warning signs could include:

  • The number of workloads moved from the data center to the cloud is too low.
  • Automation has not been implemented, and your headcount is too high.
  • Your cloud bill is growing without a corresponding decrease in operating costs.
  • Employees are struggling to understand how they can meet the KPIs.
  • DevOps practices are not being implemented.

What’s wrong? You haven’t equipped your employees with the tools they need to be successful in your digital transformation.

As VP of Global Cloud Delivery at Cloud Technology Partners (CTP), a Hewlett Packard Enterprise (HPE) company, I’ve seen these struggles first-hand. Businesses all over the world are embarking on a cloud transformation journey, but they’re finding that it is challenging. And the problem stems from one area – people.

First things first: Deal with the permafrost layer of management.

The key decision makers in your digital transformation are your cloud program owners and sponsors. These employees typically make up what I call the permafrost layer of leadership within an IT organization. In many companies, these are the people who have been around for a while; they can appear to be frozen in place because they’ve been doing things the same way for years.

Cloud has redefined the IT industry and job market — again. Yet this particular displacement is unlike any in history. With the new outsourcing model comes a completely different method of IT consumption that is counter to existing roles and responsibilities. The permafrost must be thawed; I’ve outlined a few thoughts below.

Realize the core obstacle to success – your people.

The bottom line is that your people are not ready to do this work. Let’s examine why.

The fatal flaw in most digital transformation journeys is legacy thinking by many of those I mentioned earlier — the permafrost people. Typically these people are scared of change, because they’re afraid of losing their jobs. Others think that they are too close to retirement to change. They’re thinking, “Reinvent myself 5 years before retirement? No way! I’ll wait for this whole cloud thing to blow over.”

To be successful in your cloud transformation, you need to change from a legacy mindset to a new way of doing things – and that takes education. Employees need to know how to organize, deploy, and use the cloud.

So what do permafrost people typically do to solve the problem? Instead of trying to fix it, they point to the cloud itself as the problem. I’m even hearing about enterprises moving their apps back from the cloud to on-premises. This type of confusion occurs when the cloud team doesn’t clearly understand what they are doing. But with the right training, you can better determine which workloads to keep on premises and which to run in the cloud.

Real life story: Thawing the permafrost

Let me give you a real life example. A cloud team in the financial services industry launched cloud services to their clients – the business units. The managers in the business units looked at the new, slick cloud services and said, “Thanks, we’ll take it from here.” Unfortunately, the managers weren’t given basic training in cloud deployment, such as migration strategies, financial controls, and resource scheduling. Six months later, the bill for the cloud program was way out of hand! AND… the business units had yet to deploy a single production app!

The problem was that the business units were consuming the services with no knowledge of how to use them. Much as with a mobile phone plan, the consumer must know how to operate the device or risk costly bills, security problems, and a poor experience.

To solve the problem, the cloud team established a training program for the business units that covered Cloud 101, governance, financial best practices, and basic security. Once the business units better understood the services, they regained control of their bills, and the release rate of production applications accelerated.

All is not lost

Businesses all over the world are struggling with the same cloud deployment issues. Although the problem seems complex, the fix is simple. Businesses need to concentrate on people, their most valuable assets. Proper training is the key to a successful cloud transformation, followed closely by automation software that simplifies management of your multi-clouds.

This article is the first in a series on how to train your employees for a successful cloud transformation. In the next articles, I’ll discuss how to improve your cloud deployment, bring people together in a new DevOps team, empower employees, and implement long-term cloud success.

For more information on a smooth transition to multi-cloud, visit the CTP website. To learn more about how to ease your digital transformation, click here. To find out more about simplifying and automating your cloud with multi-cloud management software, go to HPE OneSphere.


About the Author

Robert Christiansen is a cloud technology leader, best-selling author, mentor, and speaker. In his role as VP of Global Cloud Delivery at Cloud Technology Partners (CTP), a Hewlett Packard Enterprise company, Christiansen oversees the delivery of professional services and innovation for HPE’s public cloud business. His client base includes Fortune 500 and Global 2000 customers, and his team’s leadership model encompasses the entire IT transformation journey, from inception to execution. To read more articles by Robert, visit the Shifting to Software-Defined blog.

 

Marriage of Inconvenience? DevOps & ITSM


Time and time again I’ve heard the phrase, “DevOps doesn’t understand ITIL; they are so different and don’t really have anything in common.”

Before I can delve into all the ramifications of this statement, let me start out by defining the three terms: DevOps, ITSM, and ITIL.

  • DevOps is an enterprise software development phrase that is used to define an agile relationship between development and IT operations, which encourages better communication, collaboration, and continuous delivery.
  • IT Service Management (ITSM) is all about how an organization plans, designs, implements, operates, and governs IT services for its customers.
  • IT Infrastructure Library (ITIL) is a set of best practices for ITSM in order to maximize efficiency, effectiveness, and cost optimization.

Opposites attract: why DevOps and ITIL work well together

As a mature CIO with legacy ITSM experience, when I first came across DevOps, I was attracted to it precisely because it was different. Sadly, these differences initially confused and annoyed me. In fact, I began to think that my approach to ITSM was generally right and that DevOps was wrong. Therefore, I did what any good CIO would do: I tried to “fix” DevOps based on my own preconceptions of IT processes, in order to make DevOps more like ITSM. Not surprisingly, this failed dramatically.

What I forgot was that there’s a reason opposites attract: it’s good for both sides. Take marriage for example. If you married someone just like you, then you wouldn’t have to grow, get out of your comfort zone, or have to enter into someone else’s world. The same applies to the marriage of DevOps and ITSM — the differences can add richness, depth, and texture — if you embrace the union.

DevOps versus ITSM Basics

If you put the basics of each concept side by side, a stark contrast in key tenets appears within each approach. At first glance, the two look totally disjointed and conflicting, especially given the ever-faster rate of innovation and new product delivery that our Web-based world demands.

ITIL                    DevOps
Planned                 Iterative
Process-based           Incremental
Procedure-based         Collaborative
Documented              Experimental
Waterfall/Sequenced     Lean/Agile

I would argue that ITIL as a framework for service management focuses on delivering fit-for-use products and services. Likewise, DevOps is a philosophy/culture that promotes collaborative and agile processes — also focused on delivering fit-for-use products and services. By merging these together, an organization can produce an effective hybrid DevOps/ITIL environment.

DevOps addresses the inherent inefficiencies that ITIL has propagated over time as technology and service management have grown more complex. These include communicating in silos, not focusing on the customer, and a lack of collaboration across the business. And ITIL will play an important role in DevOps by providing rigor, audit, governance, and credibility in its final delivery.

Actually, DevOps and ITSM are the perfect union

By leveraging both methodologies via a marriage of convenience, an organization can create lasting value through collaboration and continuous improvement. Consequently, I strongly believe that ITIL and DevOps are compatible.

ITSM is a crucial part of building and maintaining a platform for sound DevOps practices based on people, processes, and technology. The language and terminology used may be different but the outcomes are the same – delivering value to the business where it needs it most.

Remember, differences are often the biggest asset when combining two different people or groups. If you learn how to merge them successfully, your IT teams will have the ability to embrace the best of both concepts.

Aligning these practices to accommodate true service management for a hybrid cloud/IT environment is the goal of HPE’s World Wide Strategic Transformation, Governance & Operations Center of Excellence (CoE). The CoE runs customer centric transformational workshops to develop strategic roadmaps that provide customers with a cohesive view of DevOps and ITSM.


About the author

Mario Devargas is a CIO Advisor for HPE, consulting with organizations in the adoption of collaborative working processes — not just in IT, but across the entire enterprise. With over 30 years at an executive level, he is a passionate and visionary CIO with an extensive record of achievement across the private and public sectors within the corporate and commercial markets, banking, manufacturing and most recently public sector.

To read more articles by Devargas, visit HPE’s Transforming IT blogsite. To read more about Digital Transformation, visit HPE’s Shifting to Software-Defined blogsite.

Attack of the Cloud Killers: Fending Off Threats to Cloud Ops Success


For many companies, the cloud model is no longer an option – it’s a mandate

Yet, the transition to public cloud can be challenging. You’re up against all sorts of governance and insight issues, along with numerous change management demands. These potential cloud killers are not so much technology challenges as organizational and cultural ones, and they all stem from the fact that cloud operations (Cloud Ops) is a wholly different kind of IT world.

Below I list three principles that I’ve found useful while working with companies on the front lines of cloud rollouts:

  1. Understand the need for a completely different governance model.

What I have in mind here is not your classic governance, risk, and compliance (GRC) processes – though those are crucial for cloud success too. Cloud Ops governance is all about one question: how do I maximize value? It’s about having deep visibility into the financial health of your cloud assets.

Finance controls are the number one thing that needs to be done differently from the beginning. As my colleague John Treadway pointed out in a recent post (3 Ways HPE GreenLake Hybrid Cloud Drives Hybrid IT Success), “if you’re not paying attention to what you’re using in a public cloud, you can easily end up overpaying.”

What’s more, the acceleration of consumption is much, much faster with cloud than with the classic model – with all its POs, contracts, legal involvement, and so on. Within a very short time, you can end up with uncontrolled, unmonitored usage and zero visibility. You’ve got spend that you can’t answer for, so what likely happens is that your finance department steps in and kills the project.

Then you have a wrecked cloud initiative, and IT blames the business or falls back and says, “our cloud program costs too much” without actually looking for the root cause of the problem. These issues could all have been avoided by putting the right controls in place from the start.
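To make “the right controls from the start” concrete, here is a minimal sketch of an automated month-to-date spend check. It assumes AWS as the public cloud and uses the Cost Explorer API via boto3; the budget figure and the alert action are placeholders to adapt to your own finance controls.

```python
# Minimal sketch of a month-to-date spend check, assuming AWS as the public
# cloud and the Cost Explorer API via boto3. The threshold and the alert
# action are placeholders -- adapt them to your own finance controls.
import datetime
import boto3

MONTHLY_BUDGET_USD = 10_000  # hypothetical budget agreed with finance

def month_to_date_spend() -> float:
    today = datetime.date.today()
    start = today.replace(day=1)
    ce = boto3.client("ce")
    result = ce.get_cost_and_usage(
        TimePeriod={"Start": start.isoformat(), "End": today.isoformat()},
        Granularity="MONTHLY",
        Metrics=["UnblendedCost"],
    )
    return sum(
        float(r["Total"]["UnblendedCost"]["Amount"])
        for r in result["ResultsByTime"]
    )

if __name__ == "__main__":
    spend = month_to_date_spend()
    if spend > MONTHLY_BUDGET_USD:
        # Replace with your real escalation path (email, Slack, ticket).
        print(f"ALERT: month-to-date spend ${spend:,.2f} exceeds budget")
    else:
        print(f"Month-to-date spend: ${spend:,.2f}")
```

A check like this, run daily and wired into whatever alerting your finance team already trusts, is the difference between catching runaway spend in a week and explaining it at the end of the quarter.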

  2. Recognize that your Cloud Ops team can’t be the same team that’s running your classic model.

Your Cloud Ops team and your on-premises IT team need to be separate, each dedicated to its own operating model. The decision as to how to populate each team is a tough one. But I’ve worked with quite a few companies that originally tried to merge their on-prem operational teams and their cloud operational teams in the hope that they could act as one. It hasn’t worked well at all.

The on-prem operational teams may not understand how the new model works. They’re probably not familiar with the technology and the new software platforms that you’re deploying. The cloud folks may lack the depth of experience needed to manage the on-prem assets. This is where the friction starts, with team dynamics issues, turf battles, and silo building. I’m a convinced advocate for bypassing these problems by the simple expedient of keeping the two teams’ workflows distinct and separate.

  3. Target your training.

The two-team approach is also useful in your training programs. Cloud Ops training will be very different from your on-prem ops training or your developer training.

Cloud Ops training should be oriented more toward managing spend controls, enabling or disabling services, and supporting users who are consuming cloud services. Importantly, it should also focus on the DevOps relationship and the benefits it delivers to IT service consumers.

A tight connection between development teams and operations teams is pivotal for maximizing the value of cloud implementations. That partnership needs to be carefully fostered, and a big part of that is through training.

Building a top-flight Cloud Ops function is a demanding task, but an essential one for companies that want to see the best results from this innovative, agile paradigm.

Get Started Today

HPE has experienced consultants to support you through every stage of the cloud lifecycle. Get the expert assistance you need to bring cloud computing services to your business quickly and efficiently. Take a look at the HPE Cloud Services website and get started today.


About the author

Robert Christiansen is a cloud technology leader, best-selling author, mentor, and speaker. In his role as VP of Global Cloud Delivery at Cloud Technology Partners (CTP), a Hewlett Packard Enterprise company, Christiansen oversees the delivery of professional services and innovation for HPE’s public cloud business. His client base includes Fortune 500 and Global 2000 customers, and his team’s leadership model encompasses the entire IT transformation journey, from inception to execution.

To read more articles by Christiansen, visit HPE’s Transforming IT blogsite or the Shifting to Software-Defined blogsite.

How Workload-Aware Networks Work


Many of today’s hyperconverged infrastructure (HCI) solutions converge compute and storage stacks, but leave the critical networking component as an afterthought

Compute can be boosted with faster processors, and choosing an all-flash storage-based solution can greatly improve performance of the HCI node, but outdated network technology can drag down overall performance. Without an intelligent, high-speed network to accommodate data traffic, IT organizations often get unwanted surprises that reduce efficiency and increase costs.

The solution? A unified network that supports diverse applications and workloads and still ensures high performance, reliability, and service quality. The technology exists today – here’s how it works.

Traditional networking architecture

Traditionally, the movement of traffic between the core of the data center and the edge is called north-south traffic. AI and machine learning data, in contrast, moves between edge or IoT devices, nodes, and clusters spread throughout the network in order to handle parallel or distributed processing. This pattern is called east-west traffic, and uses a software-defined networking (SDN) infrastructure to route traffic through a diverse array of paths.

The most important advantage of SDN applies to AI and machine learning data traffic: the intent-based network can dynamically adjust traffic paths to meet the needs of specific and unpredictable workloads. One such solution — HPE Composable Fabric (a recent technology acquisition from Plexxi) — is designed to be workload aware and innately understands the needs of the applications running on the network. The composable fabric also automates the distribution of this east-west traffic to deliver the high speeds and low latency that AI and machine learning applications require. This scale-out architecture also accommodates the unpredictable ebbs and flows of AI and machine-learning traffic.

Death to Rip-and-Replace

Adding new AI or machine learning workloads to a data center offers the perfect opportunity to try a scale-out system. Gone are the rip-and-replace days when adding cutting-edge data center solutions meant disrupting all business users while equipment was upgraded. Software-defined technologies can be added to existing infrastructure and scaled out to meet demand.

One option is to roll out a new AI application on a sub-infrastructure with a modern, scale-out system that links back to the existing data center network infrastructure. Hyperconverged compute, storage, and networking elements integrate with one another and deliver control-center interfaces that administrators can operate without high-level technical certifications. Software now automates most of the processes that were previously manual, setting thresholds at a very granular level. Workloads are routed, processed, and stored with the speed, capacity, security, and response time required.

In the case of SDN, pipes don’t need to be redesigned. The workload-aware software assigns an optimized traffic path for each workload — from AI and machine learning data to traditional databases and other applications. Workloads with unique requirements are isolated or have paths reserved for anticipated traffic flows.

In a common example, a company might have its sales team run a monthly in-depth customer analysis, which requires processing enormous volumes of data in a short amount of time. Using an SDN system like HPE Composable Fabric, administrators automate the process of reserving a traffic path that delivers wide bandwidth, high throughput, and low latency at exactly that time each month, ultimately improving service to line-of-business users and preserving business continuity.
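As a conceptual illustration (not the Composable Fabric API), the sketch below shows the idea of workload-aware path selection: each workload maps to a traffic policy, and a recurring window such as the monthly analysis run reserves a wider, lower-latency path. The workload names and policy values are hypothetical.

```python
# Conceptual illustration only -- not the HPE Composable Fabric API. It shows
# the idea of workload-aware path selection: each workload gets a traffic
# policy, and a recurring window (here, the monthly analytics run) reserves a
# high-bandwidth, low-latency path. Names and values are hypothetical.
from dataclasses import dataclass
import datetime

@dataclass
class TrafficPolicy:
    bandwidth_gbps: int
    max_latency_ms: float
    isolated: bool = False

DEFAULT_POLICIES = {
    "erp-database": TrafficPolicy(bandwidth_gbps=10, max_latency_ms=5.0),
    "ml-training": TrafficPolicy(bandwidth_gbps=25, max_latency_ms=1.0, isolated=True),
    "sales-analytics": TrafficPolicy(bandwidth_gbps=10, max_latency_ms=5.0),
}

# Reserved policy for the monthly in-depth customer analysis (first of the month).
ANALYTICS_BURST = TrafficPolicy(bandwidth_gbps=40, max_latency_ms=0.5, isolated=True)

def policy_for(workload: str, when: datetime.date) -> TrafficPolicy:
    """Pick a traffic policy based on the workload and the calendar."""
    if workload == "sales-analytics" and when.day == 1:
        return ANALYTICS_BURST
    return DEFAULT_POLICIES[workload]

if __name__ == "__main__":
    print(policy_for("sales-analytics", datetime.date(2019, 3, 1)))   # burst policy
    print(policy_for("sales-analytics", datetime.date(2019, 3, 15)))  # default policy
```

In a workload-aware fabric, decisions like this are made continuously by the network software itself rather than scripted by an administrator; the point of the sketch is simply that the policy follows the workload and the schedule, not the cabling.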

Give AI Apps a Network as Intelligent as They Are

AI and machine learning workloads deserve networks that are as intelligent as the applications. Companies can succeed in processing AI and machine learning applications using scale-out, hyperconverged solutions for compute, storage, and networking capabilities without needing to rip-and-replace existing infrastructure. Instead, hyperconvergence offers the agility to add modern capabilities incrementally. In addition to offering big savings on capital expenses, scale-out systems deliver huge savings on operating expenses since they avoid overprovisioning situations and automate tedious and often technical processes.

Scale-out innovations in compute and storage systems enabled advancements from static data resources to an era where AI and machine learning applications can process data in real-time. The network has finally caught up, and companies now benefit from hyperconverged solutions that offer agility, scalability, and automated optimization of traffic flows based on workload awareness.

To learn more about the future of networks and how they can boost the performance of your business, download the Gartner report, Look Beyond the Status Quo for Network Innovation. And for more details on composable fabric and hyperconverged infrastructures, read more on the HPE website.


About Thomas Goepel

Thomas Goepel is the Director Product Management, Hyperconverged Solutions for Hewlett Packard Enterprise. In this role, he is responsible for driving the strategy of HPE’s Hyperconverged products and solutions spanning from the Infrastructure Components to the Management Software. Thomas has over 26 years of experience working in the electronics industry, the last 25 of which at Hewlett Packard Enterprise, where he has held various engineering, marketing and consulting positions in R&D, sales and services.

To read more articles from Thomas, check out the HPE Shifting to Software-Defined blog.

Artificial Intelligence and Machine Learning Require a Better Network


Artificial intelligence (AI) applications are quickly finding their way into everyday life – whether it’s traffic data for Waze maps, sensor data from self-driving cars, or Netflix entertainment recommendations

All of these apps generate extreme volumes of data that must be collected and processed in real time. Networks built as recently as 10 years ago weren’t required to collect, route, and process this vast amount of data at real-time speeds. Typical networks contained a web of hardware and cabling, a one-size-fits-all offering of bandwidth and throughput that was far too cumbersome to handle today’s AI and machine learning applications.

Not surprisingly, modern networking is based on a very different design. Innovations in software-defined technologies allow for scale-out infrastructures that can be added incrementally to meet business needs. Let’s take a look at why and how.

AI: Not Your Grandfather’s Workload

Before AI and machine-learning applications became commonplace, most network traffic only needed to carry application workloads such as SQL and other structured databases and office applications. Businesses handled data processing — the compute component — on-premises in data centers. Through a mix of disk and tape libraries, they managed storage both onsite and offsite. Companies gleaned business intelligence by funneling data into a data warehouse and running a batch analysis or data mining program on the entire data set. This process resulted in static data, time-consuming analysis, and information that was outdated the moment it was published.

Now, businesses process Big Data using parallel processing technologies such as Hadoop. Hadoop is one of the first examples of a scale-out application. Compute capacity is available on-the-fly, and it can be handled on-premises, rented from the public cloud, or processed in a hybrid of the two environments.

Data for AI and machine learning is a specialized segment of Big Data. It’s extremely data intensive and often requires real-time transit and processing speeds. Think of internet-of-things (IoT) sensors that collect dozens of data points per minute. Real-time analysis catches any situation that meets or exceeds a threshold and transmits anomalies for immediate action. To handle the speeds and unpredictable data volumes, the processing is generally routed to a hybrid cloud environment that is set up to support the need for agility and shared resources. The communication, or networking, between servers and storage is mission critical for many of these AI and machine learning applications.
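For illustration, here is a minimal sketch of the kind of real-time threshold check described above. The sensor values, threshold, and alert action are hypothetical; in production this logic would run in a stream processor close to the data rather than over an in-memory list.

```python
# Minimal sketch of real-time threshold detection on a stream of IoT sensor
# readings. The sensor, threshold, and alert action are hypothetical.
from typing import Iterable, Iterator, Tuple

TEMPERATURE_LIMIT_C = 75.0  # assumed operating threshold

def detect_anomalies(readings: Iterable[float],
                     limit: float = TEMPERATURE_LIMIT_C) -> Iterator[Tuple[int, float]]:
    """Yield (index, value) for every reading that meets or exceeds the limit."""
    for i, value in enumerate(readings):
        if value >= limit:
            yield i, value

if __name__ == "__main__":
    stream = [68.2, 70.1, 74.9, 76.3, 71.0, 80.5]  # dozens of points per minute in practice
    for index, value in detect_anomalies(stream):
        # Replace with an immediate action: page an operator, open a ticket, etc.
        print(f"Anomaly at reading {index}: {value} C")
```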

Scale-Out, Hyperconverged Capabilities Extend Beyond the Compute Layer

Until now, scale-out technology innovation happened more slowly on the storage and networking sides than the compute side, but storage has been catching up. Converged and hyperconverged storage systems unify data processing and data accessibility. HPE SimpliVity is a great example of simply scalable technology that converges compute, storage, firmware, hypervisor, and data virtualization software into a single, integrated node. The HPE hyperconverged solution also offers deduplication, compression, and data protection all in one system.

Now it’s networking’s turn. While hyperconverged architecture pushed the envelope for both servers and storage, networking lagged behind its infrastructure brethren, waiting for an opportune moment in technology innovation: the intersection of workload-aware software and software-defined infrastructure.

Through tight software integration, HPE Composable Fabric becomes aware of its HPE SimpliVity hyperconverged environment and automates many routine network configuration and management tasks. For example, the software-defined network automatically discovers hyperconverged nodes, virtual controllers, and hypervisor guest VMs, and can dynamically provision the network fabric in response to real-time compute and storage events, such as the addition of a new node. The highly adaptable data center network fabric delivers high performance and service quality for diverse applications and workloads while making better use of network capacity.

HPE has assembled an array of resources to help you support your most demanding AI and machine learning applications. You can learn more about HPE’s approach to hyperconvergence and composable fabrics by checking out the HPE website, HPE SimpliVity with Composable Fabric.


About Thomas Goepel

Thomas Goepel is the Director Product Management, Hyperconverged Solutions for Hewlett Packard Enterprise. In this role, he is responsible for driving the strategy of HPE’s Hyperconverged products and solutions spanning from the Infrastructure Components to the Management Software. Thomas has over 26 years of experience working in the electronics industry, the last 25 of which at Hewlett Packard Enterprise, where he has held various engineering, marketing and consulting positions in R&D, sales and services.

To read more articles from Thomas, check out the HPE Shifting to Software-Defined blog.

To Speed Development and Time to Market, Empower Your Developers


Digital transformation is empowering developers, resulting in products and services that enhance customer engagement and experiences

Numerous studies show that employees who have a sense of control over their jobs are typically happier and more productive. Application developers are no different – the more freedom and options they have to do their jobs well, the more productive they can become.

What’s holding developers back?

Many organizations have moved to public cloud, which has streamlined processes and given developers immediate access to the resources they need. At first, this scenario seemed ideal. But as organizations implement multiple infrastructure environments (both public and private clouds) to control costs and better meet developers’ changing needs, complexity has increased. This is because every cloud infrastructure has its own toolset, which is based on maximizing the value of each cloud platform — instead of the multi-cloud environment as a whole.

For developers using both cloud resources and traditional IT, the problem is compounded even more. These developers must also rely on internal IT operations to give them access to onsite resources, which can slow projects by days, weeks, or even months.

Empowering developers with more application ownership

Ideally, developers need to take more ownership of their applications’ lifecycles — from development through production deployment. The process of developing and deploying apps to cloud environments should be simple for developers; they shouldn’t have to become full-stack cloud deployment experts. IT must help in this endeavor, working with developers to make the process as simple and seamless as possible.


To help facilitate this type of partnership, enterprises are turning to hybrid cloud management solutions. These solutions go beyond just focusing on application management, which is offered by today’s typical cloud providers. The most effective multi-cloud management solutions empower developers by giving them on-demand, self-service infrastructure and cloud management tools for all applications located either on- or off-premises.

A recent analyst report by Moor Insights & Strategy details this type of hybrid cloud management solution and shows how enterprises can empower developers with faster application development environments and deployment capabilities.

What features do developers need?

Moor Insights & Strategy recommends that enterprises implement a software-as-a-service (SaaS) cloud management solution that delivers unified application deployment and cost management across an enterprise’s hybrid cloud environments. It should provide a consistent, unified Application Programming Interface (API) and user experience across the environments by creating connections with each one and quickly surfacing each environment’s capabilities.
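As a rough sketch of what such a unified API could look like from a developer’s point of view, the example below puts one deployment interface in front of several environments. The provider classes are illustrative stubs, not real cloud SDK calls, and all names are hypothetical.

```python
# Sketch of a unified deployment interface in front of several environments.
# The provider classes are illustrative stubs, not real AWS/Azure/VMware calls.
from abc import ABC, abstractmethod

class CloudEnvironment(ABC):
    name: str

    @abstractmethod
    def deploy(self, image: str, size: str) -> str:
        """Deploy an image and return an environment-specific deployment id."""

class PublicCloud(CloudEnvironment):
    def __init__(self, name: str):
        self.name = name

    def deploy(self, image: str, size: str) -> str:
        return f"{self.name}: deployed {image} on a {size} instance"

class OnPremCluster(CloudEnvironment):
    name = "on-prem-kubernetes"

    def deploy(self, image: str, size: str) -> str:
        return f"{self.name}: scheduled {image} with profile {size}"

def deploy_everywhere(environments, image: str, size: str):
    """The developer sees one call; each environment handles its own details."""
    return [env.deploy(image, size) for env in environments]

if __name__ == "__main__":
    targets = [PublicCloud("aws"), PublicCloud("azure"), OnPremCluster()]
    for line in deploy_everywhere(targets, image="web-frontend:1.4", size="medium"):
        print(line)
```

The value of the abstraction is exactly what the report describes: developers work against one consistent surface, while IT curates how each environment fulfills the request behind it.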

A robust hybrid cloud management platform should allow IT to deliver public cloud options from AWS, Azure, Google, and others. The solution should also give developers a private cloud experience to run options such as VMware, OpenStack, and Kubernetes on bare metal.

The report also mentions two additional features that can help enterprise IT teams empower their developers even more.

  • Catalogs: Compute images, application stack templates, and services should be compiled in a self-service catalog available to developers. The catalog should be automatically surfaced and updated in real time based on availability from each cloud platform. For example, if something new is available from Kubernetes or Docker, the hybrid cloud management software will automatically detect the addition of a service and add it to the catalog.
  • Projects: Another feature that can help empower developers is some sort of projects capability, which would allow developers to organize users and application deployments by team project. This feature gives teams workspaces to manage all resources for specific applications, including custom, role-based access management and tagging for cost management – across all resources in any cloud environment.

Are these types of hybrid cloud management solutions available today?

Very few cloud platform vendors have invested in cloud-management capabilities for the enterprise. As I mentioned earlier, they are more focused on their own public clouds, private clouds, or application management platforms. A multi-vendor, multi-cloud approach would be counter-productive to their goals of keeping customers fully engaged with a single cloud solution.

A more strategic, hybrid cloud approach is needed for enterprise customers – a need that Hewlett Packard Enterprise (HPE) has stepped up to fill. In 2017 HPE announced HPE OneSphere, an as-a-service multi-cloud management platform that simplifies managing multi-cloud environments and on-premises infrastructure.

According to the report, “Moor Insights & Strategy recommends IT leaders consider HPE OneSphere as a cloud management solution to empower developers with an IT-curated, hybrid cloud experience that streamlines their path from application development to deployment.”  HPE OneSphere helps enterprise application developers not yet familiar with cloud platforms to onboard via a simplified experience. For developers more experienced with cloud platforms, the solution’s image, service, and template catalogs offer simplicity in working across hybrid cloud environments.

Immediate availability, simplicity, and speed

Enterprises want to empower their developers by giving them the tools they need to be successful. That means embracing both on- and off-premises infrastructure – public clouds, private clouds, bare metal, and containers. Yet developers must also be able to work easily and quickly in all environments – securely and cost-effectively.

With HPE as a partner, developers are empowered to deliver products and services that enhance customer engagement and experiences – no matter where applications are deployed.

To read more on this topic, download the analyst whitepaper by Moor Insights & Strategy, HPE OneSphere Strengthens Enterprise Developer Empowerment in Hybrid Cloud. For more information on a smooth transition to multi-cloud, visit the Cloud Technology Partners (CTP) website.


About the Author

Gary Thome is the Vice President and Chief Technologist for the Software-Defined and Cloud Group at Hewlett Packard Enterprise (HPE). He is responsible for the technical and architectural directions of converged datacenter products and technologies.

To read more articles from Gary, check out the HPE Shifting to Software-Defined blog.

The Right Infrastructure Management Can Help You Tackle Growth Before it Tackles You


Many IT admins often sit back and reminisce about the days when life was easier, simpler

Maybe they are looking out at a business and a data center that is growing faster than they can keep up with, or a shrinking budget that is difficult to operate within. Maybe they are trying to figure out how they can tackle both. From deploying new infrastructure that meets expanding workloads to training new employees, the demands caused by growth take up valuable time and resources they may not have. Implementing an easy-to-use, easy-to-learn infrastructure management solution is a key to helping alleviate these growing pains.

Do more with less

Employing a good infrastructure management solution allows IT admins to tackle more on their to-do list with fewer people and in less time. This means having the ability to add infrastructure and grow capacity without always needing to add people. To do this effectively though, IT needs a straightforward user interface that anyone can learn — with minimal training. More importantly, it has to be able to automate manual tasks, like firmware and driver updates, to help easily deploy new servers and simplify everyday maintenance and monitoring tasks. Accomplishing this puts IT admins on their way to reducing costs in the operating budget and freeing up time for value-generating activities.

HPE OneView meets all of the above requirements with a broad range of capabilities that enable customers to transform their datacenter resources into software-defined infrastructure. Let’s explore how HPE OneView helped a cloud hosting and colocation services company simplify and automate processes.

Infrastructure management in action

Located in Hillsboro, Oregon, Opus Interactive recently made the switch to HPE OneView to help keep up with its significant company growth. Expanding to more than 300 customers, Opus could no longer manage its infrastructure with the tools it had as a smaller company. To solve this challenge, Opus investigated new processes and tools that would increase its productivity without having to increase its staff level.

“We wanted to get as much infrastructure and as many customers managed by each engineer or administrator as possible, so it would be easier,” says Eric Hulbert, president and co-founder of Opus Interactive.

By implementing a software-defined infrastructure management platform with HPE OneView, Opus Interactive achieved significant time savings. The company’s current IT staff is now delivering more value to its growing customer base, while simplifying operations and increasing automation. HPE OneView’s template-based provisioning gives Opus Interactive a fast and reliable way to deliver consistent services.

“As a small company, the ability to automate caters to the speed and flexibility of our customers,” says Hulbert.

“We can roll out equipment for our customers quickly. For example, we can develop one template for a group of servers and then roll it out to 10 customer applications identically. A lot of times you hear stories of IT departments taking weeks or months to roll out something new. We use [HPE] OneView to help us basically launch in days.”
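For readers who want a feel for how template-based provisioning can be automated, here is a rough sketch against the HPE OneView REST API. The endpoint paths, API version header, and payload fields are assumptions drawn from public documentation; verify them against your OneView release, and treat the addresses, credentials, and names as placeholders.

```python
# Rough sketch of template-based provisioning against the HPE OneView REST
# API. Endpoint paths, the API version header, and payload fields are
# assumptions based on public documentation -- verify them against your
# OneView release before use. Addresses, credentials, and names are placeholders.
import requests

ONEVIEW = "https://oneview.example.local"   # placeholder appliance address
HEADERS = {"X-API-Version": "800", "Content-Type": "application/json"}

def login(username: str, password: str) -> str:
    resp = requests.post(
        f"{ONEVIEW}/rest/login-sessions",
        json={"userName": username, "password": password},
        headers=HEADERS,
        verify=False,  # lab-only shortcut; use proper certificates in production
    )
    resp.raise_for_status()
    return resp.json()["sessionID"]

def create_profile_from_template(token: str, template_uri: str,
                                 server_uri: str, name: str) -> dict:
    auth_headers = {**HEADERS, "Auth": token}
    profile = {
        "type": "ServerProfileV12",            # assumed type string; check your API version
        "name": name,
        "serverProfileTemplateUri": template_uri,
        "serverHardwareUri": server_uri,
    }
    resp = requests.post(f"{ONEVIEW}/rest/server-profiles",
                         json=profile, headers=auth_headers, verify=False)
    resp.raise_for_status()
    return resp.json()
```

Looping a call like this over a list of servers is what turns “one template, ten identical customer deployments” from a week of manual work into a script.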

With HPE OneView in place, IT admins can sit back and, instead of reminiscing about an easier time, look forward to the future and plan for more without worrying about how they are going to manage it.

To learn more about how Opus Interactive was able to put HPE OneView to work in its datacenter, read the recent IDC customer spotlight: Managing Infrastructure for Rapid Growth with HPE OneView. To learn more about how you can implement HPE OneView for FREE to see if it’s right for your data center and business goals, visit hpe.com/info/oneview.


About Chris Purcell

Chris Purcell drives analyst relations for the Software-Defined and Cloud Group at Hewlett Packard Enterprise. The Software-Defined and Cloud Group organization is responsible for marketing for composable infrastructure, HPE OneView, HPE SimpliVity hyperconverged solutions and HPE OneSphere.

To read more articles by Chris Purcell, visit HPE’s Shifting to Software-defined blogsite.

 

Healthcare Facilities Discover Pain Relief for the Datacenter


People who work at healthcare facilities suffer from pains of their own, and the source of the pain may surprise you

It is the kind of pain that has nothing to do with the human body, and the relief cannot come from the medical staff. It’s the pain of managing an aging IT infrastructure.

IT administrators have a tough time in healthcare datacenters these days. Legacy IT systems can be difficult to manage and grow. Over time, a siloed architecture can make an infrastructure more complex, more costly, and harder to upgrade. As if that’s not challenging enough, HIPAA regulations impose stringent rules on medical databases to ensure the protection of patient information. Although products are available that can mitigate the risk of catastrophic datacenter failure and data breaches, it’s hard to know which ones will integrate smoothly into a legacy IT environment. It’s equally important that, as legacy IT is refreshed and modernized, new systems are resilient and overcome complexity and cost headaches.

Prescription for healthcare facilities: HIPAA-compliant hyperconverged solutions

But relief from datacenter pain is available – and no, you don’t need a prescription. Technology can ease the pain of legacy infrastructures for IT operations and can even mend fractured data protection strategies. Both of these benefits help keep costs – and IT operations’ blood pressure – down.

Healthcare IT professionals are discovering the benefits of deploying hyperconverged infrastructure (HCI) in their datacenters and at remote sites. Hyperconverged infrastructure combines up to a dozen discrete IT infrastructure components and advanced data services into a single x86-based building block that easily scales as requirements demand and provides costs and experiences similar to public clouds. Some hyperconverged solutions include built-in data protection and validated data security solutions as well.

A HIPAA-compliant HCI offering typically includes an array of compliance monitoring and enforcement tools, along with VM-centric management that simplifies management of the combined solution from a single console. The hyperconverged infrastructure controls the efficient movement of VMs across datacenters. The data encryption solution allows the IT admin to key, clone, restore, and rekey VMs, instructing the system to shred old encryption keys. Cloned and restored VMs can be rekeyed as well, which means data cannot be duplicated and stolen by simply using data services. A security platform that performs encryption with zero downtime helps to increase compliance with various regulations and best practices.
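To illustrate the rekeying idea in the abstract (this is not how HyTrust DataControl is implemented), the sketch below decrypts with the old key, re-encrypts with a fresh one, and discards the old key so that cloned or restored copies can no longer be read with it. It uses the Python cryptography library purely as an example.

```python
# Generic illustration of the rekey idea -- decrypt with the old key,
# re-encrypt with a fresh one, then discard the old key so cloned or restored
# copies cannot be read with it. NOT the HyTrust DataControl implementation;
# it simply shows the principle using the cryptography library.
from cryptography.fernet import Fernet

def rekey(ciphertext: bytes, old_key: bytes):
    """Return (new_key, new_ciphertext); the caller must securely destroy old_key."""
    plaintext = Fernet(old_key).decrypt(ciphertext)
    new_key = Fernet.generate_key()
    return new_key, Fernet(new_key).encrypt(plaintext)

if __name__ == "__main__":
    key_v1 = Fernet.generate_key()
    record = Fernet(key_v1).encrypt(b"patient-id: 12345; diagnosis: ...")

    key_v2, record_v2 = rekey(record, key_v1)
    del key_v1  # "shred" the old key; older ciphertext copies are now unreadable

    print(Fernet(key_v2).decrypt(record_v2))
```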

Weeks Medical Center, a critical access hospital offering medical, surgical, and intensive care services, chose HPE SimpliVity hyperconverged infrastructure for improved risk mitigation and HIPAA compliance. In addition to $100,000 annual cost savings, they realized backup and recovery improvements, including restoration of massive PACS (medical image) files in minutes.

Neil Medical Group provides pharmacy dispensing and consulting services to Long Term Care facilities in the US. Recently, they refreshed their legacy IT infrastructure, simplifying operations and improving data protection for critical applications. They also experienced a 25% increase in application performance with HPE SimpliVity hyperconverged infrastructure, and as much as 137:1 data efficiency using always-on deduplication and compression.

Altrecht, a Netherlands-based healthcare company that treats more than 25,000 patients annually, consolidated 8 racks of legacy equipment down to two racks of hyperconverged infrastructure systems. Rapid and efficient data protection allows them to back up and recover VMs in minutes compared to hours with their previous infrastructure.

The Namibia Institute of Pathology (NIP) is the largest diagnostic pathology service provider in Namibia, handling all public health sector pathology testing and providing disease monitoring services. When it was time to refresh their IT, NIP leveraged hyperconverged infrastructure to build a private cloud to shield sensitive data. They now achieve fast, simple full backups and replication and have consolidated 40U to 6U of rack space with HPE SimpliVity. They also streamlined their IT operations and realized a 94:1 data efficiency ratio.

All of these healthcare facilities chose Hewlett Packard Enterprise (HPE) to refresh their aging infrastructure. HPE SimpliVity hyperconverged systems are helping healthcare IT teams all over the world find pain relief by simplifying their IT operations and protecting their critical data. HPE has validated HPE SimpliVity interoperability with HyTrust DataControl, a security and compliance platform for virtual machines that offers simple encryption key management. This implementation and best practices guide explains how, with HyTrust DataControl, HPE SimpliVity hyperconverged infrastructure becomes a HIPAA-compliant solution that secures application data through encryption.

To learn more about how hyperconverged solutions can bring pain relief to your datacenter, download the free e-book: Hyperconverged Infrastructure for Dummies.


About the Author

Lauren Whitehouse is the marketing director for the Software-Defined and Cloud Group at Hewlett Packard Enterprise (HPE). She is a serial start-up marketer with over 30 years in the software industry, joining HPE from the SimpliVity acquisition in February 2017. Lauren brings extensive experience from a number of executive leadership roles in software development, product management, product marketing, channel marketing, and marketing communications at market-leading global enterprises. She also spent several years as an industry analyst, speaker and blogger, serving as a popular resource for IT vendors and the media, as well as a contributing writer at TechTarget on storage topics.

5 Competencies You Need to Succeed in DevOps and Beyond


DevOps (development and operations) is an enterprise software development phrase that is used to define an agile relationship between development and IT operations, which encourages better communication and collaboration between the two business units

When an organization applies this same goal to its entire business, it can transform itself. Instead of siloed, traditionally adversarial groups, everyone collaborates with a common goal that will help the organization win in its market sector.

Skills vs. competencies – What is needed?

In order to take advantage of the DevOps transformational wave, organizations must determine how their existing skills and competencies align to their future goals. In particular, they must embrace new delivery processes such as agile, lean, continuous improvement and, obviously, DevOps. These areas will require an updated set of abilities, attitudes, and critical thinking. It is not just a skill that can be learned but an evolved competence.

While skills are an important part of learning, they are not enough to guide people towards true mastery and success. Skills focus on the abilities a person needs to perform a specific task or activity. Competencies, however, take skills to the next level by translating them into behaviors that demonstrate what has been mastered. Competencies include a dynamic combination of abilities, attitudes, behaviors, as well as knowledge.

5 key competencies for success

Success in DevOps will rely on much more than skills. Therefore, it’s time to develop innovative DevOps competencies that will serve all organizations now and in the future. An organization needs to focus on five key competency areas.

1) Culture

Culture is a set of shared organizational assumptions that are learned by teams as they resolve problems. Integrated into day-to-day norms, culture is considered the correct way to perceive, think, and feel in relation to problems.

Within DevOps, culture evolves to include the implementation of continuous improvement, the building of collective intelligence, and continuous reaction to feedback — all based on trust.

2) Leadership

Transformational change requires strong leaders at all levels with core competencies in visioning, strategic management, flexibility, and the ability to inspire others to innovate and perform. Learning a set of skills does not make you a leader; instead leadership embodies an innate passion, integrity, authenticity, and courage.

DevOps, like any other environment, requires leaders who are champions – individuals with advanced communication skills, knowledge of diverse cultures, and collaborative behavior when working in teams. DevOps leadership is not something you learn and just do. For DevOps champions, leadership is a journey of discovery that inspires, persuades, and fascinates their followers.

3) Resource management

All change programs (and DevOps is no different) require the effective management of available resources. During your DevOps transformation, you will move from existing processes that overburden and deliver poor results to processes that are focused on value, continuous delivery, and quality outcomes.

Continuous collaboration across multi-functional teams is at the heart of DevOps, delivering an accountable business outcome. Organizations need to ensure collaborative working behaviors are the norm; silos are broken down, and trust is built into the delivery process.

4) Continuous delivery

Continuous Delivery is a set of processes and practices that radically removes waste from your software production process, enabling faster delivery of high-quality functionality. It also sets up a rapid and effective feedback loop between your business and your users.

Essentially, continuous delivery is about putting the release schedule in the hands of the business, not in the hands of IT. Implementing continuous delivery means that your software is always production-ready throughout its entire lifecycle.
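A toy model of that idea, with placeholder stages, might look like the sketch below: every change flows through the same automated pipeline, the software is always production-ready, and the release decision sits with the business.

```python
# Toy model of the continuous-delivery idea: every change runs through the
# same automated stages, so the software is always production-ready and the
# business -- not IT -- decides when to release. Stage contents are placeholders.
def build(change: str) -> str:
    return f"artifact-for-{change}"

def automated_tests(artifact: str) -> bool:
    return True  # placeholder for unit, integration, and acceptance suites

def deploy_to_production(artifact: str) -> None:
    print(f"released {artifact}")

def pipeline(change: str, business_approves_release: bool) -> None:
    artifact = build(change)
    if not automated_tests(artifact):
        raise RuntimeError("change rejected: fast feedback to the team")
    # The artifact is production-ready; releasing it is a business decision.
    if business_approves_release:
        deploy_to_production(artifact)

if __name__ == "__main__":
    pipeline("feature-1234", business_approves_release=True)
```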

5) Business value

In DevOps, business value is an informal term that determines the value assigned to an outcome. It is the ability to deliver a user-centric product that generates business value throughout the production development cycle by using continuous feedback from the end customer. Product features are shipped straight away – as opposed to the traditional way of delivery where the product is only shipped when it is complete.

Successful DevOps outcomes rely on a true understanding of the business processes and needs that will add the most value to the business practice. They also identify potential improvements, analyzing the business to ensure implemented solutions are effective in terms of cost, desired outcome, functionality, and lead time.

Understanding the bigger picture

The real challenge for an organization is how to go from current skills and certificate-based specialist job functions to broad-based expertise and awareness, supported by the right skills and these 5 core competencies.

Skills, competency, and experience are the most sought-after mix of abilities any organization wishes for its teams. This never-ending quest toward DevOps goals will probably be the most exciting journey of growth your team has ever dared to take. It will provide significant gains in productivity, quality, business success, and ultimately employee happiness.

To learn more about DevOps and the HPE Developer Community, visit HPE DEV.


About Mario Devargas

Mario Devargas is a CIO Advisor for HPE, consulting with organizations in the adoption of collaborative working processes — not just in IT, but across the entire enterprise. With over 30 years at an executive level, he is a passionate and visionary CIO with an extensive record of achievement across the private and public sectors within the corporate and commercial markets, banking, manufacturing and most recently public sector. To read more articles by Devargas, visit HPE’s Transforming IT blogsite.

To read more about Digital Transformation, visit HPE’s Shifting to Software-defined blogsite.

DevOps – Your 3-step Transformation Journey


The Greek philosopher Heraclitus once said, “The only thing constant is change.” Today’s DevOps transformation is a prime example

The DevOps model aims at unifying software development (Dev) with software operations (Ops) so that everyone can better work together across the entire development lifecycle. As businesses search for the best ways to digitally transform, DevOps is at the center of it all – and it’s driving a constant state of change.

As organizations all over the world adjust to the DevOps movement, they look to industry experts for advice. For several years, I have been advising organizations on how to collaborate not just within IT, but also across the entire enterprise. These organizations span multiple geographies, technologies, cultural backgrounds, and skill sets. Through these experiences, I have learned a lot – mostly that transformations are hard and normally take much longer than anyone wants.

Below I’ve detailed my 3-step approach to DevOps Transformation that I use with organizations that are on this journey. It is NOT a sequential step approach; it is an agile step approach – which means it is sometimes sequential, other times concurrent. Yet it is always aligned to the culture and capacity of the organization.

Step 1 – Expand agile practices beyond IT

Too often DevOps is seen as the mandate solely of IT, and there is an assumption that the business, and in particular its C-level executives, takes no interest in the software development function. Nor do executives understand words like agile and waterfall, never mind DevOps.

Lately, there has been a shift in the way software is being developed: the advent of agile DevOps resembles the lean manufacturing shift of the ’80s and ’90s. Furthermore, it is a well understood fact that “software is eating the world,” and most businesses need apps (both mobile and web) to sell their products. Any time you have an app, software development is seen as a critical part of the day-to-day business strategy. Hence, C-level executives are definitely taking a deeper interest in how these things can be delivered effectively.

Other parts of an organization also must be part of a truly agile DevOps process. For example, customer feedback and effective operations are part of the process of breaking down silos and improving outcomes. All stakeholders in an organization across all disciplines must be involved with full support from its leadership and management.

Step 2 – Shift Left within a Continuous Culture

Shifting left in terms of software and system testing is a phrase that means to test early in the lifecycle, thereby finding problems sooner – which can save time and money. Shifting left, when referring to customers, means giving them better access and resources to find answers to their questions. And a continuous culture promotes the idea of finding and eliminating waste in everything you do.

DevOps is based on the idea of shifting left in its testing approach, performing testing earlier in the software delivery lifecycle, which will inevitably eliminate long back-end dependencies and increase quality.
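As a minimal illustration of shifting left, the hypothetical unit test below runs on every commit (for example in a pre-commit hook or CI job), so a defect in a business rule is caught minutes after it is written rather than weeks later in system testing.

```python
# Minimal shift-left example: a unit test that runs on every commit (for
# instance in a pre-commit hook or CI job), catching a defect in the discount
# rule long before system testing. The function and test are hypothetical.
def apply_discount(price: float, percent: float) -> float:
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

def test_apply_discount():
    assert apply_discount(100.0, 20) == 80.0
    assert apply_discount(59.99, 0) == 59.99

if __name__ == "__main__":
    test_apply_discount()
    print("all shift-left checks passed")
```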

Step 3 – Empower a culture of Fail Fast not Fail-Silently

When I first started my business work life, it was the norm to fail silently — mask the failure, hope it does not get spotted, blame someone else, or simply ignore the issue. New ways of thinking by innovative companies focus on the concept of failing fast and failing often. As long as employees can learn from their failures, this strategy can help the company grow in an agile way.

In order to succeed in a DevOps transformation, 4 types of failures are encouraged:

  • Fail Early – The sooner the failure is spotted, the faster you learn, and the faster you can fix it.
  • Fail Fast – Failing faster ensures you learn faster and can fix it.
  • Fail Often – The more things you try, the more failures you’ll get – and the more you learn.
  • Fail Better – Combine all of the above to maximize learning, leading to spectacular success!

Successful transformation involves more than just getting your teams within development and operations to work together. It involves the entire organization with a mindset focused on continual improvement, testing out ideas early, empowering the customer, and encouraging success through failure.

To learn more about DevOps and the HPE Developer Community, visit HPE DEV.


About the Author

Mario Devargas is a CIO Advisor for HPE, consulting with organizations in the adoption of collaborative working processes — not just in IT, but across the entire enterprise. With over 30 years at an executive level, he is a passionate and visionary CIO with an extensive record of achievement across the private and public sectors within the corporate and commercial markets, banking, manufacturing and most recently public sector. To read more articles by Devargas, visit HPE’s Transforming IT blogsite.

To read more about Digital Transformation, visit HPE’s Shifting to Software-defined blogsite.
