
Cloud Transition – 5 Best Practices to Follow


Is your business transitioning from on-premises infrastructure to multi-cloud, hybrid cloud, or both?

If so, you are embarking on the single most significant technology shift your company will face over the next decade.

Whether you are looking to move a single workload or shut down an entire data center, the transformation is complex and fraught with difficulty. Successful cloud adoption demands razor-sharp focus and a detailed blueprint, because a single misstep can become costly and time-consuming.

I suggest following these five best practices to increase your chances of success.

  1. Make a cloud-first commitment

A cloud-first commitment means that all your applications should be in the cloud – either in a public cloud or on premises in a private cloud. For example, many businesses are moving all their applications to the public cloud unless there is a compelling reason to remain on premises. Of course, due to cost, security, or governance concerns, some applications or data may need to stay on premises. In that case, setting up a private cloud that easily integrates with your public cloud is the best option. Either way, you must prioritize a cloud-first strategy in order to reap the full benefits of your public cloud or on-premises private cloud.

A cloud-first strategy ensures that your organization will dedicate the appropriate resources needed to make necessary changes. Cloud-first also requires assigning dedicated teams and funding your cloud program properly. This means team members should only work on cloud-related activities, and they should focus entirely on getting the enterprise to the cloud securely – not just kicking the tires with a proof-of-concept or a pilot program.

  2. Know your cloud economics

Understanding the economics of public cloud adoption seems like a no-brainer. However, I’ve found that more than 50% of enterprises do not take the time to determine the business case for moving to the public cloud, probably because they “already know” it’s a good thing. Nevertheless, an organization gains valuable insights by building a business case to improve its understanding of cloud economics.

Public cloud economics fall into two separate buckets. The first is a total cost of ownership (TCO) analysis along with hard cost savings – a like-for-like replacement of on-premises services with cloud services. When determining your current costs, look at the whole package, not just server-for-server comparisons. Cost considerations should include the following: hardware, networking, downtime, upgrades, disaster recovery/business continuity, service level agreement penalties, deployment, operational support, performance, selecting vendor software, requirements analysis, training, integration, quality testing, application enhancement, bug fixes, physical security costs, contracting, replacement, and other risks.
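To make that first bucket concrete, here is a minimal TCO roll-up sketch in Python. Every cost category and figure below is a hypothetical placeholder – substitute your own annualized numbers from the list above rather than treating these as benchmarks.

```python
# Minimal like-for-like TCO comparison sketch -- all figures are hypothetical placeholders.
# Each dict rolls up annualized costs for one deployment model.

on_prem = {
    "hardware_amortized": 120_000,
    "networking": 18_000,
    "power_cooling_facilities": 22_000,
    "operational_support": 95_000,
    "downtime_and_sla_penalties": 15_000,
    "dr_and_business_continuity": 30_000,
    "upgrades_and_refresh": 25_000,
}

public_cloud = {
    "compute_and_storage": 160_000,
    "data_egress": 12_000,
    "operational_support": 60_000,
    "training_and_migration_amortized": 20_000,
    "third_party_tooling": 10_000,
}

def annual_tco(costs: dict) -> int:
    """Sum all annualized cost categories for one deployment model."""
    return sum(costs.values())

if __name__ == "__main__":
    print(f"On-premises annual TCO: ${annual_tco(on_prem):,}")
    print(f"Public cloud annual TCO: ${annual_tco(public_cloud):,}")
    delta = annual_tco(on_prem) - annual_tco(public_cloud)
    print(f"Estimated annual difference: ${delta:,}")
```

The point of the exercise is less the final number than forcing every category on the list above into one side of the comparison or the other.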

The second bucket of cloud economics includes agility and other soft benefits. For example: What is the benefit of having highly flexible, agile infrastructure? What is the financial impact of decreasing provisioning times from months to hours? How do you measure the impact on productivity?

  3. Discover the inner workings of your application estate

Cloud environments like AWS, Azure, and Google Cloud are not fully backward compatible with every on-premises application, which means you won’t be able to move all of your applications to these clouds. Depending on the importance of those applications, you will need a hybrid cloud network that connects the public cloud provider to a private cloud. With the proper integration between your public and private clouds, the two environments can work together well – but you must set up the integration properly.

Data has gravity; therefore, a realistic view of the estate and the risks of moving that data must be weighed in the context of your overall risk profile. Sometimes, the interconnections between applications and data are so complex that you will need to keep the applications close to the data.

The challenges with hybrid cloud networks include latency as well as the volume of data transmitted through the network. Simply put, you can cripple your cloud program if you don’t have a thorough understanding of application dependencies and the data volumes that flow between them. Many organizations rely on outside cloud experts who use automation and numerous tools to make this process easier.
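As a simplified illustration of that dependency-and-volume mapping, the sketch below treats app-to-app flows as a table of daily data volumes and flags the flows that would be risky to stretch across a hybrid link. The application names, volumes, and threshold are made-up assumptions for illustration, not guidance.

```python
# Sketch: flag app-to-app data flows that are risky to split across a
# hybrid cloud link. Applications and daily volumes are illustrative only.

DAILY_FLOWS_GB = {
    ("order-service", "inventory-db"): 850,
    ("order-service", "pricing-api"): 12,
    ("analytics-etl", "inventory-db"): 2400,
    ("web-frontend", "order-service"): 40,
}

THRESHOLD_GB = 500  # above this, keep both endpoints on the same side of the link

def risky_flows(flows: dict, threshold: float):
    """Return flows whose daily volume makes cross-cloud placement risky."""
    return [(src, dst, gb) for (src, dst), gb in flows.items() if gb >= threshold]

for src, dst, gb in risky_flows(DAILY_FLOWS_GB, THRESHOLD_GB):
    print(f"Keep '{src}' and '{dst}' co-located: {gb} GB/day across the link")
```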

  4. Perform a security and governance gap assessment

The Cloud Security Alliance (CSA) is the world’s leading organization dedicated to defining and raising awareness of best practices to help ensure a secure cloud computing environment. The CSA’s Cloud Controls Matrix (CCM) serves as a baseline of control objectives for cloud computing in the enterprise.

Repeatable patterns of reference architectures form a baseline for you to assess security and governance gaps in your cloud program. Performing a security and governance gap assessment means looking at your control objectives against a known standard, such as CSA’s matrix, and documenting the gaps in your controls against accepted best practices. Instead of building your security reference architecture from the ground up, you can accept a baseline and make minor changes to meet your specific needs.
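A gap assessment ultimately reduces to a diff between a baseline catalog of control objectives and what you can actually evidence today. The sketch below shows that idea in miniature; the control IDs and descriptions are illustrative stand-ins rather than actual CCM entries.

```python
# Minimal gap-assessment sketch: compare implemented controls against a
# baseline control catalog. Control IDs and statuses are illustrative only.

baseline_controls = {
    "IAM-01": "Identity and access management policy",
    "EKM-02": "Encryption key generation and management",
    "LOG-03": "Audit logging and monitoring",
    "BCR-04": "Business continuity and disaster recovery testing",
}

implemented_controls = {"IAM-01", "LOG-03"}  # what the organization can evidence today

def find_gaps(baseline: dict, implemented: set) -> dict:
    """Return baseline controls with no documented implementation."""
    return {cid: desc for cid, desc in baseline.items() if cid not in implemented}

for control_id, description in find_gaps(baseline_controls, implemented_controls).items():
    print(f"GAP {control_id}: {description}")
```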

  5. Plan for continuous compliance

Consumption-based cloud models require a new level of governance because legacy change controls are too slow. Instead, you need continuous compliance, in the form of software that is constantly watching your environment and controlling the consumption and usage of cloud services.

For example, AWS EC2 servers and attached storage (EBS) form the basic server/storage configuration. If you delete a server and do not specifically tell AWS to delete the storage, the storage is orphaned. Over time, orphaned block storage becomes a risk to the company. Unless properly governed, unknown storage volumes cost money and can contain sensitive data.
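As a concrete example of the kind of check continuous-compliance tooling automates, the following sketch uses boto3 to list EBS volumes sitting in the "available" state – that is, block storage not attached to any instance. It is read-only and deliberately minimal; a real control would also weigh tags, encryption status, and volume age before flagging or deleting anything.

```python
# Sketch: find orphaned (unattached) EBS volumes in one region using boto3.
# Read-only -- it reports candidates; it does not delete anything.
import boto3

def find_orphaned_volumes(region_name: str = "us-east-1"):
    ec2 = boto3.client("ec2", region_name=region_name)
    paginator = ec2.get_paginator("describe_volumes")
    orphans = []
    # Volumes in the 'available' state are not attached to any EC2 instance.
    for page in paginator.paginate(Filters=[{"Name": "status", "Values": ["available"]}]):
        for vol in page["Volumes"]:
            orphans.append((vol["VolumeId"], vol["Size"], vol.get("Encrypted", False)))
    return orphans

if __name__ == "__main__":
    for volume_id, size_gib, encrypted in find_orphaned_volumes():
        print(f"{volume_id}: {size_gib} GiB, encrypted={encrypted}")
```

Run on a schedule and wired into ticketing or auto-remediation, a check like this becomes one of the many small software controls that continuous governance is built from.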

Continuous governance is a combination of security, risk, compliance, and finance controls that are implemented using software. And like any software controls, managing the profiles is where you gain your greatest benefits in the form of consistent, repeatable outcomes with fewer errors.

Best practices are just the starting point

Following these five best practices will start you on the road to a successful cloud transformation that integrates well with your private environment. But remember, successfully navigating this transition is difficult. Before starting your overall cloud program, make sure you have assembled a team with the experience, tools, and processes necessary to execute the move successfully.

For more information on a smooth transition to hybrid cloud or multi-cloud environments, visit the CTP website. Follow this link to learn more about how to ease your digital transformation. Visit HPE OneView and HPE OneSphere to read how to simplify and automate infrastructure and multi-cloud management.


About the Author

Robert Christiansen is a cloud technology leader, best-selling author, mentor, and speaker. In his role as VP of Global Cloud Delivery at Cloud Technology Partners (CTP), a Hewlett Packard Enterprise company, Christiansen oversees the delivery of professional services and innovation for HPE’s public cloud business. His client base includes Fortune 500 and Global 2000 customers, and his team’s leadership model encompasses the entire IT transformation journey, from inception to execution. To read more articles by Robert, please visit the HPE Shifting to Software-Defined blog.


A Year-Over-Year Comparison of the Gartner Magic Quadrant for Hyperconverged Infrastructure


“If you don’t know where you are going, any road can take you there.”

–Lewis Carroll, Alice in Wonderland 

In business, it’s helpful to have a roadmap showing where you are going and how to get there. Yet knowing which path to take is often challenging, especially in the fast-moving and ever-changing tech world. Sometimes it’s easier to see where you are going if you have a sense of where you have been and the progress you have made along the way.

For hyperconverged infrastructure (HCI) customers, the Gartner Magic Quadrants for HCI can provide this insight. The reports offer a snapshot of where the industry is now and where vendors rank among Gartner’s quadrants. Yet if customers look at the reports year over year, they can also gain interesting insights into where the industry is headed and which vendors are on the right track.

When the 2018 Gartner Magic Quadrant for Hyperconverged Infrastructure was released, I reviewed the reports from previous years. This article provides some analysis over a two-year period. 

A deep-dive into the report 

The first step in analyzing the reports year-over-year is to compare the 2017 quadrant to the latest, 2018 quadrant. When you create a comparison view, as the diagram on the left shows (an aggregate of two years – 2017 shown in yellow and the latest 2018 results shown in red), you can start to see an interesting story form that you cannot see from a single year’s report.


For instance, in the 2018 Magic Quadrant, the lower left quadrant (Niche Players) collected the greatest population of vendors. Four new vendors joined this category in 2018, one former vendor dropped out (HTBase), and another vendor left early in 2019 (Maxta). Throughout 2019, I expect to see more vendors exiting than entering the Niche Players quadrant. Why? As this category matures, it becomes harder for newer vendors to catch up.

Another point of note is that although these “Niche Players” come to market with some intriguing and well-featured products, few were able to last the year. This area of the Quadrant is the riskiest for vendor selection and investment, so caution is needed.  

In contrast, the upper right quadrant (Leaders) tells quite a different story. In 2018, one existing vendor (Cisco) entered this section, and two other vendors (VMware and HPE) made progress up and to the right. There also appears to be interesting movement for Dell, as it shifted up and to the left towards the Challengers quadrant, which I feel is somewhat in the wrong direction. And lastly, the 2017 leader, Nutanix, stayed in exactly the same position as the previous year without any movement. Movement up and to the right typically indicates continued investment and new innovation in the HCI solution, so keep that in mind when comparing a vendor’s progress year over year.

Critical Capabilities Report provides even more detail 

Many factors go into choosing a vendor and solution that is right for your individual situation, and the leader in the market may not be best for every business. That is why I always suggest reviewing the accompanying Critical Capabilities Report from Gartner after a Magic Quadrant is released. The Critical Capabilities Report can be even more important than the Magic Quadrant itself, because it provides a more detailed look at the differences between each vendor’s solutions. By viewing this report, customers can better evaluate which individual capabilities are most important to their business.

Not always an apples-to-apples comparison 

In the 2018 Critical Capabilities Report for HCI, you can start to see the DNA of each vendor and compare how they rate across six different use case categories. I always encourage readers to dig a little deeper with their research. Why? These types of comparisons show that not all vendor features are compared equally. Although Gartner tries very hard to categorize all functionality into the use cases, some vendors’ functionality doesn’t always fit neatly into each of Gartner’s categories.

For example, Gartner rated Hewlett Packard Enterprise (HPE) low in one section of the Critical Capabilities Report for not providing cloud backups. This is somewhat of a binary category with a yes or no response, without the opportunity to describe a solution that might prove to be a better fit for customers. In this case, HPE SimpliVity provides a redundant site for disaster recovery, which gives companies assurance that they will be able to operate from a remote site in case of an unplanned outage at the main site. Some customers see this feature as more cost-effective than cloud backups, but it is a feature that has not yet been able to carry much weight in the Gartner Magic Quadrant and Critical Capabilities Reports. Other vendors have similar challenges in other use cases, hence my advice to conduct in-depth research before making a purchasing decision.

Looking back can provide 20/20 insights for the future 

The 2018 Gartner Magic Quadrant and Critical Capabilities for Hyperconverged Infrastructure Reports are a great place to start your investigation if you are looking into HCI. But keep in mind that to truly take advantage of all that these reports have to offer, you may have to look back to get a full year-over-year analysis. I encourage you to delve into the accompanying capabilities report so you have a full picture of what all the different products are able to provide. 

In closing, I want to thank Gartner Inc. for creating the reports. I’d also like to thank all the customers, partners, and vendors whose contributions make these reports possible.

To learn more about hyperconverged infrastructure, check out the Hyperconverged for Dummies guide or the Gorilla Guide to Hyperconverged Infrastructure. 

About the Author

Chris Purcell drives analyst relations for the Software-Defined and Cloud Group at Hewlett Packard Enterprise. The Software-Defined and Cloud Group organization is responsible for marketing for HPE Synergy, HPE OneView, HPE SimpliVity hyperconverged solutions, and HPE OneSphere. To read more from Chris Purcell, please visit the HPE Shifting to Software-Defined blog.


Developers and IT Ops: Finding the Hybrid Cloud Common Ground


Trying to get different teams to collaborate is never an easy feat

But no two groups are more notoriously difficult to bring together than developers and IT operators. After all, the two groups have had a contentious relationship for quite some time. Historically, developers want to release features as quickly and efficiently as possible, while IT operators want to ensure things are done reliably and securely, and that they meet corporate and compliance policies.

In a recent BriefingsDirect podcast, Dana Gardner, Principal Analyst at Interarbor Solutions, talks with Daniel Newman, Principal Analyst and Founding Partner at Futurum Research and Analysis, about how developers and IT operators can find newfound common ground around making hybrid cloud the best long-term economic value for their organizations. 

Two worlds colliding 

Gardner opens the interview by asking Newman to give his thoughts on the ever-increasing separation between DevOps and IT Ops. “We now have two worlds colliding. You have a world of strict, confined policies. That’s the ‘ops’ side of DevOps,” Newman explains. “You also have the developers who have been given free rein to do what they need to do; to get what they need to get done, done.” But, Newman explains, the industry is now experiencing a massive shift that requires more orchestration and coordination between these groups. 

With the introduction of new cloud options, a lack of collaboration between developers and IT Ops leads businesses to experience out-of-control expenses, out-of-control governance and security, and difficulty taking full advantage of private, public, or hybrid cloud.

“There is a big opportunity [for better cloud use economics] through better orchestration and collaboration, but it comes down to the age-old challenges inside of any IT organization: having Dev and IT Ops share the same goals,” Newman explains. But he does offer some good news, “New tools may give them more of a reason to start working in that way.” 

It’s not just about DevOps 

Gardner brings up two other areas that could benefit from collaboration — data placement and data analytics. According to Gardner, “We talked about trying to bridge the gap between development and Ops, but I think there are other gaps, too.” 

Newman agrees. In terms of data placement, Newman says, “Developers are usually worried about data from the sense of what can they do with that data to improve and enhance the applications.”  But when you add in elements like machine learning and artificial intelligence (AI), it ups the compute and storage requirements. “With all of these complexities, you have to ask, ‘Who really owns this data?’” Newman adds. 

IT Ops, according to Newman, typically worries only about capacity and resource performance when it comes to data placement. 

Newman goes on to explain that businesses can’t leave the data lifecycle to developers and IT operators — business leadership should be asking, ‘We have all this data. What are we doing with it? How are we managing it? Where does it live? How do we pour it between different clouds? What stays on-premises and what goes off? How do we govern it? How can we have governance over privacy and compliance?’ 

“So your DevOps group just got bigger, because the data deluge is going to be the most valuable resource any company has. It will be, if it isn’t already today, the most influential variable in what your company becomes,” Newman explains. 

And, he says, it all comes back to shared tools, shared visibility, and shared goals. 

Data analytics is another thing altogether, Newman shares. First, the data from the running applications is managed through pure orchestration in DevOps, he explains. “And that works fine through composability tools. Those tools provide IT the ability to add guard rails to the developers, so they are not doing things in the shadows, but instead do things in coordination.” The disconnect comes from the bigger analytical data. “It’s a gold mine of information. Now we have to figure out an extract process and incorporate that data into almost every enterprise-level application that developers are building,” he says. 

Is hybrid cloud the answer? 

The bottom line is that there are many moving parts of IT that, in its current state, remain disjointed. “But we are at the point now with composability and automation of getting an uber-view over services and processes to start making these new connections – technically, culturally, and organizationally,” explains Gardner. 

One way the industry is moving toward unity is through multi-cloud or hybrid cloud. And according to Newman, this is a welcome shift. Multi-cloud brings together the best components of each cloud model and allows businesses to choose where each application should live based on its unique needs.  

“However, companies right now still struggle with the resources to run multi-cloud,” he says. They don’t know which cloud approach is best for their workloads because they are not getting all of the information delivered as a total, cohesive picture. “It depends on all of the relationships, the disparate resources they have across Dev and Ops, and the data can change on a week-to-week basis. One cloud may have been perfect a month ago, yet all of a sudden you change the way an application is running and consuming data, and it’s now in a different cloud.” 

What is needed is a unified view that allows everyone, including developers and operations (and beyond), to make informed decisions that take each part of cloud deployment into account. 

A move in the right direction 

Newman and Gardner both agree that HPE Composable Cloud is a step in the right direction. HPE Composable Cloud is a hybrid cloud platform that delivers composability across the data center with an open, integrated software stack, giving businesses the speed, scale, and economics of public cloud providers. As a turnkey cloud platform with compliance and security, it offers enhanced capabilities such as end-to-end automation, built-in AI operations, an innovative fabric built for composable environments, and hybrid cloud management ready to scale. 

According to Newman, “What HPE is doing with Composable Cloud takes the cloud plus composable infrastructure and, working through HPE OneSphere and HPE OneView, brings them all into a single view.” This type of unified view delivers the most usable and valuable dashboard of cloud usage data. And, Newman thinks, this type of single view can bridge the gap between IT groups that seem to have trouble collaborating. “Give me one view, give me one screen to look at, and I think your Dev and Ops — and everybody in between – and all your new data and data science friends will all appreciate that view,” he explains. 

Gardner concludes the interview by reiterating Newman’s view. “What I have seen from HPE around the Composable Cloud vision moves a big step [in the right] direction. It might be geared toward operators, but ultimately it’s geared toward the entire enterprise, and gives the business an ability to coordinate, manage, and gain insights into all these different facets of a digital business.” 

To learn more about HPE Composable Cloud, check out Top 10 Reasons to Move to HPE Composable Cloud. To listen to Newman and Gardner’s full interview, click here. To read the transcript of the podcast, click here. 


About the Author

Chris Purcell drives analyst relations for the Software-Defined and Cloud Group at Hewlett Packard Enterprise. The Software-Defined and Cloud Group organization is responsible for marketing for HPE Synergy, HPE OneView, HPE SimpliVity hyperconverged solutions, and HPE OneSphere. To read more from Chris Purcell, please visit the HPE Shifting to Software-Defined blog. 


Working together to solve the challenges of hybrid and multi-cloud


According to a recent report, the hybrid cloud market is exploding

It is expected to grow from $44.60 billion in 2018 to $97.64 billion by 2023, at a Compound Annual Growth Rate (CAGR) of 17.0% during the forecast period. 

As hybrid cloud deployments unfold, organizations can expect to encounter some long-term challenges. In a recent podcast, Dana Gardner, Principal Analyst at Interarbor Solutions, and John Abbott, Vice President of Infrastructure and Co-Founder of The 451 Group, discuss the growth of hybrid cloud and the challenges the enterprise is facing. I found the interview insightful, especially their thoughts on increased complexity, uncontrolled costs, and an ever-widening skills gap. 

Solving the complexity problem 

As organizations seek a mix of hybrid and multi-cloud infrastructure, they are implementing cloud in a way that wasn’t anticipated years ago. “CAPEX to OPEX, operational agility, complexity, and costs have all been big factors,” explains Abbott. “Also, on-premises deployments continue to remain a critical function. You can’t just get rid of your existing infrastructure investments that you have made over many, many years.” 

Gardner suggests that technologies such as artificial intelligence (AI) and machine learning (ML) could help solve the hybrid cloud complexity issue. Abbott agrees that both present huge potential. “IT tools are in a great position to gather a huge amount of data from sensors and from usage data [and] logs, and pull that together. We can then more clearly see patterns and optimize in the future.” 

I think a combination of AI and automation is critical to simplify the complexity of hybrid and multi-cloud. Newer applications are incredibly complex with lots of moving parts; sometimes developers make hundreds of changes a day to a single application. With this type of complexity, traditional governance models just won’t work, which is why HPE InfoSight is so important. HPE InfoSight is an industry-leading predictive analytics platform that brings software-defined intelligence to the data center with the ability to predict and prevent infrastructure problems before they happen. 

Control costs, or they will control you 

Gardner and Abbott discuss at length the growing challenge of controlling cloud costs. “Cloud models can significantly reduce cost, but only if you control it. Sizes of instances, time slices, time increments, and things like that all have a huge effect on the total cost of cloud services,” Abbott explains. 

Abbott gives the example of the large number of people who may have authority to order services. “If you have multiple people in an organization ordering particular services from their credit cards, that gets out of control. I think Amazon Web Services (AWS) alone has hundreds of price points — things are really hard to keep track of.” 

To gain control over your cloud spending, Abbott believes IT admins need better management tools. “They need a single pane of glass — or at least a single control point — for these multiple services, both on-premises and in the cloud.” 
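As a small illustration of what a single control point can surface, the sketch below queries the AWS Cost Explorer API for last month's spend grouped by an "owner" cost-allocation tag. The tag key is an assumption, and a genuine multi-cloud view would of course need equivalent queries against every provider in use.

```python
# Sketch: last month's AWS spend grouped by an 'owner' cost-allocation tag.
# Assumes Cost Explorer is enabled and the 'owner' tag is activated for billing.
import boto3
from datetime import date, timedelta

def spend_by_owner():
    ce = boto3.client("ce")
    end = date.today().replace(day=1)                 # first day of this month
    start = (end - timedelta(days=1)).replace(day=1)  # first day of last month
    result = ce.get_cost_and_usage(
        TimePeriod={"Start": start.isoformat(), "End": end.isoformat()},
        Granularity="MONTHLY",
        Metrics=["UnblendedCost"],
        GroupBy=[{"Type": "TAG", "Key": "owner"}],
    )
    for group in result["ResultsByTime"][0]["Groups"]:
        tag_value = group["Keys"][0]          # e.g. "owner$jsmith"
        amount = group["Metrics"]["UnblendedCost"]["Amount"]
        print(f"{tag_value}: ${float(amount):,.2f}")

if __name__ == "__main__":
    spend_by_owner()
```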

Organizations do indeed need better multi-cloud management tools. Yet, in many organizations, IT needs to play a stronger role in managing application governance, because IT understands it better than anyone else. The enterprise still needs someone to pay attention to what is going on and proactively make decisions. IT needs to periodically look at where everything is deployed and make appropriate decisions, such as removing zombie machines that aren’t being used or moving an application from one cloud to another. When IT takes responsibility, they will naturally select the tools they need to become more efficient. 

What about the skills gap? 

Both analysts mention the current skills gap as a concern for today’s enterprise. Gardner wonders who in the enterprise has the knowledge and expertise to oversee all of the financial functions involved with multi- and hybrid cloud deployments. This person needs to understand the technology, and also the economic implications, such as forecasting and budgets. Gardner doesn’t believe the typical IT director or admin has those skills right now. 

Abbott thinks this role is evolving with a new generation of IT admins. “There are skill shortages, obviously, for managing specialist equipment, and organizations can’t replace some of those older admin types. So they are building up a new level of expertise that is more generalist.” He goes on to say that developers and systems architects will also need the help of new automation tools, management consoles, and control planes, such as HPE OneSphere and HPE OneView. 

Abbott believes that organizations should look to a more experienced partner to help them in this rapidly changing and complex environment. He references the recent acquisition of BlueData by Hewlett Packard Enterprise (HPE). “The experts in data analysis and in artificial intelligence (AI) and the data scientists coming up are the people that will drive this. And they need partners with expertise in vertical sectors to help them pull it together,” concludes Abbott. 

HPE is building a portfolio of products and services to help the enterprise digitally transform. Let HPE help you simplify your hybrid cloud experience with modern technologies and software-defined solutions. To get insights, trends, and characteristics of new hybrid cloud platforms – and to find out more about how the right solutions and the right partner enable an extension of the cloud experience across your business – check out the IDC white paper, Delivering IT Services in the New Hybrid Cloud: Extending the Cloud Experience Across the Enterprise. 


About the Author

Gary Thome is the Vice President and Chief Technology Officer for the Software-Defined and Cloud Group at Hewlett Packard Enterprise (HPE). He is responsible for the technical and architectural directions of converged datacenter products and technologies. Over his extensive career in the computer industry, Gary has authored 50 patents. 

To read more articles from Gary, check out the HPE Shifting to Software-Defined blog. 


An Integrated Network – Are You Sure It Means What You Think It Means?


Vendors, especially networking vendors, use the term “integrated” to imply that their technology works with other parts of the IT infrastructure ecosystem in a blended, cohesive manner.

This is especially true for cloud management platforms. However, every vendor seems to have a different idea of what the term means and the implications it has on the overall data center. In other words, an integrated network might not be as integrated as you think.

In the race for technology vendors to win market share, the devil is in the details.

The challenge for IT organizations

Businesses that are trying to build a private, on-premises cloud infrastructure want the same ease of use and management experience that they would get with a public cloud service. Providers such as Amazon Web Services (AWS), Microsoft Azure, and Google Cloud Platform build custom software programs to create highly integrated environments. Software-defined aspects of compute, storage, and network infrastructure all talk to each other behind the scenes via APIs, exchanging data about their own system state, health, and capabilities. Custom applications make for an intuitive end-user experience, but this option is not feasible for all companies.

However, a software-defined network is not only feasible, it can provide true integration, and help drive a better overall cloud experience. The network is responsible for gluing together the compute and the storage resources in a way that matters to the applications. The network is at the center of a good private cloud strategy, and plays an especially important role in the area of integration.

What comprises a true network integration?

Networking vendors make a variety of claims with respect to their level of “integrated-ness” in the IT infrastructure ecosystem. But integration has a number of facets that together constitute a truly integrated system. To achieve a true on-premises cloud, integration needs to evolve. It needs to be a fully bi-directional information exchange, and it needs to cover all three layers of integrated networking requirements.

  1. Bootstrap Layer

As customers expect more ease of use, they insist on easier installation for components, and they expect vendors to work together to provide single installation and bootstrapping mechanisms that resolve common chicken-and-egg issues. There are very few examples of integrated bootstrapping in the industry outside of pre-engineered converged systems. However, this is changing.

  2. Lifecycle Layer

Lifecycle refers to the various phases a particular resource might go through; compute and storage have diverse lifecycle needs. For example, on the compute side, virtual machines need to be created, modified or moved, and eventually destroyed. For storage, a given data store might need to go through similar events and may also need to be rebuilt if the underlying drive system is changed. Increasingly, higher-speed NVMe-based memory systems are being used as the underlying storage mechanism, which can have explicit end-to-end lossless requirements for packet transport. The network needs to understand all of these lifecycle events to provide a fully integrated environment.

A truly integrated network accommodates the lifecycle requirements of both compute and storage, whether that entails auto-configuration of network for connectivity purposes, or dynamic response to events that need network resources such as bandwidth, low latency, or dedicated network paths.

A networking vendor that focuses only on compute events, such as VLAN auto-configuration for new VMs or dynamic updates as VMs move, is overlooking a considerable portion of the infrastructure.
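To illustrate what lifecycle-driven integration might look like, here is a hedged sketch of an event handler that maps compute and storage lifecycle events to fabric actions. The event names and the FabricClient interface are hypothetical, invented for illustration; they do not represent any specific vendor's API.

```python
# Sketch of a lifecycle-aware network handler. The event names and the
# FabricClient interface are hypothetical, shown only to illustrate the idea
# that the fabric should react to both compute and storage lifecycle events.

class FabricClient:
    """Placeholder for a software-defined fabric's management API."""
    def ensure_vlan(self, vlan_id: int, ports: list):
        print(f"Ensuring VLAN {vlan_id} on ports {ports}")

    def reserve_lossless_path(self, src: str, dst: str, min_bandwidth_gbps: int):
        print(f"Reserving lossless path {src} -> {dst} at {min_bandwidth_gbps} Gbps")

def handle_event(fabric: FabricClient, event: dict):
    """Map compute/storage lifecycle events to network configuration actions."""
    if event["type"] == "vm.created":
        fabric.ensure_vlan(event["vlan_id"], event["host_ports"])
    elif event["type"] == "vm.moved":
        fabric.ensure_vlan(event["vlan_id"], event["new_host_ports"])
    elif event["type"] == "datastore.rebuilt":
        # NVMe-backed storage may need end-to-end lossless transport.
        fabric.reserve_lossless_path(event["initiator"], event["target"], 25)

handle_event(FabricClient(), {"type": "vm.created", "vlan_id": 120, "host_ports": ["eth1/7"]})
```

The point is the shape of the integration: storage events get first-class treatment alongside compute events, rather than being an afterthought.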

  3. Converged Management Layer

As a best practice, private clouds should require as few management consoles as possible. Many of the leading cloud management platforms allow the other systems to be managed from that platform via third party plug-ins. A fully integrated network should provide as much management integration into that platform as possible. This should not stop at the read-only viewing of network information but extend into full operational control of the system.

Dig deep

There is more to network integration than the label “integrated.” To achieve a true on-premises cloud, the network needs to have a fully bi-directional information exchange and needs to cover all three layers of integrated networking requirements: bootstrap, lifecycle, and converged management.

When evaluating your own network, ask the tough questions and dig deep into your integration. You may discover it is not as integrated as you think it is. True integration will enable unencumbered workload performance and quantifiable business value.

Hewlett Packard Enterprise (HPE) has worked with customers to better understand their needs and believes that the integrated network is an energized mesh that is responsive to fluctuating needs, scalable to support growth, and simple to manage. The HPE Composable Fabric is purpose-built to integrate your network with the infrastructure and deliver results without iteration.

To learn more about networking innovation, read Gartner’s whitepaper, Look Beyond Network Vendors for Network Innovation. And, to learn more about HPE Composable Fabric, check out: HPE Composable Fabric: Data Center Fabric Explained.


About the author

Mat Mathews is the Vice President and General Manager for Composable Fabric at Hewlett Packard Enterprise. Mat has spent 20 years in the networking industry observing, experimenting, and ultimately honing his technology vision. Prior to his current role at HPE, Mat was co-founder and VP of Product Management at Plexxi. He began his career as a software engineer and holds a Bachelor of Science in Computer Systems Engineering from the University of Massachusetts at Amherst. To read more from Mat, please visit the Shifting to Software-Defined blog.


The Resurgence of On-Premises Infrastructure, Along with More Options


It wasn’t that long ago that industry experts were quick to declare on-premises computing a thing of the past

It seemed like everyone was moving toward the public cloud, and pundits were calling on-premises computing dead and buried. 

Fast forward a few years, and those very same experts were beginning to backtrack. Maybe on-premises infrastructure wasn’t as dead as we’d thought. An interesting headline appeared on Forbes.com: Reports of the Data Center’s Death Are Greatly Exaggerated. The author explains that although public cloud is pervasive, the data center is actually thriving. Industry analysts seem to agree. Even public cloud vendors recognized the market is not satisfied with only public cloud – and they began pursuing on-premises opportunities. With the announcement of AWS Outposts, AWS, the largest public cloud vendor, admitted that not everything is moving to the public cloud. 

The future is all about choice 

Previously, technology limited organizations to two choices – either go to the public cloud or stay on premises. In the future, businesses large and small will have a wide range of options of locations and consumption models. And the lines separating all of those options will blur as IT embraces the cloud experience. 

As organizations evaluate their best options, many understand that cloud is not a destination. Instead, it is a new way of doing business focused on speed, scalability, simplicity, and economics. This new business model allows IT to distribute data and apps across a wide spectrum of options. It also shifts the focus of IT away from optimizing infrastructure and toward positively impacting applications: IT teams will manage applications to deliver the best experience wherever the infrastructure is located. 

Choose what’s best for each individual workload 

In the future, organizations will place data and applications where each performs best. And constantly changing requirements for service delivery, operational simplicity, guaranteed security, and cost optimization will dictate placement. For example, the General Data Protection Regulation introduced far-reaching changes in how global businesses secure customer data. Consequently, these changes led organizations to make adjustments in how they deployed applications. 

Organizations that have done their homework will deploy applications and data where they make the most sense – at any of a myriad of points along the spectrum between public cloud and on premises. Some may choose colocation because it provides the infrastructure and security of a dedicated data center without the costs of maintaining a facility. Other workloads are best served in a private cloud using traditional on-premises infrastructure with consumption-based pricing. Another application may demand high security and control along with flexibility and scalability, which would make an on-premises private cloud the best alternative. 

Having choices clearly gives organizations better deployment and consumption options for each individual application. And as needs change, deployment and consumption models will also change. The beauty of having numerous choices is that it gives organizations more flexibility to manage costs, security, and technical needs. 

More choices may equal more complexity 

The downside to all of this choice is that it can mean more complexity, as each deployment model is different. And as new technologies are introduced, the lines between all of these options are often obscured. For example, consumption-based pricing gives customers the flexibility to pay for what they use but still manage the infrastructure themselves, which fits neither the traditional on-premises nor the public cloud model. 

As technology advances and choices continue to expand, it’s often difficult for an organization to adapt. To solve this challenge, they need a new mindset, one that is agile, adjusting to IT changes quickly. Too many times, they are constrained by legacy thinking, infrastructure, and tools. Better training and tools from industry experts can solve these issues. 

Required: Expertise and a trusted advisor 

To succeed in this agile yet often complex environment, many businesses will need valuable expertise. They should seek out partners that provide more than just the two options of on-premises or public cloud. Instead, savvy organizations will choose experts who provide solutions along the entire spectrum of deployment options. For instance, a vendor such as Hewlett Packard Enterprise (HPE) provides a wide range of solutions, including traditional on-premises infrastructure, private cloud (both owned and rented with consumption-based pricing), and colocation. 

A successful organization will also need tools and professional services to help support as many options as possible. HPE advisory services can help identify the best place to run applications, while HPE OneSphere, an as-a-service multi-cloud management platform, can help organizations gain more control over the complexity of hybrid clouds. In addition, Cloud Technology Partners (CTP), an HPE company, works with IT teams to enhance learning, conquer cloud challenges, and accelerate successful digital transformation. 

It’s time to stop limiting choices to only on-premises versus public cloud. Instead, consider all the options available for new opportunities and long-term success. Compliance, cost, performance, control, complexity of migration – all of these factors will determine the right mix for deployment of data and applications. 


About Gary Thome 

Gary Thome is the Vice President and Chief Technology Officer for the Software-Defined and Cloud Group at Hewlett Packard Enterprise (HPE). He is responsible for the technical and architectural directions of converged datacenter products and technologies. Over his extensive career in the computer industry, Gary has authored 50 patents. 

To read more articles from Gary, check out the HPE Shifting to Software-Defined blog. 


It’s a Brave, New Cloud World Out There. Are You Ready?


Cloud adoption across an enterprise is a major undertaking, fundamentally changing how IT works

What organizations used to accomplish with manpower and manual labor is now transformed and automated. Using software, a business can manage its datacenter as code. And it’s not just compute, storage, and fabric; it’s all the ancillary software services that surround it – identity, encryption, logging, monitoring, and continuous governance. What used to take months to plan and develop now takes days and weeks to implement.

But, there’s a downside. Organizations aren’t well-prepared for or knowledgeable about this monumental shift in how the new IT needs to work. It’s a brave new DevOps world out there, and organizations are struggling to be successful within it.

As I travel the world, I see the same problems play out time and again—and many of these challenges involve the changing roles of people within an IT organization. In this article, I describe how to identify and embrace these roles as you migrate to the cloud and transform your business.

It’s a new world, so you need new roles

Current roles are evolving to encompass new tasks brought on by the daily operations in the public cloud. One of the biggest problems for organizations is a lack of skilled people. And when qualified people are found, they are often expensive. To combat this challenge, many organizations choose to develop a training, up-skilling, and enablement process—an expensive investment in people and a process that will take years.

Changing roles requires people to think differently, transitioning old skills to new skills. The first step in finding the right skills is to identify what you need. To help you determine the learning tracks necessary for your people, I’ve structured the new cloud roles into eight fundamental skill zones:

  • People
  • Data
  • Innovation and new apps
  • Existing portfolio and apps
  • Strategy
  • Security
  • DevOps practice
  • Operations and service quality

The chart below lists these eight skill zones; under each, I break down the skills needed per zone.

[Chart: the eight cloud skill zones, with the specific skills needed in each zone]

This chart contains a wealth of information you can use to identify the roles you need to train your people for a successful cloud transformation. For example, let’s review the top left skill zone: people. In this section, you will see how the roles of human resources (HR) and learning and development (L&D) staff should change.

I list six key skill areas that will need retraining under the people zone. HR and L&D will want to invest in cloud talent enablement programs, along with rethinking what curriculums they offer employees for continuous learning. The people in this group must develop a training initiative targeted at each new role from the other seven areas listed. The goal of HR and L&D staff should be to ensure learning services for this new cloud world are constantly available. They must find and provide educational resources such as virtual instructor-led training, podcasts, webinars, and progressive learning management systems (LMS).

HR and L&D must also understand all of the new cloud roles and weigh different needs in their recruiting and compensation packages. And finally, they will want to look at talent retention differently by determining how to engage current employees better in their new cloud roles.

Break down the silos – the new roles are interconnected

A key thing to keep in mind is that everyone in each skill zone interacts with the others. For example, let’s say you are a developer writing a new application. You’ve been trained in all six areas in your specialty: you understand the new cloud tools and software, automation techniques, platform as a service (PaaS) management, along with the other key areas listed.

But…if you haven’t thought about the economics involving your app, you may not last long. As soon as your CFO (grouped in the top right zone under strategy) sees the latest cloud bill, all of your hard work may very well be scrapped due to runaway deployment costs.

In this new cloud world, no one is an island; everyone is co-dependent on everyone else. And communication between all eight skill zones is only the beginning. Those in each zone must have a basic knowledge of the roles and responsibilities in each of the other zones.

Successfully navigating the new cloud world

Cloud adoption across an enterprise is a major undertaking. Organizations are finding themselves in this brave, new cloud world without the proper compass to navigate through it. To be successful, you must start at the beginning – by first identifying key roles and the skills you will need.

This article is the fourth in a series on how to train your employees for a successful cloud transformation. You can read the first three articles here: Admitting you have a problem with your cloud transformation, 5 proven tactics to break up the cloud deployment logjam, and IT Operations and Developers: Can’t we all just get along. For more information on a smooth transition to multi-cloud, visit the CTP website. To learn more about how to ease your digital transformation, click here.


About the Author

Robert Christiansen is a cloud technology leader, best-selling author, mentor, and speaker. In his role as VP of Global Cloud Delivery at Cloud Technology Partners (CTP), a Hewlett Packard Enterprise company, Christiansen oversees the delivery of professional services and innovation for HPE’s public cloud business. His client base includes Fortune 500 and Global 2000 customers, and his team’s leadership model encompasses the entire IT transformation journey, from inception to execution. To read more articles by Robert, please visit the HPE Shifting to Software-Defined blog.


Two IT Industry Analysts Discuss Taming Multi-Cloud Complexity


Much has changed for businesses in the last 40 years

In the 1980s, personal computer growth led to microcomputers (servers), and by the 1990s, data centers were commonplace. Then, virtualization and the need to process an explosion of data fueled data center growth in the early 2000s. When Amazon launched its commercial web service (EC2) in 2006, cloud computing dramatically changed how businesses handle their data – and their businesses.

As an IT industry analyst, Martin Hingley, President and Market Analyst at IT Candor Limited, based in Oxford, UK, had a front row seat to all of this change. In a recent BriefingsDirect podcast, Dana Gardner, Principal Analyst at Interarbor Solutions, discusses some of these changes with Hingley. The two analysts examine how artificial intelligence, orchestration, and automation are helping tame complexity brought about by continuous change.

After 30 years of IT data center evolution – are we closer to simplicity?

Gardner began the interview by asking Hingley if a new era with new technology is helping organizations better manage IT complexity. Hingley responded, “I have been an IT industry analyst for 35 years, and it’s always been the same. Each generation of systems comes in and takes over from the last, which has always left operators with the problem of trying to manage the new with the old.”

Hingley recalled the shift to the client/server model in the late 1980s and early 1990s with the influx of PC servers. “At that point, admins had to manage all of these new systems, and they couldn’t manage them under the same structure. Of course, this problem has continued over time.”

Management complexity is especially difficult for larger organizations because they have such a huge mix of resources. “Cloud hasn’t helped,” Hingley explained. “Cloud is very different from your internal IT stuff — the way you program it, the way you develop applications. It has a wonderful cost proposition; at least initially. But now, of course, these companies have to deal with all of this complexity.” Managing multi-cloud resources (private and public) combined with traditional IT is much more difficult.

Massive amounts of data: get your house in order using AI

Additionally, consumers and businesses create massive amounts of data, which are not being filtered properly. According to Hingley, “Every jetliner flying across the Atlantic creates 5TB of data; and how many of these fly across the Atlantic every day?” In order to analyze this amount of data properly, we need better techniques to pick out the valuable bits of data. “You can’t do it with people. You have to use artificial intelligence (AI) and machine learning (ML).”

Hingley emphasized how important it is to get a handle on your data – not only for simplicity, but also for better governance. For example, the European Union (EU) General Data Protection Regulation (GDPR) reshapes how organizations must handle data, which has far-reaching consequences for all businesses.

“The challenge is that you need a single version of the truth,” explains Hingley. “Lots of IT organizations don’t have that. If they are subpoenaed to supply every email that has the word Monte Carlo in it, they couldn’t do it. There are probably 25 copies of all the emails. There’s no way of organizing it. Data governance is hugely important; it’s not nice to have, it’s essential to have. These regulations are coming – not just in the EU; GDPR is being adopted in lots of countries.”

Software-defined and composable cloud

Along with AI, organizations will also need to create a common approach to the deployment of cloud, multi-cloud, and hybrid-cloud, thereby simplifying management of diverse resources. As an example of such a solution, Gardner mentioned the latest composable news from Hewlett Packard Enterprise (HPE).

Announced in November 2018, the HPE Composable Cloud is the first integrated software stack built for composable environments. Optimized for applications running in VMs, containers, clouds, or on bare metal, this hybrid cloud platform gives customers the speed, efficiency, scale, and economics of the public cloud providers. These benefits are enabled through built-in AI-driven operations with HPE InfoSight, intelligent storage features, an innovative fabric built for composable environments, and HPE OneSphere, the as-a-Service hybrid cloud management solution.

“I like what HPE is doing, in particular the mixing of the different resources,” agreed Hingley. “You also have the HPE GreenLake model underneath, so you can pay for only what you use. You have to be able to mix all of these together, as HPE is doing. Moreover, in terms of the architecture, the network fabric approach, the software-defined approach, the API connections, these are essential to move forward.”

Automation and optimization across all of IT

New levels of maturity and composability are helping organizations attain better IT management amidst constantly changing and ever-growing complex IT environments. Gaining an uber-view of IT might finally lead to automation and optimization across multi-cloud, hybrid cloud, and legacy IT assets. Once this challenge is conquered, businesses will be better prepared to take on the next one.

To hear the full interview, click here. To learn more about the latest insights, trends and challenges of delivering IT services in the new hybrid cloud world, check out the IDC white paper, Delivering IT Services in the New Hybrid Cloud: Extending the Cloud Experience Across the Enterprise.


About Chris Purcell

Purcell drives analyst relations for the Software-Defined and Cloud Group at Hewlett Packard Enterprise. The Software-Defined and Cloud Group organization is responsible for marketing for HPE Synergy, HPE OneView, HPE SimpliVity hyperconverged solutions, and HPE OneSphere. To read more from Chris Purcell, please visit the HPE Shifting to Software-Defined blog.



“Instant Everything” Poses 3 Unique Challenges for Datacenters at the Edge


We live in a world where digital needs are more immediate than ever

We don’t just want results now, we expect them. We play games and download movies on-demand, share photos and upload videos almost instantly. We don’t even like waiting a few extra seconds for data to travel from the core datacenter to our digital devices. This expectation for “instant everything” is driving a change in where data resides, a shift from the datacenter to the edge.

To accommodate increased demand at the edge, remote office/branch office (ROBO) sites have evolved into modular datacenters. No longer limited to brick-and-mortar offices, edge sites can include scientific labs in the wilderness, manufacturing facilities, airplane cockpits, or IT closets on an oil rig or a cruise ship. These sites have unique challenges, and each challenge can increase costs significantly.

Challenge #1: Sizing and scaling at the edge

Poorly sized infrastructures are a notorious money drain. IT organizations rarely experience a budget surplus, yet business requirements demand the best technology available, which makes sizing even more important in datacenter modernization projects. When complex environments require a refresh, IT needs to estimate business growth rates and system requirements 2-3 years in advance to ensure sufficient data capacity and system performance. Too little capacity limits business growth; too much wastes money as components sit idle in the rack.

Accurate sizing becomes critical at the edge. Remote locations often have smaller budgets than the main office and very limited physical space. Organizations can run into literal walls when they attempt to modernize technology. Expanding physical facilities to accommodate new IT racks, if even possible, is expensive, and power and cooling costs for the additional devices quickly add up.
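A back-of-the-envelope projection makes the sizing point concrete: compound an assumed annual data growth rate over the refresh horizon, then add headroom. The starting capacity, growth rate, and headroom below are illustrative assumptions, not recommendations.

```python
# Back-of-the-envelope capacity projection for an edge site.
# All inputs are illustrative assumptions -- substitute your own measurements.

def projected_capacity_tb(current_tb: float, annual_growth: float,
                          years: int, headroom: float = 0.25) -> float:
    """Compound current usable capacity by annual growth, then add headroom."""
    projected = current_tb * (1 + annual_growth) ** years
    return projected * (1 + headroom)

if __name__ == "__main__":
    # e.g. a branch site using 20 TB today, growing ~30% per year, 3-year refresh
    need = projected_capacity_tb(current_tb=20, annual_growth=0.30, years=3)
    print(f"Plan for roughly {need:.1f} TB of usable capacity at refresh")
```

The same compounding logic applies to compute and network headroom; the advantage of scale-out infrastructure, discussed below, is that an imperfect projection is far less costly to correct.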

Challenge #2: Ongoing maintenance and management costs

With the increasing number of datacenters at the edge, it is not feasible to employ specialized IT staff with expertise in storage, servers, networking, and backup and recovery at every site, even for larger enterprises. While individual systems might be inexpensive up front, they are rarely cost effective in the long run. In fact, siloed systems can incur significant OPEX costs for ongoing maintenance and management, including travel and fees for multiple technology specialists, and downtime to synchronize systems.

Challenge #3: The cost of data protection

Fractured data protection strategies put data at risk, and that risk increases with the number of different methods at each site, such as backing up to tape or portable hard drives. Costs for specialists who are contracted to build and test DR plans can escalate if contractors are needed on-site to change tapes and keep backup apps compatible with servers and storage upgrades. For midsize businesses, these costs are prohibitive. If a remote site experiences a natural disaster or a cyberattack, these organizations have the additional costs of downtime and data loss.

How hyperconvergence keeps costs down at the edge

Hyperconverged infrastructure (HCI) addresses every one of these challenges, and these compact systems are increasingly popular at the edge. Easy to deploy and scale, the most comprehensive hyperconverged solutions consolidate storage, servers, and advanced data services, including data protection, inside each node. That means IT only has one integrated system to manage, one solution to upgrade and maintain, and only one tech refresh cycle.

Organizations can start with as few as two nodes per location and scale out incrementally, expanding the system as needs grow and eliminating the cost of over-provisioning for guessed-at future requirements. In an HPE SimpliVity infrastructure, hyperconverged devices at all sites are part of the same federation and can be monitored and managed together from a single interface by an IT administrator without specialized training. And multiple nodes protect data across the entire business, from core to edge, in case of a drive, node, cluster, or site failure.

A recent study revealed how cost-effective HPE SimpliVity hyperconverged infrastructure can be for ROBO sites. Enterprise Strategy Group set up a use case study with a single remote office deployment and found that the hyperconverged solution provides savings of 49% compared with a traditional SAN. They then increased the number of deployments and discovered that the savings grow as the number of sites grows. Efficiencies in the HCI architecture boosted total cost of ownership (TCO) savings to as much as 55% in remote and branch office deployments.
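
The way those per-site savings can compound is easy to see with a toy cost model. The dollar figures below are invented purely for illustration and are not taken from the ESG study:

    # Toy TCO comparison: traditional per-site SAN plus servers vs. a small
    # HCI cluster per site managed centrally. All dollar figures are
    # hypothetical; see the ESG study for measured results.

    def traditional_tco(sites, hw=60_000, per_site_admin=25_000, backup=15_000):
        # Each site carries its own hardware, local or contracted admins, and backup gear.
        return sites * (hw + per_site_admin + backup)

    def hci_tco(sites, nodes=50_000, per_site_overhead=5_000, central_admin=30_000):
        # Hardware and built-in data protection per site, plus one central admin team.
        return sites * (nodes + per_site_overhead) + central_admin

    if __name__ == "__main__":
        for sites in (1, 5, 20):
            t, h = traditional_tco(sites), hci_tco(sites)
            print(f"{sites:2d} sites: savings {100 * (t - h) / t:.0f}%")

Because the centralized management cost is paid once rather than per site, each additional location widens the gap between the two approaches.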

With so much importance placed on data at the edge, organizations need a simple, cost-effective infrastructure that can grow with them and keep their data available, no matter where it resides. Learn more about HCI for remote datacenters in the Gorilla Guide to Hyperconverged Infrastructure.


About the Author

Thomas Goepel is the Director of Product Management, Hyperconverged Solutions, at Hewlett Packard Enterprise. In this role, he is responsible for driving the strategy of HPE’s hyperconverged products and solutions, spanning from infrastructure components to management software. With over 26 years of experience in the electronics industry, the last 25 of them at Hewlett Packard Enterprise, Goepel has held various engineering, marketing, and consulting positions in R&D, sales, and services. To read more articles by Thomas Goepel, visit the HPE Shifting to Software-Defined blog.


IT’s Vital Role in Multi-cloud, Hybrid IT Procurement

Futuristic cloud technology digital Network technology background, communication technology illustration Abstract background

Changes in cloud deployment models are forcing a rethinking of IT’s role, along with the demand for new tools and expertise

For a time, IT seemed to be underappreciated or simply bypassed. With a swipe of a credit card, some business units and developers found a quick and easy way to get the services they needed—without engaging with their IT counterparts. Now things are changing. With the complexities of managing different workloads on and off premises, these same users are once again seeking the help of IT to cut through the chaos and ensure the right safeguards are in place.

In a recent BriefingsDirect podcast, Dana Gardner, Principal Analyst at Interarbor Solutions, talks with Rhett Dillingham, Vice President and Senior Analyst at Moor Insights and Strategy. The two discuss the changing IT procurement landscape as more organizations move to multi-cloud and hybrid IT operating models.

IT’s new role – back to the future with an emphasis on hybrid

Dillingham began the interview by explaining why procurement of hybrid and multi-cloud services is changing. “What began as organic adoption — from the developers and business units seeking agility and speed — is now coming back around to the IT-focused topics of governance, orchestration across platforms, and modernization of private infrastructure.” Organizations are now realizing that IT has to be involved in order to provide much-needed governance.

Gardner mentioned that many organizations are looking beyond simply choosing between public or private clouds. Dillingham agrees that there is a growing interest in hybrid cloud and multi-cloud. “Some organizations adopt a single public cloud vendor. But others see that as a long-term risk in management, resourcing, and maintaining flexibility as a business. So they’re adopting multiple cloud vendors, which is becoming the more popular strategic orientation.”

Of course, effectively managing multi-cloud and hybrid IT in many organizations creates new challenges for IT. Dillingham explains that as IT complexity grows, “The business units and developers will look to IT for the right tools and capabilities, not necessarily to shed accountability but because that is the traditional role of IT, to enable those capabilities. IT is therefore set up for procurement.”

Multi-cloud economics 101 — an outside expert can help

Dillingham suggests that organizations consider all types of deployments in terms of costs. Large, existing investments in data center infrastructure will continue to play a vital role, yet many types of cloud deployments will also thrive. And all workloads will need cost optimization, security, compliance, auditability, and customization.
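
One lightweight way to act on that advice is to compare each workload’s estimated cost across deployment options while flagging hard requirements that force a placement. The workloads, prices, and rules below are hypothetical examples, not figures from the podcast:

    # Illustrative workload-placement sketch: compare estimated monthly cost of
    # running a workload on existing on-premises capacity vs. a public cloud,
    # and flag compliance requirements that pin a workload on premises.
    # All figures and rules are hypothetical.

    WORKLOADS = [
        {"name": "erp",          "vcpus": 32, "must_stay_on_prem": True},
        {"name": "web-frontend", "vcpus": 8,  "must_stay_on_prem": False},
        {"name": "analytics",    "vcpus": 64, "must_stay_on_prem": False},
    ]

    ON_PREM_COST_PER_VCPU = 22.0  # amortized monthly cost per vCPU, hypothetical
    CLOUD_COST_PER_VCPU = 30.0    # on-demand monthly cost per vCPU, hypothetical

    for w in WORKLOADS:
        on_prem = w["vcpus"] * ON_PREM_COST_PER_VCPU
        cloud = w["vcpus"] * CLOUD_COST_PER_VCPU
        if w["must_stay_on_prem"]:
            placement = "on premises (compliance)"
        else:
            placement = "on premises" if on_prem <= cloud else "public cloud"
        print(f"{w['name']:12s}  on-prem ${on_prem:6.0f}  cloud ${cloud:6.0f}  -> {placement}")

Even a crude model like this surfaces the governance questions Dillingham raises: who sets the cost assumptions, who owns the compliance flags, and who reviews the results.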

He also recommends that businesses seek out consultants to avoid traps and pitfalls and to better manage their expectations and goals. Outside expertise is extremely valuable, not only from experience with customers in the same industry but also from work across industries. “The best insights will come from knowing what it looks like to triage application portfolios, what migrations you want across cloud infrastructures, and the proper set up of comprehensive governance, control processes, and education structures,” explains Dillingham.

Gardner added that systems integrators, in addition to some vendors, are going to help organizations make the transition from traditional IT procurement to everything-as-a service. “That’s more intelligent than trying to do this on your own or go down a dark alley and make mistakes. As we know, the cloud providers are probably not going to stand up and wave a flag if you’re spending too much money with them.”

To help in the governance of all deployments, Dillingham says IT will want to implement the ultimate toolset that will work across both public and private infrastructures. “A vendor that’s looking beyond just public cloud, like Hewlett Packard Enterprise (HPE), and delivers a multi-cloud and hybrid cloud management orientation, is set up to be a potential tour guide and strategic consultative adviser.”

Advice for optimizing hybrid and multi-cloud economics

Gardner concludes the interview by discussing how managing multi-cloud and hybrid cloud environments is incredibly dynamic and complex. “It’s a fast-moving target. The cloud providers are bringing out new services all the time. There are literally thousands of different cloud service SKUs for infrastructure-as-a-service, for storage-as-a-service, and for other APIs and third-party services.”

Dillingham advises that the best IT adoption plan comes down to the requirements of each specific organization. He cautions that the more business units the organization has, the more important it is that IT drives “collaboration at the highest organizational level and be responsible for the overall cloud strategy.” Collaboration must encompass all aspects, including “platform selection, governance, process, and people skills.”

HPE can help IT teams simplify their hybrid and multi-cloud experience with modern technologies and software-defined solutions such as composable infrastructure, hyperconvergence, infrastructure management, and multi-cloud management. Listen to the full podcast here. Read the transcript here.


About the Author

Chris Purcell drives analyst relations for the Software-Defined and Cloud Group at Hewlett Packard Enterprise. The Software-Defined and Cloud Group is responsible for marketing for composable infrastructure, HPE OneView, HPE SimpliVity hyperconverged solutions, and Project New Hybrid IT Stack.

To read more from Chris Purcell, please visit the HPE Shifting to Software-Defined blog.

 
