As multi-cloud adoption expands across the world, international companies need to consider how globalization affects business ecosystems and hybrid IT complexity.
Digital transformation has heightened the focus on data as value, so businesses need to reevaluate the legal and logistical obstacles that come with the global distribution of data and determine what infrastructure will be required to make it work.
Dana Gardner, Principal Analyst at Interarbor Solutions, sits down with leading IT industry analysts for his BriefingsDirect Voice of the Analyst podcast series to wrestle with the mounting complexities businesses face as they transform their IT strategy. To tackle the implications of globalization, Gardner spoke with Peter Burris, Head of Research at Wikibon in Palo Alto, CA.
To begin, Burris breaks down three main concerns that companies should be wary of when adjusting their IT strategy for globalization: latency, privacy, and control.
Latency and bandwidth
In a digital world, it’s easy for businesses to forget that the physics of cloud computing on a global scale — moving data of any size across different regions — can be extremely expensive due to bandwidth costs and latency issues. Factoring costs and application placement into a global strategy becomes complicated when a service is consumed thousands of miles from where the data resides.
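The cost pressure described here can be made concrete with a back-of-the-envelope calculation. The egress price and link speed below are illustrative assumptions for the sketch, not any provider’s actual rates:

```python
# Rough illustration of why moving data across regions is expensive.
# The per-GB egress fee and bandwidth figure are assumed values,
# not quotes from any cloud provider's price list.

def transfer_cost_and_time(data_gb, egress_usd_per_gb=0.09, bandwidth_gbps=1.0):
    """Return (cost in USD, time in hours) to move data_gb between regions."""
    cost = data_gb * egress_usd_per_gb
    seconds = (data_gb * 8) / bandwidth_gbps  # GB -> gigabits, divided by Gbps
    return cost, seconds / 3600

# Moving a 50 TB dataset between regions under these assumptions:
cost, hours = transfer_cost_and_time(50_000)
print(f"~${cost:,.0f} in egress fees, ~{hours:.0f} hours on a 1 Gbps link")
```

Even at these modest assumed rates, a single 50 TB move costs thousands of dollars and ties up a 1 Gbps link for days — which is why architectures that avoid moving the data in the first place are attractive.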
Gardner suggests that if substantial heavy lifting is required to provision adequate bandwidth, international businesses might consider a private cloud or on-premises approach with a small, local data center. Burris identifies two architectural means of achieving this: edge centers, which process data closer to its source for lower storage and processing costs, or a true private cloud.
“True private cloud is our concept for describing how the cloud experience is going to be enacted where the data requires, so that you don’t just have to move the data to get to the cloud experience,” explains Burris.
Ultimately, the less data that has to be moved, the less latency and bandwidth costs will impede a strong IT strategy. Deployment decisions won’t completely resolve these concerns, however; once the architecture is established, IT teams will need tools to keep everything running efficiently.
Privacy and legal implications
According to Burris, the second major concern in a global hybrid cloud scenario is privacy. Intellectual property is treated differently from one global region to another. Most major hyperscale cloud-service providers are US-based corporations, and under certain circumstances, governments can access hyperscalers’ infrastructure, assets, and data. Non-US companies may therefore worry that US-based firms are relatively liberal in how they share data with the government. A truly global-minded enterprise needs to think about privacy in a way that accommodates local markets and local approaches to property.
“All hyperscalers are going to have to be able to demonstrate that they can, in fact, protect their clients, their customers’ data, utilizing the regime that is in place wherever the business is being operated,” says Burris. Currently, however, the technology available for businesses to demonstrate this protection is quite immature. IT teams will need to confront both limited visibility and an overwhelming flow of data in order to show clients the status of their data.
Intellectual property control
The third issue Burris discusses involves who controls the data in a global, multi-cloud environment. When it comes to generating return, data doesn’t work like typical assets. Unlike money or time, data doesn’t follow traditional laws of scarcity: it can be copied and shared, and it appreciates as it is used and successfully integrated across multiple platforms.
According to Burris, the concern is that keeping data in one location carries two risks. First, the data can be copied and stolen. Second, rules and regulations exclusive to that jurisdiction may restrict how the data can be combined with other data sources, potentially making it less valuable.
Solving the hybrid IT complexity problem
Gardner summarizes the interview by affirming how complex many of these issues are. So many unknowns, combined with a lack of maturity in the market, also open up a huge opportunity for someone to step in and solve them.
Burris defines a need for infrastructure management that uses machine learning, automation, and advanced technology to ensure that the infrastructure is not only managed but also optimized and economized. So far, both he and Gardner point to Hewlett Packard Enterprise (HPE) as one player advancing toward this goal.
In June of this year, HPE announced Project New Hybrid IT Stack, a tool that will enable enterprises to achieve a digital transformation with broader visibility and management. This week at HPE Discover, the product was re-branded and launched officially as HPE OneSphere. Using this new tool, organizations will be able to seamlessly compose, operate, and optimize all workloads across on-premises, private, hosted, and public clouds. HPE OneSphere also provides role-based dashboards that offer business analytics. IT operators and application developers will be able to provision resources and services across their entire hybrid infrastructure with a few clicks or lines of code.
Hear what Dana Gardner had to say on the day of the launch in TechNative’s (@TechNative) tweet from November 29, 2017.
Hybrid IT already has enough moving parts — scattering a multi-infrastructure model across several global regions only adds to this complexity. With the right data management tools, IT teams can rein in cost, refocus architecture, and make sure enterprises are in compliance with international laws and client needs.
HPE has assembled an array of resources that are helping businesses succeed in a hybrid IT world. Learn about HPE’s approach to managing hybrid IT by checking out the HPE OneSphere page. And to find out how HPE can help you determine a workload placement strategy that meets your service level agreements, visit HPE Pointnext.
About the author
Chris Purcell drives analyst relations for the Software-Defined and Cloud Group at Hewlett Packard Enterprise. The Software-Defined and Cloud Group organization is responsible for marketing for composable infrastructure, HPE OneView, HPE SimpliVity hyperconverged solutions, and HPE OneSphere.