

How one company increased productivity through better infrastructure management


Those in IT are usually quite adept at juggling — keeping lots of balls in the air to ensure the organization’s entire IT infrastructure operates as efficiently as possible

From providing desktop support to provisioning and updating compute, storage, and fabric resources, the job of the IT professional is never done. As the pace of innovation continues to accelerate and IT services become ever more complex, the demands on IT become greater every day. Businesses are now managing and consuming IT services across a hybrid infrastructure, and they’re trying to use infrastructure that is usually not designed for these demands. In addition, complex manual processes and non-integrated tools fail to provide the speed and simplicity needed to support current tasks, much less new ideas and applications.

To compete more successfully, CIOs want to move faster without worrying that their IT is holding them back. At the same time, the IT team wants to increase productivity by automating and streamlining tasks. And everyone wants less complexity, so they can spend more time innovating.

So what’s the answer? How can businesses move faster, remove management complexity, and increase productivity? To answer these questions, let’s look at a real world example of a business that achieved those goals with better IT infrastructure management.

No more juggling for Porsche Informatik

Porsche Informatik is one of Europe’s largest developers of custom software solutions for the automobile industry. With more than 1,600 virtual servers and 133 blade servers in two geographically dispersed data centers, Porsche Informatik provides IT services to 450 employees. With 500TB of storage and 12,000 end devices, its environment carries out 1.5 million automated tasks a month. Business-critical applications run across the entire data center, from physical Microsoft Windows clusters to VMware® HA clusters, including in-house developed and third-party programs.

To reduce complexity and streamline management, Porsche needed a single, integrated management platform. The company turned to HPE OneView, an infrastructure automation engine built with software-defined intelligence and designed to simplify lifecycle operations and increase IT productivity.

HPE OneView allowed Porsche to improve productivity by reducing new configuration deployment times by 90%. It also cut admin and engineer management time by 50% and sped up the detection and correction of routine problems by 50%. All of which freed IT staff from routine tasks, enabling them to react more quickly to business requirements and focus on innovation for the business.

An added benefit: a unified API

A key feature in HPE OneView is the unified API that provides a single interface to discover, search, inventory, configure, provision, update, and diagnose across all HPE infrastructure. A single line of code fully describes and can provision the infrastructure required for an application, eliminating time-consuming scripting of more than 500 calls to low-level tools and interfaces required by competitive offerings.
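To make the “single line of code” claim concrete, here is a minimal Python sketch of the kind of REST request the unified API accepts. The appliance hostname, template URI, and hardware URI below are hypothetical placeholders; the `/rest/server-profiles` endpoint and `X-Api-Version` header follow HPE OneView’s published REST conventions, but treat this as an illustration and consult the official API reference for exact details.

```python
import json

ONEVIEW_HOST = "oneview.example.com"   # hypothetical appliance address
API_VERSION = "1200"                   # X-Api-Version varies by OneView release

def build_profile_request(template_uri, hardware_uri, name):
    """Build the single REST request that asks the appliance to compose a
    server from a profile template. The template carries the BIOS, firmware,
    network, and storage settings, so one POST provisions the whole stack."""
    return {
        "method": "POST",
        "url": f"https://{ONEVIEW_HOST}/rest/server-profiles",
        "headers": {"X-Api-Version": API_VERSION,
                    "Content-Type": "application/json"},
        "body": json.dumps({
            "name": name,
            "serverProfileTemplateUri": template_uri,
            "serverHardwareUri": hardware_uri,
        }),
    }

# Hypothetical URIs; a real script would first look these up with GET calls.
request = build_profile_request("/rest/server-profile-templates/tmpl-web",
                                "/rest/server-hardware/enc1-bay3",
                                "web-node-01")
```

Compare this with scripting hundreds of calls to low-level element managers: because the template describes the full desired state, the client only names the template, the hardware, and the profile.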

HPE OneView capitalizes on the value a unified API can deliver through the HPE Composable Infrastructure Ecosystem. Together, HPE and its partners build solutions that let customers reduce the time they spend managing their data center or cloud environment. That means businesses can stop juggling and spend more time innovating. A growing list of ISV partners is taking advantage of the unified API in HPE OneView to automate their solutions.

Peter Cermak, IT systems engineer at Porsche Informatik, sums it up well. “The [HPE OneView] unified API and Global Dashboard provide a much better, intuitive view of our HPE infrastructure,” says Cermak. “Even people with only basic training can easily see the state of this part of our infrastructure. Not only do we now save a lot of time adding new servers and VLANs, it is also a fire-and-forget task. Previously, we had to re-check and debug profile-related issues but that is no longer necessary. In one operation, staff can configure many servers with identical settings and the time we save enables us to concentrate our work on customer requirements.”

To read Porsche Informatik’s complete success story, click here.


Paul Miller is Vice President of Marketing for the Software-defined and Cloud Group at Hewlett Packard Enterprise (HPE). To learn how to migrate with ease to HPE OneView, watch this webinar replay. To learn more about HPE OneView, download the free ebook, HPE OneView for Dummies.

To read more articles from Paul Miller, check out the HPE Converged Data Center Infrastructure blog.

10 ways to tell if an infrastructure solution is really composable


History has shown that whenever a new, innovative technology comes to market, there are both skeptics and early adopters who embrace it with open arms

For instance, when Apple introduced the first iPhone in 2007, I was skeptical. Why did I need my phone to take pictures and have a GPS to find my way? I had friends and colleagues who ran out to get one and raved about its capabilities. Fast forward a few years, and every cell phone company in the world offers a smartphone, and I have to admit, I don’t know where I’d be without mine.

The same pattern is true when it comes to composable infrastructure. First introduced as a concept in late 2015, some in the industry were skeptical and others saw the incredible potential it provided. Of course, competitors realized the remarkable possibilities for the technology and immediately began imitating it, calling their solutions “composable.”

Just like any new technology, it’s important for customers to be clear on what composable infrastructure really is and what it isn’t – because not just any piece of infrastructure can be classified as composable. To help you evaluate any solution calling itself composable, here are ten ways to tell if it really is:

1. Does it have a single infrastructure for all applications?

Any solution that claims to be composable must be able to give each application the exact optimized footprint it needs – one that provisions and runs workloads across virtual machines, bare metal, containers, and cloud-native applications.

2. Does it use all resources?

A truly composable infrastructure should provide fluid pools of compute, storage area network (SAN), local storage, and network fabric resources that can be continually aggregated, disaggregated, and composed based on the needs of the application.

3. Does the solution deliver software-defined intelligence?

Composable infrastructure should remove complexity by providing programmable, template-driven software intelligence in a single management platform. That software-defined intelligence should also be capable of self-discovery and auto-integration across all components of the infrastructure, so new resources are recognized and brought into the resource pools automatically.

4. Does it have a single unified API?

Today, each element of infrastructure in your data center probably has its own distinct API, requiring complicated code or management tools to configure and provision a single workload. True composability provides one simple and open RESTful API that allows you to abstract and control every element of infrastructure and easily plugs into other programming elements.

5. Is it compatible with a variety of tools, chosen from a growing ecosystem?

DevOps teams prefer sets of tools that allow them to rapidly provision and deploy applications. Those tools should still work in a composable infrastructure. A broad and growing ecosystem of tools should also be available (such as Ansible, Chef, Docker, Microsoft, OpenStack, Puppet, and VMware).

6. Does it deliver true infrastructure-as-code?

A composable solution should allow DevOps teams to provision and control physical resources from their applications, giving them true infrastructure-as-code ability that enables continuous deployment. Think of it as bare metal at the speed of cloud, with templates that unify the process for provisioning compute, connectivity, storage, and OS deployment in a single step – just like provisioning VM instances from the public cloud.
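The template idea behind “bare metal at the speed of cloud” can be sketched in a few lines of Python: one declarative description that a composable platform could expand into compute, connectivity, storage, and OS deployment in a single step. All names and fields here are illustrative assumptions, not any vendor’s actual schema.

```python
# One declarative template describes everything the workload needs;
# the platform (not the operator) turns it into provisioning actions.
TEMPLATE = {
    "name": "web-tier",
    "compute": {"cores": 8, "memory_gb": 64},
    "network": {"vlan": 120, "bandwidth_gb": 10},
    "storage": {"volume_gb": 500, "raid": "RAID5"},
    "os": {"image": "ubuntu-22.04", "boot": "pxe"},
}

def compose(template):
    """Expand a template into the ordered provisioning steps a composable
    platform would carry out as one logical operation — the single-step
    equivalent of provisioning a VM instance from a public cloud."""
    steps = [
        ("allocate-compute", template["compute"]),
        ("attach-fabric", template["network"]),
        ("carve-storage", template["storage"]),
        ("deploy-os", template["os"]),
    ]
    return [f"{action}: {params}" for action, params in steps]

plan = compose(TEMPLATE)
```

Because the template, not a human, dictates the steps, the same description can be applied repeatedly and checked into version control alongside the application code — which is what makes it infrastructure-as-code rather than a runbook.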

7. Is the solution designed for composability?

Instead of investing in a reconfigured, old product, look for a solution that is designed for composability from the ground up.

8. Does the solution deliver a return on your investment?

A composable solution should eliminate over-provisioning and stranded resources. Each workload should use only the resources it needs and return them to the pool when they are no longer needed. Processes and services align around a single delivery model, reducing complexity and cost.

9. Can the solution future-proof your business?

With a composable solution, there should be no limits to scalability and no barriers to creativity – your infrastructure should get out of the way and work harmoniously with any data center.

10. Does it allow you to start on the path to composability now?

Find a solution that allows you to get started when you’re ready and proceed at the speed that is best for your business. Your solution should allow you to deploy in incremental steps, delivering the benefits of composability where it is most needed today without disrupting critical applications.

Composable infrastructure is changing how businesses operate their data centers, opening new opportunities for speed and flexibility while removing cost and complexity. As the first platform built from the ground up for composability, HPE Synergy delivers on each of these 10 criteria. Customers worldwide are seeing first-hand how HPE Synergy and composable infrastructure provide the flexibility to adapt to every workload and the power to deliver services at lightning speed.


About Paul Miller

Paul Miller is Vice President of Marketing for the Software-defined and Cloud Group at Hewlett Packard Enterprise (HPE). HPE has assembled an array of resources that are helping businesses succeed in a hybrid IT world. To learn more about composable infrastructure, download the Composable Infrastructure For Dummies guide.

To read more articles from Paul Miller, check out the HPE Converged Data Center Infrastructure blog.

Hyperconvergence Takes All-Flash Storage to the Next Level


For decades, traditional storage (or disk storage) worked well for IT teams, giving them the high capacity they needed

But as data storage becomes more important every year, capacity alone is no longer enough. IT teams often face high latency and drive failures with traditional storage solutions – as evidenced by the many articles offering advice on anticipating and recovering from such failures.

Businesses today need IT to ensure high performance for their applications and easy access to important data, which is why flash is in demand and disrupting the industry status quo. In 2016, IDC noted that the flash-based storage market was on the rise, and vendors have been actively building up their integrated, hybrid, and all-flash system portfolios over the past year.

One foreseeable outcome of this portfolio growth is the coupling of highly efficient technologies to reach even higher levels of efficiency. As a result, it’s not surprising that some hyperconverged platforms are being integrated with flash storage. Hyperconverged infrastructure and flash storage were developed side by side for years, pursuing the same goals without intersecting. Now industry experts expect the combination of these technologies in one platform to become a preferred approach for on-premises IT.

Much like flash storage, advanced hyperconverged architectures address many modern business problems. The most sophisticated infrastructures are designed to improve performance, scalability, and data availability while removing the input/output (I/O) bottlenecks that once made traditional storage challenging. In these instances, hyperconverged infrastructure and flash storage enhance each other’s capabilities, making them well suited for a combined solution.

Hyperconverged systems such as HPE SimpliVity can take flash technology to the next level by simplifying management, increasing durability, and reducing floor space. Customers who once considered built-in deduplication and compression nice-to-have “checklist items,” have come to rely on these features to not only increase performance, but also to alleviate capacity constraints within each unit. This built-in deduplication and compression, combined with reduced floor space, delivers higher performance, gives IT teams less to manage, and ensures high data availability and resiliency. Built-in data protection, low maintenance interfaces, and all-flash platforms have evolved similarly from wish-list features to requirements in today’s fast-paced business environments.

While flash adoption is on the rise, traditional storage is not obsolete. Despite the industry anticipation that disk storage will disappear, this technology is still the logical choice for deployments where capacity is the priority over performance. Like tape backup devices, disk storage will likely not disappear any time soon. But the trend is clear: The need for durable, high performance, flash-based solutions is on the rise. Hyperconverged systems that incorporate flash technology are designed to meet ever-growing and changing business needs. Simple, powerful, and efficient technology is the way forward for integrated datacenter solutions.


About Jesse St. Laurent

Jesse St. Laurent is the Chief Technologist for HPE Hyperconverged and SimpliVity. He uses his 20 years of experience to engage channel partners, evaluate emerging technologies, and shape innovative technology solutions involving data center modernization. Learn more about the all-flash hyperconverged infrastructure in this ESG Technical whitepaper.  For more information on how hyperconverged infrastructure can elevate your hybrid IT environment, download the free HPE SimpliVity edition of Hyperconverged Infrastructure for Dummies ebook.

To read more articles from Jesse St. Laurent, check out the HPE Converged Data Center Infrastructure blog

Don’t get caught in a tech refresh quicksand. Instead, choose hyperconvergence


Technology refreshes are necessary but often difficult to complete, and that can make you feel trapped

As soon as you finish refreshing one device, it’s time to update another. It’s much like being caught in quicksand – you try to move, but it is almost impossible. In fact, you may even be sinking. Anyone who has used older IT infrastructure models knows the feeling of constantly refreshing but never getting ahead.

Most businesses today are dealing with refresh cycles in their data center every three to five years, for as many as a dozen different devices. No matter the lifespan of the technology, once a tech refresh comes around, the cycle takes a lot of time. Before it can actually choose, implement, and migrate to a new solution, a business has to define infrastructure requirements, research appropriate solutions, and vet the vendors and sales teams. With all this time spent finding, implementing, and training on a new solution, businesses are left wondering, “When will I actually find time to innovate?”

Escaping the quicksand

There is an answer to today’s data center problems, especially when looking to reduce complexity and battle inefficiency. Hyperconverged infrastructure offers an extended hand (or very long vine) to IT professionals stuck in tech refresh quicksand to vastly simplify IT operations and break the cycle of constantly buying new technology.

Because hyperconverged infrastructure consolidates multiple IT infrastructure components into a single product, the technology lifecycles are also brought together, making it easier to plan for tech refreshes and easier to manage the entire process centrally. By combining all IT functions and services below the hypervisor, hyperconverged infrastructure offers the opportunity for IT to deal with a single vendor. And since the entire solution is managed through existing and well-known tools, training time is minimized as well.

Reducing the amount of complexity in the data center leads to only good things. For tech refresh cycles, it gives time back to the IT team. No longer do IT professionals have to spend all of their time going through the strenuous (and time-consuming) process of vetting, implementing, and training on new data center solutions. The time saved by hyperconverged infrastructure is time you can spend innovating.

In fact, an IDC white paper noted that hyperconverged customers who implemented HPE SimpliVity increased IT budget spent on new technology projects/purchases (from 43% to 57%) compared with IT budget spent on maintaining existing infrastructure. The white paper also stated that HPE SimpliVity customers increased time spent on innovation and new projects from 16% to 29%. Generally, this innovation was made possible from time savings associated with managing fewer infrastructure components to support respondents’ virtualized workloads, simplified backup/recovery and disaster recovery (an improvement of 44%), and less time spent troubleshooting.

If you feel like you’ve been caught in tech refresh quicksand, don’t continue to struggle with legacy IT and outdated systems — you’ll only sink faster. Hyperconverged infrastructure offers a way to break the buying cycle, leave the quicksand behind you, and take back control of your data center.


About Jesse St. Laurent

Jesse St. Laurent is the Chief Technologist for HPE Hyperconverged and SimpliVity. He uses his 20 years of experience to engage channel partners, evaluate emerging technologies, and shape innovative technology solutions involving data center modernization. To learn more about how hyperconverged customers are benefitting from HPE SimpliVity, read the full IDC report. For more information on how hyperconverged infrastructure can elevate your hybrid IT environment, download the free HPE SimpliVity edition of Hyperconverged Infrastructure for Dummies ebook.

To read more articles from Jesse St. Laurent, check out the HPE Converged Data Center Infrastructure blog

Seeking Digital Transformation? 8 Essentials for Successful Hybrid IT


Today’s businesses are rapidly changing – and that is all part of a wider digital transformation initiative that is sweeping the industry

Yet as businesses move to the public cloud seeking lower costs and greater flexibility, they’re discovering challenges. Some come in the form of unexpected costs; others arise when trying to move traditional workloads off-premises to locations that are sometimes less than ideal.

More and more enterprises are recognizing that a hybrid IT strategy solves many of these challenges. A hybrid IT environment provides a balanced combination of traditional on-premises infrastructure, private cloud, and off-premises public cloud. Through careful analysis, a hybrid IT estate allows IT teams to select the best deployment model for each application. Mission-critical workloads can remain on-premises, where employees can confidently retain complete control, while a newly developed, revenue-generating app can hum along beautifully in the public cloud, and a non-revenue-generating app can run extremely well on a private cloud, where it’s available for all to use within the constraints of an existing budget.

So, all is well…right? Actually, not quite.

Bridging the gap between on- and off-premises workloads

Even when each workload is placed where it is best suited, another challenge still needs to be solved. On-premises and off-premises worlds can become very siloed, increasing complexity and slowing digital transformation. IT can’t easily manage them as one because they don’t have the visibility they need across their entire hybrid estate. What IT needs is a single view to easily manage everything at the same time, regardless of where it is being hosted.

In 2017, Hewlett Packard Enterprise (HPE) commissioned 451 Research to thoroughly determine why and how companies are digitally transforming their businesses and what challenges they must overcome to provide and manage the ultimate hybrid IT platform. The research, Seeking Digital Transformation? Eight Essentials for Hybrid IT, provides interesting insights into the digital transformation journey, and what is still needed to bridge the gap between on- and off-premises IT infrastructure.

451 Research’s Eight Essentials for Hybrid IT:

  • Hybrid IT requires a unified, software-defined control plane that is simple and quick to deploy across traditional enterprise and private and public cloud infrastructure, and bridges the worlds of public and private clouds seamlessly.
  • Hybrid IT must support the current and emerging OS and virtualization layers that businesses are using to host their applications, such as VMware, OpenStack, and Microsoft AzureStack, as well as bare-metal and container-based applications.
  • Everything should be software-defined and available in multiple packaging formats, but for real efficiency and performance of on-premises deployments, hyperconverged and composable infrastructure are necessary baselines.
  • Developers should be able to build their applications anywhere and deploy them anywhere as soon as they are ready. They need a hybrid workspace supporting traditional workloads in VMs, modern apps in containers, and flexibility across bare metal and private and public clouds.
  • Developers and IT operations need a ‘no-ops’ hybrid IT management-as-a-service portal and app store. This will enable developers to compose, deploy and scale hybrid clouds that support all applica­tions, and to manage production compliance and lifecycle governance.
  • Integrated, software-defined data services will become more necessary as the data explosion progresses. Data efficiency, resiliency, management, and mobility are all key requirements that should be abstracted away from the underlying storage and made available across the hybrid platform.
  • An analytics-powered business dashboard should provide business and IT operations managers with visibility on costs and utilization across private and public infrastructure, breaking up the data into the separate lines of business to calculate the cost of individual projects.
  • A hybrid IT architecture must have room for emerging and future technologies, such as APIs, microservices, hybrid computing, memory-based computing, and the extension of intelligence to the edge through the Internet of Things.

HPE OneSphere, a hybrid cloud management solution

When HPE announced HPE OneSphere in November 2017, it delivered a solution that met the eight essentials for hybrid IT that 451 Research described in its report. Through its software-as-a-service (SaaS) portal, HPE OneSphere gives customers access to pools of IT resources that span the public cloud services they subscribe to as well as their on-premises environments.

The solution works across virtual machines, containerized workloads, and bare metal applications, so users can compose hybrid clouds capable of supporting both traditional and cloud-native applications.  Delivered as a service, HPE OneSphere provides a single point to access all applications and data across an organization’s hybrid estate.

Digital transformation is here to stay…and the journey just got easier

The digital transformation is here, and it is disruptive. Companies that are willing to embrace change will not only survive, they will flourish. Yet, it’s not an easy journey. Industry experts can help organizations learn about solutions that will make digital transformation simpler. Read 451 Research’s full report here: Seeking Digital Transformation? Eight Essentials for Hybrid IT.


About Paul Miller

Paul Miller is Vice President of Marketing for the Software-defined and Cloud Group at Hewlett Packard Enterprise (HPE). HPE has assembled an array of resources that are helping businesses succeed in a hybrid IT world. To learn more about composable infrastructure, download the Composable Infrastructure For Dummies guide. And to find out how HPE can help you determine a workload placement strategy that meets your service level agreements, visit HPE Pointnext. Learn about HPE’s approach to managing hybrid cloud by checking out the HPE website, HPE OneSphere.

To read more articles from Paul Miller, check out the HPE Converged Data Center Infrastructure blog.

 

Fortune 1000 IT Experts Share Hybrid IT and Digital Transformation Strategies


Digital transformation is affecting businesses of all sizes and across all industries

Many, including media, entertainment, and retail, have already made the transformation, while others, such as healthcare, government, and manufacturing, are on the cusp of major changes.

In a world where IT is embedded everywhere and data is becoming more readily available, businesses must embrace digital transformation in order to keep up with the competition. Those that are maximizing this opportunity are treating IT not just as another function but as a core competency that drives competitive differentiation while also supporting typical business processes and outcomes.

Embarking on digital transformation can seem daunting for some IT professionals. While they understand the need for innovation, they’re responsible for keeping typical business functions running smoothly without hiccups or downtime – threats that could mean major revenue loss for an organization. They must transform their businesses while offering the best of both worlds – the same security, quality, and user experience of an on-premises solution, plus the scalability, agility, and efficiency of a public cloud.

To that end, digital transformation is ushering in the era of hybrid IT. But don’t be mistaken – the term hybrid IT does not necessarily refer to a mashed up version of public and private cloud. It’s much more than that. It’s a sophisticated combination of multiple, different forms of IT that allow businesses to innovate while meeting their own unique organizational needs.

So how are businesses implementing hybrid IT models, and what have been their experiences? Hewlett Packard Enterprise (HPE) commissioned IDC to find out through in-depth interviews with IT operations staff and line-of-business individuals at Fortune 1000 enterprises. The results can be found in a comprehensive research report, The Future of Hybrid IT Made Simple. The interviews sought to understand hybrid IT strategies and the opportunities associated with them.

Here are some key takeaways:

  1. Hybrid IT optimizes cost and application performance across on-premises and public clouds. A dashboard designed specifically for line-of-business executives provides full visibility into IT operations, so businesses can keep track of metrics and measure cloud providers by performance. By having insight into these areas, businesses ultimately save money and reduce risk.
  2. Hybrid IT serves as a continuous DevOps platform, offering a single, secure, and curated platform with integrated developer tools. This enables DevOps teams to focus on application delivery instead of infrastructure management.
  3. Hybrid IT enables IT operations to become a virtual cloud service provider as part of its core competency. In doing this, a hybrid IT model becomes a self-service portal that provides a single view across the entire estate and is designed to be used by line-of-business executives, developers, and IT executives.
  4. Hybrid IT enables provisioning (and rapid deprovisioning) of autonomous compute, storage, and fabric instances from fluid resource pools. By championing a low-ops model in which the entire infrastructure is software defined and treated as code via a unified API, hybrid IT works with composable infrastructure to provision resources within minutes. Traditional IT provisioning, on the other hand, can take days – and even months in many cases.

The research pointed to a strong conclusion: Businesses are investing in hybrid IT strategies and will continue to do so because it is a crucial vehicle for them to transform digitally. Hewlett Packard Enterprise (HPE) enables digital transformation by making it possible to develop and deploy workloads where they best fit based on business needs. At the same time, HPE makes hybrid IT simple to manage and control across on-premises and off-premises estates.

In an era where change is the only constant, digital transformation is the only way forward. And one approach to tackle that effectively is to implement a robust hybrid IT strategy.


About Paul Miller

Paul Miller is Vice President of Marketing for the Software-defined and Cloud Group at Hewlett Packard Enterprise (HPE). HPE has assembled an array of resources that are helping businesses succeed in a hybrid IT world. To learn more about composable infrastructure, download the Composable Infrastructure For Dummies guide. And to find out how HPE can help you determine a workload placement strategy that meets your service level agreements, visit HPE Pointnext. Learn about HPE’s approach to managing hybrid cloud by checking out the HPE website, HPE OneSphere.

To read more articles from Paul Miller, check out the HPE Converged Data Center Infrastructure blog.

 

Ransomware Response: 6 Steps to Limit Data Loss


Ransomware is a dominant threat to businesses everywhere — and it’s not going away anytime soon.

Ransomware is a reality that IT teams must acknowledge, yet they are not always prepared for it. Of course, no one thinks their datacenter will be the next to fall victim to a ransomware attack, but the statistics are alarming.

According to an FBI report, one ransomware variant in early 2016 compromised as many as 100,000 computers a day. And those statistics are not subsiding. Hackers are constantly inventing new ways to gain access to sensitive information and critical files. According to Verizon’s 2017 Data Breach Investigations Report, ransomware is now the 5th most common type of malware – up from 22nd in the same report just 3 years earlier.

Businesses that have a response strategy in place will be better able to identify the signs of an attack and recover from it more quickly. Your ransomware response strategy should include six critical steps to help your business respond to a cyber attack and avoid data loss and downtime.

Ransomware Response Strategy

  1. Educate the company.

Your IT teams should make sure that everyone knows what is at stake and what steps to take both before and after a ransomware attack occurs. Education is key to not only preventing ransomware from entering the systems but also to stopping it quickly once inside. Ransomware often infiltrates the system by an employee clicking on a link in a seemingly harmless email from an unknown source. With proper education, your staff can identify the most common types of ransomware and the typical ways by which it enters the system. They should also be educated on how prevalent these types of viruses are becoming. Equally important, educate staff on what to do after an attack – who to report issues to and what steps to take to minimize the damage.

  2. Know the signs of an attack.

A ransomware attack is most often characterized by the locking of files, folders, and applications until a price is paid in bitcoins to attackers. Attacks will often masquerade as government or police agencies accusing the computer owner of criminal activity and demanding that payment be made within a certain timeframe or else the user will be arrested. It’s important to recognize attacks quickly so the restore process can begin as soon as possible. And it’s important to note: many companies never get their data back, even if they pay the ransom.

  3. Correctly define how long your business can be offline and how much data you can afford to lose.

The next step in your ransomware recovery plan is to correctly define the recovery time objectives (RTOs) and recovery point objectives (RPOs) for your company. This is imperative in order to get operations back online without paying attackers. To define your RTOs and RPOs, you must first ask yourself two questions: How long can the business shut down while waiting for the restore to take place, and how many hours of business-critical data can the company afford to lose?

  4. Decide on a solution that can meet your defined RTOs and RPOs.

Once you’ve defined your RTOs and RPOs, you need a solution that can meet those requirements and get your infrastructure back up and running. According to the Ponemon Institute, the average cost of IT downtime is $8,850 per minute, so a business bleeds money for every minute spent waiting on a restore. Choose a data protection strategy that is not only right for the business, but that can get the infrastructure running again within the time you’ve provisioned.

  5. Assess integrated solutions to protect remote and branch offices.

Having multiple backup and disaster recovery solutions only intensifies complexity. Simplify your data protection scheme by picking only the solutions that are right for your environment. This is particularly important if you support multiple remote and branch offices (ROBO) with small or nonexistent IT staff at each site. Solutions that offer integrated functions, such as built-in data protection, ease the burden on remote offices and provide better protection to ROBO sites.

  6. Ensure your solution is simple enough to allow systems to get back online quickly.

In addition to reducing the complexity of your data protection and backup solutions, seek a datacenter solution that stresses ease of use. Simplicity is most critical when recovering from a ransomware attack. When IT downtime costs as much as $8,850 per minute, every second counts, and shaving even a few clicks off the restore process can make a significant difference.
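The RTO/RPO reasoning above can be sketched in a few lines of Python: define the objectives, check a candidate backup schedule and restore time against them, and price the outage using the Ponemon figure cited earlier. The dollar figure comes from the article; the backup intervals and restore times below are illustrative assumptions, not real measurements:

```python
from dataclasses import dataclass

COST_PER_MINUTE = 8_850  # USD, the Ponemon Institute figure cited above

@dataclass
class RecoveryObjectives:
    rto_minutes: float  # longest tolerable outage while restoring
    rpo_minutes: float  # largest tolerable window of lost data

def meets_rpo(backup_interval_minutes: float, o: RecoveryObjectives) -> bool:
    # Worst-case data loss is the gap between consecutive backups.
    return backup_interval_minutes <= o.rpo_minutes

def meets_rto(estimated_restore_minutes: float, o: RecoveryObjectives) -> bool:
    return estimated_restore_minutes <= o.rto_minutes

def downtime_cost(minutes_offline: float) -> float:
    return minutes_offline * COST_PER_MINUTE

targets = RecoveryObjectives(rto_minutes=60, rpo_minutes=60)
print(meets_rpo(12 * 60, targets))        # tape backups every 12 hours: False
print(meets_rto(45, targets))             # a 45-minute restore: True
print(f"${downtime_cost(12 * 60):,.0f}")  # what a 12-hour outage would cost
```

Framing the objectives this way makes step 4 a concrete comparison: any solution whose backup interval or restore time exceeds the targets is ruled out before cost even enters the discussion.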

Peace of mind – built in and guaranteed

Some businesses have turned to HPE SimpliVity because its built-in data protection makes ransomware recovery simple. Using HPE SimpliVity’s built-in backup capability, a local backup or local restore of a 1TB VM takes less than one minute on average, guaranteed. In fact, one HPE SimpliVity customer fell victim to a ransomware attack while transferring data from its previous infrastructure to the new hyperconverged solution, yet was able to recover data quickly and avoided any downtime and expense. Had the attack occurred while the business was still backing up to tape, it would have lost almost 12 hours of data. Instead, it lost less than an hour of data using HPE SimpliVity’s hyperconverged solution.

Ransomware is a threat to every business. IT teams need to recognize this fact and adjust their data protection strategies accordingly. Organizations should work under the assumption that they will eventually become infected and should focus on minimizing downtime once infected, as well as have a data protection strategy in place that supports their defined RTOs and RPOs. Using the six steps listed above, the damage done by ransomware can be minimized.


About Jesse St. Laurent

Jesse St. Laurent is the Chief Technologist for HPE Hyperconverged and SimpliVity. He uses his 20 years of experience to engage channel partners, evaluate emerging technologies, and shape innovative technology solutions involving data center modernization. For more information on how hyperconverged infrastructure can elevate your hybrid IT environment, download the free HPE SimpliVity edition of Hyperconverged Infrastructure for Dummies ebook.

To read more articles from Jesse St. Laurent, check out the HPE Converged Data Center Infrastructure blog.

The Complex State of Hybrid IT: How to Make It Simple


Usually in January, the President of the United States delivers the annual State of the Union address – a speech mandated by the U.S. Constitution

Other countries have similar addresses, such as the UK’s Speech from the Throne. The purpose of all such speeches is to give an account of past accomplishments and detail future goals.

During HPE Discover Madrid at the end of November 2017, Ric Lewis, Senior Vice President and General Manager of the Software-Defined and Cloud Group, addressed the audience. As he gave his thoughts on the HPE landscape, I couldn’t help but draw a comparison: the format of his presentation was strikingly similar to that of a State of the Union address. He presented the accomplishments of the previous months, then followed with a bold vision for the future.

Judging by the reactions of those in attendance, Lewis presented one of the boldest visions HPE has announced for some time. After a year of talking about making hybrid IT simple, Lewis showed how HPE is delivering on that promise.

Hybrid IT here to stay, but complexity is slowing digital transformation

Addressing a standing-room-only crowd of close to 500 (with many more conference attendees watching via a live stream), Lewis set the stage. He described the state of business today – a mix of hybrid IT resources that includes on-premises infrastructure, private cloud, and public cloud. Using a hybrid IT strategy, businesses can deliver new kinds of services to their customers and expand the capabilities needed to provide them.

Yet in this new era of hybrid IT deployments, complexity reigns. Different deployment models make it nearly impossible to easily share information or move applications from one model to another. Although speed and agility are vital for today’s digital business, hybrid IT complexity is slowing innovation. More agility, visibility, optimization, and automation are needed.

Simplicity through a multi-cloud management platform

To solve this growing challenge, Lewis presented the world’s first multi-cloud management solution: HPE OneSphere. This software-as-a-service (SaaS) solution is a big move for HPE, a company known for delivering hardware platforms.

“This is a complete game changer for your on-premises and public cloud environment – designed for the way businesses work,” Lewis explained. “Enterprises are now able to build clouds, deploy apps, and gain insights faster and easier than they ever could across a hybrid IT estate.”

Using this new solution, IT can dramatically simplify operations. It provides visibility and automation across a variety of public clouds, on-premises IT, containers, and VMs. Shadow IT activities that previously went unaccounted for are now easily and quickly tracked.

A multi-cloud management solution gives developers the power to access exactly what they need, when they need it, whether in the public cloud or on premises. Through the SaaS web portal or its APIs, developers have instant access to a pool of IT resources.

Business executives are also beneficiaries of this new multi-cloud management solution. Real-time, cross-cloud insights enable CIOs and lines of business to increase resource utilization and reduce costs, improving efficiency across the board.

Hybrid cloud users reveal real world experiences

To demonstrate what a multi-cloud management solution can achieve in a real world hybrid IT environment, Lewis introduced two beta customers. Both told compelling stories.

Katreena Mullican, Senior Architect at HudsonAlpha Institute for Biotechnology, explained, “Life in the era of hybrid IT is not quite as simple as it sounds. Our collaboration with researchers worldwide creates a complex and ever-growing hybrid IT environment that needs to be managed.”

A multi-cloud management platform helps HudsonAlpha solve this challenge. “To enable rapid innovation for the researchers, IT doesn’t want to be a bottleneck in the provisioning of resources,” continued Mullican. “We embrace the idea of putting tools in their hands to provision the infrastructure that they need to get the research done.”

Next up was Kate Swanborg, Senior Vice President of Technology and Strategic Alliance, DreamWorks Animation. She began by explaining the computational challenges of making a computer-generated animated film. “By the time we’re done making one of our movies, we’ve crafted half a billion digital files and used 80 million computational hours – for just one movie.” She went on to say that they can have as many as 10 active films in production simultaneously. In addition, they produce innovation-driven short programs and entertainment for location-based theme parks.

“Something like HPE OneSphere enables our creativity by simplifying all of the different fit-for-purpose infrastructures that we need for all of these different creative outlets,” explained Swanborg. DreamWorks has both on-premises and off-premises solutions, which adds a fair amount of complexity. Yet, they must be able to act on ideas quickly. “We need to be able to utilize the right cloud for the right purpose,” continued Swanborg. “We look to HPE OneSphere to simplify that for us – it’s absolutely critical.”

A simpler hybrid IT is here, accelerating digital transformation

Both speakers vividly demonstrated that hybrid IT is a necessity in their organizations, but it also brings complexity that could easily slow innovation. HPE OneSphere solves this problem by transforming the state of hybrid IT from complex to simple.

HPE OneSphere, the industry’s first multi-cloud management SaaS solution, will be available in late January 2018. To learn more about HPE OneSphere, register for the upcoming webinars: HPE OneSphere: Simplify multi-cloud management to build clouds, deploy apps, and gain insights faster.


Chris Purcell drives analyst relations for the Software-Defined and Cloud Group at Hewlett Packard Enterprise. The Software-Defined and Cloud Group organization is responsible for marketing for composable infrastructure, HPE OneView, HPE SimpliVity hyperconverged solutions and HPE OneSphere. HPE has assembled an array of resources that are helping businesses succeed in a hybrid IT world. Learn about HPE’s approach to managing hybrid IT by checking out the HPE website, HPE OneSphere.

To read more articles from Chris Purcell, check out the HPE Converged Data Center Infrastructure blog.


Software-Defined Intelligence and a Unified API would make Iron Man proud

The Birth of Artificial Intelligence

J.A.R.V.I.S. (Just A Rather Very Intelligent System) is Tony Stark’s computer system in the movie Iron Man

J.A.R.V.I.S. automatically takes care of everything for the fictional superhero—from heating and cooling his house to alerting him when security protocols have been overridden.

Although today’s IT management systems aren’t nearly as sophisticated as J.A.R.V.I.S., they can automate many processes to help businesses move faster and compete more effectively. As I described in a previous article, Life in the fast lane…automation with software-defined intelligence, physical infrastructure can be automated using software-defined intelligence and a unified API. Hundreds to thousands of lines of code can be reduced to a single line of code, saving countless hours and making IT infrastructure management easier.

Can software-defined intelligence and a unified API also help businesses deliver new applications and services faster — innovations that are often the lifeblood of many businesses? Yes, and here’s how.

Continuous delivery of applications and services requires fast, policy-based automation of applications and infrastructure across development, testing, and production environments. A unified API for infrastructure can provide exactly that by letting developers and ISVs integrate with automation tool chains. For instance, a unified infrastructure API can simplify control of compute, storage, and networking resources, so developers can write code without needing a detailed understanding of the underlying physical elements.
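To make that concrete, here is a minimal sketch of what consuming a unified infrastructure API could look like from a developer’s tool chain. The endpoint path, field names, header, and hostname below are hypothetical illustrations of the pattern, not the actual HPE OneView API:

```python
import json
from urllib import request

API_BASE = "https://oneview.example.com"  # hypothetical management appliance

def build_profile_request(template: str, server_name: str, token: str) -> request.Request:
    """Build (but do not send) a single POST that applies a software-defined
    template -- one call in place of separate compute, storage, and
    networking provisioning steps."""
    payload = json.dumps({"templateName": template, "name": server_name}).encode()
    return request.Request(
        f"{API_BASE}/rest/server-profiles",  # illustrative resource path
        data=payload,
        headers={"X-Auth-Token": token, "Content-Type": "application/json"},
        method="POST",
    )

# A tool chain would send this with urllib.request.urlopen(req); here we
# only construct it, since there is no real appliance to call.
req = build_profile_request("esxi-host-template", "prod-host-01", token="example-session-id")
print(req.get_method(), req.full_url)
```

The point of the sketch is the shape of the call: one template name and one target, rather than hundreds of lines coordinating servers, storage, and networking separately.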

HPE simplifies bare metal infrastructure automation by using software-defined templates to fully define the infrastructure and then provides a unified API, native in HPE OneView, to eliminate time-consuming coordination across servers, storage, and networking. Customers can use this toolset directly, or they can use pre-built integrations created by an HPE partner.

For example, customers using an automation tool like Chef can easily automate the provisioning of an entire stack from HPE bare metal through their application in minutes. This partner integration can also help keep the customer’s environment up to date. By combining an HPE partner’s automation with HPE OneView’s ability to stage, schedule, and install firmware updates, entire stacks can be updated with no downtime – from infrastructure to application.
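The stage, schedule, and install flow described above can be sketched as a simple rolling-update loop. This is a generic illustration of the no-downtime pattern under stated assumptions (callbacks standing in for real tooling), not HPE or Chef code:

```python
def rolling_firmware_update(nodes, stage, drain, install, restore):
    """Update one node at a time so the rest of the stack keeps serving:
    firmware is staged in advance, workloads are drained before the
    disruptive install, then moved back afterward."""
    updated = []
    for node in nodes:
        stage(node)    # copy the firmware bundle to the node ahead of time
        drain(node)    # migrate workloads off before the disruptive step
        install(node)  # apply the staged firmware
        restore(node)  # migrate workloads back onto the updated node
        updated.append(node)
    return updated

# Record the call order with a simple log instead of real infrastructure.
log = []
def step(name):
    return lambda node: log.append((name, node))

rolling_firmware_update(
    ["node-a", "node-b"],
    stage=step("stage"), drain=step("drain"),
    install=step("install"), restore=step("restore"),
)
print(log[:4])  # node-a completes all four phases before node-b starts
```

Because each node is drained before its install and restored afterward, the stack as a whole never stops serving, which is the property the partner integrations above automate at scale.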

A growing number of ISV partners are taking advantage of the unified API in HPE OneView to automate solutions for customers. These partners range from large software suites like VMware® vCenter, Microsoft® System Center, and Red Hat, to focused solution providers such as Chef, Docker, Mesosphere, CANCOM, and others. By integrating with the unified API in HPE OneView, ISVs can provide solutions that reduce the time their customers spend managing their environments.

Just like J.A.R.V.I.S., these integrations built by HPE partners take care of the housekeeping issues involved with moving apps and services from the cloud to your datacenter. And because it is so simple, you can concentrate on the superhero tasks of developing apps that create value for your business.



About Frances Guida

Frances Guida leads HPE OneView Automation and Ecosystem. Building on her years of experience with virtualization and cloud computing, she is passionate about helping enterprises use emerging technologies to create business value.  To learn how to migrate with ease to HPE OneView, watch this webinar replay. To learn more about HPE OneView, download the free ebook, HPE OneView for Dummies.

To read more articles from Frances, check out the HPE Converged Data Center Infrastructure blog.


HudsonAlpha simplifies hybrid IT with HPE OneSphere


From the discovery of DNA over a century ago to the sequencing of the entire human genome, genomic research has come a long way

Due to recent advances in technology, researchers now have the capability to sequence a genome faster and more cost-effectively than ever before. And the information they gather from understanding a person’s unique genetic profile can be used to find and administer better medical treatments that can save lives.

One of the leaders in genomic research is HudsonAlpha Institute for Biotechnology. The computational work done at HudsonAlpha is data-intensive – and constantly growing. HudsonAlpha generates more than 6 petabytes of data a year that must be managed, stored, manipulated, and analyzed. To continue to advance in their research goals, they needed to digitally transform, which meant rethinking how they implement IT across their organization. 

Strengthen on-premises infrastructure and embrace multi-cloud

The first step in HudsonAlpha’s digital transformation was to deploy a private cloud on premises, which would serve as the IT foundation of all their research projects. They implemented hyperconverged infrastructure (HCI) because it provided a cost-effective and easy-to-use infrastructure that removed many of their production bottlenecks. They also selected composable infrastructure because it gave them more power, agility, and ease of use.

“We can now quickly adjust our compute, storage, and fabric resources to meet rapidly changing needs — reducing re-provisioning time from four days to less than two hours,” revealed Peyton McNully, CIO at HudsonAlpha. “We’ve also increased storage capacity and lowered our costs.”

HudsonAlpha’s digital transformation doesn’t stop at their datacenter walls — it also includes public clouds. As part of HudsonAlpha’s nonprofit research mission, they are encouraging more and more researchers to collaborate with them and use their IT infrastructure. Many of these researchers have received government grants to test novel treatment theories. Because of these grants, researchers can quickly test creative ideas in a public cloud without taking resources away from the more established research methodologies in use on HudsonAlpha’s private cloud.

More researchers means more complexity 

“We are excited about the new research we are able to conduct due to the government grants. Recent awards include research in biofuels, novel antibiotics, personalized medicine, and immunotherapy,” explained Katreena Mullican, Senior Architect at HudsonAlpha. “Yet life in the era of hybrid IT is not quite as simple as it sounds. Our collaboration with researchers worldwide creates a complex and ever-growing hybrid IT environment that needs to be managed.”

HudsonAlpha’s relatively small IT department needs a better way to proactively support the increased demands of so many additional researchers accessing its infrastructure across multi-cloud environments. Researchers must be able not only to share data across public cloud, private cloud, and on-premises IT, but also to move applications seamlessly from one IT model to another.

“This surge of new researchers pursuing groundbreaking discoveries is a great opportunity. But it also means that HudsonAlpha must invest more time, money, and experts to ensure everything works together seamlessly,” continued Mullican. “Tracking, managing, and analyzing data in a multi-cloud environment is challenging to say the least.”

Enter HPE OneSphere, a multi-cloud management solution

In the fall of 2017, HudsonAlpha began working with Hewlett Packard Enterprise (HPE) to overcome this challenge. They implemented an innovative technology called HPE OneSphere, a newly announced multi-cloud management solution that lets customers deploy, operate, and optimize public cloud, private cloud, and on-premises environments through a simple and unified experience.

By streamlining management of hybrid IT resources, HPE OneSphere enables users to dynamically adjust workloads, easily transfer data, and rapidly develop apps. Users also benefit from a self-service design and unified experience, which minimizes the internal operations an enterprise needs to employ.

From a business perspective, HPE OneSphere provides better flexibility, higher productivity, and stronger control of utilization and spend across clouds — all while giving a user their tools of choice: clouds, containers, VMs, and bare metal.

Take control of hybrid IT

And that’s where HPE OneSphere has provided HudsonAlpha with significant improvements. Using this new cloud management and analytics platform, researchers can use unified workspaces to rapidly access needed services from private or public clouds. And more importantly, HudsonAlpha can track and analyze usage of these workloads wherever they are located — on or off premises in whatever cloud they are using around the world.

“Through the HPE OneSphere analytics dashboard, we can now get a very clear view on how infrastructure is being used across our entire hybrid estate,” McNully explained.  “With this improved insight, we can allocate resources more effectively.”

For HudsonAlpha, HPE OneSphere is an essential solution to a growing challenge: how to manage and gain control of the escalating complexity of constantly expanding hybrid IT environments.

“HudsonAlpha is excited to be working with HPE to test drive this breakthrough multi-cloud management solution,” concluded McNully. “As a growing number of researchers use our infrastructure to collaborate more effectively, we will be better able to find answers to our most troubling health and science questions.”

HPE OneSphere is playing an important part in facilitating the collaborative research at HudsonAlpha. With better access to important data and applications across a multi-cloud environment, researchers can complete projects faster. And HudsonAlpha is able to take better control of an extremely complex hybrid IT environment, which saves them time and money.

HudsonAlpha’s Katreena Mullican talks about her experience with HPE OneSphere below.

Digital transformation: Simplifying the complex for better collaboration

HudsonAlpha is just one example of how enterprises are digitally transforming to better succeed in today’s complicated hybrid IT world.  As enterprises worldwide seek to move faster, increase productivity and control costs, they are embracing a variety of technologies in a multi-cloud environment. While a myriad of options such as clouds, containers, VMs, and bare metal provide flexibility and choice, they also deliver complexity. HPE OneSphere provides the flexibility, choice, and control an enterprise needs to stay a step ahead of the competition.


About Chris Purcell

Chris Purcell drives analyst relations for the Software-Defined and Cloud Group at Hewlett Packard Enterprise. The Software-Defined and Cloud Group organization is responsible for marketing for composable infrastructure, HPE OneView, HPE SimpliVity hyperconverged solutions and Project New Hybrid IT Stack.  HPE has assembled an array of resources that are helping businesses succeed in a hybrid IT world. Learn about HPE’s approach to managing hybrid IT by checking out the HPE website, HPE OneSphere. And to find out how HPE can help you determine a workload placement strategy that meets your service level agreements, visit HPE Pointnext. HudsonAlpha has established an open source user community for sharing lessons learned with Synergy Image Streamer and Hybrid IT at: https://hudsonalpha.github.io/synergy/