How Workload-Aware Networks Work

    Many of today’s hyperconverged infrastructure (HCI) solutions converge compute and storage stacks but leave the critical networking component as an afterthought.

    Compute can be boosted with faster processors, and an all-flash storage solution can greatly improve the performance of an HCI node, but outdated network technology can drag down overall performance. Without an intelligent, high-speed network to accommodate data traffic, IT organizations often get unwanted surprises that reduce efficiency and increase costs.

    The solution? A unified network that supports diverse applications and workloads and still ensures high performance, reliability, and service quality. The technology exists today – here’s how it works.

    Traditional networking architecture

    Traditionally, the movement of traffic between the core of the data center and the edge is called north-south traffic. AI and machine learning data, in contrast, moves between edge or IoT devices, nodes, and clusters spread throughout the network in order to handle parallel or distributed processing. This pattern is called east-west traffic, and uses a software-defined networking (SDN) infrastructure to route traffic through a diverse array of paths.

    The most important advantage of SDN applies to AI and machine learning data traffic: the intent-based network can dynamically adjust traffic paths to meet the needs of specific and unpredictable workloads. One such solution — HPE Composable Fabric (a recent technology acquisition from Plexxi) — is designed to be workload aware and innately understands the needs of the applications running on the network. The composable fabric also automates the distribution of this east-west traffic to deliver the high speeds and low latency that AI and machine learning applications require. This scale-out architecture also accommodates the unpredictable ebbs and flows of AI and machine-learning traffic.

    Death to Rip-and-Replace

    Adding new AI or machine learning workloads to a data center offers the perfect opportunity to try a scale-out system. Gone are the rip-and-replace days when adding cutting-edge data center solutions meant disrupting all business users while equipment was upgraded. Software-defined technologies can be added to existing infrastructure and scaled out to meet demand.

    One option is to roll out a new AI application on a sub-infrastructure with a modern, scale-out system that links back to the existing data center network. Hyperconverged compute, storage, and networking elements integrate with one another and provide control-center interfaces that administrators can operate without high-level technical certifications. Software now automates most of the processes that were previously manual, setting thresholds at a very granular level. Workloads are routed, processed, and stored with the speed, capacity, security, and response time they require.

    In the case of SDN, pipes don’t need to be redesigned. The workload-aware software assigns an optimized traffic path for each workload — from AI and machine learning data to traditional databases and other applications. Workloads with unique requirements are isolated or have paths reserved for anticipated traffic flows.
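    The idea of assigning each workload an optimized path can be sketched in a few lines of Python. This is a purely hypothetical illustration of the concept, not the HPE Composable Fabric API; the path names, attributes, and workload classes are invented for the example.

```python
# Hypothetical sketch of workload-aware path selection (illustrative only;
# not an actual HPE Composable Fabric interface). Each workload class is
# matched to the fabric path that best fits its dominant requirement.

from dataclasses import dataclass

@dataclass
class Path:
    name: str
    bandwidth_gbps: float
    latency_us: float
    reserved: bool = False  # isolated for a specific workload

PATHS = [
    Path("spine-1", bandwidth_gbps=100, latency_us=5),
    Path("spine-2", bandwidth_gbps=40, latency_us=2),
    Path("leaf-local", bandwidth_gbps=25, latency_us=1),
]

def assign_path(workload: str) -> Path:
    """Pick an available path by the workload's dominant requirement."""
    candidates = [p for p in PATHS if not p.reserved]
    if workload in ("ai-training", "ml-batch"):
        # Bulk east-west traffic: maximize bandwidth.
        best = max(candidates, key=lambda p: p.bandwidth_gbps)
    else:
        # Latency-sensitive traffic (e.g., database queries): minimize latency.
        best = min(candidates, key=lambda p: p.latency_us)
    best.reserved = True  # isolate the path for this workload
    return best

print(assign_path("ai-training").name)  # highest-bandwidth path: spine-1
print(assign_path("oltp-db").name)      # lowest-latency remaining path: leaf-local
```

    A real fabric controller would base this decision on live telemetry and application intent rather than static attributes, but the mapping from workload profile to traffic path is the core idea.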

    In a common example, a company’s sales team might run a monthly in-depth customer analysis that requires processing large volumes of data in a short amount of time. Using an SDN system like HPE Composable Fabric, administrators can automate reserving a traffic path that delivers wide bandwidth, high throughput, and low latency at exactly that time each month, ultimately improving service to line-of-business users and preserving business continuity.
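    A recurring reservation like the one above could be expressed as a simple time-based rule. Again, this is a hedged sketch of the concept in Python, with an invented reservation schema; it is not a real controller configuration.

```python
# Hypothetical sketch of a recurring traffic-path reservation (illustrative
# only; the schema is invented for this example). The monthly analytics run
# gets a wide, low-latency path reserved for a fixed window.

from datetime import datetime
from typing import Optional

# Reserve path "spine-1" on the first of each month, 02:00-06:00.
RESERVATION = {
    "day_of_month": 1,
    "start_hour": 2,
    "end_hour": 6,
    "path": "spine-1",
    "min_bandwidth_gbps": 100,
}

def active_reservation(now: datetime) -> Optional[str]:
    """Return the reserved path name if 'now' falls inside the window."""
    r = RESERVATION
    in_window = (now.day == r["day_of_month"]
                 and r["start_hour"] <= now.hour < r["end_hour"])
    return r["path"] if in_window else None

print(active_reservation(datetime(2024, 3, 1, 3, 0)))   # spine-1 (in window)
print(active_reservation(datetime(2024, 3, 15, 3, 0)))  # None (wrong day)
```

    In practice the controller, not a cron-style check, would enforce the reservation, but the effect is the same: the analytics workload finds the bandwidth waiting for it.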

    Give AI Apps a Network as Intelligent as They Are

    AI and machine learning workloads deserve networks that are as intelligent as the applications. Companies can run AI and machine learning applications on scale-out, hyperconverged compute, storage, and networking solutions without ripping and replacing existing infrastructure. Instead, hyperconvergence offers the agility to add modern capabilities incrementally. In addition to offering big savings on capital expenses, scale-out systems deliver huge savings on operating expenses, since they avoid overprovisioning and automate tedious, often technical processes.

    Scale-out innovations in compute and storage systems enabled the shift from static data resources to an era where AI and machine learning applications can process data in real time. The network has finally caught up, and companies now benefit from hyperconverged solutions that offer agility, scalability, and automated optimization of traffic flows based on workload awareness.

    To learn more about the future of networks and how they can boost the performance of your business, download the Gartner report, Look Beyond the Status Quo for Network Innovation. And for more details on composable fabric and hyperconverged infrastructures, read more on the HPE website.


    About Thomas Goepel

    Thomas Goepel is the Director of Product Management, Hyperconverged Solutions, for Hewlett Packard Enterprise. In this role, he is responsible for driving the strategy of HPE’s hyperconverged products and solutions, spanning from infrastructure components to management software. Thomas has over 26 years of experience in the electronics industry, the last 25 at Hewlett Packard Enterprise, where he has held various engineering, marketing, and consulting positions in R&D, sales, and services.

    To read more articles from Thomas, check out the HPE Shifting to Software-Defined blog.
