Netdev 0x18 venue
California, USA
Session: Drinking From The Host Packet Fire Hose
Speakers: Nabil Bitar, Jamal Hadi Salim, Pedro Tammela
Label: Moonshot
Session Type: Talk
Description
Over the past decade, Ethernet port capacities have increased 40x, while CPU processing capacity has risen only about 10x and memory bandwidth has remained relatively stagnant [1]. In 2024, the IEEE’s ratification of 800 Gbps ports doubled port capacity from 400 Gbps and highlighted the pressing need to address the growing gap between IO, CPU processing capability and memory bandwidth. As a result, traditional kernel IO processing methods are becoming increasingly inadequate for managing this disparity.
A notable response to this challenge has been the adoption of XPU (DPU/IPU) NICs [2][3][4] by major cloud providers, which offload network infrastructure tasks from host processors. Our work aims to enhance host resource efficiency in environments equipped with multi-100G ports. In the first phase we worked on offloading Access Control Lists (ACLs, iptables/tc), load balancing and TLS. This effort led us to the conclusion that simply accelerating such functions through XPU offloads is not enough to manage the high IO demands and the memory bandwidth required for host application processing.
The second phase of our effort (covered in this talk) looks into a variety of existing and newer approaches to alleviating the aforementioned host-resource strain using XPUs.
In the talk we will briefly share the results of our XPU endeavours and then provide a comparative analysis of various network processing techniques designed to alleviate host-resource strain. These include GRO/RSC, TSO, Big TCP and MTU size adjustments, as well as zero-copy methods such as MSG_ZEROCOPY, io_uring and sendfile. We will examine the impact of these techniques on throughput, CPU, power and memory bandwidth utilisation, particularly for the large-transfer traffic patterns common in storage, content delivery and ML training.
In our third phase we plan to develop a comprehensive approach that integrates XPU acceleration (as mentioned in phase 1 above) with the techniques from this talk to minimize host CPU load and optimize the resources made available for application processing. We will briefly discuss our initial thoughts and future work on merging the two worlds. We believe our study is the first public work of its kind and hope it will serve as a valuable resource for the community.
References:
[1] https://netdevconf.info/0x17/sessions/talk/congestion-control-architecture-for-host-congestion.html
[2] https://www.intel.com/content/www/us/en/products/details/network-io/ipu/e2000-asic.html
[3] https://www.amd.com/en/accelerators/pensando
[4] https://www.nvidia.com/en-us/networking/products/data-processing-unit/
Important Dates
Closing of CFS: April 22nd
Notification by: May 21st
Conference dates: July 15th-19th