The discussion started with the general switch ASIC offload and the many devlink updates, such as the health monitor, which monitors device health and is used to pass information from the device to the upper layers.
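As a minimal illustration of the devlink health interface (the PCI address and reporter name below are placeholders, not details from the session):

  # List the health reporters a device has registered
  devlink health show pci/0000:03:00.0

  # Ask a specific reporter, e.g. a TX reporter, to diagnose the device
  devlink health diagnose pci/0000:03:00.0 reporter tx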
There was a discussion about the need for more hardware counter visibility for upper layers in the stack. Right now the hardware has lots of stat counters, including programmable ones, but they are not tied well into the different layers of the stack.
  
The discussion then shifted to packet drop visibility in the control plane, which is very important. The proposed solutions are:
The discussion continued with talks about policers being configured between the ASIC and the CPU to limit the number of packets. The point made was not to eliminate stats but to augment them.
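As an illustration of such policers, devlink's trap interface lets a policer be attached to a group of packets trapped to the CPU; the device address, policer ID, rates, and group name here are placeholders rather than configuration from the session:

  # Define a policer limiting trapped packets towards the CPU
  devlink trap policer set pci/0000:03:00.0 policer 1 rate 2000 burst 512

  # Bind the policer to a trap group so its traffic is rate limited
  devlink trap group set pci/0000:03:00.0 group l2_drops policer 1

  # Statistics still show how many packets the policer dropped
  devlink -s trap policer show pci/0000:03:00.0 policer 1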
  
The next talk was about doorbell overflow recovery (Ariel from Broadcom). The topic of discussion was discovery and recovery for RDMA queues. Possible solutions were fast dequeuing, and detection and recovery procedures for CPU stalls and dropped messages.
  
This talk was followed by QoS ingress rate limiting and OVS offload with TC. The focus was on ingress rate limiting and policing. The rate limiting was done with TC offload by adding a matchall-type classifier with a police action and introducing reserved priorities: OVS should install its TC filters with a priority offset, reserving the higher priorities for rate limiting.
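A sketch of what such a reserved-priority filter can look like with tc (the device name, priority, and rates are assumptions, not values from the talk):

  # clsact provides the ingress hook the filters attach to
  tc qdisc add dev eth0 clsact

  # Reserved high priority (pref 1) matchall filter with a police action;
  # skip_sw requests that the filter live in hardware only
  tc filter add dev eth0 ingress pref 1 matchall skip_sw \
      action police rate 1gbit burst 64k conform-exceed drop/continue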
A possible issue with ovs-tc offload arises when going from software to hardware: tc police stays in software while the filters are offloaded, which could break semantics. Possible solutions include reverting to the original policing semantics when offload isn't supported, and OVS forcing the tc filters into software only.
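For context, the knob that makes OVS install its datapath flows as tc filters in the first place is shown below; whether those filters then land in hardware or software is exactly the semantics question above:

  # Tell OVS to program datapath flows as tc filters (restart required)
  ovs-vsctl set Open_vSwitch . other_config:hw-offload=true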
  
Rony raised the question of why priorities were chosen over chains. The answer was that recirculation is a good use case for chains.
  
This was followed by a small test demo.
  
Finally, the last talk was about Scalable NIC HW offload (Or Gerlitz, Parav Pandit). The talk began by discussing the motivations for scaling hardware offloads:
1. Scale without using SR-IOV
2. Multiple dynamic instances deployed at a faster speed than VFs
3. NIC HW has a very well defined vport-based virtualization mode
4. One PCI device split into multiple smaller sub-devices
5. Each sub-device comes with its own devices, vport, and namespace resources
6. Leverage the mature switchdev mode and OVS eco-system (see the sketch after this list)
7. Applicable for the SmartNIC use case
8. Using the rich, vendor-agnostic devlink iproute2 tool
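A sketch of point 6, moving the NIC eSwitch into switchdev mode with devlink (the PCI address is a placeholder):

  # Switch the eSwitch from legacy to switchdev mode, which exposes
  # a representor netdev per vport for OVS to attach to
  devlink dev eswitch set pci/0000:03:00.0 mode switchdev
  devlink dev eswitch show pci/0000:03:00.0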
  
The question that the presenters raised was how to achieve an mdev software model view. A couple of points provided were:
1. Mlx5 mdev devices
2. Add a control plane knob to add/query/remove mdev devices, using devlink (see the sketch after this list)
3. Mentioned vDPA from Intel
4. Create 3 devices: a netdev, an RDMA device, and a representor netdev.
5. In HW the mdev is attached to a vport
6. Map it to a container…cannot be mapped to a VM since there is a single instance of the driver.
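The add/query/remove knob of point 2 can be pictured with devlink's subfunction port commands; this is only an illustrative sketch of the proposed model, not the interface shown at the workshop, and the device address, pfnum/sfnum, and port index are placeholders:

  # Add a sub-device on physical function 0
  devlink port add pci/0000:03:00.0 flavour pcisf pfnum 0 sfnum 88

  # Query it, then activate it so its netdev/RDMA devices appear
  devlink port show pci/0000:03:00.0/32768
  devlink port function set pci/0000:03:00.0/32768 state active

  # Remove it again
  devlink port del pci/0000:03:00.0/32768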
  
The talk concluded with the reasons it was implemented this way: the devlink tool and bus model fit the requirements, such as providing a vendor-agnostic solution and multi-port sub-device creation.
  
Site: https://www.netdevconf.info/0x13/session.html?workshop-hardware-offload