The discussion started with general switch ASIC offload and the many devlink updates, such as the health monitor, which monitors device health and is used to pass information from the device to the upper layers.
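For illustration, a devlink health reporter can be inspected and driven from userspace with the devlink tool from iproute2. The device address and reporter name below are examples, not taken from the talk:

  # list health reporters and their state for a device
  devlink health show pci/0000:06:00.0
  # run a reporter's diagnose callback
  devlink health diagnose pci/0000:06:00.0 reporter tx
  # show a captured dump, then trigger recovery
  devlink health dump show pci/0000:06:00.0 reporter tx
  devlink health recover pci/0000:06:00.0 reporter tx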
There was a discussion about the need for more hardware counter visibility in the upper layers of the stack. Right now the hardware has lots of stat counters, including programmable ones, but they are not tied well into the different layers of the stack.
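The gap is visible on any current system: driver and hardware counters are exposed as one flat, vendor-specific list, disconnected from the counters the stack keeps per layer (the interface name is an example):

  # flat, vendor-specific HW/driver counters
  ethtool -S eth0
  # per-device counters kept by the stack
  ip -s link show dev eth0
  # per-protocol counters (IP, TCP, ...)
  cat /proc/net/snmp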
The discussion then shifted to packet drop visibility in the control plane, which is very important, and several solutions were proposed.
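For context, the drop visibility that already existed is software-side tracing, e.g. via the skb:kfree_skb tracepoint; drops that happen inside the hardware never reach it, which is the gap such proposals target:

  # trace packets freed by the kernel for 10 seconds, with call graphs
  perf record -e skb:kfree_skb -a -g -- sleep 10
  perf script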
The next talk was about doorbell overflow recovery. The topic of discussion was discovery and recovery for RDMA queues. Possible solutions were fast dequeuing, and detection and recovery procedures for CPU stalls and dropped messages.
This talk was followed by a discussion of ovs-tc offload and a possible issue with it.
Rony raised the question of why priorities were chosen over chains. The answer was that recirculation is a good use case for chains.
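To make the trade-off concrete, here is a minimal tc sketch (hypothetical device and match values) of recirculation with goto chain: the packet is re-matched against a second table, which priorities alone, ordering filters within a single lookup, cannot express:

  tc qdisc add dev eth0 clsact
  # chain 0: first lookup; matching packets are sent to chain 1 for re-classification
  tc filter add dev eth0 ingress protocol ip chain 0 prio 1 flower dst_ip 192.0.2.1 action goto chain 1
  # chain 1: second lookup, e.g. after a decap/recirculation step
  tc filter add dev eth0 ingress protocol ip chain 1 prio 1 flower ip_proto tcp action drop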
This was followed by a small test demo.
Finally, the last talk was about scalable NIC HW offload. The main points were:
1. Scale without using SR-IOV.
2. Deploy multiple dynamic instances at a faster speed than VFs.
3. The NIC HW has a very well defined vport-based virtualization mode.
4. One PCI device is split into multiple smaller sub-devices.
5. Each sub-device comes with its own devices, vport, and namespace resources.
6. Leverage the mature switchdev mode and OVS eco-system.
7. Applicable to the SmartNIC use case.
8. Use the rich, vendor-agnostic devlink iproute2 tool (see the sketch after this list).
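As a rough sketch of this flow with devlink, the commands below follow the sub-function interface that later landed upstream; the PCI address, sfnum, port index, and MAC are illustrative, not from the talk:

  # carve a sub-device (sub-function) out of the PF
  devlink port add pci/0000:06:00.0 flavour pcisf pfnum 0 sfnum 88
  # configure the new port's function and activate it
  devlink port function set pci/0000:06:00.0/32768 hw_addr 00:00:00:00:88:88
  devlink port function set pci/0000:06:00.0/32768 state active
  # tear the sub-device down again
  devlink port del pci/0000:06:00.0/32768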
The question that the presenters raised was how to achieve an mdev software model view. A couple of points provided were:
1. Mlx5 mdev devices.
2. Add a control plane knob to add/query/remove mdev devices.
3. vDPA from Intel was mentioned.
4. Create three devices: a netdev, an RDMA device, and a representor netdev.
5. In HW, the mdev is attached to a vport.
6. Map it to a container (see the sketch below); it cannot be mapped to a VM since there is a single instance of the driver.
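One way to read point 6 is that the sub-device's netdev can be moved into a container's network namespace with plain iproute2; the namespace and interface names here are hypothetical:

  ip netns add app1
  # move the sub-device's netdev into the container's namespace and bring it up
  ip link set dev eth2 netns app1
  ip netns exec app1 ip link set dev eth2 up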
The talk concluded with the reasons it has been implemented this way: the devlink tool and bus model fit requirements such as providing a vendor-agnostic solution and multi-port sub-device creation.
Site: https://