Netdev 0x13 report: Performance Study of NVMe/TCP and NVMe/RoCE on Linux

Report by: Kiran Patil
  
This session covered the reasons this performance study was conducted. Open-source implementations exist for both NVMe/RoCE and NVMe/TCP. Not every data center has RoCE-capable devices, but all of them have Ethernet. RoCE is known to deliver lower latency thanks to its RDMA I/O model, so the question was how it compares with ubiquitous TCP/IP over Ethernet. The talk presented a performance study comparing the two implementations. Due to the inherent overhead of the kernel TCP/IP stack, NVMe over TCP shows higher latency than NVMe/RoCE (no surprise there).
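As a sketch of how such a setup is exercised on Linux (not taken from the talk; the NQN, address, and port below are placeholder assumptions), the in-kernel initiators for both transports are driven with the same nvme-cli commands, differing only in the transport argument:

```shell
# Load the in-kernel initiator modules for each transport
modprobe nvme-tcp
modprobe nvme-rdma

# Discover subsystems exported by a target (address and port are placeholders)
nvme discover -t tcp -a 192.168.1.10 -s 4420

# Connect to a subsystem over plain TCP/IP ...
nvme connect -t tcp  -n nqn.2019-04.io.example:testsubsys -a 192.168.1.10 -s 4420

# ... or over RoCE, by switching the transport to RDMA
nvme connect -t rdma -n nqn.2019-04.io.example:testsubsys -a 192.168.1.10 -s 4420
```

Either connect command exposes the remote namespace as a local block device (e.g. /dev/nvme1n1), so the same benchmark can then be pointed at both transports.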
This session also covered a third implementation based on SPDK (the Storage Performance Development Kit, which runs in user space). The comparative study showed that with the in-kernel NVMe/TCP implementation a user can reach 1M IOPS at reasonable latency (1.5 ms maximum), and that traffic scales as 8 cores are used.
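IOPS and latency figures like these are typically measured with a tool such as fio; a minimal sketch follows (the device name, block size, and queue depth are assumptions, with numjobs=8 mirroring the 8-core scaling point):

```shell
# 4K random reads against the NVMe-oF namespace created by 'nvme connect'
# (/dev/nvme1n1 is a placeholder; check 'nvme list' for the real name)
fio --name=nvmeof-randread --filename=/dev/nvme1n1 \
    --ioengine=libaio --direct=1 \
    --rw=randread --bs=4k --iodepth=32 --numjobs=8 \
    --runtime=60 --time_based --group_reporting
```

fio's output reports IOPS alongside completion-latency percentiles, which is how a result such as "1M IOPS at 1.5 ms maximum latency" is obtained.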
  
Site: https://www.netdevconf.info/0x13/session.html?talk-nvme-tcp-roce