10Gbps Ethernet NICs for the All-Flash NVMe Datacenter

Game-Changing RDMA Technology Accelerates Hyper-Converged Infrastructure

Non-Volatile Memory Express (NVMe) and Storage Class Memory (SCM) are offered in current-generation servers with Intel Xeon Scalable Processors. The resulting gains in server storage performance are driving demand for network bandwidth, as Virtual Machines (VMs) and containers are deployed more densely on each server. A 10GbE network has become necessary to support this infrastructure.
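To put the bandwidth requirement in perspective, the short Python sketch below converts nominal Ethernet line rates into usable megabytes per second. The figures are back-of-the-envelope only: they assume the nominal data rate and ignore frame and protocol overhead.

    # Back-of-the-envelope link capacity. Assumes nominal Ethernet data
    # rates and ignores protocol overhead (frame headers, TCP/IP, etc.).
    LINKS_GBPS = {"1GbE": 1, "10GbE": 10}

    def mb_per_s(gbps: float) -> float:
        """Convert a nominal line rate in Gbit/s to MByte/s (decimal units)."""
        return gbps * 1000 / 8

    for name, gbps in LINKS_GBPS.items():
        print(f"{name}: ~{mb_per_s(gbps):,.0f} MB/s per direction")
    # 1GbE:  ~125 MB/s   -- a few busy VMs can saturate this
    # 10GbE: ~1,250 MB/s -- headroom for dense VM/container deployments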

It is also important to consider additional Network Interface Card (NIC) features that may enhance overall performance in a given deployment, such as iSCSI hardware offload or Remote Direct Memory Access (RDMA). Marvell FastLinQ 41000 Series adapters are equipped with one of the most extensive collections of these Ethernet features on the market.

Marvell commissioned Demartek to evaluate the benefits of the Marvell FastLinQ 41000 Series in latest-generation servers. Demartek tested the adapters for Layer 2 performance, compared their iSCSI hardware initiator offload performance to that of the software iSCSI initiator on a leading competitor's adapter, and evaluated their Universal RDMA capability in a hyper-converged Storage Spaces Direct (S2D) cluster backed by SCM and NVMe storage.

Key Findings: 
  • The Marvell FastLinQ 41000 Series achieved line-rate bidirectional performance for buffer sizes from 1KB to 1MB.

  • The Marvell FastLinQ 41000 Series hardware iSCSI initiator delivered an average of 7.2 times the IOPS of the Linux software iSCSI initiator on a competing Intel adapter for unidirectional workloads.

  • For large-block S2D read testing, the cluster utilizing the Marvell FastLinQ 41000 Series with Universal RDMA achieved a total average throughput of 10,470 MBPS while using, on average, 16% of the available cluster processor capacity.

  • For large-block S2D write testing, the cluster utilizing the Marvell FastLinQ 41000 Series with Universal RDMA achieved a total average throughput of 1,227 MBPS over RoCE and 2,360 MBPS over iWARP, while using, on average, 10% of the available cluster processor capacity. (A rough sanity check of these rates appears in the sketch after this list.)
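As context for the read and write figures above, the sketch below expresses each reported total as an equivalent number of saturated 10GbE ports, using the nominal ~1,250 MB/s line rate per port per direction. The node and port counts of the Demartek test cluster are not stated in this summary, so the port math is illustrative only.

    # Relate the reported S2D totals to nominal 10GbE port capacity.
    # Assumes ~1,250 MB/s per port per direction; the actual port count
    # in the test cluster is not given here, so this is illustrative only.
    LINE_RATE_MBPS = 10 * 1000 / 8  # one 10GbE port, one direction

    results = [
        ("large-block read", 10_470),
        ("large-block write (RoCE)", 1_227),
        ("large-block write (iWARP)", 2_360),
    ]
    for label, total in results:
        print(f"{label}: {total:,} MBPS ~= {total / LINE_RATE_MBPS:.1f} "
              f"saturated 10GbE ports")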
