Slow Raid-10 Speeds with 8 x NVMe

Issues related to hardware problems
SolaDrive
Posts: 12
Joined: 2017/05/10 16:36:07
Contact:

Slow Raid-10 Speeds with 8 x NVMe

Post by SolaDrive » 2019/12/16 20:10:54

We have 8 x 2TB NVMe drives in RAID-10 under CentOS 7, attached to our X11DDW-L-B motherboard through two ASUS Hyper M.2 X16 PCIe cards, each holding four NVMe drives. Both expansion cards sit in a single motherboard slot via a dual riser card. In RAID-10 with a 1Gb stripe size I am only getting around 3400 MB/s in write tests; shouldn't that number be roughly double or triple?

Motherboard: X11DDW-L-B
NVMe Drives: XPG SX8200 Pro 2TB
Riser Card: RSC-R1UW-2E16
OS: CentOS Linux release 7.7.1908
Kernel: 3.10.0-1062.9.1.el7.x86_64
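
A quick first check in a setup like this is what link each drive actually negotiated, before suspecting the array itself. A minimal sketch via sysfs — the attribute paths below are as on typical recent Linux kernels and may not all exist on an older 3.10-based kernel, in which case `lspci -vv` is the fallback:

```shell
# Print the negotiated PCIe link width/speed for each NVMe device via sysfs.
# Paths assumed; device names and available attributes may differ per kernel.
for dev in /sys/class/nvme/nvme*; do
    pci="$dev/device"
    if [ -r "$pci/current_link_width" ]; then
        printf '%s: x%s @ %s (max x%s)\n' \
            "$(basename "$dev")" \
            "$(cat "$pci/current_link_width")" \
            "$(cat "$pci/current_link_speed")" \
            "$(cat "$pci/max_link_width")"
    fi
done
```

Each drive here should report x4; if any report less, the bifurcation or riser wiring is the first suspect.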

Code:

[root@ny26 ~]# hdparm -tT /dev/md126

/dev/md126:
 Timing cached reads:   15264 MB in  2.00 seconds = 7649.53 MB/sec
 Timing buffered disk reads: 10222 MB in  3.00 seconds = 3406.85 MB/sec
I checked each of the drives individually, and each reads close to 3000 MB/s:

Code:

[root@ny26 ~]# hdparm -tT /dev/nvme4n1

/dev/nvme4n1:
Timing cached reads: 15836 MB in 2.00 seconds = 7936.69 MB/sec
Timing buffered disk reads: 8920 MB in 3.00 seconds = 2973.22 MB/sec
[root@ny26 ~]# hdparm -tT /dev/nvme5n1

/dev/nvme5n1:
Timing cached reads: 15758 MB in 2.00 seconds = 7897.33 MB/sec
Timing buffered disk reads: 9132 MB in 3.00 seconds = 3043.70 MB/sec
[root@ny26 ~]# hdparm -tT /dev/nvme7n1

/dev/nvme7n1:
Timing cached reads: 15734 MB in 2.00 seconds = 7885.57 MB/sec
Timing buffered disk reads: 9050 MB in 3.00 seconds = 3016.04 MB/sec
[root@ny26 ~]# hdparm -tT /dev/nvme6n1

/dev/nvme6n1:
Timing cached reads: 15824 MB in 2.00 seconds = 7930.45 MB/sec
Timing buffered disk reads: 9130 MB in 3.00 seconds = 3043.19 MB/sec
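
Worth noting that `hdparm -t` issues a single sequential read stream, which can understate what an array sustains under parallel load. A hedged sketch of an aggregate read test with fio instead — `/dev/md126` is the array device from the post, fio is assumed installed, and `--readonly` guards against accidental writes:

```shell
# Aggregate sequential-read benchmark with fio (guarded so it is a no-op
# when fio or the md array is absent). Parameters are illustrative.
if command -v fio >/dev/null 2>&1 && [ -b /dev/md126 ]; then
    fio --name=seqread --filename=/dev/md126 --readonly \
        --rw=read --bs=1M --direct=1 --ioengine=libaio \
        --iodepth=32 --numjobs=4 --group_reporting \
        --runtime=30 --time_based
else
    echo "fio or /dev/md126 not available; skipping benchmark"
fi
```

If multiple parallel jobs still top out near the same ~3400 MB/s figure, the ceiling is upstream of the drives rather than in the benchmark method.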

User avatar
TrevorH
Site Admin
Posts: 33215
Joined: 2009/09/24 10:40:56
Location: Brighton, UK

Re: Slow Raid-10 Speeds with 8 x NVMe

Post by TrevorH » 2019/12/17 09:32:57

That throughput figure looks suspiciously close to PCIe 3.0 x4 speeds. I'd suspect your hardware isn't giving you the full PCIe x16.
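
For reference, that hunch checks out numerically (standard PCIe 3.0 figures, not from the thread): PCIe 3.0 runs at 8 GT/s per lane with 128b/130b line encoding, giving roughly 985 MB/s of raw bandwidth per lane:

```python
# Back-of-the-envelope PCIe 3.0 bandwidth check (standard spec figures).
GT_PER_LANE = 8e9            # 8 GT/s per PCIe 3.0 lane
ENCODING = 128 / 130         # 128b/130b line-encoding efficiency
bytes_per_lane = GT_PER_LANE * ENCODING / 8   # bits -> bytes per second

x4 = 4 * bytes_per_lane / 1e6    # MB/s for an x4 link
x16 = 16 * bytes_per_lane / 1e6  # MB/s for an x16 link

print(f"PCIe 3.0 x4  ~ {x4:.0f} MB/s")   # ~3938 MB/s raw; the observed 3406 MB/s sits just under it
print(f"PCIe 3.0 x16 ~ {x16:.0f} MB/s")  # ~15754 MB/s, the ceiling a full x16 slot could feed
```

The measured ~3400 MB/s landing just below the x4 raw ceiling (after protocol overhead) is what makes the single-x4-link bottleneck the leading suspect.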
The future appears to be RHEL or Debian. I think I'm going Debian.
Info for USB installs on http://wiki.centos.org/HowTos/InstallFromUSBkey
CentOS 5 and 6 are dead, do not use them.
Use the FAQ Luke

SolaDrive
Posts: 12
Joined: 2017/05/10 16:36:07
Contact:

Re: Slow Raid-10 Speeds with 8 x NVMe

Post by SolaDrive » 2019/12/18 19:04:13

Well, each NVMe drive runs at x4, since each CPU slot is bifurcated to x4x4x4x4 in the BIOS, for 32 lanes in total. According to the Supermicro technician, that is how it should be configured.
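
Bifurcation being set correctly in the BIOS doesn't by itself prove the slot feeding the riser negotiated full width. `lspci` can show capability versus status for every device and bridge — pciutils assumed, and some LnkSta fields may need root:

```shell
# Compare LnkCap (what each port supports) against LnkSta (what it actually
# negotiated); the bridge upstream of the riser is the line to scrutinize.
if command -v lspci >/dev/null 2>&1; then
    lspci -vv 2>/dev/null | grep -E '^[0-9a-f]{2}:|LnkCap:|LnkSta:' || true
else
    echo "lspci not available; skipping"
fi
```

A drive can legitimately show x4 on both lines while the shared bridge above it reports a narrower or downgraded link.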

TrevorH
Site Admin
Posts: 33215
Joined: 2009/09/24 10:40:56
Location: Brighton, UK

Re: Slow Raid-10 Speeds with 8 x NVMe

Post by TrevorH » 2019/12/18 22:39:10

Yes, and your combined throughput from 8 x PCIe 3.0 x4 drives is the same as one single PCIe 3.0 x4 slot.
