[Solved] Unveiling the 10GbE Enigma: How I Boosted Speed to 800MB/s on Synology DS923+ with 4x RAID 0

Unable to maximize transfer speed on Synology DS923+
I just put together the following setup:
NAS:
– Synology DS923+ with 32GB RAM and four 10TB WD Red Plus drives in RAID 0
– Synology 10GbE Ethernet adapter

Switch:
– TRENDnet 10Gbps Switch

Desktop:
– TRENDnet 10GbE Ethernet card (Marvell AQtion chipset)
– Brand new Cat6A cabling connecting everything
– Jumbo frames (MTU 9000) enabled on both the Synology and the desktop (the switch is unmanaged)

The WD Red Plus drives are rated for 215MB/s sustained transfer, so four of them in RAID 0 should reach roughly 860MB/s, right?
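As a quick sanity check of that arithmetic (a sketch; it assumes ideal RAID 0 scaling, which real arrays only approximate):

```python
# Back-of-the-envelope throughput check. Assumes ideal RAID 0 striping,
# i.e. writes scale linearly with the number of member drives.
PER_DRIVE_MBPS = 215   # WD Red Plus 10TB rated sustained transfer rate
DRIVES = 4

print(f"Ideal RAID 0 write:  {PER_DRIVE_MBPS * DRIVES} MB/s")  # 860 MB/s

# The wire should not be the limit: 10GbE carries 10,000 Mb/s,
# about 1250 MB/s before protocol overhead.
print(f"10GbE raw capacity:  {10_000 // 8} MB/s")              # 1250 MB/s
```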

But when I transfer a single very large 128GB file from the desktop (NVMe SSD, rated 3000MB/s+) to this RAID 0 volume, I only get 400MB/s, about half of what I expected.

The Ethernet port LEDs on the NAS, the switch, and the desktop are all green. They only light green when the link is at 10Gbps; 5Gbps and below light orange.
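For what it’s worth, the port LED is not the only check. Here is a minimal sketch that reads the negotiated speed the Linux kernel reports (DSM is Linux underneath, so it also works over SSH on the NAS; the interface name is a placeholder):

```python
# Sketch: read the negotiated link speed from the kernel instead of
# trusting the port LED color. "eth0" is a placeholder; list
# /sys/class/net/ to find the 10GbE interface on your system.
from pathlib import Path

IFACE = "eth0"  # hypothetical interface name
speed_mbps = int(Path(f"/sys/class/net/{IFACE}/speed").read_text())
verdict = "10GbE link OK" if speed_mbps >= 10_000 else "NOT linked at 10GbE"
print(f"{IFACE}: {speed_mbps} Mb/s -> {verdict}")
```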

Any ideas what could be hindering the performance?

Gustavo

UPDATE
I’ve ruled out every bottleneck I can think of:

– Arguably a top NAS device
– Plenty of RAM
– No services running other than stock DSM 7.2; no third-party apps competing for resources
– Brand new 7200RPM CMR hard drives, each rated at 215MB/s sustained write speed
– Zero fragmentation on the HDDs: brand new setup with nothing on them but DSM itself
– A single 128GB file transferred from an NVMe drive on the desktop, so neither source fragmentation nor source speed is a bottleneck
– All new 10GbE devices across the entire chain, with brand new Cat6A cables
– Jumbo frames (MTU 9000) enabled on both desktop and NAS. The router is still at MTU 1500, but traffic flows only through the 10GbE switch, not the router (see the sketch below)
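To confirm the 9000-byte path end to end, a do-not-fragment ping sized to fill a jumbo frame works. A sketch (Linux iputils syntax; on Windows the equivalent is `ping -f -l 8972`; the NAS address is a placeholder):

```python
# Sketch: verify jumbo frames survive the desktop -> switch -> NAS path.
# 8972 B ICMP payload + 8 B ICMP header + 20 B IP header = 9000 B frame.
import subprocess

NAS_IP = "192.168.1.50"  # hypothetical NAS address

result = subprocess.run(
    ["ping", "-M", "do", "-s", "8972", "-c", "3", NAS_IP],
    capture_output=True, text=True,
)
print(result.stdout)
print("Jumbo path OK" if result.returncode == 0
      else "Something on the path still has a smaller MTU")
```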

I can’t imagine where the bottleneck is. Maybe these 10TB WD Red Plus drives are much slower than rated, but mine are the 7200RPM models, not the 5400RPM ones used for the 8TB capacity. It is not unrealistic that they can indeed sustain 200MB/s each.

I will try iSCSI to see if I get anything different, but even over SMB, speeds should be much higher.

UPDATE

Some additional tests:

Common setup
– DS923+ with 32GB RAM, stock DSM 7.2 with nothing installed or running beyond the vanilla first-boot setup
– Synology proprietary 10GbE card installed
– Switch: TRENDnet S750 5-port 10GbE
– Desktop: Ryzen 5800X, 32GB RAM, 2TB NVMe
– Desktop NIC: TRENDnet TEG-10GECTX (Marvell AQtion chipset)
– Jumbo frames: MTU 9000 bytes

A) TESTING WITH 4 HDDS IN RAID0

iSCSI
4 HDDs in RAID0
Result: 300MB/s
–> Very weird that iSCSI performance is worse than SMB.

SMB
4 HDDs in RAID0
Btrfs
Result: 400MB/s
–> This seems too low, as each HDD is capable of ~200MB/s

SMB
4 HDDs in RAID0
Ext4
Result: 400MB/s
–> File system choice does not affect performance in this test

B) TESTING WITH 2 HDDS IN RAID 0

SMB
2 HDDs in RAID0
Btrfs
Result: 400MB/s
–> This proves that these drives can sustain at least 200MB/s each, so four should reach 800MB/s as far as the HDDs are concerned.

SMB
2 HDDs in RAID0
Ext4
Result: 400MB/s
–> And again, file system choice does not affect performance on large file transfers

C) TESTING WITH 4 HDDS IN 2 RAID 0 POOLS

SMB
2 HDDs in RAID0 and Ext4
+
2 HDDs in RAID0 and Btrfs
Simultaneous data transfers from different SSDs on the desktop
Result: 200MB/s on each transfer
–> Clearly, there’s a cap at 400MB/s…

Where is this cap coming from?
– The tests show the HDDs are not the bottleneck
– Maybe the Synology DS923+ isn’t really 10GbE capable?
– Maybe the TRENDnet switch or NICs aren’t really 10GbE capable?
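One way to isolate it would be a raw network test that touches no disks at all. A minimal sketch in pure Python (the port and transfer sizes are arbitrary choices); if this also caps near 400MB/s, the limit is in the network/PCIe chain rather than the array:

```python
# Sketch: raw TCP throughput test, no disks involved.
# Run "python3 blast.py server" on one machine and
# "python3 blast.py client <server-ip>" on the other.
import socket
import sys
import time

PORT = 5001        # arbitrary free port
CHUNK = 1 << 20    # 1 MiB per send
TOTAL = 8 << 30    # push 8 GiB, enough to outlast any caching effects

def server() -> None:
    with socket.create_server(("", PORT)) as srv:
        conn, _ = srv.accept()
        with conn:
            while conn.recv(CHUNK):  # drain until the client disconnects
                pass

def client(host: str) -> None:
    payload = bytes(CHUNK)
    start, sent = time.time(), 0
    with socket.create_connection((host, PORT)) as s:
        while sent < TOTAL:
            s.sendall(payload)
            sent += CHUNK
    print(f"{sent / (time.time() - start) / 1e6:.0f} MB/s over the wire")

if __name__ == "__main__":
    server() if sys.argv[1] == "server" else client(sys.argv[2])
```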

UPDATE – FIXED

Finally figured out what was happening!
It turns out it was a really stupid mistake:
I had plugged the 10GbE card into a PCIe 2.0 x1 slot on my desktop, limiting it to roughly 500MB/s!
Once I moved the card to a PCIe 3.0 x4 slot, I immediately got 800MB/s on the 4x RAID 0 array.
Unbelievably stupid on my end.
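For reference, the slot math that explains the wall (encoding overheads per the PCIe specs; rough one-way figures that ignore packet overhead):

```python
# Rough one-way PCIe bandwidth: transfer rate x lanes x encoding efficiency.
# PCIe 2.0: 5 GT/s per lane with 8b/10b encoding    -> 80% efficient
# PCIe 3.0: 8 GT/s per lane with 128b/130b encoding -> ~98.5% efficient
USABLE_GBPS_PER_LANE = {"2.0": 5.0 * 8 / 10, "3.0": 8.0 * 128 / 130}

def slot_mbps(gen: str, lanes: int) -> float:
    """Theoretical one-way slot bandwidth in MB/s."""
    return USABLE_GBPS_PER_LANE[gen] * lanes * 1000 / 8

print(f"PCIe 2.0 x1: {slot_mbps('2.0', 1):.0f} MB/s")  # 500 -> my 400MB/s wall
print(f"PCIe 3.0 x4: {slot_mbps('3.0', 4):.0f} MB/s")  # ~3938, ample for 10GbE
```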
At least, lessons learned:
1) 4x RAID 0 with modern HDDs can indeed reach 800MB/s. That’s beyond SATA SSD territory (SATA tops out around 550MB/s in practice), while also providing huge storage capacity (40TB in my case)
2) Always check the damn PCIe slot!
3) Btrfs and Ext4 perform equivalently on large file transfers on Synology
4) Jumbo frames (MTU 9000) do provide a meaningful speed boost: around 10–15% in my testing vs MTU 1500
I hope this is helpful.