Not Getting 10GbE Speed? 20 Fixes and Solutions

20 Ways to Improve Your 10GbE Network Speeds

Upgrading to 10GbE networking should, in theory, deliver around 1GB/s of real-world throughput (10Gbps works out to 1,250MB/s raw; protocol overhead typically leaves 1.1-1.2GB/s usable), unlocking ultra-fast data transfers for large files, backups, and high-performance applications. However, many users find that real-world performance falls far short of these expectations. Instead of the seamless, high-speed experience they anticipated, they encounter slower-than-expected speeds, inconsistent performance, and unexplained bottlenecks that limit throughput.

Whether you’re using a NAS, a 10GbE switch, or a direct PC-to-NAS connection, numerous factors can influence network performance. These can range from hardware limitations (such as underpowered CPUs, slow storage, or limited PCIe lanes) to misconfigured network settings (like incorrect MTU sizes, VLAN issues, or outdated drivers). Even the quality of your network cables and transceivers can play a crucial role in determining whether you’re getting the full 10GbE bandwidth or suffering from hidden bottlenecks.

In this guide, we’ll explore TWENTY common reasons why your 10GbE network might not be delivering full speeds, along with detailed fixes and optimizations for each issue. Each point is carefully explained, ensuring that you can identify, diagnose, and resolve the specific problems affecting your network performance. Whether you’re dealing with a NAS that isn’t reaching expected speeds, a 10GbE adapter that’s underperforming, or a switch that isn’t behaving as expected, this guide will help you troubleshoot step by step, so you can fully unlock the potential of your 10GbE network.


1. (Obvious one) Your Storage is Too Slow to Keep Up with 10GbE Speeds

The Problem:

One of the biggest misconceptions about 10GbE networking is that simply having a 10GbE network adapter means you will automatically get 1GB/s speeds. However, your actual storage performance is often the bottleneck. Most traditional hard drives (HDDs) have a sequential read/write speed of only 160-280MB/s, meaning that a single drive cannot fully saturate a 10GbE connection. Even with multiple HDDs in a RAID array, performance may still fall short of 1GB/s due to RAID overhead and the limitations of mechanical disks.

For example, if you have a 4-bay NAS with standard 7200RPM hard drives in RAID 5, you may only reach 500-600MB/s, which is half the potential of your 10GbE network. The situation gets worse if you are using RAID 6, as the additional parity calculations introduce a write performance penalty.

The Fix:

  • Switch to SSDs: If you need consistent 10GbE performance, you will need SSDs instead of HDDs. Even four SATA SSDs in RAID 5 can saturate a 10GbE connection (~1GB/s read/write).
  • Use NVMe Storage for Maximum Speeds: If your NAS supports NVMe SSDs, using them will provide 3-5GB/s speeds, which far exceeds 10GbE bandwidth.
  • Optimize RAID Configuration:
    • RAID 0 offers maximum speed, but no redundancy.
    • RAID 5 or RAID 10 is the best balance for speed and data protection.
    • RAID 6 is great for redundancy but can severely impact write performance.

How to Check Disk Speeds:

Run a disk speed test to verify if storage is the issue:

Windows (CrystalDiskMark)

  1. Download and install CrystalDiskMark.
  2. Select your storage volume (NAS drive, local SSD, etc.).
  3. Run a sequential read/write test.
  4. If speeds are below 1GB/s, your storage is the bottleneck.

Linux/macOS (dd Command)

dd if=/dev/zero of=/mnt/testfile bs=1G count=5 oflag=direct
  • This writes 5GB of data to test sequential write speeds (Linux syntax; macOS dd does not support oflag=direct and uses bs=1g).
  • Check the MB/s value after the test completes—if it’s below 1000MB/s, your storage is too slow.
  • Note that /dev/zero produces highly compressible data, so results can look optimistic on filesystems with compression enabled (e.g., ZFS or Btrfs). For a more controlled test, see the fio sketch below.
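For a more controlled benchmark on Linux, fio gives repeatable sequential results. A minimal sketch, assuming the fio package is installed and that /mnt/testfile sits on the volume you want to test (adjust the path for your system):

fio --name=seqwrite --filename=/mnt/testfile --rw=write --bs=1M --size=5G --direct=1 --ioengine=libaio # sequential write, bypassing the page cache
fio --name=seqread --filename=/mnt/testfile --rw=read --bs=1M --size=5G --direct=1 --ioengine=libaio # sequential read of the same file

As with dd, results consistently below 1000MB/s point to storage rather than the network.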


2. Your SSDs or NVMe Drives Are Running at Lower PCIe Speeds

The Problem:

Even if your NAS or PC is using SSDs, you might not be getting full speeds due to PCIe lane limitations. Some NAS devices throttle M.2 NVMe SSDs to PCIe 3.0 x1 or x2, which caps speeds at 800-1600MB/s—not enough to fully saturate a 10GbE connection.

This issue is particularly common in budget-friendly NAS systems and motherboards where multiple M.2 slots share bandwidth with SATA ports or other PCIe devices. Even high-speed SSDs like the Samsung 980 Pro (7000MB/s rated speed) will be bottlenecked if placed in an underpowered slot.

The Fix:

  • Check PCIe Lane Assignments:
    • Some motherboards share PCIe lanes between M.2 slots and other components (e.g., GPU, SATA ports).
    • Move your NVMe SSD to a full x4 slot for maximum speed.

Linux/macOS (Check PCIe Speeds)

lspci | grep -i nvme # find the SSD’s PCIe address (e.g., 03:00.0)
sudo lspci -s 03:00.0 -vv | grep -i lnksta
  • In the LnkSta line, look for a link width of x1 or x2—this means your SSDs are not running at full bandwidth.

Windows (Check with CrystalDiskInfo)

  1. Download CrystalDiskInfo.
  2. Look for the PCIe link speed in the SSD details.

If speeds are lower than expected, try moving the SSD to a different M.2 slot or checking BIOS settings to enable full PCIe bandwidth.


3. You’re Using DRAM-less SSDs (HMB-Only SSDs Can Throttle Speeds)

The Problem:

Not all SSDs are created equal. Some budget SSDs lack DRAM cache and instead rely on Host Memory Buffer (HMB), which offloads caching duties to system RAM. While this design helps reduce costs, it also means significantly lower sustained write performance.

For a single SSD, this might not be an issue, but in a RAID configuration, the problem worsens as multiple drives compete for system memory. DRAM-less SSDs also tend to overheat faster, leading to thermal throttling, further reducing performance.

The Fix:

  • Use SSDs with DRAM cache: High-performance SSDs like the Samsung 970 EVO, WD Black SN850, and Crucial P5 Plus have dedicated DRAM to prevent slowdowns.
  • Monitor SSD temperatures (see the command sketch after this list):
    • If SSDs are overheating (above 70°C), use heatsinks or active cooling.
  • Check SSD type in Windows:
    1. Open Device Manager → Expand Disk Drives.
    2. Search your SSD model online—if it lacks DRAM, it could be a performance bottleneck.
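To read NVMe temperatures directly on Linux, here is a minimal sketch assuming the nvme-cli package is installed; /dev/nvme0 is an example device name, so substitute your own:

sudo nvme smart-log /dev/nvme0 | grep -i temperature

Sustained readings above roughly 70°C during transfers suggest thermal throttling; a heatsink or improved airflow usually resolves it.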


4. Your Switch is Not Actually 10GbE (Misleading Switch Descriptions)

The Problem:

Many users unknowingly purchase “10GbE” switches that only have limited 10GbE ports. Some switches advertise 10GbE speeds, but only one or two ports support it, while the rest run at 1GbE.

It’s also possible that your NAS or PC is plugged into a non-10GbE port, creating an invisible bottleneck.

The Fix:

  • Check the switch model’s specifications to confirm the number of true 10GbE ports.
  • Log into your switch’s admin panel and confirm the port speeds:
    • If using Netgear, Ubiquiti, or Cisco, log in and check the port status.
    • If using a managed switch, run the following command via SSH:
      show interfaces status
    • Look for 10G/10000M to confirm that the port is running at full speed.

Windows (Check Network Speed)

  1. Open Control Panel > Network and Sharing Center.
  2. Click on your 10GbE adapter → Check Speed (should show 10.0Gbps).

If your switch only has 1-2 ports at 10GbE, you may need to reconfigure your network layout or upgrade to a full 10GbE switch.
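On a Linux PC or NAS with shell access, you can also confirm the negotiated link speed directly. A minimal check, with ethX standing in for your actual interface name:

ethtool ethX | grep -i speed # should report Speed: 10000Mb/s

If it reports Speed: 1000Mb/s, the link negotiated at gigabit, which usually points to the port, cable, or transceiver.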


5. You’re Using the Wrong Ethernet Cables (Cat5e vs. Cat6/Cat7)

The Problem:

Not all Ethernet cables can handle 10GbE speeds. Cat5e is not rated for 10GBASE-T at all; it may negotiate a 10GbE link over very short runs, but performance drops off quickly with distance and interference.

The Fix:

  • Use at least Cat6 for short-to-medium runs (rated for 10GBASE-T up to roughly 55 meters).
  • Use Cat6a or Cat7 for longer runs (rated for the full 100 meters).
  • Inspect cables—cheap or old cables may not be rated for 10GbE.

How to Check Your Cable Type

  1. Look at the cable jacket—it should say Cat6, Cat6a, or Cat7.
  2. If the cable does not specify, assume it’s Cat5e and replace it.

If using fiber, make sure your SFP+ transceivers are rated for 10GbE—many cheap adapters are 1GbE only.


6. Your Network Adapter is Using the Wrong Driver or Firmware

The Problem:

Even if you have a 10GbE network adapter installed, outdated or incorrect drivers can limit speeds or cause inconsistent performance. Many network cards rely on manufacturer-specific drivers for optimal performance, but some operating systems may install generic drivers that lack key optimizations.

This issue is common with Intel, Mellanox, Broadcom, and Aquantia/AQC NICs—especially if they were installed manually or came pre-installed with a NAS or prebuilt server.

The Fix:

  1. Check your network adapter model:
    • Windows: Open Device Manager > Network Adapters and find your 10GbE NIC name.
    • Linux/macOS: Run the following command to list your installed NICs:
      lspci | grep Ethernet
  2. Update the driver manually:
    • Windows: Go to the manufacturer’s website (Intel, Broadcom, Mellanox, etc.) and download the latest driver.
    • Linux: Check the loaded driver and its version with ethtool, then update through your distribution’s packages or the vendor’s driver:
      sudo ethtool -i ethX # Replace ethX with your network interface
  3. Check and update NIC firmware: Some network cards require a firmware update for full 10GbE support. Many Aquantia NICs, for example, need firmware updates to fix link speed negotiation issues.
  4. Ensure your OS isn’t using a generic driver:
    • In Windows, open Device Manager, right-click the NIC, and select Properties > Driver. If it says Microsoft Generic Adapter, update it manually.
    • In Linux, check driver details with:
      ethtool -i ethX

      If the driver is a generic kernel driver, install the manufacturer’s official driver.


7. MTU (Jumbo Frames) is Not Set Correctly

The Problem:

By default, most network devices use a 1500-byte MTU (Maximum Transmission Unit). However, 10GbE networks can benefit from larger packet sizes (9000 bytes, known as Jumbo Frames). If one device has Jumbo Frames enabled but another doesn’t, packets get fragmented, leading to lower speeds, higher latency, and increased CPU usage.

The Fix:

  1. Enable Jumbo Frames (MTU 9000) on All Devices:
    • Windows:
      • Go to Control Panel > Network and Sharing Center > Change Adapter Settings.
      • Right-click your 10GbE adapter, select Properties > Configure > Advanced.
      • Set Jumbo Frame / MTU to 9000.
    • Linux:
      sudo ip link set ethX mtu 9000
    • macOS:
      sudo ifconfig en0 mtu 9000 # Replace en0 with your interface name
    • NAS:
      • Synology: Go to Control Panel > Network > Interfaces > Edit and set MTU to 9000.
      • QNAP: Go to Network & Virtual Switch > Interfaces > Jumbo Frames.
  2. Check MTU Settings on Your Switch:
    • If your switch does not support MTU 9000, disable Jumbo Frames or upgrade the switch.
  3. Verify MTU Configuration:
    • Run a ping test with large packets that are not allowed to fragment (8972 bytes of payload plus 28 bytes of headers equals 9000):
      ping -f -l 8972 NAS_IP # Windows
      ping -M do -s 8972 NAS_IP # Linux

      If the pings fail or the packets fragment, MTU isn’t properly configured end to end.


8. Your NAS or PC CPU is Too Weak to Handle 10GbE Traffic

The Problem:

Even if you have fast storage and a 10GbE adapter, a low-power CPU can bottleneck network performance. Many NAS devices use ARM-based or low-end Intel CPUs (e.g., Celeron, Atom, or N-series processors) that struggle to handle high-speed transfers, encryption, or multi-user traffic.

For example, some budget NAS units advertise 10GbE connectivity, but their CPU is too weak to push consistent 1GB/s speeds—especially if multiple users are accessing data simultaneously.

The Fix:

  • Check NAS CPU specs:
    • If your NAS has a quad-core ARM or low-end Intel CPU, it may not be capable of full 10GbE speeds.
  • Monitor CPU Usage:
    • Windows: Open Task Manager > Performance and check if the CPU is maxed out during transfers.
    • Linux/macOS: Use:
      top
      (top averages across cores; for a per-core view, see the mpstat sketch after this list.)
  • Disable resource-heavy background tasks:
    • Stop or schedule RAID scrubbing, snapshots, virus scans, and indexing during off-hours.
  • Use an x86 NAS with a high-performance CPU:
    • Intel Core i3/i5, Ryzen, or Xeon-based NAS units handle 10GbE much better than Celeron/ARM-based models.
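Bear in mind that top shows an average across all cores, which can hide the real problem: network interrupt handling often lands on a single core, and one saturated core is enough to cap 10GbE throughput. A minimal per-core view on Linux, assuming the sysstat package is installed:

mpstat -P ALL 1 # refreshes per-core usage every second; watch for one core pinned near 100% during a transfer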


9. VLAN, QoS, or Network Prioritization is Throttling Your 10GbE Traffic

The Problem:

If you’re using a managed switch or router, incorrect VLAN (Virtual LAN) or QoS (Quality of Service) settings may be limiting your 10GbE speeds. Some switches automatically assign lower priority to high-bandwidth devices, throttling performance.

The Fix:

  1. Check VLAN settings:
    • If your 10GbE NAS or PC is in a VLAN with limited bandwidth, remove it from that VLAN or adjust the priority settings.
  2. Disable or Adjust QoS Settings:
    • Log into your switch’s admin panel and look for QoS (Quality of Service) settings.
    • If enabled, check if bandwidth limits are applied to your 10GbE ports.
    • In some switches (e.g., Ubiquiti, Netgear, Cisco), set QoS priority for 10GbE devices to “High”.
  3. Run a Speed Test Without VLAN or QoS:
    • Temporarily disable VLAN/QoS, then test file transfer speeds again.

If speeds improve, your VLAN/QoS settings were throttling your network.


10. Background Processes or Other Network Devices Are Consuming Bandwidth

The Problem:

If you’re not getting full 10GbE speeds, it’s possible that another device is using the NAS at the same time. Even if your PC or NAS seems idle, background tasks like cloud syncing, automated backups, Plex transcoding, or surveillance camera recording can consume CPU, storage I/O, and network bandwidth.

The Fix:

  1. Check if other devices are using the NAS:
    • Windows: Open Task Manager > Network and check if any background processes are consuming bandwidth.
    • Linux/macOS: Use:
      iftop -i ethX
    • On your NAS, check if:
      • Plex or media servers are streaming.
      • Security cameras are recording to the NAS.
      • Backups/snapshots are running in the background.
  2. Pause Background Tasks:
    • Temporarily disable cloud syncing, RAID scrubbing, and backups, then retest network speeds.
  3. Run an IPerf Network Speed Test:
    • Windows/Linux:
      • On NAS:
        iperf3 -s
      • On PC:
        iperf3 -c NAS_IP -P 4
    • If iPerf shows close to 10Gbps but file transfers don’t reach similar speeds, then background processes or storage limitations are the issue.


11. Your SFP+ Transceiver or Media Converter is Bottlenecking Performance

The Problem:

If you’re using SFP+ transceivers or fiber-to-RJ45 media converters, they might not be running at full 10GbE speeds. Many budget-friendly SFP+ modules are actually 1GbE-only or have compatibility issues with certain switches and NICs. Additionally, some fiber-to-copper converters (e.g., cheap third-party models) overheat quickly, leading to throttling and slow speeds.

The Fix:

  1. Check Your SFP+ Transceiver Rating:
    • Run the following command on a Linux-based NAS or switch:
      ethtool ethX
    • If the output shows 1000Mbps instead of 10000Mbps, your SFP+ module is not running at full speed.
  2. Use Verified SFP+ Modules:
    • Stick to brand-certified transceivers (e.g., Intel, Mellanox, Cisco, Ubiquiti, MikroTik).
    • Generic eBay/Amazon SFP+ transceivers may not properly negotiate at 10GbE.
  3. Check for Overheating:
    • Touch the transceiver—if it’s too hot to hold, it may be thermal throttling. (Many modules can also report their own temperature; see the sketch after this list.)
    • Consider active cooling (small heatsinks or airflow near the module).
  4. Verify Media Converters:
    • Some cheap SFP-to-RJ45 converters cap speeds at 5GbE or lower.
    • Try swapping the converter for a direct 10GbE-capable SFP+ transceiver.
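As mentioned in step 3, many modules can report their own temperature. A minimal sketch for Linux; this only works if the NIC driver and the module support digital diagnostics (DOM), which not all do:

sudo ethtool -m ethX # dumps module EEPROM data, including temperature where supported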

12. Your PCIe Slot is Throttling Your 10GbE NIC

The Problem:

Your 10GbE network card (NIC) might be plugged into a PCIe slot that doesn’t provide full bandwidth. Some motherboards limit secondary PCIe slots to x1 or x2 speeds, which reduces network performance significantly.

For example:

  • A PCIe 2.0 x1 slot only supports 500MB/s, far below 10GbE speeds.
  • PCIe 3.0 x2 (~2GB/s) is the practical minimum for full 10GbE throughput; most 10GbE NICs are designed for PCIe 3.0 x4 or PCIe 2.0 x8 slots.

The Fix:

  1. Check PCIe Slot Assignment:
    • Windows: Use HWiNFO64 or Device Manager to check PCIe link speed.
    • Linux/macOS: Run:
      lspci | grep -i ethernet # find the NIC’s PCIe address (e.g., 01:00.0)
      sudo lspci -s 01:00.0 -vv | grep -i lnksta

      If the LnkSta line shows a link width of x1, your NIC is bottlenecked.

  2. Move the 10GbE NIC to a Better Slot:
    • Use a PCIe 3.0/4.0 x4 or x8 slot for full bandwidth.
    • Avoid chipset-controlled PCIe slots, as they share bandwidth with SATA, USB, and other devices.
  3. Enable Full PCIe Speed in BIOS:
    • Go to BIOS > Advanced Settings > PCIe Configuration.
    • Set the slot to “Gen 3” or “Gen 4” (depending on your motherboard).


13. SMB or NFS Protocol Overhead is Slowing Transfers

The Problem:

If you’re transferring files over a mapped network drive (SMB/NFS), protocol overhead can reduce real-world speeds. Windows SMB, in particular, can limit large file transfers due to encryption, signing, or buffer settings.

The Fix:

  1. Enable SMB Multichannel for Faster Transfers (Windows):
    • Open PowerShell as Administrator and run:
      Set-SmbClientConfiguration -EnableMultiChannel $true
    • This allows multiple TCP connections for higher throughput (see the verification sketch after this list).
  2. Disable SMB Signing (If Safe to Do So):
    • Windows:
      Set-SmbClientConfiguration -RequireSecuritySignature $false
    • Linux:
      Use vers=3.0 in the mount options in /etc/fstab when mounting an SMB share, and avoid options that force encryption (e.g., the seal flag, which enables SMB3 encryption and adds CPU overhead).
  3. Try NFS Instead of SMB (If Using Linux/macOS):
    • SMB can be slow for large sequential transfers.
    • NFS performs better for 10GbE direct-attached storage (NAS to PC).
  4. Use iSCSI for Direct Storage Access:
    • If your NAS supports iSCSI, mount an iSCSI target for block-level access, which can be much faster than SMB/NFS.

14. Your Router or Network Switch is Blocking Full Speeds

The Problem:

Many consumer-grade routers and switches have built-in traffic management features that can throttle high-speed connections. Even some high-end managed switches may have bandwidth limits, VLAN misconfigurations, or QoS settings that restrict speeds.

The Fix:

  1. Disable Traffic Shaping or QoS:
    • On a managed switch, log in and disable bandwidth limits on your 10GbE ports.
    • On a router, look for:
      • Smart QoS / Traffic Prioritization (disable it).
      • Bandwidth Limiting (set to unlimited).
  2. Check VLAN Configuration:
    • If your NAS and PC are in different VLANs, traffic might be routed through the main router, slowing speeds.
    • Move both devices into the same VLAN for direct 10GbE connectivity.
  3. Ensure Your Switch Supports Full 10GbE Throughput:
    • Some low-end 10GbE switches have an internal bandwidth cap.
    • Example: A switch with five 10GbE ports but only a 20Gbps internal backplane will throttle performance under heavy load.

15. Windows Power Management is Throttling Your 10GbE Card

The Problem:

Windows Power Management settings may be automatically throttling your 10GbE network adapter to save energy. This can cause inconsistent speeds and unexpected slowdowns.

The Fix:

  1. Disable Energy-Efficient Ethernet (EEE):
    • Open Device Manager → Expand Network Adapters → Right-click your 10GbE adapter → Properties.
    • Under the Advanced tab, find “Energy-Efficient Ethernet” and set it to Disabled (these settings can also be managed from PowerShell; see the sketch after this list).
  2. Set Windows Power Plan to High Performance:
    • Open Control Panel > Power Options.
    • Select High Performance (or Ultimate Performance if available).
  3. Disable CPU Power Throttling:
    • Open PowerShell as Administrator and run:
      powercfg -setactive SCHEME_MIN
    • This forces Windows to prioritize performance over power saving.
  4. Check for Interrupt Moderation & Adaptive Inter-Frame Spacing:
    • In Device Manager, under the Advanced tab of your 10GbE adapter, disable:
      • Interrupt Moderation
      • Adaptive Inter-Frame Spacing
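The same adapter settings can be inspected and changed from PowerShell, which is useful when tuning several machines. A minimal sketch; "10GbE Adapter Name" is a placeholder, and the exact DisplayName strings vary between drivers:

Get-NetAdapterAdvancedProperty -Name "10GbE Adapter Name" | Where-Object { $_.DisplayName -match "Energy|Interrupt" } # list matching advanced properties
Set-NetAdapterAdvancedProperty -Name "10GbE Adapter Name" -DisplayName "Energy-Efficient Ethernet" -DisplayValue "Disabled" # disable EEE if the driver exposes it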


16. Your NAS or PC is Routing Traffic Through the Wrong Network (Subnet Mismatch)

The Problem:

Even if you have a direct 10GbE connection between your NAS and PC, your operating system might still route traffic through a slower network interface (e.g., a 1GbE connection or even Wi-Fi). This can happen if your system prioritizes the wrong network adapter, or if your NAS and PC are on different subnets, causing traffic to be routed through a slower router or switch instead of using the direct 10GbE link.

For example:

  • Your NAS has two network interfaces:
    • 10GbE: 192.168.2.10
    • 1GbE: 192.168.1.10
  • Your PC has two interfaces:
    • 10GbE: 192.168.2.20
    • Wi-Fi: 192.168.1.50

If your PC is trying to reach the NAS using the 1GbE or Wi-Fi address, it may bypass the 10GbE connection entirely, leading to slow speeds.

The Fix:

  1. Ensure Both Devices Are on the Same Subnet
    • Assign both 10GbE interfaces an IP in the same range (e.g., 192.168.2.x).
    • Set the 1GbE and Wi-Fi interfaces to a different subnet (e.g., 192.168.1.x).
  2. Manually Set the 10GbE Network as the Preferred Route
    • Windows (CMD – Run as Administrator):
      netsh interface ipv4 set interface "10GbE Adapter Name" metric=1
    • Linux:
      sudo ip route add 192.168.2.0/24 dev ethX metric 10
      (macOS uses the route command rather than ip.)
    • A lower metric prioritizes the 10GbE connection over slower networks.
  3. Check Active Routes to Ensure 10GbE is Being Used
    • Windows:
      route print
    • Linux:
      ip route show
      (On macOS, use netstat -rn.)
    • Look for 192.168.2.x going through the 10GbE adapter. If another network is being used, adjust the routing table (a quick per-destination check is sketched after this list).
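To check which interface will carry traffic to one specific address, rather than reading the whole table, here is a quick per-destination lookup (192.168.2.10 is the example NAS address from above):

ip route get 192.168.2.10 # Linux: shows the outgoing device (dev ethX) and source address
Find-NetRoute -RemoteIPAddress 192.168.2.10 # Windows PowerShell equivalent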


17. Your SATA Controller is Bottlenecking Multiple Drives

The Problem:

Even if you have fast SSDs or multiple hard drives in RAID, the SATA controller inside your NAS or PC might be the bottleneck. Some budget NAS units and lower-end PC motherboards use cheap SATA controllers (e.g., JMicron, ASMedia, Marvell) that bottleneck total disk throughput.

For example:

  • Your NAS or PC has six SATA ports, but they are all routed through a single PCIe 2.0 x1 controller (which has a max bandwidth of 500MB/s).
  • Even though each SSD is capable of 500MB/s, the total throughput is capped by the controller’s bandwidth.

The Fix:

  1. Check the SATA Controller in Use:
    • Windows (Device Manager): Expand Storage Controllers and check the SATA controller manufacturer.
    • Linux/macOS:
      lspci | grep SATA
    • If you see JMicron, ASMedia, or Marvell, you might have a bandwidth-limited controller (see the link-speed check after this list).
  2. Use an HBA (Host Bus Adapter) Instead
    • If your motherboard or NAS has limited SATA bandwidth, install a dedicated LSI/Broadcom HBA card (e.g., LSI 9211-8i, LSI 9300-8i) to get full-speed SATA connectivity.
  3. Check the SATA Backplane in NAS Enclosures
    • Some NAS enclosures have a shared SATA controller for all drives, limiting total speed.
    • If possible, upgrade to a NAS with multiple SATA controllers or use NVMe SSDs instead.
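On Linux, the kernel log records the negotiated speed of each SATA link at boot, which is a quick way to run the link-speed check mentioned above:

sudo dmesg | grep -i "SATA link up" # healthy modern ports report 6.0 Gbps; 1.5 or 3.0 Gbps indicates older or constrained links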

18. Your System’s TCP/IP Stack is Not Optimized for High-Speed Transfers

The Problem:

By default, most operating systems have conservative TCP settings that are optimized for 1GbE networks, but not for high-speed 10GbE connections. Without proper tuning, TCP window size, congestion control, and buffer settings can limit data transfer rates over high-bandwidth connections.

The Fix:

Windows: Optimize TCP Settings via PowerShell

  1. Enable TCP Window Auto-Tuning:
    netsh int tcp set global autotuninglevel=normal
  2. Enable Receive Side Scaling (RSS) to Use Multiple CPU Cores:
    Set-NetAdapterRss -Name "10GbE Adapter Name" -Enabled $true
  3. Apply Additional Global TCP Tweaks:
    netsh int tcp set global rss=enabled
    netsh int tcp set global ecncapability=disabled
    (The old TCP chimney offload setting was deprecated and removed in recent Windows versions, so it can be skipped.)

Linux/macOS: Increase TCP Buffers

Edit /etc/sysctl.conf and add:

net.core.rmem_max = 67108864
net.core.wmem_max = 67108864
net.ipv4.tcp_rmem = 4096 87380 67108864
net.ipv4.tcp_wmem = 4096 65536 67108864

Then apply the changes:

sudo sysctl -p
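To confirm the new values took effect, query them back. A minimal check:

sysctl net.core.rmem_max net.core.wmem_max net.ipv4.tcp_rmem net.ipv4.tcp_wmem

Then rerun the iperf3 test from point 10; successful tuning shows up as higher and more stable throughput on long transfers.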

19. Antivirus or Firewall Software is Interfering with Network Speeds

The Problem:

Many antivirus and firewall programs scan all incoming and outgoing network traffic, which can significantly slow down 10GbE speeds. Some intrusion prevention systems (IPS), such as those in Sophos, Norton, Bitdefender, and Windows Defender, can introduce latency and CPU overhead when processing large file transfers.

The Fix:

  1. Temporarily Disable Your Antivirus/Firewall and Run a File Transfer Test
    • If speeds improve, your security software is causing the slowdown.
  2. Whitelist Your NAS or 10GbE Connection in Security Software
    • Add your NAS IP address as an exclusion in your antivirus or firewall settings.
  3. Disable Real-Time Scanning for Large File Transfers
    • In Windows Defender:
      • Open Windows Security → Go to Virus & Threat Protection.
      • Under Exclusions, add your NAS drive or network adapter.
  4. Check for Router-Level Security Features
    • Some routers have Deep Packet Inspection (DPI) or Intrusion Prevention (IPS) enabled, which can slow down traffic.
    • Log into your router’s admin panel and disable unnecessary security features for local transfers.

20. Your Network is Experiencing Microburst Congestion (Overloaded Buffers)

The Problem:

Some 10GbE switches have limited packet buffers, causing microburst congestion when multiple devices transfer data simultaneously. This results in random slowdowns, packet loss, and jitter, even if total traffic is well below 10GbE capacity.

The Fix:

  1. Enable Flow Control on Your Switch
    • Log into the switch’s admin panel.
    • Enable 802.3x Flow Control on your 10GbE ports.
  2. Use a Higher-Quality Switch with Larger Buffers
    • Some cheap 10GbE switches have small packet buffers, leading to congestion.
    • Consider an enterprise-grade switch (e.g., Netgear XS716T, Cisco SG550X, Ubiquiti EdgeSwitch).
  3. Monitor Switch Traffic for Spikes
    • Use iftop or Wireshark to monitor packet loss or delays.
    • If needed, upgrade your switch to one with better buffering.

