
A little pro-tip for those about to run wires through your walls: use shielded cables. Also, probably don’t run them anywhere near AC wire. Some people recommend a full stud between them, whereas the NEC allows them to be run next to each other as long as there is at least 50 mm of clearance.

I think this is usually recommended for safety reasons, to prevent mistakes from causing high voltage to get into your low voltage runs and become a shock hazard or potentially start a fire. But, I have noticed empirically that running a bit too close with the wrong cable does absolutely result in some serious interference issues. So… your wired connections are not free of signal issues either, especially if you want to neatly run them in-wall.

edit: also, worth noting that this is a highly U.S. centric piece of advice. I mean I’m sure it applies elsewhere, but both electrical systems and construction differences probably make some of the details very different, and I’m not cultured enough to know exactly which and how.



Some of the "shielded" ethernet cables on Amazon are questionable.

Monoprice sells double-shielded (S/FTP) enterprise-grade cables directly.


Pretty much everything sold on Amazon these days is questionable. A lot of their Ethernet cables are definitively trash though.


Ethernet on copper cabling doesn't seem to be keeping up, shielded or not: previous- and current-generation Ethernet (10G/100G) over copper is either really power hungry or nonexistent.


Consumer devices hardly have 10Gbit yet, and often don’t even have WiFi 6. Even if they did, my NAS running on spinning rust is not bottlenecked by network bandwidth even slightly, and affording the amount of switching fabric necessary to sustain 10Gbit across all my devices is not economical right now.

I mean, this might be different for fiber in some regards, but I doubt that is terribly economical either.


Consumer NASes and laptops[1] do have it, and with SSDs the story is different.

[1] Or more accurately, as laptops dropped builtin ethernet ports, the kind of ethernet you use got decoupled from the laptop model.


My M1 Mac Mini only supports 1Gbit. My Synology NAS only has 1Gbit ports, and while I do have the 10Gbit interface card for it, using that prevents you from using NVMe cache, and with SATA, even a good SSD is capped by the interface, which is the exact reason NVMe exists. My phone is too old for WiFi 6. My Surface Laptop 4 has WiFi 6, but no Thunderbolt port; just a single USB 3.1 port. Technically enough for 10 GbE, assuming you didn’t need absolutely anything else on that port.

Then you need 10Gbit network equipment. A 10Gbit switch with more than 8 ports is going to be expensive and consume a pretty ridiculous amount of energy. It would be difficult to argue that there is any truly “consumer grade” 10GbE equipment; almost all of it comes in sizes designed to fit in racks… WiFi 6 APs exist, but my AP-HD isn’t one, because it wasn’t purchased that long ago, still works, and I have had no good reason to replace it. Not to mention the actual speeds I get on WiFi are never even close to the theoretical speeds, making it seem unlikely it could even saturate my 1GbE setup in practice.

Although I do have some 10GbE equipment, even directly connecting devices together I have found that many of my devices, most of them consumer-grade, already bottleneck elsewhere and can’t even come close to saturation. So as of today, 10GbE does not seem to be a huge win for most users, even if they do own some equipment that can do it.
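
To put rough numbers on the “bottleneck elsewhere” point, here is a quick back-of-the-envelope sketch in Python; the link efficiency and device figures are ballpark assumptions, not measurements of my hardware:

    # Rough back-of-the-envelope: usable link throughput vs. typical device limits.
    # All figures are ballpark assumptions, not measurements.

    def usable_mb_per_s(link_gbps, efficiency=0.94):
        # ~94% allows roughly for Ethernet/IP/TCP framing overhead.
        return link_gbps * 1000 / 8 * efficiency

    links = {"1 GbE": 1, "2.5 GbE": 2.5, "10 GbE": 10}
    devices = {
        "single NAS HDD, sequential": 160,    # MB/s, rough
        "SATA SSD (interface ceiling)": 550,  # MB/s, SATA 6 Gb/s cap
        "real-world WiFi 5/6 client": 60,     # MB/s, highly variable
    }

    for name, gbps in links.items():
        link = usable_mb_per_s(gbps)
        print(f"{name}: ~{link:.0f} MB/s usable")
        for dev, mbps in devices.items():
            verdict = "link-bound" if mbps > link else "device-bound"
            print(f"  vs {dev} ({mbps} MB/s): {verdict}")

On those assumptions, sequential reads can outrun 1 GbE, but at 10 GbE every one of those devices is the bottleneck, which matches what I see in practice.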


If you're willing to pay more Synology tax, there's an NVMe cache + 10GbE card now.


Thanks, I might if I continue to stick with Synology. (Not sure that I will, but still.)


How is your NAS not bottlenecked by 1 Gbps? I'd expect pretty much any modern single HDD to have a sequential read speed over 120 MB/s, let alone an array.


Well, I have a few comments.

- It is true that a modern HDD has over 120 MB/s sequential read, but not by much: 4 TB WD Reds are probably somewhere around 150-160 MB/s.

- Sequential speeds aren’t really that practically useful for me. While I do have some large files, most workloads are going to involve smaller files and a lot more seeking.

- When using a filesystem with parity, like btrfs or zfs, you definitely have some overhead for writing and checking parity. I’m sure it’s still very fast, but it is overhead, and NASes don’t tend to be the most cutting edge compute boxes.

- Even if some RAID configurations might improve performance, the configuration I am using has a fair bit of overhead. I suspect you need something like RAID 0 or 10 for maximum performance, but those setups are inflexible compared to RAIDZ or Synology SHR.

- As it is now, my NAS is unable to saturate the network bandwidth using Samba alone. It just never reaches a gigabit. Part of this is probably related to Samba or the CIFS protocol, but honestly I don’t know if it has the raw CPU muscle.

All in all, I am not too concerned about it. It is possible that I could get better performance with 10 GbE, but I don’t think it would matter for most of my use cases. Plus, disk resources are definitely being consumed by background tasks as it is.
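
For what it’s worth, a quick way to separate the link itself from Samba/CPU overhead is to push raw TCP between the NAS and a client and see where that tops out. A minimal sketch (the port number is a placeholder, not my actual setup):

    # Minimal raw-TCP throughput check.
    # Run "python3 tcpcheck.py server" on the NAS, then
    # "python3 tcpcheck.py client <nas-host>" on a client machine.
    import socket, sys, time

    PORT = 5201        # arbitrary free port (placeholder)
    CHUNK = 1 << 20    # send 1 MiB at a time
    TOTAL = 2 << 30    # push 2 GiB total

    def server():
        with socket.create_server(("", PORT)) as srv:
            conn, _ = srv.accept()
            with conn:
                received, start = 0, time.time()
                while True:
                    data = conn.recv(CHUNK)
                    if not data:
                        break
                    received += len(data)
                secs = time.time() - start
                print(f"received {received / 1e6:.0f} MB in {secs:.1f}s "
                      f"= {received / secs / 1e6:.0f} MB/s")

    def client(host):
        buf = b"\0" * CHUNK
        sent, start = 0, time.time()
        with socket.create_connection((host, PORT)) as sock:
            while sent < TOTAL:
                sock.sendall(buf)
                sent += CHUNK
        secs = time.time() - start
        print(f"sent {sent / 1e6:.0f} MB in {secs:.1f}s "
              f"= {sent / secs / 1e6:.0f} MB/s")

    if __name__ == "__main__":
        server() if sys.argv[1] == "server" else client(sys.argv[2])

If raw TCP gets close to wire speed while Samba does not, the bottleneck is in the SMB stack or the NAS CPU rather than in the network.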


I think those transfer rates are typical for spinning HDDs from many years ago. See e.g. this review of 4 TB drives from when that size was trending: https://www.tomshardware.com/reviews/desktop-hdd.15-st4000dm... - the middle drive of the 2013 comparison has an average throughput that matches the ~120 MB/s maximum theoretical throughput of 1 Gb Ethernet.

The transfer rates go up roughly hand in hand with capacity.



