This is going to be a short ongoing series.
Crack a bottle of Mountain Dew and grab a bag of Doritos.
I have been working on building a new NAS for my local network. I previously built one in 2019 and documented the entire process in my 10GBit 62TB 16 Xeon Core NAS Build Series.
I plan on posting a series on this new NAS as well. I probably won't go into as much detail, but I will cover the most important parts.
Old NAS Build
- Dual 8-core Xeon 2690 v2 CPUs
- 64GB DDR3 ECC RAM
- 16 x 4TB Seagate 7200 RPM SAS HDDs
- Dual 10Gbit Ethernet
- IPMI
- 2 x Silicon Power SATA SSDs
This machine was fast, and relatively power efficient considering the dual-socket hardware inside. I'm not going to go into much detail about it until I do a comparison at the end. For now, I want to talk about why I upgraded this very capable server and what lessons I learned.
Why am I upgrading?
This server is more than capable and is very fast, but there were a few things that really bothered me.
The first was cost. Pre-COVID, my electricity rate was around $0.15/kWh. Not long after the start of the COVID adventure, it spiked to around $0.26/kWh or even more; I wasn't fully paying attention the entire time. At 130W idle and 360W at full tilt, this wasn't the most efficient server.
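To put numbers on that rate jump, here's a quick back-of-the-napkin calculation (a sketch assuming the box sits at 130W idle 24/7; real draw would be higher whenever it's under load):

```python
# Rough annual electricity cost for a box idling 24/7 at 130W.
IDLE_WATTS = 130
HOURS_PER_YEAR = 24 * 365

kwh_per_year = IDLE_WATTS / 1000 * HOURS_PER_YEAR  # ~1,139 kWh

for rate in (0.15, 0.26):  # $/kWh, before and after the spike
    print(f"${rate:.2f}/kWh -> ${kwh_per_year * rate:,.2f}/year")
```

That's roughly $171/year turning into $296/year, just sitting at idle.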
This server had IPMI built into the motherboard. IPMI is a feature that gives you remote access to the machine over a dedicated network port, even when the power is off, and it also exposes detailed statistics about the hardware. The IPMI on this board sucked: it used a very dated Java-based client that modern browsers no longer support, so connecting to the interface meant keeping a very old version of Firefox around. Maintaining an older version of Firefox alongside a modern one was a pain.
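For most day-to-day tasks you can skip the Java client entirely and talk to the BMC with ipmitool over the network. A minimal sketch (the host, username, and password below are placeholders, and it assumes ipmitool is installed and IPMI-over-LAN is enabled on the board):

```python
import subprocess

# Placeholder BMC address and credentials -- substitute your own.
BMC_ARGS = ["-I", "lanplus", "-H", "192.168.1.50", "-U", "admin", "-P", "changeme"]

def ipmi(*args: str) -> str:
    """Run an ipmitool subcommand against the remote BMC and return its output."""
    result = subprocess.run(["ipmitool", *BMC_ARGS, *args],
                            capture_output=True, text=True, check=True)
    return result.stdout

print(ipmi("chassis", "power", "status"))  # e.g. "Chassis Power is on"
print(ipmi("sensor"))                      # temperatures, fan RPMs, voltages
```

The ancient Java applet is really only needed for the remote console; power control and sensor readings work fine this way.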
The fan controller on this server absolutely sucked. I don't know if it was a bug or just a bad board, but the fans would spin up to 100% at random intervals. This was maddening, and I ultimately ended up buying a standalone fan controller for about $24 so I could set the fan RPM manually: high enough to offer good cooling, but not so high that the noise was annoying. It was even more annoying that the fans would cycle between 0% and 100% rather than just staying at 100% or some other constant (if loud) speed. I could occasionally make the problem go away by going into the IPMI and setting the board to performance mode, so the fans wouldn't drop low enough to trigger an event, but it wasn't consistent, and as I said before, the IPMI interface was problematic.
This motherboard was still VGA-only, so I had to keep a VGA-compatible monitor around just to manage this server.
I hated the 4U Rosewill case with a passion. It's a very popular case, but I just hated it. It didn't help that I had like 17 drives in it, with cables everywhere, and the thing was heavier than your mom.
I lost about half the drives in an extreme crypto-mining disaster. At the time I was CPU mining with a lot of my servers, and I had this server mining 24/7 since the CPUs were otherwise idle and fairly fast given their age and cost. Due to the fan problem above, half the drives in the case overheated and failed over a period of 30 days. Backups, backups, backups!
I ended up destroying all the drives and throwing them away, and picked up fewer, larger enterprise drives to replace them. I previously had 14 4TB 7200 RPM SAS drives in the case, along with a hot spare. It was a disaster waiting to happen, but it was fun messing with a larger array. Those drives were super cheap enterprise drives, and fairly fast. Enterprise drives tend to last a long time and take a lot of abuse, but the heat was like living in Texas without AC.
Prior to building my new NAS, I swapped the two Xeon 2690 v2 chips for 2697 v2s; they were even faster and I got them nearly free. At the time this machine was more than a NAS: I was running virtualization (Proxmox) on it with my NAS software virtualized. With the new build I took a very different approach. Before decommissioning this server, and after I got tired of 200% electric bills, I replaced both CPUs with a single low-power 2630L, which is more than enough to act as a NAS without all the virtualization. I made a few further adjustments beyond just swapping the CPU. In the end, I went from 130W idle and 360W full load to about 80W idle and 150W full load. This was all after the great crypto drive disaster; I don't even want to know what the wattage was with 15 platter drives, 3 SSDs, and one NVMe. I'm guessing another 80-120W.
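Same back-of-the-napkin math as before, this time on the savings from the idle-power drop alone (assuming 24/7 idle at the post-spike $0.26/kWh rate):

```python
# Annual savings from cutting idle draw from 130W to 80W at $0.26/kWh.
OLD_IDLE_W, NEW_IDLE_W = 130, 80
RATE = 0.26  # $/kWh, post-spike
HOURS_PER_YEAR = 24 * 365

saved_kwh = (OLD_IDLE_W - NEW_IDLE_W) / 1000 * HOURS_PER_YEAR  # 438 kWh
print(f"~{saved_kwh:.0f} kWh/year saved -> ${saved_kwh * RATE:,.2f}/year")
```

That's about $114/year back just from the CPU swap, before counting the lower full-load draw.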
I believe those are the main reasons I wanted out of "Nastybeast" and into my new NAS. Clever name, eh? Just call me Mikey. Cowabunga dudes!
There was one other reason: I wanted my NAS to be a NAS, and nothing else. That means it handles file sharing, backups, and snapshots. No applications, virtual machines, or containers. That job would fall to Mini PCs! I'll get into some detail about these later. Spoiler: they are fast, tiny, and extremely low power.
New NAS Build
coming soon...