Building a Home Server
I wrote this article not because I’ve built a home server but because I’m on the verge of doing it and I’d like to justify (to myself) why building one is reasonable and why I chose the parts I did.
I want to build a home server because I’ve realized that I have content worth keeping (pictures, home movies, etc). This content used to live on an external hard drive, but when that drive died, I lost a good chunk of it. Since then, I’ve moved the rest to a Windows Storage Pool. Then I started thinking about accessing the content remotely, and I didn’t want my work / gaming PC to be on 24/7 (for power efficiency) and exposed to the internet (for security). Having an overclocked CPU and GPU running around the clock, even at idle, isn’t ideal.
Using OneDrive has been fine – great for editing online documents – but space is limited and I want a better sharing story (eg. loved ones backing up their pictures here too). Probably the most important reason, though, is that setting up a home system sounds fun to me and it’s a good learning opportunity.
First up, the part list:
| Part | Selection | Price |
|------|-----------|-------|
| CPU | Intel Pentium G4600 3.6GHz Dual-Core Processor | $86.99 @ Amazon |
| CPU Cooler | Noctua NH-L9i 33.8 CFM CPU Cooler | $39.15 @ Newegg |
| Motherboard | ASRock E3C236D2I Mini ITX LGA1151 Motherboard | $239.99 @ Newegg |
| Memory | Kingston ValueRAM 16GB (1 x 16GB) DDR4-2133 Memory | $159.79 @ Amazon |
| Storage | 6x Seagate Desktop HDD 4TB 3.5" 5900RPM | |
| Case | Fractal Design Node 304 Mini ITX Tower Case | $74.99 @ Newegg |
| Power Supply | SilverStone 300W 80+ Bronze Certified SFX Power Supply | $49.99 @ Amazon |

Prices include shipping, taxes, rebates, and discounts. Generated by PCPartPicker 2017-08-08 19:54 EDT-0400.
The case really defines the rest of the build, so I’m starting here. A small form factor (SFF) case limits you to more expensive components, while a larger case takes up more room. I waffled between many cases – I was trying to find a small case that would fit on a shelf in the utility closet but wouldn’t compromise on the number of 3.5” drives. The height restriction excluded a lot of decent mini tower cases because even they were too tall. Here were the contenders:
SilverStone DS380B
An SFF case with 8 hot-swappable 3.5” drive bays (plus more) is quite an achievement, and it’s the only option when SFF is needed with the absolute maximum number of drives. The downsides are that, at $150, it was on the pricey side, and many reviews stated that thermal management was a challenge, so aftermarket fans and case modding are a necessity. One person wrote an article solely to convince people not to buy the DS380B. Since one goal of this build is to keep cost and effort to a minimum, this case was eliminated.
An HTPC case that has a lot going for it. The horizontal design makes it alluring, as it could sit on one of my cabinets. But with only four slots for 3.5” drives, it would be limiting as far as a storage server is concerned: double-parity RAID would mean that half of the drives are redundant. The worst case would be running out of room and being forced to decide between buying bigger drives or getting a dedicated NAS case.
Fractal Design Node 304
An SFF cube case that has six 3.5” bays, goes on sale for $60, has great reviews, and is touted for its silent fans!? Sold.
Lian Li PC-Q25
Special mention goes to Lian Li’s case, which houses seven drive bays, costs more, and has had some (though not many) reports of thermal issues.
I’ve decided on the Pentium G4600. Here’s why:
- With Kaby Lake, Pentium processors are blessed with Hyper-Threading, so their two physical cores become four logical cores.
- Kaby Lake also improved power efficiency, with a stress-tested G4560 drawing only 24W.
- None of the “Core” chips support ECC memory, so they were excluded.
- Paying 15% more for a 100 MHz boost made me exclude the G4620.
- I actually wanted the top-of-the-line integrated graphics (Intel HD Graphics 630) because there won’t be a dedicated GPU in this box, and I’d cry if I were GPU-limited anywhere.
- Cheap! I’m going to grab it when the price hits $80.
- The server will sit idle most of its life, so there’s no need for a powerful CPU. If it turns out I need more horsepower in the future, there should be a nice array of secondhand Kaby Lake Xeons out there by then.
A Mini-ITX motherboard that supports ECC memory, socket LGA1151, and Kaby Lake basically makes the decision for us!
There were a couple of ASRock boards that fit, and I went with the E3C236D2I, the one with six SATA ports (matching the case) and the added bonus of IPMI.
Unfortunately, a $240 price tag is a bit hard to swallow. There is definitely a price to pay for keeping the size down while supporting enterprise RAM!
Speaking of the RAM, I went with a single 16GB stick of ECC RAM. This may seem odd, so let me explain. I’m using ECC memory because I’d rather be safe than sorry, and I’m not scrounging around looking for pennies, so I can afford it. I’m only buying a single stick because 32GB upfront seems like overkill (I’m not made of money), but since the motherboard only has two DIMM slots, I wanted a stick significant enough to last in the meantime.
On a side note, RAM is expensive right now: this 16GB stick is retailing for $150, whereas it debuted at $75. Don’t worry, I have price triggers set.
Even though the case supports tower CPU coolers and the Pentium G4600 comes with a stock cooler, I’ve opted for a slim aftermarket cooler: the Noctua NH-L9i. The Noctua promises to be much quieter than the stock cooler, and since it is so slim, it will still fit if I decide to get an even tinier case in the future!
Since I won’t be overclocking the CPU, I’ll be able to use the low-noise adaptor to make the cooler even quieter.
I went with the SilverStone 300W power supply.
- I couldn’t find anything below 300W (I was shooting for something around 200W). The reason is that power supplies are designed to operate between 20% and 100% of their rated wattage. If I had gone with a 450W power supply (the next one up in SilverStone’s lineup), I’d need an idle draw of at least 90W instead of 60W to get that guaranteed efficiency. Basically, this is me being environmentally conscious.
- An 80+ Bronze rating is a distinguishing feature in this low a power range.
- The SFX form factor will let me move to an even smaller case in the future if needed.
- It’s semi-fanless (quiet): people report that the fan only turns on under extreme duress.
I already have a couple of 4TB Seagate 3.5” drives, so getting more of them is the logical choice. Ideally, I wouldn’t have to buy all of them up front, but that is the cost of ZFS, since a raidz vdev can’t be grown one disk at a time. Here’s to hoping I get a good deal on them!
One of the things I’m still pondering is what to do about a bootable drive. I could drop down to a RAID of 5 drives and get a separate drive for the OS. Brian Moses uses a flash drive. I’m actually thinking of using my one PCIe slot to host an M.2 PCIe adapter and grabbing a Samsung 960 EVO or something similar. PCPartPicker doesn’t list the motherboard as capable of using M.2 drives, but we’ll see about that, as the motherboard manual specifically calls out instructions for M.2 NVMe drives.
Update: The motherboard does support M.2 drives, but only the smallest kind (form factor 2242), which really limits the potential drives. I’ll have to look toward 2.5” SATA SSDs instead.
After trying for a week to get FreeBSD and Plex working together, I gave up and have decided that Ubuntu 16.04 with docker is the way forward. Let me explain:
The first task was to decide between a hardware RAID controller and software RAID. Searching around, it became clear that software RAID is the better option due to cost and the features of file systems like ZFS. Speaking of ZFS, it’s the best file system for a home server: it’s built for turning several disks into one, and it features compression, encryption, etc.
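For the six-drive build I have in mind, a double-parity pool might look like the following sketch. The pool name and the short device names are placeholders, not my final layout (a real build would use stable /dev/disk/by-id paths):

```shell
# Hypothetical sketch: six 4TB drives in a double-parity (raidz2) pool.
# "tank" and the sdb..sdg device names are assumptions for illustration.
zpool create tank raidz2 sdb sdc sdd sde sdf sdg

# Turn on cheap, transparent compression for the whole pool
zfs set compression=lz4 tank

# A nested filesystem for the actual content
zfs create tank/data
```

With raidz2, any two of the six drives can fail without data loss, which is the redundancy trade-off discussed above.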
Choosing ZFS, it would make sense for the OS to be FreeBSD. ZFS and FreeBSD go together like bread and butter; they are tried and tested together. Since I was (and still am) unfamiliar with FreeBSD, I spent a week learning about jails and other administrative functions. The concept of jails (application isolation without performance cost) sounded amazing. Not to mention, FreeBSD seemed like a lightweight OS: running top would only show a dozen or so processes. I quickly got to work setting up a FreeBSD playground inside a virtual machine.
First I tried setting up an NFS server but ran into problems: I needed NFS v4 to serve nested ZFS filesystems, but NFS v4 isn’t baked into Windows, so it was a no-go. Then, after only a couple hours of fighting with SMB, I finally got it working. I’m just going to squirrel away the config here for a rainy day:
    [global]
    workgroup = WORKGROUP
    server string = Samba Server Version %v
    netbios name = vm-freebsd
    wins support = No
    security = user
    passdb backend = tdbsam
    domain master = yes
    local master = yes
    preferred master = yes
    os level = 65

    # Example: share /usr/src accessible only to 'developer' user
    [pool]
    path = /pool/data
    valid users = nick, guest
    writable = yes
    browsable = yes
    read only = no
    guest ok = yes
    public = no
    create mask = 0666
    directory mask = 0755
I think the trick was that I wanted SMB users to be users on the VM, so the Samba server should act as the master.
So as you can see, everything was going smoothly – that is, until I tried setting up Plex. I thought that since plexmediaserver was on FreshPorts, everything should just work. It didn’t, and since I didn’t know FreeBSD, ZFS, or Plex, I went on a wild goose chase of frustration. Even the internet failed me, as the errors I searched for came back with zero results.
In a fit, I created an Ubuntu VM, ran the Plex docker container, and everything just worked. I gave up on FreeBSD right then and there; I wasn’t going to force something. I later found out that since FreeBSD represents less than 1% of Plex’s user base, the team didn’t want to spend the resources on updates. Oh well. Ideally, I wouldn’t have to use docker (downloading all those images seems … bloated), but given its rise to ubiquity and promise of compatibility, I’ll hop on the bandwagon.
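To give a flavor of "it just worked," here’s a sketch of running the official Plex image. The host paths and timezone are assumptions for illustration, not my final config:

```shell
# Hypothetical sketch of the official Plex container.
# Host paths (/pool/data/...) and TZ are placeholder assumptions.
docker run -d \
  --name plex \
  --network=host \
  -e TZ="America/New_York" \
  -v /pool/data/plex/config:/config \
  -v /pool/data/movies:/data \
  plexinc/pms-docker
```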
With that, let’s take a look at some of the applications I’m looking to run:
- ddclient: A dynamic DNS client. It keeps my DNS records updated whenever my ISP decides to give me a new IP.
- nginx: A web server that will act as a reverse proxy for all downstream applications, able to use certificates from Let’s Encrypt without configuring each application separately.
- collectd: A system metric gatherer (CPU, memory, disk, networking, thermals, IPMI, etc). It will send the data to:
- graphite: Using the official graphite docker image to store various metrics about the system and other applications. These metrics will be visualized using:
- grafana: The official grafana docker image creates graphs and dashboards that are second to none. Just look at what I did for my home PC.
- plex: The official plex docker image will host the few movies and shows that I have lying around.
- nextcloud: The official nextcloud docker image will be essential for creating my own “cloud”. I can even use extensions to access my keepass database or enable two-factor authentication.
- gitea: Using the official gitea docker image, I’ll be hosting my private code here.
- jenkins: The official jenkins docker image will build all that private code.
- rstudio: The rocker docker image will let me access my rstudio sessions when I’m away. Currently, I have a DigitalOcean machine with rstudio, but it’s been a pain to create and destroy the machine every time I need it.
- pi-hole: Blocks ads at the DNS level, so ads are blocked for every device on the network. And, of course, there is a docker image, which has been working wonderfully in my test playground.
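As a sketch of what these deployments look like, here’s the Pi-hole container. The ports are Pi-hole’s standard DNS and web ports; the volume name is an assumption:

```shell
# Hypothetical sketch of the Pi-hole container: DNS on 53, admin UI on 80.
# The "pihole-data" volume name is a placeholder.
docker run -d \
  --name pihole \
  -p 53:53/tcp -p 53:53/udp \
  -p 80:80 \
  -v pihole-data:/etc/pihole \
  --restart=unless-stopped \
  pihole/pihole
```

Once the server is the network’s DNS, every device benefits without per-device configuration.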
You’d be wrong if you thought I’d abandon my current cloud storage providers (OneDrive, Google Drive, etc). In fact, I pay them for increased storage, because stuff happens and I need backups of pictures, home videos, code, and important documents. I’m planning on keeping all the clouds in sync, with everything encrypted using rclone. That way, if a backup is compromised, it’s no big deal.
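The sync itself would be something like this sketch, assuming a crypt remote named "onedrive-crypt" has already been layered over a OneDrive remote via `rclone config` (the remote and path names are assumptions):

```shell
# Hypothetical sketch: mirror the pool's data to an encrypted cloud remote.
# "onedrive-crypt" is an assumed crypt remote wrapping a OneDrive remote,
# so files are encrypted client-side before they ever leave the server.
rclone sync /pool/data onedrive-crypt:backup --transfers 4 --verbose
```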
I’m also not going to abandon DigitalOcean, as those machines easily have more uptime and uplink than Comcast gives me here. My philosophy is that if I want to show people my creation, I’ll host it externally; otherwise, I’ll self-host it. Plus, it is a lot easier to tear down and recreate machines with an IaaS than on bare metal.
The only question now is … when will I jump head first?
If you'd like to leave a comment, please email firstname.lastname@example.org