With my upgrade to an AMD Ryzen 9 3950X finally complete, I had my old AMD Threadripper 1950X left over. While it was never the perfect gaming or compute CPU, it deserved to live on after pretty much solo-ing every workload I had thrown at it until now.
I’ve always wanted a NAS and Home Theater system, and some friends recommended FreeNAS and PLEX to me. While I’m not happy with PLEX for various reasons, it does work for streaming over the internet. The plan was simple: run a host OS with virtual machines for the various servers.
Primarily I wanted FreeNAS and PLEX, but in the future I want to extend that to pfSense, PiHole and other services such as game servers: Minecraft, Space Engineers and other computationally expensive servers that I don’t want to run on my gaming and editing system.
In the end, I decided to split up the machine like this:
- FreeNAS and PLEX on NUMA node #0 with 12GB RAM and 6 cores / 12 threads. Eventually it will get the KFA2 GTX 1650 Super passed through for transcoding purposes, if necessary.
- pfSense on NUMA node #1 with 4GB RAM and 2 cores / 4 threads.
- PiHole on NUMA node #1 with 4GB RAM and 2 cores / 4 threads.
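libvirt can enforce a split like this per guest. As a minimal sketch, the relevant domain XML for the FreeNAS guest might look like the following (the host CPU numbers 0-5,16-21 are an assumption about the 1950X topology; verify yours with `lscpu` or `numactl --hardware`):

```xml
<domain type='kvm'>
  <!-- name, memory, devices etc. omitted -->
  <!-- pin the 12 vCPUs to node 0's cores and their SMT siblings
       (0-5,16-21 is an assumption; check your actual topology) -->
  <vcpu placement='static' cpuset='0-5,16-21'>12</vcpu>
  <numatune>
    <!-- allocate guest memory strictly from NUMA node 0 -->
    <memory mode='strict' nodeset='0'/>
  </numatune>
</domain>
```

The `strict` memory mode keeps the guest from silently spilling onto the other node’s memory, which would defeat the point of the split.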
Setting up the Host
I’ve chosen Debian as the Host OS as I’m already familiar with it, and went with QEMU and libvirt for the Virtualization. I know how libvirt works to some degree, and dislike the one-click-setup things like Docker and Kubernetes where I have no control over what is actually being done.
Most of the installation is just the default Debian setup, except that I needed it as a server: no window manager, no print server, just SSH and the system utilities. When it came to setting up the partition table, I had to convert base-2 units (GiB) into SI units (GB) by hand, as the Debian installer still doesn’t accept binary units for storage sizes. I ended up with this partition table on my old Intel 600p 256GB NVMe:
- 256MiB EFI System Partition
- 4GiB Boot filesystem
- 32GiB Swap Partition (to match Host Memory)
- 128GiB System Partition
- All remaining space used as LVM for VMs down the road.
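The conversion itself is trivial but easy to fumble in your head; a quick sketch of what I mean (the helper function name is my own):

```shell
# Convert binary units (GiB) into the SI values (GB) the installer expects:
# 1 GiB = 2^30 = 1073741824 bytes, 1 GB = 10^9 bytes
gib_to_gb() {
  # prints the size in GB with 3 decimals
  awk -v g="$1" 'BEGIN { printf "%.3f\n", g * 1073741824 / 1000000000 }'
}

gib_to_gb 4     # 4 GiB boot partition  -> 4.295 GB
gib_to_gb 32    # 32 GiB swap           -> 34.360 GB
gib_to_gb 128   # 128 GiB system        -> 137.439 GB
```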
Once the setup was done, it was time to set up networking. I wanted both 1gbit NICs to run in teaming mode, ideally one that gives me 2gbit of total transfer rate. 10gbit, 5gbit and 2.5gbit hardware was still too expensive at the time, and thanks to systemd-networkd (never thought I’d write “thanks” about a systemd component) it was a breeze to get the two working together, and without any issues with my current switch either!
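For reference, a systemd-networkd bond boils down to three small files; this is a sketch, and the file names, NIC names (`enp5s0`, `enp6s0`) and bonding mode are all assumptions you’d adapt to your own hardware and switch:

```ini
# /etc/systemd/network/10-bond0.netdev
[NetDev]
Name=bond0
Kind=bond

[Bond]
# pick a mode your switch can handle; 802.3ad needs LACP configured
# on the switch side, balance-alb generally works on unmanaged switches
Mode=balance-alb

# /etc/systemd/network/20-bond0-slaves.network
[Match]
Name=enp5s0 enp6s0

[Network]
Bond=bond0

# /etc/systemd/network/30-bond0.network
[Match]
Name=bond0

[Network]
DHCP=ipv4
```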
And finally there was the LVM storage, which is not initialized during setup. Setting it up is as simple as running exactly two commands:
```shell
lvm pvcreate /dev/nvme0n1p5
lvm vgcreate vg0 /dev/nvme0n1p5
```
Yay, we now have storage for the VM Operating Systems, and now it was time to set up virtualization on the machine.
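Individual VM disks can later be carved out of that volume group with `lvcreate`; a sketch of what that looks like (the volume names and sizes here are my own assumptions):

```shell
# one logical volume per VM, sized to taste
lvcreate -L 32G -n freenas vg0
lvcreate -L 8G  -n pfsense vg0
# list what the volume group now contains
lvs vg0
```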
Virtualization with libvirt and qemu is super easy, thanks to the extensive (if sometimes confusing to read) documentation. I started with the usual steps: define a storage pool for the installation ISO images, define a storage pool for the VMs’ operating systems on the LVM volume group, then define a bridge-type network for all VMs to use.
My ISO storage pool looks like this:
```xml
<pool type="dir">
  <name>iso</name>
  <target>
    <path>/srv/iso</path>
  </target>
</pool>
```
And my VM storage pool looks like this:
```xml
<pool type="logical">
  <name>storage</name>
  <source>
    <name>vg0</name>
    <format type="lvm2"/>
  </source>
  <target>
    <path>/dev/vg0</path>
  </target>
</pool>
```
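With those two XML definitions saved to files (the file names here are my own), registering and starting the pools looks roughly like this:

```shell
virsh pool-define iso-pool.xml
virsh pool-define storage-pool.xml
virsh pool-start iso
virsh pool-start storage
# make both pools survive a host reboot
virsh pool-autostart iso
virsh pool-autostart storage
```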
Networking took a few tries, but I ended up with this, which works well enough and lets VMs use the full 1gbit of the host port at near-zero overhead:
```xml
<network>
  <name>host-bridge</name>
  <forward mode='bridge'/>
  <bridge name='br0'/>
</network>
```
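One caveat worth noting: libvirt’s `bridge` forward mode only attaches guests to an existing bridge, so `br0` itself has to be created on the host first (for example as a systemd-networkd bridge on top of the bond). Loading the network definition then looks roughly like this (the file name is my own):

```shell
virsh net-define host-bridge.xml
virsh net-start host-bridge
virsh net-autostart host-bridge
```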
So with that done, what’s left are the virtual machines themselves. Since this is getting a bit longer than expected, I’ve split it up into two separate posts. You can find the second post using the Next Post button below (if it has already been published).