A hyper converged node for home use

I recently built a hyper converged node for home use. It virtualizes networking, storage and compute. To achieve that, it uses Proxmox, ZFS, docker and pfSense.

I will go over our needs and constraints, the design decisions, and how the solution compares to my current setup. Besides sharing information, this write-up also acts as documentation for myself. I am not an expert in any of these topics; the solution I chose fits my needs, and I’m sure there are many things I don’t know about. If you see something that can be improved, please let me know.

Needs and constraints #

We have the following needs and constraints:

  • We have MacBooks. Each runs Windows in a virtual machine (VM) because we need software that only runs on that OS. The solution must be able to run multiple Windows VMs.
  • We have Samba running as a Time Machine backup destination. Its backend storage is a two-disk software RAID mirror running on Ubuntu Linux. This backend storage server also doubles as a NAS.
  • I build docker images for my open source project.
  • There is a music streaming service running in a docker container that reads music files from the NAS.
  • We also have a Pi-hole running in a docker container for DNS-based ad blocking.
  • On the networking side, we need a VPN server. It needs to support macOS and iOS clients, and the compute and services above must be accessible to VPN clients. There must be Dynamic DNS support because our WAN IP is dynamic.

What we have #

We have two hardware devices, a home router and an Ubuntu Linux server. Combined, they already meet some of the needs, and they could probably support the rest with more configuration. But this is where the pain points show up.

Pain point 1 #

Our laptops are aging and we are considering upgrading. We are in the Apple ecosystem, so our future devices will be Apple Silicon. Although Parallels Desktop currently supports running developer builds of Windows on ARM, Microsoft has stated there is no official support. To work around this, QEMU is installed on the Ubuntu Linux server, alongside docker. Docker networking is on an IPv4 network, but there is a networking issue where QEMU VMs only get IPv6 addresses. I suspect some configuration is needed to make both co-exist on IPv4.

Pain point 2 #

Our home router runs OpenWRT. It supports running as a VPN server, but the setup seems messy. Additionally, when I tried another home-grade router, Speedtest.net reported double the throughput. We have no other router that supports running as a VPN server.

Solution #

Since there will be VMs running Windows, what if we virtualize everything? That turns out to be the solution.

  • We install docker in its own Linux VM, which solves the IPv4 co-existence issue.
  • We install pfSense in its own VM. It serves as our Internet router.
  • pfSense’s documentation is clear on how to set up an IPsec VPN server, as well as on client setup for iOS and macOS devices. IPsec VPN is natively supported by macOS and iOS, so no additional software is needed on the client side.
  • pfSense has a package called pfBlockerNG. It provides DNS-based ad blocking similar to Pi-hole.
  • Just like OpenWRT, pfSense has a dynamic DNS client. We use it together with Let’s Encrypt to keep the VPN certificates valid and up-to-date.
  • Now that pfSense is our Internet router, we reconfigure the OpenWRT router as an access point. It now serves as a Wi-Fi hotspot and a network switch.
  • We install Proxmox to host all the VMs.
  • Our Proxmox host is also our NAS. Proxmox doesn’t support Linux software RAID (mdadm) out of the box, so we convert to ZFS in a two-drive mirror configuration. One reason we use Linux software RAID is that if the hardware ever fails, another Linux machine running matching software can read the RAID array. With hardware RAID, the array may only be readable by an identical RAID controller with the same firmware version, and that hardware dependency might not be easy to find several years later. ZFS is also software-based, so another machine running a compatible ZFS version can read the array.
  • As before, we install Samba on the Proxmox host. We configure Samba to act as a Time Machine destination and to share folders on ZFS as network volumes. These network volumes are visible to all macOS, iOS, Linux and Windows machines, whether virtual or physical.
  • We use Portainer to orchestrate docker containers. Daapd, running in a docker container, serves our music on the network.
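
The migration from the mdadm mirror to ZFS boils down to a few commands. Below is a rough sketch only: the pool name tank, the device names and the dataset names are placeholders for illustration, and the data must be copied off the mdadm array first because it is destroyed in the process.

```shell
# WARNING: this destroys any existing data on the named disks.
# Back up the mdadm array's contents elsewhere first.

# Stop and dismantle the old mdadm mirror (device names are examples).
mdadm --stop /dev/md0
mdadm --zero-superblock /dev/sda /dev/sdb

# Create a two-disk ZFS mirror; "tank" is a placeholder pool name.
# Referencing disks via /dev/disk/by-id/ keeps the pool stable
# even if device names change across reboots.
zpool create tank mirror \
    /dev/disk/by-id/ata-DISK1 /dev/disk/by-id/ata-DISK2

# One dataset per share, so each can get its own quota later.
zfs create tank/timemachine
zfs create tank/music
```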
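
For reference, a minimal smb.conf fragment for advertising a share as a Time Machine destination might look like the following. The share name and path are assumptions for illustration, and the fruit options require Samba built with the vfs_fruit module (Samba 4.8 or newer).

```
[global]
   # Advertise macOS-compatible services (requires vfs_fruit)
   vfs objects = catia fruit streams_xattr
   fruit:metadata = stream
   fruit:model = MacSamba

[timemachine]
   # Share name and path are examples; point it at a ZFS dataset
   path = /tank/timemachine
   fruit:time machine = yes
   read only = no
```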

Hardware changes #

  • ZFS typically needs a lot of RAM for its cache. We add 16 GB more RAM, for a total of 32 GB.
  • We add a 256 GB SSD to store the new VM disk images. This SSD is not part of ZFS, so we will need to add a backup service later.
  • We add a 2-port gigabit Ethernet PCIe network card, for a total of 3 ports. Two ports are passed through to the pfSense VM: one for WAN and the other for LAN. The third port is for the Proxmox host.
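
Passing the two NIC ports through to the pfSense VM requires IOMMU support enabled on the Proxmox host. A sketch of the host-side steps, assuming an Intel CPU and GRUB; the PCI addresses and the VM ID are placeholders:

```shell
# Enable IOMMU on the kernel command line (Intel example) by adding
#   GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on"
# to /etc/default/grub, then apply it and reboot.
update-grub

# Load the VFIO modules needed for PCI passthrough at boot.
printf 'vfio\nvfio_iommu_type1\nvfio_pci\n' >> /etc/modules

# Find the PCI addresses of the 2-port card, e.g. 01:00.0 and 01:00.1.
lspci -nn | grep -i ethernet

# Attach both ports to the pfSense VM (VM ID 100 is an example).
qm set 100 -hostpci0 01:00.0 -hostpci1 01:00.1
```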

Bonus #

We gain the following bonuses:

  • Not only is pfBlockerNG a DNS-based ad blocker, it also does IP-based blocking. It even supports country-based IP blocking, though we are not using that yet.
  • ZFS has built-in on-the-fly compression.
  • A ZFS pool can contain multiple file systems, called datasets. Each dataset can have its own size quota, and the quota can be changed later.
  • The Windows virtual machines are always running, just a quick access away via Remote Desktop.
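
The compression and quota points above are one-liners, sketched here with the same placeholder pool and dataset names as before:

```shell
# Enable on-the-fly lz4 compression for the whole pool;
# child datasets inherit the setting.
zfs set compression=lz4 tank

# Cap the Time Machine dataset so backups cannot fill the pool.
# The quota can be raised or lowered later with the same command.
zfs set quota=500G tank/timemachine

# Check how well the music collection compresses.
zfs get compressratio tank/music
```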

Summary #

We built a hyper converged node using Proxmox. Local ZFS serves as our storage backend, and virtual machines serve our compute needs, including running a pfSense router.