The Wi-Fi reception in my house had been pretty terrible lately. The original setup was fairly simple: all the devices connected to an ISP (Hinet)-provided Alcatel I-040GW, an ONT with wired and wireless router capabilities, plus a TP-Link WR841HP V1 in WDS relay mode on another floor to extend the wireless signal.
An issue seemed to be that the connection between the I-040GW and the WR841HP was unstable. Of course, I could simply get another wireless AP and hook it up to the I-040GW. However, there were reports of the I-040GW overheating under heavy load, and I also wanted to set up ad-blocking along with more complex firewall and routing rules for an open guest network and my home server (mostly used for storage; I will write a post about it in the future). So I thought I might as well bite the bullet and set up a pfSense virtual machine on the server as a router along the way, relieving the burden on the ONT.
For the hardware, I got a refurbished Asus RT-AC66U B1 (which is essentially the same hardware as the RT-AC68U) for around NT$2400 (~US$80) and a used Intel GbE NIC for around NT$450 (~US$15). Since I would be using the wireless router in AP mode (and as a switch), with all the routing done by the server, I could probably get away with a less powerful model. That being said, I wanted to set up VLANs for isolation, so getting a model with good custom firmware support made sense.
The final network looks as follows. (Drawn using https://draw.io/)
(Of course, the WireGuard traffic still goes through Hinet; I have no idea how to represent that in the diagram though.)
(The hashtags are the names used in firewall aliases, and yes, I am a fan of Azur Lane.)
Essentially, the devices are connected to the new AP, which is then connected to the server. The server then sends the traffic through a pfSense VM and out to the I-040GW operating in bridge mode. In addition, the WR841HP extends the openwireless.org guest network, and the MOD (Hinet IPTV device) is connected directly to the I-040GW in bridge mode since it only requires access to VLAN 4081 on the WAN side.
Note that only the two ports connected to the server and the MOD operate in bridge mode. “Normal” routing, with a DHCP server, PPPoE, NAT, etc., is still enabled on the other ports so that, even if something goes haywire with my server while I am out, my family can still use the Internet by plugging into the I-040GW.
However, I am currently experimenting with disabling bridge mode on the server port and using PPPoE passthrough (which, conveniently, is enabled on the I-040GW by default). This has the benefit of being able to manage the ONT directly from the LAN.
The pfSense VM, like the other VMs on the server, is managed by libvirt/virt-manager. The installation process itself was fairly straightforward. (Not so much for setting it all up though. Some hurdles are mentioned in the next section.) The major things I had set up are listed below. (In addition, some IPv6-related settings will be mentioned in the Troubleshooting section.)
- Block the guest VLAN from accessing other machines on the LAN.
- Block the printer from accessing the WAN (and disable IPv6 on it for simplicity).
- Static DHCP entries for commonly used hosts
- DNS host overrides for internal hostnames (under a subdomain of a real domain, following the advice here)
- Policy routing: Allows the server to use a dedicated fixed IP.
- Port forwarding for services on the server
- NAT reflection: Allows for accessing the public IP of the server from the LAN.
- DNS over TLS with DNSSEC. Note that the official guide is out of date at the time of writing. (Unofficial guide)
- Outbound NAT: Allows for managing the WR841HP, which blocks requests from other subnets, from other LANs. (Guide)
- Pass IPv6 ICMP as recommended by https://ipv6-test.com/.
- pfBlockerNG (package): Provides Pi-hole-like DNS ad-blocking. Note that pfBlockerNG-devel (2.2.5) provides significantly more features and built-in feeds than pfBlockerNG (2.1.4). (Guide)
- Internal IPv6 ULAs with NPt. (Will go into detail in the section below.)
- iperf (package): Useful for testing performance.
- ntopng (package): Analysis and visualization of network traffic. Not a whole lot useful for my use case, but still fun to play with.
Thought of But Did Not Implement
I tried passing both the Intel NIC and the Atheros Killer E2400 on my motherboard to the virtual machine. However, the latter did not seem to be receiving any packets. After quite some debugging and searching, it turned out that FreeBSD support for the E2400 was buggy, and I had to resort to MacVTap.
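For reference, a MacVTap attachment in the libvirt domain XML looks roughly like the following (the host device name and the NIC model here are illustrative):

```xml
<interface type='direct'>
  <!-- eth1 stands for the host-side E2400 interface; adjust to your setup -->
  <source dev='eth1' mode='bridge'/>
  <model type='virtio'/>
</interface>
```

One caveat of MacVTap in bridge mode is that the host and the guest cannot talk to each other over that interface, which is fine here since it is the WAN side.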
As a side note, `virt-host-validate` helped me figure out that I had not passed `intel_iommu=on` to the kernel initially.
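For the record, on a Debian-style host this amounts to something like the following (the file name and the other options may differ on your distro):

```
# /etc/default/grub -- run update-grub and reboot afterwards
GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on iommu=pt"
```

(`iommu=pt` is optional; it keeps devices that are not passed through in IOMMU passthrough mode, avoiding some translation overhead.)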
VirtIO Checksum Offloading
One needs to disable hardware checksum offloading when using VirtIO NICs, which I use to connect the host with the pfSense guest. The symptoms, if it is not disabled, are pretty weird. Added to the difficulty of debugging was that my host could somehow get IPv6 addresses via SLAAC from the Killer E2400 without going through PPPoE, causing me to think that IPv6 works.
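In the pfSense GUI this is the “Disable hardware checksum offload” checkbox under System > Advanced > Networking; alternatively, the vtnet(4) driver can be told to never offload checksums via a loader tunable:

```
# /boot/loader.conf.local on the pfSense guest
hw.vtnet.csum_disable=1
```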
If one creates a bridge interface from `/etc/network/interfaces` (on Debian), links a VM to it, and then issues `systemctl restart networking`, the VM interface (`vtnet*`) gets removed from the bridge. Pretty dumb of me not to find this out earlier, but it still caused quite some head-scratching. (Why does DHCP work on `vtnet` but not on the bridge interface?)
Before realizing this, I also tried creating an isolated network in libvirt instead. However, IPv6 is disabled by default for isolated networks. One could enable it by manually setting an IPv6 address, but dnsmasq would then take up a ton of CPU, possibly caused by this bug. I then switched to using `<network ipv6='yes'>` (docs) and adding the following to `/etc/network/interfaces`:

    iface virbr4 inet6 auto
        pre-up /sbin/sysctl -w net.ipv6.conf.virbr4.disable_ipv6=0

This worked, but felt a bit hackish compared to a manual bridge.
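For completeness, the resulting network definition is roughly the following (the network name is illustrative; load it with `virsh net-define` and `virsh net-start` as usual):

```xml
<network ipv6='yes'>
  <!-- with no <ip> element this stays an isolated network;
       ipv6='yes' enables guest-to-guest IPv6 on it -->
  <name>pfsense-lan</name>
  <bridge name='virbr4'/>
</network>
```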
Dual PPPoE / DHCPv6
As the diagram shows, I wanted to have two outgoing PPPoE connections with three LAN interfaces (a VirtIO bridge and two VLANs on the Intel NIC). Aside from the prefix-size issues mentioned below, the LANs flat-out could not receive IPv6 prefixes (in the best case, only one interface could get a prefix). After some trial and error, I realized that disabling one PPPoE connection made the LAN associated with the other PPPoE connection work reliably. This led me to yet another bug report. To work around this, I wrote my own dhcp6c configuration file and used the `shellcmd` package to run the following command on boot:

    pkill -f '[d]hcp6c'; ln -sf /var/run/dhcp6c.pid /var/run/dhcp6c_pppoe0.pid; ln -sf /var/run/dhcp6c.pid /var/run/dhcp6c_pppoe1.pid; /usr/local/sbin/dhcp6c -dDn -c /conf/dhcp6c_merged.conf -p /var/run/dhcp6c.pid pppoe0 pppoe1
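The command replaces the per-interface instances pfSense spawns with a single merged one. The merged file itself roughly follows the stock dhcp6c.conf syntax, with one interface block plus one set of id-assoc blocks per PPPoE connection; a sketch, with illustrative interface names and association IDs:

```
# /conf/dhcp6c_merged.conf (sketch)
interface pppoe0 {
    send ia-na 0;
    send ia-pd 0;
};
interface pppoe1 {
    send ia-na 1;
    send ia-pd 1;
};
id-assoc na 0 { };
id-assoc pd 0 {
    prefix-interface vtnet0 {
        sla-id 0;
        sla-len 0;  # a /64 delegation is used as-is on the LAN
    };
};
id-assoc na 1 { };
id-assoc pd 1 {
    prefix-interface igb0 {
        sla-id 0;
        sla-len 0;
    };
};
```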
DHCPv6-PD Prefix Size / Dynamic NPt
Having more LAN interfaces than PPPoE connections, I would need to obtain prefixes shorter than /64 for prefix delegation to work. Unfortunately, Hinet only gave out /64 prefixes no matter what I requested in the settings, so I had to either use another PPPoE connection or use DHCPv6 to hand out ULAs and apply NPt on the router. I opted for the latter because it allows sharing the same prefix between the private and guest LANs, though the extra effort may not have been worth it in retrospect.
> We envision a world where, in any urban environment:
>
> The false notion that an IP address could be used as a sole identifier is finally a thing of the past, creating a privacy-enhancing norm of shared networks.
>
> — the Open Wireless Movement (openwireless.org)
Unfortunately (again), Hinet does not honor DUID, so the prefix I get changes every time, and pfSense does not support dynamic NPt at the time of writing. I worked around this by writing a PHP script that updates the NPt entry and is invoked whenever the DHCPv6 client gets a new reply.
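The actual updater is a PHP script hooked into pfSense, but the core address rewrite NPt performs is simple: swap the /64 network bits and keep the interface identifier. A runnable sketch of just that rewrite (RFC 6296's checksum-neutral adjustment is ignored here for simplicity):

```python
import ipaddress

def npt_translate(addr: str, internal_prefix: str, external_prefix: str) -> str:
    """Rewrite the /64 network bits of addr from internal_prefix to
    external_prefix, keeping the 64-bit interface identifier intact."""
    internal = ipaddress.ip_network(internal_prefix)
    external = ipaddress.ip_network(external_prefix)
    assert internal.prefixlen == external.prefixlen == 64
    ip = ipaddress.ip_address(addr)
    assert ip in internal  # the address must come from the internal prefix
    iid = int(ip) & ((1 << 64) - 1)  # low 64 bits: the interface identifier
    return str(ipaddress.ip_address(int(external.network_address) | iid))

# A ULA host behind NPt appears on the WAN side under the delegated prefix:
print(npt_translate("fd12:3456:789a::42", "fd12:3456:789a::/64",
                    "2001:db8:abcd:1::/64"))
# → 2001:db8:abcd:1::42
```

Whenever the delegated prefix changes, only the external side of this mapping needs updating, which is exactly what the script does to the NPt entry.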
Unfortunately (yet again), Android does not support DHCPv6. Although disabling IPv6 on Android devices is fine with me, the devices will still get IPv6 DNS servers via RA (router advertisement) and show “connected, no Internet” to the user until they give up and revert to IPv4. Simply setting pfSense to announce `::` as the DNS server in the RA resolved this. (Clients that do support DHCPv6 will get their DNS information from that instead.)
The workarounds used to mitigate the two issues above (dual DHCPv6 clients & dynamic NPt) will be covered in depth in a future post.
Asus RT-AC66U B1 (MerlinWRT)
Setting up the AP itself was considerably simpler. I flashed MerlinWRT as the firmware, and for setting up VLANs, https://www.snbforums.com/threads/ssid-to-vlan.24791/ and https://gist.github.com/Jimmy-Z/6120988090b9696c420385e7e42c64c4 are good resources. (Note that the latter is for the AC86U and has to be modified a bit for the AC66U.) However, I did not realize that I needed to disable NAT acceleration (and that the setting only seemed to kick in after an `nvram commit` and a reboot), which caused some headaches.
In addition, client isolation on the guest networks is enabled.
Again, there will be a future article on setting up VLANs on this AP.
Even though my current setup could probably be achieved with custom firmware on the wireless router alone, the pfSense configuration has several advantages, such as the flexibility and performance of running on a much beefier machine.
That being said, did the Wi-Fi performance improve? Well, connecting directly to the RT-AC66U, the bandwidth was greatly improved thanks to the switch from 802.11n to 802.11ac. As for connecting through the WR841HP, while the stability did get better, the speed still lingered at a mere 20 Mbps, just as before, compared to our 100/40 Mbps plan. Connecting to the RT-AC66U with other devices from where the WR841HP is placed resulted in similar numbers. Therefore, it remains to be investigated whether i) moving the WR841HP, ii) adding more WDS devices, or iii) experimenting with power-line Ethernet helps.