
USB Dongles and Multi-WAN - Go ahead and laugh at me

TL;DR: The TP-Link UE300 USB Ethernet dongle does not work properly on OPNsense 21.7.3_3. USB Ethernet dongles are a bad idea in general for this application. Don’t ping 8.8.8.8 or 8.8.4.4 as your monitor IP. I didn’t follow some really junior-level troubleshooting best practices, so I spent a day and a half chasing my tail. Fast food is neither fast nor food.


I just wanted to add a note documenting my troubles with USB Ethernet and a dual-WAN setup for anyone who might find this via Google later. It has the added benefit of giving everyone a chance to either feel my pain or laugh at me. I’m fine with either.

I’ll start by saying the TP-Link UE300 USB Ethernet dongle does not work properly on OPNsense as of 21.7.3_3. I found this out the hard way. I feel this is a combination of things and not really the fault of OPNsense.

I had a backup ISP installed the other day, however I was still waiting on my new Intel dual port card to be delivered. Being the impatient IT guy that I am, I found a UE300 in my desk. I use it mostly as a way to give a wired connection to some ultra thin Windows laptops I need to work on from time to time. I thought I’d toss it in to get a jump start on the config process and validate fail-over. Setup was straightforward, easy even. The goal was active/active with fail-over, mostly a default config following the OPNsense multi-WAN documentation.

The initial issue I experienced was limited bandwidth, 30/20 vs the 200/20 I expected. I just chalked it up to a limitation either in a driver or my USB bus (both are USB 3.0). In any case this was a temp setup for the next few days. The next issue started hours after I completed the multi-WAN setup; I think this delay is what threw me off so much. The monitor began reporting packet loss over both services at random times, between 13% and 100%. This screwed with fail-over. I also noticed the CPU utilization would spike to 100% sometimes while the packet loss was happening. This exacerbated the packet loss on both links, with both hitting 100% loss at times and thus both being marked down at the same time. I would also notice my pings to the router LAN and web interface would stop responding, and some of my pings were leaking out on the WAN side to the ISP. I thought at first this was my config, something I missed, session states, firewall rules, etc.

Indeed I did correct some of the symptoms in the configs. Most notably, I added LAN firewall rules so traffic to the router web interface and my pings use the default gateway instead of the WAN group. Pings and web interface access improved, but would still stop responding at times.

It turns out my first true problem was that I was following the doc and pinging 8.8.8.8 and 8.8.4.4 as monitor IPs. Whenever I’ve pinged these addresses throughout my career, packet loss was my fault/problem. Except this time, I was getting between 13% and 50%. After reading up (because I’ve never encountered this before) I found this was a bad idea. Google it, I learned something. I adjusted my monitor pings to something I control, which helped greatly. So much so it worked for almost 20 hours before I had another event.
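For what it’s worth, here’s the kind of quick sanity check I could have run before trusting a monitor IP. This is just a throwaway Python sketch that shells out to the system ping, nothing built into OPNsense; the candidate addresses are placeholders, and the -W timeout flag here is the Linux-style one in seconds (BSD ping takes milliseconds for -W):

```python
#!/usr/bin/env python3
"""Rough packet-loss check for candidate gateway monitor IPs (illustrative only)."""
import subprocess

CANDIDATES = ["8.8.8.8", "8.8.4.4", "203.0.113.10"]  # placeholder addresses
PROBES = 20  # pings per candidate

def loss_percent(host: str, count: int = PROBES) -> float:
    """Send `count` single pings and return the percentage that got no reply."""
    failures = 0
    for _ in range(count):
        # -c 1: one echo request, -W 1: one-second wait (Linux syntax; BSD uses ms)
        result = subprocess.run(
            ["ping", "-c", "1", "-W", "1", host],
            stdout=subprocess.DEVNULL,
            stderr=subprocess.DEVNULL,
        )
        if result.returncode != 0:
            failures += 1
    return 100.0 * failures / count

if __name__ == "__main__":
    for host in CANDIDATES:
        print(f"{host}: {loss_percent(host):.0f}% loss over {PROBES} probes")
```

Nothing fancy, but running something like this from a machine behind each WAN would have shown me the loss to Google’s anycast addresses before I ever made them responsible for fail-over decisions.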

A side note: I can’t ping the gateway IP itself. The backup service provider has given me a fancy router that I can’t bypass and don’t want. The other gateway sometimes keeps responding during an outage; I suspect they have failing hardware or a back-haul that keeps getting cut.

My second true problem was that cheap dongle. As I mentioned above, I initially thought it was my config, mainly TCP sessions flipping between the two ISPs or something to that effect. While this could have played some small part in the problem, it didn’t explain everything. I took a break, got some lunch. While chewing on some overpriced, lukewarm, mediocre-at-best food, I thought about what changed. Cue the light bulb. That stupid $9 dongle. As soon as I walked in the door I swapped the interfaces with my lab network. My WAN problems ended.

I guess the lessons here are: Don’t use cheap garbage for important tasks. Test your hardware before deployment. USB Ethernet is for the birds. Verify your monitor IP. Most importantly, slow down and walk through your troubleshooting. I could have at the very least halved my time spent on this by just starting at Layer 1. I kept thinking about more and more complicated problems and solutions. That crummy lunch got my brain to slow down just enough.

As for OPNsense, I think the one change I would like to see is around the monitor IP. I’d like something with more checks, maybe letting me configure 3-5 IP addresses to check per interface: if two IPs show a problem, de-prioritize the link; three or more, mark it down. Maybe also add something for when all links show down: enable all links, disable the monitor, and measure traffic flow, marking a link down only if no traffic flows. Or some combination of the sort; I can see this getting complicated fast. A rough sketch of what I mean is below. I just would not make traffic flow decisions at my data center based on a few pings to one IP not coming back from some external off-site/network third party I have no relationship with or control over. So why should I here? I feel this could be a defining feature or at the very least a plugin. I’ll also toss in Dropbox support for backups.
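To make the idea concrete, here is a rough sketch of the voting logic I have in mind. None of this exists in OPNsense; the names, thresholds, and sample data are all made up, and the "every link looks down" case is simplified to just keeping everything up rather than falling back to measuring traffic flow:

```python
#!/usr/bin/env python3
"""Sketch of a multi-target gateway monitor (hypothetical, not an OPNsense feature)."""
from dataclasses import dataclass
from typing import Dict, List

LOSS_THRESHOLD = 20.0   # % loss at which a single monitor IP counts as failing
DEPRIORITIZE_AT = 2     # failing monitors needed to de-prioritize the link
MARK_DOWN_AT = 3        # failing monitors needed to mark the link down

@dataclass
class MonitorResult:
    ip: str
    loss_percent: float

def link_state(results: List[MonitorResult]) -> str:
    """Return 'up', 'deprioritized', or 'down' based on how many monitors failed."""
    failing = sum(1 for r in results if r.loss_percent >= LOSS_THRESHOLD)
    if failing >= MARK_DOWN_AT:
        return "down"
    if failing >= DEPRIORITIZE_AT:
        return "deprioritized"
    return "up"

def gateway_group_state(per_wan: Dict[str, List[MonitorResult]]) -> Dict[str, str]:
    """Evaluate every WAN. If all of them look down, assume the monitors are lying
    and keep the links up instead of blackholing traffic (simplified version of
    the 'disable monitor and measure traffic flow' idea)."""
    states = {wan: link_state(results) for wan, results in per_wan.items()}
    if all(state == "down" for state in states.values()):
        return {wan: "up (monitor overridden)" for wan in states}
    return states

if __name__ == "__main__":
    # Made-up sample data: WAN1 has two failing monitors, WAN2 is healthy.
    sample = {
        "WAN1": [MonitorResult("203.0.113.1", 0.0),
                 MonitorResult("198.51.100.7", 35.0),
                 MonitorResult("192.0.2.44", 100.0)],
        "WAN2": [MonitorResult("203.0.113.9", 5.0),
                 MonitorResult("198.51.100.20", 0.0),
                 MonitorResult("192.0.2.80", 0.0)],
    }
    print(gateway_group_state(sample))  # {'WAN1': 'deprioritized', 'WAN2': 'up'}
```

The point is simply that a link only gets punished when several independent targets agree, which would have saved me from a single flaky monitor IP taking both WANs offline.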

Thanks for reading my small book. Happy routing!
