Tom Purl's Blog

🚗 My '83 Datsun On the Side of the Information Superhighway 🛣️


This may be a bit wonky, but it’s a surprisingly important issue at a lot of companies that don’t have a dedicated QA department:

How do we ensure that our system is “green” after deploying a new feature? Who is responsible for running the tests and what should those tests include?

Many developers ask the same question this way:

Why do I have to be online at 2 AM to manually test my feature after it is deployed?

I’ve heard this question so many times that I thought I would write down my high-level answer. The answer isn’t complicated, and its implementation isn’t difficult. However, many companies find it difficult to implement because it requires a coordinated effort across 3 groups of your product development team.

Step 1 – Creating user acceptance criteria

Acceptance tests verify how the user will interact with our system. But before you create those, you need the acceptance criteria. It is up to the “holy trinity” (PO, Developer, Tester) to define these at story creation time, and the same people need to update them if the spec for the story changes. The Specification by Example process is a good, lightweight, and cheap way of doing this.

For most software, 99% of the time the acceptance criteria should be *testable*. You can’t have testable acceptance criteria without writing atomic, succinct, and well-defined stories. Without good stories you can’t have good tests, and the ripple effects are very, very expensive.

One heuristic for creating testable acceptance criteria is Gherkin. It gives you a shared syntax for specifying requirements (e.g., *Given* a registered user, *When* she submits valid credentials, *Then* she sees her dashboard) and makes it possible to generate automated tests. But there are other options for this too.

Step 2 – Creating the user acceptance tests

This is also the responsibility of the holy trinity. These tests can either be manual or automated. That sentence is a tremendous understatement: of course they should be automated. That will save you tons of money and time. No, seriously.

These tests impersonate the customer and ensure that the happy path scenarios are properly implemented. They also ensure that the app reacts properly in unhappy path scenarios.

These tests should be:

  • Cheap
  • Lightweight
  • Cheap
  • Fast
  • Test only what is necessary
    • e.g., if the story doesn’t involve a UI change, don’t use the UI to test the story
  • Cheap
  • Automated
  • Cheap

These tests need to run at deployment time.
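
To make this concrete, here’s a minimal sketch of what one of these tests could look like in Python with pytest and requests. The base URL, endpoint, and payload are hypothetical placeholders, not a prescription; the point is that the test impersonates the customer and checks one happy path and one unhappy path:

  # Minimal acceptance-test sketch (pytest + requests).
  # The base URL, /orders endpoint, and fields are hypothetical.
  import requests

  BASE_URL = "https://staging.example.com/api"

  def test_create_order_happy_path():
      # Impersonate the customer: submit a valid order
      resp = requests.post(f"{BASE_URL}/orders", json={"sku": "ABC-123", "qty": 1})
      assert resp.status_code == 201

  def test_create_order_rejects_bad_quantity():
      # Unhappy path: the app should react gracefully to bad input
      resp = requests.post(f"{BASE_URL}/orders", json={"sku": "ABC-123", "qty": -5})
      assert resp.status_code == 400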

Step 3 – Test Execution and Profit

If the acceptance tests aren’t automated then they need to be executed by a member of the holy trinity at the time of deployment. This option is:

  • Expensive
  • Error-prone
  • Slow
  • Expensive
  • Hated by everyone

Otherwise we can add them to a deployment pipeline and execute them immediately after the deployment step.
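
As a sketch of what that pipeline step might look like (the deploy command and test path here are assumptions, not a recommendation of specific tools), even a tiny script is enough to fail the pipeline whenever the acceptance suite fails:

  # deploy_gate.py – hypothetical post-deploy pipeline step.
  # "./deploy.sh" and the test path are placeholders for your own tooling.
  import subprocess
  import sys

  # Deploy first; a failed deploy raises and aborts immediately
  subprocess.run(["./deploy.sh", "staging"], check=True)

  # Then run the acceptance suite against the fresh deployment
  result = subprocess.run(["pytest", "tests/acceptance"])
  sys.exit(result.returncode)  # nonzero exit marks the pipeline red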

Note: After discussing this post with a few people on Reddit, I wanted to emphasize that the process below is only for personal to-do lists and their related notes. I personally would not do something as foolish as storing company information (even to-do's) outside of my employer's cloud, and I really recommend that you do the same 😸

I’m in the process of moving all of my to-do lists and projects into Nextcloud from org-mode after almost 10 years. I’m surprised by how well this is working for me and I thought it might be useful to write down some of my thoughts.

But first, a little background. Currently I have two types of to-do lists:

  1. Project-based
    1. Tasks are grouped in a somewhat formal way, have a life cycle and all lead to a common goal
  2. Ephemeral and recurring
    1. All of the little tasks that we need to write down so we remember them

In org-mode I used to have a separate file for each project and a few huge files for all of my ephemeral and recurring tasks. I then interacted with these to-do lists using Emacs with lots of great add-ons on my personal laptop, and organice everywhere else. This was always “good enough” to keep me from jumping ship, but bad enough to cause me to struggle whenever I wasn’t using my personal laptop (which is 95% of my waking hours).

Nextcloud is the best option I’ve found to replace org-mode for my to-do lists. Scratch that – it’s an excellent option. Here’s how I’m using it and why I am enjoying it so much more than org-mode for this particular use case.


I’m storing projects as Kanban boards in Nextcloud’s Deck app. Each board has the following lists which dictate each task’s life-cycle:

  • To-Do
  • In-Process
  • Done

Within each list we store cards. These cards can also have due dates and a description section that uses Markdown formatting. This section can also include clickable checklists, and the Deck app tracks these checklist items as if they were sub-tasks (which was a nice UI surprise).

I prefix each board’s title with Story. For example, my board that covers migrating my self-hosted Nextcloud instance to my new K8S cluster is titled Story – Migrate nextcloud to new k8s cluster. I then map these stories to parent features by doing the following:

  1. Creating a feature card (if it doesn’t exist already) in one of the following project boards:
    1. !! Personal Projects
    2. !! Professional Projects
  2. Linking my story board to the feature card by creating a Nextcloud project.
    1. In Nextcloud, projects are a fancy way of saying that two “things” are linked together somehow.

Now I can view all of the stories associated with a feature by looking at the Details section of the feature card.

I use a very simple, Kanban-like workflow for moving my tasks to completion. Finally, once a board is completed I archive it.

Managing my projects in the Deck app is very intuitive, easy and robust. However, sometimes it’s difficult to use the Deck app on your phone, even though the Deck mobile app is very good. Also, Kanban boards aren’t very good at storing one-off, ephemeral tasks or recurring tasks. They are better suited for formal projects.

Integrating Project Tasks with the Calendar App

A killer feature of the Deck app in Nextcloud (and honestly I’ve never seen this anywhere else) is its tight and intuitive integration with multiple other Nextcloud apps, including the Nextcloud Calendar app. Here’s how the two apps are linked:

  1. Each board is a calendar
  2. Each card on that board is a task (tasks are part of the CalDAV standard)

Note: Deck boards are CalDAV calendars but don’t support the entire standard. For that reason you can’t really edit them using a CalDAV-compliant client. However, you can view them using such a client and then edit them using the Nextcloud website or Nextcloud Deck for Android. Please see my Special Note section below for more details.

If you give your Deck card a due date it will show up on your calendar alongside your events, along with any tasks that you created outside of the Deck app. Which is pretty sweet 🙂
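
Because it’s all CalDAV under the hood, you can even peek at these tasks programmatically. Here’s a minimal, read-only sketch using the Python caldav library; the server URL and credentials are placeholders, and (per the note above) reading Deck boards this way works while writing does not:

  # Read-only sketch: list tasks (VTODOs, i.e. Deck cards) over CalDAV.
  # The URL and credentials are placeholders for your own Nextcloud server.
  import caldav

  client = caldav.DAVClient(
      url="https://nextcloud.example.com/remote.php/dav",
      username="tom",
      password="app-password",
  )

  for calendar in client.principal().calendars():
      for todo in calendar.todos():
          # Each card/task comes back as an iCalendar VTODO
          print(todo.vobject_instance.vtodo.summary.value)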

Advanced Task Management using the Tasks App

The Nextcloud Tasks app makes it easy to manage ephemeral or repeating tasks. Like I said earlier, Kanban boards aren’t very well suited for one-off tasks (pick up the dry cleaning) or recurring tasks. I don’t think there’s even a way to create recurring lists or cards (i.e. tasks) in the Deck app. I therefore use the Nextcloud Tasks app to manage a few ephemeral task lists for me.

Since tasks are part of the CalDAV standard it makes sense that they are stored with the events in each of your calendars (i.e. Deck boards). By that I mean that, behind the scenes, tasks and events are stored in Nextcloud like they are in any other CalDAV-compliant server. However, your interfaces to those tasks include the Deck, Calendar, and Tasks apps (to varying degrees). This gives you a lot of flexibility with how you manage your project and ephemeral todo list workflows when using the Nextcloud web interface.

Note that the CalDAV standard does support recurring tasks but the Nextcloud Tasks app does not. However, using a variety of third-party applications (like OpenTasks for Android) you can create recurring tasks that can be synced with your Nextcloud server using a CalDAV syncing tool (like the excellent DAVx5 app on Android).
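
For the curious, here’s roughly what such a recurring task looks like on the wire. This sketch builds a weekly VTODO with an RRULE using the Python icalendar package; the summary and start date are made up:

  # Build a weekly recurring task (VTODO + RRULE) with the icalendar package.
  from datetime import datetime
  from icalendar import Calendar, Todo

  todo = Todo()
  todo.add("summary", "Take out the recycling")    # made-up example task
  todo.add("dtstart", datetime(2021, 1, 4, 18, 0))
  todo.add("rrule", {"freq": "weekly"})            # the recurrence rule

  cal = Calendar()
  cal.add_component(todo)
  print(cal.to_ical().decode())  # raw iCalendar, ready for a CalDAV server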

The Importance of Client Ubiquity

I 💙 org-mode and Emacs for so many reasons. This combination has fundamentally changed the way that I interact with information and manage knowledge. However, it has always been a poor choice as a project and to-do list manager for me for one big reason: lack of interfaces. I love using org-mode in Emacs on my personal laptop, and I love all of the tools that make it easy for me to manage my tasks and agenda. It’s like driving a race in a sleek sports car that is also a hovercraft and a submarine and runs on sunshine. But as soon as I walk away from that laptop, that amazing interface is replaced by a bicycle at best and a scooter at worst.

Let’s start with accessing my to-do list from my work laptop. Since all of org-mode’s content is stored in text files (which is one of its best features), I would need to sync my org-mode files between my work and personal laptops. For security reasons, this is rarely an option.

I’m therefore forced to use a tool with a web interface like organice, which is a modern, excellent web application for org-mode files that accesses them using WebDAV or Dropbox integration. And I must admit that organice really is a robust, beautiful, and useful application. But Emacs users are used to an incredibly powerful, programmable interface with a ludicrously rich ecosystem of add-ons. You really can’t expect a web application, even one as good as organice, to come close to what Emacs can do today for at least another 20 years.

Storing your tasks in a CalDAV-compliant server gives you the ability to easily manage your todo lists on any system in a simple, transparent way. And if that CalDAV server and client happens to be Nextcloud, you have a lot of very good options for managing those todo’s using a variety of workflows without any additional configuration required.

Special Note About Syncing Deck Boards

Deck boards aren’t actually stored as conventional CalDAV calendars – they are stored as task lists, and unfortunately they’re the type of task list to which DAVx5 can’t write. So when you use 3rd-party, non-Nextcloud apps like aCalendar+ and OpenTasks, you can’t update tasks that exist as boards or lists in Deck, which means you can’t use those apps to update project to-do’s (if you’re using my project management workflow, that is).

The good news is that you can still see those tasks in those apps, meaning that they’re still part of your daily agenda and you will receive notifications about them on your phone. Also, the Deck app for Android is very good, and it doesn’t use DAVx5 for syncing – it updates the boards directly. So you still have an interface on your phone to update those tasks/cards, and a very good one at that. You just need to jump from one app to another to make it happen.

Tags: #cicd, #brainbender

I am in the process of reading Continuous Delivery: Reliable Software Releases through Build, Test, and Deployment Automation by David Farley and Jez Humble. Most of the principles in the book align with what I’ve learned as a technical professional for the last 20 years, but occasionally I’ll see something that bends my brain a bit. Here’s an example of that from the first chapter:

The aim of the deployment pipeline is threefold. First, it makes every part of the process of building, deploying, testing, and releasing software visible to everybody involved, aiding collaboration. Second, it improves feedback so that problems are identified, and so resolved, as early in the process as possible. Finally, it enables teams to deploy and release any version of their software to any environment at will through a fully automated process.

The reason why this seems so strange to me is that I’m used to the following workflow:

  1. Build out the mostly-static prod and non-prod environments ahead of time using IaC
    1. Example: A set of Ansible playbooks that build out a Stage and Prod environment
  2. Develop an application and automated build process that does things like run tests
    1. Example: A Django application that is built and tested using a Makefile
  3. Write a Pipeline script that is able to run your Makefile and deploy the resulting build to one of the static environments from step 1.
    1. Example: A Jenkins Pipeline that is running within a Jenkins server that was created before step 1

However, my interpretation of “releasing any version to any environment” is that I can deploy any arbitrary version of my app to a completely new environment, run my tests, and then throw that environment away. Oh, and all of the code that does that should live within my app’s repo.
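
Here’s a minimal sketch of that idea using the Docker SDK for Python, purely as an illustration (the image tag, port mapping, and test command are hypothetical): deploy an arbitrary version into a throwaway environment, test it, then destroy it.

  # Ephemeral-environment sketch using the Docker SDK for Python.
  # The image tag, port mapping, and test command are placeholders.
  import subprocess
  import docker

  client = docker.from_env()
  container = client.containers.run(
      "myapp:1.2.3",                 # any arbitrary version of the app
      detach=True,
      ports={"8000/tcp": 8000},
  )
  try:
      # (In real life you'd wait for the app to become healthy first.)
      result = subprocess.run(["pytest", "tests/acceptance"])
  finally:
      container.stop()
      container.remove()             # throw the environment away

  raise SystemExit(result.returncode)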

So I guess my questions at this point are...

  1. What’s a good “starter” framework for creating a new environment on-demand that can run my app?
  2. Am I making this too complex? Should I just use what’s built into GitLab or GitHub and replace a few days of work with 5 good lines of config?

Unfortunately I don’t think this topic is covered until Chapter 13, and I’m on Chapter 2. Oh well, it’s good motivation to get back to work 😼

This post was initially published on 6/5/2020

I recently installed my first Nextcloud server on top of a new DigitalOcean Kubernetes (K8S) cluster as a Kubernetes training exercise. I ended up learning a ton about Kubernetes, but I also learned a lot about how to run a Nextcloud server.

One thing I learned very quickly is that most default web server configurations don’t support uploading files larger than a few megabytes. I therefore got a ton of errors the first time I tried syncing an image folder.

Since I was using the official nextcloud:apache image I figured that the built-in Apache server was configured properly. I therefore started looking into how I could configure my Kubernetes Ingress to accept large file uploads. And since I was using the Nginx Ingress Controller it had to be Nginx-specific.

The docs were a little confusing on this, but the good news is that all I had to do was set an annotation in the ingress like this:

  apiVersion: extensions/v1beta1
  kind: Ingress
  metadata:
    name: nextcloud-ingress
    annotations:
      cert-manager.io/cluster-issuer: "letsencrypt-prod"
      kubernetes.io/ingress.class: "nginx"
      # maps to client_max_body_size 128m
      nginx.ingress.kubernetes.io/proxy-body-size: "128m"
  spec:
    tls:
      - hosts:
          - cloud.example.com        # placeholder host
        secretName: nextcloud-cert-tls
    rules:
      - host: cloud.example.com      # placeholder host
        http:
          paths:
            - backend:
                serviceName: nextcloud
                servicePort: 80

The key line is this one:

  nginx.ingress.kubernetes.io/proxy-body-size: "128m"

My understanding is that this line configures the client_max_body_size variable in your Ingress’ nginx.conf file. Granted, it would be nice if the annotation had a name that is closer to the conf file variable name, but I’m just glad I figured this out 😼

One of the killer features of using Nginx as your Kubernetes Ingress Controller is that you can configure tons of different things using simple annotations. You can find more information on them in the NGINX Ingress Controller documentation.

Tags: #digitalocean, #nginx, #kubernetes

This post was originally published on 2019-10-17


The first thing I always try to do when learning a new language, after writing “hello world”, is implementing fizzbuzz. I hadn’t yet done this with the Robot Framework, so I thought it was time to give it a try.

My Implementation

 *** Settings ***
 Documentation    Fizzbuzz kata
 Library    BuiltIn

 *** Test Cases ***

 Print Fizzbuzz
     [Documentation]    Print the numbers 1-100 to the console, replacing
     ...                all numbers that are divisible by 3 with "fizz", 5 with
     ...                "buzz", and if divisible by both "fizzbuzz".
     FOR    ${number}    IN RANGE    1    101
         ${divisible_by_3}=     Is Mod Zero    ${number}    3
         ${divisible_by_5}=     Is Mod Zero    ${number}    5
         ${divisible_by_15}=    Is Mod Zero    ${number}    15
         Run Keyword If    ${divisible_by_15}    Log To Console    FIZZBUZZ
         ...    ELSE IF    ${divisible_by_3}     Log To Console    FIZZ
         ...    ELSE IF    ${divisible_by_5}     Log To Console    BUZZ
         ...    ELSE    Log To Console    ${number}
     END


 *** Keywords ***

 Is Mod Zero
     [Documentation]    Returns whether the modulus of two numbers is zero.
     [Arguments]        ${dividend}    ${divisor}
     [Return]           ${is_modulus_zero}
     # Go-go gadget Python!
     ${is_modulus_zero}=    Evaluate    divmod(${dividend},${divisor})[1] == 0


The first thing I learned from this exercise was how surprisingly difficult it was to evaluate the result of an expression. If I was running this in Python I would do something like this:

for num in range(1, 101):
    if num % 15 == 0:
        print("fizzbuzz")
    elif num % 3 == 0:
        print("fizz")
    elif num % 5 == 0:
        print("buzz")
    else:
        print(num)

I can evaluate the num % 3 part directly within the elif condition using Python. But here’s what I can’t do using the Robot Framework:

Run keyword if    Is Mod Zero    ${number}    15   Log to Console    FIZZBUZZ
...    ELSE IF    Run keyword and return status    Is Mod Zero    ${number}    3     Log to Console    FIZZ

I’m sure something like this is possible without creating temporary variables (and calling Is Mod Zero three times on every iteration), but I’m not quite sure what it is.

The second thing I learned was how easy it is to run a Python one-liner from Robot. If that hadn’t worked, I don’t see how I could have evaluated a modulus from Robot without writing an entire Python module (for a one-liner).

Tags: #robotframework, #programming

This post was initially posted on 4/14/2020

A co-worker of mine recently asked me why I prefer to write automated REST API tests using the Robot Framework. Specifically, he couldn’t understand why I didn’t just automate everything using Postman, which is a very popular way of doing such things.

I was a little surprised by what I told him and thought it might help others, so here’s my rationale. If I’m wrong, I’m sure someone will let me know :-)

  1. Postman doesn’t really support the idea of “setup” and “teardown” functions. The closest analogues are “pre-request scripts” and “Tests”. These are good at a request level, but a test case is often larger than just one request. I’m a huge fan of how Robot Framework handles test case and suite-level setup and teardown functionality, and of how you can configure it as an annotation.

  2. Code that you write in the “pre-request scripts” and “tests” sections can’t easily be modularized into external libraries. So for example, if each request requires you to run 10 lines of JS as a pre-request script, then you’re copying and pasting that JS into each and every request. If you need to make a change to that JS, then you need to copy and paste the new JS into each request. This makes things very difficult to maintain.

  3. It’s difficult to follow the workflows of a Postman test suite. Let’s say that you want to run request #1 before you run request #2, and if everything works then run request #3. Then let’s say that you want to run request #4, then 2 and 3. I’ve seen examples on how to do this but it’s very, very kludgy and I wouldn’t want to maintain those scripts or follow that bouncing ball.

  4. The response I’ve seen to #3 is that you should simplify your test cases as much as possible and then put everything else your test needs to do in JS. But that takes us back to #2.

So what is Postman good for? To me, the killer feature of Postman is that you can “kick the tires” of your API and then write your test using a single tool that is nearly ubiquitous. And I agree that Postman is by far the best tool I’ve found for quickly poking and prodding a REST API.

So I guess what I’m saying is, when it comes to prototyping REST calls, Postman is hard to beat. However, if I want to actually write a formal test suite that is easy to read, write, and maintain, I would much rather use a “real” programming language bundled with a good test runner (both of which are included in the Robot Framework).

Tags: #postman, #robotframework, #testing

A Solution In Search Of a Problem

I like to read a lot of different media sources, including blogs, forums, and a few choice Twitter feeds. To keep all of those feeds in one place (and be Nosurf Neddy) I have traditionally used a wonderful service called Feedbin. Not only can I use it to follow many blogs and forums with RSS feeds, I can also use it to follow feed sources that don’t use RSS, like Twitter. It’s very inexpensive, has a wonderful interface that works well on my phone, and has worked flawlessly for me for years.

But like I often say, I guess my life isn’t complicated enough. So I recently took a stab at using the Element chat client as the interface for all of my feeds. I’m very happy with the results so I thought I would share how I did it.


The idea of using a chat client to keep up with your feeds may sound a bit foreign so here’s basically how I do it.

  1. I invite a Maubot-based RSS bot to a chat room (e.g. “Fun Feeds”)
    1. I only have to do this once after creating a room
  2. I ask the bot to subscribe to a feed using a command like the following:
    1. !rss subscribe <feed URL>
    2. I only have to do this once for each feed

Now the bot will periodically scan all of my subscriptions and post them as new chat messages. Easy-peasy.


But what about feeds that don’t use RSS (e.g. Twitter)? For that I use a proxy called RSS-Bridge, which generates RSS feeds for dozens of non-RSS sites on-demand.



I use the excellent chat client Element (which used to be called Riot) on both my laptop and my phone. This is the official client for the Matrix chat network. I run my own server but you certainly don’t have to. You can use the official Matrix server for free or rent one for $5 a month.

Bot “Server”

You need a place where you can run the bot at all times. I have a server on my private home network (David) where I run a Maubot Docker image. When this container is started it logs into a Matrix homeserver and starts monitoring its rooms.

Please note that when I say “server” I mean “computer that runs all of the time”. This computer does not need to be accessible from the outside world. Maubot is a client that uses Matrix as a server, not the other way around.

RSS-Bridge Proxy

I also need to run the RSS-Bridge software on a computer that runs all of the time. The good news is that this “server” only needs to be accessible by the bot, so you can run them both on the same machine. I therefore also run this software on David as a Docker container.



The most important thing to remember is that a bot is just a daemon that can log into a chat account. It doesn’t need to be installed on the same host as your chat server. However, in Maubot’s case it does need to be configured using a web interface, which seems unnecessarily complex and confusing to me. Having said that, once you figure it out, the web interface is very easy to use and powerful.

The docs for setting up Maubot are very good and complete, if a bit non-intuitive. Make sure you read them completely a few times before attempting to set up a bot.


RSS-Bridge was also a little confusing. For example, let’s assume that I want to subscribe to the @danagould Twitter feed. After enabling the Twitter bridge I can interact with it using a very nice web interface. The only problem is, the web interface doesn’t return the URL that I need to access the RSS feed. Instead it returns the results of visiting that URL. To build the URL you have two choices:

  1. Read the docs on building the URLs
  2. Use your browser’s developer tools to peek at the URL that was used to return the RSS results
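
Once you have the URL, any feed consumer can use it. For example, here’s a tiny Python sketch using feedparser; the RSS-Bridge host and query parameters below are illustrative, so check the docs (or your browser’s developer tools) for the exact ones your bridge expects:

  # Fetch and parse a bridged feed with feedparser.
  # The host and query string below are illustrative placeholders.
  import feedparser

  url = ("http://rss-bridge.lan/?action=display"
         "&bridge=Twitter&u=danagould&format=Atom")
  feed = feedparser.parse(url)
  for entry in feed.entries[:5]:
      print(entry.title, "->", entry.link)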


Lessons Learned

I Experience Less FOMO

Most RSS readers present unread posts like files in a folder or messages in an inbox. Seeing everything that I didn’t have time to read used to give me a bad case of FOMO.

Now if I don’t check my feed chat rooms for a week, I just don’t see the messages I’m missing. Sure, I could scroll up 15 pages and see everything, but why would I when ignorance is bliss?

Everything Is Surprisingly Stable

Since this solution is relatively complex, I was worried that it would decay pretty quickly. But I’m happy to say that everything has survived a few reboots.

The Matrix Ecosystem Keeps Getting Better

Element has improved dramatically over the last 18 months and is now very polished. If you don’t like it, there’s also a large list of alternative clients. If only I could hook in Discord and Slack bridges, then I could use one excellent interface for all of my chats and feeds.

Tags: #matrix, #rss, #chat, #bots


I recently decided to set up a small server running IPFire on the edge of my network so I could easily host a secure DMZ. I didn’t have any hardware lying around that could run it, so I put together the following requirements:

  • Small: I wanted to stash this thing away like a cable modem
  • Energy Efficient: It’s important to me to limit my computer power usage as much as possible
  • Relatively Recent CPU: This is important on an edge router for security reasons. Please note that I don’t need a particularly fast processor; I need one that is well supported by its vendor with security patches.
  • Brand Name: There are a lot of tiny computers available from no-name vendors with no-name parts. I didn’t want to risk ordering something from one of those vendors that may contain gray-market parts.
  • Cheap: One of the reasons I’m building a home DMZ is to save money over the long term on hosting costs. Spending $400 on a router (which is quite cheap for a good one) makes that much more difficult.
  • 3 Ethernet Ports or 4 USB 3 Ports: Ideally the device would have 3 ethernet ports, but that addition usually adds at least a couple hundred dollars to the price. The next best thing, which has worked very well for me in the past, was to use USB3->ethernet adapters. I even had a few lying around the house already.

I ended up getting a great deal on eBay on a used HP EliteDesk 705 G3 Desktop Mini:


I plugged in my two USB3->ethernet adapters, installed and configured IPFire, and in about an hour I had an excellent router/firewall/IPS/whatever system on the edge of my network. Thank goodness I’m so smart and really knocked this one out of the park, right?


Anatomy of an Outage

Things worked oh-so-well for about 3 weeks but then something funny happened.

First, I discovered a super-cool media server called Jellyfin. I have a large media library on my home file server and had been using Kodi for a while to access it over CIFS. This worked fairly well, but I didn’t like that Kodi a) didn’t have a great web client for watching movies and b) wasn’t able to transcode videos on-the-fly. Jellyfin solves both of these problems, and after a few days of poking around I was hooked.

So one day I had been watching Rick and Morty in my kitchen for about 90 minutes when suddenly the video just froze. I wasn’t able to ping the file server or router from my laptop, all of which were on my private, non-DMZ network. After a little troubleshooting I discovered that even when I plugged my laptop directly into the USB->ethernet adapter for my private network, I wasn’t able to talk to the router. Luckily I had another, unused USB->ethernet adapter, so I just replaced the “broken” one with that. Thank goodness I never throw anything away 😼

After a few reboots everything worked again and I went right back to my video streaming. And then it happened again 20 minutes later.

“Oh, I know, it must be the USB port,” I thought, so I chose a different one. After a few more reboots everything worked again for a couple of hours, but then it failed again. Rinse and repeat another 3 times. Every outage was a little longer and required more reboots, swap-outs, and praying to the gods of flaky computer problems.

The last outage (hopefully!) was so bad that I ended up re-installing IPFire. Thankfully that is very simple but good lord it’s not something I want to do every couple of days. It’s been about a week since I did that and so far, so good.



I’m very fortunate to have many good friends, and some of those friends know a heckuva lot more about networking and hardware than I do. After discussing this very strange issue with them, we theorized that Jellyfin was probably putting a lot of strain on the network adapter. That strain drew a relatively large amount of power or created a lot of heat, which in turn caused either the USB->ethernet adapter or the USB bus to stop working.

To avoid the problem I’ve done the following:

  • Since I have 2 USB3->ethernet adapters (1 for the DMZ network and 1 for the private network) I now split them between the two USB3 buses on the computer.
  • The Jellyfin client appears to try and use as much bandwidth as possible by default. I configured all of the clients that I have to use no more than 1 Mb/s of bandwidth.

Now I appear to be in much better shape. I’ve watched a ton of video using Jellyfin over the last week and I haven’t seen any problems.

Lessons Learned

If you’re reading this, you are probably like me: a cheap bastard. You want to really maximize what you get for what you spend, and you don’t mind doing a little more work in the short term to make that happen. This is an admirable trait.

What I have learned from this experience is that you are very unlikely to save money on a custom home router by using USB3->ethernet adapters. They work great if you want to plug your laptop into a wired network connection during the day, but I just haven’t had a good experience running them on a router that has to work 99.99% of the time.

The good news is, you don’t have to buy gray-market hardware or spend big bucks on niche network hardware. All you need to do is buy the smallest, cheapest computer you can find that includes at least one PCIe slot. You can then add an inexpensive 2-port gigabit ethernet card to give yourself enough ports to run both private and DMZ networks.

In my case, I wouldn’t even need to find a different computer model. I would just need to buy the “next size up” of the 705 G3, which is the “small form factor (SFF)” model.


It’s a little bigger and it uses a little more power, but it’s still a nice, small computer. And it’s not even that much more expensive.

Here are my costs for my current rig:

Part                        Price   Already owned?
Used G3 Mini                $110    No
USB3->Ethernet Adapter 1    $25     Yes
USB3->Ethernet Adapter 2    $25     Yes

Here are the current prices for the alternative:

Part                          Price   Already owned?
Used G3 SFF                   $160    No
2-port PCIe Ethernet Adapter  $22     No

If I could go back, I would definitely redo things with the G3 SFF and save myself and my family a ton of hassle. And who knows? If the USB adapter issues continue, this is a relatively cheap and easy fix.

Please learn from my mistakes. You can build a powerful, flexible and low-powered router for less than $200 without rolling the dice on flaky USB->ethernet adapters.

Tags: #networking, #security, #hardware
