Tom Purl's Blog

🚗 My '83 Datsun On the Side of the Information Superhighway 🛣️

Tags: #cicd #brainbender

I am in the process of reading Continuous Delivery: Reliable Software Releases through Build, Test, and Deployment Automation by David Farley and Jez Humble. Most of the principles in the book align with what I’ve learned as a technical professional for the last 20 years, but occasionally I’ll see something that bends my brain a bit. Here’s an example of that from the first chapter:

The aim of the deployment pipeline is threefold. First, it makes every part of the process of building, deploying, testing, and releasing software visible to everybody involved, aiding collaboration. Second, it improves feedback so that problems are identified, and so resolved, as early in the process as possible. Finally, it enables teams to deploy and release any version of their software to any environment at will through a fully automated process.

The reason why this seems so strange to me is that I’m used to the following workflow:

  1. Build out the mostly-static prod and non-prod environments ahead of time using IAC
    1. Example: A set of Ansible playbooks that build out a Stage and Prod environment
  2. Develop an application and automated build process that does things like run tests
    1. Example: A Django application that is built and tested using a Makefile
  3. Write a Pipeline script that is able to run your Makefile and deploy the resulting build to one of the static environments from step 1.
    1. Example: A Jenkins Pipeline that is running within a Jenkins server that was created before step 1

However, my interpretation of “releasing any version to any environment” is that I can deploy any arbitrary version of my app to a completely new environment, run my tests, and then throw that environment away. Oh, and all of the code that does that should live within my app’s repo.

So I guess my questions at this point are...

  1. What’s a good “starter” framework for creating a new environment on-demand that can run my app?
  2. Am I making this too complex? Should I just use what’s built into GitLab or GitHub and replace a few days of work with 5 good lines of config?
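To give question #2 some flavor: GitLab’s built-in “review apps” feature gets surprisingly close to the “deploy any version to a throwaway environment” idea, and it really is only a handful of lines of config. Here’s a rough sketch, not a working example — the deploy/teardown scripts and the domain are placeholders I made up:

```yaml
# Hypothetical .gitlab-ci.yml fragment - scripts and domain are placeholders
deploy_review:
  stage: deploy
  script:
    - ./deploy.sh "$CI_COMMIT_REF_NAME"        # your own deploy logic
  environment:
    name: review/$CI_COMMIT_REF_NAME           # one throwaway environment per branch
    url: https://$CI_COMMIT_REF_SLUG.example.com
    on_stop: stop_review

stop_review:
  stage: deploy
  script:
    - ./teardown.sh "$CI_COMMIT_REF_NAME"      # tear the environment back down
  environment:
    name: review/$CI_COMMIT_REF_NAME
    action: stop
  when: manual
```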

Unfortunately I don’t think this topic is covered until Chapter 13, and I’m on Chapter 2. Oh well, it’s good motivation to get back to work 😼

This post was initially published on 6/5/2020

I recently installed my first Nextcloud server on top of a new Digital Ocean Kubernetes (K8S) cluster as a Kubernetes training exercise. I ended up learning a ton about Kubernetes but I also learned a lot about how to run a Nextcloud server.

One thing I learned very quickly is that most default web server configurations don’t support uploading files larger than a few megabytes. I therefore got a ton of errors the first time I tried syncing an image folder.

Since I was using the official nextcloud:apache image I figured that the built-in Apache server was configured properly. I therefore started looking into how I could configure my Kubernetes Ingress to accept large file uploads. And since I was using the Nginx Ingress Controller it had to be Nginx-specific.

The docs were a little confusing on this, but the good news is that all I had to do was set an annotation in the ingress like this:

  apiVersion: extensions/v1beta1
  kind: Ingress
  metadata:
    annotations:
      cert-manager.io/cluster-issuer: "letsencrypt-prod"
      kubernetes.io/ingress.class: "nginx"
      # maps to client_max_body_size
      nginx.ingress.kubernetes.io/proxy-body-size: 128m
    name: nextcloud-ingress
  spec:
    tls:
      - hosts:
          - docs.tompurl.com
        secretName: nextcloud-cert-tls
    rules:
      - host: docs.tompurl.com
        http:
          paths:
            - backend:
                serviceName: nextcloud
                servicePort: 80

The key line is this one:

  • nginx.ingress.kubernetes.io/proxy-body-size: 128m

My understanding is that this line configures the client_max_body_size variable in your Ingress’ nginx.conf file. Granted, it would be nice if the annotation had a name that is closer to the conf file variable name, but I’m just glad I figured this out 😼
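For comparison, if I were managing the nginx.conf by hand instead of through an Ingress annotation, my understanding is that the equivalent setting would look something like this:

```nginx
server {
    # Allow request bodies (i.e. uploads) of up to 128 MB.
    # The nginx default is only 1m, which explains the upload errors.
    client_max_body_size 128m;
}
```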

One of the killer features of using Nginx as your Kubernetes Ingress Controller is that you can configure tons of different things using simple annotations. You can find more information on them here:


Tags: #digitalocean, #nginx, #kubernetes

This post was originally published on 2019-10-17

Overview

The first thing I always try to do when learning a new language, after writing “hello world”, is to implement fizzbuzz. I hadn’t done that yet with the Robot Framework, so I thought it was time to give it a try.

My Implementation

 *** Settings ***
 Documentation    Fizzbuzz kata
 Library    BuiltIn

 *** Test Cases ***

 Print Fizzbuzz
     [Documentation]    Print the numbers 1-100 in the log.html file, replacing
     ...                all numbers that are divisible by 3 with "fizz", 5 with
     ...                "buzz", and if divisible by both "fizzbuzz".

     Fizzbuzz

 *** Keywords ***

 Fizzbuzz
     FOR    ${number}    IN RANGE    1    101
         ${divisible_by_3}=    Is Mod Zero    ${number}    3
         ${divisible_by_5}=    Is Mod Zero    ${number}    5
         ${divisible_by_15}=   Is Mod Zero    ${number}   15
         Run keyword if    ${divisible_by_15}    Log to Console    FIZZBUZZ
         ...    ELSE IF    ${divisible_by_3}     Log to Console    FIZZ
         ...    ELSE IF    ${divisible_by_5}     Log to Console    BUZZ
         ...    ELSE    Log to Console    ${number}
     END

 Is Mod Zero
     [Documentation]    Returns whether the modulus of two numbers is zero.
     [Arguments]        ${dividend}    ${divisor}
     [Return]           ${is_modulus_zero}
     # Go-go gadget Python!
     ${is_modulus_zero}=    Evaluate    divmod(${dividend},${divisor})[1] == 0

Observations

The first thing I learned from this exercise was how surprisingly difficult it was to evaluate the result of an expression. If I were writing this in Python I would do something like this:

for num in range(1, 101):
    if num % 15 == 0:
        print("fizzbuzz")
    elif num % 3 == 0:
        print("fizz")
    elif num % 5 == 0:
        print("buzz")
    else:
        print(num)

In Python I can evaluate the num % 3 expression inline within the elif condition. But here’s what I can’t do using the Robot Framework:

Run keyword if    Is Mod Zero    ${number}    15   Log to Console    FIZZBUZZ
...    ELSE IF    Run keyword and return status    Is Mod Zero    ${number}    3     Log to Console    FIZZ

I’m sure something like this is possible without creating a temporary variable (and evaluating the Is Mod Zero 3 times every time) but I’m not quite sure what it is.

The second thing I learned was how easy it was to run a Python one-liner from Robot. If that hadn’t worked, I don’t see how I could have evaluated a modulus from Robot without writing an entire Python module (for a one-liner).
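For reference, here’s the same check in plain Python. This is exactly the kind of expression that Robot’s Evaluate keyword hands off to the Python interpreter:

```python
def is_mod_zero(dividend, divisor):
    """Return True when dividend is evenly divisible by divisor."""
    # divmod returns (quotient, remainder); index [1] is the remainder
    return divmod(dividend, divisor)[1] == 0

print(is_mod_zero(15, 3))  # True
print(is_mod_zero(7, 3))   # False
```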

Tags: #robotframework, #programming

This post was initially posted on 4/14/2020

A co-worker of mine recently asked me why I prefer to write automated REST API tests using the Robot Framework. Specifically, he couldn’t understand why I didn’t just automate everything using Postman, which is a very popular way of doing such things.

I was a little surprised by what I told him and thought that this may help others, so here’s my rationale. If I’m wrong I’m sure someone will let me know :–)

  1. Postman doesn’t really support the idea of “setup” and “teardown” functions. The closest analogues are “pre-request scripts” and “tests”. These are good at the request level, but a test case is often larger than just one request. I’m a huge fan of how the Robot Framework handles test case- and suite-level setup and teardown functionality and how you can configure it as an annotation.

  2. Code that you write in the “pre-request scripts” and “tests” sections can’t easily be modularized into external libraries. So for example, if each request requires you to run 10 lines of JS as a pre-request script, then you’re copying and pasting that JS into each and every request. If you need to make a change to that JS, then you need to copy and paste the new JS into each request. This makes things very difficult to maintain.

  3. It’s difficult to follow the workflows of a Postman test suite. Let’s say that you want to run request #1 before you run request #2, and if everything works then run request #3. Then let’s say that you want to run request #4, then 2 and 3. I’ve seen examples on how to do this but it’s very, very kludgy and I wouldn’t want to maintain those scripts or follow that bouncing ball.

  4. The response I’ve seen to #3 is that you should just simplify your test cases as much as possible and then put everything else your test needs to do in JS. But then that takes us back to #2.
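To make point #1 concrete, here’s what suite- and test-level setup/teardown look like in Robot Framework — each is a one-line setting. The keyword names below are hypothetical placeholders:

```robotframework
*** Settings ***
# Runs once before/after the entire suite
Suite Setup       Create Test Database
Suite Teardown    Drop Test Database
# Runs before/after every single test case
Test Setup        Start API Session
Test Teardown     Close API Session
```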

So what is Postman good for? To me, the killer feature of Postman is that you can “kick the tires” of your API and then write your test using a single tool that is nearly ubiquitous. And I agree that Postman is by far the best tool I’ve found for quickly poking and prodding a REST API.

So I guess what I’m saying is, when it comes to prototyping REST calls, Postman is hard to beat. However, if I want to actually write a formal test suite that is easy to read, write, and maintain, I would much rather use a “real” programming language bundled with a good test runner (both of which are included in the Robot Framework).


Tags: #postman, #robotframework, #testing

A Solution In Search Of a Problem

I like to read a lot of different media sources including blogs, forums and a few choice Twitter feeds. To keep all of those feeds in one place (and be Nosurf Neddy) I have traditionally used a wonderful service called Feedbin. Not only can I use it to follow many blogs and forums with RSS feeds, I can also use it to follow feed sources that don’t use RSS, like Twitter. It’s very inexpensive, has a wonderful interface that works well on my phone, and has worked flawlessly for me for years.

But like I often say, I guess my life isn’t complicated enough. So I recently took a stab at using the Element chat client as the interface for all of my feeds. I’m very happy with the results so I thought I would share how I did it.

Workflow

The idea of using a chat client to keep up with your feeds may sound a bit foreign so here’s basically how I do it.

  1. I invite a Maubot-based RSS bot to a chat room (e.g. “Fun Feeds”)
    1. I only have to do this once after creating a room
  2. I ask the bot to subscribe to a feed using a command like the following:
    1. !rss subscribe https://www.reddit.com/r/aww.rss
    2. I only have to do this once for each feed

Now the bot will periodically scan all of my subscriptions and post them as new chat messages. Easy-peasy.


But what about feeds that don’t use RSS (e.g. Twitter)? For that I use a proxy called RSS-Bridge, which generates RSS feeds for dozens of non-RSS sites on-demand.

Architecture

Chat

I use the excellent chat client Element (which used to be called Riot) on both my laptop and my phone. This is the official client for the Matrix chat network. I run my own server but you certainly don’t have to. You can use the official Matrix server for free or rent one for $5 a month.

Bot “Server”

You need a place where you can run the bot at all times. I have a server on my private home network (David) where I run a Maubot Docker image. When this container is started it logs into a Matrix homeserver and starts monitoring its rooms.

Please note that when I say “server” I mean “computer that runs all of the time”. This computer does not need to be accessible from the outside world. Maubot is a client that uses Matrix as a server, not the other way around.

RSS-Bridge Proxy

I also need to run the RSS-Bridge software on a computer that runs all of the time. The good news is that this “server” only needs to be accessible by the bot, so you can run them both on the same machine. I therefore also run this software on David as a Docker container.
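To give you an idea of the moving parts, here’s a rough docker-compose sketch of what runs on David. Treat the image tags, ports, and volume paths as assumptions for illustration, not gospel:

```yaml
# Hypothetical docker-compose.yml - image names, ports, and paths are assumptions
version: "3"
services:
  maubot:
    image: dock.mau.dev/maubot/maubot    # official Maubot image (assumed tag)
    restart: unless-stopped
    volumes:
      - ./maubot:/data                   # config and plugin storage
    ports:
      - "29316:29316"                    # Maubot admin web UI
  rss-bridge:
    image: rssbridge/rss-bridge          # community RSS-Bridge image (assumed)
    restart: unless-stopped
    ports:
      - "3000:80"                        # only the bot needs to reach this
```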

Setup

Maubot

The most important part to remember is that a bot is just a daemon that can log into a chat account. It doesn’t need to be installed on the same host as your chat server. However, in Maubot’s case it does need to be configured using a web interface, which seems unnecessarily complex and confusing to me. Having said that, once you figure it out the web interface is very easy to use and powerful.

The docs here for setting up Maubot are very good and complete, if a bit non-intuitive. Make sure you read the doc completely a few times before attempting to set up a bot.

RSS-Bridge

RSS-Bridge was also a little confusing. For example, let’s assume that I want to subscribe to the @danagould Twitter feed. After enabling the Twitter bridge I can interact with it using a very nice web interface. The only problem is that the web interface doesn’t return the URL that I need to access the RSS feed. Instead it returns the results of visiting that URL. To build the URL you have two choices:

  1. Read the docs on building URLs
  2. Use your browser’s developer tools to peek at the URL that was used to return the RSS results


Lessons Learned

I Experience Less FOMO

Most RSS readers present unread posts like files in a folder or messages in an inbox. Seeing everything that I didn’t have time to read used to give me a bad case of FOMO.

Now if I don’t check my feed chat rooms for a week I just don’t see messages I’m missing. Sure, I could scroll up 15 pages and see everything, but why would I when ignorance is bliss?

Everything Is Surprisingly Stable

Since this solution is relatively complex I was worried that it would decay pretty quickly. But I’m happy to say that everything has survived a few reboots.

The Matrix Ecosystem Keeps Getting Better

Element has improved dramatically over the last 18 months and is now very polished. If you don’t like it then there’s also a large list of alternative clients. If only I could hook in Discord and Slack bridges then I could use one excellent interface for all chats and feeds.


Tags: #matrix, #rss, #chat, #bots

Background

I recently decided to set up a small server running IPFire on the edge of my network so I could easily host a secure DMZ. I didn’t have any hardware lying around that could run it, so I put together the following requirements:

  • Small: I wanted to stash this thing away like a cable modem
  • Energy Efficient: It’s important to me to limit my computer power usage as much as possible
  • Relatively Recent CPU: This is important on an edge router for security reasons. Please note that I don’t need a particularly fast processor; I need one that is well supported by its vendor with security patches.
  • Brand Name: There are a lot of tiny computers available from no-name vendors with no-name parts. I didn’t want to risk ordering something from one of those vendors that may contain gray-market parts.
  • Cheap: One of the reasons I’m building a home DMZ is to save money over the long term on hosting costs. Spending $400 on a router (which is quite cheap for a good one) makes that much more difficult.
  • 3 Ethernet Ports or 4 USB 3 Ports: Ideally the device would have 3 ethernet ports, but that addition usually adds at least a couple hundred dollars to the price. The next best thing, which has worked very well for me in the past, was to use USB3->ethernet adapters. I even had a few lying around the house already.

I ended up getting a great deal on eBay on a used HP EliteDesk 705 G3 Desktop Mini.


I plugged in my two USB3->ethernet adapters, installed and configured IPFire, and in about an hour I had an excellent router/firewall/IPS/whatever system on the edge of my network. Thank goodness I’m so smart and really knocked this one out of the park, right?


Anatomy of an Outage

Things worked oh-so-well for about 3 weeks but then something funny happened.

First, I discovered a super-cool media server called Jellyfin. I have a large media library on my home file server and have been using Kodi for a while to access it over CIFS. This worked fairly well, but I didn’t like that Kodi a) didn’t have a great web client for watching movies and b) wasn’t able to transcode videos on-the-fly. Jellyfin solves both of these problems, and after a few days of poking around I was hooked.

So one day I had been watching Rick and Morty in my kitchen for about 90 minutes when suddenly the video just froze. I wasn’t able to ping the file server or router from my laptop, all of which were on my private, non-DMZ network. After a little troubleshooting I discovered that even when I plugged my laptop directly into the USB->ethernet adapter for my private network I wasn’t able to talk to the router. Luckily I had another, unused USB->ethernet adapter, so I just replaced the “broken” one with that. Thank goodness I never throw anything away 😼

After a few reboots everything worked again and I went right back to my video streaming. And then it happened again 20 minutes later.

“Oh, I know, it must be the USB port,” I thought, so I chose a different one. After a few more reboots everything worked again for a couple of hours, but then it failed again. Rinse and repeat another 3 times. Every outage was a little longer and required more reboots, swap-outs, and praying to the gods of flaky computer problems.

The last outage (hopefully!) was so bad that I ended up re-installing IPFire. Thankfully that is very simple but good lord it’s not something I want to do every couple of days. It’s been about a week since I did that and so far, so good.


Solutions

I’m very fortunate to have many good friends, and some of those friends know a heckuva lot more about networking and hardware than I do. After discussing this very strange issue with them, we basically theorized that Jellyfin was probably putting a lot of strain on the network adapter. That strain was causing a relatively large amount of power to be drawn or heat to be generated, which was causing either the USB->ethernet adapter or the USB bus to stop working for one reason or another.

To avoid the problem I’ve done the following:

  • Since I have 2 USB3->ethernet adapters (1 for the DMZ network and 1 for the private network) I now split them between the two USB3 buses on the computer.
  • The Jellyfin client appears to try and use as much bandwidth as possible by default. I configured all of the clients that I have to use no more than 1 Mb/s of bandwidth.

Now I appear to be in much better shape. I’ve watched a ton of video using Jellyfin over the last week and I haven’t seen any problems.

Lessons Learned

If you’re reading this you are probably like me: a cheap bastard. You want to really maximize what you get for what you spend, and you don’t mind doing a little more work in the short term to make that happen. This is an admirable trait.

What I have learned from this experience is that you are very likely to not save money on a custom home router by using USB3->ethernet adapters. They work great if you want to plug your laptop into a wired network connection during the day, but I just haven’t had a good experience with running them on a router that has to work 99.99% of the time.

The good news is, you don’t have to buy gray-market hardware or spend big bucks on niche network hardware. All you need to do is buy the smallest, cheapest computer you can afford that includes at least one PCIe interface. You can then add an inexpensive 2-port gigabit ethernet card to the device to give yourself enough ports to run both private and DMZ networks.

In my case, I wouldn’t even need to find a different computer model. I would just need to buy the “next size up” of the 705 G3, which is the “small form factor (SFF)” model.


It’s a little bigger and it uses a little more power, but it’s still a nice, small computer. And it’s not even that much more expensive.

Here are the costs for my current rig:

  Part                        Price   Already owned?
  Used G3 Mini                $110    No
  USB3->Ethernet Adapter 1    $25     Yes
  USB3->Ethernet Adapter 2    $25     Yes
  Total                       $160

Here are the current prices for the alternative:

  Part                          Price   Already owned?
  Used G3 SFF                   $160    No
  2-port PCIe Ethernet Adapter  $22     No
  Total                         $182

If I could go back I would definitely redo things with the G3 SFF and save myself and my family a ton of hassle. And who knows? If the USB adapter issues continue, this is a relatively cheap and easy solution.

Please learn from my mistakes. You can build a powerful, flexible and low-powered router for less than $200 without rolling the dice on flaky USB->ethernet adapters.


Tags: #networking, #security, #hardware

Enter your email to subscribe to updates.