Tom Purl's Blog

🚗 My '83 Datsun On the Side of the Information Superhighway 🛣️

I was working my way through the Elyse's Destructured Enchantments exercise in the Clojure track of the excellent Exercism training site, and it really expanded my knowledge of sequential destructuring. It's very cool: understanding the basics of it makes it much easier to pass sequences around and to understand other people's code.

One side effect of working on this exercise was learning about combining sequences (i.e., vectors, lists, and ordered sets). There are tons and tons of functions in the standard library that do something like this, and I had quite a bit of difficulty finding one that worked for this scenario.

All I had to do was combine one or more numbers with a vector of strings. Should be easy, right? Well, here's what I tried:

user> (def strings ["one" "two" "three"])
;; => #'user/strings
user> (concat 9 strings)
Error printing return value (IllegalArgumentException) at clojure.lang.RT/seqFrom (
Don't know how to create ISeq from: java.lang.Long
user> (concat [9 strings])
;; => (9 ["one" "two" "three"])
user> [9 strings]
;; => [9 ["one" "two" "three"]]
user> (conj 9 strings)
Execution error (ClassCastException) at user/eval13467 (form-init5656966070595896937.clj:47).
class java.lang.Long cannot be cast to class clojure.lang.IPersistentCollection (java.lang.Long is in module java.base of loader 'bootstrap'; clojure.lang.IPersistentCollection is in unnamed module of loader 'app')
user> (conj [9 strings])
;; => [9 ["one" "two" "three"]]

Now take those lines, multiply them by 20, and add 5 or 6 other functions that seem to be able to do what I want, and you'll begin to appreciate my frustration 😠 Some functions kind of worked (like into), but they didn't give me the ability to control where I placed the number.

And of course, the most frustrating part is that this is supposed to be easy. I’m sure I could write something requiring 3 or 4 functions to do this but manipulating lists is Clojure’s bread and butter – there was certainly a very simple, idiomatic solution that I was missing.

In the end I finally broke down and started Googling for other people’s solutions and found the “magic function”: flatten:

(flatten [9 strings])
;; => (9 "one" "two" "three")

Oh well, I’m glad I finally found this, and I’m assuming that I’ll be using it a lot in the future.

Tags: #clojure, #exercism

Clojure has a few fun macros that help you thread an expression through one or more forms. These macros are also sometimes referred to as the thrush operators. Here's a few:

  • -> (thread-first)
  • ->> (thread-last)

(There are more of these, but this is a good place to start. And damn are they hard to find using a search engine. Save yourself some grief and just bookmark ClojureDocs.)

Here’s basically what I think they do. In functional programming you tend to think “inside-out”. That is, you start manipulating your data in the middle of your code and then send those results to one or more wrapping functions.

So for example, let’s say you wanted to know how much to tip for a check, and you prefer to calculate the amount based on the post-tax total. The conventional way to do that with a functional language is like this:

user> (def amount 20.00)
;; => #'user/amount
user> (def sales-tax 0.075)
;; => #'user/sales-tax
user> (def tip-percentage 0.2)
;; => #'user/tip-percentage
user> (* (+ (* amount sales-tax) amount) tip-percentage)
;; => 4.3

To read that code you would first look at the innermost expression, (* amount sales-tax). You would then take the result of that operation (the tax amount) and pass it to the + function.

This is a bit unfamiliar to people who are used to shell scripting, which depends upon pipelines. Here's an example:

ps auxwww | grep -v grep | grep firefox
#> tom         5572  4.0  9.3 6142372 726572 ?      Sl   Dec28 179:09 /usr/lib/firefox/firefox

Here, the output of the first command, ps auxwww is passed to the first grep command, which filters out any line containing the string “grep” using the -v flag. Then that output is sent to the second grep command, which filters out everything except the lines containing the string firefox.

You could argue that the shell pipeline is a bit easier to read, because it goes left-to-right. There isn’t really any nesting. And that’s what I believe the thrush operators are for – making it easier to specify how you want to manipulate your data in a pipeline.

So here’s the Clojure example again, this time with a thrush operator:

user> (-> amount
    (* sales-tax)
    (+ amount)
    (* tip-percentage))
;; => 4.3

This expands to this:

user> (use 'clojure.walk)
;; => nil
user> (macroexpand-all '(-> amount (* sales-tax) (+ amount) (* tip-percentage)))
;; => (* (+ (* amount sales-tax) amount) tip-percentage)

…which is exactly what we had above. Just easier to read. Also, how cool is the macroexpand-all function? I’m so happy to have learned about this today.

The difference between ->> and -> was a bit confusing to me at first, but it all has to do with where the expr is placed in the following forms. Here's an example:

user> (def c 5)
;; => #'user/c
user> (-> c (+ 3) (/ 2) (- 1))
;; => 3
user> (->> c (+ 3) (/ 2) (- 1))
;; => 3/4
user> (macroexpand-all '(-> c (+ 3) (/ 2) (- 1)))
;; => (- (/ (+ c 3) 2) 1)
user> (macroexpand-all '(->> c (+ 3) (/ 2) (- 1)))
;; => (- 1 (/ 2 (+ 3 c)))
  • -> places c as the second item in all forms (i.e. right after the function name)
  • ->> places c as the last item in all forms

Either way, I'm happy to have discovered these fantastic macros. I imagine they will make my Clojure code much simpler to write and read.

Tags: #clojure

Coding is my favorite puzzle-solving experience, and the best site I've ever found for coding exercises is Exercism. Not only is it free to use, it has an excellent interface, tons of great exercises, and support for dozens of languages. Even Emacs Lisp 🐃. Heck, you can even receive (or give) mentoring from volunteers for free. I highly recommend trying it out, no matter what level you are at as a programmer or hobbyist.

I spent quite a bit of time on this site about a year and a half ago. I was very happy to see that the site has only improved during that time. The UX is even better and there are many more exercises.

I decided to try the Clojure exercises again because it’s the most enjoyable language I’ve ever used. It is concise, modern, functional, and integrates wonderfully with Emacs using Cider. Also, it gives me an excuse to re-read my favorite programming book ever.

Anywho, I’m going to try and blog about what I learn from these exercises here. I hope some of the posts will help others learn as much as they will help me.

Tags: #clojure, #exercism, #emacs


KDE Neon is a fantastic, desktop-oriented Linux distribution. It is snappy, cohesive, and incredibly simple to install and configure for many purposes. Any user of KDE that wants something up-to-date that “just works” should consider it.

The Whys

For the last year or so I’ve been using NixOS on my personal laptop. It is an excellent, well-designed Linux distribution that is powered by the Nix package manager. I would tell you why this technology is so cool but I think this page does a better job than I would.

I, however, had the following issues running it on my personal laptop:

  • Less than stellar multimedia support: For example, I spent hours trying to get JACK audio working, all to no avail.
  • Difficulty with containers: You have to use special workarounds to use technologies like Snaps, Docker, AppImage, and Flatpak. Often, these workarounds required more knowledge of derivations and the entire Nix ecosystem than I possessed, and I had trouble finding enough time to learn what was necessary.
  • Many desktop app packages were broken: Bitwarden is a good example. There's an official package (actually, they're called derivations in Nix), but it doesn't work, even after spending quite a bit of time troubleshooting.

Note: This is not a NixOS diss track 😺 NixOS is excellent for many uses and I still happily use the Nix Package Manager every day on other Linux distributions. It just wasn’t ideal for this use case.

So I needed something new that worked for everything I wanted to do, was stupid-simple to set up, and had good support for KDE. And that's why I tried KDE Neon.

What is it?

(Image by PublicDomainPictures from Pixabay)

Here’s the description from the project home:

The latest and greatest of KDE community software packaged on a rock-solid base.

This translates into the following stack:

  1. Ubuntu 20.04
  2. A “Neon” PPA that provides the latest and greatest KDE binaries

So if you’re familiar with the Ubuntu ecosystem you should be comfortable.

Laptop Info

I use a Lenovo ThinkPad T490. It is light, has a very long battery life, is very durable, and is reasonably priced. It's not very powerful, but that's OK. If I need to run anything that requires more oomph then I run it on a remote server.

Here’s the “about this system” info:

Operating System: KDE neon 5.23
KDE Plasma Version: 5.23.4
KDE Frameworks Version: 5.89.0
Qt Version: 5.15.3
Kernel Version: 5.11.0-43-generic (64-bit)
Graphics Platform: X11
Processors: 8 × Intel® Core™ i5-8265U CPU @ 1.60GHz
Memory: 7.4 GiB of RAM
Graphics Processor: Mesa Intel® UHD Graphics 620

First impressions

Very Simple Setup

I think I clicked on about 4 or 5 buttons to install the OS. It was trivially simple. When it was done everything below was properly configured:

  • Wifi
  • My graphics card

This may sound trivial, but a lot of distributions make setting up these services cumbersome at best. I still don't think that I properly configured my graphics card on my old system.

Package Management is Weird but Manageable

I'm a big fan of Debian's package manager apt, which is one of the package managers that this system uses. After the initial installation I tried upgrading things and got an interesting message:

$ sudo apt-get upgrade
On KDE neon you should use `pkcon update` to install updates.
If you absolutely must use apt you do have to use dist-upgrade
or full-upgrade in place of the upgrade command.
Abort.

Uhhhh, ok. I'd never heard of PackageKit before. It appears to be yet another attempt at unifying all existing, Linux-based package managers while also extending those managers (which is the scary part to me).

There’s also a graphical package manager called Discover which also wraps apt and makes it easy to install Flatpaks. I haven’t used it to install any apps yet but I have used it to update the firmware on my laptop, which is something I’ve never seen a package manager recommend before. So that’s actually quite nice.

I do need to use a few Snaps and so far those have worked perfectly.

The KDE Integration is Flawless 👀 🍬

(My current desktop using the Latte dock and the mcOS-Big-Sur-large layout.)

I really like using KDE but in the past I’ve had the following issues:

  • My distribution shipped a fairly old version
  • I was unable to install add-ons or themes

Both of these issues have been eliminated with KDE Neon. I'm using the latest stable version of KDE and I have been able to effortlessly install lots of fun eye candy.

Performance and Snappiness is Good

Like I said above my laptop is not exactly a speed demon. Thankfully I haven’t had any performance issues or lockups yet. Everything feels very fast and responsive.

Tags: #linux, #kde, #nix

Tags: #cowsayseries

(This blog post was originally published on 2013/11/29 and is part 3 of 3 of my Cowsay Series of articles.)

This is the third post in a series of articles about writing my first application that uses sockets. For more information about why I'm doing this or how, please see my first article.

Now With Rspec And STDERR!

Wow, that is not a sexy heading :–)

When I left off last time, I had a server that worked pretty well as long as it could parse everything that you sent to it. However, once things got a little funny, the client or server would simply fail.

There’s a lot that I want to change about the socket-oriented aspects of the server (i.e. how it handles EOF’s), but it was bugging the heck out of me that this thing was so brittle. So I had to fix that first.

Also, I got tired of running a bunch of functional tests by hand every time I added a new feature or refactored something, so I decided to try this computer automation thing that all of the kids are doing. I’ll talk more about how I used RSpec to do this later in the article.

Oh, and since my “project” has 3 whole files now and, like, dozens of lines of code, I’ve decided to actually host it as a project on Github. You can see it here:

Using Popen3 To Improve Security and Error-Handling

Fixing My Command Injection Bug

In my last iteration, I executed cowsay using the following line of code:

`cowsay -f #{commands[:body]} "#{commands[:message]}"`

One of the problems with this code is that it makes it very easy to “inject” commands that have nothing to do with cowsay.

For example, here’s a simple way to invoke cowsay using a heredoc:

cat <<EOF | nc localhost 4481
MESSAGE Hi
BODY hellokitty
EOF

This would give us the following:


< Hi >
     |      \
     | O . O|

In this example, the line of code above would interpolate to this:

`cowsay -f hellokitty "Hi"`

Everything looks good so far, but what if someone sent the following string to netcat:

cat <<EOF | nc localhost 4481
MESSAGE Hi"; sleep "5
BODY hellokitty
EOF

It’s possible that the line of code could interpolate to this:

`cowsay -f hellokitty "Hi"; sleep "5"`

This actually works. If you run the netcat command above against this version of the server.rb file, then it will sleep for 5 seconds before it returns the output of cowsay.

Of course, sleeping for 5 seconds isn’t really the worst case scenario. An attacker could inject a shell command that does things like delete important files or install malicious code.

The solution to this problem is simple and time-tested – parameterize your input. Here’s my new version of the code that executes the cowsay command:

require 'open3'

def process(commands)
  output = nil
  err_msg = nil
  exit_status = nil

  Open3.popen3('/usr/games/cowsay', '-f', commands[:body], commands[:message]) { |stdin, stdout, stderr, wait_thr|
    # TODO Do I need to wait for the process to complete?
    output =
    err_msg =
    exit_status = wait_thr.value.exitstatus
  }

  if exit_status != 0 then
    output = "ERROR #{err_msg}"
  end

  return exit_status, output
end

This is a bit more complex than the previous one-liner, so here’s a quick summary of what I’m doing:

  • I use the popen3 method to execute the cowsay command.
  • I parameterize my options and arguments by separating them with commas. By doing so, I’m no longer passing my command to the shell, which means significantly fewer options for command injection.
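To see why this matters, here's a minimal sketch of the same parameterized pattern using Open3.capture3 and echo (standing in for cowsay, so it runs anywhere). The shell metacharacters in the input arrive as literal text instead of being executed:

```ruby
require 'open3'

# The "malicious" input from the example above.
malicious = 'Hi"; sleep "5'

# Parameterized call: echo receives the whole string as ONE argument,
# and no shell is involved, so the quotes and semicolon are inert.
stdout, stderr, status = Open3.capture3('echo', malicious)
puts stdout            # the literal text, quotes and all
puts status.exitstatus # 0 -- echo ran normally, no sleep happened
```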

Now let’s try my “sleepy” version of the netcat command above with the new version of server.rb:

cat <<EOF | nc localhost 4481
MESSAGE Hi; sleep 5
BODY hellokitty
EOF

...which would give you this:


 < Hi; sleep 5 >
      |      \
      | O . O|

Hooray! No more shell games.

Handling Non-Fatal Errors

The last version of my server.rb file did a really poor job of handling rudimentary parsing errors. For example, if you didn't format the MESSAGE heading properly, the server would write a message to STDERR and then freeze. Also, if you messed up your BODY heading, the server would simply write a message to its console. This is not terribly helpful for your client.

I needed a way to convey error messages to the client. I therefore decided on the following conventions:

  • I would always return a STATUS heading. If everything was processed properly, this code would always be 0. Otherwise, it would be some number greater than 0.

  • If the STATUS is 0, then an ASCII art picture would be returned. Otherwise, an error message would be returned.

Now when the MESSAGE heading is malformed I can simply send an error message back to the client with the appropriate status from the parse method.

Grabbing the status code and error message from the cowsay command is easily accomplished using the popen3 method in the code example above. This command makes it easy to read the STDOUT and STDERR file handles along with the status code returned by the cowsay process. All I have to do then is test if the status code is > 0, and if it is, return the contents of STDERR.
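As a sketch, that convention could be wrapped in a small helper like this (format_response is a hypothetical name for illustration, not part of the actual server code):

```ruby
# Hypothetical helper showing the response convention described above:
# a STATUS heading, then either the cow art or an ERROR message.
def format_response(exit_status, output)
  body = exit_status == 0 ? output : "ERROR #{output}"
  "STATUS #{exit_status}\n#{body}"
end

puts format_response(0, "< Hi >")
puts format_response(1, "Could not find bogus cowfile!")
```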

Automated Functional Testing Made Simple

Now that my little script is actually starting to flirt with the idea of usefulness, I found that I was running a lot of manual tests against it. Of course, running these tests was error prone and labor intensive, so I finally tried to find some way to test the code in an automated way.

The solution was writing a half-dozen RSpec tests, which was much easier than I thought it would be. As a matter of fact, it only took half an hour to cover all of the tests that I needed, which will probably save me at least an hour this week alone.

Here’s the current version of cowsay-spec.rb. To run the tests, this is all that I have to type:

rspec cowsay-spec.rb

One nice thing about RSpec is that it’s very easy to read. Even if you’re not a programmer, you can probably infer what I’m doing.

Also, please note that I'm not using the cowsay client.rb file to drive these tests. I figured that if any network client written in any language can interact with the cowsay server, then it makes the most sense to test it using "raw" sockets. And the easiest way for me to do that is to shell out a call to netcat.

Seriously, I should have done this at the beginning. It’s already saving me a ton of time, and it’s so easy to use.


I finally feel like I’m getting close to something that is actually useful. I can handle errors in a robust and intuitive way, and I can now test any new or updated features very quickly and easily.

Next, I’m going to focus on improving the way that streams are read and written by the client and server. Once that’s done, I believe that I will have developed this project as much as I can.

Tags: #cowsayseries

This blog post was originally published on 2013/11/27

(This article is part 2 of 3 of my Cowsay Series of articles.)

This is the second post in a series of articles about writing my first application that uses sockets. For more information about why I’m doing this or how, please see my first article.

More Functional Requirements

I have a working server, but there are two things that bug me about it:

  1. I have to test it using netcat, which is good for simple stuff but things would be much easier with an actual client.
  2. Right now, the server just processes a "raw" string of commands. I would rather have the server interpret parameters.

I figure that I’m going to need some type of “message format” to make requirement #2 work, so I first try to define that.

My Message Format

Since I'm familiar with HTTP, I decided to use a message format that is very similar. Right now, I simply want to be able to pass a message and a cow body format to the cowsay server. I therefore decided to send messages that look something like this:

MESSAGE Hello there
BODY beavis.zen

That’s it. Just plain old text (unicode?) over the wire with two properties. In the future, I’ll probably want to use return codes and more header options.
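For illustration, here's a round-trip sketch of that format in Ruby (build_message and parse_message are hypothetical helpers, not part of the project's code):

```ruby
# Build a two-heading document, HTTP-header style: "KEY value" per line.
def build_message(message, body)
  "MESSAGE #{message}\nBODY #{body}\n"
end

# Split each line on the first space to recover the key/value pairs.
def parse_message(document)
  document.lines.each_with_object({}) do |line, acc|
    key, value = line.chomp.split(" ", 2)
    acc[key] = value
  end
end

doc = build_message("Moshi moshi!", "hellokitty")
p parse_message(doc) # => {"MESSAGE"=>"Moshi moshi!", "BODY"=>"hellokitty"}
```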

The Client

Here’s my first stab at a very simple client:

Github Gist

require 'socket'

module CowSay
    class Client
        class << self
            attr_accessor :host, :port
        end

        # Convert our arguments into a document that we can send to the cowsay
        # server.
        # Options:
        #   message: The message that you want the cow to say
        #   body: The cowsay body that you want to use
        def self.say(options)
            if !options[:message]
                raise "ERROR: Missing message argument"
            end

            if !options[:body]
                options[:body] = "default"
            end

            request <<EOF
MESSAGE #{options[:message]}
BODY    #{options[:body]}
EOF
        end

        def self.request(string)
            # Create a new connection for each operation
            @client =, port)
            @client.write(string)

            # Send EOF after writing the request

            # Read until EOF to get the response
        end
    end
end = 'localhost'
CowSay::Client.port = 4481

puts CowSay::Client.say message: 'this is cool!'
puts CowSay::Client.say message: 'This SUCKS!', body: 'beavis.zen'
puts CowSay::Client.say message: 'Moshi moshi!', body: 'hellokitty'

This is really a very simple socket client. I have one real method called say, which understands two keys, message and body. I take those values, drop them into a heredoc, and then send that to the server.

Of course, now that I’m using a new message format, I’m going to need to make some changes on the server too.

The Server, Part Two

Here’s my stab at creating a server that can read the new message format:

Github Gist

require 'socket'

module CowSay
    class Server
        def initialize(port)
            # Create the underlying socket server
            @server =
            puts "Listening on port #{@server.local_address.ip_port}"
        end

        def start
            # TODO Currently this server can only accept one connection at a
            # time. Do I want to change that so I can process multiple requests
            # at once?
            Socket.accept_loop(@server) do |connection|
                handle(connection)
                connection.close
            end
        end

        # Find a value in a line for a given key
        def find_value_for_key(key, document)
            retval = nil

            re = /^#{key} (.*)/
            md = re.match(document)

            if md != nil
                retval = md[1]
            end

            retval
        end

        # Parse the document that is sent by the client and convert it into a
        # hash table.
        def parse(document)
            commands =

            message_value = find_value_for_key("MESSAGE", document)
            if message_value == nil then
                $stderr.puts "ERROR: Empty message"
            end
            commands[:message] = message_value

            body_value = find_value_for_key("BODY", document)
            if body_value == nil then
                commands[:body] = "default"
            else
                commands[:body] = body_value
            end

            commands
        end

        def handle(connection)
            # TODO Read is going to block until EOF. I need to use something
            # different that will work without an EOF.
            request =

            # Write back the result of the parse and process operations
            connection.write process(parse(request))
        end

        def process(commands)
            # TODO Currently I can't capture STDERR output. This is
            # definitely a problem when someone passes a bogus
            # body file name.
            `cowsay -f #{commands[:body]} "#{commands[:message]}"`
        end
    end
end

server =

There’s a few things that I added to this code:

  • Before sending the message to the process method, I now have to parse it.
  • The parse method simply grabs the MESSAGE and BODY values with some help from the find_value_for_key method and then performs some very simple validation.
  • The process method now does some very rudimentary parameterization. Eventually I would like some more safeguards in place to ensure that bad input cannot be passed to the cowsay executable, but for now this will do.
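The extraction logic is easy to exercise on its own. Here's a condensed, standalone sketch of find_value_for_key, runnable without the server:

```ruby
# Find a value in a line for a given key, as described above: match
# "KEY rest-of-line" at the start of a line and capture the rest.
def find_value_for_key(key, document)
  md = /^#{key} (.*)/.match(document)
  md && md[1]
end

doc = "MESSAGE Hi there\nBODY hellokitty\n"
p find_value_for_key("MESSAGE", doc) # => "Hi there"
p find_value_for_key("BODY", doc)    # => "hellokitty"
p find_value_for_key("TITLE", doc)   # => nil (missing key)
```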


First, let’s take a look at some “happy path” testing. In your first window, execute the following command:

ruby server.rb
# Returns 'Listening on port 4481'

Great. Now in another window, execute the following command:

ruby client.rb
< this is cool! >
        \   ^__^
         \  (oo)\_______
            (__)\       )\/\
                ||----w |
                ||     ||
< This SUCKS! >
   \         __------~~-,
    \      ,'            ,
          /               \
         /                :
        |                  '
         _| =-.     .-.   ||
         o|/o/       _.   |
         /  ~          \ |
       (____@)  ___~    |
          |_===~~~.`    |
       _______.--~     |
       \________       |
                \      |
              __/-___-- -__
             /            _ \
< Moshi moshi! >
     |      \
     | O . O|

Nice. Let’s also try a quick test using netcat:

echo "MESSAGE Oh YEAH\nBODY milk" | nc localhost 4481

...which should return:

< Oh YEAH >
 \     ____________
  \    |__________|
      /           /\
     /           /  \
    |          |     |
    |  ==\ /== |     |
    |   O   O  | \ \ |
    |     <    |  \ \|
   /|          |   \ \
  / |  \_____/ |   / /
 / /|          |  / /|
/||\|          | /||\/
       |  |  |  |
      <__/    \__>

And now for the unhappy path. What happens if I pass a “body type” that the cowsay server doesn’t recognize?

echo "MESSAGE Boom goes the dynamite\nBODY bogus" | nc localhost 4481

The client exits normally, but I see the following error message in the console window in which the server is running:

cowsay: Could not find bogus cowfile!

It looks like the STDERR from the cowsay process is only being written to the console. In the future, I'll need to capture that and make the server respond appropriately.

What if I don’t pass a message?

echo "BODY default" | nc localhost 4481

In this case, the client freezes. I then see the following error in the server console window:

ERROR: Empty message

The server then becomes unresponsive. This is definitely the first bug that I will need to fix in my next revision.


I’m happy with the progress of my little socket server and client. In my next revision I am going to focus on the following:

  • Having the server handle bad input gracefully
  • Making sure that the server is able to respond in a predictable, informative way when it experiences issues
  • Finally ditching the backticks and executing the cowsay process in a more robust way.

Tags: #cowsayseries

This blog post was originally published on 2013/11/12

(This article is part 1 of 3 of my Cowsay Series of articles.)

I've read through Working With TCP Sockets a few times to improve my socket programming knowledge. I've administered software systems for a while now, so I know most of the basics, but there are definitely some gaps I should fill in. This book has been a great tool for helping me identify those gaps.

However, there is only so much I can learn by reading about other people’s code – I needed something that I could create and break and fix again to really understand the lessons from the book. I therefore decided to rip off Avdi Grimm and create my own cowsay server.

I always learn more when I write about what I’m learning, so I’m also going to blog about it. This post is the first in a series that will record the evolution of this script from a naive toy to something that someone else would actually consider using some day.

Requirements – Iteration 1

First, I need to point out that I’m not creating a web application. I’m creating a lower-level server that communicates with its client using plain old sockets. This example is designed to teach me about networking in general, not HTTP programming.

So what does that mean? Well, it means that I need to write my own server and client. Writing them both is a pretty tall order, and I've never even written one of these things before. What I need is some sort of naive "scaffold" that works well enough to provide feedback while I turn it into a "real" program.

I therefore think that my first requirement is to only write a server. All client communication will be performed by the netcat program. I can worry about the client in a future iteration.

My second and final requirement is that the server just work. I will put my ego on the bench for a little while and just write working code that I know has plenty of flaws and anti-patterns. I’m not writing the next Nginx here – I’m having fun and learning something new. Besides, there will be plenty of time to turn this into something that I can show off.


Github gist

require 'socket'

module CowSay
    class Server
        def initialize(port)
            # Create the underlying socket server
            @server =
            puts "Listening on port #{@server.local_address.ip_port}"
        end

        def start
            # TODO Currently this server can only accept one connection at a
            # time. Do I want to change that so I can process multiple requests
            # at once?
            Socket.accept_loop(@server) do |connection|
                handle(connection)
                connection.close
            end
        end

        def handle(connection)
            # TODO Read is going to block until EOF. I need to use something
            # different that will work without an EOF.
            request =

            # The current API will accept a message only from netcat. This
            # message is what the cow will say. Soon I will add support for
            # more features, like choosing your cow.
            # TODO - Parse the request

            # Write back the result of the hash operation
            connection.write process(request)
        end

        def process(request)
            # TODO This is just painfully naive. I'll use a different
            # interface eventually.
            `cowsay "#{request}"`
        end
    end
end

server =

The low-level details of this script are out of the scope of this blog post. If you’re curious, then I do recommend the Working With TCP Sockets book. It’s an excellent introduction.

Thankfully, even if you don’t know a bunch about socket programming, it’s pretty simple to read Ruby code. Here’s basically what is happening:

  1. A new server process is created in the initialize method.
  2. When the start method is called, the server waits for a client to try to connect. When that happens, we enter the accept_loop block and do something about it.
  3. In the handle method we read the contents of the request and then forward them on to the process method.
  4. Here, we “shell out” a call to the cowsay program that is on the server, passing it the contents of the request.
  5. Finally, the output of the cowsay program is sent back to the client by the connection.write call in the handle method.
  6. Oh wait, one more step. The program goes back to the accept_loop block in the start method and waits for another request. The server will block until that happens.


Like I said earlier, a proper client is out of the scope of this iteration, so we will test the script using netcat. Here’s how everything works on my system.

First, let’s start the server:

ruby cowsays_server/server.rb

...which outputs:

Listening on port 4481

Next, let’s connect with our client:

echo "I like coffee" | nc localhost 4481

...which should show you this:

< I like coffee  >
        \   ^__^
         \  (oo)\_______
            (__)\       )\/\
                ||----w |
                ||     ||

Hooray! Working code.

So What’s Wrong

Lots, it turns out. Here are some of the biggies.


If the client only sends part of a message and doesn’t end with an EOF character then my server will just block, waiting for that character. If another request comes along while it’s blocking, then that request will also wait until the first one is done, which will be never. Typically you don’t want to make it possible for one malformed request to DOS your server :–)

Here's what I mean. Start your server using the commands above and then try typing this:

(echo -n "Made you break"; cat) | nc localhost 4481

You may notice that nothing will happen. This command sends a string and then keeps the connection open (that's what the cat is for), so the server never receives an EOF. The read call in the handle method will therefore wait forever.

Now type CTRL-z to stop that command and then type the following:

echo "Message 1" | nc localhost 4481

Still nothing happens. Your first command is still being handled by the server, so this second command will just sit patiently in the queue and wait. To prove everything that I've said so far, try killing the first blocking command. Press CTRL-z again and then type the following command:

kill %1

You should see something like the following:

[1]  + 31288 terminated  ( echo -n "Made you break"; cat; ) |
       31289 terminated  nc localhost 4481

$  ____________
< Message 1  >
        \   ^__^
         \  (oo)\_______
            (__)\       )\/\
                ||----w |
                ||     ||

[2]  + 31356 done       echo "Message 1" |
       31357 done       nc localhost 4481

What you just did was kill the first “job”, which was the message that was missing an EOF. Our server is finally free to respond to our second request.
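You can reproduce the underlying problem without netcat at all. In this sketch a Unix socket pair stands in for the client/server connection; it shows that `read` means “read until EOF,” so the bytes can already be sitting there while the call blocks:

```ruby
require 'socket'

# Reproducing the hang without netcat: read means "read until EOF",
# so it won't return while the peer keeps the connection open.
client, server_side = UNIXSocket.pair
client.write('Made you break')       # partial message, connection left open

# server_side.read would block forever here. A non-blocking read shows the
# bytes have already arrived; it's the missing EOF that causes the hang:
puts server_side.read_nonblock(1024) # prints "Made you break"

client.write(' (kidding)')
client.close                         # the "kill %1" moment: the peer closes
puts server_side.read                # read finally returns once EOF arrives
```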

Command Injection Attacks

Here’s another fun way to break your server. Try sending the following command:

echo "--bogus" | nc localhost 4481

Your server should write something like this to your STDOUT:

 nknown option: -
 nknown option: o
 nknown option: u
 nknown option:

Obviously, my code has no idea how to handle command line options that are disguised as a message. Also, now I won’t be able to use the server again until I restart it. Lame.

In a future iteration, I’ll actually need to parse request input and handle error codes and messages sent to STDERR. Backticks just aren’t going to cut it.
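One possible fix (a sketch of where I might go, not what the server does today) is to swap the backticks for Open3.capture3 and feed the request to the child process on stdin, so nothing in the message is ever parsed as a command-line option. The `cmd` parameter is mine, added so the helper can be tried with any filter-style program; it assumes cowsay’s usual read-the-message-from-stdin behavior:

```ruby
require 'open3'

# Sketch of a safer shell-out: pass the message on stdin instead of the
# command line, and capture stderr/status instead of ignoring them.
# `cmd` is parameterized here so you can try it without cowsay installed.
def render(message, cmd: ['cowsay'])
  stdout, stderr, status = Open3.capture3(*cmd, stdin_data: message)
  status.success? ? stdout : "error: #{stderr}"
end

puts render('--bogus', cmd: ['cat'])  # prints "--bogus", no option parsing
```

With this shape, a message like "--bogus" is just bytes on stdin, and a failing child process produces an error string instead of wedging the server.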


Performance isn’t super important for a server like this, but it’s still useful to see how a server like this performs when more than one person is actually trying to use it at the same time. But how do you performance test a server like this?

for num in $(seq 5); do echo "Test #$num" | nc localhost 4481 &; done

This command may be a little scary looking since it’s an inline loop. (Note that the “&;” before “done” works in zsh; in bash, drop the semicolon after the “&”.) Here’s how that command is actually expanded by the shell:

echo "Test #1" | nc localhost 4481 &
echo "Test #2" | nc localhost 4481 &
echo "Test #3" | nc localhost 4481 &
echo "Test #4" | nc localhost 4481 &
echo "Test #5" | nc localhost 4481 &

There are two key things to notice about these commands:

  • Each command has its own unique identifier. That will be important eventually.
  • Each command is “backgrounded” by the ampersand (&) sign. This means that the shell will not wait for the command to finish executing before it moves on to the next command. This simple trick allows us to send the five requests to the server in very quick succession, which makes them nearly simultaneous.

So anywho, if you run the inline loop above, you should see 5 cows printed in quick succession. Great! Our server can handle 5 nearly-simultaneous requests.

At this point though, you may be wondering if the requests were handled in order. Let’s filter out everything but the “Test” message with this command:

for num in $(seq 5); do echo "Test #$num" | nc localhost 4481 &; done | grep Test

You should see output that looks something like this:

< Test #1  >
< Test #2  >
< Test #3  >
< Test #4  >
< Test #5  >

Cool. Every command was executed in order. What if I were to double the number of near-simultaneous requests? Since we are running our test with an inline loop, all you have to do is change the “5” to a “10” like this:

for num in $(seq 10); do echo "Test #$num" | nc localhost 4481 &; done | grep Test

...which will output something similar to (but probably different than) this:

< Test #1  >
< Test #2  >
< Test #4  >
< Test #3  >
< Test #5  >
< Test #6  >
< Test #7  >
< Test #10  >
< Test #8  >
< Test #9  >

Interesting. I have to assume that “Test #10” was actually executed after “Test #9”, but apparently it was popped off of the accept queue first.

Of course it’s no fun to stress test something if you can’t find a way to break it. So how many requests does it take? Well, by default Ruby’s listen queue size is 5. This is the queue from which the accept_loop block grabs requests. I would imagine that 6 requests would cause at least one of my requests to fail. However, as we just saw above, my server was easily able to handle 10 near-simultaneous requests.

The other possibility is that the accept_loop method actually sets the listen queue size to the SOMAXCONN value, which is 128 on my system. So how would my server handle 129 requests? To find out, simply change the “10” to “129” in the previous command.

On my system, the command executed without any errors. Granted, it took a few minutes to run, and you could definitely see some long pauses. But I guess the lesson learned is that even when we exceed the size of the listen queue, there seems to be enough idiot-proofing built into the Ruby runtime and Linux kernel to still make everything work eventually. Also, the long default TCP timeouts probably help.
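If you’d rather inspect or control the backlog than guess at it, Ruby exposes both knobs. A minimal sketch, on a throwaway port so it doesn’t collide with the real server:

```ruby
require 'socket'

# Sketch: setting the listen queue (backlog) size explicitly.
# Socket::SOMAXCONN is the kernel's advertised cap (128 on the system above).
server = TCPServer.new(0)          # port 0 = any free port, for the demo
server.listen(Socket::SOMAXCONN)   # resize the backlog up to the cap
puts Socket::SOMAXCONN
```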

I even tried running the loop above with 10,000 requests, but the only error I got was that I filled my shell’s job table. I really did not expect that. It looks like I need to find a better way to stress test this server.
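One better way might be to drop the shell loop entirely and drive the server from Ruby threads. This is just a sketch (the host and port match the server above, and nothing here throttles concurrency, so huge values of n will hit OS limits of their own):

```ruby
require 'socket'

# Sketch of a thread-based stress tester: n clients, each sends one
# message, half-closes to signal EOF, and reads back the full reply.
def stress(n, host: 'localhost', port: 4481)
  (1..n).map do |i|
    Thread.new do
      sock = TCPSocket.new(host, port)
      begin
        sock.write("Test ##{i}")
        sock.close_write            # EOF, so the server's read returns
        sock.read                   # the reply becomes the thread's value
      ensure
        sock.close
      end
    end
  end.map(&:value)
end
```

For example, `stress(129)` would replay the SOMAXCONN experiment without filling up the shell’s job table.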


There’s a lot more that I want to do with this server. Here’s some stuff that I haven’t mentioned yet:

  • Protocol Definition – Eventually, I need to create a client and I should define some type of protocol that it can use to talk to the server.
  • Concurrency – I would like to eventually make this a preforking server.
  • Support For Most Cowsay Features – You should be able to use a different cow.

I hope I was able to help someone else learn a little bit about socket programming. Thanks for reading!


This may be a bit wonky but it’s a surprisingly important issue at a lot of companies that don’t have a dedicated QA department:

How do we ensure that our system is “green” after deploying a new feature? Who is responsible for running the tests and what should those tests include?

Many developers ask the same question this way:

Why do I have to be online at 2 AM to manually test my feature after it is deployed?

I’ve heard this question so many times that I thought I would write down my high-level answer. The answer isn’t complicated, and its implementation isn’t difficult. However, it’s difficult for many companies to implement because it requires a coordinated effort across 3 groups of your product development team.

Step 1 – Creating user acceptance criteria

Acceptance tests enforce how the user will interact with our system. But before you create those you need the acceptance criteria. It is up to the “holy trinity” (PO, Developer, Tester) to define these at story creation time, and it’s up to the same people to update them if the spec for the story changes. The Specification by Example process is a good, light-weight and cheap way of doing this.

For most software, 99% of the time the acceptance criteria should be *testable*. You can’t have testable acceptance criteria without writing more atomic, succinct and well-defined stories. Without good stories you can’t have good tests, and the ripple effects are very, very expensive.

One heuristic for creating testable acceptance criteria is Gherkin. It gives you a shared syntax for specifying requirements and makes it possible to generate automated tests. But there are other options for this too.
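For example, a Gherkin-flavored acceptance criterion for a hypothetical password-reset story might look like this (the feature and steps are mine, purely illustrative):

```gherkin
Feature: Password reset
  Scenario: User requests a reset link
    Given a registered user with the email "user@example.com"
    When they request a password reset for that email
    Then a reset link is emailed to "user@example.com"
    And the link expires after 24 hours
```

Each Given/When/Then line can later be bound to an automated step, which is what makes criteria written this way testable by construction.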

Step 2 – Creating the user acceptance tests

This is also the responsibility of the holy trinity. These tests can either be manual or automated. That’s a tremendous understatement: of course they should be automated. That will save you tons of money and time. No seriously.

These tests impersonate the customer and ensure that the happy path scenarios are properly implemented. They also ensure that the app reacts properly in unhappy path scenarios.

These tests should be:

  • Cheap
  • Lightweight
  • Cheap
  • Fast
  • Test only what is necessary
    • i.e., if the story doesn’t involve a UI change don’t use the UI to test the story
  • Cheap
  • Automated
  • Cheap

These tests need to run at deployment time.

Step 3 – Test Execution and Profit

If the acceptance tests aren’t automated then they need to be executed by a member of the holy trinity at the time of deployment. This option is:

  • Expensive
  • Error-prone
  • Slow
  • Expensive
  • Hated by everyone

Otherwise we can add them to a deployment pipeline and execute them immediately after the deployment step.

Note: After discussing this post with a few people in Reddit I wanted to emphasize that the process below is only for personal to-do lists and their related notes. I personally would not do something as foolish as store company information (even to-do's) outside of my employer's cloud and I really recommend that you do the same 😸

I’m in the process of moving all of my to-do lists and projects into Nextcloud from org-mode after almost 10 years. I’m surprised by how well this is working for me and I thought it might be useful to write down some of my thoughts.

But first, a little background. Currently I have two types of to-do lists:

  1. Project-based
    1. Tasks are grouped in a somewhat formal way, have a life cycle and all lead to a common goal
  2. Ephemeral and recurring
    1. All of the little tasks that we need to write down so we remember them

In org-mode I used to have a separate file for each project and a few huge files for all of my ephemeral and recurring tasks. I then interacted with these todo lists using Emacs with lots of great add-ons on my personal laptop and organice everywhere else. This was always “good enough” to keep me from jumping ship but bad enough to cause me to struggle when I wasn’t using my personal laptop (which is 95% of my waking hours).

Nextcloud is the best option I’ve found to replace org-mode for my to-do lists. Scratch that – it’s an excellent option. Here’s how I’m using it and why I am enjoying it so much more than org-mode for this particular use case.


I’m storing projects as Kanban boards in Nextcloud’s Deck app. Each board has the following lists which dictate each task’s life-cycle:

  • To-Do
  • In-Process
  • Done

Within each list we store cards. These cards can have due dates and a description section that uses Markdown formatting. This section can also include clickable checklists, and the Deck app tracks these checklist items as if they were sub-tasks (which was a nice UI surprise).

I prepend each board’s title with the prefix Story. For example, my board that covers migrating my self-hosted Nextcloud instance to my new K8S cluster is titled Story – Migrate nextcloud to new k8s cluster. I then map these stories to parent features by doing the following.

  1. Creating a feature card (if it doesn’t exist already) in one of the following project boards:
    1. !! Personal Projects
    2. !! Professional Projects
  2. Linking my story board to the feature card by creating a Nextcloud project.
    1. In Nextcloud, projects are a fancy way of saying that two “things” are linked together somehow.

Now I can view all of the stories associated with a feature by looking at the Details section of the feature card.

I use a very simple, Kanban-like workflow for moving my tasks to completion. Finally, once a board is completed I archive it.

Managing my projects in the Deck app is very intuitive, easy and robust. However, sometimes it’s difficult to use the Deck app on your phone, even though the Deck mobile app is very good. Also, Kanban boards aren’t very good at storing one-off, ephemeral tasks or recurring tasks. They are better suited for formal projects.

Integrating Project Tasks with the Calendar App

A killer feature of the Deck app in Nextcloud (and honestly I’ve never seen this anywhere else) is its tight and intuitive integration with multiple other Nextcloud apps, including the Nextcloud Calendar app. Here’s how the two apps are linked:

  1. Each board is a calendar
  2. Each card on that board is a task (tasks are part of the CalDAV standard)

Note: Deck boards are CalDAV calendars but don’t support the entire standard. For that reason you can’t really edit them using a CalDAV-compliant client. However you can view them using such a client and then edit them using the Nextcloud website or Nextcloud Deck for Android. Please see my Special Note section below for more details.

If you give your Deck card a due date it will show up on your calendar alongside your events, along with any tasks that you created outside of the Deck app. Which is pretty sweet 🙂

Advanced Task Management using the Tasks App

The Nextcloud Tasks app makes it easy to manage ephemeral or repeating tasks. Like I said earlier, Kanban boards aren’t very well suited for one-off tasks (pick up the dry cleaning) or recurring tasks. I don’t think there’s even a way to create recurring lists or cards (i.e. tasks) in the Deck app. I therefore use the Nextcloud Tasks app to manage a few ephemeral task lists for me.

Since tasks are part of the CalDAV standard it makes sense that they are stored with the events in each of your calendars (i.e. Deck boards). By that I mean that, behind the scenes, tasks and events are stored in Nextcloud like they are in any other CalDAV-compliant server. However, your interfaces to those tasks include the Deck, Calendar, and Tasks apps (to varying degrees). This gives you a lot of flexibility with how you manage your project and ephemeral todo list workflows when using the Nextcloud web interface.

Note that the CalDAV standard does support recurring tasks but the Nextcloud Tasks app does not. However, using a variety of third-party applications (like OpenTasks for Android) you can create recurring tasks that can be synced with your Nextcloud server using a CalDAV syncing tool (like the excellent DAVx5 app on Android).
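For the curious, a recurring task in raw iCalendar/CalDAV terms is just a VTODO with an RRULE. A hypothetical weekly chore (the UID and dates are made up) looks something like this:

```
BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//example//tasks//EN
BEGIN:VTODO
UID:water-plants@example.com
DTSTART:20240101T090000Z
DUE:20240108T090000Z
SUMMARY:Water the plants
RRULE:FREQ=WEEKLY;BYDAY=MO
END:VTODO
END:VCALENDAR
```

Apps like OpenTasks generate this RRULE property for you; the Nextcloud Tasks web UI simply doesn’t expose it yet.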

The Importance of Client Ubiquity

I 💙 org-mode and Emacs for so many reasons. They have fundamentally changed the way that I interact with information and manage knowledge. However, org-mode has always been a poor choice as a project and todo list manager for me for one big reason: lack of interfaces. I love using org-mode in Emacs on my personal laptop, and I love all of the tools that make it easy for me to manage my tasks and agenda. It’s like driving a race in a sleek sports car that is also a hovercraft and a submarine and runs on sunshine. But as soon as I walk away from that laptop that amazing interface is replaced by a bicycle at best and a scooter at worst.

Let’s start with accessing my todo list from my work laptop. Since all of org-mode’s content is stored in text files (which is one of its best features) I would need to sync my org-mode files between my work and personal laptops. This is rarely an option in most organizations for security reasons.

I’m therefore forced to use a tool with a web interface like organice, which is a modern, excellent web interface for org-mode files that accesses them using WebDAV or Dropbox integration. And I must admit that organice really is a robust, beautiful and useful application. But Emacs users are used to an incredibly powerful and programmable interface with a ludicrously rich ecosystem of add-ons. You really can’t expect a web application, even one as good as organice, to come close to what Emacs can do today for at least another 20 years.

Storing your tasks in a CalDAV-compliant server gives you the ability to easily manage your todo lists on any system in a simple, transparent way. And if that CalDAV server and client happens to be Nextcloud, you have a lot of very good options for managing those todo’s using a variety of workflows without any additional configuration required.

Special Note About Syncing Deck Boards

Deck boards aren’t actually stored as conventional CalDAV calendars – they are stored as task lists, and unfortunately they’re the type of task lists to which DAVx5 can’t write. So when you’re using 3rd-party, non-Nextcloud apps like aCalendar+ and OpenTasks you can’t update tasks that exist as boards or lists in Deck, which means you can’t use those apps to update project to-do’s (if you’re using my project management workflow, that is).

The good news is that you can still see those tasks in those apps, meaning that they’re still part of your daily agenda and you will receive notifications about them on your phone. Also, the Deck app for Android is very good, and it doesn’t use DAVx5 for syncing – it updates the boards directly. So you do still have an interface on your phone to update those tasks/cards, and a very good one at that. You just need to jump from one app to another to make it happen.

Tags: #cicd #brainbender

I am in the process of reading Continuous Delivery: Reliable Software Releases through Build, Test, and Deployment Automation by David Farley and Jez Humble. Most of the principles in the book align with what I’ve learned as a technical professional for the last 20 years, but occasionally I’ll see something that bends my brain a bit. Here’s an example of that from the first chapter:

The aim of the deployment pipeline is threefold. First, it makes every part of the process of building, deploying, testing, and releasing software visible to everybody involved, aiding collaboration. Second, it improves feedback so that problems are identified, and so resolved, as early in the process as possible. Finally, it enables teams to deploy and release any version of their software to any environment at will through a fully automated process.

The reason why this seems so strange to me is that I’m used to the following workflow:

  1. Build out the mostly-static prod and non-prod environments ahead of time using IAC
    1. Example: A set of Ansible playbooks that build out a Stage and Prod environment
  2. Develop an application and automated build process that does things like run tests
    1. Example: A Django application that is built and tested using a Makefile
  3. Write a Pipeline script that is able to run your Makefile and deploy the resulting build to one of the static environments from step 1.
    1. Example: A Jenkins Pipeline that is running within a Jenkins server that was created before step 1

However, my interpretation of “releasing any version to any environment” is that I can deploy any arbitrary version of my app to a completely new environment, run my tests, and then throw that environment away. Oh, and all of the code that does that should live within my app’s repo.

So I guess my questions at this point are...

  1. What’s a good “starter” framework for creating a new environment on-demand that can run my app?
  2. Am I making this too complex? Should I just use what’s built into Gitlab or Github and replace a few days of work with 5 good lines of config?

Unfortunately I don’t think this topic is covered until Chapter 13, and I’m on Chapter 2. Oh well, it’s good motivation to get back to work 😼
