A Nomad Primer

Over the past few months, I’ve had the opportunity to create a Nomad cluster and develop tooling around it.1 What follows is a short list of some Nomad CLI commands I’ve found useful in administering a cluster. (The job name “nginx” is used as an example.)

nomad status nginx

Display basic information about a job. Helpfully displays allocation IDs and deployment status. When in doubt, run this command.

nomad job history -p nginx

Display the full version history of a job, including diffs between versions.

nomad logs [-stderr] -job nginx

Get logs from a random container running your job. Can optionally get logs from STDERR of the container instead. You can also run this command without the -job flag on an allocation ID to get the logs from one specific container.

nomad alloc-status -stats <alloc-id>

Get information about a specific allocation of a job. This is useful for debugging startup issues, as it presents a log of every event related to the allocation. The -stats flag shows CPU and memory usage, as well as any reserved ports.

nomad node-status

Get a list of currently running Nomad client nodes. This can help you link a Node ID hash to an actual IP address if you need to figure out which node an allocation is running on.

Feel free to reach out to me with any more Nomad tips you find useful! These were just the ones I’ve gotten the most out of.

  1. One of those tools, a container auto-scaler, will hopefully be open-sourced soon. [return]

Optimizing Dockerized Rails

Rails isn’t exactly known for its speed or small size. This translates to Docker as well. The default Go Docker image is 200 MB smaller than the default Ruby image, and 300 MB smaller than the default Rails image. Since Uncommon uses Rails and has lots of assets, the total image size for the app container alone came in at 1.39 GB, and when you add in Postgres (233 MB) and Redis (111 MB), you end up with a nearly 1.75 GB set of Docker images that you need to download over the internet.

First, I visited the ruby repository on Docker Hub. Examining the supported tags, I found that Docker provides slim versions of each ruby build which “only contains the minimal packages needed to run ruby.” Of course, Uncommon relies on Rails, so I ended up needing to install quite a few of the extra packages that come with the standard ruby image. After all was said and done, the image based on ruby-slim came in at 1.18 GB. I then added the .git directory to the repository’s .dockerignore file, which brought the size down to 1.16 GB, for a total of about 230 MB off the original image.
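The slim switch itself was mostly a matter of changing the FROM line and reinstalling the build tools Rails needs to compile its gems. A minimal sketch of the idea (the package list is illustrative, not Uncommon’s actual Dockerfile):

```dockerfile
# Slim variant: same ruby, minus ~200 MB of preinstalled packages
FROM ruby:slim

# Reinstall what Rails actually needs: a compiler toolchain for native
# gems, Postgres client headers, and a JavaScript runtime for the asset
# pipeline. Cleaning the apt lists keeps the layer small.
RUN apt-get update && apt-get install -y --no-install-recommends \
      build-essential libpq-dev nodejs \
    && rm -rf /var/lib/apt/lists/*

WORKDIR /app
COPY Gemfile Gemfile.lock ./
RUN bundle install --without development test
COPY . .
```

The .dockerignore change is even simpler: a single .git line keeps the repository history out of the build context entirely.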

Next was Postgres. I came across Alpine Linux while browsing Docker repositories on GitHub. Alpine Linux is based on BusyBox, but adds a package manager and other improvements that make the operating system more usable. GliderLabs has built a Docker base image around Alpine. Intrigued by the purported 10x decrease in size, I decided to try converting the default Debian-based postgres repository to an Alpine-based one. After some small script changes, the end result came out to 29 MB, as opposed to the original 233 MB. I did the same with Redis, which came out to a mere 13 MB compared to the default 111 MB.
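Converting an image like postgres mostly comes down to swapping the FROM line and translating apt-get calls to Alpine’s apk package manager. A rough sketch of the pattern (package names are illustrative):

```dockerfile
# GliderLabs' Alpine base is a few MB, versus ~125 MB for debian
FROM gliderlabs/alpine:3.2

# apk is Alpine's package manager; clearing its cache keeps the layer small
RUN apk add --update postgresql && rm -rf /var/cache/apk/*
```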

It’s fairly easy to switch your Docker images to Alpine when there’s a package for what you want to install in the Alpine repositories. I ran into trouble trying to convert my Telegraf image, however, as no Alpine package currently exists. While I probably could have made my own package for it, I decided to take the easy way out and switch from an Ubuntu base to a Debian base. This brought the image down from 237 MB to 167 MB, which isn’t bad for one line of Dockerfile changes.

Finally, I moved the Uncommon app itself over to Alpine. After a few hours of trial and error, I found the right combinations of apk packages for the app to run successfully. The final size of that image is around 880 MB.

Altogether, these optimizations bring the Uncommon Docker stack down from 1.75 GB to 920 MB, only 53% of its original size.

Docker Metrics with Telegraf

With the release of InfluxDB v0.9, I was eager to start using Google’s cAdvisor to collect metrics from Docker containers. Unfortunately, the new InfluxDB version comes with breaking API changes that cAdvisor isn’t compatible with yet. Not only does cAdvisor not support the new API, it’s currently impossible to successfully run go get github.com/google/cadvisor because of a known issue. After struggling with cAdvisor for a month, I learned that InfluxDB recently rolled out its own metrics collector, Telegraf, which is pretty much guaranteed to have the best InfluxDB integration possible.

The new version of InfluxDB also includes alpha support for clustering, which is key when working with large infrastructures. In InfluxDB, each node is a broker node, a data node, or both. Data nodes host the data, while brokers are members of a raft consensus group.1 In this Docker cluster, I chose to run a data node on every machine in order to reduce network throughput at the cost of slightly increased disk usage. This decision also makes Telegraf easier to set up, as with the right network configuration it can just report to localhost.

Thus, the docker commands to start up an InfluxDB cluster look something like this:

docker run -e FORCE_HOSTNAME=auto -e PRE_CREATE_DB="telegraf" -e REPLI_FACTOR="3" --volume=/influxdb:/data --publish=8083 --publish=8086 --expose=8090 --expose=8099 -d tutum/influxdb:latest

docker run -e FORCE_HOSTNAME=auto -e SEEDS="master:8090" --volume=/influxdb:/data --expose=8090 --expose=8099 -d tutum/influxdb:latest

docker run -e FORCE_HOSTNAME=auto -e SEEDS="master:8090" --volume=/influxdb:/data --expose=8090 --expose=8099 -d tutum/influxdb:latest

Currently, Telegraf only officially supports Vagrant. I made a Docker repository at bbailey/telegraf that will suffice for now. You can start it up with

docker run -d bbailey/telegraf

and it will automatically use localhost:8086 as the InfluxDB URL.

After running these containers, you should start to see data appearing in InfluxDB. All that’s left is to access it. Luckily, it’s very simple to get important data from the InfluxDB API using the native Go client:

q := "SELECT percentile(value, 95) FROM docker_system WHERE name='telegraf' ORDER BY asc"
res, _ := queryDB(con, q)

This gets you the 95th percentile of the CPU usage of the Docker container named “telegraf.”

In a very basic benchmark using top, Telegraf used less than half the CPU that cAdvisor did. While Telegraf doesn’t share cAdvisor’s strong focus on Docker and its documentation is sparse, the metrics it provides are useful and serve much the same purpose, and its native InfluxDB integration makes it a welcome change from other metrics reporters.

  1. InfluxDB allows for a maximum of three brokers in the current version, but that still tolerates one failure, which should be plenty. [return]

Filtering Fun

Last year my high school implemented a draconian network filter. The school district had always erred on the side of caution when it came to network filtering, putting faith in blacklists over students’ willpower.

It was interesting to watch the filter develop over the years; in eighth grade you could bypass the filter just by using https, and in tenth grade a VPN was more than sufficient. Last year, however, the district implemented UltraSurf. I’m not sure if they used a built-in blacklist or (more likely) paid a security company a large sum to develop one for them, but it was highly effective. Much too effective. Sites like tumblr were blocked for “prohibited friendship content.” If you made enough “suspicious” Google searches, your MAC address was blacklisted for an hour. Most ports were blocked, including port 22 (ssh).

I took an independent study class last year. Most of the class hosted projects on GitHub. To clone a git repository from GitHub, you need to use ssh. (Or https, but the district blocked that as well.) We had a few developers from the community come in to help us with our projects, and one suggested using netcat to examine how the filter was shutting down ssh traffic. (It’s important to note that ssh via other ports didn’t work either.) Using netcat, we figured out that the filter looked for a specific pattern in the ssh version 2 header that the version 1 header didn’t match. I went home that night and worked on getting my home Raspberry Pi running ssh version 1 so I could tunnel from school to my house, bypassing the filter. The steps I took are listed below.

  1. Add Port 443 to sshd_config
  2. Switch the Protocol line to 1,2 instead of 2
  3. Run ssh-keygen -t rsa1 to generate a host key with no passphrase and save it as /etc/ssh/ssh_host_key
  4. Add HostKey /etc/ssh/ssh_host_key to sshd_config
  5. For tunneling purposes, add PermitTunnel yes to the config file

That should be all the necessary changes, but I included the full file below. As a bonus, I transferred the file using netcat:

  1. (on the server) cat /etc/ssh/sshd_config | nc $DESTINATION_IP 9999
  2. (on the client) nc -l 9999 > ~/Desktop/sshd_config

Below is the finalized sshd_config file:

# Package generated configuration file
# See the sshd_config(5) manpage for details

# What ports, IPs and protocols we listen for
Port 22
Port 443

Protocol 1,2

HostKey /etc/ssh/ssh_host_key

# HostKeys for protocol version 2
HostKey /etc/ssh/ssh_host_rsa_key
HostKey /etc/ssh/ssh_host_dsa_key
HostKey /etc/ssh/ssh_host_ecdsa_key
#Privilege Separation is turned on for security
UsePrivilegeSeparation yes

# Lifetime and size of ephemeral version 1 server key
KeyRegenerationInterval 3600
ServerKeyBits 768

# Logging
SyslogFacility AUTH
LogLevel INFO

# Authentication:
LoginGraceTime 120
PermitRootLogin yes
StrictModes yes

RSAAuthentication yes
PubkeyAuthentication yes
#AuthorizedKeysFile  %h/.ssh/authorized_keys

PermitTunnel yes

# Don't read the user's ~/.rhosts and ~/.shosts files
IgnoreRhosts yes
# For this to work you will also need host keys in /etc/ssh_known_hosts
RhostsRSAAuthentication no
# similar for protocol version 2
HostbasedAuthentication no
# Uncomment if you don't trust ~/.ssh/known_hosts for RhostsRSAAuthentication
#IgnoreUserKnownHosts yes

# To enable empty passwords, change to yes (NOT RECOMMENDED)
PermitEmptyPasswords yes

# Change to yes to enable challenge-response passwords (beware issues with
# some PAM modules and threads)
ChallengeResponseAuthentication no

# Change to no to disable tunnelled clear text passwords
PasswordAuthentication yes

X11Forwarding yes
X11DisplayOffset 10
PrintMotd no
PrintLastLog yes
TCPKeepAlive yes
#UseLogin no

# Allow client to pass locale environment variables
AcceptEnv LANG LC_*

Subsystem sftp /usr/lib/openssh/sftp-server

UsePAM yes

Here’s how to use that server as a tunnel for web traffic.


On OS X

  1. Run the command ssh -1 -D 8080 -C -N -p 443 USERNAME@SERVER_IP (the -1 flag forces protocol version 1).
  2. Open System Preferences and go to Network.
  3. Click on Advanced, then Proxies.
  4. Check the box next to SOCKS Proxy, type in your server’s address as the server, and 8080 as the port.
  5. Save your settings (make sure you hit Apply) and enjoy!
  6. (You may need to tell your browser to use system proxy settings).

On Linux

  1. Run the command ssh -1 -D 8080 -C -N -p 443 USERNAME@SERVER_IP.
  2. Configure your browser to use a SOCKS5 proxy on localhost:8080.

On Windows

  1. Download PuTTY.
  2. Open PuTTY. Under the Session tab, put in the host name of the server you set up (or its IP address).
  3. Expand the SSH tab and select Tunnels. For the source port, put in 8080, and select Dynamic.
  4. Leave the destination blank and click Add.
  5. Click Open, or go back to the Session tab and save the configuration so you can load it later.
  6. Browse to a normally-blocked site. You should be able to access it. If not, try setting your browser to use a SOCKS5 proxy with localhost as the host and 8080 as the port.

This setup continued to work for the rest of the year, and should still work now.

I wish I'd written you a letter

I spent the afternoon lost in Ned Vizzini’s Teen Angst? Naah…, a collection of short essays about his experiences as a teenager. It’s candid, thoughtful, and entertaining.

Ned Vizzini also wrote It’s Kind Of A Funny Story, a book based on his week-long stay in a mental hospital that changed his life. The book itself went on to change so many other lives.

He killed himself last winter.

I don’t know why.

This book affected me more than most books I’ve read. I’m not sure why. The last chapter is a note from Ned about what happened to him after high school. He sounded like he was doing so well. He was doing so well. And now he’s gone.

Maybe I feel this way because, in a strange way, he reminded me of me. He went to a magnet high school, was rather socially awkward, etc. But I feel like anyone could relate to his experiences. Teen Angst? makes Ned Vizzini feel like a real person. It reads as if he’s hanging out in your dorm room and sharing stories about high school.

I wish I could have gotten to know him. I feel like I missed out on an opportunity to make a new friend, even though I doubt I ever would have met him. He seems like a great person.

I wish I knew why, but it doesn’t really matter. It happened, and now it’s over.

I miss him. And I don’t know why, but I wish he were here.

The book ends kind of like this:

I’m a writer from now on, for better or worse, and so far it’s mostly all better…Do I have days where I wake up and no Muses are there and I don’t even want to deal with my life anymore? Sure…But above and beyond that are the days when the words come together and I sit back in my chair and go, “Man, this is fun.” And there are the days where I get an e-mail or a letter from someone who read my writing and liked it and I just slap myself in the head for an entirely different reason, because I’m blessed.

I wish I had written you a letter, Ned.

Stop Making Sense

I’ll always remember the first time I heard David Byrne say “Hi. I’ve got a tape I want to play.” As soon as he walks out onto the stage and pops a tape in his boom box, you know you’re in for a treat.

Byrne launches into an energetic and captivating acoustic performance of “Psycho Killer”, one of Talking Heads’ most well-known songs, reeling around the stage with a trademark paranoid look in his eyes.

Byrne’s performance is the opening to what many consider the greatest concert film of all time, Stop Making Sense. The movie was filmed over the course of three nights at Hollywood’s Pantages Theater in December 1983 using mostly white light and lengthy shots. Byrne is joined by one member of the band for each successive song, until the stage is packed for “Burning Down The House”. The film also features Byrne’s iconic oversized suit and eccentric dance moves.

I’m not exactly sure why I love Stop Making Sense so much. The music certainly plays a large role–every song featured is catchy and meaningful. The pure energy the film manages to convey is also impressive and moving. The band’s technical skill is easily observable; bassist Tina Weymouth and drummer Chris Frantz artfully create the perfect backdrop groove for Byrne to dance across and manipulate. And yet the band manages to appear nonchalant and intense at the same time, despite Byrne’s tense, paranoid character.

The moment when the band is finally all on stage for “Burning Down The House” is a particularly stunning one. The amazingly energetic performance takes what is perhaps Talking Heads’ most famous song to even greater heights.

Perhaps my favorite part of Stop Making Sense is the performance of “Once In A Lifetime”. Byrne has said that the song’s lyrics are modeled after the unique syntax of televangelists, and it shows. The song builds steadily until it reaches a crescendo as Byrne shouts “time isn’t holding us, time isn’t after us” over cascading, forceful guitar strums. One wide, contrasting “chiaroscuro” shot of Byrne makes up over seventy-five percent of the song’s five-minute duration, allowing the viewer to focus completely on the song.

The film ends with “Crosseyed and Painless”, the first song that features shots of the audience. The audience inclusion surprises the viewer and helps them feel involved in the final moments of the concert, knowing that they’ve witnessed something unique.

There’s nothing like Stop Making Sense.

My First Foray Into Game Making

This past weekend I went to Dallas to visit a friend. We’d always shared an interest in games, but over the last six months, my friend had actually started to code his own games from scratch. Quite frankly, I was impressed. While I’ve delved into many different coding projects over the last few months, I’d actually forgotten about one of my original goals: to develop a game. While my friend’s games were not masterpieces, he had made several fun clones of games like Mario Kart, and a maze game with Pokemon sprites. Most of these he coded in DarkBasic.

Naturally, after playing around with the games a bit, we decided to continue improving them. Coding isn’t much of a two-person job, though, so eventually I resumed work on the text-based RPG I wrote in Ruby two years ago. It seems we both have a touch of programming ADD, though: my friend started a new 2D fighting game (which, I’ll add, is coming along nicely) and I wrote restaurbot, a simple and comical restaurant robot.

It was at this point that I emailed my dad, asking for advice on creating text-based games in Ruby (what I had wasn’t working too well). He provided me with many links, but most were graphical. This, of course, intrigued me. Could I achieve what my friend was doing in a simpler language I already understood?

One of those links was to a Ruby library called Gosu. This library allows you to easily create games in Ruby or C++. Getting started was simple: sudo gem install gosu did the trick (I installed the other gems suggested on the homepage later). With the help of the simple tutorial on Gosu’s GitHub page, I had a game with working controls and graphics up within twenty minutes.
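That twenty-minute ramp-up is believable because a working Gosu game loop is only a screenful of code. A minimal sketch (written against the 0.x-era Gosu API, so constant and method names may differ in newer versions; it needs the gosu gem and a display to run):

```ruby
require 'gosu'

class HelloWindow < Gosu::Window
  def initialize
    super(640, 480, false)          # width, height, fullscreen
    self.caption = 'Hello, Gosu'
    @font = Gosu::Font.new(self, Gosu.default_font_name, 20)
  end

  # Called ~60 times a second: game logic and input handling go here
  def update
    close if button_down?(Gosu::KbEscape)
  end

  # Called after update: all drawing goes here
  def draw
    @font.draw('Press Esc to quit', 10, 10, 0)
  end
end

HelloWindow.new.show   # blocks until the window closes
```

Subclass Gosu::Window, override update and draw, and Gosu runs the game loop for you.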

When I showed this to my friend and told him how easy it was, he wanted to try it for himself. I helped him install Ruby on his XP machine; most of the gems installed smoothly, and he quickly learned how Gosu and Ruby work (in fact, he’s about to send me the game files he’s been working on).

The real fun started when we began modifying the Gosu example projects that come with the gem. (Note: We had no luck finding the directory on my friend’s machine, so I sent him my files, which, due to RVM, were located in ~/.rvm/gems/ruby-VERSION/gems/gosu-VERSION/examples).

The CptnRuby example was definitely the most fun to change. It provides a great starting code base for a sidescroller (which I’m in fact using for my next project). The example shows you how to pull images from tilesets and implement gravity, both of which are useful, especially if you want to make a platformer.
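The gravity part comes down to a few lines of per-frame arithmetic: add a constant to the vertical velocity every update, add the velocity to the position, and stop at the ground. A stripped-down sketch of the idea in plain Ruby (class and constant names are mine, not CptnRuby’s):

```ruby
# Minimal platformer physics: per-frame velocity integration with a floor.
class Player
  GRAVITY = 1        # added to vertical velocity every frame
  JUMP_SPEED = -12   # negative because y grows downward on screen

  attr_reader :y

  def initialize(ground_y)
    @ground_y = ground_y
    @y = ground_y
    @vy = 0
  end

  def on_ground?
    @y >= @ground_y
  end

  # Only jump from the ground -- a common trick to prevent double jumps.
  def jump
    @vy = JUMP_SPEED if on_ground?
  end

  # Called once per frame (Gosu's update): apply gravity, move, land.
  def update
    @vy += GRAVITY
    @y += @vy
    if @y > @ground_y
      @y = @ground_y
      @vy = 0
    end
  end
end
```

Hook update into Gosu’s per-frame update callback and draw the player at @y, and you have the core of a platformer.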

I really started enjoying and understanding Gosu, however, after watching the ruby4kids screencasts on it (lame, I know). My first game, SpriteDodge, is based on the example project they create there.

Once you get off the ground, you’ll find Gosu easy to use and a logical addition to one of many people’s favorite programming languages.

If you get stuck with Gosu, make sure you check the Ruby rdoc for information, or hop in the #gosu channel on freenode.net (the people there are very helpful). If you need art for your games, you should check out lostgarden, or do a quick Google search for ‘free game assets.’ And when you finish your first game, feel free to post it in the showcase section of the Gosu forums. Enjoy, and good luck!