Immutable Kubernetes Selectors

It’s easy to be frustrated with immutable selectors on Deployments and StatefulSets. I know I was for a while, but after reading a bit more about it tonight, I feel it was a good call.

By the way, Jobs and immutability are a different ball of wax that I’m not addressing here.

The opener on this issue explains a large part of it, especially this bit: “users are not expected to change the selectors”. That is, the controllers all the way down the chain (deploymentController|statefulSetController -> replicaSetController -> scheduler -> kubelet) were designed to rely on these selectors not changing, and not as an oversight: it was an opinionated decision that is a major enabler. A better way to think of these labels might be as something closer to a unique identifier; by making them immutable, Kubernetes avoids the need to track the changing of unique identifiers in a state machine.

Not that all labels act like UIDs; only the ones you use as selectors in parent objects do.

The emphasis on parent objects is key here. The serviceController does not need the same constraint because services are essentially orthogonal, or sisters, to the things their selectors point at. But if the selector of a deployment changes, the deploymentController has no way to know what that selector was previously, and therefore no way to know which existing replicaSets, if any, belong to that deployment. It finds no replicaSets matching the new selector, the existing ones are now orphaned, and it creates new ones.
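A toy sketch of why this happens (not actual controller code): parent controllers find their children purely by selector matching, so a changed selector simply stops matching the old children.

```go
package main

import "fmt"

// matches reports whether a child's labels satisfy a parent's selector:
// every selector key/value pair must be present in the child's labels.
func matches(selector, labels map[string]string) bool {
	for k, v := range selector {
		if labels[k] != v {
			return false
		}
	}
	return true
}

func main() {
	// A ReplicaSet stamped out under the original selector.
	rsLabels := map[string]string{"app": "web"}

	oldSelector := map[string]string{"app": "web"}
	newSelector := map[string]string{"app": "web-v2"}

	fmt.Println(matches(oldSelector, rsLabels)) // the controller adopts it
	fmt.Println(matches(newSelector, rsLabels)) // the ReplicaSet is orphaned
}
```

The controller has no memory of the old selector, so once the match fails, the relationship is simply gone.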

Apparently a lot of people were shooting themselves in the foot and experiencing orphan “storms” before the community decided to stop defaulting spec.selector to spec.template.labels (which was itself defaulted from metadata.labels, and is seemingly the only reason selectors were ever made mutable on deployments; they were never mutable on statefulSets).

It’s easy to think of Kubernetes labels as just visual cues, and sometimes they are just that, but labels are also a first-class citizen around which the core Kubernetes code and algorithms are designed. They give you the magical combination of A) powerful, arbitrary key/value labeling and searching, and B) the dependability of UIDs, i.e. reference-ability across the database. The trade-off is immutable labelSelectors.

I.e. you gotta delete and recreate that sh*t if you wanna change the labelSelectors.

P.S. I haven’t blogged for a while mostly because I try too hard to get it all perfect. So I’m trying a new, less time consuming approach, as seen above.

Why I don’t Copy and Paste Commands into a Terminal

Years ago I was doing customer support for a company that rented virtual private servers. There wasn’t enough technical staff, so I started googling the answers to the customers’ problems and following tutorials and instructions to fix their servers.

During this time, my brother gave me a lot of good advice for which I am very grateful, but perhaps the single most valuable thing he said to me – the thing that cemented all my other sysadmin learning – was:
“I found that typing out the commands rather than copy-pasting them helps me remember them.”
So I started forcing myself to type out the commands I was seeing in tutorials/instructions and in my notes, and I found my brother was very right. (Note, nothing wrong with copy-pasting commands from your terminal into your notes if you’re a note taker.)

Moreover, I found it’s not just about the ability to remember commands and their options. A large portion of the command line knowledge and confidence I’ve gained over the years came via mistyping commands. When you mistype a command, or type it from memory, it forces you to think about what the command is doing and what its options are said to be. It also gives you a feel for the safety and danger of various mistypings (when you should be extremely careful, and when it’s OK to trial-and-error it).

P.S. On note taking:

For me, “man grep” is where I keep my notes on grep-ing things, “man awk” is where I keep my notes on awk-ing things, and so on. This has a primary and a secondary advantage:
  1. My notes remain precisely coupled to changes in the command’s options – my notes are never out of date for one second.
  2. My notes can always be found in the time it takes to type “man command” – no searching through stacks of paperwork or electronic notes.

Docker PostgreSQL Workflow

This workflow example uses Stackbrew’s trusted PostgreSQL image. You could develop your own image and accomplish the same. You can copy-paste all of the commands below, editing only the database name, role name, and password.

Perhaps ideally you’ll use your PostgreSQL instance from within other Docker containers, but if you’re not ready to make the switch to running each of your services as separate Docker containers, you can expose your PostgreSQL container’s port onto the host to make it in essence be (and appear as) a standard PostgreSQL installation.

For use with other containers:

docker run \
  --detach \
  --name postgres \
  stackbrew/postgres:latest
For use as a standard PostgreSQL installation:
docker run \
  --detach \
  --name postgres \
  --publish 5432:5432 \
  stackbrew/postgres:latest
You now have a database container named “postgres”. We have detached from it and left it running. It exposes its own port 5432 to whatever container you link it with, and/or to the host’s localhost:5432 if you ran it as a standard PostgreSQL installation. Docker will automatically pull the stackbrew/postgres image to your local machine if you do not yet have it.
Each of the next examples uses a container run from the same stackbrew/postgres image, yet these containers are temporary and will be removed after running. Each links to the now-created “postgres” container and runs its own copy of the psql/pg_dump/pg_restore clients.

Create role/database:

It is (from all appearances) required to have single quotes around the password in the CREATE ROLE command, so we’ll echo the statement into Docker’s stdin so it will not be escaped by the shell:
echo "CREATE ROLE \"demorole\" WITH LOGIN ENCRYPTED PASSWORD 'password' CREATEDB;" | docker run \
  --rm \
  --interactive \
  --link postgres:postgres \
  stackbrew/postgres:latest \
  bash -c 'exec psql -h "$POSTGRES_PORT_5432_TCP_ADDR" -p "$POSTGRES_PORT_5432_TCP_PORT" -U postgres'
echo "CREATE DATABASE \"demodatabase\" WITH OWNER \"demorole\" TEMPLATE template0 ENCODING 'UTF8';" | docker run \
  --rm \
  --interactive \
  --link postgres:postgres \
  stackbrew/postgres:latest \
  bash -c 'exec psql -h "$POSTGRES_PORT_5432_TCP_ADDR" -p "$POSTGRES_PORT_5432_TCP_PORT" -U postgres'

echo "GRANT ALL PRIVILEGES ON DATABASE \"demodatabase\" TO \"demorole\";" | docker run \
  --rm \
  --interactive \
  --link postgres:postgres \
  stackbrew/postgres:latest \
  bash -c 'exec psql -h "$POSTGRES_PORT_5432_TCP_ADDR" -p "$POSTGRES_PORT_5432_TCP_PORT" -U postgres'

Restore/load a database file:

pg_restore through Docker’s stdin can be kinda slow, so instead bind mount in the .sql or .tar file:
From a dump.sql file in your current directory:
docker run \
  --rm \
  --interactive \
  --link postgres:postgres \
  --volume $PWD/:/tmp/ \
  stackbrew/postgres:latest \
  bash -c 'exec psql -h "$POSTGRES_PORT_5432_TCP_ADDR" -p "$POSTGRES_PORT_5432_TCP_PORT" -U postgres -d demodatabase < /tmp/dump.sql'
From a dump.tar file in your current directory:
docker run \
  --rm \
  --interactive \
  --link postgres:postgres \
  --volume $PWD/:/tmp/ \
  stackbrew/postgres:latest \
  bash -c 'exec pg_restore -h "$POSTGRES_PORT_5432_TCP_ADDR" -p "$POSTGRES_PORT_5432_TCP_PORT" -U postgres -d demodatabase -F tar -v /tmp/dump.tar'

Accessing the database from another container:

Now you have your container named “postgres” with your database loaded. You can give access to this database to another container:
docker run \
  --detach \
  --link postgres:postgres \
  ruby-bundler-rails
…the “ruby-bundler-rails” container sees PostgreSQL running on its own localhost:5432.

Accessing the database from the host:

If you chose to run your PostgreSQL instance with the ‘--publish’ option, then it’s already ready to be accessed by other applications on your host. You could, for example, install the psql client on the host and start using it as usual, but why bother installing it when you already have an image with psql? Whether you’re exposing your PostgreSQL instance to your host, to other containers, or both, you can use the same stackbrew/postgres image to run your normal commands on the running instance.
Here are a few examples from which you can extrapolate how to accomplish all the things you’re already familiar with doing with PostgreSQL:

Dump the database into your current dir on the host:

docker run \
  --rm \
  --interactive \
  --link postgres:postgres \
  --volume $PWD/:/tmp/ \
  stackbrew/postgres:latest \
  bash -c 'exec pg_dump -h "$POSTGRES_PORT_5432_TCP_ADDR" -p "$POSTGRES_PORT_5432_TCP_PORT" -U postgres -F tar -v -d demodatabase > /tmp/dump.tar'

Interactive mode:

docker run \
  --rm \
  --interactive \
  --tty \
  --link postgres:postgres \
  stackbrew/postgres:latest \
  bash -c 'exec psql -h "$POSTGRES_PORT_5432_TCP_ADDR" -p "$POSTGRES_PORT_5432_TCP_PORT" -U postgres'

List databases, etc:

docker run \
  --rm \
  --interactive \
  --tty \
  --link postgres:postgres \
  stackbrew/postgres:latest \
  bash -c 'exec psql -h "$POSTGRES_PORT_5432_TCP_ADDR" -p "$POSTGRES_PORT_5432_TCP_PORT" -U postgres -c "\l"'


You will not want to remove the container named ‘postgres’ until you have a dump of your latest database updates.

Some have thought to bind mount the actual database files onto the host by running the instance with something like ‘--volume /demodatabase:/var/lib/postgresql/9.1/main’, which would make the container removable/replaceable. This deserves more consideration/experimentation.


If you’re picky and want to RTFM on each of Docker’s command line interface options, good for you! It’ll take you slightly longer and benefit us all. Regardless, with a near-zero learning curve, anywhere you have the Docker daemon installed you can start using this service now. It’ll be especially easy if you’re already relatively familiar with PostgreSQL.

SSH Port Forward to reach a private intranet Service

I’m familiar with using SSH port forwards to forward a port on remoteMachineX to my localMachine, and vice versa, yet somehow until now I had not realized how to reach a port on remoteMachineY through my SSH access to remoteMachineX. I’ve been wondering how to do this for a long time but never phrased the question right to figure it out.


  • Have: remoteMachineX and remoteMachineY on the same private network.
  • Have: SSH access to remoteMachineX.
  • Do not have: SSH access to remoteMachineY; it’s only serving (for example) the HTTP protocol on port 80 on its private interface.
  • Want: to browse remoteMachineY’s website.

What do?

Reading this article it suddenly became clear to me how to do this easily with SSH port forwards.
Before now I’ve only ever port forwarded to/from localhost ports, like this:
ssh -L localhost:9000:localhost:80 remoteMachineX
Which is more commonly abbreviated:
ssh -L 9000:localhost:80 remoteMachineX
and accomplishes the ability to read port 80 of remoteMachineX on your localMachine’s port 9000; but if you want to access port 80 of remoteMachineY through the local private network of remoteMachineX, you can:
ssh -L localhost:9000:remoteMachineY:80 remoteMachineX


then hit localhost:9000 in your localMachine’s browser.
Note: I often use port 9000 on my localMachine, since localhost ports below 1024 (typically) are restricted and would require sudo privileges. 9000 is also easy to type on the keyboard and remember in the brain.

Setup HTTPS SSL access on a Netgear GS724T switch

UPDATE: I was only able to get this to work with a dh512 key on some switches.

I want to be able to manage my switches remotely, and login to them using SSL for security.

Note: A lot of this OpenSSL info was gleaned from here. A lot of this TFTP info was gleaned from here.

We need to generate a key and self-signed certificate, and then we need to serve up those two files with a tftp server so the switch can download them.

Generate the key/cert:

  • openssl genrsa -out privkey.pem 1024
  • openssl req -new -x509 -key privkey.pem -out certificate.pem -days 3650 ## be sure to make your "Common Name" equal the name (hostname, fqdn) of your switch.
  • cat privkey.pem >> certificate.pem
  • openssl dhparam -out dh1024.pem 1024 # or 512

Create and run TFTP server on Ubuntu:

  • sudo apt-get update && sudo apt-get install tftp tftpd xinetd
  • Create a file here: /etc/xinetd.d/tftp with this content:
  • service tftp
    {
        protocol        = udp
        port            = 69
        socket_type     = dgram
        wait            = yes
        user            = nobody
        server          = /usr/sbin/in.tftpd
        server_args     = /tftpboot
        disable         = no
    }
  • Restart xinetd with :
  • sudo /etc/init.d/xinetd restart
  • We’ll serve files out of a root dir called /tftpboot/, so mkdir and chown/chmod it:
  • sudo mkdir /tftpboot
    sudo chmod -R 777 /tftpboot
    sudo chown -R nobody /tftpboot
  • Move your certificate.pem and dh1024.pem files into that dir with:
  • mv dh1024.pem /tftpboot/ && mv certificate.pem /tftpboot/

Download the files into your switch:

  • In your switch’s HTTP interface, head to:
  • Security -> HTTPS -> Certificate Download ->
  • Put in the IP of your Ubuntu TFTP server, and the name of the file to download.
  • Upload certificate.pem as your “SSL Server certificate PEM file”,
  • Upload dh1024.pem as your “SSL DH Strong Encryption parameter PEM file”.

Now enable HTTPS on your switch and reboot and enjoy!

After you’re done with your TFTP server, you probably want to edit /etc/xinetd.d/tftp, change “disable = no” to “disable = yes”, and restart xinetd again so you don’t continue to serve your keys to anyone.

SSH port forward in Golang

Do you ever use SSH port forwards to work with remote services locally?

ssh -L 9000:localhost:9999

Here’s how I figured out how to do it in Golang:

package main

// Forward connections from local port 9000 to remote port 9999,
// using the golang.org/x/crypto/ssh package.

import (
	"io"
	"log"
	"net"

	"golang.org/x/crypto/ssh"
)

var (
	username         = "root"
	password         = "password"
	serverAddrString = "" // your SSH server address, e.g. "host:22"
	localAddrString  = "localhost:9000"
	remoteAddrString = "localhost:9999"
)

func forward(localConn net.Conn, config *ssh.ClientConfig) {
	// Setup sshClientConn (type *ssh.Client)
	sshClientConn, err := ssh.Dial("tcp", serverAddrString, config)
	if err != nil {
		log.Fatalf("ssh.Dial failed: %s", err)
	}

	// Setup sshConn (type net.Conn)
	sshConn, err := sshClientConn.Dial("tcp", remoteAddrString)
	if err != nil {
		log.Fatalf("sshClientConn.Dial failed: %s", err)
	}

	// Copy localConn.Reader to sshConn.Writer
	go func() {
		_, err := io.Copy(sshConn, localConn)
		if err != nil {
			log.Fatalf("io.Copy failed: %v", err)
		}
	}()

	// Copy sshConn.Reader to localConn.Writer
	go func() {
		_, err := io.Copy(localConn, sshConn)
		if err != nil {
			log.Fatalf("io.Copy failed: %v", err)
		}
	}()
}

func main() {
	// Setup SSH config (type *ssh.ClientConfig)
	config := &ssh.ClientConfig{
		User: username,
		Auth: []ssh.AuthMethod{
			ssh.Password(password),
		},
		// Accept any host key; fine for a quick demo, not for production.
		HostKeyCallback: ssh.InsecureIgnoreHostKey(),
	}

	// Setup localListener (type net.Listener)
	localListener, err := net.Listen("tcp", localAddrString)
	if err != nil {
		log.Fatalf("net.Listen failed: %v", err)
	}

	for {
		// Setup localConn (type net.Conn)
		localConn, err := localListener.Accept()
		if err != nil {
			log.Fatalf("listen.Accept failed: %v", err)
		}
		go forward(localConn, config)
	}
}

Save Time with DNS Search on Ubuntu and Mac OS X

Do you SSH into and/or ping a lot of servers under the same domain name? Ever get tired of typing the same domain name multiple times a day? E.g.:


Easily set up your machines to automatically fill it in for you. Instead you’ll be able to:

ssh server1
ssh server2
ping merp
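What a search domain buys you can be modeled in a few lines (a rough sketch: real stub resolvers also honor options like ndots and may try several suffixes, and “example.com” here is just a stand-in for your domain):

```go
package main

import (
	"fmt"
	"strings"
)

// qualify mimics (roughly) what the stub resolver does with a search domain:
// bare names without a dot get the search suffix appended before lookup.
func qualify(name, searchDomain string) string {
	if strings.Contains(name, ".") {
		return name // already qualified; tried as-is
	}
	return name + "." + searchDomain
}

func main() {
	fmt.Println(qualify("server1", "example.com"))
	fmt.Println(qualify("server1.example.com", "example.com"))
}
```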

Here’s how on Ubuntu 14.04:

  • Edit your /etc/resolvconf/resolv.conf.d/base file with your favorite editor:
sudo vi /etc/resolvconf/resolv.conf.d/base
  • Add a line like this (using your own domain) and save your changes:
search example.com
  • Now update resolvconf with:
sudo resolvconf -u
  • That’s it!

Here’s how on Ubuntu (older versions):

  • Edit your /etc/network/interfaces file with your favorite editor:
sudo vi /etc/network/interfaces
  • Under each interface (if you have more than one) look for a ‘dns-search’ line, and create it if it’s not there. The file should end up looking something like this:
# This file describes the network interfaces available on your system
# and how to activate them. For more information, see interfaces(5).

# The loopback network interface
auto lo
iface lo inet loopback

# The primary network interface
auto eth0
iface eth0 inet static
 # dns-* options are implemented by the resolvconf package, if installed
 dns-search example.com
  • After saving your changes, restart networking:
sudo /etc/init.d/networking restart
  • That’s it!


Here’s how on Mac OS X: (I’m using Mountain Lion.)

  • You can both view and edit your settings with the ‘networksetup’ command.
  • First check what network interfaces you have:
sudo networksetup -listallnetworkservices
  • If you’re like me, you’ll only need to setup search domains for the ‘Ethernet’ and ‘Wi-Fi’ interfaces.
  • Check if there are already any set; you’re likely to see “There aren’t any Search Domains set on [interface].”:
sudo networksetup -getsearchdomains Ethernet
sudo networksetup -getsearchdomains Wi-Fi
  • Set a new search domain for each interface you use:
sudo networksetup -setsearchdomains Ethernet example.com
sudo networksetup -setsearchdomains Wi-Fi example.com
  • That’s it!

Regaining root access to a virtual machine with Guestfish

Passwordless users with SSH public/private key access are a great way to go, but this requires a user to have passwordless sudo rights if they are to have sudo at all.

A couple of times now I have locked my user out of having root access on a VM via various methods. I am still able to get into the machine, but cannot use sudo, and no other user can use sudo either. What now?

If you have root access to the host machine and you’re able to install libguestfs, you can recover it. NOTE: Ubuntu 12.04 is the first Ubuntu version to have the libguestfs package available in the repository.

I have fixed both Ubuntu 10.04 and 12.04 virtual machines of the qcow2 format. Guestfish claims it can handle many other formats. I used an Ubuntu 12.04 host machine to run guestfish. This will install the libguestfs package and any other dependencies you don’t already have:

sudo apt-get update
sudo apt-get install guestfish
Be sure to shutdown the VM before making any changes with guestfish. You are likely to corrupt your VM if you try to use guestfish in read/write mode while the VM is running.

Now we will open the sudoers file on the VM:

sudo guestfish --rw -a /path/to/vm_file.qcow2 -i edit /etc/sudoers

Make sure to add the following line at the end of the file, since other sudoer lines may override it otherwise:

[USERNAME] ALL=(ALL) NOPASSWD: ALL
where [USERNAME] is your user on the VM. Mine looked like this:

davidamick ALL=(ALL) NOPASSWD: ALL

Now save the file, close the editor, and restart the VM to find your user able to gain root without using its non-existent password. 🙂 It’s a good idea to then continue to set it up in whatever more proper way you normally use, like adding your user to an admin group that has the NOPASSWD: directive, and removing the line you just added.

P.S. Guestfish is very powerful, and is also capable of adding a password to root or another user. If you need to do this, try using guestfish’s “command” command to run a command inside the VM. You would not, however, want to run any command that requires user feedback (i.e. the “passwd” command), since guestfish will hang and not play nice with this (as I found out the hard way). Instead, practice first on a separate machine using the “crypt” and “usermod” commands to change the password in a single command, then run that command with guestfish on the VM.

UPDATE: Here is an example of adding a new password:

command "bash -c 'echo davidamick:asdfasdf | chpasswd'"

A (very) Basic Understanding of Email Phishing

Anyone can send an email and fill in the “from” address with someone else’s email address.

  • You cannot know for certain who an email came from.
  • You can know for certain (relatively) who an email is going to.
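To see why the “from” address proves nothing, note that an email’s From: header is just text the sender typed; nothing in SMTP verifies it. A minimal sketch (the addresses are made up):

```go
package main

import "fmt"

// buildMessage assembles a raw RFC 822-style message. The From: header is
// simply whatever string the sender supplies; SMTP does not verify it.
func buildMessage(from, to, subject, body string) string {
	return "From: " + from + "\r\n" +
		"To: " + to + "\r\n" +
		"Subject: " + subject + "\r\n" +
		"\r\n" +
		body + "\r\n"
}

func main() {
	// Anyone can claim to be anyone in the From: header.
	msg := buildMessage("ceo@example.com", "helpdesk@example.com",
		"Password reset", "Please reset my password ASAP.")
	fmt.Print(msg)
}
```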

Thus, when someone emails you asking you for help with their password or SSH key (or any other highly important change):

Be nice: reply to your sender assuming they are the real person, without revealing any important or confidential information. Include the message they sent you in your reply so they will see it, and ask them to reply back to confirm.
  • If you do NOT get a reply back saying something like “What are you talking about? I didn’t request to change my password/SSH key?!”,
  • and you DO get a reply back saying something to the effect of “Yes please, thanks!”,
  • then you are safe to proceed with a major security change like a password or SSH key change.

It’s that simple. 🙂

  • Caveat: This assumes the attacker does not have access to the email account to which you reply.
  • Note: Use HTTPS to connect to your mail server/service to avoid people snooping on your traffic.

Ruby ree-1.8.7-2012.02 on Ubuntu 12.04

Thanks to help from Fabio Rehm’s gist, I was able to get Ruby Enterprise Edition 1.8.7 (ree-1.8.7-2012.02) working on Ubuntu 12.04. I wrote my own gist for our own uses:

apt-get install -y build-essential wget zlib1g-dev libssl-dev libffi-dev libreadline-dev libxslt-dev libxml2-dev
cd /usr/src


tar xzf ruby-enterprise-1.8.7-2012.02.tar.gz && cd ruby-enterprise-1.8.7-2012.02/source


patch -p1 < tcmalloc.patch
patch -p1 < stdout-rouge-fix.patch

cd .. && ./installer --auto /usr/local --dont-install-useful-gems --no-dev-docs