All posts by revacuate

Why I don’t Copy and Paste Commands into a Terminal

Years ago I was doing customer support for a company that rented virtual private servers. There wasn’t enough technical staff, so I started googling the answers to customers’ problems and following tutorials and instructions to fix their servers.

During this time, my brother gave me a lot of good advice for which I am very grateful, but perhaps the single most valuable thing he said to me – the thing that cemented all my other sysadmin learning – was:
“I found that typing out the commands rather than copy-pasting them helps me remember them.”
So I started forcing myself to type out the commands I was seeing in tutorials/instructions and in my notes, and I found my brother was very right. (Note: there’s nothing wrong with copy-pasting commands from your terminal into your notes if you’re a note taker.)

Moreover, I found it’s not just about the ability to remember commands and their options. A large portion of the command line knowledge and confidence I’ve gained over the years came from mistyping commands. When you mistype a command, or type it from memory, it forces you to think about what the command is doing and what its options are supposed to do. It also gives you a sense of which mistypings are safe and which are dangerous (when you should be extremely careful, and when it’s OK to trial-and-error it).

P.S. On note taking:

For me, “man grep” is where I keep my notes on grep-ing things, “man awk” is where I keep my notes on awk-ing things, and so on. This has two advantages:
  1. My notes remain precisely coupled to changes in the command’s options – my notes are never out of date for one second.
  2. My notes can always be found in the time it takes to type “man command” – no searching through stacks of paperwork or electronic notes.

Docker PostgreSQL Workflow

This workflow example uses Stackbrew’s trusted PostgreSQL image. You could develop your own and accomplish the same. You can copy-paste all of the commands below, editing only the database name, role name, and password.

Perhaps ideally you’ll use your PostgreSQL instance from within other Docker containers, but if you’re not ready to run each of your services as a separate Docker container, you can publish your PostgreSQL container’s port on the host so that it behaves like (and appears as) a standard PostgreSQL installation.

For use with other containers:

docker run \
  --detach \
  --name postgres \
  stackbrew/postgres:latest
For use as a standard PostgreSQL installation:
docker run \
  --detach \
  --name postgres \
  --publish 127.0.0.1:5432:5432 \
  stackbrew/postgres:latest
You now have a database container named “postgres”. We have detached from it and left it running. Its port 5432 is reachable from any container you link it with (via the link’s environment variables), and from the host’s localhost:5432 if you used the --publish option. Docker will automatically pull the stackbrew/postgres image to your local machine if you do not yet have it.
Each of the next examples uses containers run from the same stackbrew/postgres image, yet they are temporary and will be removed after running. Each links to the newly created “postgres” container and runs its own copy of the psql/pg_dump/pg_restore clients.

Create role/database:

It appears the password in the CREATE ROLE statement must be wrapped in single quotes, so we’ll echo the statement into Docker’s stdin so the quotes are not stripped by the shell:
echo "CREATE ROLE \"demorole\" WITH LOGIN ENCRYPTED PASSWORD 'password' CREATEDB;" | docker run \
  --rm \
  --interactive \
  --link postgres:postgres \
  stackbrew/postgres:latest \
  bash -c 'exec psql -h "$POSTGRES_PORT_5432_TCP_ADDR" -p "$POSTGRES_PORT_5432_TCP_PORT" -U postgres'
 
echo "CREATE DATABASE \"demodatabase\" WITH OWNER \"demorole\" TEMPLATE template0 ENCODING 'UTF8';" | docker run \
  --rm \
  --interactive \
  --link postgres:postgres \
  stackbrew/postgres:latest \
  bash -c 'exec psql -h "$POSTGRES_PORT_5432_TCP_ADDR" -p "$POSTGRES_PORT_5432_TCP_PORT" -U postgres'

echo "GRANT ALL PRIVILEGES ON DATABASE \"demodatabase\" TO \"demorole\";" | docker run \
  --rm \
  --interactive \
  --link postgres:postgres \
  stackbrew/postgres:latest \
  bash -c 'exec psql -h "$POSTGRES_PORT_5432_TCP_ADDR" -p "$POSTGRES_PORT_5432_TCP_PORT" -U postgres'
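If you want to sanity-check that the role and database now exist, you can run one more throwaway container using the exact same pattern (an optional check; \du lists roles):
docker run \
  --rm \
  --interactive \
  --link postgres:postgres \
  stackbrew/postgres:latest \
  bash -c 'exec psql -h "$POSTGRES_PORT_5432_TCP_ADDR" -p "$POSTGRES_PORT_5432_TCP_PORT" -U postgres -c "\du"'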

Restore/load a database file:

pg_restore through Docker’s stdin can be kinda slow, so instead bind mount in the .sql or .tar file:
From a dump.sql file in your current directory:
docker run \
  --rm \
  --interactive \
  --link postgres:postgres \
  --volume $PWD/:/tmp/ \
  stackbrew/postgres:latest \
  bash -c 'exec psql -h "$POSTGRES_PORT_5432_TCP_ADDR" -p "$POSTGRES_PORT_5432_TCP_PORT" -U postgres -d demodatabase < /tmp/dump.sql'
From a dump.tar file in your current directory:
docker run \
  --rm \
  --interactive \
  --link postgres:postgres \
  --volume $PWD/:/tmp/ \
  stackbrew/postgres:latest \
  bash -c 'exec pg_restore -h "$POSTGRES_PORT_5432_TCP_ADDR" -p "$POSTGRES_PORT_5432_TCP_PORT" -U postgres -d demodatabase -F tar -v /tmp/dump.tar'

Accessing the database from another container:

Now you have your container named “postgres” with your database loaded. You can give access to this database to another container:
docker run \
  --detach \
  --link postgres:postgres \
  localhost:5000/ubuntu/ruby-bundler-rails
…the “ruby-bundler-rails” container can now reach PostgreSQL through the link, using the same POSTGRES_PORT_5432_TCP_* environment variables (and the “postgres” hostname alias) that the examples above use.
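If you’re unsure what address a linked container should connect to, you can print the environment variables Docker injects through the link; any linked image sees the same variables, and here I’m simply reusing the stackbrew/postgres image because we already have it:
docker run \
  --rm \
  --link postgres:postgres \
  stackbrew/postgres:latest \
  bash -c 'env | grep POSTGRES_PORT_5432'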

Accessing the database from the host:

If you chose to run your PostgreSQL instance with the '--publish 127.0.0.1:5432:5432' option, then it’s already ready to be accessed by other applications on your host. You could, for example, install the psql client on the host and start using it as usual, but why bother installing it when you already have an image with psql. Whether you’re exposing your PostgreSQL instance to your host, to other containers, or both, you can use the same stackbrew/postgres image to run your normal commands against the running instance.
Here are a few examples from which you can extrapolate how to accomplish all the things you’re already familiar with doing with PostgreSQL:

Dump the database into your current dir on the host:

docker run \
  --rm \
  --interactive \
  --link postgres:postgres \
  --volume $PWD/:/tmp/ \
  stackbrew/postgres:latest \
  bash -c 'exec pg_dump -h "$POSTGRES_PORT_5432_TCP_ADDR" -p "$POSTGRES_PORT_5432_TCP_PORT" -U postgres -F tar -v -d demodatabase > /tmp/dump.tar'

Interactive mode:

docker run \
  --rm \
  --interactive \
  --tty \
  --link postgres:postgres \
  stackbrew/postgres:latest \
  bash -c 'exec psql -h "$POSTGRES_PORT_5432_TCP_ADDR" -p "$POSTGRES_PORT_5432_TCP_PORT" -U postgres'

List databases, etc:

docker run \
  --rm \
  --interactive \
  --tty \
  --link postgres:postgres \
  stackbrew/postgres:latest \
  bash -c 'exec psql -h "$POSTGRES_PORT_5432_TCP_ADDR" -p "$POSTGRES_PORT_5432_TCP_PORT" -U postgres -c "\l"'

Thoughts/Questions:

You will not want to remove the container named ‘postgres’ until you have a dump of your latest database updates.

Some have thought to mount the actual database files from the host by running the instance with something like '--volume /demodatabase:/var/lib/postgresql/9.1/main', which would make the container removable/replaceable. This deserves more consideration/experimentation.
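For the record, a sketch of what that might look like (untested here, and the data directory path inside the image depends on its PostgreSQL version, so treat the path below as an assumption):
docker run \
  --detach \
  --name postgres \
  --volume /demodatabase:/var/lib/postgresql/9.1/main \
  stackbrew/postgres:latest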

Conclusion:

If you’re picky and want to RTFM on each of Docker’s command-line interface options, good for you! It’ll take slightly longer, but it will benefit everyone more in the long run. Regardless, with a near zero learning curve, anywhere you have the Docker daemon installed you can start using this service now. It’ll be especially easy if you’re already relatively familiar with PostgreSQL.

SSH Port Forward to reach a private intranet Service

I’m familiar with using SSH port forwards to forward a port on remoteMachineX to my localMachine, and vice versa, yet somehow until now I had not realized how to reach a port on remoteMachineY through my SSH access to remoteMachineX. I’d been wondering how to do this for a long time but never framed the question well enough to figure it out.

Scenario:

  • Have: remoteMachineX and remoteMachineY on the same private network.
  • Have: SSH access to remoteMachineX.
  • Do not have: SSH access to remoteMachineY; it’s only serving (for example) HTTP on port 80 on its private interface.
  • Want: to browse remoteMachineY’s website.

What do?

Reading this article, it suddenly became clear to me how to do this easily with SSH port forwards.
Before now I’ve only ever port forwarded to/from localhost ports, like this:
ssh -L localhost:9000:localhost:80 user@remoteMachineX.com
Which is more commonly abbreviated:
ssh -L 9000:localhost:80 user@remoteMachineX.com
which lets you reach port 80 of remoteMachineX on your localMachine’s port 9000; but if you want to access port 80 of remoteMachineY through the local private network of remoteMachineX, you can:
ssh -L localhost:9000:remoteMachineY:80 user@remoteMachineX.com

Example:

ssh -L localhost:9000:10.10.10.11:80 user@hostmachine.domain.com
then hit localhost:9000 in your localMachine’s browser.
Note: I often use port 9000 on my localMachine, since localhost ports below 1024 (typically) are restricted and would require sudo privileges. 9000 is also easy to type on the keyboard and remember in the brain.
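If you set up the same forward often, you can also bake it into your ~/.ssh/config (the host names here are just the placeholders from the example above), so that a plain "ssh remoteMachineX" opens the tunnel:
Host remoteMachineX
    HostName remoteMachineX.com
    User user
    LocalForward 9000 remoteMachineY:80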

Setup HTTPS SSL access on a Netgear GS724T switch

UPDATE: I was only able to get this to work with a dh512 key on some switches.

I want to be able to manage my switches remotely, and login to them using SSL for security.

Note: A lot of this OpenSSL info was gleaned from here. A lot of this TFTP info was gleaned from here.

We need to generate a key and self-signed certificate, and then we need to serve up those two files with a tftp server so the switch can download them.

Generate the key/cert:

  • openssl genrsa -out privkey.pem 1024
  • openssl req -new -x509 -key privkey.pem -out certificate.pem -days 3650 ## be sure to make your "Common Name" equal the name (hostname, fqdn) of your switch (you can double-check it below).
  • cat privkey.pem >> certificate.pem
  • openssl dhparam -out dh1024.pem 1024 # or 512
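An optional sanity check before serving the files: confirm the certificate’s Common Name and expiry look right with:
openssl x509 -in certificate.pem -noout -subject -dates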

Create and run TFTP server on Ubuntu:

  • sudo apt-get update && sudo apt-get install tftp tftpd xinetd
  • Create a file here: /etc/xinetd.d/tftp with this content:
  • service tftp
    {
    protocol        = udp
    port            = 69
    socket_type     = dgram
    wait            = yes
    user            = nobody
    server          = /usr/sbin/in.tftpd
    server_args     = /tftpboot
    disable         = no
    }
  • Restart xinetd with:
  • sudo /etc/init.d/xinetd restart
  • We’ll serve files out of a root dir called /tftpboot/, so mkdir and chown/chmod it:
  • sudo mkdir /tftpboot
    sudo chmod -R 777 /tftpboot
    sudo chown -R nobody /tftpboot
  • Finally, move your certificate.pem and dh1024.pem files into that dir (we’ll test the server below):
  • mv dh1024.pem /tftpboot/ && mv certificate.pem /tftpboot/
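Before pointing the switch at it, you can confirm the TFTP server is actually serving the files. From some other directory (so you don’t overwrite the originals in /tftpboot/), an interactive fetch should succeed:
cd /tmp
tftp 127.0.0.1
tftp> get certificate.pem
tftp> quit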

Download the files into your switch:

  • In your switch’s HTTP interface, head to:
  • Security -> HTTPS -> Certificate Download ->
  • Put in the IP of your Ubuntu TFTP server, and the name of the file to download.
  • Download certificate.pem as your “SSL Server certificate PEM file”,
  • Download dh1024.pem as your “SSL DH Strong Encryption parameter PEM file”.

Now enable HTTPS on your switch and reboot and enjoy!

After you’re done with your TFTP server, you probably want to edit /etc/xinetd.d/tftp, change “disable = no” to “disable = yes”, and then restart xinetd again so you don’t continue to serve your keys to anyone.
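If you’d rather flip that flag in one step, something like this should work against the config file shown above (a sed one-liner; adjust it if your file differs):
sudo sed -i '/disable/ s/no/yes/' /etc/xinetd.d/tftp
sudo /etc/init.d/xinetd restart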

SSH port forward in Golang

Do you ever use SSH port forwards to work with remote services locally?

ssh -L 9000:localhost:9999 user@server.com

Here’s how I figured out how to do it in Golang:

package main

// Forward from local port 9000 to remote port 9999

import (
	"code.google.com/p/go.crypto/ssh"
	"io"
	"log"
	"net"
	//  "fmt"
)

var (
	username         = "root"
	password         = clientPassword("password")
	serverAddrString = "192.168.1.100:22"
	localAddrString  = "localhost:9000"
	remoteAddrString = "localhost:9999"
)

type clientPassword string

func (password clientPassword) Password(user string) (string, error) {
	return string(password), nil
}

func forward(localConn net.Conn, config *ssh.ClientConfig) {
	// Setup sshClientConn (type *ssh.ClientConn)
	sshClientConn, err := ssh.Dial("tcp", serverAddrString, config)
	if err != nil {
		log.Fatalf("ssh.Dial failed: %s", err)
	}

	// Setup sshConn (type net.Conn)
	sshConn, err := sshClientConn.Dial("tcp", remoteAddrString)
	if err != nil {
		log.Fatalf("sshClientConn.Dial failed: %s", err)
	}

	// Copy localConn.Reader to sshConn.Writer
	go func() {
		_, err = io.Copy(sshConn, localConn)
		if err != nil {
			log.Fatalf("io.Copy failed: %v", err)
		}
	}()

	// Copy sshConn.Reader to localConn.Writer
	go func() {
		_, err = io.Copy(localConn, sshConn)
		if err != nil {
			log.Fatalf("io.Copy failed: %v", err)
		}
	}()
}

func main() {
	// Setup SSH config (type *ssh.ClientConfig)
	config := &ssh.ClientConfig{
		User: username,
		Auth: []ssh.ClientAuth{
			ssh.ClientAuthPassword(password),
		},
	}

	// Setup localListener (type net.Listener)
	localListener, err := net.Listen("tcp", localAddrString)
	if err != nil {
		log.Fatalf("net.Listen failed: %v", err)
	}

	for {
		// Setup localConn (type net.Conn)
		localConn, err := localListener.Accept()
		if err != nil {
			log.Fatalf("listen.Accept failed: %v", err)
		}
		go forward(localConn, config)
	}
}
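Assuming you save the program as forward.go (a file name I’m choosing here) and something on the remote machine is listening on port 9999, using it looks like:
go run forward.go
# ...then, in another terminal:
curl http://localhost:9000/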

Save Time with DNS Search on Ubuntu and Mac OS X

Do you SSH and/or ping a lot of servers under the same domain name? Ever get tired of typing the same domain name multiple times a day? E.G.:

ssh server1.ourcompanysname.com
ssh server2.ourcompanysname.com
ping merp.ourcompanysname.com

Easily set up your machines to automatically fill it in for you. Instead you’ll be able to:

ssh server1
ssh server2
ping merp

Here’s how on Ubuntu 14.04:

  • Edit your /etc/resolvconf/resolv.conf.d/base file with your favorite editor:
sudo vi /etc/resolvconf/resolv.conf.d/base
  • Add a line like this and save your changes:
search ourcompanysname.com
  • Now update resolvconf with:
sudo resolvconf -u
  • That’s it!
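You can confirm the search domain took effect by checking the generated resolver config, and then pinging a short hostname (assuming, as in the examples above, that server1.ourcompanysname.com exists):
cat /etc/resolv.conf
ping -c 1 server1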

Here’s how on Ubuntu (older versions):

  • Edit your /etc/network/interfaces file with your favorite editor:
sudo vi /etc/network/interfaces
  • Under each interface (if you have more than one) look for a ‘dns-search’ line, and create it if it’s not there. The file should end up looking something like this:
# This file describes the network interfaces available on your system
# and how to activate them. For more information, see interfaces(5).

# The loopback network interface
auto lo
iface lo inet loopback

# The primary network interface
auto eth0
iface eth0 inet static
 address 123.123.123.123
 netmask 255.255.255.0
 # dns-* options are implemented by the resolvconf package, if installed
 dns-nameservers 8.8.8.8 8.8.4.4
 dns-search ourcompanysname.com
  • After saving your changes, restart networking:
sudo /etc/init.d/networking restart
  • That’s it!

 

Here’s how on Mac OS X: (I’m using Mountain Lion.)

  • You can both view and edit your settings with the ‘networksetup’ command.
  • First check what network interfaces you have:
sudo networksetup -listallnetworkservices
  • If you’re like me, you’ll only need to set up search domains for the ‘Ethernet’ and ‘Wi-Fi’ interfaces.
  • Check if there are already any set; you’re likely to see “There aren’t any Search Domains set on [interface].”:
sudo networksetup -getsearchdomains Ethernet
sudo networksetup -getsearchdomains Wi-Fi
  • Set a new search domain for each interface you use:
sudo networksetup -setsearchdomains Ethernet ourcompanysname.com
sudo networksetup -setsearchdomains Wi-Fi ourcompanysname.com
  • That’s it!

Regaining root access to a virtual machine with Guestfish

Passwordless users with SSH public/private key access are a great way to go, but such a user needs passwordless sudo rights if it is to have sudo at all.

A couple of times now I have locked my user out of having root access on a VM through various methods. I can still get into the machine, but I can’t use sudo, and no other user can use sudo either. What now?

If you have root access to the host machine and you’re able to install libguestfs, you can recover it. NOTE: Ubuntu 12.04 is the first Ubuntu version to have the libguestfs package available in the repository.

I have fixed both Ubuntu 10.04 and 12.04 virtual machines in the qcow2 format; guestfish claims it can handle many other formats. I used an Ubuntu 12.04 host machine to run guestfish. This will install guestfish and any other dependencies you don’t already have:

sudo apt-get update
sudo apt-get install guestfish
Be sure to shut down the VM before making any changes with guestfish. You are likely to corrupt your VM if you try to use guestfish in read/write mode while the VM is running.

Now we will open the sudoers file on the VM:

sudo guestfish --rw -a /path/to/vm_file.qcow2 -i edit /etc/sudoers

Make sure to add the following line at the end of the file, since other sudoer lines may override it otherwise:

[USERNAME] ALL=(ALL) NOPASSWD: ALL

where [USERNAME] is your user on the VM. Mine looked like this:

davidamick ALL=(ALL) NOPASSWD: ALL

Now save the file, close the editor, and restart the VM to find your user able to gain root without using its non-existent password. 🙂 It’s a good idea to then set things up in whatever more proper way you normally use, like adding your user to an admin group that has the NOPASSWD: directive, and removing the line you just added.

P.S. Guestfish is very powerful, and is also capable of adding a password to the root user or any other user. If you need to do this, try using guestfish’s “command” command to run a command inside the VM. You would not, however, want to run any command that requires user interaction (i.e. the “passwd” command), since guestfish will hang and not play nice with it (as I found out the hard way). Instead, practice first on a separate machine using the “crypt” and “usermod” commands to change the password in a single command, then run that command with guestfish on the VM.

UPDATE: Here is an example of adding a new password:

command "bash -c 'echo davidamick:asdfasdf | chpasswd'"
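For context, that line is entered at the guestfish prompt, or piped to guestfish on stdin. A sketch of a non-interactive invocation, reusing the placeholder VM path from above:
sudo guestfish --rw -a /path/to/vm_file.qcow2 -i <<'EOF'
command "bash -c 'echo davidamick:asdfasdf | chpasswd'"
EOF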

A (very) Basic Understanding of Email Phishing

Anyone can send an email and fill in the “from” address with someone else’s email address.

  • You cannot know for certain who an email came from.
  • You can know for certain (relatively) who an email is going to.

Thus, when someone emails you asking you for help with their password or SSH key (or any other highly important change):

Be nice, reply to your sender assuming they are the real person without revealing any important or confidential information. Include the message they sent you in your reply so they will see it, and ask them to reply back to confirm.
  • If you do NOT get a reply back saying something like “What are you talking about? I didn’t request to change my password/SSH key?!“,
  • And if you DO get a reply back saying something to the effect of “Yes please, thanks!“,
  • Then you are safe to proceed with a major security change like a password or SSH key change.

It’s that simple. 🙂

  • Caveat: This assumes the attacker does not have access to the email account to which you reply.
  • Note: Use HTTPS to connect to your mail server/service to avoid people snooping on your traffic.

Ruby ree-1.8.7-2012.02 on Ubuntu 12.04

Thanks to help from Fabio Rhem’s gist, I was able to get Ruby Enterprise Edition 1.8.7 (ree-1.8.7-2012.02) working on Ubuntu 12.04. I wrote my own gist for our own uses:

apt-get install -y build-essential wget zlib1g-dev libssl-dev libffi-dev libreadline-dev libxslt-dev libxml2-dev
cd /usr/src

wget http://rubyenterpriseedition.googlecode.com/files/ruby-enterprise-1.8.7-2012.02.tar.gz

tar xzf ruby-enterprise-1.8.7-2012.02.tar.gz && cd ruby-enterprise-1.8.7-2012.02/source

wget https://github.com/wayneeseguin/rvm/raw/master/patches/ree/1.8.7/tcmalloc.patch
wget https://github.com/wayneeseguin/rvm/raw/master/patches/ree/1.8.7/stdout-rouge-fix.patch

patch -p1 < tcmalloc.patch
patch -p1 < stdout-rouge-fix.patch

cd .. && ./installer --auto /usr/local --dont-install-useful-gems --no-dev-docs
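The installer above targets /usr/local, so assuming the default layout, a quick check that the build took is:
/usr/local/bin/ruby -v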

Comment tags in /etc/network/interfaces

Today I took down networking on one of our production host machines. It’s not a machine local to us, so we were fortunate enough to have a secondary network setup to access the server from another server of ours.

After much troubleshooting as to what went wrong, I found the tricky answer. Here’s what I did:

I had commented out a line on my practice machine, restarted networking and everything went fine. I then did the same thing on the production machine:

I changed:

"  gateway   10.10.10.1"
to:

"  #gateway   10.10.10.1"
and everything blew up!
When I then changed it instead to this:

"#  gateway   10.10.10.1"
everything worked fine again.
In the interfaces file at least, it’s *very* important to put the comment character at the very beginning of any line you want to comment out.