— 2 min read

SSHFP records are a defense against people blindly typing ‘yes’ when asked if they want to continue connecting to an SSH host whose authenticity is unknown.

$ ssh some.host.tld
The authenticity of host 'some.host.tld (203.0.113.10)' can't be established.
ED25519 key fingerprint is 69:76:51:39:a4:c6:de:15:7c:50:4b:4a:a7:98:40:5e.
Are you sure you want to continue connecting (yes/no)?

This prompt is likely to be extremely familiar to you, and most people seem to just type ‘yes’ to move on with their lives, which defeats the whole purpose of the prompt.

If you use DNSSEC you can bypass this prompt entirely by publishing your server’s key fingerprints via DNS and having SSH authenticate them for you.

Generating your SSHFP record

You can get SSH to generate the DNS records for you; log in …
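The excerpt is cut off here, but for reference, OpenSSH’s ssh-keygen can print SSHFP records for a host key; a minimal sketch, assuming the default ED25519 host key location and using the hostname from the example above:

# Run on the server itself; prints SSHFP resource records for the given host key.
ssh-keygen -r some.host.tld -f /etc/ssh/ssh_host_ed25519_key.pub

On the client side, OpenSSH will only check these records if VerifyHostKeyDNS is set to yes in ssh_config (or ~/.ssh/config).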

 — 1 min read

I currently use name.com as my registrar and Rage4 for DNS because Rage4 are awesome: they also support TLSA and SSHFP records and, of course, DNSSEC.

I’m writing this up because I found getting DNSSEC from Rage4 to work with name.com as my registrar was a pain, and name.com support were not very helpful, linking me to a support article that I’d already read and that did not help at all.

Rage4

I’m going to assume you’ve already got your records in Rage4, if not, the interface is really easy so you’ll figure it out.

Within the management section for your domain’s zone there is a menu bar of icons; the icon pictured below enables DNSSEC.

Enabled DNSSEC

Clicking this will turn on DNSSEC. You will then have a new icon that will allow you to display your DNSSEC information.

Display DNSSEC info

Clicking this icon …
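The excerpt ends here. Once the DS records have been added at the registrar, one way to sanity-check that the chain validates is with dig; the domain below is a placeholder and this is just an illustration, not part of the original article:

# DS record published at the parent zone, and a DNSSEC-aware query
# (look for the AD flag when using a validating resolver).
dig DS yourdomain.tld +short
dig yourdomain.tld +dnssec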

 — 3 min read

Public Key Pinning is a security feature that tells a web browser to associate a public cryptographic key with a server or servers. When a web browser visits a website for the first time, it will read the HPKP header and store the hashes of the public keys that are provided. Each time the browser revisits that website, the hash of the presented public key is compared against the stored hashes; if they do not match, the web browser should display a warning.

The HPKP header adds protection against man-in-the-middle (MITM) attacks but, if incorrectly configured, can make your website display a TLS error for a long period of time.

Here’s a look at what this website publishes as its HPKP header.

Public-Key-Pins: pin-sha256="cYf9T3Il8DaCnaMaM0LatIAru1vqmcu2JSwS7uvyEB0=";
                 pin-sha256="u2q8QZ8Hjp3o/efZjsch9NKjnZmrISJQjwoi/rmsKLU=";
                 max-age=15768000; includeSubDomains

To explain it, the first pin-sha256 value is the hash of the public key that …
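The explanation is cut off above, but for context, a pin-sha256 value is just the base64-encoded SHA-256 hash of a certificate’s public key (SPKI). One way to generate such a value with openssl, assuming a PEM certificate at a placeholder path:

openssl x509 -in /path/to/your/certificate.pem -pubkey -noout \
    | openssl pkey -pubin -outform der \
    | openssl dgst -sha256 -binary \
    | openssl enc -base64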

 — 3 min read

This is a follow-up to an article I wrote earlier this year, but it is applicable to any similar set-up with some modifications. The only configuration requirement is that mail for all users is stored in the same place on the filesystem, rather than in separate locations, i.e. each user having ~/.Maildir.

EncFS

sudo apt-get install encfs

Once installed, you’ll need to make directories for the encrypted and decrypted mail to live in.

sudo mkdir /var/mail/encrypted /var/mail/decrypted

You’ll need to set up permissions so your mail user can access the fuse device and the new directories.

For me, this user and group are called vmail but yours may be different.

sudo chgrp mail /var/mail/decrypted
sudo chmod g+rw /var/mail/decrypted
sudo usermod -a -g fuse vmail
sudo chgrp fuse /dev/fuse
sudo chmod g+rw /dev/fuse

Next …
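The excerpt is cut off here; for reference, creating and mounting an EncFS volume over those directories typically looks something like the following, assuming the vmail user from above (the first run creates the volume interactively and asks for a password):

# Run as the mail user so the mounted files end up owned correctly;
# the paths match the directories created above.
sudo -u vmail encfs /var/mail/encrypted /var/mail/decrypted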

 — < 1 min read

I think most of us have been in a position where we really shouldn’t continue communicating with someone, or shouldn’t contact that person when drunk… You know what I mean, ex-relationships etc. (it happens.)

With Postfix you can block yourself from emailing that person again, which is quite useful.

In /etc/postfix/main.cf, make the start of your smtpd_recipient_restrictions look like below.

smtpd_recipient_restrictions =
    check_recipient_access hash:/etc/postfix/recipient_access,

Create a new file, /etc/postfix/recipient_access, and add the email address you wish to block, the word REJECT in capitals and, optionally, a reason. Example below.

test@example.com REJECT Don't be silly... You're probably drunk.

For every additional address you wish to block yourself from emailing, simply add it on a new line.
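One step worth noting that isn’t shown in this excerpt: hash: lookup tables need compiling with postmap after each change, and Postfix needs reloading to pick up the main.cf change.

sudo postmap /etc/postfix/recipient_access
sudo postfix reload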

You can see the email is blocked from being sent in /var/log/mail.log.

NOQUEUE: reject: RCPT from 123.123.123.123: 554 5 …
 — 9 min read

This mail platform does use a fair amount of memory; most of that usage comes from ClamAV and Solr, the latter being used for IMAP SEARCH. I personally use 2 GB.

I’ll warn you all now, this is a long article.

SSL

sudo openssl genrsa -out /etc/ssl/private/mail.key 4096
sudo openssl req -new -key /etc/ssl/private/mail.key -out /tmp/mail.csr
sudo openssl x509 -req -days 365 -in /tmp/mail.csr -signkey /etc/ssl/private/mail.key -out /etc/ssl/certs/mail.crt

MySQL

sudo apt-get install mysql-server

You’ll be prompted several times for a password for MySQL during the installation, just come up with something nice and secure.

The first thing to set-up will be the MySQL database and schema.

mysql -u root -p

Next up, create the database.

CREATE DATABASE mailserver CHARACTER SET utf8 COLLATE utf8_general_ci;

And grant some privileges, you’ll need …
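The excerpt stops mid-sentence; as an illustration only, a typical grant for this kind of setup might look like the following (the user name and password are placeholders, not the article’s actual values):

GRANT SELECT ON mailserver.* TO 'mailuser'@'127.0.0.1' IDENTIFIED BY 'CHANGE_ME';
FLUSH PRIVILEGES;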

 — 2 min read

With haproxy 1.5 finally being released we are lucky enough to get a basic interface around OCSP stapling.

Sadly this interface really is quite basic and it’s not the simplest thing to figure out without some trial and error.

According to the official documentation, you should be able to pipe your OCSP response to haproxy via its stats socket. Sadly I could not get this to work properly at all, so I decided to swap the piping for a file and reload solution.

You’ll need to get a copy of your certificate authority’s root certificate to proceed with this.

Looking for your OCSP URI

If you don’t know the URI you need to do an OCSP lookup against, you can find it in your certificate data.

openssl x509 -in /path/to/your/certificate -text

Inside the output, look for the following section.

Authority Information Access …
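The excerpt is cut off here; for context, the file-and-reload approach boils down to fetching the OCSP response yourself and saving it next to the certificate with a .ocsp extension, which haproxy 1.5 loads alongside the certificate on start/reload. A rough sketch, with all paths and the responder URL as placeholders:

sudo openssl ocsp -issuer /path/to/ca.pem \
    -cert /path/to/your/certificate.pem \
    -url http://ocsp.your-ca.example/ \
    -respout /path/to/your/certificate.pem.ocsp
sudo service haproxy reload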
 — < 1 min read

By default haproxy enables stateless SSL session resumption (the TLS session ticket extension described in RFC 5077), but you can disable this and force stateful session resumption instead. This functionality, like the SSL handling it relies on, is only available from haproxy 1.5.

Configuration

The option that disables TLS tickets, forcing stateful SSL session resumption, is as below.

no-tls-tickets

You will need to add it to your bind line, like below.

bind 0.0.0.0:443 ssl ... no-tls-tickets
 — 2 min read

I am a firm believer in using SSL as much as possible; for me that is pretty much everywhere and, thanks to the wonderful guys at GlobalSign, most of my SSL certificates are free because my projects are all open source.

I used a blog post by Hynek Schlawack as a base for my SSL setup; he is keeping that article as up-to-date as possible, so it should be a great source for any security-conscious people who would like to know more and get good explanations of each part.

Let’s take a brief look at how this website achieves its A* rating.

Key

I use a 4096-bit RSA key that is not a Debian weak key.

Protocols

I do not support SSLv2 or SSLv3, but I do support much stronger protocols:

  • TLS 1.2,
  • TLS 1.1 and
  • TLS 1.0.

dhparam

It’s a …
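The excerpt ends here, but for reference, generating a custom Diffie-Hellman parameter file is typically done with something like the following (the output path and size are my assumptions, not necessarily what this site uses):

sudo openssl dhparam -out /etc/ssl/private/dhparam.pem 4096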

 — < 1 min read

I have previously written an article on using SPDY with haproxy, but I have recently been annoyed that the SPDY check tool said I didn’t advertise a fallback to HTTP over SSL in the NPN protocol list.

After some digging I discovered it was actually quite simple to advertise multiple protocols using npn and haproxy.

Previously my article called for using the following section of configuration at the end of the bind line.

npn spdy/2

To advertise HTTP protocols as well as SPDY you simply need to add them to the npn list, using commas as a delimiter.

npn spdy/2,http/1.1
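For context, a full bind line using that list might look something like this; the certificate path is a placeholder, not the one from the original article.

bind 0.0.0.0:443 ssl crt /etc/ssl/private/your-site.pem npn spdy/2,http/1.1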
 — < 1 min read

I recently wrote an article on using haproxy, SSL and SPDY with nginx backend servers.

This article is a little extra on top of that to explain how to enable statistics for haproxy so you can monitor the backend statuses etc.

Example stats page

Moar stats!

Enabling stats

listen stats :8000
    mode http
    stats enable
    stats hide-version
    stats realm haproxy\ stats
    stats uri /
    stats auth admin:admin

Place the above content in the haproxy configuration file (/etc/haproxy/haproxy.cfg).

Be sure to replace admin:admin with a proper username and password; username first, password after the colon.

Restart haproxy and then browse to http://yoursite.com:8000.

 — 5 min read

I wrote an article last week explaining that I had changed my blog and built my own nginx packages with SPDY built in.

I decided I would take things a little further and poke around with haproxy some more. The initial plan was to compile the latest dev source of haproxy with SSL termination enabled.

In doing so I realised I would lose SPDY support, which upset me a little. After some digging I found that the 1.5-dev branch of haproxy supports npn and thus can handle SPDY.

I tweaked my builds a little more and managed to get haproxy running as an SSL terminating load balancer, with SPDY connections being sent off to my nginx servers with SPDY enabled and all other non-SPDY connections being passed on to an nginx virtual host with SPDY disabled.

Requirements

I have released my haproxy build as a debian file below …

 — 2 min read

I decided to rebuild syslog.tv as pure HTML using RST and Pelican and rebrand it as kura.gg.

In doing so I decided I would go all out and use SPDY and ngx_pagespeed (mod_pagespeed) for fun to see exactly what I could do.

Sadly no version of nginx has been officially released with SPDY or ngx_pagespeed enabled, but you can compile nginx from source to enable SPDY, so I thought I would go ahead and do it, releasing some Debian packages in the process.

After compiling nginx from the source package available at the Ubuntu PPA, I decided I would go further and compile in ngx_pagespeed.

 — < 1 min read

I spend a LOT of time with tunnels open to multiple machines, connecting directly to PostgreSQL, RabbitMQ and many other services all via SSH.

I have written several helper functions and this is the final version that I created in a small competition with @codeinthehole.

Gist removed. Sorry.

Installation

Simply add the contents to ~/.bashrc

Usage

Usage is pretty simple: just call portforward from the command line, pressing <TAB> as you type a server name from your ~/.ssh/config file, and the same for the port.

portforward sy<TAB>

Will become:

portforward syslog.tv

And finally

portforward syslog.tv 15672
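Since the gist has been removed, here is a minimal sketch of what such a helper could look like. This is not the original function and it leaves out the port completion; it just illustrates the idea using bash completion and ssh -L:

# Not the original helper; a rough equivalent for illustration only.
portforward() {
    local host="$1"
    local port="$2"
    # Forward the remote port to the same port locally and stay in the foreground.
    ssh -N -L "${port}:localhost:${port}" "${host}"
}

# Tab-complete host names from ~/.ssh/config.
_portforward_complete() {
    local cur="${COMP_WORDS[COMP_CWORD]}"
    local hosts
    hosts=$(awk '/^Host / { for (i = 2; i <= NF; i++) print $i }' ~/.ssh/config)
    COMPREPLY=( $(compgen -W "${hosts}" -- "${cur}") )
}
complete -F _portforward_complete portforward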
 — < 1 min read

I wrote this little Python program a while ago and now people are starting to email me about it, asking for it to be part of the DenyHosts Debian packages so I figured I’d write a quick article on it.

If you’re like my developers, you’ll find yourself getting banned from servers all the time and having to come and speak to someone like me (your sys engineer/admin); or maybe you are an admin and are sick of people banning themselves and want an easy way to unban them.

So give this a try:

https://github.com/kura/denyhosts-unban

From the GitHub page you can download either the tarballs, zipballs or a Debian .deb package. Install it using the instructions in the README and you’re good to go.

Unban

Unbanning is simple; you can either unban a single IP using:

sudo denyhosts-unban 10.0.0.1 …
 — 1 min read

If, like me, you run into issues when using OpenJDK, you may want to switch to Sun’s Java. My issues come from its memory problems when allocating and using large amounts of memory, mostly with Solr in our case, but obviously I’d switch for other high-memory-usage instances too.

So without further ado, lets get the installation going.

You’ll need Debian’s “add-apt-repository”; on servers this doesn’t usually come by default, so we’ll need to install it.

sudo apt-get install python-software-properties

Next we need to add Java’s PPA.

sudo add-apt-repository ppa:sun-java-community-team/sun-java6

Once this is done we’ll need to update our apt caches and install Java 6.

sudo apt-get update
sudo apt-get install sun-java6-jdk

Now that this is installed we should check the Java version; remember it for later.

java -version

You’ll get something like this

java version "1.6.0_20" OpenJDK Runtime Environment (IcedTea6 1.9 …
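The output above is cut off, but if java -version still reports OpenJDK at this point, the usual next step (my assumption, not shown in the excerpt) is to switch the system default JVM:

sudo update-alternatives --config java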
 — 3 min read

I have built and released an open-source email server in the past for testing send rates and speeds; this project was called SimpleMTA and is available here.

Recently I have rebuilt this project for an internal project at work using the Tornado framework. Sadly this project as a whole cannot be released but a version of this code will be released in the near future.

Until that is released, I have launched a new service called blackhole.io.

What is blackhole.io?

blackhole.io is a completely open mail relay that forgets anything that is sent to it, meaning there are no auth requirements and no storage of email data within the service. Literally anyone can send anything to it and have it never get delivered.

You can even send commands out of order, meaning you can call the DATA command without ever using HELO, MAIL FROM or RCPT TO …

 — 3 min read

I’ve recently been toying with my Raspberry Pi mirror, including moving it out on to Amazon’s S3. I’ve written an article on how to back up to S3, but that isn’t enough when it comes to serving data from S3.

I needed the ability to rsync data from the official Raspberry Pi servers on to mine and then into S3, and for that I used s3fs and FUSE.

FUSE

You can actually do this successfully without requiring FUSE, just by installing the s3fs binary on to your system, but this only allows the user who mounted it to access the bucket, and it is not possible via /etc/fstab.

FUSE allows you to implement a filesystem within a userspace program, thus allowing us to give other users access and auto-mount using /etc/fstab.

Installation

Fuse

Installing FUSE is simple

sudo apt-get install fuse-utils

s3fs

We …
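The excerpt stops here; a rough sketch of what mounting a bucket with s3fs tends to look like, with the bucket name, mount point and credentials as placeholders (exact options vary between s3fs versions):

# Credentials file used by s3fs (ACCESS_KEY_ID:SECRET_ACCESS_KEY).
echo 'AKIAXXXXXXXX:SECRETXXXXXXXX' | sudo tee /etc/passwd-s3fs
sudo chmod 600 /etc/passwd-s3fs

# Mount the bucket and allow other users to read it.
sudo mkdir -p /mnt/mirror
sudo s3fs your-bucket /mnt/mirror -o allow_other

# And a possible /etc/fstab entry (older s3fs syntax):
# s3fs#your-bucket /mnt/mirror fuse allow_other 0 0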

 — 2 min read

I have several servers powering syslog, including its Raspberry Pi mirror, load balancer and email servers. All of my servers are hosted using Linode in their London data centre and have Linode’s back-up system doing both daily and weekly snapshots.

For the app and database servers I do server-side backups, storing each website and its database in its own folder within /backup, in case I need a quick restore to fix something rather than because the server has died.

This is all well and good, but I like having an off-site backup too, and for that I use S3.

About S3

Amazon’s S3 is pretty cheap and very easy to use. Because only data is going in you don’t pay a transfer fee, and the cost of storage is very affordable; you can see a pricing list here.

To do the backup I use a …
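The excerpt ends mid-sentence, so I won’t guess at the exact tool used; purely as an illustration, syncing a backup directory to S3 with a tool like s3cmd looks roughly like this (the bucket name is a placeholder):

s3cmd sync --delete-removed /backup/ s3://your-backup-bucket/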

 — 2 min read

The unattended-upgrades package used on Debian is based on the one from Ubuntu. It is generally pretty safe in my opinion but I only ever enable it for security upgrades.

Installation

apt-get install unattended-upgrades apticron

unattended-upgrades handles the actual upgrades; apticron is used to email you about available updates. It is not required, but I like it.

Configuring unattended-upgrades

Open up /etc/apt/apt.conf.d/50unattended-upgrades and change it to the content below.

APT::Periodic::Enable "1";
APT::Periodic::Update-Package-Lists "1";
APT::Periodic::AutocleanInterval "7";
APT::Periodic::Unattended-Upgrade "1";
Unattended-Upgrade::Mail "**YOUR_EMAIL_HERE**";

// Automatically upgrade packages from these (origin, archive) pairs
Unattended-Upgrade::Allowed-Origins {
    "${distro_id} stable";
    "${distro_id} ${distro_codename}-security";
};

// Automatically reboot *WITHOUT CONFIRMATION*
// if the file /var/run/reboot-required is found after the upgrade
Unattended-Upgrade::Automatic-Reboot "false";

So let’s explain the above. As you can see, we enable periodic updates, enable updating of package lists (which triggers an apt-get update), enable autoclean …

 — < 1 min read

Installation

To install we need to run the following command:

sudo apt-get install -y sks

Now we build the key database:

sudo sks build

And change the permissions for the sks user:

sudo chown -R debian-sks:debian-sks /var/lib/sks/DB

Next we need to make sks start from init; open up /etc/default/sks in your favourite editor and change *initstart* to look like below:

initstart=yes

Now we can start the service with:

sudo /etc/init.d/sks start

Your keyserver will now be up and running on port 11371.
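At this point you can sanity-check the server from any machine with GnuPG; the hostname and key ID below are placeholders, this is just an illustration:

gpg --keyserver hkp://your.keyserver.tld:11371 --send-keys YOURKEYID
gpg --keyserver hkp://your.keyserver.tld:11371 --search-keys you@example.com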

Web interface

We’ll need to create a web folder within sks with the following command:

sudo mkdir -p /var/lib/sks/www/

Change its permissions so the sks user can access it.

sudo chown -R debian-sks:debian-sks /var/lib/sks/www

And finally we need to create a single HTML file for the interface; I have provided that …

 — 4 min read

Installation

First up we’ll need to install git and some Python tools to get Gitosis installed.

sudo apt-get install -y git-core gitweb python-setuptools

Next we have to clone gitosis from its git repository and install it.

cd /tmp
git clone git://eagain.net/gitosis.git
cd gitosis
sudo python setup.py install

Adding your git user

sudo adduser --system --shell /bin/sh --gecos 'git version control' --group --disabled-password --home /home/git git

The above command creates a new system user with /bin/sh as its shell, no password and a home directory of /home/git/, and also creates a group with the same name.

Initialising gitosis

You’ll need an SSH key for this. If you have one, simply copy the contents of it to your new git server; if you do not have one, you can generate one on your machine using

ssh-keygen

And then …
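The excerpt is cut off here; for reference, initialising gitosis with your public key is normally done along these lines (the key path is a placeholder for wherever you copied your public key to):

sudo -H -u git gitosis-init < /tmp/id_rsa.pub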

 — 1 min read

*I would generally not advise using this unless you are comfortable debugging, from logs, why the OOM killer was triggered and why kernel panics happened after the fact.*

It is possible to configure your kernel to panic when the OOM killer is triggered, which in itself is not useful, but coupled with a kernel option for auto-rebooting the system when the kernel panics it can be a very useful tool.

Think before implementing this and use at your own risk, I take zero responsibility for you using this.

sudo sysctl vm.panic_on_oom=1
sudo sysctl kernel.panic=X # X is the amount of seconds to wait before rebooting

*DO NOT FORGET TO CHANGE X*

This will inject the changes into the currently running system, but they will be forgotten on reboot, so use the lines below to make them permanent.

sudo echo "vm.panic_on_oom=1" >> /etc/sysctl.conf
sudo echo "kernel.panic …
 — 1 min read

Preparation

First we need to make sure we have everything needed to compile MK Livestatus and run it.

sudo apt-get install make build-essential xinetd ucspi-unix

MK Livestatus

Grab the MK Livestatus source from here; currently it’s version 1.1.10p3, but update the commands below to match your version.

wget https://mathias-kettner.de/download/mk-livestatus-1.1.10p3.tar.gz
tar xvzf mk-livestatus-1.1.10p3.tar.gz
cd mk-livestatus-1.1.10p3
./configure
make
sudo make install

Xinetd

Now that it’s compiled we need to write an xinetd config for it. Create a new file called /etc/xinetd.d/livestatus and put the following in it.

service livestatus {
    type = UNLISTED
    port = 6557
    socket_type = stream
    protocol = tcp
    wait = no
    cps = 100 3
    instances = 500
    per_source = 250
    flags = NODELAY
    user = nagios
    server = /usr/bin/unixcat
    server_args = /var/lib/nagios3/rw/live
    only_from = 127.0.0.1 # modify this to …