— 2 min read

The point

The whole point of this is to take as much load off Apache as possible, keeping the server running nice and smooth.

Configuration

The configuration below means that nginx will serve basically everything:

  • static files
  • uploaded files and
  • cached content

Simply replace the VARIABLES below and everything should be good to go. If copy-pasting from below isn’t working properly, you can download a full copy from here.

server {
  listen 80;
  server_name **DOMAIN_HERE**;
  access_log /var/log/nginx/access.**DOMAIN_HERE**.log;

  gzip on;
  gzip_disable msie6; # disable gzip for IE6
  gzip_static on;
  gzip_comp_level 9; # highest level of compression
  gzip_proxied any;
  gzip_types text/plain text/css application/x-javascript text/xml application/xml application/xml+rss text/javascript;

  proxy_redirect off;
  proxy_set_header Host $host;
  proxy_set_header X-Real-IP $remote_addr;
  proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
  proxy_pass_header Set-Cookie;

  root **/PATH/TO/WORDPRESS**;

  # default location, used for the basic proxying
  location / {
    # if we're requesting a file and …
 — 2 min read

Pound is a great little load balancer: it’s fast, open source and supports SSL termination, which is great!

Install

sudo apt-get install pound

Configuration

The default configuration should be pretty good for most purposes, but feel free to tweak as you require.

HTTP

We’ll first look at load balancing HTTP, in case you don’t want or need HTTPS load balancing.

We’ll need to delete all the content within the ListenHTTP block; once done it should look like this:

ListenHTTP
End

Now we add an address and port to listen on, and finally a line to remove an HTTP header:

ListenHTTP
    Address 0.0.0.0 # all interfaces
    Port 80
    HeadRemove "X-Forwarded-For"
End

This is a basic configuration; for each backend we want to load balance we’ll need to add a service within that listener, as sketched below.
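As an illustration, here’s a minimal sketch of the listener with a single service balancing two backends; the 10.0.0.x addresses are assumptions, so substitute your own.

ListenHTTP
    Address 0.0.0.0 # all interfaces
    Port 80
    HeadRemove "X-Forwarded-For"

    Service
        BackEnd
            Address 10.0.0.10 # hypothetical backend 1
            Port 80
        End
        BackEnd
            Address 10.0.0.11 # hypothetical backend 2
            Port 80
        End
    End
End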

You’ll notice we’re removing incoming headers called X-Forwarded-For; this is to make …

 — 2 min read

A simple yet effective method for protecting your mail server from spam is to use greylisting. In simple terms, when an email is received the server temporarily rejects it with a 450 response code claiming that the server is busy. The sending server should then attempt delivery again at a later point in time; if enough time has passed, the recipient server will accept the incoming mail and whitelist the sender address for a period of time.

This is effective because most spam servers are configured not to retry the send, whereas real mail servers generally will retry. Sadly this does not protect against spam coming from compromised mail servers or accounts, like those on Hotmail.com.

Installation

sudo apt-get install postgrey

Configuring Postgrey

By default Postgrey runs on 127.0.0.1:60000, which is the local loopback interface, so it is not exposed to the …
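As a rough sketch of where that address gets used (the surrounding restrictions here are assumed from part one of this series, not taken from this post), the policy check is added to /etc/postfix/main.cf like so:

smtpd_recipient_restrictions = permit_mynetworks,
    permit_sasl_authenticated,
    reject_unauth_destination,
    check_policy_service inet:127.0.0.1:60000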

 — 1 min read

This really should be quite a quick and simple post.

I use several tools to protect my mail servers from spam; the most effective of these, I’ve found, is using external lists in conjunction with reject_rbl_client and reject_rhsbl_client.

  • zen.spamhaus.org: a single lookup for querying the SBL, XBL and PBL databases
      - SBL: verified sources of spam, including spammers and their support services
      - XBL: illegal third-party exploits (e.g. open proxies and Trojan Horses)
      - PBL: static, dial-up & DHCP IP address space that is not meant to be initiating SMTP connections
  • dnsbl.sorbs.net: unsolicited bulk/commercial email senders
  • spam.dnsbl.sorbs.net: hosts that have allegedly sent spam to the admins of SORBS at any time
  • b1.spamcop.net: IP addresses which have been used to transmit reported email to SpamCop users
  • rhsbl.ahbl.org: domains sending spam, domains owned by spammers, comment spam domains, spammed URLs …
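As a sketch of how these lists slot in (assuming the smtpd_recipient_restrictions from earlier in this series), the lookups go after the permit and reject_unauth_destination rules in /etc/postfix/main.cf:

smtpd_recipient_restrictions = permit_mynetworks,
    permit_sasl_authenticated,
    reject_unauth_destination,
    reject_rbl_client zen.spamhaus.org,
    reject_rbl_client dnsbl.sorbs.net,
    reject_rhsbl_client rhsbl.ahbl.org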

 — < 1 min read

This is part 4 of my series on configuring a mail server, please see part one, part two and part three if you’re not familiar with them.

The content of this article was written to work with the previous three articles but should work on any SpamAssassin set-up.

Razor

First off we need to install Razor.

sudo apt-get install razor

Now we need to run three commands to register and configure Razor.

sudo razor-admin -home=/etc/spamassassin/.razor -register
sudo razor-admin -home=/etc/spamassassin/.razor -create
sudo razor-admin -home=/etc/spamassassin/.razor -discover

These three commands should be pretty self-explanatory: they register Razor, create its configuration and discover the Razor servers.

Pyzor

Now we’ll install Pyzor.

sudo apt-get install pyzor

Now we also need to tell Pyzor to discover its servers.

pyzor --homedir /etc/mail/spamassassin discover

SpamAssassin

Add the following lines to the end …
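For reference, here’s a minimal sketch of the sort of lines involved, assuming the home directories used in the commands above; treat these as assumptions rather than the exact lines from this post.

# assumed additions to /etc/spamassassin/local.cf
use_razor2 1
razor_config /etc/spamassassin/.razor/razor-agent.conf
use_pyzor 1
pyzor_options --homedir /etc/mail/spamassassin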

 — < 1 min read

I have created scripts that handle these tasks for you, available here.

First thing we need to do is create a sources list specifically for security updates.

grep -- "-security" /etc/apt/sources.list | grep -v "#" | sudo tee /etc/apt/security.sources.list

Now that this is done, we can simply continue to use the command below to trigger security-only upgrades:

sudo apt-get upgrade -o Dir::Etc::SourceList=/etc/apt/security.sources.list

Note

This will work until you upgrade your distro (e.g. 10.04 -> 12.04), at which point you will need to re-run the first command to regenerate the security.sources.list file.

 — 3 min read

This is part 3 of my guide to getting a mail server configured with all the sexy bits to improve deliverability, spam and virus protection.

You can view part 1 here and part 2 here.

The key pair

We need to create a key pair to sign emails with:

openssl genrsa -out private.key 1024
openssl rsa -in private.key -out public.key -pubout -outform PEM
sudo mkdir /etc/dk/
sudo cp private.key /etc/dk/dk.key

Now we can move on to DK and DKIM signing; make sure you keep the public key for later.
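That public key eventually gets published in DNS as a TXT record. As a sketch, assuming a selector called mail and with the PEM headers stripped from public.key, the record would look roughly like this:

mail._domainkey.example.com. IN TXT "v=DKIM1; k=rsa; p=YOUR_PUBLIC_KEY_HERE"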

DKIM

First we’ll need to install an application to sign our emails.

sudo apt-get install dkim-filter

Once installed we need to configure it. Open up /etc/default/dkim-filter and modify the file to look like below, replacing <DOMAIN> with the domain you want to sign email from.

DAEMON_OPTS="-l -o X-DomainKeys …
 — 3 min read

This is part 2 of my series on mail servers on Debian 6/Ubuntu 10.04, though it should work on other versions of each. Part 1 is available here.

SpamAssassin

First off we’ll get SpamAssassin installed and configured.

sudo apt-get install spamassassin

We’ll be configuring SpamAssassin as a daemon that Postfix interfaces with using spamc.

SpamAssassin on Debian and Ubuntu runs as root, which is NOT a good thing, so we’ll need to make some changes.

We’ll add a group called spamd with GID 5001.

sudo groupadd -g 5001 spamd

Next we add a user spamd with UID 5001 and add it to the spamd group, as well as setting its home directory to /var/lib/spamassassin and making sure it has no shell or SSH access.

sudo useradd -u 5001 -g spamd -s /usr/sbin/nologin -d /var/lib/spamassassin spamd
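As a quick sanity check, looking the account up should show the UID and GID we just assigned:

id spamd
# expected output: uid=5001(spamd) gid=5001(spamd) groups=5001(spamd)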

Now …

 — 3 min read

This guide is part 1 of what I plan will be a couple of guides that take you through installing a base mail system, SpamAssassin, DKIM and much more. Stay tuned.

This guide was written for Debian 6 but should be the same or similar for Debian 5 and Ubuntu 10.04 and above.

The installation

sudo apt-get install dovecot-imapd postfix sasl2-bin libsasl2-2 libsasl2-modules

Choose “Internet site” when prompted and enter the fully qualified name of your server.

Once all this is done installing, we’ll need to make some changes; first off will be Postfix.

Postfix

Open up /etc/postfix/main.cf and add the following to the end of the file:

home_mailbox = Maildir/
smtpd_sasl_auth_enable = yes
smtpd_sasl_security_options = noanonymous
smtpd_sasl_local_domain = $myhostname
broken_sasl_auth_clients = yes

smtpd_sender_restrictions = permit_sasl_authenticated,
    permit_mynetworks,

smtpd_recipient_restrictions = permit_mynetworks,
    permit_sasl_authenticated,
    reject_unauth_destination,
    reject_unknown_sender_domain,

Here we basically tell Postfix to store all email in maildir format in the user’s home directory. We …

 — < 1 min read

Recently I started using Pound as a load balancer to a cluster of nginx servers and found my access logs were filled with the IP address of the load balancer. I did some digging and found the correct way to “fix” this.

First thing you need to do is make sure you remove X-Forwarded-For from Pound:

ListenHTTP
    # ... snip ...
    HeadRemove "X-Forwarded-For"
End

Once this is done, reload Pound.

Next you need nginx compiled with the realip module: https://wiki.nginx.org/NginxHttpRealIpModule

On Ubuntu/Debian servers this module comes by default, otherwise you may have to compile it in yourself using the following option:

--with-http_realip_module

Once this is all done, modify your nginx vhosts and add the following two lines:

set_real_ip_from [IP];
real_ip_header X-Forwarded-For;

Where [IP] is the IP address of your load balancer.

To configure this to work with Apache you need the mod_rpaf module.

 — 1 min read

I’d first like to point out that although the VMDKs are shared between hosts using a shared SCSI bus, they are not synced: if you write to the mount point on one machine, the change will not show up on other machines with the same mount point until you remount the drive. Annoying, but understandable.

To business.

First off, all machines that you want to share this VMDK with will need to be OFFLINE.

Next up we create the VMDK. I find it easiest to do this by adding hardware to an already existing machine, and I’m going to use one that I want the VMDK shared with to make it even simpler.

Create a new disk

You will need to enable the clustering features as shown below, which means you cannot use thin provisioning.

Choose disk size

You will need to add the VMDK to a new SCSI bus; this will usually begin with 1: or …

 — < 1 min read

I was recently tasked with adding Google tracking cookies to our nginx logging for a couple of sites, so the logs could be pushed through a log processor.

It turned out to be a little trickier than it would have been with Apache, but the process itself is still quite simple.

Open up the server definition you wish to add it to and add a custom log format like below:

log_format g-a '$remote_addr - $remote_user [$time_local] '
    '"$request" $status $body_bytes_sent '
    '"$http_referer" "$http_user_agent" '
    '"__utma=$cookie___utma;__utmb=$cookie___utmb;__utmc=$cookie___utmc;__utmv=$cookie___utmv;__utmz=$cookie___utmz"';

This log format can then be added to your access log like below:

access_log /var/log/nginx/access.example.com.log g-a;

Reload nginx

sudo /etc/init.d/nginx reload

If all goes well, you should see the Google Analytics cookies appearing in your access logs like below:

1.1.1.1 - - [05/Jun/2011:20:35:50 +0100] "GET / HTTP …
 — < 1 min read

Quite a simple one:

ssh -f USER@INTERMEDIATE_DEVICE -L LOCAL_PORT:DESTINATION_DEVICE:DESTINATION_PORT -N

  • -f tells ssh to go to the background
  • -L binds a local port to a remote device and port
  • -N tells ssh not to execute any commands

So, to tunnel from local port 8000 to a remote machine on port 22, you’d use:

ssh -f user@server.test.com -L 8000:server.destination.com:22 -N

Once the tunnel is open you can use the following to ssh or scp data around

ssh localhost -p 8000
scp -P 8000 /path/to/local/file user@localhost:~
scp -P 8000 user@localhost:/path/to/remote/file .

I use SSH tunnels all the time to remotely access and use one of our Solr servers that is blocked behind a firewall.

 — < 1 min read

The command below no longer works; for an updated version that does work and should continue to work (until you upgrade to a new distro version, e.g. 10.04 -> 12.04), please see here.

Really simple, and it should work for most cases; I’ve not found anything wrong with it.

sudo aptitude update && sudo aptitude install '?and(~U,~Asecurity)'
 — 6 min read

Lately I’ve been doing a lot of work with Varnish, this includes testing it within a load balanced environment, putting it behind nginx, putting it in front of Solr, the list goes on.

This blog post will hopefully give you an insight into a simple way of combining nginx, Varnish and Apache to create a powerful WordPress environment that can really take a hammering.

I’m going to assume you already have Apache and nginx working together, if not I suggest you read my other articles on these subjects to learn how to combine them.

Installing Varnish

sudo apt-get install varnish

Configuring Apache

I suggest binding Apache to port 81. This is easy to change: open the following file in your favourite editor.

/etc/apache2/ports.conf

Change the Listen and NameVirtualHost lines to:

Listen 81
NameVirtualHost *:81

This will mean you need to go and change all …

 — < 1 min read

First we need to install sshfs.

sudo apt-get install sshfs fuse-utils

Now we make a mount point; I’m going to use a directory in my home directory for this.

mkdir ~/remote-content

And now we simply mount our remote directory to it.

sshfs user@host:/path/to/location ~/remote-content

It’s as simple as that.
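When you’re done with the mount, it can be released with fusermount, which comes with the fuse-utils package we installed above:

fusermount -u ~/remote-content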

 — 3 min read

Quick introduction

My employers presented me with a challenge this week. The task was not difficult in the end but to me it was an untried concept involving MySQL.

I have never been a fan of MySQL and generally turn my nose up at the thought of using it, let alone replicating it etc.

The task in question? Master -> Master -> Slave -> Slave replication.

From this point forward I will expect you to have MySQL installed and set up as normal.

  • Master 1 will be known as Master 1 and Slave 2 with IP 10.1.1.1
  • Master 2 will be known as Master 2 and Slave 1 with IP 10.1.1.2
  • Slave 1 will be known as Slave 3 with IP 10.1.1.3
  • and Slave 2 will be known as Slave 4 with IP 10.1.1.4

Master 1

Modify your MySQL config file, usually named …
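As a rough sketch of the sort of settings involved (the values below are assumptions, not the exact config from this guide), Master 1 needs a unique server ID, binary logging, and auto-increment settings offset so the two masters never generate clashing keys:

# assumed snippet for Master 1 (10.1.1.1); Master 2 would use
# server-id = 2 and auto_increment_offset = 2
server-id                = 1
log_bin                  = /var/log/mysql/mysql-bin.log
auto_increment_increment = 2
auto_increment_offset    = 1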

 — < 1 min read

I needed to quickly modify 500 XML files, each about 10MB in size; thankfully Linux makes that pretty fast and very easy.

find . -name "*.xml" -print | xargs sed -i 's/FROM/TO/g'

A semi “real world” example:

find . -name "*.xml" -print | xargs sed -i 's/foo/bar/g'
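If any of the file names contain spaces, the null-delimited variants of the same pipeline are safer:

find . -name "*.xml" -print0 | xargs -0 sed -i 's/foo/bar/g'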
 — < 1 min read

Recently I had to install Oracle on a virtual machine, but didn’t find out until after I’d spun up the machine that Oracle required at least 2GB of swap space; my machine did not have enough.

Thankfully it’s quite simple to increase swap space using VMware ESX: simply add a new drive to the machine as you normally would. I used 5GB.

Detecting the new SCSI drive and partitioning it

This bit is simple; I’m going to assume you’re logged in as root.

echo "- - -" | sudo tee /sys/class/scsi_host/host0/scan && sudo fdisk -l

If host0 doesn’t work, try changing to host1, host2 etc.

Now we need to partition the drive; for me it was /dev/sdb.

sudo cfdisk /dev/sdb

Create a new logical partition, set its type to 82 (Linux swap) and simply write the changes.

Adding swap

Next we simply add …
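For reference, assuming the new logical partition came up as /dev/sdb5 (an assumption; check fdisk -l), the usual steps are to format it as swap, enable it and confirm it registered:

sudo mkswap /dev/sdb5
sudo swapon /dev/sdb5
swapon -s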

 — 3 min read

During a seemingly normal work day a colleague pointed out a problem to me and asked if I had any solution.

The problem was that they were trying to use InfoBright (https://www.infobright.com/) for some data crunching, export the data to CSV and then import it into MySQL. My first idea was to output the data from InfoBright as SQL and pipe it directly into MySQL; this turned out not to be possible, as the version of IB they were using only supported output as CSV.

This in itself wasn’t a problem; the problem lay with the fact that IB would only output the file with 0660 permissions, and although both IB and MySQL ran as user mysql and group mysql, MySQL itself flat out refused to import the CSV file unless it was world readable (0664), which was slightly annoying.

If the CSV didn’t …

 — < 1 min read

Sometimes you want to be able to install packages on another machine without the hassle of a long apt-get install command or having to write down every single package you’ve installed.

Luckily Debian has the wonderful dpkg, which has one method for generating a list of installed packages and another for importing a list.

Generating a list of installed packages

sudo dpkg --get-selections > selections

This will generate a file called selections, which will contain something like:

... snip ...
adduser install
apache2 install
apache2-mpm-prefork install
apache2-utils install
apache2.2-bin install
apache2.2-common install
apt install
... snip...

This is just a simple plain-text file, so it can be copied between servers.

Installing packages from an exported list

This is almost just as easy. First we need to actually set the list of selected packages:

sudo dpkg --set-selections < selections

Then we need to actually do an update and install

sudo apt-get update && sudo …
 — 4 min read

Configuration changes

I made some modifications to my nginx configuration this weekend to improve performance and clear up some bugs.

upstream backend {
    server 127.0.0.1:81 fail_timeout=120s;
}

server {
    listen 80;
    server_name syslog.tv;

    access_log /var/log/nginx/access.syslog.tv.log;

    gzip on;
    gzip_disable msie6;
    gzip_static on;
    gzip_comp_level 9;
    gzip_proxied any;
    gzip_types text/plain text/css application/x-javascript text/xml
    application/xml application/xml+rss text/javascript;

   location / {
        root /var/www/syslog.tv;

        set $wordpress_logged_in "";
        set $comment_author_email "";
        set $comment_author "";

        if ($http_cookie ~* "wordpress_logged_in_[^=]*=([^%]+)%7C") {
             set $wordpress_logged_in wordpress_logged_in_$1;
        }

        if ($http_cookie ~* "comment_author_email_[^=]*=([^;]+)(;|$)") {
            set $comment_author_email comment_author_email_$1;
        }

        if ($http_cookie ~* "comment_author_[^=]*=([^;]+)(;|$)") {
            set $comment_author comment_author_$1;
        }

        set $my_cache_key "$scheme://$host$uri$is_args$args$wordpress_logged_in$comment_author_email$comment_author";

        client_max_body_size 8m;

        proxy_redirect off;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass_header Set-Cookie;
        proxy_cache cache;
        proxy_cache_key $my_cache_key;
        proxy_cache_valid 200 302 60m;
        proxy_cache_valid 404 1m;
        proxy_pass http://backend;
    }

    location ~* \.(jpg|png|gif|jpeg|js|css …
 — < 1 min read

Sometimes keeping multiple copies of keys, certificates and root certificates can be a real annoyance; thankfully it’s quite simple to convert them into a single PKCS#12 file with the following command.

openssl pkcs12 -export -out certificate.pkcs -in certificate.crt -inkey private.key -certfile rootcert.crt -name "PKCS#12 Certificate Bundle"

This will create a file called certificate.pkcs, which will contain the contents of certificate.crt, private.key and your root certificate rootcert.crt. It will also carry an internal reference to its name, PKCS#12 Certificate Bundle, to make it easier to inspect the certificate and find what it should contain; usually you’d set this to something more useful.
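To check what actually ended up in the bundle, you can dump its details back out (you’ll be prompted for the export password you set above):

openssl pkcs12 -info -in certificate.pkcs -noout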