The command below no longer works; for an updated version that does work (and should continue to work until you upgrade to a new distro version, e.g. 10.04 -> 12.04) please see `here`_.

Really simple and should work for most cases; I’ve not found anything wrong with it. The pattern below matches packages that are upgradable (~U) and that come from the security archive (~Asecurity).

sudo aptitude update && sudo aptitude install '?and(~U,~Asecurity)'

Lately I’ve been doing a lot of work with Varnish; this includes testing it within a load balanced environment, putting it behind nginx and putting it in front of Solr, the list goes on.

This blog post will hopefully give you an insight into a simple way of combining nginx, Varnish and Apache to create a powerful WordPress environment that can really take a hammering.

I’m going to assume you already have Apache and nginx working together; if not, I suggest you read my other articles on these subjects to learn how to combine them.

Installing Varnish

sudo apt-get install varnish

Configuring Apache

I suggest binding Apache to port 81. This is easy to change; open the following file in your favourite editor.

/etc/apache2/ports.conf

Change the Listen and NameVirtualHost lines to:

Listen 81
NameVirtualHost *:81

This will mean you need to go and change all …
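
The excerpt cuts off here, but to sketch the Varnish side of the setup: with Apache bound to port 81, Varnish just needs a backend definition pointing at it. A minimal /etc/varnish/default.vcl along these lines (Varnish 2.x syntax, values illustrative) would do the job:

backend default {
    # Apache, as configured above
    .host = "127.0.0.1";
    .port = "81";
}

Varnish then listens on its own port (6081 by default on Debian) and nginx proxies to that.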

First we need to install sshfs.

sudo apt-get install sshfs fuse-utils

Now we make a mount point; I’m going to use a directory in my home directory for this.

mkdir ~/remote-content

And now we simply mount our remote directory to it.

sshfs user@host:/path/to/location ~/remote-content
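
When you’re finished with the mount, detach it cleanly with fusermount:

fusermount -u ~/remote-content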

It’s as simple as that.

Quick introduction

My employers presented me with a challenge this week. The task was not difficult in the end, but to me it was an untried concept involving MySQL.

I have never been a fan of MySQL and generally turn my nose up at the thought of using it, let alone replicating it.

The task in question? Master -> Master -> Slave -> Slave replication.

From this point forward I will expect you to have MySQL installed and set up as normal.

  • Master 1 will be known as Master 1 and Slave 2 with IP 10.1.1.1
  • Master 2 will be known as Master 2 and Slave 1 with IP 10.1.1.2
  • Slave 1 will be known as Slave 3 with IP 10.1.1.3
  • and Slave 2 will be known as Slave 4 with IP 10.1.1.4

Master 1

Modify your MySQL config file, usually named …
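
The excerpt ends before the configuration itself, but as a hedged sketch, the Master 1 half of a master-master pair usually needs something like the following in its [mysqld] section (server-id and paths illustrative):

server-id                = 1
log_bin                  = /var/log/mysql/mysql-bin.log
# stagger auto-increment values so the two masters never collide
auto_increment_increment = 2
auto_increment_offset    = 1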

Recently I had to install Oracle on a virtual machine, but didn’t find out until after I’d spun up the machine that Oracle required at least 2GB of swap space; my machine did not have enough.

Thankfully it’s quite simple to increase swap space. Using VMware ESX, simply add a new drive to the machine as you normally would; I used 5GB.

Detecting the new SCSI drive and partitioning it

This bit is simple; I’m going to assume you’re logged in as root.

echo "- - -" | sudo tee /sys/class/scsi_host/host0/scan && sudo fdisk -l

If host0 doesn’t work, try changing to host1, host2 etc.

Now we need to format the drive, for me it was /dev/sdb.

sudo cfdisk /dev/sdb

Create a new logical partition, set its type to 82 (Linux swap) and simply write the changes.

Adding swap

Next we simply add …
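
The excerpt truncates here; the remaining steps, assuming the new logical partition appeared as /dev/sdb5 (check your fdisk -l output), are typically:

sudo mkswap /dev/sdb5
sudo swapon /dev/sdb5

To make it permanent, add an fstab entry along these lines:

/dev/sdb5 none swap sw 0 0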

During a seemingly normal work day a colleague pointed out a problem to me and asked if I had any solution.

The problem was that they were trying to use InfoBright (https://www.infobright.com/) for some data crunching, export the data to CSV and then import it into MySQL. My first idea was to output the data from InfoBright as SQL and pipe it directly into MySQL; this turned out not to be possible as the version of IB they were using only supported output as CSV.

This in itself wasn’t a problem. The problem lay with the fact that IB would only output the file with 0660 permissions, and although both IB and MySQL ran as user mysql and group mysql, MySQL itself flat out refused to import the CSV file unless it was world readable (0664), which was slightly annoying.

If the CSV didn’t …
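
The excerpt cuts off before the fix, but one straightforward workaround (not necessarily the one the post settled on) is to loosen the permissions before importing; the path and table names here are purely illustrative:

sudo chmod 0664 /tmp/ib_export.csv
mysql -e "LOAD DATA INFILE '/tmp/ib_export.csv' INTO TABLE mydb.mytable FIELDS TERMINATED BY ',';"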

Sometimes you want to be able to install packages on another machine without the hassle of a long apt-get install command or having to write down every single package you’ve installed.

Luckily Debian has the wonderful dpkg, which has two methods: one for generating a list of installed packages and another for importing such a list.

Generating a list of installed packages

sudo dpkg --get-selections > selections

This will generate a file called selections which will contain something like

... snip ...
adduser install
apache2 install
apache2-mpm-prefork install
apache2-utils install
apache2.2-bin install
apache2.2-common install
apt install
... snip...

This is just a simple, plain text file so can be copied between servers.

Installing packages from an exported list

This is almost as easy; first we need to actually set the list of selected packages

sudo dpkg --set-selections < selections

Then we need to actually do an update and install

sudo apt-get update && sudo …
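
The command is truncated, but the usual pairing with --set-selections is apt-get’s dselect-upgrade mode, which installs everything marked for installation:

sudo apt-get update && sudo apt-get dselect-upgrade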

Configuration changes

I made some modifications to my nginx configuration this weekend to improve performance and clear up some bugs.

upstream backend {
    server 127.0.0.1:81 fail_timeout=120s;
}

server {
    listen 80;
    server_name syslog.tv;

    access_log /var/log/nginx/access.syslog.tv.log;

    gzip on;
    gzip_disable "msie6";
    gzip_static on;
    gzip_comp_level 9;
    gzip_proxied any;
    gzip_types text/plain text/css application/x-javascript text/xml
    application/xml application/xml+rss text/javascript;

    location / {
        root /var/www/syslog.tv;

        set $wordpress_logged_in "";
        set $comment_author_email "";
        set $comment_author "";

        if ($http_cookie ~* "wordpress_logged_in_[^=]*=([^%]+)%7C") {
             set $wordpress_logged_in wordpress_logged_in_$1;
        }

        if ($http_cookie ~* "comment_author_email_[^=]*=([^;]+)(;|$)") {
            set $comment_author_email comment_author_email_$1;
        }

        if ($http_cookie ~* "comment_author_[^=]*=([^;]+)(;|$)") {
            set $comment_author comment_author_$1;
        }

        set $my_cache_key "$scheme://$host$uri$is_args$args$wordpress_logged_in$comment_author_email$comment_author";

        client_max_body_size 8m;

        proxy_redirect off;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass_header Set-Cookie;
        proxy_cache cache;
        proxy_cache_key $my_cache_key;
        proxy_cache_valid 200 302 60m;
        proxy_cache_valid 404 1m;
        proxy_pass http://backend;
    }

    location ~* \.(jpg|png|gif|jpeg|js|css …
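
The location block above is truncated; a typical completion for a static content block in this kind of setup (purely illustrative) would be:

location ~* \.(jpg|png|gif|jpeg|js|css)$ {
    root /var/www/syslog.tv;
    expires max;
}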

Sometimes keeping multiple copies of keys, certificates and root certificates can be a real annoyance; thankfully it’s quite simple to convert them into a single PKCS#12 file with the following command.

openssl pkcs12 -export -out certificate.pkcs -in certificate.crt -inkey private.key -certfile rootcert.crt -name "PKCS#12 Certificate Bundle"

This will create a file called certificate.pkcs containing the contents of certificate.crt, private.key and your root certificate rootcert.crt. It will also carry an internal name, PKCS#12 Certificate Bundle, to make it easier to inspect the certificate and see what it should contain; usually you’d set this to something more useful.
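
You can inspect the resulting bundle, including that internal name, with:

openssl pkcs12 -info -in certificate.pkcs -noout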

The title of this post is a bit stupid, but I honestly couldn’t think of any other way to write it…

When compiling nginx by hand, make install will by default push the binaries out to /usr/local/nginx, and it doesn’t come with a start/stop script, understandably so, because it doesn’t know which OS it is going to be installed on.
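
For reference, a minimal hand build (assuming the source is already unpacked and build dependencies such as the PCRE and zlib headers are present) is just:

./configure --prefix=/usr/local/nginx
make
sudo make install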

Recently I was tasked with building nginx on an old Red Hat Enterprise Linux 4 server with no yum installation, no nginx package in up2date and no findable RPM whose link wasn’t dead.

I’ve always felt that, being a Debian user, people think of me as only being able to use apt-get and, if I’m feeling especially adventurous, dpkg to install applications. Some people know that I build .deb files in my spare time, but …

Recently I found that one of the servers I look after, which runs a high profile site, was generating semi-high load at traffic peaks. You could generally say that this would be understandable, but the server was shooting up to a load of around 10 for a few seconds, and with that load jump I was able to graph an increase in Apache processes on top of it. Again, this would generally be considered normal, but knowing how well the server performs, and having nginx sitting on top handling all static content, I knew something wasn’t quite right.

Looking through the logs I found quite a lot of requests from a badly written spider which was generating a lot of server load when it hit the server, but after IP banning the culprit I also found several instances of Apache waking its child processes.

127.0.0 …
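
For what it’s worth, banning a single abusive IP like that spider is a one-liner; the address here is illustrative:

sudo iptables -A INPUT -s 192.0.2.10 -j DROP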

I’ll assume you already have Nagios installed and configured, and have an understanding of actually configuring and using it.

Remote server — the server to be monitored

First we’ll install the needed plugins and daemon on the remote server.

sudo apt-get install nagios-plugins nagios-nrpe-server

Once installed, open up /etc/nagios/nrpe_local.cfg

And place the following in it

allowed_hosts=NAGIOS.SERVER.IP,127.0.0.1

command[check_users]=/usr/lib/nagios/plugins/check_users -w 5 -c 10
command[check_load]=/usr/lib/nagios/plugins/check_load -w 15,10,5 -c 30,25,20
command[check_all_disks]=/usr/lib/nagios/plugins/check_disk -w 20 -c 10
command[check_zombie_procs]=/usr/lib/nagios/plugins/check_procs -w 5 -c 10 -s Z
command[check_total_procs]=/usr/lib/nagios/plugins/check_procs -w 150 -c 200
command[check_swap]=/usr/lib/nagios/plugins/check_swap -w 20 -c 10

Save and exit.
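
Then restart the NRPE daemon so it picks up the new config:

sudo /etc/init.d/nagios-nrpe-server restart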

Commands need to be explicitly enabled on the …
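
The excerpt truncates here; on the Nagios server side, remote commands are invoked through check_nrpe (from the nagios-nrpe-plugin package), so a quick manual test looks like this, with the address illustrative:

/usr/lib/nagios/plugins/check_nrpe -H REMOTE.SERVER.IP -c check_load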

This is a very quick blog post to show you how to get a user’s real IP address into your Apache access logs when the site in question is being reverse proxied to Apache through nginx.

You need the rpaf module for Apache; on Debian and Ubuntu this is simple to install.

sudo apt-get update
sudo apt-get install libapache2-mod-rpaf
sudo a2enmod rpaf
sudo /etc/init.d/apache2 reload

This set of commands will do the following:

  1. Update the apt package list
  2. Install libapache2-mod-rpaf
  3. Enable mod-rpaf
  4. Gracefully reload Apache (doesn’t kill connections)

Once installed you simply need to be sure to pass the correct headers through, so open up one of your nginx site configuration files and add the following within the server definition.

proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

So you should have something that looks like this, but without the “… snip …”

server {
    # ...snip...
    location / {
        # ...snip...
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        # ...snip...
    }
}
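
If the real addresses still don’t show up, check mod-rpaf’s own configuration, usually /etc/apache2/mods-available/rpaf.conf, and make sure the proxy’s IP is listed; values here are illustrative:

RPAFenable On
RPAFsethostname On
RPAFproxy_ips 127.0.0.1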

Today I ran in to something I’d never seen before when configuring nginx.

I ran the nginx config test, as I usually do before I restart it.

nginx -t

But, the response I got was interesting

2010/03/18 21:16:09 [emerg] 12299#0: could not build the server_names_hash, you should increase server_names_hash_bucket_size: 32
2010/03/18 21:16:09 [emerg] 12299#0: the configuration file /etc/nginx/nginx.conf test failed

I found that one of the domain names I was using was over 32 characters in length, nginx’s default max length.

Thankfully the fix was simple.

http {
    # ...snip...
    server_names_hash_bucket_size 64;
    # ...snip...
}

This is a really great, simple way to find files on the filesystem that are over 200k in size.

find /path/to/directory/ -type f -size +200k -exec ls -lh {} \; | awk '{ print $NF ": " $5 }'

You can either store the output of this in a file, or pipe it to wc for a count of lines

find /path/to/directory/ -type f -size +200k -exec ls -lh {} \; | awk '{ print $NF ": " $5 }' | wc -l

You can also use egrep before wc to look for specific filetypes

find /path/to/directory/ -type f -size +200k -exec ls -lh {} \; | awk '{ print $NF ": " $5 }' | egrep '(jpg|bmp|gif|tiff|jpeg)' | wc -l

Sometimes as an administrator you will be given a certificate from a third party that is in the DER format, which cannot be loaded into Apache.

Converting it is a simple process:

openssl x509 -in certificate.crt -inform DER -out certificate.pem -outform PEM
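
You can then sanity-check the converted certificate with:

openssl x509 -in certificate.pem -noout -subject -dates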

To my surprise I have found that there are still people out there who use “more”; this has shocked me.

So this is a very, very short blog post to tell those who visit that less is more and more is less.

What?

less is a command that comes as standard in almost all Linux distros now, and unlike more it actually has the ability to scroll backwards and forwards with Page Up, Page Down, the arrow keys and the spacebar. It’s a fantastic little command!

less FILE

Very simple to use and an all round great tool. The best thing about less is that it doesn’t need to read the whole file in one go; it reads in chunks. Opening a 100MB log file is simple with less!

Useful options

-g    Highlights just the current match of any searched string,
-I    Case-insensitive searches,
-M    Show a more detailed prompt …

Today I finally got round to setting up my local user ssh config on my new work laptop and figured I’d do a quick write-up on it and its uses.

You can create a configuration file in your home directory that will override the options set in your machine-wide config.

Your configuration files

Your local config can be found/created in:

~/.ssh/config

And your machine-wide configuration is in:

/etc/ssh/ssh_config

Rather than editing my ssh config across my whole machine I’m doing it for my local user specifically.

Reading the man page for ssh_config will give you a full list of available options; below I will outline several that I use and find very useful.

Your host definitions

First things first, we need to define a host.

Host host.domain.com

Each host you add to your config will need to have a host …
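
The excerpt ends here, but a complete host entry typically looks something like this (names and port illustrative):

Host web1
    HostName host.domain.com
    User myuser
    Port 2222

With that in place, ssh web1 expands to the full connection details for you.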

Over the last two days I’ve had the interesting task of bringing some VMs online from clones and increasing their disk space to accommodate a mass of user uploaded content. I’ve done this before, but never with a Logical Volume Management (LVM) disk.

My first approach, like a fool, was to clone the VM from source and boot it from a remotely mounted GParted ISO. This didn’t go as expected and I was unable to add the space to the LVM, so I found a nice guide online and consulted a colleague because I knew he’d done something similar recently. After the first successful size increase I realised I was able to do it without ever rebooting the machine itself; this is accomplished by adding an extra disk to the VM, which can then be partitioned with cfdisk and then added to the LVM, thus …
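
The excerpt truncates mid-sentence; the online resize it describes usually boils down to the following, with the device, volume group and logical volume names illustrative and an ext3 filesystem assumed:

sudo pvcreate /dev/sdb1
sudo vgextend myvg /dev/sdb1
sudo lvextend -l +100%FREE /dev/myvg/mylv
sudo resize2fs /dev/myvg/mylv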

This is yet another follow-up to several previous posts about using nginx as a reverse proxy with caching. It is actually a direct addition to my post from a week or so ago which outlined how to actually use nginx’s proxy caching feature, which can be read here — /2010/02/07/nginx-proxy_cache-and-explained-benchmarked/.

Even more changes?

Yes, even more changes. These are basic changes that improve the caching capabilities and also implement load balancing.

Cache changes

The first set of changes are in the main nginx configuration file

/etc/nginx/nginx.conf

These changes basically adjust the cache paths and the proxy_cache key

proxy_cache_path /var/www/nginx_cache levels=1:2 keys_zone=cache:8m max_size=1000m inactive=600m;
proxy_temp_path /tmp/nginx;
proxy_cache_key "$scheme://$host$request_uri";

I’ve decided to put the temporary cache files into an nginx-specific directory, just to separate them from other cache files …
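
The excerpt cuts off before the load balancing piece; in nginx that is simply an upstream block with more than one backend server, for example (addresses illustrative):

upstream backend {
    server 10.1.1.10:81;
    server 10.1.1.11:81;
}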

I have written a much newer, clearer and better article on DomainKeys signing email `here`_.

About

This guide is a sister to another guide I wrote a while back about how to use DomainKeys Identified Mail (DKIM) with Postfix on Debian, which can be read here - /2010/01/11/dkim-on-debian-with-postfix/.

DomainKeys is an older implementation than DKIM; DKIM is a merger of DomainKeys and Identified Internet Mail. Both DomainKeys and DKIM are still in use, so having both configured is a good idea.

Getting started

Let’s start off by installing dk-filter

sudo apt-get install dk-filter

Once installed you can create a public and private key set using the commands below; if you’re already using DKIM you can skip this step and just use your existing key.

openssl genrsa -out private.key 1024
openssl rsa -in private.key -out public.key -pubout -outform PEM
sudo mkdir /etc/mail
sudo …
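
The last command is truncated; presumably it moves the key into place and locks down its permissions, something along these lines (filename illustrative):

sudo mv private.key /etc/mail/dk.key
sudo chmod 600 /etc/mail/dk.key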

You can get a basic overview of what SPF is, what it’s for and its more advanced usages here - https://www.openspf.org/

This article is to give only a basic insight into how you can use an SPF record to validate mail from your servers.

The DNS

SPF records work from your DNS; it’s really simple. Technically there is a dedicated DNS record type defined for SPF as of RFC 4408, but since not all servers recognise this type it also works as a TXT record.

A simple usage of SPF is

v=spf1 a mx -all

Imagine this exists on this domain, syslog.tv. This SPF record would mean that all A and MX servers listed in the DNS records of syslog.tv would be valid senders.
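
In a BIND zone file that record would look something like:

syslog.tv. IN TXT "v=spf1 a mx -all"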

The hyphen (-) before all means that if the mail appears to be coming from a server that isn …

Installation

Simple; if it’s not installed already then run the following commands

sudo apt-get install iptables
sudo /etc/init.d/iptables start

The safest and best way of configuring iptables, in my opinion, is to have two files. The first is a temporary/test set that you save to first; the second is the actual rule set that will be loaded into iptables.

Configuration

So, first we’ll create an empty temp rules file

sudo touch /etc/iptables.temp.rules

Add some simple rules to it:

*filter
# Allows all loopback traffic and drop all traffic to 127/8 that doesn't use lo

-A INPUT -i lo -j ACCEPT
-A INPUT ! -i lo -d 127.0.0.0/8 -j REJECT

# Accepts all established inbound connections
-A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT

# Allows all outbound traffic
-A OUTPUT -j ACCEPT

#SSH
-A INPUT -p tcp -m …
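
The rules are truncated here, but to complete the two-file workflow described above: load the temp file to test it, and once happy, dump the live rules out to the permanent file (second filename illustrative):

sudo iptables-restore < /etc/iptables.temp.rules
sudo sh -c 'iptables-save > /etc/iptables.up.rules'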

The beginning

Where to begin? nginx would be a good start I suppose. It’s far easier and makes much more sense for you to actually read about nginx from its own website - https://nginx.org/en/ - but just to give a simple explanation too; `nginx is king of static content HTTP servers.`

Anyone that has dealt with Apache on medium to high traffic websites will know that Apache is a bit of a `wheezy, old geezer` when it comes to content serving using its mpm-worker (threaded). Very often high traffic will cause server load to go through the roof, but for serving dynamic content there really is no better HTTP server than Apache. So this leaves us in a bit of a predicament: a high powered website with dynamic content and lots of static files like JS, CSS and imagery, what do we do?!

In this example `dynamic …

The problem

So let’s get to the problem first. I have several lightly to medium loaded sites running on some virtual servers; the servers themselves are highly configured to run beautifully in our host environments, very, very RAM intensive but low disk I/O and low CPU usage.

As mentioned, the sites are relatively lightly loaded; they’ll generally hang around at between 3,000-5,000 unique hits a day and are run through Apache using PHP, various PHP modules and MySQL, a simple generic LAMP environment, yet customised to suit its surroundings and host.

The sites themselves run fine on that setup, no issues at all on normal days, but on set days of the week these sites can double in unique hits or even more than double. With KeepAlive enabled and a KeepAliveTimeout set low, Apache has problems handling this kind of load (I should point out …

Figured I’d write this one up quickly as it proved to annoy the hell out of me at 4:30am this morning while getting it working on a live server.

Apache 2 can serve SSL content to multiple vhosts on your setup, provided they use different IP addresses; this post will give you a quick run-down on how to do it.

First up we need to actually add the new IP to the machine in /etc/network/interfaces.

auto eth0
iface eth0 inet static
    address 10.1.1.7
    netmask 255.255.255.0
    gateway 10.1.1.1

auto eth0:1
iface eth0:1 inet static
    address 10.1.1.8
    netmask 255.255.255.0

Replace my IPs with your own.

Restart networking.

sudo /etc/init.d/networking restart

The next task is to configure Apache 2 to listen on both IPs.

/etc/apache2/ports.conf

My …
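
The excerpt ends at the ports file; for two SSL-capable IPs it typically ends up looking like this, using the addresses from above:

Listen 80
Listen 10.1.1.7:443
Listen 10.1.1.8:443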

Server security is something I’ve always tried to keep myself up-to-date on. I have at least a dozen RSS feeds that I read daily to learn about the latest flaws, holes, releases etc. That being said, I am by no means an “expert”; I’ve learned what I’ve needed to learn over time. I like to think that over the years I’ve gained enough knowledge to almost completely secure servers with all the programs installed that I generally use.

The aim of this article is to introduce you to some of the programs I use for security, and some config changes that can be made to other programs to make them more secure. It is aimed at web servers, but some of the changes apply anywhere, like the SSH ones.

SSH

We’ll start with a very simple change that makes a very big difference, a change to the …
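
The excerpt cuts off, but the classic simple-but-effective tweaks live in /etc/ssh/sshd_config, along these lines (values illustrative):

# never allow direct root logins
PermitRootLogin no
# optionally move SSH off the default port to cut down on noise
Port 2222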

Ah sudo, one of my favourites. Funnily enough I’ve noticed a lot of Linux users use sudo (mainly because Ubuntu installs and configures your first user with it by default) but very few seem to know that much about it. This can include not even knowing how to add a user to sudoers.

This article will give you some useful information on what sudo actually is, how to configure it and how to restrict it.

What is sudo?

So, quickly running man sudo gives us some information on what sudo actually is and does.

sudo allows a permitted user to execute a command as the superuser or another user, as specified in the sudoers file. The real and effective uid and gid are set to match those of the target user as specified in the passwd file and the group vector is initialized based on the group file (unless the -P option …
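
The man page quote is truncated; as for actually adding a user, on Debian-style systems it is usually one of these two, with the username illustrative:

# add the user to the sudo group, which sudoers normally already trusts
sudo adduser alice sudo

# or add an explicit sudoers entry via visudo:
alice ALL=(ALL) ALL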

This is gonna be quite a simple tutorial that should be much the same (excluding paths and apt) across other Linux distros.

Installation

First off we’ll get Apache and mod_ssl installed

sudo apt-get install apache2

SSL should be enabled by default; if not, run the following

sudo a2enmod ssl

SSL certificate

There are several ways of doing this. The first thing you need to figure out is whether you want a self signed certificate or one signed by a provider like GeoTrust; the latter type is not free. In this article I’ll cover both, starting with self signed.

Self signed

sudo mkdir /etc/apache2/ssl
sudo /usr/sbin/make-ssl-cert /usr/share/ssl-cert/ssleay.cnf /etc/apache2/ssl/apache.pem

Provider signed

Please note, this type of certificate has to be paid for, prices at time of writing range from £15/year to £2,000/year.

There are actually some more options …
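
The excerpt truncates; for the provider-signed route, the private key and certificate signing request are usually generated with openssl (filenames illustrative), and the resulting CSR is what you hand to the provider:

openssl genrsa -out server.key 2048
openssl req -new -key server.key -out server.csr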

This is a quick follow up to a previous post about getting this blog running on nginx with a reverse proxy to Apache 2.

It seems the issue stems from 3 mods I had installed and enabled

  1. mod-spamhaus
  2. mod-evasive and
  3. mod-security

The three, when running together, are a fantastic way to strengthen any web server against attack, be it DoS, injection, XSS etc. I’ve sworn by all three of them for years now and I thought I had them cracked for the security:performance ratio; when it comes to reverse proxying requests from nginx to Apache 2 where WordPress is concerned, apparently I was very wrong.

The issue wasn’t so bad when the cache was full, but seeing as my cache is only alive for an hour, that leaves an open point for the cache to be recreated when a user views the page. This in itself isn’t …

There is a much newer article on this subject `here`_ and covers DomainKeys and DKIM.

Mail and mail servers have always been my forte if I’m to be honest; my home mail server has been spam free for years now, nothing really gets past it due to my love of all things installable and configurable.

Several months ago I started a new job, and after a few weeks I was tasked with getting DKIM signing to work on our mail platform. DKIM was semi-new to me; I’d never bothered with anything but SPF before, so I figured I’d give it a shot.

At work our servers are Debian based, but are the evil that is Ubuntu. Strangely though, I was able to find an Ubuntu-specific article that wasn’t absolute rubbish, which surprised me no end. I was able to get dkim-milter working with Postfix and …

This is a rather old article; for more up-to-date information please see:

  1. /2010/02/07/nginx-proxy_cache-and-explained-benchmarked/
  2. /2010/02/14/more-nginx-proxy_cache-optimizations-and-nginx-load-balancing/

I’ve started collecting a few blogs on my servers now and figured that from this one on I would consolidate them into one workable, usable location, removing my need to update multiple plugins, themes and WordPress itself, over and over.

This time round I thought I’d do it properly, and properly in my book is as complicated and “awesome” as it can possibly be, without growing legs and running off to stomp a city.

Love

I’ve fallen in love with nginx (https://nginx.org/) over the last 6 months or so. I’d been an avid user of lighttpd for a very long time before, but started to look into nginx mid-year as a replacement. I learned that at my new job they used nginx for …