I have several servers powering syslog, including its Raspberry Pi mirror, load balancer and email servers. All of my servers are hosted with Linode in their London data centre and use Linode’s backup system, which takes both daily and weekly snapshots.
For the app and database servers I also do server-side backups, storing each website and its database in its own folder within /backup. These are for when I need a quick restore to fix something, rather than for when a server has died.
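As a sketch of that layout (the site name, web root and database name below are placeholders, not my actual setup), the per-site backup could look like:

```shell
#!/bin/sh
# Hypothetical per-site backup: one folder per website under /backup,
# holding a tarball of the web root and a dump of its database.
backup_site() {
    site="$1"   # e.g. example.com (placeholder)
    db="$2"     # e.g. example_db (placeholder)
    dest="/backup/$site"
    stamp=$(date +%F)
    mkdir -p "$dest"
    # Archive the site's files
    tar -czf "$dest/files-$stamp.tar.gz" "/var/www/$site"
    # Dump its database alongside it (credentials read from ~/.my.cnf)
    mysqldump "$db" | gzip > "$dest/db-$stamp.sql.gz"
}
```

You would then call it per site, e.g. backup_site example.com example_db, by hand or from cron.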
This is all well and good but I like having an off-site backup too and for that I use S3…
Amazon’s S3 is pretty cheap and very easy to use. Inbound transfer is free, so you only pay for the data you store, and the cost of storage is very affordable; you can see a pricing list here.
To do the backup I use a daily cron job which uploads the data to S3 using s3cmd.
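For reference, a daily cron entry for this might look like the following; the script path is an assumption, not a path from my servers:

```shell
# Run the S3 backup script every day at 03:00.
# /usr/local/bin/s3-backup.sh is a placeholder path.
0 3 * * * /usr/local/bin/s3-backup.sh
```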
Add the S3 tools repository and its signing key to apt
wget -O- -q https://s3tools.org/repo/deb-all/stable/s3tools.key | sudo apt-key add -
sudo wget https://s3tools.org/repo/deb-all/stable/s3tools.list -O /etc/apt/sources.list.d/s3tools.list
Update your package list and install s3cmd
sudo apt-get update && sudo apt-get install s3cmd
You’ll need to configure the tool to work with your AWS account, so run
sudo s3cmd --configure
When prompted, fill in your access key and secret key, which you can find on the Amazon AWS website.
When asked to provide an encryption password, I chose yes, but you can say no. The same goes for HTTPS: I enabled it, but it really depends on how secure you want the data transfer to be. I would suggest setting an encryption password and enabling HTTPS.
Now that s3cmd is installed and configured you can use it.
You can create a bucket using the s3cmd command below. I believe you can pick a region with the --bucket-location option, but I create my buckets manually in the web interface anyway.
s3cmd mb s3://your-bucket-name
Once done you can see a list of available buckets with

s3cmd ls

which outputs something like
2012-02-29 20:28 s3://kura-linode-test
Now that this is done we can put some data in there. Create a test file
echo "this is a test" > test.file
And put it in S3
s3cmd put test.file s3://your-bucket-name/
You can see it using
s3cmd ls s3://your-bucket-name
Download it with
s3cmd get s3://your-bucket-name/test.file
And delete it with
s3cmd del s3://your-bucket-name/test.file
Once satisfied with this you can create a shell script to automate some backups for you. I’ll provide a simple one below that uploads my home directory.

s3cmd sync --recursive --skip-existing /home/kura s3://your-bucket-name/
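An expanded sketch of that script, with the off-site sync wrapped in a function so the bucket (the same placeholder name as above) is defined in one place:

```shell
#!/bin/sh
# Off-site backup sketch: sync the server-side /backup folder and a
# home directory up to S3. The bucket name is a placeholder.
BUCKET="s3://your-bucket-name"

offsite_backup() {
    # --skip-existing avoids re-uploading files already in the bucket
    s3cmd sync --recursive --skip-existing /backup/ "$BUCKET/backup/"
    s3cmd sync --recursive --skip-existing /home/kura/ "$BUCKET/home/"
}
```

Dropping something like this into the daily cron job keeps the off-site copy in step with the server-side one.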