Free VMWare ESX(i) 5.5 VM backups to Amazon S3

Nov 15

#1 Enable SSH on ESX(i) Host

To enable SSH on the ESX(i) host, follow this guide:
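
If you prefer to do it from the ESXi shell (e.g. via the DCUI's local console), the TSM-SSH service can also be toggled with vim-cmd; this is a sketch and may vary between ESXi builds:

```
vim-cmd hostsvc/enable_ssh
vim-cmd hostsvc/start_ssh
```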

#2 Create an Ubuntu VM

  1. Download Ubuntu Server 12.04 and create a new VM
    Direct Downloads:

    • 32 bit:
    • 64 bit:
  2. Install Python 2.6.9 (the latest 2.6 release) – the build of VMWare ESX(i) I was using shipped with Python 2.6.8:
    sudo apt-get install python-software-properties
    sudo add-apt-repository ppa:fkrull/deadsnakes
    sudo apt-get update
    sudo apt-get install python2.6
    wget -O - | sudo python2.6

    If you experience DNS problems after setting up the VM, please see:

  3. Install s3cmd:
    tar xzf s3cmd-1.6.1.tar.gz
    cd s3cmd-1.6.1/
    sudo python2.6 setup.py install
  4. I couldn’t get the latest version of python-dateutil to work, so I fell back to python-dateutil 1.5:
    tar zxf python-dateutil-1.5.tar.gz
    cd python-dateutil-1.5/
    sudo python2.6 setup.py install
    rm -rf /usr/local/lib/python2.6/dist-packages/python_dateutil-2.6.0-py2.6.egg
  5. Copy s3cmd and associated python libraries to the ESX(i) host; ensure you have enabled SSH as instructed in #1. Remember to replace the IP address in all the below commands with your own:
    scp /usr/local/bin/s3cmd root@
    scp /usr/lib/python2.6/ root@
    scp /usr/lib/python2.6/ root@
    scp -r /usr/lib/python2.6/distutils/ root@
    scp -r /usr/local/lib/python2.6/dist-packages/ root@
  6. Statically compile the latest version of gpg – this ensures that we’re not relying on any shared libraries and can copy the binary over to the ESX(i) host:
    sudo apt-get install build-essential
    sudo apt-get install libgpg-error-dev libgcrypt-dev libassuan-dev libksba-dev libpth-dev zlib1g-dev
    tar xjf libgpg-error-1.25.tar.bz2
    cd libgpg-error-1.25/
    ./configure && sudo make && sudo make install
    cd ..
    tar -xjf gnupg-2.0.30.tar.bz2
    cd gnupg-2.0.30/
    ./configure LDFLAGS="-static"
    make LDFLAGS="-static" LINKFORSHARED=" "

    Copy the statically compiled binary to the ESX(i) host; ensure you have enabled SSH as instructed in #1. Remember to replace the IP address in all the below commands with your own:

    scp g10/gpg2 root@
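
Before copying the binary over, it's worth confirming that gpg2 really was linked statically, since the whole point is avoiding shared library dependencies on the host. From the gnupg build directory, ldd should report that it is not a dynamic executable:

```
ldd g10/gpg2
# Expect output along the lines of: "not a dynamic executable"
```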

#3 Configure ESX(i) Host

By this point we should have all the necessary programs on the host to configure backups:

  1. Add the following to the bottom of /etc/profile.local:
    export PYTHONPATH="/lib/python2.6/dist-packages/"
  2. Log out and back in from your console session to refresh the environment variables, then check that the variable has been set:
    echo $PYTHONPATH
  3. Verify that both gpg2 and s3cmd are working:
    s3cmd --version
    gpg2 --version

    If s3cmd returns a “not found” error, ensure the hashbang (shebang) line points to the correct Python binary.

  4. Configure s3cmd with your Amazon S3 account:
    s3cmd --configure
  5. Download the ghettoVCB backup script and upload it somewhere that your ESX(i) host can access. Unfortunately, at the time of publishing this article, the script had a few bugs. You can see a diff of the fixes here:
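
For reference, `s3cmd --configure` is interactive and writes its answers to ~/.s3cfg. A minimal fragment looks roughly like this (the key values are placeholders you get from your AWS account):

```
[default]
access_key = YOUR_ACCESS_KEY
secret_key = YOUR_SECRET_KEY
use_https = True
```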
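
The hashbang fix mentioned in step 3 can be sketched like this. The file name s3cmd_demo and the target interpreter path are made up for illustration; on the host you would edit the copied s3cmd itself and point it at wherever the python2.6 binary lives:

```shell
# Create a stand-in script with a generic Python hashbang (hypothetical file).
printf '#!/usr/bin/env python\nprint "hello"\n' > s3cmd_demo
# Rewrite the first line to point at the python2.6 binary.
sed -i '1s|.*|#!/bin/python2.6|' s3cmd_demo
# Show the corrected hashbang.
head -1 s3cmd_demo
```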

#4 Using s3cmd and GhettoVCB

Back up a given VM:

./ghettoVCB.sh -m "VM_NAME"

Upload backup to Amazon S3:

s3cmd put /vmfs/volumes/datastore1/backups/vm_name.gz s3://

#5 Script to backup multiple VMs via cron

  1. Create a ghettoVCB.conf file, replacing /ABSOLUTE/PATH/backups with a directory where backups should be placed:
  2. Create a VMs_2_BACKUP file with the name of each VM to back up, one per line
  3. Create the backup script, replacing BUCKET_NAME_HERE and /ABSOLUTE/PATH/TO with your relevant information:
    #!/bin/sh
    # Auto Backup VMs.
    # s3 protocol uri, including bucket and optional path.
    S3_URI="s3://BUCKET_NAME_HERE"
    # Absolute path to GhettoVCB global config file.
    GLOBAL_CONF="/ABSOLUTE/PATH/TO/ghettoVCB.conf"
    # File containing the names of VMs to backup.
    # Each VM name should be on a separate line.
    VMs="/ABSOLUTE/PATH/TO/VMs_2_BACKUP"
    # Add GLOBAL_CONF variables to current context.
    source "${GLOBAL_CONF}"
    # Ensure default value if one is not selected or variable is null
    if [ -z "${VM_BACKUP_DIR_NAMING_CONVENTION}" ] ; then
        VM_BACKUP_DIR_NAMING_CONVENTION="$(date +%F_%k-%M-%S)"
    fi
    # Back up each VM in the VMs file.
    while IFS= read -r VM_NAME <&3; do
        # Backup
        /ABSOLUTE/PATH/TO/ghettoVCB.sh -g "${GLOBAL_CONF}" -m "${VM_NAME}"
        # ghettoVCB writes each VM's backups under VM_BACKUP_VOLUME
        # (defined in ghettoVCB.conf).
        BACKUP_DIR="${VM_BACKUP_VOLUME}/${VM_NAME}"
        # Let's assume that this is the compressed backup file...
        BACKUP_FILE=$(ls "$BACKUP_DIR" | head -1)
        # Send to S3
        s3cmd put "${BACKUP_DIR}/${BACKUP_FILE}" "${S3_URI}/${BACKUP_FILE}"
        # Delete backup file
        rm -rf "${BACKUP_DIR}"
    done 3< "${VMs}"
  4. Run the script automatically by creating a cron job in /var/spool/cron/crontabs/root
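
As a sketch, a root crontab entry that runs the script every Sunday at 2am would look like the line below; the script path is an assumption, so adjust it to wherever you saved the script from step 3:

```
0    2    *    *    0    /vmfs/volumes/datastore1/backup_vms.sh
```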
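
The per-line read pattern the script uses (reading VM names on file descriptor 3, so that stdin stays free for any interactive tooling the backup invokes) can be tried standalone. The VM names here are invented for illustration:

```shell
# Hypothetical VM list, one name per line.
printf 'web-server\ndb-server\n' > VMs_2_BACKUP
# Read each name on fd 3, leaving stdin untouched.
while IFS= read -r VM_NAME <&3; do
    echo "Would back up: ${VM_NAME}"
done 3< VMs_2_BACKUP
```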

