
Testing provisioning scripts with throw-away VMs

Using expendable virtual machines for testing purposes is hardly a new concept.
What would be really nice, pretty-please-with-sugar-on-top, is automating the whole create-provision-test-destroy cycle.

Enter Vagrant. Vagrant is a tool for building and distributing virtualized development environments: it automates the creation, provisioning (with Puppet, Chef or shell scripts) and teardown of VMs.

Installing Vagrant on Ubuntu 10.04 LTS (which I'm still running) requires the following:

  1. a 4.1.x VirtualBox distribution (the 3.1 bundled with Ubuntu will not work)
  2. a recent version of ruby (you guessed it: the 1.8.x bundled with Ubuntu will not work)
To install VirtualBox, add the following repo to /etc/apt/sources.list:

deb http://download.virtualbox.org/virtualbox/debian lucid contrib non-free
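If apt later complains about an unverified repository you may also need to import Oracle's signing key; at the time of writing the command for that was (verify the key URL is still current):
wget -q http://download.virtualbox.org/virtualbox/debian/oracle_vbox.asc -O- | sudo apt-key add -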
then run the usual apt-get update and install the new package, removing the stock packages first:
sudo apt-get update
sudo apt-get remove virtualbox-ose virtualbox-ose-dkms
sudo apt-get install virtualbox-4.1
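A quick way to confirm the new VirtualBox is in place is to ask it for its version:
VBoxManage --version   # should report something in the 4.1.x series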
Ruby is a longer and more complicated affair: if your system still ships with 1.8, a full build of Ruby 1.9 is required. If you already have Ruby 1.9 you can skip this section.
To install Ruby 1.9 alongside any other existing Ruby we'll use RVM. RVM installs a separate version of Ruby without replacing or breaking the one that came with our system. Unfortunately this requires a full build, so you might want to launch it before you go out (the following instructions were taken from here):
curl -s https://rvm.beginrescueend.com/install/rvm -o rvm-installer
chmod +x rvm-installer
./rvm-installer --version latest
[[ -s "$HOME/.rvm/scripts/rvm" ]] && source "$HOME/.rvm/scripts/rvm" # This loads RVM into a shell session
rvm install ruby-1.9.2
rvm use 1.9.2
rvm --default use 1.9.2
gem install vagrant
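Before going further it's worth checking that the shell now picks up the RVM Ruby and that the vagrant command is on the path (flags may vary slightly across older Vagrant releases):
ruby --version      # should report 1.9.2
vagrant --version   # prints the installed Vagrant version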
Now we're ready to use Vagrant. First we need to download a virtual machine image that Vagrant will use as a template to jumpstart our throw-away VMs:
vagrant box add base http://files.vagrantup.com/lucid32.box
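Listing the installed boxes confirms the download went through:
vagrant box list   # should show the "base" box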
This will take some time depending on your connection speed. When it's done we need to tell vagrant to generate a template configuration file with the command:
vagrant init

The configuration file is called a Vagrantfile and it provides Vagrant with all the information needed to assemble the VM. The parts of the Vagrantfile we'll want to change are (a sketch of the resulting file follows the list):
  • networking: bridge one VM interface to our LAN so that the VM can access resources beyond the local system. Set the network option as follows: config.vm.network :bridged
  • a shared (read-write) folder from the host for caching and file distribution: config.vm.share_folder "v-data", "/vagrant_data", "install-data"
  • a provisioning method: Chef, Puppet or a plain shell script
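Putting these together, a minimal Vagrantfile for this setup might look like the sketch below (the box name matches the one added earlier; the shared-folder host path and the provision.sh script are placeholders to adapt to your own layout):
Vagrant::Config.run do |config|
  # template box downloaded with "vagrant box add"
  config.vm.box = "base"
  # bridge one VM interface to the LAN
  config.vm.network :bridged
  # read-write shared folder: name, guest path, host path
  config.vm.share_folder "v-data", "/vagrant_data", "install-data"
  # provision with a plain shell script (hypothetical provision.sh)
  config.vm.provision :shell, :path => "provision.sh"
end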
After that the vm can be created with:
vagrant up
The up command will also run the configured provisioning method after the VM has completely booted up. In my case I chose to run just a shell script.
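For illustration, a shell provisioning script can be as simple as the following (a hypothetical provision.sh; the package and the file pulled from the shared folder are just examples):
#!/bin/bash
# hypothetical provision.sh: runs as root inside the VM at "vagrant up" time
set -e
apt-get update
apt-get install -y apache2
# grab a config file distributed through the shared folder
cp /vagrant_data/example.conf /etc/apache2/conf.d/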
We can now log into the VM (as the vagrant user; use sudo to issue commands as root) to check that everything is all right:
vagrant ssh
To destroy the vm:
vagrant destroy
Now whenever we need or want to recreate the VM we just run vagrant up again and Vagrant takes care of booting the VM, setting up networking and then running the configured provisioning method. And it's quick too: on my laptop the whole process, including the complete run of the provisioning script, takes roughly 2 minutes.
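The whole throw-away cycle then boils down to a handful of commands that are easy to script (depending on the Vagrant version, destroy may ask for confirmation unless you pass --force):
vagrant destroy --force   # drop the old VM, if any
vagrant up                # recreate, boot and re-provision from scratch
vagrant ssh               # log in and check the results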
