Devopsdays Rome 2012

Disclaimer: this is just a shameless post to get myself a place at the great Rome event ;-). Oh well, this does not mean this post is not interesting to read.

When I went to the Extreme Programming conference in Alghero (Sardinia) in 2001, I was consulting mostly as a systems administrator. I felt a little bit like a fish out of water, and one of the participants actually asked me: do you think there are aspects of XP that can be applied to systems administration?

I think I said yes, but at that time it was kind of hard for me to find points of contact between the two.
Maybe unit testing could be associated with putting pervasive monitoring in place, so that when I refactored a configuration I would know whether it still worked before clients did.
Or coding standards could be associated with using automated installers for deploying servers, but what about keeping the configuration in sync afterwards, once the systems went into production? And what about the rest of the rules?
Last but not least, at the time provisioning servers was still a physical task, which meant that deployment was just a relatively short phase after procurement and installation.

Anyway, shortly after the conference the sysadmin gig ended and I gradually went back to developing web apps full time, with just a little sysadmin'ing on the side.

Fast forward to 2012: I am still developing web apps (using Agile methodologies), but I am also sysadmin'ing again. In 2012, though, servers are virtual and can be provisioned with a few clicks: today the only thing that slows down the deployment process, and can possibly get it wrong, is ... me!
So I asked myself: what can I do to improve the overall process, so that servers are installed and configured in the same way, can be easily reconfigured without manually logging into each and every one of them, and so that I can readily tell my boss which version of which app they are running?

I knew that the answer was configuration management, so I started researching. I first looked into Puppet, but I didn't really like the open-core model, so I moved on and eventually settled on SaltStack. Salt started out as a remote execution tool, then gained configuration management capabilities and recently added support for cloud provisioning. Salt is written in Python, which makes it friendlier to people (like me) who are not acquainted with Ruby yet. Also, Salt uses plain YAML instead of a DSL for configuration management, but other than that the configuration directives are strikingly similar to Puppet's.
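To give an idea of what those YAML directives look like, here is a minimal Salt state (an .sls file) that keeps a package installed and its service running. The ntp name and the file path are just an example, and the exact directive layout can vary a bit between Salt releases:

# /srv/salt/ntp.sls - minimal example state
ntp:
  pkg:
    - installed          # make sure the ntp package is present
  service:
    - running            # keep the ntp service running
    - enable: True       # and start it at boot
    - require:
      - pkg: ntp         # but only after the package is installed

Once the master and the minions are talking, salt '*' test.ping is the classic sanity check and salt '*' state.highstate applies the states to every minion.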

The great thing about configuration management tools is that once you have put the basic infrastructure in place you quickly get addicted, and you keep expanding and growing it just because, now, you can.
Btw, I keep a diary of my experience with Salt here on my blog.

Another indispensable tool in my Devops toolbox is OpenNMS: I use it to monitor nearly everything, thanks to its ability to receive inputs from just about any source (JMX, SNMP, syslog, raw events over HTTP, WMI, SQL). With OpenNMS I always have everything under control (even batch jobs!) and I can tell whether the release of a new app is hogging resources on the database server, or whether the application server really needs the extra RAM the developers are asking for. Another great feature of OpenNMS is its built-in reports: customers (still) running Nagios just drool over those!
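The "even batch jobs" part is simpler than it sounds: the job just has to emit something OpenNMS can receive. A minimal sketch in Python, assuming the OpenNMS syslog receiver is enabled and reachable on UDP port 10514 (host and port are placeholders here, and the defaults depend on how Syslogd is configured):

# notify_opennms.py - report a batch job result to OpenNMS via syslog (sketch)
import logging
import logging.handlers

# Placeholder address: point this at the OpenNMS server and the port
# its syslog receiver is listening on.
handler = logging.handlers.SysLogHandler(address=("opennms.example.com", 10514))

logger = logging.getLogger("nightly-backup")
logger.setLevel(logging.INFO)
logger.addHandler(handler)

# OpenNMS can match on the message text and turn it into an event.
logger.info("nightly-backup finished: status=OK duration=1234s")

On the OpenNMS side a matching rule in the syslog configuration turns that line into an event, and from there into an alarm or a notification if the status is not OK.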

I am now looking for a way to integrate Salt and OpenNMS, so that whenever a host is configured through Salt the necessary bits are also configured on OpenNMS and monitoring stays in sync with configuration. Maybe at Devopsdays Rome I'll find a solution.
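One direction I am toying with (very much a sketch, not something I run anywhere yet) is to push freshly configured hosts into an OpenNMS provisioning requisition over its REST API. Everything below is an assumption: the requisition name, credentials, host names and the exact XML attributes and paths may differ across OpenNMS versions, and it relies on the Python requests library:

# add_node_to_opennms.py - push a newly provisioned host into an OpenNMS
# requisition over the REST API (sketch; names and paths are assumptions)
import requests

OPENNMS = "http://opennms.example.com:8980/opennms"
REQUISITION = "salt-managed"   # hypothetical requisition name
AUTH = ("admin", "admin")      # placeholder credentials

def add_node(label, ip, foreign_id):
    # Minimal node definition for the provisioning requisition.
    node_xml = (
        '<node xmlns="http://xmlns.opennms.org/xsd/config/model-import" '
        'node-label="{label}" foreign-id="{fid}">'
        '<interface ip-addr="{ip}" snmp-primary="P" status="1"/>'
        '</node>'
    ).format(label=label, fid=foreign_id, ip=ip)

    # Add (or update) the node in the requisition...
    r = requests.post(
        "{0}/rest/requisitions/{1}/nodes".format(OPENNMS, REQUISITION),
        data=node_xml,
        headers={"Content-Type": "application/xml"},
        auth=AUTH,
    )
    r.raise_for_status()

    # ...then ask OpenNMS to synchronize (import) the requisition.
    r = requests.put(
        "{0}/rest/requisitions/{1}/import".format(OPENNMS, REQUISITION),
        auth=AUTH,
    )
    r.raise_for_status()

if __name__ == "__main__":
    add_node("web01", "192.0.2.10", "web01")

From there it should be a small step to have Salt trigger something like this automatically after a host has been configured.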

LogStash + ElasticSearch + Kibana is a mix that I haven't had the time to deploy yet, but that I want to try out as soon as I can.

Looking forward to meeting you in Rome!

