
Showing posts from August, 2012

Salt Diaries: installing on SLES (episode 3)

Welcome to the third episode of the series! In the previous posts we installed Salt on CentOS machines and then moved on with a basic state configuration (we will cover more in the coming posts). Now it's time to handle those pesky SLES hosts for which there are no pre-built binaries, so we'll have to install Salt using pip. I'll cover SLES 11 in this post as that's the only variant I have; other versions should hopefully require only minor changes.

Note: an active subscription to the Novell update service is required, as the following packages can only be found on the SLES 11 SDK (it's an ISO, and a large one, so if you don't have it around start downloading it before you begin): python-devel, libopenssl-devel, zlib-devel, swig.

Installation
Add the SDK ISO to the Software Management sources. Then, as root, run the following commands (answer yes when required):

zypper in gcc-c++ python-devel libopenssl-devel zlib-devel swig
zypper -p http://downlo...
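The excerpt above cuts off before the pip step, so here is a minimal sketch of how the remainder typically goes, assuming pip is not packaged on SLES 11 and has to be bootstrapped with easy_install (that assumption, and the final version check, are mine and not from the post):

# assumes the build dependencies from the zypper command above are installed
# bootstrap pip with easy_install if no python-pip package exists (assumption)
easy_install pip
# let pip build and install Salt and its Python dependencies
pip install salt
# quick sanity check that the minion binary is on the PATH
salt-minion --version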

Mirth: recover space when mirthdb grows out of control

I was recently asked to recover a Mirth instance whose embedded database had grown to fill all available space, so this is just a note-to-self kind of post. By the way: the recovery, depending on db size and disk speed, is going to take a while.

The problem
A Mirth Connect 1.8 instance was started, then forgotten (well, neglected, actually). The user also forgot to set up pruning, so the messages filled the embedded Derby database until it grew to fill all the available space on the disk. The OS is Linux.

The solution
First of all: free some disk space so that the database can be started in embedded mode from the cli. You can also copy the whole Mirth install to another server if you cannot free space. Depending on db size you will need a corresponding amount of space: in my case a 5GB db required around 2GB to start, process logs and then store the temp files during shrinking. Then open a shell as the user that Mirth runs as (you're not running it as root, are you?) and cd in...
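The excerpt stops before the actual shrinking commands, so here is a rough sketch of the compaction step under some assumptions of mine (the install path, jar locations and table name are illustrative, not taken from the post): with Mirth stopped, connect to the embedded Derby database with the ij tool and call the built-in compress procedure on the large message tables.

cd /opt/mirthconnect
# start Derby's interactive SQL tool against the embedded db (jar paths are assumptions)
java -cp lib/derby.jar:lib/derbytools.jar org.apache.derby.tools.ij
ij> connect 'jdbc:derby:mirthdb';
ij> -- reclaim the space left behind by deleted rows (table name is illustrative)
ij> CALL SYSCS_UTIL.SYSCS_COMPRESS_TABLE('APP', 'MESSAGE', 1);
ij> exit;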

Salt diaries: states (part 2 of deploying salt on a small network)

After part 1 of this series I had Salt running properly on all minions. It's now time to get some work done with it. We will start with something simple, like making sure that ntp is installed and running on all minions. In order to do that we will use the Salt states enforcement feature. The default Salt states configuration requires that:

- state definitions be kept in /srv/salt
- the default state be named top.sls

We will probably need to create both the directory and the files, which we can do with the following commands (check that you are not overwriting your own state; this needs to be done on the master only!):

mkdir -p /srv/salt
cat <<EOF >/srv/salt/top.sls
base:
  '*':
    - ntp
EOF

What this state definition means is that the base state requires all nodes (as selected by '*') to apply the ntp state. Since we have not yet defined an ntp state we are going to do it right away:

cat <<EOF >/srv/salt/ntp.sls
ntp:
  pkg:
    - ins...
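The ntp.sls above is cut off in the excerpt; a plausible completion, using the classic state syntax of that Salt era, looks like the sketch below (the ntpd service name is my assumption for CentOS, adjust it for your distribution):

cat <<EOF >/srv/salt/ntp.sls
ntp:
  pkg:
    - installed
  service:
    - running
    - name: ntpd
    - require:
      - pkg: ntp
EOF
# apply the highstate to every minion
salt '*' state.highstate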

Salt diaries: deploying salt on a small network

This post is the first in a series documenting the deployment of Salt on a small network (~100 hosts, initially targeting only the Linux-based ones, which account for roughly half of them). Due to the low number of hosts I have gone for a single-master layout. The Linux hosts are for the greatest part running CentOS 5.[4,5] in both x86 and x64 flavors, with just a couple running SLES.

Installing the salt master
The easiest way to install Salt on CentOS is to pull in the EPEL repository:

rpm -Uvh http://download.fedoraproject.org/pub/epel/5/i386/epel-release-5-4.noarch.rpm

then install Salt with yum:

yum install -y salt-master

Since minions by default will attempt to connect to the salt master by resolving a host named salt, I configured a salt CNAME record for the salt master host in the DNS server. At this point the master can be started with:

/etc/init.d/salt-master start

Note: I don't have a firewall or SELinux enabled. In particular SE...
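The post is truncated before the minion side, so here is a short sketch of how the rest of the setup usually proceeds under the same assumptions (EPEL repository, SysV init scripts); the commands are standard salt-key and test.ping usage rather than anything quoted from the post:

# on each minion
rpm -Uvh http://download.fedoraproject.org/pub/epel/5/i386/epel-release-5-4.noarch.rpm
yum install -y salt-minion
/etc/init.d/salt-minion start
# back on the master: list pending keys, accept them all, then check connectivity
salt-key -L
salt-key -A
salt '*' test.ping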