Managing a Mapserver farm with Salt

Mapserver is probably the most popular Open Source web mapping platform (even though Geoserver is getting much of the limelight nowadays).

One of the advantages Mapserver has over Geoserver is that its configuration is pretty easy: it consists of a flat text file (Geoserver instead uses an XML-backed repository).

Because of that repository, managing a Geoserver farm becomes complicated: changes have to be replicated across all hosts and the services restarted to pick them up. To address this issue there have been recent efforts to build a multi-master replication mechanism plugged into Geoserver. While this is pretty cool (and it's done by an Italian company, which makes me proud, being Italian myself), I think it's even cooler to see how easy it is to manage Mapserver configuration files in a similar cluster environment.

The Mapserver setup is as follows:
  1. a cluster of Mapserver nodes serving WMS from a number of map files (more than one, otherwise it's pointless)
  2. a master node managing the cluster (it can be one of the nodes in the cluster)
  3. data stored in a shared source such as a database
When a map needs to be changed, the map file is edited on the master node (and tested, if necessary) and then the changes are replicated to all nodes. For the sake of simplicity we assume that all map files are stored under /srv/maps/ (optionally in subdirectories) and are referenced in WMS requests with the usual ?map=/srv/maps/some.map parameter, as in the example below. Since the map file is read with every request, there is no need to restart anything.
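
For example, a WMS request hitting one of the nodes would look roughly like this (the hostname and the CGI path are just placeholders for whatever your web server setup uses):

http://node1.example.com/cgi-bin/mapserv?map=/srv/maps/some.map&SERVICE=WMS&VERSION=1.1.1&REQUEST=GetCapabilities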


With Mapserver the only tool required for the job is Salt. Salt is a remote execution and configuration manager. It works in more or less the same way as Puppet, but it's Free; Puppet is more sophisticated (read: expensive ;-) than Salt, but in our case the extra sophistication does not change the outcome.

Installing Salt is a piece of cake: on Ubuntu it is only a matter of adding the repository and then running apt-get install; the details are in the Salt installation docs. The same install must be done on all nodes and on the master node.
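
On Ubuntu the whole thing boils down to something like the following (the PPA name is an assumption based on the one SaltStack publishes; check the docs for the current repository):

add-apt-repository ppa:saltstack/salt
apt-get update
apt-get install salt-master   # on the master only
apt-get install salt-minion   # on every node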

When you're done simply start the salt daemon on the master:

/etc/init.d/salt-master start

On the nodes, edit the /etc/salt/minion configuration file (clients are called minions in Salt parlance), find the master option and set it to the master's address or DNS name, then start the client with the command:

/etc/init.d/salt-minion start
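
For reference, the master setting in /etc/salt/minion looks something like this (the hostname is just a placeholder for your master's address or DNS name):

bash# grep '^master' /etc/salt/minion
master: saltmaster.example.com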

Check that all minions are communicating with the master by issuing this command on the master:

salt-key -L

This command will report the keys of all minions that have contacted the master. Before the master can issue commands to the minions, it must accept their keys. Let's do it with this command:

salt-key -A

Now let's check communications again by asking all the minions to ping the master and report back:

salt '*' test.ping

If everything is OK, it's time to configure the Mapserver replication on the master.
Edit the /etc/salt/master file on the master and uncomment the default file root in the File Server settings. It should read like this:

# Default:
file_roots:
  base:
    - /srv/salt

Restart the master and create the following files in /srv/salt:

bash# cat top.sls
base:
  '*':
    - mapserver

bash# cat mapserver.sls
/srv/maps:
  file:
    - recurse
    - source: salt://mapserver/srv/maps

Now let's create the directory /srv/salt/mapserver/srv/maps and copy the map files (along with dependencies like symbols, fonts, etc.) into it.
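
Something along these lines should do, assuming the current map files already live under /srv/maps on the master itself:

mkdir -p /srv/salt/mapserver/srv/maps
cp -a /srv/maps/. /srv/salt/mapserver/srv/maps/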

Restart the salt master (/etc/init.d/salt-master restart) and if there are no syntax errors we should be ready to go.

WARNING: the following commands will cause the files in the /srv/maps directory of the minions to be overwritten by those served by the master. As of 0.9.4 files that exist on the minions but are not on the master will not be modified. Do not proceed further on a live system unless you know what you're doing. You have been warned.

With this command we tell the salt master to push state changes to all minions. The state changes include a full replica of the contents of /srv/salt/mapserver/srv/maps.

salt -t 60 '*' state.highstate

The replication might take some time but will eventually complete. Now check on the minions that the files have been transferred correctly. Every time you need to push changes again, just drop the files on the master and then run the state.highstate command, as in the example below.
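
A typical update cycle therefore looks something like this (the map file name is just an example):

# edit (and test) the map file on the master
vi /srv/salt/mapserver/srv/maps/some.map
# push the change to all minions
salt -t 60 '*' state.highstate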

Congrats, you're done.
