
Salt Diaries: keeping salt up-to-date (episode 4)



Welcome back! In our quest to simplify configuration and automate our systems we have installed Salt on all our servers and then moved on to some basic state management. Of course we want to do more sophisticated things with Salt, and we'll get to that too, but first we want to make sure that all minions are aligned to the same Salt version (the latest, in this case).

To do that we will add another state to our configuration, which we will call (rather unimaginatively) salt.sls. The content is below:
salt-minion:
  pkg:
    - latest
  service:
    - running
    - watch:
      - pkg: salt-minion

This instructs each minion to keep the salt-minion package at the latest version and, whenever the package changes, restart the service (that's what the watch requisite does). To activate this state we'll edit the top.sls file as follows:
base:
  '*':
    - ntp
    - salt
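
Before applying this for real it can be useful to preview what the highstate would do. Assuming your Salt version supports test mode (it has been around for a while), a dry run against a single minion looks like this:
[prompt]# salt 'expendable.local' state.highstate test=True

In test mode the states report what they would change without actually touching the system, which is a cheap sanity check before rolling a self-upgrade out to the whole fleet.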

We are now ready to apply the changes. Let's start with a guinea-pig minion:
[prompt]# salt 'expendable.local' state.highstate
expendable.local:
----------
    State: - pkg
    Name:      salt-minion
    Function:  latest
        Result:    True
        Comment:   Package salt-minion upgraded to latest
        Changes:   salt-minion: {'new': '0.10.2-2.el5', 'old': '0.10.1-1.el5'}
                   salt: {'new': '0.10.2-2.el5', 'old': '0.10.1-1.el5'}
                   
----------
    State: - service
    Name:      salt-minion
    Function:  running
        Result:    True
        Comment:   Service restarted
        Changes:   salt-minion: True

Seems OK; let's check the package version (note that I turned on verbose output with -v):
[prompt]# salt -v 'expendable.local' pkg.version salt
Executing job with jid 20120906174807503993
-------------------------------------------

The following minions did not return:
expendable.local

Oops! Something is not quite right... in fact the minion is up and running, it's just that the master is still using the old connection. A quick check with netstat on the minion side will confirm this (there's a sketch of that check at the end of this section). Luckily for us there's no need to log in to each minion to get Salt working again: simply restart the master and it'll all be ready to go:
[prompt]# /etc/init.d/salt-master restart
Stopping salt-master daemon:                               [  OK  ]
Starting salt-master daemon:                               [  OK  ]
[prompt]# salt -v 'expendable.local' pkg.version salt
Executing job with jid 20120906175310393699
-------------------------------------------
{'expendable.local': '0.10.2-2.el5'}

Very good! Now we can deploy the new state to all minions with:
[prompt]# salt -v -t 60 '*' state.highstate
We'll just have to restart the master after that and we're good to go.
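
As an aside, the stale-connection theory can be verified from the minion side before restarting the master. Assuming the default master ports (4505 for publishing jobs, 4506 for returns), something along these lines will show the minion's connections to the master:
[prompt]# netstat -tnp | grep -E ':(4505|4506)'

If those connections show up as established on the minion while the master still reports it as not returning, restarting the master as shown above is what brings the two back in sync.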

Minions restarting while other states are running or still have to run

One problem with the state definition given above is that the package upgrade (and the consequent minion restart) could be executed while other states are still running or waiting to run. To prevent that we'll edit the state again and add an ordering directive:
salt-minion:
  pkg:
    - latest
    - order: last
  service:
    - running
    - watch:
      - pkg: salt-minion
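
To double-check that the upgrade really ends up at the end of the run you can look at the compiled state data. Assuming your Salt version ships the state.show_lowstate function, the individual chunks, together with the ordering information attached to each one, can be inspected with:
[prompt]# salt 'expendable.local' state.show_lowstate

Look for the salt-minion pkg chunk and verify the order it was given places it after everything else, so that every other state gets a chance to run before the minion restarts itself.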

What about SuSE-based servers?

Since we did not use the distro's default package manager to install Salt on the SuSE servers, we'll have to script our way out with the same tool we used for the installation: pip.
The new state definition becomes this:
salt-minion:
{% if grains['os'] == 'RedHat' %}
  pkg:
    - latest
{% endif %}
{% if grains['os'] == 'SUSE' %}
  pip:
    - installed
    - name: salt
    - upgrade: True
{% endif %}
    - order: last
  service:
    - running
    - watch:
{% if grains['os'] == 'RedHat' %}
      - pkg: salt-minion
{% endif %}
{% if grains['os'] == 'SUSE' %}
      - pip: salt
{% endif %}
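
Since the conditionals key off grains['os'], it's worth checking what values the minions actually report before relying on them; the exact strings can differ between distributions and Salt releases:
[prompt]# salt '*' grains.item os

Whatever comes back from that call is what the Jinja comparisons above have to match.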

Update: while this is the formally correct way to upgrade the SuSE minions, it won't work because the pip state has a bug: it won't upgrade an already installed package. As we can see here, the state simply returns when the package is already installed. I'm going to open a pull request to get it fixed asap.
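
Until that fix lands, a blunt workaround is to have the SUSE branch shell out to pip directly instead of using the pip state. This is only a sketch (the state id below is just a placeholder name), and it comes with a caveat:
{% if grains['os'] == 'SUSE' %}
salt-pip-upgrade:
  cmd:
    - run
    - name: pip install --upgrade salt
    - order: last
{% endif %}

The caveat is that cmd.run is not idempotent: it will execute (and report changes) on every highstate, so anything watching it would fire each time. I'd treat it strictly as a stopgap until the pip state behaves.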

Update March 2013: a reader notes that SaltStack RPM packages are available from the openSUSE repositories, so I suggest you switch to using those.
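
If you go that route, once the repository is configured the install is the usual zypper affair (assuming the package split mirrors the one used on RHEL):
[prompt]# zypper install salt-minion

and the pip branch of the state above can then be replaced with the same pkg/latest block used for the RedHat case, adjusting only the grain check.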
