
heartbeat won't start zope

Today I was installing a web cluster with the usual heartbeat/drbd stuff. The customer also needs zope because a part of the web site requires it, so I went and built a fresh rpm of it (link to zope rpm).
I then added zope to the haresources file, only to find out that heartbeat would cowardly refuse to start it!

When heartbeat starts acquiring resources it first checks their status, by calling each resource script with the status argument.
The problem is that the zope rc script prints 'not running' when it cannot find any zope instance, while heartbeat's resource manager greps the status output for [Rr]unning to decide whether it must start zope or whether it is already up for whatever reason. Since 'not running' also matches that pattern, heartbeat concludes zope is already running and never starts it.
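Here is a minimal sketch of that check (my own reconstruction, not the actual heartbeat code); the status_output string stands in for whatever the zope rc script prints when zope is down:

#!/bin/sh
# Sketch only: reproduces the kind of check heartbeat's resource manager does.
# 'status_output' stands in for the output of `/etc/init.d/zope status`
# when no zope instance is found.
status_output="not running"

# heartbeat greps the status output for [Rr]unning to decide whether the
# resource still needs to be started...
if echo "$status_output" | grep -q '[Rr]unning'; then
    # ...so 'not running' matches and zope is never started.
    echo "resource considered running -> heartbeat skips the start"
else
    echo "resource considered stopped -> heartbeat would start it"
fi

Run it and it prints "resource considered running", which is exactly why zope never comes up.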

The fix should already be in heartbeat cvs by now; it was proposed by Lars Ellenberg of the drbd project. See this link for details:
http://lists.linux-ha.org/pipermail/linux-ha/2004-June/011154.html
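I haven't studied the actual patch, but the gist is to make the status check unambiguous. A purely illustrative sketch of one way to do that (not necessarily what went into cvs) is to filter out 'not running' before grepping, or to have the rc script print 'stopped' instead:

#!/bin/sh
# Illustrative only: one way to make the status check unambiguous.
status_output="not running"

# Drop any 'not running' line before looking for 'running'.
if echo "$status_output" | grep -iv 'not running' | grep -q '[Rr]unning'; then
    echo "resource considered running"
else
    echo "resource considered stopped -> heartbeat starts it"
fi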

