Posts

Showing posts from 2015

How to automatically import a ZFS pool built on top of iSCSI devices with systemd

When using ZFS on top of iSCSI devices, one needs to deal with the fact that iSCSI devices usually appear late in the boot process. ZFS, on the other hand, is loaded early, and the iSCSI devices are not yet present when ZFS scans the available devices for pools to import. This means that not all ZFS pools may be imported after the system has completed booting, even if the underlying devices are present and functional. A quick and dirty solution would be to run zpool import <poolname> after boot, either manually or from cron. A better, more elegant solution is to hook into systemd events and trigger zpool import as soon as the devices are created.
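A sketch of that hookup, assuming a pool named tank mounted at /tank (the rule and unit file names are illustrative):

    # /etc/udev/rules.d/99-zpool-import.rules
    # When a block device carrying a ZFS label appears, pull in the import unit.
    ACTION=="add", SUBSYSTEM=="block", ENV{ID_FS_TYPE}=="zfs_member", TAG+="systemd", ENV{SYSTEMD_WANTS}+="zpool-import-tank.service"

    # /etc/systemd/system/zpool-import-tank.service
    [Unit]
    Description=Import ZFS pool tank once its devices show up
    ConditionPathExists=!/tank

    [Service]
    Type=oneshot
    ExecStart=/usr/sbin/zpool import tank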

Centralized async logging from VBS scripts

Borrow a page out of the snowplow book and log asynchronously from any script with a GET request to a central system. In Visual Basic Script:

    URLGet "http://your.server.com/action/" & WshNetwork.ComputerName & "/" & activity & "/message/" & strValue

    ' Fire-and-forget GET: the third argument (True) makes the call
    ' asynchronous, so the script does not wait for the response.
    Function URLGet(URL)
        Set Http = CreateObject("Microsoft.XMLHTTP")
        Http.Open "GET", URL, True
        Http.Send
    End Function

And get analytics for free on top of your scripts with logstash (or snowplow).

Fun with PostgreSQL and ZFS

I will show how to use ZFS's instant snapshotting and cloning functionality to effortlessly clone a running Postgres database, regardless of its size. Setup: install your Linux OS of choice, then ZFS and Postgres. I use CentOS 7, but most commands used in this post are distro-independent. Create a ZFS pool called tank, or use whatever name suits you. In the pool, create a filesystem called pgdata. For the sake of following a minimalist ZFS best practice, apply the following settings:

    zfs set compression=lz4 tank/pgdata
    zfs set xattr=sa tank/pgdata
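As a preview of where this goes, the snapshot-and-clone round trip might look like this (the clone name, mountpoint and port are illustrative):

    # take a crash-consistent snapshot of the live database and clone it
    zfs snapshot tank/pgdata@now
    zfs clone tank/pgdata@now tank/pgclone

    # the clone mounts at /tank/pgclone by default; remove the stale pid
    # file and start a second Postgres instance on another port
    rm -f /tank/pgclone/postmaster.pid
    pg_ctl -D /tank/pgclone -o "-p 5433" start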

Simple is beautiful: Guava LoadingCache

Cache is king. Database lookups can be expensive, and RAM prices have been falling for years, to the point that developers don't care anymore (ops do, btw). So what if you could use those gigabytes of RAM for caching expensive queries or frequently used data? If you are a Java developer, chances are that you came across Ehcache, Spring cache, or perhaps rolled your own. Grails bundles Ehcache because it can be used by Hibernate, and it also has a plugin which allows caching methods just by annotating them. Unfortunately Ehcache has some quirks, and with the stock configuration it will also not allow parallel deployment (I documented a fix here on this blog). An alternative cache implementation that I recently had a chance to use is the Guava LoadingCache. The implementation is so elegant that it will make you want to use it everywhere! Basically you build a cache, configure TTL and size (optional), and supply a loader.
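A minimal sketch of that pattern (the key/value types and the database lookup are placeholders):

    import java.util.concurrent.TimeUnit;
    import com.google.common.cache.CacheBuilder;
    import com.google.common.cache.CacheLoader;
    import com.google.common.cache.LoadingCache;

    public class UserCache {
        // Bounded cache: at most 10,000 entries, expiring 10 minutes after write.
        private final LoadingCache<Long, String> userNames = CacheBuilder.newBuilder()
                .maximumSize(10000)
                .expireAfterWrite(10, TimeUnit.MINUTES)
                .build(new CacheLoader<Long, String>() {
                    @Override
                    public String load(Long id) {
                        return lookupUserNameInDb(id); // the expensive operation
                    }
                });

        public String userName(Long id) {
            // Returns the cached value, invoking the loader on a miss.
            return userNames.getUnchecked(id);
        }

        private String lookupUserNameInDb(Long id) {
            return "user-" + id; // stand-in for a real database query
        }
    }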

Detect missed executions with OpenNMS

Everyone knows that OpenNMS is a powerful monitoring solution, but not everyone knows that since around version 1.10 it embeds the Drools rule-processing engine. Drools programs can then be used to extend the event-handling logic in new and powerful ways. The following example shows how OpenNMS can be extended to detect missed executions of recurring activities like backups or scheduled jobs.
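To give the flavor, here is a rough sketch of the absence-detection idiom in Drools Fusion (the fact types, field and time window are invented for illustration, and the session must run in STREAM mode; the post wires this into real OpenNMS events):

    declare JobStarted
        @role( event )
        jobName : String
    end

    declare JobSucceeded
        @role( event )
        jobName : String
    end

    rule "Job did not complete in time"
    when
        $s : JobStarted( $job : jobName )
        // fires if no matching success event arrives within 60 minutes
        not JobSucceeded( jobName == $job, this after[0m,60m] $s )
    then
        System.out.println( "Missed execution: " + $job );
    end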

9 months with WIFIWEB

WIFIWEB is a local WDSL internet provider. Since I moved last year I have been a customer on their WDSL Max 10 profile. This is the Pingdom report for the last 9 months. Applications sensitive to latency and micro-interruptions (like Remote Desktop) would drop the connection from time to time. Bandwidth-wise, results varied over the period, but except for one time when I had to call in to fix a performance issue, the experience was pretty smooth, with a download speed consistently in a 6-8 Mb/s window. The 1 Mb/s upload speed was always achieved. Call quality using free VOIP softphones (sflphone or linphone) was generally bad, but I don't know how much of the fault lies with the software versus the connection. Verdict: recommended.

RUNDECK job maintenance

Learn more about Rundeck. Now that I have a fair number of jobs scheduled by Rundeck, how do I periodically prune the job execution history and keep only the last, say, 30 executions for each job?
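A sketch of one way to script this against the Rundeck API (API v12+ endpoints; the host, token and job id are placeholders, and it assumes the execution listing returns newest first):

    # keep the 30 newest executions, delete the rest (repeat for long histories)
    curl -s -H "X-Rundeck-Auth-Token: $TOKEN" \
      "http://rundeck.local:4440/api/12/job/$JOB_ID/executions?offset=30&max=100" \
      | grep -o 'execution id="[0-9]*"' | grep -o '[0-9]*' \
      | while read id; do
          curl -s -X DELETE -H "X-Rundeck-Auth-Token: $TOKEN" \
            "http://rundeck.local:4440/api/12/execution/$id"
        done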

OpenNMS performance: tune Jrobin RRD file strategy

One of the nice aspects of OpenNMS is that, out of the box, it will collect a lot of data from most SNMP-enabled resources. The downside is that such collection is I/O heavy (iops, not throughput): even a moderate installation with hundreds of nodes is enough to swamp all but the fastest disk subsystems (namely those with controllers backed by large write caches). A symptom is high I/O wait on the OpenNMS box itself.

(Chart: I/O wait before and after switching the JRobin backend from FILE to MNIO.)
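For reference, the switch itself is a one-line change; a sketch assuming a stock install layout (verify the file and property name against your version):

    # $OPENNMS_HOME/etc/rrd-configuration.properties
    # Switch the JRobin backend from the default FILE to memory-mapped I/O
    org.jrobin.core.RrdBackendFactory=MNIO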

OpenNMS 15: warm your postgres cache

OpenNMS 15 puts a much higher load on the database than previous versions. Besides tuning Postgres and the OS, and perhaps splitting the app and the db onto different boxes, one aspect that I found really makes a difference is having a warm Postgres cache.
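One possible warm-up technique, assuming PostgreSQL 9.4+ with the pg_prewarm contrib module (the table names are examples from the OpenNMS schema; the post may well use a different method):

    -- load hot tables into shared buffers ahead of time
    CREATE EXTENSION IF NOT EXISTS pg_prewarm;
    SELECT pg_prewarm('events');
    SELECT pg_prewarm('alarms');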

Auto-upload Elasticsearch template mapping with Apache Camel

When feeding data into Elasticsearch, one important step is to configure the correct template for the index/type so that, for instance, numeric fields are stored as numbers, ensuring they can be sorted and compared correctly. The Elasticsearch Logstash plugin has a handy option just for this purpose. If you are not using Logstash you have to do it yourself, either through configuration management, startup scripts, or simply manually launching the appropriate curl command. If you have followed my previous post on using Apache Camel to feed SQL data into Elasticsearch, then it might come natural to use Camel for uploading the template mapping as well.
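For reference, the manual curl variant looks like this (ES 1.x-era _template endpoint; the host, template name and file are placeholders):

    # upload template.json as an index template named "mytemplate"
    curl -XPUT 'http://localhost:9200/_template/mytemplate' -d @template.json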

Camel-Elasticsearch: create timestamped indices

One nice feature of the logstash-elasticsearch integration is that, by default, logstash will use timestamped indices when feeding data to elasticsearch. This means each day's data lives in its own index, which simplifies index management. For instance, suppose you only want to keep the last 30 days:

    elasticsearch-remove-old-indices.sh -i 30

The Apache Camel Elasticsearch component provides no such feature out of the box, but luckily it is quite easy to implement (once you know what to do, /grin).
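To give the flavor of the trick, here is a sketch of a Camel route that computes a logstash-style index name per message (the cluster name and index prefix are placeholders; it assumes the 2.x camel-elasticsearch component, where the indexName header overrides the endpoint setting):

    import java.text.SimpleDateFormat;
    import java.util.Date;

    import org.apache.camel.builder.RouteBuilder;
    import org.apache.camel.component.elasticsearch.ElasticsearchConstants;

    public class TimestampedIndexRoute extends RouteBuilder {
        @Override
        public void configure() {
            from("direct:feed")
                // compute a per-day index name, e.g. logs-2015.06.01
                .process(exchange -> exchange.getIn().setHeader(
                        ElasticsearchConstants.PARAM_INDEX_NAME,
                        "logs-" + new SimpleDateFormat("yyyy.MM.dd").format(new Date())))
                .to("elasticsearch://mycluster?operation=INDEX&indexType=log");
        }
    }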