
Goodbye mainframe

Today I witnessed the shutdown of a Bull DPS 9000 mainframe. It will be replaced by an Oracle RAC cluster running on Linux and HP hardware. My first job as an IT person was creating a suite of shell scripts that essentially fetched data via FTP from the mainframe and loaded it into an Oracle database. That was some 5 years ago, and those scripts more or less worked for all that time.
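For the curious, a minimal sketch of what such a transfer-and-load script might look like (this is not the original code; host names, credentials, file names and the control file are placeholders):

#!/bin/sh
# Sketch: pull a flat export file from the mainframe over FTP,
# then bulk-load it into Oracle with SQL*Loader.
HOST=mainframe.example.com
USER=ftpuser
PASS=secret
FILE=daily_export.dat

# Non-interactive FTP transfer of the flat file
ftp -n "$HOST" <<EOF
user $USER $PASS
binary
get $FILE /tmp/$FILE
bye
EOF

# Load the file into Oracle; daily_export.ctl would describe the record layout
sqlldr userid=scott/tiger control=daily_export.ctl data=/tmp/$FILE log=/tmp/load.log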

The scripts were scheduled to run on a Bull Escala machine (it featured a PowerPC processor) running AIX. Those were my first attempts with vi, in a horrible zsh-only environment! The Escala will be shut down very soon too.

Rest in peace...
