
An alternative auditing strategy for Grails apps

In some applications it is useful, and in some cases required, for the application to audit some or all database operations: for instance, to track when a user updates, deletes or inserts a record.

In Grails this is often handled by a plugin, like the Audit Logging plugin, which hooks into the Hibernate events and then logs the details to an audit table, including a timestamp, the user (if available) who initiated the operation and the type of operation.

Unfortunately the Audit Logging plugin suffers from some issues, and if you, like me, have to ship today, your only option is to roll your own solution.

In this post I will lay out a solution which is simple, efficient and should work in most scenarios. The general requirements are:
  1. there is only one datasource
  2. the underlying DBMS supports triggers and some kind of session variable (most do, but I'll focus on Oracle)
  3. Quartz jobs, background operations and other activities not initiated by an HTTP request must initialize the connection themselves (as the filter does) or auditing of the user will not work. Then again, such activities are not always 'owned' by a user; whether this matters depends mostly on your requirements.
The astute reader will note that the proposed solution can easily be generalized to the case of multiple datasources (item 1). If you have such a solution I'll be glad to post/link it.
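As a sketch of item 3, a Quartz job could set the audit user on its own connection before doing any work. This is only an illustration under my assumptions: NightlyCleanupJob and the "BATCH" label are made up, and the audit_user package it calls is the one defined later in this post:

```groovy
import groovy.sql.Sql

// Hypothetical sketch: a Quartz job sets the audit user itself,
// since no HTTP request (and thus no filter) is involved.
class NightlyCleanupJob {
    def sessionFactory   // injected by Grails

    def execute() {
        def sql = new Sql(sessionFactory.currentSession.connection())
        // "BATCH" is an arbitrary label for operations not owned by a real user
        sql.execute("select audit_user.au_set(?) from dual", ["BATCH"])
        // ... actual job work goes here ...
    }
}
```

If the job runs on behalf of a specific user (e.g. a deferred operation queued by that user), pass that username instead of a fixed label.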

Overview

The solution requires the creation of a Grails filter which sets a session variable holding the details of the logged-in user owning the current request. The session variable can then be accessed by standard DBMS triggers/procedures and logged together with the operation details, without the application even knowing about it.
Besides being faster (no extra round trips to the database, except for the session setup at the beginning of each request), the auditing stays active even for changes made by external SQL tools.

I will be using Oracle as the underlying database for this post and Grails 2.0.x. Again I will gladly link to solutions for other DBMS.

The filter

The filter itself is pretty simple and I think there is not much that needs explaining:

import groovy.sql.Sql

class OracleAuditFilters {
    def filters = {
        alterSession(uri: '/**') {
            before = {
                def session = grailsApplication.mainContext.sessionFactory.currentSession
                if (session != null && session.connection() != null) {
                    def sql = new Sql(session.connection())
                    // Tell the database which user owns this request;
                    // fall back to "ANONYMOUS" for unauthenticated requests.
                    sql.execute("select audit_user.au_set(?) from dual",
                                [request.getRemoteUser() ?: "ANONYMOUS"])
                }
            }
        }
    }
}

On every request it calls a package function that sets, on the database side, the username of the currently logged-in user. Note that the username is retrieved from the request and is therefore authentication-framework agnostic, as long as the framework complies with the J2EE standard (Spring Security does, in case you're wondering).

Note that this is a Grails filter, not a J2EE filter.

DBMS session variable code

Oracle has support for session variables through user-created contexts, but frankly it looks like more pain than gain. I will instead resort to an alternative solution that exploits a particular feature of package variables. Package variables in fact behave much like session variables, in the sense that each session has its own instance of a package variable that can be set/get independently of other sessions.
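For completeness, here is a rough sketch of the context-based approach I am avoiding, so you can judge the pain/gain tradeoff yourself. The names audit_ctx and audit_ctx_pkg are made up, and creating a context requires the CREATE ANY CONTEXT privilege:

```sql
-- A context can only be written through the package named in CREATE CONTEXT
CREATE CONTEXT audit_ctx USING audit_ctx_pkg;

CREATE OR REPLACE PACKAGE audit_ctx_pkg AS
  PROCEDURE set_login(username IN varchar2);
END audit_ctx_pkg;
/

CREATE OR REPLACE PACKAGE BODY audit_ctx_pkg AS
  PROCEDURE set_login(username IN varchar2) IS
  BEGIN
    DBMS_SESSION.SET_CONTEXT('audit_ctx', 'login', username);
  END set_login;
END audit_ctx_pkg;
/

-- Readers would then use: SYS_CONTEXT('audit_ctx', 'login')
```

The package-variable approach below achieves the same session-scoped behavior with less ceremony.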

The code for the package is shown below, please note the use of the package variable login:

CREATE OR REPLACE PACKAGE audit_user
AS
    FUNCTION au_get RETURN varchar2;
    FUNCTION au_set(username IN varchar2) RETURN varchar2;
END audit_user;
/

CREATE OR REPLACE PACKAGE BODY audit_user
AS
    login varchar2(20);

    FUNCTION au_get
    RETURN varchar2
    IS
    BEGIN
      RETURN login;
    END au_get;

    FUNCTION au_set(username IN varchar2)
    RETURN varchar2
    IS
    BEGIN
      login := username;
      RETURN login;
    END au_set;

END audit_user;
/

I think now you're seeing how this goes: the filter calls the audit_user.au_set() function, which informs the database of the user who will be performing the subsequent operations.
Any database code can now access the currently connected user by invoking:

audit_user.au_get()

One place it cannot be used is column defaults: Oracle does not allow calls to user-defined functions in a DEFAULT clause, so a trigger (as shown next) is the way to go.
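To see the session-scoped behavior in action from any SQL tool, assuming the package above has been created (the username 'jsmith' is just an example):

```sql
-- Each database session gets its own copy of the package variable
SELECT audit_user.au_set('jsmith') FROM dual;  -- sets and returns 'jsmith'
SELECT audit_user.au_get FROM dual;            -- returns 'jsmith'

-- A second, separate session would still get NULL from au_get,
-- which is exactly what the trigger below relies on as a fallback.
```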

Audit trigger example

The following SQL defines an Oracle auditing trigger. The actual auditing code is omitted, as it is not relevant to the post and is likely to change depending on preferences and requirements. Note the use of the audit_user.au_get function again, and the fallback to the currently connected database user when it is null (i.e. when using a SQL tool like SQL*Plus).

CREATE OR REPLACE TRIGGER MYTABLE_AUD
  AFTER INSERT OR DELETE OR UPDATE ON MYTABLE
  FOR EACH ROW
DECLARE
  v_operation CHAR(1);
  v_username  VARCHAR2(30);
BEGIN
  IF INSERTING THEN
    v_operation := 'I';
  ELSIF DELETING THEN
    v_operation := 'D';
  ELSE
    v_operation := 'U';
  END IF;

  -- returns the user connected to the web application
  v_username := audit_user.au_get;

  IF v_username IS NULL THEN
    v_username := user; -- falls back to the user currently connected to Oracle
  END IF;

  -- AUDITING CODE GOES HERE

END;
/
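As an illustration of what might go where the comment says AUDITING CODE GOES HERE, here is a minimal sketch; the MYTABLE_AUDIT table and its columns are hypothetical, so adapt them to your requirements:

```sql
-- Hypothetical audit table
CREATE TABLE MYTABLE_AUDIT (
  audit_ts  TIMESTAMP DEFAULT systimestamp,
  username  VARCHAR2(30),
  operation CHAR(1)
);

-- Inside the trigger body, in place of the comment:
INSERT INTO MYTABLE_AUDIT (username, operation)
VALUES (v_username, v_operation);
```

In a real setup you would typically also record the old/new column values via the :OLD and :NEW pseudorecords.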
