Monitoring filesystem for changes

I’ve been looking for a robust way to monitor filesystem changes in Linux. I see a lot of fairly simplistic discussion about “how to use” inotify and inotifywait, incron, direvent, and so on … but no comprehensive discussion about how/why/when each of these (and other) tools would be useful, and what sort of system load they incur … particularly as the number of “watches” increases. This could be a problem in a multi-user system, where different users are setting up watches using bash scripts as handlers for the various event triggers.

How do these approaches differ in terms of system resource usage and scalability, particularly as the number of user-based “event/handler” items increases?
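For anyone wanting to poke at this concretely: the kernel caps inotify usage per UID, and each watch pins a small amount of kernel memory, so these tunables are one place the per-user scaling pressure shows up. A quick sketch (Linux paths; reading other users' fdinfo entries needs appropriate permissions):

```shell
# Per-user inotify limits (each watch pins some kernel memory):
cat /proc/sys/fs/inotify/max_user_watches    # max watches per UID
cat /proc/sys/fs/inotify/max_user_instances  # max inotify instances per UID
cat /proc/sys/fs/inotify/max_queued_events   # event queue depth per instance

# Rough per-process count of watches currently held: each watch shows
# up as an "inotify wd:..." line in the owning fd's fdinfo entry.
grep -c '^inotify' /proc/*/fdinfo/* 2>/dev/null | grep -v ':0$' || true
```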


Hi Stan (mind if I call you Stan?),

I like it! I’ve always liked the idea of having a file monitor, and yet I’ve never actually implemented one. It’s just one of those things that I know is probably important, but there always seems to be something else that’s just slightly more important or urgent to do instead in my homelab. I myself have been looking at Tripwire. Red Hat recently ran their own guide on it: Security monitoring in Linux with Tripwire.

What’re your goals? Security monitoring?

Have you seen this quick review of file watchers? Seems like a good bird’s-eye view of the file watcher landscape to get you started.

-TorqueWrench

It’s actually interesting that you brought this up. By complete coincidence, I ended up needing inotifywait and fanotify this past weekend for a project I’ve been working on for unRAID.

I have been working on an antivirus solution for unRAID. Namely, “dockerizing” ESET NOD32 Antivirus for Linux into a Docker container. If you’re interested in the project you can check it out on Docker Hub and GitHub.

The on-demand scanning works great, but unfortunately its on-access scanning does not appear to be working properly with unRAID, even with the “watch” directories mounted as bind-mounts. Unfortunately, that’s…kind of the whole point of an antivirus solution on a file server. On-demand scanning is nice for periodic checks, but on-access is key to making sure malware doesn’t end up on the server in the first place…

I thought I had found a “good enough” solution with ClamAV, but unfortunately for on-access scanning with ClamAV, you need fanotify, which doesn’t seem to be in unRAID’s Slackware kernel.
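As an aside, a quick way to check whether a given kernel was built with fanotify (the config file locations are guesses; unRAID’s Slackware kernel may not ship a config file at all):

```shell
# Look for CONFIG_FANOTIFY in the kernel build config, wherever it
# happens to live on this system.
grep FANOTIFY /boot/config-"$(uname -r)" 2>/dev/null \
  || zcat /proc/config.gz 2>/dev/null | grep FANOTIFY \
  || echo "no kernel config found here; CONFIG_FANOTIFY status unknown"
```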

With my limited options rapidly being decimated, I’ve decided to fashion an on-access scanner using parts from ESET’s NOD32 on-demand scanner, in much the same way that a caveman sharpened a stick for defense.

A high-level overview of my design:

  • Create a bash script using inotifywait to monitor a directory (recursively)

  • For performance reasons, esets_scan works best when given a whole directory to scan rather than individual files. Therefore, we should hold off on scanning a directory until we’re reasonably sure that writing to it has finished.

  • Since we only care about the directory and not the individual files in it, compile a list of directories with timestamps to a file. As an individual directory is picked up by inotifywait, replace the previous timestamp with sed.

  • After “sufficient time” has passed (~1 min), pass each directory to /opt/eset/esets/sbin/esets_scan {dir_here} and remove it from the list.
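The bookkeeping in the steps above might look something like this. Paths and the event list are illustrative, and it swaps the sed replacement for a delete-then-append on a tab-separated list, which is easier to get right with arbitrary paths:

```shell
#!/usr/bin/env bash
# Sketch of the debounce bookkeeping: WATCH_DIR and LIST are
# placeholder paths; the live loop requires inotify-tools.
WATCH_DIR=${WATCH_DIR:-/mnt/user/incoming}
LIST=${LIST:-/tmp/dirty-dirs.list}

# Record (or refresh) a directory's last-event timestamp in LIST.
# One "epoch<TAB>dir" line per directory; refreshing deletes the old
# line and appends a fresh one (standing in for the sed replacement).
record_dir() {
  local dir=$1 now
  now=$(date +%s)
  touch "$LIST"
  awk -v d="$dir" -F'\t' '$2 != d' "$LIST" > "$LIST.tmp"
  printf '%s\t%s\n' "$now" "$dir" >> "$LIST.tmp"
  mv "$LIST.tmp" "$LIST"
}

# Live monitor: %w is the directory the event occurred in, so events
# anywhere under WATCH_DIR collapse to their parent directory.
monitor() {
  inotifywait -m -r -e close_write,create,moved_to --format '%w' "$WATCH_DIR" |
    while read -r dir; do
      record_dir "$dir"
    done
}
```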

I am undecided how I want to handle the actual scheduling of the scans. I am thinking I will probably just use a second bash job to monitor the inotifywait list file, since the inotifywait monitor script will be sitting in a while loop.
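For what it’s worth, that second job could be as simple as a loop that sweeps the list and scans anything that has been quiet long enough. Same caveats as before: the paths and defaults are placeholders, and the list format is assumed to be "epoch<TAB>dir":

```shell
#!/usr/bin/env bash
# Companion sweep job for the list file written by the monitor.
# LIST/SETTLE/SCANNER defaults are illustrative.
LIST=${LIST:-/tmp/dirty-dirs.list}
SETTLE=${SETTLE:-60}   # seconds of quiet before a directory is scanned
SCANNER=${SCANNER:-/opt/eset/esets/sbin/esets_scan}

# Scan every directory whose last event is older than SETTLE seconds,
# dropping it from the list; keep the rest for the next pass.
scan_quiet_dirs() {
  local now ts dir
  now=$(date +%s)
  touch "$LIST"
  : > "$LIST.keep"
  while IFS=$'\t' read -r ts dir; do
    if [ $((now - ts)) -ge "$SETTLE" ]; then
      "$SCANNER" "$dir"
    else
      printf '%s\t%s\n' "$ts" "$dir" >> "$LIST.keep"
    fi
  done < "$LIST"
  mv "$LIST.keep" "$LIST"
}

# Driver: sweep every 30 seconds.
# while true; do scan_quiet_dirs; sleep 30; done
```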

There’s always something…

Yes, I’ve seen and evaluated most of these. There are some good ones and some “meh” ones. I don’t see a need for yet another one, but it would be nice to make some tweaks to existing ones.

All of the inotify-based file watchers are single-system only. They have no network context. So, if you use a container or VM, you lose the far-end notifications. The only realistic approach I’ve seen to “fix” this is to use a master SSH connection and start an inotifywait on the far-end to ferry stdout back to the near-end system over the SSH link. This is hackish, but works.
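A minimal sketch of that SSH ferry, assuming inotify-tools on the far end; the host and path are placeholders, and the actual ssh invocation is left commented since it needs a real remote:

```shell
# Near-end handler for each line of remote inotifywait output.
handle_remote_event() {
  while read -r path; do
    echo "remote change: $path"   # real handler logic goes here
  done
}

# Wiring (placeholders: user@remote, /srv/shared). A master connection
# via ControlMaster keeps the link cheap to reuse:
# ssh -o ControlMaster=auto -o ControlPath=~/.ssh/cm-%r@%h:%p -o ControlPersist=10m \
#     user@remote \
#     "inotifywait -m -r -e close_write,create --format '%w%f' /srv/shared" \
#   | handle_remote_event
```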

I’m also in a pretty active discussion with the maintainer of GNU’s “direvent” (which, on Linux, uses inotify) to add some form of network context to that package. If you haven’t looked at direvent, it’s pretty good as a centralized wrapper for inotify. Not as flexible as using inotifywait, and not as easy to distribute as incron, but pretty nice.
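For anyone who hasn’t looked at it: a direvent setup is a single config file of watcher blocks, something along these lines if memory serves (the path, event, and handler here are purely illustrative):

```
watcher {
    path /home/stan/inbox;
    event create;
    command "/usr/local/bin/on-new-file $file";
}
```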


All of this discussion goes back to my original question … what are the various system burdens with different “watcher” approaches, and how do they scale? Clearly the inotify approaches are event-based, and so will perform “better” than polling approaches (like scripts, etc.). But how does this play out as the number of inodes increases, and/or the number of users using inotifywait increases? There are a million bash-based approaches that essentially poll and use conventional tools to parse the outcomes. One of my favorites to catch new files is “find” with “-cnewer” and a touched file for comparison … but that really doesn’t resolve all of the angles, and is probably not very efficient.
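That find/-cnewer idiom, sketched as a function: list files whose status changed since the marker file’s mtime, then advance the marker. Names are illustrative, and it has the usual polling caveat that changes landing mid-sweep can be missed or double-reported:

```shell
# Poll-based "what's new since the last sweep?" using find -cnewer:
# -cnewer reports files whose ctime is newer than the marker's mtime.
sweep_new_files() {
  local dir=$1 marker=$2
  # first run: backdate the marker so everything counts as new
  [ -e "$marker" ] || touch -t 197001010000 "$marker"
  find "$dir" -type f -cnewer "$marker"
  touch "$marker"   # advance the marker for the next sweep
}

# Example: sweep_new_files /srv/data /var/tmp/sweep.marker
```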