On Thu, Jul 1, 2021, at 2:23 PM, Vitaliy Lotorev wrote:
> Hi,
> Zuul requires external storage for logs, and I failed to find a simple solution for removing job log directories after a period of time. What I've found so far:
> * zuul.opendev.org uploads logs to AFS and Swift. Looking at https://opendev.org/opendev/system-config/, I can't understand how directories are rotated on AFS.
OpenDev doesn't really rotate the contents in AFS. We store data in AFS that we want to keep for longer than our normal log retention period. Occasionally admins do manual cleanup in AFS, but otherwise we expect that data to stick around.
> * Swift probably has built-in file/log rotation, but I don't have OpenStack and Swift.
Yes, object stores like Swift, S3, Ceph, etc. tend to support object expiry. When you create the object, you annotate it with metadata indicating when it should expire, and background processes in the service then automatically do the cleanup for you. OpenDev relies on this functionality in Swift, but I believe the various roles for object store log uploads support the same functionality with other object stores.
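If it helps, here is a minimal sketch of setting an expiry at upload time with python-swiftclient. The auth endpoint, credentials, and container/object names below are placeholders for the example, not anything OpenDev actually uses:

    # Upload a log file to Swift with a two-week expiry.
    from swiftclient.client import Connection

    conn = Connection(
        authurl='https://keystone.example.com/v3',  # placeholder Keystone endpoint
        user='zuul-logs',                           # placeholder credentials
        key='secret',
        auth_version='3',
        os_options={'project_name': 'ci'},
    )

    two_weeks = str(14 * 24 * 60 * 60)  # expiry expressed in seconds
    with open('job-output.txt', 'rb') as f:
        conn.put_object(
            'logs',                       # container (placeholder)
            'check/1234/job-output.txt',  # object name (placeholder)
            contents=f,
            # Swift's object expirer deletes the object automatically
            # once this many seconds have elapsed.
            headers={'X-Delete-After': two_weeks},
        )

After that there are no cron jobs or find runs to manage; the object store handles deletion itself.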
> * The free Artifactory CE/OSS version doesn't support user plugins, which the recommended cron-based cleanup approach relies on: https://jfrog.com/knowledge-base/artifactory-cleanup-best-practices/
> Could you help with ideas for an in-house OSS storage service that would clean up directories with binary artifacts/logs? In the case of the logs zuul.opendev.org keeps on AFS: are directories cleaned up just by date (e.g. remove all 'check' job logs after two weeks)?
Before OpenDev uploaded logs to Swift, we had our CI system copy logs with scp/rsync to a log server with 12TB of disk. On that server we ran cron jobs that used find to expire any files older than our desired retention date. I think this, https://opendev.org/opendev/puppet-openstackci/src/branch/master/files/log_a..., is the script we were using at the time. It is fairly complicated and worth understanding before you try running it, since it can be destructive. It is also possible you can simplify it further. Note that it also compresses recent files in addition to deleting old ones. The major downside to this approach is that you have to stay on top of your disk usage and keep track of whether the cleanups are producing enough headroom for new logs; otherwise your test jobs will all start failing and you will have to fix things very quickly.
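As a rough illustration of that cron approach, here is a much simplified Python sketch, assuming all logs live under a single root directory. The paths and thresholds are made up, and it deletes data, so test it against a copy first:

    #!/usr/bin/env python3
    # Simplified find-and-expire sketch: delete files older than
    # RETENTION_DAYS under LOG_ROOT, gzip files older than
    # COMPRESS_DAYS, and prune directories left empty.
    import gzip
    import os
    import shutil
    import time

    LOG_ROOT = '/srv/static/logs'  # placeholder log server path
    RETENTION_DAYS = 30
    COMPRESS_DAYS = 7

    now = time.time()
    for dirpath, dirnames, filenames in os.walk(LOG_ROOT, topdown=False):
        for name in filenames:
            path = os.path.join(dirpath, name)
            st = os.stat(path)
            age_days = (now - st.st_mtime) / 86400
            if age_days > RETENTION_DAYS:
                os.unlink(path)
            elif age_days > COMPRESS_DAYS and not name.endswith('.gz'):
                # Compress older-but-still-retained files to save space.
                with open(path, 'rb') as src, gzip.open(path + '.gz', 'wb') as dst:
                    shutil.copyfileobj(src, dst)
                # Keep the original timestamp so the compressed copy
                # still expires RETENTION_DAYS after the job ran.
                os.utime(path + '.gz', (st.st_atime, st.st_mtime))
                os.unlink(path)
        # Remove directories emptied by the deletions above.
        if dirpath != LOG_ROOT and not os.listdir(dirpath):
            os.rmdir(dirpath)

Run daily from cron and paired with disk usage monitoring, that covers the same ground the puppet-openstackci script does, minus its edge-case handling.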
> -- Regards, Vitaliy