Automatically compress large log files

Automatically compressing large log files is an efficient way to save disk space and manage system resources. Here’s how to implement automatic compression using common tools and scripting methods on Linux-based systems.


Why Compress Log Files?

  • Save disk space: Logs can grow rapidly, especially on active servers.

  • Improve performance: compressing rarely accessed logs cuts the I/O cost of backups and transfers.

  • Simplify archiving: Compressed files are easier to transfer and store.


Common Compression Tools

  1. gzip – Fast, widely supported.

    bash
    gzip filename.log

    Creates filename.log.gz.

  2. bzip2 – Better compression than gzip, slower.

    bash
    bzip2 filename.log

    Creates filename.log.bz2.

  3. xz – Highest compression ratio, slowest.

    bash
    xz filename.log

    Creates filename.log.xz.
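
All three tools replace the original file with a compressed copy by default. A few commonly useful flags (note that -k requires reasonably recent versions of these tools):

bash
# Keep the original file alongside the compressed copy
gzip -k filename.log

# Trade speed for a better ratio (compression levels 1-9)
gzip -9 filename.log

# Decompress when the log needs to be read again
gunzip filename.log.gz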


Automating with logrotate

logrotate is a standard utility on most Linux distributions for log management, including rotation and compression.

Configuration Example

Edit or create a configuration file in /etc/logrotate.d/:

bash
/var/log/myapp/*.log {
    daily
    missingok
    rotate 14
    compress
    delaycompress
    notifempty
    create 0640 root adm
    postrotate
        systemctl reload myapp
    endscript
}

  • compress – Enables compression using gzip by default.

  • delaycompress – Delays compression until the next rotation.

  • rotate 14 – Keeps the last 14 rotated logs and discards older ones.

  • postrotate – Runs the enclosed commands after rotation, often used to reload or restart services.

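To verify a configuration before relying on it, run logrotate by hand; the -d flag performs a dry run that prints planned actions without touching any files (the file name below assumes the snippet above was saved as /etc/logrotate.d/myapp):

bash
# Dry run: show what would happen without rotating anything
logrotate -d /etc/logrotate.d/myapp

# Force an immediate rotation to exercise the full cycle
logrotate -f /etc/logrotate.d/myapp
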

Automating via Cron Jobs

For custom setups or older systems, a cron job with a script can automate compression.

Sample Script

bash
#!/bin/bash
LOG_DIR="/var/log/myapp"
MAX_SIZE=104857600  # 100 MB

# Compress every .log file in LOG_DIR that exceeds MAX_SIZE
find "$LOG_DIR" -type f -name "*.log" | while read -r file; do
    if [ "$(stat -c%s "$file")" -gt "$MAX_SIZE" ]; then
        gzip "$file"
    fi
done

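Make the script executable before scheduling it:

bash
chmod +x /path/to/compress-logs.sh
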
Add to Cron

Edit cron with crontab -e:

bash
0 * * * * /path/to/compress-logs.sh

This runs the script every hour.
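
If hourly is too frequent, adjust the schedule fields; for example, to run once a day at 02:00:

bash
0 2 * * * /path/to/compress-logs.sh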


Using systemd Journal Compression

If your system uses journald, it already supports compression.

Check Current Configuration

bash
cat /etc/systemd/journald.conf

Enable Compression

Add or modify:

ini
[Journal]
Compress=yes
SystemMaxUse=500M
SystemKeepFree=100M

Then restart the service:

bash
systemctl restart systemd-journald
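
journalctl can also report and trim the journal's disk usage directly:

bash
# Show total disk space consumed by the journal
journalctl --disk-usage

# Remove journal entries older than 30 days
journalctl --vacuum-time=30d
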

Managing Old Logs

To remove very old compressed files, use find:

bash
find /var/log -name "*.gz" -type f -mtime +30 -delete

Deletes .gz files older than 30 days.
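
Before scheduling the deletion, you can preview the matches by dropping -delete; find then simply lists the files that would be removed:

bash
find /var/log -name "*.gz" -type f -mtime +30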


Monitoring and Alerts

To avoid silent failures:

  • Set email alerts via cron or monitoring software.

  • Log actions inside your compression scripts (a sketch follows this list).

  • Monitor disk usage with tools like ncdu or du -sh.

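As a sketch of the logging point above, a compression script can append a timestamped record of every action it takes; the log path /var/log/compress-logs.log here is a hypothetical choice:

bash
#!/bin/bash
# Hypothetical action log for the compression script; adjust the path as needed
ACTION_LOG="/var/log/compress-logs.log"

compress_and_log() {
    if gzip "$1"; then
        echo "$(date '+%F %T') compressed $1" >> "$ACTION_LOG"
    else
        echo "$(date '+%F %T') FAILED to compress $1" >> "$ACTION_LOG"
    fi
}

compress_and_log /var/log/myapp/example.log
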

Best Practices

  • Use logrotate for apps writing directly to files.

  • Compress only inactive logs to avoid corruption.

  • Archive to cloud storage or external drives if necessary (see the sketch after this list).

  • Test your setup in a staging environment.

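For the archiving point, a minimal sketch using the AWS CLI; the bucket name is a placeholder, and this assumes the aws tool is installed and configured:

bash
# Sync only compressed logs to an archive bucket (bucket name is a placeholder)
aws s3 sync /var/log/myapp/ s3://my-log-archive/myapp/ --exclude "*" --include "*.gz"
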

Conclusion

Automatically compressing large log files is a simple but powerful practice that can significantly reduce disk usage and improve system manageability. Whether through logrotate, cron jobs, or systemd-journald, Linux offers robust tools to implement this with minimal overhead.
