Full Server

Recently one of our app servers stopped serving requests. The server itself was still running, but it had run out of disk space. The database wasn’t too large, and the Laravel app’s log files weren’t massive either.

The first step, then, was to find any large files and see what was clogging up the system.

find / -size +10M -ls
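If that produces a lot of output, a variant like the following (assuming GNU du and sort, which are standard on most Linux servers) sorts the matches by size so the biggest offenders appear first:

find / -size +10M -exec du -h {} + 2>/dev/null | sort -rh | head -n 20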

This revealed a 17 GB log file: /var/log/mysql/mysql-slow.log

It appeared that MySQL was taking its time with some queries.

We sync our database with various external APIs every day. These records are inserted in bulk, so I don’t mind those queries taking over 0.1 seconds to run (the slow-query log threshold set in our MySQL config). What I do mind is having the server run out of space due to large log files.
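The immediate fix is to reclaim the space the log is using. Note that simply deleting the file while mysqld still has it open won’t free the space until MySQL is restarted, so truncating it in place is the safer option:

sudo truncate -s 0 /var/log/mysql/mysql-slow.log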

Solving the problem

The first thing you should do if you are having issues like this is to analyse the log file in question and see what is going on. If you are getting long query times on transactions involving your users, that is something worth looking into. The odd long query from a backend script, or in our case a bulk insert from an API that doesn’t affect user load times, shouldn’t be much of a problem (confirmed in our case with further investigation).

Analyse the log file

sudo -i mysqldumpslow /var/log/mysql/mysql-slow.log

If you find anything abnormal here then it’s worth investigating.
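If the plain output is overwhelming, mysqldumpslow can also sort and limit the results; for example, -s t sorts by query time and -t 10 shows only the top ten entries:

sudo -i mysqldumpslow -s t -t 10 /var/log/mysql/mysql-slow.log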

To modify the slow-query threshold, we first need to log in to MySQL as the root user.

sudo -i mysql

Then we can modify the threshold with the following command:

set global long_query_time = 0.5;
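Note that set global only affects connections opened after the change (existing sessions keep their old value), and the setting won’t survive a MySQL restart. You can confirm it took effect with

show global variables like 'long_query_time';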

This number can be adjusted to suit your needs: setting it too high could hide potential issues that you might want to investigate, while setting it too low could fill up your hard drive.
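To make the new threshold survive a restart, set it in the MySQL config file as well. On Debian/Ubuntu this is typically /etc/mysql/mysql.conf.d/mysqld.cnf, but the exact path varies by distribution, so check your own setup:

[mysqld]
# keep the slow log enabled, with the higher threshold
slow_query_log = 1
slow_query_log_file = /var/log/mysql/mysql-slow.log
long_query_time = 0.5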

Now your server shouldn’t fill up so dramatically, and you can still view and investigate slow queries to keep things running smoothly.
