I was about to write down some thoughts about a dot-matrix LCD display… when my server failed to respond. It did reply to a ping, so network connectivity was OK. When I was able to log in again a few minutes later, I noticed an insane load.
load average: 47.38, 80.48, 55.56
80?!?! What the fuck is my server doing?! What caused this behaviour? Is it a denial-of-service attack? If so, who would have an interest in bringing my server down?
Anyway, apart from the insane load and the server being temporarily unavailable, the real problem is that this condition eventually causes the server to run out of memory, at which point the kernel starts killing tasks, and clamd is the most likely victim. When clamd is down, the mail server will refuse to handle any more mail. The default behaviour in an out-of-memory condition is to kill the task using the most memory, and ClamAV is a memory-hungry daemon. Why does it use that much memory? Anyhow, I am wondering: is there a way to exclude a specific process from the list of tasks the kernel might kill?
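As it turns out, the Linux OOM killer can be steered per process through /proc. A minimal sketch, assuming a kernel of 2.6.36 or newer (older kernels use /proc/<pid>/oom_adj instead); a score of -1000 exempts the process entirely, and the setting has to be reapplied whenever clamd restarts:

    # Exempt clamd from the OOM killer (run as root).
    # oom_score_adj ranges from -1000 (never kill) to 1000 (kill first).
    for pid in $(pidof clamd); do
        echo -1000 > /proc/$pid/oom_score_adj
    done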
I believe the high load is caused by nearly running out of memory, which results in heavy swap usage. Anyhow, every time this happens, Apache is the cause. So, that raises the question: is there a way to prevent Apache (and anything that runs under it, such as PHP) from using too much memory?
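On the PHP side there is at least one straightforward cap: memory_limit in php.ini bounds what a single request may allocate. The value below is only an illustration; a script that exceeds it dies with a fatal error instead of dragging the whole box into swap:

    ; php.ini: cap the memory a single PHP request may allocate
    memory_limit = 64M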
The last time this problem occurred, there were 128 instances of Apache in memory. I figured I should reduce that to a much lower value. I have had problems like this before, and as a solution I started using the prefork MPM. Its configuration lets me set the maximum number of instances, and I have reduced it to a lower value, roughly as sketched below. I hope this prevents the problem from appearing in the future.
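For reference, the relevant prefork knobs look roughly like this; the numbers are hypothetical, and the rule of thumb is that MaxClients times the per-child memory footprint should stay well below physical RAM (MaxClients was renamed MaxRequestWorkers in Apache 2.4):

    # Apache prefork MPM settings, illustrative values only
    <IfModule mpm_prefork_module>
        StartServers          2
        MinSpareServers       2
        MaxSpareServers       5
        MaxClients           20    # hard cap on simultaneous children
        MaxRequestsPerChild 500    # recycle children so leaked memory is returned
    </IfModule>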
So, in the end, it appears to be a simple misconfiguration. No attack whatsoever.
I have reduced the number of Apache instances even further, as it was still clogging up my system. Output from top reveals that the most memory-hungry Apache instances took up to 47 MB of RAM each. That explains why it causes problems: at that size, the old limit of 128 instances could add up to roughly 6 GB.
What I would like to know is which (PHP) script is responsible for using up so much memory. How can I determine that?
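One approach that should work: let PHP itself log the peak memory of every request, then grep the log for the biggest numbers. A sketch, assuming PHP 5.3 or newer (for the anonymous function) and that php.ini's auto_prepend_file points at a small helper; the file name and path are hypothetical. Prepending rather than appending means the shutdown hook is registered even when a script dies halfway:

    <?php
    // log_mem.php, loaded via php.ini:
    //   auto_prepend_file = /var/www/log_mem.php
    // Logs the peak memory of each request so heavy scripts stand out.
    register_shutdown_function(function () {
        $peak = memory_get_peak_usage(true) / (1024 * 1024);
        $uri  = isset($_SERVER['REQUEST_URI']) ? $_SERVER['REQUEST_URI'] : '(cli)';
        error_log(sprintf('peak %.1f MB for %s', $peak, $uri));
    });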
But that probably wasn't the only problem. When the system load was high again, I noticed heavy CPU usage by MySQL, so something was processing a lot of data. I already suspected a specific website. That site runs an e107 CMS, the very same software that ran on the first BlaatSchaap website. At the request of the person for whom I made the site, I left the forum open for non-registered users. Spambots found this rather interesting and spammed their spam right in there; it's what they do.

Yesterday I closed the unregistered posting option, but the problem continued, so I have now closed the forums completely. Perhaps just accessing the spammed data is that heavy on the database? That seems weird; databases are supposed to handle massive amounts of data. (OK, it was MySQL and not PostgreSQL, but still…) Anyhow, when I removed one of those spam threads from the forum, the MySQL thread was using lots of CPU, so that is why I have deactivated that forum for now.
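To find out which queries are actually eating the CPU, the slow query log seems like the right tool here. A sketch with illustrative values, assuming MySQL 5.1 or newer (the config path varies per distro); in the meantime, SHOW FULL PROCESSLIST shows what the busy thread is doing right now:

    # /etc/mysql/my.cnf, [mysqld] section, illustrative values
    [mysqld]
    slow_query_log      = 1
    slow_query_log_file = /var/log/mysql/slow.log
    long_query_time     = 2    # log queries that take longer than 2 seconds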