by Adam Pedersen
I'm getting to know far more about servers than I ever wanted to, after hundreds of hours of Google research trying to squeeze/beat performance out of Apache. I do have 15 years programming experience in other areas, and I've reached the conclusion that the only experts in Apache/Linux are the programmers who wrote (and poorly documented) all this stuff. So I've gathered everything I could learn after countless hours of frustration and I'm writing this up in return for the immense amount of help I've received from the documentation of others.
If you're reaching the limits of your Apache server because you're serving a lot of dynamic content, you can either spend thousands on new equipment or reduce bloat to increase your server capacity by a factor of two to ten. This article concentrates on important and weakly documented ways of increasing capacity without the need for additional hardware.
Understanding Server Load Problems
There are a few common areas of server load problems, and a thousand uncommon. Let's focus on the top three I've seen:
- Drive Swapping, where too many processes (or runaway processes) use too much RAM.
- CPU, from poorly optimized DB queries, poorly optimized code, and runaway processes.
- Network, whether hardware limits or moron attacks.
Managing Apache's RAM Usage
Apache processes use a ton of RAM. This issue becomes major when you realize that after each process has done its job, the bloated process sits and spoon-feeds data to the client, instead of moving on to bigger and better things. This problem is compounded by a bit of essential info that should really be more common knowledge:
If you serve 100% static files with Apache, each httpd process will use around 2-3 MB of RAM.

If you serve 99% static files and 1% dynamic files with Apache, each httpd process will use from 3-20 MB of RAM (depending on your MOST complex dynamic page).
This occurs because a process grows to accommodate whatever it is serving, and NEVER decreases until that process dies. Unless you have very few dynamic pages and major traffic fluctuation, most of your httpd processes will soon take up an amount of RAM equal to the largest dynamic script on your system. A very smart web server would deal with this automatically. As it is, you have a few options to manually improve RAM usage.
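One blunt but effective way to act on this is to cap the number of processes at what your RAM can actually hold, and to recycle bloated processes periodically so they return to their lean starting size. Here's a minimal sketch for Apache's prefork model; the numbers are assumptions for illustration (roughly 1 GB of RAM free for Apache, and the 20 MB worst-case process size from above), so substitute your own measurements:

```apache
# Hypothetical sizing: ~1 GB free for Apache / ~20 MB per bloated process
# means roughly 50 concurrent processes fit before the box starts swapping.
MaxClients           50

# Recycle each process after a fixed number of requests, so a process that
# grew to serve your largest dynamic page is eventually killed and replaced
# by a fresh 2-3 MB one instead of hoarding RAM forever.
MaxRequestsPerChild  1000
```

Setting MaxRequestsPerChild too low wastes CPU on constant forking; too high and bloated processes linger. Somewhere in the hundreds to low thousands is a common starting point, then watch your per-process memory and adjust.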
Reduce Wasted Processes by Tweaking
This is a tradeoff. The KeepAliveTimeout configuration setting adjusts the amount of time a process sits around doing nothing but taking up space. Those seconds add up in a huge way. Using KeepAlive can increase speed for both you and the client
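In practice this means keeping KeepAlive on, but shortening the timeout so an idle process frees up quickly while a client can still fetch a page and its images over one connection. A hedged example (the 2-second value is an assumption; the default is much longer, and the right number depends on your traffic):

```apache
KeepAlive On

# Drop idle connections fast: a process held open for 15 idle seconds is
# 15 seconds of RAM doing nothing. Values of 2-5 seconds are common.
KeepAliveTimeout 2

# Upper bound on requests served over a single kept-alive connection.
MaxKeepAliveRequests 100
```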