My site has gotten pretty large. We have over 12,000 pages and about 8,000-10,000 hits per day, and our database is about 3 GB. We're running on a dedicated server with 4 GB of RAM, yet the server load has been gravitating toward 2-3, sometimes 4 or 5 when it gets out of control. I'm trying to tune Apache and MySQL to find what works best for a site this size.
Here's a quick snapshot of the resources used:
top - 10:24:51 up 5 days, 12:37, 1 user, load average: 2.09, 2.04, 3.08
Tasks: 152 total, 1 running, 151 sleeping, 0 stopped, 0 zombie
Cpu(s): 13.4%us, 1.6%sy, 0.0%ni, 70.9%id, 14.0%wa, 0.1%hi, 0.0%si, 0.0%st
Mem: 3579200k total, 2564120k used, 1015080k free, 50428k buffers
Swap: 1959920k total, 337520k used, 1622400k free, 1206828k cached
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
32591 mysql 15 0 675m 133m 5364 S 98.5 3.8 4:48.08 mysqld
1 root 15 0 10352 576 544 S 0.0 0.0 0:00.80 init
2 root RT -5 0 0 0 S 0.0 0.0 0:00.85 migration/0
3 root 34 19 0 0 0 S 0.0 0.0 0:00.03 ksoftirqd/0
4 root RT -5 0 0 0 S 0.0 0.0 0:00.00 watchdog/0
5 root RT -5 0 0 0 S 0.0 0.0 0:00.33 migration/1
6 root 34 19 0 0 0 S 0.0 0.0 0:00.00 ksoftirqd/1
7 root RT -5 0 0 0 S 0.0 0.0 0:00.00 watchdog/1
8 root 10 -5 0 0 0 S 0.0 0.0 0:00.01 events/0
9 root 10 -5 0 0 0 S 0.0 0.0 0:00.02 events/1
10 root 10 -5 0 0 0 S 0.0 0.0 0:00.00 khelper
39 root 11 -5 0 0 0 S 0.0 0.0 0:00.00 kthread
44 root 10 -5 0 0 0 S 0.0 0.0 0:00.00 kblockd/0
45 root 10 -5 0 0 0 S 0.0 0.0 0:00.00 kblockd/1
46 root 14 -5 0 0 0 S 0.0 0.0 0:00.00 kacpid
105 root 14 -5 0 0 0 S 0.0 0.0 0:00.00 cqueue/0
106 root 14 -5 0 0 0 S 0.0 0.0 0:00.00 cqueue/1
OK, MySQL was behaving pretty badly when this snapshot was taken. Generally, restarting MySQL lowers the load for a while.
Here's my /etc/my.cnf:
[mysqld]
max_connections = 50
key_buffer = 256M
table_cache = 4096
sort_buffer_size = 2M
read_buffer_size = 128K
read_rnd_buffer_size = 128K
myisam_sort_buffer_size = 64M
max_heap_table_size = 48M
tmp_table_size = 48M
wait_timeout = 45
interactive_timeout = 300
thread_cache_size = 8
innodb_buffer_pool_size = 5M
innodb_thread_concurrency = 4

[mysqldump]
max_allowed_packet = 16M

[isamchk]
key_buffer = 96M
read_buffer = 128K
write_buffer = 128K

[myisamchk]
key_buffer = 512M
sort_buffer_size = 64M
read_buffer = 128K
write_buffer = 128K
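As a sanity check on the configuration above, a rough worst-case memory ceiling for MySQL is the global buffers plus the per-connection buffers multiplied by max_connections. This is the same back-of-the-envelope calculation tools like mysqltuner perform; it's an upper bound under stated assumptions, not what MySQL actually allocates:

```python
# Back-of-the-envelope MySQL memory ceiling from the my.cnf values above.
# All figures in MB; this is an upper bound, not measured usage.
key_buffer = 256
innodb_buffer_pool = 5
tmp_table = 48  # one in-memory temp table, counted once for simplicity

# Per-connection buffers: sort (2M) + read (128K) + read_rnd (128K)
per_connection = 2 + 0.125 + 0.125
max_connections = 50

ceiling = key_buffer + innodb_buffer_pool + tmp_table + max_connections * per_connection
print(f"worst case: ~{ceiling:.0f} MB")
```

That lands around 420 MB, far below 4 GB of RAM, which suggests the pressure is coming from the query workload (e.g. on-disk temporary tables) rather than from these buffer settings alone.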
And here are some settings from httpd.conf, since Apache also gives us trouble at times:
I know everything's set pretty low, but Apache seems to spiral out of control if MaxClients is set too high.
Any help would be greatly appreciated with this.
Please try to upgrade your memory.
Do you think this is what I need to do?
It's seeming that way to me as well. I have httpd under control, at around 300-400 MB, but MySQL seems to need at least 3 GB to run, which is most of our memory. We're sitting on about 300 MB of swap at all times now.
I installed Varnish to at least help speed things up, but MySQL is still going out of control.
I'm thinking of either splitting MySQL off onto its own server or upgrading the entire server.
Your primary strategy should be to profile your MySQL queries: look for the slowest ones, then check whether they can be made more efficient, e.g. check that the indexes are optimal.
Also look for any commonly used queries that aren't being cached. I'd probably run apachebuddy/apachetuner/mysqltuner to check the configurations, and wtop to monitor specific page activity more closely. 10k hits a day is only about 14 hits a minute spread over a 12-hour busy period, which isn't a lot for a 4 GB server to handle.
It may not be a lot, but again, our database is pretty large.
There's not much I can do about the queries. We're using Drupal, so I can't really change them.
You haven't used all of your memory yet, but you have dipped into swap. Have you tried tools such as tuning-primer or mysqltuner? (The latter is especially useful for finding fragmented tables.)
In that case, the first port of call would be to make sure Drupal is set up to cache pages. I'm not particularly familiar with Drupal, so I can't say whether it has built-in caching or whether there are add-ons that implement it, but I'd be surprised if it didn't.
You can also still add or change indexes, and tune your MySQL configuration toward the poorly performing areas.
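For instance, checking a suspect query with EXPLAIN and then adding an index might look like the sketch below. The table and column are borrowed from Drupal's schema purely for illustration; your slow queries and the right indexes for them will differ, and Drupal may already index these columns:

```sql
-- See how MySQL executes a query that filters on creation time
EXPLAIN SELECT nid, title
FROM node
WHERE created > UNIX_TIMESTAMP() - 86400;

-- If the plan shows a full table scan (type: ALL), try adding an index
ALTER TABLE node ADD INDEX idx_created (created);
```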
There's still plenty you can do: turn on slow query logging, or use mytop to identify particular queries, then check them with EXPLAIN and see what can be done. For example, if a slow query is creating temporary tables (often the case with large databases), you may be able to reconfigure MySQL to use tmpfs instead of disk for temporary files, which is a lot quicker if you have the spare memory.
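On a MySQL 5.0-era server, both of those suggestions are a few lines in my.cnf. The paths here are examples, the log directory must exist and be writable by mysqld, and the slow-log directive spelling changed in later versions (newer releases use slow_query_log / slow_query_log_file):

```ini
[mysqld]
# Log queries that take longer than 2 seconds
log-slow-queries = /var/log/mysql/slow.log
long_query_time  = 2

# Put on-disk temporary tables on a RAM-backed filesystem
tmpdir = /dev/shm
```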
I concur with turning on caching, for example memcached. And if you're comfortable installing software, I'd also recommend swapping out Apache for Nginx with php-fpm, which in my experience is a LOT faster than Apache.
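A minimal Nginx + php-fpm server block might look like the sketch below. The server name, document root, and socket path are assumptions, and Drupal's clean-URL handling is simplified to the classic try_files fallback:

```nginx
server {
    listen      80;
    server_name example.com;
    root        /var/www/drupal;
    index       index.php;

    # Drupal-style clean URLs: fall back to index.php
    location / {
        try_files $uri /index.php?q=$uri&$args;
    }

    # Hand .php requests to php-fpm over a Unix socket
    location ~ \.php$ {
        include       fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_pass  unix:/var/run/php-fpm.sock;
    }
}
```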
If you can hack it into Drupal, I would even go for full-page caching: http://www.kingletas.com/2012/08/full-page-cache-with-nginx-and-memcache.html
In a test we ran at work a while back, we went from 50 requests per second to 2,500 requests per second with 100 concurrent users and almost no load. Of course, you can only apply this technique if your content doesn't change much.
A bit late, but what kind of site are you running? Does it focus on logged-in users, or do all visitors see the same content? If the latter, Varnish should sort out most of your problems, because not every hit will need MySQL queries.
Our site also benefited a lot from installing memcached in front of our MySQL database; we saw a performance gain of 30-40% at peak moments.
You might also want to install APC to speed up PHP (if you're using it) and cut the load the server spends recompiling the PHP code on every hit.
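Enabling APC is typically just installing the extension (e.g. via pecl) plus a few php.ini lines. The values below are illustrative starting points, not tuned settings (note that older APC releases expect a bare megabyte count for the size, e.g. 64 instead of 64M):

```ini
; Load and enable the APC opcode cache
extension = apc.so
apc.enabled = 1

; Shared memory for cached opcodes; size it to fit your codebase
apc.shm_size = 64M
```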
This topic is now closed. New replies are no longer allowed.