I have two tables loaded with content-rich rows. One of the columns is essentially a statistical counter that keeps track of how many times that content has been viewed.
The tables are MyISAM right now, but I'm thinking of switching to InnoDB because I'm having a major problem with delays due to these tables locking. I probably run about one million (or more) +1 queries like so:
$addview = mysql_query("UPDATE table SET views=views+1 WHERE ... LIMIT 1",$rsc);
Doing a million of those a day seems to be creating the locking issue I'm having. I have a few options I've been considering:
- Just switch those two tables to InnoDB.
- Create a new InnoDB table on the same server to handle the UPDATE +1 queries.
- Use an entirely new server with a dedicated InnoDB table to handle the UPDATE +1 queries.
With option 2 or option 3, I'd create an hourly PHP script that takes the numbers from the InnoDB counter table and updates the content tables with the new counts in a single query.
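A rough sketch of what that hourly roll-up query could look like, assuming a counter table named `view_counts (content_id, views)` and a content table named `content (id, views)` — both names are illustrative, not from the post:

```sql
-- Roll the accumulated counts into the content table in one statement
-- using MySQL's multiple-table UPDATE syntax.
UPDATE content
  JOIN view_counts ON view_counts.content_id = content.id
  SET content.views = content.views + view_counts.views;

-- Then reset the counters so the next hour starts from zero.
UPDATE view_counts SET views = 0;
```

Note that increments landing between the two statements could be lost; since the counter table would be InnoDB, wrapping both statements in a transaction (or deleting the rolled-up rows instead of zeroing them) is worth considering.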
What sounds like the way to go? Option 1 is obviously the easiest, but I've heard there are some speed/efficiency drawbacks in going from MyISAM to InnoDB. I don't mind spending a bit more to get a server dedicated to keeping up with the UPDATE +1 queries.
All feedback appreciated.
a million queries? whoa
i don't have hands-on experience with that type of heavy usage, and i surely don't know that "in terms of speed/efficiency there are some drawbacks from going from myisam to innodb" -- in fact i would expect marked improvements, since myisam uses table-level locking whereas innodb uses row-level locking
why not just switch over and see? you don't even have to unload/reload the table to do that, and if you don't get immediate benefits, you can easily switch back
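Switching (and switching back) is a single statement each way; a sketch, using a hypothetical table name `content`:

```sql
-- Switch the storage engine in place; MySQL rebuilds the table for you.
ALTER TABLE content ENGINE=InnoDB;

-- And to switch back if it doesn't help:
ALTER TABLE content ENGINE=MyISAM;
```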
Really? That was my next question. One could just hit the switch to innodb and switch back without anybody or anything being the wiser?
Just make a backup first, as always. Switching engines copies the contents of the table into a different file on the hard disk. If you have gigabytes of data that can take minutes or hours; if you don't, it can be a few seconds. Your disks can probably only copy around 50 MB/s since they won't be totally idle at the time, so you can do the math and figure out how much time you need before switching, if it's important -- at 50 MB/s, a 10 GB table would take roughly 200 seconds. Or just take note of how long it takes to make a backup of the table; that'll be about the same.
I do over a million of those "UPDATE table SET views=views+1" queries a day for W3Counter. That's in addition to a couple million INSERT queries a day happening at the same time on other tables. That's on each DB server, which aren't super servers considering the workload... just a single quad-core Xeon processor and 8GB RAM each.
I didn't have an issue doing that with either MyISAM or InnoDB. The table with all the updates usually has around 10 million rows in it.
I did eventually settle on InnoDB as the better choice for my usage, but the database is overall a hybrid, as MyISAM has other advantages I exploit on other tables. Just makes balancing the settings for various memory pools and per-query buffers a little more difficult.
Are you sure the delay is because of locking? I don't know that I'll be able to tell you something useful, but I'm curious about this... run a SHOW STATUS query, what are the values for Table_locks_immediate and Table_locks_waited?
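For reference, the check looks like this (the pattern match pulls both counters at once):

```sql
-- Table_locks_immediate: lock requests granted right away.
-- Table_locks_waited: requests that had to wait for a table lock;
-- a high waited/immediate ratio suggests table-level lock contention.
SHOW GLOBAL STATUS LIKE 'Table_locks%';
```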
Here we go. This is during a slow period, non-peak hours:
That's a much higher wait/immediate ratio than I get, guess locking is a problem. I'd try the InnoDB switch. If you're gonna do that, remember that you might want to set some innodb configuration values in your my.cnf file.
The only important ones for me were
innodb_buffer_pool_size = 1536M
innodb_file_per_table = 1
That gives 1.5GB to the InnoDB buffer so I can keep the table with all the UPDATEs in RAM, while the MyISAM tables fight over the other 5-6GB. The file per table thing is to make disaster recovery easier on me; I don't need all the tables to get the site back online after a major failure like a disk loss, so I can restore the important ones first.
You might also try different settings for innodb_flush_log_at_trx_commit.
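The three values for that setting trade durability for speed; a my.cnf sketch (the comments describe stock MySQL behavior, and 2 is just an example choice):

```
# innodb_flush_log_at_trx_commit:
#   1 (default) - flush the log to disk at every commit; safest, slowest
#   2 - write at every commit, flush to disk about once per second
#   0 - write and flush about once per second; fastest, but can lose
#       around a second of transactions on a crash
innodb_flush_log_at_trx_commit = 2
```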
Thanks for all the help. I think I'm going to end up creating a third remote MySQL server with three InnoDB tables dedicated to keeping up with the +1 UPDATE queries.
Right now I have two remote MySQL servers: one master that accepts reads and writes, and one slave that accepts reads only. Both have dual Xeon 5520 chips and 16GB RAM.
You'd think that would be enough. (sigh)