I've got a table in my database with 90,000 or so rows (it actually says ~82,944 right now, not sure what the ~ means?) and it gets updated every minute via a cron script. Sometimes during the update there will be inserts, sometimes deletes, sometimes updates, so it's really important to me that this table runs as smoothly as possible. I realize most of this is probably going to be optimized in my PHP connection code, but I'd like to clear up some MySQL questions as well.
Do I take a performance hit by having VARCHARs and other variable-length fields in the table? Would it be better to just set them as CHARs of size 255? I have lots of hard drive space, so that's not a worry, and I was thinking this might help.
Any tips in the right direction would certainly help.
You would take the hit the other way around. Declared as CHAR(255), the columns always take up the full 255 characters. Declared as VARCHAR(255), a row whose value is only 30 characters long takes up only those 30 characters (plus a small length prefix).
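For example, the difference between the two declarations looks like this (the table and column names here are hypothetical):

```sql
-- CHAR(255) pads every value to the full 255 characters;
-- VARCHAR(255) stores the actual string plus a 1-2 byte length prefix.
CREATE TABLE listings_demo (
    listing_id INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
    title      VARCHAR(255) NOT NULL,  -- a 30-char title uses roughly 31 bytes
    state_code CHAR(2)      NOT NULL   -- CHAR is fine for truly fixed-width data
);
```

CHAR only makes sense for columns whose values really are all the same length.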
By the way, 90,000 rows is completely insignificant as far as table size goes. Think 90 million rows before you worry about performance hits from table size.
I realize that 90k rows isn't a lot for MySQL, but this table gets hit by cron scripts multiple times every minute: one script counts how many records are in a given category, another updates a time field, another inserts into the table, and another checks every single record (over a 20ish-hour period) to ensure the entry is still valid. And that isn't even counting when a user wants to request data from the table.
So the table is basically under constant read/write access. It's sitting on an old AMD Athlon XP 2000+ Ubuntu server with 1GB of memory, and it runs quite slowly for me even on my own local network. I can't imagine how terrible it would run on shared hosting! So I'm trying to do all I can to avoid purchasing an expensive VPS when it goes live.
What storage engine is the table using and are any fields in the table "Full Text" fields?
I just swapped it to InnoDB because some users on this forum suggested that would speed things up. I'm still not sure what a "Full Text" field is, but someone else said that if I didn't know what they were, I probably wasn't using them.
Just to add: I do have a field in my table structure that uses MEDIUMTEXT for storing big strings, but that's all I can relate "Full Text" to at the moment. :rofl:
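For what it's worth, MEDIUMTEXT is just a column type; a FULLTEXT index is a separate, word-based search index that you have to create explicitly (on MyISAM, or on InnoDB in MySQL 5.6 and later). A sketch, assuming a hypothetical `description` MEDIUMTEXT column:

```sql
-- MEDIUMTEXT on its own carries no full-text index.
-- This adds a word-based search index to the column:
ALTER TABLE listings ADD FULLTEXT INDEX ft_description (description);

-- A FULLTEXT index is queried with MATCH ... AGAINST rather than LIKE:
SELECT listing_id FROM listings
WHERE MATCH(description) AGAINST('some search terms');
```

So if you've never written a MATCH ... AGAINST query, you're not using full-text fields.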
That's called a tilde, and in this context it means "approximately": your table has approximately 82,944 rows.
How can it only be approximate? Why wouldn't it just count it directly?
Is the table InnoDB or MyISAM? The latter, since every update locks the whole table, always maintains an accurate row count. InnoDB doesn't keep a running total, so you have to actually run a SELECT COUNT(*) query to get the exact count.
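Concretely (assuming the table is named `listings`):

```sql
-- Exact count: on InnoDB this is computed by scanning an index,
-- so it can be slow on very large tables.
SELECT COUNT(*) FROM listings;

-- The "Rows" column in this output is where GUI tools get the
-- ~82,944 figure; for InnoDB it is only an estimate.
SHOW TABLE STATUS LIKE 'listings';
```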
An alternative to running the extra query would be (if using PHP) count() (or an equivalent function in .NET, ASP, etc.) on the array containing the result set, though that only helps if you were going to fetch all the rows anyway.
It was MyISAM; it's now InnoDB because everyone on the forums basically informed me it would speed the site up. Oddly, it appears to have made it much slower. >.< :nono:
What indexes do you currently have on the table concerned?
By indexes, do you mean primary keys? I only have one at the moment, called article_id.
The indexes can be listed with
SHOW INDEXES FROM da_table_name
Table    | Non_unique | Key_name    | Seq_in_index | Column_name | Collation | Cardinality | Sub_part | Packed | Null | Index_type | Comment
listings | 0          | PRIMARY     | 1            | listing_id  | A         | 159140      | NULL     | NULL   |      | BTREE      |
listings | 0          | listing_url | 1            | listing_url | A         | 159140      | NULL     | NULL   |      | BTREE      |
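Since one of the cron jobs counts records per category, a secondary index on that column would likely help the most. A sketch, assuming the column is called `category_id` (its actual name isn't shown in the thread):

```sql
-- Index the column the cron job filters on, so the per-category
-- count can be answered from the index alone instead of
-- scanning all ~83k rows every minute.
CREATE INDEX idx_category ON listings (category_id);

-- This query can then use idx_category:
SELECT COUNT(*) FROM listings WHERE category_id = 42;
```

The trade-off is that every insert, update, and delete also has to maintain the index, so only add indexes for columns your queries actually filter or sort on.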