Help speeding up large table

I have a table with xxx,xxx rows. I need to run through the whole table selecting rows by name, and then run through the same table again selecting rows with another name that also match the same timesets. How can I optimise this? Is there any way of reusing the table already held in memory without having to select again, or does the second query in fact use the table from memory anyway?

$qry = "SELECT timeset, value FROM values_daily WHERE name='".$c1."' ORDER BY timeset ASC";
$result = mysql_query($qry);
while ($row = mysql_fetch_array($result)) {
	$timeset = $row['timeset'];
	$valuec1 = $row['value'];

	// second query against the same table, run once for every row of the first result
	$qry2 = "SELECT value FROM values_daily WHERE name='".$c2."' AND timeset='".$timeset."'";
	$result2 = mysql_query($qry2);
	while ($row2 = mysql_fetch_array($result2)) {
		$valuec2 = $row2['value'];
	}

}

just run a SHOW CREATE TABLE tablename query, and it will include the indexes
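
For example, against the table in this thread:

SHOW CREATE TABLE values_daily;

The CREATE TABLE statement it returns lists every PRIMARY KEY, UNIQUE KEY and KEY (index) defined on the table.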

Sorted it at last! That took some doing on a Sunday morning!
Thanks :slight_smile:
BTW, I'm not sure I understand how the name column can be a unique index when there are many rows with the same name? It's definitely done the trick though - a whopping speed difference!

Do you have a way of checking the query outside the script?

Notice no closing curly bracket - but that might just be a typo.

Can you debug the script?

$result = mysql_query($qry);
while ($row = mysql_fetch_array($result)) {
	$timeset = $row['timeset'];
	$valuec1 = $row['v1value'];
	$valuec2 = $row['v2value'];

yes?

The script just seems to hang and never finishes loading…
I have 159k rows (5 small columns in total)… it shouldn't take too long to grab all those values, should it?

Yes, I use SQLyog and I've run the query on its own there. Very slow indeed, even with only 60k rows currently (as we are now 7hrs into the 24hr storage). It does return the result set: 475 rows returned, 59358ms taken.

SELECT v1.timeset as timeset, v1.value as v1value, v2.value as v2value
FROM values_daily v1
LEFT JOIN values_daily v2 ON v2.name='$c2' AND v2.timeset = v1.timeset
WHERE v1.name = '$c1'
ORDER BY v1.timeset

v1.timeset is $timeset
v1.value is $valuec1
and v2.value is $valuec2
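
In the PHP that means one loop instead of two - a rough sketch, assuming the same mysql_* calls as the original script and that $c1 and $c2 already hold the two names:

$qry = "SELECT v1.timeset AS timeset, v1.value AS v1value, v2.value AS v2value
        FROM values_daily v1
        LEFT JOIN values_daily v2 ON v2.name='".$c2."' AND v2.timeset = v1.timeset
        WHERE v1.name='".$c1."'
        ORDER BY v1.timeset";
$result = mysql_query($qry);
while ($row = mysql_fetch_array($result)) {
	$timeset = $row['timeset'];
	$valuec1 = $row['v1value'];
	$valuec2 = $row['v2value']; // NULL where no $c2 row exists for that timeset
	// work with the pair here instead of firing a second query per row
}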

You shouldn't set UNIQUE if the values are not unique! Actually I'm surprised it let you - it doesn't sound right to me. Maybe it overrode the UNIQUE setting because it was obvious from the current values that the column was not unique, and it built the index allowing duplicate values instead.

The index will make a big difference. Previously it had to traverse the whole database to locate the records - the index allows it to get to the relevant records quickly.
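
If you want to check that the index is actually being used, MySQL's EXPLAIN can be run against the same query (placeholders as in the thread); the key and rows columns of the output show which index was chosen and roughly how many rows are examined:

EXPLAIN SELECT v1.timeset, v1.value, v2.value
FROM values_daily v1
LEFT JOIN values_daily v2 ON v2.name='$c2' AND v2.timeset = v1.timeset
WHERE v1.name = '$c1'
ORDER BY v1.timeset;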

The following query should give you the values for the last day

SELECT v1.timeset AS timeset,
       v1.name AS v1name, v2.name AS v2name,
       ROUND((v2.value/v1.value),4) AS v3value
FROM values_daily v1
LEFT JOIN values_daily v2 ON v2.name='$c2' AND v2.timeset = v1.timeset
WHERE v1.name = '$c1' AND v1.timeset > ADDTIME(NOW(), '-1 00:00:00.0')
ORDER BY v1.timeset

It will not give missing time slots. Does the charting software really require them?

I'm confused now! I have edited those 2 columns' indexes so that the index box isn't ticked; they still appear as before, but without unique ticked.

Will the following do what you want?

SELECT v1.timeset, v1.value, v2.name
FROM values_daily v1
LEFT JOIN values_daily v2 ON v2.name='$c2' AND v2.timeset = v1.timeset
WHERE v1.name = '$c1'
ORDER BY v1.timeset

My db has new values every 1min for every pair.

Not sure I'm understanding. Can you give an example of the db contents for one timeset and three currencies, say GBP, EUR, US?

Mmm… I applied unique to the 2 cols in SQLyog and it's showing the following (ISO is actually the column name for 'name'):

Indexes:primary, Columns: id, Unique:unique
Indexes:ISO, Columns: timeset,ISO, Unique:unique

Is that right? Well, 0.09 secs to perform the same query that took 59 secs before… is that really possible??

I would not recommend that. There’s no way PHP will be faster than the database if the database is set up correctly.

Try an index on the ‘name’ and on the ‘timeset’ fields first.

CREATE UNIQUE INDEX `name` ON `values_daily` (`name`);
CREATE UNIQUE INDEX `timeset` ON `values_daily` (`timeset`);

UNIQUE only if they are unique values
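
Since the same name appears once per minute here, name on its own can never be unique; if anything it is the name and timeset combination that is unique. A sketch of that composite version (the index name is just an example):

CREATE UNIQUE INDEX `name_timeset` ON `values_daily` (`name`, `timeset`);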

ok will try that…

just wondering what’s the best approach here. I’m trying to chart the results of a currency pair for the 24hr period.
My db has new values every 1min for every pair.
I need to take every timeset entry and calculate the pair:

$value3 = round( 1 * ($valuec2/$valuec1),4);

and send the timeset and value to the chart script…
Also, whatever point of the 24hr period we are at, we need to send zero values for the remaining timesets to show the full 24hr period.
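
For the zero padding, a rough sketch of one possible approach - this assumes one-minute slots starting from midnight, that the joined rows have already been fetched into $rows keyed by timeset, and a hypothetical send_to_chart() standing in for whatever the chart script actually expects:

$start = strtotime('today'); // assumed start of the 24hr window
for ($i = 0; $i < 1440; $i++) { // 1440 one-minute slots in 24 hours
	$slot = date('H:i:00', $start + $i * 60);
	if (isset($rows[$slot])) {
		$value3 = round($rows[$slot]['v2value'] / $rows[$slot]['v1value'], 4);
	} else {
		$value3 = 0; // zero value for timesets not reached yet
	}
	send_to_chart($slot, $value3); // hypothetical - replace with the real chart call
}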

The id column is the only one indexed, I think. I'm just experimenting at the moment by putting the whole table into a PHP array and then working on it there… would a PHP array be faster to process, do you think? I've got a timer on my testing script, and grabbing 62k rows (3 cols - the timeset col is TIME_TO_SEC converted as retrieved) took 0.407 secs.
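
The timer is just a microtime() pair around the fetch - a minimal sketch, with the column list assumed from the description above:

$start = microtime(true);
$result = mysql_query("SELECT name, TIME_TO_SEC(timeset) AS secs, value FROM values_daily");
$rows = array();
while ($row = mysql_fetch_array($result)) {
	$rows[] = $row;
}
echo 'Fetched '.count($rows).' rows in '.round(microtime(true) - $start, 3).' secs';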

Have you got the name field and the timeset field indexed?

How would I then get the values for:
$timeset = $row['timeset'];
$valuec1 = $row['value'];
$valuec2 = $row['value'];

Sorry I am not familiar with SQLyog.