I need a bit of help with something I am trying to do.
I am building a 'macro'-type solution so that a .txt file and a .csv file can be compared for duplicate numbers.
What I have done so far is create two upload functions that load the data from the files into their own tables in a database.
So, one table for the .txt file and one for the .csv file.
What I need to do now is 'wash' the tables so that any duplicates are removed, leaving a set of numbers that can be exported to an Excel file for the user.
I'm thinking I need to run a SELECT query on the first table and loop through its results one at a time. Inside that loop, I'd run a second SELECT query against the other table and compare the two. Whatever is a duplicate gets deleted from both tables, and whatever isn't is kept for the export.
Is this the right method?
Is anyone able to help me get started on it at all, or 'point me in the right direction'?!
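Something like this rough sketch is what I had in mind, just to show the shape of it (I'm using SQLite in Python here purely to test the idea; my real tables and column names may differ):

```python
import sqlite3

# In-memory database just to test the idea; real table/column names will differ
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE textDoc (serviceNumberRange INTEGER)")
cur.execute("CREATE TABLE excelDoc (serviceNumberRange INTEGER)")
cur.executemany("INSERT INTO textDoc VALUES (?)", [(1,), (2,), (3,), (4,)])
cur.executemany("INSERT INTO excelDoc VALUES (?)", [(3,), (4,), (5,)])

# Loop over the first table; for each number, check the second table
for (n,) in cur.execute("SELECT serviceNumberRange FROM textDoc").fetchall():
    match = cur.execute(
        "SELECT 1 FROM excelDoc WHERE serviceNumberRange = ?", (n,)
    ).fetchone()
    if match:
        # Duplicate: delete it from both tables
        cur.execute("DELETE FROM textDoc WHERE serviceNumberRange = ?", (n,))
        cur.execute("DELETE FROM excelDoc WHERE serviceNumberRange = ?", (n,))

# Whatever survived in either table is what gets exported
remaining = [r[0] for r in cur.execute(
    "SELECT serviceNumberRange FROM textDoc "
    "UNION SELECT serviceNumberRange FROM excelDoc ORDER BY 1")]
print(remaining)  # [1, 2, 5]
```

Though I suspect looping one row at a time like this will be slow on big files, which is partly why I'm asking whether it's the right method.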
Thanks for the response.
I'm just comparing the one column of data.
Basically, I have two files of number ranges that need to have duplicated numbers removed.
I was also thinking that something similar to the below might work:
SELECT * FROM textDoc WHERE serviceNumberRange IN (SELECT serviceNumberRange FROM excelDoc)????
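I tried that idea out quickly with SQLite in Python (illustrative data; the subquery has to name the one column rather than SELECT *), and it does pull out the shared numbers:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE textDoc (serviceNumberRange INTEGER)")
cur.execute("CREATE TABLE excelDoc (serviceNumberRange INTEGER)")
cur.executemany("INSERT INTO textDoc VALUES (?)", [(1,), (2,), (3,)])
cur.executemany("INSERT INTO excelDoc VALUES (?)", [(2,), (3,), (4,)])

# The subquery selects the single column, not *
dupes = cur.execute(
    "SELECT serviceNumberRange FROM textDoc "
    "WHERE serviceNumberRange IN (SELECT serviceNumberRange FROM excelDoc) "
    "ORDER BY serviceNumberRange"
).fetchall()
print(dupes)  # [(2,), (3,)]
```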
What you describe can be accomplished with a FULL OUTER JOIN, but unfortunately, if you're using MySQL (as so many PHP applications do), it's a bit more complicated -- MySQL doesn't support FULL OUTER JOIN, so you have to emulate it, e.g. with a UNION of a LEFT JOIN and a RIGHT JOIN.
What exactly do you mean by duplicates? How many columns are you comparing?
SELECT n FROM table1 WHERE n NOT IN ( SELECT n FROM table2 )
SELECT n FROM table2 WHERE n NOT IN ( SELECT n FROM table1 )
And when you say "duplicates removed", do you mean from both sides, or keep one of them?
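For example, here are those two NOT IN queries run against toy data in SQLite (one caveat worth knowing: if the subquery's column contains any NULLs, NOT IN matches nothing, so filter NULLs out or use NOT EXISTS instead):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE table1 (n INTEGER)")
cur.execute("CREATE TABLE table2 (n INTEGER)")
cur.executemany("INSERT INTO table1 VALUES (?)", [(1,), (2,), (3,)])
cur.executemany("INSERT INTO table2 VALUES (?)", [(2,), (3,), (4,)])

# Numbers that appear only in table1
only_in_1 = cur.execute(
    "SELECT n FROM table1 WHERE n NOT IN (SELECT n FROM table2) ORDER BY n"
).fetchall()
# Numbers that appear only in table2
only_in_2 = cur.execute(
    "SELECT n FROM table2 WHERE n NOT IN (SELECT n FROM table1) ORDER BY n"
).fetchall()
print(only_in_1)  # [(1,)]
print(only_in_2)  # [(4,)]
```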
I will need to remove both instances of the number.
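For the record, here's a quick SQLite test of removing both instances. One gotcha I hit: you have to capture the shared numbers before deleting, because once the duplicates are gone from the first table, a subquery against it won't find them for the second DELETE:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE table1 (n INTEGER)")
cur.execute("CREATE TABLE table2 (n INTEGER)")
cur.executemany("INSERT INTO table1 VALUES (?)", [(1,), (2,), (3,)])
cur.executemany("INSERT INTO table2 VALUES (?)", [(2,), (3,), (4,)])

# Capture the shared numbers FIRST; deleting from table1 and then using
# table1 in the subquery for table2 would make the second DELETE a no-op
cur.execute("CREATE TEMP TABLE dupes AS "
            "SELECT n FROM table1 INTERSECT SELECT n FROM table2")
cur.execute("DELETE FROM table1 WHERE n IN (SELECT n FROM dupes)")
cur.execute("DELETE FROM table2 WHERE n IN (SELECT n FROM dupes)")

# Everything left in either table is what gets exported
kept = [r[0] for r in cur.execute(
    "SELECT n FROM table1 UNION SELECT n FROM table2 ORDER BY n")]
print(kept)  # [1, 4]
```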