peppy — 2011-06-04T15:16:45-04:00 — #1
When I design a script, I just SELECT * (select all) when pulling data from a table. The table might have 30 columns and I'll only need to use one of them. I'm now curious about resource efficiency.
Does anyone know how much is wasted by doing this? I'm not sure whether it takes more work to tell the server "select this column but not the rest" or to just select all columns. Does it even matter? Or does it only start to matter when the table becomes very large?
system — 2011-06-04T15:29:15-04:00 — #2
SELECT column_name will always be more efficient than SELECT * - though specifically how much, I can't tell you. What I can tell you is that when you know you'll only use a subset of columns, you should select only those columns.
ibazz — 2011-06-04T18:26:31-04:00 — #3
selecting only the columns you'll need is the way to go.
Even more significant than server or query efficiency: imagine you have 20 columns and you use the (evil) star selector for your queries. What happens when you change your db, as you're very likely to need to do? Add another column to a table and it will also be returned by every query that uses the * selector.
But your page output script won't have been built to expect that extra data in the result set, so what is it going to do with it? Most likely it'll just return superfluous data, possibly with the values no longer in the sequence your script requires. Your whole website could break.
If you select only those columns each query needs, then it doesn't matter what other cols are added to the tables in the future. The query will still only return the values it was meant to return.
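A quick sketch of the breakage described above, using Python's stdlib sqlite3 and a made-up two-column users table (the table and column names are just for illustration). A script that unpacks SELECT * rows by position blows up the moment the schema grows, while the explicit column list keeps working:

```python
import sqlite3

# Hypothetical schema, purely for illustration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, email TEXT)")
conn.execute("INSERT INTO users VALUES ('Ann', 'ann@example.com')")

# Script written against the original two-column table,
# unpacking the row by position - works fine today:
name, email = conn.execute("SELECT * FROM users").fetchone()

# Later, the schema grows a column...
conn.execute("ALTER TABLE users ADD COLUMN created TEXT")

# ...and the same SELECT * line now raises
# "ValueError: too many values to unpack":
try:
    name, email = conn.execute("SELECT * FROM users").fetchone()
except ValueError as err:
    print("broke:", err)

# The explicit column list is unaffected by the new column:
name, email = conn.execute("SELECT name, email FROM users").fetchone()
print(name, email)
```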
peppy — 2011-06-05T14:57:20-04:00 — #4
Thanks a lot for the advice, I will keep this in mind from now on when I'm building scripts.
michael_morris1 — 2011-06-06T09:38:53-04:00 — #5
Wildcard selection is tempting, but troublesome. It's not always bad though. For example, a query against a view may often use a wildcard because one of the roles of a view is to store a commonly used query in the database rather than in the code.
General rule of thumb - if you're touching one table only, or a view, the wildcard is fine. If joins are used, then using a wildcard is going to cause trouble.
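To illustrate the join trouble (a sketch with Python's sqlite3 and invented orders/users tables): both tables have an id column, so SELECT * on the join returns two columns with the same name, and any name-keyed mapping silently loses one of the values:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (id INTEGER, user_id INTEGER);
    CREATE TABLE users  (id INTEGER, name TEXT);
    INSERT INTO users  VALUES (7, 'Ann');
    INSERT INTO orders VALUES (1, 7);
""")

cur = conn.execute(
    "SELECT * FROM orders JOIN users ON users.id = orders.user_id")
cols = [d[0] for d in cur.description]
print(cols)                    # two of these columns are both called 'id'
row = cur.fetchone()
record = dict(zip(cols, row))  # users.id silently overwrites orders.id
print(record)

# Naming the columns (with aliases) removes the ambiguity:
cur = conn.execute(
    "SELECT orders.id AS order_id, users.name "
    "FROM orders JOIN users ON users.id = orders.user_id")
print(cur.fetchone())
```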
Personally I only use wildcards against views, and even then only when I'm absolutely sure I want everything in that view. Maybe one in 40 queries I write fits this criterion. I also use wildcards during testing.
system — 2011-06-06T12:37:03-04:00 — #6
I'd say that's more a problem with the application than the query. You should never create an application that expects data in a structured or numerically indexed manner, unless you're dealing with SDF, EDI, or some other structured format (all of which are antiquated, btw).
Modern software development handles data storage & transfer with descriptors assigned to each element of data. This means your application should retrieve values from your database using field-names rather than numerical index. Doing so will prevent any of the issues you've presented in cases where table structures or queries change, so no reason not to!
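For example, here's a minimal sketch of name-based retrieval using Python's sqlite3 (the users table is hypothetical). Setting the row_factory to sqlite3.Row makes each row addressable by field name, so schema changes don't shift your values around:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.row_factory = sqlite3.Row   # rows become addressable by column name
conn.execute("CREATE TABLE users (name TEXT, email TEXT)")
conn.execute("INSERT INTO users VALUES ('Ann', 'ann@example.com')")

row = conn.execute("SELECT name, email FROM users").fetchone()

# Access by field name, not numeric index:
print(row["email"])

# A column added later doesn't disturb name-based access:
conn.execute("ALTER TABLE users ADD COLUMN created TEXT")
row = conn.execute("SELECT * FROM users").fetchone()
print(row["email"])
```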
Also good to note here that using a structured DAL / Model will avoid any issues with database changes, as your application will reference an abstracted data layer, which should auto-adjust to any database changes you make (depending on what framework you use).
ibazz — 2011-06-06T13:18:40-04:00 — #7
Can you show an example of what you mean?
babywolf — 2011-06-06T13:39:00-04:00 — #8
Of course SELECT * will use more resources than selecting specific columns.
Imagine your table averages 1 MB of data per record, and you only need column A, which is just a few KB per record. Once the query executes, the entire result set is returned to the application and buffered in memory - think about what that costs.
Work with a large table and you will see the difference.