Double Hashing Passwords for Extra Security?

I just got the idea of double hashing stored passwords. In theory, this would make it a lot more difficult for someone to crack a password using brute-force. One would need to know first of all that the password is double hashed (if they didn’t know that, then they’ve got no chance of brute-forcing the password). If they knew it was double hashed, then it would take their brute-force engine twice as long to crack the password as the engine would need to run the md5 algorithm twice.

If you still haven’t caught on, by double hashing I mean…

$hash = md5(md5($password));

Is doing something like this a good measure to further protect a user’s password?

If the brute force mechanism is entering passwords the same way a normal user would then it will enter UNHASHED passwords to try and get a result. The fact that your system then hashes it twice won’t make any difference.

Also the processing time of calculating a hash on a string is insignificant compared to the connectivity work.

The double hash could be useful to stop someone using a rainbow table or md5 dictionary if they happened to get access to the hashes, but using a salt would probably be more effective.


define('PW_SALT', 'some ~unusual~ *string *!@');

$hashed = md5($plain_text_password . PW_SALT);

Yes, obviously, but I’m assuming the hacker/cracker has somehow extracted the hashes from the database and then applied brute-force to the selected hash. If this were the case, then the hacker wouldn’t be aware that the password has been double hashed and therefore any brute-force attempts wouldn’t get the hacker anywhere.

If the hacker knows that the passwords are double hashed, then it does little to protect the password (the only remaining benefit is that the brute-force attack takes longer, since each guess must be hashed twice).

You assume the same thing when using a salt however. You assume the hacker isn’t aware that a salt has been used. If the hacker is aware of the salt, then the salt becomes useless much like the double hashing.

The latest wisdom suggests that you use both methods to achieve this goal.
Basically, you generate a large salt (128 bits or more) and store in the database the Nth iteration (where N is several hundred) of repeatedly hashing the salt together with the password.

//Generating new salt and hash for new password
function set_password($password, $user_id) {
    $salt = '';
    for ($x = 0; $x < 64; $x++) {
        $salt .= chr(rand(0, 255)); // generating a salt of length 512 bits (64 bytes)
    }
    $hashed_password = '';
    for ($x = 0; $x < 512; $x++) {
        $hashed_password = hash('sha512', $password . $hashed_password . $salt, true);
    }
    $hashed_password_to_store = hash('sha512', $hashed_password . $salt, true);
    store_password($salt, $hashed_password_to_store, $user_id); // function that connects to the DB and does the insert
}

Continuing the thought… to keep an attacker truly unaware of the hashing iteration count, my suggestion is to modify the above code to add additional hashing iterations:

//Generating new salt and hash for new password
function set_password($password, $user_id) {
    $salt = '';
    for ($x = 0; $x < 64; $x++) {
        $salt .= chr(rand(0, 255)); // generating a salt of length 512 bits (64 bytes)
    }
    $hashed_password = '';
    for ($x = 0; $x < 512; $x++) {
        $hashed_password = hash('sha512', $password . $hashed_password . $salt, true);
    }
    /* here comes the addition */
    $additional_number_of_iterations = ord($hashed_password[0]); // here we get a number from 0 to 255
    for ($x = 0; $x < $additional_number_of_iterations; $x++) {
        $hashed_password = hash('sha512', $password . $hashed_password . $salt, true);
    }
    $hashed_password_to_store = hash('sha512', $hashed_password . $salt, true); // password is not added here,
    // for the Hardened Stateless Cookies schema to work, see below.
    store_password($salt, $hashed_password_to_store, $user_id); // function that connects to the DB and does the insert
}

So basically the attacker first has to perform the initial round of hashing iterations (for every single password/salt combination) just to find out how many additional iterations must be performed afterwards.
It might be of interest for you to read:
Hardened stateless cookies and [url=http://www.cl.cam.ac.uk/~sjm217/papers/protocols08cookies.pdf]the paper it mentions[/url]. And also maybe [url=http://www.sitepoint.com/forums/showthread.php?t=548697]this thread[/url].
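For completeness, here is a sketch of what the matching login-side check for the scheme above might look like (this is my own illustration, not from the original post): the server re-runs the fixed 512 iterations, reads the extra iteration count from the first byte of the intermediate digest, runs those extra rounds, and compares against the stored hash.

```php
<?php
// Sketch of the verification routine matching the set_password() above.
// $salt and $stored are assumed to have been fetched from the database.
function check_password(string $password, string $salt, string $stored): bool
{
    $hashed = '';
    for ($x = 0; $x < 512; $x++) {
        $hashed = hash('sha512', $password . $hashed . $salt, true);
    }
    // Recover the variable iteration count from the intermediate digest.
    $additional = ord($hashed[0]); // a number from 0 to 255
    for ($x = 0; $x < $additional; $x++) {
        $hashed = hash('sha512', $password . $hashed . $salt, true);
    }
    return hash('sha512', $hashed . $salt, true) === $stored;
}
```

Note that only a correct password guess reproduces the right byte, so the attacker cannot know the total iteration count up front.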

…wow! maybe not.

Hehe, and this would not put a serious strain on the server?

Using a solution with key/position shifting and an effective salt is just as effective without running the servers into the ground.

I do not think that it would put too much strain, because you only need to do that once per authentication. What exactly do you mean by “key/position shifting”?

Let me first start with, “You are wrong!”.

Please do not take any offence, my point is just that when it comes to application development you need to “know” not “think/believe”.

If we take the first code sequence, you are running the hash function 513 times, each time using the sha512 option. If you believe that will not place a load on the server, you should really run a few benchmarks on it. If you do, you will notice it creates a major load; now imagine what that load would be when you have thousands of users on the site at the same time.

The key/position shifting is a system that we have implemented in our framework. The main reason for creating that is so that even if someone buys a license to our scripts they do not know the hashing algorithm used on the other sites using the same script.

As of today, that is the major disadvantage of using an open source script without modifying it. Everyone knows the hashing algorithm used, so if they get hold of the hashed passwords from the database etc. it’s easier to brute force them back into “text”.

What the key/position shifting does is provide a few options during installation (obviously these values cannot be changed afterwards without rendering the current passwords in the database useless). The options give multiple ways to manipulate the password and the salt, i.e. append the salt on the left side, right side or in the middle; shift the position of the characters in the password, the salt, etc.

This, together with a long and strong salt, will make brute forcing all but impossible. And even if it is brute forced, they will have a hell of a time figuring out which of the characters are actually the password without the “algorithm” settings that manipulate the password and the salt before it is hashed.
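To illustrate the idea of configurable salt placement, here is a minimal sketch; the `$mode` values and the `apply_salt()` helper are made up for this example and are not the poster’s actual framework code.

```php
<?php
// Hypothetical illustration of configurable salt placement. The mode
// chosen at install time changes the pre-image an attacker must search,
// even though the hash algorithm itself is public.
function apply_salt(string $password, string $salt, string $mode): string
{
    switch ($mode) {
        case 'left':   // salt before the password
            return $salt . $password;
        case 'right':  // salt after the password
            return $password . $salt;
        case 'middle': // salt spliced into the middle of the password
            $mid = intdiv(strlen($password), 2);
            return substr($password, 0, $mid) . $salt . substr($password, $mid);
        default:
            throw new InvalidArgumentException("Unknown mode: $mode");
    }
}

echo md5(apply_salt('secret', '~salt~', 'middle')), "\n";
```

Each installation picking a different placement means a rainbow table built for one site is useless against another running the same script.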

Double-MD5ing is actually making it slightly easier to break.

It’s virtually impossible for a hashing algorithm to have no clashes. If you double-hash - you’re squaring the amount of possible clashes.

That is, of course, unless you add a salt after the initial hash.

I just took a look over the code you posted, and I am not certain you are aware of it, but your solution is a waste of resources. I should have noticed it earlier, but I barely glanced at the code.

I’ll explain the issue; first let’s take a look at this code from the first example.

$hashed_password = '';
for ($x = 0; $x < 512; $x++) {
    $hashed_password = hash('sha512', $password . $hashed_password . $salt, true);
}
$hashed_password_to_store = hash('sha512', $hashed_password . $salt, true);

You are running the hashing 512 times, but every time you append the password and the salt; the only difference is that you append the previous hash as well.

Now, the issue is that you are appending the password… This means that all I need to do to find the password is “brute force” it twice.

The first time I will get the “hash” + the salt. Since I know the length of a sha512 hash, I actually have the salt now as well, making the second brute-force attempt much simpler.

The second time I know the system is password + hash + salt, and I know how long the sha512 hash is, so when I get a match I know at once what the password is.

As you can see, what you used 513 sha512 hashes to create, I can brute force with only two sha512 hashes.

Hi, TheRedDevil!
I really appreciate your criticism.
Well - to start off - this code sample was just to show the concept, not to use blindly (as with any code). Nevertheless, I ran several tests with it, and my workstation can handle a couple of hundred (~250) simultaneous calls to the authentication function per second without noticeable performance issues. Since authentication requests are made only once per session, even my workstation could handle more than a thousand concurrent sessions (it could not, of course, but not because of the authentication routine).
You can decrease/increase iteration count to a desirable number if you will, so that it fits your needs.
You are absolutely correct about adding the password to every hashing iteration. It should not be done - the password must be added only at the first iteration.
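A corrected version of the stretching loop, with the password folded in only at the first iteration, might look like this (a sketch following that correction; the function name and iteration count are illustrative):

```php
<?php
// Sketch of the corrected stretching loop: the plaintext password
// enters the chain exactly once, so a recovered pre-image no longer
// contains password + hash + salt side by side.
function stretch_password(string $password, string $salt, int $iterations = 512): string
{
    // First iteration: the only place the password itself appears.
    $hashed = hash('sha512', $password . $salt, true);

    // Remaining iterations: feed each digest (plus the salt) back in.
    for ($x = 1; $x < $iterations; $x++) {
        $hashed = hash('sha512', $hashed . $salt, true);
    }
    return $hashed;
}

$salt   = random_bytes(64); // 512-bit salt (random_bytes needs PHP 7+)
$stored = stretch_password('hunter2', $salt);
```

The cost of the loop is unchanged, but brute-forcing the final hash no longer hands the attacker the password and the salt in one string.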

Regarding your key/position shifting - You are aware of term “security through obscurity”, aren’t you?

(the) password must be added only at first iteration.

As I said, this increases the risk of clashes.

Remember, not every hash is unclashable (and when shrinking large strings to small ones, there’s no wonder why). By hashing something, you have a 1 in n chance of clashing (n is quite large).

But if you run it through again, you’re making the risk more likely because you’re adding another step to this - it’s now 1 in (n/2).

I’ll show it in a more graphical approach:

After the first hash: 1/n
After the second: 2/n

So one hash is technically stronger than two.

After the third iteration: 4/n
After the fifth: 8/n

After the 512th: (1.34078079 × 10^154)/n

You’ve just dramatically increased the chances of clashing, making it weaker.

Make sense?

Just a small correction Jake: You say that not every hashing technique is unclashable… you meant NO hashing technique is unclashable. It’s impossible to have a hash of finite length and not have clashes.

Also, multiple iterations of a hash don’t decrease the security. If you had a small sample of codes that you started with then yes, they would. Since we have an infinite number (technically; practically it is still very large compared to what the hash maps to), there is no ‘lost’ security or increased chance of clashing by using it twice. The number of possible outputs after 1 hash is exactly the same as after 2, given a large enough set of starting ‘codes’ (which the English language more than provides).

Techniques I use for hashing passwords:

  • Server salt - something that is applied to every hash.
  • User salt - randomly generated and stored in the users table; means a rainbow table has to be generated per user. Not practical.
  • When I hash my passwords, I usually combine a few random salts with the password, the password repeated again but backwards - just general mucking around.
  • Hash the password using 2 different methods/salts (including combining SHA1 and MD5), and store & check both. This greatly reduces the chance of clashes - it is extremely unlikely that 2 codes will clash using 2 different hashing methods!

Even if there were no clashes on the second iteration, double hashing would at best offer no improvement in security. There is no way to convert a hash back into the original value in the first place, since there are huge numbers of original values that all map to the same hash value. All double hashing could possibly do is reduce the number of possible result values.

Yes, but the point of it was to make sure that it isn’t just a simple single md5 hash happening. There are so many md5 rainbow tables out there now that a single md5 hash is almost as bad as storing the password in plaintext. I imagine there are quite a few less ‘double md5’ rainbow tables, so it will be more secure.

Just found that in the manual - that’s basically what my point is.

And if people still use passwords with common words in, they should be worried about security.

Mine’s a 10 digit, alphanumerical (and on money-related sites, capital and lower alphanumerical with 2 symbols), similar to: ‘g!4fAkG^39’. Anyone who doesn’t have something similar should consider making themselves a new password anyway :wink: If you use the same password on a less-secure website, a rainbow attack would be easy if your password is, for example, hammerapple, cranefly123, etc.

The key is that hashing a password more than once does not really increase the security at all. And considering that the code in your example is roughly 63 times slower than if you just used one hash, it’s just not worth it; if you increase the salt to a reasonable length it will be even slower.

Another issue with your code is that if someone gets access to your members’ password hashes through the database, then they also have the salts. In other words, your password hashes would be just as secure without the salt. If you altered the order etc. of the salt before using it in the script, this would not have been as serious an issue, as long as the person did not gain access to the server itself.

Hehe, I can’t believe you actually said that. The “method” that you so strongly recommend is a worse version of that as well.

I am very well aware that the method we apply before hashing the passwords is indeed a layer of “security through obscurity”, but that layer does improve the security of the hashes. In other words, brute forcing one of our hashes will be several times harder than brute forcing yours, if we assume the person managed to get into the database to steal the password hashes.

The key is that with our method, brute forcing the passwords from two different sites running the same script will require completely different approaches.

That is true, but keep in mind that doing so makes it more vulnerable as well in the event the attacker gains access to the database. The reason is that one of the hashes will be weaker than the other, and instead of trying again and again to log in with the information he finds, he can just quickly verify it against the other hash. If both match, he has the “winning” string and just needs to figure out what part of it is actually the password.

As long as you salt the passwords properly, you don’t need to worry about rainbow tables.

Why not just skip MD5 altogether and use the hash() function with a different algo?

I benchmarked some algos some months ago.

If memory serves me right, I believe the hash() function is quicker than the md5() function for creating an md5 hash anyway. :slight_smile:
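That claim is easy to check rather than trust to memory; here is a quick benchmark sketch (timings vary by PHP version and build, so run it yourself):

```php
<?php
// Quick benchmark of md5() versus hash('md5', ...). The absolute
// numbers depend on the PHP build; only the comparison matters.
$input = str_repeat('password123', 10);
$runs  = 100000;

$t0 = microtime(true);
for ($i = 0; $i < $runs; $i++) {
    md5($input);
}
$t1 = microtime(true);
for ($i = 0; $i < $runs; $i++) {
    hash('md5', $input);
}
$t2 = microtime(true);

printf("md5():       %.4fs\n", $t1 - $t0);
printf("hash('md5'): %.4fs\n", $t2 - $t1);

// Either way, both produce identical digests:
assert(md5($input) === hash('md5', $input));
```

Whatever the timing difference, it is dwarfed by the cost of the request handling around it, as noted earlier in the thread.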

If we are talking Wikipedia articles, then I suggest you read Key Derivation Function (KDF) and [url=http://en.wikipedia.org/wiki/Key_strengthening]Key strengthening[/url], and more importantly the papers it references:
Secure Applications of Low-Entropy Keys by J. Kelsey, B. Schneier, C. Hall, and D. Wagner (1997)
A Password Stretching Method with User Specific Salts by ChangHee Lee and Heejo Lee (2007)
RFC 2898

Furthermore, this method is used in production. For instance here:
KeePass - Protection against dictionary attacks; the Wikipedia article also mentions WPA.

The whole point of it is to make password bruteforcing (to be more precise dictionary attacks) more time consuming.
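For reference, the standardised form of this idea is PBKDF2 (specified in RFC 2898), which later became a PHP built-in (`hash_pbkdf2()`, PHP 5.5+), so the hand-rolled loops earlier in the thread are no longer necessary on a modern setup:

```php
<?php
// Key stretching via the built-in PBKDF2 implementation (PHP 5.5+).
$password   = 'hunter2';
$salt       = random_bytes(16); // per-user random salt (PHP 7+)
$iterations = 10000;            // tune so one call takes a noticeable fraction of a second

$hash = hash_pbkdf2('sha512', $password, $salt, $iterations, 64, true);

// Store $salt, $iterations and $hash. To verify a login, recompute with
// the stored salt and iteration count and compare with hash_equals().
```

The iteration count is the knob that makes each dictionary guess proportionally more expensive for the attacker.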

This is wrong - even if the person has the salt, it is far more secure than not using a salt at all. The reason being that they don’t know how that salt is used (before the password? after it? somewhere in between more complicated strings? etc.), and even if they do, they have to generate all the rainbow tables for that specific hashing + salt method, which takes a very long time. They would have to do this for every user whose password they want to crack.

I don’t think having 2 hashes makes it any more vulnerable really. In fact, it makes it more secure. Even if you have the 2 hashes, you STILL don’t know:

  • Which hashing method(s) or combinations were used in each
  • The server salt in each
  • How the user salt is used in each

Without any of this information, it would be pretty much impossible to crack anything. WITH this information, you would then have to generate 2 sets of rainbow tables, one for each hashing method, and THEN cross reference them to make sure you have got their password, not just something that clashes when hashed.

There is no reason why one hashing method should be any weaker than the others, and even if it were, it is still going to be far more secure than using that hashing method on its own; there is an extra layer of security (the 2nd hash) on this one.

Indeed, which is why I suggest using both a server salt (a different one for each hashing method you use if storing multiple hashes) and a user salt.