Password Hashing Competition (password-hashing.net)
49 points by dchest on Feb 14, 2013 | hide | past | favorite | 73 comments


"The poor state of passwords protection in web services: passwords are too often either stored in clear (these are the services that send you your password by email after hitting "I forgot my password"), or just hashed with a cryptographic hash function (like MD5 or SHA-1), which exposes users' passwords to efficient brute force cracking methods."

If you can't get people to use current crypto hashing techniques like bcrypt or scrypt why would coming up with another one help?

Of course it would be nice to see something with the work factor features of bcrypt that were more specifically resistant to GPU optimized crackers.
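The "work factor" idea mentioned above can be sketched with nothing but the Python standard library's PBKDF2 (bcrypt and scrypt themselves need third-party packages). The helper names here are mine, purely for illustration:

```python
import hashlib
import hmac
import os

def hash_password(password: str, iterations: int = 200_000):
    """Hash a password with a per-user salt and a tunable work factor."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    # Store all three; raising `iterations` later raises attacker cost linearly.
    return salt, digest, iterations

def verify_password(password: str, salt: bytes, digest: bytes, iterations: int) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    # Constant-time comparison avoids leaking where the digests first differ.
    return hmac.compare_digest(candidate, digest)
```

The point of the tunable cost is that you can re-hash stored passwords at a higher iteration count as hardware gets faster, which a plain MD5/SHA-1 hash gives you no way to do.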


I'm torn. On the one hand, it sends a "jury is out" message on modern password hashing in general. On the other hand, developers already handwave about "bcrypt not having gotten enough cryptographic review", as if someone was ever going to publish a cryptanalytic result showing bcrypt to be worse than SHA1.

I'd have liked the jury to have been back on this last decade, but I'll settle for it being in next year.

By the way, the construction you're looking for is scrypt.

I'm not really torn.


> "jury is out"

One advantage of a jury being out is that someday said jury can come back in and return a verdict.

> as if someone was ever going to publish a cryptanalytic result showing bcrypt to be worse than SHA1.

SHA-1 has a formal specification, an RFC, a reference implementation, implementation guidance, and comprehensive test vectors published. To date, bcrypt is lacking some of those things.

And yes, bcrypt has gotten pwned worse than SHA-1 as a result: http://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2011-2483


Give me a break. That's an application implementation flaw, and one no standard could have prevented. It's like saying that insufficient cryptanalysis is responsible for the OpenSSL RCEs.

Again: I'm not really torn. It's a good thing you're all doing this.

The jury is not really out on bcrypt, though.


one no standard could have prevented

Then I'm sure you'll have no trouble* finding similar vulnerabilities introduced by implementation flaws of any NIST (or even IETF) defined algorithms.

*Note: sarcasm.


Oh, you mean like potential remote code execution in AES?


For reference, it looks like he's referring to this:

http://freecode.com/articles/suse-new-krb5-packages-fix-remo...


Do you have a link to the code with the bug?

I'll bet that "Specially crafted AES and RC4 packets" does not actually refer to the implementation of a NIST standard algorithm, just its usage in a context outside of such a standard.


If I wanted to play dirty, I'd have mentioned ASN.1. :)


I agree, ASN.1 sucks in terms of implementation in security-critical contexts.

But it's not a NIST or IETF standard, so it doesn't count. It's from ISO/IEC/ITU.


Or allowing \0 in X.509 fields.


How about this: http://blog.fortify.com/blog/fortify/2009/02/20/SHA-3-Round-...?

Granted, these were just candidates and not actually NIST defined algorithms, but the point stands that algorithms can be fine while standard implementations have bugs.


Those were round 1 submissions, not even close to being "standard implementations". Which proves my point that the standardization process works to minimize implementation bugs in implementations of the standard.


NIST standardization sure didn't help SHA2:

http://mail-index.netbsd.org/tech-security/2009/07/28/msg000...

You're just wrong about this point, Marsh. You are very smart and often right, but not invariably so.


OK, so here's the patch to the bug in question:

http://cvsweb.netbsd.org/bsdweb.cgi/src/common/lib/libc/hash...

     +	/* The state and buffer size are driven by SHA256, not by SHA224. */
      	memcpy(context->state, sha224_initial_hash_value,
     -	    (size_t)(SHA224_DIGEST_LENGTH));
     -	memset(context->buffer, 0, (size_t)(SHA224_BLOCK_LENGTH));
     +	    (size_t)(SHA256_DIGEST_LENGTH));
     +	memset(context->buffer, 0, (size_t)(SHA256_BLOCK_LENGTH));
The NetBSD code was confused about which algorithm it was implementing. This can hardly be used to generalize about vulnerabilities in specific NIST approved algorithms.

> You're just wrong about this point, Marsh. You are very smart and often right, but not invariably so.

So even if we allow this example as meeting my test of similar vulnerabilities introduced by implementation flaws of any NIST (or even IETF) defined algorithms I can still claim that this bug that existed in NetBSD only for 3 months in the spring of 2009 is the exception which proves the rule.

It doesn't compare at all in scope to http://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2011-2483 in terms of the length of time, number of systems affected, or number of credentials created with weak crypto.

EDIT: Sorry, I'm looking at the wrong patch. It appears that at the time indicated in the advisory, a bunch of other stuff was added to the source tree.

Either I'm missing something obvious or there's a bit of misdirection going on. For example, it says: "The overflow occurs at the time the hash init function is called (e.g. SHA256_Init). The init functions then pass the wrong size for the context as an argument to the memset function which then overwrites 4 bytes of the memory buffer located after the one holding the context." and "fixed: NetBSD-4 branch: Jul 22, 2009"

But the diffs of 2009-07-22 http://cvsweb.netbsd.org/bsdweb.cgi/src/sys/sys/sha2.h.diff?... http://cvsweb.netbsd.org/bsdweb.cgi/src/common/lib/libc/hash... don't seem to affect the memset call in SHA256_Init() at all or the size of the structure.


The funny thing is that when BLAKE2 was announced, there were a lot of comments here, on reddit, and elsewhere, saying "but why is it fast? hashes must be slow to prevent password bruteforcing!". The authors had to add an FAQ explaining the difference. So at least some people have learned that "password hashes" must be slow.


Yeah, that was weird. Talk about learning the wrong lesson, or learning the right one and applying it to the wrong thing.


I attended a talk last month from [major security company] and he said that bcrypt was designed to be resistant against GPUs by using lots of space, and scrypt was for resistance to FPGAs.

I politely challenged him if that was right and he stuck to his guns, and honestly I haven't tried brute-forcing them myself so I couldn't really do any more than that. I thought scrypt was what was resistant to GPUs by using lots of space (hence the "s").


Kind of and not really. bcrypt's "memory-hardness" is minimal--it relies on the 4KB Blowfish setup cost (which stays constant).

Designed for? Not for GPUs specifically (it was released in 1999), but the choice of Blowfish and its 4KB overhead was deliberate, and it makes hardware implementation harder. Effective against? Reasonably, but not compared to e.g. scrypt. Of course, SHA and friends still don't come close to bcrypt--they have a different purpose. (That also means that PBKDF2 is particularly bad in terms of memory cost: it's extremely easy to parallelize if you're using e.g. HMAC-SHA.)

scrypt tries to maximize the required die area by doing unpredictable (and tunable) memory access, and is significantly harder/costlier to run in parallel on ASICs and GPUs: http://i.imgur.com/iUpUk.png (from http://www.tarsnap.com/scrypt/scrypt.pdf)
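scrypt's tunable costs are exposed in Python's standard library (requires OpenSSL 1.1+), which makes the memory argument above concrete. The wrapper name and parameter choices here are mine, for illustration:

```python
import hashlib

# scrypt cost parameters: n (CPU/memory cost, a power of 2), r (block size),
# p (parallelism). Peak memory is roughly 128 * r * n bytes, so n=2**14, r=8
# needs about 16 MiB per guess -- that RAM/die-area requirement is what makes
# massively parallel GPU/ASIC cracking expensive.
def scrypt_hash(password: bytes, salt: bytes,
                n: int = 2**14, r: int = 8, p: int = 1) -> bytes:
    return hashlib.scrypt(password, salt=salt, n=n, r=r, p=p,
                          maxmem=64 * 1024 * 1024, dklen=32)
```

Doubling `n` doubles both the time and the memory per guess, whereas PBKDF2's iteration count only buys time.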


"• The poor state of passwords protection in web services: passwords are too often either stored in clear (these are the services that send you your password by email after hitting "I forgot my password"), or just hashed with a cryptographic hash function (like MD5 or SHA-1), which exposes users' passwords to efficient brute force cracking methods."

That is a huge issue that I wish were made a criminal offence.

But when a user forgets their password, issuing a new one comes down to verifying that the user is who they say they are without a password, which falls back on the previous details held for that user.

Be it a phone number to SMS a reset code, or email to issue a reset code or new password, it is a hard area that will always have a weakness.

I do wonder if the postal system could step in by offering a service where you could walk in, show your passport or supporting ID, and have them issue you a reset code which you could use on the site.

Certainly it would add to the services they offer, and whilst it would not be instant or as easy for many, it would be a far more robust way to address the issue. A service that, until something better comes along, would work in everybody's best interests.

Maybe a good opportunity for a startup to pursue.


The post office idea doesn't solve all that much. People tend to conflate two different problems:

* Is the person real with a legal presence somewhere?

* Is the user the same unique person who logged in last time?

Most websites need to establish uniqueness rather than real world identity. Giving websites data about real identity actually puts the user at greater risk. Do I have to enter a social security or driving licence number online so the post office can link the accounts?


Far from having to give more information, if anything you could give far less. The post office, or another trusted physical source, would hold that, and would issue a reset code validated via the trusted route back to the post office.

So, for example: you create a login to this site, you give the usual details, and in the password reset options you register a unique ID number issued by the post office. Any password resets, and indeed verification, could then be handled via an external source, in this case the post office, which can offer physical verification as and when needed. Sites would have to support such things, but in a day when we have official identification for physical verification of who we are, digitally it all boils down to trust. Certainly when you have to reset a password, the site may have many ways to verify you, and that is sometimes done by taking more information than they need. So in effect a form of national ID card that works to people's advantage, and lets them protect themselves if they forget passwords, is certainly one solution.

But the whole area of verifying a user who has forgotten their password is one of those problems that really has no magic solution without the site holding more information than it needs to hold. The option at least to have a secure, trusted third party able to do that verification and issue a new password, or a token that can be used to obtain one, is certainly a solution.

But as for the website having real user details, that would not be so: they would have a number, which could be one of many the user has, and with that nothing more than they get now from an IP, MAC, or email address. In many respects the ability to protect their anonymity from websites would be far greater than it is now. Sure, the authorities would be able to know who is who, fake names and details on websites or not, but they can do that already. Certainly, for many, having the option would be preferable to not having it.

So in short, no, you won't be handing websites private information; if anything you would offer them less.


That makes no sense. How does the post office know that you are resetting the password of an account you have legitimate access to, without the website storing personal details? I suppose you could have a post office OpenID, but that would still require people to go to the post office once at the start, which would make people less likely to sign up.


"I do wonder if the postal system could step in by offering a service where you could walk in, show your passport or supporting ID, and have them issue you a reset code which you could use on the site."

Was that a joke? You want to insert the government in the middle of every password reset? What about password resets on Sundays or after 8pm?


No, not a joke: the proposal is that the post office, or another trusted third party with a physical presence, would let you walk in, verify yourself, and get a reset token.

It would be optional, and sites would have to offer the option. But it is an approach that for many would work much better.

Also, given that the government has the legal right to your passwords in many countries, the post office would certainly be a good option, as it already has a presence in many locations.

Now as for Sunday or late-night resets: yes, that currently would be an issue, but people should not forget their passwords that easily, for their own sake. And certainly, with online orders and deliveries, the post office of today needs to evolve, though that is another issue.

So not a joke, just a solution that for many would be far better than the alternatives, and one which the less technically aware would be able to deal with should the need arise.

Now, until biometrics evolve and become more common in household use, it is a viable option. It may not be perfect, but nothing is, and with that it could work.


Hashing data with SHA1 should be a criminal offense? Seriously?


Partially in jest, but what I mean is that we often come across sites with issues that boil down to unsalted, weak hashing. You only find out about such weaknesses when things go wrong.

Certainly for MD5 I'd say it less in jest, but as for SHA-1 and any hashing found to have flaws that have been known for a while, the ongoing use really comes down to "it's working, don't touch it". Now, if you have a site using, say, unsalted MD5 or SHA-1 hashing, and they get attacked and leaked, then the attacker can brute force those hashes into plain text. If a site works, uses poor security, and is not attacked, then what forces them to maintain or improve their security, when it seems to just work? Sadly, for many sites, hindsight and reaction to security issues or break-ins with stolen password hash tables is when they do something, and that is retrospective. So in a way, a law that forced, or even focused, site owners to deal with problems before they become problems could only be good for the public's privacy. Maybe a law or other means are the way, maybe not. But the current situation, and the attitude of some company management towards their IT people who want to fix issues that are not obviously issues yet, could only be helped.

I know, and I'm sure others have known, of situations in IT where the IT people want to do something to fix a potential problem and yet budgets prevent them, as it is not yet an issue. Certainly, in my experience, companies that use ITIL or other such approaches bias towards retrospective approaches in contrast to proactive ones. So if it takes a law to focus and support the IT people in companies, enabling them to do the right thing, it can only be good.


The known flaws in MD5 do not damage its security as a password hash. Passwords are a particularly difficult cryptanalytic challenge; you need a preimage attack that works on data constrained to a very small number of blocks.


It already is. The FCC regulates this sort of thing. If you are a large business with lots of user data and you don't take appropriate steps to protect it they can and will fine you.


The FCC has never fined anyone for SHA1-hashing passwords.


I'll take your word for it, but I bet that has to do with the fact that once you get big enough for the FCC to care you probably know enough to properly store a password.

They've certainly fined other companies for other transgressions of a similar nature.


For instance...?


Path just had to pay an $800k fine (though I think that was for COPPA violations?).

Twitter & Google have also had very prominent investigations & settlements in the past couple of years.


I think tptacek might be using the Socratic method, or else he is just messing with you, but I can't watch it any longer.

The regulatory function you are talking about falls under the purview of the FTC, not the FCC.

"Path Social Networking App Settles FTC Charges it Deceived Consumers and Improperly Collected Personal Information from Users' Mobile Address Books

Company also Will Pay $800,000 for Allegedly Collecting Kids' Personal Information without their Parents’ Consent"[1]

[1] http://www.ftc.gov/opa/2013/02/path.shtm


Doh! You're right about the FTC vs FCC. TLA fail on my part!

Though I think that sometimes the FCC gets involved in these sorts of things too? Though maybe not? I am now officially out of the realm where I know what I'm talking about.


For the record: I genuinely do not know the answer to the question I just asked, although obviously I'm skeptical about this underlying claim.


I am equally skeptical about the SHA-1 claim. I am just commenting that, as a general rule, customer protection falls under the purview of the FTC. There are certainly corner cases where industry-specific regulations introduce additional oversight when it comes to customer information protection, e.g. OCC/OTS/NCUA and GLBA[1], or HHS and HIPAA[2].

[1] http://www.occ.gov/news-issuances/news-releases/2005/nr-ia-2...

[2] http://www.hhs.gov/ocr/privacy/hipaa/administrative/breachno...

Side Bar:

If you are bored and want to get your wonk on, search regulations.gov for sha-1[a]. It looks like most of the proposed rules mentioning SHA-1 come from HHS, the FRA (Federal Railroad Administration), and the NIGC (National Indian Gaming Commission). However, there is one reference to SHA-1 in an FCC rule about the Commercial Mobile Alert System[b]:

  CMAC-digest
     Optional element. The code representing
     the digital digest (``hash'') computed
     from the resource file. Calculated using
     the Secure Hash Algorithm (SHA-1) per
     [FIPS 180-2]. Alert Gateway uses the CAP
     digest element to populate this element.

[a] http://www.regulations.gov/#!searchResults;rpp=25;po=0;s=sha...

[b] http://www.regulations.gov/#!documentDetail;D=FCC-2008-0002-...


Were any of those for appsec mistakes?


"FTC Accepts Final Settlement with Twitter for Failure to Safeguard Personal Information" - http://www.ftc.gov/opa/2011/03/twitter.shtm

I don't know the exact nature of the failures.


Maybe you mean Federal Trade Commission instead of the Federal Communications Commission?

I am not suggesting that either organization fined someone for not using SHA1. I'm just suggesting that this sort of regulatory function falls under the purview of the FTC not the FCC.


Civil fines for a large corporation not properly protecting customer data is very different from making it a general criminal offense.


True. I overstated.


Given that bcrypt and scrypt have had very little attention from cryptographers, it would be better to start there than adding even more schemes. Unless there is some reason to believe that the winner of this contest will attract serious analysis. If that's true, then they should probably allow the established players to enter.

Diversity isn't necessarily a good thing in security. Look at all the attacks on SSL/TLS that rely on being able to negotiate a cipher.


I think that's exactly one of the purposes of competition: to attract more attention and analysis. I'm pretty sure we'll see some variant of scrypt there ;)


Actually, TLS cipher suite negotiation has been a lifesaver for mitigating recent attacks. http://www.phonefactor.com/resources/CipherSuiteMitigationFo...

Unless you're referring to SSLv2 or home-rolled nonstandard downgrade logic invented by browser vendors, modern TLS is pretty good at preventing ciphersuite downgrade attacks.

If a single standard had been specified it would have been some minor variant of AES-CBC-HMAC<SHA>. We know of several ways to attack that combination now.


Ah-ah! We know several ways to attack poor implementations of that combination now. AES-CBC with HMAC is still sound.
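The implementation-ordering point can be made concrete with a stdlib-only toy. This sketch uses encrypt-then-MAC, the ordering that sidesteps the classic MAC-then-encrypt padding-oracle traps; the "cipher" is a throwaway SHA-256 counter keystream (the stdlib has no AES), and all the function names are mine, for illustration only:

```python
import hashlib
import hmac
import secrets

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    # Toy counter-mode keystream from SHA-256. Illustration only, NOT a real cipher.
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def encrypt_then_mac(enc_key: bytes, mac_key: bytes, plaintext: bytes):
    nonce = secrets.token_bytes(16)
    ct = bytes(a ^ b for a, b in zip(plaintext, keystream(enc_key, nonce, len(plaintext))))
    # The MAC covers nonce + ciphertext, so any tampering is caught up front.
    tag = hmac.new(mac_key, nonce + ct, hashlib.sha256).digest()
    return nonce, ct, tag

def decrypt(enc_key: bytes, mac_key: bytes, nonce: bytes, ct: bytes, tag: bytes) -> bytes:
    expected = hmac.new(mac_key, nonce + ct, hashlib.sha256).digest()
    if not hmac.compare_digest(expected, tag):  # verify BEFORE touching the ciphertext
        raise ValueError("bad MAC")
    return bytes(a ^ b for a, b in zip(ct, keystream(enc_key, nonce, len(ct))))
```

Because the receiver rejects forged ciphertexts before decrypting anything, there is no padding or decryption behavior for an attacker to probe.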


Where "poor implementations" = "just about all of them, ever, except those patched in the last 24 months".


Are you arguing that AES-CBC + HMAC is NOT a sound construction, or are you just augmenting my comment with more information?


I'm not sure.

I think I'm saying that a "formally proven sound" construction that is full of poorly understood traps for the implementer on actual machines is not always the best choice in practice.


Absurdly intelligent and well-informed crypto people have screwed up CTR mode before.


Yep. I think we can go farther than that and say that it's been historically screwed up by protocol designers and/or implementers more often than not (with predictable IVs, timing and length oracles, MAC-then-encrypt, etc)


Can you cite anything that shows that bcrypt hasn't been studied by cryptographers? I've not heard that yet and would like to add it to my research (I keep notes on good password storage techniques).


I am not sure how you are supposed to prove a negative. However I have often wondered why scrypt/bcrypt do not show up in any literature reviews. Search for scrypt/bcrypt:

https://eprint.iacr.org/search.html


Can you find cryptanalytic results on any passphrase-based KDF? PBKDF2 is a standard; find results on it from before say 2 years ago.


I didn't have that link, and that pretty well backs your point. I know proving a negative is next to impossible, but I figured that you'd have a reason for making the assertion (which I wanted to know). The closest I can find to a study of bcrypt/scrypt is:

http://www.openwall.com/presentations/YaC2012-Password-Hashi... http://www.openwall.com/presentations/ZeroNights2012-New-In-...

Technically these are reviews/improvements to scrypt rather than an in-depth analysis of the security of the algorithm.

Even the Security StackExchange site is kind of silent on the issue:

http://security.stackexchange.com/questions/4781/do-any-secu...


No panic; just if you use IE, read this: http://en.wikipedia.org/wiki/Server_Name_Indication#Support

Real browsers like Opera, Chrome, Firefox... all good.

My bad; I edited my panic away.

Thanks dchest for being more awake than I.


The certificate looks good to my browser. shrug

It's a StartCom intermediate certificate, good from 2/5/13 to 2/7/14, and the connection is encrypted with RC4_128. So says Chrome...


Yes, you're spot on; I just looked into it more and it is TLS 1.0 on Chrome, and I disabled that in my IE browser, so that may be the issue and explains why. Though as it's a different certificate, my initial thoughts are still with some anti-IE script of some form: IE gets a cert for a different site, also from the same issuing authority.

No panic.


What problem? What browser and OS do you use?


On my other computer, and sadly I had IE (latest and patched IE 9) running.

Now checked with Opera, Firefox and Chrome, and they get issued a different certificate. Most odd, and probably an anti-IE thing.

IE seems to get a certificate for www.131002.net, while the others get a certificate for this site, with different expiry dates and details, so again probably an anti-IE script of some form.


Aha, it's because IE on WinXP doesn't support SNI (http://en.wikipedia.org/wiki/Server_Name_Indication#Support). Unfortunately, there's no way for servers to work around this other than putting different domains on different IP addresses.
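For reference, a client opts in to SNI by naming the host inside the TLS handshake, which is what lets one IP address serve the right certificate for several domains. In Python's ssl module that's the `server_hostname` argument; the wrapper function here is mine, as a sketch:

```python
import socket
import ssl

def fetch_cert_subject(host: str, port: int = 443):
    """Connect with SNI and return the subject of the certificate the server picked."""
    context = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=10) as sock:
        # server_hostname sets the SNI extension (and enables hostname checking);
        # a client that omits it, like IE on XP, gets whatever default cert the
        # server has configured for that IP.
        with context.wrap_socket(sock, server_hostname=host) as tls:
            return tls.getpeercert()["subject"]
```

A non-SNI client hitting the same IP would see the server's fallback certificate instead, which matches the "IE gets a certificate for a different site" symptom described above.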


The box in question has Vista, and did have TLS disabled. Still, I'll blame Microsoft.


I only found this about a Windows 7 / IE8 SNI problem; maybe it'll help: http://answers.microsoft.com/en-us/ie/forum/ie8-windows_7/sn...


I edited the original post to highlight my stupidity, and many thanks for being more awake than I. I'll have a read through that when a bit more awake, but the moral being: don't use IE, even if you are using one of your lesser computers :|.

Again thank you.


Did you make customizations to end up with TLS disabled in IE on Vista? Or did you end up this way from default settings?


By default TLS 1.0 is enabled; I disabled it myself ages ago, as other more robust options were supported, and whilst extensions have been added since, at the time I felt it was for the best. It worked fine for me until this moment, at least when I have used that particular box, but it's always nice to learn.

In short: no, the default is TLS 1.0 on; I changed it to disabled for TLS 1.0 and left SSL 2.0 and 3.0 enabled, as they are by default.


Why do you disable the newest protocol version (TLS 1.0 is essentially SSL 3.1) but keep the older versions?

I'd enable SSL 3.0 and TLS 1.x, but disable SSL 2.0 since it's old and broken.


SSL 2.0 is old and busted. You ought to disable it.

SSL 3.0 is all-but identical to TLS 1.0.


While I think it's cool to have a competition to help come up with better, "standardized" methods of hashing, PBKDF2 is pretty good already. The problem seems to be a lack of implementation, not a lack of good options.


PBKDF2 is not pretty good: the difference between GPU/hardware-accelerated implementations of PBKDF2 with common hashes and software implementations for CPUs is huge. Plus you can compute it in parallel cheaply. See scrypt paper for details: http://www.tarsnap.com/scrypt.html


I can't find in that paper how many iterations they put PBKDF2 through and which hashing algorithm they used. Any implementation of PBKDF2 I do uses a very high number of iterations (50,000+) and I use whirlpool, which is amongst the slowest of algorithms.


100 ms in the table: PBKDF2-HMAC-SHA256 with an iteration count of 86,000; 5 s: PBKDF2-HMAC-SHA256 with an iteration count of 4,300,000 (see page 13).

If you're using the slowest possible algorithm/biggest number of iterations, you're still vulnerable to parallel attacks: imagine a huge number of cheap chips bruteforcing passwords. Scrypt tries to maximize the cost of such chips by requiring large amounts of RAM for computation. Again, this is explained in the paper.
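Those iteration counts map directly onto the standard library's PBKDF2, and the cost scales linearly with the count, for defender and attacker alike. A quick sketch (smaller counts than the paper's 86,000 and 4,300,000, just to keep it fast; absolute times vary by machine):

```python
import hashlib
import os
import time

salt = os.urandom(16)
timings = {}
for iterations in (50_000, 100_000):  # the paper's 86,000 / 4,300,000 behave the same way
    start = time.perf_counter()
    digest = hashlib.pbkdf2_hmac("sha256", b"correct horse", salt, iterations)
    timings[iterations] = time.perf_counter() - start
    assert len(digest) == 32

# Doubling the iteration count roughly doubles the work -- but unlike scrypt,
# there is no memory knob, so each guess stays cheap to run in parallel.
print({k: f"{v * 1000:.1f} ms" for k, v in timings.items()})
```

This is exactly the parallelism point: PBKDF2's only lever is time per guess, and an attacker with many cheap cores pays almost nothing extra per additional guess running concurrently.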


Thanks.



