The web is all abuzz about how SHA-1 is “broken,” “a failure,” “obsolete,” etc.
It is supposed to be computationally impractical to create two documents that have the same secure hash code, and yet Google has demonstrated that they have done just that for the SHA-1 algorithm.
I’d like to make two simple observations to put this in perspective.
This is not a surprise. Cryptography experts have suspected since 2005 that SHA-1 was vulnerable and have recommended using other algorithms. The security community has been gleeful about Google’s announcement; they feel vindicated after years of telling people not to use SHA-1.
This took a lot of work, both in terms of research and computing. Crypto researchers have been trying to break SHA-1 for 22 years. And according to their announcement, these are the resources Google had to use to break SHA-1:
- Nine quintillion (9,223,372,036,854,775,808) SHA-1 computations in total
- 6,500 years of CPU computation to complete the first phase of the attack
- 110 years of GPU computation to complete the second phase
While SHA-1 is no longer recommended, it’s hardly a failure. I don’t imagine it would take 22 years of research and millions of CPU hours to break into my car.
I’m not saying people should use SHA-1; in fact, I recently advised a client not to. Why not use a better algorithm (or at least one currently widely believed to be better), such as SHA-256? But I am saying it’s easy to exaggerate what it means to say SHA-1 has failed.
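For most code, swapping SHA-1 for SHA-256 is a one-word change. Here is a minimal sketch using Python’s standard `hashlib` module; the message is made up for illustration.

```python
import hashlib

message = b"example document contents"

# Legacy approach: SHA-1 produces a 160-bit digest (40 hex characters).
legacy_digest = hashlib.sha1(message).hexdigest()

# Recommended swap: SHA-256 produces a 256-bit digest (64 hex characters).
better_digest = hashlib.sha256(message).hexdigest()

print(len(legacy_digest))   # 40
print(len(better_digest))   # 64
```

Note that the longer digest is incidental; the security difference comes from SHA-256’s resistance to the collision attacks that now threaten SHA-1, not merely from its extra bits.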
Update: Attacks on SHA-1 have become orders of magnitude more efficient since this post was first written. See this announcement of a chosen-prefix collision.