
Why does a collision detection in a cryptographic hash function make finding other collisions easier?

From Wikipedia, I read:

Joux[3] noted that 2-collisions lead to n-collisions: if it is feasible to find two messages with the same MD5 hash, it is effectively no more difficult to find as many messages as the attacker desires with identical MD5 hashes.

But why is this so? I can't see why. The algorithms are open, right? People can read the math that generates the hashes, the digest machinery. So if we know one collision, why does that help us find new ones?

Is it just a matter of making small iterative changes to both of the original colliding messages and monitoring how their hashes change?


This is not a property of all hash functions, but a weakness of the Merkle–Damgård construction (which MD5 and SHA-1 are based on). Because the hash is computed iteratively, one block at a time, you can "resume" the hash calculation from any intermediate state with specially selected appended data; the same structural property is what enables length extension attacks. For full details of how this is used to generate arbitrarily many collisions, see:

  • Multicollisions in Iterated Hash Functions. Application to Cascaded Constructions (Antoine Joux)
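Concretely, Joux's construction chains independent single-block collisions: if you can find one collision at each intermediate state, then choosing either colliding block at each of t positions yields 2^t messages with the same final hash. Here is a minimal sketch with a toy 8-bit Merkle–Damgård hash; the compression function and block sizes are invented for illustration, not anything from MD5:

```python
import hashlib
from itertools import product

def compress(state, block):
    # Toy compression function: 8-bit state, 2-byte blocks (illustrative only).
    return hashlib.sha256(bytes([state]) + block).digest()[0]

def md_hash(blocks, iv=0):
    # Iterated (Merkle-Damgard style) hash: fold compress over the blocks.
    state = iv
    for b in blocks:
        state = compress(state, b)
    return state

def find_block_collision(state):
    # Brute-force two distinct blocks that collide from this state.
    # With 65536 candidate blocks and only 256 possible outputs, a
    # collision is guaranteed by the pigeonhole principle.
    seen = {}
    for x in range(65536):
        block = x.to_bytes(2, "big")
        out = compress(state, block)
        if out in seen:
            return seen[out], block, out
        seen[out] = block

# Chain t single-block collisions: each step costs one collision search,
# but the number of colliding messages doubles at every step.
t = 3
state = 0
choices = []
for _ in range(t):
    b1, b2, state = find_block_collision(state)
    choices.append((b1, b2))

# Picking either block at each of the t positions gives 2**t distinct
# messages, all with the same digest.
messages = [list(combo) for combo in product(*choices)]
digests = {md_hash(m) for m in messages}
print(len(messages), len(digests))  # 8 messages, 1 shared digest
```

The attack in Joux's paper does exactly this with the real compression function: t collision searches, each no harder than finding a single collision, produce 2^t colliding messages.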

For a related attack based on this idea, see:

  • Understanding hash length extension attacks
  • Flickr's API Signature Forgery Vulnerability
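To see what "resuming" the calculation means, here is a minimal sketch of a length extension attack against the same kind of toy iterated hash, assuming a naive MAC of the form H(secret || message). (Against real MD5/SHA-1 the attacker must also account for the padding block; the toy hash here omits padding entirely, so the digest is exactly the internal state.)

```python
import hashlib

def compress(state, block):
    # Toy compression function with an 8-bit state (illustrative only).
    return hashlib.sha256(bytes([state]) + block).digest()[0]

def md_hash(blocks, iv=0):
    # Iterated hash without padding: the digest IS the final internal state.
    state = iv
    for b in blocks:
        state = compress(state, b)
    return state

secret = [b"\x13\x37"]                 # known only to the MAC's owner
message = [b"\x00\x01", b"\x00\x02"]
mac = md_hash(secret + message)        # naive MAC = H(secret || message)

# The attacker never sees `secret`, only `mac`. But since the digest
# equals the internal state, they can simply resume hashing from it.
suffix = [b"\xff\xee"]
forged = md_hash(suffix, iv=mac)

# The forgery matches the legitimate MAC of the extended message.
print(forged == md_hash(secret + message + suffix))  # True
```

This is the mechanism behind the Flickr API signature forgery linked above: the service used H(secret || params) as a signature, so an attacker could append extra parameters and compute a valid signature without ever learning the secret.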


I think the key here is the word "feasible". In crypto-land, feasible means "a reasonable amount of time relative to the value of whatever I'm trying to break", or perhaps "less time than it would take by brute force", depending on how you look at it.

So, if I can find one collision feasibly, then I can find n collisions feasibly, because n × small is still small.

There will still be some n for which n × small exceeds the value of the breakage.

Would this apply to other hash functions? I believe so, but I could be wrong.

Let the flaming commence.
