[openssl-dev] Potential timing problems in OpenSSL RSA implementation

Andy Polyakov appro at openssl.org
Sun Dec 17 17:11:28 UTC 2017


Hi,

> I'd like to stress that this is highly speculative, it may very well be
> that this is not exploitable in any practical way. But I thought it's
> important enough that it should be public knowledge. (Also "This leaves
> a small timing channel, [...] but it is not believed to be large
> enough to be exploitable" said TLS 1.2 when it described what later
> became Lucky13.)
> 
> Fixing this would likely require changing the bignum library in a
> way that it knows fixed size types that allow e.g. treating a 256 byte
> number in the same way as a 255 byte number.
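
[Just to make the quoted suggestion concrete at the interface level --
a sketch only, not OpenSSL code, and not the bignum-internal change the
poster actually asks for: present every result to the caller as a fixed
256-byte buffer, so that a 255-byte value is indistinguishable at this
layer. The conversion itself would of course also have to avoid
branching on the numerical length for the fix to be complete.]

/* Sketch only (hypothetical helper, not an OpenSSL API): copy a big-endian
 * number of num_len bytes into a fixed-size output buffer, left-padded with
 * zeros.  Callers then always handle out_len bytes regardless of whether the
 * value "really" needs 255 or 256 of them.  Note that the memset/memcpy
 * lengths here still depend on num_len, so this alone does not close the
 * channel. */
#include <stddef.h>
#include <string.h>

int pad_to_fixed_len(unsigned char *out, size_t out_len,
                     const unsigned char *num, size_t num_len)
{
    if (num_len > out_len)
        return 0;                       /* value does not fit */
    memset(out, 0, out_len - num_len);  /* leading zero bytes */
    memcpy(out + (out_len - num_len), num, num_len);
    return 1;
}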

One has to keep in mind that in order to measure any difference at all,
your instrument's accuracy has to be sufficient. I'd say this is what
led to the failure to recognize the significance of the timing channel
that became Lucky13; it was a failure to appreciate the achievable
measurement accuracy, wasn't it? [In that context one can even wonder
how significant it was that the attacker was ~2x faster than the victim
in the original paper :-)] Another essential component is that the
oracle's timing has to be *reliably* distinguishable from "natural"
variation. [I put "natural" in quotes because program behaviour is not
directly governed by the laws of physics, even if complex programs do
tend to exhibit timing that is not purely deterministic.] With this in
mind one can genuinely wonder whether the timing difference between
handling 256- and 254-byte numbers at the end of the operation can be
measured at all: we would be looking at a difference of a handful of
cycles, while the "natural" variation is more like a hundred or more.
However! I'm not saying that it's absolutely infeasible(*), but rather
that the above might be the wrong *first* question to ask.
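
[For intuition, here is a minimal measurement harness -- a sketch only,
x86-specific, with a made-up stand-in workload rather than any OpenSSL
code -- that shows how spread out raw RDTSC cycle counts are even when
the operation being timed is exactly the same every iteration:]

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <x86intrin.h>          /* __rdtscp (x86 only, gcc/clang) */

#define RUNS 10000

static int cmp_u64(const void *a, const void *b)
{
    uint64_t x = *(const uint64_t *)a, y = *(const uint64_t *)b;
    return (x > y) - (x < y);
}

int main(void)
{
    static uint64_t samples[RUNS];
    volatile uint64_t sink = 0;
    unsigned int aux;

    for (int i = 0; i < RUNS; i++) {
        uint64_t t0 = __rdtscp(&aux);
        /* stand-in workload for "the last few bytes of a bignum operation" */
        for (int j = 0; j < 64; j++)
            sink += (uint64_t)j * 2654435761u;
        uint64_t t1 = __rdtscp(&aux);
        samples[i] = t1 - t0;
    }

    qsort(samples, RUNS, sizeof(samples[0]), cmp_u64);
    printf("min %llu  median %llu  90th %llu  max %llu (cycles)\n",
           (unsigned long long)samples[0],
           (unsigned long long)samples[RUNS / 2],
           (unsigned long long)samples[RUNS * 9 / 10],
           (unsigned long long)samples[RUNS - 1]);
    return (int)(sink & 1);
}

[On typical hardware the spread between the minimum and the 90th
percentile alone tends to be tens of cycles or more, which is the
"natural" variation a handful-of-cycles signal has to fight against.]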
A timing signal from input lengths that differ by a whole *limb* is
more likely to be reliably distinguishable, one of the key reasons
being that inputs of unusual lengths are customarily handled by
distinct and less optimized code paths. What *presumably* saves the day
is that the probability of hitting a limb-shorter result is
prohibitively low; see the back-of-the-envelope sketch below. [Not to
forget that the attacker would be limited by the victim's resources.]
Is this a reasonable assumption?
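
[To put rough numbers on "prohibitively low" -- my own arithmetic, not
from the thread, assuming a 2048-bit modulus, 64-bit limbs and a result
that is close to uniform modulo N (the exact constants depend on N):]

#include <math.h>
#include <stdio.h>

int main(void)
{
    /* Result one byte shorter: top byte zero, roughly 1 in 256.
     * Result one limb shorter: top 64-bit limb zero, roughly 2^-64. */
    double p_byte_short = 1.0 / 256.0;
    double p_limb_short = pow(2.0, -64);

    printf("P[1 byte shorter] ~ %.3g (expect ~%.0f operations per hit)\n",
           p_byte_short, 1.0 / p_byte_short);
    printf("P[1 limb shorter] ~ %.3g (expect ~%.3g operations per hit)\n",
           p_limb_short, 1.0 / p_limb_short);
    return 0;
}

[So a byte-shorter result shows up every few hundred operations, while
a limb-shorter one is roughly a once-in-2^64 event, far beyond what an
attacker limited by the victim's resources could hope to collect.]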

(*) An attacker might be in a position to amplify the signal, for
example by invalidating the victim guest's cache in a virtualized
environment.
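
[Purely as an illustration of the amplification idea, and nothing more
than a sketch: the target buffer below is made up, and whether an
attacker can reach the victim's cache lines at all depends on the setup
(e.g. page sharing between guests); without such a mapping one would
evict whole cache sets instead. The intended effect is that every pass
through the affected code pays extra cache-miss penalties, stretching a
small timing difference into a larger one.]

#include <emmintrin.h>   /* _mm_clflush, _mm_mfence (SSE2) */
#include <stddef.h>

/* Evict every cache line in [start, start+len) so the next access misses. */
static void flush_range(const void *start, size_t len)
{
    const char *p = (const char *)start;
    for (size_t off = 0; off < len; off += 64)   /* assume 64-byte lines */
        _mm_clflush(p + off);
    _mm_mfence();
}

int main(void)
{
    /* Stand-in for memory the victim is about to touch (hypothetical). */
    static unsigned char victim_stub[4096];

    for (int i = 0; i < 1000; i++)
        flush_range(victim_stub, sizeof(victim_stub));
    return 0;
}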

