[openssl-dev] Work on a new RNG for OpenSSL

Theodore Ts'o tytso at mit.edu
Wed Jun 28 02:17:22 UTC 2017


On Tue, Jun 27, 2017 at 04:12:48PM -0500, Benjamin Kaduk wrote:
> 
> While you're here, would you mind confirming/denying the claim I read
> that the reason the linux /dev/random tracks an entropy estimate and
> blocks when it gets too low is to preserve backward security in the face
> of attacks against SHA1?

The explanation is a bit more complicated than that.  Linux was the
first OS to have a built-in random number generator suitable for use
in cryptographic applications.  Linux's /dev/random dates back to the
days when we still had export control (this is why a cryptographic
hash was used instead of an encryption algorithm), and when we didn't
have nearly as much understanding of cryptanalysis as we do now.

As a result, just about all of the RNGs from that era (such as the
original PGP RNG, designed by Colin Plumb, whom I consulted
extensively when I designed Linux's /dev/random system) were much more
focused on collecting environmental noise and on using cryptographic
primitives for mixing.  We were much less comfortable "back in the
day" trusting the strength of any single cryptographic primitive.
Given the weaknesses that were later found in MD4 and MD5 (which date
from that era), I'd say our caution was justified.

The original entropy estimation and accounting philosophy comes from
these historical reasons.  It has been preserved mainly because the
shift from relying on entropy estimates to trusting cryptographic
primitives was gradual, and by the time we had reached general
consensus, just about everyone was using /dev/urandom anyway, so
there was no real point in changing /dev/random.

In addition, as it turns out, we still need an entropy estimate
anyway.  First of all, you still need to decide when you have gathered
enough environmental "noise" to be willing to consider the CRNG fully
initialized.  Using some kind of entropy estimate is a good way to do
this, since it gives us a standard that is portable across a wide
variety of CPU architectures and hardware configurations.

Secondly, certain types of hardware random number generators can be
expensive to read continuously, so using the entropy accounting to
control how often to read from the hwrng can be useful.  This also
allows us to assign a "derating percentage" if we don't want to place
complete trust in the quality of the output from said hardware RNG.

In any case, at this point what I recommend is that people just use
getrandom(2) and be happy.  Getrandom will block until the CRNG is
initialized (which is really only an issue during early boot on most
systems), and it eliminates a lot of the other potential pitfalls of
using /dev/urandom (for example, it's not vulnerable to a file
descriptor exhaustion attack, combined with the universal problem that
most userspace application programmers don't bother to check error
returns).
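
As a rough sketch (assuming a libc recent enough to expose the wrapper
in <sys/random.h>, e.g. glibc 2.25+; the helper name is purely
illustrative), that boils down to something like:

    /* Illustrative sketch: fill buf with len random bytes using
     * getrandom(2).  Assumes Linux 3.17+ and a libc that provides
     * the wrapper in <sys/random.h>. */
    #include <sys/random.h>
    #include <sys/types.h>
    #include <errno.h>
    #include <stddef.h>

    int fill_random(unsigned char *buf, size_t len)
    {
        size_t got = 0;

        while (got < len) {
            ssize_t n = getrandom(buf + got, len - got, 0);
            if (n < 0) {
                if (errno == EINTR)   /* interrupted by a signal: retry */
                    continue;
                return -1;            /* e.g. ENOSYS on a pre-3.17 kernel */
            }
            got += (size_t)n;         /* may be short for large requests */
        }
        return 0;
    }

With flags set to 0, the call blocks until the CRNG has been
initialized and then never blocks again, which is exactly the behavior
described above.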

If your libc doesn't support getrandom(2) for whatever reason, open
and read from /dev/urandom instead.  Unless you are running during
early boot (and doing something really silly, like generating long-term
public keys a few seconds after the box is powered on, fresh from the
factory), you should be fine so long as you do proper error checking
of all of your system calls.
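
A minimal sketch of that fallback, with the error checking spelled out
(again, the function name is just illustrative):

    /* Illustrative sketch: fallback that reads len random bytes from
     * /dev/urandom, checking every system call. */
    #include <fcntl.h>
    #include <unistd.h>
    #include <errno.h>
    #include <stddef.h>

    int fill_random_from_urandom(unsigned char *buf, size_t len)
    {
        size_t got = 0;
        int fd = open("/dev/urandom", O_RDONLY | O_CLOEXEC);

        if (fd < 0)
            return -1;                /* fd exhaustion, missing /dev, ... */

        while (got < len) {
            ssize_t n = read(fd, buf + got, len - got);
            if (n < 0) {
                if (errno == EINTR)   /* interrupted by a signal: retry */
                    continue;
                close(fd);
                return -1;
            }
            if (n == 0) {             /* unexpected EOF: treat as failure */
                close(fd);
                return -1;
            }
            got += (size_t)n;         /* read(2) may return fewer bytes */
        }
        close(fd);
        return 0;
    }

Note that read(2) can legitimately return fewer bytes than requested,
which is one of the error-handling pitfalls mentioned above.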

Cheers,

						- Ted
