[openssl-dev] Work on a new RNG for OpenSSL

John Denker ssx at av8n.com
Thu Jun 29 15:01:11 UTC 2017


Executive summary:

As has been said many times before, what we need (but do not have)
is /one/ source of randomness that never blocks and never returns
bits that are guessable by the adversary.

In favorable cases, using getrandom(,,0) [*] is appropriate
for openssl.  There are problems with that, but switching to
getrandom(,,GRND_RANDOM) [**] would not solve the problems.

[*]  Reading /dev/urandom is almost the same.
[**] Reading /dev/random is essentially the same.
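To make that concrete, here is a minimal sketch of the getrandom(,,0)
approach with a /dev/urandom fallback for kernels that predate the
syscall.  This is not OpenSSL's actual code; the helper name
get_random_bytes and the error-handling policy are invented for
illustration only.

/* Sketch only: fill buf with len random bytes using getrandom(2)
 * with flags == 0, falling back to /dev/urandom if the syscall is
 * not available (ENOSYS on old kernels). */
#include <sys/random.h>   /* getrandom(), glibc >= 2.25 */
#include <fcntl.h>
#include <unistd.h>
#include <errno.h>
#include <stddef.h>

static int get_random_bytes(unsigned char *buf, size_t len)
{
    size_t filled = 0;

    while (filled < len) {
        ssize_t n = getrandom(buf + filled, len - filled, 0);
        if (n < 0) {
            if (errno == EINTR)
                continue;          /* interrupted by a signal: retry */
            if (errno == ENOSYS)
                break;             /* no getrandom(): fall back below */
            return -1;
        }
        filled += (size_t)n;
    }
    if (filled == len)
        return 0;

    /* Fallback: read /dev/urandom, which is almost the same. */
    int fd = open("/dev/urandom", O_RDONLY | O_CLOEXEC);
    if (fd < 0)
        return -1;
    while (filled < len) {
        ssize_t n = read(fd, buf + filled, len - filled);
        if (n < 0) {
            if (errno == EINTR)
                continue;
            close(fd);
            return -1;
        }
        filled += (size_t)n;
    }
    close(fd);
    return 0;
}

The "almost" above is the one difference that matters: with flags == 0,
getrandom() blocks only until the kernel's pool has been initialized
once at boot, whereas /dev/urandom never blocks at all, even before
initialization.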

In cases where getrandom() is not good enough, the problems tend
to be highly platform-dependent.  Many of these problems would be
quite difficult for openssl to detect (much less solve).  Some
platforms are not secure and cannot be made secure.

On 06/27/2017 01:40 PM, Theodore Ts'o wrote:

>> My recommendation for Linux is to use getrandom(2) [with] the flags
>> field set to zero.
>> [...] /dev/urandom (which has the same performance characteristics as the
>> getrandom system call)


Similarly, on 06/29/2017 04:03 AM, Dimitry Andric gave what might
be considered the usually-correct answer for the wrong reasons:

> In short, almost everybody should use /dev/urandom

OK.  There's also getrandom().

> and /dev/random is kept alive for old programs.

[...]

> The Linux random(4) manpage says:
> 
>        The /dev/random device is a legacy interface which dates back
>        to a time where the cryptographic primitives used in the
>        implementation of /dev/urandom were not widely trusted.  It
>        will return random bytes only within the estimated number of
>        bits of fresh noise in the entropy pool, blocking if necessary.
>        /dev/random is suitable for applications that need high quality
>        randomness, and can afford indeterminate delays.

That's what the manpage says ... but does anybody believe it?

On 06/27/2017 06:22 PM, Ted told us not to trust what it says in the man
pages.

Oddly enough, all the advice given above (including the list traffic
and the man pages) is flatly contradicted by what it says in the most
up-to-date kernel source, namely:

>>>  /dev/random is suitable for use when very high
>>>  quality randomness is desired (for example, for key generation [...]

Reference:
  https://git.kernel.org/pub/scm/linux/kernel/git/tytso/random.git/tree/drivers/char/random.c?id=e2682130931f#n111

All in all, it's hardly surprising that users are confused.

==================================

When it was introduced, the random / urandom split was advertised as
a way of solving certain problems with the old approach.  To block or
not to block, that is the question.....  The problems didn't actually
get solved, just shifted.  The split requires users (rather than the
RNG designers) to deal with the problems.  The fact that the recently-
introduced getrandom(2) call has flags such as GRND_RANDOM and
GRND_NONBLOCK means that users are still on the hook for problems
they almost certainly cannot understand, much less solve.
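To illustrate how those flags shift the burden onto the caller, here
is an illustrative sketch.  The helper name get_key_material and the
retry policy are made up; this is not a recommendation, just a picture
of the decision the flags force on the user.

/* Sketch only: the getrandom(2) flags push the block-or-not decision
 * onto the caller. */
#include <sys/random.h>
#include <sys/types.h>
#include <errno.h>
#include <stddef.h>

static ssize_t get_key_material(unsigned char *buf, size_t len)
{
    /* GRND_RANDOM draws from the /dev/random pool; combined with
     * GRND_NONBLOCK it fails with EAGAIN instead of blocking when the
     * kernel's entropy estimate is too low. */
    ssize_t n = getrandom(buf, len, GRND_RANDOM | GRND_NONBLOCK);
    if (n < 0 && errno == EAGAIN) {
        /* Now the application must decide: block, spin, degrade, or
         * give up.  This retry with flags == 0 is just one arbitrary
         * choice among several. */
        n = getrandom(buf, len, 0);
    }
    return n;
}

The EAGAIN branch is exactly where the user ends up holding a problem
the RNG designers declined to solve.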

The conclusion remains the same:  What we need (but do not have) is
/one/ source of randomness that never blocks and never returns bits
that are guessable by the adversary.

==========

In fact there are profound distinctions between an ideal HRNG and
an ideal PRNG.  AFAICT neither one exists in the real world, in the
same sense that ideal spheres and planes do not exist, but still
the idealizations are meaningful and helpful.

It seems likely that /dev/random was intended, at the time of the
split, to serve as an approximate HRNG, while /dev/urandom was intended
to be a PRNG of some kind.  Using terms like "legacy interface" is
an astonishing mischaracterization of the distinction.

====

Similarly, it is strange to talk about

> a time where the cryptographic primitives used in the
> implementation of /dev/urandom were not widely trusted

In fact, 
 ++ Improper seeding is, and has always been, the #1 threat to
  both /dev/random and /dev/urandom.
 ++ Compromise of the internal state is a threat to /dev/urandom.
  It is better to prevent this than to try to cure it.
  If the PRNG is compromised, probably a lot of other things are too.
 ++ Lousy architectural design is always a threat.
 ++ Coding errors are always a threat.
 ++ etc.
 ++ etc.
 -- Cryptanalytic attack against the outputs is way, Way, WAY
  down on the list, and always has been, assuming the crypto
  primitives are halfway decent and the architecture and
  implementation are sound.


