[openssl-dev] Work on a new RNG for OpenSSL

Peter Waltenberg pwalten at au1.ibm.com
Wed Jun 28 01:41:11 UTC 2017


The next question you should be asking is: does our proposed design
mitigate known issues?
For example, this:

http://www.pcworld.com/article/2886432/tens-of-thousands-of-home-routers-at-risk-with-duplicate-ssh-keys.html

Consider that most of the world's compute now runs on VMs whose images
are cloned, duplicated, and restarted as a matter of course. That's not
vastly different from an embedded system whose clock powers up as 00:00
1-Jan-1970 on every boot. If you can trust the OS to come up with unique
state each time, you can rely solely on the OS RNG, provided you reseed
often enough, i.e. before key generation. That's also why seeding a
chain of PRNGs once at startup is probably not sufficient here.
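To make "reseed before key generation" concrete, here's a minimal
sketch. It assumes Linux's getrandom(2) (glibc 2.25+), the 48-byte seed
size is an arbitrary choice, and error handling is trimmed to the bare
minimum; treat it as an illustration, not a finished design:

    /*
     * Illustrative sketch only: pull fresh seed material from the OS
     * immediately before generating a key, instead of trusting a PRNG
     * chain that was seeded once at process start.
     */
    #include <sys/random.h>       /* getrandom(), glibc >= 2.25 */
    #include <openssl/rand.h>

    static int reseed_before_keygen(void)
    {
        unsigned char seed[48];   /* arbitrary: 384 bits of fresh OS entropy */

        if (getrandom(seed, sizeof(seed), 0) != (ssize_t)sizeof(seed))
            return 0;             /* treat short reads as failure */
        RAND_seed(seed, sizeof(seed));  /* mix into OpenSSL's pool */
        return 1;
    }

On a freshly cloned VM image, it's that fresh pull from the OS, not
anything cached in the process, that keeps two clones from generating
the same key.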

And FYI: on systems not backed by a hardware RNG, /dev/random is
extremely slow. At 1-2 bytes/second, draining it is a DoS attack on its
own, with no other effort required.
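If you want to see that for yourself, a trivial harness (nothing
OpenSSL-specific; classic Linux /dev/random semantics assumed) makes it
easy to reproduce on an entropy-starved box:

    /* Time a single blocking read from /dev/random. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <time.h>
    #include <unistd.h>

    int main(void)
    {
        unsigned char buf[32];
        int fd = open("/dev/random", O_RDONLY);

        if (fd < 0)
            return 1;
        time_t start = time(NULL);
        ssize_t got = read(fd, buf, sizeof(buf)); /* may block for minutes */
        printf("read %zd bytes in %ld seconds\n",
               got, (long)(time(NULL) - start));
        close(fd);
        return 0;
    }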

This isn't solely a matter of good software design. And yes, I know:
hard problem. If it weren't a hard problem, you probably wouldn't be
dealing with it now.


Peter




From:   Benjamin Kaduk via openssl-dev <openssl-dev at openssl.org>
To:     openssl-dev at openssl.org, Kurt Roeckx <kurt at roeckx.be>, John Denker 
<ssx at av8n.com>
Date:   28/06/2017 09:38
Subject:        Re: [openssl-dev] Work on a new RNG for OpenSSL
Sent by:        "openssl-dev" <openssl-dev-bounces at openssl.org>



On 06/27/2017 04:51 PM, Kurt Roeckx wrote:
> On Tue, Jun 27, 2017 at 11:56:04AM -0700, John Denker via openssl-dev
> wrote:
>> On 06/27/2017 11:50 AM, Benjamin Kaduk via openssl-dev wrote:
>>> Do you mean having openssl just pass through to
>>> getrandom()/read()-from-'/dev/random'/etc. or just using those to seed
>>> our own thing?
>>>
>>> The former seems simpler and preferable to me (perhaps modulo linux's
>>> broken idea about "running out of entropy")


>> That's a pretty big modulus.  As I wrote over on the crypto list:
>>
>> The xenial 16.04 LTS manpage for getrandom(2) says quite explicitly:
>>
>>     Unnecessarily reading large quantities of data will have a
>>     negative impact on other users of the /dev/random and /dev/urandom
>>     devices.
>>
>> And that's an understatement.  Whether unnecessary or not, reading
>> not-particularly-large quantities of data is tantamount to a
>> denial-of-service attack against /dev/random and against its
>> upstream sources of randomness.
>>
>> No later LTS is available.  Reference:
>>   http://manpages.ubuntu.com/manpages/xenial/man2/getrandom.2.html
>>
>> Recently there has been some progress on this, as reflected in
>> the zesty 17.04 manpage:
>>   http://manpages.ubuntu.com/manpages/zesty/man2/getrandom.2.html
>>
>> However, in the meantime openssl needs to run on the platforms that
>> are out there, which includes a very wide range of platforms.


> And I think it's actually because of changes in the Linux RNG that
> the manpage has been changed, but they did not document the
> different behavior of the kernel versions.
>
> In case it wasn't clear, I think we should use the OS-provided
> source as a seed. By default that should be the only source of
> randomness.



I think we can get away with using OS-provided randomness directly in many 
common cases.  /dev/urandom suffices once we know that the kernel RNG has 
been properly seeded.  On FreeBSD, /dev/urandom blocks until the kernel 
RNG is seeded; on other systems maybe we have to make one read from 
/dev/random to get the blocking behavior we want before switching to 
/dev/urandom for bulk reads.
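A rough sketch of that strategy (assuming classic Linux semantics for
the two devices; retry/EINTR handling omitted): block once on
/dev/random so we know the kernel RNG has been seeded, then take all
bulk output from /dev/urandom.

    #include <fcntl.h>
    #include <unistd.h>

    /* Block once until the kernel RNG has (by this heuristic) been seeded. */
    static int wait_for_kernel_seed(void)
    {
        unsigned char byte;
        int fd = open("/dev/random", O_RDONLY);

        if (fd < 0)
            return 0;
        ssize_t got = read(fd, &byte, 1);  /* blocks until pool is ready */
        close(fd);
        return got == 1;
    }

    /* After the one-time check above, all bulk reads use /dev/urandom. */
    static ssize_t get_bulk_random(unsigned char *buf, size_t len)
    {
        ssize_t got;
        int fd = open("/dev/urandom", O_RDONLY);

        if (fd < 0)
            return -1;
        got = read(fd, buf, len);          /* never blocks */
        close(fd);
        return got;
    }

On FreeBSD the first function is unnecessary, since /dev/urandom itself
blocks until the kernel RNG is seeded.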

-Ben



