[openssl-project] Entropy seeding the DRBG
Kurt Roeckx
kurt at roeckx.be
Mon Apr 30 13:10:01 UTC 2018
On Mon, Apr 30, 2018 at 02:42:53PM +0200, Richard Levitte wrote:
> In message <20180424172439.GA8068 at roeckx.be> on Tue, 24 Apr 2018 19:24:40 +0200, Kurt Roeckx <kurt at roeckx.be> said:
>
> kurt> On Tue, Apr 24, 2018 at 07:20:42AM +0200, Richard Levitte wrote:
> kurt> > Like I think I mentioned a few days ago, I'm currently on a conference. I'll take this up in more depth later this week.
> kurt> >
> kurt> > I have a question, though... Kurt said at some point that all that was needed on the VMS side was to collect data, the rest can be done elsewhere (thankfully). However, I don't really understand what the collected data is supposed to be. Just the same stream of bytes that I would feed the entropy acquisition, or something else? Is the time delta between samples a factor in this?
> kurt>
> kurt> The API supports getting data that has 1 bit of entropy per 128 bits
> kurt> received (DRBG_MINMAX_FACTOR). If it's worse than that, you might
> kurt> have to write your own extract method.
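(Just to put a number on that 1-in-128 figure: a quick back-of-the-envelope
calculation, where the 256-bit target is only an example and not something
the API guarantees or requires.)

/* Rough arithmetic only: how much raw input is needed to collect a
 * 256-bit entropy target when the source is credited at 1 bit of
 * entropy per 128 bits of data (the ratio mentioned above). */
#include <stdio.h>

int main(void)
{
    const unsigned target_entropy_bits = 256;   /* example: 256-bit strength */
    const unsigned bits_per_entropy_bit = 128;  /* credited density from above */

    unsigned long needed_bits  = (unsigned long)target_entropy_bits * bits_per_entropy_bit;
    unsigned long needed_bytes = needed_bits / 8;

    printf("%lu bits (%lu bytes) of raw input needed\n", needed_bits, needed_bytes);
    return 0;
}

So at that density you'd need to feed in about 4096 bytes of raw data per
reseed.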
>
> I might have to either way, won't I? A method I'm pondering is to
> pass all the data gathered (700-something bytes) through sha512 and
> add the result to the pool. I have no idea what that says about the
> entropy of the original data, which is somewhere between 0.1 and
> 0.2 entropy bits per data bit according to the 3rd order entropy
> calculation that I replicated from the Linux /dev/urandom driver.
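To make the "3rd order" part concrete, here is roughly what that estimator
looks like. This is a simplified sketch of the idea from the Linux driver's
add_timer_randomness(), not the actual kernel code; the names and the
11-bit per-sample cap are illustrative.

/* Sketch of a Linux-style third-order delta estimator: credit at most a
 * few bits per sample, based on the smallest of the first, second and
 * third differences between successive samples. */
#include <stdio.h>
#include <stdlib.h>

struct delta_state {
    long last, last_delta, last_delta2;
};

/* Return an entropy credit (in bits) for one new sample. */
static int credit_bits(struct delta_state *st, long sample)
{
    long delta  = sample - st->last;
    long delta2 = delta  - st->last_delta;
    long delta3 = delta2 - st->last_delta2;

    st->last        = sample;
    st->last_delta  = delta;
    st->last_delta2 = delta2;

    delta  = labs(delta);
    delta2 = labs(delta2);
    delta3 = labs(delta3);

    if (delta > delta2)
        delta = delta2;
    if (delta > delta3)
        delta = delta3;

    /* Count the bits of the smallest difference, capped at 11 bits,
     * mirroring the kernel's conservative per-sample limit. */
    int bits = 0;
    while (delta > 1 && bits < 11) {
        delta >>= 1;
        bits++;
    }
    return bits;
}

int main(void)
{
    struct delta_state st = {0, 0, 0};
    long samples[] = {1000, 1007, 1019, 1042, 1100};

    for (size_t i = 0; i < sizeof(samples) / sizeof(samples[0]); i++)
        printf("sample %ld -> credit %d bits\n", samples[i],
               credit_bits(&st, samples[i]));
    return 0;
}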
>
> kurt> A stream of bytes is just fine.
> kurt>
> kurt> I think the time delta will really depend on your source. If it
> kurt> really changes all the time, it doesn't matter much how fast
> kurt> you sample it. But I think some (most?) of the variables don't
> kurt> change that often.
>
> It doesn't change *all* the time, but with a 1-10 second sleep between
> data gatherings, there's always *something* that has changed enough
> to give a 3rd order diff from the previous sampling that's > 0.
>
> So what I've done for now is to make two files, one that's the raw
> data, repeatedly gathered every 1-10 seconds until I got about 1 MiB
> of data, the other being a concatenation of sha512 calculations of
> those same (*) data until I filled that file up to 1 MiB. I suspect
> that the latter isn't quite valid, considering Paul said something
> about no transformation whatsoever, but I thought it would be worth a
> try.
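For reference, producing such a file of concatenated digests only takes a
few lines of libcrypto. A rough sketch; gather_sample() is just a
placeholder for the VMS collection code, and the 768-byte buffer and loop
count are arbitrary:

/* Sketch: append the SHA-512 of each raw sample to an output file. */
#include <stdio.h>
#include <string.h>
#include <openssl/sha.h>

/* Placeholder: fill buf with up to len bytes of gathered system data,
 * return the number of bytes actually gathered. */
static size_t gather_sample(unsigned char *buf, size_t len)
{
    memset(buf, 0, len);        /* real code would collect system data here */
    return len;
}

int main(void)
{
    unsigned char raw[768], md[SHA512_DIGEST_LENGTH];
    FILE *out = fopen("hashed.bin", "wb");

    if (out == NULL)
        return 1;

    /* 16384 * 64-byte digests == 1 MiB of hashed output. */
    for (int i = 0; i < 16384; i++) {
        size_t n = gather_sample(raw, sizeof(raw));

        SHA512(raw, n, md);
        fwrite(md, 1, sizeof(md), out);
        /* a real collector would sleep 1-10 seconds between samples */
    }
    fclose(out);
    return 0;
}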
The comment about not hashing the data applies when you want to use the
tool for entropy estimation. Hashing will not increase the entropy, but
it will make the estimate totally wrong.

Passing the hashed data to the DRBG as entropy input is fine if you
already know how much entropy it contains.
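For illustration, with the long-standing RAND_add() interface the caller
states the entropy estimate explicitly (in bytes), so hashed data is fine
as long as that estimate reflects the entropy of the original samples,
not the size of the digest. A minimal sketch, taking the 0.1 bits per
input bit figure from above as the assumed density:

/* Sketch: feed a SHA-512 digest of gathered data into OpenSSL's pool,
 * crediting only the entropy believed to be in the raw input. */
#include <stddef.h>
#include <openssl/sha.h>
#include <openssl/rand.h>

void seed_from_sample(const unsigned char *raw, size_t rawlen)
{
    unsigned char md[SHA512_DIGEST_LENGTH];
    /* ~0.1 bits of entropy per input bit == 0.1 bytes of entropy per
     * input byte (assumed density, see the estimate above). */
    double entropy_bytes = rawlen * 0.1;

    /* Never credit more entropy than the digest can carry. */
    if (entropy_bytes > (double)sizeof(md))
        entropy_bytes = (double)sizeof(md);

    SHA512(raw, rawlen, md);
    RAND_add(md, sizeof(md), entropy_bytes);
}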
Kurt