[openssl-users] removing compression?

Jakob Bohm jb-openssl at wisemo.com
Tue Apr 7 08:30:32 UTC 2015

On 05/04/2015 02:06, Salz, Rich wrote:
>> by randomly interspersing flush commands into the data stream (description
>> and example implementation https://github.com/wnyc/breach_buster)?
>> It's not perfect but for some use cases better than having no compression at
>> all.
> Flushing the stream seems like an application-level thing to do, and not something openssl generally does.
> It might be better than having no compression at all, the question is do we need compression in openssl at all? :)
"Flushing the zlib stream" is a call to the zlib
library, which causes the insertion of extra bits
in the compressed stream.  It can only be done by
the layer that actually calls zlib, in this case
OpenSSL.   This is especially true, when (as a
critical aspect of this side channel mitigation)
the other parts of the SSL stream (record splitting,
TCP buffering) is intentionally NOT flushed.
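To make the mechanism concrete, here is a minimal
sketch in Python (whose zlib module wraps the same
library): each Z_SYNC_FLUSH forces pending output
and an empty stored block into the compressed
stream, so the output grows but still decodes to
the identical plaintext.  The function name and
offsets are illustrative, not from any OpenSSL API.

```python
import zlib

def compress_with_flushes(data: bytes, flush_points: list[int]) -> bytes:
    """Compress data, issuing a Z_SYNC_FLUSH at each byte offset in
    flush_points.  Each flush makes zlib emit all pending output plus
    an empty stored block (the 00 00 FF FF marker), so the compressed
    stream grows but still decompresses to the same plaintext."""
    comp = zlib.compressobj()
    out, prev = [], 0
    for point in sorted(flush_points):
        out.append(comp.compress(data[prev:point]))
        out.append(comp.flush(zlib.Z_SYNC_FLUSH))
        prev = point
    out.append(comp.compress(data[prev:]))
    out.append(comp.flush(zlib.Z_FINISH))
    return b"".join(out)

plaintext = b"secret=hunter2&" * 50
plain = compress_with_flushes(plaintext, [])
flushed = compress_with_flushes(plaintext, [100, 300, 500])

# The flushed stream is longer (deliberate compression
# inefficiency) yet decompresses to the identical plaintext.
assert zlib.decompress(plain) == plaintext
assert zlib.decompress(flushed) == plaintext
assert len(flushed) > len(plain)
```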

Thus the point is that if the code that directly
calls zlib (in this case OpenSSL) randomly tells
zlib to flush pending data into completed
zlib-layer blocks, then bundles the compressed
stream into the same TLS records, the length and
compressibility of the plaintext is masked from
observers watching the TLS record sizes, thereby
mitigating or even blocking the CRIME attack
family.  To avoid merely forcing the attacker to
generate more transmissions for each chosen
plaintext portion, the randomization should be
deterministic for any given uncompressed
plaintext, yet highly unpredictable to anyone
without access to the OpenSSL internal state,
even across load-balanced processes.  So perhaps
the randomization should be keyed by a MAC of the
plaintext, which is in turn keyed by site-constant
values, such as (for servers) the complete set of
loaded private keys and certificates, or (for
clients) various local secrets or a random value
persisted on first run and reused thereafter.
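The MAC-keyed scheme above could be sketched as
follows; the HMAC construction, the two-bytes-per-
offset derivation, and the placeholder site secret
are all my own illustrative assumptions, not a
proposal from OpenSSL.

```python
import hmac, hashlib, zlib

def keyed_flush_points(plaintext: bytes, site_secret: bytes,
                       max_points: int = 4) -> list[int]:
    """Derive flush offsets deterministically from an HMAC of the
    plaintext.  The same plaintext always yields the same offsets (so
    repeated probes of one string gain the attacker nothing), but
    without site_secret the offsets are unpredictable."""
    digest = hmac.new(site_secret, plaintext, hashlib.sha256).digest()
    n = len(plaintext)
    points = set()
    for i in range(max_points):
        # Take 2 MAC bytes per offset (hypothetical derivation).
        off = int.from_bytes(digest[2 * i:2 * i + 2], "big") % max(n, 1)
        if off:
            points.add(off)
    return sorted(points)

def compress_masked(plaintext: bytes, site_secret: bytes) -> bytes:
    comp = zlib.compressobj()
    out, prev = [], 0
    for p in keyed_flush_points(plaintext, site_secret):
        out.append(comp.compress(plaintext[prev:p]))
        out.append(comp.flush(zlib.Z_SYNC_FLUSH))
        prev = p
    out.append(comp.compress(plaintext[prev:]))
    out.append(comp.flush(zlib.Z_FINISH))
    return b"".join(out)

secret = b"derived-from-private-keys"  # placeholder site constant
msg = b"GET /?q=flag HTTP/1.1\r\nCookie: session=abc123\r\n" * 10
a = compress_masked(msg, secret)
b = compress_masked(msg, secret)
assert a == b                     # deterministic per plaintext
assert zlib.decompress(a) == msg  # still lossless
```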

An open issue is how to design the randomization
so it also stops attackers who try multiple
similar chosen-plaintext strings for each desired
probe of the secret plaintext, and then apply
statistical methods to filter out the masking.

Adding equivalent code in an HTTP library would
similarly mitigate/block the BREACH attack family.

At the protocol level, the definition of deflate
compression makes all the generated variant
streams equivalent representations of the same
uncompressed plaintext, so there is no
protocol-visible change, just deliberate
compression inefficiency.  Conceptually, this is
somewhat similar to the 1/N-1 record splitting
used to mitigate IV-chaining attacks in TLS 1.0
CBC encryption.

A completely different technique, not limited
to compressed streams, would be to randomly vary
the exact number of padding bytes within the
(typically 4-bit) range permitted by the protocol,
but this would be limited to CBC mode encryption,
not being available for stream and GCM encryptions.
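The 4-bit figure follows from the TLS CBC padding
rules: the record must end on a block boundary and
the pad-length byte must fit in 0..255, which for a
16-byte block leaves exactly 16 valid padding
lengths.  A small sketch (my own helper names, not
any OpenSSL API):

```python
import os

def valid_pad_lengths(payload_len: int, block: int = 16) -> list[int]:
    """All pad_length values permitted by TLS CBC: pad_length padding
    bytes plus one pad-length byte are appended, the total must be a
    multiple of the block size, and pad_length must fit in one byte."""
    minimal = (-(payload_len + 1)) % block
    return list(range(minimal, 256, block))

def random_padded_length(payload_len: int, block: int = 16) -> int:
    """Pick one of the valid paddings at random (illustrative only)."""
    choices = valid_pad_lengths(payload_len, block)
    pad = choices[os.urandom(1)[0] % len(choices)]
    return payload_len + pad + 1

lengths = valid_pad_lengths(100)
assert len(lengths) == 16  # 16 options = 4 bits of freedom per record
assert all((100 + p + 1) % 16 == 0 for p in lengths)
```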


Jakob Bohm, CIO, Partner, WiseMo A/S.  http://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded
