[openssl-project] The problem of (implicit) relinking and changed behaviour
levitte at openssl.org
Sat Apr 14 19:32:31 UTC 2018
First, a note: I don't want this discussion to be just about technical
details, but also about philosophy, and guidance for policy making in
the long run. My feeling is that we've been... well, a bit lax with
regards to library upgrade and program relinking (explicit or
implicit, that shouldn't really matter).
Some time ago, I engaged in the exercise to see how well the test
programs from the 1.1.0 branch would do if linked with the 1.1.1
libraries (i.e. simulating a shared library upgrade from 1.1.0 to
1.1.1). See https://github.com/openssl/openssl/issues/5661
The conclusion drawn from this exercise is that TLSv1.3 has introduced
a behaviour in libssl 1.1.1 that is incompatible with libssl 1.1.0.
Not in every function, so for example, running basic s_server or
s_client without any special options will work without issues, but
just the fact that some amount of 1.1.0 tests fail when faced with
libssl 1.1.1 tells me that there are some incompatibilities to deal with.
Of course, one might argue that one can assume that a program that
can't deal with certain details will tell libssl to stick with TLSv1.2
or older... but I'm unsure if such assumptions are realistic, and I'm
again looking at the 1.1.0 test failures. Obviously, *we* didn't work
along such assumptions.
So regarding assumptions, there's only one assumption that I'm ready
to make: a program that worked correctly with libssl 1.1.0 and uses
its functionality as advertised should work the same with libssl
1.1.1. Note that I'm not saying that this excludes new features
"under the hood", but in that case, those new features should work
transparently enough that a program doesn't need to be changed because
of them. Also, note again that I'm not talking about recompilation,
but the implicit relinking that is what happens when a shared library
is upgraded but keeps the same library version number (no "bump").
(mind you, explicit relinking would make no difference in this regard).
Does anyone disagree with that assumption?
So, how to deal with this?
1. There's the option of making the new release 1.2.0 instead of 1.1.1.
I think most of us aren't keen on this, but it has to be said.
2. Make TLSv1.2 the absolutely maximum TLS version available for
programs linked with libssl 1.1.0. This is what's done in this PR:
This makes sense insofar as it's safe: it works within the known
parameters for the library these programs were built for.
It also makes sense if we view TLSv1.3 as new functionality, and
new functionality is usually only available to those who
explicitly build their programs for the new library version.
TLSv1.3 is unusual in this sense because it's at least in great
part "under the hood", just not 100% transparently so.
3. .... I dunno, please share ideas if you have them.
Side discussion: Some of the failing 1.1.0 tests show that we've
made some changes in 1.1.1 that we might not have thought would matter:
a. 1.1.0's test/recipes/70-test_sslextension.t has a couple of tests
that are meant to fail (i.e. if the individual tests fail, the
recipe is successful). When run against 1.1.1 libraries, the
recipe fails, i.e. the injection of double hellos didn't get the
communication to fail, or so it seems...
b. 1.1.0's test/recipes/80-test_ssl_new.t fails in the second test
(protocol version checks) because it expects an InternalError
alert, but gets ClientFail instead. So the question here is, what
if some program actually pays attention to them? ... and it also
raises the question of whether the alert type change was a bug fix,
and in that case, why didn't it propagate to 1.1.0? Should it?
Richard Levitte levitte at openssl.org
OpenSSL Project http://www.openssl.org/~levitte/