[openssl-dev] [openssl.org #4094] Nonsensical pointer comparison in PACKET_buf_init

Alexander Cherepanov via RT rt at openssl.org
Sat Oct 17 02:41:05 UTC 2015


[Sorry, sent an unfinished variant earlier.]

On 2015-10-17 01:46, Ben Laurie via RT wrote:
> On Fri, 16 Oct 2015 at 01:32 Matt Caswell via RT <rt at openssl.org> wrote:
>> On 15/10/15 20:53, Alexander Cherepanov via RT wrote:
>>> On 2015-10-15 15:41, Matt Caswell via RT wrote:
>>>> The purpose of the sanity check is not then for security, but to guard
>>>> against programmer error. For a correctly functioning program this test
>>>> should never fail. For an incorrectly functioning program it may do. It
>>>> is not guaranteed to fail because the test could be compiled away but,
>>>> most likely, it will. We can have some degree of confidence that the
>>>> test works and does not get compiled away in most instances because, as
>>>> you point out, an explicit check for it appears in packettest.c and, to
>>>> date, we have had no reported failures.
>>>
>>> What was not entirely clear from the original bug report is that, while
>>> the check is not compiled away, it's compiled into something completely
>>> different from what is written in the source. Specifically, the check
>>> "buf + len < buf" is optimized into "len >> 63" on 64-bit platform, i.e.
>>> "(ssize_t)len < 0" or "len > SIZE_MAX / 2". This is not a check for
>>> overflow at all, it doesn't even depend on the value of "buf".
>>>
>>> If this is what was intended then it's better to write it explicitly. If
>>> this is not what was intended then some other approach is required.
>>
>> I'd say that is an instance of the compiler knowing better than us how
>> big |len| would have to be in order to trigger an overflow. Those rules
>> are going to be platform specific so we should not attempt to second
>> guess them, but instead let the optimiser do its job.

Matt, I'm confused. In your previous email you yourself (correctly) 
explained why this check does not guard against the pointer overflowing.

AIUI this check is not some clever trick; it's just the ordinary 
simplification of "a + b < a" into "b < 0" by subtracting the common 
term from both sides (which is correct only if there is no overflow), 
with the additional twist that the unsigned integer is treated as 
signed. (IMHO this is a bug in compilers and I've just reported it in 
gcc -- https://gcc.gnu.org/bugzilla/show_bug.cgi?id=67999. But it 
doesn't really matter for our discussion.)
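
For reference, a minimal standalone reproduction (the function name is 
mine) of the pattern under discussion; per the above, a 64-bit gcc 
optimizes the comparison into, in effect, "len >> 63":

    #include <stddef.h>

    /* "buf + len" is undefined behaviour whenever it overflows, which
       is precisely what lets the compiler subtract buf from both sides
       and test only the sign bit of len. */
    int sanity_check(const unsigned char *buf, size_t len)
    {
        return buf + len < buf;
    }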

On 2015-10-17 01:46, Ben Laurie via RT wrote:
> If it is, then the compiler is wrong, surely? e.g. if buf is 0xfff...fff,
> and len is 1, you get an overflow, which the optimised version does not
> catch.

Right.
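
To spell that out (a sketch assuming a 64-bit platform, i.e. 64-bit 
size_t and pointers), here is Ben's example traced in uintptr_t 
arithmetic, where wraparound is well defined:

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        uintptr_t buf = UINTPTR_MAX; /* Ben's 0xfff...fff */
        size_t len = 1;

        /* The sum wraps to 0, so a real overflow check should fire... */
        printf("buf + len = %ju\n", (uintmax_t)(buf + len));

        /* ...but the optimized check only inspects the top bit of len
           and reports no overflow. */
        printf("len >> 63 = %ju\n", (uintmax_t)(len >> 63));
        return 0;
    }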

> What I'm not understanding from this thread is what the argument is against
> avoiding undefined behaviour?

I guess the problem is that it's not entirely clear what this check is 
for at all. If it's there only to catch "negative" values, i.e. values 
with the top bit set, it's easy to fix -- replace it with 
"len > SIZE_MAX / 2", "len >> (sizeof len * 8 - 1)" or something 
similar, as sketched below.
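
A minimal sketch of such a replacement (my own wording, not a committed 
fix; the shift variant assumes 8-bit bytes):

    #include <stdint.h>

    /* Reject lengths with the top bit set -- which is all the
       optimized code was testing anyway. */
    static int len_ok(size_t len)
    {
        return len <= SIZE_MAX / 2;
        /* equivalently: return (len >> (sizeof len * 8 - 1)) == 0; */
    }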

If the check is not needed at all, it's easy to fix too :-)

But if the intention was to specifically check for pointer overflow, 
everything is a bit more complicated. You cannot check for a pointer 
overflow directly; there is no such notion in the C standards. Perhaps 
it's possible with casts to uintptr_t, but it's kinda ugly -- see the 
sketch below.
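
For illustration, a hedged sketch of the uintptr_t variant (mine, and 
strictly speaking not portable, since pointer-to-integer conversion is 
implementation-defined, but it avoids the undefined pointer arithmetic 
that the optimizer exploits):

    #include <stdint.h>

    /* On common flat-address-space platforms this checks what
       "buf + len < buf" was presumably meant to check: that the end of
       the buffer does not wrap around the address space. */
    static int buf_fits(const unsigned char *buf, size_t len)
    {
        return len <= UINTPTR_MAX - (uintptr_t)buf;
    }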

-- 
Alexander Cherepanov
