Migrating from 1.0.2g to 1.1.1d

Floodeenjr, Thomas thomas_floodeenjr at mentor.com
Thu Feb 6 21:38:27 UTC 2020


It looks like I need to call EVP_EncodeInit() after EVP_ENCODE_CTX_new():

       m_evpCtx = EVP_ENCODE_CTX_new();
       EVP_EncodeInit(m_evpCtx);

From: openssl-users <openssl-users-bounces at openssl.org> On Behalf Of Floodeenjr, Thomas
Sent: Thursday, February 6, 2020 2:22 PM
To: openssl-users at openssl.org
Subject: RE: Migrating from 1.0.2g to 1.1.1d

With the old init syntax in 1.0.2, EVP_EncodeInit(&m_evpCtx);, m_evpCtx->length is initialized to 48.

With the new syntax in 1.1.1, m_evpCtx = EVP_ENCODE_CTX_new();, m_evpCtx->length is initialized to 0.

I believe this causes the while loop to spin (inl is never reduced when ctx->length is 0) until total reaches INT_MAX, thus overrunning my buffer.

Why does EVP_ENCODE_CTX_new() initialize to '0'? How do I fix this problem?

Thanks,
-Tom


From: openssl-users <openssl-users-bounces at openssl.org<mailto:openssl-users-bounces at openssl.org>> On Behalf Of Floodeenjr, Thomas
Sent: Thursday, February 6, 2020 11:25 AM
To: openssl-users at openssl.org<mailto:openssl-users at openssl.org>
Subject: Migrating from 1.0.2g to 1.1.1d

Hello,

We are in the process of migrating from 1.0.2g to 1.1.1d. We adjusted to the changes, we think, and everything compiles. Many things also execute correctly.

We are currently seeing a crash in EVP_EncodeUpdate() after we process most of our data. (last line of the while loop, line 202, *out = '\0';)

    while (inl >= ctx->length && total <= INT_MAX) {
        j = evp_encodeblock_int(ctx, out, in, ctx->length);
        in += ctx->length;
        inl -= ctx->length;
        out += j;
        total += j;
        if ((ctx->flags & EVP_ENCODE_CTX_NO_NEWLINES) == 0) {
            *(out++) = '\n';
            total++;
        }
        *out = '\0';
    }

>             ModuleName.dll!EVP_EncodeUpdate(evp_Encode_Ctx_st * ctx, unsigned char * out, int * outl, const unsigned char * in, int inl) Line 202              C

We call the function like this:
EVP_EncodeUpdate(m_evpCtx, &vTmpOut[0], &nOutSize, &_vInData[0], (int) nInSize);

EVP_ENCODE_CTX  *m_evpCtx;
std::vector<unsigned char> vTmpOut;
int nOutSize;
std::vector<unsigned char> & _vInData;
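
Independent of the init issue, vTmpOut must be sized for the encoded output before the call. A conservative bound, as a sketch (the constants reflect OpenSSL's granularity of 48 input bytes per 64-character output line plus a newline; the helper name is made up for illustration):

```cpp
#include <cstddef>

// Upper bound on the bytes EVP_EncodeUpdate()/EVP_EncodeFinal() may write
// for nIn input bytes: each full 48-byte chunk becomes 64 base64 chars
// plus '\n'; one extra row covers the final partial chunk and the NUL.
std::size_t b64EncodedBound(std::size_t nIn) {
    return ((nIn + 47) / 48 + 1) * 65 + 1;
}
```

Something like vTmpOut.resize(b64EncodedBound(nInSize)); before the EVP_EncodeUpdate() call would rule out a too-small buffer as the cause of the crash.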

I know that EVP_EncodeUpdate() is vastly different between 1.0.2 and 1.1.1. Is there a problem with me calling the function this way? It has worked for many years using 1.0.1.

Any insight is appreciated.

Thanks,
-Tom
