[openssl-users] A script for hybrid encryption with openssl

Nick oinksocket at letterboxes.org
Tue Dec 18 11:18:01 UTC 2018

On 17/12/2018 22:02, Jakob Bohm via openssl-users wrote:
> A simpler way is to realize that the formats used by SMIME/CMS (specifically
> the PKCS#7 formats) allow almost unlimited file size, and any 2GiB limit is
> probably an artifact of either the openssl command line tool or some of the
> underlying OpenSSL libraries.

Yes. I started using openssl's smime implementation, then backed out when I
realised there were indeed limits - apparently in the underlying libraries.

On decrypting I got the same kind of errors described in this bug report thread
(and elsewhere if you search, but this is the most recent discussion I could find).

"Attempting to decrypt/decode a large smime encoded file created with openssl
fails regardless of the amount of OS memory available".

The key points are:

- streaming smime *encryption* has been implemented, but
- smime *decryption* is done entirely in memory; consequently you can't decrypt
anything over about 1.5G
- possibly this is related to the BUF_MEM structure's dependency on the size of
an int

There's an RT ticket, but I could not log in to read it.  It appears to have
been migrated to GitHub:


It's closed - I infer as "won't fix" (yet?) - and this is still an issue, as my
experience suggests, at least in the versions distributed for the systems I will
be using.
I was using openssl 1.0.2g-1ubuntu4.14 (Xenial) and I've verified it with
openssl 1.1.0g-2ubuntu4.3 (Bionic, the latest LTS release for Ubuntu):

    $ openssl version -a
    OpenSSL 1.1.0g  2 Nov 2017
    built on: reproducible build, date unspecified
    platform: debian-amd64
    -DPOLY1305_ASM -DOPENSSLDIR="\"/usr/lib/ssl\""
    OPENSSLDIR: "/usr/lib/ssl"
    ENGINESDIR: "/usr/lib/x86_64-linux-gnu/engines-1.1"

    $ dd if=/dev/zero of=sample.txt count=2M bs=1024
    $ openssl req -x509 -nodes -newkey rsa:2048 \
        -keyout mysqldump-secure.priv.pem -out mysqldump-secure.pub.pem
    $ openssl smime -encrypt -binary -text -aes256 -in sample.txt \
        -out sample.txt.enc -outform DER -stream mysqldump-secure.pub.pem
    $ openssl smime -decrypt -binary -inkey mysqldump-secure.priv.pem \
        -inform DER -in sample.txt.enc -out sample.txt.restored

    Error reading S/MIME message
    139742630175168:error:07069041:memory buffer
    routines:BUF_MEM_grow_clean:malloc failure:../crypto/buffer/buffer.c:138:
    139742630175168:error:0D06B041:asn1 encoding
    routines:asn1_d2i_read_bio:malloc failure:../crypto/asn1/a_d2i_fp.c:191
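For comparison, the same pipeline completes fine on a small input, which points
at the size limit rather than the command syntax. A minimal sketch - the file
and key names are throwaway placeholders of my own, and I've added -subj so
req runs non-interactively and dropped -text since the payload is binary:

```shell
set -e
# Throwaway self-signed keypair (placeholder names, not the files above)
openssl req -x509 -nodes -newkey rsa:2048 -subj "/CN=smoke-test" \
    -keyout small.priv.pem -out small.pub.pem 2>/dev/null
# 1 KiB payload instead of 2 GiB
dd if=/dev/zero of=small.txt count=1 bs=1024 2>/dev/null
# Same encrypt/decrypt flags as the failing run, at a harmless size
openssl smime -encrypt -binary -aes256 -in small.txt \
    -out small.txt.enc -outform DER -stream small.pub.pem
openssl smime -decrypt -binary -inkey small.priv.pem \
    -inform DER -in small.txt.enc -out small.txt.restored
cmp small.txt small.txt.restored
```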

> Anyway, setting up an alternative data format might be suitable if combined
> with other functionality requiring chunking, such as recovery from
> lost/corrupted data "blocks" (where each block is much much larger than
> a 1K "disk block"). 

I should add that I don't really care about the format, or even the use of
openssl - just the ability to tackle large files with the benefits of public key
encryption, in a self-contained way without needing fiddly work deploying the
keys (as GnuPG seems to require for its keyring, judging from my experience
deploying Backup-Ninja / Duplicity using Ansible.)  So other solutions, if tried
and tested, might work for me.
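For what it's worth, one self-contained workaround along those lines is to do
the hybrid step by hand: stream the bulk data through openssl enc with a
random session key, and wrap only that small key with RSA. Both directions
then stream, so the in-memory decrypt limit never comes into play. This is
only a sketch - the file names, key sizes, and the AES-256-CBC / pkeyutl
choices are mine, not a tested production script:

```shell
set -e

# One-time: RSA keypair (illustrative names)
openssl genrsa -out demo.priv.pem 2048 2>/dev/null
openssl rsa -in demo.priv.pem -pubout -out demo.pub.pem 2>/dev/null

# Stand-in payload; the same commands stream arbitrarily large files
dd if=/dev/zero of=payload.bin count=1 bs=1024 2>/dev/null

# Per-file random session key, never typed on the command line
openssl rand -out session.key 32

# Bulk data: symmetric, streaming, no size limit
openssl enc -aes-256-cbc -salt -pass file:session.key \
    -in payload.bin -out payload.bin.enc

# Session key: small enough for raw RSA public-key encryption
openssl pkeyutl -encrypt -pubin -inkey demo.pub.pem \
    -in session.key -out session.key.enc
rm session.key

# Decryption reverses the two steps; both stream as well
openssl pkeyutl -decrypt -inkey demo.priv.pem \
    -in session.key.enc -out session.key
openssl enc -d -aes-256-cbc -pass file:session.key \
    -in payload.bin.enc -out payload.bin.restored
```

Only session.key.enc and the bulk ciphertext need to travel, so deployment
stays as simple as shipping one public-key PEM to each host.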



