<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
</head>
<body text="#000000" bgcolor="#FFFFFF">
<div class="moz-cite-prefix">On 17/12/2018 22:02, Jakob Bohm via
openssl-users wrote:<br>
</div>
<blockquote type="cite"
cite="mid:1fa0f893-369c-33f4-baef-0b250a5260f0@wisemo.com">A
simpler way is to realize that the formats used by SMIME/CMS
(specifically
<br>
the PKCS#7 formats) allow almost unlimited file size, and any 2GiB
limit is
<br>
probably an artifact of either the openssl command line tool or
some of the
<br>
underlying OpenSSL libraries.
<br>
</blockquote>
<p><br>
</p>
<p>Yes. I started using openssl's smime implementation, then backed
out when I realised there were indeed limits - apparently in the
underlying libraries.</p>
<p>On decrypting I got the same kind of errors described in this bug
report thread (and elsewhere if you search, but this is the most
recent discussion I could find).</p>
<p>"Attempting to decrypt/decode a large smime encoded file created
with openssl fails regardless of the amount of OS memory
available".<br>
<a class="moz-txt-link-freetext" href="https://mta.openssl.org/pipermail/openssl-dev/2016-August/008237.html">https://mta.openssl.org/pipermail/openssl-dev/2016-August/008237.html</a></p>
<p>The key points are:</p>
- streaming smime *encryption* has been implemented, but<br>
- smime *decryption* is done in memory, consequently you can't
decrypt anything over 1.5G<br>
- possibly this is related to the BUF_MEM structure's dependency on
the size of an int<br>
<br>
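<p>For what it's worth, if an int-sized length really is the culprit,
the hard ceiling would be INT_MAX bytes - 2^31 - 1, just under 2 GiB -
which is at least consistent with failures somewhere in the 1.5-2 GB
range (my back-of-envelope reasoning, not taken from the OpenSSL
source):</p>

```shell
# If decryption buffers are capped at an int-sized length, the hard
# ceiling is INT_MAX bytes: 2^31 - 1, i.e. just under 2 GiB.
echo $(( (1 << 31) - 1 ))   # prints 2147483647
```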
<p>There's an RT ticket, but I could not log in to read it. It
appears to have been migrated to GitHub:</p>
<p><a class="moz-txt-link-freetext" href="https://github.com/openssl/openssl/issues/2515">https://github.com/openssl/openssl/issues/2515</a></p>
<p>It's closed - "won't fix", I infer (yet?) - but this is still an
issue, as my experience suggests, at least in the versions
distributed for the systems I will be using.<br>
</p>
<p><br>
</p>
<p>I was using openssl 1.0.2g-1ubuntu4.14 (Xenial) and I've verified
it with openssl 1.1.0g-2ubuntu4.3 (Bionic, the latest LTS release
from Ubuntu):<br>
</p>
<blockquote><tt>$ openssl version -a<br>
OpenSSL 1.1.0g 2 Nov 2017<br>
built on: reproducible build, date unspecified<br>
platform: debian-amd64<br>
compiler: gcc -DDSO_DLFCN -DHAVE_DLFCN_H -DNDEBUG
-DOPENSSL_THREADS -DOPENSSL_NO_STATIC_ENGINE -DOPENSSL_PIC
-DOPENSSL_IA32_SSE2 -DOPENSSL_BN_ASM_MONT -DOPENSSL_BN_ASM_MONT5
-DOPENSSL_BN_ASM_GF2m -DSHA1_ASM -DSHA256_ASM -DSHA512_ASM
-DRC4_ASM -DMD5_ASM -DAES_ASM -DVPAES_ASM -DBSAES_ASM
-DGHASH_ASM -DECP_NISTZ256_ASM -DPADLOCK_ASM -DPOLY1305_ASM
-DOPENSSLDIR="\"/usr/lib/ssl\""
-DENGINESDIR="\"/usr/lib/x86_64-linux-gnu/engines-1.1\"" <br>
OPENSSLDIR: "/usr/lib/ssl"<br>
ENGINESDIR: "/usr/lib/x86_64-linux-gnu/engines-1.1"<br>
<br>
$ dd if=/dev/zero of=sample.txt count=2M bs=1024
</tt><br>
<tt>$ openssl req -x509 -nodes -newkey rsa:2048 -keyout
mysqldump-secure.priv.pem -out mysqldump-secure.pub.pem</tt><br>
<tt>$ openssl smime -encrypt -binary -text -aes256 -in sample.txt
-out sample.txt.enc -outform DER -stream
mysqldump-secure.pub.pem</tt><br>
<tt>$ openssl smime -decrypt -binary -inkey
mysqldump-secure.priv.pem -inform DER -in sample.txt.enc -out
sample.txt.restored<br>
<br>
</tt><tt>
Error reading S/MIME message</tt><br>
<tt>
139742630175168:error:07069041:memory buffer
routines:BUF_MEM_grow_clean:malloc
failure:../crypto/buffer/buffer.c:138:</tt><br>
<tt>
139742630175168:error:0D06B041:asn1 encoding
routines:asn1_d2i_read_bio:malloc
failure:../crypto/asn1/a_d2i_fp.c:191<br>
</tt></blockquote>
<p><br>
</p>
<p><br>
</p>
<blockquote type="cite"
cite="mid:1fa0f893-369c-33f4-baef-0b250a5260f0@wisemo.com">Anyway,
setting up an alternative data format might be suitable if
combined
<br>
with other functionality requiring chunking, such as recovery from
<br>
lost/corrupted data "blocks" (where each block is much much larger
than
<br>
a 1K "disk block").
</blockquote>
<p><br>
</p>
<p>I should add that I don't really care about the format, or even
the use of openssl - just the ability to tackle large files with
the benefits of public key encryption, in a self-contained way
without needing fiddly work deploying the keys (as GnuPG seems to
require for its keyring, judging from my experience deploying
Backup-Ninja / Duplicity using Ansible). So other solutions, if
tried and tested, might work for me.<br>
</p>
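<p>In the meantime, a crude version of the chunking idea can already be
done at the shell level: split the plaintext into blocks well under the
decryption limit and encrypt each block separately, so decryption never
has to hold the whole file in memory. An untested sketch (it reuses the
key and file names from my transcript above, and a tiny 1K chunk size
just for demonstration - in real use it would be something like 512M):</p>

```shell
# Recreate a small-scale version of the transcript, but chunked.
dd if=/dev/zero of=sample.txt count=4 bs=1024 2>/dev/null
openssl req -x509 -nodes -newkey rsa:2048 -subj /CN=test \
  -keyout mysqldump-secure.priv.pem -out mysqldump-secure.pub.pem 2>/dev/null

# Split the plaintext into fixed-size blocks (512M in real use,
# 1K here to keep the demo quick), then encrypt each block separately.
split -b 1K sample.txt chunk.
for c in chunk.??; do
  openssl smime -encrypt -binary -aes256 -in "$c" \
    -out "$c.enc" -outform DER -stream mysqldump-secure.pub.pem
done

# Restore: decrypt each block and concatenate in order
# (split's aa, ab, ... suffixes sort correctly).
for c in chunk.??.enc; do
  openssl smime -decrypt -binary -inkey mysqldump-secure.priv.pem \
    -inform DER -in "$c" -out "${c%.enc}.dec"
done
cat chunk.??.dec > sample.txt.restored
cmp sample.txt sample.txt.restored && echo "round trip OK"
```

<p>Each block is an independent CMS/PKCS#7 object, so the per-object
in-memory decryption limit applies only to one block at a time; the
obvious costs are per-block key-wrapping overhead and having to keep
the blocks in order.</p>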
<p>Cheers,</p>
<p><br>
</p>
<p>Nick<br>
</p>
</body>
</html>