From a at juaristi.eus Tue Aug 4 08:25:44 2020
From: a at juaristi.eus (Ander Juaristi)
Date: Tue, 04 Aug 2020 10:25:44 +0200
Subject: Callback functions higher up in the stack than X509_STORE_set_verify_cb?
Message-ID:

Hi list,

I'm implementing OCSP stapling for wget2 with OpenSSL, and I was wondering if there's a better way.

The way I'm doing this currently is by letting the handshake complete normally and checking the received OCSP responses (stapled or not) at the end. Then, if OCSP does not verify, I close the connection. I.e. something like the following:

    do {
        retval = SSL_connect(ssl);
        /* */
    } while (error == SSL_ERROR_WANT_READ || error == SSL_ERROR_WANT_WRITE);

    if (retval <= 0) {
        /* Error - tell the user and exit */
        /* */
        goto bail;
    }

    /* Check the OCSP response here */
    ocsp_stap_length = SSL_get_tlsext_status_ocsp_resp(ssl, &ocsp_resp);

    certs = SSL_get_peer_cert_chain(ssl);

    if (!check_ocsp(ssl, certs, ocsp_resp)) {
        /* Error - OCSP cannot be verified */
        goto bail;
    }

The specs (RFC 6960 and RFC 6066) are not clear on how a non-conforming OCSP response should be handled: by sending an alert and aborting the handshake, or by closing the connection after the handshake has successfully completed. Please correct me if I'm wrong here. I'm currently doing the second, purely because I don't know how to do the first, but I believe the first would be cleaner.

Previously, I would register a callback function with X509_STORE_set_verify_cb() and perform the OCSP checking there. This worked for traditional OCSP (RFC 6960). However, it will not work for stapled OCSP, because that callback function is called after the certificates are read, but before the stapled OCSP response is read.

I was wondering if a hook point exists that would allow me to do this just before ChangeCipherSpec is sent by the client, as, at that point, all the information should already be available.

TL;DR I want to hook at a point just before SSL_connect() returns.
From openssl-users at dukhovni.org Tue Aug 4 15:24:31 2020
From: openssl-users at dukhovni.org (Viktor Dukhovni)
Date: Tue, 4 Aug 2020 11:24:31 -0400
Subject: Callback functions higher up in the stack than X509_STORE_set_verify_cb?
In-Reply-To:
References:
Message-ID: <20200804152431.GA40202@straasha.imrryr.org>

On Tue, Aug 04, 2020 at 10:25:44AM +0200, Ander Juaristi wrote:

> /* Check the OCSP response here */
> ocsp_stap_length = SSL_get_tlsext_status_ocsp_resp(ssl, &ocsp_resp);
>
> certs = SSL_get_peer_cert_chain(ssl);

Side comment: if you end up sticking with post-handshake validation, you probably want SSL_get0_verified_chain(3) rather than SSL_get_peer_cert_chain(3).

A better early hook into SSL cert chain verification is SSL_CTX_set_cert_verify_callback(3), which you can use to wrap X509_verify_cert(3) and do some post-processing after the verified chain is constructed. But this likely fires before the OCSP extension from the server is processed.

> I was wondering if a hook point exists that would allow me to do this
> just before ChangeCipherSpec is sent by the client,
> as, at that point, all the information should already be available.

You're looking for: SSL_CTX_set_tlsext_status_cb(3).

--
Viktor.

From mejaz at cyberia.net.sa Wed Aug 5 13:49:36 2020
From: mejaz at cyberia.net.sa (mejaz at cyberia.net.sa)
Date: Wed, 5 Aug 2020 16:49:36 +0300
Subject: openssl-3
Message-ID: <001101d66b2f$4273d250$c75b76f0$@cyberia.net.sa>

Hello,

I have successfully installed the OpenSSL 3.x version, but when I tried to check whether it was installed successfully or not, it gave the error below. Any assistance would be highly appreciated; thanks in advance.

[root at nc ~]# /usr/local/bin/openssl versioin -a
/usr/local/bin/openssl: error while loading shared libraries: libssl.so.3: cannot open shared object file: No such file or directory

I have a Red Hat 8 OS.

Regards
Ejaz

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From patrick.mooc at gmail.com Wed Aug 5 19:49:28 2020
From: patrick.mooc at gmail.com (Patrick Mooc)
Date: Wed, 5 Aug 2020 21:49:28 +0200
Subject: OpenSSL compliance with Linux distributions
Message-ID: <377c5ab6-3315-02c3-66ca-8d92764d547e@gmail.com>

Hello,

I'm using an old version of OpenSSL (0.9.8g) on an old Linux Debian distribution (Lenny).

Is it possible to upgrade the OpenSSL version without upgrading the Linux Debian distribution? If yes, up to which version of OpenSSL?

Are all versions of OpenSSL compliant with all Linux Debian distributions?

Thank you in advance for your answer.

Best Regards,

From aerowolf at gmail.com Wed Aug 5 20:10:10 2020
From: aerowolf at gmail.com (Kyle Hamilton)
Date: Wed, 5 Aug 2020 15:10:10 -0500
Subject: OpenSSL compliance with Linux distributions
In-Reply-To: <377c5ab6-3315-02c3-66ca-8d92764d547e@gmail.com>
References: <377c5ab6-3315-02c3-66ca-8d92764d547e@gmail.com>
Message-ID:

It is never recommended to upgrade your distribution's version of OpenSSL with one you compile yourself. Doing so will often break all software installed by the distribution that uses it.

If you need functionality from newer versions of OpenSSL, your options are to upgrade your OS version, or to install a local copy of OpenSSL and manually compile and link local copies of the applications that need the newer functionality.

(Newer versions of OpenSSL do not maintain the same Application Binary Interface (ABI), which means that binaries compiled against older versions will not correctly operate or dynamically link against newer libraries. Also, distributions such as Debian can modify the ABI in such a way that nothing distributed directly by openssl.org can be compiled to meet it without source code modification.)

-Kyle H

On Wed, Aug 5, 2020, 14:49 Patrick Mooc wrote:

> Hello,
>
> I'm using an old version of OpenSSL (0.9.8g) on an old Linux Debian
> distribution (Lenny).
>
> Is it possible to upgrade OpenSSL version without upgrading Linux Debian
> distribution ?
> If yes, up to which version of OpenSSL ?
>
> Are all versions of OpenSSL compliant with all Linux Debian distribution ?
>
> Thank you in advance for your answer.
>
> Best Regards,

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From skip at taygeta.com Wed Aug 5 20:19:04 2020
From: skip at taygeta.com (Skip Carter)
Date: Wed, 05 Aug 2020 13:19:04 -0700
Subject: OpenSSL compliance with Linux distributions
In-Reply-To: <377c5ab6-3315-02c3-66ca-8d92764d547e@gmail.com>
References: <377c5ab6-3315-02c3-66ca-8d92764d547e@gmail.com>
Message-ID: <1596658744.20854.63.camel@taygeta.com>

Patrick,

I am also supporting servers running very old Linux systems and I can tell you that YES you can upgrade from source. I have built openssl-1.1.1 from source on such systems with no problems.

On Wed, 2020-08-05 at 21:49 +0200, Patrick Mooc wrote:
> Hello,
>
> I'm using an old version of OpenSSL (0.9.8g) on an old Linux Debian
> distribution (Lenny).
>
> Is it possible to upgrade OpenSSL version without upgrading Linux
> Debian
> distribution ?
> If yes, up to which version of OpenSSL ?
>
> Are all versions of OpenSSL compliant with all Linux Debian
> distribution ?
>
>
> Thank you in advance for your answer.
>
> Best Regards,
>

--
Dr Everett (Skip) Carter 0xF29BF36844FB7922
skip at taygeta.com

Taygeta Scientific Inc
607 Charles Ave
Seaside CA 93955
831-641-0645 x103

-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 659 bytes
Desc: This is a digitally signed message part
URL:

From rajprudvi98 at gmail.com Wed Aug 5 20:21:35 2020
From: rajprudvi98 at gmail.com (prudvi raj)
Date: Thu, 6 Aug 2020 01:51:35 +0530
Subject: 'in_addr_t' in openssl 1.1.1g ??
Message-ID:

Hi there,

I got this error during compilation, in file b_addr.c:

    In function 'BIO_lookup_ex':
    /b_addr.c:748:9: error: unknown type name 'in_addr_t'

I see that "in_addr_t" is defined in "netinet/in.h" & "arpa/inet.h" in the toolchain (typedef uint32_t in_addr_t;). I have even tried to #include<> these files directly, but that doesn't seem to fix the error. Btw, these files are included already, but under conditional #if's. I am surprised that the error persists even after directly including the respective source file.

Here's the config options I used:

    ./Configure no-threads no-dso no-ct no-shared no-zlib no-asm no-engine no-bf no-aria no-blake2 no-camellia no-cast no-md2 no-md4 no-mdc2 no-ocsp no-rc2 no-rc5 no-hw-padlock no-idea no-srp gcc --with-rand-seed=none --cross-compile-prefix=/opt/toolchains/adtn-6/sysroots/x86_64-fslsdk-linux/usr/bin/ppce500v2-fsl-linux-gnuspe/powerpc-fsl-linux-gnuspe-

PS: same error without any cross compile prefix, using only gcc.

Thanks,
Prudvi.

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From patrick.mooc at gmail.com Wed Aug 5 20:28:26 2020
From: patrick.mooc at gmail.com (Patrick Mooc)
Date: Wed, 5 Aug 2020 22:28:26 +0200
Subject: OpenSSL compliance with Linux distributions
In-Reply-To:
References: <377c5ab6-3315-02c3-66ca-8d92764d547e@gmail.com>
Message-ID: <0a8278fd-15a2-766e-53b4-a3fa996664c5@gmail.com>

Thank you very much Kyle for your quick and clear answer.

The reason why I want to upgrade the OpenSSL version is that I encounter a problem with one frame exchanged between client and server.

This frame is the first packet sent from client to server (Client Hello Packet) and the protocol used for this packet is SSLv2. I don't understand why, because I force the use of TLSv1 (in the ssl.conf file as in the application software), but only for this first exchange packet, SSLv2 is used. All other packets are well using TLSv10 as configured.
I have also searched for a way to force the use of TLSv10 ciphers in the OpenSSL configuration and in the application software, but I didn't succeed in doing so.

That's why I had the idea of upgrading the OpenSSL version, to avoid the use of the SSLv2 protocol.

Thus, if you have any idea of how to solve my problem without upgrading the OpenSSL version or Linux distribution, it would be very nice.

Thank you in advance for your answer.

Best Regards,

On 05/08/2020 at 22:10, Kyle Hamilton wrote:
> It is never recommended to upgrade your distribution's version of
> OpenSSL with one you compile yourself. Doing so will often break all
> software installed by the distribution that uses it.
>
> If you need functionality from newer versions of OpenSSL, your options
> are to upgrade your OS version, or to install a local copy of OpenSSL
> and manually compile and link local copies of the applications that
> need the newer functionality.
>
> (Newer versions of OpenSSL do not maintain the same Application Binary
> Interface (ABI), which means that binaries compiled against older
> versions will not correctly operate or dynamically link against newer
> libraries. Also, distributions such as Debian can modify the ABI in
> such a way that nothing distributed directly by openssl.org
> can be compiled to meet it without source code
> modification.)
>
> -Kyle H
>
> On Wed, Aug 5, 2020, 14:49 Patrick Mooc wrote:
>
> Hello,
>
> I'm using an old version of OpenSSL (0.9.8g) on an old Linux Debian
> distribution (Lenny).
>
> Is it possible to upgrade OpenSSL version without upgrading Linux
> Debian
> distribution ?
> If yes, up to which version of OpenSSL ?
>
> Are all versions of OpenSSL compliant with all Linux Debian
> distribution ?
>
> Thank you in advance for your answer.
>
> Best Regards,

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From bkaduk at akamai.com Wed Aug 5 20:37:23 2020
From: bkaduk at akamai.com (Benjamin Kaduk)
Date: Wed, 5 Aug 2020 13:37:23 -0700
Subject: 'in_addr_t' in openssl 1.1.1g ??
In-Reply-To:
References:
Message-ID: <20200805203722.GX20623@akamai.com>

On Thu, Aug 06, 2020 at 01:51:35AM +0530, prudvi raj wrote:
> Hi there,
>
> I got this error during compilation, in file b_addr.c:
> In function 'BIO_lookup_ex':
> /b_addr.c:748:9: error: unknown type name 'in_addr_t'
>
> I see that "in_addr_t" is defined in "netinet/in.h" & "arpa/inet.h" in
> toolchain (typedef uint32_t in_addr_t;).
> I have even tried to #include<> these files directly but that doesn't seem
> to fix the error. Btw, these files are included already, but under
> conditional #if's.
>
> I am surprised why the error persists, even after directly including the
> respective source file ??
>
> Here's the config options I used:
> ./Configure no-threads no-dso no-ct no-shared no-zlib no-asm no-engine
> no-bf no-aria no-blake2 no-camellia no-cast no-md2 no-md4 no-mdc2 no-ocsp
> no-rc2 no-rc5 no-hw-padlock no-idea no-srp gcc --with-rand-seed=none
> --cross-compile-prefix=/opt/toolchains/adtn-6/sysroots/x86_64-fslsdk-linux/usr/bin/ppce500v2-fsl-linux-gnuspe/powerpc-fsl-linux-gnuspe-
>
> PS: same error without any cross compile prefix, using only gcc.

The `./configdata.pm -d` output might be helpful.

-Ben

From patrick.mooc at gmail.com Wed Aug 5 20:39:21 2020
From: patrick.mooc at gmail.com (Patrick Mooc)
Date: Wed, 5 Aug 2020 22:39:21 +0200
Subject: OpenSSL compliance with Linux distributions
In-Reply-To: <1596658744.20854.63.camel@taygeta.com>
References: <377c5ab6-3315-02c3-66ca-8d92764d547e@gmail.com> <1596658744.20854.63.camel@taygeta.com>
Message-ID: <32dd7046-980c-0e87-1e95-f1e742c900b9@gmail.com>

Dear Skip,

Thank you also very much for your quick answer.

OK, it could then be interesting to test an upgrade of OpenSSL on my system. My project is running on a Compact Flash card, so I think I can test the upgrade directly on a device.

Do you have any advice, or steps to follow, in order to limit risks as much as possible?

Thank you in advance.

Best Regards,

On 05/08/2020 at 22:19, Skip Carter wrote:
> Patrick,
>
> I am also supporting servers running very old Linux systems and I can
> tell you that YES you can upgrade from source. I have built
> openssl-1.1.1 from source on such systems with no problems.
>
> On Wed, 2020-08-05 at 21:49 +0200, Patrick Mooc wrote:
>> Hello,
>>
>> I'm using an old version of OpenSSL (0.9.8g) on an old Linux Debian
>> distribution (Lenny).
>>
>> Is it possible to upgrade OpenSSL version without upgrading Linux
>> Debian
>> distribution ?
>> If yes, up to which version of OpenSSL ?
>>
>> Are all versions of OpenSSL compliant with all Linux Debian
>> distribution ?
>>
>>
>> Thank you in advance for your answer.
>>
>> Best Regards,
>>

From bkaduk at akamai.com Wed Aug 5 20:46:16 2020
From: bkaduk at akamai.com (Benjamin Kaduk)
Date: Wed, 5 Aug 2020 13:46:16 -0700
Subject: OpenSSL compliance with Linux distributions
In-Reply-To: <0a8278fd-15a2-766e-53b4-a3fa996664c5@gmail.com>
References: <377c5ab6-3315-02c3-66ca-8d92764d547e@gmail.com> <0a8278fd-15a2-766e-53b4-a3fa996664c5@gmail.com>
Message-ID: <20200805204615.GY20623@akamai.com>

On Wed, Aug 05, 2020 at 10:28:26PM +0200, Patrick Mooc wrote:
> Thank you very much Kyle for your quick and clear answer.
>
> The reason why I want to upgrade OpenSSL version, is that I encounter a
> problem with 1 frame exchange between client and server.
>
> This frame is the first packet sent from client to server (Client Hello
> Packet) and the protocol used for this packet is SSLv2.
> I don't understand why, because I force the use of TLSv1 (in ssl.conf file
> as in application software), but only for this first exchange packet, SSLv2
> is used. All other packets are well using TLSv10 as configured.
>
> I have also searched for a way to force the use of TLSv10 ciphers in the
> OpenSSL configuration and in the application software, but I didn't succeed
> in doing so.
>
> That's why I had the idea of upgrading the OpenSSL version, to avoid the
> use of the SSLv2 protocol.
>
> Thus, if you have any idea of how to solve my problem without upgrading the
> OpenSSL version or Linux distribution, it would be very nice.

Using an "SSLv2-compatible" ClientHello is rather distinct from actually using the SSLv2 protocol; I believe that the former is what is happening for you. IIRC sending any TLS extension with the ClientHello suppresses the use of the v2-compatible format, so you might be able to do that. (I don't remember offhand which extensions are implemented in that old of an OpenSSL version, and whether they're enabled in the default build, though.)

-Ben

From rajprudvi98 at gmail.com Wed Aug 5 20:53:40 2020
From: rajprudvi98 at gmail.com (prudvi raj)
Date: Thu, 6 Aug 2020 02:23:40 +0530
Subject: 'in_addr_t' in openssl 1.1.1g ??
In-Reply-To: <20200805203722.GX20623@akamai.com>
References: <20200805203722.GX20623@akamai.com>
Message-ID:

Another thing: 'make && make all' is successful, but the same openssl files show this error when compiled as part of my project's compilation.

    PROJECT DIR << make project here compiles all files.
    |- ..folder 1.
    |- openssl
    |-----...

Btw, the project uses the same CC:
"/opt/toolchains/adtn-6/sysroots/x86_64-fslsdk-linux/usr/bin/ppce500v2-fsl-linux-gnuspe/powerpc-fsl-linux-gnuspe-gcc"

Hope this clears some things up.
$ ./configdata.pm -d Command line (with current working directory = .): /usr/bin/perl ./Configure no-threads no-dso no-ct no-shared no-zlib no-asm no-engine no-bf no-aria no-blake2 no-camellia no-cast no-md2 no-md4 no-mdc2 no-ocsp no-rc2 no-rc5 no-hw-padlock no-idea no-srp gcc --with-rand-seed=none --cross-compile-prefix=/opt/toolchains/adtn-6/sysroots/x86_64-fslsdk-linux/usr/bin/ppce500v2-fsl-linux-gnuspe/powerpc-fsl-linux-gnuspe- Perl information: /usr/bin/perl 5.10.1 for x86_64-linux-thread-multi Enabled features: async autoalginit autoerrinit autoload-config buildtest-c\+\+ capieng chacha cmac cms comp deprecated des dgram dh dsa dtls ec ec2m ecdh ecdsa err filenames gost hw(-.+)? makedepend multiblock nextprotoneg pinshared ocb poly1305 posix-io psk rc4 rdrand rfc3779 rmd160 scrypt seed siphash sm2 sm3 sm4 sock srtp sse2 ssl static-engine stdio tests tls ts ui-console whirlpool tls1 tls1-method tls1_1 tls1_1-method tls1_2 tls1_2-method tls1_3 dtls1 dtls1-method dtls1_2 dtls1_2-method Disabled features: afalgeng [cascade] OPENSSL_NO_AFALGENG aria [option] OPENSSL_NO_ARIA (skip crypto/aria) asan [default] OPENSSL_NO_ASAN asm [option] OPENSSL_NO_ASM bf [option] OPENSSL_NO_BF (skip crypto/bf) blake2 [option] OPENSSL_NO_BLAKE2 (skip crypto/blake2) camellia [option] OPENSSL_NO_CAMELLIA (skip crypto/camellia) cast [option] OPENSSL_NO_CAST (skip crypto/cast) crypto-mdebug [default] OPENSSL_NO_CRYPTO_MDEBUG crypto-mdebug-backtrace [default] OPENSSL_NO_CRYPTO_MDEBUG_BACKTRACE ct [option] OPENSSL_NO_CT (skip crypto/ct) devcryptoeng [default] OPENSSL_NO_DEVCRYPTOENG dso [option] OPENSSL_NO_DSO dynamic-engine [cascade] ec_nistp_64_gcc_128 [default] OPENSSL_NO_EC_NISTP_64_GCC_128 egd [default] OPENSSL_NO_EGD engine [option] OPENSSL_NO_ENGINE (skip crypto/engine, engines) external-tests [default] OPENSSL_NO_EXTERNAL_TESTS fuzz-libfuzzer [default] OPENSSL_NO_FUZZ_LIBFUZZER fuzz-afl [default] OPENSSL_NO_FUZZ_AFL heartbeats [default] OPENSSL_NO_HEARTBEATS idea [option] 
OPENSSL_NO_IDEA (skip crypto/idea) md2 [option] OPENSSL_NO_MD2 (skip crypto/md2) md4 [option] OPENSSL_NO_MD4 (skip crypto/md4) mdc2 [option] OPENSSL_NO_MDC2 (skip crypto/mdc2) msan [default] OPENSSL_NO_MSAN ocsp [option] OPENSSL_NO_OCSP (skip crypto/ocsp) pic [no-shared-target] rc2 [option] OPENSSL_NO_RC2 (skip crypto/rc2) rc5 [option] OPENSSL_NO_RC5 (skip crypto/rc5) sctp [default] OPENSSL_NO_SCTP shared [option] srp [option] OPENSSL_NO_SRP (skip crypto/srp) ssl-trace [default] OPENSSL_NO_SSL_TRACE threads [option] ubsan [default] OPENSSL_NO_UBSAN unit-test [default] OPENSSL_NO_UNIT_TEST weak-ssl-ciphers [default] OPENSSL_NO_WEAK_SSL_CIPHERS zlib [option] zlib-dynamic [default] ssl3 [default] OPENSSL_NO_SSL3 ssl3-method [default] OPENSSL_NO_SSL3_METHOD Config target attributes: AR => "ar", ARFLAGS => "r", CC => "gcc", CFLAGS => "-O3", HASHBANGPERL => "/usr/bin/env perl", RANLIB => "ranlib", RC => "windres", aes_asm_src => "aes_core.c aes_cbc.c", aes_obj => "aes_core.o aes_cbc.o", apps_aux_src => "", apps_init_src => "", apps_obj => "", bf_asm_src => "bf_enc.c", bf_obj => "bf_enc.o", bn_asm_src => "bn_asm.c", bn_obj => "bn_asm.o", bn_ops => "BN_LLONG", build_file => "Makefile", build_scheme => [ "unified", "unix" ], cast_asm_src => "c_enc.c", cast_obj => "c_enc.o", cflags => "", chacha_asm_src => "chacha_enc.c", chacha_obj => "chacha_enc.o", cmll_asm_src => "camellia.c cmll_misc.c cmll_cbc.c", cmll_obj => "camellia.o cmll_misc.o cmll_cbc.o", cppflags => "", cpuid_asm_src => "mem_clr.c", cpuid_obj => "mem_clr.o", defines => [ ], des_asm_src => "des_enc.c fcrypt_b.c", des_obj => "des_enc.o fcrypt_b.o", disable => [ ], dso_extension => ".so", ec_asm_src => "", ec_obj => "", enable => [ ], exe_extension => "", includes => [ ], keccak1600_asm_src => "keccak1600.c", keccak1600_obj => "keccak1600.o", lflags => "", lib_cflags => "", lib_cppflags => "", lib_defines => [ ], md5_asm_src => "", md5_obj => "", modes_asm_src => "", modes_obj => "", module_cflags => "", 
module_cppflags => "", module_cxxflags => "", module_defines => "", module_includes => "", module_ldflags => "", module_lflags => "", padlock_asm_src => "", padlock_obj => "", poly1305_asm_src => "", poly1305_obj => "", rc4_asm_src => "rc4_enc.c rc4_skey.c", rc4_obj => "rc4_enc.o rc4_skey.o", rc5_asm_src => "rc5_enc.c", rc5_obj => "rc5_enc.o", rmd160_asm_src => "", rmd160_obj => "", shared_cflag => "", shared_cppflag => "", shared_cxxflag => "", shared_defines => "", shared_extension => ".so", shared_extension_simple => ".so", shared_includes => "", shared_ldflag => "", shared_rcflag => "", shared_target => "", thread_defines => [ ], thread_scheme => "(unknown)", unistd => "", uplink_aux_src => "", uplink_obj => "", wp_asm_src => "wp_block.c", wp_obj => "wp_block.o", Recorded environment: AR = ARFLAGS = AS = ASFLAGS = BUILDFILE = CC = CFLAGS = CPP = CPPDEFINES = CPPFLAGS = CPPINCLUDES = CROSS_COMPILE = CXX = CXXFLAGS = HASHBANGPERL = LD = LDFLAGS = LDLIBS = MT = MTFLAGS = OPENSSL_LOCAL_CONFIG_DIR = PERL = RANLIB = RC = RCFLAGS = RM = WINDRES = __CNF_CFLAGS = __CNF_CPPDEFINES = __CNF_CPPFLAGS = __CNF_CPPINCLUDES = __CNF_CXXFLAGS = __CNF_LDFLAGS = __CNF_LDLIBS = Makevars: AR = /opt/toolchains/adtn-6/sysroots/x86_64-fslsdk-linux/usr/bin/ppce500v2-fsl-linux-gnuspe/powerpc-fsl-linux-gnuspe-ar ARFLAGS = r CC = /opt/toolchains/adtn-6/sysroots/x86_64-fslsdk-linux/usr/bin/ppce500v2-fsl-linux-gnuspe/powerpc-fsl-linux-gnuspe-gcc CFLAGS = -O3 CPPDEFINES = CPPFLAGS = CPPINCLUDES = CROSS_COMPILE = /opt/toolchains/adtn-6/sysroots/x86_64-fslsdk-linux/usr/bin/ppce500v2-fsl-linux-gnuspe/powerpc-fsl-linux-gnuspe- CXXFLAGS = HASHBANGPERL = /usr/bin/env perl LDFLAGS = LDLIBS = PERL = /usr/bin/perl RANLIB = /opt/toolchains/adtn-6/sysroots/x86_64-fslsdk-linux/usr/bin/ppce500v2-fsl-linux-gnuspe/powerpc-fsl-linux-gnuspe-ranlib RC = /opt/toolchains/adtn-6/sysroots/x86_64-fslsdk-linux/usr/bin/ppce500v2-fsl-linux-gnuspe/powerpc-fsl-linux-gnuspe-windres RCFLAGS = NOTE: These variables only 
represent the configuration view. The build file template may have processed these variables further, please have a look at the build file for more exact data: Makefile build file: Makefile build file templates: Configurations/common0.tmpl Configurations/unix-Makefile.tmpl Configurations/common.tmpl On Thu, Aug 6, 2020 at 2:07 AM Benjamin Kaduk wrote: > On Thu, Aug 06, 2020 at 01:51:35AM +0530, prudvi raj wrote: > > Hi there, > > > > I got this error during compilation , in file b_addr.c : > > In function 'BIO_lookup_ex': > > /b_addr.c:748:9: error: unknown type name 'in_addr_t' > > > > I see that "in_addr_t" is defined in "netinet/in.h" & "arpa/inet.h" in > > toolchain (typedef uint32_t in_addr_t;). > > i have even tried to #include<> these files directly but that doesn't > seem > > to fix the error. Btw, these files are included already , but under > > conditional #if 's. > > > > I am surprised why the error persists , even after directly including the > > respective source file ?? > > > > Here's the config options i used : > > ./Configure no-threads no-dso no-ct no-shared no-zlib no-asm no-engine > > no-bf no-aria no-blake2 no-camellia no-cast no-md2 no-md4 no-mdc2 no-ocsp > > no-rc2 no-rc5 no-hw-padlock no-idea no-srp gcc --with-rand-seed=none > > > --cross-compile-prefix=/opt/toolchains/adtn-6/sysroots/x86_64-fslsdk-linux/usr/bin/ppce500v2-fsl-linux-gnuspe/powerpc-fsl-linux-gnuspe- > > > > PS : same error without any cross compile prefix , using only gcc. > > The `./configdata.pm -d` output might be helpful. > > -Ben > -------------- next part -------------- An HTML attachment was scrubbed... URL: From bkaduk at akamai.com Thu Aug 6 00:07:10 2020 From: bkaduk at akamai.com (Benjamin Kaduk) Date: Wed, 5 Aug 2020 17:07:10 -0700 Subject: 'in_addr_t' in openssl 1.1.1g ?? 
In-Reply-To: References: <20200805203722.GX20623@akamai.com> Message-ID: <20200806000709.GZ20623@akamai.com> Ah, so it really is the "gcc" configure target (I had to look up that such a thing even existed!). Unfortunately, 'gcc' implies 32-bit, and your x86_64-fslsdk-linux suggests that you're targetting a 64-bit system. Such a mismatch of configurations could easily cause this sort of compile error due to inconsistent input to the preprocessor conditionals. Would linux-x86_64 be more appropriate for your system? -Ben On Thu, Aug 06, 2020 at 02:23:40AM +0530, prudvi raj wrote: > Another thing , 'make && make all ' is successful , but the same openssl > files when compiled during my project's compilation show this error . > PROJECT DIR << make project here compiles all files. > |- ..folder 1. > |- openssl > |-----... > Btw, Project uses same CC - > "/opt/toolchains/adtn-6/sysroots/x86_64-fslsdk-linux/usr/bin/ppce500v2-fsl-linux-gnuspe/powerpc-fsl-linux-gnuspe-gcc" > Hope this clears some things up. > $ ./configdata.pm -d > > Command line (with current working directory = .): > > /usr/bin/perl ./Configure no-threads no-dso no-ct no-shared no-zlib > no-asm no-engine no-bf no-aria no-blake2 no-camellia no-cast no-md2 no-md4 > no-mdc2 no-ocsp no-rc2 no-rc5 no-hw-padlock no-idea no-srp gcc > --with-rand-seed=none > --cross-compile-prefix=/opt/toolchains/adtn-6/sysroots/x86_64-fslsdk-linux/usr/bin/ppce500v2-fsl-linux-gnuspe/powerpc-fsl-linux-gnuspe- > > Perl information: > > /usr/bin/perl > 5.10.1 for x86_64-linux-thread-multi > > Enabled features: > > async > autoalginit > autoerrinit > autoload-config > buildtest-c\+\+ > capieng > chacha > cmac > cms > comp > deprecated > des > dgram > dh > dsa > dtls > ec > ec2m > ecdh > ecdsa > err > filenames > gost > hw(-.+)? 
> makedepend > multiblock > nextprotoneg > pinshared > ocb > poly1305 > posix-io > psk > rc4 > rdrand > rfc3779 > rmd160 > scrypt > seed > siphash > sm2 > sm3 > sm4 > sock > srtp > sse2 > ssl > static-engine > stdio > tests > tls > ts > ui-console > whirlpool > tls1 > tls1-method > tls1_1 > tls1_1-method > tls1_2 > tls1_2-method > tls1_3 > dtls1 > dtls1-method > dtls1_2 > dtls1_2-method > > Disabled features: > > afalgeng [cascade] OPENSSL_NO_AFALGENG > aria [option] OPENSSL_NO_ARIA (skip > crypto/aria) > asan [default] OPENSSL_NO_ASAN > asm [option] OPENSSL_NO_ASM > bf [option] OPENSSL_NO_BF (skip > crypto/bf) > blake2 [option] OPENSSL_NO_BLAKE2 (skip > crypto/blake2) > camellia [option] OPENSSL_NO_CAMELLIA (skip > crypto/camellia) > cast [option] OPENSSL_NO_CAST (skip > crypto/cast) > crypto-mdebug [default] OPENSSL_NO_CRYPTO_MDEBUG > crypto-mdebug-backtrace [default] > OPENSSL_NO_CRYPTO_MDEBUG_BACKTRACE > ct [option] OPENSSL_NO_CT (skip > crypto/ct) > devcryptoeng [default] OPENSSL_NO_DEVCRYPTOENG > dso [option] OPENSSL_NO_DSO > dynamic-engine [cascade] > ec_nistp_64_gcc_128 [default] > OPENSSL_NO_EC_NISTP_64_GCC_128 > egd [default] OPENSSL_NO_EGD > engine [option] OPENSSL_NO_ENGINE (skip > crypto/engine, engines) > external-tests [default] OPENSSL_NO_EXTERNAL_TESTS > fuzz-libfuzzer [default] OPENSSL_NO_FUZZ_LIBFUZZER > fuzz-afl [default] OPENSSL_NO_FUZZ_AFL > heartbeats [default] OPENSSL_NO_HEARTBEATS > idea [option] OPENSSL_NO_IDEA (skip > crypto/idea) > md2 [option] OPENSSL_NO_MD2 (skip > crypto/md2) > md4 [option] OPENSSL_NO_MD4 (skip > crypto/md4) > mdc2 [option] OPENSSL_NO_MDC2 (skip > crypto/mdc2) > msan [default] OPENSSL_NO_MSAN > ocsp [option] OPENSSL_NO_OCSP (skip > crypto/ocsp) > pic [no-shared-target] > rc2 [option] OPENSSL_NO_RC2 (skip > crypto/rc2) > rc5 [option] OPENSSL_NO_RC5 (skip > crypto/rc5) > sctp [default] OPENSSL_NO_SCTP > shared [option] > srp [option] OPENSSL_NO_SRP (skip > crypto/srp) > ssl-trace [default] OPENSSL_NO_SSL_TRACE > threads 
[option] > ubsan [default] OPENSSL_NO_UBSAN > unit-test [default] OPENSSL_NO_UNIT_TEST > weak-ssl-ciphers [default] OPENSSL_NO_WEAK_SSL_CIPHERS > zlib [option] > zlib-dynamic [default] > ssl3 [default] OPENSSL_NO_SSL3 > ssl3-method [default] OPENSSL_NO_SSL3_METHOD > > Config target attributes: > > AR => "ar", > ARFLAGS => "r", > CC => "gcc", > CFLAGS => "-O3", > HASHBANGPERL => "/usr/bin/env perl", > RANLIB => "ranlib", > RC => "windres", > aes_asm_src => "aes_core.c aes_cbc.c", > aes_obj => "aes_core.o aes_cbc.o", > apps_aux_src => "", > apps_init_src => "", > apps_obj => "", > bf_asm_src => "bf_enc.c", > bf_obj => "bf_enc.o", > bn_asm_src => "bn_asm.c", > bn_obj => "bn_asm.o", > bn_ops => "BN_LLONG", > build_file => "Makefile", > build_scheme => [ "unified", "unix" ], > cast_asm_src => "c_enc.c", > cast_obj => "c_enc.o", > cflags => "", > chacha_asm_src => "chacha_enc.c", > chacha_obj => "chacha_enc.o", > cmll_asm_src => "camellia.c cmll_misc.c cmll_cbc.c", > cmll_obj => "camellia.o cmll_misc.o cmll_cbc.o", > cppflags => "", > cpuid_asm_src => "mem_clr.c", > cpuid_obj => "mem_clr.o", > defines => [ ], > des_asm_src => "des_enc.c fcrypt_b.c", > des_obj => "des_enc.o fcrypt_b.o", > disable => [ ], > dso_extension => ".so", > ec_asm_src => "", > ec_obj => "", > enable => [ ], > exe_extension => "", > includes => [ ], > keccak1600_asm_src => "keccak1600.c", > keccak1600_obj => "keccak1600.o", > lflags => "", > lib_cflags => "", > lib_cppflags => "", > lib_defines => [ ], > md5_asm_src => "", > md5_obj => "", > modes_asm_src => "", > modes_obj => "", > module_cflags => "", > module_cppflags => "", > module_cxxflags => "", > module_defines => "", > module_includes => "", > module_ldflags => "", > module_lflags => "", > padlock_asm_src => "", > padlock_obj => "", > poly1305_asm_src => "", > poly1305_obj => "", > rc4_asm_src => "rc4_enc.c rc4_skey.c", > rc4_obj => "rc4_enc.o rc4_skey.o", > rc5_asm_src => "rc5_enc.c", > rc5_obj => "rc5_enc.o", > rmd160_asm_src => "", > 
rmd160_obj => "", > shared_cflag => "", > shared_cppflag => "", > shared_cxxflag => "", > shared_defines => "", > shared_extension => ".so", > shared_extension_simple => ".so", > shared_includes => "", > shared_ldflag => "", > shared_rcflag => "", > shared_target => "", > thread_defines => [ ], > thread_scheme => "(unknown)", > unistd => "", > uplink_aux_src => "", > uplink_obj => "", > wp_asm_src => "wp_block.c", > wp_obj => "wp_block.o", > > Recorded environment: > > AR = > ARFLAGS = > AS = > ASFLAGS = > BUILDFILE = > CC = > CFLAGS = > CPP = > CPPDEFINES = > CPPFLAGS = > CPPINCLUDES = > CROSS_COMPILE = > CXX = > CXXFLAGS = > HASHBANGPERL = > LD = > LDFLAGS = > LDLIBS = > MT = > MTFLAGS = > OPENSSL_LOCAL_CONFIG_DIR = > PERL = > RANLIB = > RC = > RCFLAGS = > RM = > WINDRES = > __CNF_CFLAGS = > __CNF_CPPDEFINES = > __CNF_CPPFLAGS = > __CNF_CPPINCLUDES = > __CNF_CXXFLAGS = > __CNF_LDFLAGS = > __CNF_LDLIBS = > > Makevars: > > AR = > /opt/toolchains/adtn-6/sysroots/x86_64-fslsdk-linux/usr/bin/ppce500v2-fsl-linux-gnuspe/powerpc-fsl-linux-gnuspe-ar > ARFLAGS = r > CC = > /opt/toolchains/adtn-6/sysroots/x86_64-fslsdk-linux/usr/bin/ppce500v2-fsl-linux-gnuspe/powerpc-fsl-linux-gnuspe-gcc > CFLAGS = -O3 > CPPDEFINES = > CPPFLAGS = > CPPINCLUDES = > CROSS_COMPILE = > /opt/toolchains/adtn-6/sysroots/x86_64-fslsdk-linux/usr/bin/ppce500v2-fsl-linux-gnuspe/powerpc-fsl-linux-gnuspe- > CXXFLAGS = > HASHBANGPERL = /usr/bin/env perl > LDFLAGS = > LDLIBS = > PERL = /usr/bin/perl > RANLIB = > /opt/toolchains/adtn-6/sysroots/x86_64-fslsdk-linux/usr/bin/ppce500v2-fsl-linux-gnuspe/powerpc-fsl-linux-gnuspe-ranlib > RC = > /opt/toolchains/adtn-6/sysroots/x86_64-fslsdk-linux/usr/bin/ppce500v2-fsl-linux-gnuspe/powerpc-fsl-linux-gnuspe-windres > RCFLAGS = > > NOTE: These variables only represent the configuration view. 
The build file > template may have processed these variables further, please have a look at > the > build file for more exact data: > Makefile > > build file: > > Makefile > > build file templates: > > Configurations/common0.tmpl > Configurations/unix-Makefile.tmpl > Configurations/common.tmpl > > > On Thu, Aug 6, 2020 at 2:07 AM Benjamin Kaduk wrote: > > > On Thu, Aug 06, 2020 at 01:51:35AM +0530, prudvi raj wrote: > > > Hi there, > > > > > > I got this error during compilation, in file b_addr.c: > > > In function 'BIO_lookup_ex': > > > /b_addr.c:748:9: error: unknown type name 'in_addr_t' > > > > > > I see that "in_addr_t" is defined in "netinet/in.h" & "arpa/inet.h" in > > > the toolchain (typedef uint32_t in_addr_t;). > > > I have even tried to #include<> these files directly but that doesn't > > seem > > > to fix the error. Btw, these files are included already, but under > > > conditional #if's. > > > > > > I am surprised that the error persists, even after directly including the > > > respective header files. > > > > > > Here are the config options I used: > > > ./Configure no-threads no-dso no-ct no-shared no-zlib no-asm no-engine > > > no-bf no-aria no-blake2 no-camellia no-cast no-md2 no-md4 no-mdc2 no-ocsp > > > no-rc2 no-rc5 no-hw-padlock no-idea no-srp gcc --with-rand-seed=none > > > > > --cross-compile-prefix=/opt/toolchains/adtn-6/sysroots/x86_64-fslsdk-linux/usr/bin/ppce500v2-fsl-linux-gnuspe/powerpc-fsl-linux-gnuspe- > > > > > > PS: same error without any cross-compile prefix, using only gcc. > > > > The `./configdata.pm -d` output might be helpful.
> > > > -Ben > > From openssl at openssl.org Thu Aug 6 13:44:20 2020 From: openssl at openssl.org (OpenSSL) Date: Thu, 6 Aug 2020 13:44:20 +0000 Subject: OpenSSL version 3.0.0-alpha6 published Message-ID: <20200806134420.GA4809@openssl.org> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA256 OpenSSL version 3.0 alpha 6 released ==================================== OpenSSL - The Open Source toolkit for SSL/TLS https://www.openssl.org/ OpenSSL 3.0 is currently in alpha. OpenSSL 3.0 alpha 6 has now been made available. Note: This OpenSSL pre-release has been provided for testing ONLY. It should NOT be used for security critical purposes. Specific notes on upgrading to OpenSSL 3.0 from previous versions, as well as known issues are available on the OpenSSL Wiki, here: https://wiki.openssl.org/index.php/OpenSSL_3.0 The alpha release is available for download via HTTPS and FTP from the following master locations (you can find the various FTP mirrors under https://www.openssl.org/source/mirror.html): * https://www.openssl.org/source/ * ftp://ftp.openssl.org/source/ The distribution file name is: o openssl-3.0.0-alpha6.tar.gz Size: 13963353 SHA1 checksum: bac4e232f5238c5f267c3e108227cfadbd4b7120 SHA256 checksum: 1e8143b152f33f76530da2eaedc5d841121ff9e7247a857390cceac6503f482b The checksums were calculated using the following commands: openssl sha1 openssl-3.0.0-alpha6.tar.gz openssl sha256 openssl-3.0.0-alpha6.tar.gz Please download and check this alpha release as soon as possible. To report a bug, open an issue on GitHub: https://github.com/openssl/openssl/issues Please check the release notes and mailing lists to avoid duplicate reports of known issues. (Of course, the source is also available on GitHub.) Yours, The OpenSSL Project Team. 
-----BEGIN PGP SIGNATURE----- iQEzBAEBCAAdFiEEhlersmDwVrHlGQg52cTSbQ5gRJEFAl8r/u0ACgkQ2cTSbQ5g RJFJhgf8C6Wv+1W8JolzZ2erbPSDFXTUjOJGvqnR2+73wtYMkzZKMnYTpqiW9Jrx 5V6zQ2WIYhnWZ97nSP0woo/h3tr8rQIj71Cj3TPqO11zOrXda9Op+P9ncCNNXTuz /BS4HmnicV/pmrd2JMnFmo58tka9K47DhcACMKxuWPr32F40DJcr/yjvYnlf6k7y s5EWK7tv7NLYWu+UN+JO6LpJrTFWRTajQj2OEZh3+Gm07Qv98TaXXr3QeiEpimu6 xbDi8oCcAzA+bKr1WpTCNYIU9H6QZIc0QqPjhSsS9o64RDlK7laRQ6ETMmePxDUK u812RauTlxNuJHjy34a9k38kirPHaQ== =uzj7 -----END PGP SIGNATURE----- From psteuer9 at gmail.com Thu Aug 6 16:37:48 2020 From: psteuer9 at gmail.com (Patrick Steuer) Date: Thu, 6 Aug 2020 18:37:48 +0200 Subject: Software that uses OpenSSL Message-ID: Hi, is there a list of projects that use OpenSSL (for TLS or crypto in general) or that can be configured to use OpenSSL as a backend ? Best, Patrick From Michael.Wojcik at microfocus.com Thu Aug 6 17:44:17 2020 From: Michael.Wojcik at microfocus.com (Michael Wojcik) Date: Thu, 6 Aug 2020 17:44:17 +0000 Subject: Software that uses OpenSSL In-Reply-To: References: Message-ID: > From: openssl-users On Behalf Of Patrick > Steuer > Sent: Thursday, 6 August, 2020 10:38 > > is there a list of projects that use OpenSSL (for TLS or crypto in > general) or that can be configured to use OpenSSL as a backend ? There are probably some partial lists, but there certainly is not a definitive one, since I know of products which use OpenSSL but don't advertise the fact. And while it's *possible* that certain well-resourced organizations have done their best to compile comprehensive lists, it's unlikely even those are perfectly accurate, and in any case you and I don't have access to them. OpenSSL is very widely used. Enlyft claims they have 317844 *companies* using OpenSSL, for who knows how many products. Of course, many of those are internal use, or use in widely-used products and projects; but some significant fraction represents ISV products that depend on OpenSSL.
A quick search didn't turn up any useful statistics on OpenSSL use in OSS projects. (GitHub's dependency graph, for example, had no information.) Anything more precise than "a whole lot" will require some real research, I suspect. From psteuer9 at gmail.com Thu Aug 6 18:42:55 2020 From: psteuer9 at gmail.com (Patrick Steuer) Date: Thu, 6 Aug 2020 20:42:55 +0200 Subject: Software that uses OpenSSL In-Reply-To: References: Message-ID: > Anything more precise than "a whole lot" will require some real research, I suspect. Yes, that's my feeling as well. I hoped someone on here might have already done research in that direction (and might be willing to share). My question was intended to be about notable OSS projects, sorry for not making that clear. To give some examples: node.js crypto https://nodejs.org/api/crypto.html python https://cryptography.io/en/latest/ ... I thought someone may already have put together a list with projects that have an OpenSSL plugin or even use it as the default. Best, Patrick From dank at kegel.com Thu Aug 6 19:21:13 2020 From: dank at kegel.com (Dan Kegel) Date: Thu, 6 Aug 2020 12:21:13 -0700 Subject: Software that uses OpenSSL In-Reply-To: References: Message-ID: On Ubuntu, the command apt-cache rdepends libssl1.1 lists 861 packages, belonging to something like 400 projects, that depend on openssl.... On Thu, Aug 6, 2020 at 11:43 AM Patrick Steuer wrote: > > Anything more precise than "a whole lot" will require some real > research, I suspect. > > Yes, that's my feeling as well. I hoped someone on here might have > already done research in that direction (and might be willing to share). > > My question was intended to be about notable OSS projects, sorry for not > making that clear. > > To give some examples: > > node.js crypto https://nodejs.org/api/crypto.html > python https://cryptography.io/en/latest/ > ... > > I thought someone may already have put together a list with projects that > have an OpenSSL plugin or even use it as the default.
> > Best, > Patrick > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From patrick.mooc at gmail.com Thu Aug 6 19:24:32 2020 From: patrick.mooc at gmail.com (Patrick Mooc) Date: Thu, 6 Aug 2020 21:24:32 +0200 Subject: OpenSSL compliance with Linux distributions In-Reply-To: <20200805204615.GY20623@akamai.com> References: <377c5ab6-3315-02c3-66ca-8d92764d547e@gmail.com> <0a8278fd-15a2-766e-53b4-a3fa996664c5@gmail.com> <20200805204615.GY20623@akamai.com> Message-ID: Thank you Ben for your answer. I had a look today for this point, but I didn't find anything about extensions in the OpenSSL version I use (0.9.8). Maybe I have to modify the OpenSSL configuration file (openssl.conf) and compile OpenSSL again. I will check this tomorrow. Best Regards, On 05/08/2020 at 22:46, Benjamin Kaduk wrote: > On Wed, Aug 05, 2020 at 10:28:26PM +0200, Patrick Mooc wrote: >> Thank you very much Kyle for your quick and clear answer. >> >> The reason why I want to upgrade OpenSSL version, is that I encounter a >> problem with 1 frame exchange between client and server. >> >> This frame is the first packet sent from client to server (Client Hello >> Packet) and the protocol used for this packet is SSLv2. >> I don't understand why, because I force the use of TLSv1 (in ssl.conf file >> as in application software), but only for this first exchange packet, SSLv2 >> is used. All other packets are well using TLSv10 as configured. >> >> I have also searched for forcing the use of TLSv10 ciphers in OpenSSL >> configuration and in application software, but I didn't succeed doing so. >> >> That's why I had the idea of upgrading OpenSSL version to avoid the use of >> SSLv2 protocol. >> >> >> Thus, if you have any idea of how to solve my problem without upgrading >> OpenSSL version or Linux distribution, It would be very nice.
> Using an "SSLv2-compatible" ClientHello is rather distinct from actually using > the SSLv2 protocol; I believe that the former is what is happening for you. > > IIRC sending any TLS extension with the ClientHello suppresses the use of the > v2-compatible format, so you might be able to do that. (I don't remember offhand > which extensions are implemented in that old of an OpenSSL version, and > whether they're enabled in the default build, though.) > > -Ben From quanah at symas.com Thu Aug 6 20:17:12 2020 From: quanah at symas.com (Quanah Gibson-Mount) Date: Thu, 06 Aug 2020 13:17:12 -0700 Subject: Software that uses OpenSSL In-Reply-To: References: Message-ID: <1795517526B2928E526CC788@[192.168.1.156]> --On Thursday, August 6, 2020 1:21 PM -0700 Dan Kegel wrote: > lists 861 packages, belonging to something like 400 projects, that depend > on openssl.... Unfortunately, due to Debian's odd take on the OpenSSL license, many projects that can use OpenSSL are compiled against alternative SSL libraries, so this can miss a lot of potential applications (OpenLDAP, for example). Hopefully with OpenSSL 3.0 and later, this won't be as much of an issue. --Quanah -- Quanah Gibson-Mount Product Architect Symas Corporation Packaged, certified, and supported LDAP solutions powered by OpenLDAP: From hkario at redhat.com Fri Aug 7 16:18:16 2020 From: hkario at redhat.com (Hubert Kario) Date: Fri, 07 Aug 2020 18:18:16 +0200 Subject: OpenSSL compliance with Linux distributions In-Reply-To: References: <377c5ab6-3315-02c3-66ca-8d92764d547e@gmail.com> <0a8278fd-15a2-766e-53b4-a3fa996664c5@gmail.com> <20200805204615.GY20623@akamai.com> Message-ID: <75f3161c-5b66-476b-a6ac-95c21c634e22@redhat.com> On Thursday, 6 August 2020 21:24:32 CEST, Patrick Mooc wrote: > Thank you Ben for your answer. > > I had a look today for this point, but I didin't found anything > about extension in the OpenSSL version I use (0.9.8). 
> > Maybe I have to modify OpenSSL configuration file > (openssl.conf) and compile OpenSSL again. I will check this > tomorrow. Changing the configuration file won't affect OpenSSL's behaviour in your situation. I don't remember if this was the behaviour for 0.9.8, but IIRC 1.0.1 would send an SSLv2-compatible Client Hello only if any SSLv2-compatible ciphers were enabled. Try explicitly disabling the RC4-MD5 cipher; that may help. > Best Regards, > > > On 05/08/2020 at 22:46, Benjamin Kaduk wrote: >> On Wed, Aug 05, 2020 at 10:28:26PM +0200, Patrick Mooc wrote: ... > > > -- Regards, Hubert Kario Senior Quality Engineer, QE BaseOS Security team Web: www.cz.redhat.com Red Hat Czech s.r.o., Purkyňova 115, 612 00 Brno, Czech Republic From dank at kegel.com Fri Aug 7 16:33:45 2020 From: dank at kegel.com (Dan Kegel) Date: Fri, 7 Aug 2020 09:33:45 -0700 Subject: OpenSSL compliance with Linux distributions In-Reply-To: <0a8278fd-15a2-766e-53b4-a3fa996664c5@gmail.com> References: <377c5ab6-3315-02c3-66ca-8d92764d547e@gmail.com> <0a8278fd-15a2-766e-53b4-a3fa996664c5@gmail.com> Message-ID: Suggestion: get the source for the exact same version of openssl your system uses, and rebuild it with sslv2 disabled. e.g. sudo apt install build-essential devscripts sudo apt build-dep openssl mkdir tmp cd tmp apt source openssl cd openssl-* gedit debian/rules # see below debuild -b -uc -us cd .. sudo apt install *.deb While editing debian/rules in gedit, change the line CONFARGS = --prefix=/usr --openssldir=/usr/lib/ssl --libdir=lib/$(DEB_HOST_MULTIARCH) no-idea no-mdc2 no-rc5 no-zlib no-ssl3 enable-unit-test no-ssl3-method enable-rfc3779 enable-cms to add the no-ssl2 argument, or something like that. See https://wiki.openssl.org/index.php/Compilation_and_Installation But be careful! You probably want to have the original system .deb files for its openssl in an origopenssl dir so you can reinstall them with 'sudo dpkg -i origopenssl/*.deb' when this breaks.
- Dan On Wed, Aug 5, 2020 at 1:28 PM Patrick Mooc wrote: > Thank you very much Kyle for your quick and clear answer. > > The reason why I want to upgrade OpenSSL version, is that I encounter a > problem with 1 frame exchange between client and server. > > This frame is the first packet sent from client to server (Client Hello > Packet) and the protocol used for this packet is SSLv2. > I don't understand why, because I force the use of TLSv1 (in ssl.conf file > as in application software), but only for this first exchange packet, SSLv2 > is used. All other packets are well using TLSv10 as configured. > > I have also searched for forcing the use of TLSv10 ciphers in OpenSSL > configuration and in application software, but I didn't succeed doing so. > > That's why I had the idea of upgrading OpenSSL version to avoid the use of > SSLv2 protocol. > > > Thus, if you have any idea of how to solve my problem without upgrading > OpenSSL version or Linux distribution, It would be very nice. > > > Thank you in advance for your answer. > > Best Regards, > > > On 05/08/2020 at 22:10, Kyle Hamilton wrote: > > It is never recommended to upgrade your distribution's version of OpenSSL > with one you compile yourself. Doing so will often break all software > installed by the distribution that uses it. > > If you need functionality from newer versions of OpenSSL, your options are > to upgrade your OS version, or to install a local copy of OpenSSL and > manually compile and link local copies of the applications that need the > newer functionality. > > (Newer versions of OpenSSL do not maintain the same Application Binary > Interface (ABI), which means that binaries compiled against older versions > will not correctly operate or dynamically link against newer libraries. > Also, distributions such as Debian can modify the ABI in such a way that > nothing distributed directly by openssl.org can be compiled to meet it > without source code modification.)
> > -Kyle H > > On Wed, Aug 5, 2020, 14:49 Patrick Mooc wrote: > >> Hello, >> >> I'm using an old version of OpenSSL (0.9.8g) on an old Linux Debian >> distribution (Lenny). >> >> Is it possible to upgrade OpenSSL version without upgrading Linux Debian >> distribution ? >> If yes, up to which version of OpenSSL ? >> >> Are all versions of OpenSSL compliant with all Linux Debian distribution ? >> >> >> Thank you in advance for your answer. >> >> Best Regards, >> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From dirkx at webweaving.org Fri Aug 7 17:07:29 2020 From: dirkx at webweaving.org (Dirk-Willem van Gulik) Date: Fri, 7 Aug 2020 19:07:29 +0200 Subject: odd error for ECDSA key in REQ. Message-ID: <988411A7-D6EF-416C-9459-75D0944DD1FF@webweaving.org> Below CSR gives me an odd error with the standard openssl REQ command: openssl req -inform DER -noout -pubkey Error getting public key 140673482679616:error:10067066:elliptic curve routines:ec_GFp_simple_oct2point:invalid encoding:../crypto/ec/ecp_oct.c:312: 140673482679616:error:10098010:elliptic curve routines:o2i_ECPublicKey:EC lib:../crypto/ec/ec_asn1.c:1175: 140673482679616:error:100D708E:elliptic curve routines:eckey_pub_decode:decode error:../crypto/ec/ec_ameth.c:157: 140673482679616:error:0B09407D:x509 certificate routines:x509_pubkey_decode:public key decode error:../crypto/x509/x_pubkey.c:125: Even though the ASN1 of the public key looks correct to me: SEQUENCE (2 elem) SEQUENCE (2 elem) OBJECT IDENTIFIER 1.2.840.10045.2.1 ecPublicKey (ANSI X9.62 public key type) OBJECT IDENTIFIER 1.2.840.10045.3.1.7 prime256v1 (ANSI X9.62 named elliptic curve) BIT STRING (536 bit) 0000010001000001000001000011100100110011100111000110100010100101101000? OCTET STRING (65 byte) 0439339C68A5A333143592C0A36D053F31D3AF6ED18FB54F4747B9DFC6DB6ABC715561? What would be a good way to further debug this ? 
With kind regards, Dw -----BEGIN CERTIFICATE REQUEST----- MIIBPzCB5QIBADCBgDELMAkGA1UEAxMCQ04xCjAIBgNVBAUTATExCjAIBgNVBAYT AUMxCjAIBgNVBAcTAUwxCjAIBgNVBAgTAVMxCjAIBgNVBAoTAU8xCzAJBgNVBAsT Ak9VMQowCAYDVQQMEwFUMQowCAYDVQQNEwFEMRAwDgYJKoZIhvcNAQkBEwFFMFsw EwYHKoZIzj0CAQYIKoZIzj0DAQcDRAAEQQQ5M5xopaMzFDWSwKNtBT8x069u0Y+1 T0dHud/G22q8cVVh8sVcpLUortLxxesEXCddpx/EeuxP+MN/RymHTMrjoAAwCgYI KoZIzj0EAwIDSQAwRgIhAO+K+TFCdYxQg7aT+B3wIVa6CCYxM/mL4/WHSrwXujJy AiEA7UsbQT/YRKaFDPn/U9jdrJaUmKsqKJvGwN7YVaMGdeo= -----END CERTIFICATE REQUEST----- From fm at frank4dd.com Sat Aug 8 02:16:56 2020 From: fm at frank4dd.com (Frank Migge) Date: Sat, 08 Aug 2020 11:16:56 +0900 Subject: odd error for ECDSA key in REQ. In-Reply-To: <988411A7-D6EF-416C-9459-75D0944DD1FF@webweaving.org> References: <988411A7-D6EF-416C-9459-75D0944DD1FF@webweaving.org> Message-ID: <4cf57fe260ac22582736350edb33c9d199282ce6.camel@frank4dd.com> Hi Dirk-Willem, Something is wrong with your EC key. The error mentions that it can't get the curve points from the key data. How did you generate the key? 
If it helps, here is a working CSR example, using a prime256v1 key for comparison: -----BEGIN CERTIFICATE REQUEST----- MIIBDjCBtAIBADArMQswCQYDVQQGEwJKUDEcMBoGA1UEAwwTdGVzdCBmb3IgcHJp bWUyNTZ2MTBZMBMGByqGSM49AgEGCCqGSM49AwEHA0IABOMQV0Vep+9Xnje6bKNy +8blwKEscr5LoUQCuwqaUT4HyPgXFE9E0r1PiWbC6bGkS26MuguOBp52X9H9z+NS zM6gJzAlBgkqhkiG9w0BCQ4xGDAWMBQGA1UdEQQNMAuCCWZtNGRkLmNvbTAKBggq hkjOPQQDAgNJADBGAiEA5uYlfkpRsJhBk+WwippCjupEpaCNaHwNyNqbj8qrR80C IQDCoJtaWhFGxbaAB2+o3gm87ZHJSDSjfrD2lEhlkbEXHQ== -----END CERTIFICATE REQUEST----- $ openssl req -inform PEM -noout -pubkey -in test.csr -----BEGIN PUBLIC KEY----- MFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAE4xBXRV6n71eeN7pso3L7xuXAoSxy vkuhRAK7CppRPgfI+BcUT0TSvU+JZsLpsaRLboy6C44GnnZf0f3P41LMzg== -----END PUBLIC KEY----- On Fri, 2020-08-07 at 19:07 +0200, Dirk-Willem van Gulik wrote: > Below CSR gives me an odd error with the standard openssl REQ > command: > > openssl req -inform DER -noout -pubkey > > Error getting public key > > 140673482679616:error:10067066:elliptic curve > routines:ec_GFp_simple_oct2point:invalid > encoding:../crypto/ec/ecp_oct.c:312: > 140673482679616:error:10098010:elliptic curve > routines:o2i_ECPublicKey:EC lib:../crypto/ec/ec_asn1.c:1175: > 140673482679616:error:100D708E:elliptic curve > routines:eckey_pub_decode:decode error:../crypto/ec/ec_ameth.c:157: > 140673482679616:error:0B09407D:x509 certificate > routines:x509_pubkey_decode:public key decode > error:../crypto/x509/x_pubkey.c:125: > > Even though the ASN1 of the public key looks correct to me: > > SEQUENCE (2 elem) > SEQUENCE (2 elem) > OBJECT IDENTIFIER 1.2.840.10045.2.1 ecPublicKey (ANSI X9.62 > public key type) > OBJECT IDENTIFIER 1.2.840.10045.3.1.7 prime256v1 (ANSI X9.62 > named elliptic curve) > BIT STRING (536 bit) > 000001000100000100000100001110010011001110011100011010001010010110100 > 0? > OCTET STRING (65 byte) > 0439339C68A5A333143592C0A36D053F31D3AF6ED18FB54F4747B9DFC6DB6ABC71556 > 1? > > What would be a good way to further debug this ? 
> > With kind regards, > > Dw > > -----BEGIN CERTIFICATE REQUEST----- > MIIBPzCB5QIBADCBgDELMAkGA1UEAxMCQ04xCjAIBgNVBAUTATExCjAIBgNVBAYT > AUMxCjAIBgNVBAcTAUwxCjAIBgNVBAgTAVMxCjAIBgNVBAoTAU8xCzAJBgNVBAsT > Ak9VMQowCAYDVQQMEwFUMQowCAYDVQQNEwFEMRAwDgYJKoZIhvcNAQkBEwFFMFsw > EwYHKoZIzj0CAQYIKoZIzj0DAQcDRAAEQQQ5M5xopaMzFDWSwKNtBT8x069u0Y+1 > T0dHud/G22q8cVVh8sVcpLUortLxxesEXCddpx/EeuxP+MN/RymHTMrjoAAwCgYI > KoZIzj0EAwIDSQAwRgIhAO+K+TFCdYxQg7aT+B3wIVa6CCYxM/mL4/WHSrwXujJy > AiEA7UsbQT/YRKaFDPn/U9jdrJaUmKsqKJvGwN7YVaMGdeo= > -----END CERTIFICATE REQUEST----- -- Frank Migge http://fm4dd.com | public at frank4dd.com From dirkx at webweaving.org Sat Aug 8 12:22:44 2020 From: dirkx at webweaving.org (Dirk-Willem van Gulik) Date: Sat, 8 Aug 2020 14:22:44 +0200 Subject: odd error for ECDSA key in REQ. In-Reply-To: <4cf57fe260ac22582736350edb33c9d199282ce6.camel@frank4dd.com> References: <988411A7-D6EF-416C-9459-75D0944DD1FF@webweaving.org> <4cf57fe260ac22582736350edb33c9d199282ce6.camel@frank4dd.com> Message-ID: The key is generated by a lovely HSM - which is by its nature a bit of a closed box. Whose vendor is very sure its software is right. So this helps a lot - and helps confirm what we thought ! Thanks, Dw > On 8 Aug 2020, at 04:16, Frank Migge wrote: > > Hi Dirk-Willem, > > Something is wrong with your EC key. The error mentions that it can't > get the curve points from the key data. How did you generate the key? 
> > If it helps, here is a working CSR example, using a prime256v1 key for > comparison: > > -----BEGIN CERTIFICATE REQUEST----- > MIIBDjCBtAIBADArMQswCQYDVQQGEwJKUDEcMBoGA1UEAwwTdGVzdCBmb3IgcHJp > bWUyNTZ2MTBZMBMGByqGSM49AgEGCCqGSM49AwEHA0IABOMQV0Vep+9Xnje6bKNy > +8blwKEscr5LoUQCuwqaUT4HyPgXFE9E0r1PiWbC6bGkS26MuguOBp52X9H9z+NS > zM6gJzAlBgkqhkiG9w0BCQ4xGDAWMBQGA1UdEQQNMAuCCWZtNGRkLmNvbTAKBggq > hkjOPQQDAgNJADBGAiEA5uYlfkpRsJhBk+WwippCjupEpaCNaHwNyNqbj8qrR80C > IQDCoJtaWhFGxbaAB2+o3gm87ZHJSDSjfrD2lEhlkbEXHQ== > -----END CERTIFICATE REQUEST----- > > > $ openssl req -inform PEM -noout -pubkey -in test.csr > -----BEGIN PUBLIC KEY----- > MFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAE4xBXRV6n71eeN7pso3L7xuXAoSxy > vkuhRAK7CppRPgfI+BcUT0TSvU+JZsLpsaRLboy6C44GnnZf0f3P41LMzg== > -----END PUBLIC KEY----- > > > On Fri, 2020-08-07 at 19:07 +0200, Dirk-Willem van Gulik wrote: >> Below CSR gives me an odd error with the standard openssl REQ >> command: >> >> openssl req -inform DER -noout -pubkey >> >> Error getting public key >> >> 140673482679616:error:10067066:elliptic curve >> routines:ec_GFp_simple_oct2point:invalid >> encoding:../crypto/ec/ecp_oct.c:312: >> 140673482679616:error:10098010:elliptic curve >> routines:o2i_ECPublicKey:EC lib:../crypto/ec/ec_asn1.c:1175: >> 140673482679616:error:100D708E:elliptic curve >> routines:eckey_pub_decode:decode error:../crypto/ec/ec_ameth.c:157: >> 140673482679616:error:0B09407D:x509 certificate >> routines:x509_pubkey_decode:public key decode >> error:../crypto/x509/x_pubkey.c:125: >> >> Even though the ASN1 of the public key looks correct to me: >> >> SEQUENCE (2 elem) >> SEQUENCE (2 elem) >> OBJECT IDENTIFIER 1.2.840.10045.2.1 ecPublicKey (ANSI X9.62 >> public key type) >> OBJECT IDENTIFIER 1.2.840.10045.3.1.7 prime256v1 (ANSI X9.62 >> named elliptic curve) >> BIT STRING (536 bit) >> 000001000100000100000100001110010011001110011100011010001010010110100 >> 0? 
>> OCTET STRING (65 byte) >> 0439339C68A5A333143592C0A36D053F31D3AF6ED18FB54F4747B9DFC6DB6ABC71556 >> 1? >> >> What would be a good way to further debug this ? >> >> With kind regards, >> >> Dw >> >> -----BEGIN CERTIFICATE REQUEST----- >> MIIBPzCB5QIBADCBgDELMAkGA1UEAxMCQ04xCjAIBgNVBAUTATExCjAIBgNVBAYT >> AUMxCjAIBgNVBAcTAUwxCjAIBgNVBAgTAVMxCjAIBgNVBAoTAU8xCzAJBgNVBAsT >> Ak9VMQowCAYDVQQMEwFUMQowCAYDVQQNEwFEMRAwDgYJKoZIhvcNAQkBEwFFMFsw >> EwYHKoZIzj0CAQYIKoZIzj0DAQcDRAAEQQQ5M5xopaMzFDWSwKNtBT8x069u0Y+1 >> T0dHud/G22q8cVVh8sVcpLUortLxxesEXCddpx/EeuxP+MN/RymHTMrjoAAwCgYI >> KoZIzj0EAwIDSQAwRgIhAO+K+TFCdYxQg7aT+B3wIVa6CCYxM/mL4/WHSrwXujJy >> AiEA7UsbQT/YRKaFDPn/U9jdrJaUmKsqKJvGwN7YVaMGdeo= >> -----END CERTIFICATE REQUEST----- > > > -- > Frank Migge > http://fm4dd.com | public at frank4dd.com > From doctor at doctor.nl2k.ab.ca Sat Aug 8 20:46:26 2020 From: doctor at doctor.nl2k.ab.ca (The Doctor) Date: Sat, 8 Aug 2020 14:46:26 -0600 Subject: openssl-3 In-Reply-To: <001101d66b2f$4273d250$c75b76f0$@cyberia.net.sa> References: <001101d66b2f$4273d250$c75b76f0$@cyberia.net.sa> Message-ID: <20200808204626.GB4155@doctor.nl2k.ab.ca> On Wed, Aug 05, 2020 at 04:49:36PM +0300, mejaz at cyberia.net.sa wrote: > > > Hello, > > > > > > I have sucesfully installed openssl 3.x version but when I was trying to > check the version wheather it installed sucesfully or not, it gives error as > below , any assistance would be highly appreciated thanks in advance. > > > > [root at nc ~]# /usr/local/bin/openssl versioin -a > > /usr/local/bin/openssl: error while loading shared libraries: libssl.so.3: > cannot open shared object file: No such file or directory > > > > > > I have redhat 8 O/S > > > > > > Regards > > Ejaz > I am now happy with OpenSSL 3 alpha 6. OpenSSH, squid, nginx and curl are running smoothly. -- Member - Liberal International This is doctor@@nl2k.ab.ca Ici doctor@@nl2k.ab.ca Yahweh, Queen & country!Never Satan President Republic!Beware AntiChrist rising!
https://www.empire.kred/ROOTNK?t=94a1f39b Morphing the facts to fit our worldview is a regressive trait. -unknown From Erwann.Abalea at docusign.com Mon Aug 10 08:32:19 2020 From: Erwann.Abalea at docusign.com (Erwann Abalea) Date: Mon, 10 Aug 2020 08:32:19 +0000 Subject: [EXTERNAL] Re: odd error for ECDSA key in REQ. In-Reply-To: References: <988411A7-D6EF-416C-9459-75D0944DD1FF@webweaving.org> <4cf57fe260ac22582736350edb33c9d199282ce6.camel@frank4dd.com> Message-ID: <4CD7C507-392B-44BE-986C-21E03E426E2D@docusign.com> The key itself is good. Its encoding in the CSR isn't. Looks like the public key was X9.62 encoded in its uncompressed form (i.e. it starts with a 04 octet, followed by the octets composing the x and y coordinates), then wrapped into an ASN.1 OCTET STRING (i.e. the 04 tag, plus a 0x41 length, and then the encoded public key), and finally the BIT STRING encapsulation. The OCTET STRING is wrong here. Regards, Erwann Abalea On 08/08/2020 14:24, openssl-users on behalf of Dirk-Willem van Gulik wrote: The key is generated by a lovely HSM, which is by its nature a bit of a closed box, and whose vendor is very sure its software is right. So this helps a lot - and helps confirm what we thought! Thanks, Dw > On 8 Aug 2020, at 04:16, Frank Migge wrote: > > Hi Dirk-Willem, > > Something is wrong with your EC key. The error mentions that it can't > get the curve points from the key data. How did you generate the key?
> > If it helps, here is a working CSR example, using a prime256v1 key for > comparison: > > -----BEGIN CERTIFICATE REQUEST----- > MIIBDjCBtAIBADArMQswCQYDVQQGEwJKUDEcMBoGA1UEAwwTdGVzdCBmb3IgcHJp > bWUyNTZ2MTBZMBMGByqGSM49AgEGCCqGSM49AwEHA0IABOMQV0Vep+9Xnje6bKNy > +8blwKEscr5LoUQCuwqaUT4HyPgXFE9E0r1PiWbC6bGkS26MuguOBp52X9H9z+NS > zM6gJzAlBgkqhkiG9w0BCQ4xGDAWMBQGA1UdEQQNMAuCCWZtNGRkLmNvbTAKBggq > hkjOPQQDAgNJADBGAiEA5uYlfkpRsJhBk+WwippCjupEpaCNaHwNyNqbj8qrR80C > IQDCoJtaWhFGxbaAB2+o3gm87ZHJSDSjfrD2lEhlkbEXHQ== > -----END CERTIFICATE REQUEST----- > > > $ openssl req -inform PEM -noout -pubkey -in test.csr > -----BEGIN PUBLIC KEY----- > MFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAE4xBXRV6n71eeN7pso3L7xuXAoSxy > vkuhRAK7CppRPgfI+BcUT0TSvU+JZsLpsaRLboy6C44GnnZf0f3P41LMzg== > -----END PUBLIC KEY----- > > > On Fri, 2020-08-07 at 19:07 +0200, Dirk-Willem van Gulik wrote: >> Below CSR gives me an odd error with the standard openssl REQ >> command: >> >> openssl req -inform DER -noout -pubkey >> >> Error getting public key >> >> 140673482679616:error:10067066:elliptic curve >> routines:ec_GFp_simple_oct2point:invalid >> encoding:../crypto/ec/ecp_oct.c:312: >> 140673482679616:error:10098010:elliptic curve >> routines:o2i_ECPublicKey:EC lib:../crypto/ec/ec_asn1.c:1175: >> 140673482679616:error:100D708E:elliptic curve >> routines:eckey_pub_decode:decode error:../crypto/ec/ec_ameth.c:157: >> 140673482679616:error:0B09407D:x509 certificate >> routines:x509_pubkey_decode:public key decode >> error:../crypto/x509/x_pubkey.c:125: >> >> Even though the ASN1 of the public key looks correct to me: >> >> SEQUENCE (2 elem) >> SEQUENCE (2 elem) >> OBJECT IDENTIFIER 1.2.840.10045.2.1 ecPublicKey (ANSI X9.62 >> public key type) >> OBJECT IDENTIFIER 1.2.840.10045.3.1.7 prime256v1 (ANSI X9.62 >> named elliptic curve) >> BIT STRING (536 bit) >> 000001000100000100000100001110010011001110011100011010001010010110100 >> 0? 
>> OCTET STRING (65 byte) >> 0439339C68A5A333143592C0A36D053F31D3AF6ED18FB54F4747B9DFC6DB6ABC71556 >> 1? >> >> What would be a good way to further debug this ? >> >> With kind regards, >> >> Dw >> >> -----BEGIN CERTIFICATE REQUEST----- >> MIIBPzCB5QIBADCBgDELMAkGA1UEAxMCQ04xCjAIBgNVBAUTATExCjAIBgNVBAYT >> AUMxCjAIBgNVBAcTAUwxCjAIBgNVBAgTAVMxCjAIBgNVBAoTAU8xCzAJBgNVBAsT >> Ak9VMQowCAYDVQQMEwFUMQowCAYDVQQNEwFEMRAwDgYJKoZIhvcNAQkBEwFFMFsw >> EwYHKoZIzj0CAQYIKoZIzj0DAQcDRAAEQQQ5M5xopaMzFDWSwKNtBT8x069u0Y+1 >> T0dHud/G22q8cVVh8sVcpLUortLxxesEXCddpx/EeuxP+MN/RymHTMrjoAAwCgYI >> KoZIzj0EAwIDSQAwRgIhAO+K+TFCdYxQg7aT+B3wIVa6CCYxM/mL4/WHSrwXujJy >> AiEA7UsbQT/YRKaFDPn/U9jdrJaUmKsqKJvGwN7YVaMGdeo= >> -----END CERTIFICATE REQUEST----- > > > -- > Frank Migge > http://fm4dd.com | public at frank4dd.com > From Rakesh.Parihar at encora.com Mon Aug 10 10:55:52 2020 From: Rakesh.Parihar at encora.com (Rakesh Parihar) Date: Mon, 10 Aug 2020 10:55:52 +0000 Subject: Help - Building OpenSSL FIPS for 64 bit Android Message-ID: Hi All, I am seeking help on generating FIPS-compliant OpenSSL libs for an Android native application. I am trying to build openssl-1.0.2t with the FIPS module openssl-fips-2.0.16 to support 64-bit Android devices. I have tried following the steps on the OpenSSL wiki and didn't find any specific version dependency for 64-bit Android support with FIPS. I also tried using a script found on GitHub that combines the building of the FIPS module and OpenSSL; I seem to be having issues with this as well. I have tried with many OpenSSL and FIPS versions to build for 64-bit but no luck. Any pointers regarding the 64-bit supported versions of OpenSSL and the FIPS module for Android, and any help in building a FIPS-compliant OpenSSL, would be appreciated. Thanks & Regards Rakesh Parihar Sr. Software Engineer rakesh.parihar at encora.com Ahmedabad, IN [cid:86f62ca2-6ce9-4cc1-8932-c67a8cd82e85] encora.com -------------- next part -------------- An HTML attachment was scrubbed...
URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Outlook-xuepomab.png Type: image/png Size: 10409 bytes Desc: Outlook-xuepomab.png URL: From vijay.chander at gmail.com Mon Aug 10 15:01:17 2020 From: vijay.chander at gmail.com (Vijay Chander) Date: Mon, 10 Aug 2020 08:01:17 -0700 Subject: OpenSSL FIPS for 1.1.x Message-ID: Hi, This link here below only seems to talk about 1.0.x https://wiki.openssl.org/index.php/FIPS_Library_and_Android Is there a wiki for openssl fips for openssl-1.1.0x ? Thanks, -vijay -------------- next part -------------- An HTML attachment was scrubbed... URL: From matt at openssl.org Mon Aug 10 15:08:22 2020 From: matt at openssl.org (Matt Caswell) Date: Mon, 10 Aug 2020 16:08:22 +0100 Subject: OpenSSL FIPS for 1.1.x In-Reply-To: References: Message-ID: On 10/08/2020 16:01, Vijay Chander wrote: > Hi, > > This link here below only seems to talk about 1.0.x > https://wiki.openssl.org/index.php/FIPS_Library_and_Android > > Is there a wiki for openssl fips for openssl-1.1.0x ? There is no FIPS module for the 1.1.x series. We are currently working on a new module which will be integrated into OpenSSL 3.0 (i.e. it's all one download, not two separate ones) Matt From vijay.chander at gmail.com Mon Aug 10 15:25:04 2020 From: vijay.chander at gmail.com (Vijay Chander) Date: Mon, 10 Aug 2020 08:25:04 -0700 Subject: OpenSSL FIPS for 1.1.x In-Reply-To: References: Message-ID: Thank you Matt. Our FIPS compliance vendor is recommending the following for openssl 1.1 from Oracle. https://github.com/oracle/solaris-userland/tree/master/components/openssl/openssl-fips-140/fipscanister-dev/patches Thanks, -vijay On Mon, Aug 10, 2020 at 8:08 AM Matt Caswell wrote: > > > On 10/08/2020 16:01, Vijay Chander wrote: > > Hi, > > > > This link here below only seems to talk about 1.0.x > > https://wiki.openssl.org/index.php/FIPS_Library_and_Android > > > > Is there a wiki for openssl fips for openssl-1.1.0x ?
> > There is no FIPS module for the 1.1.x series. We are currently working > on a new module which will be integrated into OpenSSL 3.0 (i.e. its all > one download, not two separate ones) > > Matt > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From matt at openssl.org Mon Aug 10 16:09:49 2020 From: matt at openssl.org (Matt Caswell) Date: Mon, 10 Aug 2020 17:09:49 +0100 Subject: OpenSSL FIPS for 1.1.x In-Reply-To: References: Message-ID: <4d9250ff-7174-a68d-057d-78e428a7e44c@openssl.org> On 10/08/2020 16:25, Vijay Chander wrote: > > Thank you Matt. > > Our FIPS compliance vendor is recommending the following for openssl 1.1 > from Oracle. > ? > https://github.com/oracle/solaris-userland/tree/master/components/openssl/openssl-fips-140/fipscanister-dev/patches I can't comment on those patches because I know nothing about them. But there is no official module from the OpenSSL Project that works with 1.1.x and certainly not one covered by our FIPS certificates. Its possible that third parties have their own modules and certificates - I don't know. But if so you'd have to seek guidance from those third parties. Matt > > Thanks, > -vijay > > On Mon, Aug 10, 2020 at 8:08 AM Matt Caswell > wrote: > > > > On 10/08/2020 16:01, Vijay Chander wrote: > > Hi, > > > > This link here below only seems to talk about 1.0.x > > https://wiki.openssl.org/index.php/FIPS_Library_and_Android > > > > Is there a wiki for openssl fips for openssl-1.1.0x ? > > There is no FIPS module for the 1.1.x series. We are currently working > on a new module which will be integrated into OpenSSL 3.0 (i.e. 
its all > one download, not two separate ones) > > Matt > From vijay.chander at gmail.com Mon Aug 10 16:17:06 2020 From: vijay.chander at gmail.com (Vijay Chander) Date: Mon, 10 Aug 2020 09:17:06 -0700 Subject: OpenSSL FIPS for 1.1.x In-Reply-To: <4d9250ff-7174-a68d-057d-78e428a7e44c@openssl.org> References: <4d9250ff-7174-a68d-057d-78e428a7e44c@openssl.org> Message-ID: Cool. Thanks. On Mon, Aug 10, 2020 at 9:09 AM Matt Caswell wrote: > On 10/08/2020 16:25, Vijay Chander wrote: > > > > Thank you Matt. > > > > Our FIPS compliance vendor is recommending the following for openssl 1.1 > > from Oracle. > > > > > https://github.com/oracle/solaris-userland/tree/master/components/openssl/openssl-fips-140/fipscanister-dev/patches > > I can't comment on those patches because I know nothing about them. But > there is no official module from the OpenSSL Project that works with > 1.1.x and certainly not one covered by our FIPS certificates. Its > possible that third parties have their own modules and certificates - I > don't know. But if so you'd have to seek guidance from those third parties. > > Matt > > > > > Thanks, > > -vijay > > > > On Mon, Aug 10, 2020 at 8:08 AM Matt Caswell > > wrote: > > > > > > > > On 10/08/2020 16:01, Vijay Chander wrote: > > > Hi, > > > > > > This link here below only seems to talk about 1.0.x > > > https://wiki.openssl.org/index.php/FIPS_Library_and_Android > > > > > > Is there a wiki for openssl fips for openssl-1.1.0x ? > > > > There is no FIPS module for the 1.1.x series. We are currently > working > > on a new module which will be integrated into OpenSSL 3.0 (i.e. its > all > > one download, not two separate ones) > > > > Matt > > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From Rakesh.Parihar at encora.com Mon Aug 10 16:43:05 2020 From: Rakesh.Parihar at encora.com (Rakesh Parihar) Date: Mon, 10 Aug 2020 16:43:05 +0000 Subject: openssl-users Digest, Vol 69, Issue 7 In-Reply-To: References: , Message-ID: Hi Mark, Thanks for your response. Let me check with the details you provided. Rakesh Parihar Sr. Software Engineer rakesh.parihar at encora.com Ahmedabad, IN encora.com ________________________________ From: Mark Minnoch Sent: 10 August 2020 21:28 To: Rakesh Parihar Subject: Fwd: openssl-users Digest, Vol 69, Issue 7 Hi Rakesh, I saw your post on the openssl-users list. We have a customer that is testing KeyPair's FIPS module Cert. #3503 (based on the OpenSSL FOM) on Android 64-bit. We made some minor changes to the Configure file (attached). You will need to follow the instructions in our Security Policy (Appendix A) to download the FIPS module distribution (from Oracle) and replace the Configure file with the attached Configure file. As you probably know, you are not allowed to make changes to the build process for the OpenSSL FOM Cert. #2398. We are enhancing the work that Oracle performed to update the OpenSSL FOM. Please let me know if you are successful using the KeyPair FIPS module in your testing. Mark J.
Minnoch Co-Founder, CISSP KeyPair Consulting +1 (805) 550-3231 mobile https://KeyPair.us https://www.linkedin.com/in/minnoch We expertly guide technology companies in achieving their FIPS 140 goals UPDATED Blog post: RIP FIPS 186-2 ---------- Forwarded message --------- From: > Date: Mon, Aug 10, 2020 at 3:56 AM Subject: openssl-users Digest, Vol 69, Issue 7 To: > Today's Topics: 1. Help - Building OpenSSL FIPS for 64 bit Android (Rakesh Parihar) From patrick.mooc at gmail.com Mon Aug 10 19:56:29 2020 From: patrick.mooc at gmail.com (Patrick Mooc) Date: Mon, 10 Aug 2020 21:56:29 +0200 Subject: OpenSSL compliance with Linux distributions In-Reply-To: <75f3161c-5b66-476b-a6ac-95c21c634e22@redhat.com> References: <377c5ab6-3315-02c3-66ca-8d92764d547e@gmail.com> <0a8278fd-15a2-766e-53b4-a3fa996664c5@gmail.com> <20200805204615.GY20623@akamai.com> <75f3161c-5b66-476b-a6ac-95c21c634e22@redhat.com> Message-ID: Hello Hubert, Thank you for your answer. I already did this test, but also without success. Best Regards, On 07/08/2020 at 18:18, Hubert Kario wrote: > On Thursday, 6 August 2020 21:24:32 CEST, Patrick Mooc wrote: >> Thank you Ben for your answer. >> >> I had a look today for this point, but I didn't find anything about >> extensions in the OpenSSL version I use (0.9.8). >> >> Maybe I have to modify the OpenSSL configuration file (openssl.conf) and >> compile OpenSSL again. I will check this tomorrow.
> changing configuration file won't affect behaviour of OpenSSL in your > situation > > I don't remember if this was the behaviour for 0.9.8, but IIRC 1.0.1 would > send > an SSLv2-compatible Client Hello only if there were any SSLv2-compatible > ciphers > > try explicitly disabling the RC4-MD5 cipher, that may help > >> Best Regards, >> >> >> On 05/08/2020 at 22:46, Benjamin Kaduk wrote: >>> On Wed, Aug 05, 2020 at 10:28:26PM +0200, Patrick Mooc wrote: ... >> >> >> > From patrick.mooc at gmail.com Mon Aug 10 19:57:23 2020 From: patrick.mooc at gmail.com (Patrick Mooc) Date: Mon, 10 Aug 2020 21:57:23 +0200 Subject: OpenSSL compliance with Linux distributions In-Reply-To: References: <377c5ab6-3315-02c3-66ca-8d92764d547e@gmail.com> <0a8278fd-15a2-766e-53b4-a3fa996664c5@gmail.com> Message-ID: Hello, I tried to follow your procedure but I saw that I don't have the same folders. That made me realize I forgot to mention an important point concerning my problem: the Debian distribution I use is not on a PC; it is an embedded one. It is a Qt project (also an old version of course, version 4.7). I made some new tests today and it seems that there is only one case in which the SSLv2 Client Hello packet is sent. It happens on a SOAP call in a PHP script. Thus I have to see how to constrain this SOAP call not to use the SSLv2 protocol. I guess that the PHP library used is also an old one; I have to check this. When this piece of code is not called, Client Hello packets are correctly sent with the TLSv1.0 protocol. Best Regards, On 07/08/2020 at 18:33, Dan Kegel wrote: > Suggestion: get the source for the exact same version of openssl your > system uses, and rebuild it with sslv2 disabled. > > e.g. > > sudo apt install build-essential devscripts > sudo apt build-dep openssl > mkdir tmp > cd tmp > apt source openssl > cd openssl-* > gedit debian/rules   # see below > debuild -b -uc -us > cd ..
> sudo apt install *.deb > > While editing debian/rules in gedit, change the line > > CONFARGS ?= --prefix=/usr --openssldir=/usr/lib/ssl > --libdir=lib/$(DEB_HOST_MULTIARCH) no-idea no-mdc2 no-rc5 no-zlib > no-ssl3 enable-unit-test no-ssl3-method enable-rfc3779 enable-cms > > to add the no-ssl2 argument, or something like that. See > https://wiki.openssl.org/index.php/Compilation_and_Installation > > But be careful! You probably want to have the original system .deb > files for its openssl in an origopenssl dir > so you can reinstall them with 'sudo dpkg -i origopenssl/*.deb' when > this breaks. > > - Dan > > > On Wed, Aug 5, 2020 at 1:28 PM Patrick Mooc > wrote: > > Thank you very much Kyle for your quick and clear answer. > > The reason why I want to upgrade the OpenSSL version is that I > encounter a problem with one frame exchanged between client and server. > > This frame is the first packet sent from client to server (Client > Hello packet) and the protocol used for this packet is SSLv2. > I don't understand why, because I force the use of TLSv1 (in > the ssl.conf file as well as in the application software), but only for this first > exchange packet, SSLv2 is used. All other packets correctly use > TLSv1.0 as configured. > > I have also searched for a way to force the use of TLSv1.0 ciphers in > the OpenSSL configuration and in the application software, but I didn't > succeed in doing so. > > That's why I had the idea of upgrading the OpenSSL version to avoid the > use of the SSLv2 protocol. > > > Thus, if you have any idea of how to solve my problem without > upgrading the OpenSSL version or Linux distribution, it would be very > nice. > > > Thank you in advance for your answer. > > Best Regards, > > > On 05/08/2020 at 22:10, Kyle Hamilton wrote: >> It is never recommended to upgrade your distribution's version of >> OpenSSL with one you compile yourself. Doing so will often break >> all software installed by the distribution that uses it.
>> >> If you need functionality from newer versions of OpenSSL, your >> options are to upgrade your OS version, or to install a local >> copy of OpenSSL and manually compile and link local copies of the >> applications that need the newer functionality. >> >> (Newer versions of OpenSSL do not maintain the same Application >> Binary Interface (ABI), which means that binaries compiled >> against older versions will not correctly operate or dynamically >> link against newer libraries. Also, distributions such as Debian >> can modify the ABI in such a way that nothing distributed >> directly by openssl.org can be compiled to >> meet it without source code modification.) >> >> -Kyle H >> >> On Wed, Aug 5, 2020, 14:49 Patrick Mooc > > wrote: >> >> Hello, >> >> I'm using an old version of OpenSSL (0.9.8g) on an old Linux >> Debian >> distribution (Lenny). >> >> Is it possible to upgrade the OpenSSL version without upgrading the >> Linux Debian >> distribution? >> If yes, up to which version of OpenSSL? >> >> Are all versions of OpenSSL compliant with all Linux Debian >> distributions? >> >> >> Thank you in advance for your answer. >> >> Best Regards, >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From openssl-users at dukhovni.org Tue Aug 11 01:19:06 2020 From: openssl-users at dukhovni.org (Viktor Dukhovni) Date: Mon, 10 Aug 2020 21:19:06 -0400 Subject: OpenSSL compliance with Linux distributions In-Reply-To: References: <377c5ab6-3315-02c3-66ca-8d92764d547e@gmail.com> <0a8278fd-15a2-766e-53b4-a3fa996664c5@gmail.com> <20200805204615.GY20623@akamai.com> Message-ID: <20200811011906.GG40202@straasha.imrryr.org> On Thu, Aug 06, 2020 at 09:24:32PM +0200, Patrick Mooc wrote: > Thank you Ben for your answer. > > I had a look today for this point, but I didn't find anything about > extensions in the OpenSSL version I use (0.9.8). Unless I am mistaken, OpenSSL 0.9.8 should have support for the SNI extension.
It also supports using SSL_CTX_set_options() to set the SSL_OP_NO_SSLv2 option, which is likely the simplest way to ensure that SSLv2 is not used. These days one should probably also disable SSLv3 (via SSL_OP_NO_SSLv3), but even with that, there are likely some unaddressed security defects in OpenSSL 0.9.8 that make it unwise to continue using it in general. -- Viktor. From kurt at roeckx.be Wed Aug 12 15:38:27 2020 From: kurt at roeckx.be (Kurt Roeckx) Date: Wed, 12 Aug 2020 17:38:27 +0200 Subject: Lack of documentation for OPENSSL_ia32cap_P In-Reply-To: <311e4923-2cab-8ba9-0588-9cdc4ef90086@wisemo.com> References: <311e4923-2cab-8ba9-0588-9cdc4ef90086@wisemo.com> Message-ID: <20200812153827.GA863966@roeckx.be> On Thu, Jul 23, 2020 at 02:35:28AM +0200, Jakob Bohm via openssl-users wrote: > The OPENSSL_ia32cap_P variable, its bitfields and the code that sets > it (in assembler) seem to have no clear documentation. Have you seen the OPENSSL_ia32cap manpage? Kurt From rajprudvi98 at gmail.com Thu Aug 13 16:02:22 2020 From: rajprudvi98 at gmail.com (prudvi raj) Date: Thu, 13 Aug 2020 21:32:22 +0530 Subject: 'OPENSSLDIR' undeclared in openssl 1.1.1g Message-ID: Hi, I couldn't find where this macro is #defined; previously, in 1.0.2, it was defined in opensslconf.h. So I am getting this error during compilation: openssl/crypto/x509/x509_def.c:17:12: error: 'OPENSSLDIR' undeclared (first use in this function). This error is resolved if OPENSSLDIR is #defined in opensslconf.h as /usr/local/ssl (the default, by the way). Can someone help me out with this? Why isn't OPENSSLDIR #defined in any .h files, or am I missing something? Used: ./Configure no-threads no-dso no-shared no-zlib no-asm no-engine no-bf no-camellia no-cast no-md2 no-md4 no-mdc2 no-ocsp no-rc2 no-rc5 no-hw no-idea no-srp gcc --with-rand-seed=none Thanks, Prud. -------------- next part -------------- An HTML attachment was scrubbed...
URL: From dv at vollmann.ch Thu Aug 13 18:19:10 2020 From: dv at vollmann.ch (Detlef Vollmann) Date: Thu, 13 Aug 2020 20:19:10 +0200 Subject: NULL ciphers Message-ID: <614e16b2-6a41-78dd-9422-de462844bae4@vollmann.ch> Hello, with the following commands: openssl s_server -accept 18010 -cert srv.crt -key test.key \ -CAfile testca.crt -debug -cipher 'NULL-SHA256' -dtls1_2 openssl s_client -connect localhost:18010 -cert clnt.crt \ -key test.key -CAfile testca.crt -debug \ -cipher 'COMPLEMENTOFALL:eNULL' -dtls1_2 NULL ciphers work fine with OpenSSL 1.0.2g. With OpenSSL 1.1.1g the handshake fails on the server side with 140295725053248:error:14102438:SSL routines:dtls1_read_bytes:tlsv1 \ alert internal error:../ssl/record/rec_layer_d1.c:611:SSL alert number \ 80 Even on OpenSSL 1.1.1g 'openssl ciphers -v NULL' lists NULL-SHA256. I'm only using s_server and s_client as tests, but I have the same problem in my application if I use SSL_CTX_set_cipher_list(sslCtx, "NULL-SHA256"); What can I do to get NULL ciphers for no encryption working? Detlef From bkaduk at akamai.com Thu Aug 13 18:20:31 2020 From: bkaduk at akamai.com (Benjamin Kaduk) Date: Thu, 13 Aug 2020 11:20:31 -0700 Subject: NULL ciphers In-Reply-To: <614e16b2-6a41-78dd-9422-de462844bae4@vollmann.ch> References: <614e16b2-6a41-78dd-9422-de462844bae4@vollmann.ch> Message-ID: <20200813182030.GJ20623@akamai.com> On Thu, Aug 13, 2020 at 08:19:10PM +0200, Detlef Vollmann wrote: > Hello, > > with the following commands: > > openssl s_server -accept 18010 -cert srv.crt -key test.key \ > -CAfile testca.crt -debug -cipher 'NULL-SHA256' -dtls1_2 > > openssl s_client -connect localhost:18010 -cert clnt.crt \ > -key test.key -CAfile testca.crt -debug \ > -cipher 'COMPLEMENTOFALL:eNULL' -dtls1_2 > > NULL ciphers work fine with OpenSSL 1.0.2g. 
> > With OpenSSL 1.1.1g the handshake fails on the server side with > 140295725053248:error:14102438:SSL routines:dtls1_read_bytes:tlsv1 \ > alert internal error:../ssl/record/rec_layer_d1.c:611:SSL alert number \ > 80 > > Even on OpenSSL 1.1.1g 'openssl ciphers -v NULL' lists NULL-SHA256. > > I'm only using s_server and s_client as tests, but I have the same > problem in my application if I use > SSL_CTX_set_cipher_list(sslCtx, "NULL-SHA256"); > > What can I do to get NULL ciphers for no encryption working? -cipher 'COMPLEMENTOFALL:eNULL@SECLEVEL=0' From dv at vollmann.ch Thu Aug 13 18:34:27 2020 From: dv at vollmann.ch (Detlef Vollmann) Date: Thu, 13 Aug 2020 20:34:27 +0200 Subject: NULL ciphers In-Reply-To: <20200813182030.GJ20623@akamai.com> References: <614e16b2-6a41-78dd-9422-de462844bae4@vollmann.ch> <20200813182030.GJ20623@akamai.com> Message-ID: On 2020-08-13 20:20, Benjamin Kaduk wrote: > On Thu, Aug 13, 2020 at 08:19:10PM +0200, Detlef Vollmann wrote: >> Hello, >> >> with the following commands: >> >> openssl s_server -accept 18010 -cert srv.crt -key test.key \ >> -CAfile testca.crt -debug -cipher 'NULL-SHA256' -dtls1_2 >> >> openssl s_client -connect localhost:18010 -cert clnt.crt \ >> -key test.key -CAfile testca.crt -debug \ >> -cipher 'COMPLEMENTOFALL:eNULL' -dtls1_2 >> >> NULL ciphers work fine with OpenSSL 1.0.2g.
> > -cipher 'COMPLEMENTOFALL:eNULL@SECLEVEL=0' Wow, great :-) Thanks a lot for this quick reply, it actually works :-) Detlef From mindentropy at gmail.com Thu Aug 13 20:03:30 2020 From: mindentropy at gmail.com (Gautam Bhat) Date: Fri, 14 Aug 2020 01:33:30 +0530 Subject: Help with Error: data too large for modulus Message-ID: Hi, I am trying to do a walkthrough of verifying a certificate signature. 1) I have pulled the signature as follows: openssl asn1parse -in cert.pem -out cert.sig -noout -strparse 638 The offset of 638 is because asn1parse of the cert.pem file produces: 625:d=2 hl=2 l= 9 prim: OBJECT :sha256WithRSAEncryption 636:d=2 hl=2 l= 0 prim: NULL 638:d=1 hl=4 l= 257 prim: BIT STRING 2) I have pulled the public key of the CA certificate as follows: openssl x509 -in ca_cert.pem -pubkey -noout > ca_cert.pubkey 3) I am trying to decrypt the signature file to get the hash as follows: openssl rsautl -verify -pubin -inkey ca_cert.pubkey -in cert.sig -asn1parse Unfortunately I get an error in the above step as: 140155781719872:error:04067084:rsa routines:rsa_ossl_public_decrypt:data too large for modulus:crypto/rsa/rsa_ossl.c:548: The size of the cert.sig file is 256 bytes. I am not sure where I am going wrong and would need some assistance. Thanks, Gautam. From pgnet.dev at gmail.com Thu Aug 13 20:48:57 2020 From: pgnet.dev at gmail.com (PGNet Dev) Date: Thu, 13 Aug 2020 13:48:57 -0700 Subject: matching openssl's enc ciphers to php's openssl functions' ciphers: where's "chacha20-poly1305"?
Message-ID: <127700f0-b39c-bb41-94fb-81f218f70543@gmail.com> I'm deploying a php app that makes use of php's openssl functions https://www.php.net/manual/en/ref.openssl.php atm, I've php -v PHP 7.4.8 (cli) (built: Jul 9 2020 08:57:23) ( NTS ) openssl version OpenSSL 1.1.1g FIPS 21 Apr 2020 The php app config defaults to an encryption method of $config['cipher_method'] = 'DES-EDE3-CBC'; for encrypting a session pwd, This key is used to encrypt the users imap password which is stored in the session record. I'd like to change that to a CHACHA20 variant. As listed by https://www.php.net/manual/en/function.openssl-get-cipher-methods.php the list of php-supported openssl ciphers includes [92] => chacha20 [93] => chacha20-poly1305 double checking available encryption ciphers @ openssl openssl enc -ciphers only lists -chacha20 not the add'l, -chacha20-poly1305 why is this^^ variant not shown? am I comparing apples & oranges here, looking at the wrong lists? perhaps just aliases for a singular cipher? From andreaerdna at libero.it Fri Aug 14 05:59:53 2020 From: andreaerdna at libero.it (Andrea Giudiceandrea) Date: Fri, 14 Aug 2020 07:59:53 +0200 Subject: Wrong signature type error trying to connect to gibs.earthdata.nasa.gov on Ubuntu 20.04 Message-ID: Hi all, on Ubuntu 20.04 LTS 64 bit, with OpenSSL version 1.1.1f, it is not possible to connect to a popular GIS OGC server at gibs.earthdata.nasa.gov:443 using OpenSSL or cUrl or Wget default parameters. The OpenSSL 1.1.1f package available for Ubuntu 20.04 is build with the "-DOPENSSL_TLS_SECURITY_LEVEL=2" option. The relevant errors are: "SSL routines:tls12_check_peer_sigalg:wrong signature type:../ssl/t1_lib.c:1145" and "SSL3 alert write:fatal:handshake failure". 
On the same machine it is possible to connect to that server using Firefox version 79.0 (the reported connection security properties are "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384, 256 bit keys, TLS 1.2") or gnutls-cli version 3.6.13 (the reported connection security properties are "(TLS1.2-X.509)-(ECDHE-SECP384R1)-(RSA-SHA1)-(AES-256-GCM)"). The connection is also possible on Ubuntu 18.04 (OpenSSL 1.1.1 without the "-DOPENSSL_TLS_SECURITY_LEVEL=2" build option). I already know the source of the issue (the server uses SHA1 as peer signing digest which is not allowed at SECURITY LEVEL = 2) and how to workaround it (setting SECLEVEL=1 as a cli option or in openssl.cnf), but I'd like to know if it is due to a misconfigured / non compliant server or to a bug in OpenSSL. In the former case, I'd like to know some technical specifications to refer to in order to submit the issue to the gibs.earthdata.nasa.gov system administrators so that they can understand the problem and configure the server correctly. Best regards. Andrea Giudiceandrea Note: see the following excerpts from the connection logs: ************** $ openssl s_client -state -connect gibs.earthdata.nasa.gov:443 CONNECTED(00000003) SSL_connect:before SSL initialization SSL_connect:SSLv3/TLS write client hello SSL_connect:SSLv3/TLS write client hello SSL_connect:SSLv3/TLS read server hello depth=2 C = US, O = "Entrust, Inc.", OU = See www.entrust.net/legal-terms, OU = "(c) 2009 Entrust, Inc. - for authorized use only", CN = Entrust Root Certification Authority - G2 verify return:1 depth=1 C = US, O = "Entrust, Inc.", OU = See www.entrust.net/legal-terms, OU = "(c) 2012 Entrust, Inc. 
- for authorized use only", CN = Entrust Certification Authority - L1K verify return:1 depth=0 C = US, ST = Maryland, L = Greenbelt, O = NASA (National Aeronautics and Space Administration), CN = gibs.earthdata.nasa.gov verify return:1 SSL_connect:SSLv3/TLS read server certificate SSL3 alert write:fatal:handshake failure SSL_connect:error in error 139920655459648:error:1414D172:SSL routines:tls12_check_peer_sigalg:wrong signature type:../ssl/t1_lib.c:1145: [...] --- No client certificate CA names sent Server Temp Key: ECDH, P-384, 384 bits --- SSL handshake has read 5443 bytes and written 322 bytes Verification: OK --- New, (NONE), Cipher is (NONE) Server public key is 2048 bit Secure Renegotiation IS supported Compression: NONE Expansion: NONE No ALPN negotiated SSL-Session: ??? Protocol? : TLSv1.2 ??? Cipher??? : 0000 ??? Session-ID: 12B3427E761029EDED05CB26B3DD854ADE7B0D68061C2515A60A8A297AC968DB ??? Session-ID-ctx: ??? Master-Key: ??? PSK identity: None ??? PSK identity hint: None ??? SRP username: None ??? Start Time: 1597339233 ??? Timeout?? : 7200 (sec) ??? Verify return code: 0 (ok) ??? Extended master secret: no --- ************** ************** $ openssl s_client -connect gibs.earthdata.nasa.gov:443 -cipher DEFAULT at SECLEVEL=1 CONNECTED(00000003) depth=2 C = US, O = "Entrust, Inc.", OU = See www.entrust.net/legal-terms, OU = "(c) 2009 Entrust, Inc. - for authorized use only", CN = Entrust Root Certification Authority - G2 verify return:1 depth=1 C = US, O = "Entrust, Inc.", OU = See www.entrust.net/legal-terms, OU = "(c) 2012 Entrust, Inc. - for authorized use only", CN = Entrust Certification Authority - L1K verify return:1 depth=0 C = US, ST = Maryland, L = Greenbelt, O = NASA (National Aeronautics and Space Administration), CN = gibs.earthdata.nasa.gov verify return:1 [...] 
--- No client certificate CA names sent Peer signing digest: SHA1 Peer signature type: RSA Server Temp Key: ECDH, P-384, 384 bits --- SSL handshake has read 5503 bytes and written 483 bytes Verification: OK --- New, TLSv1.2, Cipher is ECDHE-RSA-AES256-GCM-SHA384 Server public key is 2048 bit Secure Renegotiation IS supported Compression: NONE Expansion: NONE No ALPN negotiated SSL-Session: Protocol : TLSv1.2 Cipher : ECDHE-RSA-AES256-GCM-SHA384 Session-ID: A48C668A8154E1A81137873D8D7D6CCF77B4C31729074C8C37A67B4A1CE9B155 Session-ID-ctx: Master-Key: D0147A71395D3336D998B1499630E4D4BA965F1BC9D8E526EF232A7D15ECC7989AE3A8844693D628C47B76A7BA8BFC4B PSK identity: None PSK identity hint: None SRP username: None Start Time: 1597384544 Timeout : 7200 (sec) Verify return code: 0 (ok) Extended master secret: no --- ************** From tm at t8m.info Fri Aug 14 06:41:59 2020 From: tm at t8m.info (Tomas Mraz) Date: Fri, 14 Aug 2020 08:41:59 +0200 Subject: Wrong signature type error trying to connect to gibs.earthdata.nasa.gov on Ubuntu 20.04 In-Reply-To: References: Message-ID: <9f2851e0-099a-4646-ba77-a38b0498890d@t8m.info> It is not a bug in OpenSSL, and it is not a misconfiguration or non-compliance on the server side either. Basically, to enhance security, the default seclevel on Debian and Ubuntu was raised to 2, which doesn't allow SHA1 signatures, as they are weak. The server apparently doesn't support anything stronger, which indicates that it is some older implementation, but that doesn't necessarily mean it is non-compliant. It is just less capable. However, SHA1 signatures are currently regarded as seriously weakened, so it would certainly be a very good idea to upgrade/fix the server to support SHA2-based signatures. Tomáš Mráz 14. 8.
2020 8:00, 8:00, Andrea Giudiceandrea via openssl-users napsal/a: >Hi all, >on Ubuntu 20.04 LTS 64 bit, with OpenSSL version 1.1.1f, it is not >possible to connect to a popular GIS OGC server at >gibs.earthdata.nasa.gov:443 using OpenSSL or cUrl or Wget default >parameters. The OpenSSL 1.1.1f package available for Ubuntu 20.04 is >build with the "-DOPENSSL_TLS_SECURITY_LEVEL=2" option. > >The relevant errors are: "SSL routines:tls12_check_peer_sigalg:wrong >signature type:../ssl/t1_lib.c:1145" and "SSL3 alert >write:fatal:handshake failure". > >On the same machine it is possible to connect to that server using >Firefox version 79.0 (the reported connection security properties are >"TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384, 256 bit keys, TLS 1.2") or >gnutls-cli version 3.6.13 (the reported connection security properties >are "(TLS1.2-X.509)-(ECDHE-SECP384R1)-(RSA-SHA1)-(AES-256-GCM)"). >The connection is also possible on Ubuntu 18.04 (OpenSSL 1.1.1 without >the "-DOPENSSL_TLS_SECURITY_LEVEL=2" build option). > >I already know the source of the issue (the server uses SHA1 as peer >signing digest which is not allowed at SECURITY LEVEL = 2) and how to >workaround it (setting SECLEVEL=1 as a cli option or in openssl.cnf), >but I'd like to know if it is due to a misconfigured / non compliant >server or to a bug in OpenSSL. > >In the former case, I'd like to know some technical specifications to >refer to in order to submit the issue to the gibs.earthdata.nasa.gov >system administrators so that they can understand the problem and >configure the server correctly. > >Best regards. 
> >Andrea Giudiceandrea
> >Note:
> >see the following excerpts from the connection logs:
> >**************
>$ openssl s_client -state -connect gibs.earthdata.nasa.gov:443
>CONNECTED(00000003)
>SSL_connect:before SSL initialization
>SSL_connect:SSLv3/TLS write client hello
>SSL_connect:SSLv3/TLS write client hello
>SSL_connect:SSLv3/TLS read server hello
>depth=2 C = US, O = "Entrust, Inc.", OU = See
>www.entrust.net/legal-terms, OU = "(c) 2009 Entrust, Inc. - for
>authorized use only", CN = Entrust Root Certification Authority - G2
>verify return:1
>depth=1 C = US, O = "Entrust, Inc.", OU = See
>www.entrust.net/legal-terms, OU = "(c) 2012 Entrust, Inc. - for
>authorized use only", CN = Entrust Certification Authority - L1K
>verify return:1
>depth=0 C = US, ST = Maryland, L = Greenbelt, O = NASA (National
>Aeronautics and Space Administration), CN = gibs.earthdata.nasa.gov
>verify return:1
>SSL_connect:SSLv3/TLS read server certificate
>SSL3 alert write:fatal:handshake failure
>SSL_connect:error in error
>139920655459648:error:1414D172:SSL
>routines:tls12_check_peer_sigalg:wrong signature
>type:../ssl/t1_lib.c:1145:
>[...]
>---
>No client certificate CA names sent
>Server Temp Key: ECDH, P-384, 384 bits
>---
>SSL handshake has read 5443 bytes and written 322 bytes
>Verification: OK
>---
>New, (NONE), Cipher is (NONE)
>Server public key is 2048 bit
>Secure Renegotiation IS supported
>Compression: NONE
>Expansion: NONE
>No ALPN negotiated
>SSL-Session:
>    Protocol  : TLSv1.2
>    Cipher    : 0000
>    Session-ID:
>12B3427E761029EDED05CB26B3DD854ADE7B0D68061C2515A60A8A297AC968DB
>    Session-ID-ctx:
>    Master-Key:
>    PSK identity: None
>    PSK identity hint: None
>    SRP username: None
>    Start Time: 1597339233
>    Timeout   : 7200 (sec)
>    Verify return code: 0 (ok)
>    Extended master secret: no
>---
>**************
>
>**************
>$ openssl s_client -connect gibs.earthdata.nasa.gov:443 -cipher
>DEFAULT@SECLEVEL=1
>CONNECTED(00000003)
>depth=2 C = US, O = "Entrust, Inc.", OU = See
>www.entrust.net/legal-terms, OU = "(c) 2009 Entrust, Inc. - for
>authorized use only", CN = Entrust Root Certification Authority - G2
>verify return:1
>depth=1 C = US, O = "Entrust, Inc.", OU = See
>www.entrust.net/legal-terms, OU = "(c) 2012 Entrust, Inc. - for
>authorized use only", CN = Entrust Certification Authority - L1K
>verify return:1
>depth=0 C = US, ST = Maryland, L = Greenbelt, O = NASA (National
>Aeronautics and Space Administration), CN = gibs.earthdata.nasa.gov
>verify return:1
>[...]
>---
>No client certificate CA names sent
>Peer signing digest: SHA1
>Peer signature type: RSA
>Server Temp Key: ECDH, P-384, 384 bits
>---
>SSL handshake has read 5503 bytes and written 483 bytes
>Verification: OK
>---
>New, TLSv1.2, Cipher is ECDHE-RSA-AES256-GCM-SHA384
>Server public key is 2048 bit
>Secure Renegotiation IS supported
>Compression: NONE
>Expansion: NONE
>No ALPN negotiated
>SSL-Session:
>    Protocol  : TLSv1.2
>    Cipher    : ECDHE-RSA-AES256-GCM-SHA384
>    Session-ID:
>A48C668A8154E1A81137873D8D7D6CCF77B4C31729074C8C37A67B4A1CE9B155
>    Session-ID-ctx:
>    Master-Key:
>D0147A71395D3336D998B1499630E4D4BA965F1BC9D8E526EF232A7D15ECC7989AE3A8844693D628C47B76A7BA8BFC4B
>    PSK identity: None
>    PSK identity hint: None
>    SRP username: None
>    Start Time: 1597384544
>    Timeout   : 7200 (sec)
>    Verify return code: 0 (ok)
>    Extended master secret: no
>---
>**************

From andreaerdna at libero.it  Fri Aug 14 08:35:16 2020
From: andreaerdna at libero.it (Andrea Giudiceandrea)
Date: Fri, 14 Aug 2020 10:35:16 +0200
Subject: Wrong signature type error trying to connect to gibs.earthdata.nasa.gov on Ubuntu 20.04
In-Reply-To: <9f2851e0-099a-4646-ba77-a38b0498890d@t8m.info>
References: <9f2851e0-099a-4646-ba77-a38b0498890d@t8m.info>
Message-ID: <249c6690-8bf3-2f82-8a98-cf9478612c87@libero.it>

Hi Tomáš,
thank you very much for the clarification.

Best regards.

Andrea

On 14/08/2020 08:41, Tomas Mraz wrote:
> The server apparently doesn't support them which indicates that it is
> some older implementation but that doesn't necessarily mean it is
> non-compliant. It is just less capable.

From pgnet.dev at gmail.com  Fri Aug 14 18:32:03 2020
From: pgnet.dev at gmail.com (PGNet Dev)
Date: Fri, 14 Aug 2020 11:32:03 -0700
Subject: matching openssl's enc ciphers to php's openssl functions' ciphers: where's "chacha20-poly1305"?
In-Reply-To: 
References: <127700f0-b39c-bb41-94fb-81f218f70543@gmail.com>
Message-ID: <81bcf465-08be-3071-9ea1-307000bcabb6@gmail.com>

On 8/13/20 3:03 PM, Thomas Dwyer III wrote:
> I think you want "openssl ciphers" rather than "openssl enc -ciphers". Per the "enc" man page:
>
>     The enc program does not support authenticated encryption modes like
>     CCM and GCM, and will not support such modes in the future.
>
> chacha20-poly1305 is an authenticated cipher. OpenSSL supports it but the enc command line utility does not.

got it. thx!

From jhb at FreeBSD.org  Mon Aug 17 17:55:17 2020
From: jhb at FreeBSD.org (John Baldwin)
Date: Mon, 17 Aug 2020 10:55:17 -0700
Subject: Testing TLS 1.0 with OpenSSL master
Message-ID: <2b273716-f4a8-3752-70dc-79415ed64455@FreeBSD.org>

Sadly, I need to be able to test some KTLS changes I have in FreeBSD that support legacy clients still using TLS 1.0.
After seeing the note in CHANGES.md about TLS 1.0 signature algs no longer being permitted in the default security level, I tried using '-auth_level=0' to lower the security level, but this didn't work for either the client or server. Adding '@SECLEVEL=0' to the ciphers explicitly does work for the server, but I still see an odd regression with the client. Specifically, if you run the following server (OpenSSL 1.1.1) openssl s_server -cert cert.pem -key cert.key -accept 443 -msg -www -tls1 and then use the following OpenSSL 3.0.0 client: env OPENSSL_CONF=noetm.cnf openssl s_client -host oe1 -port 443 -msg -tls1 -cipher 'AES256-SHA at SECLEVEL=0' (where noetm.conf is a config file to disable ETM and force the use of MTE) then the connection hangs after the handshake. Last bits of server output: >>> TLS 1.0, Handshake [length 00aa], NewSessionTicket 04 00 00 a6 00 00 1c 20 00 a0 b8 1e 37 f3 ab 34 1f 42 05 b0 cf 7c 86 91 2d 20 10 99 e3 8c 61 f1 7a f7 0a d4 1a fb e8 11 76 b2 66 3a 0e bd 73 54 cf e7 f5 6c 01 f2 c6 bd 17 b4 0a 42 c0 b5 d1 87 22 ae 21 f0 2a 6a 79 3c 2e 33 71 8b e2 c8 ff 6c 8c 2a 34 58 ca 2d e6 52 7b 0a 3a 17 1b 51 3d 8d de f0 b9 0f 6b 4c 94 fd 49 fb 74 fa 0c 9e b2 32 98 dc 28 ca 66 01 ba 1c 24 a7 80 38 65 ac dd dc 7c 9f 1a 16 73 0f 57 51 73 d4 17 35 5e 71 1c 32 10 6b b7 b6 1f 8b 7b e6 88 c2 05 73 5f 95 26 50 6a 08 7f 04 66 1e b8 5f db 51 >>> ??? [length 0005] 14 03 01 00 01 >>> TLS 1.0, ChangeCipherSpec [length 0001] 01 >>> ??? 
[length 0005] 16 03 01 00 30 >>> TLS 1.0, Handshake [length 0010], Finished 14 00 00 0c 35 b9 fa 86 7f 32 75 62 71 c5 16 a3 Last bits of client output: <<< TLS 1.0, Handshake [length 00aa], NewSessionTicket 04 00 00 a6 00 00 1c 20 00 a0 b8 1e 37 f3 ab 34 1f 42 05 b0 cf 7c 86 91 2d 20 10 99 e3 8c 61 f1 7a f7 0a d4 1a fb e8 11 76 b2 66 3a 0e bd 73 54 cf e7 f5 6c 01 f2 c6 bd 17 b4 0a 42 c0 b5 d1 87 22 ae 21 f0 2a 6a 79 3c 2e 33 71 8b e2 c8 ff 6c 8c 2a 34 58 ca 2d e6 52 7b 0a 3a 17 1b 51 3d 8d de f0 b9 0f 6b 4c 94 fd 49 fb 74 fa 0c 9e b2 32 98 dc 28 ca 66 01 ba 1c 24 a7 80 38 65 ac dd dc 7c 9f 1a 16 73 0f 57 51 73 d4 17 35 5e 71 1c 32 10 6b b7 b6 1f 8b 7b e6 88 c2 05 73 5f 95 26 50 6a 08 7f 04 66 1e b8 5f db 51 <<< ??? [length 0005] 14 03 01 00 01 <<< ??? [length 0005] 16 03 01 00 30 I get the same hang if I run the client against a 'master' server (the 'master' server requires an explicit -cipher as well). So I guess two questions: 1) Is 'auth_level' supposed to work for this? The CHANGES.md change references SSL_CTX_set_security_level and openssl(1) claims that '-auth_level' changes this? Is the CHANGES.md entry wrong and only SECLEVEL=0 for the ciphers work by design? 2) The hang when using a 'master' client seems like a regression? -- John Baldwin From roderickklein at xs4all.nl Mon Aug 17 22:55:40 2020 From: roderickklein at xs4all.nl (Roderick Klein) Date: Tue, 18 Aug 2020 00:55:40 +0200 Subject: Adding support for OS/2 back to Open SSL 1.1.1. Message-ID: <5F3B0AEC.2050901@xs4all.nl> Hello, New to this list. I am looking at compiling OpenSSL 1.1.1. on OS/2 with GCC. Would OpenSSL be willing to accept patches to re-enable OS/2 in the OpenSSL ? 
Best regards,

Roderick Klein
President OS/2 VOICE

From jb-openssl at wisemo.com  Tue Aug 18 03:58:31 2020
From: jb-openssl at wisemo.com (Jakob Bohm)
Date: Tue, 18 Aug 2020 05:58:31 +0200
Subject: Software that uses OpenSSL
In-Reply-To: <1795517526B2928E526CC788@[192.168.1.156]>
References: <1795517526B2928E526CC788@[192.168.1.156]>
Message-ID: <5c358472-59b4-c162-27d8-268a7b4f044c@wisemo.com>

On 06/08/2020 22:17, Quanah Gibson-Mount wrote:
>
> --On Thursday, August 6, 2020 1:21 PM -0700 Dan Kegel
> wrote:
>
>> lists 861 packages, belonging to something like 400 projects, that
>> depend on openssl....
>
> Unfortunately, due to Debian's odd take on the OpenSSL license, many
> projects that can use OpenSSL are compiled against alternative SSL
> libraries, so this can miss a lot of potential applications (OpenLDAP,
> for example).

It's not an odd take. The SSLeay license explicitly bans releasing OpenSSL code under the GPL (as part of SSLeay's own copyleft provisions). GPL version 2 explicitly prohibits OS-bundled GPL code from linking to OS-bundled non-GPL code, so this can be done only by violating the SSLeay license. So no OS distribution can include GPL 2 code using OpenSSL 1.x.x.

GPL version 2 explicitly allows independently distributed copies of GPL 2 programs to link to any OS-bundled libs, including OS-bundled OpenSSL (this clause was intended to allow linking to stuff like the Microsoft or Sun OS libraries).

Some GPL version 2 programs include an extra license permission to link against OpenSSL even when those GPL version 2 programs are bundled with the OS.

> Hopefully with OpenSSL 3.0 and later, this won't be as much of an issue.

Does the Apache 2.0 license allow redistributing code under GPL 2?

> --Quanah
>
> Quanah Gibson-Mount
> Product Architect
> Symas Corporation
> Packaged, certified, and supported LDAP solutions powered by OpenLDAP:

Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.
http://www.wisemo.com Transformervej 29, 2860 Soborg, Denmark. Direct +45 31 13 16 10 This public discussion message is non-binding and may contain errors. WiseMo - Remote Service Management for PCs, Phones and Embedded From jb-openssl at wisemo.com Tue Aug 18 04:10:36 2020 From: jb-openssl at wisemo.com (Jakob Bohm) Date: Tue, 18 Aug 2020 06:10:36 +0200 Subject: OpenSSL compliance with Linux distributions In-Reply-To: <1596658744.20854.63.camel@taygeta.com> References: <377c5ab6-3315-02c3-66ca-8d92764d547e@gmail.com> <1596658744.20854.63.camel@taygeta.com> Message-ID: <7660312f-2c82-516f-6939-62f0a7c1be53@wisemo.com> The key thing to do is to make those client applications not request the ssl23-method from OpenSSL 0.9.x . ssl23 explicitly requests this backward-compatibility feature while OpenSSL 3.x.x apparently deleted the ability to respond to this "historic" TLS hello format, which is also sent by some not-that-old web browsers. On 05/08/2020 22:19, Skip Carter wrote: > Patrick, > > I am also supporting servers running very old Linux systems and I can > tell you that YES you can upgrade from source. I have built > openssl-1.1.1 from source on such systems with no problems. > > On Wed, 2020-08-05 at 21:49 +0200, Patrick Mooc wrote: >> Hello, >> >> I'm using an old version of OpenSSL (0.9.8g) on an old Linux Debian >> distribution (Lenny). >> >> Is it possible to upgrade OpenSSL version without upgrading Linux >> Debian >> distribution ? >> If yes, up to which version of OpenSSL ? >> >> Are all versions of OpenSSL compliant with all Linux Debian >> distribution ? >> >> >> Thank you in advance for your answer. >> >> Best Regards, >> Enjoy Jakob -- Jakob Bohm, CIO, Partner, WiseMo A/S. http://www.wisemo.com Transformervej 29, 2860 Soborg, Denmark. Direct +45 31 13 16 10 This public discussion message is non-binding and may contain errors. 
WiseMo - Remote Service Management for PCs, Phones and Embedded

From matt at openssl.org  Tue Aug 18 09:56:56 2020
From: matt at openssl.org (Matt Caswell)
Date: Tue, 18 Aug 2020 10:56:56 +0100
Subject: Adding support for OS/2 back to Open SSL 1.1.1.
In-Reply-To: <5F3B0AEC.2050901@xs4all.nl>
References: <5F3B0AEC.2050901@xs4all.nl>
Message-ID: 

On 17/08/2020 23:55, Roderick Klein wrote:
> New to this list. I am looking at compiling OpenSSL 1.1.1. on OS/2 with
> GCC. Would OpenSSL be willing to accept patches to re-enable OS/2 in the
> OpenSSL ?

Such patches are unlikely to be accepted into 1.1.1 since that is a stable release. 3.0 is still in development, but with a beta due in about 9 days, at which point we hit feature freeze and no new platforms will be allowed.

I have no idea if anyone actually uses OS/2. Generally speaking, such platforms are community supported - so if there is someone willing to do the port and it's not egregiously invasive to the current code, then we tend to accept such PRs. But if the community support then goes away, it may get removed again from future versions.

Matt

From Tobias.Wolf at t-systems.com  Tue Aug 18 09:58:10 2020
From: Tobias.Wolf at t-systems.com (Tobias.Wolf at t-systems.com)
Date: Tue, 18 Aug 2020 09:58:10 +0000
Subject: cross compiling on linux for macos
Message-ID: 

Hi guys,

Can somebody give me a hint for the following topic, please? I want to cross-compile the latest OpenSSL v1.1 on Linux (CentOS 7) targeting 32/64-bit macOS.

Thanks in advance

Tobi

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From matt at openssl.org  Tue Aug 18 10:11:40 2020
From: matt at openssl.org (Matt Caswell)
Date: Tue, 18 Aug 2020 11:11:40 +0100
Subject: OpenSSL compliance with Linux distributions
In-Reply-To: <7660312f-2c82-516f-6939-62f0a7c1be53@wisemo.com>
References: <377c5ab6-3315-02c3-66ca-8d92764d547e@gmail.com>
 <1596658744.20854.63.camel@taygeta.com>
 <7660312f-2c82-516f-6939-62f0a7c1be53@wisemo.com>
Message-ID: 

On 18/08/2020 05:10, Jakob Bohm via openssl-users wrote:
> The key thing to do is to make those client applications not request the
> ssl23-method from OpenSSL 0.9.x .
> ssl23 explicitly requests this backward-compatibility feature while
> OpenSSL 3.x.x apparently deleted the
> ability to respond to this "historic" TLS hello format, which is also
> sent by some not-that-old web browsers.

This capability has not been deleted from OpenSSL 3.0. It is still able to respond to SSLv2 format ClientHellos. Although testing that does reveal a bug (which may actually be the same one as reported by John Baldwin in the thread "Testing TLS 1.0 with OpenSSL master").

Matt

> On 05/08/2020 22:19, Skip Carter wrote:
>> Patrick,
>>
>> I am also supporting servers running very old Linux systems and I can
>> tell you that YES you can upgrade from source. I have built
>> openssl-1.1.1 from source on such systems with no problems.
>>
>> On Wed, 2020-08-05 at 21:49 +0200, Patrick Mooc wrote:
>>> Hello,
>>>
>>> I'm using an old version of OpenSSL (0.9.8g) on an old Linux Debian
>>> distribution (Lenny).
>>>
>>> Is it possible to upgrade OpenSSL version without upgrading Linux
>>> Debian distribution ?
>>> If yes, up to which version of OpenSSL ?
>>>
>>> Are all versions of OpenSSL compliant with all Linux Debian
>>> distribution ?
>>>
>>> Thank you in advance for your answer.
>>>
>>> Best Regards,
>>>

> Enjoy
>
> Jakob

From matt at openssl.org  Tue Aug 18 16:49:32 2020
From: matt at openssl.org (Matt Caswell)
Date: Tue, 18 Aug 2020 17:49:32 +0100
Subject: Testing TLS 1.0 with OpenSSL master
In-Reply-To: <2b273716-f4a8-3752-70dc-79415ed64455@FreeBSD.org>
References: <2b273716-f4a8-3752-70dc-79415ed64455@FreeBSD.org>
Message-ID: 

On 17/08/2020 18:55, John Baldwin wrote:
> 1) Is 'auth_level' supposed to work for this? The CHANGES.md change
> references SSL_CTX_set_security_level and openssl(1) claims that
> '-auth_level' changes this? Is the CHANGES.md entry wrong and only
> SECLEVEL=0 for the ciphers work by design?

openssl(1) says this about auth_level:

"Set the certificate chain authentication security level to I<level>. The authentication security level determines the acceptable signature and public key strength when verifying certificate chains."

However, the problem you are seeing is about *handshake* signatures using SHA1 - so auth_level is not appropriate.

>
> 2) The hang when using a 'master' client seems like a regression?
>

Fix for this issue here: https://github.com/openssl/openssl/pull/12670

Matt

From swapna at gigamon.com  Tue Aug 18 17:50:43 2020
From: swapna at gigamon.com (Swapna Pinnamaraju)
Date: Tue, 18 Aug 2020 17:50:43 +0000
Subject: FIPS canister questions
In-Reply-To: 
References: 
Message-ID: 

Hi everyone.

We are running CentOS 7.8 and the OpenSSL that comes with it, 'OpenSSL 1.0.2k-fips'. We have built the latest FOM 2.0 and now we want to incorporate the output of the FOM build into our CentOS 7.8 system. So we have two questions.

1. How do we install the output of the FOM build (fipscanister.o et al) on the CentOS system such that the existing OpenSSL will start using the new canister?

2. How do we verify that libcrypto is indeed using the new fipscanister.o?

Thanks in advance.

Swapna Pinnamaraju | Sr.
Staff Software Engineer
Gigamon | www.gigamon.com
Address: 3300 Olcott Street, Santa Clara CA 95054

This message may contain confidential and privileged information. If it has been sent to you in error, please reply to advise the sender of the error and then immediately delete it. If you are not the intended recipient, do not read, copy, disclose or otherwise use this message. The sender disclaims any liability for such unauthorized use. NOTE that all incoming emails sent to Gigamon email accounts will be archived and may be scanned by us and/or by external service providers to detect and prevent threats to our systems, investigate illegal or inappropriate behavior, and/or eliminate unsolicited promotional emails ("spam").

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From tm at t8m.info  Tue Aug 18 18:05:38 2020
From: tm at t8m.info (Tomas Mraz)
Date: Tue, 18 Aug 2020 20:05:38 +0200
Subject: FIPS canister questions
In-Reply-To: 
References: 
Message-ID: <3568b867-1cff-4d8b-816c-9a146019a784@t8m.info>

Hello,

there is no way to do that. The CentOS OpenSSL build does not allow using the upstream FIPS object module. In theory you could replace the CentOS openssl library with an upstream 1.0.2 library built in a way that allows using the fipscanister.o; however, it would require non-trivial patching of the upstream OpenSSL 1.0.2 code to make it compatible with the rest of the system.

Tomáš Mráz

On 18. 8. 2020 19:51, Swapna Pinnamaraju wrote:
>Hi everyone.
>
>We are running CentOS 7.8 and the OpenSSL that comes with it, 'OpenSSL
>1.0.2k-fips'. We have built the latest FOM 2.0 and now we want to
>incorporate the output of the FOM build into our CentOS 7.8 system. So
>we have two questions.
>
>1. How do we install the output of the FOM build (fipscanister.o et
>al) on the CentOS system such that the existing OpenSSL will start
>using the new canister?
>
>2. How do we verify that libcrypto is indeed using the new
>fipscanister.o?
> >Thanks in advance. > >Swapna Pinnamaraju | Sr. Staff Software Engineer >Gigamon | www.gigamon.com >Address: 3300 Olcott Street, Santa Clara CA 95054 > > >This message may contain confidential and privileged information. If it >has been sent to you in error, please reply to advise the sender of the >error and then immediately delete it. If you are not the intended >recipient, do not read, copy, disclose or otherwise use this message. >The sender disclaims any liability for such unauthorized use. NOTE that >all incoming emails sent to Gigamon email accounts will be archived and >may be scanned by us and/or by external service providers to detect and >prevent threats to our systems, investigate illegal or inappropriate >behavior, and/or eliminate unsolicited promotional emails ("spam"). From rousskov at measurement-factory.com Tue Aug 18 21:31:55 2020 From: rousskov at measurement-factory.com (Alex Rousskov) Date: Tue, 18 Aug 2020 17:31:55 -0400 Subject: SSL_ERROR_WANT_TIME: Pause SSL_connect to fetch intermediate certificates Message-ID: <2181c43c-2374-5aa1-ba1d-80168f350de8@measurement-factory.com> Hello, TLDR: How can we pause the SSL_connect() progress and return to its caller after the origin certificate is fetched/decrypted, but before OpenSSL starts validating it (so that we can fetch the missing intermediate certificates without threads or blocking I/O)? ASYNC_pause_job() does not seem to be the right answer. My team is working on an HTTP proxy Squid. Squid does not have the luxury of knowing what secure servers it will be talking to (on behalf of its clients). Thus, it cannot simply preload _intermediate_ certificates for servers that do not supply them in their TLS handshakes (e.g. https://incomplete-chain.badssl.com/ ) The standard solution for the missing intermediate certificate problem is to fetch the missing intermediate certificates upon their discovery. 
AIA (RFC 5280) is a mechanism that applications can use to learn about the location of the missing intermediate certificates. Popular browsers fetch what they miss: If you go to the above URL, your browser will probably be happy! As you know, OpenSSL provides the certificate verification callback that can discover that the origin certificate chain is incomplete. An application using threads or blocking I/O can probably "pause" its verification callback execution, fetch the intermediate certificates, and then complete validation before happily returning to the SSL_connect() caller. Life is easy when you can use threads or block thousands of concurrent transactions! Unfortunately, Squid can use neither threads nor blocking I/O. Upon discovery of the missing intermediate certificates, Squid has to return to the SSL_connect() caller, fetch the certificates, and then resume SSL_connect() with the fetched certificates available to OpenSSL. However, the certificate verification callback does not have a "please call me later" return code. We can only return "yes, valid" or "no, invalid" results. We found an ugly workaround for the above problem. Here is a simplified description: Using OpenSSL BIO API, Squid parses the server certificate during the TLS handshake, discovers the missing intermediate certificates, pauses TLS I/O (this results in SSL_connect() returning back to the caller with SSL_ERROR_WANT_READ), fetches the missing certificates, supplies them to OpenSSL, and then resumes SSL_connect(). This hack has worked for many years. Now comes TLS v1.3. Squid code can no longer parse the certificates before OpenSSL because they are contained in the _encrypted_ part of the server handshake. Thus, Squid cannot discover what is missing and fetch that for OpenSSL before certificate validation starts. What can we do? How can we pause the SSL_connect() progress after the origin certificate is fetched but before it is validated? 
I am aware of the ASYNC_pause_job() and related async APIs in OpenSSL. If I interpret related documentation, discussions, and our test results correctly, that API is not meant as the correct answer for our problem. Today, abusing that API will probably work. Tomorrow, internal/unpredictable OpenSSL changes might break our Squid enhancements beyond repair, as detailed below.

Somewhat counter-intuitively, the OpenSSL async API is meant for activities that can work correctly _without_ becoming asynchronous (i.e. without being paused to temporarily give way to other activities). Squid cannot fetch the missing intermediate certificates without pausing TLS negotiations with the server...

The async API was added to support custom OpenSSL engines, not application callbacks. The API does not guarantee that an ASYNC_pause_job() will actually pause processing and return to the SSL_connect() caller! That will only happen if OpenSSL internal code does not call ASYNC_block_pause(), effectively converting all subsequent ASYNC_pause_job() calls into a no-op. That pause-nullification was added to work around deadlocks, but it effectively places the API off limits to user-level code that cannot control the timing of those ASYNC_block_pause() calls.

Squid could kill the current TLS session (and its TCP connection), fetch the missing certificates, and then retry from scratch, but that is a very ugly (unreliable, wasteful, and noisy) solution.

Can you think of another trick?

Thank you,

Alex.

P.S. Squid does not support BoringSSL, but BoringSSL's SSL_ERROR_WANT_CERTIFICATE_VERIFY result of the certificate validation callback seemingly addresses our use case. I do not know whether OpenSSL decision makers would be open to adding something along those lines and decided to ask for existing solutions here before proposing adding SSL_ERROR_WANT_TIME :-).
From norm.green at gemtalksystems.com  Wed Aug 19 01:01:29 2020
From: norm.green at gemtalksystems.com (Norm Green)
Date: Tue, 18 Aug 2020 18:01:29 -0700
Subject: Checking if a key can sign / verify in 3.0
Message-ID: 

In 3.0 I see this new function in evp.h:

    int EVP_PKEY_can_sign(const EVP_PKEY *pkey);

Is there an equivalent way to check if a key can verify? I'm not seeing an obvious way to do that. Previously I used EVP_PKEY_meth_get_verifyctx(), but that call is now deprecated in 3.0.

thanks,

Norm Green

From matt at openssl.org  Wed Aug 19 09:29:19 2020
From: matt at openssl.org (Matt Caswell)
Date: Wed, 19 Aug 2020 10:29:19 +0100
Subject: SSL_ERROR_WANT_TIME: Pause SSL_connect to fetch intermediate certificates
In-Reply-To: <2181c43c-2374-5aa1-ba1d-80168f350de8@measurement-factory.com>
References: <2181c43c-2374-5aa1-ba1d-80168f350de8@measurement-factory.com>
Message-ID: <9bad0c98-e55e-17fe-fc4d-c73f32f32676@openssl.org>

On 18/08/2020 22:31, Alex Rousskov wrote:
> As you know, OpenSSL provides the certificate verification callback that
> can discover that the origin certificate chain is incomplete. An
> application using threads or blocking I/O can probably "pause" its
> verification callback execution, fetch the intermediate certificates,
> and then complete validation before happily returning to the
> SSL_connect() caller. Life is easy when you can use threads or block
> thousands of concurrent transactions!

I suspect this is the way most people do it.

> What can we do? How can we pause the SSL_connect() progress after the
> origin certificate is fetched but before it is validated?

We should really have a proper callback for this purpose. PRs welcome!
(Doesn't help you right now though).

> I am aware of the ASYNC_pause_job() and related async APIs in OpenSSL.
> If I interpret related documentation, discussions, and our test results
> correctly, that API is not meant as the correct answer for our problem.
> Today, abusing that API will probably work.
Tomorrow, > internal/unpredictable OpenSSL changes might break our Squid > enhancements beyond repair as detailed below. > > Somewhat counter-intuitively, the OpenSSL async API is meant for > activities that can work correctly _without_ becoming asynchronous (i.e. > without being paused to temporary give way to other activities). Squid > cannot fetch the missing intermediate certificates without pausing TLS > negotiations with the server... > > The async API was added to support custom OpenSSL engines, not > application callbacks. The API does not guarantee that an > ASYNC_pause_job() will actually pause processing and return to the > SSL_connect() caller! That will only happen if OpenSSL internal code > does not call ASYNC_block_pause(), effectively converting all subsequent > ASYNC_pause_job() calls into a no-op. That pause-nullification was added > to work around deadlocks, but it effectively places the API off limits > to user-level code that cannot control the timing of those > ASYNC_block_pause() calls. The async API is meant for any scenario where user code may want to perform async processing. Its design is NOT restricted to engines - although that is certainly where it is normally used. However there are no assumptions made anywhere that it will be exclusively restricted to engines. ASYNC_block_pause() is intended as a user level API, and a quick search of the codebase reveals that the only place we use it internally is in our tests - it does not appear in the library code. The intention is that you should be able to rely on being inside a job in any callbacks, if you've started the connection inside one. "Somewhat counter-intuitively, the OpenSSL async API is meant for activities that can work correctly _without_ becoming asynchronous (i.e. without being paused to temporary give way to other activities)" I have no idea what you mean by this. The whole point of ASYNC_pause_job() is to temporarily give way to other activities. 
One issue you might encounter with the ASYNC APIs is that they are not available on some less-common platforms. Basically anything without setcontext/swapcontext support (e.g. IIRC I think android may fall into this category). > Can you think of another trick? One possibility that springs to mind (which is also an ugly hack) is to defer the validation of the certificates. So, you have a verify callback that always says "ok". But any further reads on the underlying BIO always return with "retry" until such time as any intermediate certificates have been fetched and the chain has been verified "for real". The main problem I can see with this approach is there is no easy way to send the right alert back to the server in the event of failure. > P.S. Squid does not support BoringSSL, but BoringSSL's > SSL_ERROR_WANT_CERTIFICATE_VERIFY result of the certificate validation > callback seemingly addresses our use case. I do not know whether OpenSSL > decision makers would be open to adding something along those lines and > decided to ask for existing solutions here before proposing adding > SSL_ERROR_WANT_TIME :-). I'd definitely be open to adding it - although it wouldn't be backported to a stable branch. Matt From rousskov at measurement-factory.com Wed Aug 19 19:35:54 2020 From: rousskov at measurement-factory.com (Alex Rousskov) Date: Wed, 19 Aug 2020 15:35:54 -0400 Subject: SSL_ERROR_WANT_TIME: Pause SSL_connect to fetch intermediate certificates In-Reply-To: <9bad0c98-e55e-17fe-fc4d-c73f32f32676@openssl.org> References: <2181c43c-2374-5aa1-ba1d-80168f350de8@measurement-factory.com> <9bad0c98-e55e-17fe-fc4d-c73f32f32676@openssl.org> Message-ID: <8aa099cd-cb86-190c-f054-ff32ac826706@measurement-factory.com> On 8/19/20 5:29 AM, Matt Caswell wrote: > We should really have a proper callback for this purpose. PRs welcome! > (Doesn't help you right now though). Thank you for a prompt, thoughtful, and useful response. 
I believe that we are on the same page as far as async API overall intentions, and I am also very glad to hear that the OpenSSL team may welcome an addition of a proper callback to address Squid's use case. I know Squid's needs are not unique. I do not yet know whether I can contribute (or facilitate contribution of) such an enhancement, but this green light is meaningful progress already!

>> "Somewhat counter-intuitively, the OpenSSL async API is meant for
>> activities that can work correctly _without_ becoming asynchronous (i.e.
>> without being paused to temporarily give way to other activities)"

> I have no idea what you mean by this.

Sorry for not detailing this accusation. I was worried that my email was already too long/verbose... I will detail it below.

> The whole point of ASYNC_pause_job() is to temporarily give way to
> other activities.

Yes, giving way to other activities is the whole point of the async API optimization. Unfortunately, being only an optimization, the API is not enough when the callback MUST "give way to other activities". AFAICT, OpenSSL does not guarantee that ASYNC_pause_job() in a callback will actually "give way to other activities", because OpenSSL does not guarantee that no engine or OpenSSL-native code holds an ASYNC_block_pause() "lock" while calling the callback.

The engine code that the async API supports well may look like this[1]:

    myengine() {
        while (!something_happened())
            ASYNC_pause_job(); // application MAY get control here
        ... use something ...
    }

The callback code that the async API does not support looks like this:

    mycallback() {
        if (!something_happened())
            ASYNC_pause_job(); // application MUST get control here
        assert(something_happened());
        ... use something ...
    }

Please note that replacing "if" with "while" in mycallback() would make the compiled code identical to myengine() but would not solve the problem: instead of the failed assertion, the callback would get into an infinite loop...
The callback _relies_ on the application making progress (e.g., fetching the missing intermediate certificates or declaring a fetch failure before resuming SSL_connect()). This callback cannot work correctly without the application actually getting control. That is why the pausing call comments are different: MAY vs. MUST. Does this clarify what I meant? Do you agree that OpenSSL async API is not suitable for callbacks that _require_ ASYNC_pause_job() to return control to the application? [1] This myengine() example is inspired by your explanation at https://mta.openssl.org/pipermail/openssl-dev/2015-October/003031.html > ASYNC_block_pause() ... does not appear in the library code True, but it did appear there in the past, right? I am looking at commit 625146d as an example. Those calls were removed more than a year later in 75e2c87 AFAICT, but I see no guarantee that they will not reappear again. And even if OpenSSL now has a policy against using ASYNC_block_pause() internally, or a policy against holding an ASYNC_block_pause() "lock" while calling any callback, some custom engine might do that at the "wrong" for the above mycallback() moment, right? If you think that fears about something inside OpenSSL/engines preventing our callback from returning control to the application are unfounded, then using async API may be the best long-term solution for Squid. Short-term, it does not work "as is" because OpenSSL STACKSIZE appears to be too small (leading to weird crashes that disappear if we increase STACKSIZE from 32768 to 524288 bytes), but perhaps we can somehow hack around that. > One possibility that springs to mind (which is also an ugly hack) is to > defer the validation of the certificates. So, you have a verify callback > that always says "ok". But any further reads on the underlying BIO > always return with "retry" until such time as any intermediate > certificates have been fetched and the chain has been verified "for > real". 
The main problem I can see with this approach is there is no easy > way to send the right alert back to the server in the event of failure. We were also concerned that X509_verify_cert() is not enough to fully mimic the existing OpenSSL certificate validation procedure because the internal OpenSSL ssl_verify_cert_chain() does not just call X509_verify_cert(). It also does some DANE-related manipulations, for example. Are those fears unfounded? In other words, is calling X509_verify_cert() directly always enough to make the right certificate validation decision? Thanks a lot, Alex. From simonkbaby at gmail.com Wed Aug 19 20:50:26 2020 From: simonkbaby at gmail.com (SIMON BABY) Date: Wed, 19 Aug 2020 13:50:26 -0700 Subject: query on dns resolver Message-ID: I was looking at the openssl 1.0.2j code and trying to find how it resolves the dns domain name IP address from name. 1. Does it use the OS supported utilities like nslookup, gethostip etc? 2. Do we need a recursive dns server IP address to define in resolv.conf? 3. Can I know the APIs and files where I can start looking (for the dns resolution). Thank you for your time. Regards Simon -------------- next part -------------- An HTML attachment was scrubbed... URL: From christopher.j.zurcher at intel.com Wed Aug 19 23:26:12 2020 From: christopher.j.zurcher at intel.com (Zurcher, Christopher J) Date: Wed, 19 Aug 2020 23:26:12 +0000 Subject: Assembly build issues for UEFI with nasm and RtlVirtualUnwind Message-ID: Within the TianoCore/EDK2 project for UEFI, the prescribed assembler is NASM. In order build the 64-bit assembly config of OpenSSL with .nasm files, it appears that the Windows API function RtlVirtualUnwind is required. For my current implementation I have provided a stub function to satisfy the build but I would like to remove this function altogether. Is there a config flag I am missing that would let me build with .nasm files and without RtlVirtualUnwind? 
As far as I can tell, I have to set perlasm_scheme to nasm to get the correct output format, but this also forces the win64 flag to be set, which always includes __imp_RtlVirtualUnwind. Additionally, I am avoiding AVX instructions in 64- and 32-bit configs by hiding the nasm executable from the perl assembly generators (to skip the version check), but it would be helpful to have some sort of flag to disable AVX. Thanks, Christopher Zurcher From beldmit at gmail.com Thu Aug 20 08:59:01 2020 From: beldmit at gmail.com (Dmitry Belyavsky) Date: Thu, 20 Aug 2020 11:59:01 +0300 Subject: query on dns resolver In-Reply-To: References: Message-ID: OpenSSL uses gethostbyname/gethostbyaddr grep -r gethost . will give you some clues On Wed, Aug 19, 2020 at 11:51 PM SIMON BABY wrote: > I was looking at the openssl 1.0.2j code and trying to find how it > resolves the dns domain name IP address from name. > > 1. Does it use the OS supported utilities like nslookup, gethostip etc? > 2. Do we need a recursive dns server IP address to define in resolv.conf? > 3. Can I know the APIs and files where I can start looking (for the dns > resolution). > > Thank you for your time. > > Regards > Simon > -- SY, Dmitry Belyavsky -------------- next part -------------- An HTML attachment was scrubbed... URL: From matt at openssl.org Thu Aug 20 14:37:48 2020 From: matt at openssl.org (Matt Caswell) Date: Thu, 20 Aug 2020 15:37:48 +0100 Subject: SSL_ERROR_WANT_TIME: Pause SSL_connect to fetch intermediate certificates In-Reply-To: <8aa099cd-cb86-190c-f054-ff32ac826706@measurement-factory.com> References: <2181c43c-2374-5aa1-ba1d-80168f350de8@measurement-factory.com> <9bad0c98-e55e-17fe-fc4d-c73f32f32676@openssl.org> <8aa099cd-cb86-190c-f054-ff32ac826706@measurement-factory.com> Message-ID: <6b83f533-3edb-200b-b5b5-4647e897ebea@openssl.org> On 19/08/2020 20:35, Alex Rousskov wrote: > Does this clarify what I meant? 
Do you agree that OpenSSL async API is > not suitable for callbacks that _require_ ASYNC_pause_job() to return > control to the application? Yes, it clarifies what you meant. And, yes, it's true that strictly speaking that *could* happen. ASYNC_block_pause() was introduced to handle the problem where we are holding a lock and therefore must not return control to the user without releasing that lock. As a general rule we want to keep the sections of code that perform work under a lock to an absolute minimum. It would not seem like a great idea to me to call user callbacks from libssl while holding such a lock. We have no idea what those callbacks are going to do, and which APIs they will call. The chances of a deadlock occurring seem very high under those circumstances, unless restrictions are placed on what the callback can do, and those restrictions are very clearly documented. So, yes, you are right. But in practice I'm not sure how much I'd really worry about this theoretical restriction. That's of course for you to decide. > If you think that fears about something inside OpenSSL/engines > preventing our callback from returning control to the application are > unfounded, then using async API may be the best long-term solution for > Squid. Short-term, it does not work "as is" because OpenSSL STACKSIZE > appears to be too small (leading to weird crashes that disappear if we > increase STACKSIZE from 32768 to 524288 bytes), but perhaps we can > somehow hack around that. Hmm. Yes, this is a problem with the current implementation. The selection of STACKSIZE is somewhat arbitrary. It would be nice if the stack size grew as required, but I'm not sure if that's even technically possible. A workaround might be for us to expose some API to set it - but exposing such internal details is also quite horrible. > > >> One possibility that springs to mind (which is also an ugly hack) is to >> defer the validation of the certificates.
So, you have a verify callback >> that always says "ok". But any further reads on the underlying BIO >> always return with "retry" until such time as any intermediate >> certificates have been fetched and the chain has been verified "for >> real". The main problem I can see with this approach is there is no easy >> way to send the right alert back to the server in the event of failure. > > We were also concerned that X509_verify_cert() is not enough to fully > mimic the existing OpenSSL certificate validation procedure because the > internal OpenSSL ssl_verify_cert_chain() does not just call > X509_verify_cert(). It also does some DANE-related manipulations, for > example. Are those fears unfounded? In other words, is calling > X509_verify_cert() directly always enough to make the right certificate > validation decision? > Does squid use the DANE APIs? If not I'm not sure it makes much difference. In any case the "manipulation" seems limited to setting DANE information in the X509_STORE_CTX which presumably could be replicated by: X509_STORE_CTX_set0_dane(ctx, SSL_get0_dane()); However, I'm not really the person to ask about the DANE implementation. Maybe Viktor Dukhovni will chip in with his thoughts. Matt From openssl-users at dukhovni.org Thu Aug 20 16:41:53 2020 From: openssl-users at dukhovni.org (Viktor Dukhovni) Date: Thu, 20 Aug 2020 12:41:53 -0400 Subject: query on dns resolver In-Reply-To: References: Message-ID: <20200820164153.GN86346@straasha.imrryr.org> On Thu, Aug 20, 2020 at 11:59:01AM +0300, Dmitry Belyavsky wrote: > OpenSSL uses gethostbyname/gethostbyaddr Also getaddrinfo(3), I hope in preference to the obsolete interfaces. There is no explicit use of DNS in OpenSSL, and many OpenSSL applications open their own TCP connections, and then ask OpenSSL to perform a handshake over an already connected socket, in which case OpenSSL does no name lookups at all. -- Viktor. 
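To make the last point concrete, a hedged sketch (not from the thread; the function name and error handling are illustrative) of the usual pattern: the application resolves the hostname and connects the socket itself via getaddrinfo(3), and OpenSSL is only handed the already-connected fd.

```c
#include <string.h>
#include <sys/types.h>
#include <sys/socket.h>
#include <netdb.h>
#include <unistd.h>

/* Resolve host:port through the OS resolver and return a connected TCP
 * socket, or -1.  OpenSSL never sees the hostname; the caller just does
 * SSL_set_fd(ssl, fd) and SSL_connect(ssl) on the result. */
int tcp_connect(const char *host, const char *port)
{
    struct addrinfo hints, *res, *ai;
    int fd = -1;

    memset(&hints, 0, sizeof(hints));
    hints.ai_socktype = SOCK_STREAM;
    if (getaddrinfo(host, port, &hints, &res) != 0)
        return -1;                      /* resolution failed */
    for (ai = res; ai != NULL; ai = ai->ai_next) {
        fd = socket(ai->ai_family, ai->ai_socktype, ai->ai_protocol);
        if (fd >= 0 && connect(fd, ai->ai_addr, ai->ai_addrlen) == 0)
            break;                      /* connected */
        if (fd >= 0)
            close(fd);
        fd = -1;
    }
    freeaddrinfo(res);
    return fd;
}
```

Whether the lookup goes to DNS, a hosts file, or anything else is entirely the operating system's business, which is exactly the point above.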
From dv at vollmann.ch Thu Aug 20 19:28:49 2020 From: dv at vollmann.ch (Detlef Vollmann) Date: Thu, 20 Aug 2020 21:28:49 +0200 Subject: Surprising behaviour of DTLSv1_listen Message-ID: <1e8b6cf2-167b-a62c-5ca4-544d379aeb4e@vollmann.ch> Hello, if I do: // ctx is set up with certificate, key and cookie callbacks BIO *bio = BIO_new_dgram(sock, BIO_NOCLOSE); SSL *ssl = SSL_new(ctx); SSL_set_bio(ssl, bio, bio); DTLS_set_link_mtu(ssl, 1000); SSL_set_options(ssl, SSL_OP_COOKIE_EXCHANGE); SSL_set_accept_state(ssl); SSL_accept(ssl); then the MTU setting works as expected, i.e. the ServerHello is split into two DTLS handshake fragments. But if I do: BIO *bio = BIO_new_dgram(sock, BIO_NOCLOSE); SSL *ssl = SSL_new(ctx); SSL_set_bio(ssl, bio, bio); DTLS_set_link_mtu(ssl, 1000); SSL_set_options(ssl, SSL_OP_COOKIE_EXCHANGE); SSL_set_accept_state(ssl); DTLSv1_listen(ssl, addr); SSL_accept(ssl); then the ServerHello is sent as a single packet (>1500 bytes). I think the reason is that DTLSv1_listen() internally calls SSL_clear(). I find this pretty surprising. I personally don't really care too much, as I'll do my own cookie handshake without DTLSv1_listen() before I call SSL_accept(), but I thought I'd report it anyway. Detlef From dv at vollmann.ch Thu Aug 20 19:44:05 2020 From: dv at vollmann.ch (Detlef Vollmann) Date: Thu, 20 Aug 2020 21:44:05 +0200 Subject: Real MTU problems with BIO pair Message-ID: <4af435e0-5ef3-9d46-389d-cb0f2021b688@vollmann.ch> Hello, if I create a BIO pair with BIO_new_bio_pair(&int_bio, 0, &ext_bio_, 0); then I tried to use SSL_set_mtu(), DTLS_set_link_mtu() and SSL_CTX_set_max_send_fragment(ctx, 1000). None of them gave me an error, but also none of them worked: the ServerHello was still sent as a single packet (>1500 bytes).
If I create the BIO pair using BIO_new_bio_pair(&int_bio, 1000, &ext_bio_, 1000); then the ServerHello is fragmented, but not into DTLS handshake fragments, but just into separate UDP packets, that neither s_client nor my own client can work with. Is there any way to set the maximum fragment size for DTLS handshake with a BIO pair? Thanks, Detlef From openssl-users at dukhovni.org Fri Aug 21 07:54:42 2020 From: openssl-users at dukhovni.org (Viktor Dukhovni) Date: Fri, 21 Aug 2020 03:54:42 -0400 Subject: query on dns resolver In-Reply-To: References: <20200820164153.GN86346@straasha.imrryr.org> Message-ID: <20200821075442.GA37427@straasha.imrryr.org> On Thu, Aug 20, 2020 at 11:56:45PM +0200, David von Oheimb wrote: > OpenSSL has one function, namely BIO_lookup_ex(), that uses DNS lookup > functions. Since commit 28a0841bf58e3813b2e07ad22f19484308e2f70a of > 02 Feb 2016 it uses getaddrinfo(). Right, but even this is not "DNS lookup". It is hostname + service name resolution via the operating system's mechanisms for resolving hostnames and service names. This may, or may not, involve DNS lookups. There is no code in OpenSSL that *directly* performs DNS lookups. -- Viktor. From dv at vollmann.ch Fri Aug 21 15:05:51 2020 From: dv at vollmann.ch (Detlef Vollmann) Date: Fri, 21 Aug 2020 17:05:51 +0200 Subject: Real MTU problems with BIO pair In-Reply-To: <4af435e0-5ef3-9d46-389d-cb0f2021b688@vollmann.ch> References: <4af435e0-5ef3-9d46-389d-cb0f2021b688@vollmann.ch> Message-ID: <9355095a-34da-006c-8951-ba850f1dba07@vollmann.ch> On 2020-08-20 21:44, Detlef Vollmann wrote: > if I create a BIO pair with > ? BIO_new_bio_pair(&int_bio, 0, &ext_bio_, 0); > > then I tried to use SSL_set_mtu(), DTLS_set_link_mtu() > and SSL_CTX_set_max_send_fragment(ctx, 1000). > None of them gave me an error, but also none of them worked: > the ServerHello was still sent as a single packet (>1500 bytes). 
It turned out that this was not true: it was actually two packets, but they were written to the BIO together before SSL_accept() returned, so my side of the BIO pair got one single big packet on a BIO_read() and sent it to the socket and the wire as one UDP packet. > If I create the BIO pair using > BIO_new_bio_pair(&int_bio, 1000, &ext_bio_, 1000); > then the ServerHello is fragmented, but not into DTLS > handshake fragments, but just into separate UDP packets, > that neither s_client nor my own client can work with. > > Is there any way to set the maximum fragment size for > DTLS handshake with a BIO pair? One solution is to set the MTU and the int_bio size to exactly the same value. Another option would be to use BIO_set_callback_ex() and send the data to the socket after each BIO_write() into int_bio, but the problem here is that BIO_set_data() cannot be used as the ptr is already used for the peer address. Detlef From bkaduk at akamai.com Fri Aug 21 17:48:23 2020 From: bkaduk at akamai.com (Benjamin Kaduk) Date: Fri, 21 Aug 2020 10:48:23 -0700 Subject: Real MTU problems with BIO pair In-Reply-To: <9355095a-34da-006c-8951-ba850f1dba07@vollmann.ch> References: <4af435e0-5ef3-9d46-389d-cb0f2021b688@vollmann.ch> <9355095a-34da-006c-8951-ba850f1dba07@vollmann.ch> Message-ID: <20200821174823.GS20623@akamai.com> On Fri, Aug 21, 2020 at 05:05:51PM +0200, Detlef Vollmann wrote: > On 2020-08-20 21:44, Detlef Vollmann wrote: > > > > Is there any way to set the maximum fragment size for > > DTLS handshake with a BIO pair? > One solution is to set the MTU and the int_bio size to > exactly the same value.
-Ben From norm.green at gemtalksystems.com Fri Aug 21 17:59:12 2020 From: norm.green at gemtalksystems.com (Norm Green) Date: Fri, 21 Aug 2020 10:59:12 -0700 Subject: Checking if a key can sign / verify in 3.0 In-Reply-To: References: Message-ID: <38cec410-c733-c4a4-aa0b-faf0d85d1ace@gemtalksystems.com> No comments on my question? Should there not be a way to know if an EVP_PKEY is valid for verification besides attempting the verify operation and getting a weird error code? Doesn't seem like too much to expect since we already have EVP_PKEY_can_sign(). I'm happy to implement EVP_PKEY_can_verify() with some assurance such a PR would be accepted. Norm Green On 8/18/2020 6:01 PM, Norm Green wrote: > In 3.0 I see this new function in evp.h : > > int EVP_PKEY_can_sign(const EVP_PKEY *pkey); > > Is there an equivalent way to check if a key can verify? I'm not > seeing an obvious way to do that. Previously I used > EVP_PKEY_meth_get_verifyctx() but that call is now deprecated in 3.0. > > thanks, > > Norm Green > From dv at vollmann.ch Fri Aug 21 18:32:55 2020 From: dv at vollmann.ch (Detlef Vollmann) Date: Fri, 21 Aug 2020 20:32:55 +0200 Subject: Real MTU problems with BIO pair In-Reply-To: <20200821174823.GS20623@akamai.com> References: <4af435e0-5ef3-9d46-389d-cb0f2021b688@vollmann.ch> <9355095a-34da-006c-8951-ba850f1dba07@vollmann.ch> <20200821174823.GS20623@akamai.com> Message-ID: <6d563672-6f1d-8e11-aabb-ad70df873f05@vollmann.ch> On 2020-08-21 19:48, Benjamin Kaduk wrote: > On Fri, Aug 21, 2020 at 05:05:51PM +0200, Detlef Vollmann wrote: >> On 2020-08-20 21:44, Detlef Vollmann wrote: >>> >>> Is there any way to set the maximum fragment size for >>> DTLS handshake with a BIO pair? >> One solution is to set the MTU and the int_bio size to >> exactly the same value.
>> Another option would be to use BIO_set_callback_ex() and send >> the data to the socket after each BIO_write() into int_bio, >> but the problem here is that BIO_set_data() cannot be used >> as the ptr is already used for the peer address. > > There's always EX_DATA... Thanks for the pointer. Using my own hash table would also be an option. But in the meantime I found that I can define my own BIO_METHOD, so this is probably my preferred option. Detlef From rajprudvi98 at gmail.com Mon Aug 24 10:18:23 2020 From: rajprudvi98 at gmail.com (prudvi raj) Date: Mon, 24 Aug 2020 15:48:23 +0530 Subject: Failure of ..new() for CTX objects in openssl 1.1.1g Message-ID: Hi, we are upgrading our codebase to openssl 1.1.1g from openssl 1.0.2k. Previously, all the ctx objects were allocated with "calloc": typedef struct CryptWrapMDContext_t { #ifdef OPENSSL EVP_MD_CTX evpMDCtx; ...... struct CryptWrapMDContext_t *pNext; } Allocation : return ((CryptWrapMDContext_t *) calloc (1, sizeof (CryptWrapMDContext_t))); Now, in openssl 1.1.1, as objects are opaque, we have to use pointers (*) & new(): typedef struct CryptWrapMDContext_t { #ifdef OPENSSL EVP_MD_CTX *evpMDCtx; ...... struct CryptWrapMDContext_t *pNext; } CryptWrapMDContext_t; So Allocation becomes : CryptWrapMDContext_t *pTemp; pTemp = ((CryptWrapMDContext_t *) calloc (1, sizeof (CryptWrapMDContext_t))); pTemp->evpMDCtx = EVP_MD_CTX_new(); return pTemp; But we are seeing a crash upon the call of EVP_MD_CTX_new() (new is returning NULL). So, are there any probable reasons why the new() has failed? Regards, prud. -------------- next part -------------- An HTML attachment was scrubbed...
URL: From jhb at FreeBSD.org Mon Aug 24 20:38:41 2020 From: jhb at FreeBSD.org (John Baldwin) Date: Mon, 24 Aug 2020 13:38:41 -0700 Subject: Testing TLS 1.0 with OpenSSL master In-Reply-To: References: <2b273716-f4a8-3752-70dc-79415ed64455@FreeBSD.org> Message-ID: On 8/18/20 9:49 AM, Matt Caswell wrote: > > > On 17/08/2020 18:55, John Baldwin wrote: >> 1) Is 'auth_level' supposed to work for this? The CHANGES.md change >> references SSL_CTX_set_security_level and openssl(1) claims that >> '-auth_level' changes this? Is the CHANGES.md entry wrong and only >> SECLEVEL=0 for the ciphers work by design? > > openssl(1) says this about auth_level: > > "Set the certificate chain authentication security level to I. > The authentication security level determines the acceptable signature > and public key strength when verifying certificate chains." > > However, the problem you are seeing is about *handshake* signatures > using SHA1 - so auth_level is not appropriate. I think what I found confusing is that later in the text it says this: "See SSL_CTX_set_security_level(3) for the definitions of the available levels." so I had assumed it was calling that function. >> 2) The hang when using a 'master' client seems like a regression? >> > > Fix for this issue here: > > https://github.com/openssl/openssl/pull/12670 Thanks! -- John Baldwin From kurt at roeckx.be Tue Aug 25 12:50:57 2020 From: kurt at roeckx.be (Kurt Roeckx) Date: Tue, 25 Aug 2020 14:50:57 +0200 Subject: Testing TLS 1.0 with OpenSSL master In-Reply-To: References: <2b273716-f4a8-3752-70dc-79415ed64455@FreeBSD.org> Message-ID: <20200825125057.GA1564979@roeckx.be> On Mon, Aug 24, 2020 at 01:38:41PM -0700, John Baldwin wrote: > On 8/18/20 9:49 AM, Matt Caswell wrote: > > > > > > On 17/08/2020 18:55, John Baldwin wrote: > >> 1) Is 'auth_level' supposed to work for this? The CHANGES.md change > >> references SSL_CTX_set_security_level and openssl(1) claims that > >> '-auth_level' changes this? 
Is the CHANGES.md entry wrong and only > >> SECLEVEL=0 for the ciphers work by design? > > > > openssl(1) says this about auth_level: > > > > "Set the certificate chain authentication security level to I. > > The authentication security level determines the acceptable signature > > and public key strength when verifying certificate chains." > > > > However, the problem you are seeing is about *handshake* signatures > > using SHA1 - so auth_level is not appropriate. > > I think what I found confusing is that later in the text it says this: > > "See SSL_CTX_set_security_level(3) for the definitions of the available > levels." > > so I had assumed it was calling that function. It calls X509_VERIFY_PARAM_set_auth_level(), which also says to look at SSL_CTX_set_security_level(). If you call SSL_CTX_set_security_level(), X509_VERIFY_PARAM_set_auth_level() will be called with the same value. Kurt From matt at openssl.org Wed Aug 26 09:46:53 2020 From: matt at openssl.org (Matt Caswell) Date: Wed, 26 Aug 2020 10:46:53 +0100 Subject: Checking if a key can sign / verify in 3.0 In-Reply-To: References: Message-ID: On 19/08/2020 02:01, Norm Green wrote: > In 3.0 I see this new function in evp.h : > > int EVP_PKEY_can_sign(const EVP_PKEY *pkey); > > Is there an equivalent way to check if a key can verify? I'm not seeing > an obvious way to do that. Previously I used > EVP_PKEY_meth_get_verifyctx() but that call is now deprecated in 3.0. That function checks whether the algorithm used by the key is capable of doing signature operations. It does *not* check whether the key itself has all the required components in order to perform the signature (nor whether there are any available provider implementations that implement it). From the docs: "EVP_PKEY_can_sign() checks if the functionality for the key type of I supports signing. No other check is done, such as whether I contains a private key."
Since there's not much point in having an algorithm that can create signatures but can't also verify them, the two operations are equivalent, i.e. if we had a function called `EVP_PKEY_can_verify()` it would be synonymous with `EVP_PKEY_can_sign()`. Matt From angus at magsys.co.uk Wed Aug 26 13:41:00 2020 From: angus at magsys.co.uk (Angus Robertson - Magenta Systems Ltd) Date: Wed, 26 Aug 2020 14:41 +0100 (BST) Subject: New NID for acmeIdentifier Message-ID: Is it possible for a new NID and object to be added to support creating and checking the Let's Encrypt ACME TLS-ALPN-01 challenge in which a temporary X509 certificate is created with a specific X509v3 extension containing shared information. Currently, I get a new NID with: OBJ_create('1.3.6.1.5.5.7.1.31','acmeIdentifier','X509v3 ACME Identifier') Angus From paul.dale at oracle.com Wed Aug 26 14:00:21 2020 From: paul.dale at oracle.com (Dr Paul Dale) Date: Thu, 27 Aug 2020 00:00:21 +1000 Subject: New NID for acmeIdentifier In-Reply-To: References: Message-ID: This would require a line in crypto/objects/objects.txt and a "make update". A pull request would be the way to get this in. Pauli -- Dr Paul Dale | Distinguished Architect | Cryptographic Foundations Phone +61 7 3031 7217 Oracle Australia > On 26 Aug 2020, at 11:41 pm, Angus Robertson - Magenta Systems Ltd wrote: > > Is it possible for a new NID and object to be added to support creating > and checking the Let's Encrypt ACME TLS-ALPN-01 challenge in which a > temporary X509 certificate is created with a specific X509v3 extension > containing shared information. > > Currently, I get a new NID with: > > OBJ_create('1.3.6.1.5.5.7.1.31','acmeIdentifier','X509v3 ACME > Identifier') > > Angus > > > -------------- next part -------------- An HTML attachment was scrubbed...
URL: From kris at amongbytes.com Wed Aug 26 16:21:36 2020 From: kris at amongbytes.com (Kris Kwiatkowski) Date: Wed, 26 Aug 2020 17:21:36 +0100 Subject: Integration of new algorithms In-Reply-To: References: Message-ID: Hello, I'm working on development of an OpenSSL ENGINE that integrates post-quantum algorithms (new NIDs). During integration I need to modify OpenSSL code to add a custom function, but would prefer not to add anything to OpenSSL code (so the engine can be dynamically loaded by any modern OpenSSL). So, in three cases, namely when the code is in callbacks for keygen, encryption and ctrl (called by EVP_PKEY_CTX_ctrl, EVP_PKEY_encrypt and EVP_PKEY_keygen) I need to get the NID of the scheme. The problem is that those functions are called with an EVP_PKEY_CTX object provided as an argument. The NID is stored in EVP_PKEY_CTX->pmeth->pkey_id. I think (AFAIK) there is no API which would return that value. I've added a simple function that returns pkey_id from the ctx, but that means that I need to change OpenSSL code. Is there any way to get the NID without changing OpenSSL? Kind regards, Kris -------------- next part -------------- An HTML attachment was scrubbed... URL: From paul.dale at oracle.com Wed Aug 26 21:36:38 2020 From: paul.dale at oracle.com (Dr Paul Dale) Date: Thu, 27 Aug 2020 07:36:38 +1000 Subject: Integration of new algorithms In-Reply-To: References: Message-ID: Kris, Dynamically allocate yourself a block of NIDs, one for each algorithm, using OBJ_new_nid(). Note also that there is a preferable option if you are working against the upcoming 3.0. Instead of developing an engine, create a provider. This avoids NIDs completely and was designed from the ground up to support what you want.
Pauli -- Dr Paul Dale | Distinguished Architect | Cryptographic Foundations Phone +61 7 3031 7217 Oracle Australia > On 27 Aug 2020, at 2:21 am, Kris Kwiatkowski wrote: > > Hello, > > I'm working on development of OpenSSL ENGINE that integrates > post-quantum algorithms (new NIDs). During integration I > need to modify OpenSSL code to add custom function, but would > prefer not to need add anything to OpenSSL code (so engine > can be dynmicaly loaded by any modern OpenSSL). > > So, In three cases, namely when the code is in callbacks for keygen, > encryption and ctrl (called by EVP_PKEY_CTX_ctrl, EVP_PKEY_encrypt > and EVP_PKEY_keygen) I need to get NID of the scheme. The problem > is that, those functions are called with EVP_PKEY_CTX object > provided as an argument. The NID is stored in the > EVP_PKEY_CTX->pmeth->pkey_id. I think (AFAIK) there is no API > which would return that value. > > I've added a simple function that returns pkey_id from the ctx, but > that means that I need to change OpenSSL code. Is there any way > to get NID without changing OpenSSL? > > Kind regards, > Kris > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From vishwaskn at gmail.com Thu Aug 27 04:32:22 2020 From: vishwaskn at gmail.com (vishwas k.n.) Date: Thu, 27 Aug 2020 10:02:22 +0530 Subject: enabling null cipher Message-ID: Hello All, Could someone please let me know what is the right way to enable null-ciphers in openssl. I want to do some performance evaluations with openssl and as a part of the exercise, want to tabulate performance with null encryption ciphers too. Want to get this working with openssl s_server to begin with. I have come across various answers where people have suggested: 1. specify the cipher list using SSL_CTX_set_cipher_list with cipher list being only eNULL. 2. SSL_CTX_set_security_level with level=0. Have tried doing this from the client side but to no avail. 
On the server side, I have added -cipher "COMPLEMENTOFALL" to s_server to add the null ciphers. Is there a config option that needs to be enabled or a code change to go with? thanks, -vishwas. -------------- next part -------------- An HTML attachment was scrubbed... URL: From matt at openssl.org Thu Aug 27 08:38:55 2020 From: matt at openssl.org (Matt Caswell) Date: Thu, 27 Aug 2020 09:38:55 +0100 Subject: enabling null cipher In-Reply-To: References: Message-ID: <3884583b-3062-2ef7-2b45-acee7969c015@openssl.org> This should do it: openssl s_server -cert /path/to/cert -key /path/to/key -cipher eNULL@SECLEVEL=0 -no_tls1_3 From the client side: openssl s_client -cipher eNULL@SECLEVEL=0 Matt On 27/08/2020 05:32, vishwas k.n. wrote: > Hello All, > > Could someone please let me know what is the right way to enable > null-ciphers in openssl. I want to do some performance evaluations with > openssl and as a part of the exercise, want to tabulate performance with > null encryption ciphers too. > > Want to get this working with openssl s_server to begin with. > > I have come across various answers where people have suggested: > 1. specify the cipher list using SSL_CTX_set_cipher_list with cipher > list being only eNULL. > 2. SSL_CTX_set_security_level with level=0. > Have tried doing this from the client side but to no avail. > > On the server side, I have added -cipher "COMPLEMENTOFALL" to s_server > to add the null ciphers. > > Is there a config option that needs to be enabled or a code change to go > with? > > thanks, > -vishwas. From dirkx at webweaving.org Fri Aug 28 13:49:23 2020 From: dirkx at webweaving.org (Dirk-Willem van Gulik) Date: Fri, 28 Aug 2020 15:49:23 +0200 Subject: simple ASN1 sequence - not quite understanding what goes wrong Message-ID: <7645743F-F5D9-4317-B9EC-F904D976C6A5@webweaving.org> I've got a very simple sequence of two integers that I am trying to convert to DER.
But I am getting an error or segfault in the final i2d step (length -1 for i2d_X9_62). Any advice on what is going wrong here? With kind regards, Dw. #include #include #include #include #include #include #include typedef struct X9_62_st { ASN1_INTEGER *p; ASN1_INTEGER *q; } X9_62; ASN1_SEQUENCE(X_9_62) = { ASN1_SIMPLE(X9_62, p, ASN1_INTEGER), ASN1_SIMPLE(X9_62, q, ASN1_INTEGER) }; const ASN1_ITEM X9_62_it; DECLARE_ASN1_ALLOC_FUNCTIONS(X9_62) DECLARE_ASN1_FUNCTIONS(X9_62) IMPLEMENT_ASN1_FUNCTIONS(X9_62) int main(int argc, char **argv) { const unsigned char pbin[] = {1, 2, 3, 4, 5, 6, 7, 8, 1, 2, 3, 4, 5, 6, 7, 8, 1, 2, 3, 4, 5, 6, 7, 8, 1, 2, 3, 4, 5, 6, 7, 8}; const unsigned char qbin[] = {0, 0, 0, 0, 0, 0, 0, 0, 1, 2, 3, 4, 5, 6, 7, 8, 1, 2, 3, 4, 5, 6, 7, 8, 1, 2, 3, 4, 5, 6, 7, 8}; assert(sizeof(pbin) == 32); assert(sizeof(qbin) == 32); X9_62 *x962 = X9_62_new(); BIGNUM * p = BN_bin2bn(pbin, sizeof(pbin), NULL); assert(p); fprintf(stderr,"P: %s\n",BN_bn2hex(p)); BIGNUM * q = BN_bin2bn(qbin, sizeof(qbin), NULL); assert(q); fprintf(stderr,"Q: %s\n",BN_bn2hex(q)); x962->p = BN_to_ASN1_INTEGER(p, NULL); assert(x962->p); x962->q = BN_to_ASN1_INTEGER(q, NULL); assert(x962->q); unsigned char buff [32 * 1024]; unsigned char *outp = buff; int len = i2d_X9_62(x962, NULL); assert(len >=0 && len < sizeof(buff); len = i2d_X9_62(x962, outp); for (size_t i = 0; i < len; i++) putchar(buff[i]); X9_62_free(x962); return (0); }; From dirkx at webweaving.org Fri Aug 28 17:19:38 2020 From: dirkx at webweaving.org (Dirk-Willem van Gulik) Date: Fri, 28 Aug 2020 19:19:38 +0200 Subject: simple ASN1 sequence - not quite understanding what goes wrong In-Reply-To: <7645743F-F5D9-4317-B9EC-F904D976C6A5@webweaving.org> References: <7645743F-F5D9-4317-B9EC-F904D976C6A5@webweaving.org> Message-ID: Answering my own question - I forgot the END of sequence in the macro. Functional code below. Dw.
> On 28 Aug 2020, at 15:49, Dirk-Willem van Gulik wrote: > > I've got a very simple sequence of to integers that I am trying to convert to DER. > > Bt I am getting an error or segfault in the final i2d step (lengt -1 for i2d_X9_62). > > Any advice on what is going wrong here ? > > With kind regards, > > Dw. #include #include #include #include #include #include #include typedef struct X962_st { ASN1_INTEGER *p; ASN1_INTEGER *q; } X962; DECLARE_ASN1_FUNCTIONS(X962) ASN1_SEQUENCE(X962) = { ASN1_SIMPLE(X962, p, ASN1_INTEGER), ASN1_SIMPLE(X962, q, ASN1_INTEGER) }ASN1_SEQUENCE_END(X962); DECLARE_ASN1_ALLOC_FUNCTIONS(X962) IMPLEMENT_ASN1_FUNCTIONS(X962) int main(int argc, char **argv) { const unsigned char pbin[] = {1, 2, 3, 4, 5, 6, 7, 8, 1, 2, 3, 4, 5, 6, 7, 8, 1, 2, 3, 4, 5, 6, 7, 8, 1, 2, 3, 4, 5, 6, 7, 8}; const unsigned char qbin[] = {0, 0, 0, 0, 0, 0, 0, 0, 1, 2, 3, 4, 5, 6, 7, 8, 1, 2, 3, 4, 5, 6, 7, 8, 1, 2, 3, 4, 5, 6, 7, 8}; assert(sizeof(pbin) == 32); assert(sizeof(qbin) == 32); X962 *x962 = X962_new(); BIGNUM * p = BN_bin2bn(pbin, sizeof(pbin), NULL); assert(p); x962->p = BN_to_ASN1_INTEGER(p, NULL); fprintf(stderr,"P: %s\n",BN_bn2hex(p)); assert(x962->p); BIGNUM * q = BN_bin2bn(qbin, sizeof(qbin), NULL); assert(q); x962->q = BN_to_ASN1_INTEGER(q, NULL); fprintf(stderr,"Q: %s\n",BN_bn2hex(q)); assert(x962->q); int len = i2d_X962(x962, NULL); assert(len>0 && len < 1000); unsigned char buff[32 * 1024]; unsigned char *outp = buff; len = i2d_X962(x962, &outp ); for (size_t i = 0; i < len; i++) putchar(buff[i]); X962_free(x962); return (0); }; -------------- next part -------------- An HTML attachment was scrubbed... URL: From osmanzakir90 at hotmail.com Fri Aug 28 17:52:09 2020 From: osmanzakir90 at hotmail.com (Osman Zakir) Date: Fri, 28 Aug 2020 17:52:09 +0000 Subject: Parsing ClientHello Message for HTTP/2 Upgrade Request -- How do I do this? Message-ID: Hi, everyone. 
As I said in the subject, I want to know how to parse the ClientHello message to find the HTTP/2 upgrade request if it's there. I'm using Boost.BEAST for HTTPS, but it only has support for HTTP/1.1 so I need to write code for supporting HTTP/2 myself if I want that. I also want to know how to find it, but I found something for that here: https://github.com/boostorg/beast/blob/5154233350d13a08d70f0a3a46c73bb1093225dd/include/boost/beast/core/detect_ssl.hpp#L96 I host the app on my own computer. The source code is on GitHub here: https://github.com/DragonOsman/currency_converter . The URL is https://dragonosman.dynu.net:5501/ . Any help is appreciated. Thanks. -------------- next part -------------- An HTML attachment was scrubbed... URL: From Michael.Wojcik at microfocus.com Fri Aug 28 19:48:12 2020 From: Michael.Wojcik at microfocus.com (Michael Wojcik) Date: Fri, 28 Aug 2020 19:48:12 +0000 Subject: Parsing ClientHello Message for HTTP/2 Upgrade Request -- How do I do this? In-Reply-To: References: Message-ID: > From: openssl-users On Behalf Of Osman Zakir > Sent: Friday, 28 August, 2020 11:52 > As I said in the subject, I want to know how to parse the ClientHello message > to find the HTTP/2 upgrade request if it's there. I've never had to do this myself, but my understanding is that a client can request HTTP/2 in the ClientHello using ALPN. So presumably on the server side you want to register an ALPN callback with SSL_CTX_set_alpn_select_cb. What you *shouldn't* be doing, if you're using OpenSSL, is parsing any TLS message yourself. Of course, HTTP/2 upgrade can also be done at the HTTP protocol level, which seems like a far more sensible choice to me. > I need to write code for supporting HTTP/2 myself if I want that. Here's the real question: Why would you want HTTP/2? HTTP/2 offers only marginal advantages over HTTP/1.1 for most applications. Its main justification is for server farms handling huge workloads. 
And, frankly, even for that use case I tend to agree with Poul-Henning Kamp (https://cacm.acm.org/magazines/2015/3/183605-http-2-0/fulltext). HTTP/2 is a lousy protocol created to cater to the needs of a handful of large industry players. By supporting it, you're substantially increasing your attack surface and adding complexity, both of which are Really Bad Ideas for security. If you must have HTTP/2, I recommend negotiating it at the HTTP protocol level. Don't add complexity at the crypto-protocol level (i.e. TLS) if you don't have to. That's a recipe for vulnerabilities. -- Michael Wojcik From dirkx at webweaving.org Sun Aug 30 13:23:00 2020 From: dirkx at webweaving.org (Dirk-Willem van Gulik) Date: Sun, 30 Aug 2020 15:23:00 +0200 Subject: ASN1 integer conversion - why is this correct ? Message-ID: <2190B4B1-7797-48A2-9232-DB33F995A97D@webweaving.org> I am converting an unsigned integer (P,Q of an ECDSA 256 bit curve) from a 32 byte array (as provided by Microsoft's .NET cryptographic framework) to an ASN1_INTEGER. The steps taken are: unsigned char in[32] = .. r = BN_bin2bn(in, 32, NULL); BN_to_ASN1_INTEGER(r, asn1intptr); All works well; except for these two test cases: in[32] = FF F0 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 in[32] = FF F0 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 FF // < last bits set Which both yield: 2:d=1 hl=2 l= 33 prim: INTEGER :EBFFF00000000000000000000000000000000000000000000000000000000000 And in[32] = 03 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 FF Which yields: 37:d=1 hl=2 l= 33 prim: INTEGER :FF03000000000000000000000000000000000000000000000000000000000000 Could someone explain to me what happens here, especially to the last 0xFF bits? With kind regards, Actual code at [1]; test script output of gen-tc.sh[2] in [3]. Dw.
1: https://github.com/minvws/nl-contact-tracing-odds-and-ends/tree/master/dotNet_ms64_to_x962 2: https://github.com/minvws/nl-contact-tracing-odds-and-ends/blob/master/dotNet_ms64_to_x962/gen-tc.sh 3: https://github.com/minvws/nl-contact-tracing-odds-and-ends/blob/master/dotNet_ms64_to_x962/test.txt From dar at xoe.solutions Sun Aug 30 22:45:41 2020 From: dar at xoe.solutions (David Arnold) Date: Sun, 30 Aug 2020 17:45:41 -0500 Subject: Cert hot-reloading Message-ID: <58FWFQ.39F8042CDQTO1@xoe.solutions> Hi, If you prefer this mailing list over github issues, I still want to ask for comments on: Certificate hot-reloading #12753 Specifically, my impression is that this topic has died down a bit and from the linked mailing list threads, in my eye, no concrete conclusion was drawn. I'm not sure how to rank this motion in the context of OpenSSL development, but I guess OpenSSL is used to producing ripple effects, so the man-hour argument might be a genuinely valid one. Please inform my research about this issue with your comments! BR, David A -------------- next part -------------- An HTML attachment was scrubbed... URL: From openssl-users at dukhovni.org Sun Aug 30 23:28:47 2020 From: openssl-users at dukhovni.org (Viktor Dukhovni) Date: Sun, 30 Aug 2020 19:28:47 -0400 Subject: Cert hot-reloading In-Reply-To: <58FWFQ.39F8042CDQTO1@xoe.solutions> References: <58FWFQ.39F8042CDQTO1@xoe.solutions> Message-ID: <20200830232847.GH44511@straasha.imrryr.org> On Sun, Aug 30, 2020 at 05:45:41PM -0500, David Arnold wrote: > If you prefer this mailing list over github issues, I still want to ask > for comments on: > > Certificate hot-reloading #12753 > > > Specifically, my impression is that this topic has died down a bit and > from the linked mailing list threads, in my eye, no concrete conclusion > was drawn. 
> > I'm not sure how to rank this motion in the context of OpenSSL > development, but I guess OpenSSL is used to producing ripple effects, > so the man-hour argument might be a genuinely valid one. > > Please inform my research about this issue with your comments! This is a worthwhile topic. It has a few interesting aspects: 1. Automatic key+cert reloads upon updates of key+cert chain PEM files. This can be tricky when processes start privileged, load the certs and then drop privs, and are no longer able to reopen the key + cert chain file. - Here, for POSIX systems I'd go with an approach where it is the containing directory that is restricted to root or similar, and the actual cert files are group and or world readable. The process can then keep the directory file descriptor open, and then openat(2) to periodically check the cert file, reloading when the metadata changes. - With non-POSIX systems, or applications that don't drop privs, the openat(2) is not needed, and one just checks the cert chain periodically. - Another option is to use passphrase-protected keys, and load the secret passphrase at process start from a separate read-protected file, while the actual private key + cert chain file is world readable, with the access control via protecting the passphrase file. - In all cases, it is important to keep both the private key and the cert in the same file, and open it just once to read both, avoiding races in which the key and cert are read in a way that results in one or the other being stale. 2. Having somehow obtained a new key + cert chain, one now wants to non-disruptively apply them to running servers. Here there are two potential approaches: - Hot plug a new pointer into an existing SSL_CTX structure. While the update itself could be made atomic, the readers of such pointers might read them more than once to separately extract the key and the cert chain, without checking that they're using the same pointer for both operations. 
This is bound to be fragile, though not necessarily impossible. - Build a new SSL_CTX, and use it to accept *new* connections, while existing connections use whatever SSL_CTX they started with. I believe this can work well, because "SSL" handles increment the reference count of the associated SSL_CTX when they're created, and decrement it when destroyed. So when you create a replacement SSL_CTX, you can just SSL_CTX_free() the old, and it will only actually be deleted when the last SSL connection tied to that SSL_CTX is destroyed. It is true that typical SSL_CTX construction is modestly expensive (loading CA stores and the like) but some of that could be handled by sharing and reference-counting the stores. So my preferred approach would be to create a new SSL_CTX, and get new connections using that. Now in a multi-threaded server, it could be a bit tricky to ensure that the SSL_CTX_free() does not happen before all threads reading the pointer to the latest SSL_CTX see the new pointer installed. Something equivalent to RCU may be needed to ensure that the free only happens after the new pointer is visible in all threads. Designs addressing various parts of this would be cool, provided they're well thought out, and not just single-use-case quick hacks. -- Viktor. 
From karl at denninger.net Sun Aug 30 23:52:16 2020 From: karl at denninger.net (Karl Denninger) Date: Sun, 30 Aug 2020 19:52:16 -0400 Subject: Cert hot-reloading In-Reply-To: <20200830232847.GH44511@straasha.imrryr.org> References: <58FWFQ.39F8042CDQTO1@xoe.solutions> <20200830232847.GH44511@straasha.imrryr.org> Message-ID: <47acfe79-6198-1040-8db1-1d9b71b97676@denninger.net> On 8/30/2020 19:28, Viktor Dukhovni wrote: > On Sun, Aug 30, 2020 at 05:45:41PM -0500, David Arnold wrote: > >> If you prefer this mailing list over github issues, I still want to ask >> for comments on: >> >> Certificate hot-reloading #12753 >> >> >> Specifically, my impression is that this topic has died down a bit and >> from the linked mailing list threads, in my eye, no concrete conclusion >> was drawn. >> >> I'm not sure how to rank this motion in the context of OpenSSL >> development, but I guess OpenSSL is used to producing ripple effects, >> so the man-hour argument might be a genuinely valid one. >> >> Please inform my research about this issue with your comments! > This is a worthwhile topic. It has a few interesting aspects: > > 1. Automatic key+cert reloads upon updates of key+cert chain PEM > files. This can be tricky when processes start privileged, > load the certs and then drop privs, and are no longer able > to reopen the key + cert chain file. > > - Here, for POSIX systems I'd go with an approach where > it is the containing directory that is restricted to > root or similar, and the actual cert files are group > and or world readable. The process can then keep > the directory file descriptor open, and then openat(2) > to periodically check the cert file, reloading when > the metadata changes. > > - With non-POSIX systems, or applications that don't > drop privs, the openat(2) is not needed, and one > just checks the cert chain periodically. 
> > - Another option is to use passphrase-protected keys, > and load the secret passphrase at process start from > a separate read-protected file, while the actual > private key + cert chain file is world readable, > with the access control via protecting the passphrase > file. > > - In all cases, it is important to keep both the private > key and the cert in the same file, and open it just > once to read both, avoiding races in which the key > and cert are read in a way that results in one or > the other being stale. > > 2. Having somehow obtained a new key + cert chain, one > now wants to non-disruptively apply them to running > servers. Here there are two potential approaches: > > - Hot plug a new pointer into an existing SSL_CTX structure. > While the update itself could be made atomic, the readers > of such pointers might read them more than once to separately > extract the key and the cert chain, without checking that > they're using the same pointer for both operations. > > This is bound to be fragile, though not necessarily > impossible. > > - Build a new SSL_CTX, and use it to accept *new* connections, > while existing connections use whatever SSL_CTX they started > with. I believe this can work well, because "SSL" handles > increment the reference count of the associated SSL_CTX > when they're created, and decrement it when destroyed. > > So when you create a replacement SSL_CTX, you can just > SSL_CTX_free() the old, and it will only actually > be deleted when the last SSL connection tied to that > SSL_CTX is destroyed. > > It is true that typical SSL_CTX construction is modestly > expensive (loading CA stores and the like) but some of > that could be handled by sharing and reference-counting > the stores. > > So my preferred approach would be to create a new SSL_CTX, and get new > connections using that. 
> Now in a multi-threaded server, it could be a bit tricky to ensure that
> the SSL_CTX_free() does not happen before all threads reading the
> pointer to the latest SSL_CTX see the new pointer installed. Something
> equivalent to RCU may be needed to ensure that the free only happens
> after the new pointer is visible in all threads.
>
> Designs addressing various parts of this would be cool, provided they're
> well thought out, and not just single-use-case quick hacks.

This works now; I use it with an application that checks in with a license server and can grab a new cert. OpenSSL appears to have no problem with setting up a new SSL_CTX and using it for new connections; the old ones continue onward until they terminate, and new ones are fine as well.

This appears to be ok with the current code; I've yet to have it blow up in my face, although at present the certs in question are reasonably long-lived. Whether it's robust enough to handle very short-term certificates I do not know.

-- Karl Denninger
karl at denninger.net
/The Market Ticker/
/[S/MIME encrypted email preferred]/

-------------- next part --------------
An HTML attachment was scrubbed...
URL:
-------------- next part --------------
A non-text attachment was scrubbed...
Name: smime.p7s
Type: application/pkcs7-signature
Size: 4897 bytes
Desc: S/MIME Cryptographic Signature
URL:

From openssl at jordan.maileater.net Mon Aug 31 00:19:53 2020
From: openssl at jordan.maileater.net (Jordan Brown)
Date: Mon, 31 Aug 2020 00:19:53 +0000
Subject: Cert hot-reloading
In-Reply-To: <58FWFQ.39F8042CDQTO1@xoe.solutions>
References: <58FWFQ.39F8042CDQTO1@xoe.solutions>
Message-ID: <0101017441e271f3-92a7df7b-3354-4663-98ef-6ac8b9691939-000000@us-west-2.amazonses.com>

Well, I can restate the problem that I encountered.

We deliver an integrated storage system. Under the covers it is a modified Solaris running a usual collection of proprietary and open-source components.
We supply an administrative user interface that, among many other things, lets you manage a list of "trusted" certificates - typically CA certificates that a program would use to authenticate its peers. That is, it's the equivalent of Firefox's Tools / Options / Privacy & Security / Certificates / View Certificates, and the "Servers" and "Authorities" tabs there, with the additional tidbit that for each certificate you can control which services (e.g. LDAP, et cetera) that certificate is trusted for.

When an administrator makes a change to the trusted-certificates list, we want that change to take effect, system-wide.

The problem is that that means that some number of processes with active OpenSSL contexts need to drop those contexts and recreate them, and we don't know which processes those are. Client operations are typically driven through a library, not a separate daemon, and so there's no centralized way to know which processes might be TLS clients. In addition, there's the question of how to *tell* the process to recreate the context. Simply restarting them may involve disruption of various sorts.

What we'd like would be for OpenSSL to, on every authentication, stat the file or directory involved, and if it's changed then wipe the in-memory cache.

Yes, aspects of this are system-specific, but that's true of many things. There could easily be an internal API that captures a current-stage object, and another that answers "is this still the same". The default implementation could always say "yes".

-- Jordan Brown, Oracle ZFS Storage Appliance, Oracle Solaris

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From aerowolf at gmail.com Mon Aug 31 00:54:34 2020
From: aerowolf at gmail.com (Kyle Hamilton)
Date: Sun, 30 Aug 2020 19:54:34 -0500
Subject: Cert hot-reloading
In-Reply-To: <20200830232847.GH44511@straasha.imrryr.org>
References: <58FWFQ.39F8042CDQTO1@xoe.solutions> <20200830232847.GH44511@straasha.imrryr.org>
Message-ID:

I'm not sure I can follow the "in all cases it's important to keep the key and cert in the same file" argument, particularly in line with openat() usage on the cert file after privilege to open the key file has been dropped. I agree that key/cert staleness is important to address in some manner, but I don't think it's necessarily appropriate here.

I also don't think it's necessarily okay to add a new requirement that e.g. letsencrypt clients reconcatenate their keys and certs, and that all of the Apache-style configuration guides be rewritten to consolidate the key and cert files. On a simple certificate renewal without a rekey, the best current practice is sufficient. (As well, a letsencrypt client would possibly need to run privileged in that scenario to reread the private key file in order to reconcatenate it, which is not currently actually necessary. Increasing the privileges required for any non-OS service for any purpose that isn't related to OS kernel privilege requirements feels a bit disingenuous.)

Of course, if you want to alter the conditions which led to the best current practice (and impose retraining on everyone), that's a different matter. But I still think increasing privilege requirements would be a bad thing, under the least-privilege principle.
-Kyle H On Sun, Aug 30, 2020, 18:36 Viktor Dukhovni wrote: > On Sun, Aug 30, 2020 at 05:45:41PM -0500, David Arnold wrote: > > > If you prefer this mailing list over github issues, I still want to ask > > for comments on: > > > > Certificate hot-reloading #12753 > > > > > > Specifically, my impression is that this topic has died down a bit and > > from the linked mailing list threads, in my eye, no concrete conclusion > > was drawn. > > > > I'm not sure how to rank this motion in the context of OpenSSL > > development, but I guess OpenSSL is used to producing ripple effects, > > so the man-hour argument might be a genuinely valid one. > > > > Please inform my research about this issue with your comments! > > This is a worthwhile topic. It has a few interesting aspects: > > 1. Automatic key+cert reloads upon updates of key+cert chain PEM > files. This can be tricky when processes start privileged, > load the certs and then drop privs, and are no longer able > to reopen the key + cert chain file. > > - Here, for POSIX systems I'd go with an approach where > it is the containing directory that is restricted to > root or similar, and the actual cert files are group > and or world readable. The process can then keep > the directory file descriptor open, and then openat(2) > to periodically check the cert file, reloading when > the metadata changes. > > - With non-POSIX systems, or applications that don't > drop privs, the openat(2) is not needed, and one > just checks the cert chain periodically. > > - Another option is to use passphrase-protected keys, > and load the secret passphrase at process start from > a separate read-protected file, while the actual > private key + cert chain file is world readable, > with the access control via protecting the passphrase > file. 
> > - In all cases, it is important to keep both the private > key and the cert in the same file, and open it just > once to read both, avoiding races in which the key > and cert are read in a way that results in one or > the other being stale. > > 2. Having somehow obtained a new key + cert chain, one > now wants to non-disruptively apply them to running > servers. Here there are two potential approaches: > > - Hot plug a new pointer into an existing SSL_CTX structure. > While the update itself could be made atomic, the readers > of such pointers might read them more than once to separately > extract the key and the cert chain, without checking that > they're using the same pointer for both operations. > > This is bound to be fragile, though not necessarily > impossible. > > - Build a new SSL_CTX, and use it to accept *new* connections, > while existing connections use whatever SSL_CTX they started > with. I believe this can work well, because "SSL" handles > increment the reference count of the associated SSL_CTX > when they're created, and decrement it when destroyed. > > So when you create a replacement SSL_CTX, you can just > SSL_CTX_free() the old, and it will only actually > be deleted when the last SSL connection tied to that > SSL_CTX is destroyed. > > It is true that typical SSL_CTX construction is modestly > expensive (loading CA stores and the like) but some of > that could be handled by sharing and reference-counting > the stores. > > So my preferred approach would be to create a new SSL_CTX, and get new > connections using that. Now in a multi-threaded server, it could be a > bit tricky to ensure that the SSL_CTX_free() does not happen before all > threads reading the pointer to the latest SSL_CTX see the new pointer > installed. Something equivalent to RCU may be needed to ensure that the > free only happens after the new pointer is visible in all threads. 
> Designs addressing various parts of this would be cool, provided they're
> well thought out, and not just single-use-case quick hacks.
>
> --
> Viktor.

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From dar at xoe.solutions Mon Aug 31 02:24:13 2020
From: dar at xoe.solutions (David Arnold)
Date: Sun, 30 Aug 2020 21:24:13 -0500
Subject: Cert hot-reloading
In-Reply-To: References: <58FWFQ.39F8042CDQTO1@xoe.solutions> <20200830232847.GH44511@straasha.imrryr.org>
Message-ID:

Should aspects of an implementation be configurable behavior with a sane default? I'd guess so...

Hot-plugging the pointer seems to force atomicity considerations down-stream, which might be educationally a good thing for openssl to press for. It also addresses Jordan's use case, however application-specific it might be. For compat reasons, a "legacy" mode which creates a new context for *new* connections might be the necessary "bridge" into that transformation.

For change detection: I think "on next authentication" has enough (or even better) guarantees over a periodic loop.

For file read atomicity: What are the options to keep letsencrypt & co comfortable? Although the hereditary "right (expectation) for comfort" is somewhat offset by a huge gain in functionality, it still feels like a convincing deal.

- add a staleness check on every change detection? (maybe costly?)
- consume a tar if clients want those guarantees? (opt-in or opt-out?)

On Sun, Aug 30, 2020 at 19:54, Kyle Hamilton wrote:
> I'm not sure I can follow the "in all cases it's important to keep
> the key and cert in the same file" argument, particularly in line
> with openat() usage on the cert file after privilege to open the key
> file has been dropped. I agree that key/cert staleness is important
> to address in some manner, but I don't think it's necessarily
> appropriate here.
>
> I also don't think it's necessarily okay to add a new requirement
> that e.g.
letsencrypt clients reconcatentate their keys and certs, > and that all of the Apache-style configuration guides be rewritten to > consolidate the key and cert files. On a simple certificate renewal > without a rekey, the best current practice is sufficient. (As well, > a letsencrypt client would possibly need to run privileged in that > scenario to reread the private key file in order to reconcatenate it, > which is not currently actually necessary. Increasing the privileges > required for any non-OS service for any purpose that isn't related to > OS kernel privilege requirements feels a bit disingenuous.) > > Of course, if you want to alter the conditions which led to the best > current practice (and impose retraining on everyone), that's a > different matter. But I still think increasing privilege > requirements would be a bad thing, under the least-privilege > principle. > > -Kyle H > > On Sun, Aug 30, 2020, 18:36 Viktor Dukhovni > > > wrote: >> On Sun, Aug 30, 2020 at 05:45:41PM -0500, David Arnold wrote: >> >> > If you prefer this mailing list over github issues, I still want >> to ask >> > for comments on: >> > >> > Certificate hot-reloading #12753 >> > <> >> > >> > Specifically, my impression is that this topic has died down a >> bit and >> > from the linked mailing list threads, in my eye, no concrete >> conclusion >> > was drawn. >> > >> > I'm not sure how to rank this motion in the context of OpenSSL >> > development, but I guess OpenSSL is used to producing ripple >> effects, >> > so the man-hour argument might be a genuinely valid one. >> > >> > Please inform my research about this issue with your comments! >> >> This is a worthwhile topic. It has a few interesting aspects: >> >> 1. Automatic key+cert reloads upon updates of key+cert chain >> PEM >> files. This can be tricky when processes start privileged, >> load the certs and then drop privs, and are no longer able >> to reopen the key + cert chain file. 
>> >> - Here, for POSIX systems I'd go with an approach where >> it is the containing directory that is restricted to >> root or similar, and the actual cert files are group >> and or world readable. The process can then keep >> the directory file descriptor open, and then openat(2) >> to periodically check the cert file, reloading when >> the metadata changes. >> >> - With non-POSIX systems, or applications that don't >> drop privs, the openat(2) is not needed, and one >> just checks the cert chain periodically. >> >> - Another option is to use passphrase-protected keys, >> and load the secret passphrase at process start from >> a separate read-protected file, while the actual >> private key + cert chain file is world readable, >> with the access control via protecting the passphrase >> file. >> >> - In all cases, it is important to keep both the private >> key and the cert in the same file, and open it just >> once to read both, avoiding races in which the key >> and cert are read in a way that results in one or >> the other being stale. >> >> 2. Having somehow obtained a new key + cert chain, one >> now wants to non-disruptively apply them to running >> servers. Here there are two potential approaches: >> >> - Hot plug a new pointer into an existing SSL_CTX structure. >> While the update itself could be made atomic, the readers >> of such pointers might read them more than once to >> separately >> extract the key and the cert chain, without checking that >> they're using the same pointer for both operations. >> >> This is bound to be fragile, though not necessarily >> impossible. >> >> - Build a new SSL_CTX, and use it to accept *new* >> connections, >> while existing connections use whatever SSL_CTX they >> started >> with. I believe this can work well, because "SSL" handles >> increment the reference count of the associated SSL_CTX >> when they're created, and decrement it when destroyed. 
>> >> So when you create a replacement SSL_CTX, you can just >> SSL_CTX_free() the old, and it will only actually >> be deleted when the last SSL connection tied to that >> SSL_CTX is destroyed. >> >> It is true that typical SSL_CTX construction is modestly >> expensive (loading CA stores and the like) but some of >> that could be handled by sharing and reference-counting >> the stores. >> >> So my preferred approach would be to create a new SSL_CTX, and get >> new >> connections using that. Now in a multi-threaded server, it could >> be a >> bit tricky to ensure that the SSL_CTX_free() does not happen before >> all >> threads reading the pointer to the latest SSL_CTX see the new >> pointer >> installed. Something equivalent to RCU may be needed to ensure >> that the >> free only happens after the new pointer is visible in all threads. >> >> Designs addressing various parts of this would be cool, provided >> they're >> well thought out, and not just single-use-case quick hacks. >> >> -- >> Viktor. -------------- next part -------------- An HTML attachment was scrubbed... URL: From aerowolf at gmail.com Mon Aug 31 05:26:15 2020 From: aerowolf at gmail.com (Kyle Hamilton) Date: Mon, 31 Aug 2020 00:26:15 -0500 Subject: Cert hot-reloading In-Reply-To: <0101017441e271f3-92a7df7b-3354-4663-98ef-6ac8b9691939-000000@us-west-2.amazonses.com> References: <58FWFQ.39F8042CDQTO1@xoe.solutions> <0101017441e271f3-92a7df7b-3354-4663-98ef-6ac8b9691939-000000@us-west-2.amazonses.com> Message-ID: Could this be dealt with by the simple removal of any caching layer between an SSL_CTX and a directory processed by openssl c_rehash? Would reading the filesystem on every certificate verification be too heavy for your use case? On Sun, Aug 30, 2020 at 7:20 PM Jordan Brown wrote: > > Well, I can restate the problem that I encountered. > > We deliver an integrated storage system. Under the covers it is a modified Solaris running a usual collection of proprietary and open-source components. 
We supply an administrative user interface that, among many other things, lets you manage a list of "trusted" certificates - typically CA certificates that a program would use to authenticate its peers. That is, it's the equivalent of Firefox's Tools / Options / Privacy & Security / Certificates / View Certificates, and the "Servers" and "Authorities" tabs there, with the additional tidbit that for each certificate you can control which services (e.g. LDAP, et cetera) that certificate is trusted for. > > When an administrator makes a change to the trusted-certificates list, we want that change to take effect, system-wide. > > The problem is that that means that some number of processes with active OpenSSL contexts need to drop those contexts and recreate them, and we don't know which processes those are. Client operations are typically driven through a library, not a separate daemon, and so there's no centralized way to know which processes might be TLS clients. In addition, there's the question of how to *tell* the process to recreate the context. Simply restarting them may involve disruption of various sorts. > > What we'd like would be for OpenSSL to, on every authentication, stat the file or directory involved, and if it's changed then wipe the in-memory cache. > > Yes, aspects of this are system-specific, but that's true of many things. There could easily be an internal API that captures a current-stage object, and another that answers "is this still the same". The default implementation could always say "yes". 
> > --
> > Jordan Brown, Oracle ZFS Storage Appliance, Oracle Solaris

From karl at denninger.net Mon Aug 31 13:29:24 2020
From: karl at denninger.net (Karl Denninger)
Date: Mon, 31 Aug 2020 09:29:24 -0400
Subject: Cert hot-reloading
In-Reply-To: <0101017441e271f3-92a7df7b-3354-4663-98ef-6ac8b9691939-000000@us-west-2.amazonses.com>
References: <58FWFQ.39F8042CDQTO1@xoe.solutions> <0101017441e271f3-92a7df7b-3354-4663-98ef-6ac8b9691939-000000@us-west-2.amazonses.com>
Message-ID:

On 8/30/2020 20:19, Jordan Brown wrote:
> Well, I can restate the problem that I encountered.
>
> We deliver an integrated storage system. Under the covers it is a
> modified Solaris running a usual collection of proprietary and
> open-source components. We supply an administrative user interface
> that, among many other things, lets you manage a list of "trusted"
> certificates - typically CA certificates that a program would use to
> authenticate its peers. That is, it's the equivalent of Firefox's
> Tools / Options / Privacy & Security / Certificates / View
> Certificates, and the "Servers" and "Authorities" tabs there, with the
> additional tidbit that for each certificate you can control which
> services (e.g. LDAP, et cetera) that certificate is trusted for.
>
> When an administrator makes a change to the trusted-certificates list,
> we want that change to take effect, system-wide.
>
> The problem is that that means that some number of processes with
> active OpenSSL contexts need to drop those contexts and recreate them,
> and we don't know which processes those are. Client operations are
> typically driven through a library, not a separate daemon, and so
> there's no centralized way to know which processes might be TLS
> clients. In addition, there's the question of how to *tell* the
> process to recreate the context. Simply restarting them may involve
> disruption of various sorts.
> What we'd like would be for OpenSSL to, on every authentication, stat
> the file or directory involved, and if it's changed then wipe the
> in-memory cache.
>
> Yes, aspects of this are system-specific, but that's true of many
> things. There could easily be an internal API that captures a
> current-stage object, and another that answers "is this still the
> same". The default implementation could always say "yes".

I'm trying to figure out why you want to replace the context in an *existing* connection that is currently passing data rather than for new ones.

For new ones, as I've noted, it already works as you'd likely expect it to work, at least in my use case, including in multiple threads where the context is picked up and used for connections in more than one place. I've had no trouble with this, and a perusal of the documentation (but not the code in depth) suggested it would be safe due to how OpenSSL does reference counts.

While some of the client connections to the back end in my use case are "controlled" (an app on a phone, for example, so I could have control over what happens on the other end and could, for example, send down a sequence demanding the client close and reconnect) there is also a general web-style interface, so the connecting party could be on any of the commodity web browsers, over which I have no code control.

Example meta-code:

get_lock(mutex)
if (web_context) {    /* If there is an existing context then free it up */
    SSL_CTX_free(web_context);
    web_context = NULL;    /* It is not ok to attempt to use SSL */
}
web_context = SSL_CTX_new(server_method);    /* Now get a new context */
.... (set options, callbacks, verification requirement on certs presented,
      DH and ECDH preferences, cert and key, etc)
if NOT (happy with the previous sequence of options, setting key and cert, etc) {
    SSL_CTX_free(web_context);
    web_context = NULL;
}
unlock(mutex)

Then in the code that actually accepts a new connection:

get_lock(mutex)
if (web_context) {
    ssl_socket = starttls(inbound_socket, web_context, &error);
    .... check non-null to know it's ok; if it is, store and use it
}
unlock(mutex)

("starttls" does an SSL_new on the context passed, does the SSL_set_fd and SSL_accept, etc, handles any errors generated from that, and if everything is ok returns the SSL structure.)

I've had no trouble with this for a good long time; if there are existing connections they continue to run on the previous web_context until they close. New connections come off the new one. You just have to run a mutex to make sure that you don't try to create a new connection while the "re-keying" is "in process".

-- Karl Denninger
karl at denninger.net
/The Market Ticker/
/[S/MIME encrypted email preferred]/

-------------- next part --------------
An HTML attachment was scrubbed...
URL:
-------------- next part --------------
A non-text attachment was scrubbed...
Name: smime.p7s
Type: application/pkcs7-signature
Size: 4897 bytes
Desc: S/MIME Cryptographic Signature
URL:

From ceo at teo-en-ming.com Mon Aug 31 14:15:05 2020
From: ceo at teo-en-ming.com (Turritopsis Dohrnii Teo En Ming)
Date: Mon, 31 Aug 2020 22:15:05 +0800
Subject: Testing
Message-ID:

--
-----BEGIN EMAIL SIGNATURE-----

The Gospel for all Targeted Individuals (TIs):

[The New York Times] Microwave Weapons Are Prime Suspect in Ills of U.S. Embassy Workers

Link: https://www.nytimes.com/2018/09/01/science/sonic-attack-cuba-microwave.html

********************************************************************************************

Singaporean Mr.
Turritopsis Dohrnii Teo En Ming's Academic Qualifications as at 14 Feb 2019 and refugee seeking attempts at the United Nations Refugee Agency Bangkok (21 Mar 2017), in Taiwan (5 Aug 2019) and Australia (25 Dec 2019 to 9 Jan 2020):

[1] https://tdtemcerts.wordpress.com/
[2] https://tdtemcerts.blogspot.sg/
[3] https://www.scribd.com/user/270125049/Teo-En-Ming

-----END EMAIL SIGNATURE-----

From ceo-teo-en-ming at outlook.com Mon Aug 31 14:04:59 2020
From: ceo-teo-en-ming at outlook.com (Turritopsis Dohrnii Teo En Ming)
Date: Mon, 31 Aug 2020 14:04:59 +0000
Subject: How to Migrate Wordpress Website from 32-bit CentOS Linux 6.3 to 64-bit CentOS Linux 8.2 (2004)
Message-ID: 

Subject: How to Migrate Wordpress Website from 32-bit CentOS Linux 6.3 to 64-bit CentOS Linux 8.2 (2004)
Author of this Guide: Mr. Turritopsis Dohrnii Teo En Ming (TARGETED INDIVIDUAL)
Country: Singapore
Date: 31 August 2020 Monday Singapore Time
Type of Publication: Plain Text
Document Version: 20200831.01

SECTION 1 Information Gathering Stage
=====================================

Host operating system is Windows Server 2008 R2 Standard
Host Processor: Intel Xeon CPU E5620 @ 2.40 GHz
Host Memory: 24 GB RAM

Old Oracle VirtualBox version is 4.1.18
Upgrade to VirtualBox version 6.1.12 (COMPLETED SUCCESSFULLY AFTER RESTARTING WINDOWS SERVER)

Old CentOS Linux VM is version 6.3 (32-bit only)
Old Apache web server version 2.2.15
Old MySQL database server version 5.1.61
Old PHP version 5.6.40

Interface eth0: AAA.BBB.CCC.3/24 (ifconfig)
Gateway: AAA.BBB.CCC.2 (ip route)
(The gateway is the next-hop router, which is also the Fortigate firewall)
/etc/resolv.conf (for DNS Client): nameserver AAA.BBB.CCC.1
(This is the Windows Server with the DNS Server role installed)

How to login to the OLD MySQL database server: mysql -u root -p

Old hostname: centos63.teo-en-ming-corp.com

Old Virtual Machine Settings
============================

4 GB RAM, 2 processors, 20 GB storage, network adapter: bridged to Broadcom BCM5709C

NEW Virtual Machine Settings
============================

4 GB RAM, 4 processors, 100 GB storage, network adapter: bridged to Broadcom BCM5709C

After using Advanced IP Scanner and checking the DHCP scope in the Microsoft DHCP server in Windows Server:
Unused IP address: AAA.BBB.CCC.4 (Use this IP address for the new CentOS 8.2 Linux VM)

SECTION 2 Installation of NEW CentOS 8.2 Linux Virtual Machine
==============================================================

New Hostname: centos82.teo-en-ming-corp.com
NEW IP: AAA.BBB.CCC.4
Subnet mask: 255.255.255.0 (Class C)
Gateway: AAA.BBB.CCC.2
DNS1: 8.8.8.8

Problem
=======

CentOS 8.2 Linux 64-bit will not start and run because VirtualBox is too old (version 4.1.18). Intel Virtualization and VT-d were already enabled in the server BIOS previously, so running 64-bit virtual machines is not the issue.

Solution
========

After upgrading to VirtualBox 6.1.12, CentOS 8.2 Linux 64-bit is able to start and run.

SECTION 3 Generate a Backup of ALL Databases in the Old VM
==========================================================

Reference Guide: How to backup and restore MySQL databases using the mysqldump command
Link: https://www.sqlshack.com/how-to-backup-and-restore-mysql-databases-using-the-mysqldump-command/

Reference Guide: How to Show Users in MySQL using Linux
Link: https://www.hostinger.com/tutorials/mysql-show-users/

# cd /root
# mysqldump -u root -p --all-databases > all-databases-20200829.sql
# du -h all-databases-20200829.sql
70M     all-databases-20200829.sql

SECTION 4 Disable SELinux (Security Enhanced Linux)
===================================================

You MUST disable SELinux, otherwise Apache web server will not work. If you DO NOT want to disable SELinux, you must be an expert in SELinux to configure it.

# nano /etc/selinux/config
SELINUX=disabled
# reboot

SECTION 5 Disable firewalld Software Firewall
=============================================

Because the server is already protected by the Fortigate firewall at the perimeter.
# systemctl disable firewalld
# reboot

SECTION 6 LAMP (Linux, Apache, MySQL and PHP) Installation
==========================================================

I will be installing Apache web server 2.4.37-21, MariaDB server 3:10.3.17-1, PHP 7.2.24-1 and OpenSSL 1:1.1.1c-15 in 64-bit CentOS Linux 8.2 (2004).

Sub-Section on Installing Apache Web Server
===========================================

# dnf install php php-fpm php-gd

You *MUST* install php-gd, otherwise Apache Web Server cannot execute PHP scripts.

# dnf install httpd
# systemctl enable httpd
# systemctl start httpd

[root at centos82 ~]# ps -ef | grep httpd
root      33214      1  0 22:03 ?        00:00:00 /usr/sbin/httpd -DFOREGROUND
apache    33351  33214  0 22:03 ?        00:00:00 /usr/sbin/httpd -DFOREGROUND
apache    33352  33214  1 22:03 ?        00:00:00 /usr/sbin/httpd -DFOREGROUND
apache    33355  33214  1 22:03 ?        00:00:00 /usr/sbin/httpd -DFOREGROUND
apache    33357  33214  0 22:03 ?        00:00:00 /usr/sbin/httpd -DFOREGROUND
root      36374   7368  0 22:03 pts/0    00:00:00 grep --color=auto httpd

On the OLD CentOS 6.3 server:

# cd /etc/httpd
# tar cfvz apacheconf.tar.gz conf conf.d
# mv apacheconf.tar.gz /root

On the NEW CentOS 8.2 server:

# cd /etc/httpd
# cp -r conf conf.original
# cp -r conf.d conf.d.original
# scp root@AAA.BBB.CCC.3:/root/apacheconf.tar.gz .
# tar xfvz apacheconf.tar.gz

On the OLD CentOS 6.3 server:

# cd /var/www/html
# tar cfvz websites.tar.gz *
(1.4 GB)

On the NEW CentOS 8.2 server:

# cd /var/www/html
# scp root@AAA.BBB.CCC.3:/root/websites.tar.gz .
# tar xfvz websites.tar.gz

Continuing on the NEW CentOS 8.2 server
=======================================

How to troubleshoot Apache web server
=====================================

The following are TWO very important Linux troubleshooting commands.
# systemctl status httpd (check the error Apache web server gives out)
# httpd -t (for checking Apache web server configuration syntax)

Make the following changes to /etc/httpd/conf/httpd.conf, as follows:

Rationale for unloading modules here: these modules are already loaded by the config files in /etc/httpd/conf.modules.d, so we comment them out in httpd.conf to avoid duplication. If a module is loaded twice, Apache web server cannot start.

#
# Dynamic Shared Object (DSO) Support
#
# To be able to use the functionality of a module which was built as a DSO you
# have to place corresponding `LoadModule' lines at this location so the
# directives contained in it are actually available _before_ they are used.
# Statically compiled modules (those listed by `httpd -l') do not need
# to be loaded here.
#
# Example:
# LoadModule foo_module modules/mod_foo.so
#
#LoadModule auth_basic_module modules/mod_auth_basic.so
#LoadModule auth_digest_module modules/mod_auth_digest.so
#LoadModule authn_file_module modules/mod_authn_file.so
#LoadModule authn_alias_module modules/mod_authn_alias.so
#LoadModule authn_anon_module modules/mod_authn_anon.so
#LoadModule authn_dbm_module modules/mod_authn_dbm.so
#LoadModule authn_default_module modules/mod_authn_default.so
#LoadModule authz_host_module modules/mod_authz_host.so
#LoadModule authz_user_module modules/mod_authz_user.so
#LoadModule authz_owner_module modules/mod_authz_owner.so
#LoadModule authz_groupfile_module modules/mod_authz_groupfile.so
#LoadModule authz_dbm_module modules/mod_authz_dbm.so
#LoadModule authz_default_module modules/mod_authz_default.so
#LoadModule ldap_module modules/mod_ldap.so
#LoadModule authnz_ldap_module modules/mod_authnz_ldap.so
#LoadModule include_module modules/mod_include.so
#LoadModule log_config_module modules/mod_log_config.so
#LoadModule logio_module modules/mod_logio.so
#LoadModule env_module modules/mod_env.so
#LoadModule ext_filter_module modules/mod_ext_filter.so
#LoadModule mime_magic_module modules/mod_mime_magic.so
#LoadModule expires_module modules/mod_expires.so
#LoadModule deflate_module modules/mod_deflate.so
#LoadModule headers_module modules/mod_headers.so
LoadModule usertrack_module modules/mod_usertrack.so
#LoadModule setenvif_module modules/mod_setenvif.so
#LoadModule mime_module modules/mod_mime.so
#LoadModule dav_module modules/mod_dav.so
#LoadModule status_module modules/mod_status.so
#LoadModule autoindex_module modules/mod_autoindex.so
#LoadModule info_module modules/mod_info.so
#LoadModule dav_fs_module modules/mod_dav_fs.so
#LoadModule vhost_alias_module modules/mod_vhost_alias.so
#LoadModule negotiation_module modules/mod_negotiation.so
#LoadModule dir_module modules/mod_dir.so
#LoadModule actions_module modules/mod_actions.so
LoadModule speling_module modules/mod_speling.so
#LoadModule userdir_module modules/mod_userdir.so
#LoadModule alias_module modules/mod_alias.so
#LoadModule substitute_module modules/mod_substitute.so
#LoadModule rewrite_module modules/mod_rewrite.so
#LoadModule proxy_module modules/mod_proxy.so
#LoadModule proxy_balancer_module modules/mod_proxy_balancer.so
#LoadModule proxy_ftp_module modules/mod_proxy_ftp.so
#LoadModule proxy_http_module modules/mod_proxy_http.so
#LoadModule proxy_ajp_module modules/mod_proxy_ajp.so
#LoadModule proxy_connect_module modules/mod_proxy_connect.so
#LoadModule cache_module modules/mod_cache.so
#LoadModule suexec_module modules/mod_suexec.so
#LoadModule disk_cache_module modules/mod_disk_cache.so
LoadModule cgi_module modules/mod_cgi.so
#LoadModule version_module modules/mod_version.so

IncludeOptional conf.d/*.conf (Notice the use of OPTIONAL)
Include conf.modules.d/*.conf (DEFAULT CONFIG FILES INSTALLED BY APACHE WEB SERVER 2.4.37)

Install the Secure Sockets Layer (SSL) module for Apache web server:

# dnf install mod_ssl

Make the following changes to /etc/httpd/conf.d/ssl.conf, as follows:

#SSLMutex default (MUST be disabled)

Transferring Public Key from OLD server to NEW server:

# cd /etc/pki/tls/certs
# scp root@AAA.BBB.CCC.3:/root/teo-en-ming-corp.crt .

Transferring Private Key from OLD server to NEW server:

# cd /etc/pki/tls/private/
# scp root@AAA.BBB.CCC.3:/root/teo-en-ming-corp.key .

Install Python3 module for Apache web server:

# dnf install python3-mod_wsgi

Make the following changes to /etc/httpd/conf.d/wsgi.conf, as follows:

LoadModule wsgi_module modules/mod_wsgi_python3.so

Install the Perl module:

# dnf install epel-release
# dnf install mod_perl

Make the following changes to /etc/httpd/conf.d/perl.conf, as follows:

#LoadModule perl_module modules/mod_perl.so

Because the Perl module is already loaded in /etc/httpd/conf.modules.d/

Disable SSL virtual hosts for now (our server will support only http and no https at the moment):

# cd /etc/httpd/conf.modules.d/
# mv 00-ssl.conf 00-ssl.conf.original
# cd /etc/httpd/conf.d
# mv ssl.conf ssl.conf.1

Sub-Section on Installing MariaDB (MySQL) Database Server
=========================================================

# dnf install mariadb-server
# systemctl enable mariadb
# systemctl start mariadb

Reference Guide: How to Use SCP Command to Securely Transfer Files
Link: https://linuxize.com/post/how-to-use-scp-command-to-securely-transfer-files/

Transfer backup of ALL databases from OLD server to NEW server:

# scp root@AAA.BBB.CCC.3:/root/all-databases-20200829.sql .
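Before restoring on the new server, it is worth confirming that the dump survived the transfer intact. The following is a minimal sketch of that check, not part of the original guide: the stand-in dump file is illustrative (the real dump is the mysqldump output above, copied with scp).

```shell
set -e
cd "$(mktemp -d)"

# Stand-in for the real dump; in the migration it would be produced by
# "mysqldump -u root -p --all-databases > all-databases.sql" on the old server.
printf 'CREATE DATABASE demo;\n' > all-databases.sql

# On the old server: record a checksum alongside the dump before copying both.
sha256sum all-databases.sql > all-databases.sql.sha256

# On the new server, after scp of both files: verify before restoring
# (the restore itself would then be "mysql < all-databases.sql").
sha256sum -c all-databases.sql.sha256
```

If the check fails, repeat the scp before attempting the restore.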
Restore ALL databases on NEW server:

# mysql < all-databases-20200829.sql

Login to MySQL (MariaDB):

# mysql

Check all MySQL users are imported:

MariaDB [(none)]> select user from mysql.user;
+------------------+
| user             |
+------------------+
| root             |
|                  |
| root             |
|                  |
| root             |
| aaa              |
| bbb              |
| ccc              |
+------------------+
8 rows in set (0.001 sec)

Sub-Section on Installing PHP 7.2
=================================

We WON'T be using the PHP configuration from the OLD CentOS 6.3 server:

# cd /etc/httpd/conf.d
# mv php.conf php.conf.63

Use the PHP configuration on the NEW CentOS 8.2 server:

# cp php.conf.rpmnew php.conf

ERROR ENCOUNTERED
=================

ERROR: Your PHP installation appears to be missing the MySQL extension which is required by WordPress

Solution is found at: https://www.howtoforge.com/tutorial/centos-lamp-server-apache-mysql-php/

SOLUTION
========

You MUST install php-mysqlnd:

# dnf install php-mysqlnd

SECTION 7 Apache Web Server Virtual Hosts
=========================================

/etc/httpd/conf/httpd.conf

### Section 3: Virtual Hosts
#
# VirtualHost: If you want to maintain multiple domains/hostnames on your
# machine you can setup VirtualHost containers for them. Most configurations
# use only name-based virtual hosts so the server doesn't need to worry about
# IP addresses. This is indicated by the asterisks in the directives below.
#
# Please see the documentation at
#
# for further details before you try to setup virtual hosts.
#
# You may use the command line option '-S' to verify your virtual host
# configuration.
#
# Use name-based virtual hosting.
#
#NameVirtualHost *:80
#
# NOTE: NameVirtualHost cannot be used without a port specifier
# (e.g. :80) if mod_ssl is being used, due to the nature of the
# SSL protocol.
#
#
# VirtualHost example:
# Almost any Apache directive may go into a VirtualHost container.
# The first VirtualHost section is used for requests without a known
# server name.
#
# <VirtualHost *:80>
#     ServerAdmin webmaster@dummy-host.example.com
#     DocumentRoot /www/docs/dummy-host.example.com
#     ServerName dummy-host.example.com
#     ErrorLog logs/dummy-host.example.com-error_log
#     CustomLog logs/dummy-host.example.com-access_log common
# </VirtualHost>
#
#NameVirtualHost *:80

<VirtualHost *:80>
    ServerAdmin ceo@teo-en-ming-corp.com
    DocumentRoot /var/www/html/Teo-En-Ming-Corp
    ServerName teo-en-ming-corp.com
    redirect permanent / http://www.teo-en-ming-corp.com
</VirtualHost>

<VirtualHost *:80>
    ServerAdmin ceo@teo-en-ming-corp.com
    DocumentRoot /var/www/html/Teo-En-Ming-Corp
    ServerName www.teo-en-ming-corp.com
    RewriteEngine off
    <Directory /var/www/html/Teo-En-Ming-Corp>
        AllowOverride All
    </Directory>
    RewriteEngine on
    RewriteCond %{REQUEST_FILENAME} !-f
    RewriteCond %{REQUEST_FILENAME} !-d
    RewriteRule !\.(js|ico|gif|jpg|png|css)$ /index.php
</VirtualHost>

<VirtualHost *:80>
    ServerAdmin ceo@teo-en-ming-corp.com
    DocumentRoot /var/www/html/DonaldTrump
    ServerName donaldtrump.com.sg
    redirect permanent / http://www.donaldtrump.com.sg
</VirtualHost>

<VirtualHost *:80>
    ServerAdmin ceo@teo-en-ming-corp.com
    DocumentRoot /var/www/html/DonaldTrump
    ServerName www.donaldtrump.com.sg
    RewriteEngine off
    RewriteEngine on
    RewriteCond %{REQUEST_FILENAME} !-f
    RewriteCond %{REQUEST_FILENAME} !-d
    RewriteRule !\.(js|ico|gif|jpg|png|css)$ /index.php
</VirtualHost>

#
# <VirtualHost *:80>
#     ServerAdmin ceo@teo-en-ming-corp.com
#     DocumentRoot /var/webmiln
#     ServerName centos.teo-en-ming-corp.com
#     redirect permanent / https://centos.teo-en-ming-corp.com:10000
# </VirtualHost>
#

<VirtualHost *:80>
    ServerAdmin ceo@teo-en-ming-corp.com
    DocumentRoot /var/www/html/Teo-En-Ming-Corp_old
    ServerName old.teo-en-ming-corp.com
    RewriteEngine off
    RewriteEngine on
    RewriteCond %{REQUEST_FILENAME} !-f
    RewriteCond %{REQUEST_FILENAME} !-d
    RewriteRule !\.(js|ico|gif|jpg|png|css)$ /index.php
</VirtualHost>

SECTION 8 .htaccess
===================

/var/www/html/Teo-En-Ming-Corp/wp-admin/.htaccess:

order deny,allow
deny from all
allow from AAA.BBB.CCC.DDD
allow from AAA.BBB.CCC.DDD
allow from AAA.BBB.CCC.DDD
allow from AAA.BBB.CCC.DDD
allow from AAA.BBB.CCC.DDD

SECTION 9 FORTIGATE FIREWALL (STATIC NAT/PORT FORWARDING CONFIGURATION)
=======================================================================

Create Virtual IPs for Static NAT/port forwarding.

Edit Virtual IP
===============

Name: Wordpress-Website
Interface: Internet (wan1)
Type: Static NAT
External IP Address/Range: AAA.BBB.CCC.DDD - AAA.BBB.CCC.DDD
Mapped IP Address/Range: AAA.BBB.CCC.4 - AAA.BBB.CCC.4
Optional Filters: No
Port Forwarding: No

Click OK.

Then create IPv4 firewall polic(ies) from WAN1 to Internal using the created Virtual IP, allowing http, https, icmp, ssh, and/or other networking protocols as you wish.

You may also use Security Profiles in the Fortigate firewall as you wish:

Antivirus
Web Filter
DNS Filter
Application Control
FortiClient Compliance
SSL/SSH Inspection
Web Rating Overrides
Custom Signatures

-----BEGIN EMAIL SIGNATURE-----

The Gospel for all Targeted Individuals (TIs):

[The New York Times] Microwave Weapons Are Prime Suspect in Ills of U.S. Embassy Workers

Link: https://www.nytimes.com/2018/09/01/science/sonic-attack-cuba-microwave.html

********************************************************************************************

Singaporean Mr. Turritopsis Dohrnii Teo En Ming's Academic Qualifications as at 14 Feb 2019 and refugee seeking attempts at the United Nations Refugee Agency Bangkok (21 Mar 2017), in Taiwan (5 Aug 2019) and Australia (25 Dec 2019 to 9 Jan 2020):

[1] https://tdtemcerts.wordpress.com/
[2] https://tdtemcerts.blogspot.sg/
[3] https://www.scribd.com/user/270125049/Teo-En-Ming

-----END EMAIL SIGNATURE-----
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From M.Roos at f1-outsourcing.eu Mon Aug 31 14:28:53 2020
From: M.Roos at f1-outsourcing.eu (Marc Roos)
Date: Mon, 31 Aug 2020 16:28:53 +0200
Subject: Testing
In-Reply-To: 
Message-ID: <"H00000710017aa24.1598884133.sx.f1-outsourcing.eu*"@MHS>

Why don't you block the whole compute cloud of amazon?
ec2-3-21-30-127.us-east-2.compute.amazonaws.com

-----Original Message-----

To: openssl-users at openssl.org
Subject: Testing

-- 
-----BEGIN EMAIL SIGNATURE-----

The Gospel for all Targeted Individuals (TIs):

[The New York Times] Microwave Weapons Are Prime Suspect in Ills of U.S.
Embassy Workers

Link:
https://www.nytimes.com/2018/09/01/science/sonic-attack-cuba-microwave.html

********************************************************************************************

Singaporean Mr. Turritopsis Dohrnii Teo En Ming's Academic
Qualifications as at 14 Feb 2019 and refugee seeking attempts at the
United Nations Refugee Agency Bangkok (21 Mar 2017), in Taiwan (5 Aug
2019) and Australia (25 Dec 2019 to 9 Jan 2020):

[1] https://tdtemcerts.wordpress.com/

[2] https://tdtemcerts.blogspot.sg/

[3] https://www.scribd.com/user/270125049/Teo-En-Ming

-----END EMAIL SIGNATURE-----

From openssl at jordan.maileater.net Mon Aug 31 16:30:41 2020
From: openssl at jordan.maileater.net (Jordan Brown)
Date: Mon, 31 Aug 2020 16:30:41 +0000
Subject: Cert hot-reloading
In-Reply-To: 
References: <58FWFQ.39F8042CDQTO1@xoe.solutions>
 <20200830232847.GH44511@straasha.imrryr.org>
Message-ID: <01010174455b3de0-98c1a9d4-8626-4acf-8588-2fe98360cc36-000000@us-west-2.amazonses.com>

On 8/30/2020 7:24 PM, David Arnold wrote:
> Hot-plugging the pointer seems to force atomicity considerations
> down-stream, which might be educationally a good thing for openssl to
> press for. It also addresses Jordan's use case, for however application
> specific it might be. For compat reasons, a "legacy" mode which creates
> a new context for *new* connections might be the necessary "bridge" into
> that transformation.

I haven't particularly thought about the implementation; that seemed like
Just Work.  There might need to be reference counts on the structures
involved so that they can be safely "freed" while they are in active use by
another thread.
Simply swapping out a pointer isn't going to be enough because you can't
know whether another thread already picked up a copy of that pointer, and
so you can't know when you can free the old structure.  As I think about it
more, there might be a challenge fitting such a mechanism into the existing
functions.

-- 
Jordan Brown, Oracle ZFS Storage Appliance, Oracle Solaris
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From openssl at jordan.maileater.net Mon Aug 31 16:33:33 2020
From: openssl at jordan.maileater.net (Jordan Brown)
Date: Mon, 31 Aug 2020 16:33:33 +0000
Subject: Cert hot-reloading
In-Reply-To: 
References: <58FWFQ.39F8042CDQTO1@xoe.solutions>
 <0101017441e271f3-92a7df7b-3354-4663-98ef-6ac8b9691939-000000@us-west-2.amazonses.com>
Message-ID: <01010174455ddba5-b97e9c7c-9cc5-4638-ab58-3e5101351819-000000@us-west-2.amazonses.com>

On 8/30/2020 10:26 PM, Kyle Hamilton wrote:
> Could this be dealt with by the simple removal of any caching layer
> between an SSL_CTX and a directory processed by openssl c_rehash?
> Would reading the filesystem on every certificate verification be too
> heavy for your use case?

That might well be sufficient.  Rereading the file would probably be
low-cost compared to the network connection.

-- 
Jordan Brown, Oracle ZFS Storage Appliance, Oracle Solaris
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From openssl at jordan.maileater.net Mon Aug 31 16:46:32 2020
From: openssl at jordan.maileater.net (Jordan Brown)
Date: Mon, 31 Aug 2020 16:46:32 +0000
Subject: Cert hot-reloading
In-Reply-To: 
References: <58FWFQ.39F8042CDQTO1@xoe.solutions>
 <0101017441e271f3-92a7df7b-3354-4663-98ef-6ac8b9691939-000000@us-west-2.amazonses.com>
Message-ID: <010101744569c103-939d1966-439e-4e7c-a7eb-35ac7e83d519-000000@us-west-2.amazonses.com>

On 8/31/2020 6:29 AM, Karl Denninger wrote:
>
> I'm trying to figure out why you want to replace the context in an
> *existing* connection that is currently passing data rather than for
> new ones.
>

No, not for existing connections, just for new ones using the same context.

Note that I'm interested in the client case, not the server case - in the
list of trusted certificates set up with SSL_CTX_load_verify_locations().
(Though the same issues, and maybe more, would apply to a server that is
verifying client certificates.)

The hypothetical application does something like:

ctx = set_up_ctx();
forever {
    ...
    connection = new_connection(ctx);
    ...
    close_connection(connection)
    ...
}

The application could certainly create the context before making each
connection, but probably doesn't - after all, the whole idea of contexts is
to make one and then use it over and over again.

It's been a very long time since I last really looked at this[*], but I
believe that I experimentally verified that simply deleting a certificate
from the file system was not enough to make future connections refuse that
certificate.  *Adding* a certificate to the directory works, because
there's no negative caching, but *removing* one doesn't work.

[*] Which tells you that although my purist sense says that it would be
nice to have and would improve correctness, customers aren't lined up
waiting for it.

-- 
Jordan Brown, Oracle ZFS Storage Appliance, Oracle Solaris
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From jb-openssl at wisemo.com Mon Aug 31 22:37:14 2020
From: jb-openssl at wisemo.com (Jakob Bohm)
Date: Tue, 1 Sep 2020 00:37:14 +0200
Subject: Testing
In-Reply-To: <"H00000710017aa24.1598884133.sx.f1-outsourcing.eu*"@MHS>
References: <"H00000710017aa24.1598884133.sx.f1-outsourcing.eu*"@MHS>
Message-ID: 

On 2020-08-31 16:28, Marc Roos wrote:
> Why don't you block the whole compute cloud of amazon?
> ec2-3-21-30-127.us-east-2.compute.amazonaws.com

Please note, that at least our company hosts a secondary MX in the EC2
cloud, with the option to direct my posts to the list through that
server.  However proper PTR record, SPF, DKIM and DMARC checks should
all pass for such posts.

Thus rather than blindly blacklisting the Amazon hosting service, maybe
make the OpenSSL mail server check those things to catch erroneous
transmissions from web servers.

>
> -----Original Message-----
>
> To: openssl-users at openssl.org
> Subject: Testing
>
>
>
> --
> -----BEGIN EMAIL SIGNATURE-----
>
> The Gospel for all Targeted Individuals (TIs):
>
> [The New York Times] Microwave Weapons Are Prime Suspect in Ills of U.S.
> Embassy Workers
>
> Link:
> https://www.nytimes.com/2018/09/01/science/sonic-attack-cuba-microwave.html
>
> ************************************************************************
> ********************
>
> Singaporean Mr. Turritopsis Dohrnii Teo En Ming's Academic
> Qualifications as at 14 Feb 2019 and refugee seeking attempts at the
> United Nations Refugee Agency Bangkok (21 Mar 2017), in Taiwan (5 Aug
> 2019) and Australia (25 Dec 2019 to 9 Jan 2020):
>
> [1] https://tdtemcerts.wordpress.com/
>
> [2] https://tdtemcerts.blogspot.sg/
>
> [3] https://www.scribd.com/user/270125049/Teo-En-Ming
>
> -----END EMAIL SIGNATURE-----
>
>

Enjoy

Jakob
-- 
Jakob Bohm, CIO, Partner, WiseMo A/S. https://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark. Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded

From openssl-users at dukhovni.org Mon Aug 31 23:52:39 2020
From: openssl-users at dukhovni.org (Viktor Dukhovni)
Date: Mon, 31 Aug 2020 19:52:39 -0400
Subject: Cert hot-reloading
In-Reply-To: 
References: <58FWFQ.39F8042CDQTO1@xoe.solutions>
 <20200830232847.GH44511@straasha.imrryr.org>
Message-ID: <20200831235239.GK44511@straasha.imrryr.org>

On Sun, Aug 30, 2020 at 07:54:34PM -0500, Kyle Hamilton wrote:

> I'm not sure I can follow the "in all cases it's important to keep the key
> and cert in the same file" argument, particularly in line with openat()
> usage on the cert file after privilege to open the key file has been
> dropped.  I agree that key/cert staleness is important to address in some
> manner, but I don't think it's necessarily appropriate here.

Well, the OP had in mind very frequent certificate chain rollover, where
presumably, in at least some deployments, the key would also roll over
frequently along with the cert.

If the form of the key/cert rollover is to place new keys and certs into
files, then *atomicity* of these updates becomes important, so that
applications loading a new key+chain pair see a matching key and
certificate and not some cert unrelated to the key.

Thus, e.g., Postfix now supports loading both the key and the cert
directly from the same open file, reading both sequentially, without
racing atomic file replacements when reopening the file separately to
reach keys and certs.  If we're going to automate things more, and
exercise them with much higher frequency, the automation needs to be
robust!

Note that nothing prevents applications that have separate configuration
for the key and cert locations from opening the same file twice.  If
they're using the normal OpenSSL PEM read key/cert routines, the key is
ignored when reading certs and the certs are ignored when reading the
key.  Therefore, the single-file model is unconditionally superior in
this context.
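The sequential-read behaviour described above can be demonstrated with the openssl command-line tools. This is a minimal sketch, not from the thread: the throwaway key, subject name, and file names are all illustrative. Each PEM reader simply skips blocks of a type it is not looking for, which is what makes the combined single file workable.

```shell
set -e
cd "$(mktemp -d)"

# Throwaway key and self-signed certificate (names and subject are illustrative).
openssl req -x509 -newkey rsa:2048 -nodes -keyout key.pem -out cert.pem \
        -subj "/CN=example.test" -days 1 2>/dev/null

# One file: key first, then the certificate chain.
cat key.pem cert.pem > combined.pem

# The key reader finds the PRIVATE KEY block and ignores the certificate...
openssl pkey -in combined.pem -noout

# ...and the certificate reader skips the key block to find the CERTIFICATE.
openssl x509 -in combined.pem -noout -subject
```

Because both readers tolerate the other's blocks, pointing an application's separate "key file" and "cert file" settings at the same combined file behaves the same way.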
Yes, some tools (e.g. certbot) don't yet do the right thing and atomically
update a single file with both the key and the obtained certs.  This
problem can be solved.  We're talking about new capabilities here, and
don't need to adhere to outdated process models.

-- 
Viktor.
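For tooling that wants to perform the atomic single-file update described in this thread, the usual pattern is write-then-rename. The sketch below is illustrative only (placeholder PEM bodies, invented filenames); the point is that rename(2) within one directory is atomic on POSIX filesystems, so a reader opening server.pem sees either the complete old key+chain or the complete new one, never a mix.

```shell
set -e
cd "$(mktemp -d)"

# Existing live file (placeholder contents standing in for key + chain).
printf 'old key and chain\n' > server.pem

# Write the new key+chain to a temp file in the SAME directory
# (rename is only atomic within one filesystem), with tight
# permissions since it holds a private key.
umask 077
{
    printf -- '-----BEGIN PRIVATE KEY-----\n(new key)\n-----END PRIVATE KEY-----\n'
    printf -- '-----BEGIN CERTIFICATE-----\n(new chain)\n-----END CERTIFICATE-----\n'
} > server.pem.tmp

# Atomically replace the live file.
mv -f server.pem.tmp server.pem
```

In production one would also fsync the temporary file (and ideally its directory) before the rename, so that a crash cannot leave a truncated file behind.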