From paul at mad-scientist.net Sun Nov 1 16:16:24 2020 From: paul at mad-scientist.net (Paul Smith) Date: Sun, 01 Nov 2020 11:16:24 -0500 Subject: OpenSSL 1.1.1h not detecting expired certs Message-ID: I have a server linked (statically) with OpenSSL 1.1.1g (GNU/Linux, 64bit). I built everything myself, I'm not using any system libraries. I have a test in my test suite that constructs an expired self-signed cert and attempts to use it to connect to the server. When I link my server with OpenSSL 1.1.1g, it is detected properly and I see in the log (this is a construct of various openssl error info): SSL_accept failed: error:14094415:SSL routines:ssl3_read_bytes:sslv3 alert certificate expired::0:SSL alert number 45 If I leave EVERYTHING the same about my environment and re-link the server with OpenSSL 1.1.1h instead (just re-linking the binaries with a new static libssl libcrypto), then this expired certificate is no longer detected by the server and the connection succeeds. To be sure I also tried recompiling with the 1.1.1h headers and see the same behavior. I can see that the expiration date is indeed wrong: $ openssl x509 -enddate -noout -in expired/trustStore.pem notAfter=Oct 27 15:58:50 2020 GMT but this is not noticed by my server. Does anyone have any ideas about what I might check to figure out what's happening here? The release notes discuss enabling MinProtocol and MaxProtocol; I do not use these and in fact I don't invoke SSL_CONF_*() at all. Is this an issue? Should I do this? From paul at mad-scientist.net Sun Nov 1 16:59:01 2020 From: paul at mad-scientist.net (Paul Smith) Date: Sun, 01 Nov 2020 11:59:01 -0500 Subject: OpenSSL 1.1.1h not detecting expired certs In-Reply-To: References: Message-ID: On Sun, 2020-11-01 at 11:16 -0500, Paul Smith wrote: > Does anyone have any ideas about what I might check to figure out > what's happening here? 
The release notes discuss enabling > MinProtocol and MaxProtocol; I do not use these and in fact I don't > invoke SSL_CONF_*() at all. Is this an issue? Should I do this? Hm. OK, I checked my code and I wasn't using SSL_CONF_*(), but I was using this after I created my SSL_CTX: _ctxt = SSL_CTX_new(TLS_method()); SSL_CTX_set_min_proto_version(_ctxt, TLS1_2_VERSION); Does that no longer work properly for some reason? If I replace the above with this: _ctxt = SSL_CTX_new(TLS_method()); SSL_CONF_CTX* cctxt = SSL_CONF_CTX_new(); SSL_CONF_CTX_set_ssl_ctx(cctxt, _ctxt); SSL_CONF_cmd(cctxt, "MinProtocol", "TLSv1.2"); Now it works. Is this a bug? Or was I just never using the interface properly? If I switch to the new method of configuration, it's not clear to me whether or not I need to preserve the SSL_CONF_CTX structure after the above code bit, as long as the SSL_CTX is there, or if I can free it immediately afterward. Based on the way it's used it seems like it only needs to exist as long as I need to configure the SSL_CTX, then it can go away and the SSL_CTX can live on. From openssl-users at dukhovni.org Sun Nov 1 20:27:04 2020 From: openssl-users at dukhovni.org (Viktor Dukhovni) Date: Sun, 1 Nov 2020 15:27:04 -0500 Subject: OpenSSL 1.1.1h not detecting expired certs In-Reply-To: References: Message-ID: <20201101202704.GM1464@straasha.imrryr.org> On Sun, Nov 01, 2020 at 11:16:24AM -0500, Paul Smith wrote: > I have a test in my test suite that constructs an expired self-signed > cert and attempts to use it to connect to the server. When I link my > server with OpenSSL 1.1.1g, it is detected properly and I see in the > log (this is a construct of various openssl error info): > > SSL_accept failed: error:14094415:SSL routines:ssl3_read_bytes:sslv3 > alert certificate expired::0:SSL alert number 45 Just to make sure I've understood you correctly, the certificate in question is used as a client certificate, right? 
And the server is both soliciting and *requiring* client certificates? What software is the client using? Is the (partly) negotiated protocol TLS 1.2 or TLS 1.3?

If the client uses some random self-signed certificate, why does it matter whether it is expired or not? It is untrusted regardless... Or is the server configured to explicitly trust this self-signed certificate, but you want to do that only until "expiration"?

What verify callback, if any, are you using in your server?

> If I leave EVERYTHING the same about my environment and re-link the
> server with OpenSSL 1.1.1h instead (just re-linking the binaries with a
> new static libssl libcrypto), then this expired certificate is no
> longer detected by the server and the connection succeeds.

It would be helpful if you posted the client public certificate (no need for the private key). Details of its construction can affect the verification failure mode.

-- 
Viktor.

From mahendra.sp at gmail.com Mon Nov 2 09:00:33 2020
From: mahendra.sp at gmail.com (Mahendra SP)
Date: Mon, 2 Nov 2020 14:30:33 +0530
Subject: Decrypt error when using openssl 1.1.1b during SSL handshake
In-Reply-To: 
References: <27e3795f-e587-8149-dbc3-ee4c4270b233@openssl.org>
Message-ID: 

Hi Matt,

Error is reported from this:
FILE:../openssl-1.1.1b/ssl/statem/statem_srvr.c, FUNCTION:415, LINE:3055, reason=147, alert=51

We see that the hardware is returning 48 bytes. Even if the decrypted premaster data is correct, openssl is expecting more than 48 bytes in return. This check fails because decrypt_len is 48:

decrypt_len < 11 + SSL_MAX_MASTER_KEY_LENGTH

We compared the data returned when software is used. Decrypt_len is 256 bytes and the last 48 bytes are the actual premaster secret. Also, openssl checks whether the first byte is 0 and the second byte is 2.
We are trying to rectify this issue in hardware and return the correct data.

Please suggest if you have any comments for the above info.
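(For illustration: the server-side check described above can be sketched in C. This is a simplified, NON-constant-time illustration of the RSAES-PKCS1-v1_5 shape the engine must preserve, not OpenSSL's actual code; the function name is invented for the example.)

```c
#include <stddef.h>

#define SSL_MAX_MASTER_KEY_LENGTH 48

/* Simplified, NON-constant-time sketch of the check discussed above.
 * The engine must return the full RSA-sized buffer (e.g. 256 bytes),
 * still containing the PKCS#1 v1.5 padding:
 *   0x00 0x02 <nonzero padding string> 0x00 <48-byte premaster>   */
static int premaster_looks_valid(const unsigned char *buf, size_t decrypt_len)
{
    size_t i;

    /* 11 = 0x00 + 0x02 + minimum 8 padding bytes + 0x00 separator */
    if (decrypt_len < 11 + SSL_MAX_MASTER_KEY_LENGTH)
        return 0;                       /* a bare 48-byte result fails here */
    if (buf[0] != 0 || buf[1] != 2)
        return 0;
    /* the premaster is the last 48 bytes; the byte just before it
     * must be the 0x00 separator */
    i = decrypt_len - SSL_MAX_MASTER_KEY_LENGTH - 1;
    if (buf[i] != 0)
        return 0;
    for (i = 2; i < decrypt_len - SSL_MAX_MASTER_KEY_LENGTH - 1; i++)
        if (buf[i] == 0)
            return 0;                   /* padding string must be nonzero */
    return 1;
}
```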
Thanks Mahendra On Fri, Oct 30, 2020 at 7:50 PM Matt Caswell wrote: > > > On 30/10/2020 11:22, Mahendra SP wrote: > > Hi Matt, > > > > Thank you for the inputs. > > Yes, we had encountered the padding issue initially. But we added > > support for RSA_NO_PADDING in our hardware. That's why we are able to > > successfully decrypt the premaster secret in the hardware. > > Hence the issue does not seem to be related to padding. We have > > confirmed this by comparing the premaster secret on both client and > > server and they are the same. > > Ok, good. > > > > > We suspect in this case, verification of "encrypted handshake message" > > failure is happening. > > It's possible. It would be helpful if you can get more information from > the error stack on the server, e.g. by using ERR_print_errors_fp() or > something similar. I'm particularly interested in identifying the source > file and line number where the decrypt_error is coming from. Printing > the error stack should give us that information. There are a number of > places that a "decrypt error" can occur and it would be helpful to > identify which one is the cause of the problem. > > > > We understand constant_time_xx APIs get used for CBC padding validation. > > Will this have any dependency on the compiler optimization or asm > > flags? > > CBC padding validation is fairly independent of anything to do with RSA, > so I think its unlikely to be the culprit here. Of course sometimes > compiler optimization/asm issues do occur so it can't be ruled out > entirely - but it's not where I would start looking. > > > Will this issue be seen if hardware takes more time for the > > operation? > > > > No. Constant time here just means that we implement the code without any > branching based on secret data (e.g. no "if" statements/while loops etc > based on secret dependent data). It has very little to do with how long > something actually takes to process. 
> >
> > Here is the snippet of the Wireshark capture where our device, acting as
> > server, closes the connection with decryption failure.
>
> Thanks. To narrow it down further I need to figure out which line of
> code the decrypt_error is coming from as described above.
>
> Matt
>
> > If you need any further info, please let us know.
> > image.png
> > Thanks
> > Mahendra
> >
> > Please suggest.
> >
> > On Fri, Oct 30, 2020 at 3:32 PM Matt Caswell wrote:
> >
> > On 30/10/2020 09:18, Mahendra SP wrote:
> > > Hi All.
> > >
> > > We have upgraded openssl version to 1.1.1b.
> > >
> > > With this, we are seeing a decryption error during the SSL handshake in
> > > the scenario explained below. Our device acts as an SSL server.
> > >
> > > We have external hardware to offload RSA private key operations using
> > > the engine.
> > > Decryption of the pre-master secret is done using hardware and is
> > > successful. We compared the pre-master secret on both server and client
> > > and they match.
> > > However, we see that the SSL handshake fails with "decrypt error (51)"
> > > and alert number 21. Verifying the encrypted Finished message on the
> > > server side fails.
> > >
> > > This issue does not happen with software performing the RSA private key
> > > operations.
> > >
> > > Can someone help with the reason for the decryption failure? Below are
> > > the compiler and processor details. It is 64 bit.
> > > arm-linux-gnueabihf-gcc -march=armv7ve -mthumb -mfpu=neon -mfloat-abi=hard
> >
> > Potentially this is related to the use of PSS padding in libssl, which is
> > mandated in TLSv1.3. The TLSv1.3 spec also requires its use even in
> > TLSv1.2.
> >
> > The PSS padding is implemented within the EVP layer. Ultimately EVP
> > calls the function RSA_private_encrypt() with padding set to
> > RSA_NO_PADDING.
> >
> > Assuming your engine is implemented via a custom RSA_METHOD, does it
> > support RSA_private_encrypt() with RSA_NO_PADDING?
> > If not, this is likely to be the problem.
> >
> > More discussion of this is here:
> >
> > https://github.com/openssl/openssl/issues/7968
> >
> > Also related is the recent discussion on this list about the CAPI engine
> > and this issue:
> >
> > https://github.com/openssl/openssl/issues/8872
> >
> > Matt

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From matt at openssl.org Mon Nov 2 11:19:17 2020
From: matt at openssl.org (Matt Caswell)
Date: Mon, 2 Nov 2020 11:19:17 +0000
Subject: Decrypt error when using openssl 1.1.1b during SSL handshake
In-Reply-To: 
References: <27e3795f-e587-8149-dbc3-ee4c4270b233@openssl.org>
Message-ID: 

On 02/11/2020 09:00, Mahendra SP wrote:
> Hi Matt,
>
> Error is reported from this:
> FILE:../openssl-1.1.1b/ssl/statem/statem_srvr.c, FUNCTION:415,
> LINE:3055, reason=147, alert=51
>
> We see that hardware is returning 48 bytes. Even if the decrypted
> premaster data is correct, openssl is expecting more than 48 bytes in
> return.
> This check fails as decrypt_len is 48.
> decrypt_len < 11 + SSL_MAX_MASTER_KEY_LENGTH

Just above this line we call RSA_private_decrypt() with padding set to RSA_NO_PADDING. We expect the output *once padding is removed* to be 48 bytes. But RSA_private_decrypt() should be returning the data *with the padding included* (because we called it with RSA_NO_PADDING). The minimum valid padding length is 11 bytes (hence the check above).

So it looks to me like the engine is incorrectly ignoring the RSA_NO_PADDING, and stripping the padding anyway.

Matt

>
> We compared the data returned when software is used. Decrypt_len is 256
> bytes and the last 48 bytes are actual premaster secret. Also, openssl
> checks for if the first byte is 0 and second byte is 2.
> We are trying to rectify this issue in hardware and return the correct data.
>
> Please suggest if you have any comments for the above info.
From sanperumalv at gmail.com Mon Nov 2 14:57:03 2020
From: sanperumalv at gmail.com (perumal v)
Date: Mon, 2 Nov 2020 20:27:03 +0530
Subject: openssl ocsp(responder) cmd is giving error for ipv6
Message-ID: 

Hi All,

I tried openssl ocsp for ipv6 and got an error message from the OCSP command.

IPv6 address with the "[]" brackets:

openssl ocsp -url http://[2001:DB8:64:FF9B:0:0:A0A:285E]:8090/ocsp-100/ -issuer /etc/cert/ipsec/cert0/ca.crt -CAfile /etc/cert/ipsec/cert0/ca.crt -cert /etc/cert/ipsec/cert0/cert.crt

Error creating connect BIO
140416130504448:error:20088081:BIO routines:BIO_parse_hostserv:ambiguous host or service:crypto/bio/b_addr.c:547:

IPv6 address without the "[]" brackets:

openssl ocsp -url http://2001:DB8:64:FF9B:0:0:A0A:285E:8090/ocsp-100/ -issuer /etc/cert/ipsec/cert0/ca.crt -CAfile /etc/cert/ipsec/cert0/ca.crt -cert /etc/cert/ipsec/cert0/cert.crt

Error connecting BIO
Error querying OCSP responder

I am using this openssl version:

openssl version
OpenSSL 1.1.1h  22 Sep 2020

Can anybody help me to find out if this is the correct way to use it?

Thanks,
Perumal.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From angus at magsys.co.uk Mon Nov 2 16:51:00 2020
From: angus at magsys.co.uk (Angus Robertson - Magenta Systems Ltd)
Date: Mon, 2 Nov 2020 16:51 +0000 (GMT Standard Time)
Subject: Project direction
In-Reply-To: <5AC4D5D6-F58C-4C97-BAD6-2F6D2F73BEF7@oracle.com>
Message-ID: 

> The idea being that supporting existing users means not changing
> the existing API, whereas catering to new users means working
> towards a new fresh consistent API.

OpenSSL has been in use for getting on for 20 years (I think) and may still be in use in another 20 years, so it cannot stand still to make life easy for old projects; it has to evolve for new projects, as it does. But any changes should be clearly documented and should not require the use of third party sites like ABI to discover new APIs and changes to old ones. Major changes are usually in the changelog, but can be hard to find when updating from a much earlier release. There should really be detailed articles about upgrading from any long term release to the latest release, with simple lists of all exports or macros removed or added, or whose use has changed.

Also, there is an assumption that OpenSSL is only used by other C developers, through the use of public macros that are not usable in any other language. BoringSSL replaced macros with exports and OpenSSL should consider doing the same. Currently, changing a macro to an export is rarely documented, so it is hard for those who have rewritten the macros in other languages to know something will be broken.

There needs to be more task-oriented documentation, for instance collecting the APIs needed to create a CSR or certificate, using APIs rather than the command line tools, which is where much of the documentation currently resides. For instance, there is no documentation about building a stack of extensions to add SANs to requests and certificates, so a lot of research is needed to add SANs to a certificate.
Angus

From mcr at sandelman.ca Mon Nov 2 18:19:49 2020
From: mcr at sandelman.ca (Michael Richardson)
Date: Mon, 02 Nov 2020 13:19:49 -0500
Subject: Project direction
In-Reply-To: 
References: 
Message-ID: <5676.1604341189@localhost>

Angus Robertson - Magenta Systems Ltd wrote:
> Also, there is an assumption OpenSSL is only used by other C developers,
> by the use of public macros that are not usable in any other language.
> BoringSSL replaced macros with exports and OpenSSL should consider
> doing the same.

This.

> There needs to be more task oriented documentation, for instance
> collecting the APIs needed to create a CSR or certificate, using APIs
> rather than command line tools which is where much of the documentation
> currently resides. For instance there is no documentation about
> building a stack of extensions to add SANs to requests and certificates
> so a lot of research is needed to add SANs to a certificate.

My claim is that much of the "applications" should be removed from the core system and re-implemented in a cleaner way using the APIs, i.e. moved into a separate git repo with its own release schedule. They should serve as exemplars for using the APIs, which they often are not. In particular, the way that many things are only doable via the "configuration file" is a serious problem.

Yes, the applications are used as part of the tests, but that does not mean they couldn't be pulled in from a separate GitHub repo. Could the Perl wrapper be used for more? Could it be used exclusively? (No calls out to "openssl ca" to generate a certificate...)

The tests do not serve as particularly good exemplars because of the mix of C and Perl; sometimes the Perl is just running some .c code that was compiled... sometimes not.

-- 
] Never tell me the odds!
| ipv6 mesh networks [ ] Michael Richardson, Sandelman Software Works | IoT architect [ ] mcr at sandelman.ca http://www.sandelman.ca/ | ruby on rails [ -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 487 bytes Desc: not available URL: From tomiii at tomiii.com Tue Nov 3 00:55:00 2020 From: tomiii at tomiii.com (Thomas Dwyer III) Date: Mon, 2 Nov 2020 16:55:00 -0800 Subject: PRNG not available when multiple providers are configured? Message-ID: I'm having trouble getting RAND_status() to return 1 when my openssl.cnf has both the default provider and the fips provider configured at the same time: openssl_conf = openssl_init [openssl_init] providers = provider_sect [provider_sect] default = default_sect fips = fips_sect [default_sect] activate = 1 .include /conf/openssl/fips.cnf If I remove either default or fips from [provider_sect] then RAND_status() returns 1. If I leave them both specified there, RAND_status() always returns 0. Is this the expected behavior or am I doing something wrong? I understand that I must specify properties when fetching algorithms in order to get deterministic behavior with multiple providers loaded. Is there an analogous API for the PRNG that I'm overlooking? Interestingly, setting activate=0 for either provider is not sufficient to work around this issue. Thanks, Tom.III -------------- next part -------------- An HTML attachment was scrubbed... URL: From Michael.Wojcik at microfocus.com Tue Nov 3 15:09:10 2020 From: Michael.Wojcik at microfocus.com (Michael Wojcik) Date: Tue, 3 Nov 2020 15:09:10 +0000 Subject: openssl ocsp(responder) cmd is giving error for ipv6 In-Reply-To: References: Message-ID: > From: openssl-users On Behalf Of perumal v > Sent: Monday, 2 November, 2020 07:57 > I tried openssl ocsp for ipv6 and got the error message for the OCSP. > openssl ocsp -url http://[2001:DB8:64:FF9B:0:0:A0A:285E]:8090/ocsp-100/ -issuer ... 
> Error creating connect BIO
> 140416130504448:error:20088081:BIO routines:BIO_parse_hostserv:ambiguous host or
> service:crypto/bio/b_addr.c:547:

A quick look at the code suggests this is a bug in OpenSSL. OCSP_parse_url removes the square brackets from a literal IPv6 address in the URL, but BIO_parse_hostserv requires they be present. But I didn't look closely, so I'm not entirely sure that's the issue.

> IPv6 address without the "[]" bracket.

The square brackets are required by the URL specification. There's no point testing without them.

-- 
Michael Wojcik

From matt at openssl.org Tue Nov 3 15:13:40 2020
From: matt at openssl.org (Matt Caswell)
Date: Tue, 3 Nov 2020 15:13:40 +0000
Subject: PRNG not available when multiple providers are configured?
In-Reply-To: 
References: 
Message-ID: <959d8d79-1139-fdc0-01a3-06e3e7bbe073@openssl.org>

On 03/11/2020 00:55, Thomas Dwyer III wrote:
> I'm having trouble getting RAND_status() to return 1 when my openssl.cnf
> has both the default provider and the fips provider configured at the
> same time:
>
>         openssl_conf = openssl_init
>
>         [openssl_init]
>         providers = provider_sect
>
>         [provider_sect]
>         default = default_sect
>         fips = fips_sect
>
>         [default_sect]
>         activate = 1
>
>         .include /conf/openssl/fips.cnf
>
> If I remove either default or fips from [provider_sect] then
> RAND_status() returns 1. If I leave them both specified there,
> RAND_status() always returns 0. Is this the expected behavior or am I
> doing something wrong? I understand that I must specify properties when
> fetching algorithms in order to get deterministic behavior with multiple
> providers loaded. Is there an analogous API for the PRNG that I'm
> overlooking?
>
> Interestingly, setting activate=0 for either provider is not sufficient
> to work around this issue.

I tested this out and was able to replicate your behaviour.
The reasons are a little complicated (see below) but the TL;DR summary is that there is an error in your config file. The ".include" line should specify a config file relative to OPENSSLDIR (or OPENSSL_CONF_INCLUDE if it is set). It cannot be an absolute path, and hence fips.cnf is not being found.

I've seen this error a few times now so I'm thinking that we should perhaps allow absolute paths. I'm not sure what the reason for disallowing them was.

The reason it works if you comment out the "default" line is because that means the only provider left is the FIPS one. But the config line for that is faulty and therefore activating it fails. Ultimately we have not successfully activated any provider. When you call RAND_status() it will attempt to fetch the RAND implementation and find that no providers have been activated. In this case we fall back and automatically activate the default provider. Hence you end up with RAND_status() still working.

If you comment out the "fips" line then it works because it doesn't attempt to do anything with the fips provider, successfully activates the default provider, and hence RAND_status() works as expected.

If you have both lines in the config file then it first successfully activates the default provider. It next attempts to activate the fips provider and fails. The way the config code works is that if any of the configured providers fail to activate then it backs out and deactivates all of them. At this point we're in a situation where a provider has been successfully activated and then deactivated again.
Matt From matt at openssl.org Tue Nov 3 15:29:21 2020 From: matt at openssl.org (Matt Caswell) Date: Tue, 3 Nov 2020 15:29:21 +0000 Subject: PRNG not available when multiple providers are configured? In-Reply-To: <959d8d79-1139-fdc0-01a3-06e3e7bbe073@openssl.org> References: <959d8d79-1139-fdc0-01a3-06e3e7bbe073@openssl.org> Message-ID: <775ec543-7e11-1d32-6653-dde50eddbce6@openssl.org> On 03/11/2020 15:13, Matt Caswell wrote: > I've seen this error a few times now so I'm thinking that we should > perhaps allow absolute paths. I'm not sure what the reason for > disallowing them was. I raised this issue about this: https://github.com/openssl/openssl/issues/13302 > We really should have a way of getting more verbose output in the event > of config issues like this. And for this one I've raised this: https://github.com/openssl/openssl/issues/13303 Matt From tmraz at redhat.com Tue Nov 3 18:03:37 2020 From: tmraz at redhat.com (Tomas Mraz) Date: Tue, 03 Nov 2020 19:03:37 +0100 Subject: PRNG not available when multiple providers are configured? In-Reply-To: <959d8d79-1139-fdc0-01a3-06e3e7bbe073@openssl.org> References: <959d8d79-1139-fdc0-01a3-06e3e7bbe073@openssl.org> Message-ID: On Tue, 2020-11-03 at 15:13 +0000, Matt Caswell wrote: > > The reasons are a little complicated (see below) but the TL;DR > summary > is that there is an error in your config file. The ".include" line > should specify a config file relative to OPENSSLDIR (or > OPENSSL_CONF_INCLUDE if it is set). It cannot be an absolute path, > and > hence fips.cnf is not being found. > > I've seen this error a few times now so I'm thinking that we should > perhaps allow absolute paths. I'm not sure what the reason for > disallowing them was. This is actually a regression. The absolute paths worked fine in 1.1.1 but it is also not clear to me why an absolute path would not work even with the current master unless you set OPENSSL_CONF_INCLUDE. 
The OPENSSL_CONF_INCLUDE is unconditionally prepended to the include path, so that is the reason why absolute paths do not work properly if you set OPENSSL_CONF_INCLUDE.

-- 
Tomáš Mráz
No matter how far down the wrong road you've gone, turn back.
                                              Turkish proverb
[You'll know whether the road is wrong if you carefully listen to your conscience.]

From tomiii at tomiii.com Tue Nov 3 18:41:58 2020
From: tomiii at tomiii.com (Thomas Dwyer III)
Date: Tue, 3 Nov 2020 10:41:58 -0800
Subject: PRNG not available when multiple providers are configured?
In-Reply-To: <959d8d79-1139-fdc0-01a3-06e3e7bbe073@openssl.org>
References: <959d8d79-1139-fdc0-01a3-06e3e7bbe073@openssl.org>
Message-ID: 

On Tue, Nov 3, 2020 at 7:13 AM Matt Caswell wrote:
>
> On 03/11/2020 00:55, Thomas Dwyer III wrote:
> > I'm having trouble getting RAND_status() to return 1 when my openssl.cnf
> > has both the default provider and the fips provider configured at the
> > same time:
> >
> > openssl_conf = openssl_init
> >
> > [openssl_init]
> > providers = provider_sect
> >
> > [provider_sect]
> > default = default_sect
> > fips = fips_sect
> >
> > [default_sect]
> > activate = 1
> >
> > .include /conf/openssl/fips.cnf
> >
> > If I remove either default or fips from [provider_sect] then
> > RAND_status() returns 1. If I leave them both specified there,
> > RAND_status() always returns 0. Is this the expected behavior or am I
> > doing something wrong? I understand that I must specify properties when
> > fetching algorithms in order to get deterministic behavior with multiple
> > providers loaded. Is there an analogous API for the PRNG that I'm
> > overlooking?
> >
> > Interestingly, setting activate=0 for either provider is not sufficient
> > to work around this issue.
>
> I tested this out and was able to replicate your behaviour.
>
> The reasons are a little complicated (see below) but the TL;DR summary
> is that there is an error in your config file.
The ".include" line > should specify a config file relative to OPENSSLDIR (or > OPENSSL_CONF_INCLUDE if it is set). It cannot be an absolute path, and > hence fips.cnf is not being found. > This explanation does not match my observations. strace clearly shows my fips.cnf getting opened and read when my openssl.cnf has an absolute path. Likewise, strace shows stat64("fips.cnf", ...) failing (and thus no subsequent open() call) when I use a relative path. Furthermore, the documentation at https://www.openssl.org/docs/manmaster/man5/config.html says this should be an absolute path. That said, see below.. > I've seen this error a few times now so I'm thinking that we should > perhaps allow absolute paths. I'm not sure what the reason for > disallowing them was. > > The reason it works if you comment out the "default" line is because > that means the only provider left is the FIPS one. But the config line > for that is faulty and therefore activating it fails. Ultimately we have > not succesfully activated any provider. When you call RAND_status() it > will attempt to fetch the RAND implementation and find that no providers > have been activated. In this case we fallback and automatically activate > the default provider. Hence you end up with RAND_status() still working. > > If you comment out the "fips" line then it works because it doesn't > attempt to do anything with the fips provider, successfully activates > the default provider, and hence RAND_status() works as expected. > > If you have both lines in the config file then it first successfully > activates the default provider. It next attempts to activate the fips > provider and fails. The way the config code works is that if any of the > configured providers fail to activate then it backs out and deactivates > all of them. At this point we're in a situation where a provider has > been successfully activated and then deactivated again. 
The fallback > activation of the default provider only kicks in if you've not attempted > to activate any providers by the time you first need one. Therefore the > default provider doesn't activate as a fallback either. Ultimately you > end up with no active providers and RAND_status() fails. > Ah ha! This explanation makes sense to me and indeed pointed me at the real problem. I had recompiled OpenSSL but I forgot to update the hmac in fips.cnf via fipsinstall. So yes, the fips provider was failing to activate because of that. As soon I fixed the hmac RAND_status() started working for me. So THANKS for that! :-) Tom.III > We really should have a way of getting more verbose output in the event > of config issues like this. > > Matt > -------------- next part -------------- An HTML attachment was scrubbed... URL: From paul.dale at oracle.com Tue Nov 3 21:35:47 2020 From: paul.dale at oracle.com (Dr Paul Dale) Date: Wed, 4 Nov 2020 07:35:47 +1000 Subject: PRNG not available when multiple providers are configured? In-Reply-To: References: <959d8d79-1139-fdc0-01a3-06e3e7bbe073@openssl.org> Message-ID: > Ah ha! This explanation makes sense to me and indeed pointed me at the real problem. I had recompiled OpenSSL but I forgot to update the hmac in fips.cnf via fipsinstall. So yes, the fips provider was failing to activate because of that. As soon I fixed the hmac RAND_status() started working for me. So THANKS for that! :-) Not producing any diagnostic output for a failed checksum seems like a bug. Pauli -- Dr Paul Dale | Distinguished Architect | Cryptographic Foundations Phone +61 7 3031 7217 Oracle Australia -------------- next part -------------- An HTML attachment was scrubbed... URL: From paul.dale at oracle.com Tue Nov 3 21:34:36 2020 From: paul.dale at oracle.com (Dr Paul Dale) Date: Wed, 4 Nov 2020 07:34:36 +1000 Subject: PRNG not available when multiple providers are configured? 
In-Reply-To: References: <959d8d79-1139-fdc0-01a3-06e3e7bbe073@openssl.org> Message-ID: Adding: config_diagnostics = 1 At the same level as the openssl_conf line should produce more output. Pauli -- Dr Paul Dale | Distinguished Architect | Cryptographic Foundations Phone +61 7 3031 7217 Oracle Australia > On 4 Nov 2020, at 4:41 am, Thomas Dwyer III wrote: > > On Tue, Nov 3, 2020 at 7:13 AM Matt Caswell > wrote: > > > On 03/11/2020 00:55, Thomas Dwyer III wrote: > > I'm having trouble getting RAND_status() to return 1 when my openssl.cnf > > has both the default provider and the fips provider configured at the > > same time: > > > > openssl_conf = openssl_init > > > > [openssl_init] > > providers = provider_sect > > > > [provider_sect] > > default = default_sect > > fips = fips_sect > > > > [default_sect] > > activate = 1 > > > > .include /conf/openssl/fips.cnf > > > > If I remove either default or fips from [provider_sect] then > > RAND_status() returns 1. If I leave them both specified there, > > RAND_status() always returns 0. Is this the expected behavior or am I > > doing something wrong? I understand that I must specify properties when > > fetching algorithms in order to get deterministic behavior with multiple > > providers loaded. Is there an analogous API for the PRNG that I'm > > overlooking? > > > > Interestingly, setting activate=0 for either provider is not sufficient > > to work around this issue. > > I tested this out and was able to replicate your behaviour. > > The reasons are a little complicated (see below) but the TL;DR summary > is that there is an error in your config file. The ".include" line > should specify a config file relative to OPENSSLDIR (or > OPENSSL_CONF_INCLUDE if it is set). It cannot be an absolute path, and > hence fips.cnf is not being found. > > > This explanation does not match my observations. strace clearly shows my fips.cnf getting opened and read when my openssl.cnf has an absolute path. 
Likewise, strace shows stat64("fips.cnf", ...) failing (and thus no subsequent open() call) when I use a relative path. Furthermore, the documentation at https://www.openssl.org/docs/manmaster/man5/config.html says this should be an absolute path. > > That said, see below.. > > > > I've seen this error a few times now so I'm thinking that we should > perhaps allow absolute paths. I'm not sure what the reason for > disallowing them was. > > The reason it works if you comment out the "default" line is because > that means the only provider left is the FIPS one. But the config line > for that is faulty and therefore activating it fails. Ultimately we have > not succesfully activated any provider. When you call RAND_status() it > will attempt to fetch the RAND implementation and find that no providers > have been activated. In this case we fallback and automatically activate > the default provider. Hence you end up with RAND_status() still working. > > If you comment out the "fips" line then it works because it doesn't > attempt to do anything with the fips provider, successfully activates > the default provider, and hence RAND_status() works as expected. > > If you have both lines in the config file then it first successfully > activates the default provider. It next attempts to activate the fips > provider and fails. The way the config code works is that if any of the > configured providers fail to activate then it backs out and deactivates > all of them. At this point we're in a situation where a provider has > been successfully activated and then deactivated again. The fallback > activation of the default provider only kicks in if you've not attempted > to activate any providers by the time you first need one. Therefore the > default provider doesn't activate as a fallback either. Ultimately you > end up with no active providers and RAND_status() fails. > > Ah ha! This explanation makes sense to me and indeed pointed me at the real problem. 
I had recompiled OpenSSL but I forgot to update the hmac in fips.cnf via fipsinstall. So yes, the fips provider was failing to activate because of that. As soon I fixed the hmac RAND_status() started working for me. So THANKS for that! :-) > > > Tom.III > > > > > We really should have a way of getting more verbose output in the event > of config issues like this. > > Matt -------------- next part -------------- An HTML attachment was scrubbed... URL: From matt at openssl.org Wed Nov 4 08:46:11 2020 From: matt at openssl.org (Matt Caswell) Date: Wed, 4 Nov 2020 08:46:11 +0000 Subject: PRNG not available when multiple providers are configured? In-Reply-To: References: <959d8d79-1139-fdc0-01a3-06e3e7bbe073@openssl.org> Message-ID: <4a79047e-de03-0705-21b2-02a04baf8bba@openssl.org> On 03/11/2020 18:03, Tomas Mraz wrote: > On Tue, 2020-11-03 at 15:13 +0000, Matt Caswell wrote: >> >> The reasons are a little complicated (see below) but the TL;DR >> summary >> is that there is an error in your config file. The ".include" line >> should specify a config file relative to OPENSSLDIR (or >> OPENSSL_CONF_INCLUDE if it is set). It cannot be an absolute path, >> and >> hence fips.cnf is not being found. >> >> I've seen this error a few times now so I'm thinking that we should >> perhaps allow absolute paths. I'm not sure what the reason for >> disallowing them was. > > This is actually a regression. The absolute paths worked fine in 1.1.1 > but it is also not clear to me why an absolute path would not work even > with the current master unless you set OPENSSL_CONF_INCLUDE. The > OPENSSL_CONF_INCLUDE is unconditionally prepended to the include path > so that is the reason why absolute paths do not work properly if you > set OPENSSL_CONF_INCLUDE. > This is indeed the case in my environment. I did have OPENSSL_CONF_INCLUDE set - but I would expect an absolute path to override it. 
Matt

From sanperumalv at gmail.com  Wed Nov  4 09:13:21 2020
From: sanperumalv at gmail.com (perumal v)
Date: Wed, 4 Nov 2020 14:43:21 +0530
Subject: openssl ocsp(responder) cmd is giving error for ipv6
In-Reply-To: 
References: 
Message-ID: 

Hi,

It started working after a modification in OCSP_parse_url. The change is
highlighted below, basically keeping the [] brackets for ipv6 (the
commented-out lines are the original code):

    OCSP_parse_url
        p = host;
        if (host[0] == '[') {
            /* ipv6 literal */
            // host++;
            p = strchr(host, ']');
            if (!p)
                goto parse_err;
            // *p = '\0';
            p++;
        }

Is this the correct way to do so? Thanks for your help Michael.

Thanks
Perumal

On Tue, Nov 3, 2020 at 8:40 PM Michael Wojcik wrote:
> > From: openssl-users On Behalf Of
> perumal v
> > Sent: Monday, 2 November, 2020 07:57
>
> > I tried openssl ocsp for ipv6 and got the error message for the OCSP.
>
> > openssl ocsp -url http://[2001:DB8:64:FF9B:0:0:A0A:285E]:8090/ocsp-100/
> -issuer ...
> > Error creating connect BIO
> > 140416130504448:error:20088081:BIO routines:BIO_parse_hostserv:ambiguous
> host or
> > service:crypto/bio/b_addr.c:547:
>
> A quick look at the code suggests this is a bug in OpenSSL. OCSP_parse_url
> removes the square brackets from a literal IPv6 address in the URL, but
> BIO_parse_hostserv requires they be present. But I didn't look closely, so
> I'm not entirely sure that's the issue.
>
> > IPv6 address without the "[]" bracket.
>
> The square brackets are required by the URL specification. There's no
> point testing without them.
>
> --
> Michael Wojcik
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From matt at openssl.org  Wed Nov  4 09:10:58 2020
From: matt at openssl.org (Matt Caswell)
Date: Wed, 4 Nov 2020 09:10:58 +0000
Subject: PRNG not available when multiple providers are configured?
In-Reply-To: 
References: <959d8d79-1139-fdc0-01a3-06e3e7bbe073@openssl.org>
Message-ID: 

Ah! I had completely forgotten about this option.

Matt

On 03/11/2020 21:34, Dr Paul Dale wrote:
> Adding:
> |? ?
config_diagnostics = 1| > At the same level as the?openssl_conf?line should produce more output. > > Pauli > --? > Dr Paul Dale | Distinguished Architect | Cryptographic Foundations? > Phone +61 7 3031 7217 > Oracle Australia > > > > >> On 4 Nov 2020, at 4:41 am, Thomas Dwyer III > > wrote: >> >> On Tue, Nov 3, 2020 at 7:13 AM Matt Caswell > > wrote: >> >> >> >> On 03/11/2020 00:55, Thomas Dwyer III wrote: >> > I'm having trouble getting RAND_status() to return 1 when my >> openssl.cnf >> > has both the default provider and the fips provider configured >> at the >> > same time: >> >? >> > ? ? ? ? openssl_conf = openssl_init >> >? >> > ? ? ? ? [openssl_init] >> > ? ? ? ? providers = provider_sect >> >? >> > ? ? ? ? [provider_sect] >> > ? ? ? ? default = default_sect >> > ? ? ? ? fips = fips_sect >> >? >> > ? ? ? ? [default_sect] >> > ? ? ? ? activate = 1 >> >? >> > ? ? ? ? .include /conf/openssl/fips.cnf >> >? >> > If I remove either default or fips from [provider_sect] then >> > RAND_status() returns 1. If I leave them both specified there, >> > RAND_status() always returns 0. Is this the expected behavior or >> am I >> > doing something wrong? I understand that I must specify >> properties when >> > fetching algorithms in order to get deterministic behavior with >> multiple >> > providers loaded. Is there an analogous API for the PRNG that I'm >> > overlooking? >> >? >> > Interestingly, setting activate=0 for either provider is not >> sufficient >> > to work around this issue. >> >> I tested this out and was able to replicate your behaviour. >> >> The reasons are a little complicated (see below) but the TL;DR summary >> is that there is an error in your config file. The ".include" line >> should specify a config file relative to OPENSSLDIR (or >> OPENSSL_CONF_INCLUDE if it is set). It cannot be an absolute path, and >> hence fips.cnf is not being found. >> >> >> >> This explanation does not match my observations. 
strace clearly shows >> my fips.cnf getting opened and read when my openssl.cnf has an >> absolute path. Likewise, strace shows stat64("fips.cnf", ...) failing >> (and thus no subsequent open() call) when I use a relative path. >> Furthermore, the documentation >> at?https://www.openssl.org/docs/manmaster/man5/config.html >> ?says >> this should be an absolute path.* >> * >> >> That said, see below.. >> >> >> >> I've seen this error a few times now so I'm thinking that we should >> perhaps allow absolute paths. I'm not sure what the reason for >> disallowing them was. >> >> The reason it works if you comment out the "default" line is because >> that means the only provider left is the FIPS one. But the config line >> for that is faulty and therefore activating it fails. Ultimately >> we have >> not succesfully activated any provider. When you call RAND_status() it >> will attempt to fetch the RAND implementation and find that no >> providers >> have been activated. In this case we fallback and automatically >> activate >> the default provider. Hence you end up with RAND_status() still >> working. >> >> If you comment out the "fips" line then it works because it doesn't >> attempt to do anything with the fips provider, successfully activates >> the default provider, and hence RAND_status() works as expected. >> >> If you have both lines in the config file then it first successfully >> activates the default provider. It next attempts to activate the fips >> provider and fails. The way the config code works is that if any >> of the >> configured providers fail to activate then it backs out and >> deactivates >> all of them. At this point we're in a situation where a provider has >> been successfully activated and then deactivated again. The fallback >> activation of the default provider only kicks in if you've not >> attempted >> to activate any providers by the time you first need one. >> Therefore the >> default provider doesn't activate as a fallback either. 
Ultimately you >> end up with no active providers and RAND_status() fails. >> >> >> Ah ha! This explanation makes sense to me and indeed pointed me at the >> real problem. I had recompiled OpenSSL but I forgot to update the hmac >> in fips.cnf via fipsinstall. So yes, the fips provider was failing to >> activate because of that. As soon I fixed the hmac RAND_status() >> started working for me. So THANKS for that! :-) >> >> >> Tom.III >> >> >> >> >> We really should have a way of getting more verbose output in the >> event >> of config issues like this. >> >> Matt > From Michael.Wojcik at microfocus.com Wed Nov 4 14:37:01 2020 From: Michael.Wojcik at microfocus.com (Michael Wojcik) Date: Wed, 4 Nov 2020 14:37:01 +0000 Subject: openssl ocsp(responder) cmd is giving error for ipv6 In-Reply-To: References: Message-ID: > From: perumal v > Sent: Wednesday, 4 November, 2020 02:13 > change is highlighted below and basically keeping [] brackets for ipv6 : > > OCSP_parse_url > p = host; > if (host[0] == '[') { > /* ipv6 literal */ > // host++; > p = strchr(host, ']'); > if (!p) > goto parse_err; > // *p = '\0'; > p++; > } > Is this the correct way to do so? Based on my very cursory investigation, that looks right to me, but I don't know where else OCSP_parse_url might be used, and whether anything else depends on the existing semantics of removing the brackets. Someone should take a closer look. You could open an issue in GitHub and do a pull request for your change, to make your suggestion official. 
--
Michael Wojcik

From frederic.bricout at afnic.fr  Wed Nov  4 14:50:46 2020
From: frederic.bricout at afnic.fr (Frederic Bricout)
Date: Wed, 4 Nov 2020 15:50:46 +0100 (CET)
Subject: TLS 1.1 AES-CBC explicit IV
In-Reply-To: <1579536710.1278424.1604501393989.JavaMail.zimbra@afnic.fr>
Message-ID: <657874463.1278727.1604501446464.JavaMail.zimbra@afnic.fr>

Hi,

I'm searching for information about the way you implement TLS v1.1 CBC
mode. I've read RFC 4346; it mentions that an explicit IV is used. I've
read the OpenSSL code in OpenSSL 1.0.1, and I don't know how it was
implemented. I think at the beginning of the message you add (mask || R),
but after that I don't understand whether the chaining method is the same
as in TLS 1.0 => E(key, residu || data). Can you explain the process a
bit? I'm lost, and I didn't find any information on the internet about the
implementation.

Best Regards
Fred

From openssl at openssl.org  Thu Nov  5 14:34:17 2020
From: openssl at openssl.org (OpenSSL)
Date: Thu, 5 Nov 2020 14:34:17 +0000
Subject: OpenSSL version 3.0.0-alpha8 published
Message-ID: <20201105143416.GA8468@openssl.org>

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256

OpenSSL version 3.0 alpha 8 released
====================================

OpenSSL - The Open Source toolkit for SSL/TLS
https://www.openssl.org/

OpenSSL 3.0 is currently in alpha. OpenSSL 3.0 alpha 8 has now been made
available.

Note: This OpenSSL pre-release has been provided for testing ONLY.
It should NOT be used for security critical purposes.
Specific notes on upgrading to OpenSSL 3.0 from previous versions, as well as known issues are available on the OpenSSL Wiki, here: https://wiki.openssl.org/index.php/OpenSSL_3.0 The alpha release is available for download via HTTPS and FTP from the following master locations (you can find the various FTP mirrors under https://www.openssl.org/source/mirror.html): * https://www.openssl.org/source/ * ftp://ftp.openssl.org/source/ The distribution file name is: o openssl-3.0.0-alpha8.tar.gz Size: 14011376 SHA1 checksum: a6063ebb15b4e600b60fbb50b3102c6f2e3438ff SHA256 checksum: a6c7b618a6a37cf0cebbc583b49e6d22d86e2d777e60173433eada074c32eea4 The checksums were calculated using the following commands: openssl sha1 openssl-3.0.0-alpha8.tar.gz openssl sha256 openssl-3.0.0-alpha8.tar.gz Please download and check this alpha release as soon as possible. To report a bug, open an issue on GitHub: https://github.com/openssl/openssl/issues Please check the release notes and mailing lists to avoid duplicate reports of known issues. (Of course, the source is also available on GitHub.) Yours, The OpenSSL Project Team. 
-----BEGIN PGP SIGNATURE----- iQEzBAEBCAAdFiEEhlersmDwVrHlGQg52cTSbQ5gRJEFAl+kBlYACgkQ2cTSbQ5g RJHOmQgAhqFZMut75DD4WChUdbwnlt+liy4SBVq+uG5zxSX8ayyiWoxkaQxMrI55 eyYWkLc05imDlM6dPgQQnBbLgDBUj6lPPN3bzAu/jPNC8Wk+9zwPdwLxKKnbMnoX gHGVFEuAJeILT6jldQwyHL1O+YV0KFANZE09jt/jBqaMtnT8pcVgxe+9txLtWVPw zLnh+t2Z9Pzhi8jz9I7LArVqgYOrnHHrFs1plzqz6YkTXCahGAoP5wtKFL1AS9eo J3EPrLNpLcYjLJWAt6kIgIP6J7pBxmqp5411b1dKAqSzNd6RTm8N11YNOP6lDCy9 28Mu393UJc5I8GvB+taGs8oMXxQCIQ== =Zocb -----END PGP SIGNATURE----- From jetson23 at hotmail.com Thu Nov 5 16:54:58 2020 From: jetson23 at hotmail.com (Jason Schultz) Date: Thu, 5 Nov 2020 16:54:58 +0000 Subject: Questions regarding OpenSSL 3.0 and corresponding FIPS Module Message-ID: I read the most recent (10/20) update to the OpenSSL 3.0 release page here: https://www.openssl.org/blog/blog/2020/10/20/OpenSSL3.0Alpha7/ As well as the release strategy: https://wiki.openssl.org/index.php?title=OpenSSL_3.0_Release_Schedule&oldid=3099 I have not done anything with the Alpha releases so far, but I noticed the note "Basic functionality plus basic FIPS module". Does this mean that there is a FIPS module available to test with in the alpha(and presumably beta) releases? If the answer to that question is "yes", I'm assuming that the validation of that FIPS Module can't/won't start until after the Final OpenSSL 3.0 release. The timeframe for that validation is TBD, as it always varies. The Final 3.0 release is currently behind schedule as it was estimated "early Q4 2020". Any ideas on how much behind that release is? Thanks in advance for any information. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: 

From matt at openssl.org  Thu Nov  5 17:28:20 2020
From: matt at openssl.org (Matt Caswell)
Date: Thu, 5 Nov 2020 17:28:20 +0000
Subject: Questions regarding OpenSSL 3.0 and corresponding FIPS Module
In-Reply-To: 
References: 
Message-ID: <381be099-fd35-f228-b5d3-b94404b92726@openssl.org>

On 05/11/2020 16:54, Jason Schultz wrote:
> I read the most recent (10/20) update to the OpenSSL 3.0 release page here:
>
> https://www.openssl.org/blog/blog/2020/10/20/OpenSSL3.0Alpha7/
>
> As well as the release
> strategy: https://wiki.openssl.org/index.php?title=OpenSSL_3.0_Release_Schedule&oldid=3099
>
> I have not done anything with the Alpha releases so far, but I noticed
> the note "Basic functionality plus basic FIPS module".
>
> Does this mean that there is a FIPS module available to test with in the
> alpha (and presumably beta) releases?

Yes.

>
> If the answer to that question is "yes", I'm assuming that the
> validation of that FIPS Module can't/won't start until after the Final
> OpenSSL 3.0 release. The timeframe for that validation is TBD, as it
> always varies.

Also yes.

>
> The Final 3.0 release is currently behind schedule as it was estimated
> "early Q4 2020". Any ideas on how much behind that release is?

That is still the latest "official" time, but clearly that cannot be
achieved now given that we were supposed to have a beta in September in
that timeline. We still have quite a bit of work to do to get to a beta
release (https://github.com/openssl/openssl/milestone/17). The best I can
offer is that the final release will be "sometime in the New Year".

Matt

From m.kosuri at f5.com  Mon Nov  9 08:58:22 2020
From: m.kosuri at f5.com (Venkata Mallikarjunarao Kosuri)
Date: Mon, 9 Nov 2020 08:58:22 +0000
Subject: How to make ocsp responder busy
Message-ID: 

Hi,

We are trying to test a scenario in which the OpenSSL OCSP responder is
busy, but we are not sure how to make the OCSP responder busy. Could you
please give us some pointers to work from?
Ref https://www.openssl.org/docs/man1.0.2/man1/ocsp.html

Thanks
Malli
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From jb-openssl at wisemo.com  Mon Nov  9 19:17:58 2020
From: jb-openssl at wisemo.com (Jakob Bohm)
Date: Mon, 9 Nov 2020 20:17:58 +0100
Subject: How to make ocsp responder busy
In-Reply-To: 
References: 
Message-ID: <7cf45351-50cb-0ff0-4944-a38fc68628fc@wisemo.com>

On 2020-11-09 09:58, Venkata Mallikarjunarao Kosuri via openssl-users wrote:
>
> Hi
>
> We are trying to work scenario to openssl OCSP responder busy, but we
> are not sure how to make OCSP responder busy could please throw some
> pointer to work on.
>
> Ref https://www.openssl.org/docs/man1.0.2/man1/ocsp.html
>
>
> Thanks
>
> Malli
>
An OCSP responder is not supposed to be busy. Ever. CAs that are trusted
by the big web browsers are contractually required to keep theirs
available 24x7.

The man page you reference doesn't contain the word "busy".

Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S. https://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark. Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded

From Paul.OKeefe at riverbed.com  Mon Nov  9 21:19:15 2020
From: Paul.OKeefe at riverbed.com (Paul O'Keefe)
Date: Mon, 9 Nov 2020 21:19:15 +0000
Subject: RSA_METHOD.rsa_sign not called in FIPS mode
Message-ID: 

I'm using an OpenSSL engine that uses the RSA_FLAG_SIGN_VER flag and
implements RSA_METHOD.rsa_sign() instead of rsa_priv_enc(). This is mainly
because of the requirement that it work with Windows CryptoAPI, which does
not support low-level RSA signing (see the CAPI engine). Everything works
as it should until FIPS mode is enabled. Under FIPS mode, the
"non-implemented" rsa_priv_enc() is called and an error is returned.
The simplified backtrace is:

    #0 rsa_priv_enc          // non-implemented engine function
    #1 FIPS_rsa_sign_digest  // FIPS canister
    #2 pkey_rsa_sign
    #3 EVP_SignFinal

It appears that FIPS_rsa_sign_digest() never checks RSA_FLAG_SIGN_VER or
calls rsa_sign() - it simply defaults to rsa_priv_enc(). I can't find any
place rsa_sign is called. There are posts that specifically reference
running CAPI with FIPS mode, so I don't know what I'm missing.

http://openssl.6102.n7.nabble.com/FIPS-with-CAPI-Engine-td26273.html

Using OpenSSL 1.0.2o and FIPS canister 2.0.2 (older but I checked the
latest release and it behaves the same).

Thank you.
Paul
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From shivakumar2696 at gmail.com  Tue Nov 10 09:19:12 2020
From: shivakumar2696 at gmail.com (shiva kumar)
Date: Tue, 10 Nov 2020 14:49:12 +0530
Subject: CRYPTO_mem_leaks Error in openssl 1.1.1d
Message-ID: 

Hi,
I'm trying to use the CRYPTO_mem_leaks API in openssl 1.1.1d, but during
compilation I'm getting error as

    Unsatisfied symbol "CRYPTO_mem_leaks"

I have included the header
#include

one doubt is it is defined under crypto.h

    #ifndef OPENSSL_NO_CRYPTO_MDEBUG
    CRYPTO_mem_leaks //  defined
    #endif

in opensslconf.h is defined as

    #ifndef OPENSSL_NO_CRYPTO_MDEBUG
    #define OPENSSL_NO_CRYPTO_MDEBUG
    #endif

How to resolve the issue? Please help me.

Regards
Shivakumar
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From matt at openssl.org  Tue Nov 10 11:22:01 2020
From: matt at openssl.org (Matt Caswell)
Date: Tue, 10 Nov 2020 11:22:01 +0000
Subject: CRYPTO_mem_leaks Error in openssl 1.1.1d
In-Reply-To: 
References: 
Message-ID: <91411dfd-89d1-ea37-d694-e5f127a2f823@openssl.org>

On 10/11/2020 09:19, shiva kumar wrote:
> Hi,
> I'm trying to use the CRYPTO_mem_leaks API in openssl 1.1.1d, but
> during compilation I'm getting error as
> Unsatisfied symbol "CRYPTO_mem_leaks"
>
> I have Included the header
> #include > > one doubt is it is defined under crypto.h > #ifndef OPENSSL_NO_CRYPTO_MDEBUG > CRYPTO_mem_leaks //? defined > #endif? > > in opensslconf.h is defined as?? > #ifndef OPENSSL_NO_CRYPTO_MDEBUG > #define?OPENSSL_NO_CRYPTO_MDEBUG > #endif? > > How to resolve?the issue? The CRYPTO_mem_leaks API is not available by default. You have to compile a version of OpenSSL that has it enabled using the "enable-crypto-mdebug" option to "Configure": https://github.com/openssl/openssl/blob/6e933b35492a4dc3370b9f49890646dadca82cd8/INSTALL#L327-L329 Matt From shivakumar2696 at gmail.com Tue Nov 10 13:25:42 2020 From: shivakumar2696 at gmail.com (shiva kumar) Date: Tue, 10 Nov 2020 18:55:42 +0530 Subject: CRYPTO_mem_leaks Error in openssl 1.1.1d In-Reply-To: <91411dfd-89d1-ea37-d694-e5f127a2f823@openssl.org> References: <91411dfd-89d1-ea37-d694-e5f127a2f823@openssl.org> Message-ID: Any alternatives for this, if the compiled version doesn't enabled the flag? On Tue, 10 Nov 2020 at 4:52 PM, Matt Caswell wrote: > > > On 10/11/2020 09:19, shiva kumar wrote: > > Hi, > > I'm trying to use the CRYPTO_mem_leaks API in openssl 1.1.1d, but > > during compilation I'm getting error as > > *Unsatisfied symbol "CRYPTO_mem_leaks" * > > > > I have Included the header > > #include > > > > one doubt is it is defined under crypto.h > > #ifndef OPENSSL_NO_CRYPTO_MDEBUG > > CRYPTO_mem_leaks // defined > > #endif > > > > in opensslconf.h is defined as > > #ifndef OPENSSL_NO_CRYPTO_MDEBUG > > #define OPENSSL_NO_CRYPTO_MDEBUG > > #endif > > > > How to resolve the issue? > > The CRYPTO_mem_leaks API is not available by default. You have to > compile a version of OpenSSL that has it enabled using the > "enable-crypto-mdebug" option to "Configure": > > > https://github.com/openssl/openssl/blob/6e933b35492a4dc3370b9f49890646dadca82cd8/INSTALL#L327-L329 > > Matt > > -- *With Best Regards* *Shivakumar S* -------------- next part -------------- An HTML attachment was scrubbed... 
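(For reference, such a rebuild might look like the following on a Unix-like system. This is a sketch only: the install prefix is an example, and the exact Configure invocation and target vary by platform.)

```console
$ ./config enable-crypto-mdebug --prefix=/opt/openssl-mdebug
$ make
$ make test
$ make install
```

The application then has to be compiled and linked against this build; with enable-crypto-mdebug enabled, the generated opensslconf.h no longer defines OPENSSL_NO_CRYPTO_MDEBUG, so the CRYPTO_mem_leaks prototype becomes visible and the symbol is present in libcrypto.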
URL: From matt at openssl.org Tue Nov 10 13:34:46 2020 From: matt at openssl.org (Matt Caswell) Date: Tue, 10 Nov 2020 13:34:46 +0000 Subject: CRYPTO_mem_leaks Error in openssl 1.1.1d In-Reply-To: References: <91411dfd-89d1-ea37-d694-e5f127a2f823@openssl.org> Message-ID: <1f61bc7b-f026-62cb-8c80-00b62689b7b9@openssl.org> On 10/11/2020 13:25, shiva kumar wrote: > Any alternatives for this, if the compiled version doesn't enabled the flag? valgrind? Matt > > On Tue, 10 Nov 2020 at 4:52 PM, Matt Caswell > wrote: > > > > On 10/11/2020 09:19, shiva kumar wrote: > > Hi,? > > I'm trying to use the?CRYPTO_mem_leaks? API in openssl 1.1.1d, but > > during compilation I'm getting error as? > > *Unsatisfied?symbol "CRYPTO_mem_leaks"?* > > > > I have Included the header? > > #include > > > > one doubt is it is defined under crypto.h > > #ifndef OPENSSL_NO_CRYPTO_MDEBUG > > CRYPTO_mem_leaks //? defined > > #endif? > > > > in opensslconf.h is defined as?? > > #ifndef OPENSSL_NO_CRYPTO_MDEBUG > > #define?OPENSSL_NO_CRYPTO_MDEBUG > > #endif? > > > > How to resolve?the issue? > > The CRYPTO_mem_leaks API is not available by default. You have to > compile a version of OpenSSL that has it enabled using the > "enable-crypto-mdebug" option to "Configure": > > https://github.com/openssl/openssl/blob/6e933b35492a4dc3370b9f49890646dadca82cd8/INSTALL#L327-L329 > > Matt > > -- > *With Best Regards* > *Shivakumar S* From shivakumar2696 at gmail.com Tue Nov 10 13:37:02 2020 From: shivakumar2696 at gmail.com (shiva kumar) Date: Tue, 10 Nov 2020 19:07:02 +0530 Subject: CRYPTO_mem_leaks Error in openssl 1.1.1d In-Reply-To: <1f61bc7b-f026-62cb-8c80-00b62689b7b9@openssl.org> References: <91411dfd-89d1-ea37-d694-e5f127a2f823@openssl.org> <1f61bc7b-f026-62cb-8c80-00b62689b7b9@openssl.org> Message-ID: Can you please provide me examples or links to refer it. 
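(A minimal sketch of the valgrind approach, assuming a Linux host; "./your_app" is a placeholder for whatever program is linked against OpenSSL. No OpenSSL rebuild is needed, though compiling the application and libraries with -g produces more readable stack traces.)

```console
$ valgrind --leak-check=full --show-leak-kinds=all ./your_app
```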
On Tue, 10 Nov 2020 at 7:04 PM, Matt Caswell wrote: > > > On 10/11/2020 13:25, shiva kumar wrote: > > Any alternatives for this, if the compiled version doesn't enabled the > flag? > > valgrind? > > > Matt > > > > > On Tue, 10 Nov 2020 at 4:52 PM, Matt Caswell > > wrote: > > > > > > > > On 10/11/2020 09:19, shiva kumar wrote: > > > Hi, > > > I'm trying to use the CRYPTO_mem_leaks API in openssl 1.1.1d, but > > > during compilation I'm getting error as > > > *Unsatisfied symbol "CRYPTO_mem_leaks" * > > > > > > I have Included the header > > > #include > > > > > > one doubt is it is defined under crypto.h > > > #ifndef OPENSSL_NO_CRYPTO_MDEBUG > > > CRYPTO_mem_leaks // defined > > > #endif > > > > > > in opensslconf.h is defined as > > > #ifndef OPENSSL_NO_CRYPTO_MDEBUG > > > #define OPENSSL_NO_CRYPTO_MDEBUG > > > #endif > > > > > > How to resolve the issue? > > > > The CRYPTO_mem_leaks API is not available by default. You have to > > compile a version of OpenSSL that has it enabled using the > > "enable-crypto-mdebug" option to "Configure": > > > > > https://github.com/openssl/openssl/blob/6e933b35492a4dc3370b9f49890646dadca82cd8/INSTALL#L327-L329 > > > > Matt > > > > -- > > *With Best Regards* > > *Shivakumar S* > -- *With Best Regards* *Shivakumar S* -------------- next part -------------- An HTML attachment was scrubbed... URL: From matt at openssl.org Tue Nov 10 13:39:46 2020 From: matt at openssl.org (Matt Caswell) Date: Tue, 10 Nov 2020 13:39:46 +0000 Subject: CRYPTO_mem_leaks Error in openssl 1.1.1d In-Reply-To: References: <91411dfd-89d1-ea37-d694-e5f127a2f823@openssl.org> <1f61bc7b-f026-62cb-8c80-00b62689b7b9@openssl.org> Message-ID: <57117b25-4a16-8618-f656-de54ff474f34@openssl.org> On 10/11/2020 13:37, shiva kumar wrote: > Can you please provide me examples or links to refer it. Google is your friend here. https://valgrind.org/ The above site has a quick start guide which should help. 
Matt > > On Tue, 10 Nov 2020 at 7:04 PM, Matt Caswell > wrote: > > > > On 10/11/2020 13:25, shiva kumar wrote: > > Any alternatives for this, if the compiled version doesn't enabled > the flag? > > valgrind? > > > Matt > > > > > On Tue, 10 Nov 2020 at 4:52 PM, Matt Caswell > > >> wrote: > > > > > > > >? ? ?On 10/11/2020 09:19, shiva kumar wrote: > >? ? ?> Hi,? > >? ? ?> I'm trying to use the?CRYPTO_mem_leaks? API in openssl > 1.1.1d, but > >? ? ?> during compilation I'm getting error as? > >? ? ?> *Unsatisfied?symbol "CRYPTO_mem_leaks"?* > >? ? ?> > >? ? ?> I have Included the header? > >? ? ?> #include > >? ? ?> > >? ? ?> one doubt is it is defined under crypto.h > >? ? ?> #ifndef OPENSSL_NO_CRYPTO_MDEBUG > >? ? ?> CRYPTO_mem_leaks //? defined > >? ? ?> #endif? > >? ? ?> > >? ? ?> in opensslconf.h is defined as?? > >? ? ?> #ifndef OPENSSL_NO_CRYPTO_MDEBUG > >? ? ?> #define?OPENSSL_NO_CRYPTO_MDEBUG > >? ? ?> #endif? > >? ? ?> > >? ? ?> How to resolve?the issue? > > > >? ? ?The CRYPTO_mem_leaks API is not available by default. You have to > >? ? ?compile a version of OpenSSL that has it enabled using the > >? ? ?"enable-crypto-mdebug" option to "Configure": > > > >? ? > ?https://github.com/openssl/openssl/blob/6e933b35492a4dc3370b9f49890646dadca82cd8/INSTALL#L327-L329 > > > >? ? ?Matt > > > > -- > > *With Best Regards* > > *Shivakumar S* > > -- > *With Best Regards* > *Shivakumar S* From dfreed at epic.com Wed Nov 11 16:28:40 2020 From: dfreed at epic.com (Dan Freed) Date: Wed, 11 Nov 2020 16:28:40 +0000 Subject: Deleted client certificate trust expectations Message-ID: Hello, I have a question/issue about how OpenSSL should handle a deleted client certificate. It appears that once a trusted certificate is read from the filesystem, it remains trusted throughout the lifespan of the server process. I wrote a small SSL web service that reproduces the issue I'm having with my application. 
Pardon the Perl syntax - I've not rewritten this in C but I think the intent is clear. This code reproduces the scenario:

use Socket;
use Net::SSLeay qw(die_now die_if_ssl_error);

Net::SSLeay::load_error_strings();
Net::SSLeay::SSLeay_add_ssl_algorithms();
Net::SSLeay::randomize();

$our_ip = "\0\0\0\0";
$port = 1235;
$sockaddr_template = 'S n a4 x8';
$our_serv_params = pack ($sockaddr_template, &AF_INET, $port, $our_ip);
socket (S, &AF_INET, &SOCK_STREAM, 0) or die "socket: $!";
bind (S, $our_serv_params) or die "bind: $!";
listen (S, 5);

$ctx = Net::SSLeay::CTX_new ();
$key = "client.key";
$cert = "client.crt";
$trust_dir = "/client_trusted_certificates";
Net::SSLeay::CTX_use_RSAPrivateKey_file($ctx, $key, Net::SSLeay::FILETYPE_PEM());
Net::SSLeay::CTX_use_certificate_file($ctx, $cert, Net::SSLeay::FILETYPE_PEM());
Net::SSLeay::CTX_set_session_id_context($ctx, 'sessiontest', length('sessiontest'));
Net::SSLeay::CTX_load_verify_locations($ctx, "", $trust_dir);
Net::SSLeay::CTX_set_verify($ctx, &Net::SSLeay::VERIFY_PEER, \&verify_client_cert);

while (1) {
    $addr = accept (NS, S);
    select (NS); $| = 1; select (STDOUT);
    $ssl = Net::SSLeay::new($ctx);
    Net::SSLeay::set_fd($ssl, fileno(NS));
    $err = Net::SSLeay::accept($ssl);
    $got = Net::SSLeay::read($ssl);
    print $got."\n";
    Net::SSLeay::write ($ssl, uc ($got));
    Net::SSLeay::free ($ssl);
    close NS;
}

sub verify_client_cert {
    my ($pre_verify, $x509_store) = @_;
    print "Pre-verify: $pre_verify\n";
    print "ctx error: ".Net::SSLeay::X509_STORE_CTX_get_error($x509_store)."\n";
    return $pre_verify;
}

This all works as it should, and verify_client_cert() is called appropriately when the client cert is provided. The issue I'm having is how the verify process works when a certificate is removed from the trusted directory while this service is running.
If a client connects with a client cert and the service verifies that certificate, then the trusted client cert is removed from /trusted_clients, then the client connects again - the client cert will still verify. The client cert will continue to verify until I restart the server. An strace of the process confirms that it only opens the trusted directory once, subsequent connections using this client cert do not re-open or look for the file in the trust directory. My understanding of how this should work was that it should read the contents of that directory at the time the verify takes place, not when CTX_set_verify() is called, but that doesn't seem to be what is happening. Another interesting bit is that the inverse is not true. If I add a cert to the trusted directory, it immediately uses it without having to restart the process. I assume that if I used a certificate revocation list and revoked the client cert this wouldn't be an issue, but why are the directory contents cached? Is this for performance reasons? Thanks Dan Freed -------------- next part -------------- An HTML attachment was scrubbed... URL: From dfreed at epic.com Wed Nov 11 16:41:50 2020 From: dfreed at epic.com (Dan Freed) Date: Wed, 11 Nov 2020 16:41:50 +0000 Subject: Deleted client certificate trust expectations In-Reply-To: References: Message-ID: Sorry I realized I didn't include the OpenSSL version I was using. This is with OpenSSL 1.1.1d 10 Sep 2019. -Dan From: openssl-users Date: Wednesday, November 11, 2020 at 10:29 AM To: openssl-users at openssl.org Subject: Deleted client certificate trust expectations External Mail. Careful of links / attachments. Submit Helpdesk if unsure. Hello, I have a question/issue about how OpenSSL should handle a deleted client certificate. It appears that once a trusted certificate is read from the filesystem, it remains trusted throughout the lifespan of the server process. 
-------------- next part -------------- An HTML attachment was scrubbed... URL: From openssl at jordan.maileater.net Wed Nov 11 16:41:52 2020 From: openssl at jordan.maileater.net (Jordan Brown) Date: Wed, 11 Nov 2020 16:41:52 +0000 Subject: Deleted client certificate trust expectations In-Reply-To: References: Message-ID: <01010175b82f5abd-0f2b62c3-986a-4015-84f3-ffc2d6d90596-000000@us-west-2.amazonses.com> What you observe is indeed reality; we ran into it too. (Though we ran into it in the context of a long-running client verifying server certificates.) My assumption is that it's for performance, and that's sensible, but it would sure be nice to figure out how to detect those changes. If a stat() on each verification is considered too expensive, maybe there could be a timeout, that if the file hasn't been checked in the last ten minutes then check it.
-- Jordan Brown, Oracle ZFS Storage Appliance, Oracle Solaris -------------- next part -------------- An HTML attachment was scrubbed... URL: From openssl-users at dukhovni.org Wed Nov 11 17:53:13 2020 From: openssl-users at dukhovni.org (Viktor Dukhovni) Date: Wed, 11 Nov 2020 12:53:13 -0500 Subject: Deleted client certificate trust expectations In-Reply-To: References: Message-ID: <20201111175313.GG1464@straasha.imrryr.org> On Wed, Nov 11, 2020 at 04:28:40PM +0000, Dan Freed wrote:

> I have a question/issue about how OpenSSL should handle a deleted
> client certificate. It appears that once a trusted certificate is read
> from the filesystem, it remains trusted throughout the lifespan of the
> server process.

The built-in trust stores (code behind CAfile and CApath) are caching stores. They use an in-memory cache of trusted certificates that is pre-loaded in the case of CAfile, and demand-loaded on a cache miss in the case of CApath. Once a certificate is loaded, it remains in the cache. The cache is part of the X509_STORE object that is associated with the SSL_CTX. Though I don't see it exposed in the Perl API, it is possible to flush the X509_STORE cache by calling:

    SSL_CTX *ctx;
    X509_STORE *store;
    STACK_OF(X509_OBJECT) *objs;
    X509_OBJECT *obj;
    ...
    store = SSL_CTX_get_cert_store(ctx);
    X509_STORE_lock(store);
    objs = X509_STORE_get0_objects(store);
    while ((obj = sk_X509_OBJECT_pop(objs)) != NULL)
        X509_OBJECT_free(obj);
    X509_STORE_unlock(store);
    ...

An application that uses only CApath and does not wish to cache trusted certificates indefinitely can use this to flush the cache. Note that this does not work well with CAfile, since the file is read just once, so you'd need to explicitly reload the CAfile:

    lookup = X509_STORE_add_lookup(store, X509_LOOKUP_file());
    if (lookup == NULL)
        return 0;
    if (X509_LOOKUP_load_file(lookup, file, X509_FILETYPE_PEM) != 1)
        return 0;

But keep in mind that X509_LOOKUP_load_file is not atomic; it adds certificates to the store one at a time.
Therefore flushing and reloading the store should happen in the same thread and should not happen concurrently in multiple threads. A sufficiently sophisticated user can of course add a custom store that uses no cache, or a more sophisticated cache with expiration times, ...

> My understanding of how this should work was that it should read the
> contents of that directory at the time the verify takes place, not
> when CTX_set_verify() is called, but that doesn't seem to be what is
> happening.

The directory content is (partly) cached, with the cache growing incrementally as additional certificates are loaded. -- Viktor.

From dfreed at epic.com Wed Nov 11 23:28:46 2020 From: dfreed at epic.com (Dan Freed) Date: Wed, 11 Nov 2020 23:28:46 +0000 Subject: Deleted client certificate trust expectations In-Reply-To: <20201111175313.GG1464@straasha.imrryr.org> References: , <20201111175313.GG1464@straasha.imrryr.org> Message-ID: Thanks for the help. This got me on the right track. -Dan

From: openssl-users Date: Wednesday, November 11, 2020 at 12:02 PM To: openssl-users at openssl.org Subject: Re: Deleted client certificate trust expectations External Mail. Careful of links / attachments. Submit Helpdesk if unsure. On Wed, Nov 11, 2020 at 04:28:40PM +0000, Dan Freed wrote: > [...]

-------------- next part -------------- An HTML attachment was scrubbed...
URL: From teoenming.nov2020 at gmail.com Thu Nov 12 10:49:17 2020 From: teoenming.nov2020 at gmail.com (Turritopsis Dohrnii Teo En Ming) Date: Thu, 12 Nov 2020 18:49:17 +0800 Subject: Guide on Renewing SSL Certificate for Apache, Postfix and Dovecot on CentOS 6.8 Linux Message-ID: Guide on Renewing SSL Certificate for Apache, Postfix and Dovecot on CentOS 6.8 Linux ===================================================================================== Author: Mr. Turritopsis Dohrnii Teo En Ming (TARGETED INDIVIDUAL) Country: Singapore Date: 12 November 2020 Thursday Singapore Time Type of Publication: Plain Text Document Version: 20201112.01 Generating Certificate Signing Request (CSR) Using OpenSSL command on Linux =========================================================================== Reference Guide: Generating CSR on Apache + OpenSSL/ModSSL/Nginx + Heroku Link: https://www.namecheap.com/support/knowledgebase/article.aspx/9446/14/generating-csr-on-apache--opensslmodsslnginx--heroku/#4 # cd /root # which openssl # openssl req -new -newkey rsa:2048 -nodes -keyout teo-en-ming-corp.key -out teo-en-ming-corp.csr Generating a 2048 bit RSA private key ...............................................................................................................................................................................+++ ........................................................................+++ writing new private key to 'teo-en-ming-corp.key' ----- You are about to be asked to enter information that will be incorporated into your certificate request. What you are about to enter is what is called a Distinguished Name or a DN. There are quite a few fields but you can leave some blank For some fields there will be a default value, If you enter '.', the field will be left blank. 
-----
Country Name (2 letter code) [XX]:SG
State or Province Name (full name) []:Singapore
Locality Name (eg, city) [Default City]:Singapore
Organization Name (eg, company) [Default Company Ltd]:Teo En Ming Corporation
Organizational Unit Name (eg, section) []:IT Department
Common Name (eg, your name or your server's hostname) []:*.teo-en-ming-corp.com.sg (USE WILDCARD!!!)
Email Address []:ceo at teo-en-ming-corp.com

Please enter the following 'extra' attributes to be sent with your certificate request
A challenge password []:
An optional company name []:

# mkdir teo-en-ming
# mv teo-en-ming-corp.csr teo-en-ming-corp.key teo-en-ming/
# cd teo-en-ming

[root at mail.teo-en-ming-corp.com.sg teo-en-ming]# ls -al
total 16
drwxr-xr-x 2 root root 4096 Nov 11 11:43 .
dr-xr-x---. 14 root root 4096 Nov 11 11:43 ..
-rw-r--r-- 1 root root 1119 Nov 11 11:42 teo-en-ming-corp.csr
-rw-r--r-- 1 root root 1708 Nov 11 11:42 teo-en-ming-corp.key

# cat teo-en-ming-corp.csr (Display Certificate Signing Request)
-----BEGIN CERTIFICATE REQUEST-----
-----END CERTIFICATE REQUEST-----

# cat teo-en-ming-corp.key (Display Private/Secret Key)
-----BEGIN PRIVATE KEY-----
-----END PRIVATE KEY-----

Result from AlphaSSL Portal
============================

Congratulations! Your order has been placed successfully. Your order number is :

You'll need to copy the following Domain Verification Code and place it in a text file called "gsdv.txt" which you'll then need to put in one of the approved locations Meta Tag :

http://teo-en-ming-corp.com.sg/.well-known/pki-validation/gsdv.txt
https://teo-en-ming-corp.com.sg/.well-known/pki-validation/gsdv.txt

To complete the URL Verification, close the browser. Open the SSL Configuration Link in new browser and click on "Complete Url Verification".
End of Result from AlphaSSL Portal ================================== Domain Verification for SSL Certificate ======================================= # cd /home/teo-en-ming-corp/public_html # mkdir .well-known # cd .well-known # mkdir pki-validation # cd pki-validation/ Edit gsdv.txt. # nano gsdv.txt Begin Email from AlphaSSL ========================= Email Subject: : Your SSL Certificate for *.teo-en-ming-corp.com.sg has been issued ------------------------------------------------------------------------------- Please note that this email is automatically sent from a noreply mailbox. To contact AlphaSSL please use the Contact Details at the footer of this email. ------------------------------------------------------------------------------- Dear Turritopsis Dohrnii Teo En Ming, Your AlphaSSL Certificate has now been issued and is ready to be installed. Your SSL Certificate can be found at the bottom of this email. CERTIFICATE DETAILS -------------------------------------------------- Order Number: Common Name: *.teo-en-ming-corp.com.sg INSTALLING YOUR CERTIFICATE ---------------------------------------------------- Your SSL Certificate and Intermediate Certificate must be installed on your server. Please note that as of March 31st 2014, SHA-256 will become the default hashing algorithm used unless SHA-1 was selected during the ordering process. 
You can find guides on installing your certificate with the Support Center online at: http://www.alphassl.com/support

QUICK INSTALLATION GUIDE
----------------------------------------------------

1) Using a text editor, copy the SSL Certificate text from the bottom of this email (including the -----BEGIN CERTIFICATE----- and -----END CERTIFICATE----- lines) and save it to a file such as yourdomain.txt
2) Retrieve the Intermediate Certificate (selecting SHA-1 or SHA-256 as appropriate) from the Support Center at: https://www.alphassl.com/support/install-root-certificate.html
3) Using a text editor, copy the Intermediate Certificate text (including the -----BEGIN CERTIFICATE----- and -----END CERTIFICATE----- lines) and save it to a file such as intermediate_domain_ca.txt
4) Copy these .txt files to your server and then rename them with .crt extensions
5) Install the Intermediate and SSL Certificates
6) Restart your server
7) To test for installation errors please use our SSL Configuration Checker located at https://sslcheck.globalsign.com/en_US
8) Install your Site Seal with the instructions shown at: http://www.alphassl.com/support/ssl-site-seal.html
9) We suggest you back-up your SSL Certificate and Private Key pair and keep it safe, all IIS users can use the Export Wizard

We hope that your application process was quick and easy and you have enjoyed the AlphaSSL experience. Thank you for choosing AlphaSSL, if you have any questions or issues please do not hesitate to contact us.

CONTACT US
--------------------------------------------------
For Sales, Technical Support & Account Queries:
W: http://www.alphassl.com/support
E: support at alphassl.com
T: US Toll Free: 877 SSLALPHA (+1 877 775 2574) | Fax: 720 528 8160
T: EU: +44 1622 766 700 | Fax: +44 1622 662 255
---------------------------------------------------
LOW COST. TRUSTED BY ALL BROWSERS. SSL MADE EASY.
---------------------------------------------------
YOUR SSL CERTIFICATE
--------------------------------------------------
(Formatted for the majority of web server software including IIS and Apache based servers):

-----BEGIN CERTIFICATE-----
-----END CERTIFICATE-----

End of Email from AlphaSSL
===========================

# cd /root/teo-en-ming

# nano teo-en-ming-corp.crt (Saving the SSL Certificate/Public Key)
-----BEGIN CERTIFICATE-----
-----END CERTIFICATE-----

# nano intermediate_domain_ca.crt (Saving the intermediate CA certificate)
-----BEGIN CERTIFICATE-----
-----END CERTIFICATE-----

Installing SSL Certificate on Postfix SMTP Server
=================================================

Backup the Postfix configuration files first before you modify anything.

# cd /etc/postfix
# cp main.cf main.teoenming
# cp master.cf master.teoenming

Reference Guide: Installing and configuring SSL on Postfix/Dovecot mail server
Link: https://www.namecheap.com/support/knowledgebase/article.aspx/9795/69/installing-and-configuring-ssl-on-postfixdovecot-mail-server

Copy the public and private key over from /root/teo-en-ming to /etc/postfix.

# cd /root/teo-en-ming/
# cp * /etc/postfix
# cd /etc/postfix

Edit the Postfix configuration file.

# nano main.cf
smtpd_tls_cert_file = /etc/postfix/teo-en-ming-corp.crt
smtpd_tls_key_file = /etc/postfix/teo-en-ming-corp.key
smtpd_tls_CAfile = /etc/postfix/intermediate_domain_ca.crt

***Please note that the previous IT support company did not enable SSL/TLS for SMTP Server.***

Restart the Postfix SMTP Server.

# service postfix restart

Installing SSL Certificate on Dovecot IMAP and POP3 Incoming Mail Server
=========================================================================

Backup the auxiliary Dovecot configuration file first before you modify anything.

# cd /etc/dovecot/conf.d
# cp 10-ssl.conf 10-ssl.teoenming

Begin Redundant/Useless Section
===============================

Please do not follow the instructions in this section.
# cd /etc/pki/dovecot/certs
# cp /root/teo-en-ming/teo-en-ming-corp.crt .
# cd /etc/pki/dovecot/private/
# cp /root/teo-en-ming/teo-en-ming-corp.key .
# cd /etc/dovecot/conf.d

Edit 10-ssl.conf.

# nano 10-ssl.conf
ssl_cert = </etc/pki/dovecot/certs/teo-en-ming-corp.crt
ssl_key = </etc/pki/dovecot/private/teo-en-ming-corp.key

You can also configure SSL Certificate using Webmin. I will publish a guide on this in the future. Also, 16 screenshots will be published in the future.

End of Guide
============

-----BEGIN EMAIL SIGNATURE-----
The Gospel for all Targeted Individuals (TIs): [The New York Times] Microwave Weapons Are Prime Suspect in Ills of U.S. Embassy Workers Link: https://www.nytimes.com/2018/09/01/science/sonic-attack-cuba-microwave.html
********************************************************************************************
Singaporean Targeted Individual Mr. Turritopsis Dohrnii Teo En Ming's Academic Qualifications as at 14 Feb 2019 and refugee seeking attempts at the United Nations Refugee Agency Bangkok (21 Mar 2017), in Taiwan (5 Aug 2019) and Australia (25 Dec 2019 to 9 Jan 2020):
[1] https://tdtemcerts.wordpress.com/
[2] https://tdtemcerts.blogspot.sg/
[3] https://www.scribd.com/user/270125049/Teo-En-Ming
-----END EMAIL SIGNATURE-----
-------------- next part -------------- An HTML attachment was scrubbed... URL: From brice at famille-andre.be Fri Nov 13 12:06:25 2020 From: brice at famille-andre.be (=?UTF-8?B?QnJpY2UgQW5kcsOp?=) Date: Fri, 13 Nov 2020 13:06:25 +0100 Subject: Server application hangs on SS_read, even when client disconnects Message-ID: Hello, I have developed a client-server application with openssl and I have a recurrent bug where, sometimes, server instance seems to be definitively stuck in SSL_read call. I have put more details of the problem here below, but it seems that in some rare execution cases, the server performs a SSL_read, the client disconnects in the meantime, and the server never detects the disconnection and remains stuck in the SSL_read operation.
My server runs on a Debian 6.3, and my version of openssl is 1.1.0l. Here is an extract of the code that manages the SSL connexion at server side :

ctx = SSL_CTX_new(SSLv23_server_method());
BIO* bio = BIO_new_file("dhkey.pem", "r");
if (bio == NULL) ...
DH* ret = PEM_read_bio_DHparams(bio, NULL, NULL, NULL);
BIO_free(bio);
if (SSL_CTX_set_tmp_dh(ctx, ret) < 0) ...
SSL_CTX_set_default_passwd_cb_userdata(ctx, (void*)key);
if (SSL_CTX_use_PrivateKey_file(ctx, "server.key", SSL_FILETYPE_PEM) <= 0) ...
if (SSL_CTX_use_certificate_file(ctx, "server.crt", SSL_FILETYPE_PEM) <= 0) ...
if (SSL_CTX_check_private_key(ctx) == 0) ...
SSL_CTX_set_cipher_list(ctx, "ALL");
ssl_in = SSL_new(ctx);
BIO* sslclient_in = BIO_new_socket(in_sock, BIO_NOCLOSE);
SSL_set_bio(ssl_in, sslclient_in, sslclient_in);
int r_in = SSL_accept(ssl_in);
if (r_in != 1) ...
...
/* Place where program hangs : */
int read = SSL_read(ssl_in, &(((char*)ptr)[nb_read]), size-nb_read);

Here is the full stack-trace where the program hangs :

#0 0x00007f836575d210 in __read_nocancel () from /lib/x86_64-linux-gnu/libpthread.so.0
#1 0x00007f8365c8ccec in ?? () from /usr/lib/x86_64-linux-gnu/libcrypto.so.1.1
#2 0x00007f8365c8772b in BIO_read () from /usr/lib/x86_64-linux-gnu/libcrypto.so.1.1
#3 0x00007f83659879a2 in ?? () from /usr/lib/x86_64-linux-gnu/libssl.so.1.1
#4 0x00007f836598b70d in ?? () from /usr/lib/x86_64-linux-gnu/libssl.so.1.1
#5 0x00007f8365989113 in ?? () from /usr/lib/x86_64-linux-gnu/libssl.so.1.1
#6 0x00007f836598eff6 in ?? () from /usr/lib/x86_64-linux-gnu/libssl.so.1.1
#7 0x00007f8365998dc9 in SSL_read () from /usr/lib/x86_64-linux-gnu/libssl.so.1.1
#8 0x000055b7b3e98289 in Socket::SslRead (this=0x7ffdc6131900, size=4, ptr=0x7ffdc613066c) at ../../Utilities/Database/Sync/server/Communication/Socket.cpp:80

Here is the result of "netstat -natp | grep " :

tcp 32 0 5.196.111.132:5412 109.133.193.70:51822 CLOSE_WAIT 19218/./MabeeServer
tcp 32 0 5.196.111.132:5412 109.133.193.70:51696 CLOSE_WAIT 19218/./MabeeServer
tcp 32 0 5.196.111.132:5412 109.133.193.70:51658 CLOSE_WAIT 19218/./MabeeServer
tcp 0 0 5.196.111.132:5413 85.27.92.8:25856 ESTABLISHED 19218/./MabeeServer
tcp 32 0 5.196.111.132:5412 109.133.193.70:51818 CLOSE_WAIT 19218/./MabeeServer
tcp 32 0 5.196.111.132:5412 109.133.193.70:51740 CLOSE_WAIT 19218/./MabeeServer
tcp 0 0 5.196.111.132:5412 85.27.92.8:26305 ESTABLISHED 19218/./MabeeServer
tcp6 0 0 ::1:36448 ::1:5432 ESTABLISHED 19218/./MabeeServer

From this log, I can see that I have two established connections with remote client machine on IP 109.133.193.70. Note that it's normal to have two connexions because my client-server protocol relies on two distinct TCP connexions. From this, I logged the result of a "tcpdump -i any -nn host 85.27.92.8" during two days (and during those two days, my server instance remained stuck in SSL_read...). On this log, I see no packet exchange on ports 85.27.92.8:25856 or 85.27.92.8:26305. I see some burst of packets exchanged on other client TCP ports, but probably due to the client that performs other requests to the server (and thus, the server that is forking new instances with connections on other client ports). This let me think that the connexion on which the SSL_read is listening is definitively dead (no more TCP keepalive), and that, for a reason I do not understand, the SSL_read keeps blocked into it.
Note that the normal behavior of my application is : client connects, server daemon forks a new instance, communication remains a few seconds with forked server instance, client disconnects and the forked process finished. Note also that normally, client performs a proper disconnection (SSL_shutdown, etc.). But I cannot guarantee it never interrupts in a more abrupt way (connection lost, client crash, etc.). Any advice on what is going wrong ? Many thanks, Brice -------------- next part -------------- An HTML attachment was scrubbed... URL: From Michael.Wojcik at microfocus.com Fri Nov 13 14:42:10 2020 From: Michael.Wojcik at microfocus.com (Michael Wojcik) Date: Fri, 13 Nov 2020 14:42:10 +0000 Subject: Server application hangs on SS_read, even when client disconnects In-Reply-To: References: Message-ID:

> From: openssl-users On Behalf Of Brice André
> Sent: Friday, 13 November, 2020 05:06

> ... it seems that in some rare execution cases, the server performs a SSL_read,
> the client disconnects in the meantime, and the server never detects the
> disconnection and remains stuck in the SSL_read operation. ...

> #0 0x00007f836575d210 in __read_nocancel () from /lib/x86_64-linux-gnu/libpthread.so.0
> #1 0x00007f8365c8ccec in ?? () from /usr/lib/x86_64-linux-gnu/libcrypto.so.1.1
> #2 0x00007f8365c8772b in BIO_read () from /usr/lib/x86_64-linux-gnu/libcrypto.so.1.1

So OpenSSL is in a blocking read of the socket descriptor.

> tcp 0 0 5.196.111.132:5413 85.27.92.8:25856 ESTABLISHED 19218/./MabeeServer
> tcp 0 0 5.196.111.132:5412 85.27.92.8:26305 ESTABLISHED 19218/./MabeeServer

> From this log, I can see that I have two established connections with remote
> client machine on IP 109.133.193.70. Note that it's normal to have two connexions
> because my client-server protocol relies on two distinct TCP connexions.

So the client has not, in fact, disconnected.
When a system closes one end of a TCP connection, the stack will send a TCP packet with either the FIN or the RST flag set. (Which one you get depends on whether the stack on the closing side was holding data for the conversation which the application hadn't read.) The sockets are still in ESTABLISHED state; therefore, no FIN or RST has been received by the local stack. There are various possibilities:

- The client system has not in fact closed its end of the conversation. Sometimes this happens for reasons that aren't immediately apparent; for example, if the client forked and allowed the descriptor for the conversation socket to be inherited by the child, and the child still has it open.

- The client system shut down suddenly (crashed) and so couldn't send the FIN/RST.

- There was a failure in network connectivity between the two systems, and consequently the FIN/RST couldn't be received by the local system.

- The connection is in a state where the peer can't send the FIN/RST, for example because the local side's receive window is zero. That shouldn't be the case, since OpenSSL is (apparently) blocked in a receive on the connection, but as I don't have the complete picture I can't rule it out.

> This let me think that the connexion on which the SSL_read is listening is
> definitively dead (no more TCP keepalive)

"definitely dead" doesn't have any meaning in TCP. That's not one of the TCP states, or part of the other TCP or IP metadata associated with the local port (which is what matters). Do you have keepalives enabled?

> and that, for a reason I do not understand, the SSL_read keeps blocked into it.

The reason is simple: The connection is still established, but there's no data to receive. The question isn't why SSL_read is blocking; it's why you think the connection is gone, but the stack thinks otherwise.
> Note that the normal behavior of my application is : client connects, server > daemon forks a new instance, Does the server parent process close its copy of the conversation socket? -- Michael Wojcik From brice at famille-andre.be Fri Nov 13 16:13:28 2020 From: brice at famille-andre.be (=?UTF-8?B?QnJpY2UgQW5kcsOp?=) Date: Fri, 13 Nov 2020 17:13:28 +0100 Subject: Server application hangs on SS_read, even when client disconnects In-Reply-To: References: Message-ID: Hello, And many thanks for the answer. "Does the server parent process close its copy of the conversation socket?" : I checked in my code, but it seems that no. Is it needed ? May it explain my problem ? " Do you have keepalives enabled?" To be honest, I did not know it was possible to not enable them. I checked with command "netstat -tnope" and it tells me that it is not enabled. I suppose that, if for some reason, the communication with the client is lost (crash of client, loss of network, etc.) and keepalive is not enabled, this may fully explain my problem ? If yes, do you have an idea of why keepalive is not enabled ? I thought that by default on linux it was ? Many thanks, Brice Le ven. 13 nov. 2020 ? 15:43, Michael Wojcik a ?crit : > > From: openssl-users On Behalf Of > Brice Andr? > > Sent: Friday, 13 November, 2020 05:06 > > > ... it seems that in some rare execution cases, the server performs a > SSL_read, > > the client disconnects in the meantime, and the server never detects the > > disconnection and remains stuck in the SSL_read operation. > > ... > > > #0 0x00007f836575d210 in __read_nocancel () from > /lib/x86_64-linux-gnu/libpthread.so.0 > > #1 0x00007f8365c8ccec in ?? () from > /usr/lib/x86_64-linux-gnu/libcrypto.so.1.1 > > #2 0x00007f8365c8772b in BIO_read () from > /usr/lib/x86_64-linux-gnu/libcrypto.so.1.1 > > So OpenSSL is in a blocking read of the socket descriptor. 
> > > tcp 0 0 5.196.111.132:5413 85.27.92.8:25856 ESTABLISHED 19218/./MabeeServer
> > > tcp 0 0 5.196.111.132:5412 85.27.92.8:26305 ESTABLISHED 19218/./MabeeServer
>
> > From this log, I can see that I have two established connections with remote
> > client machine on IP 109.133.193.70. Note that it's normal to have two connexions
> > because my client-server protocol relies on two distinct TCP connexions.
>
> So the client has not, in fact, disconnected.
>
> When a system closes one end of a TCP connection, the stack will send a TCP packet
> with either the FIN or the RST flag set. (Which one you get depends on whether the
> stack on the closing side was holding data for the conversation which the application
> hadn't read.)
>
> The sockets are still in ESTABLISHED state; therefore, no FIN or RST has been
> received by the local stack.
>
> There are various possibilities:
>
> - The client system has not in fact closed its end of the conversation. Sometimes
> this happens for reasons that aren't immediately apparent; for example, if the
> client forked and allowed the descriptor for the conversation socket to be inherited
> by the child, and the child still has it open.
>
> - The client system shut down suddenly (crashed) and so couldn't send the FIN/RST.
>
> - There was a failure in network connectivity between the two systems, and consequently
> the FIN/RST couldn't be received by the local system.
>
> - The connection is in a state where the peer can't send the FIN/RST, for example
> because the local side's receive window is zero. That shouldn't be the case, since
> OpenSSL is (apparently) blocked in a receive on the connection. but as I don't have
> the complete picture I can't rule it out.
>
> > This let me think that the connexion on which the SSL_read is listening is
> > definitively dead (no more TCP keepalive)
>
> "definitely dead" doesn't have any meaning in TCP.
> That's not one of the TCP states,
> or part of the other TCP or IP metadata associated with the local port (which is
> what matters).
>
> Do you have keepalives enabled?
>
> > and that, for a reason I do not understand, the SSL_read keeps blocked into it.
>
> The reason is simple: The connection is still established, but there's no data to
> receive. The question isn't why SSL_read is blocking; it's why you think the
> connection is gone, but the stack thinks otherwise.
>
> > Note that the normal behavior of my application is : client connects, server
> > daemon forks a new instance,
>
> Does the server parent process close its copy of the conversation socket?
>
> --
> Michael Wojcik

-------------- next part -------------- An HTML attachment was scrubbed... URL:

From Michael.Wojcik at microfocus.com Fri Nov 13 17:50:18 2020 From: Michael.Wojcik at microfocus.com (Michael Wojcik) Date: Fri, 13 Nov 2020 17:50:18 +0000 Subject: Server application hangs on SS_read, even when client disconnects In-Reply-To: References: Message-ID:

> From: Brice André
> Sent: Friday, 13 November, 2020 09:13

> "Does the server parent process close its copy of the conversation socket?"
> I checked in my code, but it seems that no. Is it needed?

You'll want to do it, for a few reasons:

- You'll be leaking descriptors in the server, and eventually it will hit its limit.

- If the child process dies without cleanly closing its end of the conversation, the parent will still have an open descriptor for the socket, so the network stack won't terminate the TCP connection.

- A related problem: If the child just closes its socket without calling shutdown, no FIN will be sent to the client system (because the parent still has its copy of the socket open). The client system will have the connection in one of the termination states (FIN_WAIT, maybe? I don't have my references handy) until it times out.
- A bug in the parent process might cause it to operate on the connected socket, causing unexpected traffic on the connection.

- All such sockets will be inherited by future child processes, and one of them might erroneously perform some operation on one of them. Obviously there could also be a security issue with this, depending on what your application does.

Basically, when a descriptor is "handed off" to a child process by forking, you generally want to close it in the parent, unless it's used for parent-child communication. (There are some cases where the parent wants to keep it open for some reason, but they're rare.)

On a similar note, if you exec a different program in the child process (I wasn't sure from your description), it's a good idea for the parent to set the FD_CLOEXEC option (with fcntl) on its listening socket and any other descriptors that shouldn't be passed along to child processes. You could close these manually in the child process between the fork and exec, but FD_CLOEXEC is often easier to maintain.

For some applications, you might just dup2 the socket over descriptor 0 or descriptor 3, depending on whether the child needs access to stdio, and then close everything higher.

Closing descriptors not needed by the child process is a good idea even if you don't exec, since it can prevent various problems and vulnerabilities that result from certain classes of bugs. It's a defensive measure.

The best source for this sort of recommendation, in my opinion, remains W. Richard Stevens' /Advanced Programming in the UNIX Environment/. The book is old, and Linux isn't UNIX, but I don't know of any better explanation of how and why to do things in a UNIX-like OS.

And my favorite source of TCP/IP information is Stevens' /TCP/IP Illustrated/.

> May it explain my problem?

In this case, I don't offhand see how it does, but I may be overlooking something.
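To make the fork-time bookkeeping concrete, here is a sketch of the accept/fork handoff (the name spawn_handler and the structure are illustrative, not your actual code):

```c
#include <unistd.h>
#include <fcntl.h>

/* Hand conn_fd off to a forked child: the child drops its copy of the
 * listener, and the parent drops its copy of the conversation socket so
 * that the child's eventual close() actually sends the FIN. */
static pid_t spawn_handler(int listen_fd, int conn_fd)
{
    pid_t pid;

    /* Keep the listener out of any program the child may exec. */
    fcntl(listen_fd, F_SETFD, FD_CLOEXEC);

    pid = fork();
    if (pid == 0) {                  /* child */
        close(listen_fd);
        /* ... handle the conversation on conn_fd, then close it ... */
        _exit(0);
    }
    if (pid > 0)
        close(conn_fd);              /* parent must not hold it open */
    return pid;
}
```

After this returns in the parent, only the child holds the conversation socket, so the TCP connection's lifetime tracks the child's.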
> I suppose that, if for some reason, the communication with the client is lost
> (crash of client, loss of network, etc.) and keepalive is not enabled, this may
> fully explain my problem ?

It would give you those symptoms, yes.

> If yes, do you have an idea of why keepalive is not enabled?

The Host Requirements RFC mandates that it be disabled by default. I think the primary reasoning for that was to avoid re-establishing virtual circuits (e.g. dial-up connections) for long-running connections that had long idle periods.

Linux may well have a kernel tunable or similar to enable TCP keepalive by default, but it seems to be switched off on your system. You'd have to consult the documentation for your distribution, I think.

By default (again per the Host Requirements RFC), it takes quite a long time for TCP keepalive to detect a broken connection. It doesn't start probing until the connection has been idle for 2 hours, and then you have to wait for the TCP retransmit timer times the retransmit count to be exhausted - typically over 10 minutes. Again, some OSes let you change these defaults, and some let you change them on an individual connection.

-- Michael Wojcik

From sanarayana at rbbn.com Fri Nov 13 19:10:28 2020 From: sanarayana at rbbn.com (Narayana, Sunil Kumar) Date: Fri, 13 Nov 2020 19:10:28 +0000 Subject: ## Application accessing 'ex_kusage' ## Message-ID:

Hi,

We are porting our application from OpenSSL 1.0.1 to OpenSSL 3.0. Related to this activity, we need to access the variable 'ex_kusage' pointed to by X509, but there are no setter utilities available for this variable; only X509_get_key_usage() is available.

Our code for 1.0.1 is as below. Please suggest the right way to achieve this.
    ASN1_BIT_STRING *usage;

    x509->ex_kusage = 0;

    if ((usage = (ASN1_BIT_STRING *)X509_get_ext_d2i(x509, NID_key_usage, NULL, NULL)))
    {
        if (usage->length > 0)
        {
            x509->ex_kusage = usage->data[0];
            if (usage->length > 1)
                x509->ex_kusage |= usage->data[1] << 8;
        }
        else
            x509->ex_kusage = 0;
        ASN1_BIT_STRING_free(usage);
    }

Regards, Sunil

----------------------------------------------------------------------------------------------------------------------- Notice: This e-mail together with any attachments may contain information of Ribbon Communications Inc. that is confidential and/or proprietary for the sole use of the intended recipient. Any review, disclosure, reliance or distribution by others or forwarding without express permission is strictly prohibited. If you are not the intended recipient, please notify the sender immediately and then delete all copies, including any attachments. -----------------------------------------------------------------------------------------------------------------------

-------------- next part -------------- An HTML attachment was scrubbed... URL:

From brice at famille-andre.be Sat Nov 14 07:52:51 2020 From: brice at famille-andre.be (=?UTF-8?B?QnJpY2UgQW5kcsOp?=) Date: Sat, 14 Nov 2020 08:52:51 +0100 Subject: Server application hangs on SS_read, even when client disconnects In-Reply-To: References: Message-ID:

Hello Michael,

Thanks for all that information. I corrected the point you suggested (closing the parent process's sockets). I also activated keepalive, with values adapted to my application.

I hope this will solve my issue, but as the problem may take several weeks to occur, I will not know immediately if this was the origin :-)

Many thanks for your help. Regards, Brice

Le ven. 13 nov. 2020 à 18:52, Michael Wojcik a écrit :

> > From: Brice André
> > Sent: Friday, 13 November, 2020 09:13
>
> > "Does the server parent process close its copy of the conversation socket?"
> > I checked in my code, but it seems that no. Is it needed? > > You'll want to do it, for a few reasons: > > - You'll be leaking descriptors in the server, and eventually it will hit > its limit. > - If the child process dies without cleanly closing its end of the > conversation, > the parent will still have an open descriptor for the socket, so the > network stack > won't terminate the TCP connection. > - A related problem: If the child just closes its socket without calling > shutdown, > no FIN will be sent to the client system (because the parent still has its > copy of > the socket open). The client system will have the connection in one of the > termination > states (FIN_WAIT, maybe? I don't have my references handy) until it times > out. > - A bug in the parent process might cause it to operate on the connected > socket, > causing unexpected traffic on the connection. > - All such sockets will be inherited by future child processes, and one of > them might > erroneously perform some operation on one of them. Obviously there could > also be a > security issue with this, depending on what your application does. > > Basically, when a descriptor is "handed off" to a child process by > forking, you > generally want to close it in the parent, unless it's used for parent-child > communication. (There are some cases where the parent wants to keep it > open for > some reason, but they're rare.) > > On a similar note, if you exec a different program in the child process (I > wasn't > sure from your description), it's a good idea for the parent to set the > FD_CLOEXEC > option (with fcntl) on its listening socket and any other descriptors that > shouldn't > be passed along to child processes. You could close these manually in the > child > process between the fork and exec, but FD_CLOEXEC is often easier to > maintain. 
> > For some applications, you might just dup2 the socket over descriptor 0 or > descriptor 3, depending on whether the child needs access to stdio, and > then close > everything higher. > > Closing descriptors not needed by the child process is a good idea even if > you > don't exec, since it can prevent various problems and vulnerabilities that > result > from certain classes of bugs. It's a defensive measure. > > The best source for this sort of recommendation, in my opinion, remains W. > Richard > Stevens' /Advanced Programming in the UNIX Environment/. The book is old, > and Linux > isn't UNIX, but I don't know of any better explanation of how and why to > do things > in a UNIX-like OS. > > And my favorite source of TCP/IP information is Stevens' /TCP/IP > Illustrated/. > > > May it explain my problem? > > In this case, I don't offhand see how it does, but I may be overlooking > something. > > > I suppose that, if for some reason, the communication with the client is > lost > > (crash of client, loss of network, etc.) and keepalive is not enabled, > this may > > fully explain my problem ? > > It would give you those symptoms, yes. > > > If yes, do you have an idea of why keepalive is not enabled? > > The Host Requirements RFC mandates that it be disabled by default. I think > the > primary reasoning for that was to avoid re-establishing virtual circuits > (e.g. > dial-up connections) for long-running connections that had long idle > periods. > > Linux may well have a kernel tunable or similar to enable TCP keepalive by > default, but it seems to be switched off on your system. You'd have to > consult > the documentation for your distribution, I think. > > By default (again per the Host Requirements RFC), it takes quite a long > time for > TCP keepalive to detect a broken connection. 
It doesn't start probing > until the > connection has been idle for 2 hours, and then you have to wait for the TCP > retransmit timer times the retransmit count to be exhausted - typically > over 10 > minutes. Again, some OSes let you change these defaults, and some let you > change > them on an individual connection. > > -- > Michael Wojcik > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From rahulmg1983 at gmail.com Sat Nov 14 11:00:26 2020 From: rahulmg1983 at gmail.com (Rahul Godbole) Date: Sat, 14 Nov 2020 16:30:26 +0530 Subject: RAND_bytes() thread safety Message-ID: <6F609719-DC52-49D8-BC7A-10FAF62B51E6@gmail.com> Hi Is OpenSSL function RAND_bytes () thread safe? Thanks Rahul From space.ship.traveller at gmail.com Sun Nov 15 01:10:36 2020 From: space.ship.traveller at gmail.com (Samuel Williams) Date: Sun, 15 Nov 2020 14:10:36 +1300 Subject: CA no longer verifying certificates Message-ID: Hello I generate a CA (self signed), and then generate a certificate from that CA, which should be used by a HTTP/2 client and server during testing. This code was working as recently as 12 months ago, but it seems like something has stopped it from verifying correctly. Here is how the CA is generated, along with a certificate store which is used for verification: https://github.com/socketry/async-rspec/blob/4e4c2e59fdb93daab0aa11917f02a05d0fd746e3/lib/async/rspec/ssl.rb#L47-L79 Later, this CA is used to generate a certificate: https://github.com/socketry/async-rspec/blob/4e4c2e59fdb93daab0aa11917f02a05d0fd746e3/lib/async/rspec/ssl.rb#L85-L110 Finally, we want to check that this is a valid configuration: https://github.com/socketry/async-rspec/blob/4e4c2e59fdb93daab0aa11917f02a05d0fd746e3/spec/async/rspec/ssl_spec.rb#L35-L37 Like I said, this was passing, as recently as April. However, it's now failing with error code 18: "self signed certificate". 
I've tried a number of things but cannot figure out what's changed and what I need to do to make this work again (except disable verification completely which is not what I want). Any ideas what I need to do to make this work again? Thanks Samuel From space.ship.traveller at gmail.com Sun Nov 15 01:54:46 2020 From: space.ship.traveller at gmail.com (Samuel Williams) Date: Sun, 15 Nov 2020 14:54:46 +1300 Subject: CA no longer verifying certificates In-Reply-To: References: Message-ID: Oh my, I figured it out after digging through the OpenSSL source code. My CA certificate and the client certificate both had the same common name, so they were clobbering each other. Changing the name of the CA certificate solved the problem. On Sun, 15 Nov 2020 at 14:10, Samuel Williams wrote: > > Hello > > I generate a CA (self signed), and then generate a certificate from > that CA, which should be used by a HTTP/2 client and server during > testing. > > This code was working as recently as 12 months ago, but it seems like > something has stopped it from verifying correctly. > > Here is how the CA is generated, along with a certificate store which > is used for verification: > > https://github.com/socketry/async-rspec/blob/4e4c2e59fdb93daab0aa11917f02a05d0fd746e3/lib/async/rspec/ssl.rb#L47-L79 > > Later, this CA is used to generate a certificate: > > https://github.com/socketry/async-rspec/blob/4e4c2e59fdb93daab0aa11917f02a05d0fd746e3/lib/async/rspec/ssl.rb#L85-L110 > > Finally, we want to check that this is a valid configuration: > > https://github.com/socketry/async-rspec/blob/4e4c2e59fdb93daab0aa11917f02a05d0fd746e3/spec/async/rspec/ssl_spec.rb#L35-L37 > > Like I said, this was passing, as recently as April. However, it's now > failing with error code 18: "self signed certificate". > > I've tried a number of things but cannot figure out what's changed and > what I need to do to make this work again (except disable verification > completely which is not what I want). 
> > Any ideas what I need to do to make this work again? > > Thanks > Samuel From rui.zang at yandex.com Mon Nov 16 01:48:07 2020 From: rui.zang at yandex.com (rui zang) Date: Mon, 16 Nov 2020 09:48:07 +0800 Subject: test cases failed after enabling ktls Message-ID: <258201605491145@mail.yandex.com> An HTML attachment was scrubbed... URL: From rui.zang at yandex.com Mon Nov 16 07:56:19 2020 From: rui.zang at yandex.com (rui zang) Date: Mon, 16 Nov 2020 15:56:19 +0800 Subject: test cases failed after enabling ktls In-Reply-To: <258201605491145@mail.yandex.com> References: <258201605491145@mail.yandex.com> Message-ID: <172291605513240@mail.yandex.com> Resend in plain text. ====================================== Greetings, I am trying openssl+ktls on ubuntu 20.04. I have tried openssl-3.0.0-alpha8 from https://www.openssl.org/source/openssl-3.0.0-alpha8.tar.gz and also the current master branch from the github repo. The kernels I have tried are v5.9 and v5.9.8. On every combination, same group of test case could not pass. Test Summary Report ------------------- 70-test_key_share.t (Wstat: 1536 Tests: 22 Failed: 6) Failed tests: 1, 4, 6-7, 13-14 Non-zero exit status: 6 70-test_sslextension.t (Wstat: 256 Tests: 8 Failed: 1) Failed test: 8 Non-zero exit status: 1 70-test_sslrecords.t (Wstat: 768 Tests: 20 Failed: 3) Failed tests: 18-20 Non-zero exit status: 3 70-test_sslsigalgs.t (Wstat: 1536 Tests: 26 Failed: 6) Failed tests: 1, 6, 22-23, 25-26 Non-zero exit status: 6 70-test_sslsignature.t (Wstat: 256 Tests: 4 Failed: 1) Failed test: 1 Non-zero exit status: 1 70-test_sslversions.t (Wstat: 512 Tests: 8 Failed: 2) Failed tests: 5, 7 Non-zero exit status: 2 70-test_tls13cookie.t (Wstat: 512 Tests: 2 Failed: 2) Failed tests: 1-2 Non-zero exit status: 2 70-test_tls13downgrade.t (Wstat: 256 Tests: 6 Failed: 1) Failed test: 6 Non-zero exit status: 1 70-test_tls13kexmodes.t (Wstat: 7424 Tests: 1 Failed: 1) Failed test: 1 Non-zero exit status: 29 Parse errors: Bad plan. 
You planned 11 tests but ran 1.
70-test_tls13messages.t (Wstat: 7424 Tests: 1 Failed: 0)
  Non-zero exit status: 29
  Parse errors: Bad plan. You planned 17 tests but ran 1.
70-test_tls13psk.t (Wstat: 7424 Tests: 1 Failed: 1)
  Failed test: 1
  Non-zero exit status: 29
  Parse errors: Bad plan. You planned 5 tests but ran 1.
70-test_tlsextms.t (Wstat: 256 Tests: 10 Failed: 1)
  Failed test: 10
  Non-zero exit status: 1
Files=223, Tests=3571, 615 wallclock secs (11.00 usr 0.93 sys + 343.65 cusr 84.69 csys = 440.27 CPU)
Result: FAIL
make[1]: *** [Makefile:3197: _tests] Error 1
make[1]: Leaving directory '/home/ubuntu/openssl'
make: *** [Makefile:3194: tests] Error 2

Complete `make test` output (kernel v5.9.8 + openssl master) is copied here https://cl1p.net/openssl_ktls_make_test_failure (due to the 100K limit of this mailing list). I am sure that the kernel tls module is loaded correctly since /proc/net/tls_stat is exposed correctly and I can see the counters increasing while doing `make test`. So is this supposed to happen? What should I do to make ktls work?

Thanks, Rui Zang

From matt at openssl.org Mon Nov 16 10:16:24 2020 From: matt at openssl.org (Matt Caswell) Date: Mon, 16 Nov 2020 10:16:24 +0000 Subject: ## Application accessing 'ex_kusage' ## In-Reply-To: References: Message-ID:

On 13/11/2020 19:10, Narayana, Sunil Kumar wrote:
> Hi,
>
> We are porting our Application from openssl 1.0.1 to
> openssl 3.0. in related to this activity we require to access the
> variable 'ex_kusage' pointed by X509
>
> But there are no set utilities available to access this variable. Only
> X509_get_key_usage Is available.
>
> Our code for 1.0.1 is as below. Please suggest the right way to achieve
> this.

I'd like to ask why you feel you need to do this at all. It seems to me like you are replicating libcrypto internal code in your own application.
This is code in libcrypto:

    /* Handle (basic) key usage */
    if ((usage = X509_get_ext_d2i(x, NID_key_usage, &i, NULL)) != NULL) {
        x->ex_kusage = 0;
        if (usage->length > 0) {
            x->ex_kusage = usage->data[0];
            if (usage->length > 1)
                x->ex_kusage |= usage->data[1] << 8;
        }
        x->ex_flags |= EXFLAG_KUSAGE;
        ASN1_BIT_STRING_free(usage);
        /* Check for empty key usage according to RFC 5280 section 4.2.1.3 */
        if (x->ex_kusage == 0) {
            ERR_raise(ERR_LIB_X509, X509V3_R_EMPTY_KEY_USAGE);
            x->ex_flags |= EXFLAG_INVALID;
        }
    } else if (i != -1) {
        x->ex_flags |= EXFLAG_INVALID;
    }

So it seems very similar to what you are trying to do, and I guess some earlier version of this code was the original source of what is in your application now. The purpose of this code is to decode the key usage extension and cache it in the internal `ex_flags` value. This code gets called in numerous code paths whenever we need to query extension data - including if you were to call X509_get_key_usage(). Your application seems to want to manage for itself when libcrypto does this caching. It should not need to do so - it's entirely internal.

My guess is that, perhaps, in some older version of OpenSSL the caching didn't happen when it was supposed to and you implemented this workaround?? Or possibly the workaround is still needed due to a bug in OpenSSL that still doesn't do the caching when needed? If so I'd like to understand the circumstances behind that.

Matt

From matt at openssl.org Mon Nov 16 10:35:06 2020 From: matt at openssl.org (Matt Caswell) Date: Mon, 16 Nov 2020 10:35:06 +0000 Subject: RAND_bytes() thread safety In-Reply-To: <6F609719-DC52-49D8-BC7A-10FAF62B51E6@gmail.com> References: <6F609719-DC52-49D8-BC7A-10FAF62B51E6@gmail.com> Message-ID:

On 14/11/2020 11:00, Rahul Godbole wrote:
> Is OpenSSL function RAND_bytes() thread safe?
Short answer: Yes

Longer answer: Yes, assuming that:

- you are using OpenSSL >= 1.1.0, or
- you are using OpenSSL 1.0.2 or below and you have set up the locking callbacks

AND

- you have not compiled OpenSSL with "no-threads"

Matt

From matt at openssl.org Mon Nov 16 11:45:20 2020 From: matt at openssl.org (Matt Caswell) Date: Mon, 16 Nov 2020 11:45:20 +0000 Subject: test cases failed after enabling ktls In-Reply-To: <172291605513240@mail.yandex.com> References: <258201605491145@mail.yandex.com> <172291605513240@mail.yandex.com> Message-ID: <749b7676-2c84-1b63-9b75-575f903e14f7@openssl.org>

On 16/11/2020 07:56, rui zang wrote:
> Resend in plain text.
> ======================================
>
> Greetings,
>
> I am trying openssl+ktls on ubuntu 20.04.
> I have tried openssl-3.0.0-alpha8 from https://www.openssl.org/source/openssl-3.0.0-alpha8.tar.gz
> and also the current master branch from the github repo.
> The kernels I have tried are v5.9 and v5.9.8.
> On every combination, same group of test case could not pass.

Please can you open this as a github issue?
Thanks Matt > > Test Summary Report > ------------------- > 70-test_key_share.t (Wstat: 1536 Tests: 22 Failed: 6) > Failed tests: 1, 4, 6-7, 13-14 > Non-zero exit status: 6 > 70-test_sslextension.t (Wstat: 256 Tests: 8 Failed: 1) > Failed test: 8 > Non-zero exit status: 1 > 70-test_sslrecords.t (Wstat: 768 Tests: 20 Failed: 3) > Failed tests: 18-20 > Non-zero exit status: 3 > 70-test_sslsigalgs.t (Wstat: 1536 Tests: 26 Failed: 6) > Failed tests: 1, 6, 22-23, 25-26 > Non-zero exit status: 6 > 70-test_sslsignature.t (Wstat: 256 Tests: 4 Failed: 1) > Failed test: 1 > Non-zero exit status: 1 > 70-test_sslversions.t (Wstat: 512 Tests: 8 Failed: 2) > Failed tests: 5, 7 > Non-zero exit status: 2 > 70-test_tls13cookie.t (Wstat: 512 Tests: 2 Failed: 2) > Failed tests: 1-2 > Non-zero exit status: 2 > 70-test_tls13downgrade.t (Wstat: 256 Tests: 6 Failed: 1) > Failed test: 6 > Non-zero exit status: 1 > 70-test_tls13kexmodes.t (Wstat: 7424 Tests: 1 Failed: 1) > Failed test: 1 > Non-zero exit status: 29 > Parse errors: Bad plan. You planned 11 tests but ran 1. > 70-test_tls13messages.t (Wstat: 7424 Tests: 1 Failed: 0) > Non-zero exit status: 29 > Parse errors: Bad plan. You planned 17 tests but ran 1. > 70-test_tls13psk.t (Wstat: 7424 Tests: 1 Failed: 1) > Failed test: 1 > Non-zero exit status: 29 > Parse errors: Bad plan. You planned 5 tests but ran 1. 
> 70-test_tlsextms.t (Wstat: 256 Tests: 10 Failed: 1) > Failed test: 10 > Non-zero exit status: 1 > Files=223, Tests=3571, 615 wallclock secs (11.00 usr 0.93 sys + 343.65 cusr 84.69 csys = 440.27 CPU) > Result: FAIL > make[1]: *** [Makefile:3197: _tests] Error 1 > make[1]: Leaving directory '/home/ubuntu/openssl' > make: *** [Makefile:3194: tests] Error 2 > > Complete `make test` output (kernel v5.9.8 + openssl master) is copied here https://cl1p.net/openssl_ktls_make_test_failure (due to the 100K limit of this mailing list) > I am sure that the kernel tls module is loaded correctly since /proc/net/tls_stat is exposed correctly and I can see the counters increasing while doing `make test`. > So is this supposed to happen? What should I do to make ktls work? > > Thanks, > Rui Zang > From jps at xce.pt Tue Nov 17 02:33:30 2020 From: jps at xce.pt (=?utf-8?Q?Jo=C3=A3o_Santos?=) Date: Tue, 17 Nov 2020 02:33:30 +0000 Subject: Handling BIO errors Message-ID: <009450BD-7D9B-47D7-A1D2-8A4476EE52BE@xce.pt> I'm writing a daemon that talks to a server using HTTP/2 over TLS 1.2+ and leveraging OpenSSL 1.1.1h to provide the TLS support. At the moment I think that I have the whole TLS part figured, and I could probably have the project running by now if I used SSL_set_fd to assign a connected socket to the underlying BIO of an SSL object, but I want to simplify the code as much as possible by using the highest level interfaces at my disposal, which in the case of OpenSSL means using BIO objects. Unfortunately I'm having a problem which is that I can't figure out how to convert error codes returned by ERR_get_error and split by ERR_GET_LIB, ERR_GET_FUNC, and ERR_GET_REASON into constants that I can use in a switch statement to react to BIO errors. 
This is not a problem for SSL filter BIOs since those have their own error reporting functions, but is a problem for Internet socket source BIOs since BIO_do_connect in particular can fail due to a system call error, a DNS error, or even an error generated by lower level OpenSSL functions and other BIOs in the chain, and I cannot find any manual pages documenting these error constants, if they even exist.

Here's a small working example that illustrates the problem that I'm having:

    #include <stdio.h>
    #include <openssl/bio.h>
    #include <openssl/err.h>

    int main(void)
    {
        ERR_load_ERR_strings();
        BIO *bio = BIO_new_connect("wwx.google.com:80");
        printf("Connected: %ld\n", BIO_do_connect(bio));
        ERR_print_errors_fp(stderr);
        return 0;
    }

Running this code, which has a misspelled hostname on purpose so that it can fail, results in the following printed out to the console:

    Connected: -1
    4667342272:error:2008F002:BIO routines:BIO_lookup_ex:system lib:crypto/bio/b_addr.c:726:nodename nor servname provided, or not known

What could I do in that code to use a switch statement on the kind of information printed by ERR_print_errors_fp? I know that, in this example, the error is from getaddrinfo, since I recognize the error message, but assuming that I want to handle that specific error, what can I match the library, function, and reason error codes against?

Thanks in advance!

From rui.zang at yandex.com Tue Nov 17 03:08:27 2020 From: rui.zang at yandex.com (rui zang) Date: Tue, 17 Nov 2020 11:08:27 +0800 Subject: test cases failed after enabling ktls In-Reply-To: <749b7676-2c84-1b63-9b75-575f903e14f7@openssl.org> References: <258201605491145@mail.yandex.com> <172291605513240@mail.yandex.com> <749b7676-2c84-1b63-9b75-575f903e14f7@openssl.org> Message-ID: <1089471605582408@mail.yandex.com>

Thanks, please check out https://github.com/openssl/openssl/issues/13424

Regards, Rui Zang

16.11.2020, 19:45, "Matt Caswell" :
> On 16/11/2020 07:56, rui zang wrote:
>> Resend in plain text.
>> ?====================================== >> >> ?Greetings, >> >> ?I am trying openssl+ktls on ubuntu 20.04. >> ?I have tried openssl-3.0.0-alpha8 from https://www.openssl.org/source/openssl-3.0.0-alpha8.tar.gz >> ?and also the current master branch from the github repo. >> ?The kernels I have tried are v5.9 and v5.9.8. >> ?On every combination, same group of test case could not pass. > > Please can you open this as a github issue? > > Thanks > > Matt > >> ?Test Summary Report >> ?------------------- >> ?70-test_key_share.t (Wstat: 1536 Tests: 22 Failed: 6) >> ???Failed tests: 1, 4, 6-7, 13-14 >> ???Non-zero exit status: 6 >> ?70-test_sslextension.t (Wstat: 256 Tests: 8 Failed: 1) >> ???Failed test: 8 >> ???Non-zero exit status: 1 >> ?70-test_sslrecords.t (Wstat: 768 Tests: 20 Failed: 3) >> ???Failed tests: 18-20 >> ???Non-zero exit status: 3 >> ?70-test_sslsigalgs.t (Wstat: 1536 Tests: 26 Failed: 6) >> ???Failed tests: 1, 6, 22-23, 25-26 >> ???Non-zero exit status: 6 >> ?70-test_sslsignature.t (Wstat: 256 Tests: 4 Failed: 1) >> ???Failed test: 1 >> ???Non-zero exit status: 1 >> ?70-test_sslversions.t (Wstat: 512 Tests: 8 Failed: 2) >> ???Failed tests: 5, 7 >> ???Non-zero exit status: 2 >> ?70-test_tls13cookie.t (Wstat: 512 Tests: 2 Failed: 2) >> ???Failed tests: 1-2 >> ???Non-zero exit status: 2 >> ?70-test_tls13downgrade.t (Wstat: 256 Tests: 6 Failed: 1) >> ???Failed test: 6 >> ???Non-zero exit status: 1 >> ?70-test_tls13kexmodes.t (Wstat: 7424 Tests: 1 Failed: 1) >> ???Failed test: 1 >> ???Non-zero exit status: 29 >> ???Parse errors: Bad plan. You planned 11 tests but ran 1. >> ?70-test_tls13messages.t (Wstat: 7424 Tests: 1 Failed: 0) >> ???Non-zero exit status: 29 >> ???Parse errors: Bad plan. You planned 17 tests but ran 1. >> ?70-test_tls13psk.t (Wstat: 7424 Tests: 1 Failed: 1) >> ???Failed test: 1 >> ???Non-zero exit status: 29 >> ???Parse errors: Bad plan. You planned 5 tests but ran 1. 
>> 70-test_tlsextms.t (Wstat: 256 Tests: 10 Failed: 1)
>>   Failed test: 10
>>   Non-zero exit status: 1
>> Files=223, Tests=3571, 615 wallclock secs (11.00 usr 0.93 sys + 343.65 cusr 84.69 csys = 440.27 CPU)
>> Result: FAIL
>> make[1]: *** [Makefile:3197: _tests] Error 1
>> make[1]: Leaving directory '/home/ubuntu/openssl'
>> make: *** [Makefile:3194: tests] Error 2
>>
>> Complete `make test` output (kernel v5.9.8 + openssl master) is copied here https://cl1p.net/openssl_ktls_make_test_failure (due to the 100K limit of this mailing list)
>> I am sure that the kernel tls module is loaded correctly since /proc/net/tls_stat is exposed correctly and I can see the counters increasing while doing `make test`.
>> So is this supposed to happen? What should I do to make ktls work?
>>
>> Thanks,
>> Rui Zang

From jb-openssl at wisemo.com Tue Nov 17 03:13:54 2020 From: jb-openssl at wisemo.com (Jakob Bohm) Date: Tue, 17 Nov 2020 04:13:54 +0100 Subject: Server application hangs on SS_read, even when client disconnects In-Reply-To: References: Message-ID: <66c3b7e7-871f-b0c0-3e4c-1968c8d6b91c@wisemo.com>

(Top posting to match what Mr. André does):

TCP without keepalive will time out the connection a few minutes after sending any data that doesn't get a response. TCP without keepalive with no outstanding send (so only a blocking recv) and nothing outstanding at the other end will probably hang almost forever, as there is nothing indicating that there is actual data lost in transit.

On 2020-11-13 17:13, Brice André wrote:
> Hello,
>
> And many thanks for the answer.
>
> "Does the server parent process close its copy of the conversation
> socket?": I checked in my code, but it seems that no. Is it needed?
> May it explain my problem?
>
> "Do you have keepalives enabled?" To be honest, I did not know it was
> possible to not enable them. I checked with command "netstat -tnope"
> and it tells me that it is not enabled.
> > I suppose that, if for some reason, the communication with the client > is lost (crash of client, loss of network, etc.) and keepalive is not > enabled, this may fully explain my problem ? > > If yes, do you have an idea of why keepalive is not enabled ? I > thought that by default on linux it was ? > > Many thanks, > Brice > > > Le ven. 13 nov. 2020 à 15:43, Michael Wojcik > > > a écrit : > > > From: openssl-users > On Behalf Of Brice André > > Sent: Friday, 13 November, 2020 05:06 > > > ... it seems that in some rare execution cases, the server > performs a SSL_read, > > the client disconnects in the meantime, and the server never > detects the > > disconnection and remains stuck in the SSL_read operation. > > ... > > > #0 0x00007f836575d210 in __read_nocancel () from > /lib/x86_64-linux-gnu/libpthread.so.0 > > #1 0x00007f8365c8ccec in ?? () from > /usr/lib/x86_64-linux-gnu/libcrypto.so.1.1 > > #2 0x00007f8365c8772b in BIO_read () from > /usr/lib/x86_64-linux-gnu/libcrypto.so.1.1 > > So OpenSSL is in a blocking read of the socket descriptor. > > > tcp 0 0 5.196.111.132:5413 85.27.92.8:25856 > ESTABLISHED 19218/./MabeeServer > > tcp 0 0 5.196.111.132:5412 85.27.92.8:26305 > ESTABLISHED 19218/./MabeeServer > > > From this log, I can see that I have two established connections > with remote > > client machine on IP 109.133.193.70. Note that it's normal to > have two connexions > > because my client-server protocol relies on two distinct TCP > connexions. > > So the client has not, in fact, disconnected. > > When a system closes one end of a TCP connection, the stack will > send a TCP packet > with either the FIN or the RST flag set. (Which one you get > depends on whether the > stack on the closing side was holding data for the conversation > which the application > hadn't read.) > > The sockets are still in ESTABLISHED state; therefore, no FIN or > RST has been > received by the local stack.
> > There are various possibilities: > > - The client system has not in fact closed its end of the > conversation. Sometimes > this happens for reasons that aren't immediately apparent; for > example, if the > client forked and allowed the descriptor for the conversation > socket to be inherited > by the child, and the child still has it open. > > - The client system shut down suddenly (crashed) and so couldn't > send the FIN/RST. > > - There was a failure in network connectivity between the two > systems, and consequently > the FIN/RST couldn't be received by the local system. > > - The connection is in a state where the peer can't send the > FIN/RST, for example > because the local side's receive window is zero. That shouldn't be > the case, since > OpenSSL is (apparently) blocked in a receive on the connection, > but as I don't have > the complete picture I can't rule it out. > > > This let me think that the connexion on which the SSL_read is > listening is > > definitively dead (no more TCP keepalive) > > "definitely dead" doesn't have any meaning in TCP. That's not one > of the TCP states, > or part of the other TCP or IP metadata associated with the local > port (which is > what matters). > > Do you have keepalives enabled? > > > and that, for a reason I do not understand, the SSL_read keeps > blocked into it. > > The reason is simple: The connection is still established, but > there's no data to > receive. The question isn't why SSL_read is blocking; it's why you > think the > connection is gone, but the stack thinks otherwise. > > > Note that the normal behavior of my application is : client > connects, server > > daemon forks a new instance, > > Does the server parent process close its copy of the conversation > socket? > > Enjoy Jakob -- Jakob Bohm, CIO, Partner, WiseMo A/S. https://www.wisemo.com Transformervej 29, 2860 Søborg, Denmark. Direct +45 31 13 16 10 This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded From venomdz at gmail.com Tue Nov 17 06:33:14 2020 From: venomdz at gmail.com (Vernon D'souza) Date: Tue, 17 Nov 2020 12:03:14 +0530 Subject: SSL_peek_ex() hangs multiple times at random Message-ID: Hi Everyone, I'm currently using the networking library libneon (version 31.2) which internally uses OpenSSL 1.1.1d. The issue is that a hang occurs at random in the SSL_peek_ex() API multiple times in a day. 'strace' shows the SSL_peek_ex() API is stuck in an unfinished read. Could anyone give me some suggestions on how to debug this issue further? -------------- next part -------------- An HTML attachment was scrubbed... URL: From aerowolf at gmail.com Tue Nov 17 09:37:02 2020 From: aerowolf at gmail.com (Kyle Hamilton) Date: Tue, 17 Nov 2020 03:37:02 -0600 Subject: Server application hangs on SS_read, even when client disconnects In-Reply-To: References: Message-ID: There's another reason why you'll want to close your socket with SSL_close(): SSL (and TLS) view a prematurely-closed stream as an exceptional condition to be reported to the application. This is to prevent truncation attacks against the data communication layer. While your application may not need that level of protection, it helps to keep the state of your application in lockstep with the state of the TLS protocol. If your application doesn't expect to send any more data, SSL_close() sends another record across the TCP connection to tell the remote side that it should not keep the descriptor open. -Kyle H On Fri, Nov 13, 2020 at 11:51 AM Michael Wojcik wrote: > > > From: Brice André > > Sent: Friday, 13 November, 2020 09:13 > > > "Does the server parent process close its copy of the conversation socket?" > > I checked in my code, but it seems that no. Is it needed? > > You'll want to do it, for a few reasons: > > - You'll be leaking descriptors in the server, and eventually it will hit its limit.
> - If the child process dies without cleanly closing its end of the conversation, > the parent will still have an open descriptor for the socket, so the network stack > won't terminate the TCP connection. > - A related problem: If the child just closes its socket without calling shutdown, > no FIN will be sent to the client system (because the parent still has its copy of > the socket open). The client system will have the connection in one of the termination > states (FIN_WAIT, maybe? I don't have my references handy) until it times out. > - A bug in the parent process might cause it to operate on the connected socket, > causing unexpected traffic on the connection. > - All such sockets will be inherited by future child processes, and one of them might > erroneously perform some operation on one of them. Obviously there could also be a > security issue with this, depending on what your application does. > > Basically, when a descriptor is "handed off" to a child process by forking, you > generally want to close it in the parent, unless it's used for parent-child > communication. (There are some cases where the parent wants to keep it open for > some reason, but they're rare.) > > On a similar note, if you exec a different program in the child process (I wasn't > sure from your description), it's a good idea for the parent to set the FD_CLOEXEC > option (with fcntl) on its listening socket and any other descriptors that shouldn't > be passed along to child processes. You could close these manually in the child > process between the fork and exec, but FD_CLOEXEC is often easier to maintain. > > For some applications, you might just dup2 the socket over descriptor 0 or > descriptor 3, depending on whether the child needs access to stdio, and then close > everything higher. 
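The descriptor-hygiene pattern described above can be sketched in a few lines of POSIX C. This is a minimal illustration, not the poster's actual server; the helper name `set_cloexec` and the loop structure are invented for the example:

```c
#include <fcntl.h>
#include <sys/socket.h>
#include <sys/types.h>
#include <unistd.h>

/* Mark a descriptor close-on-exec so it is not inherited across exec();
 * the fcntl approach mentioned above as easier to maintain than closing
 * descriptors one by one between fork and exec. */
static int set_cloexec(int fd)
{
    int flags = fcntl(fd, F_GETFD);
    return (flags == -1) ? -1 : fcntl(fd, F_SETFD, flags | FD_CLOEXEC);
}

/* Sketch of the fork-per-connection pattern: the parent closes its copy
 * of the conversation socket immediately, so the child's shutdown/close
 * is the one that actually terminates the TCP connection. */
static void serve(int listen_fd)
{
    for (;;) {
        int conn = accept(listen_fd, NULL, NULL);
        if (conn == -1)
            continue;
        pid_t pid = fork();
        if (pid == 0) {        /* child: handle the conversation */
            close(listen_fd);  /* child does not need the listener */
            /* ... TLS handshake / application protocol ... */
            _exit(0);
        }
        close(conn);           /* parent: drop its copy right away */
    }
}
```

With `set_cloexec` applied to the listening socket at startup, any program the child execs starts with a clean descriptor table, which addresses the inheritance concerns listed above.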
> > Closing descriptors not needed by the child process is a good idea even if you > don't exec, since it can prevent various problems and vulnerabilities that result > from certain classes of bugs. It's a defensive measure. > > The best source for this sort of recommendation, in my opinion, remains W. Richard > Stevens' /Advanced Programming in the UNIX Environment/. The book is old, and Linux > isn't UNIX, but I don't know of any better explanation of how and why to do things > in a UNIX-like OS. > > And my favorite source of TCP/IP information is Stevens' /TCP/IP Illustrated/. > > > May it explain my problem? > > In this case, I don't offhand see how it does, but I may be overlooking something. > > > I suppose that, if for some reason, the communication with the client is lost > > (crash of client, loss of network, etc.) and keepalive is not enabled, this may > > fully explain my problem ? > > It would give you those symptoms, yes. > > > If yes, do you have an idea of why keepalive is not enabled? > > The Host Requirements RFC mandates that it be disabled by default. I think the > primary reasoning for that was to avoid re-establishing virtual circuits (e.g. > dial-up connections) for long-running connections that had long idle periods. > > Linux may well have a kernel tunable or similar to enable TCP keepalive by > default, but it seems to be switched off on your system. You'd have to consult > the documentation for your distribution, I think. > > By default (again per the Host Requirements RFC), it takes quite a long time for > TCP keepalive to detect a broken connection. It doesn't start probing until the > connection has been idle for 2 hours, and then you have to wait for the TCP > retransmit timer times the retransmit count to be exhausted - typically over 10 > minutes. Again, some OSes let you change these defaults, and some let you change > them on an individual connection. 
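The per-connection keepalive overrides mentioned above look like this on Linux. This is a sketch only: the interval values are arbitrary examples, and the `TCP_KEEPIDLE`/`TCP_KEEPINTVL`/`TCP_KEEPCNT` options are Linux-specific (other systems use sysctls or a different option name such as `TCP_KEEPALIVE`):

```c
#include <netinet/in.h>
#include <netinet/tcp.h>
#include <sys/socket.h>

/* Enable keepalive on one socket and tighten the usual defaults
 * (2 hours idle, 75 s between probes, 9 probes). Values are examples:
 * probe after 60 s idle, every 10 s, give up after 5 failed probes. */
static int enable_keepalive(int fd)
{
    int on = 1, idle = 60, intvl = 10, cnt = 5;

    if (setsockopt(fd, SOL_SOCKET, SO_KEEPALIVE, &on, sizeof(on)) == -1)
        return -1;
    /* Linux-specific per-connection knobs. */
    if (setsockopt(fd, IPPROTO_TCP, TCP_KEEPIDLE, &idle, sizeof(idle)) == -1 ||
        setsockopt(fd, IPPROTO_TCP, TCP_KEEPINTVL, &intvl, sizeof(intvl)) == -1 ||
        setsockopt(fd, IPPROTO_TCP, TCP_KEEPCNT, &cnt, sizeof(cnt)) == -1)
        return -1;
    return 0;
}
```

Called on the conversation socket right after accept, this would have turned the multi-hour default detection window discussed above into roughly `idle + intvl * cnt` seconds.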
> > -- > Michael Wojcik > From Michael.Wojcik at microfocus.com Tue Nov 17 13:56:28 2020 From: Michael.Wojcik at microfocus.com (Michael Wojcik) Date: Tue, 17 Nov 2020 13:56:28 +0000 Subject: Server application hangs on SS_read, even when client disconnects In-Reply-To: References: Message-ID: > From: Kyle Hamilton > Sent: Tuesday, 17 November, 2020 02:37 > On Fri, Nov 13, 2020 at 11:51 AM Michael Wojcik > wrote: > > > > > From: Brice Andr? > > > Sent: Friday, 13 November, 2020 09:13 > > > > > "Does the server parent process close its copy of the conversation socket?" > > > I checked in my code, but it seems that no. Is it needed? > > > > You'll want to do it, for a few reasons: ... > > There's another reason why you'll want to close your socket with > SSL_close(): SSL (and TLS) view a prematurely-closed stream as an > exceptional condition to be reported to the application. This is to > prevent truncation attacks against the data communication layer. > While your application may not need that level of protection, it helps > to keep the state of your application in lockstep with the state of > the TLS protocol. If your application doesn't expect to send any more > data, SSL_close() sends another record across the TCP connection to > tell the remote side that it should not keep the descriptor open. This is true, but not what we're talking about here. When the application is done with the conversation, it should use SSL_close to terminate the conversation. Here, though, we're talking about the server parent process closing its descriptor for the socket after forking the child process. At that point the application is not done with the conversation, and calling SSL_close in the server would be a mistake. Now, if the server is unable to start a child process (e.g. fork fails because the user's process limit has been reached), or if for whatever other reason it decides to terminate the conversation without further processing, SSL_close would be appropriate. 
-- Michael Wojcik From matt at openssl.org Tue Nov 17 14:06:17 2020 From: matt at openssl.org (Matt Caswell) Date: Tue, 17 Nov 2020 14:06:17 +0000 Subject: Server application hangs on SS_read, even when client disconnects In-Reply-To: References: Message-ID: <6814f303-d81e-912b-6790-f8bebee2cc19@openssl.org> On 17/11/2020 13:56, Michael Wojcik wrote: >> From: Kyle Hamilton >> Sent: Tuesday, 17 November, 2020 02:37 >> On Fri, Nov 13, 2020 at 11:51 AM Michael Wojcik >> wrote: >>> >>>> From: Brice Andr? >>>> Sent: Friday, 13 November, 2020 09:13 >>> >>>> "Does the server parent process close its copy of the conversation socket?" >>>> I checked in my code, but it seems that no. Is it needed? >>> >>> You'll want to do it, for a few reasons: ... >> >> There's another reason why you'll want to close your socket with >> SSL_close(): SSL (and TLS) view a prematurely-closed stream as an >> exceptional condition to be reported to the application. This is to >> prevent truncation attacks against the data communication layer. >> While your application may not need that level of protection, it helps >> to keep the state of your application in lockstep with the state of >> the TLS protocol. If your application doesn't expect to send any more >> data, SSL_close() sends another record across the TCP connection to >> tell the remote side that it should not keep the descriptor open. > > This is true, but not what we're talking about here. When the > application is done with the conversation, it should use SSL_close > to terminate the conversation. > > Here, though, we're talking about the server parent process closing > its descriptor for the socket after forking the child process. At that > point the application is not done with the conversation, and calling > SSL_close in the server would be a mistake. > > Now, if the server is unable to start a child process (e.g. 
fork fails > because the user's process limit has been reached), or if for whatever > other reason it decides to terminate the conversation without further > processing, SSL_close would be appropriate. Just for clarity, there is no such function as SSL_close. I assume SSL_shutdown is what people mean. Matt From dipto181 at gmail.com Tue Nov 17 20:40:17 2020 From: dipto181 at gmail.com (Shariful Alam) Date: Tue, 17 Nov 2020 13:40:17 -0700 Subject: Can't link a static library with custom OpenSSL rsa engine Message-ID: Hello, I have a custom rsa engine. It builds and works fine. Later, I have added a static library with my custom engine code. My code compiles. However, when I try to load the custom engine it shows *invalid engine "rsa-engine-new"*. The full error is given below, x at x:~/Downloads/x/x/x/rsa_engine$ openssl rsautl -decrypt -inkey private.pem -in msg.enc -engine rsa-engine-new invalid engine "rsa-engine-new" 140112744122112:error:25066067:DSO support routines:dlfcn_load:could not load the shared library:crypto/dso/dso_dlfcn.c:119:filename(/opt/openssl/lib/engines-1.1/rsa-engine-new.so): /opt/openssl/lib/engines-1.1/rsa-engine-new.so: undefined symbol: dune_init 140112744122112:error:25070067:DSO support routines:DSO_load:could not load the shared library:crypto/dso/dso_lib.c:162: 140112744122112:error:260B6084:engine routines:dynamic_load:dso not found:crypto/engine/eng_dyn.c:414: 140112744122112:error:2606A074:engine routines:ENGINE_by_id:no such engine:crypto/engine/eng_list.c:334:id=rsa-engine-new 140112744122112:error:25066067:DSO support routines:dlfcn_load:could not load the shared library:crypto/dso/dso_dlfcn.c:119:filename(librsa-engine-new.so): librsa-engine-new.so: cannot open shared object file: No such file or directory 140112744122112:error:25070067:DSO support routines:DSO_load:could not load the shared library:crypto/dso/dso_lib.c:162: 140112744122112:error:260B6084:engine routines:dynamic_load:dso not found:crypto/engine/eng_dyn.c:414:
Now the error doesn't say much about the cause of the invalid engine. However my guess is it is from the "*undefined symbol: dune_init*". "dune_init" is from the static library. Therefore I believe my linking is not working. I use the following Makefile to compile the engine, 1. rsa-engine: rsa/rsa.c rsa/bignum.c rsa/aes.c rsa/x509parse.c rsa/pem.c 2. gcc -fPIC -o rsa/rsa.o -c rsa/rsa.c 3. gcc -fPIC -o rsa/bignum.o -c rsa/bignum.c 4. gcc -fPIC -o rsa/aes.o -c rsa/aes.c 5. gcc -fPIC -o rsa/x509parse.o -c rsa/x509parse.c 6. gcc -fPIC -o rsa/pem.o -c rsa/pem.c 7. gcc -fPIC -c rsa-engine.c 8. gcc -shared -o librsa_engine.so libdune/libdune.a -lcrypto rsa-engine.o rsa/rsa.o rsa/bignum.o rsa/aes.o rsa/x509parse.o rsa/pem.o 9. mv librsa_engine.so rsa-engine-new.so 10. sudo cp rsa-engine-new.so /opt/openssl/lib/engines-1.1/ 11. clean: 12. rm -f *.o rsa/*.o *.so rsa-engine So, can anyone please tell me if my guess is correct or not? If my guess is correct, how can I fix my Makefile? N.B: Static library - libdune/libdune.a is in the same directory as the main rsa-engine.c - libdune/libdune.a is compiled with -fPIC flag Thanks, Shariful -------------- next part -------------- An HTML attachment was scrubbed... URL: From kurt.w.heberlein at hpe.com Tue Nov 17 21:02:02 2020 From: kurt.w.heberlein at hpe.com (Heberlein, Kurt William) Date: Tue, 17 Nov 2020 21:02:02 +0000 Subject: Can't link a static library with custom OpenSSL rsa engine In-Reply-To: References: Message-ID: You might try changing this: 8. gcc -shared -o librsa_engine.so libdune/libdune.a -lcrypto rsa-engine.o rsa/rsa.o rsa/bignum.o rsa/aes.o rsa/x509parse.o rsa/pem.o to this: gcc -shared -o librsa_engine.so -L./libdune rsa-engine.o rsa/rsa.o rsa/bignum.o rsa/aes.o rsa/x509parse.o rsa/pem.o -Wl,-Bstatic -ldune -Wl,-Bdynamic -lcrypto just a guess. cheers -------------- next part -------------- An HTML attachment was scrubbed...
URL: From scott_n at xypro.com Tue Nov 17 21:58:56 2020 From: scott_n at xypro.com (Scott Neugroschl) Date: Tue, 17 Nov 2020 21:58:56 +0000 Subject: Can't link a static library with custom OpenSSL rsa engine In-Reply-To: References: Message-ID: You need to put the static library at the END of your link command. A static library is searched when it is encountered in the link stream, and only the items needed will be used from it. Because you have it first, there are no undefined symbols, and no items will be used from it. From: openssl-users On Behalf Of Shariful Alam Sent: Tuesday, November 17, 2020 12:40 PM To: openssl-users at openssl.org Subject: Can't link a static library with custom OpenSSL rsa engine Hello, I have a custom rsa engine. It builds and works fine. Later, I have added a static library with my custom engine code. My code compiles. However, when I try to load the custom engine it shows invalid engine "rsa-engine-new". The full error is given below, x at x:~/Downloads/x/x/x/rsa_engine$ openssl rsautl -decrypt -inkey private.pem -in msg.enc -engine rsa-engine-new invalid engine "rsa-engine-new" 140112744122112:error:25066067:DSO support routines:dlfcn_load:could not load the shared library:crypto/dso/dso_dlfcn.c:119:filename(/opt/openssl/lib/engines-1.1/rsa-engine-new.so): /opt/openssl/lib/engines-1.1/rsa-engine-new.so: undefined symbol: dune_init 140112744122112:error:25070067:DSO support routines:DSO_load:could not load the shared library:crypto/dso/dso_lib.c:162: 140112744122112:error:260B6084:engine routines:dynamic_load:dso not found:crypto/engine/eng_dyn.c:414: 140112744122112:error:2606A074:engine routines:ENGINE_by_id:no such engine:crypto/engine/eng_list.c:334:id=rsa-engine-new 140112744122112:error:25066067:DSO support routines:dlfcn_load:could not load the shared library:crypto/dso/dso_dlfcn.c:119:filename(librsa-engine-new.so): librsa-engine-new.so: cannot open shared object file: No such file or directory 
140112744122112:error:25070067:DSO support routines:DSO_load:could not load the shared library:crypto/dso/dso_lib.c:162: 140112744122112:error:260B6084:engine routines:dynamic_load:dso not found:crypto/engine/eng_dyn.c:414: Now the error doesn't say much about the cause of the invalid engine. However my guess is it is from the "undefined symbol: dune_init". "dune_init" is from the static library. Therefore I believe my linking is not working. I use the following Makefile to compile the engine, 1. rsa-engine: rsa/rsa.c rsa/bignum.c rsa/aes.c rsa/x509parse.c rsa/pem.c 2. gcc -fPIC -o rsa/rsa.o -c rsa/rsa.c 3. gcc -fPIC -o rsa/bignum.o -c rsa/bignum.c 4. gcc -fPIC -o rsa/aes.o -c rsa/aes.c 5. gcc -fPIC -o rsa/x509parse.o -c rsa/x509parse.c 6. gcc -fPIC -o rsa/pem.o -c rsa/pem.c 7. gcc -fPIC -c rsa-engine.c 8. gcc -shared -o librsa_engine.so libdune/libdune.a -lcrypto rsa-engine.o rsa/rsa.o rsa/bignum.o rsa/aes.o rsa/x509parse.o rsa/pem.o 9. mv librsa_engine.so rsa-engine-new.so 10. sudo cp rsa-engine-new.so /opt/openssl/lib/engines-1.1/ 11. clean: 12. rm -f *.o rsa/*.o *.so rsa-engine So, can anyone please tell me if my guess is correct or not? If my guess is correct, how can I fix my Makefile? N.B: Static library * libdune/libdune.a is in the same directory as the main rsa-engine.c * libdune/libdune.a is compiled with -fPIC flag Thanks, Shariful -------------- next part -------------- An HTML attachment was scrubbed...
URL: From guerinp at talasi.fr Wed Nov 18 11:24:38 2020 From: guerinp at talasi.fr (=?UTF-8?Q?Patrice_Gu=c3=a9rin?=) Date: Wed, 18 Nov 2020 12:24:38 +0100 Subject: openssl s_client connection fails Message-ID: Hello, I experience the following on Linux Debian 9 (openssl 1.1.0l) : When using openssl s_client to connect to a site, I get the following CONNECTED(00000003) 3072988928:error:14094410:SSL routines:ssl3_read_bytes:sslv3 alert handshake failure:../ssl/record/rec_layer_s3.c:1407:SSL alert number 40 --- no peer certificate available --- No client certificate CA names sent --- SSL handshake has read 7 bytes and written 176 bytes Verification: OK --- New, (NONE), Cipher is (NONE) Secure Renegotiation IS NOT supported Compression: NONE Expansion: NONE No ALPN negotiated SSL-Session: Protocol : TLSv1.2 Cipher : 0000 Session-ID: Session-ID-ctx: Master-Key: PSK identity: None PSK identity hint: None SRP username: None Start Time: 1605691623 Timeout : 7200 (sec) Verify return code: 0 (ok) Extended master secret: no --- The same arises with -tls1, -tls1_1 and -tls1_2. So I've built the latest 1.1.1h and tested it in the same conditions and it works in all cases... Does anybody have an idea on what's going wrong ? Thank you in advance. Kind regards Patrice. From matt at openssl.org Wed Nov 18 11:40:33 2020 From: matt at openssl.org (Matt Caswell) Date: Wed, 18 Nov 2020 11:40:33 +0000 Subject: openssl s_client connection fails In-Reply-To: References: Message-ID: On 18/11/2020 11:24, Patrice Guérin wrote: > 3072988928:error:14094410:SSL routines:ssl3_read_bytes:sslv3 alert > handshake failure:../ssl/record/rec_layer_s3.c:1407:SSL alert number 40 This is a very generic "something went wrong" alert that is being received from the server and could be due to any number of issues. Do you have access to the server in question? If so there may be more clues in the server logs that might explain it.
> Does anybody have an idea on what's going wrong ? One thing that springs to mind that often goes wrong is SNI handling. s_client changed between 1.1.0 and 1.1.1 to always provide SNI by default. If the server requires SNI then it could explain this behaviour. You can add SNI in 1.1.0 by using the "-servername" command line option followed by the name of the server in question, e.g. $ openssl s_client -connect www.openssl.org -port 443 -servername www.openssl.org Matt > > Thank you in advance. > Kind regards > Patrice. > From guerinp at talasi.fr Wed Nov 18 15:23:13 2020 From: guerinp at talasi.fr (=?UTF-8?Q?Patrice_Gu=c3=a9rin?=) Date: Wed, 18 Nov 2020 16:23:13 +0100 Subject: Fwd: Re: openssl s_client connection fails In-Reply-To: References: Message-ID: <1111cba7-bec2-ce32-227a-2d33ac0d86c0@talasi.fr> Hi All, Sorry, send to missing. Patrice. -------- Message transféré -------- Sujet : Re: openssl s_client connection fails Date : Wed, 18 Nov 2020 11:40:33 +0000 De : Matt Caswell Pour : openssl-users at openssl.org On 18/11/2020 11:24, Patrice Guérin wrote: > 3072988928:error:14094410:SSL routines:ssl3_read_bytes:sslv3 alert > handshake failure:../ssl/record/rec_layer_s3.c:1407:SSL alert number 40 This is a very generic "something went wrong" alert that is being received from the server and could be due to any number of issues. Do you have access to the server in question? If so there may be more clues in the server logs that might explain it. > Does anybody have an idea on what's going wrong ? One thing that springs to mind that often goes wrong is SNI handling. s_client changed between 1.1.0 and 1.1.1 to always provide SNI by default. If the server requires SNI then it could explain this behaviour. You can add SNI in 1.1.0 by using the "-servername" command line option followed by the name of the server in question, e.g. $ openssl s_client -connect www.openssl.org -port 443 -servername www.openssl.org Matt > > Thank you in advance. > Kind regards > Patrice.
> -------------- next part -------------- An HTML attachment was scrubbed... URL: From guerinp at talasi.fr Wed Nov 18 15:23:32 2020 From: guerinp at talasi.fr (=?UTF-8?Q?Patrice_Gu=c3=a9rin?=) Date: Wed, 18 Nov 2020 16:23:32 +0100 Subject: Fwd: Re: openssl s_client connection fails In-Reply-To: References: Message-ID: <1ea6a1f3-3672-98d4-1e83-58be892f6c54@talasi.fr> Hi All, Sorry, send to missing. Patrice. -------- Message transféré -------- Sujet : Re: openssl s_client connection fails Date : Wed, 18 Nov 2020 14:46:45 +0000 De : Matt Caswell Pour : Patrice Guérin On 18/11/2020 14:33, Patrice Guérin wrote: > Hello Matt, > > Thank you for your very fast answer > > No, I don't have access to the server though it's publicly available. > > Well done ! You are a genius ! > The SNI trick resolves the connection failure... > > So, I have some other questions : > 1/ Do I have to always add SNI support, even in the case it's not > necessary ? It is good practice to always add SNI support. Servers that don't need it will ignore it. Increasingly servers are requiring it. Unless you happen to know the server's requirements before you start there is no way of knowing before you start a connection whether the server will be picky about it. This is one of the main reasons why the default behaviour for s_client was changed between 1.1.0 and 1.1.1. > 2/ If I want to switch to 1.1.1, are the API and libraries compatible > with 1.1.0? I've used no-deprecated, but I can remove it and rebuild > Yes 1.1.1 is backwards compatible with 1.1.0 so in theory it should be a drop-in replacement. I don't think we deprecated anything in 1.1.1 as far as I remember so no-deprecated shouldn't make a difference. The biggest change between 1.1.0 and 1.1.1 is the addition of TLSv1.3 support. This brings with it numerous implications for applications using libssl which you should be aware of. See: https://wiki.openssl.org/index.php/TLS1.3 Matt > Thank you very very much. > Kind regards, > Patrice.
> > Matt Caswell a écrit : >> >> On 18/11/2020 11:24, Patrice Guérin wrote: >>> 3072988928:error:14094410:SSL routines:ssl3_read_bytes:sslv3 alert >>> handshake failure:../ssl/record/rec_layer_s3.c:1407:SSL alert number 40 >> This is a very generic "something went wrong" alert that is being >> received from the server and could be due to any number of issues. Do >> you have access to the server in question? If so there may be more clues >> in the server logs that might explain it. >> >>> Does anybody have an idea on what's going wrong ? >> One thing that springs to mind that often goes wrong is SNI handling. >> s_client changed between 1.1.0 and 1.1.1 to always provide SNI by >> default. If the server requires SNI then it could explain this >> behaviour. You can add SNI in 1.1.0 by using the "-servername" command >> line option followed by the name of the server in question, e.g. >> >> $ openssl s_client -connect www.openssl.org -port 443 -servername >> www.openssl.org >> >> Matt >> >>> Thank you in advance. >>> Kind regards >>> Patrice. >>> > -------------- next part -------------- An HTML attachment was scrubbed...
URL: From sanarayana at rbbn.com Thu Nov 19 16:24:38 2020 From: sanarayana at rbbn.com (Narayana, Sunil Kumar) Date: Thu, 19 Nov 2020 16:24:38 +0000 Subject: Application accessing 'ex_kusage' Message-ID: Hi Matt, Thanks for the response. The application code has been around a very long time and no one knows the rationale behind it. As I understand from your reply, the application need not do these operations internally, so we will go ahead and stub it out for now. Regards, Sunil From: openssl-users On Behalf Of openssl-users-request at openssl.org Sent: 17 November 2020 08:44 To: openssl-users at openssl.org Subject: openssl-users Digest, Vol 72, Issue 14 Today's Topics: 1. Re: ## Application accessing 'ex_kusage' ## (Matt Caswell) 2. Re: RAND_bytes() thread safety (Matt Caswell) 3. Re: test cases failed after enabling ktls (Matt Caswell) 4. Handling BIO errors (João Santos) 5. Re: test cases failed after enabling ktls (rui zang) 6.
Re: Server application hangs on SS_read, even when client disconnects (Jakob Bohm) ---------------------------------------------------------------------- Message: 1 Date: Mon, 16 Nov 2020 10:16:24 +0000 From: Matt Caswell > To: openssl-users at openssl.org Subject: Re: ## Application accessing 'ex_kusage' ## Message-ID: > Content-Type: text/plain; charset=utf-8 On 13/11/2020 19:10, Narayana, Sunil Kumar wrote: > Hi , > > We are porting our Application from openssl 1.0.1 to > openssl 3.0. In relation to this activity we require to access the > variable *ex_kusage* pointed to by *X509* > > But there are no set utilities available to access this variable. Only > X509_get_key_usage is available. > > Our code for 1.0.1 is as below. Please suggest the right way to achieve > this. I'd like to ask why you feel you need to do this at all. It seems to me like you are replicating libcrypto internal code in your own application. This is code in libcrypto:

    /* Handle (basic) key usage */
    if ((usage = X509_get_ext_d2i(x, NID_key_usage, &i, NULL)) != NULL) {
        x->ex_kusage = 0;
        if (usage->length > 0) {
            x->ex_kusage = usage->data[0];
            if (usage->length > 1)
                x->ex_kusage |= usage->data[1] << 8;
        }
        x->ex_flags |= EXFLAG_KUSAGE;
        ASN1_BIT_STRING_free(usage);
        /* Check for empty key usage according to RFC 5280 section 4.2.1.3 */
        if (x->ex_kusage == 0) {
            ERR_raise(ERR_LIB_X509, X509V3_R_EMPTY_KEY_USAGE);
            x->ex_flags |= EXFLAG_INVALID;
        }
    } else if (i != -1) {
        x->ex_flags |= EXFLAG_INVALID;
    }

So it seems very similar to what you are trying to do, and I guess some earlier version of this code was the original source of what is in your application now. The purpose of this code is to decode the key usage extension and cache it in the internal `ex_flags` value. This code gets called in numerous code paths whenever we need to query extension data - including if you were to call X509_get_key_usage().
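The two-byte decode in that libcrypto snippet can be shown standalone. The sketch below is illustrative: `decode_key_usage` is an invented helper, and the `KU_*` constants are copied to match the values OpenSSL defines in <openssl/x509v3.h>:

```c
#include <stddef.h>

/* Values mirror OpenSSL's KU_* constants from <openssl/x509v3.h>. */
#define KU_DIGITAL_SIGNATURE 0x0080
#define KU_KEY_CERT_SIGN     0x0004
#define KU_CRL_SIGN          0x0002
#define KU_DECIPHER_ONLY     0x8000

/* Reproduces the cache computation from the libcrypto snippet: byte 0
 * of the KeyUsage BIT STRING supplies the low octet, byte 1 (if
 * present) supplies bit 15 (decipherOnly). */
static unsigned long decode_key_usage(const unsigned char *data, size_t len)
{
    unsigned long kusage = 0;
    if (len > 0)
        kusage = data[0];
    if (len > 1)
        kusage |= (unsigned long)data[1] << 8;
    return kusage;
}
```

A typical CA certificate carrying keyCertSign|cRLSign has a single extension byte 0x06, which this decode maps to `KU_KEY_CERT_SIGN | KU_CRL_SIGN` - the same value X509_get_key_usage() would report from the cache.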
Your application seems to want to manage for itself when libcrypto does this caching. It should not need to do so - it's entirely internal. My guess is that, perhaps, in some older version of OpenSSL the caching didn't happen when it was supposed to and you implemented this workaround?? Or possibly the workaround is still needed due to a bug in OpenSSL that still doesn't do the caching when needed? If so I'd like to understand the circumstances behind that. Matt ------------------------------ Message: 2 Date: Mon, 16 Nov 2020 10:35:06 +0000 From: Matt Caswell > To: openssl-users at openssl.org Subject: Re: RAND_bytes() thread safety Message-ID: > Content-Type: text/plain; charset=utf-8 On 14/11/2020 11:00, Rahul Godbole wrote: > Is OpenSSL function RAND_bytes () thread safe? Short answer: Yes Longer answer: Yes assuming that: - you are using >= OpenSSL 1.1.0 or - you are using OpenSSL 1.0.2 or below and you have set up the locking callbacks AND - You have not compiled OpenSSL with "no-threads" Matt ------------------------------ Message: 3 Date: Mon, 16 Nov 2020 11:45:20 +0000 From: Matt Caswell > To: openssl-users at openssl.org Subject: Re: test cases failed after enabling ktls Message-ID: <749b7676-2c84-1b63-9b75-575f903e14f7 at openssl.org> Content-Type: text/plain; charset=utf-8 On 16/11/2020 07:56, rui zang wrote: > Resend in plain text. > ====================================== > > Greetings, > > I am trying openssl+ktls on ubuntu 20.04. > I have tried openssl-3.0.0-alpha8 from https://www.openssl.org/source/openssl-3.0.0-alpha8.tar.gz > and also the current master branch from the github repo. > The kernels I have tried are v5.9 and v5.9.8. > On every combination, same group of test case could not pass. Please can you open this as a github issue? 
Thanks Matt > > Test Summary Report > ------------------- > 70-test_key_share.t (Wstat: 1536 Tests: 22 Failed: 6) > Failed tests: 1, 4, 6-7, 13-14 > Non-zero exit status: 6 > 70-test_sslextension.t (Wstat: 256 Tests: 8 Failed: 1) > Failed test: 8 > Non-zero exit status: 1 > 70-test_sslrecords.t (Wstat: 768 Tests: 20 Failed: 3) > Failed tests: 18-20 > Non-zero exit status: 3 > 70-test_sslsigalgs.t (Wstat: 1536 Tests: 26 Failed: 6) > Failed tests: 1, 6, 22-23, 25-26 > Non-zero exit status: 6 > 70-test_sslsignature.t (Wstat: 256 Tests: 4 Failed: 1) > Failed test: 1 > Non-zero exit status: 1 > 70-test_sslversions.t (Wstat: 512 Tests: 8 Failed: 2) > Failed tests: 5, 7 > Non-zero exit status: 2 > 70-test_tls13cookie.t (Wstat: 512 Tests: 2 Failed: 2) > Failed tests: 1-2 > Non-zero exit status: 2 > 70-test_tls13downgrade.t (Wstat: 256 Tests: 6 Failed: 1) > Failed test: 6 > Non-zero exit status: 1 > 70-test_tls13kexmodes.t (Wstat: 7424 Tests: 1 Failed: 1) > Failed test: 1 > Non-zero exit status: 29 > Parse errors: Bad plan. You planned 11 tests but ran 1. > 70-test_tls13messages.t (Wstat: 7424 Tests: 1 Failed: 0) > Non-zero exit status: 29 > Parse errors: Bad plan. You planned 17 tests but ran 1. > 70-test_tls13psk.t (Wstat: 7424 Tests: 1 Failed: 1) > Failed test: 1 > Non-zero exit status: 29 > Parse errors: Bad plan. You planned 5 tests but ran 1. 
> 70-test_tlsextms.t (Wstat: 256 Tests: 10 Failed: 1) > Failed test: 10 > Non-zero exit status: 1 > Files=223, Tests=3571, 615 wallclock secs (11.00 usr 0.93 sys + 343.65 cusr 84.69 csys = 440.27 CPU) > Result: FAIL > make[1]: *** [Makefile:3197: _tests] Error 1 > make[1]: Leaving directory '/home/ubuntu/openssl' > make: *** [Makefile:3194: tests] Error 2 > > Complete `make test` output (kernel v5.9.8 + openssl master) is copied here https://cl1p.net/openssl_ktls_make_test_failure (due to the 100K limit of this mailing list) > I am sure that the kernel tls module is loaded correctly since /proc/net/tls_stat is exposed correctly and I can see the counters increasing while doing `make test`. > So is this supposed to happen? What should I do to make ktls work? > > Thanks, > Rui Zang > ------------------------------ Message: 4 Date: Tue, 17 Nov 2020 02:33:30 +0000 From: João Santos > To: openssl-users at openssl.org Subject: Handling BIO errors Message-ID: <009450BD-7D9B-47D7-A1D2-8A4476EE52BE at xce.pt> Content-Type: text/plain; charset=us-ascii I'm writing a daemon that talks to a server using HTTP/2 over TLS 1.2+ and leveraging OpenSSL 1.1.1h to provide the TLS support. At the moment I think that I have the whole TLS part figured, and I could probably have the project running by now if I used SSL_set_fd to assign a connected socket to the underlying BIO of an SSL object, but I want to simplify the code as much as possible by using the highest level interfaces at my disposal, which in the case of OpenSSL means using BIO objects. Unfortunately I'm having a problem which is that I can't figure out how to convert error codes returned by ERR_get_error and split by ERR_GET_LIB, ERR_GET_FUNC, and ERR_GET_REASON into constants that I can use in a switch statement to react to BIO errors.
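[Editor's note] In OpenSSL 1.1.1 those ERR_GET_* macros just mask fields out of the packed unsigned long — library in the top 8 bits, function in the next 12, reason in the low 12 — and the values to compare against are the ERR_LIB_* constants and per-library reason codes from the headers (e.g. <openssl/bioerr.h>). A standalone sketch with stand-in macros (the DEMO_* names and classify() are invented for illustration; the numeric constants 32 and 2 are assumptions to verify against your build's headers, and note the packing changed in OpenSSL 3.0):

```c
#include <assert.h>
#include <string.h>

/*
 * OpenSSL 1.1.1 packs an error code as lib(8 bits) | func(12) | reason(12).
 * These DEMO_* macros are stand-ins mirroring ERR_GET_LIB, ERR_GET_FUNC
 * and ERR_GET_REASON; real code should use the macros and constants from
 * <openssl/err.h> rather than re-deriving the layout.
 */
#define DEMO_GET_LIB(e)    ((int)(((e) >> 24) & 0xFFUL))
#define DEMO_GET_FUNC(e)   ((int)(((e) >> 12) & 0xFFFUL))
#define DEMO_GET_REASON(e) ((int)((e) & 0xFFFUL))

/*
 * Assumed values: 32 is ERR_LIB_BIO and 2 is the "system lib" reason
 * in 1.1.1 -- check the headers of the library you actually link.
 */
static const char *classify(unsigned long err)
{
    if (DEMO_GET_LIB(err) != 32)
        return "not a BIO error";
    switch (DEMO_GET_REASON(err)) {
    case 2:
        return "system call / getaddrinfo failure";
    default:
        return "other BIO error";
    }
}
```

Applied to the code in the error line 0x2008F002 shown below, this splits out library 0x20, function 0x08F and reason 0x002.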
This is not a problem for SSL filter BIOs since those have their own error reporting functions, but is a problem for Internet socket source BIOs since BIO_do_connect in particular can fail due to a system call error, a DNS error, or even an error generated by lower level OpenSSL functions and other BIOs in the chain, and I cannot find any manual pages documenting these error constants, if they even exist. Here's a small working example that illustrates the problem that I'm having (the include lines were eaten by the list's HTML scrubbing):

    #include <stdio.h>
    #include <openssl/bio.h>
    #include <openssl/err.h>

    int main(void)
    {
        ERR_load_ERR_strings();
        BIO *bio = BIO_new_connect("http://wwx.google.com:80");
        printf("Connected: %ld\n", BIO_do_connect(bio));
        ERR_print_errors_fp(stderr);
        return 0;
    }

Running this code, which has a misspelled hostname on purpose so that it can fail, results in the following printed out to the console:

    Connected: -1
    4667342272:error:2008F002:BIO routines:BIO_lookup_ex:system lib:crypto/bio/b_addr.c:726:nodename nor servname provided, or not known

What could I do in that code to use a switch statement on the kind of information printed by ERR_print_errors_fp? I know that, in this example, the error is from getaddrinfo, since I recognize the error message, but assuming that I want to handle that specific error, what can I match the library, function, and reason error codes against? Thanks in advance! ------------------------------ Message: 5 Date: Tue, 17 Nov 2020 11:08:27 +0800 From: rui zang > To: Matt Caswell >, "openssl-users at openssl.org" > Subject: Re: test cases failed after enabling ktls Message-ID: <1089471605582408 at mail.yandex.com> Content-Type: text/plain; charset=utf-8 Thanks, please check out https://github.com/openssl/openssl/issues/13424 Regards, Rui Zang 16.11.2020, 19:45, "Matt Caswell" >: > On 16/11/2020 07:56, rui zang wrote: >> Resend in plain text. >> ====================================== >> >> Greetings, >> >> I am trying openssl+ktls on ubuntu 20.04.
>> ?I have tried openssl-3.0.0-alpha8 from https://www.openssl.org/source/openssl-3.0.0-alpha8.tar.gz >> ?and also the current master branch from the github repo. >> ?The kernels I have tried are v5.9 and v5.9.8. >> ?On every combination, same group of test case could not pass. > > Please can you open this as a github issue? > > Thanks > > Matt > >> ?Test Summary Report >> ?------------------- >> ?70-test_key_share.t (Wstat: 1536 Tests: 22 Failed: 6) >> ???Failed tests: 1, 4, 6-7, 13-14 >> ???Non-zero exit status: 6 >> ?70-test_sslextension.t (Wstat: 256 Tests: 8 Failed: 1) >> ???Failed test: 8 >> ???Non-zero exit status: 1 >> ?70-test_sslrecords.t (Wstat: 768 Tests: 20 Failed: 3) >> ???Failed tests: 18-20 >> ???Non-zero exit status: 3 >> ?70-test_sslsigalgs.t (Wstat: 1536 Tests: 26 Failed: 6) >> ???Failed tests: 1, 6, 22-23, 25-26 >> ???Non-zero exit status: 6 >> ?70-test_sslsignature.t (Wstat: 256 Tests: 4 Failed: 1) >> ???Failed test: 1 >> ???Non-zero exit status: 1 >> ?70-test_sslversions.t (Wstat: 512 Tests: 8 Failed: 2) >> ???Failed tests: 5, 7 >> ???Non-zero exit status: 2 >> ?70-test_tls13cookie.t (Wstat: 512 Tests: 2 Failed: 2) >> ???Failed tests: 1-2 >> ???Non-zero exit status: 2 >> ?70-test_tls13downgrade.t (Wstat: 256 Tests: 6 Failed: 1) >> ???Failed test: 6 >> ???Non-zero exit status: 1 >> ?70-test_tls13kexmodes.t (Wstat: 7424 Tests: 1 Failed: 1) >> ???Failed test: 1 >> ???Non-zero exit status: 29 >> ???Parse errors: Bad plan. You planned 11 tests but ran 1. >> ?70-test_tls13messages.t (Wstat: 7424 Tests: 1 Failed: 0) >> ???Non-zero exit status: 29 >> ???Parse errors: Bad plan. You planned 17 tests but ran 1. >> ?70-test_tls13psk.t (Wstat: 7424 Tests: 1 Failed: 1) >> ???Failed test: 1 >> ???Non-zero exit status: 29 >> ???Parse errors: Bad plan. You planned 5 tests but ran 1. 
>> ?70-test_tlsextms.t (Wstat: 256 Tests: 10 Failed: 1) >> ???Failed test: 10 >> ???Non-zero exit status: 1 >> ?Files=223, Tests=3571, 615 wallclock secs (11.00 usr 0.93 sys + 343.65 cusr 84.69 csys = 440.27 CPU) >> ?Result: FAIL >> ?make[1]: *** [Makefile:3197: _tests] Error 1 >> ?make[1]: Leaving directory '/home/ubuntu/openssl' >> ?make: *** [Makefile:3194: tests] Error 2 >> >> ?Complete `make test` output (kernel v5.9.8 + openssl master) is copied here https://cl1p.net/openssl_ktls_make_test_failure (due to the 100K limit of this mailing list) >> ?I am sure that the kernel tls module is loaded correctly since /proc/net/tls_stat is exposed correctly and I can see the counters increasing while doing `make test`. >> ?So is this supposed to happen? What should I do to make ktls work? >> >> ?Thanks, >> ?Rui Zang ------------------------------ Message: 6 Date: Tue, 17 Nov 2020 04:13:54 +0100 From: Jakob Bohm > To: openssl-users at openssl.org Subject: Re: Server application hangs on SS_read, even when client disconnects Message-ID: <66c3b7e7-871f-b0c0-3e4c-1968c8d6b91c at wisemo.com> Content-Type: text/plain; charset=utf-8; format=flowed (Top posting to match what Mr. Andr? does): TCP without keepalive will time out the connection a few minutes after sending any data that doesn't get a response. TCP without keepalive with no outstanding send (so only a blocking recv) and nothing outstanding at the other end will probably hang almost forever as there is nothing indicating that there is actual data lost in transit. On 2020-11-13 17:13, Brice Andr? wrote: > Hello, > > And many thanks for the answer. > > "Does the server parent process close its copy of the conversation > socket?" : I checked in my code, but it seems that no. Is it needed? ? > May it explain my problem ? > > " Do you have keepalives enabled?" To be honest, I did not know it was > possible to not enable them. I checked with command "netstat -tnope" > and it tells me that it is not enabled. 
> > I suppose that, if for some reason, the communication with the client > is lost (crash of client, loss of network, etc.) and keepalive is not > enabled, this may fully explain my problem ? > > If yes, do you have an idea of why keepalive is not enabled ? I > thought that by default on linux it was ? > > Many thanks, > Brice > > > Le?ven. 13 nov. 2020 ??15:43, Michael Wojcik > >> > a ?crit?: > > > From: openssl-users > > On Behalf Of Brice Andr? > > Sent: Friday, 13 November, 2020 05:06 > > > ... it seems that in some rare execution cases, the server > performs a SSL_read, > > the client disconnects in the meantime, and the server never > detects the > > disconnection and remains stuck in the SSL_read operation. > > ... > > > #0? 0x00007f836575d210 in __read_nocancel () from > /lib/x86_64-linux-gnu/libpthread.so.0 > > #1? 0x00007f8365c8ccec in ?? () from > /usr/lib/x86_64-linux-gnu/libcrypto.so.1.1 > > #2? 0x00007f8365c8772b in BIO_read () from > /usr/lib/x86_64-linux-gnu/libcrypto.so.1.1 > > So OpenSSL is in a blocking read of the socket descriptor. > > > tcp? ? ? ? 0? ? ? 0 http://5.196.111.132:5413 > > http://85.27.92.8:25856 > > ? ? ? ESTABLISHED 19218/./MabeeServer > > tcp? ? ? ? 0? ? ? 0 http://5.196.111.132:5412 > > http://85.27.92.8:26305 > > ? ? ? ESTABLISHED 19218/./MabeeServer > > > From this log, I can see that I have two established connections > with remote > > client machine on IP 109.133.193.70. Note that it's normal to > have two connexions > > because my client-server protocol relies on two distinct TCP > connexions. > > So the client has not, in fact, disconnected. > > When a system closes one end of a TCP connection, the stack will > send a TCP packet > with either the FIN or the RST flag set. (Which one you get > depends on whether the > stack on the closing side was holding data for the conversation > which the application > hadn't read.) 
> > The sockets are still in ESTABLISHED state; therefore, no FIN or > RST has been > received by the local stack. > > There are various possibilities: > > - The client system has not in fact closed its end of the > conversation. Sometimes > this happens for reasons that aren't immediately apparent; for > example, if the > client forked and allowed the descriptor for the conversation > socket to be inherited > by the child, and the child still has it open. > > - The client system shut down suddenly (crashed) and so couldn't > send the FIN/RST. > > - There was a failure in network connectivity between the two > systems, and consequently > the FIN/RST couldn't be received by the local system. > > - The connection is in a state where the peer can't send the > FIN/RST, for example > because the local side's receive window is zero. That shouldn't be > the case, since > OpenSSL is (apparently) blocked in a receive on the connection. > but as I don't have > the complete picture I can't rule it out. > > > This let me think that the connexion on which the SSL_read is > listening is > > definitively dead (no more TCP keepalive) > > "definitely dead" doesn't have any meaning in TCP. That's not one > of the TCP states, > or part of the other TCP or IP metadata associated with the local > port (which is > what matters). > > Do you have keepalives enabled? > > > and that, for a reason I do not understand, the SSL_read keeps > blocked into it. > > The reason is simple: The connection is still established, but > there's no data to > receive. The question isn't why SSL_read is blocking; it's why you > think the > connection is gone, but the stack thinks otherwise. > > > Note that the normal behavior of my application is : client > connects, server > > daemon forks a new instance, > > Does the server parent process close its copy of the conversation > socket? > > Enjoy Jakob -- Jakob Bohm, CIO, Partner, WiseMo A/S. https://www.wisemo.com Transformervej 29, 2860 S?borg, Denmark. 
Direct +45 31 13 16 10 This public discussion message is non-binding and may contain errors. WiseMo - Remote Service Management for PCs, Phones and Embedded ------------------------------ Subject: Digest Footer _______________________________________________ openssl-users mailing list openssl-users at openssl.org https://mta.openssl.org/mailman/listinfo/openssl-users ------------------------------ End of openssl-users Digest, Vol 72, Issue 14 ********************************************* ----------------------------------------------------------------------------------------------------------------------- Notice: This e-mail together with any attachments may contain information of Ribbon Communications Inc. that is confidential and/or proprietary for the sole use of the intended recipient. Any review, disclosure, reliance or distribution by others or forwarding without express permission is strictly prohibited. If you are not the intended recipient, please notify the sender immediately and then delete all copies, including any attachments. ----------------------------------------------------------------------------------------------------------------------- -------------- next part -------------- An HTML attachment was scrubbed... URL: From sanarayana at rbbn.com Fri Nov 20 13:46:00 2020 From: sanarayana at rbbn.com (Narayana, Sunil Kumar) Date: Fri, 20 Nov 2020 13:46:00 +0000 Subject: set/get utilities are not available to access variable 'num' of structure bio_st Message-ID: Hi , We are porting our Application from openssl 1.0.1 to openssl 3.0. In related to this activity we require to access the variable 'num' of structure bio_st. In older versions the variable was accessed to set and get value using pointer operator (bi->num ). Since this is not allowed in 3.0 we are looking for the Get/Set utilities similar to other member (BIO_set_flags/ BIO_get_flags) Is this not supported in 3.0 ? If yes, Please guide the proper alternatives. 
Regards, Sunil -------------- next part -------------- An HTML attachment was scrubbed... URL: From matt at openssl.org Fri Nov 20 13:55:34 2020 From: matt at openssl.org (Matt Caswell) Date: Fri, 20 Nov 2020 13:55:34 +0000 Subject: set/get utilities are not available to access variable 'num' of structure bio_st In-Reply-To: References: Message-ID: <53108b39-21f8-dea0-c3c3-fe5517a5613f@openssl.org> On 20/11/2020 13:46, Narayana, Sunil Kumar wrote:
> Hi,
> We are porting our Application from openssl 1.0.1 to
> openssl 3.0. In relation to this activity we require to access the
> variable 'num' of structure bio_st.
>
> In older versions the variable was accessed to set and get value using
> pointer operator (bi->num).
>
> Since this is not allowed in 3.0 we are looking for the Get/Set
> utilities similar to other members (BIO_set_flags/BIO_get_flags).
>
> Is this not supported in 3.0 ? If yes, Please guide the proper alternatives.

What kind of BIO are you using? Different BIOs may provide different mechanisms to get hold of this value. For example a number of file descriptor based BIOs provide BIO_get_fd().
Matt From skip at taygeta.com Fri Nov 20 16:43:59 2020 From: skip at taygeta.com (Skip Carter) Date: Fri, 20 Nov 2020 08:43:59 -0800 Subject: EC curve preferences Message-ID: <1605890639.1675.24.camel@taygeta.com> I am sure this in the documentation somewhere; but where ? What are the preferred ECDH curves for a given keysize ? Which curves are considered obsolete/deprecated/untrustworthy ? -- Dr Everett (Skip) Carter??0xF29BF36844FB7922 skip at taygeta.com Taygeta Scientific Inc 607 Charles Ave Seaside CA 93955 831-641-0645 x103 -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 659 bytes Desc: This is a digitally signed message part URL: From Michael.Wojcik at microfocus.com Fri Nov 20 18:03:22 2020 From: Michael.Wojcik at microfocus.com (Michael Wojcik) Date: Fri, 20 Nov 2020 18:03:22 +0000 Subject: EC curve preferences In-Reply-To: <1605890639.1675.24.camel@taygeta.com> References: <1605890639.1675.24.camel@taygeta.com> Message-ID: > From: openssl-users On Behalf Of Skip > Carter > Sent: Friday, 20 November, 2020 09:44 > > What are the preferred ECDH curves for a given keysize ? Which curves > are considered obsolete/deprecated/untrustworthy ? For TLSv1.3, this is easy. RFC 8446 B.3.1.4 only allows the following: secp256r1(0x0017), secp384r1(0x0018), secp521r1(0x0019), x25519(0x001D), x448(0x001E). Those are your choices. If you want interoperability, enable them all; if you want maximum security, only use X25519 and X448. See safecurves.cr.yp.to for the arguments in favor of the latter position. Frankly, unless you're dealing with something of very high value or that needs to resist breaking for a long time, I don't see any real-world risk in using the SEC 2 curves. You might want to disallow just secp256r1 if you're concerned about that key size becoming tractable under new attacks or quantum computing within your threat timeframe. 
Ultimately, this is a question for your threat model. For TLSv1.2, well... - Some people recommend avoiding non-prime curves (i.e. over binary fields, such as the sect* ones) for intellectual-property reasons. I'm not going to try to get into that, because IANAL and even if I were, I wouldn't touch that without a hefty retainer. - Current consensus, more or less, seems to be to use named curves and not custom ones. The arguments for that seem pretty persuasive to me. So don't use custom curves. - Beyond that? Well, here's one Stack Exchange response from Thomas Pornin (who knows a hell of a lot more about this stuff than I do) where he suggests using just prime256v1 (which is the same as secp256r1 I believe?) and secp384r1: https://security.stackexchange.com/questions/78621/which-elliptic-curve-should-i-use Those are the curves in Suite B, before the NSA decided to emit vague warnings about ECC. They subsequently decided P384 aka secp384r1 is OK until post-quantum primitives are standardized. So if your application prefers secp384r1 for TLSv1.2, then you can decide whether to also allow prime256v1 for interoperability. Again, that's a question for your threat model. All that said, some people will have different, and quite possibly better-informed, opinions on this. -- Michael Wojcik From phill at hallambaker.com Fri Nov 20 18:53:20 2020 From: phill at hallambaker.com (Phillip Hallam-Baker) Date: Fri, 20 Nov 2020 13:53:20 -0500 Subject: EC curve preferences In-Reply-To: <1605890639.1675.24.camel@taygeta.com> References: <1605890639.1675.24.camel@taygeta.com> Message-ID: There are currently two sets of preferred curves. CABForum approved use of the NIST curves from Suite B at 384 bits (and 521??) several years ago. Those are currently the only curves for which FIPS-140 certified HSMs are currently available and thus the only ones that can be supported by WebPKI CAs. 
The IRTF CFRG RG approved replacement curves based on rigid construction several years ago. These are intended to be the curves used in the future. In particular, these are the curves most likely to end up being supported in crypto co-processors for CPUs. On Fri, Nov 20, 2020 at 11:44 AM Skip Carter wrote: > > I am sure this in the documentation somewhere; but where ? > > What are the preferred ECDH curves for a given keysize ? Which curves > are considered obsolete/deprecated/untrustworthy ? > > > -- > Dr Everett (Skip) Carter 0xF29BF36844FB7922 > skip at taygeta.com > > Taygeta Scientific Inc > 607 Charles Ave > Seaside CA 93955 > 831-641-0645 x103 > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From uri at ll.mit.edu Fri Nov 20 19:16:32 2020 From: uri at ll.mit.edu (Blumenthal, Uri - 0553 - MITLL) Date: Fri, 20 Nov 2020 19:16:32 +0000 Subject: EC curve preferences Message-ID: <039823DC-12D8-4A86-8F60-7611444EF13F@ll.mit.edu> Those "rigid curves" that will be used in the future - future how distant, and for how long? Regards, Uri > On Nov 20, 2020, at 13:54, Phillip Hallam-Baker wrote: > -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 5874 bytes Desc: not available URL: From openssl-users at dukhovni.org Fri Nov 20 20:15:11 2020 From: openssl-users at dukhovni.org (Viktor Dukhovni) Date: Fri, 20 Nov 2020 15:15:11 -0500 Subject: EC curve preferences In-Reply-To: <1605890639.1675.24.camel@taygeta.com> References: <1605890639.1675.24.camel@taygeta.com> Message-ID: <20201120201511.GX1464@straasha.imrryr.org> On Fri, Nov 20, 2020 at 08:43:59AM -0800, Skip Carter wrote: > I am sure this in the documentation somewhere; but where ? > > What are the preferred ECDH curves for a given keysize ? Which curves > are considered obsolete/deprecated/untrustworthy ?
Is this a general question about industry best-practices or a question about OpenSSL default or configurable behaviour? Or in other words, is this a theory question or a how-to question? Also, are you asking specifically about TLS, or more broadly (e.g. EC in CMS). For SSL, curve selection is controlled via the functions documented under: https://www.openssl.org/docs/man1.1.1/man3/SSL_CTX_set1_groups.html But this does not specify the default list, which is in ssl/t1_lib.c: /* The default curves */ static const uint16_t eccurves_default[] = { 29, /* X25519 (29) */ 23, /* secp256r1 (23) */ 30, /* X448 (30) */ 25, /* secp521r1 (25) */ 24, /* secp384r1 (24) */ }; The full list of "available" curves is: /* * Table of curve information. * Do not delete entries or reorder this array! It is used as a lookup * table: the index of each entry is one less than the TLS curve id. */ static const TLS_GROUP_INFO nid_list[] = { {NID_sect163k1, 80, TLS_CURVE_CHAR2}, /* sect163k1 (1) */ {NID_sect163r1, 80, TLS_CURVE_CHAR2}, /* sect163r1 (2) */ {NID_sect163r2, 80, TLS_CURVE_CHAR2}, /* sect163r2 (3) */ {NID_sect193r1, 80, TLS_CURVE_CHAR2}, /* sect193r1 (4) */ {NID_sect193r2, 80, TLS_CURVE_CHAR2}, /* sect193r2 (5) */ {NID_sect233k1, 112, TLS_CURVE_CHAR2}, /* sect233k1 (6) */ {NID_sect233r1, 112, TLS_CURVE_CHAR2}, /* sect233r1 (7) */ {NID_sect239k1, 112, TLS_CURVE_CHAR2}, /* sect239k1 (8) */ {NID_sect283k1, 128, TLS_CURVE_CHAR2}, /* sect283k1 (9) */ {NID_sect283r1, 128, TLS_CURVE_CHAR2}, /* sect283r1 (10) */ {NID_sect409k1, 192, TLS_CURVE_CHAR2}, /* sect409k1 (11) */ {NID_sect409r1, 192, TLS_CURVE_CHAR2}, /* sect409r1 (12) */ {NID_sect571k1, 256, TLS_CURVE_CHAR2}, /* sect571k1 (13) */ {NID_sect571r1, 256, TLS_CURVE_CHAR2}, /* sect571r1 (14) */ {NID_secp160k1, 80, TLS_CURVE_PRIME}, /* secp160k1 (15) */ {NID_secp160r1, 80, TLS_CURVE_PRIME}, /* secp160r1 (16) */ {NID_secp160r2, 80, TLS_CURVE_PRIME}, /* secp160r2 (17) */ {NID_secp192k1, 80, TLS_CURVE_PRIME}, /* secp192k1 (18) */ 
{NID_X9_62_prime192v1, 80, TLS_CURVE_PRIME}, /* secp192r1 (19) */ {NID_secp224k1, 112, TLS_CURVE_PRIME}, /* secp224k1 (20) */ {NID_secp224r1, 112, TLS_CURVE_PRIME}, /* secp224r1 (21) */ {NID_secp256k1, 128, TLS_CURVE_PRIME}, /* secp256k1 (22) */ {NID_X9_62_prime256v1, 128, TLS_CURVE_PRIME}, /* secp256r1 (23) */ {NID_secp384r1, 192, TLS_CURVE_PRIME}, /* secp384r1 (24) */ {NID_secp521r1, 256, TLS_CURVE_PRIME}, /* secp521r1 (25) */ {NID_brainpoolP256r1, 128, TLS_CURVE_PRIME}, /* brainpoolP256r1 (26) */ {NID_brainpoolP384r1, 192, TLS_CURVE_PRIME}, /* brainpoolP384r1 (27) */ {NID_brainpoolP512r1, 256, TLS_CURVE_PRIME}, /* brainpool512r1 (28) */ {EVP_PKEY_X25519, 128, TLS_CURVE_CUSTOM}, /* X25519 (29) */ {EVP_PKEY_X448, 224, TLS_CURVE_CUSTOM}, /* X448 (30) */ }; -- Viktor. From sanarayana at rbbn.com Mon Nov 23 11:28:00 2020 From: sanarayana at rbbn.com (Narayana, Sunil Kumar) Date: Mon, 23 Nov 2020 11:28:00 +0000 Subject: set/get utilities are not available to access variable 'num' of structure bio_st (Matt Caswell) Message-ID: Hi Matt, We are using MEM type BIO. similar to the openssl library ?BIO_TYPE_MEM ? we have an internal type defined like ex:- ?BIO_TYPE_XYZ_MEM? and all other mem utilities are internally defined. Like XYZ_mem_new/XYZ_mem_read ? etc these utilities are accessing the bio_st variable ?num?. please suggest set/get utilities to handle this scenario. 
Regards, Sunil
URL:

From matt at openssl.org  Mon Nov 23 11:41:49 2020
From: matt at openssl.org (Matt Caswell)
Date: Mon, 23 Nov 2020 11:41:49 +0000
Subject: set/get utilities are not available to access variable 'num' of structure bio_st (Matt Caswell)
In-Reply-To:
References:
Message-ID: <63aead3f-1488-1461-0a80-7809febf04c9@openssl.org>

On 23/11/2020 11:28, Narayana, Sunil Kumar wrote:
> Hi Matt,
> We are using a MEM type BIO. Similar to the openssl library
> "BIO_TYPE_MEM", we have an internal type defined, e.g.
> "BIO_TYPE_XYZ_MEM", and all the other mem utilities are internally
> defined.
>
> Like XYZ_mem_new/XYZ_mem_read etc., these utilities are accessing the
> bio_st variable "num".
>
> please suggest set/get utilities to handle this scenario.

If I understand correctly you want to store an "int" value internally to a custom BIO. Custom BIOs can associate an arbitrary data structure with the BIO object and store whatever they like in it using the BIO_get_data() and BIO_set_data() functions. For example:

typedef struct {
    int num;
} XYZ_PRIVATE_DATA;

static int XYZ_mem_new(BIO *bio)
{
    XYZ_PRIVATE_DATA *data = OPENSSL_zalloc(sizeof(*data));

    if (data == NULL)
        return 0;

    /* Store whatever you like in num */
    data->num = 5;
    BIO_set_data(bio, data);
    return 1;
}

static int XYZ_mem_free(BIO *bio)
{
    XYZ_PRIVATE_DATA *data = BIO_get_data(bio);

    OPENSSL_free(data);
    BIO_set_data(bio, NULL);
    return 1;
}

static int XYZ_mem_read(BIO *bio, char *out, int outl)
{
    XYZ_PRIVATE_DATA *data = BIO_get_data(bio);

    /* Do the read operation and use data->num as required */
    return 0;
}

Matt

>
> Regards,
>
> Sunil
>
> *From:* openssl-users *On Behalf Of* openssl-users-request at openssl.org
> *Sent:* 20 November 2020 23:34
> *To:* openssl-users at openssl.org
> *Subject:* openssl-users Digest, Vol 72, Issue 19
>
> > ------------------------------------------------------------------------ > > NOTICE: This email was received from an EXTERNAL sender > > ------------------------------------------------------------------------ > > > Send openssl-users mailing list submissions to > openssl-users at openssl.org > > To subscribe or unsubscribe via the World Wide Web, visit > https://mta.openssl.org/mailman/listinfo/openssl-users > or, via email, send a message with subject or body 'help' to > openssl-users-request at openssl.org > > You can reach the person managing the list at > openssl-users-owner at openssl.org > > When replying, please edit your Subject line so it is more specific > than "Re: Contents of openssl-users digest..." > > > Today's Topics: > > 1. set/get utilities are not available to access variable > 'num' of structure bio_st (Narayana, Sunil Kumar) > 2. Re: set/get utilities are not available to access variable > 'num' of structure bio_st (Matt Caswell) > 3. EC curve preferences (Skip Carter) > 4. RE: EC curve preferences (Michael Wojcik) > > > ---------------------------------------------------------------------- > > Message: 1 > Date: Fri, 20 Nov 2020 13:46:00 +0000 > From: "Narayana, Sunil Kumar" > > To: "openssl-users at openssl.org " > > > Subject: set/get utilities are not available to access variable > 'num' of structure bio_st > Message-ID: > > > > Content-Type: text/plain; charset="utf-8" > > Hi , > We are porting our Application from openssl 1.0.1 to openssl 3.0. In > related to this activity we require to access the variable 'num' of > structure bio_st. > In older versions the variable was accessed to set and get value using > pointer operator (bi->num ). > Since this is not allowed in 3.0 we are looking for the Get/Set > utilities similar to other member (BIO_set_flags/ BIO_get_flags) > > Is this not supported in 3.0 ? If yes, Please guide the proper alternatives. 
> > Regards, > Sunil > > > ----------------------------------------------------------------------------------------------------------------------- > Notice: This e-mail together with any attachments may contain > information of Ribbon Communications Inc. that > is confidential and/or proprietary for the sole use of the intended > recipient. Any review, disclosure, reliance or > distribution by others or forwarding without express permission is > strictly prohibited. If you are not the intended > recipient, please notify the sender immediately and then delete all > copies, including any attachments. > ----------------------------------------------------------------------------------------------------------------------- > -------------- next part -------------- > An HTML attachment was scrubbed... > URL: > > > ------------------------------ > > Message: 2 > Date: Fri, 20 Nov 2020 13:55:34 +0000 > From: Matt Caswell > > To: openssl-users at openssl.org > Subject: Re: set/get utilities are not available to access variable > 'num' of structure bio_st > Message-ID: <53108b39-21f8-dea0-c3c3-fe5517a5613f at openssl.org > > > Content-Type: text/plain; charset=utf-8 > > > > On 20/11/2020 13:46, Narayana, Sunil Kumar wrote: >> Hi , >> >> ??????????????? We are porting our Application from ?openssl 1.0.1 to >> openssl 3.0. In related to this activity we require to access the >> variable ?*num*? of structure *bio_st. * >> >> In older versions the variable was accessed to set and get value using >> pointer operator (bi->num ). >> >> Since this is not allowed in 3.0 we are looking for the Get/Set >> utilities similar to other member*(BIO_set_flags/ BIO_get_flags) * >> >> ? >> >> Is this not supported in 3.0 ? If yes, Please guide the proper > alternatives. > > What kind of BIO are you using? Different BIOs may provide different > mechanisms to get hold of this value. For example a number of file > descriptor based BIOs provide BIO_get_fd(). 
> Matt
>
> ------------------------------
>
> Subject: Digest Footer
>
> _______________________________________________
> openssl-users mailing list
> openssl-users at openssl.org
> https://mta.openssl.org/mailman/listinfo/openssl-users
>
> ------------------------------
>
> End of openssl-users Digest, Vol 72, Issue 19
> *********************************************

From fgerlits at cloudera.com  Mon Nov 23 12:03:55 2020
From: fgerlits at cloudera.com (Ferenc Gerlits)
Date: Mon, 23 Nov 2020 13:03:55 +0100
Subject: TLS with Client Authentication using private key from Windows store
Message-ID:

Hi,

I am trying to use openssl to implement a client-side TLS connection with Client Authentication on Windows, using a non-exportable private key stored in the Windows Certificate Store. Currently, our code can use a private key stored in a local file, and if the key in the Windows store was exportable, I could export it and use it in the existing code. But the key is non-exportable, which is a problem.

Does anyone know how to do this?

So far, I have found suggestions to use the CAPI engine (eg. https://groups.google.com/g/mailing.openssl.users/c/_rdJLc7emAY?pli=1), but no examples of how to do that, and also some tickets (eg. https://github.com/openssl/openssl/issues/12859) which say that the CAPI engine does not work with TLS >= 1.2 on openssl 1.1.1, so that doesn't look like a good solution.

Any help would be appreciated!

Thank you,
Ferenc
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From sanarayana at rbbn.com  Tue Nov 24 15:03:00 2020
From: sanarayana at rbbn.com (Narayana, Sunil Kumar)
Date: Tue, 24 Nov 2020 15:03:00 +0000
Subject: set/get utilities are not available to access variable 'num' of structure bio_st (Matt Caswell)
In-Reply-To: <63aead3f-1488-1461-0a80-7809febf04c9@openssl.org>
References: <63aead3f-1488-1461-0a80-7809febf04c9@openssl.org>
Message-ID:

Hi Matt,

Thanks for the reply..
we would implement as you suggested. From: Matt Caswell Sent: 23 November 2020 17:12 To: Narayana, Sunil Kumar ; openssl-users at openssl.org Subject: Re: set/get utilities are not available to access variable 'num' of structure bio_st (Matt Caswell) ________________________________ NOTICE: This email was received from an EXTERNAL sender ________________________________ On 23/11/2020 11:28, Narayana, Sunil Kumar wrote: > Hi Matt, > We are using MEM type BIO. similar to the openssl > library ?BIO_TYPE_MEM ? we have an internal type defined like ex:- > ?BIO_TYPE_XYZ_MEM? and all other mem utilities are internally defined. > > Like XYZ_mem_new/XYZ_mem_read ? etc these utilities are accessing the > bio_st variable ?*num?*. > > please suggest set/get utilities to handle this scenario. If I understand correctly you want to store an "int" value internally to a custom BIO. Custom BIOs can associate an arbitrary data structure with the BIO object and store whatever they like in it using the BIO_get_data() and BIO_set_data() functions. 
For example:

typedef struct {
    int num;
} XYZ_PRIVATE_DATA;

static int XYZ_mem_new(BIO *bio)
{
    XYZ_PRIVATE_DATA *data = OPENSSL_zalloc(sizeof(*data));

    if (data == NULL)
        return 0;

    /* Store whatever you like in num */
    data->num = 5;
    BIO_set_data(bio, data);
    return 1;
}

static int XYZ_mem_free(BIO *bio)
{
    XYZ_PRIVATE_DATA *data = BIO_get_data(bio);

    OPENSSL_free(data);
    BIO_set_data(bio, NULL);
    return 1;
}

static int XYZ_mem_read(BIO *bio, char *out, int outl)
{
    XYZ_PRIVATE_DATA *data = BIO_get_data(bio);

    /* Do the read operation and use data->num as required */
    return 0;
}

Matt

-----------------------------------------------------------------------------------------------------------------------
Notice: This e-mail together with any attachments may contain information of Ribbon Communications Inc. that is confidential and/or proprietary for the sole use of the intended recipient. Any review, disclosure, reliance or distribution by others or forwarding without express permission is strictly prohibited. If you are not the intended recipient, please notify the sender immediately and then delete all copies, including any attachments.
-----------------------------------------------------------------------------------------------------------------------
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From janjust at nikhef.nl  Tue Nov 24 15:12:58 2020
From: janjust at nikhef.nl (Jan Just Keijser)
Date: Tue, 24 Nov 2020 16:12:58 +0100
Subject: TLS with Client Authentication using private key from Windows store
In-Reply-To:
References:
Message-ID: <3e5344f2-0519-055a-72fa-3e3148246fc5@nikhef.nl>

Hi Ferenc,

On 23/11/20 13:03, Ferenc Gerlits via openssl-users wrote:
> Hi,
>
> I am trying to use openssl to implement a client-side TLS connection
> with Client Authentication on Windows, using a non-exportable private
> key stored in the Windows Certificate Store. Currently, our code can
> use a private key stored in a local file, and if the key in the
> Windows store was exportable, I could export it and use it in the
> existing code. But the key is non-exportable, which is a problem.
>
> Does anyone know how to do this?
>
> So far, I have found suggestions to use the CAPI engine (eg.
> https://groups.google.com/g/mailing.openssl.users/c/_rdJLc7emAY?pli=1),
> but no examples of how to do that, and also some tickets (eg.
> https://github.com/openssl/openssl/issues/12859) which say that the
> CAPI engine does not work with TLS >= 1.2 on openssl 1.1.1, so that
> doesn't look like a good solution.
>

OpenVPN 2.4+ can use the Windows Certificate Store to encrypt and sign traffic using CNG (Crypto Next Gen, I believe). I'd suggest you download the source code and examine the file cryptoapi.c for details.

HTH,

JJK

From openssl at openssl.org  Thu Nov 26 15:33:49 2020
From: openssl at openssl.org (OpenSSL)
Date: Thu, 26 Nov 2020 15:33:49 +0000
Subject: OpenSSL version 3.0.0-alpha9 published
Message-ID: <20201126153349.GA19701@openssl.org>

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256

OpenSSL version 3.0 alpha 9 released
====================================

OpenSSL - The Open Source toolkit for SSL/TLS
https://www.openssl.org/

OpenSSL 3.0 is currently in alpha. OpenSSL 3.0 alpha 9 has now been made available.
Note: This OpenSSL pre-release has been provided for testing ONLY. It should NOT be used for security critical purposes. Specific notes on upgrading to OpenSSL 3.0 from previous versions, as well as known issues are available on the OpenSSL Wiki, here: https://wiki.openssl.org/index.php/OpenSSL_3.0 The alpha release is available for download via HTTPS and FTP from the following master locations (you can find the various FTP mirrors under https://www.openssl.org/source/mirror.html): * https://www.openssl.org/source/ * ftp://ftp.openssl.org/source/ The distribution file name is: o openssl-3.0.0-alpha9.tar.gz Size: 14058484 SHA1 checksum: 9b5faa69485659407583fd653f8064fb425fc0c4 SHA256 checksum: 5762545c972d5e48783c751d3188ac19f6f9154ee4899433ba15f01c56b3eee6 The checksums were calculated using the following commands: openssl sha1 openssl-3.0.0-alpha9.tar.gz openssl sha256 openssl-3.0.0-alpha9.tar.gz Please download and check this alpha release as soon as possible. To report a bug, open an issue on GitHub: https://github.com/openssl/openssl/issues Please check the release notes and mailing lists to avoid duplicate reports of known issues. (Of course, the source is also available on GitHub.) Yours, The OpenSSL Project Team. 
-----BEGIN PGP SIGNATURE-----

iQEzBAEBCAAdFiEEhlersmDwVrHlGQg52cTSbQ5gRJEFAl+/wWAACgkQ2cTSbQ5g
RJGGWQgAr12trYLeMYhAMzTnfQXOv+M16DrJyPZoyZyVNee3rcmOUA18Uiiji45F
BlauG3D/ShIJZ4zMs/jjVRnc/MqAZBphgO4Ow0XlFl+fkqess9hk/buerNZs9lbu
Xp/yRPO8d9hTB3ni1VPnaFlnRGKVZydR7p0s2b5j/ps6o0OVKwBxjFnX3Lr9loPs
HkiXZMdmZp2woTJc+Ch5KCzpZcVAWs14v6ZgKsMLIxkD3iU1NjSacR4AAEdwhd4m
4X3GSOMTzHniOWEGaRKJM8nYiaKyajnq386re5wsqK1J6EqRTQ73QgXhK0Ge1lC0
Eh9Mmg/7ajFmjLThcWqJVgy2m+9/Gw==
=t8pi
-----END PGP SIGNATURE-----

From sanarayana at rbbn.com  Thu Nov 26 17:32:20 2020
From: sanarayana at rbbn.com (Narayana, Sunil Kumar)
Date: Thu, 26 Nov 2020 17:32:20 +0000
Subject: HMAC is deprecated in 3.0 getting error 'HMAC' was not declared in this scope
Message-ID:

Hi,

We are trying to upgrade our application from OpenSSL 1.0.2 to OpenSSL 3.0, during which we observe the following errors. We need help identifying the alternatives.

Error 1: error: 'HMAC' was not declared in this scope
    HMAC(EVP_sha1(), (const void*) cookie_secret, COOKIE_SECRET_LENGTH,

* HMAC is deprecated in 3.0, but we are not finding a suitable replacement to use in the application.
* We feel we can use the EVP_MAC APIs, but we are not sure which API exactly replaces HMAC.

Error 2: error: invalid use of incomplete type 'SSL' {aka 'struct ssl_st'}
    ssl->d1->mtu = MAX_SEND_PKT_SIZE;

* Our application is trying to set the MTU size but cannot access d1 of 'struct ssl_st'.
* Is there any utility exposed to applications similar to the API dtls1_ctrl, which is mostly an internal one?

Regards,
Sunil

-----------------------------------------------------------------------------------------------------------------------
Notice: This e-mail together with any attachments may contain information of Ribbon Communications Inc. that is confidential and/or proprietary for the sole use of the intended recipient. Any review, disclosure, reliance or distribution by others or forwarding without express permission is strictly prohibited.
If you are not the intended recipient, please notify the sender immediately and then delete all copies, including any attachments.
-----------------------------------------------------------------------------------------------------------------------
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From paul.dale at oracle.com  Thu Nov 26 22:02:54 2020
From: paul.dale at oracle.com (Dr Paul Dale)
Date: Fri, 27 Nov 2020 08:02:54 +1000
Subject: HMAC is deprecated in 3.0 getting error 'HMAC' was not declared in this scope
In-Reply-To:
References:
Message-ID: <500A510C-33D2-48CB-85D0-FB5659EC4DB4@oracle.com>

There is no direct replacement for the HMAC call at this point; EVP_MAC needs to be used. I'd suggest reading the EVP_MAC(3) man page. There is an example down the bottom.

Does SSL_set_mtu() do what you require?

Pauli
-- 
Dr Paul Dale | Distinguished Architect | Cryptographic Foundations
Phone +61 7 3031 7217
Oracle Australia

> On 27 Nov 2020, at 3:32 am, Narayana, Sunil Kumar wrote:
>
> Hi,
> We are trying to upgrade our application from openssl usage of 1.0.2 to openssl 3.0, during which we observe following errors.
> Need help to identify the alternatives.
>
> Error1 : error: 'HMAC' was not declared in this scope HMAC(EVP_sha1(), (const void*) cookie_secret, COOKIE_SECRET_LENGTH,
> HMAC is deprecated in 3.0 but not finding the suitable replacement to use in application.
> We feel we can use EVP_MAC APIs, but not sure the exact API which replaces HMAC
>
> Error2 : error: invalid use of incomplete type 'SSL' {aka 'struct ssl_st'} ssl->d1->mtu = MAX_SEND_PKT_SIZE;
> Our application is trying to set the MTU size but can not access d1 of 'struct ssl_st'.
> Any utility exposed to applications similar to API dtls1_ctrl which is mostly an internal one
>
> Regards,
> Sunil
>
> Notice: This e-mail together with any attachments may contain information of Ribbon Communications Inc.
that is confidential and/or proprietary for the sole use of the intended recipient. Any review, disclosure, reliance or distribution by others or forwarding without express permission is strictly prohibited. If you are not the intended recipient, please notify the sender immediately and then delete all copies, including any attachments.
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From matt at openssl.org  Thu Nov 26 23:04:55 2020
From: matt at openssl.org (Matt Caswell)
Date: Thu, 26 Nov 2020 23:04:55 +0000
Subject: HMAC is deprecated in 3.0 getting error 'HMAC' was not declared in this scope
In-Reply-To:
References:
Message-ID: <643b10eb-80e6-8fc0-9cb8-aa870cb8775f@openssl.org>

On 26/11/2020 17:32, Narayana, Sunil Kumar wrote:
> Error2 : error: invalid use of incomplete type 'SSL' {aka 'struct
> ssl_st'} ssl->d1->mtu = MAX_SEND_PKT_SIZE;

Use SSL_set_mtu(ssl, MAX_SEND_PKT_SIZE) instead.

Matt

From sanarayana at rbbn.com  Fri Nov 27 13:51:47 2020
From: sanarayana at rbbn.com (Narayana, Sunil Kumar)
Date: Fri, 27 Nov 2020 13:51:47 +0000
Subject: Regarding #def for 'SSL_R_PEER_ERROR_NO_CIPHER' and 'SSL_R_NO_CERTIFICATE_RETURNED' in openssl3.0
Message-ID:

Hi,

We are trying to upgrade our application from OpenSSL 1.0.2 to OpenSSL 3.0, during which we observe the following errors. It looks like the #defines below were removed from 1.1 onwards. Should the application also drop its usage of them, or is there an alternative to be used in the application? Please suggest.

error: 'SSL_R_PEER_ERROR_NO_CIPHER' was not declared in this scope
    case SSL_R_PEER_ERROR_NO_CIPHER:

error: 'SSL_R_NO_CERTIFICATE_RETURNED' was not declared in this scope
    case SSL_R_NO_CERTIFICATE_RETURNED:

Regards,
Sunil

-----------------------------------------------------------------------------------------------------------------------
Notice: This e-mail together with any attachments may contain information of Ribbon Communications Inc.
that is confidential and/or proprietary for the sole use of the intended recipient. Any review, disclosure, reliance or distribution by others or forwarding without express permission is strictly prohibited. If you are not the intended recipient, please notify the sender immediately and then delete all copies, including any attachments.
-----------------------------------------------------------------------------------------------------------------------
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From mahendra.sp at gmail.com  Mon Nov 30 15:02:25 2020
From: mahendra.sp at gmail.com (Mahendra SP)
Date: Mon, 30 Nov 2020 20:32:25 +0530
Subject: Question related to default RAND usage and update with engine RAND
Message-ID:

Hi All,

We are planning to use our own RAND implementation using an engine. What we observe is that during OpenSSL init, the default RAND gets initialized to the OpenSSL RAND. Later we initialize our engine RAND. Even though we make our RAND the default, we see that OpenSSL still uses the initial default RAND.

Here is what could be happening. In the function RAND_get_rand_method, default_RAND_meth gets initialized to the OpenSSL RAND. Because there is a NULL check on default_RAND_meth, it never gets updated once it is non-NULL. Even if the engine RAND is registered and available for use, default_RAND_meth never gets updated. The code snippet is given below:

const RAND_METHOD *RAND_get_rand_method(void)
{
    const RAND_METHOD *tmp_meth = NULL;

    if (!RUN_ONCE(&rand_init, do_rand_init))
        return NULL;

    CRYPTO_THREAD_write_lock(rand_meth_lock);
    if (default_RAND_meth == NULL) {
#ifndef OPENSSL_NO_ENGINE
        ENGINE *e;

        /* If we have an engine that can do RAND, use it. */
        if ((e = ENGINE_get_default_RAND()) != NULL
                && (tmp_meth = ENGINE_get_RAND(e)) != NULL) {
            funct_ref = e;
            default_RAND_meth = tmp_meth;
        } else {
            ENGINE_finish(e);
            default_RAND_meth = &rand_meth;
        }
#else
        default_RAND_meth = &rand_meth;
#endif
    }
    tmp_meth = default_RAND_meth;
    CRYPTO_THREAD_unlock(rand_meth_lock);
    return tmp_meth;
}

Should we remove the NULL check for default_RAND_meth to fix this issue? Or is there any other way?

Thanks
Mahendra
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From mahendra.sp at gmail.com  Mon Nov 30 15:49:35 2020
From: mahendra.sp at gmail.com (Mahendra SP)
Date: Mon, 30 Nov 2020 21:19:35 +0530
Subject: Need inputs for engine cleanup
Message-ID:

Hi All,

We are using the OpenSSL 1.1.1 version and using the ENGINE implementation for some crypto operations. The engine gets loaded dynamically and initialized successfully, and we are able to use it. However, we plan to stop using this engine from the application side once we are done with it. When we try to stop using the engine, our engine references do not get removed. We have tried this sequence:

ENGINE_free();
ENGINE_finish();

However, ENGINE_remove() seems to remove the engine correctly, and we see that our engine does not get referred to after this remove call. Can someone please provide the correct way of removing the engine so that it is no longer available for usage?

Thanks
Mahendra
-------------- next part --------------
An HTML attachment was scrubbed...
URL: