From charlesm at mcn.org Sat Dec 1 00:25:12 2018 From: charlesm at mcn.org (Charles Mills) Date: Fri, 30 Nov 2018 16:25:12 -0800 Subject: [openssl-users] Self-signed error when using SSL_CTX_load_verify_locations CApath In-Reply-To: <858BE247-C4BB-4C6B-B387-A9DC47C65915@dukhovni.org> References: <027201d488d4$c06f2220$414d6660$@mcn.org> <032001d488f8$2dc529f0$894f7dd0$@mcn.org> <858BE247-C4BB-4C6B-B387-A9DC47C65915@dukhovni.org> Message-ID: <035b01d4890c$5312f070$f938d150$@mcn.org> Well, it ought then to say "I couldn't find any certificates at all" rather than "I found a self-signed certificate" when it did not. I used to manage product developers. Sometimes I would point out a need for product improvement and they would say "the code doesn't work that way." I would reply "I understand. I'm asking you to change the code." Charles -----Original Message----- From: openssl-users [mailto:openssl-users-bounces at openssl.org] On Behalf Of Viktor Dukhovni Sent: Friday, November 30, 2018 3:35 PM To: openssl-users at openssl.org Subject: Re: [openssl-users] Self-signed error when using SSL_CTX_load_verify_locations CApath > On Nov 30, 2018, at 5:00 PM, Charles Mills wrote: > > "Self-signed certificate in certificate chain" does not to me convey "No certificate hash links" (or "CA certificate not found in hash links"). That's not really possible, because the code that's doing certificate validation works with an abstract certificate store API, and does not know whether a particular certificate should or should not have been listed a trust-anchor in some store. All we know is that we've reached a self-signed certificate in the chain (so no further issuers can be found) and it is not in any of the trust stores, so verification fails. Perhaps we could document the errors in a bit more depth, but I don't think it is possible to tell you that your CApath was missing some specific symlink. -- -- Viktor. -- openssl-users mailing list To unsubscribe: https://mta.openssl.org/mailman/listinfo/openssl-users From openssl-users at dukhovni.org Sat Dec 1 00:36:59 2018 From: openssl-users at dukhovni.org (Viktor Dukhovni) Date: Fri, 30 Nov 2018 19:36:59 -0500 Subject: [openssl-users] Self-signed error when using SSL_CTX_load_verify_locations CApath In-Reply-To: <035b01d4890c$5312f070$f938d150$@mcn.org> References: <027201d488d4$c06f2220$414d6660$@mcn.org> <032001d488f8$2dc529f0$894f7dd0$@mcn.org> <858BE247-C4BB-4C6B-B387-A9DC47C65915@dukhovni.org> <035b01d4890c$5312f070$f938d150$@mcn.org> Message-ID: <686C3EF8-553F-4FEA-A88A-22B19146A296@dukhovni.org> > On Nov 30, 2018, at 7:25 PM, Charles Mills wrote: > > Well, it ought then to say "I couldn't find any certificates at all" rather > than "I found a self-signed certificate" when it did not. A self-signed certificate was found, in the chain being verified. The message should likely be more clear (perhaps along the lines suggested by Michael Wojcik), but it is not incorrect. -- Viktor. 
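A sketch of how an application can turn this condition into a friendlier diagnostic than the stock message: a verify callback sees the numeric error before the handshake fails and can mention the trust store explicitly. This is illustrative only, not code from the thread; the callback name is invented and an already-configured SSL_CTX is assumed.

    #include <stdio.h>
    #include <openssl/ssl.h>
    #include <openssl/x509_vfy.h>

    /* Log a trust-store hint for verify error 19 without changing the verdict. */
    static int verbose_verify_cb(int ok, X509_STORE_CTX *store_ctx)
    {
        if (!ok) {
            int err = X509_STORE_CTX_get_error(store_ctx);

            if (err == X509_V_ERR_SELF_SIGNED_CERT_IN_CHAIN
                || err == X509_V_ERR_DEPTH_ZERO_SELF_SIGNED_CERT)
                fprintf(stderr,
                        "verify error %d (%s): the peer's chain ends in a root "
                        "that was not found in CAfile/CApath (missing hash link?)\n",
                        err, X509_verify_cert_error_string(err));
        }
        return ok; /* preserve OpenSSL's decision */
    }

    /* Installed once per context, e.g.:
     *     SSL_CTX_set_verify(ctx, SSL_VERIFY_PEER, verbose_verify_cb);
     */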
From dnsands at sandia.gov Sat Dec 1 00:33:20 2018 From: dnsands at sandia.gov (Sands, Daniel) Date: Sat, 1 Dec 2018 00:33:20 +0000 Subject: [openssl-users] [EXTERNAL] Re: Self-signed error when using SSL_CTX_load_verify_locations CApath In-Reply-To: References: <027201d488d4$c06f2220$414d6660$@mcn.org> <032001d488f8$2dc529f0$894f7dd0$@mcn.org> <858BE247-C4BB-4C6B-B387-A9DC47C65915@dukhovni.org> Message-ID: <7d550152a633423611aa8df6bdc6a7897a97838f.camel@sandia.gov> On Fri, 2018-11-30 at 23:55 +0000, Michael Wojcik wrote: > > "Self-signed certificate in certificate chain" does not to me > > > convey "No > > > certificate hash links" (or "CA certificate not found in hash > > > links"). > > > Viktor's points are all good ones, but considering how often this > particular message causes confusion for users and developers (at > least in my experience), I wonder whether changing the text to > "Untrusted self-signed certificate in certificate chain" would help. > That would suggest to the user that the problem might be an issue > with the trust store. > My .02: The message "Self-signed certificate in certificate chain" does make it sound like OpenSSL rejected the certificate precisely because it's self signed, and not because it's an untrusted root certificate. I would suggest a less misleading reason, at least. From openssl-users at dukhovni.org Sat Dec 1 01:38:01 2018 From: openssl-users at dukhovni.org (Viktor Dukhovni) Date: Fri, 30 Nov 2018 20:38:01 -0500 Subject: [openssl-users] [EXTERNAL] Re: Self-signed error when using SSL_CTX_load_verify_locations CApath In-Reply-To: <7d550152a633423611aa8df6bdc6a7897a97838f.camel@sandia.gov> References: <027201d488d4$c06f2220$414d6660$@mcn.org> <032001d488f8$2dc529f0$894f7dd0$@mcn.org> <858BE247-C4BB-4C6B-B387-A9DC47C65915@dukhovni.org> <7d550152a633423611aa8df6bdc6a7897a97838f.camel@sandia.gov> Message-ID: > On Nov 30, 2018, at 7:33 PM, Sands, Daniel via openssl-users wrote: > >> Viktor's points are all good ones, but considering how often this >> particular message causes confusion for users and developers (at >> least in my experience), I wonder whether changing the text to >> "Untrusted self-signed certificate in certificate chain" would help. >> That would suggest to the user that the problem might be an issue >> with the trust store. >> > My .02: The message "Self-signed certificate in certificate chain" > does make it sound like OpenSSL rejected the certificate precisely > because it's self signed, and not because it's an untrusted root > certificate. I would suggest a less misleading reason, at least. Are there compatibility concerns around changing error message text for which users may have created regex patterns in scripts? I agree the text could be better, but not sure in what releases if any to change the text, since the change may cause issues for some users. -- Viktor. 
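On the script-compatibility point: programs linked against OpenSSL can avoid the problem entirely by keying on the numeric verify code rather than the message text, so a wording change would not affect them. A minimal sketch, assuming a completed handshake on an SSL object (the helper name is invented):

    #include <openssl/ssl.h>
    #include <openssl/x509_vfy.h>

    /* True only for "self-signed certificate in certificate chain" (error 19). */
    static int chain_root_untrusted(const SSL *ssl)
    {
        return SSL_get_verify_result(ssl) == X509_V_ERR_SELF_SIGNED_CERT_IN_CHAIN;
    }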
From Michael.Wojcik at microfocus.com Sat Dec 1 19:12:24 2018 From: Michael.Wojcik at microfocus.com (Michael Wojcik) Date: Sat, 1 Dec 2018 19:12:24 +0000 Subject: [openssl-users] [EXTERNAL] Re: Self-signed error when using SSL_CTX_load_verify_locations CApath In-Reply-To: References: <027201d488d4$c06f2220$414d6660$@mcn.org> <032001d488f8$2dc529f0$894f7dd0$@mcn.org> <858BE247-C4BB-4C6B-B387-A9DC47C65915@dukhovni.org> <7d550152a633423611aa8df6bdc6a7897a97838f.camel@sandia.gov> Message-ID: > From: openssl-users [mailto:openssl-users-bounces at openssl.org] On Behalf > Of Viktor Dukhovni > Sent: Friday, November 30, 2018 18:38 > > Are there compatibility concerns around changing error message > text for which users may have created regex patterns in scripts? > > I agree the text could be better, but not sure in what releases > if any to change the text, since the change may cause issues > for some users. Sure, this is always a concern. Maybe the change could be considered for OpenSSL 3.0, since that's a major release. -- Michael Wojcik Distinguished Engineer, Micro Focus From charlesm at mcn.org Sat Dec 1 20:29:42 2018 From: charlesm at mcn.org (Charles Mills) Date: Sat, 1 Dec 2018 12:29:42 -0800 Subject: [openssl-users] Self-signed error when using SSL_CTX_load_verify_locations CApath In-Reply-To: <686C3EF8-553F-4FEA-A88A-22B19146A296@dukhovni.org> References: <027201d488d4$c06f2220$414d6660$@mcn.org> <032001d488f8$2dc529f0$894f7dd0$@mcn.org> <858BE247-C4BB-4C6B-B387-A9DC47C65915@dukhovni.org> <035b01d4890c$5312f070$f938d150$@mcn.org> <686C3EF8-553F-4FEA-A88A-22B19146A296@dukhovni.org> Message-ID: <040501d489b4$97481830$c5d84890$@mcn.org> I could easily be wrong -- you guys know more about certificates than I ever will -- but I do not *think* there is any self-signed certificate in this scenario. There should be exactly two certificates in this discussion: 1. The client certificate. It is not self-signed (in the correct sense of the term, as opposed to the erroneous popular sense): it is signed by my "in-house" CA. 2. The CA certificate. Yes, it is a root and self-signed, but you didn't find it, right? (Because of my error in not running the hash utility.) If you found it what is the problem? Does the hashing process imply trust? Then the error message should be "untrusted CA certificate," no? (There is only one certificate in the CApath folder.) Am I missing something? Charles -----Original Message----- From: openssl-users [mailto:openssl-users-bounces at openssl.org] On Behalf Of Viktor Dukhovni Sent: Friday, November 30, 2018 4:37 PM To: openssl-users at openssl.org Subject: Re: [openssl-users] Self-signed error when using SSL_CTX_load_verify_locations CApath > On Nov 30, 2018, at 7:25 PM, Charles Mills wrote: > > Well, it ought then to say "I couldn't find any certificates at all" rather > than "I found a self-signed certificate" when it did not. A self-signed certificate was found, in the chain being verified. The message should likely be more clear (perhaps along the lines suggested by Michael Wojcik), but it is not incorrect. 
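For context on the "hash utility" mentioned above: the CApath lookup never reads every file in the directory; it opens files named <hash-of-subject>.0, <hash>.1, and so on, which is why a CA certificate that is present but not hashed is effectively invisible. Running "openssl rehash <dir>" (or c_rehash) creates the links. A small illustrative program, with a hypothetical file path, that prints the link name expected for a given CA certificate:

    #include <stdio.h>
    #include <openssl/pem.h>
    #include <openssl/x509.h>

    int main(void)
    {
        /* Hypothetical path to the in-house CA certificate. */
        FILE *fp = fopen("/etc/myapp/cacerts/in-house-ca.pem", "r");
        X509 *ca;

        if (fp == NULL)
            return 1;
        ca = PEM_read_X509(fp, NULL, NULL, NULL);
        fclose(fp);
        if (ca == NULL)
            return 1;
        /* The by-directory lookup tries "<subject hash>.0" first. */
        printf("expected CApath link: %08lx.0\n",
               X509_NAME_hash(X509_get_subject_name(ca)));
        X509_free(ca);
        return 0;
    }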
From openssl-users at dukhovni.org Sat Dec 1 20:46:46 2018 From: openssl-users at dukhovni.org (Viktor Dukhovni) Date: Sat, 1 Dec 2018 15:46:46 -0500 Subject: [openssl-users] Self-signed error when using SSL_CTX_load_verify_locations CApath In-Reply-To: <040501d489b4$97481830$c5d84890$@mcn.org> References: <027201d488d4$c06f2220$414d6660$@mcn.org> <032001d488f8$2dc529f0$894f7dd0$@mcn.org> <858BE247-C4BB-4C6B-B387-A9DC47C65915@dukhovni.org> <035b01d4890c$5312f070$f938d150$@mcn.org> <686C3EF8-553F-4FEA-A88A-22B19146A296@dukhovni.org> <040501d489b4$97481830$c5d84890$@mcn.org> Message-ID: <20181201204646.GA79754@straasha.imrryr.org> On Sat, Dec 01, 2018 at 12:29:42PM -0800, Charles Mills wrote: > I could easily be wrong -- you guys know more about certificates than I ever > will -- but I do not *think* there is any self-signed certificate in this > scenario. There should be exactly two certificates in this discussion: > > 1. The client certificate. It is not self-signed (in the correct sense of > the term, as opposed to the erroneous popular sense): it is signed by my > "in-house" CA. > > 2. The CA certificate. Yes, it is a root and self-signed, but you didn't > find it, right? You seem to be stuck on a narrow meaning of the word "found". The self-signed certificate *was* found, but not in the trust-store. It was found in the chain of certificates sent by the client to the server for validation. That's what the error message is telling you, the chain building algorithm found a self-signed certificate in the peer's chain, without finding a suitable trust-anchor in the trust-store. So validation cannot proceed further and fails. > (Because of my error in not running the hash utility.) > If you found it what is the problem? ... Everything from here down is based on an incorrect reading of the word "found". > Am I missing something? Yes: "found" != "found in the trust store" Think "encountered" rather than "found" if that's more clear. -- Viktor. From openssl-users at dukhovni.org Sat Dec 1 20:53:12 2018 From: openssl-users at dukhovni.org (Viktor Dukhovni) Date: Sat, 1 Dec 2018 15:53:12 -0500 Subject: [openssl-users] [EXTERNAL] Re: Self-signed error when using SSL_CTX_load_verify_locations CApath In-Reply-To: References: <027201d488d4$c06f2220$414d6660$@mcn.org> <032001d488f8$2dc529f0$894f7dd0$@mcn.org> <858BE247-C4BB-4C6B-B387-A9DC47C65915@dukhovni.org> <7d550152a633423611aa8df6bdc6a7897a97838f.camel@sandia.gov> Message-ID: <20181201205312.GB79754@straasha.imrryr.org> On Sat, Dec 01, 2018 at 07:12:24PM +0000, Michael Wojcik wrote: > > Are there compatibility concerns around changing error message > > text for which users may have created regex patterns in scripts? > > > > I agree the text could be better, but not sure in what releases > > if any to change the text, since the change may cause issues > > for some users. > > Sure, this is always a concern. Maybe the change could be considered for OpenSSL 3.0, since that's a major release. Care to create a PR against the "master" branch? Something along the lines of: "Provided chain ends with untrusted self-signed certificate" or better. Here "untrusted" might mean not trusted for the requested purpose, but more precise is not always more clear. -- Viktor. 
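Since the question that follows is what the client actually transmits, it may help that the server can report this itself, without a packet capture. A sketch for use after the handshake, assuming an established SSL object; note that on the server side the peer's own certificate is returned separately from the extra chain certificates:

    #include <stdio.h>
    #include <openssl/ssl.h>
    #include <openssl/x509.h>

    /* Print the subject of every certificate the client presented. */
    static void dump_client_chain(SSL *ssl)
    {
        X509 *leaf = SSL_get_peer_certificate(ssl);           /* ref-counted copy */
        STACK_OF(X509) *extra = SSL_get_peer_cert_chain(ssl); /* not a copy */
        int i;

        if (leaf != NULL) {
            X509_NAME_print_ex_fp(stdout, X509_get_subject_name(leaf),
                                  0, XN_FLAG_ONELINE);
            fputc('\n', stdout);
            X509_free(leaf);
        }
        for (i = 0; extra != NULL && i < sk_X509_num(extra); i++) {
            X509_NAME_print_ex_fp(stdout,
                                  X509_get_subject_name(sk_X509_value(extra, i)),
                                  0, XN_FLAG_ONELINE);
            fputc('\n', stdout);
        }
    }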
From charlesm at mcn.org Sat Dec 1 21:46:51 2018 From: charlesm at mcn.org (Charles Mills) Date: Sat, 1 Dec 2018 13:46:51 -0800 Subject: [openssl-users] Self-signed error when using SSL_CTX_load_verify_locations CApath In-Reply-To: <20181201204646.GA79754@straasha.imrryr.org> References: <027201d488d4$c06f2220$414d6660$@mcn.org> <032001d488f8$2dc529f0$894f7dd0$@mcn.org> <858BE247-C4BB-4C6B-B387-A9DC47C65915@dukhovni.org> <035b01d4890c$5312f070$f938d150$@mcn.org> <686C3EF8-553F-4FEA-A88A-22B19146A296@dukhovni.org> <040501d489b4$97481830$c5d84890$@mcn.org> <20181201204646.GA79754@straasha.imrryr.org> Message-ID: <040e01d489bf$5e31fc40$1a95f4c0$@mcn.org> > It was found in the chain of certificates sent by the client to the > server for validation Again, I could be wrong but that is my point. I do not think the client is sending a chain of certificates, but rather only one, the CA-signed client certificate. (I wrote and configured the client, and generated the certificate, and loaded it into the certificate store.) Charles -----Original Message----- From: openssl-users [mailto:openssl-users-bounces at openssl.org] On Behalf Of Viktor Dukhovni Sent: Saturday, December 1, 2018 12:47 PM To: openssl-users at openssl.org Subject: Re: [openssl-users] Self-signed error when using SSL_CTX_load_verify_locations CApath On Sat, Dec 01, 2018 at 12:29:42PM -0800, Charles Mills wrote: > I could easily be wrong -- you guys know more about certificates than I ever > will -- but I do not *think* there is any self-signed certificate in this > scenario. There should be exactly two certificates in this discussion: > > 1. The client certificate. It is not self-signed (in the correct sense of > the term, as opposed to the erroneous popular sense): it is signed by my > "in-house" CA. > > 2. The CA certificate. Yes, it is a root and self-signed, but you didn't > find it, right? You seem to be stuck on a narrow meaning of the word "found". The self-signed certificate *was* found, but not in the trust-store. It was found in the chain of certificates sent by the client to the server for validation. That's what the error message is telling From levitte at openssl.org Sun Dec 2 03:45:19 2018 From: levitte at openssl.org (Richard Levitte) Date: Sun, 02 Dec 2018 04:45:19 +0100 (CET) Subject: [openssl-users] openssl 1.1.1 opaque structures In-Reply-To: References: Message-ID: <20181202.044519.1520531109363416067.levitte@openssl.org> Did you ever get an answer to that? There is a call BN_num_bytes(), so the fix should be this: *var = rc_vmalloc(BN_num_bytes(bn)); (*var)->l = BN_bn2bin(bn, (unsigned char *)(*var)->v); Cheers, Richard ( you should probably study include/openssl/bn.h in depth ) In message on Mon, 26 Nov 2018 11:15:27 +0530, priya p said: > I am trying to fix this part of code: > > int Func1 (var, bn) { > *var = rc_vmalloc(bn->top * BN_BYTES); ------------------> Trying to fix this. Error it throws is " error: dereferencing pointer to incomplete type". > > (*var)->l = BN_bn2bin(bn, (unsigned char *)(*var)->v); > . > . > } > > Thanks, > Priya > > On Mon, 26 Nov 2018 at 11:06, Viktor Dukhovni wrote: > > > On Nov 26, 2018, at 12:14 AM, priya p wrote: > > > > I am unable to get the API to access bn->top value or any bn members in openssl 1.1.1 . > > Can you help me with the pointers to those APIs ? > > What actual problem are you trying to solve? Accessing bn->top is > not a goal in itself. > > -- > Viktor.
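For completeness, the same idea as a self-contained helper. rc_vmalloc() above comes from the poster's own code base, so plain OPENSSL_malloc() stands in for it here; only BN_num_bytes() and BN_bn2bin() are the point:

    #include <openssl/bn.h>
    #include <openssl/crypto.h>

    /* Serialise bn big-endian without touching the now-opaque BIGNUM internals.
     * Returns an OPENSSL_malloc()ed buffer and stores its length in *lenp. */
    static unsigned char *bn_to_bytes(const BIGNUM *bn, int *lenp)
    {
        int n = BN_num_bytes(bn);
        unsigned char *buf = OPENSSL_malloc(n > 0 ? n : 1);

        if (buf == NULL)
            return NULL;
        *lenp = BN_bn2bin(bn, buf); /* writes at most BN_num_bytes(bn) bytes */
        return buf;
    }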
> > -- > openssl-users mailing list > To unsubscribe: https://mta.openssl.org/mailman/listinfo/openssl-users > From aerowolf at gmail.com Sun Dec 2 06:28:59 2018 From: aerowolf at gmail.com (Kyle Hamilton) Date: Sat, 1 Dec 2018 22:28:59 -0800 Subject: [openssl-users] Self-signed error when using SSL_CTX_load_verify_locations CApath In-Reply-To: <040e01d489bf$5e31fc40$1a95f4c0$@mcn.org> References: <027201d488d4$c06f2220$414d6660$@mcn.org> <032001d488f8$2dc529f0$894f7dd0$@mcn.org> <858BE247-C4BB-4C6B-B387-A9DC47C65915@dukhovni.org> <035b01d4890c$5312f070$f938d150$@mcn.org> <686C3EF8-553F-4FEA-A88A-22B19146A296@dukhovni.org> <040501d489b4$97481830$c5d84890$@mcn.org> <20181201204646.GA79754@straasha.imrryr.org> <040e01d489bf$5e31fc40$1a95f4c0$@mcn.org> Message-ID: Wireshark and other packet capture tools can help you determine exactly what's in the chain sent by the client. If the self-signed root isn't being sent, then the "self-signed certificate in certificate chain" error should never have been sent, and a bug report on that issue would be appropriate. If the root is being sent, though, having some idea of what you're doing when constructing your sessions could help us to figure out why it is when you didn't intend it to be. -Kyle H On Sat, Dec 1, 2018 at 1:47 PM Charles Mills wrote: > > > It was found in the chain of certificates sent by the client to the > > server for validation > > Again, I could be wrong but that is my point. I do not think the client is > sending a chain of certificates, but rather only one, the CA-signed client > certificate. (I wrote and configured the client, and generated the > certificate, and loaded it into the certificate store.) > > Charles > > -----Original Message----- > From: openssl-users [mailto:openssl-users-bounces at openssl.org] On Behalf Of > Viktor Dukhovni > Sent: Saturday, December 1, 2018 12:47 PM > To: openssl-users at openssl.org > Subject: Re: [openssl-users] Self-signed error when using > SSL_CTX_load_verify_locations CApath > > On Sat, Dec 01, 2018 at 12:29:42PM -0800, Charles Mills wrote: > > > I could easily be wrong -- you guys know more about certificates than I > ever > > will -- but I do not *think* there is any self-signed certificate in this > > scenario. There should be exactly two certificates in this discussion: > > > > 1. The client certificate. It is not self-signed (in the correct sense of > > the term, as opposed to the erroneous popular sense): it is signed by my > > "in-house" CA. > > > > 2. The CA certificate. Yes, it is a root and self-signed, but you didn't > > find it, right? > > You seem to be stuck on a narrow meaning of the word "found". The > self-signed certificate *was* found, but not in the trust-store. > > It was found in the chain of certificates sent by the client to the > server for validation. 
That's what the error message is telling > > -- > openssl-users mailing list > To unsubscribe: https://mta.openssl.org/mailman/listinfo/openssl-users From dkg at fifthhorseman.net Sat Dec 1 19:54:37 2018 From: dkg at fifthhorseman.net (Daniel Kahn Gillmor) Date: Sat, 01 Dec 2018 14:54:37 -0500 Subject: [openssl-users] [EXTERNAL] Re: Self-signed error when using SSL_CTX_load_verify_locations CApath In-Reply-To: References: <027201d488d4$c06f2220$414d6660$@mcn.org> <032001d488f8$2dc529f0$894f7dd0$@mcn.org> <858BE247-C4BB-4C6B-B387-A9DC47C65915@dukhovni.org> <7d550152a633423611aa8df6bdc6a7897a97838f.camel@sandia.gov> Message-ID: <87zhtp9g5e.fsf@fifthhorseman.net> On Fri 2018-11-30 20:38:01 -0500, Viktor Dukhovni wrote: > Are there compatibility concerns around changing error message > text for which users may have created regex patterns in scripts? I advocate making the error message in english more comprehensible. Michael Wojcik's suggestion of "Untrusted self-signed certificate in certificate chain" more accurately reflects the semantics of this error message. The error message is X509_V_ERR_SELF_SIGNED_CERT_IN_CHAIN, whic his #defined in x509_vfy.h as 19, and 19 even shows up in the specific error message. Scripts should be keying on this value, not on the human-readable text. Scripts which expect certain human-readable text will fail when the text is localized (not done in OpenSSL yet, but perhaps it should be at some point, it certainly is in glibc and other libraries), or when the text is improved to be more accurate (this case). We shouldn't let those scripts stop us from improving OpenSSL going forward at least, though i can understand if folks are more reluctant to change old verisions in a point release. --dkg -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 227 bytes Desc: not available URL: From openssl-users at dukhovni.org Sun Dec 2 22:13:18 2018 From: openssl-users at dukhovni.org (Viktor Dukhovni) Date: Sun, 2 Dec 2018 17:13:18 -0500 Subject: [openssl-users] How to disable EECDH in OpenSSL 1.0.2 and 1.1.x? Message-ID: <20181202221317.GJ79754@straasha.imrryr.org> [ While I could ask off-list, or RTFS, someone else might have the same question later, so might as well ask on-list. ] Postfix added support for ECDHE ciphers long ago, back when OpenSSL 1.0.0 was shiny and new, and the server-side ECDHE support was enabled by specifying a single preferred "temp" ECDH curve. At the time we allowed users to configure: smtpd_tls_eecdh_grade = none | strong | ultra which was later expanded to: smtpd_tls_eecdh_grade = none | strong | ultra | auto as documented at: http://www.postfix.org/postconf.5.html#smtpd_tls_eecdh_grade http://www.postfix.org/postconf.5.html#tls_eecdh_strong_curve http://www.postfix.org/postconf.5.html#tls_eecdh_ultra_curve http://www.postfix.org/postconf.5.html#tls_eecdh_auto_curves The "none" setting is documented to disable ECDHE, and did that by simply doing nothing, that is by not setting a specific ECDH temp curve and also not calling SSL_CTX_set_ecdh_auto(). But doing nothing no longer has the same effect in OpenSSL 1.1.0 and later, where ECDHE curve negotiation is always on, and SSL_CTX_set_ecdh_auto() is basically a NOOP (that returns "failure" if the requested behaviour is ECDHE "off"). 
I thought I might get the same effect by configuring an empty curve list, but OpenSSL 1.1.x does not accept an empty list, and in any case that might also affect DHE support, since IIRC there's now a unified list of curves and FFDHE groups, and there may not be an interface for configuring just the curves? Is there still a way to support the "none" setting other than to modify the cipherlist (ciphers = "!kECDHE:...")? The Postfix code that deals with DH settings is separate from the code that deals with ciphers, and I'd prefer not to get these mixed up. I should say that I understand that turning off ECDHE is increasingly unwise, interoperability can and will suffer. So I may well decide to drop support for "none" and pretend the user meant "auto", but I'd like to understand the available options first. -- Viktor. From matt at openssl.org Sun Dec 2 22:48:33 2018 From: matt at openssl.org (Matt Caswell) Date: Sun, 2 Dec 2018 22:48:33 +0000 Subject: [openssl-users] How to disable EECDH in OpenSSL 1.0.2 and 1.1.x? In-Reply-To: <20181202221317.GJ79754@straasha.imrryr.org> References: <20181202221317.GJ79754@straasha.imrryr.org> Message-ID: On 02/12/2018 22:13, Viktor Dukhovni wrote: > > [ While I could ask off-list, or RTFS, someone else might have the > same question later, so might as well ask on-list. ] > > Postfix added support for ECDHE ciphers long ago, back when OpenSSL > 1.0.0 was shiny and new, and the server-side ECDHE support was > enabled by specifying a single preferred "temp" ECDH curve. At the > time we allowed users to configure: > > smtpd_tls_eecdh_grade = none | strong | ultra > > which was later expanded to: > > smtpd_tls_eecdh_grade = none | strong | ultra | auto > > as documented at: > > http://www.postfix.org/postconf.5.html#smtpd_tls_eecdh_grade > http://www.postfix.org/postconf.5.html#tls_eecdh_strong_curve > http://www.postfix.org/postconf.5.html#tls_eecdh_ultra_curve > http://www.postfix.org/postconf.5.html#tls_eecdh_auto_curves > > The "none" setting is documented to disable ECDHE, and did that by > simply doing nothing, that is by not setting a specific ECDH temp > curve and also not calling SSL_CTX_set_ecdh_auto(). But doing > nothing no longer has the same effect in OpenSSL 1.1.0 and later, > where ECDHE curve negotiation is always on, and SSL_CTX_set_ecdh_auto() > is basically a NOOP (that returns "failure" if the requested behaviour > is ECDHE "off"). > > I thought I might get the same effect by configuring an empty curve > list, but OpenSSL 1.1.x does not accept an empty list, and in any > case that might also affect DHE support, since IIRC there's now a > unified list of curves and FFDHE groups, and there may not be an interface > for configuring just the curves? > > Is there still a way to support the "none" setting other than to > modify the cipherlist (ciphers = "!kECDHE:...")? The Postfix > code that deals with DH settings is separate from the code > that deals with ciphers, and I'd prefer not to get these mixed up. AFAIK this can't be done. If you don't want ECDHE then you should not configure ECDHE ciphersuites. WRT a unified list of curves that's not quite the case. TLSv1.3 has a single "supported_groups" list for both FFDHE and ECDHE - but OpenSSL does not support FFDHE in TLSv1.3 so in an OpenSSL context this still only relates to ECDHE groups. Matt > > I should say that I understand that turning off ECDHE is increasingly > unwise, interoperability can and will suffer.
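A sketch of the cipher-list route described above, for an application that keeps an explicit "ECDHE off" knob. The base string "DEFAULT" and the function name are placeholders; note that TLS 1.3 is unaffected by cipher-list keywords, since all of its standard ciphersuites use (EC)DHE key establishment, so truly disabling ECDHE also means not offering TLS 1.3:

    #include <openssl/ssl.h>

    static int disable_ecdhe(SSL_CTX *ctx)
    {
        /* Remove ECDHE key exchange from the TLS 1.2-and-below cipher list. */
        if (!SSL_CTX_set_cipher_list(ctx, "DEFAULT:!kECDHE"))
            return 0;
        /* Drastic, and shown only for illustration: cap the protocol so the
         * ECDHE-only TLS 1.3 ciphersuites are never negotiated. */
        return SSL_CTX_set_max_proto_version(ctx, TLS1_2_VERSION);
    }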
So I may well decide > to drop support for "none" and pretend the user meant "auto", but > I'd like to understand the available options first. > From Michal.Trojnara at stunnel.org Sun Dec 2 23:10:28 2018 From: Michal.Trojnara at stunnel.org (Michal Trojnara) Date: Mon, 3 Dec 2018 00:10:28 +0100 Subject: [openssl-users] stunnel 5.50 released Message-ID: <9e157ef6-fb10-5a66-af12-ff7cf41071ab@stunnel.org> Dear Users, I have released version 5.50 of stunnel. Version 5.50, 2018.12.02, urgency: MEDIUM * New features ? - 32-bit Windows builds replaced with 64-bit builds. ? - OpenSSL DLLs updated to version 1.1.1. ? - Check whether "output" is not a relative file name. ? - Major code cleanup in the configuration file parser. ? - Added sslVersion, sslVersionMin and sslVersionMax ??? for OpenSSL 1.1.0 and later. * Bugfixes ? - Fixed PSK session resumption with TLS 1.3. ? - Fixed a memory leak in WIN32 logging subsystem. ? - Allow for zero value (ignored) TLS options. ? - Partially refactored configuration file parsing ??? and logging subsystems for clearer code and minor ??? bugfixes. * Caveats ? - We removed FIPS support from our standard builds. ??? FIPS will still be available with bespoke builds. Home page: https://www.stunnel.org/ Download: https://www.stunnel.org/downloads.html SHA-256 hashes: 951d92502908b852a297bd9308568f7c36598670b84286d3e05d4a3a550c0149? stunnel-5.50.tar.gz e855d58a05dca0943a5da8d030b5904630ee9cff47c3d747d326e151724f3bc8? stunnel-5.50-win64-installer.exe ad6c952cd26951c5a986efe8034b71af07c951e11d06e0b0ce73ef82594b1041? stunnel-5.50-android.zip Best regards, ??? Mike -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: OpenPGP digital signature URL: From charlesm at mcn.org Mon Dec 3 00:38:19 2018 From: charlesm at mcn.org (Charles Mills) Date: Sun, 2 Dec 2018 16:38:19 -0800 Subject: [openssl-users] Question on necessity of SSL_CTX_set_client_CA_list Message-ID: <050b01d48aa0$7d351be0$779f53a0$@mcn.org> I have an OpenSSL (v1.1.0f) server application that processes client certificates. The doc for SSL_CTX_load_verify_locations() states "In server mode, when requesting a client certificate, the server must send the list of CAs of which it will accept client certificates. This list is not influenced by the contents of CAfile or CApath and must explicitly be set using the SSL_CTX_set_client_CA_list family of functions." The application makes no calls to SSL_CTX_set_client_CA_list() yet receives client certificates without errors. Can someone please explain the discrepancy. I'm especially wondering if I have set a trap that will spring down the road: "yes it works, but if a user does X then it will not work." Thanks! Charles -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From charlesm at mcn.org Mon Dec 3 00:43:17 2018 From: charlesm at mcn.org (Charles Mills) Date: Sun, 2 Dec 2018 16:43:17 -0800 Subject: [openssl-users] Self-signed error when using SSL_CTX_load_verify_locations CApath In-Reply-To: References: <027201d488d4$c06f2220$414d6660$@mcn.org> <032001d488f8$2dc529f0$894f7dd0$@mcn.org> <858BE247-C4BB-4C6B-B387-A9DC47C65915@dukhovni.org> <035b01d4890c$5312f070$f938d150$@mcn.org> <686C3EF8-553F-4FEA-A88A-22B19146A296@dukhovni.org> <040501d489b4$97481830$c5d84890$@mcn.org> <20181201204646.GA79754@straasha.imrryr.org> <040e01d489bf$5e31fc40$1a95f4c0$@mcn.org> Message-ID: <051001d48aa1$2e69cb40$8b3d61c0$@mcn.org> Sorry, I do not have a packet capture tool configured. I have a verify callback with a lot of trace messages. I can see that it is only entered once; X509_STORE_CTX_get_error_depth() is 1. Does that tell us anything useful? Charles -----Original Message----- From: openssl-users [mailto:openssl-users-bounces at openssl.org] On Behalf Of Kyle Hamilton Sent: Saturday, December 1, 2018 10:29 PM To: openssl-users Subject: Re: [openssl-users] Self-signed error when using SSL_CTX_load_verify_locations CApath Wireshark and other packet capture tools can help you determine exactly what's in the chain sent by the client. If the self-signed root isn't being sent, then the "self-signed certificate in certificate chain" error should never have been sent, and a bug report on that issue would be appropriate. If the root is being sent, though, having some idea of what you're doing when constructing your sessions could help us to figure out why it is when you didn't intend it to be. -Kyle H On Sat, Dec 1, 2018 at 1:47 PM Charles Mills wrote: > > > It was found in the chain of certificates sent by the client to the > > server for validation > > Again, I could be wrong but that is my point. I do not think the client is > sending a chain of certificates, but rather only one, the CA-signed client > certificate. (I wrote and configured the client, and generated the > certificate, and loaded it into the certificate store.) > > Charles > > -----Original Message----- > From: openssl-users [mailto:openssl-users-bounces at openssl.org] On Behalf Of > Viktor Dukhovni > Sent: Saturday, December 1, 2018 12:47 PM > To: openssl-users at openssl.org > Subject: Re: [openssl-users] Self-signed error when using > SSL_CTX_load_verify_locations CApath > > On Sat, Dec 01, 2018 at 12:29:42PM -0800, Charles Mills wrote: > > > I could easily be wrong -- you guys know more about certificates than I > ever > > will -- but I do not *think* there is any self-signed certificate in this > > scenario. There should be exactly two certificates in this discussion: > > > > 1. The client certificate. It is not self-signed (in the correct sense of > > the term, as opposed to the erroneous popular sense): it is signed by my > > "in-house" CA. > > > > 2. The CA certificate. Yes, it is a root and self-signed, but you didn't > > find it, right? > > You seem to be stuck on a narrow meaning of the word "found". The > self-signed certificate *was* found, but not in the trust-store. > > It was found in the chain of certificates sent by the client to the > server for validation. 
That's what the error message is telling > > -- > openssl-users mailing list > To unsubscribe: https://mta.openssl.org/mailman/listinfo/openssl-users -- openssl-users mailing list To unsubscribe: https://mta.openssl.org/mailman/listinfo/openssl-users From charlesm at mcn.org Mon Dec 3 01:14:44 2018 From: charlesm at mcn.org (Charles Mills) Date: Sun, 2 Dec 2018 17:14:44 -0800 Subject: [openssl-users] Question on necessity of SSL_CTX_set_client_CA_list Message-ID: <051701d48aa5$93721070$ba563150$@mcn.org> Do I need to say no calls to SSL_CTX_set_client_CA_list() nor any of the three related functions listed on the man page? Charles From: Charles Mills [mailto:charlesm at mcn.org] Sent: Sunday, December 2, 2018 4:38 PM To: 'openssl-users at openssl.org' Subject: Question on necessity of SSL_CTX_set_client_CA_list I have an OpenSSL (v1.1.0f) server application that processes client certificates. The doc for SSL_CTX_load_verify_locations() states "In server mode, when requesting a client certificate, the server must send the list of CAs of which it will accept client certificates. This list is not influenced by the contents of CAfile or CApath and must explicitly be set using the SSL_CTX_set_client_CA_list family of functions." The application makes no calls to SSL_CTX_set_client_CA_list() yet receives client certificates without errors. Can someone please explain the discrepancy. I'm especially wondering if I have set a trap that will spring down the road: "yes it works, but if a user does X then it will not work." Thanks! Charles -------------- next part -------------- An HTML attachment was scrubbed... URL: From openssl-users at dukhovni.org Mon Dec 3 01:50:19 2018 From: openssl-users at dukhovni.org (Viktor Dukhovni) Date: Sun, 2 Dec 2018 20:50:19 -0500 Subject: [openssl-users] Question on necessity of SSL_CTX_set_client_CA_list In-Reply-To: <050b01d48aa0$7d351be0$779f53a0$@mcn.org> References: <050b01d48aa0$7d351be0$779f53a0$@mcn.org> Message-ID: > On Dec 2, 2018, at 7:38 PM, Charles Mills wrote: > > I have an OpenSSL (v1.1.0f) server application that processes client certificates. > > The doc for SSL_CTX_load_verify_locations() states ?In server mode, when requesting a client certificate, the server must send the list of CAs of which it will accept client certificates. This list is not influenced by the contents of CAfile or CApath and must explicitly be set using the SSL_CTX_set_client_CA_list family of functions.? > > The application makes no calls to SSL_CTX_set_client_CA_list() yet receives client certificates without errors. > > Can someone please explain the discrepancy. I?m especially wondering if I have set a trap that will spring down the road: ?yes it works, but if a user does X then it will not work.? The default list is empty. Some client implementations, IIRC Java's TLS stack or at least some Java TLS toolkits, will not use a client certificate unless the server's list is non-empty, and perhaps may also require that it include a CA name that matches an issuer of their certificate. Other clients have but one default certificate and use it regardless of the server's CA list. Your mileage may vary. -- Viktor. 
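A sketch of the call being discussed, for a server that decides to advertise its acceptable-CA list after all. The PEM path is hypothetical; it would normally name the same in-house CA used for verification:

    #include <openssl/ssl.h>

    static int advertise_client_cas(SSL_CTX *ctx)
    {
        /* Hypothetical file containing the in-house CA certificate(s). */
        STACK_OF(X509_NAME) *names = SSL_load_client_CA_file("in-house-ca.pem");

        if (names == NULL)
            return 0;
        SSL_CTX_set_client_CA_list(ctx, names); /* ctx takes ownership */
        return 1;
    }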
From openssl-users at dukhovni.org Mon Dec 3 01:54:29 2018 From: openssl-users at dukhovni.org (Viktor Dukhovni) Date: Sun, 2 Dec 2018 20:54:29 -0500 Subject: [openssl-users] Self-signed error when using SSL_CTX_load_verify_locations CApath In-Reply-To: <051001d48aa1$2e69cb40$8b3d61c0$@mcn.org> References: <027201d488d4$c06f2220$414d6660$@mcn.org> <032001d488f8$2dc529f0$894f7dd0$@mcn.org> <858BE247-C4BB-4C6B-B387-A9DC47C65915@dukhovni.org> <035b01d4890c$5312f070$f938d150$@mcn.org> <686C3EF8-553F-4FEA-A88A-22B19146A296@dukhovni.org> <040501d489b4$97481830$c5d84890$@mcn.org> <20181201204646.GA79754@straasha.imrryr.org> <040e01d489bf$5e31fc40$1a95f4c0$@mcn.org> <051001d48aa1$2e69cb40$8b3d61c0$@mcn.org> Message-ID: > On Dec 2, 2018, at 7:43 PM, Charles Mills wrote: > > Sorry, I do not have a packet capture tool configured. > > I have a verify callback with a lot of trace messages. I can see that it is > only entered once; X509_STORE_CTX_get_error_depth() is 1. > > Does that tell us anything useful? No further information is required. Your client certificate chain includes a self-signed root CA as a direct issuer of its certificate. That root CA was not found in the server's trust store. Someone should submit a pull request to improve the error message, if they've not done so yet. -- -- Viktor. From andreas.fuchs at sit.fraunhofer.de Mon Dec 3 09:41:19 2018 From: andreas.fuchs at sit.fraunhofer.de (Fuchs, Andreas) Date: Mon, 3 Dec 2018 09:41:19 +0000 Subject: [openssl-users] Question on implementing the ameth ctrl ASN1_PKEY_CTRL_DEFAULT_MD_NID In-Reply-To: References: <9F48E1A823B03B4790B7E6E69430724D014B459655@exch2010c.sit.fraunhofer.de> <9F48E1A823B03B4790B7E6E69430724D014B45A823@exch2010c.sit.fraunhofer.de>, Message-ID: <9F48E1A823B03B4790B7E6E69430724D014B46156F@EXCH2010B.sit.fraunhofer.de> Thanks for the hint... I'll implement this. Nevertheless, padding is not supported as far as I understand, right ? Thus, in order to prevent SHA256 on a P384 curve, I'll have to set the DEFAULT_MD_NID hint, right ? Could anybody give me some feedback, whether my intended approach is correct ? ________________________________________ From: openssl-users [openssl-users-bounces at openssl.org] on behalf of Blumenthal, Uri - 0553 - MITLL [uri at ll.mit.edu] Sent: Friday, November 30, 2018 18:44 To: openssl-users at openssl.org; William Roberts Subject: Re: [openssl-users] Question on implementing the ameth ctrl ASN1_PKEY_CTRL_DEFAULT_MD_NID The way I understand the ECDSA standard, it is supposed to truncate the provided hash - which is why it is possible to have ECDSA-over-P256-SHA384. One possibility would be for you to truncate the SHA2 output yourself, IMHO. ?On 11/30/18, 12:36 PM, "openssl-users on behalf of Fuchs, Andreas" wrote: The problem is as follows: The digest parameter of the TPM2_Sign command is checked against the hash algorithms supported by the TPM. If the TPM only supports SHA256, then the maximum size for the digest parameter is 32 bytes. So you cannot pass in a SHA512 hash, even though the TPM does not even perform a hash operation. Kind of stupid, I know, but thats how it goes. For RSA, I could "emulate" signing by using the TPM2_RSA_Decrypt command. For ECDSA however there is no equivalent. Thus the tpm2-tss-engine will only support up to SHA384 (since that's what most TPMs support). Therefore, the engine needs to communicate to OpenSSL's TLS not to negotiate SHA512. That was apparently added f?r 1.0.1 and 1.1.1 recently as the ASN1_PKEY_CTRL_DEFAULT_MD_NID ameth ctrl. 
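For readers following the ASN1_PKEY_CTRL_DEFAULT_MD_NID idea raised here (the detailed questions about it continue below), the ctrl itself reduces to a few lines. This is an illustrative reduction, not the tpm2-tss-engine's actual code: the function name is invented, SHA-256 stands in for whatever hash the TPM key is bound to, and a real engine would install it with EVP_PKEY_asn1_set_ctrl() on its own copy of the EC method:

    #include <openssl/evp.h>
    #include <openssl/objects.h>

    static int tpm2_pkey_ctrl(EVP_PKEY *pkey, int op, long arg1, void *arg2)
    {
        (void)pkey;
        (void)arg1;
        switch (op) {
        case ASN1_PKEY_CTRL_DEFAULT_MD_NID:
            /* The digest the hardware key is restricted to. */
            *(int *)arg2 = NID_sha256;
            return 2;   /* 2 = digest is mandatory, 1 = merely the default */
        default:
            return -2;  /* operation not supported */
        }
    }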
I just don't know enough about OpenSSL as to where to start with this. Anyone have any hints please ? ________________________________________ From: William Roberts [bill.c.roberts at gmail.com] Sent: Friday, November 30, 2018 15:55 To: openssl-users at openssl.org Cc: Fuchs, Andreas Subject: Re: [openssl-users] Question on implementing the ameth ctrl ASN1_PKEY_CTRL_DEFAULT_MD_NID On Wed, Nov 28, 2018 at 1:22 AM Fuchs, Andreas wrote: > > Hi all, > > I'm currently implementing a TPM2 engine for OpenSSL over at https://github.com/tpm2-software/tpm2-tss-engine > The problem I'm facing is that OpenSSL's TLS negotiation will request ECDSA from my engine with any hash alg, even though the TPM's keys are restricted to just one specific hash alg. What about when keys aren't restricted to one specific signing scheme and support raw encrypt/decrypt? You could just synthesize it by building up the signature structure on the client side and using the raw primitives to encrypt the signing structure directly. > > Most recently, David Woodhouse pointed out the possibility to require a certain hash-alg from the key to TLS via the ameth ASN1_PKEY_CTRL_DEFAULT_MD_NID at https://github.com/tpm2-software/tpm2-tss-engine/issues/31 > > Since I'm not that familiar with OpenSSL, I wanted to confirm that I'm following the right path for implementing this. > Thus: Is the following approach correct ? > > So, at https://github.com/tpm2-software/tpm2-tss-engine/blob/master/src/tpm2-tss-engine-ecc.c#L328: > - I need to call "const EVP_PKEY_ASN1_METHOD *EVP_PKEY_get0_asn1(const EVP_PKEY *pkey)" to get the ameth ? > - I need to call EVP_PKEY_asn1_set_ctrl(EVP_PKEY_ASN1_METHOD *ameth, (*pkey_ctrl)) to some pkey_ctrl for ECC keys of mine ? > - That pkey_ctrl is a int (*pkey_ctrl) (EVP_PKEY *pkey, int op, long arg1, void *arg2)) that implements the op ASN1_PKEY_CTRL_DEFAULT_MD_NID ? > - That pkey_ctrl()'s ASN1_PKEY_CTRL_DEFAULT_MD_NID looks up the hash for the provided pkey's ecc key from the tpm2data and returns it via *(int *)arg2 = NID_sha1 or NID_sha256 or etc and then returns 1 or 2 or something ? > - Which one of the return codes (1 or 2) makes it mandatory rather than recommended ? > > Thanks a lot for any advice, > Andreas > -- > openssl-users mailing list > To unsubscribe: https://mta.openssl.org/mailman/listinfo/openssl-users -- openssl-users mailing list To unsubscribe: https://mta.openssl.org/mailman/listinfo/openssl-users From Michael.Wojcik at microfocus.com Mon Dec 3 15:22:20 2018 From: Michael.Wojcik at microfocus.com (Michael Wojcik) Date: Mon, 3 Dec 2018 15:22:20 +0000 Subject: [openssl-users] [EXTERNAL] Re: Self-signed error when using SSL_CTX_load_verify_locations CApath In-Reply-To: <20181201205312.GB79754@straasha.imrryr.org> References: <027201d488d4$c06f2220$414d6660$@mcn.org> <032001d488f8$2dc529f0$894f7dd0$@mcn.org> <858BE247-C4BB-4C6B-B387-A9DC47C65915@dukhovni.org> <7d550152a633423611aa8df6bdc6a7897a97838f.camel@sandia.gov> <20181201205312.GB79754@straasha.imrryr.org> Message-ID: > From: openssl-users [mailto:openssl-users-bounces at openssl.org] On Behalf > Of Viktor Dukhovni > Sent: Saturday, December 01, 2018 13:53 > > On Sat, Dec 01, 2018 at 07:12:24PM +0000, Michael Wojcik wrote: > > > > Are there compatibility concerns around changing error message > > > text for which users may have created regex patterns in scripts? > > > > > > I agree the text could be better, but not sure in what releases > > > if any to change the text, since the change may cause issues > > > for some users. 
> > > > Sure, this is always a concern. Maybe the change could be considered for > > OpenSSL 3.0, since that's a major release. > > Care to create a PR against the "master" branch? Something > along the lines of: > > "Provided chain ends with untrusted self-signed certificate" > > or better. Here "untrusted" might mean not trusted for the requested > purpose, but more precise is not always more clear. I should be able to do that. (My OpenSSL contributor paperwork is still in progress, but since this PR wouldn't include any actual code, I don't think I need to wait for that.) May be a few days before I get a chance to do it. -- Michael Wojcik Distinguished Engineer, Micro Focus From charlesm at mcn.org Mon Dec 3 17:53:12 2018 From: charlesm at mcn.org (Charles Mills) Date: Mon, 3 Dec 2018 09:53:12 -0800 Subject: [openssl-users] [EXTERNAL] Re: Self-signed error when using SSL_CTX_load_verify_locations CApath In-Reply-To: References: <027201d488d4$c06f2220$414d6660$@mcn.org> <032001d488f8$2dc529f0$894f7dd0$@mcn.org> <858BE247-C4BB-4C6B-B387-A9DC47C65915@dukhovni.org> <7d550152a633423611aa8df6bdc6a7897a97838f.camel@sandia.gov> <20181201205312.GB79754@straasha.imrryr.org> Message-ID: <05aa01d48b31$0f645900$2e2d0b00$@mcn.org> I appreciate it. OpenSSL is of course a great product but it can be a little mystifying to debug. I am a developer and I understand the problem of "layering" and virtualization, where the component that realizes there is a problem is so far removed that it does not know what the underlying real problem is. That said, I would suggest that "Provided chain ends with untrusted self-signed certificate" still does not really convey "no relevant CA certificate found in the provided path." Charles -----Original Message----- From: openssl-users [mailto:openssl-users-bounces at openssl.org] On Behalf Of Michael Wojcik Sent: Monday, December 3, 2018 7:22 AM To: openssl-users at openssl.org Subject: Re: [openssl-users] [EXTERNAL] Re: Self-signed error when using SSL_CTX_load_verify_locations CApath > From: openssl-users [mailto:openssl-users-bounces at openssl.org] On Behalf > Of Viktor Dukhovni > Sent: Saturday, December 01, 2018 13:53 > > On Sat, Dec 01, 2018 at 07:12:24PM +0000, Michael Wojcik wrote: > > > > Are there compatibility concerns around changing error message > > > text for which users may have created regex patterns in scripts? > > > > > > I agree the text could be better, but not sure in what releases > > > if any to change the text, since the change may cause issues > > > for some users. > > > > Sure, this is always a concern. Maybe the change could be considered for > > OpenSSL 3.0, since that's a major release. > > Care to create a PR against the "master" branch? Something > along the lines of: > > "Provided chain ends with untrusted self-signed certificate" > > or better. Here "untrusted" might mean not trusted for the requested > purpose, but more precise is not always more clear. I should be able to do that. (My OpenSSL contributor paperwork is still in progress, but since this PR wouldn't include any actual code, I don't think I need to wait for that.) May be a few days before I get a chance to do it. 
-- Michael Wojcik Distinguished Engineer, Micro Focus -- openssl-users mailing list To unsubscribe: https://mta.openssl.org/mailman/listinfo/openssl-users From charlesm at mcn.org Mon Dec 3 17:54:47 2018 From: charlesm at mcn.org (Charles Mills) Date: Mon, 3 Dec 2018 09:54:47 -0800 Subject: [openssl-users] Question on necessity of SSL_CTX_set_client_CA_list In-Reply-To: References: <050b01d48aa0$7d351be0$779f53a0$@mcn.org> Message-ID: <05b101d48b31$47d9c720$d78d5560$@mcn.org> Got it. Thanks. I would think the basic client case is "one certificate, one CA" so I think I will roll with what we have (especially since the product has been out there for years with no reported problems in this area -- although I think client certificate usage is rare) but keep the issue in mind if a problem comes up. Charles -----Original Message----- From: openssl-users [mailto:openssl-users-bounces at openssl.org] On Behalf Of Viktor Dukhovni Sent: Sunday, December 2, 2018 5:50 PM To: openssl-users at openssl.org Subject: Re: [openssl-users] Question on necessity of SSL_CTX_set_client_CA_list > On Dec 2, 2018, at 7:38 PM, Charles Mills wrote: > > I have an OpenSSL (v1.1.0f) server application that processes client certificates. > > The doc for SSL_CTX_load_verify_locations() states ?In server mode, when requesting a client certificate, the server must send the list of CAs of which it will accept client certificates. This list is not influenced by the contents of CAfile or CApath and must explicitly be set using the SSL_CTX_set_client_CA_list family of functions.? > > The application makes no calls to SSL_CTX_set_client_CA_list() yet receives client certificates without errors. > > Can someone please explain the discrepancy. I?m especially wondering if I have set a trap that will spring down the road: ?yes it works, but if a user does X then it will not work.? The default list is empty. Some client implementations, IIRC Java's TLS stack or at least some Java TLS toolkits, will not use a client certificate unless the server's list is non-empty, and perhaps may also require that it include a CA name that matches an issuer of their certificate. Other clients have but one default certificate and use it regardless of the server's CA list. Your mileage may vary. -- Viktor. -- openssl-users mailing list To unsubscribe: https://mta.openssl.org/mailman/listinfo/openssl-users From dnsands at sandia.gov Mon Dec 3 18:47:20 2018 From: dnsands at sandia.gov (Sands, Daniel) Date: Mon, 3 Dec 2018 18:47:20 +0000 Subject: [openssl-users] [EXTERNAL] Re: Self-signed error when using SSL_CTX_load_verify_locations CApath In-Reply-To: <20181201205312.GB79754@straasha.imrryr.org> References: <027201d488d4$c06f2220$414d6660$@mcn.org> <032001d488f8$2dc529f0$894f7dd0$@mcn.org> <858BE247-C4BB-4C6B-B387-A9DC47C65915@dukhovni.org> <7d550152a633423611aa8df6bdc6a7897a97838f.camel@sandia.gov> <20181201205312.GB79754@straasha.imrryr.org> Message-ID: On Sat, 2018-12-01 at 15:53 -0500, Viktor Dukhovni wrote: > On Sat, Dec 01, 2018 at 07:12:24PM +0000, Michael Wojcik wrote: > > > > Are there compatibility concerns around changing error message > > > text for which users may have created regex patterns in scripts? > > > > > > I agree the text could be better, but not sure in what releases > > > if any to change the text, since the change may cause issues > > > for some users. > > > > Sure, this is always a concern. Maybe the change could be > > considered for OpenSSL 3.0, since that's a major release. 
> > Care to create a PR against the "master" branch? Something > along the lines of: > > "Provided chain ends with untrusted self-signed certificate" > > or better. Here "untrusted" might mean not trusted for the requested > purpose, but more precise is not always more clear. Just wondering, is there a different error for an untrusted cross- signed root? If it's the same error, then maybe remove "self-signed" from the above message too, because that would not always be the case either. From Michael.Wojcik at microfocus.com Mon Dec 3 18:57:58 2018 From: Michael.Wojcik at microfocus.com (Michael Wojcik) Date: Mon, 3 Dec 2018 18:57:58 +0000 Subject: [openssl-users] [EXTERNAL] Re: Self-signed error when using SSL_CTX_load_verify_locations CApath In-Reply-To: <05aa01d48b31$0f645900$2e2d0b00$@mcn.org> References: <027201d488d4$c06f2220$414d6660$@mcn.org> <032001d488f8$2dc529f0$894f7dd0$@mcn.org> <858BE247-C4BB-4C6B-B387-A9DC47C65915@dukhovni.org> <7d550152a633423611aa8df6bdc6a7897a97838f.camel@sandia.gov> <20181201205312.GB79754@straasha.imrryr.org> <05aa01d48b31$0f645900$2e2d0b00$@mcn.org> Message-ID: > From: openssl-users [mailto:openssl-users-bounces at openssl.org] On Behalf > Of Charles Mills > Sent: Monday, December 03, 2018 10:53 > > I appreciate it. OpenSSL is of course a great product but it can be a little > mystifying to debug. If I were ever to write a book about OpenSSL, "a great product but a little mystifying" would be an appropriate epigraph. Maybe Ivan should use it for the next edition of his OpenSSL Cookbook. (Recommended, by the way, or its larger sibling Bulletproof TLS; find them at feistyduck.com.) Not that it hasn't gotten better over the years: better encapsulation and abstraction, a lot more convenience functionality, a lot more explanation and samples on the OpenSSL wiki (which I think didn't even exist when I first started using OpenSSL). I have great appreciation for the team's efforts. But SSL/TLS is a great big ball of hair to begin with, and while I have tremendous respect for Eric Young, Steven Hensen, and the rest of the original contributors, the OpenSSL source is not exactly a monument to readability. (Though even in the early versions there were some important steps in that direction, like mostly consistent, safe naming conventions for external identifiers, thank goodness.) -- Michael Wojcik Distinguished Engineer, Micro Focus From Michael.Wojcik at microfocus.com Mon Dec 3 18:57:58 2018 From: Michael.Wojcik at microfocus.com (Michael Wojcik) Date: Mon, 3 Dec 2018 18:57:58 +0000 Subject: [openssl-users] Question on necessity of SSL_CTX_set_client_CA_list In-Reply-To: <05b101d48b31$47d9c720$d78d5560$@mcn.org> References: <050b01d48aa0$7d351be0$779f53a0$@mcn.org> <05b101d48b31$47d9c720$d78d5560$@mcn.org> Message-ID: > From: openssl-users [mailto:openssl-users-bounces at openssl.org] On Behalf > Of Charles Mills > Sent: Monday, December 03, 2018 10:55 > > Got it. Thanks. I would think the basic client case is "one certificate, one CA" I'm going to disagree somewhat with this assumption, but not necessarily with your decision. That assumption is probably safe for some use cases, but not all. For example, Windows-based clients that use Microsoft's TLS implementation (SChannel, via CAPI or CNG or any of the various wrapper APIs, including the .NET Framework) have access to all the "personal" certificates in the Windows per-machine and per-user certificate stores. 
In a Windows domain environment, certificates may be added to those stores by central administration, as well as being created or added locally. So such clients could have quite a few client certificates available to them. Some other TLS implementations can optionally use the Windows certificate stores. I believe Netscape's NSS can be configured to do so, for example. A suitable JSSE provider is included with the standard Windows JRE and JDK distributions. And OpenSSL itself has a CAPI engine that can (probably) be used to pull client certificates from the Windows stores. (I say "probably" because when we went to use the OpenSSL CAPI engine some years ago, we ran into some issues caused by Microsoft's awkward provider mechanism and how it interacts with private-key storage, and I ended up enhancing the OpenSSL CAPI module in various ways. So I don't recall what exactly works with it out of the box.) There are other environments which similarly provide centralized storage of certificates (and corresponding private keys) to all clients. zOS does, for example, at least if you're using the RACF security provider. Perhaps more importantly, as Viktor noted, some clients won't send a certificate at all unless they have one signed by a CA on the server's list, or at least only if the server sends a non-empty list. The list is also useful for clients that want to help the user select from among a set of certificates. > so I think I will roll with what we have (especially since the product has been > out there for years with no reported problems in this area -- although I think > client certificate usage is rare) but keep the issue in mind if a problem comes > up. Despite what I wrote above, the important thing, of course, is what your users need. If they haven't needed a server that sends a CA list, there's a good chance they won't need one any time soon. Often there are better things to address first. TLS configuration is important, but certainly for the software projects I work on there are any number of important areas for further work. You can't do everything at once. -- Michael Wojcik Distinguished Engineer, Micro Focus From openssl-users at dukhovni.org Mon Dec 3 19:53:16 2018 From: openssl-users at dukhovni.org (Viktor Dukhovni) Date: Mon, 3 Dec 2018 14:53:16 -0500 Subject: [openssl-users] [EXTERNAL] Re: Self-signed error when using SSL_CTX_load_verify_locations CApath In-Reply-To: References: <027201d488d4$c06f2220$414d6660$@mcn.org> <032001d488f8$2dc529f0$894f7dd0$@mcn.org> <858BE247-C4BB-4C6B-B387-A9DC47C65915@dukhovni.org> <7d550152a633423611aa8df6bdc6a7897a97838f.camel@sandia.gov> <20181201205312.GB79754@straasha.imrryr.org> Message-ID: <43CDE844-0D2A-4B07-A96A-69AF90481B1C@dukhovni.org> > On Dec 3, 2018, at 1:47 PM, Sands, Daniel via openssl-users wrote: > > Just wondering, is there a different error for an untrusted cross- > signed root? If it's the same error, then maybe remove "self-signed" > from the above message too, because that would not always be the case > either. A cross-signed CA certificate is not self-signed (or even self-issued), the two are mutually exclusive: This specification covers two classes of certificates: CA certificates and end entity certificates. CA certificates may be further divided into three classes: cross-certificates, self-issued Cooper, et al. Standards Track [Page 12] RFC 5280 PKIX Certificate and CRL Profile May 2008 certificates, and self-signed certificates. 
Cross-certificates are CA certificates in which the issuer and subject are different entities. Cross-certificates describe a trust relationship between the two CAs. Self-issued certificates are CA certificates in which the issuer and subject are the same entity. Self-issued certificates are generated to support changes in policy or operations. Self- signed certificates are self-issued certificates where the digital signature may be verified by the public key bound into the certificate. Self-signed certificates are used to convey a public key for use to begin certification paths. End entity certificates are issued to subjects that are not authorized to issue certificates. In OpenSSL there's no such thing as a "cross-signed root", the constructed chain contains a leaf certificate, some set of cross-signed or self-issued intermediate certificates, and finally a self-signed "root" (ignoring for the moment support for "partial chains" and DANE). -- Viktor. From charlesm at mcn.org Mon Dec 3 20:24:22 2018 From: charlesm at mcn.org (Charles Mills) Date: Mon, 3 Dec 2018 12:24:22 -0800 Subject: [openssl-users] [EXTERNAL] Re: Self-signed error when using SSL_CTX_load_verify_locations CApath In-Reply-To: References: <027201d488d4$c06f2220$414d6660$@mcn.org> <032001d488f8$2dc529f0$894f7dd0$@mcn.org> <858BE247-C4BB-4C6B-B387-A9DC47C65915@dukhovni.org> <7d550152a633423611aa8df6bdc6a7897a97838f.camel@sandia.gov> <20181201205312.GB79754@straasha.imrryr.org> <05aa01d48b31$0f645900$2e2d0b00$@mcn.org> Message-ID: <05ff01d48b46$2d9a1030$88ce3090$@mcn.org> LOL. Amen to that. It has gotten a WHOLE lot better. I started with OpenSSL somewhere around 2010 and the documentation was EXTREMELY sparse to say the list. Lots of functions documented as "under construction." Charles -----Original Message----- From: openssl-users [mailto:openssl-users-bounces at openssl.org] On Behalf Of Michael Wojcik Sent: Monday, December 3, 2018 10:58 AM To: openssl-users at openssl.org Subject: Re: [openssl-users] [EXTERNAL] Re: Self-signed error when using SSL_CTX_load_verify_locations CApath > From: openssl-users [mailto:openssl-users-bounces at openssl.org] On Behalf > Of Charles Mills > Sent: Monday, December 03, 2018 10:53 > > I appreciate it. OpenSSL is of course a great product but it can be a little > mystifying to debug. If I were ever to write a book about OpenSSL, "a great product but a little mystifying" would be an appropriate epigraph. Maybe Ivan should use it for the next edition of his OpenSSL Cookbook. (Recommended, by the way, or its larger sibling Bulletproof TLS; find them at feistyduck.com.) Not that it hasn't gotten better over the years: better encapsulation and abstraction, a lot more convenience functionality, a lot more explanation and samples on the OpenSSL wiki (which I think didn't even exist when I first started using OpenSSL). I have great appreciation for the team's efforts. But SSL/TLS is a great big ball of hair to begin with, and while I have tremendous respect for Eric Young, Steven Hensen, and the rest of the original contributors, the OpenSSL source is not exactly a monument to readability. (Though even in the early versions there were some important steps in that direction, like mostly consistent, safe naming conventions for external identifiers, thank goodness.) 
-- Michael Wojcik Distinguished Engineer, Micro Focus -- openssl-users mailing list To unsubscribe: https://mta.openssl.org/mailman/listinfo/openssl-users From charlesm at mcn.org Mon Dec 3 20:35:04 2018 From: charlesm at mcn.org (Charles Mills) Date: Mon, 3 Dec 2018 12:35:04 -0800 Subject: [openssl-users] Question on necessity of SSL_CTX_set_client_CA_list In-Reply-To: References: <050b01d48aa0$7d351be0$779f53a0$@mcn.org> <05b101d48b31$47d9c720$d78d5560$@mcn.org> Message-ID: <060c01d48b47$ac112290$043367b0$@mcn.org> > zOS does, for example, at least if you're using the RACF security provider. Ha! Spoken like a Micro Focus guy! One of the most likely clients for this server is in fact implemented on z/OS. Just FYI, the key variable is not so much RACF: (a.) RACF is just (in this case) a certificate store, not a TLS implementation; and (b.) I think the other two ESM's (CA TSS and ACF2) are 99% equivalent in their certificate facilities. The TLS implementation is named System SSL (sometimes known as GSK). That is the TLS library, roughly parallel to OpenSSL. (In fact I don't know of any other TLS implementation on z/OS other than the OpenSSL port -- but there could be some.) GSK also implements its own certificate store, but I don't think it is widely used in production. Yes, there would be lots of certificates in the certificate store, but at least in the case of the client I wrote, you configure it in advance to use a particular named certificate, so the server application itself does not have any choice at run time. It is "one certificate, take it or leave it." Thanks for the heads-up on Windows. I develop for both platforms, but I am much less familiar with all of the ins and outs of Windows. OCSP and OCSP stapling are currently higher on my wish list than this. Charles -----Original Message----- From: openssl-users [mailto:openssl-users-bounces at openssl.org] On Behalf Of Michael Wojcik Sent: Monday, December 3, 2018 10:58 AM To: openssl-users at openssl.org Subject: Re: [openssl-users] Question on necessity of SSL_CTX_set_client_CA_list > From: openssl-users [mailto:openssl-users-bounces at openssl.org] On Behalf > Of Charles Mills > Sent: Monday, December 03, 2018 10:55 > > Got it. Thanks. I would think the basic client case is "one certificate, one CA" I'm going to disagree somewhat with this assumption, but not necessarily with your decision. That assumption is probably safe for some use cases, but not all. For example, Windows-based clients that use Microsoft's TLS implementation (SChannel, via CAPI or CNG or any of the various wrapper APIs, including the .NET Framework) have access to all the "personal" certificates in the Windows per-machine and per-user certificate stores. In a Windows domain environment, certificates may be added to those stores by central administration, as well as being created or added locally. So such clients could have quite a few client certificates available to them. Some other TLS implementations can optionally use the Windows certificate stores. I believe Netscape's NSS can be configured to do so, for example. A suitable JSSE provider is included with the standard Windows JRE and JDK distributions. And OpenSSL itself has a CAPI engine that can (probably) be used to pull client certificates from the Windows stores. 
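Since the broader question in this thread is whether a server needs to send a CA list at all, a minimal sketch of the server-side setup may be a useful reference point. This is not code from the thread: the PEM file path is purely illustrative, and requiring a client certificate (SSL_VERIFY_FAIL_IF_NO_PEER_CERT) is a policy choice, not a requirement.

```
/*
 * Minimal sketch (not production code): make a server both trust a set of
 * client CAs for verification and advertise them in the CertificateRequest
 * CA list. "client-cas.pem" is an illustrative path.
 */
#include <openssl/ssl.h>

static int setup_client_auth(SSL_CTX *ctx, const char *ca_file)
{
    STACK_OF(X509_NAME) *ca_names;

    /* Trust these CAs when verifying the client's chain. */
    if (SSL_CTX_load_verify_locations(ctx, ca_file, NULL) != 1)
        return 0;

    /* Advertise the same CAs so clients know which certificate to offer. */
    ca_names = SSL_load_client_CA_file(ca_file);
    if (ca_names == NULL)
        return 0;
    SSL_CTX_set_client_CA_list(ctx, ca_names);   /* ctx takes ownership */

    /* Request (and here require) a client certificate. */
    SSL_CTX_set_verify(ctx, SSL_VERIFY_PEER | SSL_VERIFY_FAIL_IF_NO_PEER_CERT,
                       NULL);
    return 1;
}
```

Note that SSL_load_client_CA_file() only extracts subject names for the CertificateRequest message; it is the separate SSL_CTX_load_verify_locations() call that actually makes those CAs trusted for verification.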
(I say "probably" because when we went to use the OpenSSL CAPI engine some years ago, we ran into some issues caused by Microsoft's awkward provider mechanism and how it interacts with private-key storage, and I ended up enhancing the OpenSSL CAPI module in various ways. So I don't recall what exactly works with it out of the box.) There are other environments which similarly provide centralized storage of certificates (and corresponding private keys) to all clients. zOS does, for example, at least if you're using the RACF security provider. Perhaps more importantly, as Viktor noted, some clients won't send a certificate at all unless they have one signed by a CA on the server's list, or at least only if the server sends a non-empty list. The list is also useful for clients that want to help the user select from among a set of certificates. > so I think I will roll with what we have (especially since the product has been > out there for years with no reported problems in this area -- although I think > client certificate usage is rare) but keep the issue in mind if a problem comes > up. Despite what I wrote above, the important thing, of course, is what your users need. If they haven't needed a server that sends a CA list, there's a good chance they won't need one any time soon. Often there are better things to address first. TLS configuration is important, but certainly for the software projects I work on there are any number of important areas for further work. You can't do everything at once. -- Michael Wojcik Distinguished Engineer, Micro Focus -- openssl-users mailing list To unsubscribe: https://mta.openssl.org/mailman/listinfo/openssl-users From openssl-users at dukhovni.org Mon Dec 3 20:40:09 2018 From: openssl-users at dukhovni.org (Viktor Dukhovni) Date: Mon, 3 Dec 2018 15:40:09 -0500 Subject: [openssl-users] Question on necessity of SSL_CTX_set_client_CA_list In-Reply-To: <060c01d48b47$ac112290$043367b0$@mcn.org> References: <050b01d48aa0$7d351be0$779f53a0$@mcn.org> <05b101d48b31$47d9c720$d78d5560$@mcn.org> <060c01d48b47$ac112290$043367b0$@mcn.org> Message-ID: > On Dec 3, 2018, at 3:35 PM, Charles Mills wrote: > > OCSP and OCSP stapling are currently higher on my wish list than this. Good luck with OCSP, the documentation could definitely be better, and various projects get it wrong. IIRC curl gets OCSP right, so you could look there for example code, some other projects go through the motions, but don't always achieve a robust result. [ FWIW, I don't care much for OCSP, it's often not required, so it is then not clear what security properties it provides. ] -- Viktor. From charlesm at mcn.org Mon Dec 3 20:45:19 2018 From: charlesm at mcn.org (Charles Mills) Date: Mon, 3 Dec 2018 12:45:19 -0800 Subject: [openssl-users] Question on necessity of SSL_CTX_set_client_CA_list In-Reply-To: References: <050b01d48aa0$7d351be0$779f53a0$@mcn.org> <05b101d48b31$47d9c720$d78d5560$@mcn.org> <060c01d48b47$ac112290$043367b0$@mcn.org> Message-ID: <061201d48b49$1aefbea0$50cf3be0$@mcn.org> Those darned customers are asking for it! I do understand the privacy exposure. Don't know if the customers do or do not. 
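Since OCSP stapling came up as the next item on the wish list, here is a rough client-side sketch of the kind of thing curl does, which Viktor suggests using as a reference. It is an illustration only: the callback merely checks that a stapled response is well-formed and verifiable against a trust store passed via the callback argument; a robust implementation must also match the response against the server certificate (e.g. with OCSP_resp_find_status()), check the validity interval, and decide what to do when nothing is stapled at all.

```
/*
 * Rough sketch only: a client asks for a stapled OCSP response and inspects
 * it in the status callback. Real code must also check the certificate
 * status, nonce and thisUpdate/nextUpdate fields.
 */
#include <openssl/ssl.h>
#include <openssl/ocsp.h>

static int ocsp_status_cb(SSL *s, void *arg)
{
    X509_STORE *store = arg;            /* trust store for the responder */
    const unsigned char *p;
    long len;
    OCSP_RESPONSE *rsp;
    OCSP_BASICRESP *br;
    int ok = 0;

    len = SSL_get_tlsext_status_ocsp_resp(s, &p);
    if (len <= 0 || p == NULL)
        return 1;                       /* nothing stapled; decide policy here */

    rsp = d2i_OCSP_RESPONSE(NULL, &p, len);
    if (rsp == NULL)
        return 0;
    if (OCSP_response_status(rsp) == OCSP_RESPONSE_STATUS_SUCCESSFUL
        && (br = OCSP_response_get1_basic(rsp)) != NULL) {
        ok = OCSP_basic_verify(br, NULL, store, 0) > 0;
        OCSP_BASICRESP_free(br);
    }
    OCSP_RESPONSE_free(rsp);
    return ok;                          /* 0 aborts the handshake */
}

/* During setup, with an X509_STORE *store already populated: */
void enable_stapling(SSL_CTX *ctx, SSL *ssl, X509_STORE *store)
{
    SSL_CTX_set_tlsext_status_cb(ctx, ocsp_status_cb);
    SSL_CTX_set_tlsext_status_arg(ctx, store);
    SSL_set_tlsext_status_type(ssl, TLSEXT_STATUSTYPE_ocsp);
}
```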
Charles -----Original Message----- From: openssl-users [mailto:openssl-users-bounces at openssl.org] On Behalf Of Viktor Dukhovni Sent: Monday, December 3, 2018 12:40 PM To: openssl-users at openssl.org Subject: Re: [openssl-users] Question on necessity of SSL_CTX_set_client_CA_list > On Dec 3, 2018, at 3:35 PM, Charles Mills wrote: > > OCSP and OCSP stapling are currently higher on my wish list than this. Good luck with OCSP, the documentation could definitely be better, and various projects get it wrong. IIRC curl gets OCSP right, so you could look there for example code, some other projects go through the motions, but don't always achieve a robust result. [ FWIW, I don't care much for OCSP, it's often not required, so it is then not clear what security properties it provides. ] From openssl at foocrypt.net Tue Dec 4 04:09:34 2018 From: openssl at foocrypt.net (openssl at foocrypt.net) Date: Tue, 4 Dec 2018 15:09:34 +1100 Subject: [openssl-users] Telecommunication and Other Legislation Amendment (Assistance and Access) Bill 2018 In-Reply-To: <81D70BA6-D8E3-4414-840D-CDC57215A57B@foocrypt.net> References: <81D70BA6-D8E3-4414-840D-CDC57215A57B@foocrypt.net> Message-ID: <68558DEA-6DA1-4723-8B92-306030E7FB70@foocrypt.net> It?s looking like AssAccess will be law here by the end of the week. Anyone know of a ?good? country to live / work in ? How many Openssl developers are within Australian boarders ? From vieuxtech at gmail.com Tue Dec 4 04:56:47 2018 From: vieuxtech at gmail.com (Sam Roberts) Date: Mon, 3 Dec 2018 20:56:47 -0800 Subject: [openssl-users] what is the relationship between (Client)SignatureAlgorithms and cipher_list()? Message-ID: Do they overlap in purpose, so the cipher list can be used to limit the signature algorithms? Or are the signature algorithms used for different purposes than the cipher suites in the cipher list? If they have to be configured seperately, is the mechanism to use https://www.openssl.org/docs/man1.1.1/man3/SSL_CONF_cmd_value_type.html ? Thanks! Sam From jb-openssl at wisemo.com Tue Dec 4 15:15:11 2018 From: jb-openssl at wisemo.com (Jakob Bohm) Date: Tue, 4 Dec 2018 16:15:11 +0100 Subject: [openssl-users] [EXTERNAL] Re: Self-signed error when using SSL_CTX_load_verify_locations CApath In-Reply-To: <20181201205312.GB79754@straasha.imrryr.org> References: <027201d488d4$c06f2220$414d6660$@mcn.org> <032001d488f8$2dc529f0$894f7dd0$@mcn.org> <858BE247-C4BB-4C6B-B387-A9DC47C65915@dukhovni.org> <7d550152a633423611aa8df6bdc6a7897a97838f.camel@sandia.gov> <20181201205312.GB79754@straasha.imrryr.org> Message-ID: <27f1c6bb-832c-17d9-3ee5-a23bfeb726dc@wisemo.com> On 01/12/2018 21:53, Viktor Dukhovni wrote: > On Sat, Dec 01, 2018 at 07:12:24PM +0000, Michael Wojcik wrote: > >>> Are there compatibility concerns around changing error message >>> text for which users may have created regex patterns in scripts? >>> >>> I agree the text could be better, but not sure in what releases >>> if any to change the text, since the change may cause issues >>> for some users. >> Sure, this is always a concern. Maybe the change could be considered for OpenSSL 3.0, since that's a major release. > Care to create a PR against the "master" branch? Something > along the lines of: > > "Provided chain ends with untrusted self-signed certificate" > > or better. Here "untrusted" might mean not trusted for the requested > purpose, but more precise is not always more clear. > Perhaps s/untrusted/unknown/ as in "Provided chain ends with unknown self-signed certificate". 
Or even better, two different error codes: ?- "Only self-signed end certificate provided" ?- "Provided chain ends with unknown root certificate" (Deciding which one keeps the old error code is left as ?an exercise). (Distinguishing a self-siged end cert from a self-signed ?root when no other certificate is provided is also left ?as an exercise). Enjoy Jakob -- Jakob Bohm, CIO, Partner, WiseMo A/S. https://www.wisemo.com Transformervej 29, 2860 S?borg, Denmark. Direct +45 31 13 16 10 This public discussion message is non-binding and may contain errors. WiseMo - Remote Service Management for PCs, Phones and Embedded From matt at openssl.org Tue Dec 4 17:17:49 2018 From: matt at openssl.org (Matt Caswell) Date: Tue, 4 Dec 2018 17:17:49 +0000 Subject: [openssl-users] what is the relationship between (Client)SignatureAlgorithms and cipher_list()? In-Reply-To: References: Message-ID: On 04/12/2018 04:56, Sam Roberts wrote: > Do they overlap in purpose, so the cipher list can be used to limit > the signature algorithms? Or are the signature algorithms used for > different purposes than the cipher suites in the cipher list? The answer varies depending on whether you are talking about TLSv1.2 or TLSv1.3. I'll discuss normal sig algs first (as opposed to client sig algs). In both TLSv1.2 and TLSv1.3 the sig algs describe what signature schemes the client is willing to accept from the server for the signature created in a ServerKeyExchange message (TLSv1.2) or a CertificateVerify message (TLSv1.3). The server will create the signature using the algorithm associated with the public key in the certificate. A TLSv1.2 ciphersuite contains a number of different components - one of which is an algorithm that will be used for signing. Therefore, in TLSv1.2, the ciphersuite that eventually gets selected must be consistent with the algorithm associated with the public key in the certificate and the selected sig alg. In TLSv1.2 on the client side OpenSSL will automatically restrict the cipher list to only include those ciphersuites which are consistent with its configured sig algs. So, for example, if the client is not willing to accept ECDSA based sig algs then it will not offer any ECDSA based ciphersuites. In TLSv1.2 on the server side OpenSSL will attempt to choose a ciphersuite from its configured set that is both consistent with the sig algs sent from the client and with its configured certificates. So for example if the server has both an RSA and an ECDSA certificate available but the client only offers RSA based sig algs, then the server will select an RSA based ciphersuite, and vice versa. In TLSv1.3 things are slightly different because TLSv1.3 ciphersuites do not specify a signature algorithm at all. So, on the client side, the TLSv1.3 ciphersuites sent are unaffected by the configured sig algs. On the server side the TLSv1.3 ciphersuite is selected independent of the sig algs. The server independently chooses a signature algorithm to use entirely based on its available certificate algorithms. Client sig algs are similar to normal sig algs but they only come in to play when client auth has been configured, i.e. the server requests a certificate from the client and the client provides one. In this case, both in TLSv1.2 and TLSv1.3, the sig alg selected is entirely based on the algorithm in the client cert. It is not impacted by the ciphersuite at all. 
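To make the "configured separately" point concrete, here is a small illustrative sketch. The cipher, ciphersuite and signature-algorithm strings below are examples only, not recommendations, and the TLSv1.3 calls assume OpenSSL 1.1.1 or later.

```
/*
 * Illustrative only: signature algorithms and cipher lists are configured
 * through separate calls. The strings are examples, not recommendations.
 */
#include <openssl/ssl.h>

int configure_ctx(SSL_CTX *ctx)
{
    /* TLSv1.2-and-below cipher list (has no effect on TLSv1.3 suites). */
    if (SSL_CTX_set_cipher_list(ctx, "ECDHE-ECDSA-AES128-GCM-SHA256:"
                                     "ECDHE-RSA-AES128-GCM-SHA256") != 1)
        return 0;

    /* TLSv1.3 ciphersuites are a separate list (OpenSSL 1.1.1+). */
    if (SSL_CTX_set_ciphersuites(ctx, "TLS_AES_128_GCM_SHA256:"
                                      "TLS_AES_256_GCM_SHA384") != 1)
        return 0;

    /* Signature algorithms accepted for the peer's handshake signature. */
    if (SSL_CTX_set1_sigalgs_list(ctx, "ECDSA+SHA256:RSA+SHA256") != 1)
        return 0;

    /* Signature algorithms for client authentication, configured separately. */
    if (SSL_CTX_set1_client_sigalgs_list(ctx, "ECDSA+SHA256:RSA+SHA256") != 1)
        return 0;

    return 1;
}
```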
So, for example, the ciphersuite could be based on RSA (because the server's certificate is based on RSA) but the client certificate's algorithm could be ECDSA (and hence the client sig alg in use will be ECDSA based). > > If they have to be configured seperately, is the mechanism to use > https://www.openssl.org/docs/man1.1.1/man3/SSL_CONF_cmd_value_type.html > ? They do have to be configured separately. You can do this via the SSL_CONF interface if you wish. Or you can set them directly using the functions described on these man pages: https://www.openssl.org/docs/man1.1.1/man3/SSL_CTX_set1_sigalgs.html https://www.openssl.org/docs/man1.1.1/man3/SSL_CTX_set_cipher_list.html Matt From anipatel at cisco.com Tue Dec 4 17:21:59 2018 From: anipatel at cisco.com (Animesh Patel (anipatel)) Date: Tue, 4 Dec 2018 17:21:59 +0000 Subject: [openssl-users] OCSP response signed by self-signed trusted responder validation Message-ID: <460801FD-7F2E-4D9A-A3A6-5373F0431D34@cisco.com> Have a question with implementing an OCSP requestor that can handle validating an OCSP response that is not signed by the CA who issued the certificate that we are requesting the OCSP status for but rather, the OCSP response is signed by a self-signed trusted responder that includes the OCSP Signing EKU and the self-signed certificate is configured as trusted on the requesting system. Question is how to get past the check in OCSP_basic_verify() that calls ocsp_check_issuer() with the responder chain and fails in ocsp_match_issuerid() since the issuer ID doesn't match the self-signed responder certificate ID causing the verify to fail with "OCSP routines:OCSP_basic_verify:root ca not trusted in ocsp_vfy.c line 176." Could someone please shed light on how this is expected to work for this scenario? Is it expected that the self-signed certificate needs to be added to have explicit trust so that it is allowed via the call to X509_check_trust() or is there something else I'm missing here? Thanks, Animesh From rsalz at akamai.com Tue Dec 4 17:39:27 2018 From: rsalz at akamai.com (Salz, Rich) Date: Tue, 4 Dec 2018 17:39:27 +0000 Subject: [openssl-users] OCSP response signed by self-signed trusted responder validation In-Reply-To: <460801FD-7F2E-4D9A-A3A6-5373F0431D34@cisco.com> References: <460801FD-7F2E-4D9A-A3A6-5373F0431D34@cisco.com> Message-ID: <3DD2EF16-E241-4350-B213-4B29A0305812@akamai.com> The responder isn't supposed to be self-signed. It's supposed to be signed by the CA issuing the certs. That way you know that the CA "trusts" the responder. Now, having said that, what you want to do is reasonable -- think of it as "out of band" trust. You will probably have to modify the source to support it, however. From anipatel at cisco.com Tue Dec 4 17:54:05 2018 From: anipatel at cisco.com (Animesh Patel (anipatel)) Date: Tue, 4 Dec 2018 17:54:05 +0000 Subject: [openssl-users] OCSP response signed by self-signed trusted responder validation In-Reply-To: <3DD2EF16-E241-4350-B213-4B29A0305812@akamai.com> References: <460801FD-7F2E-4D9A-A3A6-5373F0431D34@cisco.com> <3DD2EF16-E241-4350-B213-4B29A0305812@akamai.com> Message-ID: Thanks for the quick response Rich! Just a quick follow on. Per RFC6960 for OCSP, there are 3 options: All definitive response messages SHALL be digitally signed.
The key used to sign the response MUST belong to one of the following: - the CA who issued the certificate in question - a Trusted Responder whose public key is trusted by the requestor - a CA Designated Responder (Authorized Responder, defined in Section 4.2.2.2) who holds a specially marked certificate issued directly by the CA, indicating that the responder may issue OCSP responses for that CA I'm seeing the self-signed and/or even a separate PKI root or hierarchy that is designated to sign responses as the 2nd option above which is essentially an "out of band" trust that is configured on the requestor ahead of time. Are you saying option 2 from the RFC is not supported within OpenSSL and would require changes? Or am I misinterpreting option 2 above? Lastly, assuming my understanding is correct, I was thinking X509_check_trust() allows for communicating this "out of band" trust to OpenSSL for validation of OCSP responses, is this not what this trust setting is for? Thanks, Animesh From: "Salz, Rich" Date: Tuesday, December 4, 2018 at 12:39 PM To: "anipatel at cisco.com" , "openssl-users at openssl.org" Subject: Re: [openssl-users] OCSP response signed by self-signed trusted responder validation The responder isn't supposed to be self-signed. It's supposed to be signed by the CA issuing the certs. That way you know that the CA "trusts" the responder. Now, having said that, what you want to do is reasonable -- think of it as "out of band" trust. You will probably have to modify the source to support it, however. From rsalz at akamai.com Tue Dec 4 17:56:13 2018 From: rsalz at akamai.com (Salz, Rich) Date: Tue, 4 Dec 2018 17:56:13 +0000 Subject: [openssl-users] OCSP response signed by self-signed trusted responder validation In-Reply-To: References: <460801FD-7F2E-4D9A-A3A6-5373F0431D34@cisco.com> <3DD2EF16-E241-4350-B213-4B29A0305812@akamai.com> Message-ID: Perhaps you can build a trust store to handle your needs. I am not sure. From anipatel at cisco.com Tue Dec 4 18:01:18 2018 From: anipatel at cisco.com (Animesh Patel (anipatel)) Date: Tue, 4 Dec 2018 18:01:18 +0000 Subject: [openssl-users] OCSP response signed by self-signed trusted responder validation In-Reply-To: References: <460801FD-7F2E-4D9A-A3A6-5373F0431D34@cisco.com> <3DD2EF16-E241-4350-B213-4B29A0305812@akamai.com> Message-ID: <399554DB-739C-45D2-9641-97831568A30C@cisco.com> Thanks again Rich. If anyone else has any ideas please share. From: "Salz, Rich" Date: Tuesday, December 4, 2018 at 12:56 PM To: "anipatel at cisco.com" , "openssl-users at openssl.org" Subject: Re: [openssl-users] OCSP response signed by self-signed trusted responder validation Perhaps you can build a trust store to handle your needs. I am not sure.
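For what it is worth, one way to express this kind of out-of-band trust without patching OpenSSL may be to hand the locally configured responder certificate to OCSP_basic_verify() through its certs argument together with the OCSP_TRUSTOTHER flag, which skips chain building back to the issuing CA when the signer is found in that list. The sketch below is a suggestion, not something stated in the thread: it assumes the responder certificate has already been loaded from local configuration, and that accepting it this way (EKU checks, pinning, etc.) is acceptable under the deployment's policy.

```
/*
 * Sketch of one way to express "out of band" trust in a designated OCSP
 * responder: pass its certificate to OCSP_basic_verify() explicitly and set
 * OCSP_TRUSTOTHER so a signer found in that list is accepted without
 * building a chain back to the issuing CA. Policy checks are the caller's
 * responsibility.
 */
#include <openssl/ocsp.h>
#include <openssl/x509.h>

int verify_with_trusted_responder(OCSP_BASICRESP *br,
                                  X509 *trusted_responder,
                                  X509_STORE *store)
{
    STACK_OF(X509) *responders = sk_X509_new_null();
    int ok;

    if (responders == NULL || !sk_X509_push(responders, trusted_responder)) {
        sk_X509_free(responders);
        return 0;
    }

    /* OCSP_TRUSTOTHER: if the response is signed by a certificate in
     * 'responders', treat it as trusted and skip chain verification. */
    ok = OCSP_basic_verify(br, responders, store, OCSP_TRUSTOTHER) > 0;

    sk_X509_free(responders);   /* frees the stack only, not the X509 */
    return ok;
}
```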
URL: From Michael.Wojcik at microfocus.com Tue Dec 4 22:58:32 2018 From: Michael.Wojcik at microfocus.com (Michael Wojcik) Date: Tue, 4 Dec 2018 22:58:32 +0000 Subject: [openssl-users] [EXTERNAL] Re: Self-signed error when using SSL_CTX_load_verify_locations CApath In-Reply-To: <27f1c6bb-832c-17d9-3ee5-a23bfeb726dc@wisemo.com> References: <027201d488d4$c06f2220$414d6660$@mcn.org> <032001d488f8$2dc529f0$894f7dd0$@mcn.org> <858BE247-C4BB-4C6B-B387-A9DC47C65915@dukhovni.org> <7d550152a633423611aa8df6bdc6a7897a97838f.camel@sandia.gov> <20181201205312.GB79754@straasha.imrryr.org> <27f1c6bb-832c-17d9-3ee5-a23bfeb726dc@wisemo.com> Message-ID: > From: openssl-users [mailto:openssl-users-bounces at openssl.org] On Behalf > Of Jakob Bohm via openssl-users > Sent: Tuesday, December 04, 2018 08:15 > > Care to create a PR against the "master" branch? Something > > along the lines of: > > > > "Provided chain ends with untrusted self-signed certificate" > > > > or better. Here "untrusted" might mean not trusted for the requested > > purpose, but more precise is not always more clear. > > > Perhaps s/untrusted/unknown/ as in > > "Provided chain ends with unknown self-signed certificate". Yes, that might be better. Or maybe "unrecognized". Of course there's scope for someone to misinterpret regardless of which term is used. I can suggest various alternatives in the PR and let the team decide. > Or even better, two different error codes: > > - "Only self-signed end certificate provided" > > - "Provided chain ends with unknown root certificate" > > (Deciding which one keeps the old error code is left as > an exercise). I can raise that as a possibility too, in the PR. Obviously it's a bit more work than simply changing the existing text. -- Michael Wojcik Distinguished Engineer, Micro Focus From uri at ll.mit.edu Tue Dec 4 23:19:53 2018 From: uri at ll.mit.edu (Blumenthal, Uri - 0553 - MITLL) Date: Tue, 4 Dec 2018 23:19:53 +0000 Subject: [openssl-users] [EXTERNAL] Re: Self-signed error when using SSL_CTX_load_verify_locations CApath In-Reply-To: References: <027201d488d4$c06f2220$414d6660$@mcn.org> <032001d488f8$2dc529f0$894f7dd0$@mcn.org> <858BE247-C4BB-4C6B-B387-A9DC47C65915@dukhovni.org> <7d550152a633423611aa8df6bdc6a7897a97838f.camel@sandia.gov> <20181201205312.GB79754@straasha.imrryr.org> <27f1c6bb-832c-17d9-3ee5-a23bfeb726dc@wisemo.com> Message-ID: > "Provided chain ends with unknown self-signed certificate". I like this. IMHO "unrecognized" would be more confusing. I hope the team makes up their mind quickly. ?On 12/4/18, 6:17 PM, "openssl-users on behalf of Michael Wojcik" wrote: > From: openssl-users [mailto:openssl-users-bounces at openssl.org] On Behalf > Of Jakob Bohm via openssl-users > Sent: Tuesday, December 04, 2018 08:15 > > Care to create a PR against the "master" branch? Something > > along the lines of: > > > > "Provided chain ends with untrusted self-signed certificate" > > > > or better. Here "untrusted" might mean not trusted for the requested > > purpose, but more precise is not always more clear. > > > Perhaps s/untrusted/unknown/ as in > > "Provided chain ends with unknown self-signed certificate". Yes, that might be better. Or maybe "unrecognized". Of course there's scope for someone to misinterpret regardless of which term is used. I can suggest various alternatives in the PR and let the team decide. 
> Or even better, two different error codes: > > - "Only self-signed end certificate provided" > > - "Provided chain ends with unknown root certificate" > > (Deciding which one keeps the old error code is left as > an exercise). I can raise that as a possibility too, in the PR. Obviously it's a bit more work than simply changing the existing text. -- Michael Wojcik Distinguished Engineer, Micro Focus -- openssl-users mailing list To unsubscribe: https://mta.openssl.org/mailman/listinfo/openssl-users From openssl-users at dukhovni.org Tue Dec 4 23:50:30 2018 From: openssl-users at dukhovni.org (Viktor Dukhovni) Date: Tue, 4 Dec 2018 18:50:30 -0500 Subject: [openssl-users] [EXTERNAL] Re: Self-signed error when using SSL_CTX_load_verify_locations CApath In-Reply-To: <27f1c6bb-832c-17d9-3ee5-a23bfeb726dc@wisemo.com> References: <027201d488d4$c06f2220$414d6660$@mcn.org> <032001d488f8$2dc529f0$894f7dd0$@mcn.org> <858BE247-C4BB-4C6B-B387-A9DC47C65915@dukhovni.org> <7d550152a633423611aa8df6bdc6a7897a97838f.camel@sandia.gov> <20181201205312.GB79754@straasha.imrryr.org> <27f1c6bb-832c-17d9-3ee5-a23bfeb726dc@wisemo.com> Message-ID: <20181204235030.GU79754@straasha.imrryr.org> On Tue, Dec 04, 2018 at 04:15:11PM +0100, Jakob Bohm via openssl-users wrote: > > Care to create a PR against the "master" branch? Something > > along the lines of: > > > > "Provided chain ends with untrusted self-signed certificate" > > > > or better. Here "untrusted" might mean not trusted for the requested > > purpose, but more precise is not always more clear. > > Perhaps s/untrusted/unknown/ as in > > "Provided chain ends with unknown self-signed certificate". I don't see why "unknown" is better, it could under certain conditions be "known", but not trusted. > Or even better, two different error codes: > > - "Only self-signed end certificate provided" > > - "Provided chain ends with unknown root certificate" That already exists: crypto/x509/x509_txt.c: case X509_V_ERR_DEPTH_ZERO_SELF_SIGNED_CERT: return "self signed certificate"; case X509_V_ERR_SELF_SIGNED_CERT_IN_CHAIN: return "self signed certificate in certificate chain"; -- Viktor. From zhongju_li at yahoo.com Wed Dec 5 00:00:28 2018 From: zhongju_li at yahoo.com (zhongju li) Date: Wed, 5 Dec 2018 00:00:28 +0000 (UTC) Subject: [openssl-users] Creating PKCS#8 from pvk format References: <330952622.1937937.1543968028802.ref@mail.yahoo.com> Message-ID: <330952622.1937937.1543968028802@mail.yahoo.com> Hello, I am working on a small homework assignment which requires converting a pvk private key to PKCS#8 format. The code is based on OpenSSL 1.0.2. I can get the pvk private key components (Public exponent, modulus, prime1, prime2, exponent1, exponent2, coefficient, private exponent) properly, and convert them to a valid RSA format (RSA_check_key() returns success). Now I need to convert the key in RSA format to EVP_PKEY, then to PKCS#8. I have tried the following functions, all of these functions return 0 (failure) without any further debugging information/clues: EVP_PKEY_assign_RSA(pEvpkey, rsa); EVP_PKEY_set1_RSA(pEvpkey, rsa); PEM_write_bio_RSAPrivateKey(out, rsa, cipher, NULL, 0, NULL, NULL); PEM_write_bio_PKCS8PrivateKey(out, pEvpkey, 0, 0, 0, 0, 0); I did google searching, but have not figured out why the above functions failed (one posting mentions "export version" vs. domestic version?). So, I'd like to get some help: 1.
hopefully, with more debug information. 2. Suggestion: based on OpenSSL 1.0.2, what is the correct function chain to change a pvk private key to PKCS#5? Any suggestions or input are appreciated. Xuan From wiml at omnigroup.com Wed Dec 5 00:40:17 2018 From: wiml at omnigroup.com (Wim Lewis) Date: Tue, 4 Dec 2018 16:40:17 -0800 Subject: [openssl-users] Creating PKCS#8 from pvk format In-Reply-To: <330952622.1937937.1543968028802@mail.yahoo.com> References: <330952622.1937937.1543968028802.ref@mail.yahoo.com> <330952622.1937937.1543968028802@mail.yahoo.com> Message-ID: <8B64E15B-00B1-4302-869E-680160F1071D@omnigroup.com> On 4. des. 2018, at 4:00 e.h., zhongju li via openssl-users wrote: > Now I need to convert the key in RSA format to EVP_PKEY, then to PKCS#8. I have tried the following functions, all of these functions return 0 (failure) without any further debugging information/clues: > EVP_PKEY_assign_RSA(pEvpkey, rsa); Is it possible that pEvpkey or rsa is NULL? (You need to create an EVP_PKEY with EVP_PKEY_new() before putting a specific key into it.) Otherwise, have you checked whether there is anything in the openssl error stack (using ERR_get_error(), ERR_print_errors_fp(), or similar)? > I did google searching, but have not figured out why the above functions failed (one posting mentions "export version" vs. domestic version?). There used to be different versions because of US export laws but I don't think that has been the case for many years. From zhongju_li at yahoo.com Wed Dec 5 03:05:21 2018 From: zhongju_li at yahoo.com (zhongju li) Date: Wed, 5 Dec 2018 03:05:21 +0000 (UTC) Subject: [openssl-users] Creating PKCS#8 from pvk format In-Reply-To: <8B64E15B-00B1-4302-869E-680160F1071D@omnigroup.com> References: <330952622.1937937.1543968028802.ref@mail.yahoo.com> <330952622.1937937.1543968028802@mail.yahoo.com> <8B64E15B-00B1-4302-869E-680160F1071D@omnigroup.com> Message-ID: <1356809009.2019733.1543979121685@mail.yahoo.com> Hi Wim, Thank you for your quick response. 1. Yes. I called EVP_PKEY_new() before calling EVP_PKEY_assign_RSA(pEvpkey, rsa); 2. For your second question: no, I have not checked whether there is anything in the openssl error stack. I will check the openssl error stack. 3. (1) If it works, is EVP_PKEY_assign_RSA(pEvpkey, rsa) the correct function to call to get pEvpkey (EVP_PKEY) from an RSA private key? Is there any other alternative function to get pEvpkey (EVP_PKEY) from an RSA private key? (2) Once I get pEvpkey, can I call the following functions to get the PKCS#8 DER format: (a) PKCS8_PRIV_KEY_INFO *p8 = EVP_PKEY2PKCS8(pEvpkey); (b) int der_len = i2d_PKCS8_PRIV_KEY_INFO(p8, &der); Do you expect the above function calls to work? If not, what is the correct way to get PKCS#8 DER from pvk format? Thank you On Tuesday, December 4, 2018, 7:40:19 PM EST, Wim Lewis wrote: On 4. des. 2018, at 4:00 e.h., zhongju li via openssl-users wrote: > Now I need to convert the key in RSA format to EVP_PKEY, then to PKCS#8. I have tried the following functions, all of these functions return 0 (failure) without any further debugging information/clues: > EVP_PKEY_assign_RSA(pEvpkey, rsa); Is it possible that pEvpkey or rsa is NULL? (You need to create an EVP_PKEY with EVP_PKEY_new() before putting a specific key into it.) Otherwise, have you checked whether there is anything in the openssl error stack (using ERR_get_error(), ERR_print_errors_fp(), or similar)?
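Putting Wim's suggestions together, a minimal sketch of the conversion step might look like the following. It assumes an RSA * that already passes RSA_check_key(), uses only the 1.0.2-era calls already named in this thread, writes unencrypted PKCS#8 PEM, and dumps the error stack when something fails; the function and variable names are made up for illustration.

```
/*
 * Minimal sketch (OpenSSL 1.0.2-era API, as discussed above): wrap an
 * already-built RSA key in an EVP_PKEY and write it out as unencrypted
 * PKCS#8 PEM, printing the error stack if anything fails.
 * Assumes 'rsa' was filled in from the PVK components beforehand.
 */
#include <stdio.h>
#include <openssl/evp.h>
#include <openssl/pem.h>
#include <openssl/err.h>

int rsa_to_pkcs8_pem(RSA *rsa, FILE *out_fp)
{
    EVP_PKEY *pkey = EVP_PKEY_new();
    BIO *out = BIO_new_fp(out_fp, BIO_NOCLOSE);
    int ok = 0;

    if (pkey == NULL || out == NULL)
        goto done;

    if (EVP_PKEY_set1_RSA(pkey, rsa) != 1)      /* caller keeps its ref to rsa */
        goto done;

    /* NULL cipher => unencrypted PKCS#8; pass a cipher and passphrase
     * callback instead to get an encrypted PrivateKeyInfo. */
    if (PEM_write_bio_PKCS8PrivateKey(out, pkey, NULL, NULL, 0, NULL, NULL) != 1)
        goto done;

    ok = 1;
done:
    if (!ok)
        ERR_print_errors_fp(stderr);
    BIO_free(out);
    EVP_PKEY_free(pkey);
    return ok;
}
```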
> I did google searching, but have not figured out why the about functions failed (one posting mentions ?export version? vs. domestic version??). There used to be different versions because of US export laws but I don't think that has been the case for many years. -------------- next part -------------- An HTML attachment was scrubbed... URL: From janjust at nikhef.nl Wed Dec 5 09:49:07 2018 From: janjust at nikhef.nl (Jan Just Keijser) Date: Wed, 5 Dec 2018 10:49:07 +0100 Subject: [openssl-users] Question on necessity of SSL_CTX_set_client_CA_list In-Reply-To: References: <050b01d48aa0$7d351be0$779f53a0$@mcn.org> <05b101d48b31$47d9c720$d78d5560$@mcn.org> <060c01d48b47$ac112290$043367b0$@mcn.org> Message-ID: <2f4931ad-ce4e-3b03-dcd5-0c268203c5b4@nikhef.nl> Hi, On 03/12/18 21:40, Viktor Dukhovni wrote: >> On Dec 3, 2018, at 3:35 PM, Charles Mills wrote: >> >> OCSP and OCSP stapling are currently higher on my wish list than this. > Good luck with OCSP, the documentation could definitely be better, and > various projects get it wrong. IIRC curl gets OCSP right, so you > could look there for example code, some other projects go through the > motions, but don't always achieve a robust result. > > [ FWIW, I don't care much for OCSP, it's often not required, so it is > then not clear what security properties it provides. ] the only reason to use OCSP I currently have is in Firefox:? if you turn off "Query OCSP responder servers" in Firefox then EV certificates will no longer show up with their owner/domain name. Now the question is:?? does Firefox get OCSP "right" ;) ? cheers, JJK / Jan Just Keijser From vlebourl at gmail.com Wed Dec 5 13:21:28 2018 From: vlebourl at gmail.com (Vincent Le Bourlot) Date: Wed, 5 Dec 2018 14:21:28 +0100 Subject: [openssl-users] version OPENSSL_1_1_1 not defined in file libcrypto.so.1.1 with link time reference Message-ID: Hi After a fresh build of branch OpenSSL_1_1_1-stable on our ppc64 machine, openssl seems broken for an unknown reason? Executing `openssl version` results in: ``` $ openssl version openssl: relocation error: openssl: symbol SCRYPT_PARAMS_it, version OPENSSL_1_1_1 not defined in file libcrypto.so.1.1 with link time reference ``` Same kind of error happens when ctest tries to upload a file to our CDash: ``` ctest: relocation error: ctest: symbol SSL_CTX_set_post_handshake_auth, version OPENSSL_1_1_1 not defined in file libssl.so.1.1 with link time reference ``` Any help would be greatly appreciated! Thanks in advance Vincent -------------------------------------- Vincent Le Bourlot http://vlebourlot.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From vieuxtech at gmail.com Wed Dec 5 16:08:48 2018 From: vieuxtech at gmail.com (Sam Roberts) Date: Wed, 5 Dec 2018 08:08:48 -0800 Subject: [openssl-users] version OPENSSL_1_1_1 not defined in file libcrypto.so.1.1 with link time reference In-Reply-To: References: Message-ID: On Wed, Dec 5, 2018 at 5:22 AM Vincent Le Bourlot wrote: > After a fresh build of branch OpenSSL_1_1_1-stable on our ppc64 machine, openssl seems broken for an unknown reason? > Executing `openssl version` results in: I'm no expert, but try `ldd openssl`, is it dynamically linking against the libcrypto/libssl that you just built? If not, try setting LD_LIBRARY_PATH (I had to do that with my local builds from source). 
From openssl-users at dukhovni.org Wed Dec 5 16:59:15 2018 From: openssl-users at dukhovni.org (Viktor Dukhovni) Date: Wed, 5 Dec 2018 11:59:15 -0500 Subject: [openssl-users] Question on necessity of SSL_CTX_set_client_CA_list In-Reply-To: <2f4931ad-ce4e-3b03-dcd5-0c268203c5b4@nikhef.nl> References: <050b01d48aa0$7d351be0$779f53a0$@mcn.org> <05b101d48b31$47d9c720$d78d5560$@mcn.org> <060c01d48b47$ac112290$043367b0$@mcn.org> <2f4931ad-ce4e-3b03-dcd5-0c268203c5b4@nikhef.nl> Message-ID: <1B3A82E8-FD3D-427A-894D-BEE483C749B9@dukhovni.org> > On Dec 5, 2018, at 4:49 AM, Jan Just Keijser wrote: > > The only reason to use OCSP I currently have is in Firefox: if you turn off > "Query OCSP responder servers" in Firefox then EV certificates will no longer > show up with their owner/domain name. IIRC Apple's Safari is ending support for EV, and some say that EV has failed, and are not sorry to see it go. > Now the question is: does Firefox get OCSP "right" ;) ? Very likely yes. The Firefox TLS stack is maintained by experts. [ Also, FWIW, Firefox uses the "nss" library, not OpenSSL. ] -- Viktor. From lindblad at gmx.com Thu Dec 6 00:15:52 2018 From: lindblad at gmx.com (Eric Lindblad) Date: Thu, 6 Dec 2018 01:15:52 +0100 Subject: [openssl-users] Kermit Project Message-ID: An HTML attachment was scrubbed... URL: From jb-openssl at wisemo.com Thu Dec 6 09:03:21 2018 From: jb-openssl at wisemo.com (Jakob Bohm) Date: Thu, 6 Dec 2018 10:03:21 +0100 Subject: [openssl-users] Question on necessity of SSL_CTX_set_client_CA_list In-Reply-To: <1B3A82E8-FD3D-427A-894D-BEE483C749B9@dukhovni.org> References: <050b01d48aa0$7d351be0$779f53a0$@mcn.org> <05b101d48b31$47d9c720$d78d5560$@mcn.org> <060c01d48b47$ac112290$043367b0$@mcn.org> <2f4931ad-ce4e-3b03-dcd5-0c268203c5b4@nikhef.nl> <1B3A82E8-FD3D-427A-894D-BEE483C749B9@dukhovni.org> Message-ID: On 05/12/2018 17:59, Viktor Dukhovni wrote: >> On Dec 5, 2018, at 4:49 AM, Jan Just Keijser wrote: >> >> The only reason to use OCSP I currently have is in Firefox: if you turn off >> "Query OCSP responder servers" in Firefox then EV certificates will no longer >> show up with their owner/domain name. > IIRC Apple's Safari is ending support for EV, and some say that EV > has failed, and are not sorry to see it go. This is very bad for security.? So far the only real failures have been: 1. Some cloud provider(s) actively want to reduce all TLS security to ? the anonymous form provided by Let's encrypt, and are doing their worst ? to sabotage EV providing CAs. 2. As part of this campaign, those same cloud provider(s) take every ? opportunity to declare EV (and even OV) certificates as worthless ? and irrelevant. 3. At least one of those cloud provider(s) publishes a widely used ? "browser", in which they have preemptively removed support. Apple being tricked into removing support (contrary to their public hard stance on user security) is sad. >> Now the question is: does Firefox get OCSP "right" ;) ? > Very likely yes. The Firefox TLS stack is maintained by experts. > [ Also, FWIW, Firefox uses the "nss" library, not OpenSSL. ] > However Firefox code also contains lots of idiotic usability bugs, even in the code that talks to the TLS stack.? It is quite possible that the "OCSP must be on" rule is another bad usability hangover from the set of badly thought out UI changes made to initially promote EV certificates, just like the hiding of company names from non-EV certificates that actually contain them (so called OV certificates). 
Enjoy Jakob -- Jakob Bohm, CIO, Partner, WiseMo A/S. https://www.wisemo.com Transformervej 29, 2860 S?borg, Denmark. Direct +45 31 13 16 10 This public discussion message is non-binding and may contain errors. WiseMo - Remote Service Management for PCs, Phones and Embedded From jb-openssl at wisemo.com Thu Dec 6 09:18:47 2018 From: jb-openssl at wisemo.com (Jakob Bohm) Date: Thu, 6 Dec 2018 10:18:47 +0100 Subject: [openssl-users] [EXTERNAL] Re: Self-signed error when using SSL_CTX_load_verify_locations CApath In-Reply-To: <20181204235030.GU79754@straasha.imrryr.org> References: <027201d488d4$c06f2220$414d6660$@mcn.org> <032001d488f8$2dc529f0$894f7dd0$@mcn.org> <858BE247-C4BB-4C6B-B387-A9DC47C65915@dukhovni.org> <7d550152a633423611aa8df6bdc6a7897a97838f.camel@sandia.gov> <20181201205312.GB79754@straasha.imrryr.org> <27f1c6bb-832c-17d9-3ee5-a23bfeb726dc@wisemo.com> <20181204235030.GU79754@straasha.imrryr.org> Message-ID: On 05/12/2018 00:50, Viktor Dukhovni wrote: > On Tue, Dec 04, 2018 at 04:15:11PM +0100, Jakob Bohm via openssl-users wrote: > >>> Care to create a PR against the "master" branch? Something >>> along the lines of: >>> >>> "Provided chain ends with untrusted self-signed certificate" >>> >>> or better. Here "untrusted" might mean not trusted for the requested >>> purpose, but more precise is not always more clear. >> Perhaps s/untrusted/unknown/ as in >> >> "Provided chain ends with unknown self-signed certificate". > I don't see why "unknown" is better, it could under certain conditions > be "known", but not trusted. Unknown would differ from untrusted in cases where there is some setting indicating that some certificates in the CA directory are trusted only for some/no purposes. This could (in current or future code) represent things such as the trust bits in "Trusted Certificate" files. >> Or even better, two different error codes: >> >> - "Only self-signed end certificate provided" >> >> - "Provided chain ends with unknown root certificate" > That already exists: > > crypto/x509/x509_txt.c: > > case X509_V_ERR_DEPTH_ZERO_SELF_SIGNED_CERT: > return "self signed certificate"; > case X509_V_ERR_SELF_SIGNED_CERT_IN_CHAIN: > return "self signed certificate in certificate chain"; > In that case, maybe change the text to: ? "Provided chain ends with an unknown and thus untrusted root certificate" This would capture both the fact that the root is unknown (not in the CA stores configured/loaded) and that this is the specific fact causing it to be untrusted. Enjoy Jakob -- Jakob Bohm, CIO, Partner, WiseMo A/S. https://www.wisemo.com Transformervej 29, 2860 S?borg, Denmark. Direct +45 31 13 16 10 This public discussion message is non-binding and may contain errors. WiseMo - Remote Service Management for PCs, Phones and Embedded From openssl at foocrypt.net Thu Dec 6 09:47:20 2018 From: openssl at foocrypt.net (openssl at foocrypt.net) Date: Thu, 6 Dec 2018 20:47:20 +1100 Subject: [openssl-users] AssAccess was passed with no amendments Message-ID: <2F3815AC-C4E5-4A8A-85A9-091B80353559@foocrypt.net> Does OpenSSL have a policy stance on government enforced back doors ? -- Regards, Mark A. Lane Cryptopocalypse NOW 01 04 2016 Volumes 0.0 -> 10.0 Now available through iTunes - iBooks @ https://itunes.apple.com/au/author/mark-a.-lane/id1100062966?mt=11 ? Mark A. Lane 1980 - 2018, All Rights Reserved. ? FooCrypt 1980 - 2018, All Rights Reserved. ? FooCrypt, A Tale of Cynical Cyclical Encryption. 1980 - 2018, All Rights Reserved. ? Cryptopocalypse 1980 - 2018, All Rights Reserved. 
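Whatever wording the library eventually settles on, an application can surface the underlying verification result itself. The following is a small illustrative sketch (not from the thread) of a verify callback that logs the numeric X509_V_ERR code, OpenSSL's text for it, and the depth and subject at which it occurred, which is often more diagnostic than the single summary string being debated above.

```
/*
 * Small sketch: log the exact verification error (code, library text, depth
 * and subject) from a verify callback.
 */
#include <stdio.h>
#include <openssl/ssl.h>
#include <openssl/x509_vfy.h>

static int verbose_verify_cb(int preverify_ok, X509_STORE_CTX *ctx)
{
    if (!preverify_ok) {
        int err = X509_STORE_CTX_get_error(ctx);
        int depth = X509_STORE_CTX_get_error_depth(ctx);
        X509 *cert = X509_STORE_CTX_get_current_cert(ctx);
        char subject[256] = "(none)";

        if (cert != NULL)
            X509_NAME_oneline(X509_get_subject_name(cert),
                              subject, sizeof(subject));
        fprintf(stderr, "verify error %d (%s) at depth %d: %s\n",
                err, X509_verify_cert_error_string(err), depth, subject);
    }
    return preverify_ok;   /* keep OpenSSL's verdict; just add diagnostics */
}

/* Installed with, e.g.: SSL_CTX_set_verify(ctx, SSL_VERIFY_PEER, verbose_verify_cb); */
```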
-------------- next part -------------- An HTML attachment was scrubbed... URL: From michael at stroeder.com Thu Dec 6 10:48:09 2018 From: michael at stroeder.com (=?UTF-8?Q?Michael_Str=c3=b6der?=) Date: Thu, 6 Dec 2018 11:48:09 +0100 Subject: [openssl-users] Question on necessity of SSL_CTX_set_client_CA_list In-Reply-To: References: <050b01d48aa0$7d351be0$779f53a0$@mcn.org> <05b101d48b31$47d9c720$d78d5560$@mcn.org> <060c01d48b47$ac112290$043367b0$@mcn.org> <2f4931ad-ce4e-3b03-dcd5-0c268203c5b4@nikhef.nl> <1B3A82E8-FD3D-427A-894D-BEE483C749B9@dukhovni.org> Message-ID: <2df0e89e-0d78-e735-7433-e06863400d4c@stroeder.com> On 12/6/18 10:03 AM, Jakob Bohm via openssl-users wrote: > On 05/12/2018 17:59, Viktor Dukhovni wrote: >> IIRC Apple's Safari is ending support for EV, and some say that EV >> has failed, and are not sorry to see it go. > > This is very bad for security.? So far the only real failures have > been: > > 1. Some cloud provider(s) actively want to reduce all TLS security to > ? the anonymous form provided by Let's encrypt, and are doing their worst > ? to sabotage EV providing CAs. Quoting from Peter Gutmann's "Engineering Security", section "EV Certificates: PKI-me-Harder" Indeed, cynics would say that this was exactly the problem that certificates and CAs were supposed to solve in the first place, and that ?high-assurance? certificates are just a way of charging a second time for an existing service. I fully agree with the above and I'm also for removing this crap from the browser UI. Ciao, Michael. -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 3829 bytes Desc: S/MIME Cryptographic Signature URL: From jb-openssl at wisemo.com Thu Dec 6 12:11:59 2018 From: jb-openssl at wisemo.com (Jakob Bohm) Date: Thu, 6 Dec 2018 13:11:59 +0100 Subject: [openssl-users] Question on necessity of SSL_CTX_set_client_CA_list In-Reply-To: <2df0e89e-0d78-e735-7433-e06863400d4c@stroeder.com> References: <050b01d48aa0$7d351be0$779f53a0$@mcn.org> <05b101d48b31$47d9c720$d78d5560$@mcn.org> <060c01d48b47$ac112290$043367b0$@mcn.org> <2f4931ad-ce4e-3b03-dcd5-0c268203c5b4@nikhef.nl> <1B3A82E8-FD3D-427A-894D-BEE483C749B9@dukhovni.org> <2df0e89e-0d78-e735-7433-e06863400d4c@stroeder.com> Message-ID: On 06/12/2018 11:48, Michael Str?der wrote: > On 12/6/18 10:03 AM, Jakob Bohm via openssl-users wrote: >> On 05/12/2018 17:59, Viktor Dukhovni wrote: >>> IIRC Apple's Safari is ending support for EV, and some say that EV >>> has failed, and are not sorry to see it go. >> This is very bad for security.? So far the only real failures have >> been: >> >> 1. Some cloud provider(s) actively want to reduce all TLS security to >> ? the anonymous form provided by Let's encrypt, and are doing their worst >> ? to sabotage EV providing CAs. > Quoting from Peter Gutmann's "Engineering Security", > section "EV Certificates: PKI-me-Harder" > > Indeed, cynics would say that this was exactly the problem that > certificates and CAs were supposed to solve in the first place, and > that ?high-assurance? certificates are just a way of charging a > second time for an existing service. > > I fully agree with the above and I'm also for removing this crap from > the browser UI. Peter Gutman, for all his talents, dislikes PKI with a vengeance. EV is a standard for OV certificates done right.? Which involves more thorough identity checks, stricter rules for the CAs to follow etc. 
The real point of EV certificates is to separate CAs that do a good job from those that do a more sloppy job, without completely distrusting the mediocre CA operations. Due to market forces, the good CAs also offer the weaker certificate types at a lower price to compete with the mediocre CAs that aren't good/thorough enough to do the full job. The way EV certs are highlighted in Browsers (Green bar etc.) was a way to create market demand for the higher quality. They could be indicated in some other useful way of course, but the distinction between "The CA did something to check the name and real world address in the certificate" (OV) versus "The CA checked the name and real world address thoroughly in accordance with the higher quality standard" (EV) is still of some significance. If you look at that long list of CA roots preinstalled in a typical browser, only a minority are authorized, trusted and audited to issue to the higher EV standard. Enjoy Jakob -- Jakob Bohm, CIO, Partner, WiseMo A/S. https://www.wisemo.com Transformervej 29, 2860 Søborg, Denmark. Direct +45 31 13 16 10 This public discussion message is non-binding and may contain errors. WiseMo - Remote Service Management for PCs, Phones and Embedded From uri at ll.mit.edu Thu Dec 6 20:06:58 2018 From: uri at ll.mit.edu (Blumenthal, Uri - 0553 - MITLL) Date: Thu, 6 Dec 2018 20:06:58 +0000 Subject: [openssl-users] Question on necessity of SSL_CTX_set_client_CA_list In-Reply-To: References: <050b01d48aa0$7d351be0$779f53a0$@mcn.org> <05b101d48b31$47d9c720$d78d5560$@mcn.org> <060c01d48b47$ac112290$043367b0$@mcn.org> <2f4931ad-ce4e-3b03-dcd5-0c268203c5b4@nikhef.nl> <1B3A82E8-FD3D-427A-894D-BEE483C749B9@dukhovni.org> <2df0e89e-0d78-e735-7433-e06863400d4c@stroeder.com> Message-ID: > > Quoting from Peter Gutmann's "Engineering Security", > > section "EV Certificates: PKI-me-Harder" > > > > Indeed, cynics would say that this was exactly the problem that > > certificates and CAs were supposed to solve in the first place, and > > that "high-assurance" certificates are just a way of charging a > > second time for an existing service. > > Peter Gutman, for all his talents, dislikes PKI with a vengeance. > EV is a standard for OV certificates done right. Which involves more > thorough identity checks, stricter rules for the CAs to follow etc. > > The real point of EV certificates is to separate CAs that do a good > job from those that do a more sloppy job, without completely distrusting > the mediocre CA operations. So, a CA that's supposed to validate its customer before issuing a certificate, may do a "more sloppy job" if he doesn't cough up some extra money. I think Peter is exactly right here. CAs either do their job, or they don't. If they agree to certify a set of attributes, they ought to verify each one of them. -------------- next part -------------- A non-text attachment was scrubbed...
Name: smime.p7s Type: application/pkcs7-signature Size: 5211 bytes Desc: not available URL: From openssl-users at dukhovni.org Thu Dec 6 20:16:05 2018 From: openssl-users at dukhovni.org (Viktor Dukhovni) Date: Thu, 6 Dec 2018 15:16:05 -0500 Subject: [openssl-users] Question on necessity of SSL_CTX_set_client_CA_list In-Reply-To: References: <050b01d48aa0$7d351be0$779f53a0$@mcn.org> <05b101d48b31$47d9c720$d78d5560$@mcn.org> <060c01d48b47$ac112290$043367b0$@mcn.org> <2f4931ad-ce4e-3b03-dcd5-0c268203c5b4@nikhef.nl> <1B3A82E8-FD3D-427A-894D-BEE483C749B9@dukhovni.org> <2df0e89e-0d78-e735-7433-e06863400d4c@stroeder.com> Message-ID: > On Dec 6, 2018, at 3:06 PM, Blumenthal, Uri - 0553 - MITLL wrote: > > So, a CA that's supposed to validate its customer before issuing a certificate, may do a "more sloppy job" if he doesn't cough up some extra money. > > I think Peter is exactly right here. CA either do their job, or they don't. If they agree to certify a set of attributes, they ought to verify each one of them. While the point of EV was that it certified a binding to a (domain + business name) rather than just a domain with DV, it turned out that displaying the business name was also subject to abuse, and the security gain proved elusive. https://www.troyhunt.com/extended-validation-certificates-are-dead/ -- Viktor. From jb-openssl at wisemo.com Thu Dec 6 22:56:14 2018 From: jb-openssl at wisemo.com (Jakob Bohm) Date: Thu, 6 Dec 2018 23:56:14 +0100 Subject: [openssl-users] Question on necessity of SSL_CTX_set_client_CA_list In-Reply-To: References: <050b01d48aa0$7d351be0$779f53a0$@mcn.org> <05b101d48b31$47d9c720$d78d5560$@mcn.org> <060c01d48b47$ac112290$043367b0$@mcn.org> <2f4931ad-ce4e-3b03-dcd5-0c268203c5b4@nikhef.nl> <1B3A82E8-FD3D-427A-894D-BEE483C749B9@dukhovni.org> <2df0e89e-0d78-e735-7433-e06863400d4c@stroeder.com> Message-ID: <8f667a91-0d89-f51c-8509-796f618f9caf@wisemo.com> On 06/12/2018 21:16, Viktor Dukhovni wrote: >> On Dec 6, 2018, at 3:06 PM, Blumenthal, Uri - 0553 - MITLL wrote: >> >> So, a CA that's supposed to validate its customer before issuing a certificate, may do a "more sloppy job" if he doesn't cough up some extra money. >> >> I think Peter is exactly right here. CA either do their job, or they don't. If they agree to certify a set of attributes, they ought to verify each one of them. No, Uri you get it wrong.? Different levels of certainty is the point. Consider it like this: DV: A regular printed business card that you can get from a ? vending machine, proves very little. ? ? The CA just checks that the person or robot requesting the ? certificate has some semblance of control over the domain ? name at the time of issuance.? Price is as low as $0. OV: A debit card with the supposed owners name on it, available ? from a number of companies that do minimal checking, but still ? a better ID proof than a business card. ? ? The CA must check that the company name and address are true, ? using some basic steps such as checking that a company by that ? name exists at that address and confirms they are the ones ? requesting the certificate.? There is no check that the company ? name is an official name or that the company has a business ? license etc.? A traditional lemonade stand run by children can ? potentially get an OV certificate if they stay in one place for ? the time it takes to get the certificate.? (A CA agent visiting ? the company site is enough checking of company existence for OV). EV: A proper photo ID with serious identity checking before being ? 
issued, like a government passport.? Includes the holders ? legal name and government ID number (literally), which can be ? used to look up the subjects legal status. ? ? The CA must check public records, and do some hard checks that ? the request is officially from that company.? There is a 50+ ? pages official specification listing how every tidbit of ? this information must be checked.? The CA cannot limit ? its own liability for certain failures to less than $2000. Each step up the ladder gives the user more certainty the person/website is who it says it is, but is more expensive and difficult to obtain for the person/website.? Each step also costs more money for the CA to check, because there is more work to do. The "make it look green" and "fights crime" slogans were just the old marketing campaign, repeated endlessly as a more efficient sales pressure than the real explanation. > While the point of EV was that it certified a binding to a (domain + business name) > rather than just a domain with DV, it turned out that displaying the business name > was also subject to abuse, and the security gain proved elusive. > > https://www.troyhunt.com/extended-validation-certificates-are-dead/ A traveling salesman for a cloud provider. Enjoy Jakob -- Jakob Bohm, CIO, Partner, WiseMo A/S. https://www.wisemo.com Transformervej 29, 2860 S?borg, Denmark. Direct +45 31 13 16 10 This public discussion message is non-binding and may contain errors. WiseMo - Remote Service Management for PCs, Phones and Embedded From openssl-users at dukhovni.org Thu Dec 6 23:12:07 2018 From: openssl-users at dukhovni.org (Viktor Dukhovni) Date: Thu, 6 Dec 2018 18:12:07 -0500 Subject: [openssl-users] Question on necessity of SSL_CTX_set_client_CA_list In-Reply-To: <8f667a91-0d89-f51c-8509-796f618f9caf@wisemo.com> References: <050b01d48aa0$7d351be0$779f53a0$@mcn.org> <05b101d48b31$47d9c720$d78d5560$@mcn.org> <060c01d48b47$ac112290$043367b0$@mcn.org> <2f4931ad-ce4e-3b03-dcd5-0c268203c5b4@nikhef.nl> <1B3A82E8-FD3D-427A-894D-BEE483C749B9@dukhovni.org> <2df0e89e-0d78-e735-7433-e06863400d4c@stroeder.com> <8f667a91-0d89-f51c-8509-796f618f9caf@wisemo.com> Message-ID: <78BA59AF-31BF-4FB1-A73A-9630B3833039@dukhovni.org> > On Dec 6, 2018, at 5:56 PM, Jakob Bohm via openssl-users wrote: > >> While the point of EV was that it certified a binding to a (domain + business name) >> rather than just a domain with DV, it turned out that displaying the business name >> was also subject to abuse, and the security gain proved elusive. >> >> https://www.troyhunt.com/extended-validation-certificates-are-dead/ > > A traveling salesman for a cloud provider. That's an ad-hominem argument. Just because he may have an agenda, does not mean he's wrong. One might wish he were wrong, but perhaps the market has spoken otherwise. Or perhaps he really is wrong, we'll see... -- Viktor. 
From michael at stroeder.com Fri Dec 7 11:18:32 2018 From: michael at stroeder.com (=?UTF-8?Q?Michael_Str=c3=b6der?=) Date: Fri, 7 Dec 2018 12:18:32 +0100 Subject: [openssl-users] Question on necessity of SSL_CTX_set_client_CA_list In-Reply-To: <8f667a91-0d89-f51c-8509-796f618f9caf@wisemo.com> References: <050b01d48aa0$7d351be0$779f53a0$@mcn.org> <05b101d48b31$47d9c720$d78d5560$@mcn.org> <060c01d48b47$ac112290$043367b0$@mcn.org> <2f4931ad-ce4e-3b03-dcd5-0c268203c5b4@nikhef.nl> <1B3A82E8-FD3D-427A-894D-BEE483C749B9@dukhovni.org> <2df0e89e-0d78-e735-7433-e06863400d4c@stroeder.com> <8f667a91-0d89-f51c-8509-796f618f9caf@wisemo.com> Message-ID: <69f506b3-8079-60ca-c4e5-8265a22b68c8@stroeder.com> On 12/6/18 11:56 PM, Jakob Bohm via openssl-users wrote: > Different levels of certainty is the point. Which never worked well in practice, no matter how hard people tried to clearly define levels if certainty. Ciao, Michael. From aerowolf at gmail.com Fri Dec 7 22:00:53 2018 From: aerowolf at gmail.com (Kyle Hamilton) Date: Fri, 7 Dec 2018 16:00:53 -0600 Subject: [openssl-users] Question on necessity of SSL_CTX_set_client_CA_list In-Reply-To: References: <050b01d48aa0$7d351be0$779f53a0$@mcn.org> <05b101d48b31$47d9c720$d78d5560$@mcn.org> <060c01d48b47$ac112290$043367b0$@mcn.org> <2f4931ad-ce4e-3b03-dcd5-0c268203c5b4@nikhef.nl> <1B3A82E8-FD3D-427A-894D-BEE483C749B9@dukhovni.org> <2df0e89e-0d78-e735-7433-e06863400d4c@stroeder.com> Message-ID: CAs *do* verify the attributes they certify. That they're not presented as such is not the fault of the CAs, but rather of the browsers who insist on not changing or improving their UI. The thing is, if I run a website with a forum that I don't ask for money on and don't want any transactions happening on, why should I have to pay for the same level of certainty of my identity that a company like Amazon needs? (Why does Amazon need that much certainty? Well, I could set up wireless access points around coffee shops in December, point the DNS provided at those WAPs to my own server and run a fake amazon.com site to capture account credentials and credit cards. Without EV, that's a plausible attack. Especially with SSL being not-by-default, someone could type amazon.com and it can be intercepted without showing any certificate warning -- which then allows a redirect to a lookalike amazon.com name that could get certified by something like LetsEncrypt.) Plus, clouds have had a protocol available for doing queries to certs and keys held by other parties for several years. Cloudflare developed this protocol for banks, for whom loss of control of the certificate key is a reportable event, but who also often need DDoS protection. There's no reason it can't be extended to other clouds and sites -- unless Cloudflare patented it and wants royalties, in which case their rent-seeking is destroying the security of the web by convincing cloud salesmen to say that EV is too much trouble to deal with and thus should be killed off in the marketplace. Demanding that EV be perfect and dropping support for it if it has any found vulnerability falls into a class of human behavior known as "letting the perfect be the enemy of the good", which is also known as "cutting off the nose to spite the face". It still cuts down on a huge number of potential attacks, and doing away with it allows those attacks to flourish again. (Which, by the way, is what organized crime would prefer to permit.) 
-Kyle H On Thu, Dec 6, 2018, 14:07 Blumenthal, Uri - 0553 - MITLL > > Quoting from Peter Gutmann's "Engineering Security", > > > section "EV Certificates: PKI-me-Harder" > > > > > > Indeed, cynics would say that this was exactly the problem that > > > certificates and CAs were supposed to solve in the first > place, and > > > that ?high-assurance? certificates are just a way of charging a > > > second time for an existing service. > > > > Peter Gutman, for all his talents, dislikes PKI with a vengeance. > > EV is a standard for OV certificates done right. Which involves more > > thorough identity checks, stricter rules for the CAs to follow etc. > > > > The real point of EV certificates is to separate CAs that do a good > > job from those that do a more sloppy job, without completely > distrusting > > the mediocre CA operations. > > So, a CA that's supposed to validate its customer before issuing a > certificate, may do a "more sloppy job" if he doesn't cough up some extra > money. > > I think Peter is exactly right here. CA either do their job, or they > don't. If they agree to certify a set of attributes, they ought to verify > each one of them. > > > -- > openssl-users mailing list > To unsubscribe: https://mta.openssl.org/mailman/listinfo/openssl-users > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From uri at ll.mit.edu Fri Dec 7 22:30:10 2018 From: uri at ll.mit.edu (Blumenthal, Uri - 0553 - MITLL) Date: Fri, 7 Dec 2018 22:30:10 +0000 Subject: [openssl-users] Question on necessity of SSL_CTX_set_client_CA_list In-Reply-To: References: <050b01d48aa0$7d351be0$779f53a0$@mcn.org> <05b101d48b31$47d9c720$d78d5560$@mcn.org> <060c01d48b47$ac112290$043367b0$@mcn.org> <2f4931ad-ce4e-3b03-dcd5-0c268203c5b4@nikhef.nl> <1B3A82E8-FD3D-427A-894D-BEE483C749B9@dukhovni.org> <2df0e89e-0d78-e735-7433-e06863400d4c@stroeder.com> Message-ID: <3F77FFFF-FF4D-4641-B7C3-B14FAEEA8DDB@ll.mit.edu> If there's a non-EV CA that would give you a cert for DNS name amazon.com - I'd like to make sure it's in my list and marked Not Trusted. Regards, Uri Sent from my iPhone > On Dec 7, 2018, at 17:02, Kyle Hamilton wrote: > > CAs *do* verify the attributes they certify. That they're not presented as such is not the fault of the CAs, but rather of the browsers who insist on not changing or improving their UI. > > The thing is, if I run a website with a forum that I don't ask for money on and don't want any transactions happening on, why should I have to pay for the same level of certainty of my identity that a company like Amazon needs? > > (Why does Amazon need that much certainty? Well, I could set up wireless access points around coffee shops in December, point the DNS provided at those WAPs to my own server and run a fake amazon.com site to capture account credentials and credit cards. Without EV, that's a plausible attack. Especially with SSL being not-by-default, someone could type amazon.com and it can be intercepted without showing any certificate warning -- which then allows a redirect to a lookalike amazon.com name that could get certified by something like LetsEncrypt.) > > Plus, clouds have had a protocol available for doing queries to certs and keys held by other parties for several years. Cloudflare developed this protocol for banks, for whom loss of control of the certificate key is a reportable event, but who also often need DDoS protection. 
There's no reason it can't be extended to other clouds and sites -- unless Cloudflare patented it and wants royalties, in which case their rent-seeking is destroying the security of the web by convincing cloud salesmen to say that EV is too much trouble to deal with and thus should be killed off in the marketplace. > > Demanding that EV be perfect and dropping support for it if it has any found vulnerability falls into a class of human behavior known as "letting the perfect be the enemy of the good", which is also known as "cutting off the nose to spite the face". It still cuts down on a huge number of potential attacks, and doing away with it allows those attacks to flourish again. (Which, by the way, is what organized crime would prefer to permit.) > > -Kyle H > > >> On Thu, Dec 6, 2018, 14:07 Blumenthal, Uri - 0553 - MITLL > > > Quoting from Peter Gutmann's "Engineering Security", >> > > section "EV Certificates: PKI-me-Harder" >> > > >> > > Indeed, cynics would say that this was exactly the problem that >> > > certificates and CAs were supposed to solve in the first place, and >> > > that ?high-assurance? certificates are just a way of charging a >> > > second time for an existing service. >> > >> > Peter Gutman, for all his talents, dislikes PKI with a vengeance. >> > EV is a standard for OV certificates done right. Which involves more >> > thorough identity checks, stricter rules for the CAs to follow etc. >> > >> > The real point of EV certificates is to separate CAs that do a good >> > job from those that do a more sloppy job, without completely distrusting >> > the mediocre CA operations. >> >> So, a CA that's supposed to validate its customer before issuing a certificate, may do a "more sloppy job" if he doesn't cough up some extra money. >> >> I think Peter is exactly right here. CA either do their job, or they don't. If they agree to certify a set of attributes, they ought to verify each one of them. >> >> >> -- >> openssl-users mailing list >> To unsubscribe: https://mta.openssl.org/mailman/listinfo/openssl-users >> > -- > openssl-users mailing list > To unsubscribe: https://mta.openssl.org/mailman/listinfo/openssl-users -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 5801 bytes Desc: not available URL: From Michael.Wojcik at microfocus.com Fri Dec 7 22:44:23 2018 From: Michael.Wojcik at microfocus.com (Michael Wojcik) Date: Fri, 7 Dec 2018 22:44:23 +0000 Subject: [openssl-users] Question on necessity of SSL_CTX_set_client_CA_list In-Reply-To: <3F77FFFF-FF4D-4641-B7C3-B14FAEEA8DDB@ll.mit.edu> References: <050b01d48aa0$7d351be0$779f53a0$@mcn.org> <05b101d48b31$47d9c720$d78d5560$@mcn.org> <060c01d48b47$ac112290$043367b0$@mcn.org> <2f4931ad-ce4e-3b03-dcd5-0c268203c5b4@nikhef.nl> <1B3A82E8-FD3D-427A-894D-BEE483C749B9@dukhovni.org> <2df0e89e-0d78-e735-7433-e06863400d4c@stroeder.com> <3F77FFFF-FF4D-4641-B7C3-B14FAEEA8DDB@ll.mit.edu> Message-ID: > From: openssl-users [mailto:openssl-users-bounces at openssl.org] On Behalf Of Blumenthal, Uri - 0553 - MITLL > Sent: Friday, December 07, 2018 15:30 > If there's a non-EV CA that would give you a cert for DNS name amazon.com - I'd like to make sure it's in my list and > marked Not Trusted. Wrong threat model, I think. 
While it's certainly possible that someone could trick or coerce one of the (many) CAs trusted by popular browsers into issuing a DV certificate for *.amazon.com or similar, Certificate Transparency would (eventually) catch that. Homograph attacks combined with phishing would be much cheaper and easier. Get a DV certificate from Let's Encrypt for anazom.com or amazom.com, or any of the Unicode homograph possibilies (Cyrillic small letter a and small letter o are both applicable here) to catch the vast majority of users who haven't enabled raw punycode display (assuming their browser even supports it). Phishing is easy with a forged Amazon email about any purchase - users will tend to think someone has hacked their Amazon account and follow the link to investigate without questioning the provenance of the link itself. Part of the point of EV certificates was supposed to be making the difference in trust visible to end users. If user agents ignore the EV distinction, then I for one don't see how EV certificates are worth a premium. Stronger requirements don't accomplish anything if those requirements can't be verified by the vast majority of users. -- Michael Wojcik Distinguished Engineer, Micro Focus From michael at stroeder.com Sat Dec 8 11:58:46 2018 From: michael at stroeder.com (=?UTF-8?Q?Michael_Str=c3=b6der?=) Date: Sat, 8 Dec 2018 12:58:46 +0100 Subject: [openssl-users] Question on necessity of SSL_CTX_set_client_CA_list In-Reply-To: References: <050b01d48aa0$7d351be0$779f53a0$@mcn.org> <05b101d48b31$47d9c720$d78d5560$@mcn.org> <060c01d48b47$ac112290$043367b0$@mcn.org> <2f4931ad-ce4e-3b03-dcd5-0c268203c5b4@nikhef.nl> <1B3A82E8-FD3D-427A-894D-BEE483C749B9@dukhovni.org> <2df0e89e-0d78-e735-7433-e06863400d4c@stroeder.com> <3F77FFFF-FF4D-4641-B7C3-B14FAEEA8DDB@ll.mit.edu> Message-ID: On 12/7/18 11:44 PM, Michael Wojcik wrote: > Homograph attacks combined with phishing would be much cheaper and > easier. Get a DV certificate from Let's Encrypt for anazom.com or > amazom.com, or any of the Unicode homograph possibilies> > Part of the point of EV certificates was supposed to be making the > difference in trust visible to end users. And how do you avoid such homograph attack on subject DN attribute "O" (organization's name) when display the holy EV green sign? => EV certs also don't help in this case. Also in case of amazon.com most users know the pure domain name but not the *exact* company name, not to speak of the multitude of names of all the subsidiaries. Ciao, Michael. From hemantranvir at gmail.com Mon Dec 10 10:30:05 2018 From: hemantranvir at gmail.com (Hemant Ranvir) Date: Mon, 10 Dec 2018 19:30:05 +0900 Subject: [openssl-users] AES encrypt expanded key is different with no-asm Message-ID: Dear all, After extracting openssl-1.1.1.tar.gz, openssl can be configured without asm by passing no-asm flag during config command. 
The expanded key can be obtained like follows: //Getting expanded key from inside openssl //Copied from crypto/evp/e_aes.c typedef struct { union { double align; AES_KEY ks; } ks; block128_f block; union { cbc128_f cbc; ctr128_f ctr; } stream; } EVP_AES_KEY; EVP_CIPHER_CTX *cipher_ctx = ssl->enc_write_ctx; EVP_AES_KEY * cipher_data = EVP_CIPHER_CTX_get_cipher_data(cipher_ctx); printf("Encrypted Expanded Key is : "); for(i=0;i<((cipher_ctx->cipher->key_len)/sizeof(cipher_data->ks.ks.rd_key[0])*11);i++) { printf("%08x", cipher_data->ks.ks.rd_key[i]); } printf("\n"); To get the 128 bit encrypted key : unsigned char* key = unsigned char* malloc(16); int i; for (i=0; i<4; i++) { key[4*i] = cipher_data->ks.ks.rd_key[i] >> 24; key[4*i+1] = cipher_data->ks.ks.rd_key[i] >> 16; key[4*i+2] = cipher_data->ks.ks.rd_key[i] >> 8; key[4*i+3] = cipher_data->ks.ks.rd_key[i]; } I am using this 128 bit key and using it in *Rijndael* Key Schedule function to get the expanded key. The expanded key will be 128*11 bit long. This expanded key is equal to the expanded key obtained from accessing structures inside openssl(shown in section "Getting expanded key from inside openssl" ) which is expected. Now if I configure openssl without no-asm flag and get the expanded key from inside openssl and compare it with the expanded key calculated using the function I wrote. They are not equal. As far as I know there is only one way to calculate expanded key. I have even checked whether the expanded key inside openssl is inverse cipher expanded key but yet it is different. Can someone point me in the right direction. Thanks! -- Best Regards, Hemant Ranvir *"To live a creative life, we must lose our fear of being wrong.**" - J.C.Pearce* -------------- next part -------------- An HTML attachment was scrubbed... URL: From mksarav at gmail.com Mon Dec 10 10:41:20 2018 From: mksarav at gmail.com (M K Saravanan) Date: Mon, 10 Dec 2018 18:41:20 +0800 Subject: [openssl-users] The 9 Lives of Bleichenbacher's CAT - Is there a CVE for OpenSSL? Message-ID: Hi, I read the recent research paper: The 9 Lives of Bleichenbacher's CAT: New Cache ATtacks on TLS Implementations by Eyal Ronen, Robert Gillham, Daniel Genkin, Adi Shamir, David Wong, and Yuval Yarom Nov 30, 2018 Research Paper: https://eprint.iacr.org/2018/1173.pdf As per this paper, OpenSSL was also vulnerable but OpenSSL fixed them independently of the authors' disclosure. ============= APPENDIX A VULNERABILITIES DESCRIPTION A. OpenSSL TLS Implementation [...] However, OpenSSL?s code does contain two side channel vulnerabilities. One vulnerability has been described in Section IV-A and the other is presented here. We note that OpenSSL replaced the vulnerable code in both locations with constant-time implementations independently of our disclosure. ============= The paper does not list the CVE for the openssl vulnerability. Is there a CVE for this? What are the affected versions and in which version they were fixed? with regards, Saravanan From Matthias.St.Pierre at ncp-e.com Mon Dec 10 11:11:14 2018 From: Matthias.St.Pierre at ncp-e.com (Dr. Matthias St. Pierre) Date: Mon, 10 Dec 2018 11:11:14 +0000 Subject: [openssl-users] The 9 Lives of Bleichenbacher's CAT - Is there a CVE for OpenSSL? In-Reply-To: References: Message-ID: <56fd61a88a074b07a8991ea11bb4a777@Ex13.ncp.local> > The paper does not list the CVE for the openssl vulnerability. > > Is there a CVE for this? What are the affected versions and in which > version they were fixed? 
A similar question has been asked at the end of the GitHub issue https://github.com/openssl/openssl/issues/7739. As far as I know, the question is still unanswered... HTH Matthias From Michael.Wojcik at microfocus.com Mon Dec 10 13:41:40 2018 From: Michael.Wojcik at microfocus.com (Michael Wojcik) Date: Mon, 10 Dec 2018 13:41:40 +0000 Subject: [openssl-users] Question on necessity of SSL_CTX_set_client_CA_list In-Reply-To: References: <050b01d48aa0$7d351be0$779f53a0$@mcn.org> <05b101d48b31$47d9c720$d78d5560$@mcn.org> <060c01d48b47$ac112290$043367b0$@mcn.org> <2f4931ad-ce4e-3b03-dcd5-0c268203c5b4@nikhef.nl> <1B3A82E8-FD3D-427A-894D-BEE483C749B9@dukhovni.org> <2df0e89e-0d78-e735-7433-e06863400d4c@stroeder.com> <3F77FFFF-FF4D-4641-B7C3-B14FAEEA8DDB@ll.mit.edu> Message-ID: > From: openssl-users [mailto:openssl-users-bounces at openssl.org] On Behalf > Of Michael Str?der > Sent: Saturday, December 08, 2018 06:59 > > On 12/7/18 11:44 PM, Michael Wojcik wrote: > > Homograph attacks combined with phishing would be much cheaper and > > easier. Get a DV certificate from Let's Encrypt for anazom.com or > > amazom.com, or any of the Unicode homograph possibilies> > > Part of the point of EV certificates was supposed to be making the > > difference in trust visible to end users. > And how do you avoid such homograph attack on subject DN attribute "O" > (organization's name) when display the holy EV green sign? > > => EV certs also don't help in this case. > > Also in case of amazon.com most users know the pure domain name but not > the *exact* company name, not to speak of the multitude of names of all > the subsidiaries. Oh, I agree (at least on the latter point - I'm not sure how concerned I am about homograph attacks on the subject DN, since the common UAs are verifiying subjAltName values and ignoring the DN). That's why I wrote "was *supposed* to be". I don't think EV certificates accomplished this goal. I've never felt EV certificates were very useful, and they got progressively worse over time. Remember back in July when Entrust's Chris Baily put language on the CA/BF ballot (Ballot 255, specifically, if anyone wants to look it up) to restrict EV certificates to entities that had been incorporated for at least 18 months? That's the kind of terrible thinking that the EV process produced. The Stripe certificate fiasco that led to Baily's proposal is another example of why EV certificates Just Don't Work. The idea of having different certificates at different trust levels might be salvageable, but the EV implementation put the burden of evaluating those trust levels on the user (because user agents just passed it on to them), and the vast majority of users aren't in any position to do that. Nor were they in any position to determine how those trust levels ought to affect their threat model (that was the hole exploited by the Stripe attack). A site with a legitimate EV certificate might still misrepresent itself, perform hostile actions, or be vulnerable to attack (or already subverted) - EV says nothing about those risks. -- Michael Wojcik Distinguished Engineer, Micro Focus From jb-openssl at wisemo.com Tue Dec 11 07:23:47 2018 From: jb-openssl at wisemo.com (Jakob Bohm) Date: Tue, 11 Dec 2018 08:23:47 +0100 Subject: [openssl-users] AES encrypt expanded key is different with no-asm In-Reply-To: References: Message-ID: On 10/12/2018 11:30, Hemant Ranvir wrote: > Dear all, > ? ? 
After extracting openssl-1.1.1.tar.gz, openssl can be configured > without asm by passing no-asm flag during config command. > > ? ? The expanded key can be obtained like follows: > //Getting expanded key from inside openssl > //Copied from crypto/evp/e_aes.c > typedef struct { > ? union { > ? ? ? double align; > ? ? ? AES_KEY ks; > ? } ks; > ? block128_f block; > ? union { > ? ? ? cbc128_f cbc; > ? ? ? ctr128_f ctr; > ? } stream; > } EVP_AES_KEY; > > EVP_CIPHER_CTX *cipher_ctx = ssl->enc_write_ctx; > EVP_AES_KEY *?cipher_data = EVP_CIPHER_CTX_get_cipher_data(cipher_ctx); > printf("Encrypted Expanded Key is : "); > for(i=0;i<((cipher_ctx->cipher->key_len)/sizeof(cipher_data->ks.ks.rd_key[0])*11);i++) > { > ? ? printf("%08x", cipher_data->ks.ks.rd_key[i]); > } > printf("\n"); > > ?To get the 128 bit encrypted key : > unsigned char* key = unsigned?char* malloc(16); > ? int i; > ? for (i=0; i<4; i++) { > ? ? ? key[4*i]? ?= cipher_data->ks.ks.rd_key[i] >> 24; > ? ? ? key[4*i+1] = cipher_data->ks.ks.rd_key[i] >> 16; > ? ? ? key[4*i+2] = cipher_data->ks.ks.rd_key[i] >> 8; > ? ? ? key[4*i+3] = cipher_data->ks.ks.rd_key[i]; > ? } > > I am using this 128 bit key and using it in *Rijndael*?Key Schedule > function to get the expanded key. The expanded key will be 128*11 bit > long. > This expanded key is equal to the expanded key obtained from accessing > structures inside openssl(shown in section?"Getting expanded key from > inside openssl" ) which is expected. > > Now if I configure openssl without no-asm flag and get the expanded > key from inside openssl and compare it with the expanded key > calculated using the function I wrote. They are not equal. As far as I > know there is only one way to calculate expanded key. I have even > checked whether the expanded key inside openssl is inverse cipher > expanded key but yet it is different. > Can someone point me in the right direction. > Thanks! > > There have always been multiple ways to store the expanded AES key, each optimized a different implementation of the inner loops in the encryption block function.? It is highly likely the assembler implementation for any given processor uses a different inner loop, and thus a different expanded key data layout, than the generic C code. Enjoy Jakob -- Jakob Bohm, CIO, Partner, WiseMo A/S. https://www.wisemo.com Transformervej 29, 2860 S?borg, Denmark. Direct +45 31 13 16 10 This public discussion message is non-binding and may contain errors. WiseMo - Remote Service Management for PCs, Phones and Embedded From jb-openssl at wisemo.com Tue Dec 11 07:35:04 2018 From: jb-openssl at wisemo.com (Jakob Bohm) Date: Tue, 11 Dec 2018 08:35:04 +0100 Subject: [openssl-users] Question on necessity of SSL_CTX_set_client_CA_list In-Reply-To: References: <050b01d48aa0$7d351be0$779f53a0$@mcn.org> <05b101d48b31$47d9c720$d78d5560$@mcn.org> <060c01d48b47$ac112290$043367b0$@mcn.org> <2f4931ad-ce4e-3b03-dcd5-0c268203c5b4@nikhef.nl> <1B3A82E8-FD3D-427A-894D-BEE483C749B9@dukhovni.org> <2df0e89e-0d78-e735-7433-e06863400d4c@stroeder.com> <3F77FFFF-FF4D-4641-B7C3-B14FAEEA8DDB@ll.mit.edu> Message-ID: On 10/12/2018 14:41, Michael Wojcik wrote: >> From: openssl-users [mailto:openssl-users-bounces at openssl.org] On Behalf >> Of Michael Str?der >> Sent: Saturday, December 08, 2018 06:59 >> >> On 12/7/18 11:44 PM, Michael Wojcik wrote: >>> Homograph attacks combined with phishing would be much cheaper and >>> easier. 
Get a DV certificate from Let's Encrypt for anazom.com or >>> amazom.com, or any of the Unicode homograph possibilies> >>> Part of the point of EV certificates was supposed to be making the >>> difference in trust visible to end users. >> And how do you avoid such homograph attack on subject DN attribute "O" >> (organization's name) when display the holy EV green sign? >> >> => EV certs also don't help in this case. >> >> Also in case of amazon.com most users know the pure domain name but not >> the *exact* company name, not to speak of the multitude of names of all >> the subsidiaries. > Oh, I agree (at least on the latter point - I'm not sure how concerned I am about homograph attacks on the subject DN, since the common UAs are verifiying subjAltName values and ignoring the DN). That's why I wrote "was *supposed* to be". I don't think EV certificates accomplished this goal. > > I've never felt EV certificates were very useful, and they got progressively worse over time. Remember back in July when Entrust's Chris Baily put language on the CA/BF ballot (Ballot 255, specifically, if anyone wants to look it up) to restrict EV certificates to entities that had been incorporated for at least 18 months? That's the kind of terrible thinking that the EV process produced. > > The Stripe certificate fiasco that led to Baily's proposal is another example of why EV certificates Just Don't Work. The idea of having different certificates at different trust levels might be salvageable, but the EV implementation put the burden of evaluating those trust levels on the user (because user agents just passed it on to them), and the vast majority of users aren't in any position to do that. Nor were they in any position to determine how those trust levels ought to affect their threat model (that was the hole exploited by the Stripe attack). A site with a legitimate EV certificate might still misrepresent itself, perform hostile actions, or be vulnerable to attack (or already subverted) - EV says nothing about those risks. The Stripe certificate fiasco relied heavily on browsers not displaying the EV certificate fields (specificlly Jurisdiction of incorporation) correctly along with the name, as clearly spelled out in the EV specification. That Jurisdiction field along with the uniqueness checks done by the authorities of the jurisdiction is what is supposed to prevent homographs in the O field.? For example, using Cyrillic letters in a de jure company name is unlikely to be allowed outside the Cyrillic using jurisdictions (former USSR, Serbia, maybe Bosnia and Montenegro). ?If displayed, users should readily notice the wrong country in the green bar. Enjoy Jakob -- Jakob Bohm, CIO, Partner, WiseMo A/S. https://www.wisemo.com Transformervej 29, 2860 S?borg, Denmark. Direct +45 31 13 16 10 This public discussion message is non-binding and may contain errors. WiseMo - Remote Service Management for PCs, Phones and Embedded From hemantranvir at gmail.com Tue Dec 11 07:37:22 2018 From: hemantranvir at gmail.com (Hemant Ranvir) Date: Tue, 11 Dec 2018 16:37:22 +0900 Subject: [openssl-users] AES encrypt expanded key is different with no-asm In-Reply-To: References: Message-ID: Hi Jacob, thanks for the input. On Tue 11 Dec, 2018, 4:24 PM Jakob Bohm via openssl-users, < openssl-users at openssl.org> wrote: > On 10/12/2018 11:30, Hemant Ranvir wrote: > > Dear all, > > After extracting openssl-1.1.1.tar.gz, openssl can be configured > > without asm by passing no-asm flag during config command. 
> > > > The expanded key can be obtained like follows: > > //Getting expanded key from inside openssl > > //Copied from crypto/evp/e_aes.c > > typedef struct { > > union { > > double align; > > AES_KEY ks; > > } ks; > > block128_f block; > > union { > > cbc128_f cbc; > > ctr128_f ctr; > > } stream; > > } EVP_AES_KEY; > > > > EVP_CIPHER_CTX *cipher_ctx = ssl->enc_write_ctx; > > EVP_AES_KEY * cipher_data = EVP_CIPHER_CTX_get_cipher_data(cipher_ctx); > > printf("Encrypted Expanded Key is : "); > > > for(i=0;i<((cipher_ctx->cipher->key_len)/sizeof(cipher_data->ks.ks.rd_key[0])*11);i++) > > > { > > printf("%08x", cipher_data->ks.ks.rd_key[i]); > > } > > printf("\n"); > > > > To get the 128 bit encrypted key : > > unsigned char* key = unsigned char* malloc(16); > > int i; > > for (i=0; i<4; i++) { > > key[4*i] = cipher_data->ks.ks.rd_key[i] >> 24; > > key[4*i+1] = cipher_data->ks.ks.rd_key[i] >> 16; > > key[4*i+2] = cipher_data->ks.ks.rd_key[i] >> 8; > > key[4*i+3] = cipher_data->ks.ks.rd_key[i]; > > } > > > > I am using this 128 bit key and using it in *Rijndael* Key Schedule > > function to get the expanded key. The expanded key will be 128*11 bit > > long. > > This expanded key is equal to the expanded key obtained from accessing > > structures inside openssl(shown in section "Getting expanded key from > > inside openssl" ) which is expected. > > > > Now if I configure openssl without no-asm flag and get the expanded > > key from inside openssl and compare it with the expanded key > > calculated using the function I wrote. They are not equal. As far as I > > know there is only one way to calculate expanded key. I have even > > checked whether the expanded key inside openssl is inverse cipher > > expanded key but yet it is different. > > Can someone point me in the right direction. > > Thanks! > > > > > There have always been multiple ways to store the expanded AES > key, each optimized a different implementation of the inner > loops in the encryption block function. It is highly likely > the assembler implementation for any given processor uses a > different inner loop, and thus a different expanded key data > layout, than the generic C code. > > > Enjoy > > Jakob > -- > Jakob Bohm, CIO, Partner, WiseMo A/S. https://www.wisemo.com > Transformervej 29, 2860 S?borg, Denmark. Direct +45 31 13 16 10 > This public discussion message is non-binding and may contain errors. > WiseMo - Remote Service Management for PCs, Phones and Embedded > > -- > openssl-users mailing list > To unsubscribe: https://mta.openssl.org/mailman/listinfo/openssl-users > -------------- next part -------------- An HTML attachment was scrubbed... URL: From aerowolf at gmail.com Tue Dec 11 10:13:07 2018 From: aerowolf at gmail.com (Kyle Hamilton) Date: Tue, 11 Dec 2018 04:13:07 -0600 Subject: [openssl-users] Question on necessity of SSL_CTX_set_client_CA_list In-Reply-To: References: <050b01d48aa0$7d351be0$779f53a0$@mcn.org> <05b101d48b31$47d9c720$d78d5560$@mcn.org> <060c01d48b47$ac112290$043367b0$@mcn.org> <2f4931ad-ce4e-3b03-dcd5-0c268203c5b4@nikhef.nl> <1B3A82E8-FD3D-427A-894D-BEE483C749B9@dukhovni.org> <2df0e89e-0d78-e735-7433-e06863400d4c@stroeder.com> <3F77FFFF-FF4D-4641-B7C3-B14FAEEA8DDB@ll.mit.edu> Message-ID: Because only showing the O= is insufficient, you also need to show the jurisdiction the O= is based in. (In the case of Amazon, it's a Delaware corporation.) 
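An aside on the AES expanded-key thread quoted above: a minimal standalone sketch for comparing round keys. It expands a hypothetical 128-bit test key through the documented AES_KEY API and prints the rd_key words. As Jakob notes, assembler or hardware implementations may store the schedule in their own layout, so the output is only a like-for-like reference for builds that use the generic C key schedule; the test key and build line are assumptions of this sketch, not values from the thread.

/* build (assumption): cc -o aes_ks aes_ks.c -lcrypto */
#include <stdio.h>
#include <openssl/aes.h>

int main(void)
{
    /* Hypothetical 128-bit test key, not taken from the thread. */
    static const unsigned char key[16] = {
        0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07,
        0x08, 0x09, 0x0a, 0x0b, 0x0c, 0x0d, 0x0e, 0x0f
    };
    AES_KEY ks;
    int i;

    if (AES_set_encrypt_key(key, 128, &ks) != 0) {
        fprintf(stderr, "AES_set_encrypt_key failed\n");
        return 1;
    }

    /* AES-128: 10 rounds, hence 11 round keys of 4 words each. */
    for (i = 0; i < 4 * (ks.rounds + 1); i++)
        printf("%08x%c", ks.rd_key[i], (i % 4 == 3) ? '\n' : ' ');

    return 0;
}

Comparing this output first against a textbook Rijndael key schedule, and separately against whatever a particular build has stored behind the EVP layer, makes it easier to tell whether a mismatch comes from the key itself or only from the storage layout.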
The fact that browsers are getting tricked into thinking EV doesn't help is only because their UX designers refuse to allow the information which is necessary for actual trust to be displayed. It's not the fault of the CAs or the EV guidelines, it's fully within the hands of the browsers to fix. But they're worried about "providing free advertising for the CAs" (when I suggested putting the name of the certifier on the chrome, so that any change would raise a flag in the users' mind) or "information overload for the users" and "insufficient space for other important things" (when I suggested putting more of the Subject DN on the chrome), even though those are things that would legitimately put the onus of being tricked fairly on the user, and off of the browsers which currently don't readily provide the information. Regardless, in my view it really doesn't matter. I lost faith in the browsers being willing to continue to improve things (i.e., work against the identity homograph arms race) long ago. So now they want to backslide? I've done my duty to try to convince them to continue to evolve against the threat landscape. The onus of and blame for their unwillingness to do so is on them. Now, I guess we'll only get to see how much of it might stick in court. On Sat, Dec 8, 2018, 05:59 Michael Str?der On 12/7/18 11:44 PM, Michael Wojcik wrote: > > Homograph attacks combined with phishing would be much cheaper and > > easier. Get a DV certificate from Let's Encrypt for anazom.com or > > amazom.com, or any of the Unicode homograph possibilies> > > Part of the point of EV certificates was supposed to be making the > > difference in trust visible to end users. > And how do you avoid such homograph attack on subject DN attribute "O" > (organization's name) when display the holy EV green sign? > By including the jurisdiction the O is organized in, of course. O=Amazon Inc,ST=Delaware,C=US. (That's the point of hierarchical names, after all. It's to reduce namespace collisions in spaces -- like independent political entities -- which don't often cooperate together to inhibit problems like these.) Interesting note: I could register a corporation named "Bank of America Corporation" in any state BofA doesn't currently have a presence, to obtain a potentially EV-valid certificate, and their only recourse might be a trademark lawsuit. If I registered it in a foreign nation, they wouldn't have any recourse at all unless they already had a presence in that nation. (Though they might try to convince the feds to prosecute me for attempted fraud, even if I didn't do anything to actually attempt or complete a fraud under that name.) Does this mean that EV is useless? No, it means that the overarching legal regime enables attacks that certificates already provide the means to combat -- but only if the certificate-consuming software properly implements it. The idea that a browser thinks EV is useless is worth nothing. It just means that they won't invest into this area of security the way they will into preventing their processes from being hijacked by arbitrary code. Should they have to invest in this way? I don't know. They took on the role on their own, either to try to build trust in web-based commerce (where they succeeded to the tune of tens of billions of dollars in economic activity every year) or because they had to try to "keep up with the Joneses" (i.e., Mozilla and Microsoft and Google, who were doing it for the more altruistic reason). I can't judge whether they "should". 
I just know enough to recognize what they "did". > => EV certs also don't help in this case. > > Also in case of amazon.com most users know the pure domain name but not > the *exact* company name, not to speak of the multitude of names of all > the subsidiaries. > Subsidiary names are relatively irrelevant, as long as the same subsidiary name shows up when they do the same thing. If it turns out that there's a need for them to become relevant, a DNS record with the expected Subject DN could be published, or a sitemap with the expected name of the subsidiary in question could be made available, or any of a host of other options could be explored and done. (And let's not forget the homograph attack enabled by the lack of https-by-default.) These arguments you make are arguments for letting the nonexistent perfect be the enemy of the existing good. They're also arguments for not trying to work toward the hypothetical ideal, and for throwing the baby out with the bathwater. > Ciao, Michael. > -- > openssl-users mailing list > To unsubscribe: https://mta.openssl.org/mailman/listinfo/openssl-users > -------------- next part -------------- An HTML attachment was scrubbed... URL: From prithiraj.das at gmail.com Wed Dec 12 07:07:11 2018 From: prithiraj.das at gmail.com (prithiraj das) Date: Wed, 12 Dec 2018 07:07:11 +0000 Subject: [openssl-users] RSA Public Key error Message-ID: Hi, I have a RSA public key(PKCS 1v1.5) that I have obtained from somewhere. That key has been obtained after removing the first 24 bytes from the originally generated RSA public key. Those 24 bytes are being replaced by some custom 16 byte information which is being used as some sort of identifier in some future task and those 16 bytes are playing no role in encryption. OpenSSL fails to read this key. asn1parse shows some parsing error and most importantly RSA encryption in OpenSSL using this key fails. The untampered version of the RSA public key generated from the same source and containing the original 24 bytes at the beginning of the key is successfully read by OpenSSL and the RSA encryption using that key is also successful in OpenSSL. But our requirement is to use the first key containing the custom 16 byte information. My understanding is that the first 24 bytes of RSA public key following PKCS standards doesn't contain the modulus and exponent details required for RSA encryption. But OpenSSL seems to require these 24 bytes for encryption. Can someone please confirm what kind of information is present in the first 24 bytes of RSA Public key and/or why does OpenSSL need it? If possible, please suggest a solution to work with that RSA public key containing custom 16 byte information at the beginning of the key. Thanks and Regards, Prithiraj -------------- next part -------------- An HTML attachment was scrubbed... URL: From ckashiquekvk at gmail.com Wed Dec 12 09:33:36 2018 From: ckashiquekvk at gmail.com (ASHIQUE CK) Date: Wed, 12 Dec 2018 15:03:36 +0530 Subject: [openssl-users] Multiple client connection to Nginx server Message-ID: Hi, We are using a Crypto Accelerator Engine to offload AESGCM and RSA parameters. Trying to connect multiple clients simultaneously with a single Nginx server, which is using this accelerator. The Key and IV is passing only at handshake, and after handshake this set of key and IV is using for all encryption and decryption. So at Engine side, we are storing this Key and IV to a buffer and while encrypting/decrypting , this Key and IV is used from this buffer. 
But, while multiple client connects, the last saved Key/IV is getting for all clients. So, is there any way to get a unique ID foer each client connection ? -------------- next part -------------- An HTML attachment was scrubbed... URL: From ckashiquekvk at gmail.com Wed Dec 12 11:54:30 2018 From: ckashiquekvk at gmail.com (ASHIQUE CK) Date: Wed, 12 Dec 2018 17:24:30 +0530 Subject: [openssl-users] Multiple client connection to Nginx server In-Reply-To: References: Message-ID: Hi, Any help on this ? On Wed, Dec 12, 2018 at 3:03 PM ASHIQUE CK wrote: > Hi, > We are using a Crypto Accelerator Engine to offload AESGCM and RSA > parameters. Trying to connect multiple clients simultaneously with a single > Nginx server, which is using this accelerator. The Key and IV is passing > only at handshake, and after handshake this set of key and IV is using for > all encryption and decryption. So at Engine side, we are storing this Key > and IV to a buffer and while encrypting/decrypting , this Key and IV is > used from this buffer. But, while multiple client connects, the last saved > Key/IV is getting for all clients. > So, is there any way to get a unique ID foer each client > connection ? > -------------- next part -------------- An HTML attachment was scrubbed... URL: From Erwann.Abalea at docusign.com Wed Dec 12 12:31:48 2018 From: Erwann.Abalea at docusign.com (Erwann Abalea) Date: Wed, 12 Dec 2018 12:31:48 +0000 Subject: [openssl-users] RSA Public Key error In-Reply-To: References: Message-ID: <28510262-6708-4CB1-B070-2CA5128923EC@docusign.com> Bonjour, Assuming the first 24 bytes you?re talking about are the very beginning of the SPKI structure (that is, the enclosing SEQUENCE, and the AlgorithmIdentifier), that means you?ve replaced up to the first byte of the BITSTRING containing the public key (this byte indicates the number of unused bits) for a 2048bits RSA key with 16 custom bytes. That?s perfectly normal for OpenSSL to refuse to load that beast, and for asn1parse to return errors (the first bytes do not represent a correct DER encoding of anything). Think of it as ? I took a Jpeg file, replaced some bytes at the beginning by my own, and now I can?t open the file again ?. Those bytes are there for a reason. A quick solution would be to *add* your 16 bytes before the public key, and remove them when passing the rest of the bytes to OpenSSL. Cordialement, Erwann Abalea De : openssl-users au nom de prithiraj das R?pondre ? : "openssl-users at openssl.org" Date : mercredi 12 d?cembre 2018 ? 08:08 ? : "openssl-users at openssl.org" Objet : [openssl-users] RSA Public Key error Hi, I have a RSA public key(PKCS 1v1.5) that I have obtained from somewhere. That key has been obtained after removing the first 24 bytes from the originally generated RSA public key. Those 24 bytes are being replaced by some custom 16 byte information which is being used as some sort of identifier in some future task and those 16 bytes are playing no role in encryption. OpenSSL fails to read this key. asn1parse shows some parsing error and most importantly RSA encryption in OpenSSL using this key fails. The untampered version of the RSA public key generated from the same source and containing the original 24 bytes at the beginning of the key is successfully read by OpenSSL and the RSA encryption using that key is also successful in OpenSSL. But our requirement is to use the first key containing the custom 16 byte information. 
My understanding is that the first 24 bytes of RSA public key following PKCS standards doesn't contain the modulus and exponent details required for RSA encryption. But OpenSSL seems to require these 24 bytes for encryption. Can someone please confirm what kind of information is present in the first 24 bytes of RSA Public key and/or why does OpenSSL need it? If possible, please suggest a solution to work with that RSA public key containing custom 16 byte information at the beginning of the key. Thanks and Regards, Prithiraj -------------- next part -------------- An HTML attachment was scrubbed... URL: From jb-openssl at wisemo.com Wed Dec 12 14:25:57 2018 From: jb-openssl at wisemo.com (Jakob Bohm) Date: Wed, 12 Dec 2018 15:25:57 +0100 Subject: [openssl-users] Multiple client connection to Nginx server In-Reply-To: References: Message-ID: On 12/12/2018 12:54, ASHIQUE CK wrote: > Hi, > Any help on this ? > > On Wed, Dec 12, 2018 at 3:03 PM ASHIQUE CK > wrote: > > Hi, > We are using a Crypto Accelerator Engine to offload AESGCM and RSA > parameters. Trying to connect multiple clients simultaneously with > a single Nginx server, which is using this accelerator.? The Key > and IV is passing only at handshake, and after handshake this set > of key and IV is using for all encryption and decryption. So at > Engine side, we are storing this Key and IV to a buffer and while > encrypting/decrypting , this Key and IV is used from this buffer. > But, while multiple client connects, the last saved Key/IV is > getting for all clients. > ? ? ? ? So, is there any way to get a unique ID foer each client > connection ? > > The following assumes that the accelerator is accessed using an OpenSSL "engine" plugin, if instead you are inserting code in NGINX to hand over the complete SSL/TLS record processing to the hardware, then a different approach is needed. OpenSSL Crypto Engines are not limited to SSL/TLS but can be used for other tasks using the OpenSSL libcrypto library. Thus the way this works is that the SSL/TLS requests an EVP "handle" for each key that it wants to use, this handle then maps (indirectly) to a structure passed to the engine, which is unique to each key. A correctly implemented engine is supposed to use that structure to tell the difference between different keys stored in the actual hardware. For the case of GCM key/IV pairs, it may be that in some situations OpenSSL requests more than one EVP key instance for the same key, typically to allow each to have its own independent state (for GCM, this is the counter, for CBC it would be the IV chaining from block to block).? The simple solution is to just treat them as different keys, but if this uses too many hardware key storage locations, an engine may use some way to recognize the reused key, share the hardware object and keep count of how many "handles" point to that key. Enjoy Jakob -- Jakob Bohm, CIO, Partner, WiseMo A/S. https://www.wisemo.com Transformervej 29, 2860 S?borg, Denmark. Direct +45 31 13 16 10 This public discussion message is non-binding and may contain errors. WiseMo - Remote Service Management for PCs, Phones and Embedded From ckashiquekvk at gmail.com Wed Dec 12 14:53:11 2018 From: ckashiquekvk at gmail.com (ASHIQUE CK) Date: Wed, 12 Dec 2018 20:23:11 +0530 Subject: [openssl-users] Multiple client connection to Nginx server In-Reply-To: References: Message-ID: Hi, Thanks for your reply. 
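Picking up the RSA public-key question above (the one where the first 24 bytes were replaced by a 16-byte identifier), a minimal sketch of the workaround Erwann describes: keep the DER SubjectPublicKeyInfo intact, prepend the 16 custom bytes, and strip them again before handing the rest to OpenSSL. The buffer layout and the function name are assumptions of this sketch, not anything OpenSSL itself defines.

#include <openssl/evp.h>
#include <openssl/x509.h>

#define CUSTOM_TAG_LEN 16

/* blob/blob_len hold the tagged key exactly as stored:
 * 16 identifier bytes followed by the untouched DER SubjectPublicKeyInfo. */
EVP_PKEY *load_tagged_pubkey(const unsigned char *blob, long blob_len)
{
    const unsigned char *p;

    if (blob == NULL || blob_len <= CUSTOM_TAG_LEN)
        return NULL;

    /* Skip the identifier; d2i_PUBKEY() needs the full SPKI,
     * i.e. the original 24-byte DER header must still be present. */
    p = blob + CUSTOM_TAG_LEN;
    return d2i_PUBKEY(NULL, &p, blob_len - CUSTOM_TAG_LEN);
}

The returned EVP_PKEY can then be used for encryption as usual, while the 16 identifier bytes remain available to whatever lookup scheme needs them.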
Openssl only passes (ctx,type,arg,ptr) in the case of header and (ctx,out,in,inl) in the case of message, these two are the only links to engine after the handshake process for the whole process. In my case, I am downloading a file from nginx root directory using a client program. How can I get a unique id, so that I can copy the respective Key and Iv everytime when a sslwrite request comes from a client with that id. Because I am trying to run 3 clients simultaneously for downloading a file. I am able to download only at one client ,the last connected one, and other two shows that tag verification failed. Because both those connections got the same key and Iv of the last connection. So for every client connection, is there any way to get a unique id so that i can load respective Key and Iv. But the only link from openssl to the engine are the above mentioned two cases. Only what I am getting some other information is from *ctx*. Can I do something with that *ctx *get unique id. Thanks On Wed 12 Dec, 2018, 7:56 PM Jakob Bohm via openssl-users < openssl-users at openssl.org wrote: > On 12/12/2018 12:54, ASHIQUE CK wrote: > > Hi, > > Any help on this ? > > > > On Wed, Dec 12, 2018 at 3:03 PM ASHIQUE CK > > wrote: > > > > Hi, > > We are using a Crypto Accelerator Engine to offload AESGCM and RSA > > parameters. Trying to connect multiple clients simultaneously with > > a single Nginx server, which is using this accelerator. The Key > > and IV is passing only at handshake, and after handshake this set > > of key and IV is using for all encryption and decryption. So at > > Engine side, we are storing this Key and IV to a buffer and while > > encrypting/decrypting , this Key and IV is used from this buffer. > > But, while multiple client connects, the last saved Key/IV is > > getting for all clients. > > So, is there any way to get a unique ID foer each client > > connection ? > > > > > The following assumes that the accelerator is accessed using an > OpenSSL "engine" plugin, if instead you are inserting code in NGINX > to hand over the complete SSL/TLS record processing to the hardware, > then a different approach is needed. > > OpenSSL Crypto Engines are not limited to SSL/TLS but can be used > for other tasks using the OpenSSL libcrypto library. > > Thus the way this works is that the SSL/TLS requests an EVP "handle" > for each key that it wants to use, this handle then maps (indirectly) > to a structure passed to the engine, which is unique to each key. > > A correctly implemented engine is supposed to use that structure to > tell the difference between different keys stored in the actual > hardware. > > For the case of GCM key/IV pairs, it may be that in some situations > OpenSSL requests more than one EVP key instance for the same key, > typically to allow each to have its own independent state (for GCM, > this is the counter, for CBC it would be the IV chaining from block > to block). The simple solution is to just treat them as different > keys, but if this uses too many hardware key storage locations, an > engine may use some way to recognize the reused key, share the > hardware object and keep count of how many "handles" point to that > key. > > > > Enjoy > > Jakob > -- > Jakob Bohm, CIO, Partner, WiseMo A/S. https://www.wisemo.com > Transformervej 29, 2860 S?borg, Denmark. Direct +45 31 13 16 10 > This public discussion message is non-binding and may contain errors. 
> WiseMo - Remote Service Management for PCs, Phones and Embedded > > -- > openssl-users mailing list > To unsubscribe: https://mta.openssl.org/mailman/listinfo/openssl-users > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mcr at sandelman.ca Wed Dec 12 19:59:38 2018 From: mcr at sandelman.ca (Michael Richardson) Date: Wed, 12 Dec 2018 14:59:38 -0500 Subject: [openssl-users] Multiple client connection to Nginx server In-Reply-To: References: Message-ID: <28371.1544644778@localhost> ASHIQUE CK wrote: > We are using a Crypto Accelerator Engine to offload AESGCM and RSA > parameters. Trying to connect multiple clients simultaneously with a > single Nginx server, which is using this accelerator. The Key and IV You probably need to tell us: 1) which engine? did you write this engine? 2) whose driver? 3) what version of openssl? 4) what version of nginx? 5) how did you observe the problem you described? 6) is it different for, for instance, apache? or some other server software? > is passing only at handshake, and after handshake this set of key and > IV is using for all encryption and decryption. So at Engine side, we > are storing this Key and IV to a buffer and while > encrypting/decrypting , this Key and IV is used from this buffer. But, > while multiple client connects, the last saved Key/IV is getting for > all clients. > So, is there any way to get a unique ID foer each client connection ? > -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 487 bytes Desc: not available URL: From ckashiquekvk at gmail.com Thu Dec 13 03:30:25 2018 From: ckashiquekvk at gmail.com (ASHIQUE CK) Date: Thu, 13 Dec 2018 09:00:25 +0530 Subject: [openssl-users] Multiple client connection to Nginx server In-Reply-To: <28371.1544644778@localhost> References: <28371.1544644778@localhost> Message-ID: Hi, 1. The engine that we wrote is by the reference of qat, is just an interface which receives the openssl parameters of AES and RSA and offload them to an FPGA hardware accelerator. 2. 3. Openssl 1.1.0 h 4. Uses f-stack nginx 1.10.1 5. We ran nginx server which have a 1 Gb file in its root directory. Then connected 3 clients to this server. These clients waits after handshake is done. After I run 3rd client, I gave a Get request through 1 st client to download that 1 gb file. But it showed error message, "decryption failed or bad record mac". When I debugged using gdb, I understood that Tag verification is getting failed. But the matter is, I am storing the Key and IV at the time of handshake itself, to a buffer in my engine. When an SSLRead or SSLWrite occur, I will copy the saved Key and Iv to fill the respective descriptors. But, in this case what happens is, if there is 3rd client handshake occurred, its key and iv stored in a buffer. And when I give a Sslwrite in the 1st client, it used the last saved key and iv, but it is actually key and iv of 3 rd client. But I can download the file if I give get request through the last handshaked client. So what I can do is, save the key and iv of different clients in different buffers. If the SSLread/write from any client comes, then just offload the key and iv from the respective buffer. But for that, i need a unique id for each client, which must be the same for a client in the entire connection. How can i get the unique id. 
Beyond the parameters *in, *out, inl (in the case of plaintext/ cipher text offloading) and *ptr, *type, *arg (in the case of header/aad offload) only what I have is ctx. With this ctx, can i get a unique id or is there any way to solve this problem. 6. Didn't tried with Apache server. Thanks On Thu 13 Dec, 2018, 1:30 AM Michael Richardson > ASHIQUE CK wrote: > > We are using a Crypto Accelerator Engine to offload AESGCM and RSA > > parameters. Trying to connect multiple clients simultaneously with a > > single Nginx server, which is using this accelerator. The Key and IV > > You probably need to tell us: > > 1) which engine? did you write this engine? > 2) whose driver? > 3) what version of openssl? > 4) what version of nginx? > 5) how did you observe the problem you described? > 6) is it different for, for instance, apache? or some other server > software? > > > is passing only at handshake, and after handshake this set of key and > > IV is using for all encryption and decryption. So at Engine side, we > > are storing this Key and IV to a buffer and while > > encrypting/decrypting , this Key and IV is used from this buffer. But, > > while multiple client connects, the last saved Key/IV is getting for > > all clients. > > So, is there any way to get a unique ID foer each client connection ? > > > -- > openssl-users mailing list > To unsubscribe: https://mta.openssl.org/mailman/listinfo/openssl-users > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ckashiquekvk at gmail.com Thu Dec 13 05:16:18 2018 From: ckashiquekvk at gmail.com (ASHIQUE CK) Date: Thu, 13 Dec 2018 10:46:18 +0530 Subject: [openssl-users] Multiple client connection to Nginx server In-Reply-To: References: <28371.1544644778@localhost> Message-ID: 4. f-stack nginx server 1.11.10 On Thu, Dec 13, 2018 at 9:00 AM ASHIQUE CK wrote: > Hi, > 1. The engine that we wrote is by the reference of qat, is just an > interface which receives the openssl parameters of AES and RSA and offload > them to an FPGA hardware accelerator. > 2. > 3. Openssl 1.1.0 h > 4. Uses f-stack nginx 1.10.1 > 5. We ran nginx server which have a 1 Gb file in its root directory. Then > connected 3 clients to this server. These clients waits after handshake is > done. After I run 3rd client, I gave a Get request through 1 st client to > download that 1 gb file. But it showed error message, "decryption failed or > bad record mac". When I debugged using gdb, I understood that Tag > verification is getting failed. But the matter is, I am storing the Key and > IV at the time of handshake itself, to a buffer in my engine. When an > SSLRead or SSLWrite occur, I will copy the saved Key and Iv to fill the > respective descriptors. > But, in this case what happens is, if there is 3rd client handshake > occurred, its key and iv stored in a buffer. And when I give a Sslwrite in > the 1st client, it used the last saved key and iv, but it is actually key > and iv of 3 rd client. But I can download the file if I give get request > through the last handshaked client. > So what I can do is, save the key and iv of different clients in > different buffers. If the SSLread/write from any client comes, then just > offload the key and iv from the respective buffer. But for that, i need a > unique id for each client, which must be the same for a client in the > entire connection. > How can i get the unique id. 
Beyond the parameters *in, *out, inl (in > the case of plaintext/ cipher text offloading) and *ptr, *type, *arg (in > the case of header/aad offload) only what I have is ctx. With this ctx, can > i get a unique id or is there any way to solve this problem. > 6. Didn't tried with Apache server. > > Thanks > > On Thu 13 Dec, 2018, 1:30 AM Michael Richardson >> >> ASHIQUE CK wrote: >> > We are using a Crypto Accelerator Engine to offload AESGCM and RSA >> > parameters. Trying to connect multiple clients simultaneously with a >> > single Nginx server, which is using this accelerator. The Key and IV >> >> You probably need to tell us: >> >> 1) which engine? did you write this engine? >> 2) whose driver? >> 3) what version of openssl? >> 4) what version of nginx? >> 5) how did you observe the problem you described? >> 6) is it different for, for instance, apache? or some other server >> software? >> >> > is passing only at handshake, and after handshake this set of key and >> > IV is using for all encryption and decryption. So at Engine side, we >> > are storing this Key and IV to a buffer and while >> > encrypting/decrypting , this Key and IV is used from this buffer. But, >> > while multiple client connects, the last saved Key/IV is getting for >> > all clients. >> > So, is there any way to get a unique ID foer each client connection ? >> > >> -- >> openssl-users mailing list >> To unsubscribe: https://mta.openssl.org/mailman/listinfo/openssl-users >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From filipe.mfgfernandes at gmail.com Thu Dec 13 08:41:15 2018 From: filipe.mfgfernandes at gmail.com (Filipe Fernandes) Date: Thu, 13 Dec 2018 08:41:15 +0000 Subject: [openssl-users] Multiple client connection to Nginx server In-Reply-To: References: <28371.1544644778@localhost> Message-ID: Hi, Socket file descriptor is unique during the entire connection time. You could save the data using the fd as key to a hashtable entry. Regards Na(o) quinta, 13 de dez de 2018, 05:16, ASHIQUE CK escreveu: > 4. f-stack nginx server 1.11.10 > > On Thu, Dec 13, 2018 at 9:00 AM ASHIQUE CK wrote: > >> Hi, >> 1. The engine that we wrote is by the reference of qat, is just an >> interface which receives the openssl parameters of AES and RSA and offload >> them to an FPGA hardware accelerator. >> 2. >> 3. Openssl 1.1.0 h >> 4. Uses f-stack nginx 1.10.1 >> 5. We ran nginx server which have a 1 Gb file in its root directory. Then >> connected 3 clients to this server. These clients waits after handshake is >> done. After I run 3rd client, I gave a Get request through 1 st client to >> download that 1 gb file. But it showed error message, "decryption failed or >> bad record mac". When I debugged using gdb, I understood that Tag >> verification is getting failed. But the matter is, I am storing the Key and >> IV at the time of handshake itself, to a buffer in my engine. When an >> SSLRead or SSLWrite occur, I will copy the saved Key and Iv to fill the >> respective descriptors. >> But, in this case what happens is, if there is 3rd client handshake >> occurred, its key and iv stored in a buffer. And when I give a Sslwrite in >> the 1st client, it used the last saved key and iv, but it is actually key >> and iv of 3 rd client. But I can download the file if I give get request >> through the last handshaked client. >> So what I can do is, save the key and iv of different clients in >> different buffers. 
If the SSLread/write from any client comes, then just >> offload the key and iv from the respective buffer. But for that, i need a >> unique id for each client, which must be the same for a client in the >> entire connection. >> How can i get the unique id. Beyond the parameters *in, *out, inl (in >> the case of plaintext/ cipher text offloading) and *ptr, *type, *arg (in >> the case of header/aad offload) only what I have is ctx. With this ctx, can >> i get a unique id or is there any way to solve this problem. >> 6. Didn't tried with Apache server. >> >> Thanks >> >> On Thu 13 Dec, 2018, 1:30 AM Michael Richardson > >>> >>> ASHIQUE CK wrote: >>> > We are using a Crypto Accelerator Engine to offload AESGCM and RSA >>> > parameters. Trying to connect multiple clients simultaneously with a >>> > single Nginx server, which is using this accelerator. The Key and IV >>> >>> You probably need to tell us: >>> >>> 1) which engine? did you write this engine? >>> 2) whose driver? >>> 3) what version of openssl? >>> 4) what version of nginx? >>> 5) how did you observe the problem you described? >>> 6) is it different for, for instance, apache? or some other server >>> software? >>> >>> > is passing only at handshake, and after handshake this set of key and >>> > IV is using for all encryption and decryption. So at Engine side, we >>> > are storing this Key and IV to a buffer and while >>> > encrypting/decrypting , this Key and IV is used from this buffer. But, >>> > while multiple client connects, the last saved Key/IV is getting for >>> > all clients. >>> > So, is there any way to get a unique ID foer each client connection ? >>> > >>> -- >>> openssl-users mailing list >>> To unsubscribe: https://mta.openssl.org/mailman/listinfo/openssl-users >>> >> -- > openssl-users mailing list > To unsubscribe: https://mta.openssl.org/mailman/listinfo/openssl-users > -------------- next part -------------- An HTML attachment was scrubbed... URL: From prateep.kumar at broadcom.com Thu Dec 13 08:56:50 2018 From: prateep.kumar at broadcom.com (Prateep Kumar) Date: Thu, 13 Dec 2018 14:26:50 +0530 Subject: [openssl-users] Delay in converting CRL to binary data Message-ID: Hello, We are converting a *CRL* (Size *3.687 MB*) to binary data using *X509_CRL_get_REVOKED()* and it is taking *167.977* seconds to process the same. Please let us know if this is an expected behavior or something should be done to improve the above observation. With Regards, Prateep -------------- next part -------------- An HTML attachment was scrubbed... URL: From darshanmody at avaya.com Thu Dec 13 11:44:31 2018 From: darshanmody at avaya.com (Mody, Darshan (Darshan)) Date: Thu, 13 Dec 2018 11:44:31 +0000 Subject: [openssl-users] Openssl version in RHEL 8 Message-ID: Hi I am checking RHEL 8 feasibility on our systems. I observe that openssl fips module [root at puoasvorsr07 ~]# openssl version OpenSSL 1.1.1 FIPS 11 Sep 2018 [root at puoasvorsr07 ~]# My query is openssl 1.1.1 FIPS is also in the beta phase? Thanks Darshan -------------- next part -------------- An HTML attachment was scrubbed... URL: From rsalz at akamai.com Thu Dec 13 12:49:20 2018 From: rsalz at akamai.com (Salz, Rich) Date: Thu, 13 Dec 2018 12:49:20 +0000 Subject: [openssl-users] Openssl version in RHEL 8 Message-ID: * [root at puoasvorsr07 ~]# openssl version * OpenSSL 1.1.1 FIPS 11 Sep 2018 Is that a version you built yourself, or from RedHat? I believe it is RedHat?s version, which did their own FIPS work. The OpenSSL FIPS module is starting development. 
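Returning to the engine key/IV thread above: the usual way to avoid the "last handshake wins" problem is to keep the key and IV in the per-context cipher_data that OpenSSL allocates for every EVP_CIPHER_CTX (one per connection and direction), rather than in a single buffer inside the engine; this matches Jakob's point that the structure handed to the engine is already unique per key. The sketch below assumes a hypothetical GCM cipher implementation: the struct, the function names and fpga_offload_gcm() are placeholders, and the AEAD ctrl/tag handling a real implementation needs is omitted.

#include <string.h>
#include <openssl/evp.h>

/* Provided by the (hypothetical) FPGA driver. */
extern int fpga_offload_gcm(const unsigned char *key, int key_len,
                            const unsigned char *iv, int iv_len,
                            int enc, const unsigned char *in, size_t inl,
                            unsigned char *out);

typedef struct {
    unsigned char key[32];
    unsigned char iv[16];
    int key_len;
    int iv_len;
} MY_GCM_CTX;   /* sized via EVP_CIPHER_meth_set_impl_ctx_size() */

static int my_gcm_init(EVP_CIPHER_CTX *ctx, const unsigned char *key,
                       const unsigned char *iv, int enc)
{
    MY_GCM_CTX *gctx = EVP_CIPHER_CTX_get_cipher_data(ctx);

    /* Each EVP_CIPHER_CTX gets its own gctx, so a later handshake
     * cannot overwrite the key/IV of an earlier connection. */
    if (key != NULL) {
        gctx->key_len = EVP_CIPHER_CTX_key_length(ctx);
        memcpy(gctx->key, key, gctx->key_len);
    }
    if (iv != NULL) {
        gctx->iv_len = EVP_CIPHER_CTX_iv_length(ctx);
        memcpy(gctx->iv, iv, gctx->iv_len);
    }
    return 1;
}

static int my_gcm_do_cipher(EVP_CIPHER_CTX *ctx, unsigned char *out,
                            const unsigned char *in, size_t inl)
{
    MY_GCM_CTX *gctx = EVP_CIPHER_CTX_get_cipher_data(ctx);

    /* Always hand the hardware the key/IV belonging to *this* context,
     * never a globally shared copy. */
    return fpga_offload_gcm(gctx->key, gctx->key_len,
                            gctx->iv, gctx->iv_len,
                            EVP_CIPHER_CTX_encrypting(ctx), in, inl, out);
}

With per-context storage there is no need for a separate client ID at the engine level: the EVP_CIPHER_CTX pointer itself already distinguishes the connections, and the socket-descriptor idea only becomes necessary if state has to live outside the cipher_data.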
-------------- next part -------------- An HTML attachment was scrubbed... URL: From uri at ll.mit.edu Thu Dec 13 19:23:15 2018 From: uri at ll.mit.edu (Blumenthal, Uri - 0553 - MITLL) Date: Thu, 13 Dec 2018 19:23:15 +0000 Subject: [openssl-users] FW: Dgst sigopt parameters? Message-ID: <09B57D48-BDA1-4DB6-8558-E25B10B4E9FC@ll.mit.edu> I still would like to know where all the acceptable "dgst -sigopt" parameters are described for RSA and ECDSA. Google search and scouring openssl.org manual pages did not bring me anything. ?On 8/24/17, 5:42 PM, "Blumenthal, Uri - 0553 - MITLL" wrote: OpenSSL dgst manual page only days that sigopt value are algorithm-specific. Where are they described for ECDSA and RSA PSS? Thanks! Regards, Uri -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 5249 bytes Desc: not available URL: From openssl-users at dukhovni.org Thu Dec 13 22:07:14 2018 From: openssl-users at dukhovni.org (Viktor Dukhovni) Date: Thu, 13 Dec 2018 17:07:14 -0500 Subject: [openssl-users] FW: Dgst sigopt parameters? In-Reply-To: <09B57D48-BDA1-4DB6-8558-E25B10B4E9FC@ll.mit.edu> References: <09B57D48-BDA1-4DB6-8558-E25B10B4E9FC@ll.mit.edu> Message-ID: <2A61B052-4052-4BA0-B785-CAA5F3F919B4@dukhovni.org> > On Dec 13, 2018, at 2:23 PM, Blumenthal, Uri - 0553 - MITLL wrote: > > I still would like to know where all the acceptable "dgst -sigopt" parameters are described for RSA and ECDSA. > > Google search and scouring openssl.org manual pages did not bring me anything. Take a look at the "-pkeyopt" option of pkeyutl(1). I believe these are the same options. If we ignore key generation parameters, all I'm finding is: dh: dh_pad rsa: rsa_padding_mode rsa: rsa_pss_saltlen rsa: rsa_mgf1_md rsa: rsa_oaep_md rsa: rsa_oaep_label And "dh_pad" many not be applicable to dgst(1). -- Viktor. From uri at ll.mit.edu Thu Dec 13 22:10:27 2018 From: uri at ll.mit.edu (Blumenthal, Uri - 0553 - MITLL) Date: Thu, 13 Dec 2018 22:10:27 +0000 Subject: [openssl-users] FW: Dgst sigopt parameters? In-Reply-To: <2A61B052-4052-4BA0-B785-CAA5F3F919B4@dukhovni.org> References: <09B57D48-BDA1-4DB6-8558-E25B10B4E9FC@ll.mit.edu> <2A61B052-4052-4BA0-B785-CAA5F3F919B4@dukhovni.org> Message-ID: <039C7D3D-046D-452E-A64C-068B988CA136@ll.mit.edu> Viktor, Thank you! So, I should expect the format of the sigopt parameters to be the same as of pkeyopt? That's very good to know. I wish the man page mentioned this. ;-) Regards, Uri Sent from my iPhone On Dec 13, 2018, at 17:08, Viktor Dukhovni wrote: >> On Dec 13, 2018, at 2:23 PM, Blumenthal, Uri - 0553 - MITLL wrote: >> >> I still would like to know where all the acceptable "dgst -sigopt" parameters are described for RSA and ECDSA. >> >> Google search and scouring openssl.org manual pages did not bring me anything. > > Take a look at the "-pkeyopt" option of pkeyutl(1). I believe these are the > same options. > > If we ignore key generation parameters, all I'm finding is: > > dh: dh_pad > rsa: rsa_padding_mode > rsa: rsa_pss_saltlen > rsa: rsa_mgf1_md > rsa: rsa_oaep_md > rsa: rsa_oaep_label > > And "dh_pad" many not be applicable to dgst(1). > > -- > Viktor. > > -- > openssl-users mailing list > To unsubscribe: https://mta.openssl.org/mailman/listinfo/openssl-users -------------- next part -------------- A non-text attachment was scrubbed... 
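For what it's worth, those option names appear to be the strings that both dgst -sigopt and pkeyutl -pkeyopt hand to EVP_PKEY_CTX_ctrl_str(), so they can also be exercised directly from C. A small RSA-PSS signing sketch (error handling trimmed; the key is assumed to be loaded elsewhere, and the caller sets *siglen to the size of the sig buffer beforehand):

    #include <openssl/evp.h>

    int sign_pss_sha256(EVP_PKEY *pkey, const unsigned char *msg, size_t msglen,
                        unsigned char *sig, size_t *siglen)
    {
        EVP_MD_CTX *mctx = EVP_MD_CTX_new();
        EVP_PKEY_CTX *pctx = NULL;   /* owned by mctx, do not free separately */

        int ok = EVP_DigestSignInit(mctx, &pctx, EVP_sha256(), NULL, pkey) == 1
              && EVP_PKEY_CTX_ctrl_str(pctx, "rsa_padding_mode", "pss") > 0
              && EVP_PKEY_CTX_ctrl_str(pctx, "rsa_pss_saltlen", "-1") > 0  /* -1: salt = digest length */
              && EVP_PKEY_CTX_ctrl_str(pctx, "rsa_mgf1_md", "sha256") > 0
              && EVP_DigestSignUpdate(mctx, msg, msglen) == 1
              && EVP_DigestSignFinal(mctx, sig, siglen) == 1;

        EVP_MD_CTX_free(mctx);
        return ok;
    }

The equivalent command line should be along the lines of "openssl dgst -sha256 -sign key.pem -sigopt rsa_padding_mode:pss -sigopt rsa_pss_saltlen:-1", though that is untested here, so double-check it against your build.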
Name: smime.p7s Type: application/pkcs7-signature Size: 5801 bytes Desc: not available URL: From darshanmody at avaya.com Fri Dec 14 07:00:00 2018 From: darshanmody at avaya.com (Mody, Darshan (Darshan)) Date: Fri, 14 Dec 2018 07:00:00 +0000 Subject: [openssl-users] Openssl version in RHEL 8 In-Reply-To: References: Message-ID: Thanks Rich Warm Regards Darshan From: openssl-users On Behalf Of Salz, Rich via openssl-users Sent: Thursday, December 13, 2018 6:19 PM To: openssl-users at openssl.org Subject: Re: [openssl-users] Openssl version in RHEL 8 * [root at puoasvorsr07 ~]# openssl version * OpenSSL 1.1.1 FIPS 11 Sep 2018 Is that a version you built yourself, or from RedHat? I believe it is RedHat?s version, which did their own FIPS work. The OpenSSL FIPS module is starting development. -------------- next part -------------- An HTML attachment was scrubbed... URL: From bmeeker51 at buckeye-express.com Fri Dec 14 18:25:29 2018 From: bmeeker51 at buckeye-express.com (bmeeker51 at buckeye-express.com) Date: Fri, 14 Dec 2018 13:25:29 -0500 Subject: [openssl-users] AssAccess was passed with no amendments In-Reply-To: <2F3815AC-C4E5-4A8A-85A9-091B80353559@foocrypt.net> References: <2F3815AC-C4E5-4A8A-85A9-091B80353559@foocrypt.net> Message-ID: <63e2a48b65b174296ba6fd8299d6f065@buckeye-express.com> the silence is deafening On 2018-12-06 04:47, openssl at foocrypt.net wrote: > Does OpenSSL have a policy stance on government enforced back doors ? > > -- > > Regards, > > Mark A. Lane > > Cryptopocalypse NOW 01 04 2016 > > Volumes 0.0 -> 10.0 Now available through iTunes - iBooks @ > https://itunes.apple.com/au/author/mark-a.-lane/id1100062966?mt=11 > > ? Mark A. Lane 1980 - 2018, All Rights Reserved. > ? FooCrypt 1980 - 2018, All Rights Reserved. > ? FooCrypt, A Tale of Cynical Cyclical Encryption. 1980 - 2018, All > Rights Reserved. > ? Cryptopocalypse 1980 - 2018, All Rights Reserved. From Michael.Wojcik at microfocus.com Fri Dec 14 19:49:00 2018 From: Michael.Wojcik at microfocus.com (Michael Wojcik) Date: Fri, 14 Dec 2018 19:49:00 +0000 Subject: [openssl-users] AssAccess was passed with no amendments In-Reply-To: <63e2a48b65b174296ba6fd8299d6f065@buckeye-express.com> References: <2F3815AC-C4E5-4A8A-85A9-091B80353559@foocrypt.net> <63e2a48b65b174296ba6fd8299d6f065@buckeye-express.com> Message-ID: > On 2018-12-06 04:47, openssl at foocrypt.net wrote: > > Does OpenSSL have a policy stance on government enforced back doors ? > > > From: openssl-users [mailto:openssl-users-bounces at openssl.org] On Behalf > Of bmeeker51 at buckeye-express.com > Sent: Friday, December 14, 2018 13:25 > > the silence is deafening "OpenSSL" doesn't have a "policy stance" on anything. It's a software package. This is openssl-users, not openssl-official-opinions-of-the-OpenSSL-Foundation. Or openssl-political-discussions, for that matter. I imagine many people who are subscribed to this list are not in favor of the legislation in question. However, that is not a subject pertinent to the list, and openssl-users remains valuable to its subscribers in large part because most of the traffic remains on-topic. There are plenty of forums where people have expressed, and continue to express, their opinions of the Assistance and Access Bill. That includes numerous cryptography and security experts, and representatives of organizations which are active in those areas. Some random posts in openssl-users will not materially change the course or weight of that discussion. 
-- Michael Wojcik Distinguished Engineer, Micro Focus From bmeeker51 at buckeye-express.com Fri Dec 14 22:42:14 2018 From: bmeeker51 at buckeye-express.com (bmeeker51 at buckeye-express.com) Date: Fri, 14 Dec 2018 17:42:14 -0500 Subject: [openssl-users] AssAccess was passed with no amendments In-Reply-To: References: <2F3815AC-C4E5-4A8A-85A9-091B80353559@foocrypt.net> <63e2a48b65b174296ba6fd8299d6f065@buckeye-express.com> Message-ID: Though you could infer my opinion, I was not trying to create a political debate as you allude. I'm sure many users would agree that the A&A bill is profoundly relevant and "on-topic" considering OpenSSL has an Australian developer. I simply wanted a clear statement so I can make an informed decision whether or not I should use OpenSSL in future projects. I now have my answer. Thank you. On 2018-12-14 14:49, Michael Wojcik wrote: >> On 2018-12-06 04:47, openssl at foocrypt.net wrote: >> > Does OpenSSL have a policy stance on government enforced back doors ? >> > >> From: openssl-users [mailto:openssl-users-bounces at openssl.org] On >> Behalf >> Of bmeeker51 at buckeye-express.com >> Sent: Friday, December 14, 2018 13:25 >> >> the silence is deafening > > "OpenSSL" doesn't have a "policy stance" on anything. It's a software > package. > > This is openssl-users, not > openssl-official-opinions-of-the-OpenSSL-Foundation. Or > openssl-political-discussions, for that matter. > > I imagine many people who are subscribed to this list are not in favor > of the legislation in question. However, that is not a subject > pertinent to the list, and openssl-users remains valuable to its > subscribers in large part because most of the traffic remains > on-topic. > > There are plenty of forums where people have expressed, and continue > to express, their opinions of the Assistance and Access Bill. That > includes numerous cryptography and security experts, and > representatives of organizations which are active in those areas. Some > random posts in openssl-users will not materially change the course or > weight of that discussion. > > -- > Michael Wojcik > Distinguished Engineer, Micro Focus From openssl-users at dukhovni.org Fri Dec 14 23:42:44 2018 From: openssl-users at dukhovni.org (Viktor Dukhovni) Date: Fri, 14 Dec 2018 18:42:44 -0500 Subject: [openssl-users] AssAccess was passed with no amendments In-Reply-To: References: <2F3815AC-C4E5-4A8A-85A9-091B80353559@foocrypt.net> <63e2a48b65b174296ba6fd8299d6f065@buckeye-express.com> Message-ID: <37E89E05-F7DC-436F-A12D-1C3A4D3576C9@dukhovni.org> > On Dec 14, 2018, at 5:42 PM, bmeeker51 at buckeye-express.com wrote: > > I simply wanted a clear statement so I can make an informed decision whether or not I should use OpenSSL in future projects. I now have my answer. Thank you. This is not the right forum for that question. The bill is too new for a policy response to have been considered or agreed. OpenSSL has committers from many countries. OpenSSH also has an Australian maintainer, have they published a policy? I am sure there are Australian contributors to Linux, NetBSD, FreeBSD, OpenBSD, Android, ... Avoiding all taint from anything touched by Australia will not be easy. -- Viktor. 
From openssl at foocrypt.net Sat Dec 15 00:19:36 2018 From: openssl at foocrypt.net (openssl at foocrypt.net) Date: Sat, 15 Dec 2018 11:19:36 +1100 Subject: [openssl-users] AssAccess was passed with no amendments In-Reply-To: <37E89E05-F7DC-436F-A12D-1C3A4D3576C9@dukhovni.org> References: <2F3815AC-C4E5-4A8A-85A9-091B80353559@foocrypt.net> <63e2a48b65b174296ba6fd8299d6f065@buckeye-express.com> <37E89E05-F7DC-436F-A12D-1C3A4D3576C9@dukhovni.org> Message-ID: Rather than going down the political or policy line, perhaps it may be prudent to discuss the technical solutions to testing the engine, regardless of the OS it is running on. How does one validate and test the engines during / after compile to ensure their ?trust? ? > On 15 Dec 2018, at 10:42, Viktor Dukhovni wrote: > >> On Dec 14, 2018, at 5:42 PM, bmeeker51 at buckeye-express.com wrote: >> >> I simply wanted a clear statement so I can make an informed decision whether or not I should use OpenSSL in future projects. I now have my answer. Thank you. > > This is not the right forum for that question. The bill is too > new for a policy response to have been considered or agreed. > > OpenSSL has committers from many countries. OpenSSH also > has an Australian maintainer, have they published a policy? > > I am sure there are Australian contributors to Linux, NetBSD, > FreeBSD, OpenBSD, Android, ... > > Avoiding all taint from anything touched by Australia will not > be easy. > > -- > Viktor. > > -- > openssl-users mailing list > To unsubscribe: https://mta.openssl.org/mailman/listinfo/openssl-users From jgh at wizmail.org Sat Dec 15 15:00:55 2018 From: jgh at wizmail.org (Jeremy Harris) Date: Sat, 15 Dec 2018 15:00:55 +0000 Subject: [openssl-users] client-side ocsp stapling Message-ID: <2e76a5aa-34ea-2b09-f510-43f7845e7af1@wizmail.org> Hi, The manpage for SSL_CTX_set_tlsext_status_cb() describes the calls in terms of the client requesting stapling from the server, Is the reverse possible - the server requesting stapling by the client? Should the same calls be used, by the alternate ends, or if not, what? This arose in the context of must-staple certs being used as client certs. -- Thanks, Jeremy From yang.yang at baishancloud.com Mon Dec 17 04:27:26 2018 From: yang.yang at baishancloud.com (Paul Yang) Date: Mon, 17 Dec 2018 12:27:26 +0800 Subject: [openssl-users] Openssl speed command for AESGCM In-Reply-To: References: Message-ID: <222BA3CE-1E45-4AC6-A96D-06AADFA4EF44@baishancloud.com> Yes, try something like: openssl speed -evp aes-128-gcm > On Nov 23, 2018, at 13:11, ASHIQUE CK wrote: > > Hi, > Does Openssl has speed command for AESGCM ? > -- > openssl-users mailing list > To unsubscribe: https://mta.openssl.org/mailman/listinfo/openssl-users From openssl at foocrypt.net Mon Dec 17 05:59:31 2018 From: openssl at foocrypt.net (openssl at foocrypt.net) Date: Mon, 17 Dec 2018 16:59:31 +1100 Subject: [openssl-users] AssAccess was passed with no amendments In-Reply-To: References: <2F3815AC-C4E5-4A8A-85A9-091B80353559@foocrypt.net> <63e2a48b65b174296ba6fd8299d6f065@buckeye-express.com> <37E89E05-F7DC-436F-A12D-1C3A4D3576C9@dukhovni.org> Message-ID: <727FC0EE-507B-4B87-B906-CCA9110C972B@foocrypt.net> Just in time for xmas, Second byte of T.O.L.A. via another P.J.C.I.S. 
@ https://www.aph.gov.au/Parliamentary_Business/Committees/Joint/Intelligence_and_Security/ReviewofTOLAAct > On 15 Dec 2018, at 11:19, openssl at foocrypt.net wrote: > > Rather than going down the political or policy line, perhaps it may be prudent to discuss the technical solutions to testing the engine, regardless of the OS it is running on. > > How does one validate and test the engines during / after compile to ensure their ?trust? ? > > > >> On 15 Dec 2018, at 10:42, Viktor Dukhovni > wrote: >> >>> On Dec 14, 2018, at 5:42 PM, bmeeker51 at buckeye-express.com wrote: >>> >>> I simply wanted a clear statement so I can make an informed decision whether or not I should use OpenSSL in future projects. I now have my answer. Thank you. >> >> This is not the right forum for that question. The bill is too >> new for a policy response to have been considered or agreed. >> >> OpenSSL has committers from many countries. OpenSSH also >> has an Australian maintainer, have they published a policy? >> >> I am sure there are Australian contributors to Linux, NetBSD, >> FreeBSD, OpenBSD, Android, ... >> >> Avoiding all taint from anything touched by Australia will not >> be easy. >> >> -- >> Viktor. >> >> -- >> openssl-users mailing list >> To unsubscribe: https://mta.openssl.org/mailman/listinfo/openssl-users > ? Regards, Mark A. Lane Be Protected, Get ?.?.. The FooKey METHOD : http://foocrypt.net/the-fookey-method The common flaws in ALL encryption technologies to date are : 1. Typing on a KeyBoard to enter the password 2. Clicking on the Mouse / Pointer device that controls the location of the cursor 3. Some person or device looking / recording your screen as you type the password 4. The human developing a password that is easily guess, or can be brute forced due to its length 5. Sharing the password with a third party to decrypt the data 6. Storing the encrypted data in a secure location so no unauthorised access can be made to either the key(s) to decrypt the data or the encrypted data itself 7. The Right Wing Policies of the Liberal Party of Australia, being forced into law so they can all make it to the xmas party?! FooCrypt, A Tale Of Cynical Cyclical Encryption, takes away the above ?BAD GUYS? by providing you with software engineered to alleviate all the above. ? Mark A. Lane 1980 - 2017, All Rights Reserved. ? FooCrypt 1980 - 2017, All Rights Reserved. ? FooCrypt, A Tale of Cynical Cyclical Encryption. 1980 - 2017, All Rights Reserved. ? Cryptopocalypse 1980 - 2017, All Rights Reserved. -------------- next part -------------- An HTML attachment was scrubbed... URL: From aerowolf at gmail.com Mon Dec 17 06:32:30 2018 From: aerowolf at gmail.com (Kyle Hamilton) Date: Mon, 17 Dec 2018 00:32:30 -0600 Subject: [openssl-users] AssAccess was passed with no amendments In-Reply-To: References: <2F3815AC-C4E5-4A8A-85A9-091B80353559@foocrypt.net> <63e2a48b65b174296ba6fd8299d6f065@buckeye-express.com> <37E89E05-F7DC-436F-A12D-1C3A4D3576C9@dukhovni.org> Message-ID: Getting the key for any given communication from OpenSSL is definitely doable if you're not using an engine. If you are using an engine, it may or may not be even possible. In any case, maintaining that key once you have it is definitely out of scope of OpenSSL. As an app developer subject to that law, it is up to you to figure out a way to keep it available for compliance purposes. I'm not part of the OpenSSL team, so I have no capacity to make a policy statement on their behalf. 
However, I'm pretty sure that OpenSSL is not going to alter its API or its library design to make it easier for a bolt-on AusAssAccess module to be written that directly queries the state of the library or its structures. That said, in the past it's been bandied about that an originating software package subject to the law could encrypt the symmetric key not only to the intended recipient, but also to a hardcoded compliance key. A receiving software package subject to the law would have to modify its receipt process to store a copy of the symmetric key elsewhere when it first decrypted a message -- probably also encrypted to a hardcoded compliance key. The downside is "what happens when that compliance key is compromised"? (or, for that matter, if the compliance key is lost.) And it will be compromised or lost, someday, some way. That's the reason so many people have been against backdoors like this -- the security of the system is good, but the security of human beings tasked with maintaining the security of the system is nowhere near as good. -Kyle H On Fri, Dec 14, 2018, 18:20 openssl at foocrypt.net Rather than going down the political or policy line, perhaps it may be > prudent to discuss the technical solutions to testing the engine, > regardless of the OS it is running on. > > How does one validate and test the engines during / after compile to > ensure their ?trust? ? > > > > > On 15 Dec 2018, at 10:42, Viktor Dukhovni > wrote: > > > >> On Dec 14, 2018, at 5:42 PM, bmeeker51 at buckeye-express.com wrote: > >> > >> I simply wanted a clear statement so I can make an informed decision > whether or not I should use OpenSSL in future projects. I now have my > answer. Thank you. > > > > This is not the right forum for that question. The bill is too > > new for a policy response to have been considered or agreed. > > > > OpenSSL has committers from many countries. OpenSSH also > > has an Australian maintainer, have they published a policy? > > > > I am sure there are Australian contributors to Linux, NetBSD, > > FreeBSD, OpenBSD, Android, ... > > > > Avoiding all taint from anything touched by Australia will not > > be easy. > > > > -- > > Viktor. > > > > -- > > openssl-users mailing list > > To unsubscribe: https://mta.openssl.org/mailman/listinfo/openssl-users > > -- > openssl-users mailing list > To unsubscribe: https://mta.openssl.org/mailman/listinfo/openssl-users > On Fri, Dec 14, 2018, 18:20 openssl at foocrypt.net Rather than going down the political or policy line, perhaps it may be > prudent to discuss the technical solutions to testing the engine, > regardless of the OS it is running on. > > How does one validate and test the engines during / after compile to > ensure their ?trust? ? > > > > > On 15 Dec 2018, at 10:42, Viktor Dukhovni > wrote: > > > >> On Dec 14, 2018, at 5:42 PM, bmeeker51 at buckeye-express.com wrote: > >> > >> I simply wanted a clear statement so I can make an informed decision > whether or not I should use OpenSSL in future projects. I now have my > answer. Thank you. > > > > This is not the right forum for that question. The bill is too > > new for a policy response to have been considered or agreed. > > > > OpenSSL has committers from many countries. OpenSSH also > > has an Australian maintainer, have they published a policy? > > > > I am sure there are Australian contributors to Linux, NetBSD, > > FreeBSD, OpenBSD, Android, ... > > > > Avoiding all taint from anything touched by Australia will not > > be easy. > > > > -- > > Viktor. 
> > > > -- > > openssl-users mailing list > > To unsubscribe: https://mta.openssl.org/mailman/listinfo/openssl-users > > -- > openssl-users mailing list > To unsubscribe: https://mta.openssl.org/mailman/listinfo/openssl-users > -------------- next part -------------- An HTML attachment was scrubbed... URL: From shreyabhandare25 at gmail.com Mon Dec 17 08:29:20 2018 From: shreyabhandare25 at gmail.com (Shreya Bhandare) Date: Mon, 17 Dec 2018 13:59:20 +0530 Subject: [openssl-users] How to find the right bug Message-ID: Hello, i am very new to openssl and contributing to large code bases in general, I did my first contribution to openssl which got me familiar with the process etc. It is time for me to dig deeper and find a bug that actually helps me understand some part of code base and I'm having a hard time to actually find such a bug, any help in pointing out a bug that that any of you have come across, that you think would be a good start or something that you're not finding time to do, I would be happy to do it (with a little help). Or any help on pointers on how to find the right bug for you when you don't know much about the code base would also be very helpful, i wouldn't want to take up much time of anybody :) Also i wans't sure which of the mailers was the more appropriate one to post this question so adding both, won't happen the next time! Thanks, Shreya -------------- next part -------------- An HTML attachment was scrubbed... URL: From Erwann.Abalea at docusign.com Mon Dec 17 10:41:20 2018 From: Erwann.Abalea at docusign.com (Erwann Abalea) Date: Mon, 17 Dec 2018 10:41:20 +0000 Subject: [openssl-users] RSA Public Key error In-Reply-To: References: <28510262-6708-4CB1-B070-2CA5128923EC@docusign.com> Message-ID: <3611CDB1-E3F6-4787-88F1-51C1998C831E@docusign.com> Bonjour, Without knowing what functions you?re calling when you try to encrypt data using the key Key3_wo16, I can only guess. And I?m guessing that you?re calling a function that expects to find a public key encoded in a SubjectPublicKeyInfo structure, and since this Key3_wo16 object is not such a structure, the function fails. What you can do is : * Take your public keys (for example Key2_w16) * Check that the first 16 bytes are what you expect to have * Pass the remainder of the file to the d2i_RSAPublicKey() function * Use the resulting RSA public key the way you want Cordialement, Erwann Abalea De : prithiraj das Date : lundi 17 d?cembre 2018 ? 08:23 ? : Erwann Abalea , "openssl-users at openssl.org" Objet : Re: [openssl-users] RSA Public Key error Hi Erwann/All, Thank you for your earlier response. I have done a couple of tests on the originally generated 2048-bit RSA public key (let's say Key1_org) and the key file containing 16 byte custom information after removing 24 bytes from the originally generated key file and prepending those 16 bytes (let's say Key2_w16). For my experiment(s), I also removed those 16 bytes from the key Key2_w16 (which contains custom information) and the rest of the bytes were written into a file. Lets name this keyfile Key3_wo16. I believe the presence of custom 16 byte information resulted in asn1parse encoding/decoding errors as mentioned in the previous mail.. So now, Key3_wo16 = Key2_w16 - the first 16 bytes = Key1_org - the first 24 bytes. And I performed asn1parse on Key3_wo16. The output of asn1parse on this key is shown in the image file asn1parse of 24 byte removed.jpg which is attached in the mail. 
And I also performed two asn1parse strparse operations on the originally generated public key Key1_org with strparse offsets 19 and 24. I have attached screenshots of the same with names asn1parse strparse 19.jpg and asn1parse strparse 24.jpg respectively. The outputs in all cases are the same. In the screenshots, the (removed/blurred) respective INTEGER values in all screenshots are the same. What I want to know is why OpenSSL is throwing an error when I try to encrypt data using the key Key3_wo16. The same command used for encryption works when the key Key1_org is used. I believe the INTEGER values contain the modulus and exponent information, so I was expecting the encryption to be successful, but OpenSSL fails to accept this key. Can anyone please tell me what is going wrong here?

Apart from the solution suggested by Erwann, can anyone please suggest an alternative solution, as we need to work with Key2_w16 (the key containing the custom 16-byte information after removing the originally present first 24 bytes)? That is the only keyfile received by us.

Thanks and Regards, Prithiraj

On Wed, 12 Dec 2018 at 12:32, Erwann Abalea via openssl-users > wrote:

Bonjour,

Assuming the first 24 bytes you're talking about are the very beginning of the SPKI structure (that is, the enclosing SEQUENCE, and the AlgorithmIdentifier), that means you've replaced up to the first byte of the BITSTRING containing the public key (this byte indicates the number of unused bits) for a 2048-bit RSA key with 16 custom bytes. That's perfectly normal for OpenSSL to refuse to load that beast, and for asn1parse to return errors (the first bytes do not represent a correct DER encoding of anything). Think of it as "I took a Jpeg file, replaced some bytes at the beginning by my own, and now I can't open the file again". Those bytes are there for a reason.

A quick solution would be to *add* your 16 bytes before the public key, and remove them when passing the rest of the bytes to OpenSSL.

Cordialement, Erwann Abalea

De : openssl-users > au nom de prithiraj das > Répondre à : "openssl-users at openssl.org" > Date : mercredi 12 décembre 2018 à 08:08 À : "openssl-users at openssl.org" > Objet : [openssl-users] RSA Public Key error

Hi, I have an RSA public key (PKCS#1 v1.5) that I have obtained from somewhere. That key has been obtained after removing the first 24 bytes from the originally generated RSA public key. Those 24 bytes were replaced by some custom 16-byte information which is being used as some sort of identifier in some future task, and those 16 bytes play no role in encryption. OpenSSL fails to read this key. asn1parse shows a parsing error and, most importantly, RSA encryption in OpenSSL using this key fails. The untampered version of the RSA public key generated from the same source and containing the original 24 bytes at the beginning of the key is successfully read by OpenSSL, and RSA encryption using that key is also successful in OpenSSL. But our requirement is to use the first key containing the custom 16-byte information.

My understanding is that the first 24 bytes of an RSA public key following the PKCS standards do not contain the modulus and exponent details required for RSA encryption. But OpenSSL seems to require these 24 bytes for encryption. Can someone please confirm what kind of information is present in the first 24 bytes of an RSA public key, and/or why does OpenSSL need it?
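For illustration, a minimal sketch of what Erwann suggests above; the 16-byte prefix length and the 4096-byte buffer are assumptions, and error handling is trimmed. The idea is simply to skip the custom prefix and hand the remaining PKCS#1 RSAPublicKey bytes to d2i_RSAPublicKey(), then wrap the result in an EVP_PKEY:

    #include <stdio.h>
    #include <openssl/rsa.h>
    #include <openssl/x509.h>
    #include <openssl/evp.h>

    /* Key2_w16 is assumed to be: 16 custom bytes, then the PKCS#1
     * RSAPublicKey left over after stripping the 24-byte SPKI header */
    EVP_PKEY *load_key2_w16(const char *path)
    {
        unsigned char buf[4096];
        const unsigned char *p;
        size_t n;
        RSA *rsa;
        EVP_PKEY *pkey;

        FILE *f = fopen(path, "rb");
        if (f == NULL)
            return NULL;
        n = fread(buf, 1, sizeof(buf), f);
        fclose(f);
        if (n <= 16)
            return NULL;

        p = buf + 16;                           /* skip the custom prefix */
        rsa = d2i_RSAPublicKey(NULL, &p, (long)(n - 16));
        if (rsa == NULL)
            return NULL;

        pkey = EVP_PKEY_new();
        EVP_PKEY_assign_RSA(pkey, rsa);         /* pkey now owns rsa */
        return pkey;
    }

The returned EVP_PKEY can then be used with EVP_PKEY_encrypt(); it cannot be fed to code paths that expect a full SubjectPublicKeyInfo, which appears to be why the stripped file is rejected by the stock commands.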
If possible, please suggest a solution to work with that RSA public key containing custom 16 byte information at the beginning of the key. Thanks and Regards, Prithiraj -- openssl-users mailing list To unsubscribe: https://mta.openssl.org/mailman/listinfo/openssl-users -------------- next part -------------- An HTML attachment was scrubbed... URL: From openssl at foocrypt.net Mon Dec 17 11:26:51 2018 From: openssl at foocrypt.net (openssl at foocrypt.net) Date: Mon, 17 Dec 2018 22:26:51 +1100 Subject: [openssl-users] AssAccess was passed with no amendments In-Reply-To: References: <2F3815AC-C4E5-4A8A-85A9-091B80353559@foocrypt.net> <63e2a48b65b174296ba6fd8299d6f065@buckeye-express.com> <37E89E05-F7DC-436F-A12D-1C3A4D3576C9@dukhovni.org> Message-ID: <3F8DB999-2237-43FB-BECE-B6C3A9EA8530@foocrypt.net> Kyle Anyone in their rights minds understands the dangers with government key escrow systems and governments requesting back doors or delaying the remediation of existing zero days, local and remote exploits so that they can utilise them for their own intelligence or law enforcement purposes. From FooCrypt?s perspective, FooCrypt utilises OpenSSL as an engine calling the binary via an exec rather than calling a purposely built library from the OpenSSL source code. This has added security as it enables the end user to select an appropriate version of the engine that they have access to as per their own countries legal requirements around encryption software. FooCrypt is distributed on macOS platforms as a read only disk image, on linux and windows systems as a Debian package, and as a customised SOE in a read only bootable Live ISO file which can be burnt to an old fashioned DVD, and booted via a VM or on cut down hardware with no physical disk / network / bluetooth etc. The encrypted data objects can be sent via any messaging service / email / snail mail postage / fax / protocol / etc. Technically, if an end user, utilised the Live ISO on a blackbox system, with a deadman switch on the power, there is no way to ?escrow? they keys for anyone. Not only is AssAccess an affront to the sanity of those who are left in Australia still managing to work in the encryption space since they criminalised encryption under the Defence Trade Acts additions of encryption into the Defence Strategic Goods Listing, it has been politicised by our degenerate LNP government with make believe claims that have no founding and belittles those with any technical understanding of the issues. From a users perspective, end users should be able to ?trust? the encryption software they use and not have to deal with the perception of ?back doors? requested by Governments, which can?t be reported by those who are crunchy the code, as the Government is threatening them with a 5 year jail term and massive fines for disclosing the Governments attempts to circumvent security. > On 17 Dec 2018, at 17:32, Kyle Hamilton wrote: > > Getting the key for any given communication from OpenSSL is definitely doable if you're not using an engine. If you are using an engine, it may or may not be even possible. > > In any case, maintaining that key once you have it is definitely out of scope of OpenSSL. As an app developer subject to that law, it is up to you to figure out a way to keep it available for compliance purposes. > > I'm not part of the OpenSSL team, so I have no capacity to make a policy statement on their behalf. 
However, I'm pretty sure that OpenSSL is not going to alter its API or its library design to make it easier for a bolt-on AusAssAccess module to be written that directly queries the state of the library or its structures. > > That said, in the past it's been bandied about that an originating software package subject to the law could encrypt the symmetric key not only to the intended recipient, but also to a hardcoded compliance key. A receiving software package subject to the law would have to modify its receipt process to store a copy of the symmetric key elsewhere when it first decrypted a message -- probably also encrypted to a hardcoded compliance key. > > The downside is "what happens when that compliance key is compromised"? (or, for that matter, if the compliance key is lost.) And it will be compromised or lost, someday, some way. That's the reason so many people have been against backdoors like this -- the security of the system is good, but the security of human beings tasked with maintaining the security of the system is nowhere near as good. > > -Kyle H > > On Fri, Dec 14, 2018, 18:20 openssl at foocrypt.net wrote: > Rather than going down the political or policy line, perhaps it may be prudent to discuss the technical solutions to testing the engine, regardless of the OS it is running on. > > How does one validate and test the engines during / after compile to ensure their ?trust? ? > > > > > On 15 Dec 2018, at 10:42, Viktor Dukhovni > wrote: > > > >> On Dec 14, 2018, at 5:42 PM, bmeeker51 at buckeye-express.com wrote: > >> > >> I simply wanted a clear statement so I can make an informed decision whether or not I should use OpenSSL in future projects. I now have my answer. Thank you. > > > > This is not the right forum for that question. The bill is too > > new for a policy response to have been considered or agreed. > > > > OpenSSL has committers from many countries. OpenSSH also > > has an Australian maintainer, have they published a policy? > > > > I am sure there are Australian contributors to Linux, NetBSD, > > FreeBSD, OpenBSD, Android, ... > > > > Avoiding all taint from anything touched by Australia will not > > be easy. > > > > -- > > Viktor. > > > > -- > > openssl-users mailing list > > To unsubscribe: https://mta.openssl.org/mailman/listinfo/openssl-users > > -- > openssl-users mailing list > To unsubscribe: https://mta.openssl.org/mailman/listinfo/openssl-users > > On Fri, Dec 14, 2018, 18:20 openssl at foocrypt.net wrote: > Rather than going down the political or policy line, perhaps it may be prudent to discuss the technical solutions to testing the engine, regardless of the OS it is running on. > > How does one validate and test the engines during / after compile to ensure their ?trust? ? > > > > > On 15 Dec 2018, at 10:42, Viktor Dukhovni > wrote: > > > >> On Dec 14, 2018, at 5:42 PM, bmeeker51 at buckeye-express.com wrote: > >> > >> I simply wanted a clear statement so I can make an informed decision whether or not I should use OpenSSL in future projects. I now have my answer. Thank you. > > > > This is not the right forum for that question. The bill is too > > new for a policy response to have been considered or agreed. > > > > OpenSSL has committers from many countries. OpenSSH also > > has an Australian maintainer, have they published a policy? > > > > I am sure there are Australian contributors to Linux, NetBSD, > > FreeBSD, OpenBSD, Android, ... > > > > Avoiding all taint from anything touched by Australia will not > > be easy. > > > > -- > > Viktor. 
> > > > -- > > openssl-users mailing list > > To unsubscribe: https://mta.openssl.org/mailman/listinfo/openssl-users > > -- > openssl-users mailing list > To unsubscribe: https://mta.openssl.org/mailman/listinfo/openssl-users > -- > openssl-users mailing list > To unsubscribe: https://mta.openssl.org/mailman/listinfo/openssl-users -------------- next part -------------- An HTML attachment was scrubbed... URL: From ckashiquekvk at gmail.com Mon Dec 17 12:41:10 2018 From: ckashiquekvk at gmail.com (ASHIQUE CK) Date: Mon, 17 Dec 2018 18:11:10 +0530 Subject: [openssl-users] Openssl async support Message-ID: Hi all, I have some queries regarding OpenSSL async operation. Current setup ------------- I have one* OpenSSL dynamic engine (with RSA and AES-GCM support) *and linked it with *Nginx* server. Multiple *WGET* commands on the client side. Current issue ------------- Since OpenSSL *do_cipher call *(the function in which actual AES-GCM encryption/decryption happening) comes from one client at a time which is reducing file downloading performance. So we need an *asynchronous operation in OpenSSL* ie. we need multiple do_cipher calls at the same time from which we should submit requests to HW without affecting the incoming requests and should wait for HW output. Queries -------- 1) Is there is any other scheme for multiple do_cipher calls at a time?. 2) Any method to enable asynchronous call from OpenSSL? Versions ------------- Openssl - 1.1.0h Nginx1.11.10 Wget 1.17.1 Kindly support me. Please inform me if any more inputs needed. Thanks in advance. -------------- next part -------------- An HTML attachment was scrubbed... URL: From Michael.Wojcik at microfocus.com Mon Dec 17 12:52:54 2018 From: Michael.Wojcik at microfocus.com (Michael Wojcik) Date: Mon, 17 Dec 2018 12:52:54 +0000 Subject: [openssl-users] How to find the right bug In-Reply-To: References: Message-ID: > From: openssl-users [mailto:openssl-users-bounces at openssl.org] On Behalf Of Shreya Bhandare > Sent: Monday, December 17, 2018 03:29 > i am very new to openssl and contributing to large code bases in general, I did my first contribution > to openssl which got me familiar with the process etc. It is time for me to dig deeper and find a bug > that actually helps me understand some part of code base and I'm having a hard time to actually > find such a bug Have you looked through the open issues at https://github.com/openssl/openssl/issues? There are at least a few labeled "good first issue" (I'm not sure how many because I'm not enabling a bunch of scripts just to get github's filtering to work), and in any case there are plenty there to choose from. -- Michael Wojcik Distinguished Engineer, Micro Focus From oinksocket at letterboxes.org Mon Dec 17 15:21:11 2018 From: oinksocket at letterboxes.org (Nick) Date: Mon, 17 Dec 2018 15:21:11 +0000 Subject: [openssl-users] A script for hybrid encryption with openssl Message-ID: <7aa6f1de-94c3-d6c4-a437-6b8aa2cba5bc@letterboxes.org> Hello, I've written a script to try and work around openssl's lack of a way to encrypt large files with public key or hybrid cryptography. I gather SMIME works for files < ~ 2.5GB but the current implementation cannot decrypt files larger than this. My use case is automated server back-ups, for which I need to back up arbitrarily large files and copy the result to S3 for storage, but I don't want to store a decryption key on the server. 
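As a point of reference, libcrypto's EVP "envelope" calls implement this same hybrid pattern (a random symmetric key encrypted to an RSA recipient, the data encrypted symmetrically) and they work on a stream of chunks, so file size is not an issue on the encrypt side. A rough sketch, with error handling and the on-disk header format left out:

    #include <stdio.h>
    #include <openssl/evp.h>

    int seal_stream(FILE *in, FILE *out, EVP_PKEY *rsa_pub)
    {
        EVP_CIPHER_CTX *ctx = EVP_CIPHER_CTX_new();
        unsigned char *ek = OPENSSL_malloc(EVP_PKEY_size(rsa_pub));
        int ekl = 0, outl;
        unsigned char iv[EVP_MAX_IV_LENGTH];
        unsigned char inbuf[4096], outbuf[4096 + EVP_MAX_BLOCK_LENGTH];
        size_t n;

        /* generates a random session key, encrypts it to rsa_pub (into ek),
         * and picks a random IV */
        EVP_SealInit(ctx, EVP_aes_256_cbc(), &ek, &ekl, iv, &rsa_pub, 1);
        /* a real tool would now write ekl, ek and iv to 'out' as a header */

        while ((n = fread(inbuf, 1, sizeof(inbuf), in)) > 0) {
            EVP_SealUpdate(ctx, outbuf, &outl, inbuf, (int)n);
            fwrite(outbuf, 1, (size_t)outl, out);
        }
        EVP_SealFinal(ctx, outbuf, &outl);
        fwrite(outbuf, 1, (size_t)outl, out);

        OPENSSL_free(ek);
        EVP_CIPHER_CTX_free(ctx);
        return 1;
    }

The matching EVP_OpenInit()/EVP_OpenUpdate()/EVP_OpenFinal() calls stream on the receiving side as well, and CMS_encrypt() with the CMS_STREAM flag appears to be the library-level route to streaming output in the standard CMS format. Note that a bare sketch like this has no integrity protection, so a real tool would add a MAC or use an AEAD mode.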
I contemplated splitting the archives, except this seemed about as much work as writing something which stored an encrypted one-time password with the payload and using symmetric encryption. As I'm not really a crypto/security expert, I thought I'd post it here and ask for some feedback on it. https://github.com/wu-lee/hencrypt Thanks! Nick From jb-openssl at wisemo.com Mon Dec 17 22:02:52 2018 From: jb-openssl at wisemo.com (Jakob Bohm) Date: Mon, 17 Dec 2018 23:02:52 +0100 Subject: [openssl-users] A script for hybrid encryption with openssl In-Reply-To: <7aa6f1de-94c3-d6c4-a437-6b8aa2cba5bc@letterboxes.org> References: <7aa6f1de-94c3-d6c4-a437-6b8aa2cba5bc@letterboxes.org> Message-ID: <1fa0f893-369c-33f4-baef-0b250a5260f0@wisemo.com> On 17/12/2018 16:21, Nick wrote: > Hello, > > I've written a script to try and work around openssl's lack of a way to encrypt > large files with public key or hybrid cryptography. I gather SMIME works for > files < ~ 2.5GB but the current implementation cannot decrypt files larger than > this. > > My use case is automated server back-ups, for which I need to back up > arbitrarily large files and copy the result to S3 for storage, but I don't want > to store a decryption key on the server. I contemplated splitting the archives, > except this seemed about as much work as writing something which stored an > encrypted one-time password with the payload and using symmetric encryption. > > As I'm not really a crypto/security expert, I thought I'd post it here and ask > for some feedback on it. > > https://github.com/wu-lee/hencrypt > > A simpler way is to realize that the formats used by SMIME/CMS (specifically the PKCS#7 formats) allow almost unlimited file size, and any 2GiB limit is probably an artifact of either the openssl command line tool or some of the underlying OpenSSL libraries. It would be interesting to hear from someone familiar with that part of the OpenSSL API which calls to use to actually do CMS signing/encryption (and verification/decryption) of data too large to fit in available memory, and how to handle the data length BER encoding for values larger than a size_t. Anyway, setting up an alternative data format might be suitable if combined with other functionality requiring chunking, such as recovery from lost/corrupted data "blocks" (where each block is much much larger than a 1K "disk block"). Enjoy Jakob -- Jakob Bohm, CIO, Partner, WiseMo A/S. https://www.wisemo.com Transformervej 29, 2860 S?borg, Denmark. Direct +45 31 13 16 10 This public discussion message is non-binding and may contain errors. WiseMo - Remote Service Management for PCs, Phones and Embedded From mikeb at preveil.com Mon Dec 17 22:06:41 2018 From: mikeb at preveil.com (Mike Blaguszewski) Date: Mon, 17 Dec 2018 17:06:41 -0500 Subject: [openssl-users] Problems with deriving EC public key from private Message-ID: Some code of mine reads a NIST P256 private key from bytes and derives the public key from it, and this derived public key is incorrect about 0.4% of the time. I?ve attached a sample program that does the following. 1. Generate a key-pair of type NID_X9_62_prime256v1 2. Write the public and private components to memory 3. Read the private key back from memory, derive the public key, and write that back out. 4. Compare this ?round-tripped? public key to the public key generated in step 2. The public key from step 2 almost always matches the public key from step 3, but about 0.4% of the time they will differ. (The sample program runs a loop to determine this.) 
Further experiments suggest it?s the private_key_from_binary() function that is the problem, where I derive the public key using EC_POINT_mul(). The sample program omits error checking, but in the production code no errors are reported. Does anyone see a flaw in my logic, especially in how I?m deriving the public key from the private key? Also let me know if this would be better submitted as a GitHub issue, or even if it needs to be handled as a paid support request. Thanks, Mike -------------- next part -------------- A non-text attachment was scrubbed... Name: ec_key_example.cxx Type: application/octet-stream Size: 3140 bytes Desc: not available URL: From bbrumley at gmail.com Tue Dec 18 04:42:28 2018 From: bbrumley at gmail.com (Billy Brumley) Date: Tue, 18 Dec 2018 06:42:28 +0200 Subject: [openssl-users] Problems with deriving EC public key from private In-Reply-To: References: Message-ID: On Tue, Dec 18, 2018 at 12:07 AM Mike Blaguszewski wrote: > > Some code of mine reads a NIST P256 private key from bytes and derives the public key from it, and this derived public key is incorrect about 0.4% of the time. I?ve attached a sample program that does the following. > > 1. Generate a key-pair of type NID_X9_62_prime256v1 > 2. Write the public and private components to memory > 3. Read the private key back from memory, derive the public key, and write that back out. > 4. Compare this ?round-tripped? public key to the public key generated in step 2. > > The public key from step 2 almost always matches the public key from step 3, but about 0.4% of the time they will differ. (The sample program runs a loop to determine this.) Further experiments suggest it?s the private_key_from_binary() function that is the problem, where I derive the public key using EC_POINT_mul(). The sample program omits error checking, but in the production code no errors are reported. > > Does anyone see a flaw in my logic, especially in how I?m deriving the public key from the private key? Also let me know if this would be better submitted as a GitHub issue, or even if it needs to be handled as a paid support request. The sample code just segfaults for me in the first iteration, before really generating a key, so it's hard to test: Program received signal SIGSEGV, Segmentation fault. 0x00007ffff7a7e3e0 in pkey_set_type (pkey=0x380, type=408, str=0x0, len=-1) at crypto/evp/p_lib.c:181 (gdb) bt #0 0x00007ffff7a7e3e0 in pkey_set_type (pkey=0x380, type=408, str=0x0, len=-1) at crypto/evp/p_lib.c:181 #1 0x00007ffff7a7e546 in EVP_PKEY_set_type (pkey=0x380, type=408) at crypto/evp/p_lib.c:221 #2 0x00007ffff7a7e663 in EVP_PKEY_assign (pkey=0x380, type=408, key=0x5555557587c0) at crypto/evp/p_lib.c:249 #3 0x00007ffff7a248fb in pkey_ec_keygen (ctx=0x555555758760, pkey=0x380) at crypto/ec/ec_pmeth.c:416 #4 0x00007ffff7a80912 in EVP_PKEY_keygen (ctx=0x555555758760, ppkey=0x7fffffffdd18) at crypto/evp/pmeth_gn.c:107 #5 0x0000555555555046 in generate_ec_key () at foo.c:18 #6 0x0000555555555256 in main () at foo.c:73 But 0.4% is suspiciously close to 1/256, so I'm willing to bet your problem surrounds your size assumptions in various functions. Check the manpage of e.g. EC_POINT_point2oct and grep for usage in the library, but the idea is to pass NULL first, then malloc, then pass that pointer. BN_bn2bin is different. Probably the size won't be fixed (e.g., there is a 1/256 chance you'll have one byte less, i.e. leading zero). So all the static 32 and 33 byte assumptions aren't holding. 
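To make that concrete, here is a rough round-trip sketch that sidesteps the length pitfall by writing the scalar at a fixed width; it is only an illustration, not the scrubbed ec_key_example.cxx:

    #include <openssl/ec.h>
    #include <openssl/bn.h>
    #include <openssl/obj_mac.h>

    /* serialize a P-256 private scalar to exactly 32 bytes and rebuild the
     * public point from it; returns 1 on success, error checks trimmed */
    int roundtrip_priv(const EC_KEY *src, unsigned char out_priv[32],
                       EC_KEY **rebuilt)
    {
        const EC_GROUP *grp = EC_KEY_get0_group(src);

        /* fixed-width write: plain BN_bn2bin() drops a leading zero byte
         * roughly once in 256 keys, which matches the 0.4% failure rate */
        if (BN_bn2binpad(EC_KEY_get0_private_key(src), out_priv, 32) != 32)
            return 0;

        EC_KEY *key = EC_KEY_new_by_curve_name(NID_X9_62_prime256v1);
        BIGNUM *d = BN_bin2bn(out_priv, 32, NULL);
        EC_POINT *pub = EC_POINT_new(grp);

        EC_KEY_set_private_key(key, d);
        EC_POINT_mul(grp, pub, d, NULL, NULL, NULL);   /* pub = d * G */
        EC_KEY_set_public_key(key, pub);

        EC_POINT_free(pub);
        BN_free(d);
        *rebuilt = key;
        return 1;
    }

EC_KEY_priv2oct()/EC_KEY_oct2priv() also read and write the scalar at the field width, and for the public point the usual pattern is to call EC_POINT_point2oct() once with a NULL buffer to learn the required length before allocating.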
Also BN_bn2bin and EC_KEY_oct2priv are not inverses of each other IIRC. (The former is raw bytes, the latter an ASN.1 encoding.)

BBB

From mikeb at preveil.com Tue Dec 18 05:59:09 2018 From: mikeb at preveil.com (Mike Blaguszewski) Date: Tue, 18 Dec 2018 00:59:09 -0500 Subject: [openssl-users] Problems with deriving EC public key from private In-Reply-To: References: Message-ID: <996E96DB-E43C-4B99-8623-694A6E9A9BD5@preveil.com>

On Dec 17, 2018, at 11:42 PM, Billy Brumley wrote: > > But 0.4% is suspiciously close to 1/256, so I'm willing to bet your > problem surrounds your size assumptions in various functions. Check > the manpage of e.g. EC_POINT_point2oct and grep for usage in the > library, but the idea is to pass NULL first, then malloc, then pass > that pointer. BN_bn2bin is different. Probably the size won't be fixed > (e.g., there is a 1/256 chance you'll have one byte less, i.e. leading > zero).

Thanks so much! That was exactly it. Switching from BN_bn2bin() to EC_KEY_priv2oct() resolves the problem. (As does BN_bn2binpad(), but using the more standard binary format seems preferable.) I will also look into pre-flighting the calls with a NULL buffer.

Mike

P.S. Not sure why it crashed for you, but I'd guess some combination of different OpenSSL versions and an error return being ignored by the sample code. I appreciate you taking a look despite that.

From beldmit at gmail.com Tue Dec 18 08:21:07 2018 From: beldmit at gmail.com (Dmitry Belyavsky) Date: Tue, 18 Dec 2018 11:21:07 +0300 Subject: [openssl-users] Sending empty renegotiation_info Message-ID:

Hello, Is it possible to send an empty renegotiation_info extension instead of TLS_EMPTY_RENEGOTIATION_INFO_SCSV using openssl s_client? If yes, is it possible to test secure renegotiation afterward? Thank you!

-- SY, Dmitry Belyavsky

-------------- next part -------------- An HTML attachment was scrubbed... URL:

From alibekj at yahoo.com Tue Dec 18 10:10:28 2018 From: alibekj at yahoo.com (Alibek Jorajev) Date: Tue, 18 Dec 2018 10:10:28 +0000 (UTC) Subject: [openssl-users] FIPS module v3 In-Reply-To: References: Message-ID: <321756724.6615988.1545127828033@mail.yahoo.com>

Hi everyone, I have been following the OpenSSL blog and know that work on the new OpenSSL FIPS module has started. The current FIPS module (v2) reaches end of life in December 2019, and I assume the new FIPS module will be available by that time. But can someone tell me whether there are approximate dates, and whether it will be available earlier?

Thanks, Alibek

-------------- next part -------------- An HTML attachment was scrubbed... URL:

From ckashiquekvk at gmail.com Tue Dec 18 10:36:30 2018 From: ckashiquekvk at gmail.com (ASHIQUE CK) Date: Tue, 18 Dec 2018 16:06:30 +0530 Subject: [openssl-users] Openssl async support In-Reply-To: References: Message-ID:

Hi all, I fully understand that everyone might be busy with work and has not found time to reply. That's okay, but in case you have accidentally forgotten to reply, please accept this as a gentle reminder.

On Mon, Dec 17, 2018 at 6:11 PM ASHIQUE CK wrote: > Hi all, > > I have some queries regarding OpenSSL async operation. > > Current setup > ------------- > I have one* OpenSSL dynamic engine (with RSA and AES-GCM support) *and > linked it with *Nginx* server. Multiple *WGET* commands on the client > side. > > Current issue > ------------- > Since OpenSSL *do_cipher call *(the function in which actual AES-GCM > encryption/decryption happening) comes from one client at a time which is > reducing file downloading performance.
So we need an *asynchronous > operation in OpenSSL* ie. we need multiple do_cipher calls at the same > time from which we should submit requests to HW without affecting the > incoming requests and should wait for HW output. > > Queries > -------- > 1) Is there is any other scheme for multiple do_cipher calls at a time?. > 2) Any method to enable asynchronous call from OpenSSL? > > Versions > ------------- > Openssl - 1.1.0h > Nginx1.11.10 > Wget 1.17.1 > > Kindly support me. Please inform me if any more inputs needed. Thanks in > advance. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From oinksocket at letterboxes.org Tue Dec 18 11:18:01 2018 From: oinksocket at letterboxes.org (Nick) Date: Tue, 18 Dec 2018 11:18:01 +0000 Subject: [openssl-users] A script for hybrid encryption with openssl In-Reply-To: <1fa0f893-369c-33f4-baef-0b250a5260f0@wisemo.com> References: <7aa6f1de-94c3-d6c4-a437-6b8aa2cba5bc@letterboxes.org> <1fa0f893-369c-33f4-baef-0b250a5260f0@wisemo.com> Message-ID: <67ce24cb-d791-a891-728c-9279985d38a5@letterboxes.org> On 17/12/2018 22:02, Jakob Bohm via openssl-users wrote: > A simpler way is to realize that the formats used by SMIME/CMS (specifically > the PKCS#7 formats) allow almost unlimited file size, and any 2GiB limit is > probably an artifact of either the openssl command line tool or some of the > underlying OpenSSL libraries. Yes. I started using openssl's smime implementation, then backed out when I realised there were indeed limits - apparently in the underlying libraries. On decrypting I got the same kind of errors described in this bug report thread (and elsewhere if you search, but this is the most recent discussion I could find). "Attempting to decrypt/decode a large smime encoded file created with openssl fails regardless of the amount of OS memory available". https://mta.openssl.org/pipermail/openssl-dev/2016-August/008237.html The key points are: - streaming smime *encryption* has been implemented, but - smime *decryption* is done in memory, consequentially you can't decrypt anything over 1.5G - possibly this is related to the BUF_MEM structure's dependency on the size of an int There's an RT ticket but I could not log in to read this.? But it appears to have been migrated to Git-hub: https://github.com/openssl/openssl/issues/2515 It's closed - I infer as "won't fix" (yet?) and this is still an issue as my experience suggests, at least in the versions distributed for systems I will be using. I was using openssl 1.0.2g-1ubuntu4.14 (Xenial) and I've verified it with openssl 1.1.0g-2ubuntu4.3 (Bionic, the latest LTS release fro Ubuntu): $ openssl version -a OpenSSL 1.1.0g? 
2 Nov 2017 built on: reproducible build, date unspecified platform: debian-amd64 compiler: gcc -DDSO_DLFCN -DHAVE_DLFCN_H -DNDEBUG -DOPENSSL_THREADS -DOPENSSL_NO_STATIC_ENGINE -DOPENSSL_PIC -DOPENSSL_IA32_SSE2 -DOPENSSL_BN_ASM_MONT -DOPENSSL_BN_ASM_MONT5 -DOPENSSL_BN_ASM_GF2m -DSHA1_ASM -DSHA256_ASM -DSHA512_ASM -DRC4_ASM -DMD5_ASM -DAES_ASM -DVPAES_ASM -DBSAES_ASM -DGHASH_ASM -DECP_NISTZ256_ASM -DPADLOCK_ASM -DPOLY1305_ASM -DOPENSSLDIR="\"/usr/lib/ssl\"" -DENGINESDIR="\"/usr/lib/x86_64-linux-gnu/engines-1.1\"" OPENSSLDIR: "/usr/lib/ssl" ENGINESDIR: "/usr/lib/x86_64-linux-gnu/engines-1.1" $ dd if=/dev/zero of=sample.txt count=2M bs=1024 $ openssl req -x509 -nodes -newkey rsa:2048 -keyout mysqldump-secure.priv.pem -out mysqldump-secure.pub.pem $ openssl smime -encrypt -binary -text -aes256 -in sample.txt -out sample.txt.enc -outform DER -stream mysqldump-secure.pub.pem $ openssl smime -decrypt -binary -inkey mysqldump-secure.priv.pem -inform DEM -in sample.txt.enc -out sample.txt.restored Error reading S/MIME message 139742630175168:error:07069041:memory buffer routines:BUF_MEM_grow_clean:malloc failure:../crypto/buffer/buffer.c:138: 139742630175168:error:0D06B041:asn1 encoding routines:asn1_d2i_read_bio:malloc failure:../crypto/asn1/a_d2i_fp.c:191 > Anyway, setting up an alternative data format might be suitable if combined > with other functionality requiring chunking, such as recovery from > lost/corrupted data "blocks" (where each block is much much larger than > a 1K "disk block"). I should add that I don't really care about the format, or even the use of openssl - just the ability to tackle large files with the benefits of public key encryption, in a self-contained way without needing fiddly work deploying the keys (as GnuPG seems to require for its keyring, judging from my experience deploying Backup-Ninja / Duplicity using Ansible.)? So other solutions, if tried and tested, might work for me. Cheers, Nick -------------- next part -------------- An HTML attachment was scrubbed... URL: From mcr at sandelman.ca Tue Dec 18 16:48:09 2018 From: mcr at sandelman.ca (Michael Richardson) Date: Tue, 18 Dec 2018 11:48:09 -0500 Subject: [openssl-users] does -subj suppress challenge Password prompt Message-ID: <3788.1545151689@localhost> From my colleague Peter. Peter is attempting to generate a variety of CSR requests for use in examples for an IETF ACE WG on coap-est. Below my problem: the standard openssl.cnf file is attached. The openssl version is 1.0.1f. When I do the following shell script: ________________________________________________________ countryName="/C=US" stateOrProvinceName="/ST=CA" localityName="/L=Oak Park" organizationName="/O=Example Inc" organizationalUnitName="/OU=Acme" emailAddress="/emailAddress=piet at example.com" commonName="/CN=Root CA" DN=$countryName$stateOrProvinceName$localityName DN=$DN$organizationName$organizationalUnitName$commonName echo $DN { above from Bob's PKI document} openssl req -config ./openssl.cnf \ -new -sha256 -key test.key -out test.csr __________________________________________________ I get prompts for the subject names Subject: C=au, ST=ddd, L=ddd, O=ssss, OU=aaaa, CN=aaaa/emailAddress=aaaaa and a prompt for challengePssword When I change openssl command to: openssl req -config ./openssl.cnf\ -subj "$DN"\ -new -sha256 -key test.key -out test.csr no more prompts, but the challengePassword has disappeared from the attibutes section. 
How can I define the challengePassword while still using -subj thanks for an answer, Peter -- Peter van der Stok vanderstok consultancy mailto: consultancy at vanderstok.org, stokcons at bbhmail.nl www: www.vanderstok.org tel NL: +31(0)492474673 F: +33(0)966015248 Below is his openssl.cnf: # # OpenSSL example configuration file. # This is mostly being used for generation of certificate requests. # # This definition stops the following lines choking if HOME isn't # defined. HOME = . RANDFILE = $ENV::HOME/.rnd # Extra OBJECT IDENTIFIER info: #oid_file = $ENV::HOME/.oid oid_section = new_oids # To use this configuration file with the "-extfile" option of the # "openssl x509" utility, name here the section containing the # X.509v3 extensions to use: # extensions = # (Alternatively, use a configuration file that has only # X.509v3 extensions in its main [= default] section.) [ new_oids ] # We can add new OIDs in here for use by 'ca', 'req' and 'ts'. # Add a simple OID like this: # testoid1=1.2.3.4 # Or use config file substitution like this: # testoid2=${testoid1}.5.6 # Policies used by the TSA examples. tsa_policy1 = 1.2.3.4.1 tsa_policy2 = 1.2.3.4.5.6 tsa_policy3 = 1.2.3.4.5.7 #################################################################### [ ca ] default_ca = CA_default # The default ca section #################################################################### [ CA_default ] dir = ./demoCA # Where everything is kept certs = $dir/certs # Where the issued certs are kept crl_dir = $dir/crl # Where the issued crl are kept database = $dir/index.txt # database index file. #unique_subject = no # Set to 'no' to allow creation of # several ctificates with same subject. new_certs_dir = $dir/newcerts # default place for new certs. certificate = $dir/cacert.pem # The CA certificate serial = $dir/serial # The current serial number crlnumber = $dir/crlnumber # the current crl number # must be commented out to leave a V1 CRL crl = $dir/crl.pem # The current CRL private_key = $dir/private/cakey.pem# The private key RANDFILE = $dir/private/.rand # private random number file x509_extensions = usr_cert # The extentions to add to the cert # Comment out the following two lines for the "traditional" # (and highly broken) format. name_opt = ca_default # Subject Name options cert_opt = ca_default # Certificate field options # Extension copying option: use with caution. # copy_extensions = copy # Extensions to add to a CRL. Note: Netscape communicator chokes on V2 CRLs # so this is commented out by default to leave a V1 CRL. # crlnumber must also be commented out to leave a V1 CRL. # crl_extensions = crl_ext default_days = 365 # how long to certify for default_crl_days= 30 # how long before next CRL default_md = default # use public key default MD preserve = no # keep passed DN ordering # A few difference way of specifying how similar the request should look # For type CA, the listed attributes must be the same, and the optional # and supplied fields are just that :-) policy = policy_match # For the CA policy [ policy_match ] countryName = match stateOrProvinceName = match organizationName = match organizationalUnitName = optional commonName = supplied emailAddress = optional # For the 'anything' policy # At this point in time, you must list all acceptable 'object' # types. 
[ policy_anything ] countryName = optional stateOrProvinceName = optional localityName = optional organizationName = optional organizationalUnitName = optional commonName = supplied emailAddress = optional #################################################################### [ req ] default_bits = 2048 default_keyfile = privkey.pem distinguished_name = req_distinguished_name attributes = req_attributes x509_extensions = v3_ca # The extentions to add to the self signed cert # Passwords for private keys if not present they will be prompted for # input_password = secret # output_password = secret # This sets a mask for permitted string types. There are several options. # default: PrintableString, T61String, BMPString. # pkix : PrintableString, BMPString (PKIX recommendation before 2004) # utf8only: only UTF8Strings (PKIX recommendation after 2004). # nombstr : PrintableString, T61String (no BMPStrings or UTF8Strings). # MASK:XXXX a literal mask value. # WARNING: ancient versions of Netscape crash on BMPStrings or UTF8Strings. string_mask = utf8only # req_extensions = v3_req # The extensions to add to a certificate request [ req_distinguished_name ] countryName = Country Name (2 letter code) countryName_default = AU countryName_min = 2 countryName_max = 2 stateOrProvinceName = State or Province Name (full name) stateOrProvinceName_default = Some-State localityName = Locality Name (eg, city) 0.organizationName = Organization Name (eg, company) 0.organizationName_default = Internet Widgits Pty Ltd # we can do this but it is not needed normally :-) #1.organizationName = Second Organization Name (eg, company) #1.organizationName_default = World Wide Web Pty Ltd organizationalUnitName = Organizational Unit Name (eg, section) #organizationalUnitName_default = commonName = Common Name (e.g. server FQDN or YOUR name) commonName_max = 64 emailAddress = Email Address emailAddress_max = 64 # SET-ex3 = SET extension number 3 [ req_attributes ] challengePassword = A challenge password challengePassword_min = 4 challengePassword_max = 20 #unstructuredName = An optional company name [ usr_cert ] # These extensions are added when 'ca' signs a request. # This goes against PKIX guidelines but some CAs do it and some software # requires this to avoid interpreting an end user certificate as a CA. basicConstraints=CA:FALSE # Here are some examples of the usage of nsCertType. If it is omitted # the certificate can be used for anything *except* object signing. # This is OK for an SSL server. # nsCertType = server # For an object signing certificate this would be used. # nsCertType = objsign # For normal client use this is typical # nsCertType = client, email # and for everything including object signing: # nsCertType = client, email, objsign # This is typical in keyUsage for a client certificate. # keyUsage = nonRepudiation, digitalSignature, keyEncipherment # This will be displayed in Netscape's comment listbox. nsComment = "OpenSSL Generated Certificate" # PKIX recommendations harmless if included in all certificates. subjectKeyIdentifier=hash authorityKeyIdentifier=keyid,issuer # This stuff is for subjectAltName and issuerAltname. # Import the email address. # subjectAltName=email:copy # An alternative to produce certificates that aren't # deprecated according to PKIX. # subjectAltName=email:move # Copy subject details # issuerAltName=issuer:copy #nsCaRevocationUrl = http://www.domain.dom/ca-crl.pem #nsBaseUrl #nsRevocationUrl #nsRenewalUrl #nsCaPolicyUrl #nsSslServerName # This is required for TSA certificates. 
# extendedKeyUsage = critical,timeStamping [ v3_req ] # Extensions to add to a certificate request basicConstraints = CA:FALSE keyUsage = nonRepudiation, digitalSignature, keyEncipherment [ v3_ca ] # Extensions for a typical CA # PKIX recommendation. subjectKeyIdentifier=hash authorityKeyIdentifier=keyid:always,issuer # This is what PKIX recommends but some broken software chokes on critical # extensions. #basicConstraints = critical,CA:true # So we do this instead. basicConstraints = CA:true # Key usage: this is typical for a CA certificate. However since it will # prevent it being used as an test self-signed certificate it is best # left out by default. # keyUsage = cRLSign, keyCertSign # Some might want this also # nsCertType = sslCA, emailCA # Include email address in subject alt name: another PKIX recommendation # subjectAltName=email:copy # Copy issuer details # issuerAltName=issuer:copy # DER hex encoding of an extension: beware experts only! # obj=DER:02:03 # Where 'obj' is a standard or added object # You can even override a supported extension: # basicConstraints= critical, DER:30:03:01:01:FF [ crl_ext ] # CRL extensions. # Only issuerAltName and authorityKeyIdentifier make any sense in a CRL. # issuerAltName=issuer:copy authorityKeyIdentifier=keyid:always [ proxy_cert_ext ] # These extensions should be added when creating a proxy certificate # This goes against PKIX guidelines but some CAs do it and some software # requires this to avoid interpreting an end user certificate as a CA. basicConstraints=CA:FALSE # Here are some examples of the usage of nsCertType. If it is omitted # the certificate can be used for anything *except* object signing. # This is OK for an SSL server. # nsCertType = server # For an object signing certificate this would be used. # nsCertType = objsign # For normal client use this is typical # nsCertType = client, email # and for everything including object signing: # nsCertType = client, email, objsign # This is typical in keyUsage for a client certificate. # keyUsage = nonRepudiation, digitalSignature, keyEncipherment # This will be displayed in Netscape's comment listbox. nsComment = "OpenSSL Generated Certificate" # PKIX recommendations harmless if included in all certificates. subjectKeyIdentifier=hash authorityKeyIdentifier=keyid,issuer # This stuff is for subjectAltName and issuerAltname. # Import the email address. # subjectAltName=email:copy # An alternative to produce certificates that aren't # deprecated according to PKIX. # subjectAltName=email:move # Copy subject details # issuerAltName=issuer:copy #nsCaRevocationUrl = http://www.domain.dom/ca-crl.pem #nsBaseUrl #nsRevocationUrl #nsRenewalUrl #nsCaPolicyUrl #nsSslServerName # This really needs to be in place for it to be a proxy certificate. proxyCertInfo=critical,language:id-ppl-anyLanguage,pathlen:3,policy:foo #################################################################### [ tsa ] default_tsa = tsa_config1 # the default TSA section [ tsa_config1 ] # These are used by the TSA reply generation only. 
dir = ./demoCA # TSA root directory serial = $dir/tsaserial # The current serial number (mandatory) crypto_device = builtin # OpenSSL engine to use for signing signer_cert = $dir/tsacert.pem # The TSA signing certificate # (optional) certs = $dir/cacert.pem # Certificate chain to include in reply # (optional) signer_key = $dir/private/tsakey.pem # The TSA private key (optional) default_policy = tsa_policy1 # Policy if request did not specify it # (optional) other_policies = tsa_policy2, tsa_policy3 # acceptable policies (optional) digests = md5, sha1 # Acceptable message digests (mandatory) accuracy = secs:1, millisecs:500, microsecs:100 # (optional) clock_precision_digits = 0 # number of digits after dot. (optional) ordering = yes # Is ordering defined for timestamps? # (optional, default: no) tsa_name = yes # Must the TSA name be included in the reply? # (optional, default: no) ess_cert_id_chain = no # Must the ESS cert id chain be included? # (optional, default: no) -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 487 bytes Desc: not available URL: From vieuxtech at gmail.com Tue Dec 18 18:04:04 2018 From: vieuxtech at gmail.com (Sam Roberts) Date: Tue, 18 Dec 2018 10:04:04 -0800 Subject: [openssl-users] A script for hybrid encryption with openssl In-Reply-To: <67ce24cb-d791-a891-728c-9279985d38a5@letterboxes.org> References: <7aa6f1de-94c3-d6c4-a437-6b8aa2cba5bc@letterboxes.org> <1fa0f893-369c-33f4-baef-0b250a5260f0@wisemo.com> <67ce24cb-d791-a891-728c-9279985d38a5@letterboxes.org> Message-ID: On Tue, Dec 18, 2018 at 3:18 AM Nick wrote: > I should add that I don't really care about the format, or even the use of openssl - just the ability to tackle large files with the benefits of public key encryption, in a self-contained way without needing fiddly work deploying the keys (as GnuPG seems to require for its keyring, judging from my experience deploying Backup-Ninja / Duplicity using Ansible.) Maybe you should look at gpg directly, `gpg --symmetric` uses a passphrase, which doesn't sound fiddly. From jain61 at gmail.com Tue Dec 18 19:35:30 2018 From: jain61 at gmail.com (N Jain) Date: Tue, 18 Dec 2018 14:35:30 -0500 Subject: [openssl-users] Fwd: SSL_free Segmentation Fault In-Reply-To: References: Message-ID: Hi, I am using openssl for ARM based target and I have cross compiled OpenSSLv1.0.2l from sources with FIPS. I have implemented the DTLSv1.2 based Server using OpenSSL APIs and able to run it on my target. Issue I am facing is when there is network failure I try to clean up the current DTLS session but I always get segmentation fault during SSL_free. If I remove SSL_free the segmentation fault goes away but I need to call it in order to free up the ssl session memory. While further debugging using GDB I found (gdb) bt #0 0xb6e3cc10 in dtls1_get_record () from /usr/lib/libssl.so.1.0.0 #1 0xb6e3d928 in dtls1_read_bytes () from /usr/lib/libssl.so.1.0.0 #2 0xb6e28264 in ssl3_read () from /usr/lib/libssl.so.1.0.0 #3 0x000a7180 in ?? () Code snippet: SSL_set_shutdown(p_cinfo->m_pssl, SSL_SENT_SHUTDOWN | SSL_RECEIVED_SHUTDOWN); stat = SSL_shutdown(p_cinfo->m_pssl); switch(stat) { case 1: printf("Shutdown successfull\n"); break; case 0: case -1: default: printf("Error Shutting down \n"); print_ssl_err(p_cinfo->m_pssl, stat); } * SSL_free(p_cinfo->m_pssl); * Any clues for above issue will be very helpful. 
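(A note on the backtrace above: the crashing frames are in the read path -- dtls1_get_record() via ssl3_read() -- which typically means the SSL object was freed while another thread was still inside SSL_read() on it, or that SSL_free() was called twice on the same pointer; in other words a use-after-free or double free around the object rather than a fault in SSL_free() itself. A defensive cleanup sketch follows; struct conn_info and the single-owner model are assumptions for illustration, not taken from the post.)

#include <openssl/ssl.h>

struct conn_info {                 /* hypothetical container mirroring p_cinfo */
    SSL *m_pssl;
    /* ... */
};

static void dtls_session_cleanup(struct conn_info *p_cinfo)
{
    if (p_cinfo->m_pssl == NULL)
        return;                            /* nothing to free / already cleaned up */

    /* Ensure no other thread is still inside SSL_read()/SSL_write()
     * on this object before tearing it down: an SSL is not safe for
     * concurrent use from multiple threads.                          */

    SSL_set_shutdown(p_cinfo->m_pssl,
                     SSL_SENT_SHUTDOWN | SSL_RECEIVED_SHUTDOWN);
    (void)SSL_shutdown(p_cinfo->m_pssl);

    SSL_free(p_cinfo->m_pssl);             /* release the session exactly once */
    p_cinfo->m_pssl = NULL;                /* guards against a later double free */
}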
Also I would like to know how to identify the long term release for 1.0.2 series with most of the bug fixes which I could use for my project. Thanks NJ -- Sent from: http://openssl.6102.n7.nabble.com/OpenSSL-User-f3.html -------------- next part -------------- An HTML attachment was scrubbed... URL: From paul.dale at oracle.com Tue Dec 18 20:57:04 2018 From: paul.dale at oracle.com (Paul Dale) Date: Tue, 18 Dec 2018 12:57:04 -0800 (PST) Subject: [openssl-users] FIPS module v3 In-Reply-To: <321756724.6615988.1545127828033@mail.yahoo.com> References: <321756724.6615988.1545127828033@mail.yahoo.com> Message-ID: <37f9568f-d085-410d-a6d0-213bbc73d9a5@default> There are no committed to dates of any kind at present. The project is underway but it is too early to set a schedule, yet alone a completion date. Pauli -- Oracle Dr Paul Dale | Cryptographer | Network Security & Encryption Phone +61 7 3031 7217 Oracle Australia From: Alibek Jorajev via openssl-users [mailto:openssl-users at openssl.org] Sent: Tuesday, 18 December 2018 8:10 PM To: openssl-users at openssl.org Subject: [openssl-users] FIPS module v3 Hi everyone, I have been following OpenSSL blog and know that work on new OpenSSL FIPS module has started. Current FIPS module (v.2) has end of life (December 2019) and I assume that new FIPS module will be by that time.? but can someone tell me - is there are approximate dates -? will it be available earlier? thanks, Alibek From antiac at gmail.com Tue Dec 18 22:12:59 2018 From: antiac at gmail.com (Antonio Iacono) Date: Tue, 18 Dec 2018 23:12:59 +0100 Subject: [openssl-users] Support for CAdES Basic Electronic Signatures (CAdES-BES) Message-ID: Hi everyone, the patch discussed in this pull request https://github.com/openssl/openssl/pull/7893 adds support for adding ESS signing-certificate[-v2] attributes to CMS signedData. Although it implements only a small part of the RFC 5126 - CMS Advanced Electronic Signatures (CAdES), it is sufficient many cases to enable the openssl cms app to create signatures which comply with legal requirements of some European States (e.g Italy). Feedback are welcome, thanks, Antonio -------------- next part -------------- An HTML attachment was scrubbed... URL: From yang.yang at baishancloud.com Wed Dec 19 02:34:27 2018 From: yang.yang at baishancloud.com (Paul Yang) Date: Wed, 19 Dec 2018 10:34:27 +0800 Subject: [openssl-users] Openssl async support In-Reply-To: References: Message-ID: <9714A91B-C844-4EAE-82A5-FD7239515741@baishancloud.com> Read this: https://www.openssl.org/docs/man1.1.0/crypto/ASYNC_start_job.html Usually async operations happen in engines when they need to talk to hardware but you can still utilize async mechanism in pure software if you have the scenario > On Dec 18, 2018, at 18:36, ASHIQUE CK wrote: > > Hi all, > > I truly understand that everyone might be busy with your work and didn't found time to reply. That's okay, but incase you have accidendly forgot to reply, please accept this as a gentle reminder. > > > > > > On Mon, Dec 17, 2018 at 6:11 PM ASHIQUE CK > wrote: > Hi all, > > I have some queries regarding OpenSSL async operation. > > Current setup > ------------- > I have one OpenSSL dynamic engine (with RSA and AES-GCM support) and linked it with Nginx server. Multiple WGET commands on the client side. > > Current issue > ------------- > Since OpenSSL do_cipher call (the function in which actual AES-GCM encryption/decryption happening) comes from one client at a time which is reducing file downloading performance. 
So we need an asynchronous operation in OpenSSL ie. we need multiple do_cipher calls at the same time from which we should submit requests to HW without affecting the incoming requests and should wait for HW output. > > Queries > -------- > 1) Is there is any other scheme for multiple do_cipher calls at a time?. > 2) Any method to enable asynchronous call from OpenSSL? > > Versions > ------------- > Openssl - 1.1.0h > Nginx1.11.10 > Wget 1.17.1 > > Kindly support me. Please inform me if any more inputs needed. Thanks in advance. > -- > openssl-users mailing list > To unsubscribe: https://mta.openssl.org/mailman/listinfo/openssl-users -------------- next part -------------- An HTML attachment was scrubbed... URL: From paul at mad-scientist.net Wed Dec 19 01:54:30 2018 From: paul at mad-scientist.net (Paul Smith) Date: Tue, 18 Dec 2018 20:54:30 -0500 Subject: [openssl-users] Two questions on OpenSSL EVP API Message-ID: <329575d84ff8c598faadec8736a634b318ffb814.camel@mad-scientist.net> Hi all; I'm working with OpenSSL 1.1.1a, using the EVP interface to encrypt/decrypt with various ciphers/modes. I had a couple of questions: First, the encrypt update docs say: > the amount of data written may be anything from zero bytes to > (inl + cipher_block_size - 1) Is that really true? For example if my block size is 16 and my input length is 4, could the encrypt step really write as many as 19 bytes (4 + 16 - 1)? I would have thought that the true maximum would be round-up(inl, cipher_block_size); that is, for inl values 1-15 you'd get 16 bytes, and for inl values 16-31 you'd get 32 bytes, etc. (I'm not actually sure whether inl of 16 gets you 16 or 32 bytes...) Am I wrong about that? Would some ciphers/modes write beyond the end of the current "block" and into the next one? Second, the type of the outl parameter on EVP encrypt update is "int", rather than (as I would have expected) "unsigned int". Is there a possibility that EVP would set &outl to a negative value and if so, what would that mean? Do I need to check for this in my code? Same with inl; why isn't it "unsigned int"? Is there ever a reason to pass in a negative value? From beldmit at gmail.com Wed Dec 19 05:57:45 2018 From: beldmit at gmail.com (Dmitry Belyavsky) Date: Wed, 19 Dec 2018 08:57:45 +0300 Subject: [openssl-users] Two questions on OpenSSL EVP API In-Reply-To: <329575d84ff8c598faadec8736a634b318ffb814.camel@mad-scientist.net> References: <329575d84ff8c598faadec8736a634b318ffb814.camel@mad-scientist.net> Message-ID: Hello Paul, On Wed, Dec 19, 2018 at 6:02 AM Paul Smith wrote: > Hi all; I'm working with OpenSSL 1.1.1a, using the EVP interface to > encrypt/decrypt with various ciphers/modes. > > I had a couple of questions: > > > First, the encrypt update docs say: > > > the amount of data written may be anything from zero bytes to > > (inl + cipher_block_size - 1) > > Is that really true? For example if my block size is 16 and my input > length is 4, could the encrypt step really write as many as 19 bytes > (4 + 16 - 1)? > > I would have thought that the true maximum would be round-up(inl, > cipher_block_size); that is, for inl values 1-15 you'd get 16 bytes, > and for inl values 16-31 you'd get 32 bytes, etc. (I'm not actually > sure whether inl of 16 gets you 16 or 32 bytes...) > > Am I wrong about that? Would some ciphers/modes write beyond the end > of the current "block" and into the next one? > When you use a block cipher and pass data less than block size, it is stored in the internal buffer. 
In this case you do not get encrypted data until there is enough plain text to encrypt the full block. When you add more data, if you pass enough data to finalize a previously unfinished block, you get more long ciphertext than plaintext passed in a particular call of CipherUpdate. > > > Second, the type of the outl parameter on EVP encrypt update is "int", > rather than (as I would have expected) "unsigned int". Is there a > possibility that EVP would set &outl to a negative value and if so, > what would that mean? Do I need to check for this in my code? Same > with inl; why isn't it "unsigned int"? Is there ever a reason to pass > in a negative value? > I strongly suspect just historical reasons here. -- SY, Dmitry Belyavsky -------------- next part -------------- An HTML attachment was scrubbed... URL: From paul at mad-scientist.net Wed Dec 19 06:56:26 2018 From: paul at mad-scientist.net (Paul Smith) Date: Wed, 19 Dec 2018 01:56:26 -0500 Subject: [openssl-users] EVP_DecryptUpdate: why is this failing when out == in? Message-ID: As I understand it, it's legal to provide the exact same input and output buffer to EVP_EncryptUpdate and EVP_DecryptUpdate, but it's not legal to provide pointers into different parts of the same buffer. That's a good check. However, my implementation is getting triggered by this code in EVP_DecryptUpdate(): if (ctx->final_used) { /* see comment about PTRDIFF_T comparison above */ => if (((PTRDIFF_T)out == (PTRDIFF_T)in) || is_partially_overlapping(out, in, b)) { EVPerr(EVP_F_EVP_DECRYPTUPDATE, EVP_R_PARTIALLY_OVERLAPPING); return 0; } Can someone explain why, only in this specific situation where we're decrypting the final block, we require that OUT and IN not be the same buffer? Everywhere else we check is_partially_overlapping() only, without equality. I read the comment about PTRDIFF_T but I didn't come up with a reason for the equality check. This check was added back in 2016 in SHA 5fc77684f1 FWIW. From paul at mad-scientist.net Wed Dec 19 07:01:07 2018 From: paul at mad-scientist.net (Paul Smith) Date: Wed, 19 Dec 2018 02:01:07 -0500 Subject: [openssl-users] Two questions on OpenSSL EVP API In-Reply-To: References: <329575d84ff8c598faadec8736a634b318ffb814.camel@mad-scientist.net> Message-ID: On Wed, 2018-12-19 at 08:57 +0300, Dmitry Belyavsky wrote: > > I would have thought that the true maximum would be round-up(inl, > > cipher_block_size); that is, for inl values 1-15 you'd get 16 > > bytes, and for inl values 16-31 you'd get 32 bytes, etc. (I'm not > > actually sure whether inl of 16 gets you 16 or 32 bytes...) > > > > Am I wrong about that? Would some ciphers/modes write beyond the > > end of the current "block" and into the next one? > > When you use a block cipher and pass data less than block size, it is > stored in the internal buffer. In this case you do not get encrypted > data until there is enough plain text to encrypt the full block. > > When you add more data, if you pass enough data to finalize a > previously unfinished block, you get more long ciphertext than > plaintext passed in a particular call of CipherUpdate. I see. So you potentially need enough for an almost full previous block, plus the current data. That makes sense. Thanks! 
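(A small self-contained demonstration of that buffering behaviour, with throw-away key/IV values and error checking omitted for brevity: the second update returns more bytes than it was given, because the 4 buffered bytes are flushed together with the new 28.)

#include <stdio.h>
#include <string.h>
#include <openssl/evp.h>

int main(void)
{
    unsigned char key[16] = {0}, iv[16] = {0};   /* demo values only */
    unsigned char in1[4] = { 'a', 'b', 'c', 'd' };
    unsigned char in2[28], out[64];
    int outl;

    memset(in2, 'x', sizeof(in2));

    EVP_CIPHER_CTX *ctx = EVP_CIPHER_CTX_new();
    EVP_EncryptInit_ex(ctx, EVP_aes_128_cbc(), NULL, key, iv);

    EVP_EncryptUpdate(ctx, out, &outl, in1, sizeof(in1));
    printf("update 1: in=4  out=%d\n", outl);    /* 0  -- partial block buffered */

    EVP_EncryptUpdate(ctx, out, &outl, in2, sizeof(in2));
    printf("update 2: in=28 out=%d\n", outl);    /* 32 -- buffered 4 + new 28    */

    EVP_EncryptFinal_ex(ctx, out, &outl);
    printf("final:    out=%d\n", outl);          /* the padding block            */

    EVP_CIPHER_CTX_free(ctx);
    return 0;
}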
From ckashiquekvk at gmail.com Wed Dec 19 12:03:09 2018 From: ckashiquekvk at gmail.com (ASHIQUE CK) Date: Wed, 19 Dec 2018 17:33:09 +0530 Subject: [openssl-users] Openssl async support In-Reply-To: References: Message-ID: Gentle reminder On Tue, Dec 18, 2018 at 4:06 PM ASHIQUE CK wrote: > Hi all, > > I truly understand that everyone might be busy with your work and didn't > found time to reply. That's okay, but incase you have accidendly forgot to > reply, please accept this as a gentle reminder. > > > > > > On Mon, Dec 17, 2018 at 6:11 PM ASHIQUE CK wrote: > >> Hi all, >> >> I have some queries regarding OpenSSL async operation. >> >> Current setup >> ------------- >> I have one* OpenSSL dynamic engine (with RSA and AES-GCM support) *and >> linked it with *Nginx* server. Multiple *WGET* commands on the client >> side. >> >> Current issue >> ------------- >> Since OpenSSL *do_cipher call *(the function in which actual AES-GCM >> encryption/decryption happening) comes from one client at a time which is >> reducing file downloading performance. So we need an *asynchronous >> operation in OpenSSL* ie. we need multiple do_cipher calls at the same >> time from which we should submit requests to HW without affecting the >> incoming requests and should wait for HW output. >> >> Queries >> -------- >> 1) Is there is any other scheme for multiple do_cipher calls at a time?. >> 2) Any method to enable asynchronous call from OpenSSL? >> >> Versions >> ------------- >> Openssl - 1.1.0h >> Nginx1.11.10 >> Wget 1.17.1 >> >> Kindly support me. Please inform me if any more inputs needed. Thanks in >> advance. >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ckashiquekvk at gmail.com Wed Dec 19 12:18:34 2018 From: ckashiquekvk at gmail.com (ASHIQUE CK) Date: Wed, 19 Dec 2018 17:48:34 +0530 Subject: [openssl-users] Openssl speed command for AESGCM In-Reply-To: <222BA3CE-1E45-4AC6-A96D-06AADFA4EF44@baishancloud.com> References: <222BA3CE-1E45-4AC6-A96D-06AADFA4EF44@baishancloud.com> Message-ID: Thanks On Mon, Dec 17, 2018 at 9:59 AM Paul Yang wrote: > Yes, try something like: openssl speed -evp aes-128-gcm > > > On Nov 23, 2018, at 13:11, ASHIQUE CK wrote: > > > > Hi, > > Does Openssl has speed command for AESGCM ? > > -- > > openssl-users mailing list > > To unsubscribe: https://mta.openssl.org/mailman/listinfo/openssl-users > > > > -- > openssl-users mailing list > To unsubscribe: https://mta.openssl.org/mailman/listinfo/openssl-users > -------------- next part -------------- An HTML attachment was scrubbed... URL: From levitte at openssl.org Wed Dec 19 12:47:34 2018 From: levitte at openssl.org (Richard Levitte) Date: Wed, 19 Dec 2018 13:47:34 +0100 (CET) Subject: [openssl-users] Two questions on OpenSSL EVP API In-Reply-To: <329575d84ff8c598faadec8736a634b318ffb814.camel@mad-scientist.net> References: <329575d84ff8c598faadec8736a634b318ffb814.camel@mad-scientist.net> Message-ID: <20181219.134734.174698330468749838.levitte@openssl.org> In message <329575d84ff8c598faadec8736a634b318ffb814.camel at mad-scientist.net> on Tue, 18 Dec 2018 20:54:30 -0500, Paul Smith said: > Hi all; I'm working with OpenSSL 1.1.1a, using the EVP interface to > encrypt/decrypt with various ciphers/modes. > > I had a couple of questions: > > > First, the encrypt update docs say: > > > the amount of data written may be anything from zero bytes to > > (inl + cipher_block_size - 1) > > Is that really true? 
For example if my block size is 16 and my input > length is 4, could the encrypt step really write as many as 19 bytes > (4 + 16 - 1)? > > I would have thought that the true maximum would be round-up(inl, > cipher_block_size); that is, for inl values 1-15 you'd get 16 bytes, > and for inl values 16-31 you'd get 32 bytes, etc. (I'm not actually > sure whether inl of 16 gets you 16 or 32 bytes...) > > Am I wrong about that? Would some ciphers/modes write beyond the end > of the current "block" and into the next one? Some modes add extra data. For example, you get an IV block first when encrypting in CBC mode. > Second, the type of the outl parameter on EVP encrypt update is "int", > rather than (as I would have expected) "unsigned int". Is there a > possibility that EVP would set &outl to a negative value and if so, > what would that mean? Do I need to check for this in my code? Same > with inl; why isn't it "unsigned int"? Is there ever a reason to pass > in a negative value? This is most likely an artefact of how the API was originally written. Huge portions of the API have remained unchanged for quite a long time. If this API was written today, we would likely use size_t. Changing int to size_t is something I personally would like to do for some major release ('cause it will only happen in a major release), but that will also mean that applications using our libraries will have to change... You *can* pass in a negative value to EVP_EncryptUpdate, and all that will happen is... well, nothing much in the general case: if (inl <= 0) { *outl = 0; return inl == 0; } Cheers, Richard -- Richard Levitte levitte at openssl.org OpenSSL Project http://www.openssl.org/~levitte/ From mark at openssl.org Thu Dec 20 10:53:27 2018 From: mark at openssl.org (Mark J Cox) Date: Thu, 20 Dec 2018 10:53:27 +0000 Subject: [openssl-users] Celebrating 20 Years of OpenSSL Message-ID: Just about 20 years ago we released the first OpenSSL, but that wasn't the original name for the project. Read more in the blog post at https://www.openssl.org/blog/blog/2018/12/20/20years/ Regards, Mark J Cox -------------- next part -------------- An HTML attachment was scrubbed... URL: From jgh at wizmail.org Thu Dec 20 13:00:22 2018 From: jgh at wizmail.org (Jeremy Harris) Date: Thu, 20 Dec 2018 13:00:22 +0000 Subject: [openssl-users] SSL_GET_SERVER_CERT_INDEX:internal error Message-ID: <0d825b5f-70dc-de00-1330-27de349f2a48@wizmail.org> Hi, Library version: OpenSSL: Compile: OpenSSL 1.0.2k-fips 26 Jan 2017 Runtime: OpenSSL 1.0.2k-fips 26 Jan 2017 : built on: reproducible build, date unspecified CentOS 7.6.181 "14142044:SSL routines:SSL_GET_SERVER_CERT_INDEX:internal error" What is the meaning of this error return from EVP_PKEY_verify() ? The term "CERT" implies certificate, but there isn't one involved here. -- Thanks, Jeremy From oinksocket at letterboxes.org Thu Dec 20 13:17:19 2018 From: oinksocket at letterboxes.org (Nick) Date: Thu, 20 Dec 2018 13:17:19 +0000 Subject: [openssl-users] A script for hybrid encryption with openssl In-Reply-To: References: <7aa6f1de-94c3-d6c4-a437-6b8aa2cba5bc@letterboxes.org> <1fa0f893-369c-33f4-baef-0b250a5260f0@wisemo.com> <67ce24cb-d791-a891-728c-9279985d38a5@letterboxes.org> Message-ID: On 18/12/2018 18:04, Sam Roberts wrote: > Maybe you should look at gpg directly, `gpg --symmetric` uses a passphrase, > which doesn't sound fiddly. Unfortunately that doesn't do what I want: I'm after something using public key encryption (asymmetric, or a hybrid). 
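(For the record, a minimal hybrid scheme of that kind can be assembled from stock openssl commands. The sketch below uses assumed file names, provides no integrity protection, and uses the -pbkdf2 option that exists in 1.1.1 but not in older releases; it is not the script from the original post. If only a certificate is at hand rather than a bare public key, pkeyutl's -certin option replaces -pubin.)

# encrypt: random per-file key, bulk AES, key wrapped with the RSA public key
openssl rand -hex -out session.key 32
openssl enc -aes-256-cbc -pbkdf2 -salt -in big.tar -out big.tar.enc -pass file:session.key
openssl pkeyutl -encrypt -pubin -inkey pub.pem -in session.key -out session.key.enc
rm session.key

# decrypt: unwrap the key with the private key, then decrypt the bulk data
openssl pkeyutl -decrypt -inkey priv.pem -in session.key.enc -out session.key
openssl enc -d -aes-256-cbc -pbkdf2 -in big.tar.enc -out big.tar.restored -pass file:session.key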
This is so I don't need to deploy the decryption key on the server. N From openssl-users at dukhovni.org Thu Dec 20 17:16:17 2018 From: openssl-users at dukhovni.org (Viktor Dukhovni) Date: Thu, 20 Dec 2018 12:16:17 -0500 Subject: [openssl-users] SSL_GET_SERVER_CERT_INDEX:internal error In-Reply-To: <0d825b5f-70dc-de00-1330-27de349f2a48@wizmail.org> References: <0d825b5f-70dc-de00-1330-27de349f2a48@wizmail.org> Message-ID: <5AEC9BC6-1A0E-424C-A8A6-1B8D754C896D@dukhovni.org> > On Dec 20, 2018, at 8:00 AM, Jeremy Harris wrote: > > Library version: OpenSSL: Compile: OpenSSL 1.0.2k-fips 26 Jan 2017 > Runtime: OpenSSL 1.0.2k-fips 26 Jan 2017 > built on: reproducible build, date unspecified CentOS 7.6.181 > > "14142044:SSL routines:SSL_GET_SERVER_CERT_INDEX:internal error" This is an SSL library error in your error stack. Likely left over from an earlier function call, with no ERR_clear_error() before the new call. > What is the meaning of this error return from EVP_PKEY_verify() ? It is not a crypto library error, and so cannot be a result of a call to EVP_PKEY_verify(). The function that reports that error is not reachable from libcrypto. > The term "CERT" implies certificate, but there isn't one involved > here. Perhaps clear your error stack and try again. -- Viktor. From jgh at wizmail.org Thu Dec 20 23:43:06 2018 From: jgh at wizmail.org (Jeremy Harris) Date: Thu, 20 Dec 2018 23:43:06 +0000 Subject: [openssl-users] SSL_GET_SERVER_CERT_INDEX:internal error In-Reply-To: <5AEC9BC6-1A0E-424C-A8A6-1B8D754C896D@dukhovni.org> References: <0d825b5f-70dc-de00-1330-27de349f2a48@wizmail.org> <5AEC9BC6-1A0E-424C-A8A6-1B8D754C896D@dukhovni.org> Message-ID: <380cd6cb-d417-ff48-7cee-246f28d7c13c@wizmail.org> On 20/12/2018 17:16, Viktor Dukhovni wrote: >> "14142044:SSL routines:SSL_GET_SERVER_CERT_INDEX:internal error" > > This is an SSL library error in your error stack. Likely left > over from an earlier function call, with no ERR_clear_error() > before the new call. Thanks for the hint. You are correct, and a clear before that set of crypto operations gets me a far more reasonable message. The error seems to be left around after SSL_accept(), and yet it does not appear in my SNI callback. Worse, my verify callback (which I was expected to appear) does not seem to be being called. Yet the SSL_accept() succeeded. Any ideas on that? -- Cheers, Jeremy From paul at mad-scientist.net Fri Dec 21 00:33:12 2018 From: paul at mad-scientist.net (Paul Smith) Date: Thu, 20 Dec 2018 19:33:12 -0500 Subject: [openssl-users] EVP_DecryptUpdate: why is this failing when out == in? In-Reply-To: References: Message-ID: <5e1164e773c1a3134ee8d66be3be8c39c4124817.camel@mad-scientist.net> I filed https://github.com/openssl/openssl/issues/7941 about this FYI. Cheers! On Wed, 2018-12-19 at 01:56 -0500, Paul Smith wrote: > As I understand it, it's legal to provide the exact same input and > output buffer to EVP_EncryptUpdate and EVP_DecryptUpdate, but it's not > legal to provide pointers into different parts of the same buffer. > That's a good check. 
> > However, my implementation is getting triggered by this code in > EVP_DecryptUpdate(): > > if (ctx->final_used) { > /* see comment about PTRDIFF_T comparison above */ > => if (((PTRDIFF_T)out == (PTRDIFF_T)in) > || is_partially_overlapping(out, in, b)) { > EVPerr(EVP_F_EVP_DECRYPTUPDATE, EVP_R_PARTIALLY_OVERLAPPING); > return 0; > } > > Can someone explain why, only in this specific situation where we're > decrypting the final block, we require that OUT and IN not be the same > buffer? Everywhere else we check is_partially_overlapping() only, > without equality. > > I read the comment about PTRDIFF_T but I didn't come up with a reason > for the equality check. This check was added back in 2016 in SHA > 5fc77684f1 FWIW. From openssl-users at dukhovni.org Fri Dec 21 00:02:13 2018 From: openssl-users at dukhovni.org (Viktor Dukhovni) Date: Thu, 20 Dec 2018 19:02:13 -0500 Subject: [openssl-users] SSL_GET_SERVER_CERT_INDEX:internal error In-Reply-To: <380cd6cb-d417-ff48-7cee-246f28d7c13c@wizmail.org> References: <0d825b5f-70dc-de00-1330-27de349f2a48@wizmail.org> <5AEC9BC6-1A0E-424C-A8A6-1B8D754C896D@dukhovni.org> <380cd6cb-d417-ff48-7cee-246f28d7c13c@wizmail.org> Message-ID: > On Dec 20, 2018, at 6:43 PM, Jeremy Harris wrote: > > Thanks for the hint. You are correct, and a clear before that set > of crypto operations gets me a far more reasonable message. Makes sense. > The error seems to be left around after SSL_accept(), and yet > it does not appear in my SNI callback. Worse, my verify callback > (which I was expected to appear) does not seem to be being called. > Yet the SSL_accept() succeeded. > > Any ideas on that? You provide much too little detail. This particular "error" happens when a TLS 1.2 ciphersuite does not correspond to any any public key type for which OpenSSL might have a certificate. Perhaps another ciphersuite is then selected, as OpenSSL is trying to find one that works? Not all "errors" are actual problems, some are resolved by taking an alternative code path. Before beginning a new high-level operation in the SSL library it is good to (at least periodically) clear the error stack. Like "errno" it is not cleared on function entry, and persists until simply cleared or iteratively consumed for reporting. -- -- Viktor. From prithiraj.das at gmail.com Fri Dec 21 06:12:43 2018 From: prithiraj.das at gmail.com (prithiraj das) Date: Fri, 21 Dec 2018 06:12:43 +0000 Subject: [openssl-users] OpenSSL v1.1.1 static library size reduction Message-ID: I am using OpenSSL 1.1.1 from OpenSSL's website and trying to build OpenSSL on a Windows 64 bit machine using Perl 64 bit version and nasm v2.13.03. I have used the *no-shared* option in the Perl Configure to only build the static library and the resulting size of the *libcrypto.lib* file is almost 19 MB. The *.exe* file generated is 3173 KB. RSA functionality (keypair generation, encryption, decryption) is what we all need and as per the need, the goal is to reduce *libcrypto.lib *to less than 3 MB. Using the generated .exe file is not an option. Please suggest ways to reduce the libcrypto.lib size to less than 3 MB on this 64 bit machine keeping only RSA functionality. And, is it possible by any chance that the size of libcrypto.lib will be smaller if OpenSSL is being built on a Windows 32 bit machine using a Windows 32 bit configuration option VC-WIN32? Thanks and Regards, Prithiraj -------------- next part -------------- An HTML attachment was scrubbed... 
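(The usual route is to pare the build down with Configure's no-* switches; a sketch follows. The option names are the documented 1.1.1 ones, but the minimal set that still compiles and links for a given application -- and whether 3 MB is reachable at all -- has to be found by experiment. no-err in particular drops the error-string tables, which helps considerably.)

perl Configure VC-WIN64A no-shared no-engine no-dso no-err no-filenames ^
     no-comp no-ocsp no-ts no-cms no-srp no-psk ^
     no-bf no-camellia no-cast no-des no-dh no-dsa no-ec no-idea ^
     no-md4 no-mdc2 no-rc2 no-rc4 no-rmd160 no-seed no-whirlpool
nmake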
URL: From jgh at wizmail.org Fri Dec 21 14:24:18 2018 From: jgh at wizmail.org (Jeremy Harris) Date: Fri, 21 Dec 2018 14:24:18 +0000 Subject: [openssl-users] SSL_GET_SERVER_CERT_INDEX:internal error In-Reply-To: References: <0d825b5f-70dc-de00-1330-27de349f2a48@wizmail.org> <5AEC9BC6-1A0E-424C-A8A6-1B8D754C896D@dukhovni.org> <380cd6cb-d417-ff48-7cee-246f28d7c13c@wizmail.org> Message-ID: On 21/12/2018 00:02, Viktor Dukhovni wrote: >> Thanks for the hint. You are correct, and a clear before that set >> of crypto operations gets me a far more reasonable message. > > Makes sense. > >> The error seems to be left around after SSL_accept(), and yet >> it does not appear in my SNI callback. Worse, my verify callback >> (which I was expected to appear) does not seem to be being called. >> Yet the SSL_accept() succeeded. >> >> Any ideas on that? > > You provide much too little detail. This particular "error" > happens when a TLS 1.2 ciphersuite does not correspond to any > any public key type for which OpenSSL might have a certificate. A packet capture showed me the server side picking an aNULL ciphersuite. This, I suppose, explains the server-side verify callback never being called. The SSL_CTX_set_cipher_list() on both ends was aNULL:-aNULL:ALL:+RC4:!LOW:!EXPORT:!MD5:!aDSS:!kECDH:!kDH:!SEED:!IDEA:!RC2:!RC6:@STRENGTH (which I think was your suggestion from a while back?). Presumably the ALL has added aNULL ciphers back in, after the weird aNULL:-aNULL sequence (what might be the reason for that?), and the strength-sorting managed to put many anon ciphers before authenticating ones (I can see that in the suites list in the client hello). Appending another :!aNULL on the client brings sanity back; the server gets a verify callback and an ocsp callback, and this leftover error is not left in the stack. Is there some way of putting anon suites later in priority? Would :+aNULL after the ALL but before strength-sort be preferred? It does seem to do the right thing in the client hello. [ I do wish that OpenSSL had a settable debug level, the way that GnuTLS does, for showing internal operations such as suite-selection ] > Before beginning a new high-level operation in the SSL library it > is good to (at least periodically) clear the error stack. Like > "errno" it is not cleared on function entry, and persists until > simply cleared or iteratively consumed for reporting. It's rather awkward that one doesn't know exactly what such a clear might be required. Randomly spraying them around is hardly nice. The comparison with errno is poor; there, if the syscall failed you know that errno is valid. Here, if the library call fails you know only that one-or-more of the stack are valid, but not always the ones first accessible from the stack. I guess for now I'll put a clear after SSL_accept succeeds, and hope that suffices. -- Cheers, Jeremy From gisle.vanem at gmail.com Fri Dec 21 14:45:23 2018 From: gisle.vanem at gmail.com (Gisle Vanem) Date: Fri, 21 Dec 2018 15:45:23 +0100 Subject: [openssl-users] PerlASM for x64 Message-ID: I'm trying to understand how the generation of ASM-files are done on x64. (I have no problems on x86). 
With the generated Nmake makefile from a perl Configure VC-WIN64A-ONECORE when doing a: nmake crypto\aes\libcrypto-lib-aesni-x86_64.obj seems to do this: set ASM=nasm "f:/util/StrawberryPerl/perl/bin/perl.exe" "crypto\aes\asm\aesni-x86_64.pl" auto crypto\aes\aesni-x86_64.asm nasm -Ox -f win64 -DNEAR -g -o crypto\aes\libcrypto-lib-aesni-x86_64.obj "crypto\aes\aesni-x86_64.asm" Except for some warnings, nasm generates a valid libcrypto-lib-aesni-x86_64.obj. BUT, doing the same on the cmd-line: set ASM=nasm f:\util\StrawberryPerl\perl\bin\perl crypto\aes\asm\aesni-x86_64.pl auto tmp-file.s Generates a totally invalid 'tmp-file.s' file: --- tmp-file.s 2018-12-21 13:12:19 +++ crypto/aes/aesni-x86_64.asm 2018-12-21 13:11:47 @@ -1,4432 +1,5051 @@ -.text +default rel +%define XMMWORD +%define YMMWORD +%define ZMMWORD +section .text code align=64 -.globl aesni_encrypt -.type aesni_encrypt, at function -.align 16 +EXTERN OPENSSL_ia32cap_P +global aesni_encrypt + What is going on here? Some other exported env-var playing tricks? I experimented some more. I figured the "auto" does not work. But this works: perl crypto\aes\asm\aesni-x86_64.pl nasm > tmp-file.s diff tmp-file.s crypto\aes\aesni-x86_64.asm No diffs. Why does the the generation of .asm-files be so damn hard to figure out? Some cmd-line help to show what "auto" does would be nice. -- --gv From openssl-users at dukhovni.org Fri Dec 21 16:20:43 2018 From: openssl-users at dukhovni.org (Viktor Dukhovni) Date: Fri, 21 Dec 2018 11:20:43 -0500 Subject: [openssl-users] SSL_GET_SERVER_CERT_INDEX:internal error In-Reply-To: References: <0d825b5f-70dc-de00-1330-27de349f2a48@wizmail.org> <5AEC9BC6-1A0E-424C-A8A6-1B8D754C896D@dukhovni.org> <380cd6cb-d417-ff48-7cee-246f28d7c13c@wizmail.org> Message-ID: <20181221162043.GD79754@straasha.imrryr.org> On Fri, Dec 21, 2018 at 02:24:18PM +0000, Jeremy Harris wrote: > > You provide much too little detail. This particular "error" > > happens when a TLS 1.2 ciphersuite does not correspond to any > > any public key type for which OpenSSL might have a certificate. > > A packet capture showed me the server side picking an aNULL ciphersuite. Which naturally does not map to any kind of certificate. While TLS 1.2 still lives and is still capable of aNULL ciphersuites, it might make sense to add a line of code to detect that condition, and not push anything onto the error stack... > This, I suppose, explains the server-side verify callback never > being called. That callback is about client certificates, but in TLS 1.2 and earlier, it is IIRC not valid to ask for client certificates when there is no server certificate. > The SSL_CTX_set_cipher_list() on both ends was > > aNULL:-aNULL:ALL:+RC4:!LOW:!EXPORT:!MD5:!aDSS:!kECDH:!kDH:!SEED:!IDEA:!RC2:!RC6:@STRENGTH > > (which I think was your suggestion from a while back?). Yes, for pure opportunistic TLS modes for Postfix, when neither side does anything with certificates. > Presumably the ALL has added aNULL ciphers back in, after the weird > aNULL:-aNULL sequence (what might be the reason for that?), IIRC, this is documented somehere. The most recentlky removed ciphers end up at the top of the "stack", and are therefore the most preferred when they are brought back. Therefore, aNULL:-aNULL:ALL produces a list in which the aNULL ciphers are preferred to all others. > and the strength-sorting managed to put many anon ciphers before > authenticating ones (I can see that in the suites list in the client hello). Yes, it is a "stable" sort. 
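(The resulting order can be checked offline with the ciphers utility; the output depends on the build so it is not reproduced here, but with the first form the anon suites sort ahead of equal-strength authenticated ones, while the second form drops them entirely.)

# order produced by the original string
openssl ciphers -v 'aNULL:-aNULL:ALL:+RC4:!LOW:!EXPORT:!MD5:!aDSS:!kECDH:!kDH:!SEED:!IDEA:!RC2:!RC6:@STRENGTH'

# the client-side variant with aNULL removed outright
openssl ciphers -v 'aNULL:-aNULL:ALL:+RC4:!LOW:!EXPORT:!MD5:!aDSS:!kECDH:!kDH:!SEED:!IDEA:!RC2:!RC6:@STRENGTH:!aNULL'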
> Appending another :!aNULL on the client brings sanity back; the server > gets a verify callback and an ocsp callback, and this leftover error is > not left in the stack. If that's what you want, just use "DEFAULT:..." rather than "aNULL:-aNULL:ALL:...". > Is there some way of putting anon suites later in priority? Note, they're only used when the client enables them, i.e. the client has no intention of authenticating the server? Is there any point in using certificates at that point? That said, it does not sound like you want to support them at all, but if you do, then just "ALL" will leave them at a lower priority than "aRSA" and "aECDSA". > Would :+aNULL > after the ALL but before strength-sort be preferred? It does seem to do > the right thing in the client hello. You might do that, but I don't know of any clients that support *only* aNULL ciphers, so there's not much point in having them at all, if they're not preferred. > > Before beginning a new high-level operation in the SSL library it > > is good to (at least periodically) clear the error stack. Like > > "errno" it is not cleared on function entry, and persists until > > simply cleared or iteratively consumed for reporting. > > It's rather awkward that one doesn't know exactly what such a clear > might be required. Randomly spraying them around is hardly nice. > The comparison with errno is poor; there, if the syscall failed > you know that errno is valid. Here, if the library call fails > you know only that one-or-more of the stack are valid, but not > always the ones first accessible from the stack. > > I guess for now I'll put a clear after SSL_accept succeeds, and hope > that suffices. In Postfix we strive to clear the error stack before each high level operation that reports errors on failure. That way we're only reporting the relevant errors. -- Viktor. From openssl-users at dukhovni.org Fri Dec 21 17:43:55 2018 From: openssl-users at dukhovni.org (Viktor Dukhovni) Date: Fri, 21 Dec 2018 12:43:55 -0500 Subject: [openssl-users] SSL_GET_SERVER_CERT_INDEX:internal error In-Reply-To: <20181221162043.GD79754@straasha.imrryr.org> References: <0d825b5f-70dc-de00-1330-27de349f2a48@wizmail.org> <5AEC9BC6-1A0E-424C-A8A6-1B8D754C896D@dukhovni.org> <380cd6cb-d417-ff48-7cee-246f28d7c13c@wizmail.org> <20181221162043.GD79754@straasha.imrryr.org> Message-ID: <20181221174355.GE79754@straasha.imrryr.org> On Fri, Dec 21, 2018 at 11:20:43AM -0500, Viktor Dukhovni wrote: > Which naturally does not map to any kind of certificate. While TLS > 1.2 still lives and is still capable of aNULL ciphersuites, it might > make sense to add a line of code to detect that condition, and not > push anything onto the error stack... Perhaps this patch is too late for 1.0.2, which is on its last year of support, and so likely gets security fixes only, but here it is for the record: --- ssl/ssl_lib.c +++ ssl/ssl_lib.c @@ -2540,8 +2540,13 @@ int ssl_check_srvr_ecc_cert_and_alg(X509 *x, SSL *s) static int ssl_get_server_cert_index(const SSL *s) { + const SSL_CIPHER *c = s->s3->tmp.new_cipher; int idx; - idx = ssl_cipher_get_cert_index(s->s3->tmp.new_cipher); + + /* Certificate-less ciphers don't have a cert index, and that's OK */ + if (c->algorithm_auth & (SSL_aNULL | SSL_aPSK | SSL_aSRP)) + return -1; + idx = ssl_cipher_get_cert_index(c); if (idx == SSL_PKEY_RSA_ENC && !s->cert->pkeys[SSL_PKEY_RSA_ENC].x509) idx = SSL_PKEY_RSA_SIGN; if (idx == -1) It avoids needlessly generating the "error" you reported. -- Viktor. 
From Walter.H at mathemainzel.info Sat Dec 22 21:29:35 2018 From: Walter.H at mathemainzel.info (Walter H.) Date: Sat, 22 Dec 2018 22:29:35 +0100 Subject: [openssl-users] Subject CN and SANs Message-ID: <5C1EACBF.6090409@mathemainzel.info> Hello, I found several different certificates on the net some are like this: CN=example.com SANs are DNS:example.com, DNS:www.example.com and some are like this: CN=www.example.com SANs are DNS:example.com, DNS:www.example.com does this matter or is one them the preferred one? Thanks, Walter -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 3491 bytes Desc: S/MIME Cryptographic Signature URL: From felipe at felipegasper.com Sat Dec 22 21:45:15 2018 From: felipe at felipegasper.com (Felipe Gasper) Date: Sat, 22 Dec 2018 16:45:15 -0500 Subject: [openssl-users] Subject CN and SANs In-Reply-To: <5C1EACBF.6090409@mathemainzel.info> References: <5C1EACBF.6090409@mathemainzel.info> Message-ID: It shouldn?t matter. Technically subject.CN is deprecated anyway, but all the CAs still create it. -FG > On Dec 22, 2018, at 4:29 PM, Walter H. wrote: > > Hello, > > I found several different certificates on the net > > some are like this: > > CN=example.com > SANs are DNS:example.com, DNS:www.example.com > > and some are like this: > > CN=www.example.com > SANs are DNS:example.com, DNS:www.example.com > > does this matter or is one them the preferred one? > > Thanks, > Walter > > > -- > openssl-users mailing list > To unsubscribe: https://mta.openssl.org/mailman/listinfo/openssl-users From rsalz at akamai.com Sun Dec 23 02:12:33 2018 From: rsalz at akamai.com (Salz, Rich) Date: Sun, 23 Dec 2018 02:12:33 +0000 Subject: [openssl-users] Subject CN and SANs In-Reply-To: <5C1EACBF.6090409@mathemainzel.info> References: <5C1EACBF.6090409@mathemainzel.info> Message-ID: <6BBE1AEB-B043-4705-9C29-9FA22F72CDE1@akamai.com> Putting the DNS name in the CN part of the subjectDN has been deprecated for a very long time (more than 10 years), although it is still supported by many existing browsers. New certificates should only use the subjectAltName extension. From felipe at felipegasper.com Sun Dec 23 02:38:18 2018 From: felipe at felipegasper.com (Felipe Gasper) Date: Sat, 22 Dec 2018 21:38:18 -0500 Subject: [openssl-users] Subject CN and SANs In-Reply-To: <6BBE1AEB-B043-4705-9C29-9FA22F72CDE1@akamai.com> References: <5C1EACBF.6090409@mathemainzel.info> <6BBE1AEB-B043-4705-9C29-9FA22F72CDE1@akamai.com> Message-ID: <73819AD5-E996-40B6-B3C8-E7C283F1A8A9@felipegasper.com> > On Dec 22, 2018, at 9:12 PM, Salz, Rich via openssl-users wrote: > > Putting the DNS name in the CN part of the subjectDN has been deprecated for a very long time (more than 10 years), although it is still supported by many existing browsers. New certificates should only use the subjectAltName extension. Are any CAs actually doing that? I thought they all still included subject.CN. -F From rsalz at akamai.com Sun Dec 23 02:47:49 2018 From: rsalz at akamai.com (Salz, Rich) Date: Sun, 23 Dec 2018 02:47:49 +0000 Subject: [openssl-users] Subject CN and SANs In-Reply-To: <73819AD5-E996-40B6-B3C8-E7C283F1A8A9@felipegasper.com> References: <5C1EACBF.6090409@mathemainzel.info> <6BBE1AEB-B043-4705-9C29-9FA22F72CDE1@akamai.com> <73819AD5-E996-40B6-B3C8-E7C283F1A8A9@felipegasper.com> Message-ID: <37ADA91D-222B-4822-AAC6-73A56784CB8C@akamai.com> > >. New certificates should only use the subjectAltName extension. > Are any CAs actually doing that? 
I thought they all still included subject.CN. Yes, I think commercial CA's still do it. But that doesn't make my statement wrong :) From Walter.H at mathemainzel.info Sun Dec 23 09:24:55 2018 From: Walter.H at mathemainzel.info (Walter H.) Date: Sun, 23 Dec 2018 10:24:55 +0100 Subject: [openssl-users] Subject CN and SANs In-Reply-To: <37ADA91D-222B-4822-AAC6-73A56784CB8C@akamai.com> References: <5C1EACBF.6090409@mathemainzel.info> <6BBE1AEB-B043-4705-9C29-9FA22F72CDE1@akamai.com> <73819AD5-E996-40B6-B3C8-E7C283F1A8A9@felipegasper.com> <37ADA91D-222B-4822-AAC6-73A56784CB8C@akamai.com> Message-ID: <5C1F5467.90609@mathemainzel.info> On 23.12.2018 03:47, Salz, Rich via openssl-users wrote: > > >. New certificates should only use the subjectAltName extension. > >> Are any CAs actually doing that? I thought they all still included subject.CN. > > Yes, I think commercial CA's still do it. But that doesn't make my statement wrong :) > Apache raises a warning at the following condition e.g. a virtual Host defines this: ServerName www.example.com:443 and the SSL certificate has a CN which does not correspond to CN=www.example.com, e.g. CN=example.com then the warning looks like this [Fri Dec 07 07:08:19.393876 2018] [ssl:warn] [pid 29746] AH01909: www.example.com:443:0 server certificate does NOT include an ID which matches the server name and fills up the logs Walter -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 3491 bytes Desc: S/MIME Cryptographic Signature URL: From aerowolf at gmail.com Sun Dec 23 09:44:09 2018 From: aerowolf at gmail.com (Kyle Hamilton) Date: Sun, 23 Dec 2018 03:44:09 -0600 Subject: [openssl-users] Subject CN and SANs In-Reply-To: <5C1F5467.90609@mathemainzel.info> References: <5C1EACBF.6090409@mathemainzel.info> <6BBE1AEB-B043-4705-9C29-9FA22F72CDE1@akamai.com> <73819AD5-E996-40B6-B3C8-E7C283F1A8A9@felipegasper.com> <37ADA91D-222B-4822-AAC6-73A56784CB8C@akamai.com> <5C1F5467.90609@mathemainzel.info> Message-ID: Does Apache only examine CN=, or does it also check subjectAltNames dNS entries? -Kyle H On Sun, Dec 23, 2018 at 3:25 AM Walter H. wrote: > > On 23.12.2018 03:47, Salz, Rich via openssl-users wrote: > > > >. New certificates should only use the subjectAltName extension. > > > >> Are any CAs actually doing that? I thought they all still included subject.CN. > > > > Yes, I think commercial CA's still do it. But that doesn't make my statement wrong :) > > > Apache raises a warning at the following condition > > e.g. a virtual Host defines this: > > ServerName www.example.com:443 > > and the SSL certificate has a CN which does not correspond to > CN=www.example.com, e.g. CN=example.com > > then the warning looks like this > > [Fri Dec 07 07:08:19.393876 2018] [ssl:warn] [pid 29746] AH01909: > www.example.com:443:0 server certificate does NOT include an ID which > matches the server name > > and fills up the logs > > Walter > > -- > openssl-users mailing list > To unsubscribe: https://mta.openssl.org/mailman/listinfo/openssl-users From Walter.H at mathemainzel.info Sun Dec 23 11:53:26 2018 From: Walter.H at mathemainzel.info (Walter H.) 
Date: Sun, 23 Dec 2018 12:53:26 +0100 Subject: [openssl-users] Subject CN and SANs In-Reply-To: References: <5C1EACBF.6090409@mathemainzel.info> <6BBE1AEB-B043-4705-9C29-9FA22F72CDE1@akamai.com> <73819AD5-E996-40B6-B3C8-E7C283F1A8A9@felipegasper.com> <37ADA91D-222B-4822-AAC6-73A56784CB8C@akamai.com> <5C1F5467.90609@mathemainzel.info> Message-ID: <5C1F7736.2070702@mathemainzel.info> I tried the following the certificate had a CN of test.example.com and in subjectAltNames dNS were test.example.com and test.example.net when the Apache ServerName is test.example.net I get this warning [Sun Dec 23 12:45:03 2018] [warn] RSA server certificate CommonName (CN) `test.example.com' does NOT match server name!? so the CN matters ... so the server behavior is something different to the behavior of the client ... Walter On 23.12.2018 10:44, Kyle Hamilton wrote: > Does Apache only examine CN=, or does it also check subjectAltNames dNS entries? > > -Kyle H > > On Sun, Dec 23, 2018 at 3:25 AM Walter H. wrote: >> On 23.12.2018 03:47, Salz, Rich via openssl-users wrote: >>> > >. New certificates should only use the subjectAltName extension. >>> >>>> Are any CAs actually doing that? I thought they all still included subject.CN. >>> Yes, I think commercial CA's still do it. But that doesn't make my statement wrong :) >>> >> Apache raises a warning at the following condition >> >> e.g. a virtual Host defines this: >> >> ServerName www.example.com:443 >> >> and the SSL certificate has a CN which does not correspond to >> CN=www.example.com, e.g. CN=example.com >> >> then the warning looks like this >> >> [Fri Dec 07 07:08:19.393876 2018] [ssl:warn] [pid 29746] AH01909: >> www.example.com:443:0 server certificate does NOT include an ID which >> matches the server name >> >> and fills up the logs >> >> Walter -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 3491 bytes Desc: S/MIME Cryptographic Signature URL: From felipe at felipegasper.com Sun Dec 23 12:21:34 2018 From: felipe at felipegasper.com (Felipe Gasper) Date: Sun, 23 Dec 2018 07:21:34 -0500 Subject: [openssl-users] Subject CN and SANs In-Reply-To: <5C1F7736.2070702@mathemainzel.info> References: <5C1EACBF.6090409@mathemainzel.info> <6BBE1AEB-B043-4705-9C29-9FA22F72CDE1@akamai.com> <73819AD5-E996-40B6-B3C8-E7C283F1A8A9@felipegasper.com> <37ADA91D-222B-4822-AAC6-73A56784CB8C@akamai.com> <5C1F5467.90609@mathemainzel.info> <5C1F7736.2070702@mathemainzel.info> Message-ID: Wow that?s pretty bad .. is that the current version of httpd?? That?d be worth a big report if so, IMO, though I?d imagine it?s an issue they?re aware of. -FG > On Dec 23, 2018, at 6:53 AM, Walter H. wrote: > > > I tried the following > > the certificate had a CN of test.example.com and in subjectAltNames dNS were > test.example.com and test.example.net > > when the Apache ServerName is test.example.net I get this warning > > [Sun Dec 23 12:45:03 2018] [warn] RSA server certificate CommonName (CN) `test.example.com' does NOT match server name!? > > so the CN matters ... > > so the server behavior is something different to the behavior of the client ... > > Walter > >> On 23.12.2018 10:44, Kyle Hamilton wrote: >> Does Apache only examine CN=, or does it also check subjectAltNames dNS entries? >> >> -Kyle H >> >>> On Sun, Dec 23, 2018 at 3:25 AM Walter H. wrote: >>>> On 23.12.2018 03:47, Salz, Rich via openssl-users wrote: >>>> > >. New certificates should only use the subjectAltName extension. 
>>>> >>>>> Are any CAs actually doing that? I thought they all still included subject.CN. >>>> Yes, I think commercial CA's still do it. But that doesn't make my statement wrong :) >>>> >>> Apache raises a warning at the following condition >>> >>> e.g. a virtual Host defines this: >>> >>> ServerName www.example.com:443 >>> >>> and the SSL certificate has a CN which does not correspond to >>> CN=www.example.com, e.g. CN=example.com >>> >>> then the warning looks like this >>> >>> [Fri Dec 07 07:08:19.393876 2018] [ssl:warn] [pid 29746] AH01909: >>> www.example.com:443:0 server certificate does NOT include an ID which >>> matches the server name >>> >>> and fills up the logs >>> >>> Walter > > -- > openssl-users mailing list > To unsubscribe: https://mta.openssl.org/mailman/listinfo/openssl-users From Walter.H at mathemainzel.info Sun Dec 23 13:50:05 2018 From: Walter.H at mathemainzel.info (Walter H.) Date: Sun, 23 Dec 2018 14:50:05 +0100 Subject: [openssl-users] Subject CN and SANs In-Reply-To: References: <5C1EACBF.6090409@mathemainzel.info> <6BBE1AEB-B043-4705-9C29-9FA22F72CDE1@akamai.com> <73819AD5-E996-40B6-B3C8-E7C283F1A8A9@felipegasper.com> <37ADA91D-222B-4822-AAC6-73A56784CB8C@akamai.com> <5C1F5467.90609@mathemainzel.info> <5C1F7736.2070702@mathemainzel.info> Message-ID: <5C1F928D.9030901@mathemainzel.info> I guess its a matter of which Linux you use, CentOS 7 doesn't give this warning; CentOS 6 warns about this; a Debian (don't really know which release) uname -a Linux a2f78 3.16.0-7-amd64 #1 SMP Debian 3.16.59-1 (2018-10-03) x86_64 GNU/Linux does warn ... Walter On 23.12.2018 13:21, Felipe Gasper wrote: > Wow that?s pretty bad .. is that the current version of httpd?? > > That?d be worth a big report if so, IMO, though I?d imagine it?s an issue they?re aware of. > > -FG > >> On Dec 23, 2018, at 6:53 AM, Walter H. wrote: >> >> >> I tried the following >> >> the certificate had a CN of test.example.com and in subjectAltNames dNS were >> test.example.com and test.example.net >> >> when the Apache ServerName is test.example.net I get this warning >> >> [Sun Dec 23 12:45:03 2018] [warn] RSA server certificate CommonName (CN) `test.example.com' does NOT match server name!? >> >> so the CN matters ... >> >> so the server behavior is something different to the behavior of the client ... >> >> Walter >> >>> On 23.12.2018 10:44, Kyle Hamilton wrote: >>> Does Apache only examine CN=, or does it also check subjectAltNames dNS entries? >>> >>> -Kyle H >>> >>>> On Sun, Dec 23, 2018 at 3:25 AM Walter H. wrote: >>>>> On 23.12.2018 03:47, Salz, Rich via openssl-users wrote: >>>>> > >. New certificates should only use the subjectAltName extension. >>>>> >>>>>> Are any CAs actually doing that? I thought they all still included subject.CN. >>>>> Yes, I think commercial CA's still do it. But that doesn't make my statement wrong :) >>>>> >>>> Apache raises a warning at the following condition >>>> >>>> e.g. a virtual Host defines this: >>>> >>>> ServerName www.example.com:443 >>>> >>>> and the SSL certificate has a CN which does not correspond to >>>> CN=www.example.com, e.g. CN=example.com >>>> >>>> then the warning looks like this >>>> >>>> [Fri Dec 07 07:08:19.393876 2018] [ssl:warn] [pid 29746] AH01909: >>>> www.example.com:443:0 server certificate does NOT include an ID which >>>> matches the server name >>>> >>>> and fills up the logs >>>> >>>> Walter >> -------------- next part -------------- A non-text attachment was scrubbed... 
Name: smime.p7s Type: application/pkcs7-signature Size: 3491 bytes Desc: S/MIME Cryptographic Signature URL: From mcr at sandelman.ca Sun Dec 23 15:21:41 2018 From: mcr at sandelman.ca (Michael Richardson) Date: Sun, 23 Dec 2018 10:21:41 -0500 Subject: [openssl-users] Subject CN and SANs In-Reply-To: <6BBE1AEB-B043-4705-9C29-9FA22F72CDE1@akamai.com> References: <5C1EACBF.6090409@mathemainzel.info> <6BBE1AEB-B043-4705-9C29-9FA22F72CDE1@akamai.com> Message-ID: <3571.1545578501@localhost> Salz, Rich via openssl-users wrote: > Putting the DNS name in the CN part of the subjectDN has been > deprecated for a very long time (more than 10 years), although it > is still supported by many existing browsers. New certificates > should only use the subjectAltName extension. Fair enough. It seems that the "openssl ca" mechanism still seem to want a subjectDN defined. Am I missing some mechanism that would let me omit all of that? Or is a patch needed to kill what seems like a current operational requirement? -- ] Never tell me the odds! | ipv6 mesh networks [ ] Michael Richardson, Sandelman Software Works | IoT architect [ ] mcr at sandelman.ca http://www.sandelman.ca/ | ruby on rails [ -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 487 bytes Desc: not available URL: From openssl-users at dukhovni.org Sun Dec 23 19:11:48 2018 From: openssl-users at dukhovni.org (Viktor Dukhovni) Date: Sun, 23 Dec 2018 14:11:48 -0500 Subject: [openssl-users] Subject CN and SANs In-Reply-To: <3571.1545578501@localhost> References: <5C1EACBF.6090409@mathemainzel.info> <6BBE1AEB-B043-4705-9C29-9FA22F72CDE1@akamai.com> <3571.1545578501@localhost> Message-ID: <7F405EB0-3D15-4C35-88A7-9BD313ABB305@dukhovni.org> > On Dec 23, 2018, at 10:21 AM, Michael Richardson wrote: > > It seems that the "openssl ca" mechanism still seem to want a subjectDN > defined. Am I missing some mechanism that would let me omit all of that? Or > is a patch needed to kill what seems like a current operational requirement? It is not a matter of "openssl ca". An X.509 certificate has a subjectDN, that's a required part of the certificate structure. However, a "DN" is a SEQUENCE of "RDNs", and that sequence can be empty, for example (requires "bash"): $ openssl req -config <( printf "%s\n[dn]\n%s\n[ext]\n%s\n" \ "distinguished_name = dn" \ "prompt = yes" \ "$(printf "subjectAltName = DNS:%s\n" "example.com")" ) \ -extensions ext -new -newkey rsa:1024 -nodes -keyout /dev/null \ -x509 -subj / 2>/dev/null | openssl x509 -noout -text -certopt no_pubkey,no_sigdump Certificate: Data: Version: 3 (0x2) Serial Number: 47:37:cb:39:a4:9c:be:c2:ea:42:2f:ed:e2:df:bc:62:bb:2b:cb:dd Signature Algorithm: sha256WithRSAEncryption Issuer: Validity Not Before: Dec 23 18:56:08 2018 GMT Not After : Jan 22 18:56:08 2019 GMT Subject: X509v3 extensions: X509v3 Subject Alternative Name: DNS:example.com Note the empty subjectDN and issuerDN. The latter violates RFC5280, but will suffice for this example. 
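Since the subject can legitimately be empty, any name check on the receiving side has to be driven by the subjectAltName entries rather than by a CN. A minimal sketch of how an application might do that against an X509 it has already obtained, with the expected host name supplied by the caller; X509_check_host() walks the dNSName SANs for us:

    #include <openssl/x509v3.h>

    /* Return 1 if one of the certificate's dNSName SANs matches the expected
     * host, 0 if none does, -1 on internal error.  The flag below disables
     * the legacy CN fallback, so an empty subject DN does not matter. */
    int cert_matches_host(X509 *cert, const char *host)
    {
        return X509_check_host(cert, host, 0,
                               X509_CHECK_FLAG_NEVER_CHECK_SUBJECT, NULL);
    }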
An RFC compliant *self-signed* certificate needs to have a non-empty issuer name, so it could be something like: $ openssl req -config <( printf "%s\n[dn]\n%s\n[ext]\n%s\n" \ "distinguished_name = dn" \ "prompt = yes" \ "$(printf "subjectAltName = DNS:%s\n" "example.com")" ) \ -extensions ext -new -newkey rsa:1024 -nodes -keyout /dev/null \ -x509 -subj "/O=Self" 2>/dev/null | openssl x509 -noout -text -certopt no_pubkey,no_sigdump Certificate: Data: Version: 3 (0x2) Serial Number: 6b:f0:9e:6c:ff:27:f3:cb:eb:79:10:6d:ac:9a:c2:54:e4:78:06:b0 Signature Algorithm: sha256WithRSAEncryption Issuer: O = Self Validity Not Before: Dec 23 19:08:51 2018 GMT Not After : Jan 22 19:08:51 2019 GMT Subject: O = Self X509v3 extensions: X509v3 Subject Alternative Name: DNS:example.com with an actual CA, the subject could be empty, and the issuer will be the CA's DN. -- Viktor. From levitte at openssl.org Sun Dec 23 20:08:15 2018 From: levitte at openssl.org (Richard Levitte) Date: Sun, 23 Dec 2018 21:08:15 +0100 (CET) Subject: [openssl-users] PerlASM for x64 In-Reply-To: References: Message-ID: <20181223.210815.534126485389681186.levitte@openssl.org> In message on Fri, 21 Dec 2018 15:45:23 +0100, Gisle Vanem said: > I'm trying to understand how the generation of ASM-files > are done on x64. (I have no problems on x86). > > With the generated Nmake makefile from a > perl Configure VC-WIN64A-ONECORE > > when doing a: > nmake crypto\aes\libcrypto-lib-aesni-x86_64.obj > > seems to do this: > set ASM=nasm > "f:/util/StrawberryPerl/perl/bin/perl.exe" > "crypto\aes\asm\aesni-x86_64.pl" auto crypto\aes\aesni-x86_64.asm > nasm -Ox -f win64 -DNEAR -g -o > crypto\aes\libcrypto-lib-aesni-x86_64.obj > "crypto\aes\aesni-x86_64.asm" > > Except for some warnings, nasm generates a valid > libcrypto-lib-aesni-x86_64.obj. > > BUT, doing the same on the cmd-line: > set ASM=nasm > f:\util\StrawberryPerl\perl\bin\perl crypto\aes\asm\aesni-x86_64.pl > auto tmp-file.s > > Generates a totally invalid 'tmp-file.s' file: > > --- tmp-file.s 2018-12-21 13:12:19 > +++ crypto/aes/aesni-x86_64.asm 2018-12-21 13:11:47 > @@ -1,4432 +1,5051 @@ > -.text > +default rel > +%define XMMWORD > +%define YMMWORD > +%define ZMMWORD > +section .text code align=64 > > -.globl aesni_encrypt > -.type aesni_encrypt, at function > -.align 16 > +EXTERN OPENSSL_ia32cap_P > +global aesni_encrypt > + > > What is going on here? Some other exported env-var playing > tricks? > > I experimented some more. I figured the "auto" does not work. > But this works: > perl crypto\aes\asm\aesni-x86_64.pl nasm > tmp-file.s > diff tmp-file.s crypto\aes\aesni-x86_64.asm > > No diffs. > > Why does the the generation of .asm-files be so damn hard to > figure out? Some cmd-line help to show what "auto" does would > be nice. The "auto" flavor takes note of the output file extension. .asm vs .s in this case. Cheers, Richard -- Richard Levitte levitte at openssl.org OpenSSL Project http://www.openssl.org/~levitte/ From aerowolf at gmail.com Sun Dec 23 21:29:42 2018 From: aerowolf at gmail.com (Kyle Hamilton) Date: Sun, 23 Dec 2018 15:29:42 -0600 Subject: [openssl-users] Subject CN and SANs In-Reply-To: <3571.1545578501@localhost> References: <5C1EACBF.6090409@mathemainzel.info> <6BBE1AEB-B043-4705-9C29-9FA22F72CDE1@akamai.com> <3571.1545578501@localhost> Message-ID: SubjectCN is an operational requirement of X.509, I believe. It's not optional in the data structure, at any rate. 
-Kyle H On Sun, Dec 23, 2018 at 9:22 AM Michael Richardson wrote: > > > Salz, Rich via openssl-users wrote: > > Putting the DNS name in the CN part of the subjectDN has been > > deprecated for a very long time (more than 10 years), although it > > is still supported by many existing browsers. New certificates > > should only use the subjectAltName extension. > > Fair enough. > > It seems that the "openssl ca" mechanism still seem to want a subjectDN > defined. Am I missing some mechanism that would let me omit all of that? Or > is a patch needed to kill what seems like a current operational requirement? > > -- > ] Never tell me the odds! | ipv6 mesh networks [ > ] Michael Richardson, Sandelman Software Works | IoT architect [ > ] mcr at sandelman.ca http://www.sandelman.ca/ | ruby on rails [ > > -- > openssl-users mailing list > To unsubscribe: https://mta.openssl.org/mailman/listinfo/openssl-users From openssl-users at dukhovni.org Sun Dec 23 21:34:53 2018 From: openssl-users at dukhovni.org (Viktor Dukhovni) Date: Sun, 23 Dec 2018 16:34:53 -0500 Subject: [openssl-users] Subject CN and SANs In-Reply-To: References: <5C1EACBF.6090409@mathemainzel.info> <6BBE1AEB-B043-4705-9C29-9FA22F72CDE1@akamai.com> <3571.1545578501@localhost> Message-ID: <7B430C95-4832-4BA1-80F2-AD17AAC8C5A5@dukhovni.org> > On Dec 23, 2018, at 4:29 PM, Kyle Hamilton wrote: > > SubjectCN is an operational requirement of X.509, I believe. You're confusing the DN and the CN. > It's not optional in the data structure, at any rate. The subjectDN is not optional, but it can be empty sequence, and is empty for domains whose name exceeds the CN length limit of either 63 or 64 characters (can't recall which of the two just now, but that is not important). -- Viktor. From felipe at felipegasper.com Sun Dec 23 21:50:58 2018 From: felipe at felipegasper.com (Felipe Gasper) Date: Sun, 23 Dec 2018 16:50:58 -0500 Subject: [openssl-users] Subject CN and SANs In-Reply-To: References: <5C1EACBF.6090409@mathemainzel.info> <6BBE1AEB-B043-4705-9C29-9FA22F72CDE1@akamai.com> <3571.1545578501@localhost> Message-ID: <60291A72-46E4-437D-9079-893570D11B5C@felipegasper.com> Actually, per the latest CA/Browser forum guidelines, subject.CN is not only optional but ?discouraged?. -FG > On Dec 23, 2018, at 4:29 PM, Kyle Hamilton wrote: > > SubjectCN is an operational requirement of X.509, I believe. It's not > optional in the data structure, at any rate. > > -Kyle H > >> On Sun, Dec 23, 2018 at 9:22 AM Michael Richardson wrote: >> >> >> Salz, Rich via openssl-users wrote: >>> Putting the DNS name in the CN part of the subjectDN has been >>> deprecated for a very long time (more than 10 years), although it >>> is still supported by many existing browsers. New certificates >>> should only use the subjectAltName extension. >> >> Fair enough. >> >> It seems that the "openssl ca" mechanism still seem to want a subjectDN >> defined. Am I missing some mechanism that would let me omit all of that? Or >> is a patch needed to kill what seems like a current operational requirement? >> >> -- >> ] Never tell me the odds! 
| ipv6 mesh networks [ >> ] Michael Richardson, Sandelman Software Works | IoT architect [ >> ] mcr at sandelman.ca http://www.sandelman.ca/ | ruby on rails [ >> >> -- >> openssl-users mailing list >> To unsubscribe: https://mta.openssl.org/mailman/listinfo/openssl-users > -- > openssl-users mailing list > To unsubscribe: https://mta.openssl.org/mailman/listinfo/openssl-users From alibekj at yahoo.com Sun Dec 23 22:10:38 2018 From: alibekj at yahoo.com (Alibek Jorajev) Date: Sun, 23 Dec 2018 22:10:38 +0000 (UTC) Subject: [openssl-users] FIPS module v3 In-Reply-To: <37f9568f-d085-410d-a6d0-213bbc73d9a5@default> References: <321756724.6615988.1545127828033@mail.yahoo.com> <37f9568f-d085-410d-a6d0-213bbc73d9a5@default> Message-ID: <116884437.9443842.1545603038691@mail.yahoo.com> thanks for your reply! On Tuesday, 18 December 2018, 20:57:40 GMT, Paul Dale wrote: There are no committed to dates of any kind at present. The project is underway but it is too early to set a schedule, yet alone a completion date. Pauli -- Oracle Dr Paul Dale | Cryptographer | Network Security & Encryption Phone +61 7 3031 7217 Oracle Australia From: Alibek Jorajev via openssl-users [mailto:openssl-users at openssl.org] Sent: Tuesday, 18 December 2018 8:10 PM To: openssl-users at openssl.org Subject: [openssl-users] FIPS module v3 Hi everyone, I have been following OpenSSL blog and know that work on new OpenSSL FIPS module has started. Current FIPS module (v.2) has end of life (December 2019) and I assume that new FIPS module will be by that time.? but can someone tell me - is there are approximate dates -? will it be available earlier? thanks, Alibek -- openssl-users mailing list To unsubscribe: https://mta.openssl.org/mailman/listinfo/openssl-users -------------- next part -------------- An HTML attachment was scrubbed... URL: From aerowolf at gmail.com Sun Dec 23 23:01:02 2018 From: aerowolf at gmail.com (Kyle Hamilton) Date: Sun, 23 Dec 2018 17:01:02 -0600 Subject: [openssl-users] Subject CN and SANs In-Reply-To: <7B430C95-4832-4BA1-80F2-AD17AAC8C5A5@dukhovni.org> References: <5C1EACBF.6090409@mathemainzel.info> <6BBE1AEB-B043-4705-9C29-9FA22F72CDE1@akamai.com> <3571.1545578501@localhost> <7B430C95-4832-4BA1-80F2-AD17AAC8C5A5@dukhovni.org> Message-ID: You're right, I typoed. SubjectDN is non-optional. But it can, as you mentioned, be an empty sequence. But for PKIX purposes, it can't be empty if it's an Issuer (because IssuerDN can't be empty in the certificates that it issues). -Kyle H On Sun, Dec 23, 2018 at 3:35 PM Viktor Dukhovni wrote: > > > > > On Dec 23, 2018, at 4:29 PM, Kyle Hamilton wrote: > > > > SubjectCN is an operational requirement of X.509, I believe. > > You're confusing the DN and the CN. > > > It's not optional in the data structure, at any rate. > > The subjectDN is not optional, but it can be empty sequence, and > is empty for domains whose name exceeds the CN length limit of either > 63 or 64 characters (can't recall which of the two just now, but that > is not important). > > -- > Viktor. 
> > -- > openssl-users mailing list > To unsubscribe: https://mta.openssl.org/mailman/listinfo/openssl-users From openssl-users at dukhovni.org Sun Dec 23 23:33:41 2018 From: openssl-users at dukhovni.org (Viktor Dukhovni) Date: Sun, 23 Dec 2018 18:33:41 -0500 Subject: [openssl-users] Subject CN and SANs In-Reply-To: References: <5C1EACBF.6090409@mathemainzel.info> <6BBE1AEB-B043-4705-9C29-9FA22F72CDE1@akamai.com> <3571.1545578501@localhost> <7B430C95-4832-4BA1-80F2-AD17AAC8C5A5@dukhovni.org> Message-ID: > On Dec 23, 2018, at 6:01 PM, Kyle Hamilton wrote: > > You're right, I typoed. SubjectDN is non-optional. But it can, as > you mentioned, be an empty sequence. > > But for PKIX purposes, it can't be empty if it's an Issuer (because > IssuerDN can't be empty in the certificates that it issues). That's an odd use of "it", since the issuerDN while also a DN is not a subjectDN. The "it" that is the subjectDN is sometimes legitimately empty. The other "it" that is the issuerDN is supposed to always be non-empty, but some self-signed certificates violate that requirement with apparent impunity, e.g. nothing in OpenSSL requires a non-empty issuer DN in an end-entity self-signed certificate, if it breaks, the constraint would be at the application layer. -- Viktor. From Walter.H at mathemainzel.info Mon Dec 24 08:17:00 2018 From: Walter.H at mathemainzel.info (Walter H.) Date: Mon, 24 Dec 2018 09:17:00 +0100 Subject: [openssl-users] Subject CN and SANs In-Reply-To: <60291A72-46E4-437D-9079-893570D11B5C@felipegasper.com> References: <5C1EACBF.6090409@mathemainzel.info> <6BBE1AEB-B043-4705-9C29-9FA22F72CDE1@akamai.com> <3571.1545578501@localhost> <60291A72-46E4-437D-9079-893570D11B5C@felipegasper.com> Message-ID: <5C2095FC.40408@mathemainzel.info> and which CA does this as the forum guidelines say? On 23.12.2018 22:50, Felipe Gasper wrote: > Actually, per the latest CA/Browser forum guidelines, subject.CN is not only optional but ?discouraged?. > > -FG > -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 3491 bytes Desc: S/MIME Cryptographic Signature URL: From chris.gray at kiffer.be Mon Dec 24 09:59:43 2018 From: chris.gray at kiffer.be (chris.gray at kiffer.be) Date: Mon, 24 Dec 2018 09:59:43 -0000 Subject: [openssl-users] Subject CN and SANs In-Reply-To: <60291A72-46E4-437D-9079-893570D11B5C@felipegasper.com> References: <5C1EACBF.6090409@mathemainzel.info> <6BBE1AEB-B043-4705-9C29-9FA22F72CDE1@akamai.com> <3571.1545578501@localhost> <60291A72-46E4-437D-9079-893570D11B5C@felipegasper.com> Message-ID: A bit off-topic but is it also a good idea to follow these guidelines in non-browser use cases, for example for a client certificate which is used to autenticate on a TLS connection which will be used for another protocol such as MQTT? In this case the SubjectCN looks like a "natural" place to put the client's identity, but maybe it is still better to use subjectAltName? - Chris > Actually, per the latest CA/Browser forum guidelines, subject.CN is not > only optional but ???discouraged???. > > -FG > >> On Dec 23, 2018, at 4:29 PM, Kyle Hamilton wrote: >> >> SubjectCN is an operational requirement of X.509, I believe. It's not >> optional in the data structure, at any rate. 
>> >> -Kyle H >> >>> On Sun, Dec 23, 2018 at 9:22 AM Michael Richardson >>> wrote: >>> >>> >>> Salz, Rich via openssl-users wrote: >>>> Putting the DNS name in the CN part of the subjectDN has been >>>> deprecated for a very long time (more than 10 years), although it >>>> is still supported by many existing browsers. New certificates >>>> should only use the subjectAltName extension. >>> >>> Fair enough. >>> >>> It seems that the "openssl ca" mechanism still seem to want a subjectDN >>> defined. Am I missing some mechanism that would let me omit all of >>> that? Or >>> is a patch needed to kill what seems like a current operational >>> requirement? >>> >>> -- >>> ] Never tell me the odds! | ipv6 mesh >>> networks [ >>> ] Michael Richardson, Sandelman Software Works | IoT >>> architect [ >>> ] mcr at sandelman.ca http://www.sandelman.ca/ | ruby on >>> rails [ >>> >>> -- >>> openssl-users mailing list >>> To unsubscribe: https://mta.openssl.org/mailman/listinfo/openssl-users >> -- >> openssl-users mailing list >> To unsubscribe: https://mta.openssl.org/mailman/listinfo/openssl-users > > -- > openssl-users mailing list > To unsubscribe: https://mta.openssl.org/mailman/listinfo/openssl-users > From prithiraj.das at gmail.com Mon Dec 24 11:05:03 2018 From: prithiraj.das at gmail.com (prithiraj das) Date: Mon, 24 Dec 2018 11:05:03 +0000 Subject: [openssl-users] OpenSSL v1.1.1 static library size reduction In-Reply-To: References: Message-ID: Hi All, Please accept this as a gentle reminder to the previous mail in the mailchain. And also would a custom makefile (if created for this purpose) help in this regard? Thanks and Regards, Prithiraj On Fri, 21 Dec 2018 at 06:12, prithiraj das wrote: > I am using OpenSSL 1.1.1 from OpenSSL's website and trying to build > OpenSSL on a Windows 64 bit machine using Perl 64 bit version and nasm > v2.13.03. I have used the *no-shared* option in the Perl Configure to > only build the static library and the resulting size of the > *libcrypto.lib* file is almost 19 MB. The *.exe* file generated is 3173 > KB. RSA functionality (keypair generation, encryption, decryption) is what > we all need and as per the need, the goal is to reduce *libcrypto.lib *to > less than 3 MB. Using the generated .exe file is not an option. > Please suggest ways to reduce the libcrypto.lib size to less than 3 MB on > this 64 bit machine keeping only RSA functionality. > And, is it possible by any chance that the size of libcrypto.lib will be > smaller if OpenSSL is being built on a Windows 32 bit machine using a > Windows 32 bit configuration option VC-WIN32? > > Thanks and Regards, > Prithiraj > -------------- next part -------------- An HTML attachment was scrubbed... URL: From matt at openssl.org Mon Dec 24 11:13:20 2018 From: matt at openssl.org (Matt Caswell) Date: Mon, 24 Dec 2018 11:13:20 +0000 Subject: [openssl-users] Sending empty renegotiaion_info In-Reply-To: References: Message-ID: On 18/12/2018 08:21, Dmitry Belyavsky wrote: > Hello, > > Is it possible to send empty renegotiation_info extension instead of > TLS_EMPTY_RENEGOTIATION_INFO_SCSV using openssl?s_client? No, this isn't possible. We only ever send the renegotiation_info extension on a reneg ClientHello. Matt From c.wehrmeyer at freshlions.de Mon Dec 24 11:51:17 2018 From: c.wehrmeyer at freshlions.de (Christian) Date: Mon, 24 Dec 2018 12:51:17 +0100 Subject: [openssl-users] Authentication over ECDHE Message-ID: <712647c2-1674-ca81-0efb-84ab41c86eac@freshlions.de> Hello, people. 
I'm a beginner with OpenSSL and with cryptography in general, and have been wondering how to best implement an upcoming system. I apologise in advance for any grammar or orthography mistakes, as English isn't my native language. We have a local network with a databse in which we do most of our processing, and a public machine that runs a webserver. Periodically we have to connect to that server and query new data to process it. The connection to that server is not necessarily trusted. The problem is that our webserver is slow and clunky and generally just issues another process to deal with any request, which is unnecessary and slow. We want to streamline that process by having a local program run on the server sending a set of predefined queries over a predefined protocol, and then just sent that data back to the client. However, only a select few machines are supposed to be able to get any data from the server, like, those who have a certain private key. If a client can sign a ping that can be decrypted with the client side public key, and if the server can sign a ping that can be decrypted with the servers public key, then both sides are authenticated, and - from my limited understand - a MITM scenario is foiled (unless the MITM manages to steal either private key, which is why I also want to have password protection for the key. I'm away that putting the key into a program compromises the security of the key if an attacker manages to gain access to the server, but in this case it's just supposed to give us some time to stop the programs, close all holes, and generate new keys). This sounds like a typical RSA scenario, however I also want to have forward security, which requires me to use something with temporary keys only - I'm having ECDHE in mind for that, ECDHE-RSA-AES128-GCM-SHA256 in particular. However, after some research I found out that the "RSA" in that cipher only refers to the temporary keys that are being generated for this connection, and thus authentication would have to be issued on top of TLS, not within the means of TLS itself. And last, but not least I've read about an attack a little while back how some DH parameters (usually those with a size of 1024 bits) have become stale. If I want to have extra security, Speed isn't an incredible huge problem, as there will always be just one, at most two connections running with the server. As such its design can be incredible simple, and the connection can be more secure in terms of cryptography than default (4096 RSA keys and 2048 DH params wouldn't be an issue). I expect the bulk of the runtime to be spent on the database server side of things anyway. I don't want to use certificates. Either a client/server has the necessary private keys to sign data, or the connection is simply refused. I also don't want to use any password, because that requires to share a secret over a to this moment still unverified channel. My question is thusly how to implement authentication over ECDHE in the best way. My searches for "openssl c sign data with private key" doesn't yield any usable results, which suggests that there is some sort of misunderstanding with the concept of "signing ping/pong with respective private keys". Are there any functions or further documentation to be of help here? Please keep in mind that all of this has been Greek to me until last Friday, and that I'm by no way a cryptography expert. Thank you for your time and effort in advance. 
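On the "sign data with a private key" search mentioned at the end of that message: in the OpenSSL C API the usual entry point is the EVP_DigestSign family. A minimal sketch, assuming an RSA key in a placeholder PEM file and leaving most error reporting out; the caller supplies a signature buffer of at least EVP_PKEY_size(pkey) bytes and passes its size in *siglen:

    #include <stdio.h>
    #include <openssl/evp.h>
    #include <openssl/pem.h>

    /* Sign msg with the RSA key in "client-key.pem" (placeholder name),
     * using SHA-256.  Returns 1 on success, 0 on failure. */
    int sign_message(const unsigned char *msg, size_t msglen,
                     unsigned char *sig, size_t *siglen)
    {
        FILE *fp = fopen("client-key.pem", "r");
        EVP_PKEY *pkey = fp != NULL ? PEM_read_PrivateKey(fp, NULL, NULL, NULL) : NULL;
        EVP_MD_CTX *ctx = EVP_MD_CTX_new();
        int ok = 0;

        if (fp != NULL)
            fclose(fp);
        if (pkey != NULL && ctx != NULL
                && EVP_DigestSignInit(ctx, NULL, EVP_sha256(), NULL, pkey) == 1
                && EVP_DigestSignUpdate(ctx, msg, msglen) == 1
                && EVP_DigestSignFinal(ctx, sig, siglen) == 1)
            ok = 1;

        EVP_MD_CTX_free(ctx);
        EVP_PKEY_free(pkey);
        return ok;
    }

As the replies that follow point out, though, hand-rolling a challenge/response on top of this is usually unnecessary: letting TLS itself do certificate- or PSK-based client and server authentication gives the same guarantee with far less room for protocol mistakes.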
From gisle.vanem at gmail.com Mon Dec 24 12:17:51 2018 From: gisle.vanem at gmail.com (Gisle Vanem) Date: Mon, 24 Dec 2018 13:17:51 +0100 Subject: [openssl-users] PerlASM for x64 In-Reply-To: <20181223.210815.534126485389681186.levitte@openssl.org> References: <20181223.210815.534126485389681186.levitte@openssl.org> Message-ID: <3fc4d543-d71a-8c22-566a-d902c4f7da03@gmail.com> Richard Levitte wrote: >> I experimented some more. I figured the "auto" does not work. >> But this works: >> perl crypto\aes\asm\aesni-x86_64.pl nasm > tmp-file.s >> diff tmp-file.s crypto\aes\aesni-x86_64.asm >> >> No diffs. >> >> Why does the the generation of .asm-files be so damn hard to >> figure out? Some cmd-line help to show what "auto" does would >> be nice. > > The "auto" flavor takes note of the output file extension. .asm vs .s > in this case. Thank, but it was a typo in my 1st email. The correct command was with a redirect: set ASM=nasm f:\util\StrawberryPerl\perl\bin\perl crypto\aes\asm\aesni-x86_64.pl auto > tmp-file.s Still the "auto" forces GNU-asm syntax with 'ASM=nasm'. I can only conclude the '$ASM' does nothing and only helps obfuscate things further. This works fine: set ASM= f:\util\StrawberryPerl\perl\bin\perl crypto\aes\asm\aesni-x86_64.pl nasm > tmp-file.s -- --gv From matt at openssl.org Mon Dec 24 12:31:18 2018 From: matt at openssl.org (Matt Caswell) Date: Mon, 24 Dec 2018 12:31:18 +0000 Subject: [openssl-users] OpenSSL v1.1.1 static library size reduction In-Reply-To: References: Message-ID: <34f17a23-de26-ba1c-2922-f199772ae954@openssl.org> On 21/12/2018 06:12, prithiraj das wrote: > I am using OpenSSL 1.1.1 from OpenSSL's website and trying to build OpenSSL on a > Windows 64 bit machine using Perl 64 bit version and nasm v2.13.03. I have used > the *no-shared* option in the Perl Configure to only build the static library > and the resulting size of the?*libcrypto.lib*?file is almost 19 MB. The *.exe* > file generated is 3173 KB. RSA functionality (keypair generation, encryption, > decryption) is what we all need and as per the need, the goal is to > reduce?*libcrypto.lib *to less than 3 MB. Using the generated .exe file is not > an option. > Please suggest ways to reduce the libcrypto.lib size to less than 3 MB on this > 64 bit machine keeping only RSA functionality. > ?And, is it possible by any chance that the size of libcrypto.lib will be > smaller if OpenSSL is being built on a Windows 32 bit machine using a Windows 32 > bit configuration option VC-WIN32? You can try adding "-DOPENSSL_SMALL_FOOTPRINT" onto the end of your Configure line. You might also want to experiment with the various "no-*" options described in the INSTALL file. Matt From felipe at felipegasper.com Mon Dec 24 13:56:51 2018 From: felipe at felipegasper.com (Felipe Gasper) Date: Mon, 24 Dec 2018 08:56:51 -0500 Subject: [openssl-users] Subject CN and SANs In-Reply-To: <5C2095FC.40408@mathemainzel.info> References: <5C1EACBF.6090409@mathemainzel.info> <6BBE1AEB-B043-4705-9C29-9FA22F72CDE1@akamai.com> <3571.1545578501@localhost> <60291A72-46E4-437D-9079-893570D11B5C@felipegasper.com> <5C2095FC.40408@mathemainzel.info> Message-ID: <6FA5D69C-49BC-40DB-A5DB-F23EE6DF5091@felipegasper.com> I?m not sure, heh. ;-) -F > On Dec 24, 2018, at 3:17 AM, Walter H. wrote: > > and which CA does this as the forum guidelines say? > >> On 23.12.2018 22:50, Felipe Gasper wrote: >> Actually, per the latest CA/Browser forum guidelines, subject.CN is not only optional but ?discouraged?. 
>> >> -FG >> > > > -- > openssl-users mailing list > To unsubscribe: https://mta.openssl.org/mailman/listinfo/openssl-users From openssl-users at dukhovni.org Mon Dec 24 15:10:54 2018 From: openssl-users at dukhovni.org (Viktor Dukhovni) Date: Mon, 24 Dec 2018 10:10:54 -0500 Subject: [openssl-users] Authentication over ECDHE In-Reply-To: <712647c2-1674-ca81-0efb-84ab41c86eac@freshlions.de> References: <712647c2-1674-ca81-0efb-84ab41c86eac@freshlions.de> Message-ID: <20181224151053.GH79754@straasha.imrryr.org> On Mon, Dec 24, 2018 at 12:51:17PM +0100, Christian wrote: > This sounds like a typical RSA scenario, however I also want to have > forward security, which requires me to use something with temporary keys > only - I'm having ECDHE in mind for that, ECDHE-RSA-AES128-GCM-SHA256 in > particular. However, after some research I found out that the "RSA" in > that cipher only refers to the temporary keys that are being generated > for this connection, and thus authentication would have to be issued on > top of TLS, not within the means of TLS itself. Your research has led you astray. The ECDHE-RSA-AES128-GCM-SHA25 ciphersuiteo *is* RSA authenticated and offers forward secrecy, the same is true also of its 256-bit twin: $ openssl ciphers -v kECDHE+AESGCM+aRSA | sed 's/ */ /g' ECDHE-RSA-AES256-GCM-SHA384 TLSv1.2 Kx=ECDH Au=RSA Enc=AESGCM(256) Mac=AEAD ECDHE-RSA-AES128-GCM-SHA256 TLSv1.2 Kx=ECDH Au=RSA Enc=AESGCM(128) Mac=AEAD they are both quite strong, use 128-bit to optimize for speed or 256-bit against hypothetical attacks on 128-bit AES that don't break AES-256. These ciphers are for TLS 1.2. With OpenSSL 1.1.1 you might also consider TLS 1.3 ciphers, where the public algorithm is negotiated separately, TLS_AES_256_GCM_SHA384 TLSv1.3 Kx=any Au=any Enc=AESGCM(256) Mac=AEAD TLS_CHACHA20_POLY1305_SHA256 TLSv1.3 Kx=any Au=any Enc=CHACHA20/POLY1305(256) Mac=AEAD TLS_AES_128_GCM_SHA256 TLSv1.3 Kx=any Au=any Enc=AESGCM(128) Mac=AEAD and you could use Ed25519 certificates and/or X25519 key exchange. -- Viktor. From c.wehrmeyer at freshlions.de Mon Dec 24 15:25:54 2018 From: c.wehrmeyer at freshlions.de (Christian) Date: Mon, 24 Dec 2018 16:25:54 +0100 Subject: [openssl-users] Authentication over ECDHE In-Reply-To: <20181224151053.GH79754@straasha.imrryr.org> References: <712647c2-1674-ca81-0efb-84ab41c86eac@freshlions.de> <20181224151053.GH79754@straasha.imrryr.org> Message-ID: <05067d08-6ff2-8dce-9bd3-d0c98baf6b3b@freshlions.de> > Your research has led you astray. The ECDHE-RSA-AES128-GCM-SHA25 > ciphersuiteo *is* RSA authenticated and offers forward secrecy, Then how would I load my static RSA keys into my SSL_CTX? Simply by using SSL_CTX_use_PrivateKey_file on client and server? As far as I understand the mechanism that would only enable encryption, but not decryption. > they are both quite strong, use 128-bit to optimize for speed or > 256-bit against hypothetical attacks on 128-bit AES that don't break > AES-256. Actually, I've been told that AES256 is weaker than AES128 in theory, and have been discouraged to use it. > and you could use Ed25519 certificates and/or X25519 key exchange. I said I'd like to avoid using any certificates. I don't see the point of them if I'm going to use static keys anyways. And certificates, from my limited understanding, only establish external trust anyways. I want direct trust. 
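For what it is worth, the "direct trust" setup being discussed here usually reduces to a few SSL_CTX calls. A minimal, hypothetical sketch of the client side, where this host presents its own RSA key and self-signed certificate and the server's self-signed certificate is the only trust anchor loaded (all file names are placeholders):

    #include <openssl/ssl.h>

    /* Client-side context for mutual TLS with pinned self-signed certificates.
     * "client-cert.pem"/"client-key.pem" identify this host; "server-cert.pem"
     * is the peer's self-signed certificate and the only trust anchor loaded. */
    SSL_CTX *make_client_ctx(void)
    {
        SSL_CTX *ctx = SSL_CTX_new(TLS_client_method());

        if (ctx == NULL)
            return NULL;
        if (SSL_CTX_use_certificate_chain_file(ctx, "client-cert.pem") != 1
                || SSL_CTX_use_PrivateKey_file(ctx, "client-key.pem",
                                               SSL_FILETYPE_PEM) != 1
                || SSL_CTX_load_verify_locations(ctx, "server-cert.pem", NULL) != 1) {
            SSL_CTX_free(ctx);
            return NULL;
        }
        SSL_CTX_set_verify(ctx, SSL_VERIFY_PEER, NULL);   /* fail if unverified */
        SSL_CTX_set_min_proto_version(ctx, TLS1_2_VERSION);
        return ctx;
    }

The server side is symmetric, with the client's self-signed certificate as its CAfile and SSL_VERIFY_PEER | SSL_VERIFY_FAIL_IF_NO_PEER_CERT so that connections without the expected client certificate are refused; the next message explains why a (self-signed) certificate is still needed as a container for the static key.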
From openssl-users at dukhovni.org Mon Dec 24 16:01:32 2018 From: openssl-users at dukhovni.org (Viktor Dukhovni) Date: Mon, 24 Dec 2018 11:01:32 -0500 Subject: [openssl-users] Authentication over ECDHE In-Reply-To: <05067d08-6ff2-8dce-9bd3-d0c98baf6b3b@freshlions.de> References: <712647c2-1674-ca81-0efb-84ab41c86eac@freshlions.de> <20181224151053.GH79754@straasha.imrryr.org> <05067d08-6ff2-8dce-9bd3-d0c98baf6b3b@freshlions.de> Message-ID: <20181224160132.GJ79754@straasha.imrryr.org> On Mon, Dec 24, 2018 at 04:25:54PM +0100, Christian wrote: > > Your research has led you astray. The ECDHE-RSA-AES128-GCM-SHA25 > > ciphersuiteo *is* RSA authenticated and offers forward secrecy, > > Then how would I load my static RSA keys into my SSL_CTX? Simply by > using SSL_CTX_use_PrivateKey_file on client and server? To avoid trusted CAs, you have to load both a private key *and* a self-signed certificate. While certificate-less TLS is in theory possible with RFC7250 bare public keys, in practice no libraries I know of support this. Also, your CA does not have to be a third-party CA, you can generate your trusted issuer CA, its private key can be "off-line", making recovery from server key compromise somewhat simpler, but with so few systems in scope the difference is minor. > As far as I understand the mechanism that would only enable encryption, > but not decryption. Again, that's not the case, but you still need a certificate to go with that key. In the simplest case that certificate can be self-signed, and would be the only one (or one of a few) trusted by the verifier (via suitable settings of CAfile and CApath). > > they are both quite strong, use 128-bit to optimize for speed or > > 256-bit against hypothetical attacks on 128-bit AES that don't break > > AES-256. > > Actually, I've been told that AES256 is weaker than AES128 in theory, > and have been discouraged to use it. There are some concerns about the key schedule, but they've not panned out to attacks that make AES256 weaker than AES128. > > and you could use Ed25519 certificates and/or X25519 key exchange. > > I said I'd like to avoid using any certificates. I don't see the point > of them if I'm going to use static keys anyways. You're going to have (self-signed) certificates. They're essentially slightly bloated key containers. > And certificates, from my limited understanding, only establish external > trust anyways. I want direct trust. Certificates do not preclude direct trust. Self-signed certificates do not entail any outside parties. A suitable self-signed certificate and private key can be generated via: $ temp=$(mktemp chain.XXXXXXX) $ openssl req -new -newkey rsa:2048 -nodes -keyout /dev/stdout \ -x509 -subj / -days 36524 >> $temp && mv $temp self-chain.pem I think that password protection for the keys is a waste of time, but if you can use it if you wish. $ temp=$(mktemp chain.XXXXXXX) $ openssl genrsa -aes128 -out $temp 2048 $ openssl req -new -key $temp -x509 -subj / -days 36524 >> $temp && mv $temp self-chain.pem -- Viktor. From matt at openssl.org Mon Dec 24 16:29:49 2018 From: matt at openssl.org (Matt Caswell) Date: Mon, 24 Dec 2018 16:29:49 +0000 Subject: [openssl-users] Authentication over ECDHE In-Reply-To: <712647c2-1674-ca81-0efb-84ab41c86eac@freshlions.de> References: <712647c2-1674-ca81-0efb-84ab41c86eac@freshlions.de> Message-ID: <9823d3dc-f8b9-0724-f1d5-156bcbe47d95@openssl.org> On 24/12/2018 11:51, Christian wrote: > Hello, people. 
I'm a beginner with OpenSSL and with cryptography in general, and > have been wondering how to best implement an upcoming system. > > I apologise in advance for any grammar or orthography mistakes, as English isn't > my native language. > > We have a local network with a databse in which we do most of our processing, > and a public machine that runs a webserver. Periodically we have to connect to > that server and query new data to process it. The connection to that server is > not necessarily trusted. > > The problem is that our webserver is slow and clunky and generally just issues > another process to deal with any request, which is unnecessary and slow. We want > to streamline that process by having a local program run on the server sending a > set of predefined queries over a predefined protocol, and then just sent that > data back to the client. However, only a select few machines are supposed to be > able to get any data from the server, like, those who have a certain private > key. If a client can sign a ping that can be decrypted with the client side > public key, and if the server can sign a ping that can be decrypted with the > servers public key, then both sides are authenticated, and - from my limited > understand - a MITM scenario is foiled (unless the MITM manages to steal either > private key, which is why I also want to have password protection for the key. > I'm away that putting the key into a program compromises the security of the key > if an attacker manages to gain access to the server, but in this case it's just > supposed to give us some time to stop the programs, close all holes, and > generate new keys). > > This sounds like a typical RSA scenario, however I also want to have forward > security, which requires me to use something with temporary keys only - I'm > having ECDHE in mind for that, ECDHE-RSA-AES128-GCM-SHA256 in particular. > However, after some research I found out that the "RSA" in that cipher only > refers to the temporary keys that are being generated for this connection, and > thus authentication would have to be issued on top of TLS, not within the means > of TLS itself. > > And last, but not least I've read about an attack a little while back how some > DH parameters (usually those with a size of 1024 bits) have become stale. If I > want to have extra security, > > Speed isn't an incredible huge problem, as there will always be just one, at > most two connections running with the server. As such its design can be > incredible simple, and the connection can be more secure in terms of > cryptography than default (4096 RSA keys and 2048 DH params wouldn't be an > issue). I expect the bulk of the runtime to be spent on the database server side > of things anyway. > > I don't want to use certificates. Either a client/server has the necessary > private keys to sign data, or the connection is simply refused. I also don't > want to use any password, because that requires to share a secret over a to this > moment still unverified channel. > > My question is thusly how to implement authentication over ECDHE in the best > way. My searches for "openssl c sign data with private key" doesn't yield any > usable results, which suggests that there is some sort of misunderstanding with > the concept of "signing ping/pong with respective private keys". Are there any > functions or further documentation to be of help here? Please keep in mind that > all of this has been Greek to me until last Friday, and that I'm by no way a > cryptography expert. 
> > Thank you for your time and effort in advance. How about using PSKs? That way you completely avoid the need for a certificate. Authentication is implied since both peers must have access to the PSK for the connection to succeed. ECDHE can be combined with the PSK to create a temporary key for the connection, thus giving you forward secrecy, e.g. using a ciphersuite such as ECDHE-PSK-AES128-CBC-SHA256. Matt From openssl-users at dukhovni.org Mon Dec 24 16:43:12 2018 From: openssl-users at dukhovni.org (Viktor Dukhovni) Date: Mon, 24 Dec 2018 11:43:12 -0500 Subject: [openssl-users] Authentication over ECDHE In-Reply-To: <9823d3dc-f8b9-0724-f1d5-156bcbe47d95@openssl.org> References: <712647c2-1674-ca81-0efb-84ab41c86eac@freshlions.de> <9823d3dc-f8b9-0724-f1d5-156bcbe47d95@openssl.org> Message-ID: <20181224164312.GK79754@straasha.imrryr.org> On Mon, Dec 24, 2018 at 04:29:49PM +0000, Matt Caswell wrote: > How about using PSKs? That way you completely avoid the need for a certificate. > Authentication is implied since both peers must have access to the PSK for the > connection to succeed. ECDHE can be combined with the PSK to create a temporary > key for the connection, thus giving you forward secrecy, e.g. using a > ciphersuite such as ECDHE-PSK-AES128-CBC-SHA256. This requires more complex application code on the client and server, so I would not recommend it. And IIRC there may be some complications with getting PSKs to work across both TLS 1.2 and TLS 1.3??? -- Viktor. From mcr at sandelman.ca Mon Dec 24 17:49:38 2018 From: mcr at sandelman.ca (Michael Richardson) Date: Mon, 24 Dec 2018 12:49:38 -0500 Subject: [openssl-users] moving from PKCS7 to CMS functions Message-ID: <22462.1545673778@localhost> I am implementing a module for ruby-openssl to add CMS API access to ruby. (Once I figure it out, I will likely look at how to refactor PKCS7 API code, but I don't care about that yet) PKCS7 has the PKCS7_SIGNER_INFO object, and it is declared in pkcs7.h with DECLARE_ASN1_FUNCTIONS(). CMS has the CMS_SignerInfo object, but it is not declared in cms.h, and so has no _alloc/_free API. Is this an oversight? Or is there a some difference in the API which I have yet to understand which would mean that CMS_SignerInfo objects would never be allocated/freed. (I found it surprising that DECLARE_ASN1_FUNCTIONS() was in the X509_dup.pod file) -- ] Never tell me the odds! | ipv6 mesh networks [ ] Michael Richardson, Sandelman Software Works | IoT architect [ ] mcr at sandelman.ca http://www.sandelman.ca/ | ruby on rails [ -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 487 bytes Desc: not available URL: From levitte at openssl.org Mon Dec 24 18:08:41 2018 From: levitte at openssl.org (Richard Levitte) Date: Mon, 24 Dec 2018 19:08:41 +0100 (CET) Subject: [openssl-users] PerlASM for x64 In-Reply-To: <3fc4d543-d71a-8c22-566a-d902c4f7da03@gmail.com> References: <20181223.210815.534126485389681186.levitte@openssl.org> <3fc4d543-d71a-8c22-566a-d902c4f7da03@gmail.com> Message-ID: <20181224.190841.1801525626083809360.levitte@openssl.org> In message <3fc4d543-d71a-8c22-566a-d902c4f7da03 at gmail.com> on Mon, 24 Dec 2018 13:17:51 +0100, Gisle Vanem said: > Richard Levitte wrote: > > >> I experimented some more. I figured the "auto" does not work. > >> But this works: > >> perl crypto\aes\asm\aesni-x86_64.pl nasm > tmp-file.s > >> diff tmp-file.s crypto\aes\aesni-x86_64.asm > >> > >> No diffs. 
> >> > >> Why does the the generation of .asm-files be so damn hard to > >> figure out? Some cmd-line help to show what "auto" does would > >> be nice. > > The "auto" flavor takes note of the output file extension. .asm vs .s > > in this case. > > Thank, but it was a typo in my 1st email. The correct command was > with a redirect: > set ASM=nasm > f:\util\StrawberryPerl\perl\bin\perl crypto\aes\asm\aesni-x86_64.pl > auto > tmp-file.s That isn't a correct use of the script. All of the assembler perl scripts expect the output file as last argument, and the x86_64 ones do look at the output file and determines that if the extension is '.asm', nasm assembler is expected, otherwise you will get gas assembler. So if you redirect, the result is, mildly put, undefined. Thank you, though... it is time the assembler stuff gets documented, and I think I'm in a fairly good position to do so. I will not promise that it will happen fast, but it is in my backlog. Cheers, Richard -- Richard Levitte levitte at openssl.org OpenSSL Project http://www.openssl.org/~levitte/ From rsalz at akamai.com Mon Dec 24 19:28:43 2018 From: rsalz at akamai.com (Salz, Rich) Date: Mon, 24 Dec 2018 19:28:43 +0000 Subject: [openssl-users] OpenSSL v1.1.1 static library size reduction In-Reply-To: References: Message-ID: If all you need is RSA then you will probably find it easier to write a makefile of your own. You will have to do multiple builds to get all the missing pieces, such as the BN facility, the memory allocation, the error stack, etc. -------------- next part -------------- An HTML attachment was scrubbed... URL: From rsalz at akamai.com Mon Dec 24 19:44:15 2018 From: rsalz at akamai.com (Salz, Rich) Date: Mon, 24 Dec 2018 19:44:15 +0000 Subject: [openssl-users] Authentication over ECDHE In-Reply-To: <20181224160132.GJ79754@straasha.imrryr.org> References: <712647c2-1674-ca81-0efb-84ab41c86eac@freshlions.de> <20181224151053.GH79754@straasha.imrryr.org> <05067d08-6ff2-8dce-9bd3-d0c98baf6b3b@freshlions.de> <20181224160132.GJ79754@straasha.imrryr.org> Message-ID: <05089988-07F4-488C-B9F8-BE1285247895@akamai.com> > While certificate-less TLS is in theory possible with RFC7250 bare public keys Pre-shared keys (PSK) don't require certs, maybe that meets the need. A thing to know about PSK is that each side is fully trusted, and if one side gets the key stolen, then the thief can pretend to be either side. From openssl-users at dukhovni.org Mon Dec 24 19:52:38 2018 From: openssl-users at dukhovni.org (Viktor Dukhovni) Date: Mon, 24 Dec 2018 14:52:38 -0500 Subject: [openssl-users] Authentication over ECDHE In-Reply-To: <05089988-07F4-488C-B9F8-BE1285247895@akamai.com> References: <712647c2-1674-ca81-0efb-84ab41c86eac@freshlions.de> <20181224151053.GH79754@straasha.imrryr.org> <05067d08-6ff2-8dce-9bd3-d0c98baf6b3b@freshlions.de> <20181224160132.GJ79754@straasha.imrryr.org> <05089988-07F4-488C-B9F8-BE1285247895@akamai.com> Message-ID: <8847FD16-9999-4226-A183-276970706794@dukhovni.org> > On Dec 24, 2018, at 2:44 PM, Salz, Rich via openssl-users wrote: > > Pre-shared keys (PSK) don't require certs, maybe that meets the need. A thing to know about PSK is that each side is fully trusted, and if one side gets the key stolen, then the thief can pretend to be either side. PSK only makes sense for svelte SSL libraries that either run on devices with too little CPU to do public key crypto, or don't want to the pay the code footprint of X.509 certificate processing. 
For OpenSSL on a typical computer, PSK deployment and application support is more complex than just going with self-signed certs. The OP is IMHO better off avoiding PSKs. -- Viktor. From aerowolf at gmail.com Mon Dec 24 22:51:35 2018 From: aerowolf at gmail.com (Kyle Hamilton) Date: Mon, 24 Dec 2018 16:51:35 -0600 Subject: [openssl-users] Subject CN and SANs In-Reply-To: References: <5C1EACBF.6090409@mathemainzel.info> <6BBE1AEB-B043-4705-9C29-9FA22F72CDE1@akamai.com> <3571.1545578501@localhost> <7B430C95-4832-4BA1-80F2-AD17AAC8C5A5@dukhovni.org> Message-ID: In order for an Issuer to exist in PKIX, it must be the Subject of another Certificate (or of a trust anchor). If a certificate identifies an Issuer, then the certificate cannot contain an empty sequence of RDNs in the Subject and still be conformant to PKIX. This is because the Subject of the issuing authority certificate needs to be copied (in some way which is compatible with RFC4518's string preparation rules) into the Issuer of the certificate that it issues. This is implied by the path validation algorithm, and stated explicitly in the last paragraph of RFC5280 section 4.1.2.4 which also refers to RFC5280 section 7.1. However, PKIX is just a profile of X.509, and alternative approaches to identifying the Issuer of a certificate exist. (For self-signed certificates, Issuer can be an empty sequence of RDNs, but I like to think of that as a degenerate case that is also explicitly not conformant to PKIX [RFC5280 section 4.1.2.4 last paragraph]. More importantly, the IssuerKeyIdentifier can also be set, and matched with the SubjectKeyIdentifier of another certificate. This use is contemplated in RFC 5280 section 4.2.1.2.) (Note, though, that RFC 5914, "Trust Anchor Format", defines certPath :== CertPathControls OPTIONAL. In this case, *only* IssuerKeyIdentifier/SubjectKeyIdentifier matching can work, and Issuer otherwise apparently should be blank because the Anchor has no taName/Subject. You have to love the inconsistency of the PKIX standards, yes?) I haven't ever seen anything claiming that OpenSSL is expected to be completely and invariably conformant to the PKIX profile. It's possible that it could be implied (if SSL or TLS specify that the certificates in their Certificate records either "SHALL" or "MUST" be PKIX-profiled -- which does not appear to be the case in RFC 8846, which defines TLS 1.3), but even then I'm not sure it would be appropriate to restrict its utility in the manner of preventing newer versions from interoperating with certificates issued by or which worked with older versions that permitted such degenerate cases. Merry Christmas (or happy holidays!), -Kyle H On Sun, Dec 23, 2018 at 5:33 PM Viktor Dukhovni wrote: > > > > > On Dec 23, 2018, at 6:01 PM, Kyle Hamilton wrote: > > > > You're right, I typoed. SubjectDN is non-optional. But it can, as > > you mentioned, be an empty sequence. > > > > But for PKIX purposes, it can't be empty if it's an Issuer (because > > IssuerDN can't be empty in the certificates that it issues). > > That's an odd use of "it", since the issuerDN while also a DN is not > a subjectDN. The "it" that is the subjectDN is sometimes legitimately > empty. The other "it" that is the issuerDN is supposed to always be > non-empty, but some self-signed certificates violate that requirement > with apparent impunity, e.g. nothing in OpenSSL requires a non-empty > issuer DN in an end-entity self-signed certificate, if it breaks, the > constraint would be at the application layer. > > -- > Viktor. 
> > -- > openssl-users mailing list > To unsubscribe: https://mta.openssl.org/mailman/listinfo/openssl-users -------------- next part -------------- An HTML attachment was scrubbed... URL: From openssl-users at dukhovni.org Mon Dec 24 23:16:17 2018 From: openssl-users at dukhovni.org (Viktor Dukhovni) Date: Mon, 24 Dec 2018 18:16:17 -0500 Subject: [openssl-users] Subject CN and SANs In-Reply-To: References: <5C1EACBF.6090409@mathemainzel.info> <6BBE1AEB-B043-4705-9C29-9FA22F72CDE1@akamai.com> <3571.1545578501@localhost> <7B430C95-4832-4BA1-80F2-AD17AAC8C5A5@dukhovni.org> Message-ID: > On Dec 24, 2018, at 5:51 PM, Kyle Hamilton wrote: > If a certificate identifies an Issuer, then the certificate cannot contain an empty sequence of RDNs in the Subject and still be conformant to PKIX. Yes, CA certificates need to have a non-empty subject name if they're to be used for signing subordinate certificates. End-entity certificates do not need to have a non-empty subject name, and some do not. The usual public CAs have on the whole not yet stopped populating CN values into the subject DN of subordinate EE certificates, but when the DNS name in question is longer than ~64 bytes, they have no choice but to omit the CN. Undoubtedly a search through the CT logs would find some examples. -- Viktor. From matt at openssl.org Tue Dec 25 01:02:48 2018 From: matt at openssl.org (Matt Caswell) Date: Tue, 25 Dec 2018 01:02:48 +0000 Subject: [openssl-users] Authentication over ECDHE In-Reply-To: <8847FD16-9999-4226-A183-276970706794@dukhovni.org> References: <712647c2-1674-ca81-0efb-84ab41c86eac@freshlions.de> <20181224151053.GH79754@straasha.imrryr.org> <05067d08-6ff2-8dce-9bd3-d0c98baf6b3b@freshlions.de> <20181224160132.GJ79754@straasha.imrryr.org> <05089988-07F4-488C-B9F8-BE1285247895@akamai.com> <8847FD16-9999-4226-A183-276970706794@dukhovni.org> Message-ID: <97a8372f-f023-59ad-6535-1626431d532d@openssl.org> On 24/12/2018 19:52, Viktor Dukhovni wrote: >> On Dec 24, 2018, at 2:44 PM, Salz, Rich via openssl-users wrote: >> >> Pre-shared keys (PSK) don't require certs, maybe that meets the need. A thing to know about PSK is that each side is fully trusted, and if one side gets the key stolen, then the thief can pretend to be either side. > > PSK only makes sense for svelte SSL libraries that either run > on devices with too little CPU to do public key crypto, or don't > want to the pay the code footprint of X.509 certificate processing. > > For OpenSSL on a typical computer, PSK deployment and application > support is more complex than just going with self-signed certs. > > The OP is IMHO better off avoiding PSKs. > I disagree with this assessment of what PSKs are good for. As with any technology choice there are trade offs that have to be made. PSKs are actually *simple* to deploy (far simpler than X.509 based certificates IMO) and are perfectly suitable for all sorts of environments - it doesn't just have to be "devices with too little CPU to do public key crypto". The problem with PSKs is that they do not scale well. If you have lots of endpoints then the cost of deploying and managing keys across all of them becomes too high too quickly. By comparison X.509 certificate based authentication is complex and costly to deploy and manage. Such a solution does scale well though. If you've got a small number of endpoints then PSKs may be a suitable choice. If you've got lots of endpoints then, probably, an X.509 certificate based solution is the way to go. 
The OP talks about a "select few machines" being able to access a database server. This sounds precisely the sort of environment where PSKs would work well. On 24/12/2018 16:43, Viktor Dukhovni wrote: > This requires more complex application code on the client and server, > so I would not recommend it. Not really. The application code for PSKs is quite straight forward in most cases. > And IIRC there may be some complications > with getting PSKs to work across both TLS 1.2 and TLS 1.3??? Yes, there are differences between PSKs in TLSv1.2 and TLSv1.3, so if supporting both of those is a requirement then there are additional things to bear in mind. In TLSv1.2: 1) A server (optionally) provides an identity hint to the client 2) The client looks up the identity to be used and the associated PSK value (possibly using the hint provided by the server to select the right identity) 3) The client sends the identity to the server 4) The server receives the identity from the client and finds the PSK associated with it 5) Both sides derive keys for the session based on the PSK (possibly additionally using (EC)DHE to add forward secrecy) In TLSv1.3 there is no identity hint - the client just finds the identity without the use of a hint. The identity has an associated key (as in TLSv1.2) but it *also* has an associated hash algorithm. If no hash algorithm is explicitly specified then SHA256 is assumed by default. OpenSSL 1.1.0 (and earlier) provided an API for TLSv1.2 PSKs. This continues to work in OpenSSL 1.1.1 and it can be used in both TLSv1.2 *and* TLSv1.3. However the callbacks will get called with a NULL identity hint on the client side. Since this older API was not designed with TLSv1.3 in mind there is no way to specify the hash to be used - so if you use this older API then SHA256 is always in use (and a SHA256 based TLSv1.3 ciphersuite must be available). OpenSSL 1.1.1 provides additional APIs for doing TLSv1.3 PSKs that may be used instead of (or as well as) the TLSv1.2 PSK API. This API *does* allow you to specify the hash to be used, but does not work in TLSv1.2. So, if you're happy with the SHA256 default, then you can just use the older PSK API and it will work quite happily in both TLSv1.2 and TLSv1.3. If you want more control in TLSv1.3 then you might need to use a combination of the old API (in TLSv1.2) and the new API (in TLSv1.3). Matt From michel.sales at free.fr Tue Dec 25 20:07:59 2018 From: michel.sales at free.fr (Michel) Date: Tue, 25 Dec 2018 21:07:59 +0100 Subject: [openssl-users] Authentication over ECDHE In-Reply-To: <97a8372f-f023-59ad-6535-1626431d532d@openssl.org> References: <712647c2-1674-ca81-0efb-84ab41c86eac@freshlions.de> <20181224151053.GH79754@straasha.imrryr.org> <05067d08-6ff2-8dce-9bd3-d0c98baf6b3b@freshlions.de> <20181224160132.GJ79754@straasha.imrryr.org> <05089988-07F4-488C-B9F8-BE1285247895@akamai.com> <8847FD16-9999-4226-A183-276970706794@dukhovni.org> <97a8372f-f023-59ad-6535-1626431d532d@openssl.org> Message-ID: <000f01d49c8d$8d3d8cd0$a7b8a670$@sales@free.fr> Thanks Matt for the reminder about the use of PSK in TLS 1.3. This leads me to this other question : Can someone please clarify what is the future of SRP starting with TLS 1.3 ? From prateep.kumar at broadcom.com Wed Dec 26 04:15:38 2018 From: prateep.kumar at broadcom.com (Prateep Kumar) Date: Wed, 26 Dec 2018 09:45:38 +0530 Subject: [openssl-users] Delay in converting CRL to binary data In-Reply-To: References: Message-ID: Hello, Please let me know if we have any update on this. 
With Regards, Prateep On Thu, Dec 13, 2018 at 2:26 PM Prateep Kumar wrote: > Hello, > > We are converting a *CRL* (Size *3.687 MB*) to binary data using > *X509_CRL_get_REVOKED()* and it is taking *167.977* seconds to process > the same. > > Please let us know if this is an expected behavior or something should be > done to improve the above observation. > > With Regards, > Prateep > -------------- next part -------------- An HTML attachment was scrubbed... URL: From matt at openssl.org Wed Dec 26 23:59:44 2018 From: matt at openssl.org (Matt Caswell) Date: Wed, 26 Dec 2018 23:59:44 +0000 Subject: [openssl-users] Authentication over ECDHE In-Reply-To: <000f01d49c8d$8d3d8cd0$a7b8a670$@sales@free.fr> References: <712647c2-1674-ca81-0efb-84ab41c86eac@freshlions.de> <20181224151053.GH79754@straasha.imrryr.org> <05067d08-6ff2-8dce-9bd3-d0c98baf6b3b@freshlions.de> <20181224160132.GJ79754@straasha.imrryr.org> <05089988-07F4-488C-B9F8-BE1285247895@akamai.com> <8847FD16-9999-4226-A183-276970706794@dukhovni.org> <97a8372f-f023-59ad-6535-1626431d532d@openssl.org> <000f01d49c8d$8d3d8cd0$a7b8a670$@sales@free.fr> Message-ID: On 25/12/2018 20:07, Michel wrote: > Thanks Matt for the reminder about the use of PSK in TLS 1.3. > This leads me to this other question : > Can someone please clarify what is the future of SRP starting with TLS 1.3 ? SRP is not currently supported in OpenSSL with TLSv1.3. AFAIK there is no standard available to define it. We'd need to see such a standard first before we could integrate support for it. Matt From prithiraj.das at gmail.com Thu Dec 27 06:32:43 2018 From: prithiraj.das at gmail.com (prithiraj das) Date: Thu, 27 Dec 2018 06:32:43 +0000 Subject: [openssl-users] OpenSSL v1.1.1 static library size reduction In-Reply-To: References: Message-ID: Please find the above previous mail. On Mon, 24 Dec 2018 at 19:29, Salz, Rich via openssl-users < openssl-users at openssl.org> wrote: > If all you need is RSA then you will probably find it easier to write a > makefile of your own. You will have to do multiple builds to get all the > missing pieces, such as the BN facility, the memory allocation, the error > stack, etc. > -- > openssl-users mailing list > To unsubscribe: https://mta.openssl.org/mailman/listinfo/openssl-users > -------------- next part -------------- An HTML attachment was scrubbed... URL: From beldmit at gmail.com Thu Dec 27 08:37:24 2018 From: beldmit at gmail.com (Dmitry Belyavsky) Date: Thu, 27 Dec 2018 11:37:24 +0300 Subject: [openssl-users] tls1_change_cipher_state Message-ID: Hello, Am I right supposing that local variables tmp1, tmp2, iv1, and iv2 are unused in this function? -- SY, Dmitry Belyavsky -------------- next part -------------- An HTML attachment was scrubbed... URL: From devadas.kk at gmail.com Thu Dec 27 08:54:45 2018 From: devadas.kk at gmail.com (Devadas kk) Date: Thu, 27 Dec 2018 14:24:45 +0530 Subject: [openssl-users] OpenSSL v1.1.1 static library size reduction In-Reply-To: References: Message-ID: This sounds the simple and optimal approach for the problem stated. On Tue, 25 Dec 2018, 12:59 am Salz, Rich via openssl-users < openssl-users at openssl.org wrote: > If all you need is RSA then you will probably find it easier to write a > makefile of your own. You will have to do multiple builds to get all the > missing pieces, such as the BN facility, the memory allocation, the error > stack, etc. 
> -- > openssl-users mailing list > To unsubscribe: https://mta.openssl.org/mailman/listinfo/openssl-users > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jb-openssl at wisemo.com Thu Dec 27 09:12:34 2018 From: jb-openssl at wisemo.com (Jakob Bohm) Date: Thu, 27 Dec 2018 10:12:34 +0100 Subject: [openssl-users] PerlASM for x64 In-Reply-To: <20181224.190841.1801525626083809360.levitte@openssl.org> References: <20181223.210815.534126485389681186.levitte@openssl.org> <3fc4d543-d71a-8c22-566a-d902c4f7da03@gmail.com> <20181224.190841.1801525626083809360.levitte@openssl.org> Message-ID: <98f571de-f47e-6259-f1ac-792ebed3ed54@wisemo.com> On 24/12/2018 19:08, Richard Levitte wrote: > In message <3fc4d543-d71a-8c22-566a-d902c4f7da03 at gmail.com> on Mon, 24 Dec 2018 13:17:51 +0100, Gisle Vanem said: > >> Richard Levitte wrote: >> >>>> I experimented some more. I figured the "auto" does not work. >>>> But this works: >>>> perl crypto\aes\asm\aesni-x86_64.pl nasm > tmp-file.s >>>> diff tmp-file.s crypto\aes\aesni-x86_64.asm >>>> >>>> No diffs. >>>> >>>> Why does the the generation of .asm-files be so damn hard to >>>> figure out? Some cmd-line help to show what "auto" does would >>>> be nice. >>> The "auto" flavor takes note of the output file extension. .asm vs .s >>> in this case. >> Thank, but it was a typo in my 1st email. The correct command was >> with a redirect: >> set ASM=nasm >> f:\util\StrawberryPerl\perl\bin\perl crypto\aes\asm\aesni-x86_64.pl >> auto > tmp-file.s > That isn't a correct use of the script. All of the assembler perl > scripts expect the output file as last argument, and the x86_64 ones > do look at the output file and determines that if the extension is > '.asm', nasm assembler is expected, otherwise you will get gas > assembler. So if you redirect, the result is, mildly put, undefined. > > Thank you, though... it is time the assembler stuff gets documented, > and I think I'm in a fairly good position to do so. I will not > promise that it will happen fast, but it is in my backlog. As a trivial (and easily audited first patch) perhaps make the common code error out with a usage message to STDERR if the command line makes no sense (no output file, wrong argument count, auto with unrecognized file extension).? Ideally this would be in the common perl module(s), not in individual assembler files. Remember that keeping every patch easily audited by the wider community is essential to the trustworthiness of OpenSSL, the great reformatting a while back was a major mistake in this regard. Enjoy Jakob -- Jakob Bohm, CIO, Partner, WiseMo A/S. https://www.wisemo.com Transformervej 29, 2860 S?borg, Denmark. Direct +45 31 13 16 10 This public discussion message is non-binding and may contain errors. WiseMo - Remote Service Management for PCs, Phones and Embedded From rsalz at akamai.com Thu Dec 27 14:36:06 2018 From: rsalz at akamai.com (Salz, Rich) Date: Thu, 27 Dec 2018 14:36:06 +0000 Subject: [openssl-users] OpenSSL v1.1.1 static library size reduction In-Reply-To: References: Message-ID: <237B1145-394A-40A4-9AE7-73B9028670BE@akamai.com> * Please find the above previous mail. I am not sure what this means. I guess you are referring to earlier email in the thread. I gave you my suggestion, good luck. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From rsalz at akamai.com Thu Dec 27 14:40:31 2018 From: rsalz at akamai.com (Salz, Rich) Date: Thu, 27 Dec 2018 14:40:31 +0000 Subject: [openssl-users] Delay in converting CRL to binary data In-Reply-To: References: Message-ID: * Please let me know if we have any update on this. This is a volunteer effort. :) My *GUESS* is that the CRL data isn?t sorted, and it?s doing a linear search. You should profile the code to find out where, exactly, all the time is being spent. -------------- next part -------------- An HTML attachment was scrubbed... URL: From ckashiquekvk at gmail.com Thu Dec 27 15:07:18 2018 From: ckashiquekvk at gmail.com (ASHIQUE CK) Date: Thu, 27 Dec 2018 20:37:18 +0530 Subject: [openssl-users] Openssl async support In-Reply-To: References: Message-ID: Hi all, Thanks for the earlier reply. But still Iam facing issue regarding the asynchronous job operation. I have implemented asynchronous job operation partially. I am now getting requests asynchronously ie. getting the next request after calling ASYNC_pause_job from the first request. But I am unable to resume the paused jobs after job completion. Test setup consists of a nginx server and three SSL client apps. I have got the first 16kb processing request (AES-GCM encryption/decryption) from client1 and have submitted the request to the engine and done ASYNC_pause_job, so client1 entered into waiting state. But when we run the client2 app, the first job went into ASYNC_FINISH state before job completion. Similarly, when we run the client3 app, the second job went into ASYNC_FINISH state. Can you help regarding this? On Wed, Dec 19, 2018 at 5:33 PM ASHIQUE CK wrote: > Gentle reminder > > On Tue, Dec 18, 2018 at 4:06 PM ASHIQUE CK wrote: > >> Hi all, >> >> I truly understand that everyone might be busy with your work and didn't >> found time to reply. That's okay, but incase you have accidendly forgot to >> reply, please accept this as a gentle reminder. >> >> >> >> >> >> On Mon, Dec 17, 2018 at 6:11 PM ASHIQUE CK >> wrote: >> >>> Hi all, >>> >>> I have some queries regarding OpenSSL async operation. >>> >>> Current setup >>> ------------- >>> I have one* OpenSSL dynamic engine (with RSA and AES-GCM support) *and >>> linked it with *Nginx* server. Multiple *WGET* commands on the client >>> side. >>> >>> Current issue >>> ------------- >>> Since OpenSSL *do_cipher call *(the function in which actual AES-GCM >>> encryption/decryption happening) comes from one client at a time which is >>> reducing file downloading performance. So we need an *asynchronous >>> operation in OpenSSL* ie. we need multiple do_cipher calls at the same >>> time from which we should submit requests to HW without affecting the >>> incoming requests and should wait for HW output. >>> >>> Queries >>> -------- >>> 1) Is there is any other scheme for multiple do_cipher calls at a >>> time?. >>> 2) Any method to enable asynchronous call from OpenSSL? >>> >>> Versions >>> ------------- >>> Openssl - 1.1.0h >>> Nginx1.11.10 >>> Wget 1.17.1 >>> >>> Kindly support me. Please inform me if any more inputs needed. Thanks >>> in advance. >>> >> -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From mcr at sandelman.ca Thu Dec 27 16:31:05 2018 From: mcr at sandelman.ca (Michael Richardson) Date: Thu, 27 Dec 2018 11:31:05 -0500 Subject: [openssl-users] openssl 1.1.1 manuals Message-ID: <10477.1545928265@localhost> If manual pages for 1.1.1 aren't going to be posted/generated: could https://www.openssl.org/docs/man1.1.1 redirect to https://www.openssl.org/docs/man1.1.0? (I think that 1.1.1 ought to be generated) -- ] Never tell me the odds! | ipv6 mesh networks [ ] Michael Richardson, Sandelman Software Works | IoT architect [ ] mcr at sandelman.ca http://www.sandelman.ca/ | ruby on rails [ -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 487 bytes Desc: not available URL: From jeremy.farrell at oracle.com Thu Dec 27 16:45:50 2018 From: jeremy.farrell at oracle.com (J. J. Farrell) Date: Thu, 27 Dec 2018 16:45:50 +0000 Subject: [openssl-users] openssl 1.1.1 manuals In-Reply-To: <10477.1545928265@localhost> References: <10477.1545928265@localhost> Message-ID: man1.1.1 looks OK to me, the pages all appear to be there. What is missing is a link to 1.1.1 in the little Manpages list of links on the right hand side of the page On 27/12/2018 16:31, Michael Richardson wrote: > If manual pages for 1.1.1 aren't going to be posted/generated: > could https://www.openssl.org/docs/man1.1.1 > redirect to https://www.openssl.org/docs/man1.1.0? > > (I think that 1.1.1 ought to be generated) -- J. J. Farrell Not speaking for Oracle -------------- next part -------------- An HTML attachment was scrubbed... URL: From rsalz at akamai.com Thu Dec 27 16:48:56 2018 From: rsalz at akamai.com (Salz, Rich) Date: Thu, 27 Dec 2018 16:48:56 +0000 Subject: [openssl-users] openssl 1.1.1 manuals In-Reply-To: <10477.1545928265@localhost> References: <10477.1545928265@localhost> Message-ID: They are there, but the sidenav needs to be updated. ?On 12/27/18, 11:31 AM, "Michael Richardson" wrote: If manual pages for 1.1.1 aren't going to be posted/generated: could https://www.openssl.org/docs/man1.1.1 redirect to https://www.openssl.org/docs/man1.1.0? (I think that 1.1.1 ought to be generated) -- ] Never tell me the odds! | ipv6 mesh networks [ ] Michael Richardson, Sandelman Software Works | IoT architect [ ] mcr at sandelman.ca http://www.sandelman.ca/ | ruby on rails [ From levitte at openssl.org Thu Dec 27 16:51:16 2018 From: levitte at openssl.org (Richard Levitte) Date: Thu, 27 Dec 2018 17:51:16 +0100 (CET) Subject: [openssl-users] PerlASM for x64 In-Reply-To: <98f571de-f47e-6259-f1ac-792ebed3ed54@wisemo.com> References: <3fc4d543-d71a-8c22-566a-d902c4f7da03@gmail.com> <20181224.190841.1801525626083809360.levitte@openssl.org> <98f571de-f47e-6259-f1ac-792ebed3ed54@wisemo.com> Message-ID: <20181227.175116.1428569202187490753.levitte@openssl.org> In message <98f571de-f47e-6259-f1ac-792ebed3ed54 at wisemo.com> on Thu, 27 Dec 2018 10:12:34 +0100, Jakob Bohm said: > On 24/12/2018 19:08, Richard Levitte wrote: > > In message <3fc4d543-d71a-8c22-566a-d902c4f7da03 at gmail.com> on Mon, 24 > > Dec 2018 13:17:51 +0100, Gisle Vanem said: > > > >> Richard Levitte wrote: > >> > >>>> I experimented some more. I figured the "auto" does not work. > >>>> But this works: > >>>> perl crypto\aes\asm\aesni-x86_64.pl nasm > tmp-file.s > >>>> diff tmp-file.s crypto\aes\aesni-x86_64.asm > >>>> > >>>> No diffs. > >>>> > >>>> Why does the the generation of .asm-files be so damn hard to > >>>> figure out? 
Some cmd-line help to show what "auto" does would > >>>> be nice. > >>> The "auto" flavor takes note of the output file extension. .asm vs .s > >>> in this case. > >> Thank, but it was a typo in my 1st email. The correct command was > >> with a redirect: > >> set ASM=nasm > >> f:\util\StrawberryPerl\perl\bin\perl crypto\aes\asm\aesni-x86_64.pl > >> auto > tmp-file.s > > That isn't a correct use of the script. All of the assembler perl > > scripts expect the output file as last argument, and the x86_64 ones > > do look at the output file and determines that if the extension is > > '.asm', nasm assembler is expected, otherwise you will get gas > > assembler. So if you redirect, the result is, mildly put, undefined. > > > > Thank you, though... it is time the assembler stuff gets documented, > > and I think I'm in a fairly good position to do so. I will not > > promise that it will happen fast, but it is in my backlog. > As a trivial (and easily audited first patch) perhaps make the > common code error out with a usage message to STDERR if the > command line makes no sense (no output file, wrong argument > count, auto with unrecognized file extension).? Ideally this > would be in the common perl module(s), not in individual > assembler files. Ideas differ from one person to another, and there are ideas on flexibility based on intimate knowledge of these modules that are contrary to the more strict interpretation you desire. Also, and we've argued this back and forth quite a bit, there's the idea of the modules being usable without dependence on other modules (apart from the xlate module that they pipe to). These modules have worked this way for quite a while (apart from standardising on having the last argument be the output file at all times, that actually varied between assembler modules before 1.1.0), and while I agree with you that these modules are a bit too flexible (please take note of this before thinking that I'm arguing against you!), changing them need to be done carefully. > Remember that keeping every patch easily audited by the wider > community is essential to the trustworthiness of OpenSSL, the > great reformatting a while back was a major mistake in this > regard. Regarding the great reformatting, this may be argued 'til hell freezes over. One of the things we considered was that the old source format was arcane, didn't exist anywhere else, and wasn't even well supported by the project team members (there was code inserted in more common formats, most often the usual 4 space indent BSD format). The current format has much better recognision and is easy to support in editors and current formatters. So as "mistake" goes, keeping the old source code format could have been regarded as one just as much. Cheers, Richard -- Richard Levitte levitte at openssl.org OpenSSL Project http://www.openssl.org/~levitte/ From dclarke at blastwave.org Thu Dec 27 17:16:52 2018 From: dclarke at blastwave.org (Dennis Clarke) Date: Thu, 27 Dec 2018 12:16:52 -0500 Subject: [openssl-users] openssl 1.1.1 manuals In-Reply-To: References: <10477.1545928265@localhost> Message-ID: On 12/27/18 11:48 AM, Salz, Rich via openssl-users wrote: > They are there, but the sidenav needs to be updated. > Generally I find everything I need in the source tarball and after the install is done everything anyone could want is installed on the system. As for 'sidenav' that sounds like someone actually has to go tweak stuff manually on some website. Sadly. Anyways, the source tarballs have everything that is for certain. 
A lot of symlinks to be sure. Dennis From Matthias.St.Pierre at ncp-e.com Thu Dec 27 17:39:11 2018 From: Matthias.St.Pierre at ncp-e.com (Dr. Matthias St. Pierre) Date: Thu, 27 Dec 2018 17:39:11 +0000 Subject: [openssl-users] openssl 1.1.1 manuals In-Reply-To: References: <10477.1545928265@localhost> Message-ID: <9c039abf6f7f4acdbacce728362a7245@Ex13.ncp.local> > Generally I find everything I need in the source tarball and after the > install is done everything anyone could want is installed on the system. > As for 'sidenav' that sounds like someone actually has to go tweak stuff > manually on some website. Sadly. Anyways, the source tarballs have > everything that is for certain. A lot of symlinks to be sure. > > Dennis All supported manual page versions are publicly available from this site here. https://www.openssl.org/docs/manpages.html The missing link in the manual side bar is an oversight, which will be fixed shortly, see https://github.com/openssl/web/pull/100 HTH, Matthias From mcr at sandelman.ca Thu Dec 27 18:59:30 2018 From: mcr at sandelman.ca (Michael Richardson) Date: Thu, 27 Dec 2018 13:59:30 -0500 Subject: [openssl-users] openssl 1.1.1 manuals In-Reply-To: References: <10477.1545928265@localhost> Message-ID: <12099.1545937170@localhost> J. J. Farrell wrote: > man1.1.1 looks OK to me, the pages all appear to be there. What is > missing is a link to 1.1.1 in the little Manpages list of links on the > right hand side of the page https://www.openssl.org/docs/man1.1.0/crypto/CMS_sign.html exists, but https://www.openssl.org/docs/man1.1.1/crypto/CMS_sign.html does not. There are other examples which I have come across. > On 27/12/2018 16:31, Michael Richardson wrote: > If manual pages for 1.1.1 aren't going to be posted/generated: > could https://www.openssl.org/docs/man1.1.1 redirect to > https://www.openssl.org/docs/man1.1.0? > (I think that 1.1.1 ought to be generated) > -- > J. J. Farrell Not speaking for Oracle > ---------------------------------------------------- > Alternatives: > ---------------------------------------------------- > -- > openssl-users mailing list To unsubscribe: > https://mta.openssl.org/mailman/listinfo/openssl-users From uri at ll.mit.edu Thu Dec 27 19:06:33 2018 From: uri at ll.mit.edu (Blumenthal, Uri - 0553 - MITLL) Date: Thu, 27 Dec 2018 19:06:33 +0000 Subject: [openssl-users] openssl 1.1.1 manuals In-Reply-To: <12099.1545937170@localhost> References: <10477.1545928265@localhost> <12099.1545937170@localhost> Message-ID: <5E5DF536-8455-4D4D-8704-F19A54907BAE@ll.mit.edu> The docs site is screwed up. CMS_sign is indeed documented for 1.1.1 - but you have to go there via https://www.openssl.org/docs/man1.1.1 -> Libraries -> CMS_sign.html, which would bring you to https://www.openssl.org/docs/man1.1.1/man3/CMS_sign.html ?On 12/27/18, 14:00, "openssl-users on behalf of Michael Richardson" wrote: J. J. Farrell wrote: > man1.1.1 looks OK to me, the pages all appear to be there. What is > missing is a link to 1.1.1 in the little Manpages list of links on the > right hand side of the page https://www.openssl.org/docs/man1.1.0/crypto/CMS_sign.html exists, but https://www.openssl.org/docs/man1.1.1/crypto/CMS_sign.html does not. There are other examples which I have come across. > On 27/12/2018 16:31, Michael Richardson wrote: > If manual pages for 1.1.1 aren't going to be posted/generated: > could https://www.openssl.org/docs/man1.1.1 redirect to > https://www.openssl.org/docs/man1.1.0? > (I think that 1.1.1 ought to be generated) > -- > J. J. 
Farrell Not speaking for Oracle > ---------------------------------------------------- > Alternatives: > ---------------------------------------------------- > -- > openssl-users mailing list To unsubscribe: > https://mta.openssl.org/mailman/listinfo/openssl-users -- openssl-users mailing list To unsubscribe: https://mta.openssl.org/mailman/listinfo/openssl-users -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 5249 bytes Desc: not available URL: From mcr at sandelman.ca Thu Dec 27 19:08:46 2018 From: mcr at sandelman.ca (Michael Richardson) Date: Thu, 27 Dec 2018 14:08:46 -0500 Subject: [openssl-users] openssl 1.1.1 manuals In-Reply-To: <9c039abf6f7f4acdbacce728362a7245@Ex13.ncp.local> References: <10477.1545928265@localhost> <9c039abf6f7f4acdbacce728362a7245@Ex13.ncp.local> Message-ID: <14222.1545937726@localhost> Dr. Matthias St. Pierre wrote: >> Generally I find everything I need in the source tarball and after the >> install is done everything anyone could want is installed on the >> system. As for 'sidenav' that sounds like someone actually has to go >> tweak stuff manually on some website. Sadly. Anyways, the source >> tarballs have everything that is for certain. A lot of symlinks to be >> sure. >> >> Dennis > All supported manual page versions are publicly available from this > site here. https://www.openssl.org/docs/manpages.html The listings like: https://www.openssl.org/docs/man1.1.1/man3/ are basically useless for navigation. Particularly if you don't know exactly what one is looking for... { There is something amiss with BIO_addr_rawaddress... it's shift right. I don't see a problem in the HTML source though.. } Sure, google will find some things, but usually it's the wrong version, and one has to guess what the URL for the most recent one is. At which point, like Dennis Clarke suggests, might as well grep the POD files in the source code. This is not terribly effective to find information about how to manipulate particular object types. (I have started writing an index by object type for my own use, but I doubt I'll get very far) -- ] Never tell me the odds! | ipv6 mesh networks [ ] Michael Richardson, Sandelman Software Works | IoT architect [ ] mcr at sandelman.ca http://www.sandelman.ca/ | ruby on rails [ -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 487 bytes Desc: not available URL: From Matthias.St.Pierre at ncp-e.com Thu Dec 27 19:39:46 2018 From: Matthias.St.Pierre at ncp-e.com (Dr. Matthias St. Pierre) Date: Thu, 27 Dec 2018 19:39:46 +0000 Subject: [openssl-users] openssl 1.1.1 manuals In-Reply-To: <14222.1545937726@localhost> References: <10477.1545928265@localhost> <9c039abf6f7f4acdbacce728362a7245@Ex13.ncp.local> <14222.1545937726@localhost> Message-ID: <0e9e40ca7e704e2ebb51a96ef42534dc@Ex13.ncp.local> > Particularly if you don't know exactly what one is looking for... > { There is something amiss with BIO_addr_rawaddress... it's shift right. > I don't see a problem in the HTML source though.. } > > Sure, google will find some things, but usually it's the wrong version, and > one has to guess what the URL for the most recent one is. > > At which point, like Dennis Clarke suggests, might as well grep the POD files > in the source code. This is not terribly effective to find information > about how to manipulate particular object types. 
> > (I have started writing an index by object type for my own use, but I doubt > I'll get very far) The manpages are primarily what the name says: manual pages. I.e, their primary use is the unix/linux 'man' command. The conversion to html is an add-on to make it available via web server. And I agree with you that static web pages are not of much help, it could be better, more searchable. As for grepping the POD files: There is a much simpler solution if you are using bash on linux: Install your manual pages locally, add them to your MANPATH, and marvel at the power of bash's tab completion. Disclaimer: Unless you know what you are doing, you should not replace your distribution's copy of OpenSSL by your own, but instead install it to a separate location. For example, I have all my openssl library versions installed locally in /opt/openssl-dev /opt/openssl-1.1.1-dev /opt/openssl-1.1.0-dev /opt/openssl-1.0.2-dev (By configuring with --prefix=/opt/openssl-dev (etc.) and then running 'make -j 16 ; sudo make install'.) Additionally, I have a simple script and a set of aliases 'ossl', 'ossl111' to set the MANPATH accordingly: cat ~/.osslpath export PATH=${OSSLPATH}/bin:$ORI_PATH export LD_LIBRARY_PATH=${OSSLPATH}/lib:$ORI_LD_LIBRARY_PATH export MANPATH=${OSSLPATH}/${OSSL_MANPATH}:$ORI_MANPATH msp at msppc:~$ alias ossl alias ossl='export OSSLPATH=/opt/openssl-dev ; OSSL_MANPATH=share/man source ~/.osslpath ; echo $OSSLPATH: $(openssl version)' msp at msppc:~$ alias ossl111 alias ossl111='export OSSLPATH=/opt/openssl-1.1.1-dev ; OSSL_MANPATH=share/man source ~/.osslpath ; echo $OSSLPATH: $(openssl version)' ($ORI_PATH is initally set to $PATH in my .bashrc, and the same holds for the other $ORI_XXX) Changing to the manual pages for the correct openssl version is now a matter of a single command, msp at msppc:~$ ossl /opt/openssl-dev: OpenSSL 3.0.0-dev xx XXX xxxx And voila, if your tab completion is setup correctly, help is only two s away: msp at msppc:~$ man BIO_new BIO_new BIO_new_file BIO_new_CMS BIO_new_fp BIO_new_accept BIO_new_mem_buf BIO_new_bio_pair BIO_new_socket BIO_new_buffer_ssl_connect BIO_new_ssl BIO_new_connect BIO_new_ssl_connect BIO_new_fd Matthias From mcr at sandelman.ca Thu Dec 27 19:51:57 2018 From: mcr at sandelman.ca (Michael Richardson) Date: Thu, 27 Dec 2018 14:51:57 -0500 Subject: [openssl-users] setting eContentType for CMS messages without CMS_PARTIAL In-Reply-To: <10477.1545928265@localhost> References: <10477.1545928265@localhost> Message-ID: <25407.1545940317@localhost> A major way in which PKCS7 and CMS signed artifacts differ is that the CMS artifacts include a content-type. RFC5652 has a decision tree to decide what version of SignedData structure to produce. The presence of a non-"id-data" content-type is among the decision tree, and so I understand why it can't be set after the signature (besides, the content-type is within the signature!). I think it's probably too complex that the only way to set the content-type is by doing the CMS_PARTIAL work. I think that CMS_sign() and CMS_encrypt() ought to take a eContentType OID: but ABI issues would mean a new call. I had to read the source code to understand the difference between CMS_get0_type() and CMS_get0_eContentType(). I can see how one refers to the cms->contentType, and the other refers to the same thing "as received", in the structure (RFC5652's EncapsulatedContentInfo). I'm not sure if there is intended to be functional or API contract differences between the two?? 
I was also mystified about get0_content(), until I realized that it did not
have the word "type" in it. I've sent some pull requests, one of which
suggests that you can't call get0_content() until CMS_final() has been
called on outgoing objects. CMS_get0_content() returns a pointer to a
pointer, and it says down at the bottom that it can be used to modify the
content. It's clear that a receiver (verifier/decrypter) can mutate this
content as part of it's processing: saves memory for a buffer, a copy, and a
potential buffer overflow, I guess. It's unclear to me of what use this is
for outgoing content. Clearly one could allocate an ASN1_OCTET_STRING big
enough for constructing content, or point it at a buffer already in use.
Clearly that's nonsense if CMS_PARTIAL is not used, and I wonder if
CMS_get0_content() should return NULL if the signature is already done.
-- ] Never tell me the odds!
| ipv6 mesh networks [ ] Michael Richardson, Sandelman Software Works | IoT architect [ ] mcr at sandelman.ca http://www.sandelman.ca/ | ruby on rails [ -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 487 bytes Desc: not available URL: From Matthias.St.Pierre at ncp-e.com Thu Dec 27 20:00:05 2018 From: Matthias.St.Pierre at ncp-e.com (Dr. Matthias St. Pierre) Date: Thu, 27 Dec 2018 20:00:05 +0000 Subject: [openssl-users] openssl 1.1.1 manuals In-Reply-To: <5E5DF536-8455-4D4D-8704-F19A54907BAE@ll.mit.edu> References: <10477.1545928265@localhost> <12099.1545937170@localhost> <5E5DF536-8455-4D4D-8704-F19A54907BAE@ll.mit.edu> Message-ID: > The docs site is screwed up. Actually, it is screwed up for the older versions, not for 1.1.1: In OpenSSL 1.1.0 and before, the pod files (the manual page sources) would be located in /doc/crypto and /doc/ssl, and only during the installation would be placed in the proper manX subdirectory (X=1,3,5,7). Starting with OpenSSL 1.1.1, the pod files are now reorganized in subdirectories doc/man1, doc/man3, doc/man5, doc/man7, reflecting the manual section where they will finally be installed. So the path is correct for 1.1.1 and screwed up for 1.1.0 and below. https://www.openssl.org/docs/man1.1.1/man3/CMS_sign.html https://www.openssl.org/docs/man1.1.0/crypto/CMS_sign.html Matthias From ertan.kucukoglu at gmail.com Thu Dec 27 21:02:44 2018 From: ertan.kucukoglu at gmail.com (=?UTF-8?B?RXJ0YW4gS8O8w6fDvGtvZ2x1?=) Date: Fri, 28 Dec 2018 00:02:44 +0300 Subject: [openssl-users] Decrypting an OpenSSL encrypt AES256-CBC data Message-ID: Hello, First of all I am a newbie to this list and to cryptography, padding, and C language. Please, bear with me. I am trying to encrypt some data on an embedded Linux system using OpenSSL crypto library and decrypt it on a Windows system. I am following example at following link for C codes on embedded system: https://wiki.openssl.org/index.php/EVP_Symmetric_Encryption_and_Decryption Fortunately, I could encrypt with given code on that embedded linux system using OpenSSL Library version used was: 1.0.0e I do not have a change to use another version of the library. I am needed to be provided newer/older library by the device manufacturer to be able to cross compile my C code using their SDK. My tests on embedded device works in both ways. I can encrypt and decrypt simple string (or below provided example data) on that system. My problem is, I need to decrypt on Windows OS what is crypt on the embedded device. Windows OS, I am using Delphi (Object Pascal) for my programming needs. As I have not enough cryptography knowledge, I am using an open source library from mORMot project. This project supports AES256-CBC, AES256-CFB and several other AES based encryption and decryption, some hashing, etc. I have used that library before and it did work well for my needs. Thing is, I only used mORMot for my encryption and decryption which simply works. Similarly, OpenSSL encryption and decryption works on that embedded Linux system. I failed to make them talk to each other, properly. A- I tried to directly decrypt (no padding applied) and I get my plain text plus some additional invisible characters at the end. I am told it maybe a "padding" issue, my problem, during decryption. B- I tried PKCS#7 for decryption and it fails (no text being returned. All bytes are zero and of course that returns an empty string. 
Below is encrypt base64 data on embedded linux system. 8XnbAER2Mh4GLQpDrBLA24R0uEm2SkqDqa0U/PZ3KsSCZsKmJ+WKoYqx7dTiLC/uvJivgm2LOJ0mD5U4NQ19SZgYbT1TByMlLL+075EF8LsXotySz2hze2IozKOB8TG4dn2W/nDdM5deO7csBY28onQHOV4wbqzInUeaLVzbvAI= Attached is crypt data saved directly in a file. My tests, base64 decodes to identical bytes as in that file. Plain text should be (last character is invisible '\n'): 0000010000012018122721570520181227215705 00017214422c4277d76H 10350514.44 0.01 10350514.43 0.010000 For test purposes my key and IV are simple. Key: bytes from 0 to 31 (inclusive) IV: bytes from 0 to 15 (inclusive) C code I used for generating key and IV is something like: unsigned char key[32]; unsigned char iv[16]; for(i = 0; i < sizeof(key); i++) key[i] = i; for(i = 0; i < sizeof(iv); i++) iv[i] = i; What I see on Windows after directly decrypting is something as below (I used some embedded picture to be able to show invisible characters at the end) [image: image.png] My questions are (please keep in mind that I maybe asking non-sense): 1- Is OpenSSL version 1.0.0e using some kind of proprietary padding? I read something in this accepted answer: https://stackoverflow.com/questions/11783062/how-to-decrypt-file-in-java-encrypted-with-openssl-command-using-aes I also remember reading PKCS#5 is identical to PKCS#7 in another answer (which failed to work in my case). 2- What should I do to properly decrypt and receive plain text without additional characters in the end? I appreciate any help. Thanks & regards, Ertan K???ko?lu -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image.png Type: image/png Size: 4746 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: crypt Type: application/octet-stream Size: 128 bytes Desc: not available URL: From carabiankyi at gmail.com Fri Dec 28 05:24:48 2018 From: carabiankyi at gmail.com (=?UTF-8?B?4YG+4YCA4YCK4YC54YCF4YCt4YCv4YC4IOGAnuGAhOGAueGAuA==?=) Date: Fri, 28 Dec 2018 11:54:48 +0630 Subject: [openssl-users] How can I compile nginx with openssl to support 0-rtt TLS1.3 Message-ID: Dear Sirs, I have an nginx web server compiled with openssl that support TLS 1.3. But when I test with firefox Nightly browser, it does not send early data together with client hello packet. I test this test after waiting for about five minutes after accessing web server. I cannot find any source on internet about enabling 0-rtt in nginx with openssl. Please advise me. Thanks, Kyi Soe Thin -------------- next part -------------- An HTML attachment was scrubbed... URL: From jb-openssl at wisemo.com Fri Dec 28 05:39:21 2018 From: jb-openssl at wisemo.com (Jakob Bohm) Date: Fri, 28 Dec 2018 06:39:21 +0100 Subject: [openssl-users] openssl 1.1.1 manuals In-Reply-To: <0e9e40ca7e704e2ebb51a96ef42534dc@Ex13.ncp.local> References: <10477.1545928265@localhost> <9c039abf6f7f4acdbacce728362a7245@Ex13.ncp.local> <14222.1545937726@localhost> <0e9e40ca7e704e2ebb51a96ef42534dc@Ex13.ncp.local> Message-ID: On 27/12/2018 20:39, Dr. Matthias St. Pierre wrote: >> Particularly if you don't know exactly what one is looking for... >> { There is something amiss with BIO_addr_rawaddress... it's shift right. >> I don't see a problem in the HTML source though.. } >> >> Sure, google will find some things, but usually it's the wrong version, and >> one has to guess what the URL for the most recent one is. 
>> >> At which point, like Dennis Clarke suggests, might as well grep the POD files >> in the source code. This is not terribly effective to find information >> about how to manipulate particular object types. >> >> (I have started writing an index by object type for my own use, but I doubt >> I'll get very far) > The manpages are primarily what the name says: manual pages. I.e, their > primary use is the unix/linux 'man' command. > > The conversion to html is an add-on to make it available via web server. > And I agree with you that static web pages are not of much help, it could > be better, more searchable. Consider at least including the one-line manpage summaries on the index pages (the ones displayed by the apropos command on POSIX systems). Enjoy Jakob -- Jakob Bohm, CIO, Partner, WiseMo A/S. https://www.wisemo.com Transformervej 29, 2860 S?borg, Denmark. Direct +45 31 13 16 10 This public discussion message is non-binding and may contain errors. WiseMo - Remote Service Management for PCs, Phones and Embedded From c.wehrmeyer at freshlions.de Fri Dec 28 10:22:23 2018 From: c.wehrmeyer at freshlions.de (Christian) Date: Fri, 28 Dec 2018 11:22:23 +0100 Subject: [openssl-users] Authentication over ECDHE In-Reply-To: <97a8372f-f023-59ad-6535-1626431d532d@openssl.org> References: <712647c2-1674-ca81-0efb-84ab41c86eac@freshlions.de> <20181224151053.GH79754@straasha.imrryr.org> <05067d08-6ff2-8dce-9bd3-d0c98baf6b3b@freshlions.de> <20181224160132.GJ79754@straasha.imrryr.org> <05089988-07F4-488C-B9F8-BE1285247895@akamai.com> <8847FD16-9999-4226-A183-276970706794@dukhovni.org> <97a8372f-f023-59ad-6535-1626431d532d@openssl.org> Message-ID: <35795b25-1bd7-d7d6-fdd7-84c367f6de9b@freshlions.de> Thank you for the suggestions thus far. I've been working on a simple SSL client/server system in the last couple days. Unfortunately the SSL documentation is a right mess, so I don't know what is allowed and what is not, which leads to some problems that I don't know exactly how to tackle on. First of all, I opted out for the cipher "ECDHE-PSK-AES128-CBC-SHA256". As Matt suggested, using PSKs does reduce a lot of complexity in this scenario - if I've been understanding him correctly, this cipher should give us forward secrecy while still relying on a pre-shared key, which also authenticates both sides to each other. When I run my server, and then run my client, it receives the data the server sends without any problems. However, things start to get messy once the keys mismatch, as would in any attacker scenario. The client initiates the handshake, but on the server side SSL_accept() returns -1, the client receives no data (as should). Then I start the client *again*. On the server side SSL_accept() returns -1 again, but this time the client blocks in SSL_read() (I haven't not implemented timeout handling yet, as this still all runs on my testing environments). It's almost as if SSL_shutdown on the server side does not notify the client that the connection is to be closed. For the BIO object on the server side I'm using a permanent BIO object which I just call BIO_set_fd() upon to set the socket I receive from accept(). The call chain looks like this: =================== First connection, client closes connection as excepted. =================== BIO_set_fd with 4|1 #Socket 4, BIO_CLOSE SSL_set_accept_state SSL_accept SSL_accept returned with -1 SSL_shutdown SSL_clear =================== Second connection, client suddenly blocks, has to be interrupted with CTRL + C. 
=================== BIO_set_fd with 5|1 #Socket 5, BIO_CLOSE SSL_set_accept_state SSL_accept SSL_accept returned with -1 SSL_shutdown SSL_clear =================== Third connection, client blocks again, has to be interrupted again. =================== BIO_set_fd with 4|1 SSL_set_accept_state SSL_accept SSL_accept returned with -1 SSL_shutdown SSL_clear What am I doing wrong on the server side? I assume it's the server; the client process ends right after the connection attempt, and it's the server that keeps running. And once I reset the server the first connection closes properly again. Am I supposed to use a new BIO object for each incoming connection? If so, that's pretty dumb. You usually want to have your accept() loop to be free of as much code as possible, and setting up everything in advance during server startup. The current server code for setting up the SSL object and using it looks like this: > if(NULL == (bio = BIO_new_socket(0,BIO_NOCLOSE))) /*Socket doesn't really matter, we're gonna reset this soon enough.*/ > { > goto LABEL_END_NO_BIO; > } > > if(NULL == (ssl = SSL_new(ssl_ctx))) > { > BIO_free(bio); > goto LABEL_END_NO_SSL; > } > > SSL_clear(ssl); > SSL_set_bio(ssl,bio,bio); > > tmp = 1; > setsockopt > ( > socket_server, > SOL_SOCKET, > SO_REUSEADDR, > &tmp, > sizeof(tmp) > ); > > if(-1 == bind > ( > socket_server, > (struct sockaddr*)&sin_server, > sizeof(sin_server) > )) > { > fprintf(stderr,"Can't bind socket.\n"); > goto LABEL_END; > } > > if(-1 == listen(socket_server,1)) > goto LABEL_END; > > while(0 <= (socket_client = accept > ( > socket_server, > (struct sockaddr*)&sin_client, > &sin_client_length > ))) > { > fprintf(stderr,"BIO_set_fd with %u|%u\n",socket_client,BIO_CLOSE); > BIO_set_fd(bio,socket_client,BIO_CLOSE); > fprintf(stderr,"SSL_set_accept_state\n"); > SSL_set_accept_state(ssl); > fprintf(stderr,"SSL_accept\n"); > tmp = SSL_accept(ssl); > if(tmp != 1) > { > fprintf(stderr,"SSL_accept returned with %i\n",tmp); > goto LABEL_NEXT_CLIENT; > } > > fprintf(stderr,"SSL_write\n"); > SSL_write(ssl,"That is my string",sizeof("That is my string") - 1); > > LABEL_NEXT_CLIENT: > fprintf(stderr,"SSL_shutdown\n"); > SSL_shutdown(ssl); > fprintf(stderr,"SSL_clear\n"); > SSL_clear(ssl); > } > > /*Rest of cleanup, doesn't matter, this is hopefully never reached.*/ Thank you for your continued help. From c.wehrmeyer at freshlions.de Fri Dec 28 11:17:07 2018 From: c.wehrmeyer at freshlions.de (Christian) Date: Fri, 28 Dec 2018 12:17:07 +0100 Subject: [openssl-users] Authentication over ECDHE In-Reply-To: <35795b25-1bd7-d7d6-fdd7-84c367f6de9b@freshlions.de> References: <712647c2-1674-ca81-0efb-84ab41c86eac@freshlions.de> <20181224151053.GH79754@straasha.imrryr.org> <05067d08-6ff2-8dce-9bd3-d0c98baf6b3b@freshlions.de> <20181224160132.GJ79754@straasha.imrryr.org> <05089988-07F4-488C-B9F8-BE1285247895@akamai.com> <8847FD16-9999-4226-A183-276970706794@dukhovni.org> <97a8372f-f023-59ad-6535-1626431d532d@openssl.org> <35795b25-1bd7-d7d6-fdd7-84c367f6de9b@freshlions.de> Message-ID: <8483fee5-4e37-4841-c9f0-ba9ed5216344@freshlions.de> I should also add that printing the error stack doesn't yield much info other than "you dun goof'd": =================== First connection, client closes connection as excepted. 
=================== BIO_set_fd with 4|1 #Socket 4, BIO_CLOSE SSL_set_accept_state SSL_accept SSL_accept failed, SSL_get_error: 1 #SSL_ERROR_SSL 140059505588032:error:1408F119:SSL routines:ssl3_get_record:decryption failed or bad record mac:../ssl/record/ssl3_record.c:375: SSL_shutdown SSL_clear =================== Second connection, client suddenly blocks, has to be interrupted with CTRL + C. =================== BIO_set_fd with 5|1 #Socket 5, BIO_CLOSE SSL_set_accept_state SSL_accept SSL_accept failed, SSL_get_error: 1 #SSL_ERROR_SSL 140059505588032:error:140A4044:SSL routines:SSL_clear:internal error:../ssl/ssl_lib.c:559: SSL_shutdown SSL_clear =================== Third connection, client blocks again, has to be interrupted again. =================== BIO_set_fd with 4|1 #Socket 4, BIO_CLOSE SSL_set_accept_state SSL_accept SSL_accept failed, SSL_get_error: 1 #SSL_ERROR_SSL 140059505588032:error:140A4044:SSL routines:SSL_clear:internal error:../ssl/ssl_lib.c:559: SSL_shutdown SSL_clear The error messages are being generated by ERR_print_errors_fp(stderr); From stanermetin at gmail.com Fri Dec 28 11:17:10 2018 From: stanermetin at gmail.com (Taner) Date: Fri, 28 Dec 2018 12:17:10 +0100 Subject: [openssl-users] Build target architecture Message-ID: After some searching and check, I've realized that openssl is not configured for different target architectures? I develop an application for Android using NDK(Native Development Kit). There is *Configurations/15-android.conf *inside openssl git repo, but could not be sure. Could someone advise for the right usage. There is also opensslconf.h, and I was thinking adding macros and use it. I use Ubuntu16 and Mac-HighSierra as development OS. Thanks -- BW, Taner -------------- next part -------------- An HTML attachment was scrubbed... URL: From rsalz at akamai.com Fri Dec 28 15:32:11 2018 From: rsalz at akamai.com (Salz, Rich) Date: Fri, 28 Dec 2018 15:32:11 +0000 Subject: [openssl-users] openssl 1.1.1 manuals In-Reply-To: References: <10477.1545928265@localhost> <9c039abf6f7f4acdbacce728362a7245@Ex13.ncp.local> <14222.1545937726@localhost> <0e9e40ca7e704e2ebb51a96ef42534dc@Ex13.ncp.local> Message-ID: Great idea; https://github.com/openssl/web/issues/101 ?On 12/28/18, 12:39 AM, "Jakob Bohm via openssl-users" wrote: Consider at least including the one-line manpage summaries on the index pages (the ones displayed by the apropos command on POSIX systems). From matt at openssl.org Fri Dec 28 17:17:39 2018 From: matt at openssl.org (Matt Caswell) Date: Fri, 28 Dec 2018 17:17:39 +0000 Subject: [openssl-users] Authentication over ECDHE In-Reply-To: <35795b25-1bd7-d7d6-fdd7-84c367f6de9b@freshlions.de> References: <712647c2-1674-ca81-0efb-84ab41c86eac@freshlions.de> <20181224151053.GH79754@straasha.imrryr.org> <05067d08-6ff2-8dce-9bd3-d0c98baf6b3b@freshlions.de> <20181224160132.GJ79754@straasha.imrryr.org> <05089988-07F4-488C-B9F8-BE1285247895@akamai.com> <8847FD16-9999-4226-A183-276970706794@dukhovni.org> <97a8372f-f023-59ad-6535-1626431d532d@openssl.org> <35795b25-1bd7-d7d6-fdd7-84c367f6de9b@freshlions.de> Message-ID: <30cbc387-5ecb-c020-7bac-2582ce2afbcf@openssl.org> On 28/12/2018 10:22, Christian wrote: > Thank you for the suggestions thus far. I've been working on a simple SSL > client/server system in the last couple days. Unfortunately the SSL > documentation is a right mess, so I don't know what is allowed and what is not, > which leads to some problems that I don't know exactly how to tackle on. 
> > First of all, I opted out for the cipher "ECDHE-PSK-AES128-CBC-SHA256". As Matt > suggested, using PSKs does reduce a lot of complexity in this scenario - if I've > been understanding him correctly, this cipher should give us forward secrecy > while still relying on a pre-shared key, which also authenticates both sides to > each other. Yes, this is correct. > When I run my server, and then run my client, it receives the data > the server sends without any problems. > > However, things start to get messy once the keys mismatch, as would in any > attacker scenario. The client initiates the handshake, but on the server side > SSL_accept() returns -1, the client receives no data (as should). Then I start > the client *again*. On the server side SSL_accept() returns -1 again, but this > time the client blocks in SSL_read() (I haven't not implemented timeout handling > yet, as this still all runs on my testing environments). It's almost as if > SSL_shutdown on the server side does not notify the client that the connection > is to be closed. Which version of OpenSSL is this? (I don't remember if you said this already). Note that SSL_shutdown is intended for orderly shutdown of a successful, active SSL/TLS connection. It is not supposed to be called if the connection has failed for some reason. If the server decides to abort the connection it should have already sent a fatal alert. >> LABEL_NEXT_CLIENT: >>???????? fprintf(stderr,"SSL_shutdown\n"); >>???????? SSL_shutdown(ssl); >>???????? fprintf(stderr,"SSL_clear\n"); >>???????? SSL_clear(ssl); Please check the return code of this SSL_clear function. It can fail, and if it does it means the SSL object has not been cleared properly, and that will cause all sorts of weird, difficult to debug failures later on. Matt From openssl-users at dukhovni.org Fri Dec 28 17:48:58 2018 From: openssl-users at dukhovni.org (Viktor Dukhovni) Date: Fri, 28 Dec 2018 12:48:58 -0500 Subject: [openssl-users] Authentication over ECDHE In-Reply-To: <8483fee5-4e37-4841-c9f0-ba9ed5216344@freshlions.de> References: <712647c2-1674-ca81-0efb-84ab41c86eac@freshlions.de> <20181224151053.GH79754@straasha.imrryr.org> <05067d08-6ff2-8dce-9bd3-d0c98baf6b3b@freshlions.de> <20181224160132.GJ79754@straasha.imrryr.org> <05089988-07F4-488C-B9F8-BE1285247895@akamai.com> <8847FD16-9999-4226-A183-276970706794@dukhovni.org> <97a8372f-f023-59ad-6535-1626431d532d@openssl.org> <35795b25-1bd7-d7d6-fdd7-84c367f6de9b@freshlions.de> <8483fee5-4e37-4841-c9f0-ba9ed5216344@freshlions.de> Message-ID: <082DDC8F-1DD2-45AD-BF90-9FBFA6AE3B9E@dukhovni.org> > On Dec 28, 2018, at 6:17 AM, Christian wrote: > > BIO_set_fd with 4|1 #Socket 4, BIO_CLOSE > SSL_set_accept_state > SSL_accept > SSL_accept failed, SSL_get_error: 1 #SSL_ERROR_SSL > 140059505588032:error:1408F119:SSL routines:ssl3_get_record:decryption failed or bad record mac:../ssl/record/ssl3_record.c:375: > SSL_shutdown > SSL_clear 1. Don't call SSL_shutdown(), rather just call SSL_free() and close the socket using close(), IIRC SSL_set_fd() (you should not need to use BIO_set_fd) leaves you as the owner of the socket to close or not. 2. DO NOT reuse the same SSL handle for multiple connections, create a new one for subsequent connections, but you can and generally should reuse the SSL_CTX. -- Viktor. 
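To make that concrete, a minimal sketch of a PSK server loop along the lines discussed in this thread could look as follows (OpenSSL 1.1.x assumed; the identity and key are made-up demo values and error handling is trimmed):

    #include <string.h>
    #include <unistd.h>
    #include <sys/socket.h>
    #include <openssl/ssl.h>

    /* Made-up demo identity and key, for illustration only. */
    static const char          demo_identity[] = "client1";
    static const unsigned char demo_key[16] =
        { 0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07,
          0x08, 0x09, 0x0a, 0x0b, 0x0c, 0x0d, 0x0e, 0x0f };

    static unsigned int psk_server_cb(SSL *ssl, const char *identity,
                                      unsigned char *psk,
                                      unsigned int max_psk_len)
    {
        if (identity == NULL || strcmp(identity, demo_identity) != 0)
            return 0;                  /* unknown identity: handshake fails */
        if (max_psk_len < sizeof(demo_key))
            return 0;
        memcpy(psk, demo_key, sizeof(demo_key));
        return sizeof(demo_key);
    }

    void serve(SSL_CTX *ctx, int listen_fd)    /* ctx created once, reused */
    {
        SSL_CTX_set_cipher_list(ctx, "ECDHE-PSK-AES128-CBC-SHA256");
        SSL_CTX_set_psk_server_callback(ctx, psk_server_cb);

        for (;;) {
            int fd = accept(listen_fd, NULL, NULL);
            SSL *ssl;

            if (fd < 0)
                break;
            ssl = SSL_new(ctx);                /* fresh SSL per connection */
            SSL_set_fd(ssl, fd);
            if (SSL_accept(ssl) == 1) {
                SSL_write(ssl, "That is my string", 17);
                SSL_shutdown(ssl);     /* only after a successful handshake */
            }
            SSL_free(ssl);             /* no SSL_clear(), no handle reuse */
            close(fd);
        }
    }

The points to note are that the SSL_CTX and its PSK callback are set up once and reused, each accepted socket gets its own SSL object, SSL_shutdown() is only attempted after a successful handshake, and the handle is freed unconditionally instead of being cleared and reused.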
From Michael.Wojcik at microfocus.com Fri Dec 28 18:16:20 2018 From: Michael.Wojcik at microfocus.com (Michael Wojcik) Date: Fri, 28 Dec 2018 18:16:20 +0000 Subject: [openssl-users] Decrypting an OpenSSL encrypt AES256-CBC data In-Reply-To: References: Message-ID: > From: openssl-users [mailto:openssl-users-bounces at openssl.org] On Behalf Of Ertan K???koglu > Sent: Thursday, December 27, 2018 16:03 > A- I tried to directly decrypt (no padding applied) and I get my plain text plus > some additional invisible characters at the end. I am told it maybe a "padding" > issue, my problem, during decryption. How does the Windows program know how long the decrypted data is? It sounds to me like the problem is simply that your Windows code is decrypting the data correctly, then reading past it into garbage left at the end of the buffer. If the messages are of fixed length, only use that many bytes from the decryption output. If they're of variable length, then the sender will have to tell the receiver how long they are. There are many ways of doing that; you haven't told us enough about your protocol to know which would be appropriate in your case. -- Michael Wojcik Distinguished Engineer, Micro Focus From Michael.Wojcik at microfocus.com Fri Dec 28 18:16:21 2018 From: Michael.Wojcik at microfocus.com (Michael Wojcik) Date: Fri, 28 Dec 2018 18:16:21 +0000 Subject: [openssl-users] How can I compile nginx with openssl to support 0-rtt TLS1.3 In-Reply-To: References: Message-ID: > From: openssl-users [mailto:openssl-users-bounces at openssl.org] On Behalf Of ???????? ???? > Sent: Friday, December 28, 2018 00:25 > I have an nginx web server compiled with openssl that support TLS 1.3. What version of OpenSSL? Is it 1.1.1? The final version or an early release? Or 1.1.0, and if so, which letter release? > But when I test with firefox Nightly browser, it does not send early data together with > client hello packet. This sounds like an nginx or Firefox question. I haven't experimented with 0-RTT, which I think was a bad idea in TLSv1.3 and have no interest in enabling in my applications; but as I understand it, you have to set some options in the SSL structure (or the SSL_CTX you use to create it) in order to enable 0-RTT. That means nginx will have to make the necessary OpenSSL API calls. It may not have support for that yet, or in whatever version of nginx you're running. It's also possible that there's some issue with the Firefox build you're running and its 0-RTT support. My suspicion though is that nginx is not enabling 0-RTT in nginx. -- Michael Wojcik Distinguished Engineer, Micro Focus From Matthias.St.Pierre at ncp-e.com Fri Dec 28 22:11:57 2018 From: Matthias.St.Pierre at ncp-e.com (Dr. Matthias St. Pierre) Date: Fri, 28 Dec 2018 22:11:57 +0000 Subject: [openssl-users] Build target architecture In-Reply-To: References: Message-ID: <17c4995c3ed14dbb8da0a326cf54084d@Ex13.ncp.local> > After some searching and check, I've realized that openssl is not configured for different target architectures? > I develop an application for Android using NDK(Native Development Kit). > There is?Configurations/15-android.conf?inside openssl git repo, but could not be sure.? > Could someone advise for the right usage. There is also opensslconf.h, and I was thinking adding macros and use it. > I use Ubuntu16 and Mac-HighSierra as development OS. 
Thanks If it's your first time you try compiling OpenSSL, I'd recommend you start with reading the INSTALL instructions and the platform specific NOTES.ANDROID instructions first. There you will hopefully find the answers to your questions. You find those two text files in the root of your OpenSSL source directory. You can also view them directly on GitHub at https://github.com/openssl/openssl/blob/OpenSSL_1_1_1-stable/INSTALL https://github.com/openssl/openssl/blob/OpenSSL_1_1_1-stable/NOTES.ANDROID The `opensslconf.h` file is not intended to be edited. It is created by the .\Configure script from the `opensslconf.h.in` template. Also the `Configurations/*.conf` files which are part of the tarball are normally not intended to be edited, unless you intend to get you changes merged upstream. But you are free to add your own configuration file if it really turns out to be necessary. The config files also support inheritance, so you can derive from an existing configuration and apply incremental changes. HTH, Matthias From carabiankyi at gmail.com Sat Dec 29 06:42:19 2018 From: carabiankyi at gmail.com (carabiankyi) Date: Sat, 29 Dec 2018 13:12:19 +0630 Subject: [openssl-users] How can I compile nginx with openssl to support 0-rtt TLS1.3 In-Reply-To: Message-ID: <5c27174d.1c69fb81.8adc8.2837@mx.google.com> Thanks for your advice.I get early data when I configure nginx ssl_early_data on.But I only get early data for get method.When using post method, the server terminate connection. Is it related with openssl? If so, how can I do to allow post method? Sent from my Samsung Galaxy smartphone. -------- Original message --------From: Michael Wojcik Date: 29/12/2018 12:46 a.m. (GMT+06:30) To: openssl-users at openssl.org Subject: Re: [openssl-users] How can I compile nginx with openssl to support 0-rtt TLS1.3 > From: openssl-users [mailto:openssl-users-bounces at openssl.org] On Behalf Of ???????? ???? > Sent: Friday, December 28, 2018 00:25 > I have an nginx web server compiled with openssl that support TLS 1.3. What version of OpenSSL? Is it 1.1.1? The final version or an early release? Or 1.1.0, and if so, which letter release? > But when I test with firefox Nightly browser, it does not send early data together with > client hello packet. This sounds like an nginx or Firefox question. I haven't experimented with 0-RTT, which I think was a bad idea in TLSv1.3 and have no interest in enabling in my applications; but as I understand it, you have to set some options in the SSL structure (or the SSL_CTX you use to create it) in order to enable 0-RTT. That means nginx will have to make the necessary OpenSSL API calls. It may not have support for that yet, or in whatever version of nginx you're running. It's also possible that there's some issue with the Firefox build you're running and its 0-RTT support. My suspicion though is that nginx is not enabling 0-RTT in nginx. -- Michael Wojcik Distinguished Engineer, Micro Focus -- openssl-users mailing list To unsubscribe: https://mta.openssl.org/mailman/listinfo/openssl-users -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From jb-openssl at wisemo.com Sat Dec 29 10:05:28 2018 From: jb-openssl at wisemo.com (Jakob Bohm) Date: Sat, 29 Dec 2018 11:05:28 +0100 Subject: [openssl-users] How can I compile nginx with openssl to support 0-rtt TLS1.3 In-Reply-To: <5c27174d.1c69fb81.8adc8.2837@mx.google.com> References: <5c27174d.1c69fb81.8adc8.2837@mx.google.com> Message-ID: <476091da-a6f7-a0b3-b22e-8f6ad2f94481@wisemo.com> On 29/12/2018 07:42, carabiankyi wrote: > Thanks for your advice. > I get early data when I configure nginx ssl_early_data on. > But I only get early data for get method. > When using post method, the server terminate connection. Is it related > with openssl? If so, how can I do to allow post method? > > TLSv1.x and SSL do not know or care what the HTTP commands are. It is probably nginx enforcing a security rule that 0-rtt data should not contain any potentially sensitive information, such as POST data. 0-rtt may be a reasonable way to more quickly transfer the URLs in the many GET requests for static web content such as images, javascript, video segments and user independent web pages.? But it is too risky when handling requests for user specific or password protected content, because the 0-rtt would then be readable by an attacker even if the certificate check fails a few packets after the 0-rtt and associated decryption keys were already sent. > > > Sent from my Samsung Galaxy smartphone. > > -------- Original message -------- > From: Michael Wojcik > Date: 29/12/2018 12:46 a.m. (GMT+06:30) > To: openssl-users at openssl.org > Subject: Re: [openssl-users] How can I compile nginx with openssl to > support 0-rtt TLS1.3 > > > From: openssl-users [mailto:openssl-users-bounces at openssl.org] On > Behalf Of ???????? ???? > > Sent: Friday, December 28, 2018 00:25 > > > I have an nginx web server compiled with openssl that support TLS 1.3. > > What version of OpenSSL? Is it 1.1.1? The final version or an early > release? Or 1.1.0, and if so, which letter release? > > > But when I test with firefox Nightly browser, it does not send early > data together with > > client hello packet. > > This sounds like an nginx or Firefox question. I haven't experimented > with 0-RTT, which I think was a bad idea in TLSv1.3 and have no > interest in enabling in my applications; but as I understand it, you > have to set some options in the SSL structure (or the SSL_CTX you use > to create it) in order to enable 0-RTT. That means nginx will have to > make the necessary OpenSSL API calls. It may not have support for that > yet, or in whatever version of nginx you're running. > > It's also possible that there's some issue with the Firefox build > you're running and its 0-RTT support. My suspicion though is that > nginx is not enabling 0-RTT in nginx. > Enjoy Jakob -- Jakob Bohm, CIO, Partner, WiseMo A/S. https://www.wisemo.com Transformervej 29, 2860 S?borg, Denmark. Direct +45 31 13 16 10 This public discussion message is non-binding and may contain errors. WiseMo - Remote Service Management for PCs, Phones and Embedded From ertan.kucukoglu at gmail.com Sat Dec 29 12:41:53 2018 From: ertan.kucukoglu at gmail.com (=?UTF-8?B?RXJ0YW4gS8O8w6fDvGtvZ2x1?=) Date: Sat, 29 Dec 2018 15:41:53 +0300 Subject: [openssl-users] Decrypting an OpenSSL encrypt AES256-CBC data In-Reply-To: References: Message-ID: Hello, Windows program does not know length of data. I would like to use some kind of standard method and use exact method on Windows to decrypt. 
I think my problem is really that I do not know what "padding" is used by default. I have found the below function. However, there is no detailed explanation of it here:
https://www.openssl.org/docs/man1.0.2/crypto/EVP_CIPHER_CTX_set_padding.html

int EVP_CIPHER_CTX_set_padding(EVP_CIPHER_CTX *x, int padding);

I wanted to learn what values the "padding" parameter can be. I understand I can set it to zero ( 0 ) to disable padding. This is not what I want, because my plain text length is not confirmed to be a multiple of 16 bytes.

I can use PKCS#7 padding to decrypt on Windows, so I would like to encrypt using that padding. I just do not know what value to pass to the above function now.

Thanks & regards,
Ertan Küçükoğlu

Michael Wojcik , 28 Ara 2018 Cum, 21:16 tarihinde şunu yazdı:
> > From: openssl-users [mailto:openssl-users-bounces at openssl.org] On Behalf Of Ertan Küçükoğlu
> > Sent: Thursday, December 27, 2018 16:03
>
> > A- I tried to directly decrypt (no padding applied) and I get my plain text plus
> > some additional invisible characters at the end. I am told it maybe a "padding"
> > issue, my problem, during decryption.
>
> How does the Windows program know how long the decrypted data is?
>
> It sounds to me like the problem is simply that your Windows code is decrypting
> the data correctly, then reading past it into garbage left at the end of the buffer.
>
> If the messages are of fixed length, only use that many bytes from the decryption
> output. If they're of variable length, then the sender will have to tell the receiver
> how long they are. There are many ways of doing that; you haven't told us enough
> about your protocol to know which would be appropriate in your case.
>
> --
> Michael Wojcik
> Distinguished Engineer, Micro Focus
>
> --
> openssl-users mailing list
> To unsubscribe: https://mta.openssl.org/mailman/listinfo/openssl-users
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From jb-openssl at wisemo.com Sat Dec 29 12:54:03 2018
From: jb-openssl at wisemo.com (Jakob Bohm)
Date: Sat, 29 Dec 2018 13:54:03 +0100
Subject: [openssl-users] Decrypting an OpenSSL encrypt AES256-CBC data
In-Reply-To:
References:
Message-ID: <18d77aea-9291-0553-3028-6b3fbfaf08fa@wisemo.com>

On 29/12/2018 13:41, Ertan Küçükoğlu wrote:
> Hello,
>
> The Windows program does not know the length of the data. I would like to use some
> kind of standard method, and use the exact same method on Windows to decrypt.
>
> I think my problem is really that I do not know what "padding" is used
> by default. I have found the below function. However, there is no detailed
> explanation of it here:
> https://www.openssl.org/docs/man1.0.2/crypto/EVP_CIPHER_CTX_set_padding.html
> int EVP_CIPHER_CTX_set_padding(EVP_CIPHER_CTX *x, int padding);
>
> I wanted to learn what values the "padding" parameter can be. I understand
> I can set it to zero ( 0 ) to disable padding. This is not what I want,
> because my plain text length is not confirmed to be a multiple of 16 bytes.
>
> I can use PKCS#7 padding to decrypt on Windows, so I would like to encrypt
> using that padding. I just do not know what value to pass to the above
> function now.
>
PKCS#7, also known as CMS or (in OpenSSL) S/MIME, doesn't just pad. It generates a random key and encrypts it with the recipient's key (usually a public key from a certificate, but there may be a symmetric variant).

Thus to do PKCS#7 with OpenSSL, you need to use the "openssl cms" command line or the corresponding functions.
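A side note on the padding question above, since it comes up a lot: the padding argument to EVP_CIPHER_CTX_set_padding() is effectively a boolean. Zero disables padding; any non-zero value keeps the default scheme, and that default is the PKCS#7/PKCS#5-style block padding, so leaving padding enabled on both sides should interoperate with a PKCS#7-capable decryptor. A minimal sketch follows, with error handling reduced to a single label and key/IV management left out; the output buffer must have room for inlen plus one extra block.

    #include <openssl/evp.h>

    /* Sketch: AES-256-CBC with the EVP default padding (PKCS#7 style).
     * key is 32 bytes, iv is 16 bytes; out needs inlen + 16 bytes of room. */
    int encrypt_cbc(const unsigned char *key, const unsigned char *iv,
                    const unsigned char *in, int inlen,
                    unsigned char *out, int *outlen)
    {
        EVP_CIPHER_CTX *ctx = EVP_CIPHER_CTX_new();
        int len = 0, total = 0;

        if (ctx == NULL)
            return 0;
        if (EVP_EncryptInit_ex(ctx, EVP_aes_256_cbc(), NULL, key, iv) != 1)
            goto err;
        /* Padding is on by default; EVP_CIPHER_CTX_set_padding(ctx, 0) would
         * disable it, and any non-zero value just re-enables the default. */
        if (EVP_EncryptUpdate(ctx, out, &len, in, inlen) != 1)
            goto err;
        total = len;
        if (EVP_EncryptFinal_ex(ctx, out + total, &len) != 1)  /* writes the pad block */
            goto err;
        total += len;
        *outlen = total;
        EVP_CIPHER_CTX_free(ctx);
        return 1;
    err:
        EVP_CIPHER_CTX_free(ctx);
        return 0;
    }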
> > > > Michael Wojcik >, 28 Ara 2018 Cum, 21:16 > tarihinde ?unu yazd?: > > > From: openssl-users [mailto:openssl-users-bounces at openssl.org > ] On Behalf Of Ertan > K???koglu > > Sent: Thursday, December 27, 2018 16:03 > > > A- I tried to directly decrypt (no padding applied) and I get my > plain text plus > > some additional invisible characters at the end. I am told it > maybe a "padding" > > issue, my problem, during decryption. > > How does the Windows program know how long the decrypted data is? > > It sounds to me like the problem is simply that your Windows code > is decrypting the data correctly, then reading past it into > garbage left at the end of the buffer. > > If the messages are of fixed length, only use that many bytes from > the decryption output. If they're of variable length, then the > sender will have to tell the receiver how long they are. There are > many ways of doing that; you haven't told us enough about your > protocol to know which would be appropriate in your case. > > Enjoy Jakob -- Jakob Bohm, CIO, Partner, WiseMo A/S. https://www.wisemo.com Transformervej 29, 2860 S?borg, Denmark. Direct +45 31 13 16 10 This public discussion message is non-binding and may contain errors. WiseMo - Remote Service Management for PCs, Phones and Embedded From c.wehrmeyer at gmx.de Sat Dec 29 13:19:47 2018 From: c.wehrmeyer at gmx.de (C.Wehrmeyer) Date: Sat, 29 Dec 2018 14:19:47 +0100 Subject: [openssl-users] Authentication over ECDHE Message-ID: <38b97114-0c66-40ed-f631-58aa20940a3a@gmx.de> I don't have access to the actual testing environments until Wednesday next year, so I've had to create a private account. > Which version of OpenSSL is this? (I don't remember if you said this > already). I'm not entirely sure, but I *think* it's 1.1.0. ===================================================================== OK, so I've been reading the mails before going to sleep and spent some time thinking and researching about this, and I've come to a conclusion: OpenSSL is a goddamn mess, SSL_clear() is pretty much superfluous, and as such shouldn't exist. Why? Well, to quote Viktor here: > DO NOT reuse the same SSL handle for multiple connections, And that is fricking bullshit. Not the quote itself or the suggestion - it's unlikely you had anything to do with the actual code - but the way things have been thought through (or rather, have not been thought through) by the library devs. I've written highly scalable libraries in the past before, and one thing you always want to do there is to trim fat. And "object allocation and initialisation" is something that you very much want to trim fat of, not only for obvious reasons such as malloc() and free() (or whatever OpenSSL uses as wrappers) being complexity monsters, but also for cache reasons (loading different cache line hurts performance). That's why you usually have functions like XXX_clear() or XXX_reset(), which do exactly that - prepare an object for another usage. memset() (or the OpenSSL equivalent of a secure memset) your allocated resources. I don't really see the problem here. Now add to that the fact that OpenSSL has been moving towards making its structures opaque, thus falling into the same trap that Microsoft has with COM and DirectX, and you can kind of see why, if you can't really determine anymore WHERE your object is going to be stored, you at least want to keep reusing it. 
This is not PHP, where people allocate memory all willy-nilly, or C++, where people don't even have shame anymore to use std::vector str_array instead of good old static const char*const str_array[] while expecting things to be made faster by invisible memory pools (and horribly failing at it), but C, where you want to think about each step quite carefully. Then OpenSSL even provides an SSL_clear function which is advertised like this: > SSL_clear - reset SSL object to allow another connection , and then, only later, in a big warning block, decides to tell the reader that this function only works when the stars align quite correctly and you've sacrificed at least two virgins, because: > The reset operation however keeps several settings of the last > sessions Then, as the documentation suggests, I read the entry for SSL_get_session: > The ssl session contains all information required to re-establish the > connection without a full handshake for SSL versions up to and > including TLSv1.2. In TLSv1.3 the same is true, but sessions are > established after the main handshake has occurred. And at this point it all falls apart. From my understanding OpenSSL keeps a session cache for servers so that key exchanges and protocol handshakes can be avoided. Problem is, *we're using ECDHE, where the last E stands for "ephemeral"*. In simple English: throw away the keys after you're done, we want to have forward secrecy. And then OpenSSL keeps a fresh copy of those for everyone who happened to be logged on at this point. Heartbleed apparently wasn't enough of a warning. Oh, but lets move everything to the heap so that it's more secure there now. I don't want to reuse a session with ephemeral keys; I want to reuse an object that is supposed to already have resources allocated for doing its job, as is indicated by the documentation of this function except for a small note at the end that tells you that the devs didn't really think about what "ephemeral" means. Creating a new SSL object (EVEN FROM AN EXISTING SSL_CTX object) entails: - allocating the memory for the object itself on the heap (via OPENSSL_zalloc) - creating and managing a new lock for the object, and who knows for much more subobjects - creating a duplicate of the cipher suite stack (which isn't even a flat copy, but something that can cause the code to call OPENSSL_malloc *twice* in the worst case) - creating a duplicate of the certificates (which I don't even use, but that doesn't stop the code of ssl_cert_dup() to call OPENSSL_zalloc *in its very first line!*) - setting up a bunch of callbacks - copying 32 bytes for a sid_ctx - creating an X509_VERIFY_PARAM object (*which calls OPENSSL_zalloc again*) as well as creating a deep copy of the SSL_CTX's parameter via X509_VERIFY_PARAM_inherit(), with Thor knows how many copies hidden in all those *set* and *deep_copy* routines - copying EC point formats from the context - deep again, of course, at least that's what OPENSSL_memdup() makes me think - copying supported group informations, and of course deep again! - deep-copying an ALPN object - SSL_clear()-ing the object (no, really!) - deep-copying a CRYPTO_EX_DATA object via CRYPTO_new_ex_data ... at this point, is anyone surprised here that timing attacks against crypto are *still* so successful? Because I'm not. Not at all. 
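One hook that is relevant to the allocation complaints above: an application that wants to decide where OpenSSL's heap allocations land can install its own allocator before the first library call, via CRYPTO_set_mem_functions(). The sketch below uses the OpenSSL 1.1.0 signatures; the arena_* helpers are hypothetical placeholders for whatever mmap'd or pooled storage the application provides.

    #include <openssl/crypto.h>

    /* Hypothetical application-side arena; not part of OpenSSL. */
    extern void *arena_alloc(size_t n);
    extern void *arena_realloc(void *p, size_t n);
    extern void  arena_free(void *p);

    /* Wrappers matching the 1.1.0 callback signatures (file/line are for
     * OpenSSL's own allocation debugging and can be ignored). */
    static void *my_malloc(size_t n, const char *file, int line)
    { (void)file; (void)line; return arena_alloc(n); }
    static void *my_realloc(void *p, size_t n, const char *file, int line)
    { (void)file; (void)line; return arena_realloc(p, n); }
    static void my_free(void *p, const char *file, int line)
    { (void)file; (void)line; arena_free(p); }

    /* Must be called before OpenSSL makes its first allocation. */
    int install_allocator(void)
    {
        return CRYPTO_set_mem_functions(my_malloc, my_realloc, my_free);
    }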
I didn't bother looking up what freeing entails - it's obvious to anyone at this point that OpenSSL is a severe victim of feature creep, that its memory allocation scheme is a mess, and long story short: I will NOT free a perfectly fine object just because of incompetent devs' chutzpah expecting their users to allocate memory dynamically en mass for no goddamn reason whenever a new connection comes in. Fix your goddamn code. And don't give me any "trust us, we're experienced programmers" bullshit. I've *seen* ssl/record/ssl3_record.c: > static const unsigned char ssl3_pad_1[48] = { > 0x36, 0x36, 0x36, 0x36, 0x36, 0x36, 0x36, 0x36, > 0x36, 0x36, 0x36, 0x36, 0x36, 0x36, 0x36, 0x36, > 0x36, 0x36, 0x36, 0x36, 0x36, 0x36, 0x36, 0x36, > 0x36, 0x36, 0x36, 0x36, 0x36, 0x36, 0x36, 0x36, > 0x36, 0x36, 0x36, 0x36, 0x36, 0x36, 0x36, 0x36, > 0x36, 0x36, 0x36, 0x36, 0x36, 0x36, 0x36, 0x36 > }; > static const unsigned char ssl3_pad_2[48] = { > 0x5c, 0x5c, 0x5c, 0x5c, 0x5c, 0x5c, 0x5c, 0x5c, > 0x5c, 0x5c, 0x5c, 0x5c, 0x5c, 0x5c, 0x5c, 0x5c, > 0x5c, 0x5c, 0x5c, 0x5c, 0x5c, 0x5c, 0x5c, 0x5c, > 0x5c, 0x5c, 0x5c, 0x5c, 0x5c, 0x5c, 0x5c, 0x5c, > 0x5c, 0x5c, 0x5c, 0x5c, 0x5c, 0x5c, 0x5c, 0x5c, > 0x5c, 0x5c, 0x5c, 0x5c, 0x5c, 0x5c, 0x5c, 0x5c > }; What's wrong with that, you ask? Let me show you how I'd have done that: > static const unsigned char ssl3_pad_1[] = > { > "66666666" > "66666666" > "66666666" > "66666666" > "66666666" > "66666666" > }; > > static const unsigned char*ssl3_pad_2[] = > { > "\\\\\\\\\\\\\\\\" > "\\\\\\\\\\\\\\\\" > "\\\\\\\\\\\\\\\\" > "\\\\\\\\\\\\\\\\" > "\\\\\\\\\\\\\\\\" > "\\\\\\\\\\\\\\\\" > }; So, no. I don't trust anyone. Especially not this mess of a code. From jb-openssl at wisemo.com Sat Dec 29 15:53:25 2018 From: jb-openssl at wisemo.com (Jakob Bohm) Date: Sat, 29 Dec 2018 16:53:25 +0100 Subject: [openssl-users] Authentication over ECDHE In-Reply-To: <38b97114-0c66-40ed-f631-58aa20940a3a@gmx.de> References: <38b97114-0c66-40ed-f631-58aa20940a3a@gmx.de> Message-ID: <79d66603-fb83-4b69-be4f-2f4641857a95@wisemo.com> On 29/12/2018 14:19, C.Wehrmeyer wrote: > I don't have access to the actual testing environments until Wednesday > next year, so I've had to create a private account. > > > Which version of OpenSSL is this? (I don't remember if you said this > > already). > > I'm not entirely sure, but I *think* it's 1.1.0. > > ===================================================================== > > OK, so I've been reading the mails before going to sleep and spent > some time thinking and researching about this, and I've come to a > conclusion: OpenSSL is a goddamn mess, SSL_clear() is pretty much > superfluous, and as such shouldn't exist. > > Why? Well, to quote Viktor here: > > > DO NOT reuse the same SSL handle for multiple connections, > > And that is fricking bullshit. Not the quote itself or the suggestion > - it's unlikely you had anything to do with the actual code - but the > way things have been thought through (or rather, have not been thought > through) by the library devs. I've written highly scalable libraries > in the past before, and one thing you always want to do there is to > trim fat. And "object allocation and initialisation" is something that > you very much want to trim fat of, not only for obvious reasons such > as malloc() and free() (or whatever OpenSSL uses as wrappers) being > complexity monsters, but also for cache reasons (loading different > cache line hurts performance). 
That's why you usually have functions > like XXX_clear() or XXX_reset(), which do exactly that - prepare an > object for another usage. memset() (or the OpenSSL equivalent of a > secure memset) your allocated resources. I don't really see the > problem here. > > Now add to that the fact that OpenSSL has been moving towards making > its structures opaque, thus falling into the same trap that Microsoft > has with COM and DirectX, and you can kind of see why, if you can't > really determine anymore WHERE your object is going to be stored, you > at least want to keep reusing it. This is not PHP, where people > allocate memory all willy-nilly, or C++, where people don't even have > shame anymore to use std::vector str_array instead of > good old static const char*const str_array[] while expecting things to > be made faster by invisible memory pools (and horribly failing at it), > but C, where you want to think about each step quite carefully. > > Then OpenSSL even provides an SSL_clear function which is advertised > like this: > > > SSL_clear - reset SSL object to allow another connection > > , and then, only later, in a big warning block, decides to tell the > reader that this function only works when the stars align quite > correctly and you've sacrificed at least two virgins, because: > > > The reset operation however keeps several settings of the last > > sessions > > Then, as the documentation suggests, I read the entry for > SSL_get_session: > > > The ssl session contains all information required to re-establish the > > connection without a full handshake for SSL versions up to and > > including TLSv1.2. In TLSv1.3 the same is true, but sessions are > > established after the main handshake has occurred. > > And at this point it all falls apart. From my understanding OpenSSL > keeps a session cache for servers so that key exchanges and protocol > handshakes can be avoided. Problem is, *we're using ECDHE, where the > last E stands for "ephemeral"*. In simple English: throw away the keys > after you're done, we want to have forward secrecy. And then OpenSSL > keeps a fresh copy of those for everyone who happened to be logged on > at this point. Heartbleed apparently wasn't enough of a warning. Oh, > but lets move everything to the heap so that it's more secure there now. > > I don't want to reuse a session with ephemeral keys; I want to reuse > an object that is supposed to already have resources allocated for > doing its job, as is indicated by the documentation of this function > except for a small note at the end that tells you that the devs didn't > really think about what "ephemeral" means. > The session caching in the SSL and TLS protocols is to skip the expensive key exchange when reconnecting within a few seconds, as is extremely common with web browsers opening up to 8 parallel connections to each server. There is hopefully a configuration option to tell the OpenSSL server end SSL_CTX to not do this, just as there should (for multi-process web servers) be an option to hand the state storage over to the web server application for inter-process sharing in whatever the web server application (and its configuration) deems secure. 
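Those configuration options do exist, for what it's worth. A short sketch of both variants described above, using the stock 1.1.x calls; the shared store behind the callback is left as a hypothetical placeholder.

    #include <openssl/ssl.h>

    /* Variant 1: no resumption at all - no session cache and no stateless
     * session tickets. */
    void disable_resumption(SSL_CTX *ctx)
    {
        SSL_CTX_set_session_cache_mode(ctx, SSL_SESS_CACHE_OFF);
        SSL_CTX_set_options(ctx, SSL_OP_NO_TICKET);
    }

    /* Variant 2: keep resumption, but store sessions in application-managed
     * storage (e.g. shared memory for a multi-process server). */
    static int new_session_cb(SSL *ssl, SSL_SESSION *sess)
    {
        /* Serialize with i2d_SSL_SESSION() into the application's store.
         * Return 1 if this callback keeps a reference to sess, 0 otherwise. */
        (void)ssl; (void)sess;
        return 0;
    }

    void external_cache(SSL_CTX *ctx)
    {
        SSL_CTX_set_session_cache_mode(ctx,
            SSL_SESS_CACHE_SERVER | SSL_SESS_CACHE_NO_INTERNAL);
        SSL_CTX_sess_set_new_cb(ctx, new_session_cb);
        /* SSL_CTX_sess_set_get_cb()/SSL_CTX_sess_set_remove_cb() complete the trio. */
    }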
> Creating a new SSL object (EVEN FROM AN EXISTING SSL_CTX object) entails: > > - allocating the memory for the object itself on the heap (via > OPENSSL_zalloc) > - creating and managing a new lock for the object, and who knows for > much more subobjects > - creating a duplicate of the cipher suite stack (which isn't even a > flat copy, but something that can cause the code to call > OPENSSL_malloc *twice* in the worst case) > - creating a duplicate of the certificates (which I don't even use, > but that doesn't stop the code of ssl_cert_dup() to call > OPENSSL_zalloc *in its very first line!*) > - setting up a bunch of callbacks > - copying 32 bytes for a sid_ctx > - creating an X509_VERIFY_PARAM object (*which calls OPENSSL_zalloc > again*) as well as creating a deep copy of the SSL_CTX's parameter via > X509_VERIFY_PARAM_inherit(), with Thor knows how many copies hidden in > all those *set* and *deep_copy* routines > - copying EC point formats from the context - deep again, of course, > at least that's what OPENSSL_memdup() makes me think > - copying supported group informations, and of course deep again! > - deep-copying an ALPN object > - SSL_clear()-ing the object (no, really!) > - deep-copying a CRYPTO_EX_DATA object via CRYPTO_new_ex_data ... at > this point, is anyone surprised here that timing attacks against > crypto are *still* so successful? Because I'm not. Not at all. > > I didn't bother looking up what freeing entails - it's obvious to > anyone at this point that OpenSSL is a severe victim of feature creep, > that its memory allocation scheme is a mess, and long story short: I > will NOT free a perfectly fine object just because of incompetent > devs' chutzpah expecting their users to allocate memory dynamically en > mass for no goddamn reason whenever a new connection comes in. Fix > your goddamn code. > > And don't give me any "trust us, we're experienced programmers" > bullshit. I've *seen* ssl/record/ssl3_record.c: > > > static const unsigned char ssl3_pad_1[48] = { > >???? 0x36, 0x36, 0x36, 0x36, 0x36, 0x36, 0x36, 0x36, > >???? 0x36, 0x36, 0x36, 0x36, 0x36, 0x36, 0x36, 0x36, > >???? 0x36, 0x36, 0x36, 0x36, 0x36, 0x36, 0x36, 0x36, > >???? 0x36, 0x36, 0x36, 0x36, 0x36, 0x36, 0x36, 0x36, > >???? 0x36, 0x36, 0x36, 0x36, 0x36, 0x36, 0x36, 0x36, > >???? 0x36, 0x36, 0x36, 0x36, 0x36, 0x36, 0x36, 0x36 > > }; > > static const unsigned char ssl3_pad_2[48] = { > >???? 0x5c, 0x5c, 0x5c, 0x5c, 0x5c, 0x5c, 0x5c, 0x5c, > >???? 0x5c, 0x5c, 0x5c, 0x5c, 0x5c, 0x5c, 0x5c, 0x5c, > >???? 0x5c, 0x5c, 0x5c, 0x5c, 0x5c, 0x5c, 0x5c, 0x5c, > >???? 0x5c, 0x5c, 0x5c, 0x5c, 0x5c, 0x5c, 0x5c, 0x5c, > >???? 0x5c, 0x5c, 0x5c, 0x5c, 0x5c, 0x5c, 0x5c, 0x5c, > >???? 0x5c, 0x5c, 0x5c, 0x5c, 0x5c, 0x5c, 0x5c, 0x5c > > }; > > What's wrong with that, you ask? Let me show you how I'd have done that: > > > static const unsigned char ssl3_pad_1[] = > > { > >???? "66666666" > >???? "66666666" > >???? "66666666" > >???? "66666666" > >???? "66666666" > >???? "66666666" > > }; > > > > static const unsigned char*ssl3_pad_2[] = > > { > >???? "\\\\\\\\\\\\\\\\" > >???? "\\\\\\\\\\\\\\\\" > >???? "\\\\\\\\\\\\\\\\" > >???? "\\\\\\\\\\\\\\\\" > >???? "\\\\\\\\\\\\\\\\" > >???? "\\\\\\\\\\\\\\\\" > > }; > > So, no. I don't trust anyone. Especially not this mess of a code. Well, these two latter arrays look like a stray copy of the HMAC constants "ipad" and "opad", which (while looking like ASCII), are defined as exact hex constants even on a non-ASCII machine, such as PDP-11 or an IBM mainframe. 
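For reference, 0x36 and 0x5c are indeed the RFC 2104 ipad/opad bytes. Application code never needs to spell them out, because libcrypto's one-shot HMAC() applies that padding internally; a small self-contained example, with an arbitrary key and message:

    #include <openssl/hmac.h>
    #include <openssl/evp.h>

    /* Returns the MAC length (32 for SHA-256) on success, 0 on failure. */
    int mac_example(void)
    {
        const unsigned char key[] = "0123456789abcdef";   /* arbitrary example key */
        const unsigned char msg[] = "attack at dawn";     /* arbitrary example data */
        unsigned char out[EVP_MAX_MD_SIZE];
        unsigned int outlen = 0;

        if (HMAC(EVP_sha256(), key, (int)(sizeof(key) - 1),
                 msg, sizeof(msg) - 1, out, &outlen) == NULL)
            return 0;
        return (int)outlen;
    }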
I wonder if those constants are actually still used somewhere in the SSL3 code, or if they have been properly replaced by calls to the HMAC implementation in libcrypto. Enjoy Jakob -- Jakob Bohm, CIO, Partner, WiseMo A/S. https://www.wisemo.com Transformervej 29, 2860 S?borg, Denmark. Direct +45 31 13 16 10 This public discussion message is non-binding and may contain errors. WiseMo - Remote Service Management for PCs, Phones and Embedded From levitte at openssl.org Sat Dec 29 16:08:46 2018 From: levitte at openssl.org (Richard Levitte) Date: Sat, 29 Dec 2018 17:08:46 +0100 (CET) Subject: [openssl-users] Authentication over ECDHE In-Reply-To: <38b97114-0c66-40ed-f631-58aa20940a3a@gmx.de> References: <38b97114-0c66-40ed-f631-58aa20940a3a@gmx.de> Message-ID: <20181229.170846.804158981742723988.levitte@openssl.org> In message <38b97114-0c66-40ed-f631-58aa20940a3a at gmx.de> on Sat, 29 Dec 2018 14:19:47 +0100, "C.Wehrmeyer" said: > I've written highly scalable libraries in the past before, and one > thing you always want to do there is to trim fat. Sure, but: > Now add to that the fact that OpenSSL has been moving towards making > its structures opaque, thus falling into the same trap that Microsoft > has with COM and DirectX, ... I'm not sure about you, but I have a hard time seeing how one would trim off fat from *public* structures that everyone and their stray cat might be tinkering in. Trimming off fat usually means restructuring the structures, and unless they're opaque, the freedom to do so is severily limited. Mind you, though, that I agree we could do with some cleanup. > And don't give me any "trust us, we're experienced programmers" > bullshit. I've *seen* ssl/record/ssl3_record.c: > > > static const unsigned char ssl3_pad_1[48] = { > > 0x36, 0x36, 0x36, 0x36, 0x36, 0x36, 0x36, 0x36, > > 0x36, 0x36, 0x36, 0x36, 0x36, 0x36, 0x36, 0x36, > > 0x36, 0x36, 0x36, 0x36, 0x36, 0x36, 0x36, 0x36, > > 0x36, 0x36, 0x36, 0x36, 0x36, 0x36, 0x36, 0x36, > > 0x36, 0x36, 0x36, 0x36, 0x36, 0x36, 0x36, 0x36, > > 0x36, 0x36, 0x36, 0x36, 0x36, 0x36, 0x36, 0x36 > > }; > > static const unsigned char ssl3_pad_2[48] = { > > 0x5c, 0x5c, 0x5c, 0x5c, 0x5c, 0x5c, 0x5c, 0x5c, > > 0x5c, 0x5c, 0x5c, 0x5c, 0x5c, 0x5c, 0x5c, 0x5c, > > 0x5c, 0x5c, 0x5c, 0x5c, 0x5c, 0x5c, 0x5c, 0x5c, > > 0x5c, 0x5c, 0x5c, 0x5c, 0x5c, 0x5c, 0x5c, 0x5c, > > 0x5c, 0x5c, 0x5c, 0x5c, 0x5c, 0x5c, 0x5c, 0x5c, > > 0x5c, 0x5c, 0x5c, 0x5c, 0x5c, 0x5c, 0x5c, 0x5c > > }; > > What's wrong with that, you ask? Let me show you how I'd have done > that: > > > static const unsigned char ssl3_pad_1[] = > > { > > "66666666" > > "66666666" > > "66666666" > > "66666666" > > "66666666" > > "66666666" > > }; > > > > static const unsigned char*ssl3_pad_2[] = > > { > > "\\\\\\\\\\\\\\\\" > > "\\\\\\\\\\\\\\\\" > > "\\\\\\\\\\\\\\\\" > > "\\\\\\\\\\\\\\\\" > > "\\\\\\\\\\\\\\\\" > > "\\\\\\\\\\\\\\\\" > > }; > > So, no. I don't trust anyone. Especially not this mess of a code. You do know that your string insert NUL bytes, right? If you have a look at how they're used, you might see why those stray NUL bytes aren't a good thing. Cheers, Richard P.S. as a side note, your message triggered profanity filters. I don't really care, it's not our filters, but this is just to inform you that your rant didn't quite reach everyone (those with profanity filters in place) /postmaster -- Richard Levitte levitte at openssl.org OpenSSL Project http://www.openssl.org/~levitte/ From jeremy.farrell at oracle.com Sat Dec 29 16:21:04 2018 From: jeremy.farrell at oracle.com (J. J. 
Farrell) Date: Sat, 29 Dec 2018 16:21:04 +0000 Subject: [openssl-users] Authentication over ECDHE In-Reply-To: <38b97114-0c66-40ed-f631-58aa20940a3a@gmx.de> References: <38b97114-0c66-40ed-f631-58aa20940a3a@gmx.de> Message-ID: <2d1e570e-b2b4-114b-df73-e2b5e5922fca@oracle.com> On 29/12/2018 13:19, C.Wehrmeyer wrote: > ... Your corrections, improvements and enhancements would be very welcome as pull requests at https://github.com/openssl/openssl - thank you for your contributions. > And don't give me any "trust us, we're experienced programmers" > bullshit. I've *seen* ssl/record/ssl3_record.c: > > > static const unsigned char ssl3_pad_1[48] = { > >???? 0x36, 0x36, 0x36, 0x36, 0x36, 0x36, 0x36, 0x36, > >???? 0x36, 0x36, 0x36, 0x36, 0x36, 0x36, 0x36, 0x36, > >???? 0x36, 0x36, 0x36, 0x36, 0x36, 0x36, 0x36, 0x36, > >???? 0x36, 0x36, 0x36, 0x36, 0x36, 0x36, 0x36, 0x36, > >???? 0x36, 0x36, 0x36, 0x36, 0x36, 0x36, 0x36, 0x36, > >???? 0x36, 0x36, 0x36, 0x36, 0x36, 0x36, 0x36, 0x36 > > }; > > static const unsigned char ssl3_pad_2[48] = { > >???? 0x5c, 0x5c, 0x5c, 0x5c, 0x5c, 0x5c, 0x5c, 0x5c, > >???? 0x5c, 0x5c, 0x5c, 0x5c, 0x5c, 0x5c, 0x5c, 0x5c, > >???? 0x5c, 0x5c, 0x5c, 0x5c, 0x5c, 0x5c, 0x5c, 0x5c, > >???? 0x5c, 0x5c, 0x5c, 0x5c, 0x5c, 0x5c, 0x5c, 0x5c, > >???? 0x5c, 0x5c, 0x5c, 0x5c, 0x5c, 0x5c, 0x5c, 0x5c, > >???? 0x5c, 0x5c, 0x5c, 0x5c, 0x5c, 0x5c, 0x5c, 0x5c > > }; > > What's wrong with that, you ask? Yes, I ask; why not tell us? > Let me show you how I'd have done that: > > > static const unsigned char ssl3_pad_1[] = > > { > >???? "66666666" > >???? "66666666" > >???? "66666666" > >???? "66666666" > >???? "66666666" > >???? "66666666" > > }; > > > > static const unsigned char*ssl3_pad_2[] = > > { > >???? "\\\\\\\\\\\\\\\\" > >???? "\\\\\\\\\\\\\\\\" > >???? "\\\\\\\\\\\\\\\\" > >???? "\\\\\\\\\\\\\\\\" > >???? "\\\\\\\\\\\\\\\\" > >???? "\\\\\\\\\\\\\\\\" > > }; > > So, no. I don't trust anyone. Especially not this mess of a code. So instead of correct portable code which derives obviously and straightforwardly from the specification, you'd write arrays of a different length from the original, the first 48 bytes of which would only be correct in some compilation environments, and even in the cases where those 48 bytes end up correct they have no obvious relationship to the specification they are implementing (your obfuscation making the code much more difficult to review). How are these changes improvements? I'd walk you out of an interview if you offered this as an implementation, let alone as an improvement. For the record, I have nothing to do with any of the code in OpenSSL. -- J. J. Farrell Not speaking for Oracle -------------- next part -------------- An HTML attachment was scrubbed... URL: From c.wehrmeyer at gmx.de Sat Dec 29 17:18:48 2018 From: c.wehrmeyer at gmx.de (C.Wehrmeyer) Date: Sat, 29 Dec 2018 18:18:48 +0100 Subject: [openssl-users] Authentication over ECDHE In-Reply-To: <79d66603-fb83-4b69-be4f-2f4641857a95@wisemo.com> References: <38b97114-0c66-40ed-f631-58aa20940a3a@gmx.de> <79d66603-fb83-4b69-be4f-2f4641857a95@wisemo.com> Message-ID: <144d8508-7218-a1be-7475-db30180cdeea@gmx.de> On 29.12.18 16:53, Jakob Bohm via openssl-users wrote: > The session caching in the SSL and TLS protocols is to skip the > expensive key exchange when reconnecting within a few seconds, > as is extremely common with web browsers opening up to 8 parallel > connections to each server. My outburst was somewhat out of line. 
SSL_clear() is not *completely* superfluous, you're right, but it's incredibly limited. > There is hopefully a configuration option to tell the OpenSSL server > end SSL_CTX to not do this, just as there should (for multi-process > web servers) be an option to hand the state storage over to the web > server application for inter-process sharing in whatever the web > server application (and its configuration) deems secure. Then why doesn't the documentation page of SSL_clear() mention this directly? "If you want to reuse an SSL object, use this function to set some option on the SSL_CTX object". On 29.12.18 17:08, Richard Levitte wrote: > ... I'm not sure about you, but I have a hard time seeing how one > would trim off fat from *public* structures that everyone and their > stray cat might be tinkering in. Trimming off fat usually means > restructuring the structures, and unless they're opaque, the freedom > to do so is severily limited. You're implying that people can't do that anymore. Let me assure you that they still can, you just made it a little harder for people who're really determined to walk outside of the API bounds. On the other hand you've made the normal applications programmers job - which is to know where and when to allocate and free memory - a lot harder. Here I am, having a bunch of objects all sitting in a designated memory area of mine - which I can initialise, reset, and reuse just how I seem fit (not that I want to horribly break SSL objects, I just want to determine where they are stored) - and I can't use them because the OpenSSL devs are working on taking a little bit of power from me that I need in order to help the library do smart things. Like, imagine that I know I'll need: - a context object - a connection object - a BIO object - some X.509 cert object/memory/whatever - and so forth and so on And that not just once, but for a thousand connections. As an application programmer who knows a thing or two about scalable programming I'd be like: OK, that's fantastic. I can mmap the necessary memory, use hugepages, reduce the TLB, and just have all that stuff written on the same chunk without metadata or padding inbetween, which doesn't bloat our D$. Sweet money! And now I can't do that because the devs want me to use their single-object-only creation functions who return already allocated memory to me. I don't get to decide anymore where my objects are written, I don't get to decide what caching objects are used (maybe I don't WANT an X.509 cert object, so I could pass NULL to the function that creates it, or maybe I already HAVE a couple hundred of those lying here, so you can have them ... no? You prefer your structures to be opaque? Oh well). But, you know, it could still be argued that this is safer somehow. *Somehow*. If not ... for the fact that I don't even seem to be able to KEEP the objects OpenSSL created for me quite elaborately. > You do know that your string insert NUL bytes, right? If you have a > look at how they're used, you might see why those stray NUL bytes > aren't a good thing. Yes, I do. See below, I wrote the last part first. (Also, what? Please have a look again, those stray NUL bytes wouldn't have ANY effect, at least not that I see it. One memcpy(), two EVP_DigestUpdate(), and it's always a separately calculated length). > P.S. as a side note, your message triggered profanity filters. 
I > don't really care, it's not our filters, but this is just to inform > you that your rant didn't quite reach everyone (those with profanity > filters in place) > /postmaster It's just that this is so stupid to me. I'm no crypto master, I know that. But I constantly hear about timing attacks and side channels and all that, so I tried to avoid stepping into the pitfalls that other people would do - and then I'm being told it's SUPPOSED to be like that. Come on, please! It's almost as if the devs aren't even trying. On 29.12.18 17:21, J. J. Farrell wrote:> So instead of correct portable code which derives obviously and > straightforwardly from the specification, you'd write arrays of a > different length from the original, the first 48 bytes of which would > only be correct in some compilation environments, and even in the cases > where those 48 bytes end up correct they have no obvious relationship to > the specification they are implementing (your obfuscation making the > code much more difficult to review). How are these changes improvements? Another implication, this time that my code isn't perfectly portable code. There is *one* environment I could think of where this wouldn't be the case - that being Shift JIS environments that tinker with ASCII standard by replacing a backslash with a Japanese half-width Yen sign - however: 1. we'll already have much, MUCH bigger problems if ASCII isn't the encoding the compiler is expecting here, so exchanging 0x5c for '\' is not going to ruin much more here. And it doesn't even matter anyway because any Shift JIS editor would display this as the half-width Yen sign *anyways*. (And that being said, since the main criticism of the Han unification of the Unicode consortium came from the Japanese, I don't care if they're going to throw another fit. They can't even prevent mojibake between mainly Japanese character encodings. At least ISO-8859-1/CP1252 has the excuse of being the most popular encoding in the entire west, so ... whatever. Just let them rail.) 2. to be honest I wouldn't have have this be a static array at all, but rather an exportable pointer and an exportable variable that would hold the string's size minus one. However, if you actually HAD looked at the code as is - which you obviously haven't because you wouldn't have even brought it up then - the size of the array is completely inconsequential in that particular code. That's right: they don't even derive the amounts of bytes to copy from the string directly, but rather just use a constant: > npad = (48 / md_size) * md_size; Oh, you want me to change that? No problem: > #define STRING \ > "xxxxxxxx" \ > "xxxxxxxx" \ > "xxxxxxxx" \ > "xxxxxxxx" \ > "xxxxxxxx" \ > "xxxxxxxx" > > const unsigned char string_length = sizeof(STRING) - 1; > const char*string = STRING; > > npad = (string_length / md_size) * md_size; Hell, I could even create a macro for this so that I don't even need the explicit definition of STRING here. It's not as if OpenSSL shies away from the concept of using macros to auto-generate a plethora of symbols (I'm looking at include/openssl/crypto.h right now). > I'd walk you out of an interview if you offered this as an > implementation, let alone as an improvement. Don't worry, I'd fire you on the spot if you had checked in the existing code, so I'll call it quits. From jeremy.farrell at oracle.com Sat Dec 29 18:47:46 2018 From: jeremy.farrell at oracle.com (J. J. 
Farrell) Date: Sat, 29 Dec 2018 18:47:46 +0000 Subject: [openssl-users] Authentication over ECDHE In-Reply-To: <144d8508-7218-a1be-7475-db30180cdeea@gmx.de> References: <38b97114-0c66-40ed-f631-58aa20940a3a@gmx.de> <79d66603-fb83-4b69-be4f-2f4641857a95@wisemo.com> <144d8508-7218-a1be-7475-db30180cdeea@gmx.de> Message-ID: <06f4aff1-a572-233e-6558-9a04de1c4dd5@oracle.com> On 29/12/2018 17:18, C.Wehrmeyer wrote: > On 29.12.18 17:21, J. J. Farrell wrote:> So instead of correct > portable code which derives obviously and > > straightforwardly from the specification, you'd write arrays of a > > different length from the original, the first 48 bytes of which would > > only be correct in some compilation environments, and even in the cases > > where those 48 bytes end up correct they have no obvious > relationship to > > the specification they are implementing (your obfuscation making the > > code much more difficult to review). How are these changes improvements? > Another implication, this time that my code isn't perfectly portable > code. There is *one* environment I could think of where this wouldn't > be the case - that being Shift JIS environments that tinker with ASCII > standard by replacing a backslash with a Japanese half-width Yen sign > - however: > > 1. we'll already have much, MUCH bigger problems if ASCII isn't the > encoding the compiler is expecting here, so exchanging 0x5c for '\' is > not going to ruin much more here. And it doesn't even matter anyway > because any Shift JIS editor would display this as ... You don't explain the benefits of coding a requirement for "the byte 0x5C repeated 48 times" as a string of back-slash characters instead of, well, the byte 0x5C repeated 48 times. What are the benefits, and how do they outweigh the ability to compare more easily against the requirement? You don't explain the benefits of writing non-portable code where that code is very widely deployed in environments of which you have no knowledge (and which don't even exist at the time of writing it), without even a comment specifying the portability restrictions you are imposing, when it's just as easy to write the code portably and not need to think about such restrictions. What are the benefits, and how do they outweigh the obvious disadvantages? -- J. J. Farrell Not speaking for Oracle From openssl-users at dukhovni.org Sat Dec 29 20:32:09 2018 From: openssl-users at dukhovni.org (Viktor Dukhovni) Date: Sat, 29 Dec 2018 15:32:09 -0500 Subject: [openssl-users] Authentication over ECDHE In-Reply-To: <38b97114-0c66-40ed-f631-58aa20940a3a@gmx.de> References: <38b97114-0c66-40ed-f631-58aa20940a3a@gmx.de> Message-ID: <79292995-7008-4062-B0A0-208B60B7C75F@dukhovni.org> > On Dec 29, 2018, at 8:19 AM, C.Wehrmeyer wrote: > > OK, so I've been reading the mails before going to sleep and spent some time thinking and researching about this, and I've come to a conclusion: OpenSSL is a goddamn mess, SSL_clear() is pretty much superfluous, and as such shouldn't exist. > > Why? Well, to quote Viktor here: > > > DO NOT reuse the same SSL handle for multiple connections, I said it, neither because it can't be done, nor because it is incompatible with session caching, or has anything to do with ephemeral key agreement (which works just fine even with session resumption), but simply because it is easier for a beginner to get the code working without SSL handle re-use. 
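For anyone skimming the archive for the shape of the code being recommended here: one long-lived SSL_CTX, one short-lived SSL per accepted connection. A minimal blocking-socket sketch, error handling trimmed:

    #include <openssl/ssl.h>

    /* ctx is created once at startup; fd is an already-accepted TCP socket. */
    void serve_one(SSL_CTX *ctx, int fd)
    {
        SSL *ssl = SSL_new(ctx);          /* cheap relative to the handshake */
        if (ssl == NULL)
            return;
        SSL_set_fd(ssl, fd);
        if (SSL_accept(ssl) == 1) {
            char buf[4096];
            int n = SSL_read(ssl, buf, sizeof(buf));
            if (n > 0)
                SSL_write(ssl, buf, n);   /* echo, just to have a body here */
            SSL_shutdown(ssl);
        }
        SSL_free(ssl);                    /* discard rather than SSL_clear() */
    }

The SSL_CTX carries the expensive, shared state (certificates, cipher lists, the session cache); the per-connection SSL handle is comparatively cheap to create and free.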
Once you have you everything else working, and have become more adept with use of the library, you can add connection handle re-use and measure the performance impact. If it makes a significant difference, then invest in maintaining slightly more complex code to get the advantage. That's all I can offer in light of the bellicose rant, ... :-( Good luck. -- Viktor. From filipe.mfgfernandes at gmail.com Sat Dec 29 20:39:52 2018 From: filipe.mfgfernandes at gmail.com (Filipe Fernandes) Date: Sat, 29 Dec 2018 20:39:52 +0000 Subject: [openssl-users] Authentication over ECDHE In-Reply-To: <144d8508-7218-a1be-7475-db30180cdeea@gmx.de> References: <38b97114-0c66-40ed-f631-58aa20940a3a@gmx.de> <79d66603-fb83-4b69-be4f-2f4641857a95@wisemo.com> <144d8508-7218-a1be-7475-db30180cdeea@gmx.de> Message-ID: You really have no idea how to code. You look like one of those junior engineers that think they know it all. I won't be replying again, so don't need to get your hopes up. Na(o) s?bado, 29 de dez de 2018, 17:19, C.Wehrmeyer escreveu: > On 29.12.18 16:53, Jakob Bohm via openssl-users wrote: > > The session caching in the SSL and TLS protocols is to skip the > > expensive key exchange when reconnecting within a few seconds, > > as is extremely common with web browsers opening up to 8 parallel > > connections to each server. > > My outburst was somewhat out of line. SSL_clear() is not *completely* > superfluous, you're right, but it's incredibly limited. > > > There is hopefully a configuration option to tell the OpenSSL server > > end SSL_CTX to not do this, just as there should (for multi-process > > web servers) be an option to hand the state storage over to the web > > server application for inter-process sharing in whatever the web > > server application (and its configuration) deems secure. > > Then why doesn't the documentation page of SSL_clear() mention this > directly? "If you want to reuse an SSL object, use this function to set > some option on the SSL_CTX object". > > On 29.12.18 17:08, Richard Levitte wrote: > > ... I'm not sure about you, but I have a hard time seeing how one > > would trim off fat from *public* structures that everyone and their > > stray cat might be tinkering in. Trimming off fat usually means > > restructuring the structures, and unless they're opaque, the freedom > > to do so is severily limited. > > You're implying that people can't do that anymore. Let me assure you > that they still can, you just made it a little harder for people who're > really determined to walk outside of the API bounds. > > On the other hand you've made the normal applications programmers job - > which is to know where and when to allocate and free memory - a lot > harder. Here I am, having a bunch of objects all sitting in a designated > memory area of mine - which I can initialise, reset, and reuse just how > I seem fit (not that I want to horribly break SSL objects, I just want > to determine where they are stored) - and I can't use them because the > OpenSSL devs are working on taking a little bit of power from me that I > need in order to help the library do smart things. > > Like, imagine that I know I'll need: > > - a context object > - a connection object > - a BIO object > - some X.509 cert object/memory/whatever > - and so forth and so on > > And that not just once, but for a thousand connections. As an > application programmer who knows a thing or two about scalable > programming I'd be like: OK, that's fantastic. 
I can mmap the necessary > memory, use hugepages, reduce the TLB, and just have all that stuff > written on the same chunk without metadata or padding inbetween, which > doesn't bloat our D$. Sweet money! > > And now I can't do that because the devs want me to use their > single-object-only creation functions who return already allocated > memory to me. I don't get to decide anymore where my objects are > written, I don't get to decide what caching objects are used (maybe I > don't WANT an X.509 cert object, so I could pass NULL to the function > that creates it, or maybe I already HAVE a couple hundred of those lying > here, so you can have them ... no? You prefer your structures to be > opaque? Oh well). > > But, you know, it could still be argued that this is safer somehow. > *Somehow*. If not ... for the fact that I don't even seem to be able to > KEEP the objects OpenSSL created for me quite elaborately. > > > You do know that your string insert NUL bytes, right? If you have a > > look at how they're used, you might see why those stray NUL bytes > > aren't a good thing. > > Yes, I do. See below, I wrote the last part first. > > (Also, what? Please have a look again, those stray NUL bytes wouldn't > have ANY effect, at least not that I see it. One memcpy(), two > EVP_DigestUpdate(), and it's always a separately calculated length). > > > P.S. as a side note, your message triggered profanity filters. I > > don't really care, it's not our filters, but this is just to inform > > you that your rant didn't quite reach everyone (those with profanity > > filters in place) > > /postmaster > > It's just that this is so stupid to me. I'm no crypto master, I know > that. But I constantly hear about timing attacks and side channels and > all that, so I tried to avoid stepping into the pitfalls that other > people would do - and then I'm being told it's SUPPOSED to be like that. > Come on, please! It's almost as if the devs aren't even trying. > > On 29.12.18 17:21, J. J. Farrell wrote:> So instead of correct portable > code which derives obviously and > > straightforwardly from the specification, you'd write arrays of a > > different length from the original, the first 48 bytes of which would > > only be correct in some compilation environments, and even in the cases > > where those 48 bytes end up correct they have no obvious relationship to > > the specification they are implementing (your obfuscation making the > > code much more difficult to review). How are these changes improvements? > Another implication, this time that my code isn't perfectly portable > code. There is *one* environment I could think of where this wouldn't be > the case - that being Shift JIS environments that tinker with ASCII > standard by replacing a backslash with a Japanese half-width Yen sign - > however: > > 1. we'll already have much, MUCH bigger problems if ASCII isn't the > encoding the compiler is expecting here, so exchanging 0x5c for '\' is > not going to ruin much more here. And it doesn't even matter anyway > because any Shift JIS editor would display this as the half-width Yen > sign *anyways*. (And that being said, since the main criticism of the > Han unification of the Unicode consortium came from the Japanese, I > don't care if they're going to throw another fit. They can't even > prevent mojibake between mainly Japanese character encodings. At least > ISO-8859-1/CP1252 has the excuse of being the most popular encoding in > the entire west, so ... whatever. Just let them rail.) > 2. 
to be honest I wouldn't have have this be a static array at all, but > rather an exportable pointer and an exportable variable that would hold > the string's size minus one. However, if you actually HAD looked at the > code as is - which you obviously haven't because you wouldn't have even > brought it up then - the size of the array is completely inconsequential > in that particular code. That's right: they don't even derive the > amounts of bytes to copy from the string directly, but rather just use a > constant: > > > npad = (48 / md_size) * md_size; > > Oh, you want me to change that? No problem: > > > #define STRING \ > > "xxxxxxxx" \ > > "xxxxxxxx" \ > > "xxxxxxxx" \ > > "xxxxxxxx" \ > > "xxxxxxxx" \ > > "xxxxxxxx" > > > > const unsigned char string_length = sizeof(STRING) - 1; > > const char*string = STRING; > > > > npad = (string_length / md_size) * md_size; > > Hell, I could even create a macro for this so that I don't even need the > explicit definition of STRING here. It's not as if OpenSSL shies away > from the concept of using macros to auto-generate a plethora of symbols > (I'm looking at include/openssl/crypto.h right now). > > > I'd walk you out of an interview if you offered this as an > > implementation, let alone as an improvement. > > Don't worry, I'd fire you on the spot if you had checked in the existing > code, so I'll call it quits. > -- > openssl-users mailing list > To unsubscribe: https://mta.openssl.org/mailman/listinfo/openssl-users > -------------- next part -------------- An HTML attachment was scrubbed... URL: From rsalz at akamai.com Sat Dec 29 21:02:43 2018 From: rsalz at akamai.com (Salz, Rich) Date: Sat, 29 Dec 2018 21:02:43 +0000 Subject: [openssl-users] How can I compile nginx with openssl to support 0-rtt TLS1.3 In-Reply-To: <5c27174d.1c69fb81.8adc8.2837@mx.google.com> References: <5c27174d.1c69fb81.8adc8.2837@mx.google.com> Message-ID: <907C21DE-CC34-4EA9-ACF4-8852EEB00690@akamai.com> * But I only get early data for get method. * When using post method, the server terminate connection. Is it related with openssl? If so, how can I do to allow post method? Early data can be replayed. It is only safe to use early data when the request is idempotent, like GET. You might find https://tools.ietf.org/html/rfc8470 useful reading. -------------- next part -------------- An HTML attachment was scrubbed... URL: From rsalz at akamai.com Sat Dec 29 21:27:58 2018 From: rsalz at akamai.com (Salz, Rich) Date: Sat, 29 Dec 2018 21:27:58 +0000 Subject: [openssl-users] Authentication over ECDHE In-Reply-To: <79d66603-fb83-4b69-be4f-2f4641857a95@wisemo.com> References: <38b97114-0c66-40ed-f631-58aa20940a3a@gmx.de> <79d66603-fb83-4b69-be4f-2f4641857a95@wisemo.com> Message-ID: <155EF66E-FFF7-480B-AA09-387F87D3787B@akamai.com> > I didn't bother looking up what freeing entails - it's obvious to > anyone at this point that OpenSSL is a severe victim of feature creep, > that its memory allocation scheme is a mess, and long story short: I > will NOT free a perfectly fine object just because of incompetent > devs' chutzpah expecting their users to allocate memory dynamically en > mass for no goddamn reason whenever a new connection comes in. Fix > your goddamn code. Might I suggest that you fix your attitude? An insult and invective-filled polemic does no good. Perhaps you might find another library more to your liking; there are many available now. 
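Returning briefly to the early-data point a few messages up: a server that does enable 0-RTT can still let the application distinguish replayable requests and refuse the non-idempotent ones, which is what RFC 8470's 425 (Too Early) status is for. A sketch of the check, assuming OpenSSL 1.1.1; send_http_status() is a hypothetical helper, not an OpenSSL or nginx API.

    #include <openssl/ssl.h>

    /* Returns 1 if the data read on this connection arrived as accepted
     * 0-RTT early data, 0 otherwise. */
    static int request_was_early_data(const SSL *ssl)
    {
        return SSL_get_early_data_status(ssl) == SSL_EARLY_DATA_ACCEPTED;
    }

    /* A server could then do, for example:
     *     if (is_post_request && request_was_early_data(ssl))
     *         send_http_status(conn, 425);   // hypothetical helper
     */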
From levitte at openssl.org Sat Dec 29 21:33:53 2018 From: levitte at openssl.org (Richard Levitte) Date: Sat, 29 Dec 2018 22:33:53 +0100 (CET) Subject: [openssl-users] Authentication over ECDHE In-Reply-To: <20181229.170846.804158981742723988.levitte@openssl.org> References: <38b97114-0c66-40ed-f631-58aa20940a3a@gmx.de> <20181229.170846.804158981742723988.levitte@openssl.org> Message-ID: <20181229.223353.2252472956067207821.levitte@openssl.org> In message <20181229.170846.804158981742723988.levitte at openssl.org> on Sat, 29 Dec 2018 17:08:46 +0100 (CET), Richard Levitte said: > In message <38b97114-0c66-40ed-f631-58aa20940a3a at gmx.de> on Sat, 29 Dec 2018 14:19:47 +0100, "C.Wehrmeyer" said: > ... > > What's wrong with that, you ask? Let me show you how I'd have done > > that: > > > > > static const unsigned char ssl3_pad_1[] = > > > { > > > "66666666" > > > "66666666" > > > "66666666" > > > "66666666" > > > "66666666" > > > "66666666" > > > }; > > > > > > static const unsigned char*ssl3_pad_2[] = > > > { > > > "\\\\\\\\\\\\\\\\" > > > "\\\\\\\\\\\\\\\\" > > > "\\\\\\\\\\\\\\\\" > > > "\\\\\\\\\\\\\\\\" > > > "\\\\\\\\\\\\\\\\" > > > "\\\\\\\\\\\\\\\\" > > > }; > > > > So, no. I don't trust anyone. Especially not this mess of a code. > > You do know that your string insert NUL bytes, right? If you have a > look at how they're used, you might see why those stray NUL bytes > aren't a good thing. Never mind this remark... For some reason, my brain added commas after each partial string. Meh... -- Richard Levitte levitte at openssl.org OpenSSL Project http://www.openssl.org/~levitte/ From levitte at openssl.org Sat Dec 29 21:38:01 2018 From: levitte at openssl.org (Richard Levitte) Date: Sat, 29 Dec 2018 22:38:01 +0100 (CET) Subject: [openssl-users] Authentication over ECDHE In-Reply-To: References: <79d66603-fb83-4b69-be4f-2f4641857a95@wisemo.com> <144d8508-7218-a1be-7475-db30180cdeea@gmx.de> Message-ID: <20181229.223801.1721745516640956271.levitte@openssl.org> When we're starting to stoop to this level, I think it's time to step away from the screen and take a few deep breaths... or maybe even go away and take a nap, go for a walk, or something else. Then, perhaps come back in a better mood. Cheers, Richard ( am off to sleep, it's getting late over here ) In message on Sat, 29 Dec 2018 20:39:52 +0000, Filipe Fernandes said: > You really have no idea how to code. You look like one of those junior engineers that think they > know it all. > > I won't be replying again, so don't need to get your hopes up. > > Na(o) s?bado, 29 de dez de 2018, 17:19, C.Wehrmeyer escreveu: > > On 29.12.18 16:53, Jakob Bohm via openssl-users wrote: > > The session caching in the SSL and TLS protocols is to skip the > > expensive key exchange when reconnecting within a few seconds, > > as is extremely common with web browsers opening up to 8 parallel > > connections to each server. > > My outburst was somewhat out of line. SSL_clear() is not *completely* > superfluous, you're right, but it's incredibly limited. > > > There is hopefully a configuration option to tell the OpenSSL server > > end SSL_CTX to not do this, just as there should (for multi-process > > web servers) be an option to hand the state storage over to the web > > server application for inter-process sharing in whatever the web > > server application (and its configuration) deems secure. > > Then why doesn't the documentation page of SSL_clear() mention this > directly? 
"If you want to reuse an SSL object, use this function to set > some option on the SSL_CTX object". > > On 29.12.18 17:08, Richard Levitte wrote: > > ... I'm not sure about you, but I have a hard time seeing how one > > would trim off fat from *public* structures that everyone and their > > stray cat might be tinkering in. Trimming off fat usually means > > restructuring the structures, and unless they're opaque, the freedom > > to do so is severily limited. > > You're implying that people can't do that anymore. Let me assure you > that they still can, you just made it a little harder for people who're > really determined to walk outside of the API bounds. > > On the other hand you've made the normal applications programmers job - > which is to know where and when to allocate and free memory - a lot > harder. Here I am, having a bunch of objects all sitting in a designated > memory area of mine - which I can initialise, reset, and reuse just how > I seem fit (not that I want to horribly break SSL objects, I just want > to determine where they are stored) - and I can't use them because the > OpenSSL devs are working on taking a little bit of power from me that I > need in order to help the library do smart things. > > Like, imagine that I know I'll need: > > - a context object > - a connection object > - a BIO object > - some X.509 cert object/memory/whatever > - and so forth and so on > > And that not just once, but for a thousand connections. As an > application programmer who knows a thing or two about scalable > programming I'd be like: OK, that's fantastic. I can mmap the necessary > memory, use hugepages, reduce the TLB, and just have all that stuff > written on the same chunk without metadata or padding inbetween, which > doesn't bloat our D$. Sweet money! > > And now I can't do that because the devs want me to use their > single-object-only creation functions who return already allocated > memory to me. I don't get to decide anymore where my objects are > written, I don't get to decide what caching objects are used (maybe I > don't WANT an X.509 cert object, so I could pass NULL to the function > that creates it, or maybe I already HAVE a couple hundred of those lying > here, so you can have them ... no? You prefer your structures to be > opaque? Oh well). > > But, you know, it could still be argued that this is safer somehow. > *Somehow*. If not ... for the fact that I don't even seem to be able to > KEEP the objects OpenSSL created for me quite elaborately. > > > You do know that your string insert NUL bytes, right? If you have a > > look at how they're used, you might see why those stray NUL bytes > > aren't a good thing. > > Yes, I do. See below, I wrote the last part first. > > (Also, what? Please have a look again, those stray NUL bytes wouldn't > have ANY effect, at least not that I see it. One memcpy(), two > EVP_DigestUpdate(), and it's always a separately calculated length). > > > P.S. as a side note, your message triggered profanity filters. I > > don't really care, it's not our filters, but this is just to inform > > you that your rant didn't quite reach everyone (those with profanity > > filters in place) > > /postmaster > > It's just that this is so stupid to me. I'm no crypto master, I know > that. But I constantly hear about timing attacks and side channels and > all that, so I tried to avoid stepping into the pitfalls that other > people would do - and then I'm being told it's SUPPOSED to be like that. > Come on, please! 
It's almost as if the devs aren't even trying. > > On 29.12.18 17:21, J. J. Farrell wrote:> So instead of correct portable > code which derives obviously and > > straightforwardly from the specification, you'd write arrays of a > > different length from the original, the first 48 bytes of which would > > only be correct in some compilation environments, and even in the cases > > where those 48 bytes end up correct they have no obvious relationship to > > the specification they are implementing (your obfuscation making the > > code much more difficult to review). How are these changes improvements? > Another implication, this time that my code isn't perfectly portable > code. There is *one* environment I could think of where this wouldn't be > the case - that being Shift JIS environments that tinker with ASCII > standard by replacing a backslash with a Japanese half-width Yen sign - > however: > > 1. we'll already have much, MUCH bigger problems if ASCII isn't the > encoding the compiler is expecting here, so exchanging 0x5c for '\' is > not going to ruin much more here. And it doesn't even matter anyway > because any Shift JIS editor would display this as the half-width Yen > sign *anyways*. (And that being said, since the main criticism of the > Han unification of the Unicode consortium came from the Japanese, I > don't care if they're going to throw another fit. They can't even > prevent mojibake between mainly Japanese character encodings. At least > ISO-8859-1/CP1252 has the excuse of being the most popular encoding in > the entire west, so ... whatever. Just let them rail.) > 2. to be honest I wouldn't have have this be a static array at all, but > rather an exportable pointer and an exportable variable that would hold > the string's size minus one. However, if you actually HAD looked at the > code as is - which you obviously haven't because you wouldn't have even > brought it up then - the size of the array is completely inconsequential > in that particular code. That's right: they don't even derive the > amounts of bytes to copy from the string directly, but rather just use a > constant: > > > npad = (48 / md_size) * md_size; > > Oh, you want me to change that? No problem: > > > #define STRING \ > > "xxxxxxxx" \ > > "xxxxxxxx" \ > > "xxxxxxxx" \ > > "xxxxxxxx" \ > > "xxxxxxxx" \ > > "xxxxxxxx" > > > > const unsigned char string_length = sizeof(STRING) - 1; > > const char*string = STRING; > > > > npad = (string_length / md_size) * md_size; > > Hell, I could even create a macro for this so that I don't even need the > explicit definition of STRING here. It's not as if OpenSSL shies away > from the concept of using macros to auto-generate a plethora of symbols > (I'm looking at include/openssl/crypto.h right now). > > > I'd walk you out of an interview if you offered this as an > > implementation, let alone as an improvement. > > Don't worry, I'd fire you on the spot if you had checked in the existing > code, so I'll call it quits. 
> -- > openssl-users mailing list > To unsubscribe: https://mta.openssl.org/mailman/listinfo/openssl-users > From c.wehrmeyer at gmx.de Sat Dec 29 22:08:12 2018 From: c.wehrmeyer at gmx.de (C.Wehrmeyer) Date: Sat, 29 Dec 2018 23:08:12 +0100 Subject: [openssl-users] Authentication over ECDHE In-Reply-To: References: <38b97114-0c66-40ed-f631-58aa20940a3a@gmx.de> <79d66603-fb83-4b69-be4f-2f4641857a95@wisemo.com> <144d8508-7218-a1be-7475-db30180cdeea@gmx.de> Message-ID: <1d66ccc0-87ac-82ad-818d-f71bf1e9f811@gmx.de> On 29.12.18 21:32, Viktor Dukhovni wrote: > I said it, neither because it can't be done, nor because it is > incompatible with session caching, or has anything to do with > ephemeral key agreement (which works just fine even with > session resumption), but simply because it is easier for a > beginner to get the code working without SSL handle re-use. OK, now just hold on a sec here. 1. Your complete statement was: > DO NOT reuse the same SSL handle for multiple connections, create a > new one for subsequent connections, but you can and generally should > reuse the SSL_CTX. Previously I had stated that client and server already stand pretty much, and that this is about the finishing touches. Like in, the finishing touches where I'd test what happens if the PSKs mismatch, and see the result of what's happening there. I had already established at this point that my code works if the PSKs DO match. Why is that important? Well, because that would've been a *perfect point in time* for you to mention that it's indeed possible to reuse a handle without recreation. Hell, such a thing would've been perfect *in the documentation page of SSL_clear(), where people would first go to read up on that*. I'd know they do. I did so. 2. I never said ephemeral key agreement would NOT work with session resumption. To quote the documentation of SSL_clear() again: > The reset operation however keeps several settings of the last > sessions (some of these settings were made automatically during the > last handshake). And when I hear TLS resumption, then I don't just hear this: > https://svs.informatik.uni-hamburg.de/publications/2018/2018-12-06-Sy-ACSAC-Tracking_Users_across_the_Web_via_TLS_Session_Resumption.pdf No, I also hear "My keys are not being renegotiated". Not the case? Then this is a thing that belongs into the documentation of SSL_clear(): "For ephemeral key ciphers renegotiates those, so that a different key is being used henceforth". I mean, come on. This is what documentation is supposed to be made for, isn't it? > Once you have you everything else working Well, what else could I have left working on that doesn't involve the transport layer? Because that's the main issue right now. Application protocol isn't a problem. Connection to the database server isn't a problem. Loading a 4096-bit Diffie-Hellman prime in order to prevent Logjam isn't a problem (also the API to make OpenSSL use that one is from the same universe in which Spock has a beard). > and have become more adept with use of the library How am I supposed to get more adept when the documentation is a literal mess? SSL_clear() doesn't mention stuff it's supposed to mention. BIO_new_socket has had a "TBA" in its "SEE ALSO" block seemingly ever since 1.0.2 came out, which was January 2015. Let me reverse that: What is the *point* of getting more adept with the API when I feel more and more disgusted by learning how it's working internally? Why should I bother? 
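For anyone landing here from a search: the pattern being recommended in this thread is one long-lived SSL_CTX plus a fresh SSL per connection. A minimal sketch, error handling trimmed, assuming fd is an already-connected TCP socket:

    #include <openssl/ssl.h>

    /* One SSL_CTX for the process, one cheap SSL object per connection. */
    int talk_once(SSL_CTX *ctx, int fd)
    {
        SSL *ssl = SSL_new(ctx);          /* inherits settings from ctx */
        if (ssl == NULL)
            return -1;

        SSL_set_fd(ssl, fd);
        if (SSL_connect(ssl) != 1) {      /* full or resumed handshake */
            SSL_free(ssl);
            return -1;
        }

        /* ... SSL_read() / SSL_write() ... */

        SSL_shutdown(ssl);
        SSL_free(ssl);                    /* discard the handle, keep ctx */
        return 0;
    }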
The more *adept* I become the more I want to switch to something else, I can tell you that. > If it makes a significant difference, then invest in maintaining > slightly more complex code to get the advantage. Why don't you make it easy for people to use your API correctly right from the start, then? And that includes, and is not exclusive, to startup code as well. Do you know how often I've seen people out there use ERR_load_crypto_strings(), ERR_load_SSL_strings(), OpenSSL_add_all_algorithms(), or SSL_library_init()? And that's also including SSL object reuse. You cannot tell me I'm the first one who, in wise precognition of how ugly object initialisation and release code can be, thought of reusing their SSL objects? And that the devs never said at one point: "You know what, we'll make this black magic *easy* to use! Like SSL_clear(), but properly!"? And of course you're not giving any hints as to WHAT I could look up. Are there references, example codes, anything I could read up on? Specific google search words? Links? Nah. Nothing. It's weekend, let's go shopping! > That's all I can offer in light of the bellicose rant You're not getting the point, are you? I've been trying to do my homework, *much* more than what most of the people I know and work with would have considered acceptable. I've read about ciphers, their advantages and disadvantages, key exchange crypto. I got some things wrong. I learnt about them. I tried to implement them. If someone goes out of their *way* to spend their time familiarising themselves with the library, the documentation, the very code that runs things - do you think I pulled that list of stuff SSL_new() does out of my rectum? - and you do not tell them "Don't do X even though X is possible, and I could've told you a couple times now that X is possible even though our documentation is mute about this" - then what you're basically saying is "F*ck you and your face". Try to understand me here. I'm trying to get this done, trying to improve here. I've said several times I ain't got no clue about crypto, but apparently I'm still trying, aren't I? On 29.12.18 22:33, Richard Levitte wrote: > Never mind this remark... For some reason, my brain added commas > after each partial string. Meh... Already forgiven. On 29.12.18 22:27, Salz, Rich via openssl-users wrote: > Might I suggest that you fix your attitude? Might I suggest to focus your attention to something that can still be fixed (and that is arguably much more important), rather than a personality that everyone thus far has given up on? Good crypto is infrastructure, and the envelope of the message shouldn't deter anyone from the actual message - that there is something rotten in the state of Denmark. And if I may say so - there's a lot more message to ponder about than envelope. > An insult and invective-filled polemic does no good. I was not aware of insults. I used strong adjectives and called people incompetent, although I find it hard to call it *polemic* seeing at my arguments far outdid my ranting. It was not my intent to insult anyone specifically. > Perhaps you might find another library more to your liking; there are > many available now. So there are other libs that are: - receiving frequent updates - already somewhat old, and as such, well-hung - are being wildly used - support all sorts of ciphers (interesting for later projects) - are written in C (compatibility, and better control against timing attacks) ? 
I mean, the first other library that comes to my mind is BoringSSL, and they even state in their second paragraph of their project side: > Although BoringSSL is an open source project, it is not intended for > general use, as OpenSSL is. We don't recommend that third parties > depend upon it. Doing so is likely to be frustrating because there are > no guarantees of API or ABI stability. On 29.12.18 21:39, Filipe Fernandes wrote: > You really have no idea how to code. You look like one of those junior > engineers that think they know it all. - writes to the topic for the first time - has shown no code - has shown no sign that he knows what is being discussed - has shown no argument against my points - literally only pops in once in order to shoot an ad hominem attack without further explanation, without any substance, without anything It kind of makes one wonder why you felt the need to get this off your chest - I mean, you could've addressed any of the arguments I've made, but instead you did ... whatever this is supposed to be ... and then ran away because you can't stand the echo: > I won't be replying again, so don't need to get your hopes up. Oh, no. I will not receive another trivial attack against my personality without any arguments to back them up other than "But I don't like your tone"? What ever will I do now? How will I keep on going? In all seriousness, you make it sound as if you should be too old for this kind of behaviour, and then you show *exactly* that kind of behaviour. Which makes me wonder if you're not too old for this BS. From quantumgleam at gmail.com Sat Dec 29 23:04:19 2018 From: quantumgleam at gmail.com (Matt Milosevic) Date: Sun, 30 Dec 2018 00:04:19 +0100 Subject: [openssl-users] Authentication over ECDHE In-Reply-To: <1d66ccc0-87ac-82ad-818d-f71bf1e9f811@gmx.de> References: <38b97114-0c66-40ed-f631-58aa20940a3a@gmx.de> <79d66603-fb83-4b69-be4f-2f4641857a95@wisemo.com> <144d8508-7218-a1be-7475-db30180cdeea@gmx.de> <1d66ccc0-87ac-82ad-818d-f71bf1e9f811@gmx.de> Message-ID: I do not want to complicate matters further, but there needs to be one thing clear here: this library is mainly developed and maintained by /volunteers/. They're putting in time and effort to improve the state of the crypto ecosystem, and they seem to be doing a damn good job at it, as even you, yourself, said that (paraphrasing massively) this is the best choice of crypto library out there. I am not one of those "it doesn't matter what you say, but how you say it" people, but the way you have presented your (valid or invalid; I am not one to say) arguments has changed the perception of yourself in the eyes of quite a few people (by mailing list metrics) thus far. I'd bet that you'd have a much better time getting your points across if you at least tried to be more civil in your arguments, and that you'd attract many more useful and free replies, as opposed to the ones trying to avoid further conflict you're getting now. Lastly, you are definitely free to submit patches and file issues for anything you deem wrong, bad, insecure or unacceptable in the codebase; it is a much more constructive way of helping both yourself and people who might run into the same problems later down the line. Regards, Matt -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From openssl at jordan.maileater.net Sat Dec 29 23:22:27 2018 From: openssl at jordan.maileater.net (Jordan Brown) Date: Sat, 29 Dec 2018 23:22:27 +0000 Subject: [openssl-users] Authentication over ECDHE In-Reply-To: <79d66603-fb83-4b69-be4f-2f4641857a95@wisemo.com> References: <38b97114-0c66-40ed-f631-58aa20940a3a@gmx.de> <79d66603-fb83-4b69-be4f-2f4641857a95@wisemo.com> Message-ID: <01010167fc46a668-92ef10f5-fdf5-43c8-8bff-b465161aadd7-000000@us-west-2.amazonses.com> On 12/29/2018 7:53 AM, Jakob Bohm via openssl-users wrote: > Well, these two latter arrays look like a stray copy of the HMAC > constants "ipad" and "opad", which (while looking like ASCII), are > defined as exact hex constants even on a non-ASCII machine, such > as PDP-11 or an IBM mainframe. PDP-11 used ASCII.? So did all of the PDP series, though some used a six-bit (no lowercase) variant for some purposes. -- Jordan Brown, Oracle Solaris -------------- next part -------------- An HTML attachment was scrubbed... URL: From tincanteksup at gmail.com Sun Dec 30 02:40:48 2018 From: tincanteksup at gmail.com (tincanteksup) Date: Sun, 30 Dec 2018 02:40:48 +0000 Subject: [openssl-users] Authentication over ECDHE In-Reply-To: <1d66ccc0-87ac-82ad-818d-f71bf1e9f811@gmx.de> References: <38b97114-0c66-40ed-f631-58aa20940a3a@gmx.de> <79d66603-fb83-4b69-be4f-2f4641857a95@wisemo.com> <144d8508-7218-a1be-7475-db30180cdeea@gmx.de> <1d66ccc0-87ac-82ad-818d-f71bf1e9f811@gmx.de> Message-ID: <04e8446f-35fb-dc21-ff34-ea64f6dae0c9@gmail.com> On 29/12/2018 22:08, C.Wehrmeyer wrote: > How am I supposed to get more adept when the documentation is a literal > mess? > Let me reverse that: What is the *point* of getting more adept with the > API when I feel more and more disgusted by learning how it's working > internally? Welcome to The Jungle .. From c.wehrmeyer at gmx.de Sun Dec 30 15:45:27 2018 From: c.wehrmeyer at gmx.de (C.Wehrmeyer) Date: Sun, 30 Dec 2018 16:45:27 +0100 Subject: [openssl-users] Authentication over ECDHE In-Reply-To: <04e8446f-35fb-dc21-ff34-ea64f6dae0c9@gmail.com> References: <38b97114-0c66-40ed-f631-58aa20940a3a@gmx.de> <79d66603-fb83-4b69-be4f-2f4641857a95@wisemo.com> <144d8508-7218-a1be-7475-db30180cdeea@gmx.de> <1d66ccc0-87ac-82ad-818d-f71bf1e9f811@gmx.de> <04e8446f-35fb-dc21-ff34-ea64f6dae0c9@gmail.com> Message-ID: <4d287beb-f79b-5fb8-9a6f-8a612c175474@gmx.de> On 30.12.18 00:04, Matt Milosevic wrote: > I do not want to complicate matters further, but there needs to be one > thing clear here: this library is mainly developed and maintained by > /volunteers/. For some reason you seem to think this excuses something. I don't. Fact is: there are a lot of things that could be improved. Fact is: there are a lot of things that aren't improved, however there are still features added every now and again. The latest things I've seen thus far in the code is TLS 1.3 and kernel TLS. That's nice to have, don't get me wrong, but the base here has been broken for years, and is even made *worse* over time. Opaquing structures so that people cannot know how big they are anymore, which is required for determining the amount of memory such an object needs, has been done in 2016. So, apparently people are *willing* to do wide-spread rewrites of things and are even willing to break existing applications for newer versions (which is brave, again, don't get me wrong here), but ... seemingly not in what I'd call a positive direction. Turning structures opaque doesn't prevent people from still messing with their internal fields. 
We know that because people have been doing that on Windows handles for ages, so it only makes their jobs a little harder on that field. On the other hand, turning structures opaque for people who want to *work* with the library, to want to do smart things because they know the requirements of the applications, are actively being hindered by this approach. It adds a lot of code and complexity to the library side, which already *has* a lot of code and complexity by its nature of being a *crypto* library. And it does not even make memory contents more secure, as Heartbleed has shown the world - from what I've been told the reason for why those memory dumps always looked so juicy was because OpenSSL used its own memory pooling. So what is even the point of opaquing here? And yet there's been put a lot of time and effort into this mechanic. The point is that those volunteers you mentioned happen to volunteer to work on actively bad stuff and/or in an actively bad way. Seggelmann was a volunteer, wasn't he, and he did the hithertofore greatest damage to OpenSSL and cryptography because of his incompetency, whether it was a genuine mistake or intentional. I see a lack of *respect* for OpenSSL in that people who probably *shouldn't* be working on it still work on it as volunteers, because it looks nice on the r?sum?, and check in shoddy code. Genuine mistakes happen, but they *shouldn't* happen in infrastructure code - and I think I've made myself already clear that I view cryptography as infrastructure. Saying "yeah, well, they volunteered, they put in the time and effort, we should be thankful for that" is not enough. I don't care if the engineer who's building the bridge is being paid or not (he should be, though); what I care about is that the bridge doesn't collapse when a stronger wind appears. Same with code: I don't care if you volunteered to work on SSL or if you're being paid for it, what I care about is the quality of your work. I'd rather have no work at all from you than work that is just bull. I'd rather have no encryption at all than so-called "export encryption" that's been lobotomised for commercial use. And I'd rather walk those 300 meters using a mountain path than using a bridge that didn't cost anything and doesn't look very stable. /At the very least I'll have some sort of security, may it be that we need a dedicated secure channel rather than cryptography, or maybe adding a rail to the mountain path/. And in that sense not having or not wanting to have some sort of leverage against someone who repeatedly sends in shit code is a negative - not a positive. I also think you're *sorely* underestimating how low people can steep just to say "I've been working on OpenSSL" or "I've been working on the Linux kernel" or "I've been working on Apache". The Apache FCGI module for Perl does not support printing out UTF-8 data to this day - in fact there's code that checks if the UTF-8 is set, and implicitly downgrades that string to ISO-8859-1 if so. If it can't do so it gobs a retarded warning into your server logs. The module's apparently been written in 2003 and received an update in 2010. Did this update get rid of the warning and/or the downgrade? Nope, neither of those. The update merely changed the warning to "[this] will stop wprking [sic!] in a future version of FCGI". In 2010. If this wasn't someone who just wanted to be able to say that they've been working on Apache FCGI I'm going to eat a broomstick, as they say in German. So, no. 
I will not show respect to bad code just because it's cheap or free. My respect goes to people who do good stuff, whether it's for free or not. People who just provide shitty things for free deserve shitty respect at best. In turn, people who do good stuff for free deserve a lot of respect without asking for it. You want the same respect? Then maybe not let any volunteer check in code just because they can. And more often than you'd think "no deal" can still be the best deal if the seemingly only other best deal is still a shit deal. I've been told a lot of Brits are learning that lesson these days. On 30.12.18 03:40, tincanteksup wrote: > On 29/12/2018 22:08, C.Wehrmeyer wrote: > >> How am I supposed to get more adept when the documentation is a >> literal mess? > >> Let me reverse that: What is the *point* of getting more adept with >> the API when I feel more and more disgusted by learning how it's >> working internally? > > Welcome to The Jungle .. I don't get the message. Care to elaborate? Getting a feeling for the tidiness of the source code isn't a hard problem. If I want to look at the source code of SSL_new() that's not terribly hard. One fgrep through the source directory lists "ssl/ssl_lib.c" amongst other hits, and unlike those other hits this one shows the type it returns (SSL *) and doesn't end in a semicolon. Then just search for SSL_new() until you find it, and then start reading away. Getting more adept with the library in general? That's hard. There's 280 symbols just starting with "SSL_" in 1.0.0 alone: $ nm libssl.so.1.0.0 | grep ' SSL_' | wc -l 280 I'm not going to drop all those symbols on you, but how is one to know which function or macro or whatever is the ticket here? The documentation barely helps, we've already established that. Source code reading? That reveals the things that I *do not* want to know too, because it makes me feel uneasy - which really is the point here: you *never* want to reach a point where your users are brought to the point where they start reading your source code, because even though that might teach them what to do it makes them unable to sleep at night. I've had that problem with the nouveau driver, which wrote random numbers to random hardware registers without even trying to make some sense of it, in other words it was completely undebuggable; I've had that problem with freetype, whose API was even worse than OpenSSLs because they didn't even *attempt* to give higher-levels the option to pass pointers to cached objects, so they'd constantly allocate and free subobjects via malloc() and free() every time an object was created; and I've had that problem with PCSX2, which, at least the last time I checked, doesn't support x64 builds and is as such limited to a reduced amount of virtual memory space, which it actually sorely needs for image file mappings and random access therein. Well, that, and the fact that instead of using a static buffer with static size for the window title update function they're using lots of dynamic buffers and reallocations each time the function is called. Why do I mention all of this? Because in all those cases I didn't have to know exactly what those functions and programs did for me to be able to tell that things were messy. 
From ckashiquekvk at gmail.com Mon Dec 31 05:14:30 2018 From: ckashiquekvk at gmail.com (ASHIQUE CK) Date: Mon, 31 Dec 2018 10:44:30 +0530 Subject: [openssl-users] Openssl async support In-Reply-To: References: Message-ID: Gentle reminder On Thu, Dec 27, 2018 at 8:37 PM ASHIQUE CK wrote: > Hi all, > > Thanks for the earlier reply. But still I am facing an issue > regarding the asynchronous job operation. > > I have implemented asynchronous job operation partially. I am > now getting requests asynchronously, i.e. getting the next request after > calling ASYNC_pause_job from the first request. But I am unable to resume > the paused jobs after job completion. > > The test setup consists of an nginx server and three SSL client apps. > > I have got the first 16 KB processing request (AES-GCM > encryption/decryption) from client1 and have submitted the request to the > engine and done ASYNC_pause_job, so client1 entered into a waiting state. But > when we run the client2 app, the first job went into the ASYNC_FINISH state > before job completion. Similarly, when we run the client3 app, the second > job went into the ASYNC_FINISH state. Can you help regarding this? > > > > On Wed, Dec 19, 2018 at 5:33 PM ASHIQUE CK wrote: > >> Gentle reminder >> >> On Tue, Dec 18, 2018 at 4:06 PM ASHIQUE CK >> wrote: >> >>> Hi all, >>> >>> I truly understand that everyone might be busy with your work and didn't >>> find time to reply. That's okay, but in case you have accidentally forgotten to >>> reply, please accept this as a gentle reminder. >>> >>> >>> >>> >>> >>> On Mon, Dec 17, 2018 at 6:11 PM ASHIQUE CK >>> wrote: >>>> >>>> Hi all, >>>> >>>> I have some queries regarding OpenSSL async operation. >>>> >>>> Current setup >>>> ------------- >>>> I have one *OpenSSL dynamic engine (with RSA and AES-GCM support)* and >>>> linked it with an *Nginx* server. Multiple *WGET* commands on the client >>>> side. >>>> >>>> Current issue >>>> ------------- >>>> The OpenSSL *do_cipher call* (the function in which the actual AES-GCM >>>> encryption/decryption happens) comes from one client at a time, which is >>>> reducing file downloading performance. So we need an *asynchronous >>>> operation in OpenSSL*, i.e. we need multiple do_cipher calls at the same >>>> time, from which we should submit requests to HW without affecting the >>>> incoming requests and should wait for HW output. >>>> >>>> Queries >>>> -------- >>>> 1) Is there any other scheme for multiple do_cipher calls at a >>>> time? >>>> 2) Is there any method to enable asynchronous calls from OpenSSL? >>>> >>>> Versions >>>> ------------- >>>> OpenSSL - 1.1.0h >>>> Nginx 1.11.10 >>>> Wget 1.17.1 >>>> >>>> Kindly support me. Please inform me if any more inputs are needed. Thanks >>>> in advance. >>>> >>> -------------- next part -------------- An HTML attachment was scrubbed...
URL: From levitte at openssl.org Mon Dec 31 09:12:57 2018 From: levitte at openssl.org (Richard Levitte) Date: Mon, 31 Dec 2018 10:12:57 +0100 (CET) Subject: [openssl-users] Authentication over ECDHE In-Reply-To: <4d287beb-f79b-5fb8-9a6f-8a612c175474@gmx.de> References: <1d66ccc0-87ac-82ad-818d-f71bf1e9f811@gmx.de> <04e8446f-35fb-dc21-ff34-ea64f6dae0c9@gmail.com> <4d287beb-f79b-5fb8-9a6f-8a612c175474@gmx.de> Message-ID: <20181231.101257.2069367019671144053.levitte@openssl.org> In message <4d287beb-f79b-5fb8-9a6f-8a612c175474 at gmx.de> on Sun, 30 Dec 2018 16:45:27 +0100, "C.Wehrmeyer" said: > On 30.12.18 00:04, Matt Milosevic wrote: > > I do not want to complicate matters further, but there needs to be one > > thing clear here: this library is mainly developed and maintained by > > /volunteers/. > > For some reason you seem to think this excuses something. I don't. > > Fact is: there are a lot of things that could be improved. Agreed. > Fact is: there are a lot of things that aren't improved, however > there are still features added every now and again. The latest > things I've seen thus far in the code is TLS 1.3 and kernel TLS. Yes, it's true, new features are going in. And it's true that it's often more exciting to add new features than to do the janitorial work. BUT, you also have to appreciate that stuff is happening around us that affects our focus. TLS 1.3 happened, and rather than having to answer the question "why don't you have TLS 1.3 yet?" (there's a LOT of interest in that version), we decided to add it. However, your message is clear, we do need to do some cleanup as well. More than that, I agree with you that it's needed (I've screamed out in angst when stumbling upon particularly ugly or misplaced code, so the feeling is shared, more than you might believe). That being said, cleanup happens, and documentation happens, in a piecemeal fashion, 'cause that's what most people have capacity for. Now, here's something else that you need to consider: API/ABI compatibility needs to be preserved. See, I did see you say something about all the available SSL_ symbols, and it's true that we have a lot of them (that includes all the macros): : ; grep -E '^([^#].*[^A-Za-z_]|# *define *)SSL_[A-Za-z0-9_]*\(' include/openssl/* | wc -l 747 Counting symbols is, however, nothing other than a blunt instrument. Quite a lot of those symbols are convenience macros and functions that have accumulated over time. But nevertheless, I do hear you call for a remake of the SSL API as well as cleaner internals. The latter is easier, and I'm sure it will happen piecemeal as per usual so as to not break something / inadvertently change a behavior (i.e. break ABI). The former is a fairly massive project, and is more of creating a new API and library rather than a mere cleanup job. That will be a massive effort, and you do have to keep in mind how much time all involved can put into it. > Turning structures opaque doesn't prevent people from still messing > with their internal fields. True. But it makes for a clear delineation where people are forced to be aware that they are playing with internal stuff, and that it may not be a safe thing to do. When structures weren't opaque, people *expected* things to stay as they were or be added at the end of the structure, see API / ABI compatibility. That took away *all* possibilities of cleaning them up or enhancing them smoothly without risking application breakage at every turn. 
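For readers who have not seen the idiom spelled out, a generic illustration of what "opaque" buys here - plain C with made-up names, not OpenSSL's actual headers:

    /* widget.h -- public: only a forward declaration is exposed. */
    typedef struct widget_st WIDGET;
    WIDGET *WIDGET_new(void);
    void    WIDGET_free(WIDGET *w);
    int     WIDGET_set_flags(WIDGET *w, unsigned int flags);

    /* widget.c -- private: the layout can change without breaking the ABI,
     * because callers never see sizeof(WIDGET) or its members. */
    #include <stdlib.h>
    struct widget_st {
        unsigned int flags;
        /* fields can be added or reordered here at any time */
    };

    WIDGET *WIDGET_new(void) { return calloc(1, sizeof(WIDGET)); }
    void WIDGET_free(WIDGET *w) { free(w); }
    int WIDGET_set_flags(WIDGET *w, unsigned int flags)
    {
        if (w == NULL)
            return 0;
        w->flags = flags;
        return 1;
    }

The flip side is exactly the complaint above: a caller can no longer declare a WIDGET on the stack or in its own arena, because its size is not part of the public contract.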
So basically, the message is that if you want to tinker with stuff that's essentially internal to the library, do feel free, but do so at *your* risk, not ours. (re API / ABI compatibility, I learned the lesson back when I was fairly new and made my own mistakes... adding the weak_key field in struct des_ks_struct back in 0.9.2something deserved me no end of fairly harsh scolding because it broke ABI for everything that did a stack allocation of that structure and that used the OpenSSL shared libraries) > world - from what I've been told the reason for why those memory > dumps always looked so juicy was because OpenSSL used its own memory > pooling. Uhmmmm.... this is factually incorrect. OpenSSL doesn't use its own memory pooling. We have thin wrappers around the usual malloc() / realloc() / free(), which allows any application to do its own memory pooling. > The point is that those volunteers you mentioned happen to volunteer > to work on actively bad stuff and/or in an actively bad way. > Seggelmann was a volunteer, wasn't he, and he did the hithertofore > greatest damage to OpenSSL and cryptography because of his > incompetency, whether it was a genuine mistake or intentional. Wow... So what you're saying is that one huge enough mistake, and that cancels out everything else you do or have done? : ; git log | grep Seggel | wc -l 53 Note also that he couldn't commit his contributions directly, they had to be applied (and implicitly reviewed) by someone in the team, which was a single underfunded (UNfunded, to tell the truth) individual at the time. Wanna assign incompetence for mistakes like this? In that case, there's plenty to go around, and there's isn't one single competent programmer alive. Errare humanum est 'n all that. > Genuine mistakes happen, but they *shouldn't* happen in > infrastructure code It's easy to say. Still, humans err... you can look at any infrastructure (say, roads) and realise that mistakes are made, and we try to learn from them. Speaking of learning, one of the things we did after Heartbleed was to put a code review process in place. We do hope that it will help to keep shitty mistakes out. It's not an absolute guarantee, but we do believe it's *better*. Speaking of which, all our development is available on github in form of pull requests. Anyone is welcome to have a look and to comment / help weed out bad code or help make the code better. You're welcome to go in there and help out, and that would probably be more constructive than a massive continued rant here. > I also think you're *sorely* underestimating how low people can steep > just to say "I've been working on OpenSSL" or "I've been working on > the Linux kernel" or "I've been working on Apache". The Apache FCGI > module for Perl does not support printing out UTF-8 data to this day - > in fact there's code that checks if the UTF-8 is set, and implicitly > downgrades that string to ISO-8859-1 if so. If it can't do so it gobs > a retarded warning into your server logs. The module's apparently been > written in 2003 and received an update in 2010. Did this update get > rid of the warning and/or the downgrade? Nope, neither of those. The > update merely changed the warning to "[this] will stop wprking [sic!] > in a future version of FCGI". In 2010. If this wasn't someone who just > wanted to be able to say that they've been working on Apache FCGI I'm > going to eat a broomstick, as they say in German. 
Not that I would know why we should care about shit in other projects here, but considering it's open source, you could do the required modification and contribute there. > So, no. I will not show respect to bad code just because it's cheap > or free. My respect goes to people who do good stuff, whether it's > for free or not. People who just provide shitty things for free > deserve shitty respect at best. In turn, people who do good stuff > for free deserve a lot of respect without asking for it. You want > the same respect? Then maybe not let any volunteer check in code > just because they can. I'll gently point out that for non-free / non-open-source code, you have no idea if the code is shitty or not. All you have to see is the API. > you *never* want to reach a point where your users are brought to > the point where they start reading your source code, because even > though that might teach them what to do it makes them unable to > sleep at night. I'm sorry if my code causes that level of angst. However, I disagree with you, we *do* want users to look at our code, 'cause at least some of them will come back and help us improve it. Or well, we hope they will. That's the whole idea with an open source project. To conclude, I have a question for you: are you only willing to rant (*), or are you willing to help out in another way? Things that may have a better outcome and may be energy better spent is to actually volunteer by going to our github space, review and comment on PRs that you think are important, raise issues (please, more detailed than a generic call for cleanup) or contribute code. https://github.com/openssl/openssl/ Cheers, Richard (*) a word to anyone that wants to scold Herr Wehrmeyer for ranting: it may be annoying to have to listen to it (no one forces you to), but it's ALSO a contribution if you're willing to listen, as it helps keep a focus you may be missing. Also, I do not want to encourage a rant fest, that's just going to deteriorate morale, but I do think that the occasional rant is acceptable and should be appreciated for what it is. -- Richard Levitte levitte at openssl.org OpenSSL Project http://www.openssl.org/~levitte/ From c.wehrmeyer at gmx.de Mon Dec 31 11:36:42 2018 From: c.wehrmeyer at gmx.de (C.Wehrmeyer) Date: Mon, 31 Dec 2018 12:36:42 +0100 Subject: [openssl-users] Authentication over ECDHE In-Reply-To: <20181231.101257.2069367019671144053.levitte@openssl.org> References: <1d66ccc0-87ac-82ad-818d-f71bf1e9f811@gmx.de> <04e8446f-35fb-dc21-ff34-ea64f6dae0c9@gmail.com> <4d287beb-f79b-5fb8-9a6f-8a612c175474@gmx.de> <20181231.101257.2069367019671144053.levitte@openssl.org> Message-ID: On 31.12.18 10:12, Richard Levitte wrote: > Yes, it's true, new features are going in. And it's true that it's > often more exciting to add new features than to do the janitorial > work. You realised what I have left unspoken thus far, which is this almost obsession-like preference of OSS coders to add new features rather than improving the old, boring codebase. However, there's a reason why it's still called code*base*. It's the base of everything. And adding more and more features to that base is going to make ripping off the band-aid more painful in the long run. Also, infrastructure again. I, as a user, don't care if the kernel gets a new feature that makes some black magic happening. 
What I care about is that the kernel doesn't throw away my writes (which has happened in May of 2018, see): > https://www.postgresql.org/message-id/flat/CAMsr%2BYE5Gs9iPqw2mQ6OHt1aC5Qk5EuBFCyG%2BvzHun1EqMxyQg%40mail.gmail.com#CAMsr+YE5Gs9iPqw2mQ6OHt1aC5Qk5EuBFCyG+vzHun1EqMxyQg at mail.gmail.com) Cryptography libs should be equally conservative, considering that cryptography is conservative to begin with. I don't care if TLS 1.3 lets me use new exiting ciphers and handshakes when it unreasonably bogs down my server code. > BUT, you also have to appreciate that stuff is happening around us > that affects our focus. TLS 1.3 happened, and rather than having to > answer the question "why don't you have TLS 1.3 yet?" (there's a LOT > of interest in that version), we decided to add it. Sure, but didn't Matt just say that there are a lot of volunteers working on that library? The disadvantage here is that quality assurance is barely a thing - however, the *advantage* of this is that OpenSSL does not have to follow commercial interests. If we look at this at face value you could just say "No, people, it's high time we streamline some of the internal aspects of the library, TLS 1.3 will have to wait. You can't wait that long? Well, sorry". > However, your message is clear, we do need to do some cleanup as > well. More than that, I agree with you that it's needed (I've > screamed out in angst when stumbling upon particularly ugly or > misplaced code, so the feeling is shared, more than you might > believe). But what does "cleanup" entail? That's the hot-button question here. I've already made a suggestion, that is to say, getting rid of opaque structures. If that is deemed too insecure (for whatever reasons), export symbols that allow programmers to query the size of structures, and provide two versions of functions: one function expects the caller to pass an object to which the changes are to be made, and the other one allocates the necessary memory dynamically and then calls the first version. Or just don't allocate my object memory dynamically anymore. > That being said, cleanup happens, and documentation happens, in a > piecemeal fashion, 'cause that's what most people have capacity for. So, what you're effectively saying is that I'm the first one who ever asked for SSL object reuse, right? Because if piecemeal work happens on the documentation, and Viktor says that it's possible, then surely no one would have ever answered that question on the mailing list and *not* put it piecemeal-ly in the OpenSSL documentation, right? > Now, here's something else that you need to consider: API/ABI > compatibility needs to be preserved. No it doesn't. We *know* it doesn't. When OpenSSL 1.1 was released it broke all *sorts* of applications out there, because all sorts of applications used struct fields rather than accessors. wget, mutt, neon, python, you name it, you broke it. > https://breakpoint.cc/openssl-1.1-rebuild-2016-08-26/ So since when do we need to consider API/ABI compatibility? Did we grow up recently? Or maybe OpenSSL should have switched the language. The point of C is that structures are public. And if I'm going to be honest that approach saved my sorry arse more than a couple times. When zlib choked because it couldn't go past 4 GiBs of data since its fields were uint32_ts, I was able to easily workaround this problem. But what do I know. > Counting symbols is, however, nothing other than a blunt instrument. 
> Quite a lot of those symbols are convenience macros and functions that > have accumulated over time. You're taking my statement out of context. Counting the symbols wasn't supposed to suggest that there are too *many* of them. I'm in no position to say that, seeing as the original context in which my statement was put is that *I'm not familiar enough with the library*. What I said was that reading the code is easy. Learning what the library provides is hard, and that you won't learn much just by looking at the symbols because there's so many of them. > But nevertheless, I do hear you call for a remake of the SSL API as > well as cleaner internals. The latter is easier, and I'm sure it will > happen piecemeal as per usual so as to not break something / > inadvertently change a behavior (i.e. break ABI). The former is a > fairly massive project, and is more of creating a new API and library > rather than a mere cleanup job. That will be a massive effort, and > you do have to keep in mind how much time all involved can put into > it. I'm not saying it needs to be done right now. I'm merely suggesting that it might be a good goal post for OpenSSL 2.0. >> Turning structures opaque doesn't prevent people from still messing >> with their internal fields. > > True. But it makes for a clear delineation where people are forced to > be aware that they are playing with internal stuff, and that it may > not be a safe thing to do. Then why not provide small helper functions for covering the "playing with internal stuff" part? That way it's still controlled, and documented, and unified. You guys must've had some examples to show off in order to justify the process, so surely you know what it is that people do when they use internal stuff. Make functions for those. Don't give them any reason to continue playing with internal stuff. I don't like code that tries to protect programmers from themselves. I like code that lets good programmers do smart things. And if bad programmers use that freedom to do bad stuff, then doesn't that mean your API simply didn't support this, and they had to make it work somehow else? Again, helper functions. > Uhmmmm.... this is factually incorrect. OpenSSL doesn't use its own > memory pooling. We have thin wrappers around the usual malloc() / > realloc() / free(), which allows any application to do its own memory > pooling. > https://web.archive.org/web/20150207180717/http://article.gmane.org/gmane.os.openbsd.misc/211963 If you don't wanna read too much: > https://xkcd.com/1353/ Read the mouse-hover text. >> The point is that those volunteers you mentioned happen to volunteer >> to work on actively bad stuff and/or in an actively bad way. >> Seggelmann was a volunteer, wasn't he, and he did the hithertofore >> greatest damage to OpenSSL and cryptography because of his >> incompetency, whether it was a genuine mistake or intentional. > > Wow... > > So what you're saying is that one huge enough mistake, and that > cancels out everything else you do or have done? If it only was one mistake. I read in June 2014 that Seggelmann was *also* responsible for the DTLS code: https://www.openssl.org/news/secadv/20140605.txt And Thor knows how many others that we aren't aware of. And just in case you don't get this: this has caused *massive* damage. It's not just *one* mistake. Heartbleed was caused by input validation screwups. We've been known since at least the 80s that this causes widespread issues. For f*cks sake, this November we were informed that FreeBSD could be pinged to death. 
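For concreteness, the bug class being invoked - a generic, hypothetical sketch, not the actual OpenSSL code: a peer-supplied length has to be checked against what was actually received before it drives a copy.

    #include <stdint.h>
    #include <stdlib.h>
    #include <string.h>

    /* "claimed" comes from the peer; "received" is how much data arrived. */
    unsigned char *echo_payload(const unsigned char *payload,
                                size_t received, uint16_t claimed)
    {
        if (claimed > received)      /* the check whose absence bites */
            return NULL;

        unsigned char *out = malloc(claimed);
        if (out != NULL)
            memcpy(out, payload, claimed);   /* bounded by what was received */
        return out;
    }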
And not only Seggelmann f*cked up. Also the guy who went over his code. On New Year's morning, no less, where it should have been *clear* that no one is in the right mental state of mind for that. For instance I don't do any coding in the evening because I know myself enough to know that my brain is mush and that whatever I'll produce is equivalent with horse barf. So, no, this isn't just *one mistake*. This is "I have no idea what I'm doing, in a field where certified security experts f*ck up things every now and then, but I'm doing it anyway for my dissertation" while everyone was looking away. > Wanna assign incompetence for mistakes like this? In that case, > there's plenty to go around, and there's isn't one single competent > programmer alive. Errare humanum est 'n all that. Only there's a difference between a mistake and dabbling in forces you shouldn't dabble in. But again, it looks nice on the r?sum?. >> Genuine mistakes happen, but they *shouldn't* happen in >> infrastructure code > > It's easy to say. Still, humans err... you can look at any > infrastructure (say, roads) and realise that mistakes are made, and we > try to learn from them. WE HAVE HAD INPUT VALIDATION PROBLEMS SINCE THE 80S. SO WHAT HAVE WE LEARNT, HUH? WHAT HAVE WE LEARNT? WE HAVE LEARNT JACK-EFFING-SHTIE. Decades went by, but we're as arrogant as ever. I HATE this approach. "Programmers shouldn't be held accountable for their screw-ups because to err is human" is what it effectively says. Did this story have ANY consequences for Seggelmann? Not that I'm aware of. Only when people began to suggest that this may be a backdoor he felt the need to save face. And as far as I'm aware he hasn't even apologised once. But that's what you get when you prefer to add features over a solid codebase. > Speaking of learning, one of the things we did after Heartbleed was to > put a code review process in place. We do hope that it will help to > keep shitty mistakes out. It's not an absolute guarantee, but we do > believe it's *better*. It certainly didn't help against: > npad = (48 / md_size) * md_size >> I also think you're *sorely* underestimating how low people can steep >> just to say "I've been working on OpenSSL" or "I've been working on >> the Linux kernel" or "I've been working on Apache". The Apache FCGI >> module for Perl does not support printing out UTF-8 data to this day - >> in fact there's code that checks if the UTF-8 is set, and implicitly >> downgrades that string to ISO-8859-1 if so. If it can't do so it gobs >> a retarded warning into your server logs. The module's apparently been >> written in 2003 and received an update in 2010. Did this update get >> rid of the warning and/or the downgrade? Nope, neither of those. The >> update merely changed the warning to "[this] will stop wprking [sic!] >> in a future version of FCGI". In 2010. If this wasn't someone who just >> wanted to be able to say that they've been working on Apache FCGI I'm >> going to eat a broomstick, as they say in German. > > Not that I would know why we should care about shit in other projects > here, but considering it's open source, you could do the required > modification and contribute there. You're just a bigger target than Apache FCGI. This heartbleed stuff was pretty much only for Seggelmann's career. And look what kind of example he set, seeing as I don't see him suffering consequences for that. >> So, no. I will not show respect to bad code just because it's cheap >> or free. 
My respect goes to people who do good stuff, whether it's >> for free or not. People who just provide shitty things for free >> deserve shitty respect at best. In turn, people who do good stuff >> for free deserve a lot of respect without asking for it. You want >> the same respect? Then maybe not let any volunteer check in code >> just because they can. > > I'll gently point out that for non-free / non-open-source code, you > have no idea if the code is shitty or not. All you have to see is the > API. Unfortunately I never said that I have an idea about whether or not non-open-source code is good or bad. What I actually said was that people who do good work should be paid for that, and if they do that for free that's even more admirable. (I'm not going to discuss business models on how people can make money with cryptography code while it still being Open-Source.) >> you *never* want to reach a point where your users are brought to >> the point where they start reading your source code, because even >> though that might teach them what to do it makes them unable to >> sleep at night. > > I'm sorry if my code causes that level of angst. Not the code per se. However I (and several other people I know) have given up on reading code they weren't paid for. Otherwise they see things that makes them want to throw it away in disgust, and if that happens a couple times they're left there with their kernel and a text editor. It's a self-preservation policy. So, no, it's *reading* your code that causes ... angst? You used that word twice now. I don't know what it has evolved into in English, seeing as I only ever see people accusing others of "being angsty", but just for the record: in German "Angst" ("a" somewhat like the "u" in "hunt") means genuine fear and fright. "Angst" and "anxiety" share the same roots, but "anxiety" is usually more mellow. Not that "anxiety" is the right word here, again - "disgust" fits better. > However, I disagree with you, we *do* want users to look at our code, > 'cause at least some of them will come back and help us improve it. > Or well, we hope they will. I know enough people who wouldn't, and that's all I'm gonna say about this. > To conclude, I have a question for you: are you only willing to rant > (*), or are you willing to help out in another way? This is not the question I feel you should ask because we haven't even established if I *could* make contributions to the project, as my mindset appears to be so much more different. Especially the idea of not wanting to break APIs/ABIs is a huge limitation - just looking at SSL_new() made me give up hope here. I'm no cryptography expert, I've made that clear from mail one, and my cleanup jobs would be more widespread than what seems to be deemed acceptable right now. I can read and write scalable C code, otherwise I wouldn't even have tried to reuse that SSL object from the beginning. So, I ask a question in return: what do you think I *could* be helping with? Also, a happy new year to all. From jvp at forthepolls.org Mon Dec 31 11:35:26 2018 From: jvp at forthepolls.org (jvp) Date: Mon, 31 Dec 2018 03:35:26 -0800 Subject: [openssl-users] Authentication over ECDHE In-Reply-To: <20181231.101257.2069367019671144053.levitte@openssl.org> References: <1d66ccc0-87ac-82ad-818d-f71bf1e9f811@gmx.de> <04e8446f-35fb-dc21-ff34-ea64f6dae0c9@gmail.com> <4d287beb-f79b-5fb8-9a6f-8a612c175474@gmx.de> <20181231.101257.2069367019671144053.levitte@openssl.org> Message-ID: <20181231033526.0000208e@forthepolls.org> kudos, matt milosevic. 
you have not complicated matters but rightly reminded us of the need for civility and the benefits that accrue from such an approach in opposition to merely railing while offering little in the way of a path towards problem resolution. also, appreciation to richard levitte for his balanced response and patience when many would have none. one would also hope - as you two and others have suggested - that everyone understands that the best channel of outlet for someone "fired-up" over something leads to actually contributing what it takes to make the fix. it can be said in different ways amongst all languages, but one's own value boils down to 'words are cheap and deeds are dear'. while probably not exactly the watch-words of this news-letter, they certainly represent the mindset of the many, many highly-competent contributors whose goal is to make openssl the best crypto application/library available. not to be forgotten are the many firms that contribute funds and/or their employees' time on the project as well as the contributions of funds by individuals. there is no question that soft-ware that is encumbered with a dual responsibility for simultaneously affording continuity and adopting innovation will suffer from bloat. while coders are intent on patching what is amiss and making new things work, paring back the old often takes a back seat and openssl fits this mold. moreover, documentation has been a problem since the "programming" of NCR machine wiring in the 1930's. additionally, i think it is safe to say that no piece of soft-ware has ever been cited as "over documented". openssl's stream-lining and documentation will improve, but there are bound to be those unhappy with the time-line. i am from the sysadmin side and feel deeply indebted to the wealth of effort put forth by so many to make openssl what it is today and will be tomorrow. -- Thank you, Johann From matt at openssl.org Mon Dec 31 14:11:56 2018 From: matt at openssl.org (Matt Caswell) Date: Mon, 31 Dec 2018 14:11:56 +0000 Subject: [openssl-users] Authentication over ECDHE In-Reply-To: References: <1d66ccc0-87ac-82ad-818d-f71bf1e9f811@gmx.de> <04e8446f-35fb-dc21-ff34-ea64f6dae0c9@gmail.com> <4d287beb-f79b-5fb8-9a6f-8a612c175474@gmx.de> <20181231.101257.2069367019671144053.levitte@openssl.org> Message-ID: <27fa0004-7684-9343-218b-558682c79201@openssl.org> On 31/12/2018 11:36, C.Wehrmeyer wrote: > On 31.12.18 10:12, Richard Levitte wrote: >> Yes, it's true, new features are going in. And it's true that it's >> often more exciting to add new features than to do the janitorial >> work. > > You realised what I have left unspoken thus far, which is this almost > obsession-like preference of OSS coders to add new features rather than > improving the old, boring codebase. However, there's a reason why it's still > called code*base*. It's the base of everything. And adding more and more > features to that base is going to make ripping off the band-aid more painful in > the long run. There has been a huge amount of effort put in over the last few years to improve the codebase.
Things that immediately spring to mind (and there's probably a whole load more): - Rewrite of the state machine - libssl record layer refactor - Implementation of the PACKET and WPACKET abstractions in libssl - Rewrite of the rand code - Implementation of the new test harness - Significant effort into developing tests - Implementation of the coding style and reformat of the codebase to meet it - Opaque many of the structures (which I know you don't see as an improvement, but I'll answer that point separately) - Implementation of continuous fuzzing - Significant expansion of the documentation coverage It is simply not true to claim that we have "an obsession-like preference...to add new features rather than improving the old, boring codebase". None of the above things resulted in or were motivated by user visible features. They were all about improving the codebase. > > Also, infrastructure again. I, as a user, don't care if the kernel gets a new > feature that makes some black magic happening. What I care about is that the > kernel doesn't throw away my writes (which has happened in May of 2018, see): > >> > https://www.postgresql.org/message-id/flat/CAMsr%2BYE5Gs9iPqw2mQ6OHt1aC5Qk5EuBFCyG%2BvzHun1EqMxyQg%40mail.gmail.com#CAMsr+YE5Gs9iPqw2mQ6OHt1aC5Qk5EuBFCyG+vzHun1EqMxyQg at mail.gmail.com) > > > Cryptography libs should be equally conservative, considering that cryptography > is conservative to begin with. I don't care if TLS 1.3 lets me use new exiting > ciphers and handshakes when it unreasonably bogs down my server code. > >> BUT, you also have to appreciate that stuff is happening around us >> that affects our focus.? TLS 1.3 happened, and rather than having to >> answer the question "why don't you have TLS 1.3 yet?" (there's a LOT >> of interest in that version), we decided to add it. > > Sure, but didn't Matt just say that there are a lot of volunteers working on > that library? The disadvantage here is that quality assurance is barely a thing > - however, the *advantage* of this is that OpenSSL does not have to follow > commercial interests. If we look at this at face value you could just say "No, > people, it's high time we streamline some of the internal aspects of the > library, TLS 1.3 will have to wait. You can't wait that long? Well, sorry". > >> However, your message is clear, we do need to do some cleanup as >> well.? More than that, I agree with you that it's needed (I've >> screamed out in angst when stumbling upon particularly ugly or >> misplaced code, so the feeling is shared, more than you might >> believe). > > But what does "cleanup" entail? That's the hot-button question here. I've > already made a suggestion, that is to say, getting rid of opaque structures. If > that is deemed too insecure (for whatever reasons), export symbols that allow > programmers to query the size of structures, and provide two versions of > functions: one function expects the caller to pass an object to which the > changes are to be made, and the other one allocates the necessary memory > dynamically and then calls the first version. Or just don't allocate my object > memory dynamically anymore. > >> That being said, cleanup happens, and documentation happens, in a >> piecemeal fashion, 'cause that's what most people have capacity for. > > So, what you're effectively saying is that I'm the first one who ever asked for > SSL object reuse, right? 
Because if piecemeal work happens on the documentation, > and Viktor says that it's possible, then surely no one would have ever answered > that question on the mailing list and *not* put it piecemeal-ly in the OpenSSL > documentation, right? > >> Now, here's something else that you need to consider: API/ABI >> compatibility needs to be preserved. > > No it doesn't. We *know* it doesn't. When OpenSSL 1.1 was released it broke all > *sorts* of applications out there, because all sorts of applications used struct > fields rather than accessors. wget, mutt, neon, python, you name it, you broke it. API/ABI stability is absolutely required. Every time we make a breaking change it is painful for our users - and more pain is felt the bigger the scale of the break. We simply cannot go around making wholesale breaks on an ongoing basis. If we did so then OpenSSL would be a lot less useful to our users. This is not to say that we can *never* make breaking changes. Only that when we do so it must be strongly justified and only done relatively infrequently. We made such a decision when we decided to make the structures opaque. It's not a decision we are likely to repeat anytime soon IMO. We are still feeling the pain of that now (and will continue to do so for at least the next year until 1.0.2 goes out of support - and probably beyond that). Which brings me onto why structures were made opaque in the first place. A significant driver for this (probably *the* most important one) was to improve the codebase. I have witnessed first hand the harm that non-opaque structures did to OpenSSL. We will be fixing the fallout from them for years to come. Non-opaque structures combined with the requirements for stable API/ABI means you cannot change anything in those structures. Renaming or deleting structure members constitutes an API break. Even *adding* structure members constitutes an ABI break (due to the changed size of the structure). This means the code ossifies over time and cannot easily be refactored. Much of OpenSSL's internal "quirkiness" results from attempting to work around this restriction. Things like the state machine refactor and the record layer refactor would not have been possible without opaque structures. In my mind making the structures opaque was one of the best things that ever happened to OpenSSL. > >> https://breakpoint.cc/openssl-1.1-rebuild-2016-08-26/ > > So since when do we need to consider API/ABI compatibility? Did we grow up > recently? > > Or maybe OpenSSL should have switched the language. The point of C is that > structures are public. And if I'm going to be honest that approach saved my > sorry arse more than a couple times. When zlib choked because it couldn't go > past 4 GiBs of data since its fields were uint32_ts, I was able to easily > workaround this problem. But what do I know. If you really want to fiddle with OpenSSL internal structures - feel free. Just include the OpenSSL internal header files and away you go. Just do so in the knowledge that they could be changed at any time, and your code might break. If this isn't a concern to you then - no problem. If it is a concern to you - then actually you *do* care about API/ABI stability after all. > >> Counting symbols is, however, nothing other than a blunt instrument. >> Quite a lot of those symbols are convenience macros and functions that >> have accumulated over time. > > You're taking my statement out of context. Counting the symbols wasn't supposed > to suggest that there are too *many* of them. 
>> https://breakpoint.cc/openssl-1.1-rebuild-2016-08-26/

> So since when do we need to consider API/ABI compatibility? Did we grow up recently?

> Or maybe OpenSSL should have switched the language. The point of C is that structures are public. And if I'm going to be honest that approach saved my sorry arse more than a couple times. When zlib choked because it couldn't go past 4 GiBs of data since its fields were uint32_ts, I was able to easily work around this problem. But what do I know.

If you really want to fiddle with OpenSSL internal structures - feel free. Just include the OpenSSL internal header files and away you go. Just do so in the knowledge that they could be changed at any time, and your code might break. If this isn't a concern to you then - no problem. If it is a concern to you - then actually you *do* care about API/ABI stability after all.

>> Counting symbols is, however, nothing other than a blunt instrument. Quite a lot of those symbols are convenience macros and functions that have accumulated over time.

> You're taking my statement out of context. Counting the symbols wasn't supposed to suggest that there are too *many* of them. I'm in no position to say that, seeing as the original context in which my statement was put is that *I'm not familiar enough with the library*.

> What I said was that reading the code is easy. Learning what the library provides is hard, and that you won't learn much just by looking at the symbols because there's so many of them.

>> But nevertheless, I do hear you call for a remake of the SSL API as well as cleaner internals. The latter is easier, and I'm sure it will happen piecemeal as per usual so as to not break something / inadvertently change a behavior (i.e. break ABI). The former is a fairly massive project, and is more of creating a new API and library rather than a mere cleanup job. That will be a massive effort, and you do have to keep in mind how much time all involved can put into it.

> I'm not saying it needs to be done right now. I'm merely suggesting that it might be a good goal post for OpenSSL 2.0.

>>> Turning structures opaque doesn't prevent people from still messing with their internal fields.

>> True. But it makes for a clear delineation where people are forced to be aware that they are playing with internal stuff, and that it may not be a safe thing to do.

> Then why not provide small helper functions for covering the "playing with internal stuff" part? That way it's still controlled, and documented, and unified. You guys must've had some examples to show off in order to justify the process, so surely you know what it is that people do when they use internal stuff. Make functions for those. Don't give them any reason to continue playing with internal stuff.

> I don't like code that tries to protect programmers from themselves. I like code that lets good programmers do smart things. And if bad programmers use that freedom to do bad stuff, then doesn't that mean your API simply didn't support this, and they had to make it work somehow else? Again, helper functions.

>> Uhmmmm.... this is factually incorrect. OpenSSL doesn't use its own memory pooling. We have thin wrappers around the usual malloc() / realloc() / free(), which allows any application to do its own memory pooling.

> https://web.archive.org/web/20150207180717/http://article.gmane.org/gmane.os.openbsd.misc/211963

The BUF_FREELISTS code that this post references was ripped out years ago. This no longer represents the current state of OpenSSL in any supported version.
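For anyone curious what "thin wrappers" means in practice: in OpenSSL 1.1.x the library's own allocations go through hooks the application can replace, so pooling, if an application wants it, can be plugged in from the outside. A minimal sketch follows; the signatures are those documented for 1.1.x, the hooks must be installed before the library allocates anything, and CRYPTO_set_mem_functions(3) has the exact contract for your version:

    /* mem_hooks_sketch.c - route OpenSSL's allocations through
     * application-supplied functions (OpenSSL 1.1.x-style signatures). */
    #include <stdlib.h>
    #include <openssl/crypto.h>

    static void *my_malloc(size_t num, const char *file, int line)
    {
        (void)file; (void)line;   /* could log, count, or hand out pooled memory */
        return malloc(num);
    }

    static void *my_realloc(void *addr, size_t num, const char *file, int line)
    {
        (void)file; (void)line;
        return realloc(addr, num);
    }

    static void my_free(void *addr, const char *file, int line)
    {
        (void)file; (void)line;
        free(addr);
    }

    int install_mem_hooks(void)
    {
        /* Returns 0 (and leaves the defaults in place) if the library has
         * already performed an allocation. */
        return CRYPTO_set_mem_functions(my_malloc, my_realloc, my_free);
    }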
>> To conclude, I have a question for you: are you only willing to rant (*), or are you willing to help out in another way?

> This is not the question I feel you should ask because we haven't even established if I *could* make contributions to the project, as my mindset appears to be so much more different. Especially the idea of not wanting to break APIs/ABIs is a huge limitation - just looking at SSL_new() made me give up hope here.

Yes - not breaking APIs/ABIs is a huge limitation. BoringSSL is not suitable for general purpose use precisely because of this. The only users BoringSSL cares about whether they break or not are Google users. As soon as you have a library that wants to cater for large numbers of users (which we do) then you have to accept that limitation.

As to whether or not we have established whether you *could* make such contributions - I think you are missing the point. We cannot know whether you are capable or not until you try. It is on the basis of your code that we would make such a judgement. In order for your code to get into OpenSSL it must have been reviewed and approved by two current OpenSSL committers (one of whom must be on the OpenSSL Management Committee). We invite anyone to contribute. In order for this to be a healthy open-source community we *need* those contributions. Only those that make the grade will make it in.

Note - this review process wasn't always the case. Things used to be much more informal in pre-heartbleed days. This is no longer the case.

> I'm no cryptography expert, I've made that clear from mail one, and my cleanup jobs would be more widespread than what seems to be deemed acceptable right now. I can read and write scalable C code, otherwise I wouldn't even have tried to reuse that SSL object from the beginning.

> So, I ask a question in return: what do you think I *could* be helping with?

Well, you have vocally complained about the state of the documentation. You have the benefit of being a new OpenSSL user. You know what things were confusing or unclear in the documentation. More experienced OpenSSL coders often don't have the perspective - because some things are just "obvious" to them. So help with pull requests to improve the documentation.

Matt
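Since the SSL-object reuse that started this thread keeps coming up, here is a minimal sketch of what reuse can look like with the documented API. SSL_new(), SSL_set_fd(), SSL_accept(), SSL_shutdown(), SSL_clear() and SSL_free() are real OpenSSL calls; accept_next_client() is a hypothetical application helper, error handling is omitted, and the caveats in SSL_clear(3) about retained settings and sessions still apply, which is exactly what the thread is debating:

    /* reuse_sketch.c - reuse one SSL object for several incoming
     * connections instead of SSL_free()/SSL_new() per connection. */
    #include <unistd.h>
    #include <openssl/ssl.h>

    extern int accept_next_client(void);  /* hypothetical: returns a connected fd */

    void serve_loop(SSL_CTX *ctx)
    {
        SSL *ssl = SSL_new(ctx);           /* allocated once */

        if (ssl == NULL)
            return;

        for (;;) {
            int fd = accept_next_client();

            if (fd < 0)
                break;

            SSL_set_fd(ssl, fd);
            if (SSL_accept(ssl) == 1) {
                /* ... SSL_read()/SSL_write() ... */
                SSL_shutdown(ssl);
            }
            close(fd);

            if (!SSL_clear(ssl))           /* reset for the next connection */
                break;
        }
        SSL_free(ssl);
    }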
From openssl at foocrypt.net Mon Dec 31 14:27:53 2018
From: openssl at foocrypt.net (openssl at foocrypt.net)
Date: Tue, 1 Jan 2019 01:27:53 +1100
Subject: [openssl-users] Authentication over ECDHE
In-Reply-To: <27fa0004-7684-9343-218b-558682c79201@openssl.org>
References: <1d66ccc0-87ac-82ad-818d-f71bf1e9f811@gmx.de> <04e8446f-35fb-dc21-ff34-ea64f6dae0c9@gmail.com> <4d287beb-f79b-5fb8-9a6f-8a612c175474@gmx.de> <20181231.101257.2069367019671144053.levitte@openssl.org> <27fa0004-7684-9343-218b-558682c79201@openssl.org>
Message-ID: <11EDC036-87D3-4757-AD33-80C6E608F77F@foocrypt.net>

Matt et al

'been reviewed and approved by two current OpenSSL committers (one of whom must be on the OpenSSL Management Committee).'

Due to the recent legislative changes here in Australia around the T.O.L.A. Act, can a change be made to the OpenSSL policy so that the 2 reviewers, don't reside in Australia, or are Australian citizens?

ABI/API changes -> breaks -> back door requests ...

--

Regards,

Mark A. Lane

Cryptopocalypse NOW 01 04 2016

Volumes 0.0 -> 10.0 Now available through iTunes - iBooks @ https://itunes.apple.com/au/author/mark-a.-lane/id1100062966?mt=11

© Mark A. Lane 1980 - 2019, All Rights Reserved.
© FooCrypt 1980 - 2019, All Rights Reserved.
© FooCrypt, A Tale of Cynical Cyclical Encryption. 1980 - 2019, All Rights Reserved.
© Cryptopocalypse 1980 - 2019, All Rights Reserved.

> On 1 Jan 2019, at 01:11, Matt Caswell wrote:
> [...]
From levitte at openssl.org Mon Dec 31 15:43:42 2018
From: levitte at openssl.org (Richard Levitte)
Date: Mon, 31 Dec 2018 16:43:42 +0100 (CET)
Subject: [openssl-users] Authentication over ECDHE
In-Reply-To: <11EDC036-87D3-4757-AD33-80C6E608F77F@foocrypt.net>
References: <27fa0004-7684-9343-218b-558682c79201@openssl.org> <11EDC036-87D3-4757-AD33-80C6E608F77F@foocrypt.net>
Message-ID: <20181231.164342.595806582237525869.levitte@openssl.org>

I'll go ahead and ask, how long do you think such a back door would stay unnoticed, let alone survive? I'm considering the fact that we have a lot of people looking at our code, just judging from the issues and pull requests raised on github.

I can't say that I have an actual answer, but it's a question worth asking as well, to see if the T.O.L.A. Act is worth a state of panic or not.

Cheers,
Richard ( who's on vacation and should stop reading these mails )

In message <11EDC036-87D3-4757-AD33-80C6E608F77F at foocrypt.net> on Tue, 1 Jan 2019 01:27:53 +1100, "openssl at foocrypt.net" said:

> Matt et al
>
> 'been reviewed and approved by two current OpenSSL committers (one of whom must be on the OpenSSL Management Committee).'
>
> Due to the recent legislative changes here in Australia around the T.O.L.A. Act, can a change be made to the OpenSSL policy so that the 2 reviewers, don't reside in Australia, or are Australian citizens?
>
> ABI/API changes -> breaks -> back door requests ...
>
> [...]
From openssl at foocrypt.net Mon Dec 31 15:55:52 2018
From: openssl at foocrypt.net (openssl at foocrypt.net)
Date: Tue, 1 Jan 2019 02:55:52 +1100
Subject: [openssl-users] Authentication over ECDHE
In-Reply-To: <20181231.164342.595806582237525869.levitte@openssl.org>
References: <27fa0004-7684-9343-218b-558682c79201@openssl.org> <11EDC036-87D3-4757-AD33-80C6E608F77F@foocrypt.net> <20181231.164342.595806582237525869.levitte@openssl.org>
Message-ID: <2BB81435-719B-4DFD-9FF8-5088B5829B1B@foocrypt.net>

3 - 6 months

Happy New Year..;)

> On 1 Jan 2019, at 02:43, Richard Levitte wrote:
>
> I'll go ahead and ask, how long do you think such a back door would stay unnoticed, let alone survive? I'm considering the fact that we have a lot of people looking at our code, just judging from the issues and pull requests raised on github.
>
> I can't say that I have an actual answer, but it's a question worth asking as well, to see if the T.O.L.A. Act is worth a state of panic or not.
> Cheers,
> Richard ( who's on vacation and should stop reading these mails )
>
> [...]
>
> --
> openssl-users mailing list
> To unsubscribe: https://mta.openssl.org/mailman/listinfo/openssl-users