From brenon.alexis+openssl at gmail.com Fri Feb 1 12:58:42 2019
From: brenon.alexis+openssl at gmail.com (Alexis BRENON @OpenSSL)
Date: Fri, 1 Feb 2019 13:58:42 +0100
Subject: [openssl-users] Some documentation about key derivation and block padding
Message-ID:

Hi everyone,

I am looking for some documentation on how to pad and/or derive my message and my key (from a simple password), to mimic AES 128 ECB en/decryption.

For a decorative purpose (no security consideration in mind), I used openssl to encrypt a small message (less than 16 bytes) with a small key (less than 16 bytes). I used the AES 128 ECB encryption algorithm with no salt. Here is the command line I used:

printf 'my message' | openssl enc -aes-128-ecb -nosalt -pass pass:word

This gave me a block of 16 bytes that I plotted with a script. Then I have another script which rebuilds the ciphered message from the list of 0s and 1s that I can enter manually and then decrypts the message with:

openssl enc -d -aes-128-ecb -nosalt -pass pass:word

And this worked like a charm.

However, recently I saw that running these commands outputs a warning:

*** WARNING : deprecated key derivation used.
Using -iter or -pbkdf2 would be better.

So I decided to re-write the scripts to perform the en/decryption on their own, not relying on future implementations of openssl. Since then, I could not reproduce the same results as the ones obtained with openssl (compatibility is required to be able to decrypt the already printed artworks).

My scripts are in Python and I use the pycrypto library, which provides AES 128 ECB but does not do any padding (it is the responsibility of the user to pad her data). It seems that the message should be padded using the PKCS7 (RFC 2315) standard. Nevertheless, I did not really understand how to pad/derive a 128-bit key from my simple password. The openssl code base seems to use some CRYPTO_128_wrap function, but I don't understand it very well.

So, do you know of some documentation or examples on how to achieve the same behavior as openssl? Is there anything else that I must take care of?

Kind regards,
Alexis.

From brenon.alexis+openssl at gmail.com Mon Feb 4 10:01:06 2019
From: brenon.alexis+openssl at gmail.com (Alexis BRENON @OpenSSL)
Date: Mon, 4 Feb 2019 11:01:06 +0100
Subject: [openssl-users] Some documentation about key derivation and block padding
In-Reply-To:
References:
Message-ID:

Hi all,

So, I found some hints on Stack Overflow (https://stackoverflow.com/questions/6772465/is-there-any-c-api-in-openssl-to-derive-a-key-from-given-string) and an implementation with pyCrypto (https://gist.github.com/mimoo/11383475). I still can't get the expected results, but these raise some questions: how many iterations of PBKDF2 must I do? Must the result of the encryption be hashed with HMAC?

Kind regards,
Alexis.
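For what it's worth, when neither -iter nor -pbkdf2 is given, "openssl enc" derives the key with the legacy EVP_BytesToKey() scheme: a single iteration of the digest that enc defaults to (SHA-256 since OpenSSL 1.1.0, MD5 in 1.0.2 and earlier), with no salt at all here because of -nosalt, and the plaintext is then padded to the 16-byte block with PKCS#7. A minimal C sketch of just that key derivation, assuming an OpenSSL 1.1.x build and the password from the example command line:

#include <stdio.h>
#include <string.h>
#include <openssl/evp.h>

int main(void)
{
    const char *pass = "word";        /* password from the example command */
    unsigned char key[16], iv[16];    /* AES-128-ECB: 16-byte key, IV unused */

    /* -nosalt => NULL salt; the legacy derivation uses a single iteration.
     * Swap EVP_sha256() for EVP_md5() to match OpenSSL 1.0.2 and earlier. */
    int n = EVP_BytesToKey(EVP_aes_128_ecb(), EVP_sha256(), NULL,
                           (const unsigned char *)pass, (int)strlen(pass),
                           1, key, iv);
    if (n != 16)
        return 1;

    for (int i = 0; i < 16; i++)
        printf("%02x", key[i]);       /* the raw AES-128 key enc would use */
    printf("\n");
    return 0;
}

Reproducing openssl's ciphertext elsewhere then only requires this derivation plus PKCS#7 padding (each padding byte equals the number of bytes added); CRYPTO_128_wrap() is the RFC 3394 AES key-wrap routine and is not involved in this code path.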
On Fri, Feb 1, 2019 at 13:58, Alexis BRENON @OpenSSL wrote:
>
> Hi everyone,
>
> I am looking for some documentation on how to pad and/or derive my
> message and my key (from a simple password), to mimic AES 128 ECB
> en/decryption.
>
> For a decorative purpose (no security consideration in mind), I used
> openssl to encrypt a small message (less than 16 bytes) with a small
> key (less than 16 bytes). I used the AES 128 ECB encryption algorithm
> with no salt. Here is the command line I used:
> printf 'my message' | openssl enc -aes-128-ecb -nosalt -pass pass:word
> This gave me a block of 16 bytes that I plotted with a script. Then I
> have another script which rebuilds the ciphered message from the list
> of 0s and 1s that I can enter manually and then decrypts the message
> with:
> openssl enc -d -aes-128-ecb -nosalt -pass pass:word
> And this worked like a charm.
>
> However, recently I saw that running these commands outputs a warning:
> *** WARNING : deprecated key derivation used.
> Using -iter or -pbkdf2 would be better.
> So I decided to re-write the scripts to perform the en/decryption on
> their own, not relying on future implementations of openssl. Since
> then, I could not reproduce the same results as the ones obtained with
> openssl (compatibility is required to be able to decrypt the already
> printed artworks).
>
> My scripts are in Python and I use the pycrypto library, which provides
> AES 128 ECB but does not do any padding (it is the responsibility of
> the user to pad her data). It seems that the message should be padded
> using the PKCS7 (RFC 2315) standard. Nevertheless, I did not really
> understand how to pad/derive a 128-bit key from my simple password.
> The openssl code base seems to use some CRYPTO_128_wrap function, but
> I don't understand it very well.
>
> So, do you know of some documentation or examples on how to achieve the
> same behavior as openssl? Is there anything else that I must take
> care of?
>
> Kind regards,
> Alexis.

From hkario at redhat.com Mon Feb 4 15:52:12 2019
From: hkario at redhat.com (Hubert Kario)
Date: Mon, 04 Feb 2019 16:52:12 +0100
Subject: [openssl-users] Adding custom OBJ identifiers
In-Reply-To:
References:
Message-ID: <5023460.sNSupHX0md@pintsize.usersys.redhat.com>

On Thursday, 31 January 2019 11:09:00 CET Dmitry Belyavsky wrote:
> Hello,
>
> What is best practice to add own object identifiers to the
> crypto/objects/* files?
>
> It's not a problem to add all the necessary strings to the
> crypto/objects/objects.txt file and invoke 'make generate_crypto_objects',
> but during the branch development, the changes in the main openssl branch
> usually cause numerous merge conflicts. So any advice is appreciated.

Why is using oid_section in the config file
(https://www.openssl.org/docs/man1.0.2/man5/config.html) not workable for
you?

--
Regards,
Hubert Kario
Senior Quality Engineer, QE BaseOS Security team
Web: www.cz.redhat.com
Red Hat Czech s.r.o., Purkyňova 115, 612 00 Brno, Czech Republic

From beldmit at gmail.com Mon Feb 4 15:56:56 2019
From: beldmit at gmail.com (Dmitry Belyavsky)
Date: Mon, 4 Feb 2019 18:56:56 +0300
Subject: [openssl-users] Adding custom OBJ identifiers
In-Reply-To: <5023460.sNSupHX0md@pintsize.usersys.redhat.com>
References: <5023460.sNSupHX0md@pintsize.usersys.redhat.com>
Message-ID:

Dear Hubert,

On Mon, Feb 4, 2019 at 6:52 PM Hubert Kario wrote:
> On Thursday, 31 January 2019 11:09:00 CET Dmitry Belyavsky wrote:
> > Hello,
> >
> > What is best practice to add own object identifiers to the
> > crypto/objects/* files?
> >
> > It's not a problem to add all the necessary strings to the
> > crypto/objects/objects.txt file and invoke 'make generate_crypto_objects',
> > but during the branch development, the changes in the main openssl branch
> > usually cause numerous merge conflicts. So any advice is appreciated.
>
> Why is using oid_section in the config file
> (https://www.openssl.org/docs/man1.0.2/man5/config.html) not workable for
> you?
> > I need to add the NIDs to some internal openssl lists, such as algorithm identifiers for TLS ciphersuites. -- SY, Dmitry Belyavsky -------------- next part -------------- An HTML attachment was scrubbed... URL: From vieuxtech at gmail.com Mon Feb 4 23:54:48 2019 From: vieuxtech at gmail.com (Sam Roberts) Date: Mon, 4 Feb 2019 15:54:48 -0800 Subject: [openssl-users] how is it possible to confirm that a TLS ticket was used? Message-ID: And is it possible that this is different for TLS1.2 and 1.3? Using TLS1.3, SSL_session_reused() is always returning false, I'm not sure if that's because I'm doing something else wrong, and the ticket is not being accepted and a full handshake is occurring, or if the API literally only signals "session reuse" not "tls ticket" reuse. Its also not clear from the docs if this API is supposed to work for both client & server sides. With TLS1.2, I notice that the cb to SSL_CTX_sess_set_new_cb() occurs when a session is NOT reused (and I guess a new ticket is issued), but in situation that I would expect the session to be resumed, I don't get the callback. I assume this is because it doesn't make sense to issue more tickets for a resumed connection? This gives me some confidence that ticket use is occurring. For 1.3, I'm always getting the callback (twice per connection, of course), which makes me think that somehow my ticket reuse code is working only for 1.2. For both, I'm getting the session in the new session callback, and then setting it with SSL_set_session(), so I'd expect resumption to work for either protocol. Thanks, Sam From openssl-users at dukhovni.org Tue Feb 5 00:57:53 2019 From: openssl-users at dukhovni.org (Viktor Dukhovni) Date: Mon, 4 Feb 2019 19:57:53 -0500 Subject: [openssl-users] how is it possible to confirm that a TLS ticket was used? In-Reply-To: References: Message-ID: <20190205005753.GZ79754@straasha.imrryr.org> On Mon, Feb 04, 2019 at 03:54:48PM -0800, Sam Roberts wrote: > And is it possible that this is different for TLS1.2 and 1.3? The resumption API is the same. However, because in TLS 1.3, session tickets are sent *after* the completion of the handshake, it is possible that the session handle you're saving is the one that does not yet have any associated tickets, because they've not yet been received. Session ticket resumption is working with Postfix and TLS 1.3. 
$ posttls-finger -c -Lsummary,cache,ssl-debug -r 4 smtp.dukhovni.org posttls-finger: looking for session [100.2.39.101]:25&4A46567FCBCF5C0617FE221FA66FD0CB8F240EB24DB6BD261D53255FC8C9BE94 in memory cache posttls-finger: smtp.dukhovni.org[100.2.39.101]:25: SNI hostname: smtp.dukhovni.org posttls-finger: SSL_connect:before SSL initialization posttls-finger: SSL_connect:SSLv3/TLS write client hello posttls-finger: SSL_connect:SSLv3/TLS write client hello posttls-finger: SSL_connect:SSLv3/TLS read server hello posttls-finger: SSL_connect:TLSv1.3 read encrypted extensions posttls-finger: SSL_connect:SSLv3/TLS read server certificate posttls-finger: SSL_connect:TLSv1.3 read server certificate verify posttls-finger: SSL_connect:SSLv3/TLS read finished posttls-finger: SSL_connect:SSLv3/TLS write change cipher spec posttls-finger: SSL_connect:SSLv3/TLS write finished posttls-finger: Verified TLS connection established to smtp.dukhovni.org[100.2.39.101]:25: TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (2048 bits) server-digest SHA256 posttls-finger: SSL_connect:SSL negotiation finished successfully posttls-finger: SSL_connect:SSL negotiation finished successfully posttls-finger: save session [100.2.39.101]:25&4A46567FCBCF5C0617FE221FA66FD0CB8F240EB24DB6BD261D53255FC8C9BE94 to memory cache posttls-finger: SSL_connect:SSLv3/TLS read server session ticket posttls-finger: Reconnecting after 4 seconds posttls-finger: looking for session [100.2.39.101]:25&4A46567FCBCF5C0617FE221FA66FD0CB8F240EB24DB6BD261D53255FC8C9BE94 in memory cache posttls-finger: reloaded session [100.2.39.101]:25&4A46567FCBCF5C0617FE221FA66FD0CB8F240EB24DB6BD261D53255FC8C9BE94 from memory cache posttls-finger: smtp.dukhovni.org[100.2.39.101]:25: SNI hostname: smtp.dukhovni.org posttls-finger: SSL_connect:before SSL initialization posttls-finger: SSL_connect:SSLv3/TLS write client hello posttls-finger: SSL_connect:SSLv3/TLS write client hello posttls-finger: SSL_connect:SSLv3/TLS read server hello posttls-finger: SSL_connect:TLSv1.3 read encrypted extensions posttls-finger: SSL_connect:SSLv3/TLS read finished posttls-finger: SSL_connect:SSLv3/TLS write change cipher spec posttls-finger: SSL_connect:SSLv3/TLS write finished posttls-finger: smtp.dukhovni.org[100.2.39.101]:25: Reusing old session posttls-finger: Verified TLS connection established to smtp.dukhovni.org[100.2.39.101]:25: TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (2048 bits) posttls-finger: Found a previously used server. Done reconnecting. -- Viktor. From matt at openssl.org Tue Feb 5 09:46:50 2019 From: matt at openssl.org (Matt Caswell) Date: Tue, 5 Feb 2019 09:46:50 +0000 Subject: [openssl-users] how is it possible to confirm that a TLS ticket was used? In-Reply-To: References: Message-ID: On 04/02/2019 23:54, Sam Roberts wrote: > And is it possible that this is different for TLS1.2 and 1.3? > > Using TLS1.3, SSL_session_reused() is always returning false, I'm not > sure if that's because I'm doing something else wrong, and the ticket > is not being accepted and a full handshake is occurring, or if the API > literally only signals "session reuse" not "tls ticket" reuse. Its > also not clear from the docs if this API is supposed to work for both > client & server sides. SSL_session_reused() works in both TLSv1.2 and TLSv1.3 on both the client and the server, regardless of whether the reuse comes from a traditional session or from a ticket. 
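As a rough client-side illustration of that check (a sketch only: error handling is trimmed, and the already-connected socket fd and the long-lived SSL_CTX are assumed to come from the application):

#include <openssl/ssl.h>

static SSL_SESSION *saved_session;   /* most recently received session/ticket */

/* New-session callback: with TLS 1.3 this fires once per ticket, after the
 * handshake has already completed.  Returning 1 means we keep the reference. */
static int new_session_cb(SSL *ssl, SSL_SESSION *sess)
{
    if (saved_session != NULL)
        SSL_SESSION_free(saved_session);
    saved_session = sess;
    return 1;
}

/* Handshake over an already-connected TCP socket and report whether the
 * previously saved session/ticket was actually reused. */
int connect_and_report_reuse(SSL_CTX *ctx, int fd)
{
    SSL_CTX_set_session_cache_mode(ctx, SSL_SESS_CACHE_CLIENT);
    SSL_CTX_sess_set_new_cb(ctx, new_session_cb);

    SSL *ssl = SSL_new(ctx);
    SSL_set_fd(ssl, fd);
    if (saved_session != NULL)
        SSL_set_session(ssl, saved_session);     /* offer it for resumption */

    if (SSL_connect(ssl) != 1) {
        SSL_free(ssl);
        return -1;
    }
    int reused = SSL_session_reused(ssl);        /* 1 for TLS 1.2 and 1.3 alike */
    /* ... application traffic, SSL_shutdown() and SSL_free() would follow ... */
    return reused;
}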
If you're always getting false in TLSv1.3 then you are failing to resume in TLSv1.3. > With TLS1.2, I notice that the cb to SSL_CTX_sess_set_new_cb() occurs > when a session is NOT reused (and I guess a new ticket is issued), but > in situation that I would expect the session to be resumed, I don't > get the callback. I assume this is because it doesn't make sense to > issue more tickets for a resumed connection? This gives me some > confidence that ticket use is occurring. > > For 1.3, I'm always getting the callback (twice per connection, of > course), which makes me think that somehow my ticket reuse code is > working only for 1.2. In TLSv1.3, by default, we issue two tickets if session reuse did not occur, and one if reuse did occur. > For both, I'm getting the session in the new session callback, and > then setting it with SSL_set_session(), so I'd expect resumption to > work for either protocol. Yes - it should. It would be helpful to check whether the ticket is actually appearing in the ClientHello or not. Matt From vieuxtech at gmail.com Tue Feb 5 15:41:05 2019 From: vieuxtech at gmail.com (Sam Roberts) Date: Tue, 5 Feb 2019 07:41:05 -0800 Subject: [openssl-users] how is it possible to confirm that a TLS ticket was used? In-Reply-To: <20190205005753.GZ79754@straasha.imrryr.org> References: <20190205005753.GZ79754@straasha.imrryr.org> Message-ID: On Mon, Feb 4, 2019 at 9:46 PM Viktor Dukhovni wrote: > On Mon, Feb 04, 2019 at 03:54:48PM -0800, Sam Roberts wrote: > However, because in TLS 1.3, session > tickets are sent *after* the completion of the handshake, it is > possible that the session handle you're saving is the one that does > not yet have any associated tickets, because they've not yet been > received. I'm saving the session that is passed to the callback in SSL_CTX_sess_set_new_cb() as described in https://wiki.openssl.org/index.php/TLS1.3#Sessions. > posttls-finger: smtp.dukhovni.org[100.2.39.101]:25: Reusing old session What API are you using to confirm that the ticket was used to resume the session? SSL_session_reused? Thanks, Sam From openssl-users at dukhovni.org Tue Feb 5 16:33:50 2019 From: openssl-users at dukhovni.org (Viktor Dukhovni) Date: Tue, 5 Feb 2019 11:33:50 -0500 Subject: [openssl-users] how is it possible to confirm that a TLS ticket was used? In-Reply-To: References: <20190205005753.GZ79754@straasha.imrryr.org> Message-ID: > On Feb 5, 2019, at 10:41 AM, Sam Roberts wrote: > >> However, because in TLS 1.3, session >> tickets are sent *after* the completion of the handshake, it is >> possible that the session handle you're saving is the one that does >> not yet have any associated tickets, because they've not yet been >> received. > > I'm saving the session that is passed to the callback in > SSL_CTX_sess_set_new_cb() as described in > https://wiki.openssl.org/index.php/TLS1.3#Sessions. And then? How are you restoring the saved session for re-use? > >> posttls-finger: smtp.dukhovni.org[100.2.39.101]:25: Reusing old session > > What API are you using to confirm that the ticket was used to resume > the session? SSL_session_reused? Yes. -- Viktor. From vieuxtech at gmail.com Tue Feb 5 22:43:03 2019 From: vieuxtech at gmail.com (Sam Roberts) Date: Tue, 5 Feb 2019 14:43:03 -0800 Subject: [openssl-users] how is it possible to confirm that a TLS ticket was used? 
In-Reply-To:
References: <20190205005753.GZ79754@straasha.imrryr.org>
Message-ID:

I tracked down my problem. It's due to a change in the relative order of
handshake completion (as detected by the info callback, anyhow) and the
callback to SSL_CTX_set_tlsext_ticket_key_cb().

With TLS1.2, I can rotate ticket keys on the server when the handshake
completes, and they will only apply to the next connection.

With TLS1.3, the tickets haven't been sent yet at the time the handshake
completes, so when I "rotate" the keys, the new keys are used immediately
afterwards in the ticket_key_cb to encrypt the tickets for the connection
that just completed its handshake.

It's semi-obvious in retrospect, after having read our ticket key handling
code, but it took me a while to find it. And it turns out that yes,
SSL_session_reused() does work with TLS tickets.

Thanks for the suggestions, Viktor.

Cheers,
Sam

From openssl-users at dukhovni.org Wed Feb 6 03:25:15 2019
From: openssl-users at dukhovni.org (Viktor Dukhovni)
Date: Tue, 5 Feb 2019 22:25:15 -0500
Subject: [openssl-users] how is it possible to confirm that a TLS ticket was used?
In-Reply-To:
References: <20190205005753.GZ79754@straasha.imrryr.org>
Message-ID: <20190206032515.GB79754@straasha.imrryr.org>

On Tue, Feb 05, 2019 at 02:43:03PM -0800, Sam Roberts wrote:
> I tracked down my problem. It's due to a change in the relative order
> of handshake completion (as detected by the info callback, anyhow)
> and the callback to SSL_CTX_set_tlsext_ticket_key_cb().
>
> With TLS1.2, I can rotate ticket keys on the server when the handshake
> completes, and they will only apply to the next connection.
>
> With TLS1.3, the tickets haven't been sent yet at the time the
> handshake completes, so when I "rotate" the keys, the new keys are
> used immediately afterwards in the ticket_key_cb to encrypt the
> tickets for the connection that just completed its handshake.

Your ticket rotation approach looks a bit fragile.

Postfix keeps two session ticket keys in memory: one that's used both to
encrypt new tickets and to decrypt freshly issued tickets, and another
that's used only to decrypt unexpired tickets that were issued just
before the new key was introduced. This maintains session ticket
continuity across a single key change. The key change interval is either
equal to or twice the maximum ticket lifetime, ensuring that tickets are
only invalidated by expiration, not key rotation.

--
Viktor.

From hkario at redhat.com Wed Feb 6 11:30:21 2019
From: hkario at redhat.com (Hubert Kario)
Date: Wed, 06 Feb 2019 12:30:21 +0100
Subject: [openssl-users] Adding custom OBJ identifiers
In-Reply-To:
References: <5023460.sNSupHX0md@pintsize.usersys.redhat.com>
Message-ID: <4226223.KjAbKKt3Wn@pintsize.usersys.redhat.com>

On Monday, 4 February 2019 16:56:56 CET Dmitry Belyavsky wrote:
> Dear Hubert,
>
> On Mon, Feb 4, 2019 at 6:52 PM Hubert Kario wrote:
> > On Thursday, 31 January 2019 11:09:00 CET Dmitry Belyavsky wrote:
> > > Hello,
> > >
> > > What is best practice to add own object identifiers to the
> > > crypto/objects/* files?
> > >
> > > It's not a problem to add all the necessary strings to the
> > > crypto/objects/objects.txt file and invoke 'make generate_crypto_objects',
> > > but during the branch development, the changes in the main openssl branch
> > > usually cause numerous merge conflicts. So any advice is appreciated.
> >
> > Why is using oid_section in the config file
> > (https://www.openssl.org/docs/man1.0.2/man5/config.html) not workable
> > for you?
> > I need to add the NIDs to some internal openssl lists, such as > algorithm identifiers for TLS ciphersuites. ah, sorry, too much ASN.1 recently, I immediately equated OBJ identifiers with OIDs -- Regards, Hubert Kario Senior Quality Engineer, QE BaseOS Security team Web: www.cz.redhat.com Red Hat Czech s.r.o., Purky?ova 115, 612 00 Brno, Czech Republic -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: This is a digitally signed message part. URL: From minoda.magar at collins.com Wed Feb 6 23:31:53 2019 From: minoda.magar at collins.com (Magar, Minoda Collins) Date: Wed, 6 Feb 2019 23:31:53 +0000 Subject: [openssl-users] openssl verify with crl_check_all and partial chain flags Message-ID: <71ad250c4b704b21a48d70bafa4b0879@UUSALE0Q.utcmail.com> Hi all, While trying to verify a client certificate using openssl verify with -crl_check_all and ?partial_chain options set , I get the following error: error 8 at 1 depth lookup: CRL signature failure error client1.pem: verification failed Here is the command used: openssl verify -crl_check -crl_check_all -CAfile ca_chain_crl.pem -partial_chain -show_chain client1.pem ca_chain_crl.pem file has one intermediate and one root certificate and two CRLs(issued by the intermediate and root CAs). Openssl verify without ?partial_chain or ?crl_check_all works. Are we not supposed to use openssl verify with these two options set at the same time? Thanks -------------- next part -------------- An HTML attachment was scrubbed... URL: From rajin6594 at gmail.com Fri Feb 8 17:20:07 2019 From: rajin6594 at gmail.com (Rajinder Pal Singh) Date: Fri, 8 Feb 2019 12:20:07 -0500 Subject: [openssl-users] How to use a specific ip interface while testing TLS/SSL connectivity. Message-ID: Hi, I want to use a specific ip interface (out of several available ethernet interfaces available on my server) to test TLS/SSL connectivity to a remote server. Wondering if its possible? Regards, Rajinder. -------------- next part -------------- An HTML attachment was scrubbed... URL: From Michael.Wojcik at microfocus.com Fri Feb 8 17:55:58 2019 From: Michael.Wojcik at microfocus.com (Michael Wojcik) Date: Fri, 8 Feb 2019 17:55:58 +0000 Subject: [openssl-users] How to use a specific ip interface while testing TLS/SSL connectivity. In-Reply-To: References: Message-ID: > From: openssl-users [mailto:openssl-users-bounces at openssl.org] On Behalf Of Rajinder Pal Singh > Sent: Friday, February 08, 2019 12:20 > I want to use a specific ip interface (out of several available ethernet interfaces available > on my server) to test TLS/SSL connectivity to a remote server. This isn't an OpenSSL question; it's a networking-API question. For IPv4: Create your socket, bind it to the local interface you want to use (specifying a port of 0 if you want an ephemeral port assigned as in the usual case), then connect to the peer. You'll probably want to enable SO_REUSEADDR on the socket before calling bind. Once the connection is established, create the OpenSSL socket BIO and associate it with your socket. For IPv6: You should be able to use a scope zone ID to force a particular local interface. The easiest way to do this is to specify the appropriate zone ID suffix (which might look like e.g. "%15" or "%eth1") on the text representation of the peer's address, then use getaddrinfo with the AI_NUMERICHOST hint to convert it to a sockaddr_in6 structure with the correct scope zone ID field value. 
Then connect using that, create BIO, etc. Note that all of this will only work if the peer can actually be reached using that interface. Another alternative is to configure your routing table with a host route to the peer using the desired interface. -- Michael Wojcik Distinguished Engineer, Micro Focus From openssl-users at dukhovni.org Fri Feb 8 18:00:03 2019 From: openssl-users at dukhovni.org (Viktor Dukhovni) Date: Fri, 8 Feb 2019 13:00:03 -0500 Subject: [openssl-users] How to use a specific ip interface while testing TLS/SSL connectivity. In-Reply-To: References: Message-ID: <2970FC37-C052-4EFC-AEA5-03271A9A6B42@dukhovni.org> > On Feb 8, 2019, at 12:55 PM, Michael Wojcik wrote: > > For IPv4: Create your socket, bind it to the local interface you want to use (specifying a port of 0 if you want an ephemeral port assigned as in the usual case), then connect to the peer. You'll probably want to enable SO_REUSEADDR on the socket before calling bind. For the record, one should *not* use SO_REUSEADDR for client sockets used in outbound connections. -- Viktor. From openssl at foocrypt.net Fri Feb 8 18:06:48 2019 From: openssl at foocrypt.net (openssl at foocrypt.net) Date: Sat, 9 Feb 2019 05:06:48 +1100 Subject: [openssl-users] How to use a specific ip interface while testing TLS/SSL connectivity. In-Reply-To: References: Message-ID: Hi Rajinder There shouldn?t be any issues depending on how your host OS is performing the routing to the network the SSL/TLS endpoint is on. Try a tracerout to the IP to see where it goes, and a telnet IP 80 or 443 to make sure you can connect to the web server. ? Regards, Mark A. Lane > On 9 Feb 2019, at 04:20, Rajinder Pal Singh > wrote: > > Hi, > > I want to use a specific ip interface (out of several available ethernet interfaces available on my server) to test TLS/SSL connectivity to a remote server. > > > Wondering if its possible? > > > Regards, > Rajinder. > -- > openssl-users mailing list > To unsubscribe: https://mta.openssl.org/mailman/listinfo/openssl-users -------------- next part -------------- An HTML attachment was scrubbed... URL: From Michael.Wojcik at microfocus.com Fri Feb 8 18:48:33 2019 From: Michael.Wojcik at microfocus.com (Michael Wojcik) Date: Fri, 8 Feb 2019 18:48:33 +0000 Subject: [openssl-users] How to use a specific ip interface while testing TLS/SSL connectivity. In-Reply-To: <2970FC37-C052-4EFC-AEA5-03271A9A6B42@dukhovni.org> References: <2970FC37-C052-4EFC-AEA5-03271A9A6B42@dukhovni.org> Message-ID: > From: openssl-users [mailto:openssl-users-bounces at openssl.org] On Behalf Of > Viktor Dukhovni > Sent: Friday, February 08, 2019 13:00 > > > On Feb 8, 2019, at 12:55 PM, Michael Wojcik > wrote: > > > > For IPv4: Create your socket, bind it to the local interface you want to > use (specifying a port of 0 if you want an ephemeral port assigned as in the > usual case), then connect to the peer. You'll probably want to enable > SO_REUSEADDR on the socket before calling bind. > > For the record, one should *not* use SO_REUSEADDR for client sockets used in > outbound connections. Not usually, but in the specific case of testing connections bound to specific local addresses - an artificial use case - it will either avoid having to wait for the 2MSL timer to expire (if you bind to a specific local port) or exhausing the ephemeral port space (if you use a stack-assigned ephemeral port) if you're making a lot of short-lived connections. 
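A bare-bones POSIX sketch of the recipe described above (bind the socket to the chosen local address, connect, then hand the fd to OpenSSL); the local interface 192.0.2.10 and the peer 198.51.100.20:443 are placeholder addresses and error handling is minimal:

#include <string.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <openssl/ssl.h>

SSL *connect_from_interface(SSL_CTX *ctx)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    int on = 1;
    struct sockaddr_in local, peer;

    setsockopt(fd, SOL_SOCKET, SO_REUSEADDR, &on, sizeof(on)); /* test setups only */

    memset(&local, 0, sizeof(local));
    local.sin_family = AF_INET;
    local.sin_port = htons(0);                        /* ephemeral local port */
    inet_pton(AF_INET, "192.0.2.10", &local.sin_addr);

    memset(&peer, 0, sizeof(peer));
    peer.sin_family = AF_INET;
    peer.sin_port = htons(443);
    inet_pton(AF_INET, "198.51.100.20", &peer.sin_addr);

    if (bind(fd, (struct sockaddr *)&local, sizeof(local)) != 0 ||
        connect(fd, (struct sockaddr *)&peer, sizeof(peer)) != 0) {
        close(fd);
        return NULL;
    }

    SSL *ssl = SSL_new(ctx);
    SSL_set_fd(ssl, fd);              /* or wrap fd in a socket BIO explicitly */
    if (SSL_connect(ssl) == 1)
        return ssl;
    SSL_free(ssl);
    close(fd);
    return NULL;
}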
Obviously bypassing TIME_WAIT this way introduces precisely the problem that TIME_WAIT exists to prevent: picking up data from a previous connection. However, modern stacks with randomized ISNs make the failure mode for that situation more palatable (more likely to produce an error state rather than silently accepting the stale data), and applications that implement their own session and/or presentation layers on top of the TCP bytestream will typically do a good job of 1) ensuring there isn't any stale data, and 2) detecting it if there is. TLS provides such a layer. I recognize that the use of SO_REUSEADDR on the active-open (client) side is controversial, but this particular use case shouldn't appear in a production environment anyway. -- Michael Wojcik Distinguished Engineer, Micro Focus From rajin6594 at gmail.com Fri Feb 8 20:53:48 2019 From: rajin6594 at gmail.com (Rajinder Pal Singh) Date: Fri, 8 Feb 2019 15:53:48 -0500 Subject: [openssl-users] How to use a specific ip interface while testing TLS/SSL connectivity. In-Reply-To: <21C2335C-20EA-4FF9-B570-1432954F0EBF@foocrypt.net> References: <21C2335C-20EA-4FF9-B570-1432954F0EBF@foocrypt.net> Message-ID: Thanks Mark for the prompt reply. Absolutely makes sense. Actually, i am on Nonstop HPE servers. There are no internal routing tables or so to say static routes. Environment is different from unix/linux. >From Application perspective, we choose what ip interface to use. Wondering if we can force the openssl to use specific interface? Regards. On Fri, Feb 8, 2019, 12:26 PM mark at foocrypt.net Hi Rajinder > > There shouldn?t be any issues depending on how your host OS is performing > the routing to the network the SSL/TLS endpoint is on. > > Try a tracerout to the IP to see where it goes, and a telnet IP 80 or 443 > to make sure you can connect to the web server. > > ? > > Regards, > > Mark A. Lane > > > > > On 9 Feb 2019, at 04:20, Rajinder Pal Singh wrote: > > Hi, > > I want to use a specific ip interface (out of several available ethernet > interfaces available on my server) to test TLS/SSL connectivity to a remote > server. > > > Wondering if its possible? > > > Regards, > Rajinder. > -- > openssl-users mailing list > To unsubscribe: https://mta.openssl.org/mailman/listinfo/openssl-users > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From openssl at foocrypt.net Sat Feb 9 13:45:33 2019 From: openssl at foocrypt.net (openssl at foocrypt.net) Date: Sun, 10 Feb 2019 00:45:33 +1100 Subject: [openssl-users] How to use a specific ip interface while testing TLS/SSL connectivity. In-Reply-To: References: <21C2335C-20EA-4FF9-B570-1432954F0EBF@foocrypt.net> Message-ID: <404C9B96-1D50-4DB5-B01A-3E342685B7AC@foocrypt.net> HI Rajinder Perhaps a tunnel may help ? Have a look at man -s ssh, check out binding to interfaces and setting up a tunnel from one Nic through to your endpoint. Have a look at nectar or nc as its called these days for listening on the endpoint of the tunnel as your basic http 1.1 server, and redirect the output to a file to see what it is receiving. https://unix.stackexchange.com/questions/32182/simple-command-line-http-server may help You could write a quick shell script in KORN and open up a TCP socket connection to your web server and just feed it the raw SSL/TLS packets captured from the hand shake from another session captured with tcpdump, snoop, etc. Regards, Mark A. Lane > On 9 Feb 2019, at 07:53, Rajinder Pal Singh wrote: > > Thanks Mark for the prompt reply. 
Absolutely makes sense. Actually, i am on Nonstop HPE servers. There are no internal routing tables or so to say static routes. Environment is different from unix/linux. > > From Application perspective, we choose what ip interface to use. > > Wondering if we can force the openssl to use specific interface? > > Regards. > > > > On Fri, Feb 8, 2019, 12:26 PM mark at foocrypt.net wrote: > Hi Rajinder > > There shouldn?t be any issues depending on how your host OS is performing the routing to the network the SSL/TLS endpoint is on. > > Try a tracerout to the IP to see where it goes, and a telnet IP 80 or 443 to make sure you can connect to the web server. > > ? > > Regards, > > Mark A. Lane > > > > >> On 9 Feb 2019, at 04:20, Rajinder Pal Singh > wrote: >> >> Hi, >> >> I want to use a specific ip interface (out of several available ethernet interfaces available on my server) to test TLS/SSL connectivity to a remote server. >> >> >> Wondering if its possible? >> >> >> Regards, >> Rajinder. >> -- >> openssl-users mailing list >> To unsubscribe: https://mta.openssl.org/mailman/listinfo/openssl-users > > -- > openssl-users mailing list > To unsubscribe: https://mta.openssl.org/mailman/listinfo/openssl-users -------------- next part -------------- An HTML attachment was scrubbed... URL: From vieuxtech at gmail.com Sat Feb 9 20:05:11 2019 From: vieuxtech at gmail.com (Sam Roberts) Date: Sat, 9 Feb 2019 12:05:11 -0800 Subject: [openssl-users] how is it possible to confirm that a TLS ticket was used? In-Reply-To: <20190206032515.GB79754@straasha.imrryr.org> References: <20190205005753.GZ79754@straasha.imrryr.org> <20190206032515.GB79754@straasha.imrryr.org> Message-ID: On Wed, Feb 6, 2019 at 1:01 PM Viktor Dukhovni wrote: > On Tue, Feb 05, 2019 at 02:43:03PM -0800, Sam Roberts wrote: > Your ticket rotation approach looks a bit fragile. I agree, though perhaps I should not have described what was happening as rotation. The test that was failing with TLS1.3 was one in which clearing the ticket keys was supposed to invalidate previously issued keys, but it wasn't (at least, not in the same way as it did for 1.2). > Postfix keeps two session ticket keys in memory, one that's used > to both encrypt new tickets and decrypt freshly issued tickets, and > other that's used only decrypt unexpired tickets that were isssued > just before the new key was introduced. This maintains session > ticket continuity across a single key change. The key change interval > is either equal to or is twice the maximum ticket lifetime, ensuring > that tickets are only invalidated by expiration, not key rotation. This seems a very reasonable approach, I may propose it as the default after we have 1.3 support, thanks. Cheers, Sam From aerowolf at gmail.com Sat Feb 9 20:32:55 2019 From: aerowolf at gmail.com (Kyle Hamilton) Date: Sat, 9 Feb 2019 14:32:55 -0600 Subject: [openssl-users] How to use a specific ip interface while testing TLS/SSL connectivity. In-Reply-To: References: <21C2335C-20EA-4FF9-B570-1432954F0EBF@foocrypt.net> Message-ID: It appears you could create() a socket, bind() it to the interface you want to use, possibly connect() it, and then pass it to either BIO_s_connect() or BIO_s_socket() depending on which meets your needs. -Kyle H On Sat, Feb 9, 2019 at 7:21 AM Rajinder Pal Singh wrote: > > Thanks Mark for the prompt reply. Absolutely makes sense. Actually, i am on Nonstop HPE servers. There are no internal routing tables or so to say static routes. Environment is different from unix/linux. 
> > From Application perspective, we choose what ip interface to use. > > Wondering if we can force the openssl to use specific interface? > > Regards. > > > > On Fri, Feb 8, 2019, 12:26 PM mark at foocrypt.net > >> Hi Rajinder >> >> There shouldn?t be any issues depending on how your host OS is performing the routing to the network the SSL/TLS endpoint is on. >> >> Try a tracerout to the IP to see where it goes, and a telnet IP 80 or 443 to make sure you can connect to the web server. >> >> ? >> >> Regards, >> >> Mark A. Lane >> >> >> >> >> On 9 Feb 2019, at 04:20, Rajinder Pal Singh wrote: >> >> Hi, >> >> I want to use a specific ip interface (out of several available ethernet interfaces available on my server) to test TLS/SSL connectivity to a remote server. >> >> >> Wondering if its possible? >> >> >> Regards, >> Rajinder. >> -- >> openssl-users mailing list >> To unsubscribe: https://mta.openssl.org/mailman/listinfo/openssl-users >> >> > -- > openssl-users mailing list > To unsubscribe: https://mta.openssl.org/mailman/listinfo/openssl-users From rajin6594 at gmail.com Sat Feb 9 23:38:59 2019 From: rajin6594 at gmail.com (Rajinder Pal Singh) Date: Sat, 9 Feb 2019 18:38:59 -0500 Subject: [openssl-users] How to use a specific ip interface while testing TLS/SSL connectivity. In-Reply-To: <404C9B96-1D50-4DB5-B01A-3E342685B7AC@foocrypt.net> References: <21C2335C-20EA-4FF9-B570-1432954F0EBF@foocrypt.net> <404C9B96-1D50-4DB5-B01A-3E342685B7AC@foocrypt.net> Message-ID: Thanks Mark. Will definitely try this. Appreciate your help. Will keep you losted. Regards. On Sat, Feb 9, 2019, 8:45 AM openssl at foocrypt.net HI Rajinder > > Perhaps a tunnel may help ? > > Have a look at man -s ssh, check out binding to interfaces and setting up > a tunnel from one Nic through to your endpoint. > > Have a look at nectar or nc as its called these days for listening on the > endpoint of the tunnel as your basic http 1.1 server, and redirect the > output to a file to see what it is receiving. > > > https://unix.stackexchange.com/questions/32182/simple-command-line-http-server may > help > > You could write a quick shell script in KORN and open up a TCP socket > connection to your web server and just feed it the raw SSL/TLS packets > captured from the hand shake from another session captured with tcpdump, > snoop, etc. > > Regards, > > Mark A. Lane > > > On 9 Feb 2019, at 07:53, Rajinder Pal Singh wrote: > > Thanks Mark for the prompt reply. Absolutely makes sense. Actually, i am > on Nonstop HPE servers. There are no internal routing tables or so to say > static routes. Environment is different from unix/linux. > > From Application perspective, we choose what ip interface to use. > > Wondering if we can force the openssl to use specific interface? > > Regards. > > > > On Fri, Feb 8, 2019, 12:26 PM mark at foocrypt.net >> Hi Rajinder >> >> There shouldn?t be any issues depending on how your host OS is performing >> the routing to the network the SSL/TLS endpoint is on. >> >> Try a tracerout to the IP to see where it goes, and a telnet IP 80 or 443 >> to make sure you can connect to the web server. >> >> ? >> >> Regards, >> >> Mark A. Lane >> >> >> >> >> On 9 Feb 2019, at 04:20, Rajinder Pal Singh wrote: >> >> Hi, >> >> I want to use a specific ip interface (out of several available ethernet >> interfaces available on my server) to test TLS/SSL connectivity to a remote >> server. >> >> >> Wondering if its possible? >> >> >> Regards, >> Rajinder. 
>> -- >> openssl-users mailing list >> To unsubscribe: https://mta.openssl.org/mailman/listinfo/openssl-users >> >> >> -- > openssl-users mailing list > To unsubscribe: https://mta.openssl.org/mailman/listinfo/openssl-users > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From scott_n at xypro.com Mon Feb 11 16:31:07 2019 From: scott_n at xypro.com (Scott Neugroschl) Date: Mon, 11 Feb 2019 16:31:07 +0000 Subject: [openssl-users] How to use a specific ip interface while testing TLS/SSL connectivity. In-Reply-To: References: <21C2335C-20EA-4FF9-B570-1432954F0EBF@foocrypt.net> Message-ID: Hi Rajinder, Have you tried the ?socket_transport_name_set? call in your main program? ScottN From: openssl-users On Behalf Of Rajinder Pal Singh Sent: Friday, February 08, 2019 12:54 PM To: mark at foocrypt.net Cc: openssl-users Subject: Re: [openssl-users] How to use a specific ip interface while testing TLS/SSL connectivity. Thanks Mark for the prompt reply. Absolutely makes sense. Actually, i am on Nonstop HPE servers. There are no internal routing tables or so to say static routes. Environment is different from unix/linux. From Application perspective, we choose what ip interface to use. Wondering if we can force the openssl to use specific interface? Regards. On Fri, Feb 8, 2019, 12:26 PM mark at foocrypt.net wrote: Hi Rajinder There shouldn?t be any issues depending on how your host OS is performing the routing to the network the SSL/TLS endpoint is on. Try a tracerout to the IP to see where it goes, and a telnet IP 80 or 443 to make sure you can connect to the web server. ? Regards, Mark A. Lane On 9 Feb 2019, at 04:20, Rajinder Pal Singh > wrote: Hi, I want to use a specific ip interface (out of several available ethernet interfaces available on my server) to test TLS/SSL connectivity to a remote server. Wondering if its possible? Regards, Rajinder. -- openssl-users mailing list To unsubscribe: https://mta.openssl.org/mailman/listinfo/openssl-users -------------- next part -------------- An HTML attachment was scrubbed... URL: From paul at mad-scientist.net Tue Feb 12 20:23:39 2019 From: paul at mad-scientist.net (Paul Smith) Date: Tue, 12 Feb 2019 15:23:39 -0500 Subject: [openssl-users] Multiplexing TLS / non-TLS connections on a single socket Message-ID: <432be0aa3ed39289822e151b20ac76865c339c7b.camel@mad-scientist.net> Hi all. We have a service that currently implements a home-grown secure connection model based on SRP using AES as the cipher. We want to add support for TLS 1.2/1.3 as well, but we have to maintain backward- compatibility. Our app is in C++ and using OpenSSL 1.1.1. We really don't want to create a separate socket: we'd like to support client requests on the same socket using either the old connection method or TLS. We also want to support "pure" TLS, rather than some kind of wrapped connection protocol. This means we need to determine at connect time which method is being used. One idea is to use MSG_PEEK on the socket recv() to check the first bytes of the initial message (our protocol uses an XML message as the initial connection so seeing something like " References: <432be0aa3ed39289822e151b20ac76865c339c7b.camel@mad-scientist.net> Message-ID: <48d00e01-43e4-0e73-2eb3-90f917e7c54c@wisemo.com> On 12/02/2019 21:23, Paul Smith wrote: > Hi all. > > We have a service that currently implements a home-grown secure > connection model based on SRP using AES as the cipher. 
We want to add > support for TLS 1.2/1.3 as well, but we have to maintain backward- > compatibility. Our app is in C++ and using OpenSSL 1.1.1. > > We really don't want to create a separate socket: we'd like to support > client requests on the same socket using either the old connection > method or TLS. We also want to support "pure" TLS, rather than some > kind of wrapped connection protocol. This means we need to determine > at connect time which method is being used. > > One idea is to use MSG_PEEK on the socket recv() to check the first > bytes of the initial message (our protocol uses an XML message as the > initial connection so seeing something like " to differentiate them). One possible annoyance is that we need to > support Windows as well as GNU/Linux and I understand that peek on > Winsocket is not very efficient. > > Is PEEK still the best bet? Or is there a way in OpenSSL to manage > this more directly? For example we read the initial message then if we > discover that it's not the old connection model we provide it plus the > socket to OpenSSL so it can handle the rest of the handshake? Or maybe > we can register a callback with OpenSSL so that if it reads an initial > message from the socket that it doesn't recognize it would hand that > back to us? > > Any pointers to docs and/or examples would be really helpful, thanks! > At least in older versions of OpenSSL, you could create a custom BIO that buffers the socket data and lets you look at it before passing it to the SSL/TLS layer or directly to your code according to the contents.? This way you don't depend on the ability to make the OS socket API do this for you. I don't know if this ability is also in OpenSSL 1.1.x. Enjoy Jakob -- Jakob Bohm, CIO, Partner, WiseMo A/S. https://www.wisemo.com Transformervej 29, 2860 S?borg, Denmark. Direct +45 31 13 16 10 This public discussion message is non-binding and may contain errors. WiseMo - Remote Service Management for PCs, Phones and Embedded From hmurray at megapathdsl.net Tue Feb 12 22:29:04 2019 From: hmurray at megapathdsl.net (Hal Murray) Date: Tue, 12 Feb 2019 14:29:04 -0800 Subject: [openssl-users] Man page suggestion - SSL_get_verify_result Message-ID: <20190212222904.94C6D40605C@ip-64-139-1-69.sjc.megapath.net> Is there a better place for things like this? Please add X509_verify_cert_error_string to the SEE ALSO section of the man page for SSL_get_verify_result Thanks. -- These are my opinions. I hate spam. From openssl-users at dukhovni.org Tue Feb 12 23:31:12 2019 From: openssl-users at dukhovni.org (Viktor Dukhovni) Date: Tue, 12 Feb 2019 18:31:12 -0500 Subject: [openssl-users] Multiplexing TLS / non-TLS connections on a single socket In-Reply-To: <48d00e01-43e4-0e73-2eb3-90f917e7c54c@wisemo.com> References: <432be0aa3ed39289822e151b20ac76865c339c7b.camel@mad-scientist.net> <48d00e01-43e4-0e73-2eb3-90f917e7c54c@wisemo.com> Message-ID: <20190212233112.GQ916@straasha.imrryr.org> On Tue, Feb 12, 2019 at 11:22:47PM +0100, Jakob Bohm via openssl-users wrote: > At least in older versions of OpenSSL, you could create a custom BIO > that buffers the socket data and lets you look at it before passing > it to the SSL/TLS layer or directly to your code according to the > contents.? This way you don't depend on the ability to make the OS > socket API do this for you. > > I don't know if this ability is also in OpenSSL 1.1.x. This has not changed. So OpenSSL can do that, but the other application protocol might still want to read the socket directly. 
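A rough sketch of that peek-and-dispatch idea on the accepting side; serve_tls() and serve_legacy() are placeholders for the application's two handlers, and 0x16 is the record type a TLS ClientHello normally starts with:

#include <sys/socket.h>
#include <openssl/ssl.h>

/* Application handlers, assumed to exist elsewhere (placeholders). */
int serve_tls(SSL *ssl);
int serve_legacy(int fd);

/* Peek at the first byte of a freshly accepted connection and dispatch.
 * MSG_PEEK leaves the byte queued, so whichever path is chosen still
 * sees the complete stream. */
int dispatch_connection(SSL_CTX *ctx, int fd)
{
    unsigned char first;

    if (recv(fd, &first, 1, MSG_PEEK) != 1)
        return -1;

    if (first == 0x16) {              /* TLS handshake record => TLS client */
        SSL *ssl = SSL_new(ctx);
        SSL_set_fd(ssl, fd);
        if (SSL_accept(ssl) != 1) {
            SSL_free(ssl);
            return -1;
        }
        return serve_tls(ssl);
    }
    return serve_legacy(fd);          /* e.g. '<' => the existing XML protocol */
}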
I would expect a socket "peek" once at the beginning of a connection to be sufficient cheap compared to TLS handshakes, ... to not warrant trying to find another approach. -- Viktor. From jetson23 at hotmail.com Tue Feb 12 23:38:30 2019 From: jetson23 at hotmail.com (Jason Schultz) Date: Tue, 12 Feb 2019 23:38:30 +0000 Subject: [openssl-users] FIPS Module for OpenSSL 1.1.1 Message-ID: Just wondering if there is a time frame for the availability of the FIPS Module for OpenSSL 1.1.1? Q3 2019? Q4? I realize this has been asked before, but the most recent answer I found was from several months ago, so I thought there might be new information. Thanks in advance. -------------- next part -------------- An HTML attachment was scrubbed... URL: From paul.dale at oracle.com Wed Feb 13 01:24:22 2019 From: paul.dale at oracle.com (Paul Dale) Date: Tue, 12 Feb 2019 17:24:22 -0800 (PST) Subject: [openssl-users] FIPS Module for OpenSSL 1.1.1 In-Reply-To: References: Message-ID: <151946de-c7e0-405a-b328-0d42d60fe5d0@default> The answer hasn't changed: there is no firm date. Progress is being made however. Pauli -- Oracle Dr Paul Dale | Cryptographer | Network Security & Encryption Phone +61 7 3031 7217 Oracle Australia From: Jason Schultz [mailto:jetson23 at hotmail.com] Sent: Wednesday, 13 February 2019 9:39 AM To: openssl-users at openssl.org Subject: [openssl-users] FIPS Module for OpenSSL 1.1.1 Just wondering if there is a time frame for the availability of the FIPS Module for OpenSSL 1.1.1? Q3 2019? Q4? I realize this has been asked before, but the most recent answer I found was from several months ago, so I thought there might be new information. Thanks in advance. -------------- next part -------------- An HTML attachment was scrubbed... URL: From matt at openssl.org Wed Feb 13 09:59:10 2019 From: matt at openssl.org (Matt Caswell) Date: Wed, 13 Feb 2019 09:59:10 +0000 Subject: [openssl-users] Man page suggestion - SSL_get_verify_result In-Reply-To: <20190212222904.94C6D40605C@ip-64-139-1-69.sjc.megapath.net> References: <20190212222904.94C6D40605C@ip-64-139-1-69.sjc.megapath.net> Message-ID: On 12/02/2019 22:29, Hal Murray wrote: > Is there a better place for things like this? > > Please add X509_verify_cert_error_string to the SEE ALSO section of the man > page for SSL_get_verify_result Please raise an issue on github for this sort of thing. Even better create a pull request. Matt From ali.tahir at live.com Wed Feb 13 10:52:15 2019 From: ali.tahir at live.com (ALe TAHIR) Date: Wed, 13 Feb 2019 10:52:15 +0000 Subject: [openssl-users] FIPS Fails due to Fingerprint Error while running for a App Message-ID: Hi Experts, Looking for some assistance. I?ve compiled one of the App in FIPs mode and while running the App. I?m getting fingerprint mismatch error. I?ve followed the standard procedure to build a FIPS module using OpenSSL UserGuide 2.0. But not sure what part is missing. :~$ openssl version OpenSSL 1.0.2q-fips 20 Nov 2018 :~$ (App version check Output) error initializing FIPS mode 0:error:2D06B06F:FIPS routines:FIPS_check_incore_fingerprint:fingerprint does not match:fips.c:232: I followed the standard procedure to build the FIPS module. 
If I try running Openssl commands via FIPS enabled it didn?t give me any errors: root at haproxyOpenSSLFIPS-02:/home/ubuntu# OPENSSL_FIPS=1 openssl md5 xyz.txt Error setting digest md5 140197799200408:error:060A80A3:digital envelope routines:FIPS_DIGESTINIT:disabled for fips:fips_md.c:180: But if I try via app it initialize to fail due to fingerprint error: I compiled the app build via following make command: make TARGET=linux2628 USE_PCRE=1 USE_OPENSSL=1 USE_ZLIB=1 SSL_INC=/usr/local/ssl/include SSL_LIB=/usr/local/ssl/lib/ Where as FIPS module path is: /usr/local/ssl/fips-2.0 I?m thinking may be issue is at the path end while using make for haproxy (as above ^) but not sure. Here is ldd haproxy result: root at haproxyOpenSSLFIPS-02:/home/ubuntu/haproxy-1.9.2# ldd haproxy linux-vdso.so.1 => (0x00007ffcd331c000) libcrypt.so.1 => /lib/x86_64-linux-gnu/libcrypt.so.1 (0x00007fa12fef2000) libz.so.1 => /lib/x86_64-linux-gnu/libz.so.1 (0x00007fa12fcd8000) libpthread.so.0 => /lib/x86_64-linux-gnu/libpthread.so.0 (0x00007fa12fabb000) librt.so.1 => /lib/x86_64-linux-gnu/librt.so.1 (0x00007fa12f8b3000) libdl.so.2 => /lib/x86_64-linux-gnu/libdl.so.2 (0x00007fa12f6af000) libpcre.so.3 => /lib/x86_64-linux-gnu/libpcre.so.3 (0x00007fa12f43f000) libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007fa12f075000) /lib64/ld-linux-x86-64.so.2 (0x00007fa13012a000) -------------- next part -------------- An HTML attachment was scrubbed... URL: From matt at openssl.org Wed Feb 13 11:26:05 2019 From: matt at openssl.org (Matt Caswell) Date: Wed, 13 Feb 2019 11:26:05 +0000 Subject: [openssl-users] OpenSSL 3.0 and FIPS Update Message-ID: <2c9fe037-9817-ba6f-1062-1d574264318a@openssl.org> Please see my blog post for an OpenSSL 3.0 and FIPS Update: https://www.openssl.org/blog/blog/2019/02/13/FIPS-update/ Matt From jetson23 at hotmail.com Wed Feb 13 14:00:30 2019 From: jetson23 at hotmail.com (Jason Schultz) Date: Wed, 13 Feb 2019 14:00:30 +0000 Subject: [openssl-users] FIPS Module for OpenSSL 1.1.1 In-Reply-To: <151946de-c7e0-405a-b328-0d42d60fe5d0@default> References: , <151946de-c7e0-405a-b328-0d42d60fe5d0@default> Message-ID: Thanks for your response. A follow up question based on Matt Caswell's blog post: Does the blog post imply that the next FIPS module will be based on OpenSSL 3.0? Or is 3.0 a longer term thing and the next FIPS module will be for OpenSSL 1.1.1? Thanks. ________________________________ From: openssl-users on behalf of Paul Dale Sent: Wednesday, February 13, 2019 1:24 AM To: openssl-users at openssl.org Subject: Re: [openssl-users] FIPS Module for OpenSSL 1.1.1 The answer hasn?t changed: there is no firm date. Progress is being made however. Pauli -- Oracle Dr Paul Dale | Cryptographer | Network Security & Encryption Phone +61 7 3031 7217 Oracle Australia From: Jason Schultz [mailto:jetson23 at hotmail.com] Sent: Wednesday, 13 February 2019 9:39 AM To: openssl-users at openssl.org Subject: [openssl-users] FIPS Module for OpenSSL 1.1.1 Just wondering if there is a time frame for the availability of the FIPS Module for OpenSSL 1.1.1? Q3 2019? Q4? I realize this has been asked before, but the most recent answer I found was from several months ago, so I thought there might be new information. Thanks in advance. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From matt at openssl.org Wed Feb 13 14:33:02 2019 From: matt at openssl.org (Matt Caswell) Date: Wed, 13 Feb 2019 14:33:02 +0000 Subject: [openssl-users] FIPS Module for OpenSSL 1.1.1 In-Reply-To: References: <151946de-c7e0-405a-b328-0d42d60fe5d0@default> Message-ID: <1b1546c2-5e96-8660-007e-4e71c4f8dbff@openssl.org> On 13/02/2019 14:00, Jason Schultz wrote: > Thanks for your response. A follow up question based on Matt Caswell's blog > post: Does the blog post imply that the next FIPS module will be based on > OpenSSL 3.0? Or is 3.0 a longer term thing and the next FIPS module will be for > OpenSSL 1.1.1? OpenSSL 3.0 is our next release and the FIPS module will be based on it. There will be no FIPS module for 1.1.1. Matt > > Thanks. > > > -------------------------------------------------------------------------------- > *From:* openssl-users on behalf of Paul Dale > > *Sent:* Wednesday, February 13, 2019 1:24 AM > *To:* openssl-users at openssl.org > *Subject:* Re: [openssl-users] FIPS Module for OpenSSL 1.1.1 > ? > > The answer hasn?t changed: there is no firm date. > > Progress is being made however. > > ? > > ? > > Pauli > > -- > > Oracle > > Dr Paul Dale | Cryptographer | Network Security & Encryption > > Phone +61 7 3031 7217 > > Oracle Australia > > ? > > *From:*Jason Schultz [mailto:jetson23 at hotmail.com] > *Sent:* Wednesday, 13 February 2019 9:39 AM > *To:* openssl-users at openssl.org > *Subject:* [openssl-users] FIPS Module for OpenSSL 1.1.1 > > ? > > Just wondering if there is a time frame for the availability of the FIPS Module > for OpenSSL 1.1.1? Q3 2019? Q4?? > > ? > > I realize this has been asked before, but the most recent answer I found was > from several months ago, so I thought there might be new information. > > ? > > Thanks?in advance.? > > From jb-openssl at wisemo.com Wed Feb 13 17:32:45 2019 From: jb-openssl at wisemo.com (Jakob Bohm) Date: Wed, 13 Feb 2019 18:32:45 +0100 Subject: [openssl-users] OpenSSL 3.0 and FIPS Update In-Reply-To: <2c9fe037-9817-ba6f-1062-1d574264318a@openssl.org> References: <2c9fe037-9817-ba6f-1062-1d574264318a@openssl.org> Message-ID: <47013e75-8d69-1f64-f046-ce1092d73f2a@wisemo.com> On 13/02/2019 12:26, Matt Caswell wrote: > Please see my blog post for an OpenSSL 3.0 and FIPS Update: > > https://www.openssl.org/blog/blog/2019/02/13/FIPS-update/ > > Matt Given this announcement, a few questions arise: - How will a FIPS provider in the main tarball ensure compliance ?with the strict code delivery and non-change requirements of the ?CMVP (what was previously satisfied by distributing physical ?copies of the FIPS canister source code, and sites compiling this ?in a highly controlled environment to produce a golden canister)? - Will there be a reasonable transition period where users of the ?old FIPS-validated module can transition to the new module (meaning ?that both modules are validated and usable with a supported ?FIPS-capable OpenSSL library)?? I imagine that applications relying ?on the existing FIPS canister will need some time to quality test ?their code with all the API changes from OpenSSL 1.0.x to OpenSSL ?3.0.x .? OS distributions will also need some time to roll out the ?resulting feature updates to end users. Enjoy Jakob -- Jakob Bohm, CIO, Partner, WiseMo A/S. https://www.wisemo.com Transformervej 29, 2860 S?borg, Denmark. Direct +45 31 13 16 10 This public discussion message is non-binding and may contain errors. 
WiseMo - Remote Service Management for PCs, Phones and Embedded From matt at openssl.org Wed Feb 13 19:12:18 2019 From: matt at openssl.org (Matt Caswell) Date: Wed, 13 Feb 2019 19:12:18 +0000 Subject: [openssl-users] OpenSSL 3.0 and FIPS Update In-Reply-To: <47013e75-8d69-1f64-f046-ce1092d73f2a@wisemo.com> References: <2c9fe037-9817-ba6f-1062-1d574264318a@openssl.org> <47013e75-8d69-1f64-f046-ce1092d73f2a@wisemo.com> Message-ID: On 13/02/2019 17:32, Jakob Bohm via openssl-users wrote: > On 13/02/2019 12:26, Matt Caswell wrote: >> Please see my blog post for an OpenSSL 3.0 and FIPS Update: >> >> https://www.openssl.org/blog/blog/2019/02/13/FIPS-update/ >> >> Matt > > Given this announcement, a few questions arise: > > - How will a FIPS provider in the main tarball ensure compliance > ?with the strict code delivery and non-change requirements of the > ?CMVP (what was previously satisfied by distributing physical > ?copies of the FIPS canister source code, and sites compiling this > ?in a highly controlled environment to produce a golden canister)? My understanding is that physical distribution is no longer a requirement. > > - Will there be a reasonable transition period where users of the > ?old FIPS-validated module can transition to the new module (meaning > ?that both modules are validated and usable with a supported > ?FIPS-capable OpenSSL library)?? I imagine that applications relying > ?on the existing FIPS canister will need some time to quality test > ?their code with all the API changes from OpenSSL 1.0.x to OpenSSL > ?3.0.x .? OS distributions will also need some time to roll out the > ?resulting feature updates to end users. The old FIPS module will remain validated for some time to come, so both the old and new modules will be validated at the same time for a while. 1.0.2 will go EOL at the end of this year. The intention is that 3.0 will be available before that. It's not yet clear exactly when 3.0 will become available and what the overlap with 1.0.2 will be so I don't have an answer at this stage for transition period. Matt From mcr at sandelman.ca Wed Feb 13 20:28:30 2019 From: mcr at sandelman.ca (Michael Richardson) Date: Wed, 13 Feb 2019 15:28:30 -0500 Subject: [openssl-users] [openssl-project] OpenSSL 3.0 and FIPS Update In-Reply-To: <2c9fe037-9817-ba6f-1062-1d574264318a@openssl.org> References: <2c9fe037-9817-ba6f-1062-1d574264318a@openssl.org> Message-ID: <2274.1550089710@localhost> Matt Caswell wrote: > Please see my blog post for an OpenSSL 3.0 and FIPS Update: > https://www.openssl.org/blog/blog/2019/02/13/FIPS-update/ Thank you, it is very useful to have these plans made up front. I think your posts should probably explain what happened to 2.x, and if this represents a move towards semantic versioning. (I think it does...?) In the various things linked, in particular: https://www.openssl.org/docs/OpenSSL300Design.html I think that there is a missing box. Specifically, the PERL API wrappers that are used in the test bench. I believe that the "applications" are a serious problem as there are (in 1.1.1) still many things that are very difficult (sometimes, it seems, impossible) to do programmatically, and which the test cases actually simply shell out to the application to do. An example of this is adding certain extensions to a certificate when generating it, which is only really possible by loading pieces of configuration file in. 
So what I'd like to see is to remove many of the applications from the core of OpenSSL, put them into a seperate package using better-documented API calls. Let them evolve according their own time-scale, probably taking patches faster without disrupting the underlying libraries. My observation is that the Perl testing system is used to drive the tests, but the tests do not actually use the Perl API wrapper for OpenSSL, but rather rely on the vast number of .c files in test/*. In other (more purely agile) projects, the test cases often serve as documentation as to how to use the API. In OpenSSL, the test cases rely too much on the openssl "applications", and the API is hidden. This would involve adopting some or all of Net::SSLeay. While there would be some initial duplication of effort, I think that over time it would sort itself out. Perl is no longer as cool as it used to be (I still like it) and maybe someone would argue for Python3 or something, and frankly I don't care which. What I care about is that the test cases actually test the API, rather than depend upon 20 years of twisty code in the "applications". And that the applications are permitted to grow/change/adapt to people's needs, rather than living in a hard spot between developer needs and end user needs, pissing off both groups. -- ] Never tell me the odds! | ipv6 mesh networks [ ] Michael Richardson, Sandelman Software Works | IoT architect [ ] mcr at sandelman.ca http://www.sandelman.ca/ | ruby on rails [ -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 487 bytes Desc: not available URL: From matt at openssl.org Thu Feb 14 09:25:54 2019 From: matt at openssl.org (Matt Caswell) Date: Thu, 14 Feb 2019 09:25:54 +0000 Subject: [openssl-users] [openssl-project] OpenSSL 3.0 and FIPS Update In-Reply-To: <2274.1550089710@localhost> References: <2c9fe037-9817-ba6f-1062-1d574264318a@openssl.org> <2274.1550089710@localhost> Message-ID: On 13/02/2019 20:28, Michael Richardson wrote: > > Matt Caswell wrote: > > Please see my blog post for an OpenSSL 3.0 and FIPS Update: > > > https://www.openssl.org/blog/blog/2019/02/13/FIPS-update/ > > Thank you, it is very useful to have these plans made up front. > I think your posts should probably explain what happened to 2.x, and if this > represents a move towards semantic versioning. (I think it does...?) This is all explained in one of my previous blog posts. See: https://www.openssl.org/blog/blog/2018/11/28/version/ > > In the various things linked, in particular: > https://www.openssl.org/docs/OpenSSL300Design.html > > I think that there is a missing box. Specifically, the PERL API wrappers > that are used in the test bench. I believe that the "applications" are > a serious problem as there are (in 1.1.1) still many things that are very > difficult (sometimes, it seems, impossible) to do programmatically, and which > the test cases actually simply shell out to the application to do. > An example of this is adding certain extensions to a certificate when > generating it, which is only really possible by loading pieces of > configuration file in. > > So what I'd like to see is to remove many of the applications from the core > of OpenSSL, put them into a seperate package using better-documented API > calls. Let them evolve according their own time-scale, probably taking > patches faster without disrupting the underlying libraries. 
> > My observation is that the Perl testing system is used to drive the tests, > but the tests do not actually use the Perl API wrapper for OpenSSL, but > rather rely on the vast number of .c files in test/*. > In other (more purely agile) projects, the test cases often serve as > documentation as to how to use the API. In OpenSSL, the test cases > rely too much on the openssl "applications", and the API is hidden. > > This would involve adopting some or all of Net::SSLeay. > While there would be some initial duplication of effort, I think that over > time it would sort itself out. Perl is no longer as cool as it used to be (I > still like it) and maybe someone would argue for Python3 or something, and > frankly I don't care which. > > What I care about is that the test cases actually test the API, rather than > depend upon 20 years of twisty code in the "applications". > And that the applications are permitted to grow/change/adapt to people's > needs, rather than living in a hard spot between developer needs and end > user needs, pissing off both groups. I don't think it is accurate to characterise the tests as not directly testing the API but instead depending on the applications to do that. That *is* probably the case in many older tests but I don't recall many (any?) such tests being written in recent years. Instead there has been much effort put into directly testing the API (as an example see sslapitest.c which did not exist a few years ago (it doesn't appear in 1.0.2), but is currently over 6000 lines long). There are also the TLSProxy tests which do use s_server/s_client. But in those cases s_server/s_client are just used to drive a handshake. The tests themselves are actually written in perl. These are not API tests (so they don't depend on adding lots of obscure options to s_server/s_client) but are instead protocol tests. These tests modify the handshake in-flight to confirm that we can handle unusual or invalid protocol messages. Actually I would love to see the removal of the s_server/s_client dependency to something custom written. IMO the applications are no longer driven by developer needs. That may have once been the case, but I don't think it is true today. That said, of course, there is plenty of room for improvement in our testing. I would love to see more complete direct testing of the API. I do think we are moving in the right direction, but it is definitely a long term project. Matt From ignacio.casal at nice-software.com Thu Feb 14 11:56:08 2019 From: ignacio.casal at nice-software.com (Ignacio Casal) Date: Thu, 14 Feb 2019 12:56:08 +0100 Subject: [openssl-users] How to get the CA list Message-ID: Hey guys, I would like to get a list of all the CAs added to the X509_STORE. For this I use: X509_STORE_set_default_paths or X509_STORE_load_locations. Basically I need to get the list of the CAs out of the store or the store context. I could not figure out a proper way to do this. I tried to use X509_STORE_get1_certs but this seems to require a X509_NAME which I do not have since I want all the certificates out of the CAs. Is there a proper way to do this? Regards. -- Ignacio Casal Quinteiro -------------- next part -------------- An HTML attachment was scrubbed... 
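On the question above about listing the CAs held in an X509_STORE: there is no single call that returns them directly, but the store's cached objects can be walked (OpenSSL 1.1.0 or later). One caveat: certificates brought in lazily by hash-directory lookups may only appear after they have actually been used in a verification, while certificates loaded from a file are present immediately. A minimal sketch; the helper name is illustrative:

#include <stdio.h>
#include <openssl/x509.h>
#include <openssl/x509_vfy.h>

/* Print the subject of every certificate currently held in the store,
 * skipping CRLs and other object types. */
static void print_store_subjects(X509_STORE *store)
{
    STACK_OF(X509_OBJECT) *objs = X509_STORE_get0_objects(store);
    int i;

    for (i = 0; i < sk_X509_OBJECT_num(objs); i++) {
        X509_OBJECT *obj = sk_X509_OBJECT_value(objs, i);

        if (X509_OBJECT_get_type(obj) != X509_LU_X509)
            continue;
        X509_NAME_print_ex_fp(stdout,
                              X509_get_subject_name(X509_OBJECT_get0_X509(obj)),
                              0, XN_FLAG_ONELINE);
        fputc('\n', stdout);
    }
}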
URL: From jb-openssl at wisemo.com Thu Feb 14 16:34:01 2019 From: jb-openssl at wisemo.com (Jakob Bohm) Date: Thu, 14 Feb 2019 17:34:01 +0100 Subject: [openssl-users] OpenSSL 3.0 and FIPS Update In-Reply-To: References: <2c9fe037-9817-ba6f-1062-1d574264318a@openssl.org> <47013e75-8d69-1f64-f046-ce1092d73f2a@wisemo.com> Message-ID: <1c65e4dc-6873-8e8d-8d9b-461b134a730f@wisemo.com> On 13/02/2019 20:12, Matt Caswell wrote: > > On 13/02/2019 17:32, Jakob Bohm via openssl-users wrote: >> On 13/02/2019 12:26, Matt Caswell wrote: >>> Please see my blog post for an OpenSSL 3.0 and FIPS Update: >>> >>> https://www.openssl.org/blog/blog/2019/02/13/FIPS-update/ >>> >>> Matt >> Given this announcement, a few questions arise: >> >> - How will a FIPS provider in the main tarball ensure compliance >> ?with the strict code delivery and non-change requirements of the >> ?CMVP (what was previously satisfied by distributing physical >> ?copies of the FIPS canister source code, and sites compiling this >> ?in a highly controlled environment to produce a golden canister)? > My understanding is that physical distribution is no longer a requirement. And the other things in that question? Integrity of validated source code when other parts of the tarball get regular changes? Building the validated source code in a controlled environment separate from the full tarball? (If there are answers in the FIPS 3.0.0 draft spec, they need repeating). >> - Will there be a reasonable transition period where users of the >> ?old FIPS-validated module can transition to the new module (meaning >> ?that both modules are validated and usable with a supported >> ?FIPS-capable OpenSSL library)?? I imagine that applications relying >> ?on the existing FIPS canister will need some time to quality test >> ?their code with all the API changes from OpenSSL 1.0.x to OpenSSL >> ?3.0.x .? OS distributions will also need some time to roll out the >> ?resulting feature updates to end users. > The old FIPS module will remain validated for some time to come, so both the old > and new modules will be validated at the same time for a while. 1.0.2 will go > EOL at the end of this year. The intention is that 3.0 will be available before > that. It's not yet clear exactly when 3.0 will become available and what the > overlap with 1.0.2 will be so I don't have an answer at this stage for > transition period. > > Matt > So right now, FIPS-validated users are left hanging, with no date to get a 3.0.0 code drop to start porting and a looming deadline for the 1.0.x API. Enjoy Jakob -- Jakob Bohm, CIO, Partner, WiseMo A/S. https://www.wisemo.com Transformervej 29, 2860 S?borg, Denmark. Direct +45 31 13 16 10 This public discussion message is non-binding and may contain errors. WiseMo - Remote Service Management for PCs, Phones and Embedded From guerinp at talasi.fr Thu Feb 14 16:42:21 2019 From: guerinp at talasi.fr (=?UTF-8?Q?Patrice_Gu=c3=a9rin?=) Date: Thu, 14 Feb 2019 17:42:21 +0100 Subject: [openssl-users] Questions about Ciphers Message-ID: <7e84a443-6f32-1e99-d452-4604184a97ba@talasi.fr> Hello, I have two questions : * I use OBJ_NAME_do_all_sorted() with? OBJ_NAME_TYPE_CIPHER_METH to get the list of supported cipher methods Is there a difference between lowercase and uppercase names ? I've noticed that some do not have uppercase name (ex. aes-128-ccm) Is there a prefered name to use ? 
* In the case of GCM usage (with examples found in the OpenSSL wiki), Is the specific control action to set the tag on decryption can be done at the beginning rather than juste before EVP_DecryptFinal_ex() ? Thank you. Kind regards, Patrice. -------------- next part -------------- An HTML attachment was scrubbed... URL: From matt at openssl.org Thu Feb 14 16:46:57 2019 From: matt at openssl.org (Matt Caswell) Date: Thu, 14 Feb 2019 16:46:57 +0000 Subject: [openssl-users] OpenSSL 3.0 and FIPS Update In-Reply-To: <1c65e4dc-6873-8e8d-8d9b-461b134a730f@wisemo.com> References: <2c9fe037-9817-ba6f-1062-1d574264318a@openssl.org> <47013e75-8d69-1f64-f046-ce1092d73f2a@wisemo.com> <1c65e4dc-6873-8e8d-8d9b-461b134a730f@wisemo.com> Message-ID: On 14/02/2019 16:34, Jakob Bohm via openssl-users wrote: > On 13/02/2019 20:12, Matt Caswell wrote: >> >> On 13/02/2019 17:32, Jakob Bohm via openssl-users wrote: >>> On 13/02/2019 12:26, Matt Caswell wrote: >>>> Please see my blog post for an OpenSSL 3.0 and FIPS Update: >>>> >>>> https://www.openssl.org/blog/blog/2019/02/13/FIPS-update/ >>>> >>>> Matt >>> Given this announcement, a few questions arise: >>> >>> - How will a FIPS provider in the main tarball ensure compliance >>> ??with the strict code delivery and non-change requirements of the >>> ??CMVP (what was previously satisfied by distributing physical >>> ??copies of the FIPS canister source code, and sites compiling this >>> ??in a highly controlled environment to produce a golden canister)? >> My understanding is that physical distribution is no longer a requirement. > And the other things in that question? > > Integrity of validated source code when other parts of the tarball > get regular changes? > > Building the validated source code in a controlled environment > separate from the full tarball? See the section of the Design document with the title "Detection of Changes inside the FIPS Boundary". Basically there will be version controlled checksum covering all of the validated source. Yes - I do expect you to be able to build just the validated source independently of the rest of the tarball so that you could (for example) run the latest main OpenSSL version but with an older module. Matt From ludwig.mark at siemens.com Thu Feb 14 16:48:17 2019 From: ludwig.mark at siemens.com (Ludwig, Mark) Date: Thu, 14 Feb 2019 16:48:17 +0000 Subject: [openssl-users] OpenSSL 3.0 and FIPS Update In-Reply-To: <1c65e4dc-6873-8e8d-8d9b-461b134a730f@wisemo.com> References: <2c9fe037-9817-ba6f-1062-1d574264318a@openssl.org> <47013e75-8d69-1f64-f046-ce1092d73f2a@wisemo.com> <1c65e4dc-6873-8e8d-8d9b-461b134a730f@wisemo.com> Message-ID: +1 on the point: firm expiration date without firm replacement date ... really?! We have to hope that the firm expiration date will actually move if the replacement isn't ready before then ... and that doesn't begin to account for the calendar time to get the new one certified.... 
Thanks, Mark Ludwig -----Original Message----- From: openssl-users [mailto:openssl-users-bounces at openssl.org] On Behalf Of Jakob Bohm via openssl-users Sent: Thursday, February 14, 2019 10:34 AM To: openssl-users at openssl.org Subject: Re: [openssl-users] OpenSSL 3.0 and FIPS Update On 13/02/2019 20:12, Matt Caswell wrote: > > On 13/02/2019 17:32, Jakob Bohm via openssl-users wrote: >> On 13/02/2019 12:26, Matt Caswell wrote: >>> Please see my blog post for an OpenSSL 3.0 and FIPS Update: >>> >>> https://www.openssl.org/blog/blog/2019/02/13/FIPS-update/ >>> >>> Matt >> Given this announcement, a few questions arise: >> >> - How will a FIPS provider in the main tarball ensure compliance >> ?with the strict code delivery and non-change requirements of the >> ?CMVP (what was previously satisfied by distributing physical >> ?copies of the FIPS canister source code, and sites compiling this >> ?in a highly controlled environment to produce a golden canister)? > My understanding is that physical distribution is no longer a requirement. And the other things in that question? Integrity of validated source code when other parts of the tarball get regular changes? Building the validated source code in a controlled environment separate from the full tarball? (If there are answers in the FIPS 3.0.0 draft spec, they need repeating). >> - Will there be a reasonable transition period where users of the >> ?old FIPS-validated module can transition to the new module (meaning >> ?that both modules are validated and usable with a supported >> ?FIPS-capable OpenSSL library)?? I imagine that applications relying >> ?on the existing FIPS canister will need some time to quality test >> ?their code with all the API changes from OpenSSL 1.0.x to OpenSSL >> ?3.0.x .? OS distributions will also need some time to roll out the >> ?resulting feature updates to end users. > The old FIPS module will remain validated for some time to come, so both the old > and new modules will be validated at the same time for a while. 1.0.2 will go > EOL at the end of this year. The intention is that 3.0 will be available before > that. It's not yet clear exactly when 3.0 will become available and what the > overlap with 1.0.2 will be so I don't have an answer at this stage for > transition period. > > Matt > So right now, FIPS-validated users are left hanging, with no date to get a 3.0.0 code drop to start porting and a looming deadline for the 1.0.x API. Enjoy Jakob -- Jakob Bohm, CIO, Partner, WiseMo A/S. https://www.wisemo.com Transformervej 29, 2860 S?borg, Denmark. Direct +45 31 13 16 10 This public discussion message is non-binding and may contain errors. WiseMo - Remote Service Management for PCs, Phones and Embedded -- openssl-users mailing list To unsubscribe: https://mta.openssl.org/mailman/listinfo/openssl-users From matt at openssl.org Thu Feb 14 17:01:30 2019 From: matt at openssl.org (Matt Caswell) Date: Thu, 14 Feb 2019 17:01:30 +0000 Subject: [openssl-users] Questions about Ciphers In-Reply-To: <7e84a443-6f32-1e99-d452-4604184a97ba@talasi.fr> References: <7e84a443-6f32-1e99-d452-4604184a97ba@talasi.fr> Message-ID: <1c39b2da-4e6d-83c0-96e7-f8dd70078ff7@openssl.org> On 14/02/2019 16:42, Patrice Gu?rin wrote: > Hello, > > I have two questions : > > * I use OBJ_NAME_do_all_sorted() with? OBJ_NAME_TYPE_CIPHER_METH to get the > list of supported cipher methods > Is there a difference between lowercase and uppercase names ? > I've noticed that some do not have uppercase name (ex. 
aes-128-ccm) > Is there a prefered name to use ? Objects have a "short name" and a "long name". In many cases the two are identical. In others they have minor differences such as uppercase vs lowercase. It doesn't matter - both forms refer to the same object. You can use either. > * In the case of GCM usage (with examples found in the OpenSSL wiki), > Is the specific control action to set the tag on decryption can be done at > the beginning rather than juste before EVP_DecryptFinal_ex() ? Yes, as long as it's done after EVP_DecryptInit_ex(). Matt From Zeke.Evans at microfocus.com Thu Feb 14 16:26:50 2019 From: Zeke.Evans at microfocus.com (Zeke Evans) Date: Thu, 14 Feb 2019 16:26:50 +0000 Subject: [openssl-users] [openssl-project] OpenSSL 3.0 and FIPS Update In-Reply-To: <2c9fe037-9817-ba6f-1062-1d574264318a@openssl.org> References: <2c9fe037-9817-ba6f-1062-1d574264318a@openssl.org> Message-ID: Can you give any guidance on which platforms will be validated with the OpenSSL FIPS 3.0 module? My recollection is that it will only be a handful of platforms. It would be helpful to have an idea which platforms will and will not be included. Any additional information about how other platforms can be validated would also be helpful. Thanks, Zeke Evans Senior Software Engineer, Micro Focus ________________________________ From: openssl-project on behalf of Matt Caswell Sent: Wednesday, February 13, 2019 4:26 AM To: openssl-announce at openssl.org; openssl-users at openssl.org; openssl-project at openssl.org Subject: [openssl-project] OpenSSL 3.0 and FIPS Update Please see my blog post for an OpenSSL 3.0 and FIPS Update: https://www.openssl.org/blog/blog/2019/02/13/FIPS-update/ Matt _______________________________________________ openssl-project mailing list openssl-project at openssl.org https://mta.openssl.org/mailman/listinfo/openssl-project -------------- next part -------------- An HTML attachment was scrubbed... URL: From rsalz at akamai.com Thu Feb 14 18:08:06 2019 From: rsalz at akamai.com (Salz, Rich) Date: Thu, 14 Feb 2019 18:08:06 +0000 Subject: [openssl-users] OpenSSL 3.0 and FIPS Update In-Reply-To: <1c65e4dc-6873-8e8d-8d9b-461b134a730f@wisemo.com> References: <2c9fe037-9817-ba6f-1062-1d574264318a@openssl.org> <47013e75-8d69-1f64-f046-ce1092d73f2a@wisemo.com> <1c65e4dc-6873-8e8d-8d9b-461b134a730f@wisemo.com> Message-ID: > Integrity of validated source code when other parts of the tarball get regular changes? The design doc, just recently published, talks about this a bit. Not all details are known yet. > Building the validated source code in a controlled environment separate from the full tarball? I do not believe this has been discussed within the FIPS sponsors. > (If there are answers in the FIPS 3.0.0 draft spec, they need repeating). Or a more careful reading. :) > So right now, FIPS-validated users are left hanging, with no date to get a 3.0.0 code drop to start porting and a looming deadline for the 1.0.x API. You get what you pay for. I can be harsh because I am not a member of the OpenSSL project. You can start by porting to 1.1.x now. 
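Tying Matt's GCM answer above to code: a minimal decryption sketch in which the expected tag is supplied right after the init calls rather than just before EVP_DecryptFinal_ex(); for GCM the tag is only consumed by the final call. The 16-byte tag length, the buffer handling and the name gcm_decrypt are assumptions for illustration, not code from the thread:

#include <openssl/evp.h>

int gcm_decrypt(const unsigned char *key,
                const unsigned char *iv, int ivlen,
                const unsigned char *tag,              /* 16 bytes assumed */
                const unsigned char *ct, int ctlen,
                unsigned char *pt, int *ptlen)
{
    EVP_CIPHER_CTX *ctx = EVP_CIPHER_CTX_new();
    int len = 0, ok = 0;

    if (ctx == NULL)
        return 0;
    if (!EVP_DecryptInit_ex(ctx, EVP_aes_128_gcm(), NULL, NULL, NULL))
        goto end;
    if (!EVP_CIPHER_CTX_ctrl(ctx, EVP_CTRL_GCM_SET_IVLEN, ivlen, NULL))
        goto end;
    if (!EVP_DecryptInit_ex(ctx, NULL, NULL, key, iv))
        goto end;
    /* Set the expected tag here, at the beginning, rather than just
     * before EVP_DecryptFinal_ex(); it is only checked by the final call. */
    if (!EVP_CIPHER_CTX_ctrl(ctx, EVP_CTRL_GCM_SET_TAG, 16, (void *)tag))
        goto end;
    /* Any AAD would be fed first with:
     *   EVP_DecryptUpdate(ctx, NULL, &len, aad, aadlen); */
    if (!EVP_DecryptUpdate(ctx, pt, &len, ct, ctlen))
        goto end;
    *ptlen = len;
    /* Returns > 0 only if the tag verifies; on failure the plaintext
     * must not be used. */
    ok = EVP_DecryptFinal_ex(ctx, pt + len, &len) > 0;
    if (ok)
        *ptlen += len;
 end:
    EVP_CIPHER_CTX_free(ctx);
    return ok;
}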
From rsalz at akamai.com Thu Feb 14 18:09:18 2019 From: rsalz at akamai.com (Salz, Rich) Date: Thu, 14 Feb 2019 18:09:18 +0000 Subject: [openssl-users] OpenSSL 3.0 and FIPS Update In-Reply-To: References: <2c9fe037-9817-ba6f-1062-1d574264318a@openssl.org> <47013e75-8d69-1f64-f046-ce1092d73f2a@wisemo.com> <1c65e4dc-6873-8e8d-8d9b-461b134a730f@wisemo.com> Message-ID: > Yes - I do expect you to be able to build just the validated source independently of the rest of the tarball so that you could (for example) run the latest main OpenSSL version but with an older module. Which means that this doesn't have to happen in the first release since there's only one runtime that works with the one FIPS module. From jain61 at gmail.com Thu Feb 14 21:22:46 2019 From: jain61 at gmail.com (NJ) Date: Thu, 14 Feb 2019 14:22:46 -0700 (MST) Subject: [openssl-users] Question about CMS_encrypt : Generates Version Message-ID: <1550179366621-0.post@n7.nabble.com> Hi All, I am using the CMS_sign API to generate pkcs7-envelopedData for an SCEP implementation. I am facing an issue as the CMS_sign API generates the default version, originatorInfo and recipientInfo fields as . I would like to know how to set correct values for these fields, and whether there is any other openssl API I need. CMS_ContentInfo: contentType: pkcs7-envelopedData (1.2.840.113549.1.7.3) d.envelopedData: version: originatorInfo: recipientInfos: d.ktri: version: Thanks NJ -- Sent from: http://openssl.6102.n7.nabble.com/OpenSSL-User-f3.html From vieuxtech at gmail.com Thu Feb 14 22:51:36 2019 From: vieuxtech at gmail.com (Sam Roberts) Date: Thu, 14 Feb 2019 14:51:36 -0800 Subject: [openssl-users] when should client stop calling SSL_read to get TLS1.3 session tickets after the close_notify? Message-ID: In particular, I'm getting a close_notify alert, followed by two NewSessionTickets from the server. The client then does SSL_read()/SSL_get_error(), which returns SSL_ERROR_ZERO_RETURN, so I stop calling SSL_read(). However, that means that the NewSessionTickets aren't seen, so I don't get the callbacks from SSL_CTX_sess_set_new_cb(). Should we be calling SSL_read() until some other return value occurs? Note that no data is written by the server, and SSL_shutdown() is called from inside the server's SSL_CB_HANDSHAKE_DONE info callback. The node test suite is rife with this practice, where a connection is established to prove it's possible, but then just ended without data transfer. For TLS1.2 we get the session callbacks, but TLS1.3 we do not. This is the trace, edited to reduce SSL_trace verbosity: server TLSWrap::SSLInfoCallback(where SSL_CB_HANDSHAKE_DONE, alert U) established? 0 state 0x21 TWST: SSLv3/TLS write session ticket TLSv1.3 server TLSWrap::DoShutdown() established? 1 ssl? 1 Sent Record Inner Content Type = Alert (21) Level=warning(1), description=close notify(0) Sent Record NewSessionTicket, Length=245 Sent Record NewSessionTicket, Length=245 client TLSWrap::OnStreamRead(nread 566) established? 1 ssl? 1 parsing? 0 eof? 0 Received Record Level=warning(1), description=close notify(0) SSL_read() => 0 SSL_get_shutdown() => SSL_RECEIVED_SHUTDOWN SSL_get_error() => SSL_ERROR_ZERO_RETURN At this point, we consider the connection closed... not sure what else to do.
Thanks, Sam From jb-openssl at wisemo.com Fri Feb 15 03:55:38 2019 From: jb-openssl at wisemo.com (Jakob Bohm) Date: Fri, 15 Feb 2019 04:55:38 +0100 Subject: [openssl-users] Comments on the recent OpenSSL 3.0.0 specification (Monday 2019-02-11) Message-ID: <4d65dc06-064a-035d-815f-68f426600d82@wisemo.com> These comments are on the version of the specification released on Monday 2019-02-11 at https://www.openssl.org/docs/OpenSSL300Design.html General notes on this release: - The release was not announced on the openssl-users and ?openssl-announce mailing lists.? A related blog post was ?announced two days later. - The related strategy document is at ?https://www.openssl.org/docs/OpenSSLStrategicArchitecture.html ?(This link is broken on the www.openssl.org front page). - The draft does not link to anywhere that the public can ?inspect archived or version tracked document versions. Non-FIPS architecture issues: - The identifiers for predefined parameters and values (such as ?"fips", "on", "off", "aes-128-cbc" should be binary values that ?cannot be easily searched in larger program files (by attackers). ? This rules out both text strings, UUID values and ASN OID values. ?Something similar to the function ids would be ideal.? Note that ?to make this effective, the string names of these should not ?appear in linked binaries. ? (The context of this is linking libcrypto and/or libssl into ?closed source binary programs, since open source binaries cannot ?hide their internal structure anyway). - It should be possible for applications to configure OpenSSL to ?load provider DLLs and config files from their own directories ?rather than the global well-known directory (isolation from ?system wide changes). - It should be possible for providers (possibly not the FIPS ?provider) to be linked directly into programs that link ?statically to libcrypto.? This implies the absence of ?conflicting identifiers, a public API to pass the address of ?a |OSSL_provider_init|function, all bundled providers provided ?as static libraries in static library builds, and a higher ?level init function that initializes both libcrypto and the ?default provider. - Static library forms of the default provider should not ?force callers to include every algorithm just because they ?are referenced from the default dispatch tables.? For example, ?it should be easy to link a static application that uses only ?AES-256-CBC and SHA-256, and contains little else.? Such limited ?feature applications would obviously have to forego using the ?all-inclusive high level init function. - For use with engine-like providers (such as hardware providers ?and the PKCS#11 provider), it should be possible for a provider ?to provide algorithms like RSA at multiple abstraction levels. ? For example, some PKCS#11 hardware provides the raw RSA ?algorithm (bignum in, bignum out) while others provide specific ?forms such as PKCS#1.5 signature.? There are even some that ?provide the PKCS#1.5 form with some hashes and the RSA form ?as a general fallback. - Similarly, some providers will provide both ends of an ?asymmetric algorithm, while others only provide the private ?key operation, leaving the public key operation to other ?providers (selected by core in the general way). - The general bignum library should be exposed via an API, either ?the legacy OpenSSL bignum API or a replacement API with an overlap ?of at least one major version with both APIs available. 
- Provider algorithm implementations should carry ?description/selection parameters indicating limits to access: ?"key-readable=yes/no", "key-writable=yes/no", "data-internal=yes/no", ?"data-external=yes/no" and "iv-internal=yes/no".? For example, ?a smartcard-like provider may have "key-readable=no" and ?"key-writable=yes" for RSA keys, while another card may have ?"key-writable=no" (meaning that externally generated keys cannot ?be imported to the card.? "data-internal" refers to the ?ability to process (encrypt, hash etc.) data internal to the ?provider, such as other keys, while "data-external" refers to ?the ability to process arbitrary application data. - Variable key length algorithm implementations should carry ?description/selection parameters indicating maximum and minimum ?key lengths (Some will refuse to process short keys, others will ?refuse long keys, some will require the key length to be a ?multiple of some number). - The current EVP interface abuses the general (re)init operations ?with omitted arguments as the main interface to update rapidly ?changing algorithm parameters such as IVs and/or keys.? With the ?removal of legacy APIs, the need to provide parameter changing ?as explicit calls in the EVP API and provider has become more ?obvious. - A provider property valuable to some callers (and already a known ?property of some legacy APIs) is to declare that certain simple ?operations will always succeed, such as passing additional data ?bytes to a hash/mac (the rare cases of hardware disconnect and/or ?exceeding the algorithm maximums can be deferred to "finish" ?operations).? A name for this property of an algorithm ?implementation could be "nofail=yes", and the list of non-failing ?operations defined for each type of algorithm should be publicly ?specified (a nofail hash would have a different list than a ?no-fail symmetric encryption). - Providers that are really bridges to another multi-provider API ?(ENGINE, PKCS#11, MS CAPI 1, MS CNG) should be explicitly allowed ?to load/init separately for each underlying provider.? For example, ?it would be bad for an application talking to one PKCS#11 module to ?run, load or block all other PKCS#11 modules on the system. - Under normal file system layout conventions, /usr/share/ (and ?below) is for architecture-independent files such as man pages, ?trusted root certificates and platform-independent include files. ? Architecture specific files such as "openssl/providers/foo.so" ?and opensslconf.h belong in /usr/ or /usr/local/ . FIPS-specific issues: - The checksum of the FIPS DLL should be compiled into the FIPS- ?capable OpenSSL library, since a checksum stored in its own file ?on the end user system is too easily replaced by attackers.? This ?also implies that each FIPS DLL version will need its own file name ?in case different applications are linked to different libcrypto ?versions (because they were started before an upgrade of the shared ?libcrypto or because they use their own copy of libcrypto). - If possible, the core or a libcrypto-provided FIPS-wrapper should ?check the hash of the opensslfips-3.x.x.so DLL before running any ?of its code (including on-load stubs), secondly, the DLL can ?recheck itself using its internal implementation of the chosen MAC ?algorithm, if this is required by the CMVP.? This is to protect the ?application if a totally unrelated malicious file is dropped in ?place of the DLL. - The document seems to consistently only mentions the ?shortest/weakest key lengths, such as AES-128.? 
Hopefully the ?actual release will have no such limitation. - The well-known slowness of FIPS validations will in practice ?require the FIPS module compiled from a source change to be ?released (much) later than the same change in the default ?provider.? The draft method of submitting FIPS validation ?updates just before any FIPS-affecting OpenSSL release seems ?overly optimistic. - Similarly, due to the slowness of FIPS validation updates, ?it may often be prudent to provide a root-cause fix in the ?default provider and a less-effective change in the FIPS ?provider, possibly involving FIPS-frozen workaround code in ?libcrypto, either in core or in a separate FIPS-wrapper ?component. - The mechanisms for dealing with cannot-export-the-private-key ?hardware providers could also be used to let the FIPS provider ?offer algorithm variants where the crypto officer (application ?writer/installer) specify that some keys remain inside the ?FIPS blob, inaccessible to the user role (application code). ? For example, TLS PFS (EC)DHE keys and CMS per message keys ?could by default remain inside the provider.? Extending this ?to TLS session keys and server private key would be a future ?option. - In future versions, it should be possible to combine the ?bundled FIPS provider with providers for FIPS-validated hardware, ?such as FIPS validated PIV smart cards for TLS client ?certificates. - Support for generating and validating (EC)DH and (EC)DSA ?group parameters using the FIPS-specified algorithms should ?be available in addition to the fixed sets of well-known ?group parameters.? In FIPS 800-56A rev 3, these are the ?DH primes specified using a SEED value.? Other versions of ?SP 800-56A, and/or supplemental NIST documents may allow ?other such group parameters. - If permitted by the CMVP rules, allow an option for ?application provided (additional) entropy input to the RNG ?from outside the module boundary. Enjoy Jakob -- Jakob Bohm, CIO, Partner, WiseMo A/S. https://www.wisemo.com Transformervej 29, 2860 S?borg, Denmark. Direct +45 31 13 16 10 This public discussion message is non-binding and may contain errors. WiseMo - Remote Service Management for PCs, Phones and Embedded From matt at openssl.org Fri Feb 15 10:24:33 2019 From: matt at openssl.org (Matt Caswell) Date: Fri, 15 Feb 2019 10:24:33 +0000 Subject: [openssl-users] when should client stop calling SSL_read to get TLS1.3 session tickets after the close_notify? In-Reply-To: References: Message-ID: <8cadb209-d121-ab7a-71b7-5688c2947666@openssl.org> On 14/02/2019 22:51, Sam Roberts wrote: > In particular, I'm getting a close_notify alert, followed by two > NewSessionTickets from the server. This sounds like a bug somewhere. Once you have close_notify you shouldn't expect anything else. Is that an OpenSSL server? Matt From levitte at openssl.org Fri Feb 15 10:40:49 2019 From: levitte at openssl.org (Richard Levitte) Date: Fri, 15 Feb 2019 11:40:49 +0100 Subject: [openssl-users] Comments on the recent OpenSSL 3.0.0 specification (Monday 2019-02-11) In-Reply-To: <4d65dc06-064a-035d-815f-68f426600d82@wisemo.com> References: <4d65dc06-064a-035d-815f-68f426600d82@wisemo.com> Message-ID: <87zhqxnxla.wl-levitte@openssl.org> Note: these are my personal answers. 
I'm sure (and hope) that other in our team will chip in (and possibly disagree with me) On Fri, 15 Feb 2019 04:55:38 +0100, Jakob Bohm wrote: > > These comments are on the version of the specification released on > Monday 2019-02-11 at https://www.openssl.org/docs/OpenSSL300Design.html > > General notes on this release: > > - The release was not announced on the openssl-users and > openssl-announce mailing lists. A related blog post was > announced two days later. Yes. > - The related strategy document is at > https://www.openssl.org/docs/OpenSSLStrategicArchitecture.html > (This link is broken on the www.openssl.org front page). Broken link fixed. It was a typo... > - The draft does not link to anywhere that the public can > inspect archived or version tracked document versions. Are you asking us to provide a link to the git repo? Should we do that in each of our HTML files? (I'm not trying to be a smartass, this is something we could actually do, and quite easily for these documents specifically) BTW, more direct answer: the underlying documents are written in markdown, and the repo links are: - [our own repo] https://git.openssl.org/?p=openssl-web.git;a=blob;f=docs/OpenSSL300Design.md;h=30a02eb08f574e06e827386ce9fa1830eb3b8070;hb=HEAD https://git.openssl.org/?p=openssl-web.git;a=blob;f=docs/OpenSSLStrategicArchitecture.md;h=ecc8fd11e48ec62eb4bfb7e4a60ab799c26971c7;hb=HEAD - [github] https://github.com/openssl/web/blob/master/docs/OpenSSL300Design.md https://github.com/openssl/web/blob/master/docs/OpenSSLStrategicArchitecture.md > Non-FIPS architecture issues: > > - The identifiers for predefined parameters and values (such as > "fips", "on", "off", "aes-128-cbc" should be binary values that > cannot be easily searched in larger program files (by attackers). > This rules out both text strings, UUID values and ASN OID values. > Something similar to the function ids would be ideal.? Note that > to make this effective, the string names of these should not > appear in linked binaries. > (The context of this is linking libcrypto and/or libssl into > closed source binary programs, since open source binaries cannot > hide their internal structure anyway). The trouble with this is that it limits what providers can actually provide. I've used the example of someone inventing an algorithm "BLARGH", with some parameters and properties they define. How would they be represented with a binary number? How would an application know what number they should ask for? We've chosen strings to have the flexibility. > - It should be possible for applications to configure OpenSSL to > load provider DLLs and config files from their own directories > rather than the global well-known directory (isolation from > system wide changes). I see no issue with that. The well-known directory would work as a fallback, the same way we do with ENGINESDIR today. > - It should be possible for providers (possibly not the FIPS > provider) to be linked directly into programs that link > statically to libcrypto.? This implies the absence of > conflicting identifiers, a public API to pass the address of > a |OSSL_provider_init|function, all bundled providers provided > as static libraries in static library builds, and a higher > level init function that initializes both libcrypto and the > default provider. You may have noticed in the packaging view that the default provider will be part of libcrypto. 
To allow that, we must allow exactly the construct that you're talking about, and the step to make it possible for applications to link statically with providers is very small. (as for "possibly not the FIPS provider", that's exactly right. That one *will* be a loadable module and nothing else, and will only be validated as such... meaning that noone can stop you from hacking around and have it linked in statically, but that would make it invalid re FIPS) > - Static library forms of the default provider should not > force callers to include every algorithm just because they > are referenced from the default dispatch tables.? For example, > it should be easy to link a static application that uses only > AES-256-CBC and SHA-256, and contains little else.? Such limited > feature applications would obviously have to forego using the > all-inclusive high level init function. The way to do that would be to divide the default provider into a number of smaller providers, one for each algorithm, and then leave you free to link with exactly those you want and none other. We haven't really made any explicit plans to do this initially, but I can't see any reason why it shouldn't be doable. I can't say if it will happen in time for 3.0.0, there's already enough to work on and this is not in the top priorities. > - For use with engine-like providers (such as hardware providers > and the PKCS#11 provider), it should be possible for a provider > to provide algorithms like RSA at multiple abstraction levels. > For example, some PKCS#11 hardware provides the raw RSA > algorithm (bignum in, bignum out) while others provide specific > forms such as PKCS#1.5 signature.? There are even some that > provide the PKCS#1.5 form with some hashes and the RSA form > as a general fallback. The new design is at that higher level, centered around the EVP API. The lower level / raw RSA abstraction level isn't there, and we have not planned for the kind of lower level "default" algorithm support that ENGINE did. > - Similarly, some providers will provide both ends of an > asymmetric algorithm, while others only provide the private > key operation, leaving the public key operation to other > providers (selected by core in the general way). That would mean some key information passing between providers in a generic form to allow them to store the key data internally as they see fit. Otherwise, those two providers will have to share pretty damn intimate knowledge about each other's internals. > - The general bignum library should be exposed via an API, either > the legacy OpenSSL bignum API or a replacement API with an overlap > of at least one major version with both APIs available. I don't understand... the BIGNUM library is available in libcrypto and as far as I know, there are no plans to remove it. What did I miss? > - Provider algorithm implementations should carry > description/selection parameters indicating limits to access: > "key-readable=yes/no", "key-writable=yes/no", "data-internal=yes/no", > "data-external=yes/no" and "iv-internal=yes/no".? For example, > a smartcard-like provider may have "key-readable=no" and > "key-writable=yes" for RSA keys, while another card may have > "key-writable=no" (meaning that externally generated keys cannot > be imported to the card.? "data-internal" refers to the > ability to process (encrypt, hash etc.) data internal to the > provider, such as other keys, while "data-external" refers to > the ability to process arbitrary application data. 
There's nothing stopping the providers to set such properties, and there's nothing stopping an application from having them in their method queries. And finally, there's nothing stopping a provider to not provide functionality that goes against their properties, either by simple not providing that specific function, or by having it return with an error if it finds a call invalid. > - Variable key length algorithm implementations should carry > description/selection parameters indicating maximum and minimum > key lengths (Some will refuse to process short keys, others will > refuse long keys, some will require the key length to be a > multiple of some number). Are you thinking descriptions that can be displayed back to the user, or something that's usable programmatically? Do you have an idea that doesn't involve inventing a mini language? > - The current EVP interface abuses the general (re)init operations > with omitted arguments as the main interface to update rapidly > changing algorithm parameters such as IVs and/or keys.? With the > removal of legacy APIs, the need to provide parameter changing > as explicit calls in the EVP API and provider has become more > obvious. We have also explicitely said that current code must work as much as possible. This means that while we could certainly provide all kinds of new functions, our priority is to make this work with the current EVP API as much as humanly possible. > - A provider property valuable to some callers (and already a known > property of some legacy APIs) is to declare that certain simple > operations will always succeed, such as passing additional data > bytes to a hash/mac (the rare cases of hardware disconnect and/or > exceeding the algorithm maximums can be deferred to "finish" > operations).? A name for this property of an algorithm > implementation could be "nofail=yes", and the list of non-failing > operations defined for each type of algorithm should be publicly > specified (a nofail hash would have a different list than a > no-fail symmetric encryption). A note here: "provider property" is meaningless in this case. Properties are tied to each algorithm implementation, as can be seen in the definition of OSSL_ALGORITHM. And yes, that could be done. I can't say that it *will* be done, and you'll probably have to remind us later on (via github issue, please). > - Providers that are really bridges to another multi-provider API > (ENGINE, PKCS#11, MS CAPI 1, MS CNG) should be explicitly allowed > to load/init separately for each underlying provider.? For example, > it would be bad for an application talking to one PKCS#11 module to > run, load or block all other PKCS#11 modules on the system. Noted. > - Under normal file system layout conventions, /usr/share/ (and > below) is for architecture-independent files such as man pages, > trusted root certificates and platform-independent include files. > Architecture specific files such as "openssl/providers/foo.so" > and opensslconf.h belong in /usr/ or /usr/local/ . Ah, I see what you're talking about. Okie, simple example change. BTW, many other applications use /usr/lib/{appname}/ or /usr/local/lib/{appname} for this, so I assume that replacing "/usr/share/openssl/providers/foo.so" with "/usr/lib/openssl/providers/foo.so" would be a fine enough example. > FIPS-specific issues: > > - The checksum of the FIPS DLL should be compiled into the FIPS- > capable OpenSSL library, since a checksum stored in its own file > on the end user system is too easily replaced by attackers.? 
This > also implies that each FIPS DLL version will need its own file name > in case different applications are linked to different libcrypto > versions (because they were started before an upgrade of the shared > libcrypto or because they use their own copy of libcrypto). I'm not sure how important you think the libcrypto version is. A goal of the new internal design is to make providers fairly agnostic to libcrypto versions and vice versa. > - If possible, the core or a libcrypto-provided FIPS-wrapper should > check the hash of the opensslfips-3.x.x.so DLL before running any > of its code (including on-load stubs), secondly, the DLL can > recheck itself using its internal implementation of the chosen MAC > algorithm, if this is required by the CMVP.? This is to protect the > application if a totally unrelated malicious file is dropped in > place of the DLL. > > - The document seems to consistently only mentions the > shortest/weakest key lengths, such as AES-128.? Hopefully the > actual release will have no such limitation. We could have mentioned something else. It's just an example. (I'm surprised you didn't mention that we consistently specified '-cbc' as well... I know some folks who raise an eyebrow at that these days) > - The well-known slowness of FIPS validations will in practice > require the FIPS module compiled from a source change to be > released (much) later than the same change in the default > provider.? The draft method of submitting FIPS validation > updates just before any FIPS-affecting OpenSSL release seems > overly optimistic. It will only mean that the bleeding edge FIPS module source will be in a "validation pending" for some time, i.e. you'll have to run your latest libcrypto build with a previous version of the module for a bit of time. > - Similarly, due to the slowness of FIPS validation updates, > it may often be prudent to provide a root-cause fix in the > default provider and a less-effective change in the FIPS > provider, possibly involving FIPS-frozen workaround code in > libcrypto, either in core or in a separate FIPS-wrapper > component. > > - The mechanisms for dealing with cannot-export-the-private-key > hardware providers could also be used to let the FIPS provider > offer algorithm variants where the crypto officer (application > writer/installer) specify that some keys remain inside the > FIPS blob, inaccessible to the user role (application code). > For example, TLS PFS (EC)DHE keys and CMS per message keys > could by default remain inside the provider.? Extending this > to TLS session keys and server private key would be a future > option. > > - In future versions, it should be possible to combine the > bundled FIPS provider with providers for FIPS-validated hardware, > such as FIPS validated PIV smart cards for TLS client > certificates. From a building perspective, I see nothing that would stop such bundles to emerge. Some bundling code that appears as one provider to libcrypto, maybe? > - Support for generating and validating (EC)DH and (EC)DSA > group parameters using the FIPS-specified algorithms should > be available in addition to the fixed sets of well-known > group parameters.? In FIPS 800-56A rev 3, these are the > DH primes specified using a SEED value.? Other versions of > SP 800-56A, and/or supplemental NIST documents may allow > other such group parameters. > > - If permitted by the CMVP rules, allow an option for > application provided (additional) entropy input to the RNG > from outside the module boundary. 
> > Enjoy Likewise Cheers, Richard -- Richard Levitte levitte at openssl.org OpenSSL Project http://www.openssl.org/~levitte/ From matt at openssl.org Fri Feb 15 11:23:42 2019 From: matt at openssl.org (Matt Caswell) Date: Fri, 15 Feb 2019 11:23:42 +0000 Subject: [openssl-users] Comments on the recent OpenSSL 3.0.0 specification (Monday 2019-02-11) In-Reply-To: <4d65dc06-064a-035d-815f-68f426600d82@wisemo.com> References: <4d65dc06-064a-035d-815f-68f426600d82@wisemo.com> Message-ID: <368dc8e6-ba62-a521-0e67-818e8c5f4256@openssl.org> On 15/02/2019 03:55, Jakob Bohm via openssl-users wrote: > These comments are on the version of the specification released on > Monday 2019-02-11 at https://www.openssl.org/docs/OpenSSL300Design.html > > General notes on this release: > > - The release was not announced on the openssl-users and > ?openssl-announce mailing lists.? A related blog post was > ?announced two days later. Well the blog post was intended to *be* the announcement. > > - The related strategy document is at > ?https://www.openssl.org/docs/OpenSSLStrategicArchitecture.html > ?(This link is broken on the www.openssl.org front page). Fixed - thanks. > > - The draft does not link to anywhere that the public can > ?inspect archived or version tracked document versions. These documents have only just reached the point where they were stable enough to make public and go into version control. Any future updates will go through the normal review process for the web repo and be version controlled. The raw markdown versions are here: https://github.com/openssl/web/blob/master/docs/OpenSSL300Design.md https://github.com/openssl/web/blob/master/docs/OpenSSLStrategicArchitecture.md Pull requests and issues can be made via github in the normal way: https://github.com/openssl/web/pulls https://github.com/openssl/web/issues Other comments inserted below where I have an opinion or something to say. I'm hoping others will chip in on your other points: > Non-FIPS architecture issues: > > - The identifiers for predefined parameters and values (such as > "fips", "on", "off", "aes-128-cbc" should be binary values that > cannot be easily searched in larger program files (by attackers). > This rules out both text strings, UUID values and ASN OID values. > Something similar to the function ids would be ideal. Note that > to make this effective, the string names of these should not > appear in linked binaries. > (The context of this is linking libcrypto and/or libssl into > closed source binary programs, since open source binaries cannot > hide their internal structure anyway). > > - It should be possible for applications to configure OpenSSL to > ?load provider DLLs and config files from their own directories > ?rather than the global well-known directory (isolation from > ?system wide changes). I believe this is the intention. > > - It should be possible for providers (possibly not the FIPS > ?provider) to be linked directly into programs that link > ?statically to libcrypto.? This implies the absence of > ?conflicting identifiers, a public API to pass the address of > ?a |OSSL_provider_init|function, all bundled providers provided > ?as static libraries in static library builds, and a higher > ?level init function that initializes both libcrypto and the > ?default provider. The plan is that Providers may choose to be linked against libcrypto or not as they see fit (the FIPS Provider will not be). They can be built entirely without using any libcrypto symbols at all. 
They just need to have the well known entry point. Any functions from the Core that the Provider may need to call are passed as callback function pointers. I can't think of a reason why there should be an issue with providers statically linking with libcrypto if they so wish. > - Static library forms of the default provider should not > ?force callers to include every algorithm just because they > ?are referenced from the default dispatch tables.? For example, > ?it should be easy to link a static application that uses only > ?AES-256-CBC and SHA-256, and contains little else.? Such limited > ?feature applications would obviously have to forego using the > ?all-inclusive high level init function. > > - For use with engine-like providers (such as hardware providers > ?and the PKCS#11 provider), it should be possible for a provider > ?to provide algorithms like RSA at multiple abstraction levels. > ? For example, some PKCS#11 hardware provides the raw RSA > ?algorithm (bignum in, bignum out) while others provide specific > ?forms such as PKCS#1.5 signature.? There are even some that > ?provide the PKCS#1.5 form with some hashes and the RSA form > ?as a general fallback. I think this should be possible with the design as it stands. Providers make implementations of algorithms available to the core. I don't see any reason why they can't provide multiple implementations of the same algorithm (presumably distinguished by some properties) > > - Similarly, some providers will provide both ends of an > ?asymmetric algorithm, while others only provide the private > ?key operation, leaving the public key operation to other > ?providers (selected by core in the general way). Again I believe this should be possible with the current design. We split algorithm implementations into different "operations". I don't think there is any reason to require a provider to implement all operations that an algorithm is capable of (in fact I think that was the design intent). It might be worth making the ability to do this more explicit in the document. > > - The general bignum library should be exposed via an API, either > ?the legacy OpenSSL bignum API or a replacement API with an overlap > ?of at least one major version with both APIs available. There are no plans to remove access to bignum. > > - Provider algorithm implementations should carry > ?description/selection parameters indicating limits to access: > ?"key-readable=yes/no", "key-writable=yes/no", "data-internal=yes/no", > ?"data-external=yes/no" and "iv-internal=yes/no".? For example, > ?a smartcard-like provider may have "key-readable=no" and > ?"key-writable=yes" for RSA keys, while another card may have > ?"key-writable=no" (meaning that externally generated keys cannot > ?be imported to the card.? "data-internal" refers to the > ?ability to process (encrypt, hash etc.) data internal to the > ?provider, such as other keys, while "data-external" refers to > ?the ability to process arbitrary application data. We expect Provider authors to be able to define their own properties as they see fit. We plan to create a central repository (outside the main source code) of "common" names. So I think all of the above should be possible. > > - Variable key length algorithm implementations should carry > ?description/selection parameters indicating maximum and minimum > ?key lengths (Some will refuse to process short keys, others will > ?refuse long keys, some will require the key length to be a > ?multiple of some number). 
> > - The current EVP interface abuses the general (re)init operations > ?with omitted arguments as the main interface to update rapidly > ?changing algorithm parameters such as IVs and/or keys.? With the > ?removal of legacy APIs, the need to provide parameter changing > ?as explicit calls in the EVP API and provider has become more > ?obvious. Agreed that we will need to review the EVP interface to ensure that everything you can do in the low-level interface is still possible (within reason). Note though that in 3.0.0 we are only deprecating the low-level APIs not removing them. The Strategic Architecture document (which has a view beyond 3.0.0) sees us moving them to a libcrypto-legacy library (so they would still be available).* If you do use the low-level APIs in 3.0.0 then they won't go via the Core/Providers. (* I just spotted an error in the strategy document. The packaging diagram doesn't match up with the text and doesn't show libcrypto-legacy on it - althogh the text does talk about it. I need to investigate that) > - A provider property valuable to some callers (and already a known > ?property of some legacy APIs) is to declare that certain simple > ?operations will always succeed, such as passing additional data > ?bytes to a hash/mac (the rare cases of hardware disconnect and/or > ?exceeding the algorithm maximums can be deferred to "finish" > ?operations).? A name for this property of an algorithm > ?implementation could be "nofail=yes", and the list of non-failing > ?operations defined for each type of algorithm should be publicly > ?specified (a nofail hash would have a different list than a > ?no-fail symmetric encryption). That's an interesting idea. Again Provider can define their own properties as they see fit. We can certainly give consideration to any other properties that we would like to have a "common" definition. > > - Providers that are really bridges to another multi-provider API > ?(ENGINE, PKCS#11, MS CAPI 1, MS CNG) should be explicitly allowed > ?to load/init separately for each underlying provider.? For example, > ?it would be bad for an application talking to one PKCS#11 module to > ?run, load or block all other PKCS#11 modules on the system. The design allows for providers to make algorithm implementations available/not-available over time. So I think this addresses what you are saying here? > > - Under normal file system layout conventions, /usr/share/ (and > ?below) is for architecture-independent files such as man pages, > ?trusted root certificates and platform-independent include files. > ? Architecture specific files such as "openssl/providers/foo.so" > ?and opensslconf.h belong in /usr/ or /usr/local/ . I don't believe we've got as far as specifying the installation file system layout - but this is useful input. > > > FIPS-specific issues: > > - The checksum of the FIPS DLL should be compiled into the FIPS- > ?capable OpenSSL library, since a checksum stored in its own file > ?on the end user system is too easily replaced by attackers.? This > ?also implies that each FIPS DLL version will need its own file name > ?in case different applications are linked to different libcrypto > ?versions (because they were started before an upgrade of the shared > ?libcrypto or because they use their own copy of libcrypto). This is not an attack that we are seeking to defend against in 3.0.0. We consider the checksum to be an integrity check to protect against accidental changes to the module. 
> - If possible, the core or a libcrypto-provided FIPS-wrapper should > ?check the hash of the opensslfips-3.x.x.so DLL before running any > ?of its code (including on-load stubs), secondly, the DLL can > ?recheck itself using its internal implementation of the chosen MAC > ?algorithm, if this is required by the CMVP.? This is to protect the > ?application if a totally unrelated malicious file is dropped in > ?place of the DLL. As above - this is not an attack we are seeking to defend against. > - The document seems to consistently only mentions the > ?shortest/weakest key lengths, such as AES-128.? Hopefully the > ?actual release will have no such limitation. No - there is no such restriction. The full list of what we are planning to support is in Appendix 3. Although I note that we explicitly mention key lengths for some algorithms/modes but not others. We should probably update that to be consistent. > > - The well-known slowness of FIPS validations will in practice > ?require the FIPS module compiled from a source change to be > ?released (much) later than the same change in the default > ?provider.? The draft method of submitting FIPS validation > ?updates just before any FIPS-affecting OpenSSL release seems > ?overly optimistic. > > - Similarly, due to the slowness of FIPS validation updates, > ?it may often be prudent to provide a root-cause fix in the > ?default provider and a less-effective change in the FIPS > ?provider, possibly involving FIPS-frozen workaround code in > ?libcrypto, either in core or in a separate FIPS-wrapper > ?component. > > - The mechanisms for dealing with cannot-export-the-private-key > ?hardware providers could also be used to let the FIPS provider > ?offer algorithm variants where the crypto officer (application > ?writer/installer) specify that some keys remain inside the > ?FIPS blob, inaccessible to the user role (application code). > ? For example, TLS PFS (EC)DHE keys and CMS per message keys > ?could by default remain inside the provider.? Extending this > ?to TLS session keys and server private key would be a future > ?option. > > - In future versions, it should be possible to combine the > ?bundled FIPS provider with providers for FIPS-validated hardware, > ?such as FIPS validated PIV smart cards for TLS client > ?certificates. The OpenSSL FIPS provider will provide algorithm implementations matching "fips=yes". I see no reason why other providers can't do the same - so the above should be possible. > > - Support for generating and validating (EC)DH and (EC)DSA > ?group parameters using the FIPS-specified algorithms should > ?be available in addition to the fixed sets of well-known > ?group parameters.? In FIPS 800-56A rev 3, these are the > ?DH primes specified using a SEED value.? Other versions of > ?SP 800-56A, and/or supplemental NIST documents may allow > ?other such group parameters. > > - If permitted by the CMVP rules, allow an option for > ?application provided (additional) entropy input to the RNG > ?from outside the module boundary. Thanks for the input and all of the suggestions. 
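(For reference, the provider model discussed throughout this reply reduces to a single well-known entry point exchanging dispatch tables with the core. A very rough skeleton is sketched below. OSSL_provider_init is the entry point named in this thread; the other type names, the header names and the exact signatures are approximations of the draft design and are likely to change before 3.0.0 ships. The digest dispatch table is left empty where a real provider would plug in its implementation functions.)

    #include <stddef.h>
    #include <openssl/core.h>           /* header names assumed from the draft */
    #include <openssl/core_dispatch.h>

    /* Placeholder: a real provider would list its SHA-256 entry points here. */
    static const OSSL_DISPATCH my_sha256_functions[] = { { 0, NULL } };

    /* Algorithms this provider offers, each with a property definition the
     * core can match against queries such as "provider=myprov" or "fips=yes". */
    static const OSSL_ALGORITHM my_digests[] = {
        { "SHA-256", "provider=myprov,fips=no", my_sha256_functions },
        { NULL, NULL, NULL }
    };

    static const OSSL_ALGORITHM *my_query(void *provctx, int operation_id,
                                          int *no_cache)
    {
        return operation_id == OSSL_OP_DIGEST ? my_digests : NULL;
    }

    /* The provider's own dispatch table, handed back to the core. */
    static const OSSL_DISPATCH my_dispatch[] = {
        { OSSL_FUNC_PROVIDER_QUERY_OPERATION, (void (*)(void))my_query },
        { 0, NULL }
    };

    /* The well-known entry point: 'in' carries the core's upcalls as function
     * pointers, 'out' returns the provider's dispatch table. */
    int OSSL_provider_init(const OSSL_CORE_HANDLE *core, const OSSL_DISPATCH *in,
                           const OSSL_DISPATCH **out, void **provctx)
    {
        *out = my_dispatch;
        *provctx = NULL;                /* no per-provider state in this sketch */
        return 1;
    }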
Matt

From tmraz at redhat.com  Fri Feb 15 12:58:12 2019
From: tmraz at redhat.com (Tomas Mraz)
Date: Fri, 15 Feb 2019 13:58:12 +0100
Subject: [openssl-users] Comments on the recent OpenSSL 3.0.0 specification (Monday 2019-02-11)
In-Reply-To: <368dc8e6-ba62-a521-0e67-818e8c5f4256@openssl.org>
References: <4d65dc06-064a-035d-815f-68f426600d82@wisemo.com> <368dc8e6-ba62-a521-0e67-818e8c5f4256@openssl.org>
Message-ID: 

On Fri, 2019-02-15 at 11:23 +0000, Matt Caswell wrote:
> 
> On 15/02/2019 03:55, Jakob Bohm via openssl-users wrote:
> > yout - but this is useful input.
> > 
> > > FIPS-specific issues:
> > 
> > - The checksum of the FIPS DLL should be compiled into the FIPS-
> > capable OpenSSL library, since a checksum stored in its own file
> > on the end user system is too easily replaced by attackers. This
> > also implies that each FIPS DLL version will need its own file name
> > in case different applications are linked to different libcrypto
> > versions (because they were started before an upgrade of the shared
> > libcrypto or because they use their own copy of libcrypto).
> 
> This is not an attack that we are seeking to defend against in 3.0.0.
> We consider the checksum to be an integrity check to protect against
> accidental changes to the module.

+1 to Matt. The integrity check of the FIPS standard was never meant to
be a mitigation against active attacks. Its purpose always was just
protection against inadvertent HW or SW errors. Building the checksum
into a binary overly complicates things, and it is not worth the hassle
as it would not protect against active attacks either; it would just
complicate them a little.

> > - If possible, the core or a libcrypto-provided FIPS-wrapper should
> > check the hash of the opensslfips-3.x.x.so DLL before running any
> > of its code (including on-load stubs), secondly, the DLL can
> > recheck itself using its internal implementation of the chosen MAC
> > algorithm, if this is required by the CMVP. This is to protect the
> > application if a totally unrelated malicious file is dropped in
> > place of the DLL.
> 
> As above - this is not an attack we are seeking to defend against.

+1

-- 
Tomáš Mráz
No matter how far down the wrong road you've gone, turn back.
        Turkish proverb
[You'll know whether the road is wrong if you carefully listen to your conscience.]

From levitte at openssl.org  Fri Feb 15 15:03:42 2019
From: levitte at openssl.org (Richard Levitte)
Date: Fri, 15 Feb 2019 16:03:42 +0100
Subject: openssl-users: DKIM, DMARC and all that jazz, and what it means to us
Message-ID: <87va1lnlf5.wl-levitte@openssl.org>

Hi all,

It seems like DMARC, SPF, DKIM, or *something* is tripping us up quite
a bit. Earlier this afternoon (that's what it is in Sweden at least),
we postmasters got a deluge of bounce reports from mailman, basically
telling us that it got something like this:

: host aspmx.l.google.com[74.125.140.26] said:
    550-5.7.1 This message does not have authentication information or fails to
    pass 550-5.7.1 authentication checks. To best protect our users from spam,
    the 550-5.7.1 message has been blocked. Please visit 550-5.7.1
    https://support.google.com/mail/answer/81126#authentication for more 550
    5.7.1 information.
f1si3266960wro.105 - gsmtp (in reply to end of DATA command) There's very little fact of what actually triggered these bounces, but they always come from Google, so we're guessing that they're becoming increasingly aggressive in their checks of DKIM, SPF, ARC, who knows (they don't seem to check DMARC, 'cause we do have one with p=none and an address to sent DMARC reports to, and I'm hearing absolutely nothing from Google, but I do hear from others) So, to mitigate the problem, we've removed all extra decoration of the messages, i.e. the list footer that's usually added and the subject tag that indicates what list this is (I added the "openssl-users:" that you see manually). So IF you're filtering the messages to get list messages in a different folder, based on the subject line, you will unfortunately have to change it. If I may suggest something, it's to look at this: List-Id: Cheers, Richard ( role: postmaster ) -- Richard Levitte levitte at openssl.org OpenSSL Project http://www.openssl.org/~levitte/ From jb-openssl at wisemo.com Fri Feb 15 16:14:03 2019 From: jb-openssl at wisemo.com (Jakob Bohm) Date: Fri, 15 Feb 2019 17:14:03 +0100 Subject: openssl-users: DKIM, DMARC and all that jazz, and what it means to us In-Reply-To: <87va1lnlf5.wl-levitte@openssl.org> References: <87va1lnlf5.wl-levitte@openssl.org> Message-ID: <79765d96-9117-d6f2-91eb-757f996db2a8@wisemo.com> On 15/02/2019 16:03, Richard Levitte wrote: > Hi all, > > It seem like DMARC, SPF, DKIM, or *something* is tripping us up quite > a bit. Earlier this afternoon (that's what it is in Sweden at least), > us postmasters got a deluge of bounce reports from mailman, basically > telling us that it got something like this: > > : host aspmx.l.google.com[74.125.140.26] said: > 550-5.7.1 This message does not have authentication information or fails to > pass 550-5.7.1 authentication checks. To best protect our users from spam, > the 550-5.7.1 message has been blocked. Please visit 550-5.7.1 > https://support.google.com/mail/answer/81126#authentication for more 550 > 5.7.1 information. f1si3266960wro.105 - gsmtp (in reply to end of DATA > command) > > There's very little fact of what actually triggered these bounces, but > they always come from Google, so we're guessing that they're becoming > increasingly aggressive in their checks of DKIM, SPF, ARC, who knows > (they don't seem to check DMARC, 'cause we do have one with p=none and > an address to sent DMARC reports to, and I'm hearing absolutely > nothing from Google, but I do hear from others) > > So, to mitigate the problem, we've removed all extra decoration of the > messages, i.e. the list footer that's usually added and the subject > tag that indicates what list this is (I added the "openssl-users:" > that you see manually). > > So IF you're filtering the messages to get list messages in a > different folder, based on the subject line, you will unfortunately > have to change it. If I may suggest something, it's to look at this: > > List-Id: > > Cheers, > Richard ( role: postmaster ) > I have had some fruitless discussion with the mailman authors a while back.? They seemed to insist that DMARC etc. were bad ideas and that senders complying were broken and needed half-assed workarounds. In my own role as postmaster, I see all these systems implementing variations of the same concepts, that have pretty simply implications for mailing list software: 1. The global mail system is increasingly implementing checks for ? spoofed source addresses.? List gateways thus need to be ? 
increasingly conservative in what they send out. 2. If posts contain any kind of digital signature (PGP, S/MIME, ? DKIM etc.), the mailing list software must either preserve the ? validity by not changing anything signed or remove that signature. ?? Leaving a now-invalid signature in place makes the post obviously ? bogus to anyone checking, resulting in bounces and lost mail. ?? (Optionally, the list gateway may add headers describing the ? validity of those signatures before processing, perhaps even ? rejecting posts with invalid signatures to reduce spam). 3. When sending out the mails, the various from addresses must be ? appropriately authorized, either by being the list gateway itself or ? by satisfying all the known checks for any preserved addresses. 4. As mass senders of mail, mailing list gateways should themselves ? implement all the checkable features in the standards: strict SPF ? records for its own domain, DKIM signatures for any mails with ? the list gateway as source, DMARC records telling recipients to ? discard/reject messages pretending to be from the list without ? satisfying all these checks. 5. The enforcement strictness in DMARC, SPF and DKIM DNS records ? should not be taken as license to violate the requirements.? For ? example an SPF rule of +all should not be treated as permission ? to use the posters domain in envelope-from. ?? Frustration with spam may lead recipient systems to enforce ? more strictly than requested by the source domain. More specific rules: A. SPF requires the envelope-from (SMTP MAIL FROM) address to always ? be that of the list gateway, even if the posters domain has no SPF ? record. B. A valid DKIM signature from the posters domain can allow keeping ? the poster as From: address if the DKIM signature is undamaged. C. Some DKIM signatures allow appending a mailing list footer to the ? end of plaintext mails and adding the mailing list headers to the ? mail, others do not.? In practice this is determined by ? implementation details in the posters mail server (for example, ? most versions of exim don't sign in that permissive way). ?? Programmatically this can be determined by directly comparing the ? coverage indicated in the DKIM header to the intended mail ? modifications. D. DMARC DNS records indicate if a sending domain wants to restrict ? header-From (etc.) pointing to that domain to only be used with ? at least one of DKIM and SPF passing for header-From.? Rule 5 ? applies, but so does rule C. Enjoy Jakob -- Jakob Bohm, CIO, Partner, WiseMo A/S. https://www.wisemo.com Transformervej 29, 2860 S?borg, Denmark. Direct +45 31 13 16 10 This public discussion message is non-binding and may contain errors. WiseMo - Remote Service Management for PCs, Phones and Embedded From mark at keypair.us Fri Feb 15 16:20:31 2019 From: mark at keypair.us (Mark Minnoch) Date: Fri, 15 Feb 2019 08:20:31 -0800 Subject: [openssl-users] [openssl-project] OpenSSL 3.0 and FIPS Update Message-ID: Responding to some earlier questions: > Can you give any guidance on which platforms will be validated with the OpenSSL FIPS 3.0 module? My recollection is that it will only be a handful of platforms. I would expect the number of platforms to be small. The wonderful 5 sponsors of the FIPS project will likely direct the initial platforms. > Any additional information about how other platforms can be validated would also be helpful. My company, KeyPair Consulting, performs FIPS testing for new platforms for the OpenSSL FOM 2.0. 
We plan to continue this service for the OpenSSL FIPS Module for 3.0. -- Mark J. Minnoch Co-Founder, CISSP KeyPair Consulting +1 (805) 550-3231 <(805)%20550-3231> mobile https://KeyPair.us https://www.linkedin.com/in/minnoch *We expertly guide technology companies in achieving their FIPS 140 goals* *Blog post: You Have Your FIPS Certificate. Now What? * -------------- next part -------------- An HTML attachment was scrubbed... URL: From jb-openssl at wisemo.com Fri Feb 15 16:54:35 2019 From: jb-openssl at wisemo.com (Jakob Bohm) Date: Fri, 15 Feb 2019 17:54:35 +0100 Subject: [openssl-users] Comments on the recent OpenSSL 3.0.0 specification (Monday 2019-02-11) In-Reply-To: <368dc8e6-ba62-a521-0e67-818e8c5f4256@openssl.org> References: <4d65dc06-064a-035d-815f-68f426600d82@wisemo.com> <368dc8e6-ba62-a521-0e67-818e8c5f4256@openssl.org> Message-ID: <5226374c-2404-2163-d979-b0d52eef271a@wisemo.com> On 15/02/2019 12:23, Matt Caswell wrote: > > On 15/02/2019 03:55, Jakob Bohm via openssl-users wrote: >> These comments are on the version of the specification released on >> Monday 2019-02-11 at https://www.openssl.org/docs/OpenSSL300Design.html >> >> General notes on this release: >> >> - The release was not announced on the openssl-users and >> ?openssl-announce mailing lists.? A related blog post was >> ?announced two days later. > Well the blog post was intended to *be* the announcement. > >> - The related strategy document is at >> ?https://www.openssl.org/docs/OpenSSLStrategicArchitecture.html >> ?(This link is broken on the www.openssl.org front page). > Fixed - thanks. > >> - The draft does not link to anywhere that the public can >> ?inspect archived or version tracked document versions. > These documents have only just reached the point where they were stable enough > to make public and go into version control. Any future updates will go through > the normal review process for the web repo and be version controlled. The raw > markdown versions are here: > > https://github.com/openssl/web/blob/master/docs/OpenSSL300Design.md > https://github.com/openssl/web/blob/master/docs/OpenSSLStrategicArchitecture.md > > Pull requests and issues can be made via github in the normal way: > > https://github.com/openssl/web/pulls > https://github.com/openssl/web/issues > > Other comments inserted below where I have an opinion or something to say. I'm > hoping others will chip in on your other points: > >> Non-FIPS architecture issues: >> >> - The identifiers for predefined parameters and values (such as >> "fips", "on", "off", "aes-128-cbc" should be binary values that >> cannot be easily searched in larger program files (by attackers). >> This rules out both text strings, UUID values and ASN OID values. >> Something similar to the function ids would be ideal. Note that >> to make this effective, the string names of these should not >> appear in linked binaries. >> (The context of this is linking libcrypto and/or libssl into >> closed source binary programs, since open source binaries cannot >> hide their internal structure anyway). >> >> - It should be possible for applications to configure OpenSSL to >> ?load provider DLLs and config files from their own directories >> ?rather than the global well-known directory (isolation from >> ?system wide changes). > I believe this is the intention. > > >> - It should be possible for providers (possibly not the FIPS >> ?provider) to be linked directly into programs that link >> ?statically to libcrypto.? 
This implies the absence of >> ?conflicting identifiers, a public API to pass the address of >> ?a |OSSL_provider_init|function, all bundled providers provided >> ?as static libraries in static library builds, and a higher >> ?level init function that initializes both libcrypto and the >> ?default provider. > The plan is that Providers may choose to be linked against libcrypto or not as > they see fit (the FIPS Provider will not be). They can be built entirely without > using any libcrypto symbols at all. They just need to have the well known entry > point. Any functions from the Core that the Provider may need to call are passed > as callback function pointers. I can't think of a reason why there should be an > issue with providers statically linking with libcrypto if they so wish. This one is not about providers linked against libcrypto, it's about applications linked against libcrypto3.a and provider-lib.a, thus eliminating the DLL loading step. > > >> - Static library forms of the default provider should not >> ?force callers to include every algorithm just because they >> ?are referenced from the default dispatch tables.? For example, >> ?it should be easy to link a static application that uses only >> ?AES-256-CBC and SHA-256, and contains little else.? Such limited >> ?feature applications would obviously have to forego using the >> ?all-inclusive high level init function. >> >> - For use with engine-like providers (such as hardware providers >> ?and the PKCS#11 provider), it should be possible for a provider >> ?to provide algorithms like RSA at multiple abstraction levels. >> ? For example, some PKCS#11 hardware provides the raw RSA >> ?algorithm (bignum in, bignum out) while others provide specific >> ?forms such as PKCS#1.5 signature.? There are even some that >> ?provide the PKCS#1.5 form with some hashes and the RSA form >> ?as a general fallback. > I think this should be possible with the design as it stands. Providers make > implementations of algorithms available to the core. I don't see any reason why > they can't provide multiple implementations of the same algorithm (presumably > distinguished by some properties) The case here is that some providers (such as certain Gemalto USB smartcards) offer hardware implementation of RSA over arbitrary bignums, leaving the PKCS formatting to libraries such as OpenSSL. Experience with upgrading to better hashes in the past tells me it is more robust if the PKCS formatting code is not pushed into the provider in those cases.? I have other cards in my collection that act the other way round (insisting on doing the PKCS formatting to prevent chosen plaintext attacks). > >> - Similarly, some providers will provide both ends of an >> ?asymmetric algorithm, while others only provide the private >> ?key operation, leaving the public key operation to other >> ?providers (selected by core in the general way). > Again I believe this should be possible with the current design. We split > algorithm implementations into different "operations". I don't think there is > any reason to require a provider to implement all operations that an algorithm > is capable of (in fact I think that was the design intent). It might be worth > making the ability to do this more explicit in the document. > >> - The general bignum library should be exposed via an API, either >> ?the legacy OpenSSL bignum API or a replacement API with an overlap >> ?of at least one major version with both APIs available. > There are no plans to remove access to bignum. 
It was missing from the component diagrams and vague text about deprecating "legacy APIs" was not reassuring. > >> - Provider algorithm implementations should carry >> ?description/selection parameters indicating limits to access: >> ?"key-readable=yes/no", "key-writable=yes/no", "data-internal=yes/no", >> ?"data-external=yes/no" and "iv-internal=yes/no".? For example, >> ?a smartcard-like provider may have "key-readable=no" and >> ?"key-writable=yes" for RSA keys, while another card may have >> ?"key-writable=no" (meaning that externally generated keys cannot >> ?be imported to the card.? "data-internal" refers to the >> ?ability to process (encrypt, hash etc.) data internal to the >> ?provider, such as other keys, while "data-external" refers to >> ?the ability to process arbitrary application data. > We expect Provider authors to be able to define their own properties as they see > fit. We plan to create a central repository (outside the main source code) of > "common" names. So I think all of the above should be possible. The idea was to make these standard properties, as they seem to occur in many real world providers, from FIPS to MS CAPI.? They also affect which implementations can be used at various points in the protocols. > >> - Variable key length algorithm implementations should carry >> ?description/selection parameters indicating maximum and minimum >> ?key lengths (Some will refuse to process short keys, others will >> ?refuse long keys, some will require the key length to be a >> ?multiple of some number). There was a comment in the other reply.? I think this simple list of 3 numeric properties (or perhaps a few more), would be enough to answer the question "will this provider implementation handle this particular key size"?? No need for a mini language. Examples: The FIPS provider 3.0.0 will explicitly enforce some minimum key lengths for RSA and DH keys.? A smart card in my collection requires RSA keys to be a multiple of 64 bits (in addition to max and min lengths), while another card from the same vendor has a different divider. >> - The current EVP interface abuses the general (re)init operations >> ?with omitted arguments as the main interface to update rapidly >> ?changing algorithm parameters such as IVs and/or keys.? With the >> ?removal of legacy APIs, the need to provide parameter changing >> ?as explicit calls in the EVP API and provider has become more >> ?obvious. > Agreed that we will need to review the EVP interface to ensure that everything > you can do in the low-level interface is still possible (within reason). Note > though that in 3.0.0 we are only deprecating the low-level APIs not removing > them. The Strategic Architecture document (which has a view beyond 3.0.0) sees > us moving them to a libcrypto-legacy library (so they would still be available).* > > If you do use the low-level APIs in 3.0.0 then they won't go via the Core/Providers. > > (* I just spotted an error in the strategy document. The packaging diagram > doesn't match up with the text and doesn't show libcrypto-legacy on it - althogh > the text does talk about it. I need to investigate that) > Point would be to provide EVP methods to replace some already deprecated low-level APIs.? Currently fragile logic to do less when only changing the IV is buried deep in each symmetric algorithm provider.? Making this an explicit provider method and making core dispatch that case accordingly would improve code quality. 
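(For readers following along, the overloading being referred to is the existing EVP (re)init idiom: passing NULL for the cipher and key to EVP_EncryptInit_ex() means "keep the current cipher and key, only install a new IV". A minimal sketch with the current API:)

    #include <openssl/evp.h>

    /* Encrypt two messages under the same key, changing only the IV between
     * them via the implicit (re)init overloading discussed above.  The output
     * buffers are assumed large enough (message length plus one block). */
    int encrypt_two(const unsigned char *key,
                    const unsigned char *iv1, const unsigned char *iv2,
                    const unsigned char *msg1, int len1, unsigned char *out1,
                    const unsigned char *msg2, int len2, unsigned char *out2)
    {
        EVP_CIPHER_CTX *ctx = EVP_CIPHER_CTX_new();
        int outl, tmpl, ok = 0;

        if (ctx == NULL)
            return 0;

        /* Full init: cipher, key and IV for the first message. */
        if (!EVP_EncryptInit_ex(ctx, EVP_aes_128_cbc(), NULL, key, iv1)
                || !EVP_EncryptUpdate(ctx, out1, &outl, msg1, len1)
                || !EVP_EncryptFinal_ex(ctx, out1 + outl, &tmpl))
            goto end;

        /* "Re-init": NULL cipher and NULL key mean "keep them"; only the IV
         * changes.  This is the implicit behaviour that would become an
         * explicit parameter-change call under the proposal above. */
        if (!EVP_EncryptInit_ex(ctx, NULL, NULL, NULL, iv2)
                || !EVP_EncryptUpdate(ctx, out2, &outl, msg2, len2)
                || !EVP_EncryptFinal_ex(ctx, out2 + outl, &tmpl))
            goto end;
        ok = 1;
     end:
        EVP_CIPHER_CTX_free(ctx);
        return ok;
    }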
>> - A provider property valuable to some callers (and already a known >> ?property of some legacy APIs) is to declare that certain simple >> ?operations will always succeed, such as passing additional data >> ?bytes to a hash/mac (the rare cases of hardware disconnect and/or >> ?exceeding the algorithm maximums can be deferred to "finish" >> ?operations).? A name for this property of an algorithm >> ?implementation could be "nofail=yes", and the list of non-failing >> ?operations defined for each type of algorithm should be publicly >> ?specified (a nofail hash would have a different list than a >> ?no-fail symmetric encryption). > That's an interesting idea. Again Provider can define their own properties as > they see fit. We can certainly give consideration to any other properties that > we would like to have a "common" definition. I believe this is a (non-public) property of some of default implementations. > >> - Providers that are really bridges to another multi-provider API >> ?(ENGINE, PKCS#11, MS CAPI 1, MS CNG) should be explicitly allowed >> ?to load/init separately for each underlying provider.? For example, >> ?it would be bad for an application talking to one PKCS#11 module to >> ?run, load or block all other PKCS#11 modules on the system. > The design allows for providers to make algorithm implementations > available/not-available over time. So I think this addresses what you are saying > here? Loading a PKCS#11 module (or the equivalent for other APIs) has side effects.? Loading (or not) PKCS#11 modules (etc.) as needed should be almost as easy as loading (or not) providers. > >> - Under normal file system layout conventions, /usr/share/ (and >> ?below) is for architecture-independent files such as man pages, >> ?trusted root certificates and platform-independent include files. >> ? Architecture specific files such as "openssl/providers/foo.so" >> ?and opensslconf.h belong in /usr/ or /usr/local/ . > I don't believe we've got as far as specifying the installation file system > layout - but this is useful input. There were some unfortunate examples in the document. >> >> FIPS-specific issues: >> >> - The checksum of the FIPS DLL should be compiled into the FIPS- >> ?capable OpenSSL library, since a checksum stored in its own file >> ?on the end user system is too easily replaced by attackers.? This >> ?also implies that each FIPS DLL version will need its own file name >> ?in case different applications are linked to different libcrypto >> ?versions (because they were started before an upgrade of the shared >> ?libcrypto or because they use their own copy of libcrypto). > This is not an attack that we are seeking to defend against in 3.0.0. We > consider the checksum to be an integrity check to protect against accidental > changes to the module. While FIPS 140 level 1 might not, the higher FIPS levels seem very keen on defending against these attacks, and the checksum at level 1 seems to be a degenerated remnant of those defenses. > >> - If possible, the core or a libcrypto-provided FIPS-wrapper should >> ?check the hash of the opensslfips-3.x.x.so DLL before running any >> ?of its code (including on-load stubs), secondly, the DLL can >> ?recheck itself using its internal implementation of the chosen MAC >> ?algorithm, if this is required by the CMVP.? This is to protect the >> ?application if a totally unrelated malicious file is dropped in >> ?place of the DLL. > As above - this is not an attack we are seeking to defend against. 
It is, however, a new attack made possible by moving the FIPS canister into its own file. > >> - The document seems to consistently only mentions the >> ?shortest/weakest key lengths, such as AES-128.? Hopefully the >> ?actual release will have no such limitation. > No - there is no such restriction. The full list of what we are planning to > support is in Appendix 3. Although I note that we explicitly mention key lengths > for some algorithms/modes but not others. We should probably update that to be > consistent. Bad choices of examples then.? I saw lots of mention of weak strength stuff, such as 96 bits of entropy, AES-128 etc. >> - The well-known slowness of FIPS validations will in practice >> ?require the FIPS module compiled from a source change to be >> ?released (much) later than the same change in the default >> ?provider.? The draft method of submitting FIPS validation >> ?updates just before any FIPS-affecting OpenSSL release seems >> ?overly optimistic. >> >> - Similarly, due to the slowness of FIPS validation updates, >> ?it may often be prudent to provide a root-cause fix in the >> ?default provider and a less-effective change in the FIPS >> ?provider, possibly involving FIPS-frozen workaround code in >> ?libcrypto, either in core or in a separate FIPS-wrapper >> ?component. >> >> - The mechanisms for dealing with cannot-export-the-private-key >> ?hardware providers could also be used to let the FIPS provider >> ?offer algorithm variants where the crypto officer (application >> ?writer/installer) specify that some keys remain inside the >> ?FIPS blob, inaccessible to the user role (application code). >> ? For example, TLS PFS (EC)DHE keys and CMS per message keys >> ?could by default remain inside the provider.? Extending this >> ?to TLS session keys and server private key would be a future >> ?option. >> >> - In future versions, it should be possible to combine the >> ?bundled FIPS provider with providers for FIPS-validated hardware, >> ?such as FIPS validated PIV smart cards for TLS client >> ?certificates. > The OpenSSL FIPS provider will provide algorithm implementations matching > "fips=yes". I see no reason why other providers can't do the same - so the above > should be possible. Some wording in the document suggested this might be erroneously blocked. > >> - Support for generating and validating (EC)DH and (EC)DSA >> ?group parameters using the FIPS-specified algorithms should >> ?be available in addition to the fixed sets of well-known >> ?group parameters.? In FIPS 800-56A rev 3, these are the >> ?DH primes specified using a SEED value.? Other versions of >> ?SP 800-56A, and/or supplemental NIST documents may allow >> ?other such group parameters. >> >> - If permitted by the CMVP rules, allow an option for >> ?application provided (additional) entropy input to the RNG >> ?from outside the module boundary. > Thanks for the input and all of the suggestions. > Enjoy Jakob -- Jakob Bohm, CIO, Partner, WiseMo A/S. https://www.wisemo.com Transformervej 29, 2860 S?borg, Denmark. Direct +45 31 13 16 10 This public discussion message is non-binding and may contain errors. WiseMo - Remote Service Management for PCs, Phones and Embedded From vieuxtech at gmail.com Fri Feb 15 17:11:19 2019 From: vieuxtech at gmail.com (Sam Roberts) Date: Fri, 15 Feb 2019 09:11:19 -0800 Subject: when should client stop calling SSL_read to get TLS1.3 session tickets after the close_notify? 
In-Reply-To: 
References: 
Message-ID: 

Resending, I just got banned for "bounces", though why gmail would be
bouncing I don't know.

On Thu, Feb 14, 2019 at 2:51 PM Sam Roberts wrote:

In particular, I'm getting a close_notify alert, followed by two
NewSessionTickets from the server.

The client then does SSL_read()/SSL_get_error(), which returns
SSL_ERROR_ZERO_RETURN, so I stop calling SSL_read().

However, that means that the NewSessionTickets aren't seen, so I don't
get the callbacks from SSL_CTX_sess_set_new_cb().

Should we be calling SSL_read() until some other return value occurs?

Note that no data is written by the server, and SSL_shutdown() is
called from inside the server's SSL_CB_HANDSHAKE_DONE info callback.
The node test suite is rife with this practice, where a connection is
established to prove it's possible, but then just ended without data
transfer. For TLS1.2 we get the session callbacks, but TLS1.3 we do
not.

This is the trace, edited to reduce SSL_trace verbosity:

server TLSWrap::SSLInfoCallback(where SSL_CB_HANDSHAKE_DONE, alert U) established? 0
state 0x21 TWST: SSLv3/TLS write session ticket TLSv1.3
server TLSWrap::DoShutdown() established? 1 ssl? 1
Sent Record
Inner Content Type = Alert (21)
Level=warning(1), description=close notify(0)
Sent Record
NewSessionTicket, Length=245
Sent Record
NewSessionTicket, Length=245

client TLSWrap::OnStreamRead(nread 566)
Received Record
Level=warning(1), description=close notify(0)
SSL_read() => 0
SSL_get_shutdown() => SSL_RECEIVED_SHUTDOWN
SSL_get_error() => SSL_ERROR_ZERO_RETURN

At this point, we consider the connection closed... not sure what else to do.

Thanks,
Sam

From matt at openssl.org  Fri Feb 15 17:16:27 2019
From: matt at openssl.org (Matt Caswell)
Date: Fri, 15 Feb 2019 17:16:27 +0000
Subject: when should client stop calling SSL_read to get TLS1.3 session tickets after the close_notify?
In-Reply-To: 
References: 
Message-ID: <8ddc09c8-ba1b-e70d-a563-e2daceab30a4@openssl.org>

Resending my answer, because I guess you didn't get it:

On 15/02/2019 17:11, Sam Roberts wrote:
> Resending, I just got banned for "bounces", though why gmail would be
> bouncing I don't know.
> 
> On Thu, Feb 14, 2019 at 2:51 PM Sam Roberts wrote:
> In particular, I'm getting a close_notify alert, followed by two
> NewSessionTickets from the server.

This sounds like a bug somewhere. Once you have close_notify you
shouldn't expect anything else. Is that an OpenSSL server?

Matt

> 
> The client then does SSL_read()/SSL_get_error(), which returns
> SSL_ERROR_ZERO_RETURN, so I stop calling SSL_read().
> 
> However, that means that the NewSessionTickets aren't seen, so I don't
> get the callbacks from SSL_CTX_sess_set_new_cb().
> 
> Should we be calling SSL_read() until some other return value occurs?
> 
> Note that no data is written by the server, and SSL_shutdown() is
> called from inside the server's SSL_CB_HANDSHAKE_DONE info callback.
> The node test suite is rife with this practice, where a connection is
> established to prove it's possible, but then just ended without data
> transfer. For TLS1.2 we get the session callbacks, but TLS1.3 we do
> not.
> 
> This is the trace, edited to reduce SSL_trace verbosity:
> 
> server TLSWrap::SSLInfoCallback(where SSL_CB_HANDSHAKE_DONE, alert U)
> established? 0
> state 0x21 TWST: SSLv3/TLS write session ticket TLSv1.3
> server TLSWrap::DoShutdown() established? 1 ssl?
1 > Sent Record > Inner Content Type = Alert (21) > Level=warning(1), description=close notify(0) > Sent Record > NewSessionTicket, Length=245 > Sent Record > NewSessionTicket, Length=245 > > > client TLSWrap::OnStreamRead(nread 566) > Received Record > Level=warning(1), description=close notify(0) > > SSL_read() => 0 > SSL_get_shutdown() => SSL_RECEIVED_SHUTDOWN > SSL_get_error() => SSL_ERROR_ZERO_RETURN > > At this point, we consider the connection closed... not sure what else to do. > > Thanks, > Sam > From rsalz at akamai.com Fri Feb 15 17:35:27 2019 From: rsalz at akamai.com (Salz, Rich) Date: Fri, 15 Feb 2019 17:35:27 +0000 Subject: [openssl-users] Comments on the recent OpenSSL 3.0.0 specification (Monday 2019-02-11) In-Reply-To: <87zhqxnxla.wl-levitte@openssl.org> References: <4d65dc06-064a-035d-815f-68f426600d82@wisemo.com> <87zhqxnxla.wl-levitte@openssl.org> Message-ID: <0741C0D6-A3A2-414D-A799-F1501CDC44BA@akamai.com> > (as for "possibly not the FIPS provider", that's exactly right. That one *will* be a loadable module and nothing else, and will only be validated as such... meaning that noone can stop you from hacking around and have it linked in statically, but that would make it invalid re FIPS) To be pedantic: this is true only *if you are using the OpenSSL validation.* If you are getting your own validation (such as using OpenSSL in an HSM device or whatnot), this is not true. > - If permitted by the CMVP rules, allow an option for > application provided (additional) entropy input to the RNG > from outside the module boundary. This is allowed, but it does not count toward the "minimum entropy" requirements. Anything after the first seeding falls into that category. From lgrosenthal at 2rosenthals.com Fri Feb 15 17:33:30 2019 From: lgrosenthal at 2rosenthals.com (Lewis Rosenthal) Date: Fri, 15 Feb 2019 12:33:30 -0500 Subject: openssl-users: DKIM, DMARC and all that jazz, and what it means to us In-Reply-To: <87va1lnlf5.wl-levitte@openssl.org> References: <87va1lnlf5.wl-levitte@openssl.org> Message-ID: <5C66F7EA.6000305@2rosenthals.com> Hi, Richard... I'm not going to place my reply after Jakob's, as his makes a number of excellent points, with many of which I wholeheartedly agree (I'm not big on DKIM and DMARC, myself). However, a few points specific to the case at hand, if I may: Richard Levitte wrote: > Hi all, > > It seem like DMARC, SPF, DKIM, or *something* is tripping us up quite > a bit. Earlier this afternoon (that's what it is in Sweden at least), > us postmasters got a deluge of bounce reports from mailman, basically > telling us that it got something like this: > > : host aspmx.l.google.com[74.125.140.26] said: > 550-5.7.1 This message does not have authentication information or fails to > pass 550-5.7.1 authentication checks. To best protect our users from spam, > the 550-5.7.1 message has been blocked. Please visit 550-5.7.1 > https://support.google.com/mail/answer/81126#authentication for more 550 > 5.7.1 information. 
f1si3266960wro.105 - gsmtp (in reply to end of DATA > command) > > There's very little fact of what actually triggered these bounces, but > they always come from Google, so we're guessing that they're becoming > increasingly aggressive in their checks of DKIM, SPF, ARC, who knows > (they don't seem to check DMARC, 'cause we do have one with p=none and > an address to sent DMARC reports to, and I'm hearing absolutely > nothing from Google, but I do hear from others) > The onus for getting the attention of the mail admins at Google needs to be on those who use their services for mail, and not on a third party. If this were a non-technical list (the high school soccer team schedule), I might not expect all of the list members to be able to discuss in technical terms with the Google mail admins what the problems may be, but people on this list should be able to get the relevant points across, citing RFC numbers and so forth. I often find myself assisting other admins (aren't we all on alternating sides of that coin?) when we have delivery problems. The biggest hurdle is getting to the right admin on the "problem" side, which is why the initial contact needs to come from one of their customers who has been affected. > So, to mitigate the problem, we've removed all extra decoration of the > messages, i.e. the list footer that's usually added and the subject > tag that indicates what list this is (I added the "openssl-users:" > that you see manually). > I strongly encourage you to re-think this. Everyone else on this list whose server has been properly configured to not trash legitimate messages must now be inconvenienced by the needs of a seemingly tone-deaf provider. (FWIW, I go through this with yahoo.com addresses all the time; the fault lies there, not in the list configuration - so long as the list configuration follows the applicable RFC guidelines.) > So IF you're filtering the messages to get list messages in a > different folder, based on the subject line, you will unfortunately > have to change it. If I may suggest something, it's to look at this: > > List-Id: > Yes, this can be done, but without the list ID in square brackets in the subject, what is liable to happen is that the entire string will be replaced along the line when thread subjects change (e.g., "blah-blah (was: blah)") and we would all have to remember to type "openssl-users:" in order to get "organized" subjects (yes, I know; filtering to a particular folder on the List-Id header should effectively "organize" list messages by corralling them, but that's not my point). Threading is liable to go at least slightly off the rails for some of us (depending upon mail client), and there are a host of potential side effects, all for what? The next time Google decides to change their filters, should list managers hop-to and make further changes? My own thinking is that if list messages cannot proliferate across Google's infrastructure, then those list members should find alternative means of subscribing. Undoubtedly, this is not the only list which would be so affected for them. 
-- Lewis ------------------------------------------------------------- Lewis G Rosenthal, CNA, CLP, CLE, CWTS, EA Rosenthal & Rosenthal, LLC www.2rosenthals.com visit my IT blog www.2rosenthals.net/wordpress ------------------------------------------------------------- From richard at nod.at Fri Feb 15 17:46:14 2019 From: richard at nod.at (Richard Weinberger) Date: Fri, 15 Feb 2019 18:46:14 +0100 Subject: openssl-users: DKIM, DMARC and all that jazz, and what it means to us In-Reply-To: <87va1lnlf5.wl-levitte@openssl.org> References: <87va1lnlf5.wl-levitte@openssl.org> Message-ID: <29819202.MIK5xt3Muo@blindfold> Am Freitag, 15. Februar 2019, 16:03:42 CET schrieb Richard Levitte: > So, to mitigate the problem, we've removed all extra decoration of the > messages, i.e. the list footer that's usually added and the subject > tag that indicates what list this is (I added the "openssl-users:" > that you see manually). Hmm, and as side effect you have forcefully re-enabled mail delivery for all mailinglist members? I disable mail delivery usually and read mails via gmail. Thanks, //richard From vieuxtech at gmail.com Fri Feb 15 19:03:33 2019 From: vieuxtech at gmail.com (Sam Roberts) Date: Fri, 15 Feb 2019 11:03:33 -0800 Subject: [openssl-project] OpenSSL 3.0 and FIPS Update In-Reply-To: <2c9fe037-9817-ba6f-1062-1d574264318a@openssl.org> References: <2c9fe037-9817-ba6f-1062-1d574264318a@openssl.org> Message-ID: I don't see a FIPS repo in https://github.com/openssl, or a FIPS branch in https://github.com/openssl/openssl/branches/all Has coding started? If so, is it visible anywhere? If not, where should we watch for when it does? The FIPS design doc looks like lots of thought has gone into it, which is very promising. I also looked around in github.com/openssl, even the OpenSSL_1_0_2-stable branch, and couldn't find where the openssl-fips-2.0.16.tar.gz is built from. Where is it located? Thanks, Sam From levitte at openssl.org Fri Feb 15 19:57:55 2019 From: levitte at openssl.org (Richard Levitte) Date: Fri, 15 Feb 2019 20:57:55 +0100 Subject: openssl-users: DKIM, DMARC and all that jazz, and what it means to us In-Reply-To: <29819202.MIK5xt3Muo@blindfold> References: <87va1lnlf5.wl-levitte@openssl.org> <29819202.MIK5xt3Muo@blindfold> Message-ID: <4AA1816B-B82F-4F63-98F4-542F054F6948@openssl.org> I did re-enable everyone that had [B] (for bounce) as reason for not receiving mail, but I may have gotten one or two that were disabled by choice. Sorry about that... Cheers Richard Richard Weinberger skrev: (15 februari 2019 18:46:14 CET) >Am Freitag, 15. Februar 2019, 16:03:42 CET schrieb Richard Levitte: >> So, to mitigate the problem, we've removed all extra decoration of >the >> messages, i.e. the list footer that's usually added and the subject >> tag that indicates what list this is (I added the "openssl-users:" >> that you see manually). > >Hmm, and as side effect you have forcefully re-enabled mail delivery >for all >mailinglist members? >I disable mail delivery usually and read mails via gmail. > >Thanks, >//richard -- Sent from my Android device with K-9 Mail. Please excuse my brevity. From openssl-users at dukhovni.org Fri Feb 15 20:32:33 2019 From: openssl-users at dukhovni.org (Viktor Dukhovni) Date: Fri, 15 Feb 2019 15:32:33 -0500 Subject: [openssl-users] when should client stop calling SSL_read to get TLS1.3 session tickets after the close_notify? 
In-Reply-To: References: Message-ID: > On Feb 15, 2019, at 12:11 PM, Sam Roberts wrote: > > In particular, I'm getting a close_notify alert, followed by two > NewSessionTickets from the server. > > The does SSL_read()/SSL_get_error(), it is returning > SSL_ERROR_ZERO_RETURN, so I stop calling SSL_read(). > > However, that means that the NewSessionTickets aren't seen, so I don't > get the callbacks from SSL_CTX_sess_set_new_cb(). > > Should we be calling SSL_read() until some other return value occurs? > > Note that no data is written by the server, and SSL_shutdown() is > called from inside the server's SSL_CB_HANDSHAKE_DONE info callback. > The node test suite is rife with this pracitce, where a connection is > established to prove its possible, but then just ended without data > transfer. For TLS1.2 we get the session callbacks, but TLS1.3 we do The code that's calling SSL_shutdown from the middle of the callback is too clever by half. It well and truly *deserves* to break. Which is not to say that everything that's deserved should necessarily happen, sometimes reality is more forgiving than just. Perhaps that should also the case here, but maybe not. OpenSSL could delay the actual shutdown until we're about to return from the SSL_accept() that invoked the callback. That is SSL_shutdown() called from callbacks could be deferred until a more favourable time. Not sure whether the complexity of doing this is warranted. Perhaps the all too clever code should get its just deserts after all. -- Viktor. From levitte at openssl.org Fri Feb 15 23:02:15 2019 From: levitte at openssl.org (Richard Levitte) Date: Sat, 16 Feb 2019 00:02:15 +0100 Subject: openssl-users: DKIM, DMARC and all that jazz, and what it means to us In-Reply-To: <5C66F7EA.6000305@2rosenthals.com> References: <87va1lnlf5.wl-levitte@openssl.org> <5C66F7EA.6000305@2rosenthals.com> Message-ID: <87k1i0odu0.wl-levitte@openssl.org> On Fri, 15 Feb 2019 18:33:30 +0100, Lewis Rosenthal wrote: > > Hi, Richard... > > I'm not going to place my reply after Jakob's, as his makes a number > of excellent points, with many of which I wholeheartedly agree (I'm > not big on DKIM and DMARC, myself). However, a few points specific to > the case at hand, if I may: Yes you may. Quite frankly, I'm frustrated with the situation, and it... well, kinda exploded today (getting a huge bunch of messages from mailman tell us that it had disabled this and that user, it turned out to be quite a lot of them...). Either way, I'll take any help I can get to get some clarity and a path forward. > Richard Levitte wrote: > > Hi all, > > > > It seem like DMARC, SPF, DKIM, or *something* is tripping us up quite > > a bit. Earlier this afternoon (that's what it is in Sweden at least), > > us postmasters got a deluge of bounce reports from mailman, basically > > telling us that it got something like this: > > > > : host aspmx.l.google.com[74.125.140.26] said: > > 550-5.7.1 This message does not have authentication information or fails to > > pass 550-5.7.1 authentication checks. To best protect our users from spam, > > the 550-5.7.1 message has been blocked. Please visit 550-5.7.1 > > https://support.google.com/mail/answer/81126#authentication for more 550 > > 5.7.1 information. 
f1si3266960wro.105 - gsmtp (in reply to end of DATA > > command) > > > > There's very little fact of what actually triggered these bounces, but > > they always come from Google, so we're guessing that they're becoming > > increasingly aggressive in their checks of DKIM, SPF, ARC, who knows > > (they don't seem to check DMARC, 'cause we do have one with p=none and > > an address to sent DMARC reports to, and I'm hearing absolutely > > nothing from Google, but I do hear from others) > > > > The onus for getting the attention of the mail admins at Google needs > to be on those who use their services for mail, and not on a third > party. If this were a non-technical list (the high school soccer team > schedule), I might not expect all of the list members to be able to > discuss in technical terms with the Google mail admins what the > problems may be, but people on this list should be able to get the > relevant points across, citing RFC numbers and so forth. > > I often find myself assisting other admins (aren't we all on > alternating sides of that coin?) when we have delivery problems. The > biggest hurdle is getting to the right admin on the "problem" side, > which is why the initial contact needs to come from one of their > customers who has been affected. > > > So, to mitigate the problem, we've removed all extra decoration of the > > messages, i.e. the list footer that's usually added and the subject > > tag that indicates what list this is (I added the "openssl-users:" > > that you see manually). > > > > I strongly encourage you to re-think this. Everyone else on this list > whose server has been properly configured to not trash legitimate > messages must now be inconvenienced by the needs of a seemingly > tone-deaf provider. (FWIW, I go through this with yahoo.com addresses > all the time; the fault lies there, not in the list configuration - so > long as the list configuration follows the applicable RFC guidelines.) Well, if we change the subject of a DKIM signed message, don't we break it? (I'm not sure how applicable that's with Google, as we received the same kind of bounce for message originating at openssl.org (there is a DMARC record with p=none, so shouldn't affect anything as far as I understand) and no DKIM signature... but still, when there is one... > > So IF you're filtering the messages to get list messages in a > > different folder, based on the subject line, you will unfortunately > > have to change it. If I may suggest something, it's to look at this: > > > > List-Id: > > > > Yes, this can be done, but without the list ID in square brackets in > the subject, what is liable to happen is that the entire string will > be replaced along the line when thread subjects change (e.g., > "blah-blah (was: blah)") and we would all have to remember to type > "openssl-users:" in order to get "organized" subjects (yes, I know; > filtering to a particular folder on the List-Id header should > effectively "organize" list messages by corralling them, but that's > not my point). Threading is liable to go at least slightly off the > rails for some of us (depending upon mail client), and there are a > host of potential side effects, all for what? The next time Google > decides to change their filters, should list managers hop-to and make > further changes? > > My own thinking is that if list messages cannot proliferate across > Google's infrastructure, then those list members should find > alternative means of subscribing. 
Undoubtedly, this is not the only > list which would be so affected for them. Well, Google users is a *large* part of our subscribers, and some of them are Google Apps users, possibly not of their own choice. I believe that Google users aren't quite as easy to dismiss as, say, hotmail back when that provider tumbled down the reputation shute. Cheers, Richard -- Richard Levitte levitte at openssl.org OpenSSL Project http://www.openssl.org/~levitte/ From levitte at openssl.org Fri Feb 15 23:11:48 2019 From: levitte at openssl.org (Richard Levitte) Date: Sat, 16 Feb 2019 00:11:48 +0100 Subject: [openssl-project] OpenSSL 3.0 and FIPS Update In-Reply-To: References: <2c9fe037-9817-ba6f-1062-1d574264318a@openssl.org> Message-ID: <87imxkode3.wl-levitte@openssl.org> On Fri, 15 Feb 2019 20:03:33 +0100, Sam Roberts wrote: > > I don't see a FIPS repo in https://github.com/openssl, or a FIPS > branch in https://github.com/openssl/openssl/branches/all > > Has coding started? If so, is it visible anywhere? If not, where > should we watch for when it does? Coding has started to appear on github since the beginning of this week, and there's a related github project that we should attach related issue and PRs to: https://github.com/openssl/openssl/projects/2 That project should hold a collected view of everything that happens when it does. As for the FIPS module itself, it will not appear immediately. We need to code the foundation, i.e. the new framework, first. > The FIPS design doc looks like lots of thought has gone into it, > which is very promising. > > I also looked around in github.com/openssl, even the > OpenSSL_1_0_2-stable branch, and couldn't find where the > openssl-fips-2.0.16.tar.gz is built from. Where is it located? There are branches called OpenSSL-fips-*, that's where you want to look. We will NOT use that as a model for the 3.0.0 FIPS module, though. Cheers, Richard -- Richard Levitte levitte at openssl.org OpenSSL Project http://www.openssl.org/~levitte/ From matt at openssl.org Fri Feb 15 23:25:35 2019 From: matt at openssl.org (Matt Caswell) Date: Fri, 15 Feb 2019 23:25:35 +0000 Subject: [openssl-users] when should client stop calling SSL_read to get TLS1.3 session tickets after the close_notify? In-Reply-To: References: Message-ID: <9c5b2083-4f39-929e-2695-4d27c06ce2ff@openssl.org> On 15/02/2019 20:32, Viktor Dukhovni wrote: >> On Feb 15, 2019, at 12:11 PM, Sam Roberts wrote: >> >> In particular, I'm getting a close_notify alert, followed by two >> NewSessionTickets from the server. >> >> The does SSL_read()/SSL_get_error(), it is returning >> SSL_ERROR_ZERO_RETURN, so I stop calling SSL_read(). >> >> However, that means that the NewSessionTickets aren't seen, so I don't >> get the callbacks from SSL_CTX_sess_set_new_cb(). >> >> Should we be calling SSL_read() until some other return value occurs? >> >> Note that no data is written by the server, and SSL_shutdown() is >> called from inside the server's SSL_CB_HANDSHAKE_DONE info callback. >> The node test suite is rife with this pracitce, where a connection is >> established to prove its possible, but then just ended without data >> transfer. For TLS1.2 we get the session callbacks, but TLS1.3 we do > > The code that's calling SSL_shutdown from the middle of the callback > is too clever by half. It well and truly *deserves* to break. > > Which is not to say that everything that's deserved should necessarily > happen, sometimes reality is more forgiving than just. Perhaps that > should also the case here, but maybe not. 
> > OpenSSL could delay the actual shutdown until we're about to return > from the SSL_accept() that invoked the callback. That is SSL_shutdown() > called from callbacks could be deferred until a more favourable time. > > Not sure whether the complexity of doing this is warranted. Perhaps > the all too clever code should get its just deserts after all. > Oh - right. I missed this detail. Calling SSL_shutdown() from inside a callback is definitely a bad idea. Don't do that. Matt From matt at openssl.org Fri Feb 15 23:36:31 2019 From: matt at openssl.org (Matt Caswell) Date: Fri, 15 Feb 2019 23:36:31 +0000 Subject: [openssl-project] OpenSSL 3.0 and FIPS Update In-Reply-To: References: <2c9fe037-9817-ba6f-1062-1d574264318a@openssl.org> Message-ID: <9540ffd6-5d13-c870-e3d3-10cd13b9be4e@openssl.org> On 15/02/2019 19:03, Sam Roberts wrote: > I don't see a FIPS repo in https://github.com/openssl, or a FIPS > branch in https://github.com/openssl/openssl/branches/all >> Has coding started? If so, is it visible anywhere? If not, where > should we watch for when it does? All coding will be taking place in the master branch. The 3.0.0 release will bring the FIPS module into mainline OpenSSL. > > The FIPS design doc looks like lots of thought has gone into it, which > is very promising. > > I also looked around in github.com/openssl, even the > OpenSSL_1_0_2-stable branch, and couldn't find where the > openssl-fips-2.0.16.tar.gz is built from. Where is it located? You can checkout the OpenSSL-fips-2_0_16 tag, which is also on the OpenSSL-fips-2_0-stable branch. Matt From vieuxtech at gmail.com Sat Feb 16 05:04:01 2019 From: vieuxtech at gmail.com (Sam Roberts) Date: Fri, 15 Feb 2019 21:04:01 -0800 Subject: [openssl-users] when should client stop calling SSL_read to get TLS1.3 session tickets after the close_notify? In-Reply-To: <9c5b2083-4f39-929e-2695-4d27c06ce2ff@openssl.org> References: <9c5b2083-4f39-929e-2695-4d27c06ce2ff@openssl.org> Message-ID: On Fri, Feb 15, 2019 at 3:35 PM Matt Caswell wrote: > On 15/02/2019 20:32, Viktor Dukhovni wrote: > >> On Feb 15, 2019, at 12:11 PM, Sam Roberts wrote: > > OpenSSL could delay the actual shutdown until we're about to return > > from the SSL_accept() that invoked the callback. That is SSL_shutdown() > > called from callbacks could be deferred until a more favourable time. In this case, it's an SSL_read() that invoked the callback, though probably not relevant. And actually, no deferal would be necessary, I looks to me that the info callback for handshake done is coming too early. Particularly, the writing of the NewSessionTickets into the BIO should occur before the info callback. I'll check later, but I'm pretty sure with TLS1.2 the session tickets are written and then the HANDSHAKE_DONE info callback occurs, so the timing here is incompatible with TLS1.2. Though the deferal mechanism might be there already. It looks like doing an SSL_write(); SSL_shutdown() in the info callback works fine, on the client side new ticket callbacks are fired by the SSL_read() before the SSL_read() sees the close_notify and returns 0. I haven't looked at the packet/API trace for this, because the tests all pass for this case, but I do see that in the javascript called from the HANDSHAKE_DONE callback, that calling .end("x") (write + shutdown) causes the client to get tickets, but .end() causes it to miss them because they are after close_notify. > Oh - right. I missed this detail. Calling SSL_shutdown() from inside a callback > is definitely a bad idea. Don't do that. 
The info callback, or ANY callbacks? What about the new ticket callback, for example? Is it expected that no SSL_ calls are made in ANY callbacks? This code has been working a fair number of years now, I can move it (and review every other callback where we callout to javascript code) to a model where callbacks just save data, set global state, and return into SSL, and we check after returning from SSL_read() what has happened, and callback into javascript then, but its a bit of work, and this fringe case of TLS servers that shutdown immediately after handshake is not likely to be that important (at least in the short term, though if our users scream about the API change we'll have to decide whether we can enable TLS1.3 on stable branches, or if this difference counts as semver-major for code that didn't explicitly opt-in to 1.3). Cheers, Sam From matt at openssl.org Sun Feb 17 13:26:39 2019 From: matt at openssl.org (Matt Caswell) Date: Sun, 17 Feb 2019 13:26:39 +0000 Subject: [openssl-users] when should client stop calling SSL_read to get TLS1.3 session tickets after the close_notify? In-Reply-To: References: <9c5b2083-4f39-929e-2695-4d27c06ce2ff@openssl.org> Message-ID: On 16/02/2019 05:04, Sam Roberts wrote: > On Fri, Feb 15, 2019 at 3:35 PM Matt Caswell wrote: >> On 15/02/2019 20:32, Viktor Dukhovni wrote: >>>> On Feb 15, 2019, at 12:11 PM, Sam Roberts wrote: >>> OpenSSL could delay the actual shutdown until we're about to return >>> from the SSL_accept() that invoked the callback. That is SSL_shutdown() >>> called from callbacks could be deferred until a more favourable time. > > In this case, it's an SSL_read() that invoked the callback, though > probably not relevant. > > And actually, no deferal would be necessary, I looks to me that the > info callback for handshake done is coming too early. Particularly, > the writing of the NewSessionTickets into the BIO should occur before > the info callback. I'll check later, but I'm pretty sure with TLS1.2 > the session tickets are written and then the HANDSHAKE_DONE info > callback occurs, so the timing here is incompatible with TLS1.2. In TLSv1.2 New session tickets are written as part of the handshake. In TLSv1.3 session tickets are sent after the handshake has completed. It sounds to me like the info callback is doing the right thing. > > Though the deferal mechanism might be there already. It looks like > doing an SSL_write(); SSL_shutdown() in the info callback works fine, > on the client side new ticket callbacks are fired by the SSL_read() > before the SSL_read() sees the close_notify and returns 0. I haven't > looked at the packet/API trace for this, because the tests all pass > for this case, but I do see that in the javascript called from the > HANDSHAKE_DONE callback, that calling .end("x") (write + shutdown) > causes the client to get tickets, but .end() causes it to miss them > because they are after close_notify. > >> Oh - right. I missed this detail. Calling SSL_shutdown() from inside a callback >> is definitely a bad idea. Don't do that. > > The info callback, or ANY callbacks? What about the new ticket > callback, for example? Is it expected that no SSL_ calls are made in > ANY callbacks? I wouldn't go that far. Callbacks occur during the processing of an IO operation. Callbacks are generally expected to be small and quick. I wouldn't call anything that might invoke a new IO operation from within a callback, so SSL_read, SSL_write, SSL_do_handshake, SSL_shutdown etc. 
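As a rough illustration of the deferral pattern (a sketch only -- error handling, non-blocking I/O and per-connection state are omitted; a real server would keep the flag per connection, e.g. via SSL_set_ex_data()):

#include <openssl/ssl.h>

/* Sketch: record the intent in the callback, act on it afterwards. */
static int shutdown_requested = 0;

static void info_cb(const SSL *ssl, int where, int ret)
{
    if (where & SSL_CB_HANDSHAKE_DONE) {
        /* Do NOT call SSL_shutdown() here -- just remember the intent. */
        shutdown_requested = 1;
    }
}

static void serve(SSL *ssl)
{
    unsigned char buf[4096];
    int n;

    SSL_set_info_callback(ssl, info_cb);

    if (SSL_accept(ssl) <= 0)
        return;

    /* Back at the top level, outside any callback, it is safe to act. */
    if (shutdown_requested) {
        SSL_shutdown(ssl);
        return;
    }

    while ((n = SSL_read(ssl, buf, sizeof(buf))) > 0) {
        /* handle application data ... */
    }
}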
Matt From jb-openssl at wisemo.com Mon Feb 18 21:51:09 2019 From: jb-openssl at wisemo.com (Jakob Bohm) Date: Mon, 18 Feb 2019 22:51:09 +0100 Subject: openssl-users: DKIM, DMARC and all that jazz, and what it means to us In-Reply-To: <87k1i0odu0.wl-levitte@openssl.org> References: <87va1lnlf5.wl-levitte@openssl.org> <5C66F7EA.6000305@2rosenthals.com> <87k1i0odu0.wl-levitte@openssl.org> Message-ID: <298b82ce-f3ff-d486-6317-ccfbbe3a144f@wisemo.com> On 16/02/2019 00:02, Richard Levitte wrote: > On Fri, 15 Feb 2019 18:33:30 +0100, Lewis Rosenthal wrote: >> ... >> >> I strongly encourage you to re-think this. Everyone else on this list >> whose server has been properly configured to not trash legitimate >> messages must now be inconvenienced by the needs of a seemingly >> tone-deaf provider. (FWIW, I go through this with yahoo.com addresses >> all the time; the fault lies there, not in the list configuration - so >> long as the list configuration follows the applicable RFC guidelines.) > Well, if we change the subject of a DKIM signed message, don't we > break it? (I'm not sure how applicable that's with Google, as we > received the same kind of bounce for message originating at > openssl.org (there is a DMARC record with p=none, so shouldn't affect > anything as far as I understand) and no DKIM signature... but still, > when there is one... Indeed it does break it (unless the signature unusually doesn't cover the Subject).?? According to the RFC, a DKIM signature can choose an almost arbitrary subset of headers to cover (including covering the absence of a header type), plus a choice between signing the entire body or only the first N lines (for arbitrary N).? That "first N lines" option is how to create a DKIM signature that allows appending a list footer. As for p=none, this is what my rule 5 covered, just because a DMARC record says p=none doesn't remove the requirement for messages to be correct, only lowers the default error handling to a warning (I receive daily mails listing which IP addresses spoofed our domains by sending out mails with the not doing so, as is required by the DMARC RFC, and I did so when I had p=none). Having a DMARC record without DKIM signatures (including DKIM signing mails relayed with openssl.org as From: address) is either an RFC violation or very close to one.? So I would suggest setting that up.? There are probably generic plugins for Postfix already, but check the DMARC and DKIM RFC rules for how to handle the various special address combinations that occur in mailing list traffic (such as having Sender and From with different domains).? Because the plugins may not have been tested for that. Enjoy Jakob -- Jakob Bohm, CIO, Partner, WiseMo A/S. https://www.wisemo.com Transformervej 29, 2860 S?borg, Denmark. Direct +45 31 13 16 10 This public discussion message is non-binding and may contain errors. WiseMo - Remote Service Management for PCs, Phones and Embedded From jb-openssl at wisemo.com Mon Feb 18 22:17:12 2019 From: jb-openssl at wisemo.com (Jakob Bohm) Date: Mon, 18 Feb 2019 23:17:12 +0100 Subject: [openssl-users] when should client stop calling SSL_read to get TLS1.3 session tickets after the close_notify? 
In-Reply-To: References: <9c5b2083-4f39-929e-2695-4d27c06ce2ff@openssl.org> Message-ID: <54afc2db-0233-a1da-ed22-3f27f4e7452c@wisemo.com> On 17/02/2019 14:26, Matt Caswell wrote: > On 16/02/2019 05:04, Sam Roberts wrote: >> On Fri, Feb 15, 2019 at 3:35 PM Matt Caswell wrote: >>> On 15/02/2019 20:32, Viktor Dukhovni wrote: >>>>> On Feb 15, 2019, at 12:11 PM, Sam Roberts wrote: >>>> OpenSSL could delay the actual shutdown until we're about to return >>>> from the SSL_accept() that invoked the callback. That is SSL_shutdown() >>>> called from callbacks could be deferred until a more favourable time. >> In this case, it's an SSL_read() that invoked the callback, though >> probably not relevant. >> >> And actually, no deferal would be necessary, I looks to me that the >> info callback for handshake done is coming too early. Particularly, >> the writing of the NewSessionTickets into the BIO should occur before >> the info callback. I'll check later, but I'm pretty sure with TLS1.2 >> the session tickets are written and then the HANDSHAKE_DONE info >> callback occurs, so the timing here is incompatible with TLS1.2. > In TLSv1.2 New session tickets are written as part of the handshake. In TLSv1.3 > session tickets are sent after the handshake has completed. It sounds to me like > the info callback is doing the right thing. That seems to be a major theme in many reported OpenSSL 1.1.1 problems.? It seems that you guys have gotten too hung up on how the TLS 1.3 RFC uses words like handshake differently than the TLS 1.2 RFC, rather than by the higher level semantics of what would be considered the API visible meta-operations. From an API user perspective, the messages that are exchanged right after the RFC-handshake in order to complete the connection set up should be considered part of the API-handshake. This made little difference for the "change cipher spec" TLS 1.2 record, but makes a lot more difference for TLS 1.3 where various things like certificate checks and session tickets fall into that gray area. Any opportunity to send data earlier than that should be handled in a way that doesn't break the API for applications that aren't doing so. >> Though the deferal mechanism might be there already. It looks like >> doing an SSL_write(); SSL_shutdown() in the info callback works fine, >> on the client side new ticket callbacks are fired by the SSL_read() >> before the SSL_read() sees the close_notify and returns 0. I haven't >> looked at the packet/API trace for this, because the tests all pass >> for this case, but I do see that in the javascript called from the >> HANDSHAKE_DONE callback, that calling .end("x") (write + shutdown) >> causes the client to get tickets, but .end() causes it to miss them >> because they are after close_notify. >> >>> Oh - right. I missed this detail. Calling SSL_shutdown() from inside a callback >>> is definitely a bad idea. Don't do that. >> The info callback, or ANY callbacks? What about the new ticket >> callback, for example? Is it expected that no SSL_ calls are made in >> ANY callbacks? > I wouldn't go that far. Callbacks occur during the processing of an IO > operation. Callbacks are generally expected to be small and quick. I wouldn't > call anything that might invoke a new IO operation from within a callback, so > SSL_read, SSL_write, SSL_do_handshake, SSL_shutdown etc. > Callbacks are often an opportunity for applications to detect violations of security policy.? 
It thus makes a lot of sense for callbacks to request that the connection is ended as soon as allowed by the risk of creating an attack side channel. Other OpenSSL callbacks represent the one place to do certain complex tasks, such as choosing among different certificates, checking against outside (networked!) revocation systems etc. Enjoy Jakob -- Jakob Bohm, CIO, Partner, WiseMo A/S. https://www.wisemo.com Transformervej 29, 2860 S?borg, Denmark. Direct +45 31 13 16 10 This public discussion message is non-binding and may contain errors. WiseMo - Remote Service Management for PCs, Phones and Embedded From d3ck0r at gmail.com Mon Feb 18 23:48:46 2019 From: d3ck0r at gmail.com (J Decker) Date: Mon, 18 Feb 2019 15:48:46 -0800 Subject: [openssl-users] when should client stop calling SSL_read to get TLS1.3 session tickets after the close_notify? In-Reply-To: <54afc2db-0233-a1da-ed22-3f27f4e7452c@wisemo.com> References: <9c5b2083-4f39-929e-2695-4d27c06ce2ff@openssl.org> <54afc2db-0233-a1da-ed22-3f27f4e7452c@wisemo.com> Message-ID: On Mon, Feb 18, 2019 at 2:18 PM Jakob Bohm via openssl-users < openssl-users at openssl.org> wrote: > On 17/02/2019 14:26, Matt Caswell wrote: > > On 16/02/2019 05:04, Sam Roberts wrote: > >> On Fri, Feb 15, 2019 at 3:35 PM Matt Caswell wrote: > >>> On 15/02/2019 20:32, Viktor Dukhovni wrote: > >>>>> On Feb 15, 2019, at 12:11 PM, Sam Roberts > wrote: > >>>> OpenSSL could delay the actual shutdown until we're about to return > >>>> from the SSL_accept() that invoked the callback. That is > SSL_shutdown() > >>>> called from callbacks could be deferred until a more favourable time. > >> In this case, it's an SSL_read() that invoked the callback, though > >> probably not relevant. > >> > >> And actually, no deferal would be necessary, I looks to me that the > >> info callback for handshake done is coming too early. Particularly, > >> the writing of the NewSessionTickets into the BIO should occur before > >> the info callback. I'll check later, but I'm pretty sure with TLS1.2 > >> the session tickets are written and then the HANDSHAKE_DONE info > >> callback occurs, so the timing here is incompatible with TLS1.2. > > In TLSv1.2 New session tickets are written as part of the handshake. In > TLSv1.3 > > session tickets are sent after the handshake has completed. It sounds to > me like > > the info callback is doing the right thing. > That seems to be a major theme in many reported OpenSSL 1.1.1 > problems. It seems that you guys have gotten too hung up on how > the TLS 1.3 RFC uses words like handshake differently than the > TLS 1.2 RFC, rather than by the higher level semantics of what > would be considered the API visible meta-operations. > > From an API user perspective, the messages that are exchanged > right after the RFC-handshake in order to complete the connection > set up should be considered part of the API-handshake. > > This made little difference for the "change cipher spec" TLS 1.2 > record, but makes a lot more difference for TLS 1.3 where various > things like certificate checks and session tickets fall into that > gray area. > > Any opportunity to send data earlier than that should be handled > in a way that doesn't break the API for applications that aren't > doing so. > >> Though the deferal mechanism might be there already. It looks like > >> doing an SSL_write(); SSL_shutdown() in the info callback works fine, > >> on the client side new ticket callbacks are fired by the SSL_read() > >> before the SSL_read() sees the close_notify and returns 0. 
I haven't > >> looked at the packet/API trace for this, because the tests all pass > >> for this case, but I do see that in the javascript called from the > >> HANDSHAKE_DONE callback, that calling .end("x") (write + shutdown) > >> causes the client to get tickets, but .end() causes it to miss them > >> because they are after close_notify. > >> > >>> Oh - right. I missed this detail. Calling SSL_shutdown() from inside a > callback > >>> is definitely a bad idea. Don't do that. > >> The info callback, or ANY callbacks? What about the new ticket > >> callback, for example? Is it expected that no SSL_ calls are made in > >> ANY callbacks? > > I wouldn't go that far. Callbacks occur during the processing of an IO > > operation. Callbacks are generally expected to be small and quick. I > wouldn't > > call anything that might invoke a new IO operation from within a > callback, so > > SSL_read, SSL_write, SSL_do_handshake, SSL_shutdown etc. > > > Callbacks are often an opportunity for applications to detect > violations of security policy. It thus makes a lot of sense for > callbacks to request that the connection is ended as soon as > allowed by the risk of creating an attack side channel. > > Other OpenSSL callbacks represent the one place to do certain > complex tasks, such as choosing among different certificates, > checking against outside (networked!) revocation systems etc.> > > All of that makes me question; so in migrating to 1.3, does the basic flow change? > https://github.com/d3x0r/SACK/blob/master/src/netlib/ssl_layer.c#L178 (handshake... hmm that's long tedious debug optioned code) summary is pretty short... if (!SSL_is_init_finished(ses->ssl)) {r = SSL_do_handshake(ses->ssl); if( r == 0 )/*error/incomplete */ else /* handle errors; usually WANT_READ; read for any control data pending, and send data*/ } else return 2/1; > until is_init_finished which is handshake() returns 2 on the first is_init_finished... and 1 after that; so the first callback does certificate verification... > then kinda... > onread() { /* recv got data */ > handshake(); > -1 ; close > 0 - return wait for more data > 2 - verify handshaken certs > 1 - continue as normal. > read data (if any) (post to app) > read if any control data/send control data to remote > } > and I could optionally? register verification callbacks and remove the == 2 check inline? > Enjoy > > Jakob > -- > Jakob Bohm, CIO, Partner, WiseMo A/S. https://www.wisemo.com > Transformervej 29, 2860 S?borg, Denmark. Direct +45 31 13 16 10 > This public discussion message is non-binding and may contain errors. > WiseMo - Remote Service Management for PCs, Phones and Embedded > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jb-openssl at wisemo.com Tue Feb 19 00:19:39 2019 From: jb-openssl at wisemo.com (Jakob Bohm) Date: Tue, 19 Feb 2019 01:19:39 +0100 Subject: [openssl-users] Comments on the recent OpenSSL 3.0.0 specification In-Reply-To: <0741C0D6-A3A2-414D-A799-F1501CDC44BA@akamai.com> References: <4d65dc06-064a-035d-815f-68f426600d82@wisemo.com> <87zhqxnxla.wl-levitte@openssl.org> <0741C0D6-A3A2-414D-A799-F1501CDC44BA@akamai.com> Message-ID: <1dec59be-fc7e-d98e-d858-ef6717e25250@wisemo.com> (Resend from correct account) On 15/02/2019 18:35, Salz, Rich via openssl-users wrote: >> (as for "possibly not the FIPS provider", that's exactly right. That > one *will* be a loadable module and nothing else, and will only be > validated as such... 
meaning that noone can stop you from hacking > around and have it linked in statically, but that would make it > invalid re FIPS) > To be pedantic: this is true only *if you are using the OpenSSL > validation.* If you are getting your own validation (such as using > OpenSSL in an HSM device or whatnot), this is not true. > > - If permitted by the CMVP rules, allow an option for > > application provided (additional) entropy input to the RNG > > from outside the module boundary. > This is allowed, but it does not count toward the "minimum entropy" > requirements. Anything after the first seeding falls into that category. > Thanks, the document wording made it look like the OpenSSL 3 FIPS RNG would only accept the system entropy source. Enjoy Jakob -- Jakob Bohm, CIO, Partner, WiseMo A/S. https://www.wisemo.com Transformervej 29, 2860 S?borg, Denmark. Direct +45 31 13 16 10 This public discussion message is non-binding and may contain errors. WiseMo - Remote Service Management for PCs, Phones and Embedded From guru at unixarea.de Tue Feb 19 08:57:11 2019 From: guru at unixarea.de (Matthias Apitz) Date: Tue, 19 Feb 2019 09:57:11 +0100 Subject: understand 'openssl dhparms ....' Message-ID: <20190219085711.GA10746@sh4-5.1blu.de> Hello, Some years ago (in 2012) I wrote an OpenSSL server, loosely based on the example sources 'openssl-examples-20020110' which nowadays still exist in https://github.com/smbutton/DataCommProject/tree/master/openssl-examples-20020110/openssl-examples-20020110 There was also some guiding available about how to create the necessary key material, which goes more or less like this: -------------------------------------------------------------------------------- $ mkdir newca $ cd newca $ cp /usr/local/openssl/misc/CA.sh . $ ./CA.sh -newca will create a new CA. Remember the passphrase as you will need it to sign certificates. $ cp demoCA/cacert.pem ../root.pem Second step $ ./CA.sh -newreq will create a certificate and a certification request. Set the passphrase to 'password' as this is hard-coded in the examples' source code. It is important to set the [Common Name] to 'localhost'. Third step $ ./CA.sh -sign will sign your newly created certificate. Enter the password for your CA which you have defined in step 1. Fourth step $ cat newreq.pem newkey.pem newcert.pem > ../localhost.pem $ cd .. $ ln -s localhost.pem server.pem $ ln -s localhost.pem client.pem Maybe you also want to issue $ openssl dhparam 1024 -2 -out dh1024.pem -outform PEM in order to update the DH parameters. -------------------------------------------------------------------------------- What I (today) do not understand is the last step about creating the file 'dh1024.pem' :-( Two questions: 1. Why this has no input file? Shouldn't it have on, and which? The man page says, it would read stdin, but it doesn't do so. 2. When I re-run the examples today the above command does not even produces a file 'dh1024.pem', but writes the result to stdout: openssl dhparam 1024 -2 -outform PEM -out dh1024.pem .... (lot of random output) ... -----BEGIN DH PARAMETERS----- MIGHAoGBAIc6JqvNBSGwdBBzIJQAuq+TG+ttNNYZcUv/p3/nloWGwxeCKqWt2M4x z6WsA3tVbykRw80A0Rja2y7IHZ9dGJc/guxrxUpNketeSddFzGicz6mrEafSdurd ephztXEmQ63XP4ULPlcaOXzYk6GLUXFYKVYuIHnpdcJLLRMFWZ0bAgEC -----END DH PARAMETERS----- How this is supposed to work? Thanks matthias -- Matthias Apitz, ? 
guru at unixarea.de, http://www.unixarea.de/ +49-176-38902045 Public GnuPG key: http://www.unixarea.de/key.pub From matt at openssl.org Tue Feb 19 10:47:44 2019 From: matt at openssl.org (Matt Caswell) Date: Tue, 19 Feb 2019 10:47:44 +0000 Subject: understand 'openssl dhparms ....' In-Reply-To: <20190219085711.GA10746@sh4-5.1blu.de> References: <20190219085711.GA10746@sh4-5.1blu.de> Message-ID: On 19/02/2019 08:57, Matthias Apitz wrote: > > Two questions: > > 1. Why this has no input file? Shouldn't it have on, and which? The man > page says, it would read stdin, but it doesn't do so. The man page in question is here: https://www.openssl.org/docs/man1.1.1/man1/dhparam.html I draw your attention to the description of the "numbits" value (i.e. 1024 in your command line): "This option specifies that a parameter set should be generated of size numbits. It must be the last option. If this option is present then the input file is ignored and parameters are generated instead. If this option is not present but a generator (-2 or -5) is present, parameters are generated with a default length of 2048 bits." So by specifying 1024 you are asking to *generate* new parameters of size 1024 bits and so the input file is ignored. > > 2. When I re-run the examples today the above command does not even > produces a file 'dh1024.pem', but writes the result to stdout: > > openssl dhparam 1024 -2 -outform PEM -out dh1024.pem > .... (lot of random output) ... > -----BEGIN DH PARAMETERS----- > MIGHAoGBAIc6JqvNBSGwdBBzIJQAuq+TG+ttNNYZcUv/p3/nloWGwxeCKqWt2M4x > z6WsA3tVbykRw80A0Rja2y7IHZ9dGJc/guxrxUpNketeSddFzGicz6mrEafSdurd > ephztXEmQ63XP4ULPlcaOXzYk6GLUXFYKVYuIHnpdcJLLRMFWZ0bAgEC > -----END DH PARAMETERS----- > > How this is supposed to work? Thanks The options are the wrong way around the numbits value is supposed to be last - so actually the rest of your options are being ignored. The command line should be: openssl dhparam -2 -outform PEM -out dh1024.pem 1024 It seems that in OpenSSL 1.1.0 we got stricter about the ordering of the command line parameters. We probably really ought to error out if there are trailing options that we haven't processed. Note that 1024 is these days considered too short. At a *minimum* you should be using at least 2048. I would also draw your attention to the SSL_CTX_set_dh_auto() and SSL_set_dh_auto() macros that your server can use (available since OpenSSL 1.1.0). These are sadly undocumented (grrrrr) but the use is straight forward: SSL_CTX_set_dh_auto(ctx, 1); or SSL_set_dh_auto(s, 1); By making these calls then your server will use automatic built-in DH parameters and there is no need to supply your own explicitly. Matt From levitte at openssl.org Tue Feb 19 10:51:36 2019 From: levitte at openssl.org (Richard Levitte) Date: Tue, 19 Feb 2019 11:51:36 +0100 Subject: openssl-users: DKIM, DMARC and all that jazz, and what it means to us In-Reply-To: <298b82ce-f3ff-d486-6317-ccfbbe3a144f@wisemo.com> References: <87va1lnlf5.wl-levitte@openssl.org> <5C66F7EA.6000305@2rosenthals.com> <87k1i0odu0.wl-levitte@openssl.org> <298b82ce-f3ff-d486-6317-ccfbbe3a144f@wisemo.com> Message-ID: <87mumsm4p3.wl-levitte@openssl.org> On Mon, 18 Feb 2019 22:51:09 +0100, Jakob Bohm wrote: > Having a DMARC record without DKIM signatures (including DKIM > signing mails relayed with openssl.org as From: address) is either > an RFC violation or very close to one. I suspected that. We're not quite ready for full blown DKIM yet, so I'll remove the DMARC record for now. Thank you. 
(I know that you have sent other recommendations, but haven't read them yet... be assured that I will give them consideration) Cheers, Richard -- Richard Levitte levitte at openssl.org OpenSSL Project http://www.openssl.org/~levitte/ From guru at unixarea.de Tue Feb 19 12:04:00 2019 From: guru at unixarea.de (Matthias Apitz) Date: Tue, 19 Feb 2019 13:04:00 +0100 Subject: understand 'openssl dhparms ....' In-Reply-To: References: <20190219085711.GA10746@sh4-5.1blu.de> Message-ID: <20190219120400.GA20430@sh4-5.1blu.de> El d?a Tuesday, February 19, 2019 a las 10:47:44AM +0000, Matt Caswell escribi?: > > > On 19/02/2019 08:57, Matthias Apitz wrote: > > > > Two questions: > > > > 1. Why this has no input file? Shouldn't it have on, and which? The man > > page says, it would read stdin, but it doesn't do so. > > The man page in question is here: > > https://www.openssl.org/docs/man1.1.1/man1/dhparam.html > > I draw your attention to the description of the "numbits" value (i.e. 1024 in > your command line): > > ... Matt, thanks for the detailed explanation. matthias -- Matthias Apitz, ? guru at unixarea.de, http://www.unixarea.de/ +49-176-38902045 Public GnuPG key: http://www.unixarea.de/key.pub October, 7 -- The GDR was different: Peace instead of Bundeswehr and wars, Druschba instead of Nazis, to live instead of to survive. From tniessen at tnie.de Tue Feb 19 13:04:57 2019 From: tniessen at tnie.de (=?UTF-8?Q?Tobias_Nie=c3=9fen?=) Date: Tue, 19 Feb 2019 14:04:57 +0100 Subject: Allow specifying the tag after AAD in CCM mode Message-ID: <7494a711-9480-7b49-6e82-6af8144bea2d@tnie.de> Hello everyone, in GCM and OCB mode, it is possible to set the authentication tag after supplying AAD, but the CCM implementation does not allow that. This isn't a problem for most applications, but in Node.js, we expose similar APIs to interact with AEAD ciphers and these differences between cipher modes within OpenSSL propagate to our users. Unless there is a reason for the current behavior, I would prefer to change it. I opened a PR about this five months ago (https://github.com/openssl/openssl/pull/7243). It has received zero attention and I am hoping the mailing list is a good way to change that. Kind regards, Tobias From matt at openssl.org Tue Feb 19 16:10:20 2019 From: matt at openssl.org (Matt Caswell) Date: Tue, 19 Feb 2019 16:10:20 +0000 Subject: Forthcoming OpenSSL Releases Message-ID: <9b5740f6-0f40-0adf-3b60-beda7707edb3@openssl.org> The OpenSSL project team would like to announce the forthcoming release of OpenSSL versions 1.1.1b and 1.0.2r. There will be no new 1.1.0 release at this time. These releases will be made available on 26th February 2019 between approximately 1300-1700 UTC. OpenSSL 1.0.2r is a security-fix release. The highest severity issue fixed in this release is MODERATE: https://www.openssl.org/policies/secpolicy.html#moderate OpenSSL 1.1.1b is a bug-fix release. Yours The OpenSSL Project Team -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: OpenPGP digital signature URL: From blaufish.public.email at gmail.com Tue Feb 19 16:32:35 2019 From: blaufish.public.email at gmail.com (Peter Magnusson) Date: Tue, 19 Feb 2019 17:32:35 +0100 Subject: Allow specifying the tag after AAD in CCM mode In-Reply-To: <7494a711-9480-7b49-6e82-6af8144bea2d@tnie.de> References: <7494a711-9480-7b49-6e82-6af8144bea2d@tnie.de> Message-ID: I've commented on the PR, mostly about not understanding the commit message RFC-references and indentation error. Overall the PR looks good to me, but I'd like someone who is more familiar with implementation have a look at it. Best Regards Eine Kleine Blau Fisch On Tue, Feb 19, 2019 at 2:10 PM Tobias Nie?en wrote: > > Hello everyone, > > in GCM and OCB mode, it is possible to set the authentication tag after > supplying AAD, but the CCM implementation does not allow that. This > isn't a problem for most applications, but in Node.js, we expose similar > APIs to interact with AEAD ciphers and these differences between cipher > modes within OpenSSL propagate to our users. Unless there is a reason > for the current behavior, I would prefer to change it. > > I opened a PR about this five months ago > (https://github.com/openssl/openssl/pull/7243). It has received zero > attention and I am hoping the mailing list is a good way to change that. > > Kind regards, > Tobias > From walt at safelogic.com Tue Feb 19 19:49:29 2019 From: walt at safelogic.com (Walter Paley) Date: Tue, 19 Feb 2019 11:49:29 -0800 Subject: [openssl-users] [openssl-project] OpenSSL 3.0 and FIPS Update Message-ID: <0192D980-B962-4889-BEF0-5D48D335C9A2@safelogic.com> Thanks for the speculation on validated platforms, Mark. Please be careful about using this resource as a medium for self-promotion. - Walt Walter Paley Walt at SafeLogic.com SafeLogic - FIPS 140-2 Simplified From jgh at wizmail.org Wed Feb 20 20:55:49 2019 From: jgh at wizmail.org (Jeremy Harris) Date: Wed, 20 Feb 2019 20:55:49 +0000 Subject: implicit connect Message-ID: Hi, Is the use of SSL_write() to do an implicit SSL_connect() expected to save any packets? With 1.1.1a (Fedora 29) I don't see it doing so; the (TLS1.3) Change Cipher Spec, Finished is sent in a separate TCP segment to the data written. If not, might it do some time in the future? -- Thanks, Jeremy From matt at openssl.org Wed Feb 20 21:15:55 2019 From: matt at openssl.org (Matt Caswell) Date: Wed, 20 Feb 2019 21:15:55 +0000 Subject: implicit connect In-Reply-To: References: Message-ID: On 20/02/2019 20:55, Jeremy Harris wrote: > Hi, > > Is the use of SSL_write() to do an implicit SSL_connect() > expected to save any packets? With 1.1.1a (Fedora 29) I > don't see it doing so; the (TLS1.3) Change Cipher Spec, > Finished is sent in a separate TCP segment to the data > written. No. > > If not, might it do some time in the future? > There are no plans at the moment to do this - but never say never. If anyone wanted to submit a PR for this it would be looked at. 
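For reference, the two call patterns being compared -- relying on the implicit handshake versus driving it explicitly -- look roughly like this (a sketch only; error handling omitted, and send_request() is a made-up helper, not an OpenSSL API). Either way the final handshake flight and the first application data currently go out as separate writes:

#include <openssl/ssl.h>

static int send_request(SSL *ssl, const char *req, int len, int implicit)
{
    if (implicit) {
        /* Implicit: the first SSL_write() performs the handshake itself. */
        SSL_set_connect_state(ssl);
        return SSL_write(ssl, req, len);
    }

    /* Explicit: complete the handshake first, then write. */
    if (SSL_connect(ssl) != 1)
        return -1;
    return SSL_write(ssl, req, len);
}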
Matt From beldmit at gmail.com Thu Feb 21 15:02:38 2019 From: beldmit at gmail.com (Dmitry Belyavsky) Date: Thu, 21 Feb 2019 18:02:38 +0300 Subject: [openssl-project] OpenSSL 3.0 and FIPS Update In-Reply-To: <2c9fe037-9817-ba6f-1062-1d574264318a@openssl.org> References: <2c9fe037-9817-ba6f-1062-1d574264318a@openssl.org> Message-ID: Dear Matt On Wed, Feb 13, 2019 at 9:30 PM Matt Caswell wrote: > Please see my blog post for an OpenSSL 3.0 and FIPS Update: > > https://www.openssl.org/blog/blog/2019/02/13/FIPS-update/ After reading the proposed architecture description, I have some questions that are very important for the developers of non-US certified openssl-based products. 1. Will it still be available to implement custom RAND_methods via the new providers API? 2. Can we do something with a bunch of hard-linked non-extendable lists of internal NIDs? For example, providing GOST algorithms always requires a patch to extend 3-5 internal lists. If it could be done dynamically, it will be great. 3. Do you have plans to make some callback structures created by providers? I mean such structures as SSL key exchange/authentication methods, X.509 extensions etc. Thank you very much! -- SY, Dmitry Belyavsky -------------- next part -------------- An HTML attachment was scrubbed... URL: From matt at openssl.org Thu Feb 21 16:20:53 2019 From: matt at openssl.org (Matt Caswell) Date: Thu, 21 Feb 2019 16:20:53 +0000 Subject: [openssl-project] OpenSSL 3.0 and FIPS Update In-Reply-To: References: <2c9fe037-9817-ba6f-1062-1d574264318a@openssl.org> Message-ID: <9c953cd0-5439-ec7c-9a25-b124f72f3328@openssl.org> On 21/02/2019 15:02, Dmitry Belyavsky wrote: > Dear Matt > > > > On Wed, Feb 13, 2019 at 9:30 PM Matt Caswell > wrote: > > Please see my blog post for an OpenSSL 3.0 and FIPS Update: > > https://www.openssl.org/blog/blog/2019/02/13/FIPS-update/ > > > After reading the proposed architecture description, I have some questions that > are very important for the developers of non-US certified openssl-based products. Hi Dmitry, Answers inserted. > > 1. Will it still be available to implement custom RAND_methods via the new > providers API? Yes, I expect this to be possible. > 2. Can we do something with a bunch of hard-linked non-extendable lists of > internal?NIDs?? > For example, providing GOST algorithms always requires a patch to extend 3-5 > internal lists. > If it could be done dynamically, it will be great. That's not currently something we've considered, but I agree it would be great to fix that. Perhaps you could create a github issue identifying the specific areas we should be looking at and then we can take a look at the feasibility of fixing it. > 3. Do you have plans to make some callback structures created by providers?? > I mean such structures as SSL key exchange/authentication methods, X.509 > extensions etc. There aren't any plans to do that at the moment. There's nothing in the provider design that would prevent us from doing so at some point in the future. Matt From prithiraj.das at gmail.com Fri Feb 22 10:27:43 2019 From: prithiraj.das at gmail.com (prithiraj das) Date: Fri, 22 Feb 2019 15:57:43 +0530 Subject: OpenSSL hash memory leak Message-ID: Hi All, Using OpenSSL 1.0.2g, I have written a code to generate the hash of a file in an embeddded device having linux OS and low memory capacity and the files are generally of size 44 MB or more. The first time or even the second time on some occasions, the hash of any file is successfully generated. 
On the 3rd or 4th time (possibly due to lack of memory/memory leak), the system reboots before the hash can be generated. After restart, the same thing happens when the previous steps are repeated. The stats below shows the memory usage before and after computing the hash. *root at at91sam9m10g45ek:~# free* * total used free shared buff/cache available* *Mem: 252180 13272 223048 280 15860 230924* *Swap: 0 0 0* *After computing hash :-* *root at at91sam9m10g45ek:~# free* * total used free shared buff/cache available* *Mem: 252180 13308 179308 280 59564 230868* *Swap: 0 0 0* Buff/cache increases by almost 44MB (same as file size) everytime I generate the hash and free decreases. I believe the file is being loaded into buffer and not being freed. I am using the below code to compute the message digest. This code is part of a function ComputeHash and the file pointer here is fph. * EVP_add_digest(EVP_sha256());* * md = EVP_get_digestbyname("sha256");* * if(!md) {* * printf("Unknown message digest \n");* * exit(1);* * }* * printf("Message digest algorithm successfully loaded\n");* * mdctx = EVP_MD_CTX_create();* * EVP_DigestInit_ex(mdctx, md, NULL);* * // Reading data to array of unsigned chars * * long long int bytes_read = 0;* * printf("FILE size of the file to be hashed is %ld",filesize); * * //reading image file in chunks below and fph is the file pointer to the 44MB file* * while ((bytes_read = fread (message_data, 1, BUFFER_SIZE, fph)) != 0)* * EVP_DigestUpdate(mdctx, message_data, bytes_read);* * EVP_DigestFinal_ex(mdctx, hash_data.md_value, &hash_data.md_len);* * printf("\n%d\n",EVP_MD_CTX_size(mdctx));* * printf("\n%d\n",EVP_MD_CTX_type(mdctx));* * hash_data.md_type=EVP_MD_CTX_type(mdctx);* * EVP_MD_CTX_destroy(mdctx);* * //fclose(fp);* * printf("Generated Digest is:\n ");* * for(i = 0; i < hash_data.md_len; i++)* * printf("%02x", hash_data.md_value[i]);* * printf("\n");* * EVP_cleanup();* * return hash_data;* In the the code below, I have done fclose(fp) *verify_hash=ComputeHash(fp,size1);* *fclose(fp);* I believe that instead of loading the entire file all at once I am reading the 44MB file in chunks and computing the hash using the piece of code below: (fph is the file pointer) *while ((bytes_read = fread (message_data, 1, BUFFER_SIZE, fph)) != 0)* * EVP_DigestUpdate(mdctx, message_data, bytes_read);* Where I am going wrong? How can I free the buff/cache after computation of message digest? Please suggest ways to tackle this. Thanks and Regards, Prithiraj -------------- next part -------------- An HTML attachment was scrubbed... URL: From jisoza at gmail.com Fri Feb 22 10:28:33 2019 From: jisoza at gmail.com (Juan Isoza) Date: Fri, 22 Feb 2019 11:28:33 +0100 Subject: creating Linux "portable" x64 binary Message-ID: Hello, I want create for one of my application a Linux binary which run on all current linux system running x86_64 processor. 
by example, I uses -static-libgcc -static-libstdc++ when I link my app , because I'm not sure found recent version of this lib I also use -lrt to prevent search some tims function added on recent GLIBC With openssl 1.1.0, I had no problem related to openssl With openssl 1.1.1, there is somes modern function searched at compile on recent library So, I just run these command sed -i -e 's/__ELF__/__ELF_and_sure_modern__/g' ./crypto/rand/rand_unix.c sed -i -e 's/__GLIBC_PREREQ/__GLIBC__not_use_PREREQ/g' ./crypto/rand/rand_unix.c sed -i -e 's/__GLIBC_PREREQ/__GLIBC__not_use_PREREQ/g' ./crypto/getenv.c sed -i -e 's/__GLIBC_PREREQ/__GLIBC__not_use_PREREQ/g' ./crypto/crypto.c sed -i -e 's/__GLIBC_PREREQ/__GLIBC__not_use_PREREQ/g' ./crypto/uid.c with this modification, I'm sure that checking of modern API fail, and I use previous api (like if I compile on oldest linux). I suggest offering an option to not trying using these modern GLICBC_PREREQ , or pehaps uses dl (when openssl is compiled to uses dl) regards! -------------- next part -------------- An HTML attachment was scrubbed... URL: From paul at zil.li Fri Feb 22 13:08:00 2019 From: paul at zil.li (Paul Zillmann) Date: Fri, 22 Feb 2019 14:08:00 +0100 Subject: creating Linux "portable" x64 binary In-Reply-To: References: Message-ID: <0cd5d46f-9668-befa-6e10-48ca91f8d329@zil.li> Hello Juan, unfortunately is it not possible to static link the glibc. You can try static link another libc like musl-libc [1]. Should there be any problems compiling OpenSSL with musl-libc, take a look at the packages from Alpine Linux [2], they are using musl as their standard libc. You should get portable POSIX Linux ELF64 executables out of this process. 1: https://www.musl-libc.org/how.html 2: https://git.alpinelinux.org/aports/tree/main/openssl/APKBUILD - Paul Am 22.02.19 um 11:28 schrieb Juan Isoza: > > > Hello, > I want create for one of my application a Linux binary which run on > all current linux system running x86_64 processor. > > by example, I uses -static-libgcc -static-libstdc++ when I link my app > , because I'm not sure found recent version of this lib > I also use -lrt to prevent search some tims function added on recent GLIBC > > With openssl 1.1.0, I had no problem related to openssl > > With openssl 1.1.1, there is somes modern function searched at compile > on recent library > > So, I just run these command > sed -i -e 's/__ELF__/__ELF_and_sure_modern__/g' ./crypto/rand/rand_unix.c > sed -i -e 's/__GLIBC_PREREQ/__GLIBC__not_use_PREREQ/g' > ./crypto/rand/rand_unix.c > sed -i -e 's/__GLIBC_PREREQ/__GLIBC__not_use_PREREQ/g' ./crypto/getenv.c > sed -i -e 's/__GLIBC_PREREQ/__GLIBC__not_use_PREREQ/g' ./crypto/crypto.c > sed -i -e 's/__GLIBC_PREREQ/__GLIBC__not_use_PREREQ/g' ./crypto/uid.c > > with this modification, I'm sure that checking of modern API fail, and > I use previous api (like if I compile on oldest linux). > > I suggest offering an option to not trying using these modern > GLICBC_PREREQ , or pehaps uses dl (when openssl is compiled to uses dl) > > regards! From Maxime.Torrelli at conduent.com Fri Feb 22 13:28:20 2019 From: Maxime.Torrelli at conduent.com (Torrelli, Maxime) Date: Fri, 22 Feb 2019 13:28:20 +0000 Subject: OpenSSL 1.1.1a for WINCE700 Message-ID: Hello, I am trying to compile OpenSSL 1.1.1a for WinCE700 whereas until now, I am failing. When I look into the different files used by PERL to create the makefile, it seems WINCE is still supported. Am I right on this point ? 
If yes, when I look into the generated makefile, I see in the CNF_CPPFLAGS : -D"OPENSSL_SYS_WIN32". I was expecting to find OPENSSL_SYS_WINCE instead. Finally, my compilation fails because STD_INPUT_HANDLE is not defined at line 2696 of apps\apps.c. I do not understand why I have this error since this line is after a #if defined(OPENSSL_SYS_WINDOWS). I do not see where OPENSSL_SYS_WINDOWS could be defined. My configuration : OSVERSION=WCE700 TARGETCPU=ARMV4I PLATFORM=VC-CE WCECOMPAT=C:\GIT\repos\wcecompat\wcecompat Any help would be much appreciated ! Greetings, Maxime TORRELLI Embedded Software Engineer Conduent Conduent Business Solutions (France) 1 rue Claude Chappe - BP 345 07503 Guilherand Granges Cedex -------------- next part -------------- An HTML attachment was scrubbed... URL: From brian.paquin at yale.edu Fri Feb 22 17:18:57 2019 From: brian.paquin at yale.edu (Paquin, Brian) Date: Fri, 22 Feb 2019 17:18:57 +0000 Subject: Build error on CentOS 7.6 Message-ID: <0FC2FC7F-124A-4AAE-AC2A-301126E564BB@yale.edu> Hello, I?ve been given a CentOS VM and started by installing OpenSSL 1.02q. wget https://www.openssl.org/source/openssl-1.0.2q.tar.gz tar -xvf openssl-1.0.2q.tar.gz cd openssl-1.0.2q ./config --prefix=/usr/local/openssl make depend make make test During the ?make test? I get: make[2]: Entering directory `/home/paquinbw/openssl-1.0.2q/test' ( :; LIBDEPS="${LIBDEPS:--L.. -lssl -L.. -lcrypto -ldl}"; LDCMD="${LDCMD:-gcc}"; LDFLAGS="${LDFLAGS:--DOPENSSL_THREADS -D_REENTRANT -DDSO_DLFCN -DHAVE_DLFCN_H -Wa,--noexecstack -m64 -DL_ENDIAN -O3 -Wall -DOPENSSL_IA32_SSE2 -DOPENSSL_BN_ASM_MONT -DOPENSSL_BN_ASM_MONT5 -DOPENSSL_BN_ASM_GF2m -DRC4_ASM -DSHA1_ASM -DSHA256_ASM -DSHA512_ASM -DMD5_ASM -DAES_ASM -DVPAES_ASM -DBSAES_ASM -DWHIRLPOOL_ASM -DGHASH_ASM -DECP_NISTZ256_ASM}"; LIBPATH=`for x in $LIBDEPS; do echo $x; done | sed -e 's/^ *-L//;t' -e d | uniq`; LIBPATH=`echo $LIBPATH | sed -e 's/ /:/g'`; LD_LIBRARY_PATH=$LIBPATH:$LD_LIBRARY_PATH ${LDCMD} ${LDFLAGS} -o ${APPNAME:=bntest} bntest.o ${LIBDEPS} ) /bin/ld: cannot find -lssl collect2: error: ld returned 1 exit status make[2]: *** [link_app.] Error 1 make[2]: Leaving directory `/home/paquinbw/openssl-1.0.2q/test' make[1]: *** [bntest] Error 2 make[1]: Leaving directory `/home/paquinbw/openssl-1.0.2q/test' make: *** [tests] Error 2 [paquinbw at pathclinapps2 openssl-1.0.2q]$ sudo yum list installed | grep openssl openssl.x86_64 1:1.0.2k-16.el7 @base openssl-libs.x86_64 1:1.0.2k-16.el7 @base openssl098e.x86_64 0.9.8e-29.el7.centos.3 @anaconda Searches online suggest running ?yum install openssl-devel?. But I don?t want another openssl on the system just to get a 4th one installed! Do I need to remove the existing openssl first? Or is there another package I need? Thank you, Brian From hkario at redhat.com Fri Feb 22 18:22:09 2019 From: hkario at redhat.com (Hubert Kario) Date: Fri, 22 Feb 2019 19:22:09 +0100 Subject: creating Linux "portable" x64 binary In-Reply-To: References: Message-ID: <3442744.lCVLNQ1In5@pintsize.usersys.redhat.com> On Friday, 22 February 2019 11:28:33 CET Juan Isoza wrote: > Hello, > I want create for one of my application a Linux binary which run on all > current linux system running x86_64 processor. 
> > by example, I uses -static-libgcc -static-libstdc++ when I link my app , > because I'm not sure found recent version of this lib > I also use -lrt to prevent search some tims function added on recent GLIBC > > With openssl 1.1.0, I had no problem related to openssl > > With openssl 1.1.1, there is somes modern function searched at compile on > recent library > > So, I just run these command > sed -i -e 's/__ELF__/__ELF_and_sure_modern__/g' ./crypto/rand/rand_unix.c > sed -i -e 's/__GLIBC_PREREQ/__GLIBC__not_use_PREREQ/g' > ./crypto/rand/rand_unix.c > sed -i -e 's/__GLIBC_PREREQ/__GLIBC__not_use_PREREQ/g' ./crypto/getenv.c > sed -i -e 's/__GLIBC_PREREQ/__GLIBC__not_use_PREREQ/g' ./crypto/crypto.c > sed -i -e 's/__GLIBC_PREREQ/__GLIBC__not_use_PREREQ/g' ./crypto/uid.c > > with this modification, I'm sure that checking of modern API fail, and I > use previous api (like if I compile on oldest linux). > > I suggest offering an option to not trying using these modern GLICBC_PREREQ > , or pehaps uses dl (when openssl is compiled to uses dl) compile it on oldest system that you wish to target glibc is backwards compatible so new versions of it will work with binaries compiled with old versions forward compatibility (compiling with new glibc and running with old library) is not supported, and even if it may appear to work initially, it's not something that is generally supported and in practice very hard to support and may lead to hard to detect vulnerabilities. -- Regards, Hubert Kario Senior Quality Engineer, QE BaseOS Security team Web: www.cz.redhat.com Red Hat Czech s.r.o., Purky?ova 115, 612 00 Brno, Czech Republic -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: This is a digitally signed message part. URL: From openssl at jordan.maileater.net Fri Feb 22 18:47:58 2019 From: openssl at jordan.maileater.net (Jordan Brown) Date: Fri, 22 Feb 2019 18:47:58 +0000 Subject: OpenSSL hash memory leak In-Reply-To: References: Message-ID: <0101016916891c9d-7282f4a4-c0fe-45b7-84d8-6402a6c7aa04-000000@us-west-2.amazonses.com> The most obvious question is "how are you allocating your message_data buffer?".? You don't show that. On 2/22/2019 2:27 AM, prithiraj das wrote: > > Hi All, > > Using OpenSSL 1.0.2g, I have written a code to generate the hash of a > file in an embeddded device having linux OS and low memory capacity > and the files are generally of size 44 MB or more. The first time or > even the second time on some occasions, the hash of any file is > successfully generated. On the 3rd or 4th time (possibly due to lack > of memory/memory leak), the system reboots before the hash can be > generated.? After restart, the same thing happens when the previous > steps are repeated. > The stats below shows the memory usage before and after computing the > hash.? > > *root at at91sam9m10g45ek:~# free* > *? ? ? ? ? ? ? ? ? ? ? total? ? ? ? used? ? ? ? ? free? ? ? ? ?shared? > ? buff/cache? ?available* > *Mem:? ? ? ? ?252180? ? ? ?13272? ? ? 223048? ? ? ? ?280? ? ? ? ? > 15860? ? ? ? ? 230924* > *Swap:? ? ? ? ? ? ? ? 0? ? ? ? ? ?0? ? ? ? ? ? ? ?0* > * > * > *After computing hash :-* > *root at at91sam9m10g45ek:~# free* > *? ? ? ? ? ? ? ? ? ? ? total? ? ? ? used? ? ? ? ? free? ? ? ?shared? ? > buff/cache? ?available* > *Mem:? ? ? ? ?252180? ? ? ?13308? ? ? 179308? ? ? ? 280? ? ? ?59564? ? > ? ? ? ?230868* > *Swap:? ? ? ? ? ? ?0? ? ? ? ? ? ? ? 0? ? ? ? ? ? ? 
0* > > Buff/cache increases by almost 44MB (same as file size) everytime I > generate the hash and free decreases. I believe the file is being > loaded into buffer and not being freed.? > > I am using the below code to compute the message digest. This code is > part of a function ComputeHash and the file pointer here is fph. > > ??*?EVP_add_digest(EVP_sha256());* > *?md = EVP_get_digestbyname("sha256");* > *?* > *?if(!md) {* > *? ? ? ? printf("Unknown message digest \n");* > *? ? ? ? exit(1);* > *?}* > *?printf("Message digest algorithm successfully loaded\n");* > *?mdctx = EVP_MD_CTX_create();* > *?EVP_DigestInit_ex(mdctx, md, NULL);* > * > * > *?// Reading data to array of unsigned chars* > *?long long int bytes_read = 0;* > * > * > *?printf("FILE size of the file to be hashed is %ld",filesize);* > * > * > *?//reading image file in chunks below and fph is the file pointer to > the 44MB file* > *?while ((bytes_read = fread (message_data, 1, BUFFER_SIZE, fph)) != 0)* > *?EVP_DigestUpdate(mdctx, message_data, bytes_read);* > *?EVP_DigestFinal_ex(mdctx, hash_data.md_value, &hash_data.md_len);* > *?printf("\n%d\n",EVP_MD_CTX_size(mdctx));* > *?printf("\n%d\n",EVP_MD_CTX_type(mdctx));* > *?hash_data.md_type=EVP_MD_CTX_type(mdctx);* > *?EVP_MD_CTX_destroy(mdctx);* > *?//fclose(fp);* > *?printf("Generated Digest is:\n ");* > *?for(i = 0; i < hash_data.md_len; i++)* > *? ? ? ? printf("%02x", hash_data.md_value[i]);* > *?printf("\n");* > *?EVP_cleanup();* > *? ? ? ? ?return hash_data;* > * > * > In the the code below, I have done fclose(fp) > *verify_hash=ComputeHash(fp,size1);* > *fclose(fp);* > * > * > I believe that instead of loading the entire file all at once I am > reading the 44MB file in chunks and computing the hash using?the piece > of code below: (fph is the file pointer) > *while ((bytes_read = fread (message_data, 1, BUFFER_SIZE, fph)) != 0)* > *?EVP_DigestUpdate(mdctx, message_data, bytes_read);* > * > * > Where I am going wrong? How can I free the buff/cache after > computation of message digest?? Please suggest ways to tackle this. > > > Thanks and Regards, > Prithiraj > -- Jordan Brown, Oracle ZFS Storage Appliance, Oracle Solaris -------------- next part -------------- An HTML attachment was scrubbed... URL: From levitte at openssl.org Sat Feb 23 05:47:10 2019 From: levitte at openssl.org (Richard Levitte) Date: Sat, 23 Feb 2019 06:47:10 +0100 Subject: [openssl-project] OpenSSL 3.0 and FIPS Update In-Reply-To: <9c953cd0-5439-ec7c-9a25-b124f72f3328@openssl.org> References: <2c9fe037-9817-ba6f-1062-1d574264318a@openssl.org> <9c953cd0-5439-ec7c-9a25-b124f72f3328@openssl.org> Message-ID: <87zhqnkqe9.wl-levitte@openssl.org> On Thu, 21 Feb 2019 17:20:53 +0100, Matt Caswell wrote: > On 21/02/2019 15:02, Dmitry Belyavsky wrote: > > Dear Matt > > > > > > > > On Wed, Feb 13, 2019 at 9:30 PM Matt Caswell > > wrote: > > > > Please see my blog post for an OpenSSL 3.0 and FIPS Update: > > > > https://www.openssl.org/blog/blog/2019/02/13/FIPS-update/ > > > > > > After reading the proposed architecture description, I have some questions that > > are very important for the developers of non-US certified openssl-based products. > > Hi Dmitry, > > Answers inserted. > > > > > 1. Will it still be available to implement custom RAND_methods via the new > > providers API? > > Yes, I expect this to be possible. This is something I'd like to see explored further. 
OpenSSL 3.0 will target the EVP API primarly, and while we do talk about entropy with regards to FIPS, I haven't quite grasped if that would be a provider internal thing or if entropy is supposed to come from "elsewhere". Since our RAND API is separate from the EVP API, I'm unsure how we plan on getting custom RAND_methods from providers. Please note that we can add RAND to the list of provider backed APIs, and given a foundation that we're currently building, it may even be quite easy. However, no one has said explicitly that we would do so. The other option is, of course, to move the RAND API to EVP somehow, but that will probably be more challenging. > > 2. Can we do something with a bunch of hard-linked non-extendable lists of > > internal?NIDs?? > > For example, providing GOST algorithms always requires a patch to extend 3-5 > > internal lists. > > If it could be done dynamically, it will be great. > > That's not currently something we've considered, but I agree it > would be great to fix that. Perhaps you could create a github issue > identifying the specific areas we should be looking at and then we > can take a look at the feasibility of fixing it. Let me address this in a different way... Are you very attached to those NIDs and them actually being NIDs? Or would you be just as happy to have the implementations identified by name? You see, providers will offer algorithm implementation by algorithm name (oh, and properties), not by number. > > 3. Do you have plans to make some callback structures created by providers?? > > I mean such structures as SSL key exchange/authentication methods, X.509 > > extensions etc. > > There aren't any plans to do that at the moment. There's nothing in the provider > design that would prevent us from doing so at some point in the future. Cheers, Richard -- Richard Levitte levitte at openssl.org OpenSSL Project http://www.openssl.org/~levitte/ From prithiraj.das at gmail.com Sat Feb 23 17:25:09 2019 From: prithiraj.das at gmail.com (prithiraj das) Date: Sat, 23 Feb 2019 22:55:09 +0530 Subject: OpenSSL hash memory leak In-Reply-To: <0101016916891c9d-7282f4a4-c0fe-45b7-84d8-6402a6c7aa04-000000@us-west-2.amazonses.com> References: <0101016916891c9d-7282f4a4-c0fe-45b7-84d8-6402a6c7aa04-000000@us-west-2.amazonses.com> Message-ID: Hi, This is how I have initialized my variables:- EVP_MD_CTX *mdctx; const EVP_MD *md; int i; HASH hash_data; unsigned char message_data[BUFFER_SIZE]; BUFFER_SIZE has been defined as 131072 and HASH is my hash structure defined to hold the message digest, message digest length and message digest type On Sat, 23 Feb 2019 at 00:17, Jordan Brown wrote: > The most obvious question is "how are you allocating your message_data > buffer?". You don't show that. > > On 2/22/2019 2:27 AM, prithiraj das wrote: > > > Hi All, > > Using OpenSSL 1.0.2g, I have written a code to generate the hash of a file > in an embeddded device having linux OS and low memory capacity and the > files are generally of size 44 MB or more. The first time or even the > second time on some occasions, the hash of any file is successfully > generated. On the 3rd or 4th time (possibly due to lack of memory/memory > leak), the system reboots before the hash can be generated. After restart, > the same thing happens when the previous steps are repeated. > The stats below shows the memory usage before and after computing the > hash. 
> > *root at at91sam9m10g45ek:~# free* > * total used free shared > buff/cache available* > *Mem: 252180 13272 223048 280 15860 > 230924* > *Swap: 0 0 0* > > *After computing hash :-* > *root at at91sam9m10g45ek:~# free* > * total used free shared > buff/cache available* > *Mem: 252180 13308 179308 280 59564 > 230868* > *Swap: 0 0 0* > > Buff/cache increases by almost 44MB (same as file size) everytime I > generate the hash and free decreases. I believe the file is being loaded > into buffer and not being freed. > > I am using the below code to compute the message digest. This code is part > of a function ComputeHash and the file pointer here is fph. > > * EVP_add_digest(EVP_sha256());* > * md = EVP_get_digestbyname("sha256");* > > * if(!md) {* > * printf("Unknown message digest \n");* > * exit(1);* > * }* > * printf("Message digest algorithm successfully loaded\n");* > * mdctx = EVP_MD_CTX_create();* > * EVP_DigestInit_ex(mdctx, md, NULL);* > > * // Reading data to array of unsigned chars * > * long long int bytes_read = 0;* > > * printf("FILE size of the file to be hashed is %ld",filesize); * > > * //reading image file in chunks below and fph is the file pointer to the > 44MB file* > * while ((bytes_read = fread (message_data, 1, BUFFER_SIZE, fph)) != 0)* > * EVP_DigestUpdate(mdctx, message_data, bytes_read);* > * EVP_DigestFinal_ex(mdctx, hash_data.md_value, &hash_data.md_len);* > * printf("\n%d\n",EVP_MD_CTX_size(mdctx));* > * printf("\n%d\n",EVP_MD_CTX_type(mdctx));* > * hash_data.md_type=EVP_MD_CTX_type(mdctx);* > * EVP_MD_CTX_destroy(mdctx);* > * //fclose(fp);* > * printf("Generated Digest is:\n ");* > * for(i = 0; i < hash_data.md_len; i++)* > * printf("%02x", hash_data.md_value[i]);* > * printf("\n");* > * EVP_cleanup();* > * return hash_data;* > > In the the code below, I have done fclose(fp) > *verify_hash=ComputeHash(fp,size1);* > *fclose(fp);* > > I believe that instead of loading the entire file all at once I am reading > the 44MB file in chunks and computing the hash using the piece of code > below: (fph is the file pointer) > *while ((bytes_read = fread (message_data, 1, BUFFER_SIZE, fph)) != 0)* > * EVP_DigestUpdate(mdctx, message_data, bytes_read);* > > Where I am going wrong? How can I free the buff/cache after computation of > message digest? Please suggest ways to tackle this. > > > Thanks and Regards, > Prithiraj > > > -- > Jordan Brown, Oracle ZFS Storage Appliance, Oracle Solaris > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From beldmit at gmail.com Sat Feb 23 20:47:00 2019 From: beldmit at gmail.com (Dmitry Belyavsky) Date: Sat, 23 Feb 2019 23:47:00 +0300 Subject: [openssl-project] OpenSSL 3.0 and FIPS Update In-Reply-To: <87zhqnkqe9.wl-levitte@openssl.org> References: <2c9fe037-9817-ba6f-1062-1d574264318a@openssl.org> <9c953cd0-5439-ec7c-9a25-b124f72f3328@openssl.org> <87zhqnkqe9.wl-levitte@openssl.org> Message-ID: Dear Richard, On Sat, Feb 23, 2019 at 8:47 AM Richard Levitte wrote: > On Thu, 21 Feb 2019 17:20:53 +0100, > Matt Caswell wrote: > > On 21/02/2019 15:02, Dmitry Belyavsky wrote: > > > Dear Matt > > > > > > > > > > > > On Wed, Feb 13, 2019 at 9:30 PM Matt Caswell > > > wrote: > > > > > > Please see my blog post for an OpenSSL 3.0 and FIPS Update: > > > > > > https://www.openssl.org/blog/blog/2019/02/13/FIPS-update/ > > > > > > > > > After reading the proposed architecture description, I have some > questions that > > > are very important for the developers of non-US certified > openssl-based products. 
> > > > Hi Dmitry, > > > > Answers inserted. > > > > > > > > 1. Will it still be available to implement custom RAND_methods via the > new > > > providers API? > > > > Yes, I expect this to be possible. > > This is something I'd like to see explored further. OpenSSL 3.0 will > target the EVP API primarly, and while we do talk about entropy with > regards to FIPS, I haven't quite grasped if that would be a provider > internal thing or if entropy is supposed to come from "elsewhere". > > Since our RAND API is separate from the EVP API, I'm unsure how we > plan on getting custom RAND_methods from providers. > > Please note that we can add RAND to the list of provider backed APIs, > and given a foundation that we're currently building, it may even be > quite easy. However, no one has said explicitly that we would do so. > > The other option is, of course, to move the RAND API to EVP somehow, > but that will probably be more challenging. > I do not think it is really necessary to move RAND to EVP. Current architecture suits our requirements, but if the possibility to overwrite the RAND_METHOD is removed, it will cause problems for us. > > > 2. Can we do something with a bunch of hard-linked non-extendable > lists of > > > internal NIDs? > > > For example, providing GOST algorithms always requires a patch to > extend 3-5 > > > internal lists. > > > If it could be done dynamically, it will be great. > > > > That's not currently something we've considered, but I agree it > > would be great to fix that. Perhaps you could create a github issue > > identifying the specific areas we should be looking at and then we > > can take a look at the feasibility of fixing it. > > Let me address this in a different way... > > Are you very attached to those NIDs and them actually being NIDs? Or > would you be just as happy to have the implementations identified by > name? You see, providers will offer algorithm implementation by > algorithm name (oh, and properties), not by number. > The command grep -ril gost . | grep -v objects in the crypto/ folder enlists the following files: ./cms/cms_sd.c ./asn1/asn_mime.c ./x509/x509type.c ./pkcs12/p12_mutl.c ./evp/evp_pbe.c ./pkcs7/pk7_smime.c Namely the functions CMS_add_standard_smimecap, PKCS7_sign_add_signer, asn1_write_micalg, X509_certificate_type and array builtin_pbe[] refer to gost-related NIDs. The pkcs12_gen_mac function has a gost-specific processing. It was much more simple to add gost-specific processing here than to add a callback everywhere, though it breaks encapsulation I dream about. Also, we have some patches adding Russian-specific X.509 extensions, and I think for now it's better to register the necessary NIDs and provide pull requests to add their processing. The situation in libssl is much more difficult, because of more monolithic architecture there. > > > > 3. Do you have plans to make some callback structures created by > providers? > > > I mean such structures as SSL key exchange/authentication methods, > X.509 > > > extensions etc. > > > > There aren't any plans to do that at the moment. There's nothing in the > provider > > design that would prevent us from doing so at some point in the future. > -- SY, Dmitry Belyavsky -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From georg.hoellrigl at gmx.at Sat Feb 23 21:44:58 2019 From: georg.hoellrigl at gmx.at (=?utf-8?Q?Georg_H=C3=B6llrigl?=) Date: Sat, 23 Feb 2019 22:44:58 +0100 Subject: AW: OpenSSL hash memory leak In-Reply-To: References: <0101016916891c9d-7282f4a4-c0fe-45b7-84d8-6402a6c7aa04-000000@us-west-2.amazonses.com> Message-ID: <003201d4cbc1$065f69f0$131e3dd0$@gmx.at> Hello, I guess you?re not seeing a memory leak, but just normal behaviour of linux file system cache. Buff/cache is keeping files in memory that were least accessed as long as not needed by other stuff. You don?t need to free the buffer/cache, because linux does that automatically, when memory is needed. Kind Regards, Georg Von: openssl-users Im Auftrag von prithiraj das Gesendet: 23 February 2019 18:25 An: Jordan Brown Cc: openssl-users at openssl.org Betreff: Re: OpenSSL hash memory leak Hi, This is how I have initialized my variables:- EVP_MD_CTX *mdctx; const EVP_MD *md; int i; HASH hash_data; unsigned char message_data[BUFFER_SIZE]; BUFFER_SIZE has been defined as 131072 and HASH is my hash structure defined to hold the message digest, message digest length and message digest type On Sat, 23 Feb 2019 at 00:17, Jordan Brown > wrote: The most obvious question is "how are you allocating your message_data buffer?". You don't show that. On 2/22/2019 2:27 AM, prithiraj das wrote: Hi All, Using OpenSSL 1.0.2g, I have written a code to generate the hash of a file in an embeddded device having linux OS and low memory capacity and the files are generally of size 44 MB or more. The first time or even the second time on some occasions, the hash of any file is successfully generated. On the 3rd or 4th time (possibly due to lack of memory/memory leak), the system reboots before the hash can be generated. After restart, the same thing happens when the previous steps are repeated. The stats below shows the memory usage before and after computing the hash. root at at91sam9m10g45ek:~# free total used free shared buff/cache available Mem: 252180 13272 223048 280 15860 230924 Swap: 0 0 0 After computing hash :- root at at91sam9m10g45ek:~# free total used free shared buff/cache available Mem: 252180 13308 179308 280 59564 230868 Swap: 0 0 0 Buff/cache increases by almost 44MB (same as file size) everytime I generate the hash and free decreases. I believe the file is being loaded into buffer and not being freed. I am using the below code to compute the message digest. This code is part of a function ComputeHash and the file pointer here is fph. 
EVP_add_digest(EVP_sha256()); md = EVP_get_digestbyname("sha256"); if(!md) { printf("Unknown message digest \n"); exit(1); } printf("Message digest algorithm successfully loaded\n"); mdctx = EVP_MD_CTX_create(); EVP_DigestInit_ex(mdctx, md, NULL); // Reading data to array of unsigned chars long long int bytes_read = 0; printf("FILE size of the file to be hashed is %ld",filesize); //reading image file in chunks below and fph is the file pointer to the 44MB file while ((bytes_read = fread (message_data, 1, BUFFER_SIZE, fph)) != 0) EVP_DigestUpdate(mdctx, message_data, bytes_read); EVP_DigestFinal_ex(mdctx, hash_data.md_value, &hash_data.md_len); printf("\n%d\n",EVP_MD_CTX_size(mdctx)); printf("\n%d\n",EVP_MD_CTX_type(mdctx)); hash_data.md_type=EVP_MD_CTX_type(mdctx); EVP_MD_CTX_destroy(mdctx); //fclose(fp); printf("Generated Digest is:\n "); for(i = 0; i < hash_data.md_len; i++) printf("%02x", hash_data.md_value[i]); printf("\n"); EVP_cleanup(); return hash_data; In the the code below, I have done fclose(fp) verify_hash=ComputeHash(fp,size1); fclose(fp); I believe that instead of loading the entire file all at once I am reading the 44MB file in chunks and computing the hash using the piece of code below: (fph is the file pointer) while ((bytes_read = fread (message_data, 1, BUFFER_SIZE, fph)) != 0) EVP_DigestUpdate(mdctx, message_data, bytes_read); Where I am going wrong? How can I free the buff/cache after computation of message digest? Please suggest ways to tackle this. Thanks and Regards, Prithiraj -- Jordan Brown, Oracle ZFS Storage Appliance, Oracle Solaris -------------- next part -------------- An HTML attachment was scrubbed... URL: From prithiraj.das at gmail.com Sun Feb 24 10:28:50 2019 From: prithiraj.das at gmail.com (prithiraj das) Date: Sun, 24 Feb 2019 15:58:50 +0530 Subject: OpenSSL hash memory leak In-Reply-To: <003201d4cbc1$065f69f0$131e3dd0$@gmx.at> References: <0101016916891c9d-7282f4a4-c0fe-45b7-84d8-6402a6c7aa04-000000@us-west-2.amazonses.com> <003201d4cbc1$065f69f0$131e3dd0$@gmx.at> Message-ID: Hi All, Apart from my code posted in this mailchain, I tried testing using the OpenSSL commands. I ran *openssl dgst -sha256 Test_blob.* Test_blob and all files mentioned below are almost 44 MB (or more). The first time buff/cache value increased by 44MB (size of the file) * total used free shared buff/cache available* *Mem: 252180 12984 181544 284 57652 231188* *Swap: 0 0 0* I ran the same OpenSSL command again with the same file, and the result of free command remained the same * total used free shared buff/cache available* *Mem: 252180 12984 181544 284 57652 231188* *Swap: 0 0 0* Next I ran the same command with a different file (let's say Test_blob2) and ran the free command after it, result:- * total used free s**hared buff/cache available* *Mem: 252180 12948 137916 284 101316 231200* *Swap: 0 0 0* The *buff/cache* value has increased by the size of the file concerned* (almost 44MB)* If I run the same command the 3rd time with another file not previously used (let's say Test_blob3), the following happens *Kernel panic - not syncing: Attempted to kill init! exitcode=0x0000000b* *Rebooting in 15 seconds..* Is there a way to resolve this problem, How do I clear the buff/cache? On Sun, 24 Feb 2019 at 03:15, Georg H?llrigl wrote: > Hello, > > > > I guess you?re not seeing a memory leak, but just normal behaviour of > linux file system cache. > > Buff/cache is keeping files in memory that were least accessed as long as > not needed by other stuff. 
> > You don?t need to free the buffer/cache, because linux does that > automatically, when memory is needed. > > > > Kind Regards, > > Georg > > > > *Von:* openssl-users *Im Auftrag von *prithiraj > das > *Gesendet:* 23 February 2019 18:25 > *An:* Jordan Brown > *Cc:* openssl-users at openssl.org > *Betreff:* Re: OpenSSL hash memory leak > > > > Hi, > > This is how I have initialized my variables:- > > > > EVP_MD_CTX *mdctx; > > const EVP_MD *md; > > int i; > > HASH hash_data; > > unsigned char message_data[BUFFER_SIZE]; > > > > BUFFER_SIZE has been defined as 131072 > > and HASH is my hash structure defined to hold the message digest, message > digest length and message digest type > > > > On Sat, 23 Feb 2019 at 00:17, Jordan Brown > wrote: > > The most obvious question is "how are you allocating your message_data > buffer?". You don't show that. > > > > On 2/22/2019 2:27 AM, prithiraj das wrote: > > > > Hi All, > > > > Using OpenSSL 1.0.2g, I have written a code to generate the hash of a file > in an embeddded device having linux OS and low memory capacity and the > files are generally of size 44 MB or more. The first time or even the > second time on some occasions, the hash of any file is successfully > generated. On the 3rd or 4th time (possibly due to lack of memory/memory > leak), the system reboots before the hash can be generated. After restart, > the same thing happens when the previous steps are repeated. > > The stats below shows the memory usage before and after computing the > hash. > > > > *root at at91sam9m10g45ek:~# free* > > * total used free shared > buff/cache available* > > *Mem: 252180 13272 223048 280 15860 > 230924* > > *Swap: 0 0 0* > > > > *After computing hash :-* > > *root at at91sam9m10g45ek:~# free* > > * total used free shared > buff/cache available* > > *Mem: 252180 13308 179308 280 59564 > 230868* > > *Swap: 0 0 0* > > > > Buff/cache increases by almost 44MB (same as file size) everytime I > generate the hash and free decreases. I believe the file is being loaded > into buffer and not being freed. > > > > I am using the below code to compute the message digest. This code is part > of a function ComputeHash and the file pointer here is fph. 
> > > > * EVP_add_digest(EVP_sha256());* > > * md = EVP_get_digestbyname("sha256");* > > > > * if(!md) {* > > * printf("Unknown message digest \n");* > > * exit(1);* > > * }* > > * printf("Message digest algorithm successfully loaded\n");* > > * mdctx = EVP_MD_CTX_create();* > > * EVP_DigestInit_ex(mdctx, md, NULL);* > > > > * // Reading data to array of unsigned chars * > > * long long int bytes_read = 0;* > > > > * printf("FILE size of the file to be hashed is %ld",filesize); * > > > > * //reading image file in chunks below and fph is the file pointer to the > 44MB file* > > * while ((bytes_read = fread (message_data, 1, BUFFER_SIZE, fph)) != 0)* > > * EVP_DigestUpdate(mdctx, message_data, bytes_read);* > > * EVP_DigestFinal_ex(mdctx, hash_data.md_value, &hash_data.md_len);* > > * printf("\n%d\n",EVP_MD_CTX_size(mdctx));* > > * printf("\n%d\n",EVP_MD_CTX_type(mdctx));* > > * hash_data.md_type=EVP_MD_CTX_type(mdctx);* > > * EVP_MD_CTX_destroy(mdctx);* > > * //fclose(fp);* > > * printf("Generated Digest is:\n ");* > > * for(i = 0; i < hash_data.md_len; i++)* > > * printf("%02x", hash_data.md_value[i]);* > > * printf("\n");* > > * EVP_cleanup();* > > * return hash_data;* > > > > In the the code below, I have done fclose(fp) > > *verify_hash=ComputeHash(fp,size1);* > > *fclose(fp);* > > > > I believe that instead of loading the entire file all at once I am reading > the 44MB file in chunks and computing the hash using the piece of code > below: (fph is the file pointer) > > *while ((bytes_read = fread (message_data, 1, BUFFER_SIZE, fph)) != 0)* > > * EVP_DigestUpdate(mdctx, message_data, bytes_read);* > > > > Where I am going wrong? How can I free the buff/cache after computation of > message digest? Please suggest ways to tackle this. > > > > > > Thanks and Regards, > > Prithiraj > > > > > > -- > > Jordan Brown, Oracle ZFS Storage Appliance, Oracle Solaris > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From prithiraj.das at gmail.com Sun Feb 24 10:34:18 2019 From: prithiraj.das at gmail.com (prithiraj das) Date: Sun, 24 Feb 2019 16:04:18 +0530 Subject: OpenSSL hash memory leak In-Reply-To: References: <0101016916891c9d-7282f4a4-c0fe-45b7-84d8-6402a6c7aa04-000000@us-west-2.amazonses.com> <003201d4cbc1$065f69f0$131e3dd0$@gmx.at> Message-ID: If it helps, sometimes I do get the following errors for the same and subsequent reboot: Alignment trap: sh (601) PC=0xb6e008f8 Instr=0x4589c0d7 Address=0x000000d7 FSR 0x801 Alignment trap: login (584) PC=0xb6e6ab00 Instr=0xe5951000 Address=0xd27cdc63 FSR 0x001 Kernel panic - not syncing: Attempted to kill init! exitcode=0x0000000b On Sun, 24 Feb 2019 at 15:58, prithiraj das wrote: > Hi All, > > Apart from my code posted in this mailchain, I tried testing using the > OpenSSL commands. I ran *openssl dgst -sha256 Test_blob.* Test_blob and > all files mentioned below are almost 44 MB (or more). 
> > The first time buff/cache value increased by 44MB (size of the file) > * total used free shared > buff/cache available* > *Mem: 252180 12984 181544 284 57652 > 231188* > *Swap: 0 0 0* > > I ran the same OpenSSL command again with the same file, and the result of > free command remained the same > * total used free shared > buff/cache available* > *Mem: 252180 12984 181544 284 57652 > 231188* > *Swap: 0 0 0* > > Next I ran the same command with a different file (let's say Test_blob2) > and ran the free command after it, result:- > * total used free s**hared > buff/cache available* > *Mem: 252180 12948 137916 284 101316 > 231200* > *Swap: 0 0 0* > > The *buff/cache* value has increased by the size of the file concerned* (almost > 44MB)* > If I run the same command the 3rd time with another file not previously > used (let's say Test_blob3), the following happens > > *Kernel panic - not syncing: Attempted to kill init! exitcode=0x0000000b* > *Rebooting in 15 seconds..* > > Is there a way to resolve this problem, How do I clear the buff/cache? > > On Sun, 24 Feb 2019 at 03:15, Georg H?llrigl > wrote: > >> Hello, >> >> >> >> I guess you?re not seeing a memory leak, but just normal behaviour of >> linux file system cache. >> >> Buff/cache is keeping files in memory that were least accessed as long as >> not needed by other stuff. >> >> You don?t need to free the buffer/cache, because linux does that >> automatically, when memory is needed. >> >> >> >> Kind Regards, >> >> Georg >> >> >> >> *Von:* openssl-users *Im Auftrag von >> *prithiraj das >> *Gesendet:* 23 February 2019 18:25 >> *An:* Jordan Brown >> *Cc:* openssl-users at openssl.org >> *Betreff:* Re: OpenSSL hash memory leak >> >> >> >> Hi, >> >> This is how I have initialized my variables:- >> >> >> >> EVP_MD_CTX *mdctx; >> >> const EVP_MD *md; >> >> int i; >> >> HASH hash_data; >> >> unsigned char message_data[BUFFER_SIZE]; >> >> >> >> BUFFER_SIZE has been defined as 131072 >> >> and HASH is my hash structure defined to hold the message digest, message >> digest length and message digest type >> >> >> >> On Sat, 23 Feb 2019 at 00:17, Jordan Brown >> wrote: >> >> The most obvious question is "how are you allocating your message_data >> buffer?". You don't show that. >> >> >> >> On 2/22/2019 2:27 AM, prithiraj das wrote: >> >> >> >> Hi All, >> >> >> >> Using OpenSSL 1.0.2g, I have written a code to generate the hash of a >> file in an embeddded device having linux OS and low memory capacity and the >> files are generally of size 44 MB or more. The first time or even the >> second time on some occasions, the hash of any file is successfully >> generated. On the 3rd or 4th time (possibly due to lack of memory/memory >> leak), the system reboots before the hash can be generated. After restart, >> the same thing happens when the previous steps are repeated. >> >> The stats below shows the memory usage before and after computing the >> hash. >> >> >> >> *root at at91sam9m10g45ek:~# free* >> >> * total used free shared >> buff/cache available* >> >> *Mem: 252180 13272 223048 280 15860 >> 230924* >> >> *Swap: 0 0 0* >> >> >> >> *After computing hash :-* >> >> *root at at91sam9m10g45ek:~# free* >> >> * total used free shared >> buff/cache available* >> >> *Mem: 252180 13308 179308 280 59564 >> 230868* >> >> *Swap: 0 0 0* >> >> >> >> Buff/cache increases by almost 44MB (same as file size) everytime I >> generate the hash and free decreases. I believe the file is being loaded >> into buffer and not being freed. 
>>
>> I am using the below code to compute the message digest. This code is
>> part of a function ComputeHash and the file pointer here is fph.
>>
>> * EVP_add_digest(EVP_sha256());*
>> * md = EVP_get_digestbyname("sha256");*
>>
>> * if(!md) {*
>> *        printf("Unknown message digest \n");*
>> *        exit(1);*
>> * }*
>> * printf("Message digest algorithm successfully loaded\n");*
>> * mdctx = EVP_MD_CTX_create();*
>> * EVP_DigestInit_ex(mdctx, md, NULL);*
>>
>> * // Reading data to array of unsigned chars *
>> * long long int bytes_read = 0;*
>>
>> * printf("FILE size of the file to be hashed is %ld",filesize); *
>>
>> * //reading image file in chunks below and fph is the file pointer to the
>> 44MB file*
>> * while ((bytes_read = fread (message_data, 1, BUFFER_SIZE, fph)) != 0)*
>> *   EVP_DigestUpdate(mdctx, message_data, bytes_read);*
>> * EVP_DigestFinal_ex(mdctx, hash_data.md_value, &hash_data.md_len);*
>> * printf("\n%d\n",EVP_MD_CTX_size(mdctx));*
>> * printf("\n%d\n",EVP_MD_CTX_type(mdctx));*
>> * hash_data.md_type=EVP_MD_CTX_type(mdctx);*
>> * EVP_MD_CTX_destroy(mdctx);*
>> * //fclose(fp);*
>> * printf("Generated Digest is:\n ");*
>> * for(i = 0; i < hash_data.md_len; i++)*
>> *        printf("%02x", hash_data.md_value[i]);*
>> * printf("\n");*
>> * EVP_cleanup();*
>> * return hash_data;*
>>
>> In the code below, I have done fclose(fp)
>> *verify_hash=ComputeHash(fp,size1);*
>> *fclose(fp);*
>>
>> I believe that instead of loading the entire file all at once I am
>> reading the 44MB file in chunks and computing the hash using the piece of
>> code below: (fph is the file pointer)
>> *while ((bytes_read = fread (message_data, 1, BUFFER_SIZE, fph)) != 0)*
>> *   EVP_DigestUpdate(mdctx, message_data, bytes_read);*
>>
>> Where am I going wrong? How can I free the buff/cache after computation
>> of message digest? Please suggest ways to tackle this.
>>
>> Thanks and Regards,
>> Prithiraj
>>
>> --
>> Jordan Brown, Oracle ZFS Storage Appliance, Oracle Solaris
>>
>> -------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From georg.hoellrigl at gmx.at Sun Feb 24 10:53:16 2019
From: georg.hoellrigl at gmx.at (=?ISO-8859-1?Q?Georg_H=F6llrigl?=)
Date: Sun, 24 Feb 2019 11:53:16 +0100
Subject: OpenSSL hash memory leak
In-Reply-To: 
Message-ID: <0MF5FT-1gmXXa0wQ7-00GG6r@mail.gmx.com>

That pretty much sounds like a hardware problem. I'd expect that you see
similar behaviour when you md5sum the files? The OpenSSL mailing list might
be the wrong place for that topic.

-------- Original message --------
From: prithiraj das
Date: 24.02.19 11:34 (GMT+01:00)
To: Georg Höllrigl, openssl-users at openssl.org, Jordan Brown
Subject: Re: OpenSSL hash memory leak

If it helps, sometimes I do get the following errors for the same and
subsequent reboot:

Alignment trap: sh (601) PC=0xb6e008f8 Instr=0x4589c0d7 Address=0x000000d7 FSR 0x801
Alignment trap: login (584) PC=0xb6e6ab00 Instr=0xe5951000 Address=0xd27cdc63 FSR 0x001
Kernel panic - not syncing: Attempted to kill init! exitcode=0x0000000b

On Sun, 24 Feb 2019 at 15:58, prithiraj das wrote:

Hi All,

Apart from my code posted in this mailchain, I tried testing using the
OpenSSL commands. I ran openssl dgst -sha256 Test_blob. Test_blob and all
files mentioned below are almost 44 MB (or more).

The first time buff/cache value increased by 44MB (size of the file)

             total        used        free      shared  buff/cache   available
Mem:        252180       12984      181544         284       57652      231188
Swap:            0           0           0

I ran the same OpenSSL command again with the same file, and the result of
the free command remained the same

             total        used        free      shared  buff/cache   available
Mem:        252180       12984      181544         284       57652      231188
Swap:            0           0           0

Next I ran the same command with a different file (let's say Test_blob2)
and ran the free command after it, result:-

             total        used        free      shared  buff/cache   available
Mem:        252180       12948      137916         284      101316      231200
Swap:            0           0           0

The buff/cache value has increased by the size of the file concerned
(almost 44MB).
If I run the same command the 3rd time with another file not previously
used (let's say Test_blob3), the following happens

Kernel panic - not syncing: Attempted to kill init! exitcode=0x0000000b
Rebooting in 15 seconds..

Is there a way to resolve this problem, How do I clear the buff/cache?

On Sun, 24 Feb 2019 at 03:15, Georg Höllrigl wrote:

Hello,

I guess you're not seeing a memory leak, but just normal behaviour of the
linux file system cache.
Buff/cache is keeping files in memory that were least accessed as long as
not needed by other stuff.
You don't need to free the buffer/cache, because linux does that
automatically, when memory is needed.

Kind Regards,
Georg

From: openssl-users On Behalf Of prithiraj das
Sent: 23 February 2019 18:25
To: Jordan Brown
Cc: openssl-users at openssl.org
Subject: Re: OpenSSL hash memory leak

Hi,
This is how I have initialized my variables:-

EVP_MD_CTX *mdctx;
const EVP_MD *md;
int i;
HASH hash_data;
unsigned char message_data[BUFFER_SIZE];

BUFFER_SIZE has been defined as 131072
and HASH is my hash structure defined to hold the message digest, message
digest length and message digest type

On Sat, 23 Feb 2019 at 00:17, Jordan Brown wrote:

The most obvious question is "how are you allocating your message_data
buffer?". You don't show that.

On 2/22/2019 2:27 AM, prithiraj das wrote:

Hi All,

Using OpenSSL 1.0.2g, I have written a code to generate the hash of a file
in an embedded device having linux OS and low memory capacity and the
files are generally of size 44 MB or more. The first time or even the
second time on some occasions, the hash of any file is successfully
generated. On the 3rd or 4th time (possibly due to lack of memory/memory
leak), the system reboots before the hash can be generated. After restart,
the same thing happens when the previous steps are repeated.
The stats below show the memory usage before and after computing the hash.

root at at91sam9m10g45ek:~# free
             total        used        free      shared  buff/cache   available
Mem:        252180       13272      223048         280       15860      230924
Swap:            0           0           0

After computing hash :-
root at at91sam9m10g45ek:~# free
             total        used        free      shared  buff/cache   available
Mem:        252180       13308      179308         280       59564      230868
Swap:            0           0           0

Buff/cache increases by almost 44MB (same as file size) every time I
generate the hash and free decreases. I believe the file is being loaded
into buffer and not being freed.

I am using the below code to compute the message digest. This code is part
of a function ComputeHash and the file pointer here is fph.

EVP_add_digest(EVP_sha256());
md = EVP_get_digestbyname("sha256");

if(!md) {
        printf("Unknown message digest \n");
        exit(1);
}
printf("Message digest algorithm successfully loaded\n");
mdctx = EVP_MD_CTX_create();
EVP_DigestInit_ex(mdctx, md, NULL);

// Reading data to array of unsigned chars
long long int bytes_read = 0;

printf("FILE size of the file to be hashed is %ld",filesize);

//reading image file in chunks below and fph is the file pointer to the 44MB file
while ((bytes_read = fread (message_data, 1, BUFFER_SIZE, fph)) != 0)
        EVP_DigestUpdate(mdctx, message_data, bytes_read);
EVP_DigestFinal_ex(mdctx, hash_data.md_value, &hash_data.md_len);
printf("\n%d\n",EVP_MD_CTX_size(mdctx));
printf("\n%d\n",EVP_MD_CTX_type(mdctx));
hash_data.md_type=EVP_MD_CTX_type(mdctx);
EVP_MD_CTX_destroy(mdctx);
//fclose(fp);
printf("Generated Digest is:\n ");
for(i = 0; i < hash_data.md_len; i++)
        printf("%02x", hash_data.md_value[i]);
printf("\n");
EVP_cleanup();
return hash_data;

In the code below, I have done fclose(fp)
verify_hash=ComputeHash(fp,size1);
fclose(fp);

I believe that instead of loading the entire file all at once I am reading
the 44MB file in chunks and computing the hash using the piece of code
below: (fph is the file pointer)
while ((bytes_read = fread (message_data, 1, BUFFER_SIZE, fph)) != 0)
        EVP_DigestUpdate(mdctx, message_data, bytes_read);

Where am I going wrong? How can I free the buff/cache after computation of
message digest? Please suggest ways to tackle this.

Thanks and Regards,
Prithiraj

--
Jordan Brown, Oracle ZFS Storage Appliance, Oracle Solaris

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From openssl-users at dukhovni.org Sun Feb 24 20:31:21 2019
From: openssl-users at dukhovni.org (Viktor Dukhovni)
Date: Sun, 24 Feb 2019 15:31:21 -0500
Subject: [openssl-project] OpenSSL 3.0 and FIPS Update
In-Reply-To: <9c953cd0-5439-ec7c-9a25-b124f72f3328@openssl.org>
References: <2c9fe037-9817-ba6f-1062-1d574264318a@openssl.org>
 <9c953cd0-5439-ec7c-9a25-b124f72f3328@openssl.org>
Message-ID: <20190224203120.GG916@straasha.imrryr.org>

On Thu, Feb 21, 2019 at 04:20:53PM +0000, Matt Caswell wrote:

> > 2. Can we do something with a bunch of hard-linked non-extendable lists of
> > internal NIDs?

> > For example, providing GOST algorithms always requires a patch to extend 3-5
> > internal lists.
> > If it could be done dynamically, it will be great.

The simplest solution is to submit a PR to add your OIDs to OpenSSL,
so that no further out-of-tree patches are required.

Dynamic NIDs don't fit very well into the design, because NIDs are
expected to be stable compile-time constants.  We could perhaps
reserve a range for "private-use", and "engines" could allocate new
NIDs in the private space at runtime.  The key question is whether
such NIDs are global or valid only if returned to the same engine
(provider, ...).  If not global, the allocation might be static
within the engine, and not require any locks.

-- 
    Viktor.
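
(For readers following the NID discussion, the sketch below shows what the
existing dynamic-object route looks like in practice: it registers a private
OID at runtime with OBJ_create() and looks it up again by name. The OID
"1.3.6.1.4.1.55555.1.1" and both names are invented purely for illustration;
the NID that comes back is assigned per process, which is exactly why it
cannot be baked into the hard-coded internal NID tables being discussed.)

#include <stdio.h>
#include <openssl/objects.h>

int main(void)
{
    /* The OID and both names here are made-up illustration values. */
    int nid = OBJ_create("1.3.6.1.4.1.55555.1.1",
                         "example-alg", "Example Algorithm");

    if (nid == NID_undef) {
        fprintf(stderr, "OBJ_create failed\n");
        return 1;
    }

    /* The NID is process-local and assigned at runtime, so it is not a
     * stable compile-time constant like the NID_* macros. */
    printf("runtime NID: %d\n", nid);
    printf("OBJ_txt2nid(\"example-alg\") = %d\n", OBJ_txt2nid("example-alg"));
    return 0;
}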
From mcr at sandelman.ca Sun Feb 24 23:40:51 2019 From: mcr at sandelman.ca (Michael Richardson) Date: Sun, 24 Feb 2019 18:40:51 -0500 Subject: [openssl-project] OpenSSL 3.0 and FIPS Update In-Reply-To: <20190224203120.GG916@straasha.imrryr.org> References: <2c9fe037-9817-ba6f-1062-1d574264318a@openssl.org> <9c953cd0-5439-ec7c-9a25-b124f72f3328@openssl.org> <20190224203120.GG916@straasha.imrryr.org> Message-ID: <10289.1551051651@localhost> Not sure who Matt quoted, wrote: >> 2. Can we do something with a bunch of hard-linked non-extendable >> lists of internal NIDs? >> >> For example, providing GOST algorithms always requires a patch to >> extend 3-5 >> internal lists. >> If it could be done dynamically, it will be great. Matt then wrote: > The simplest solution is to submit a PR to add your OIDs to OpenSSL, > so that no furher out of tree patches are required. Viktor Dukhovni wrote: > Dynamic NIDs don't fit very well into the design, because NIDs are > expected to be stable compile-time constants. We could perhaps > reserve a range for "private-use", and "engines" could allocate new > NIDs in the private space at runtime. The key question is whether > such NIDs are global or valid only if returned to the same engine > (provider, ...). If not global, the allocation might be static > within the engine, and not require any locks. Having stubbed my toe on some NID stuff, I must question exposting NIDs. ruby-openssl used them in a dumb way that meant that adding extensions by OID was broken until I removed some code. I think that the #define/enum of NIDs should be made internal-only, available as optimization to internal code only. Your question then becomes, "are engines internal users", and I'd like the answer to be no. I think that the openssl 3 changes suggest the same thing. All other users can call OBJ_obj2nid() or OBJ_txt2nid() to get a NID, and we can figure out how to allocate things dynamically if this makes sense. I don't know which APIs are currently NID-only. -- ] Never tell me the odds! | ipv6 mesh networks [ ] Michael Richardson, Sandelman Software Works | IoT architect [ ] mcr at sandelman.ca http://www.sandelman.ca/ | ruby on rails [ -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 487 bytes Desc: not available URL: From beldmit at gmail.com Mon Feb 25 07:02:44 2019 From: beldmit at gmail.com (Dmitry Belyavsky) Date: Mon, 25 Feb 2019 10:02:44 +0300 Subject: [openssl-project] OpenSSL 3.0 and FIPS Update In-Reply-To: <20190224203120.GG916@straasha.imrryr.org> References: <2c9fe037-9817-ba6f-1062-1d574264318a@openssl.org> <9c953cd0-5439-ec7c-9a25-b124f72f3328@openssl.org> <20190224203120.GG916@straasha.imrryr.org> Message-ID: On Sun, Feb 24, 2019 at 11:31 PM Viktor Dukhovni wrote: > On Thu, Feb 21, 2019 at 04:20:53PM +0000, Matt Caswell wrote: > > > > 2. Can we do something with a bunch of hard-linked non-extendable > lists of > > > internal NIDs? > > > > > For example, providing GOST algorithms always requires a patch to > extend 3-5 > > > internal lists. > > > If it could be done dynamically, it will be great. > > The simplest solution is to submit a PR to add your OIDs to OpenSSL, > so that no furher out of tree patches are required. > This is a way we go here and now. It is inevitable for libssl, but can be significantly reduced for libcrypto. Some examples are available in my response to Richard. And here we get a second problem, relatively small. 
If I remember correctly, adding new OIDs/NIDs is treated as breaking the binary compatibility so we have to wait for a major release. > Dynamic NIDs don't fit very well into the design, because NIDs are > expected to be stable compile-time constants. We could perhaps > reserve a range for "private-use", and "engines" could allocate new > NIDs in the private space at runtime. The key question is whether > such NIDs are global or valid only if returned to the same engine > (provider, ...). If not global, the allocation might be static > within the engine, and not require any locks. > Totally agree. OBJ_create() and similar functions exist, but do not solve our problems. -- SY, Dmitry Belyavsky -------------- next part -------------- An HTML attachment was scrubbed... URL: From beldmit at gmail.com Mon Feb 25 07:07:11 2019 From: beldmit at gmail.com (Dmitry Belyavsky) Date: Mon, 25 Feb 2019 10:07:11 +0300 Subject: [openssl-project] OpenSSL 3.0 and FIPS Update In-Reply-To: <10289.1551051651@localhost> References: <2c9fe037-9817-ba6f-1062-1d574264318a@openssl.org> <9c953cd0-5439-ec7c-9a25-b124f72f3328@openssl.org> <20190224203120.GG916@straasha.imrryr.org> <10289.1551051651@localhost> Message-ID: Dear Michael, On Mon, Feb 25, 2019 at 2:41 AM Michael Richardson wrote: > > Not sure who Matt quoted, wrote: > >> 2. Can we do something with a bunch of hard-linked non-extendable > >> lists of internal NIDs? > >> > >> For example, providing GOST algorithms always requires a patch to > >> extend 3-5 > >> internal lists. > >> If it could be done dynamically, it will be great. > > Matt then wrote: > > The simplest solution is to submit a PR to add your OIDs to OpenSSL, > > so that no furher out of tree patches are required. > > Viktor Dukhovni wrote: > > Dynamic NIDs don't fit very well into the design, because NIDs are > > expected to be stable compile-time constants. We could perhaps > > reserve a range for "private-use", and "engines" could allocate new > > NIDs in the private space at runtime. The key question is whether > > such NIDs are global or valid only if returned to the same engine > > (provider, ...). If not global, the allocation might be static > > within the engine, and not require any locks. > > Having stubbed my toe on some NID stuff, I must question exposting NIDs. > ruby-openssl used them in a dumb way that meant that adding extensions by > OID > was broken until I removed some code. > > I think that the #define/enum of NIDs should be made internal-only, > available as optimization to internal code only. > Your question then becomes, "are engines internal users", and I'd like the > answer to be no. I think that the openssl 3 changes suggest the same thing. > The engines are _mostly_ external users. But sometimes, providing new algorithms, there appear some parts that should go into the core part. And regulation creates similar problems. All other users can call OBJ_obj2nid() or OBJ_txt2nid() to get a NID, > and we can figure out how to allocate things dynamically if this makes > sense. I don't know which APIs are currently NID-only. AFAIK, no external API, but there are some cases when external API does not cover all. -- SY, Dmitry Belyavsky -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From paul.dale at oracle.com Mon Feb 25 10:36:45 2019 From: paul.dale at oracle.com (Dr Paul Dale) Date: Mon, 25 Feb 2019 20:36:45 +1000 Subject: [openssl-project] OpenSSL 3.0 and FIPS Update In-Reply-To: References: <2c9fe037-9817-ba6f-1062-1d574264318a@openssl.org> <9c953cd0-5439-ec7c-9a25-b124f72f3328@openssl.org> <20190224203120.GG916@straasha.imrryr.org> Message-ID: <117FE82C-9E1F-4544-A158-9D9CBEE4E1D6@oracle.com> I don?t think that that new OIDs or NIDs are considering breaking. Changing existing ones definitely is, but that?s an entirely different proposition. Pauli -- Dr Paul Dale | Cryptographer | Network Security & Encryption Phone +61 7 3031 7217 Oracle Australia > On 25 Feb 2019, at 5:02 pm, Dmitry Belyavsky wrote: > > > > On Sun, Feb 24, 2019 at 11:31 PM Viktor Dukhovni > wrote: > On Thu, Feb 21, 2019 at 04:20:53PM +0000, Matt Caswell wrote: > > > > 2. Can we do something with a bunch of hard-linked non-extendable lists of > > > internal NIDs? > > > > > For example, providing GOST algorithms always requires a patch to extend 3-5 > > > internal lists. > > > If it could be done dynamically, it will be great. > > The simplest solution is to submit a PR to add your OIDs to OpenSSL, > so that no furher out of tree patches are required. > > This is a way we go here and now. It is inevitable for libssl, but can be significantly reduced for libcrypto. > Some examples are available in my response to Richard. > > And here we get a second problem, relatively small. If I remember correctly, > adding new OIDs/NIDs is treated as breaking the binary compatibility so we have to wait for a major release. > > Dynamic NIDs don't fit very well into the design, because NIDs are > expected to be stable compile-time constants. We could perhaps > reserve a range for "private-use", and "engines" could allocate new > NIDs in the private space at runtime. The key question is whether > such NIDs are global or valid only if returned to the same engine > (provider, ...). If not global, the allocation might be static > within the engine, and not require any locks. > > Totally agree. OBJ_create() and similar functions exist, but do not solve our problems. > > -- > SY, Dmitry Belyavsky -------------- next part -------------- An HTML attachment was scrubbed... URL: From beldmit at gmail.com Mon Feb 25 10:51:38 2019 From: beldmit at gmail.com (Dmitry Belyavsky) Date: Mon, 25 Feb 2019 13:51:38 +0300 Subject: [openssl-project] OpenSSL 3.0 and FIPS Update In-Reply-To: <117FE82C-9E1F-4544-A158-9D9CBEE4E1D6@oracle.com> References: <2c9fe037-9817-ba6f-1062-1d574264318a@openssl.org> <9c953cd0-5439-ec7c-9a25-b124f72f3328@openssl.org> <20190224203120.GG916@straasha.imrryr.org> <117FE82C-9E1F-4544-A158-9D9CBEE4E1D6@oracle.com> Message-ID: Dear Dr Paul, I think this change is somewhere in a gray zone. On Mon, Feb 25, 2019 at 1:37 PM Dr Paul Dale wrote: > I don?t think that that new OIDs or NIDs are considering breaking. > Changing existing ones definitely is, but that?s an entirely different > proposition. > > > Pauli > -- > Dr Paul Dale | Cryptographer | Network Security & Encryption > Phone +61 7 3031 7217 > Oracle Australia > > > > On 25 Feb 2019, at 5:02 pm, Dmitry Belyavsky wrote: > > > > On Sun, Feb 24, 2019 at 11:31 PM Viktor Dukhovni < > openssl-users at dukhovni.org> wrote: > >> On Thu, Feb 21, 2019 at 04:20:53PM +0000, Matt Caswell wrote: >> >> > > 2. Can we do something with a bunch of hard-linked non-extendable >> lists of >> > > internal NIDs? 
>> > >> > > For example, providing GOST algorithms always requires a patch to >> extend 3-5 >> > > internal lists. >> > > If it could be done dynamically, it will be great. >> >> The simplest solution is to submit a PR to add your OIDs to OpenSSL, >> so that no furher out of tree patches are required. >> > > This is a way we go here and now. It is inevitable for libssl, but can be > significantly reduced for libcrypto. > Some examples are available in my response to Richard. > > And here we get a second problem, relatively small. If I remember > correctly, > adding new OIDs/NIDs is treated as breaking the binary compatibility so we > have to wait for a major release. > > >> Dynamic NIDs don't fit very well into the design, because NIDs are >> expected to be stable compile-time constants. We could perhaps >> reserve a range for "private-use", and "engines" could allocate new >> NIDs in the private space at runtime. The key question is whether >> such NIDs are global or valid only if returned to the same engine >> (provider, ...). If not global, the allocation might be static >> within the engine, and not require any locks. >> > > Totally agree. OBJ_create() and similar functions exist, but do not solve > our problems. > > -- > SY, Dmitry Belyavsky > > > -- SY, Dmitry Belyavsky -------------- next part -------------- An HTML attachment was scrubbed... URL: From openssl at foocrypt.net Mon Feb 25 11:04:04 2019 From: openssl at foocrypt.net (openssl at foocrypt.net) Date: Mon, 25 Feb 2019 22:04:04 +1100 Subject: TLS v HSTS v T.O.L.A. In-Reply-To: References: <2c9fe037-9817-ba6f-1062-1d574264318a@openssl.org> <9c953cd0-5439-ec7c-9a25-b124f72f3328@openssl.org> <20190224203120.GG916@straasha.imrryr.org> <117FE82C-9E1F-4544-A158-9D9CBEE4E1D6@oracle.com> Message-ID: Hi The current PJCIS is due to report early April. And just to relieve some the seriousness of the T.O.L.A. impacts whilst scribbling together another 10 pages for a PJCIS, I?ve put together the following subdomain. https://WeTheAustralianGovernemntApologizeForCausingTheCryptopocalypse.AUGov.FooCrypt.Net Does any one know how to fix the TLS v HSTS v TOLA problem ? -- Regards, Mark A. Lane Cryptopocalypse NOW 01 04 2016 Volumes 0.0 -> 10.0 Now available through iTunes - iBooks @ https://itunes.apple.com/au/author/mark-a.-lane/id1100062966?mt=11 ? Mark A. Lane 1980 - 2019, All Rights Reserved. ? FooCrypt 1980 - 2019, All Rights Reserved. ? FooCrypt, A Tale of Cynical Cyclical Encryption. 1980 - 2019, All Rights Reserved. ? Cryptopocalypse 1980 - 2019, All Rights Reserved. -------------- next part -------------- An HTML attachment was scrubbed... URL: From beldmit at gmail.com Mon Feb 25 13:28:40 2019 From: beldmit at gmail.com (Dmitry Belyavsky) Date: Mon, 25 Feb 2019 16:28:40 +0300 Subject: Missing accessor for the EVP_PKEY.engine Message-ID: Hello, We've started porting our 1.0.2 application to 1.1.1. What is a way to get an engine reference? I did not find a function like EVP_PKEY_get1_engine Thank you! -- SY, Dmitry Belyavsky -------------- next part -------------- An HTML attachment was scrubbed... 
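
(A sketch of one possible interim workaround while no accessor exists,
assuming the application itself loads the key through the engine: keep the
ENGINE pointer next to the EVP_PKEY from the moment the key is created,
instead of trying to recover it from the key later. The wrapper struct and
helper below are hypothetical, not OpenSSL API.)

#include <openssl/engine.h>
#include <openssl/evp.h>

/* Hypothetical pairing of a key with the engine it came from. */
typedef struct {
    ENGINE *e;
    EVP_PKEY *pkey;
} ENGINE_KEY;

/* load_engine_key() is an illustrative helper, not part of OpenSSL. */
static int load_engine_key(ENGINE_KEY *ek, ENGINE *e, const char *key_id)
{
    ek->e = e;                /* remember the engine explicitly */
    ek->pkey = ENGINE_load_private_key(e, key_id, NULL, NULL);
    return ek->pkey != NULL;
}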
URL: From hkario at redhat.com Mon Feb 25 14:05:03 2019 From: hkario at redhat.com (Hubert Kario) Date: Mon, 25 Feb 2019 15:05:03 +0100 Subject: OpenSSL hash memory leak In-Reply-To: References: Message-ID: <2248720.Mk6Wbyczpt@pintsize.usersys.redhat.com> On Sunday, 24 February 2019 11:34:18 CET prithiraj das wrote: > If it helps, sometimes I do get the following errors for the same and > subsequent reboot: > > Alignment trap: sh (601) PC=0xb6e008f8 Instr=0x4589c0d7 Address=0x000000d7 > FSR 0x801 > Alignment trap: login (584) PC=0xb6e6ab00 Instr=0xe5951000 > Address=0xd27cdc63 FSR 0x001 > Kernel panic - not syncing: Attempted to kill init! exitcode=0x0000000b that doesn't look like openssl problem at all, openssl may trigger it, but only because it's using the system to its fullest potential, not because there are issues in openssl I'd suggest trying memtest86 and trying to capture full kernel stacktrace with netconsole, in this order. But this mailing list is not a good place for follow up on this. -- Regards, Hubert Kario Senior Quality Engineer, QE BaseOS Security team Web: www.cz.redhat.com Red Hat Czech s.r.o., Purky?ova 115, 612 00 Brno, Czech Republic -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: This is a digitally signed message part. URL: From matt at openssl.org Mon Feb 25 14:23:11 2019 From: matt at openssl.org (Matt Caswell) Date: Mon, 25 Feb 2019 14:23:11 +0000 Subject: Missing accessor for the EVP_PKEY.engine In-Reply-To: References: Message-ID: On 25/02/2019 13:28, Dmitry Belyavsky wrote: > Hello, > > We've started porting our 1.0.2 application to 1.1.1.? > What is a way to get an engine reference? I did not find a function like > EVP_PKEY_get1_engine Seems to be a missing accessor. Matt From jb-openssl at wisemo.com Mon Feb 25 14:31:13 2019 From: jb-openssl at wisemo.com (Jakob Bohm) Date: Mon, 25 Feb 2019 15:31:13 +0100 Subject: OpenSSL hash memory leak In-Reply-To: <2248720.Mk6Wbyczpt@pintsize.usersys.redhat.com> References: <2248720.Mk6Wbyczpt@pintsize.usersys.redhat.com> Message-ID: <95483a56-977c-a52a-0fb1-6eac478148fb@wisemo.com> On 25/02/2019 15:05, Hubert Kario wrote: > On Sunday, 24 February 2019 11:34:18 CET prithiraj das wrote: >> If it helps, sometimes I do get the following errors for the same and >> subsequent reboot: >> >> Alignment trap: sh (601) PC=0xb6e008f8 Instr=0x4589c0d7 Address=0x000000d7 >> FSR 0x801 >> Alignment trap: login (584) PC=0xb6e6ab00 Instr=0xe5951000 >> Address=0xd27cdc63 FSR 0x001 >> Kernel panic - not syncing: Attempted to kill init! exitcode=0x0000000b > that doesn't look like openssl problem at all, openssl may trigger it, but > only because it's using the system to its fullest potential, not because there > are issues in openssl > > I'd suggest trying memtest86 and trying to capture full kernel stacktrace with > netconsole, in this order. But this mailing list is not a good place for > follow up on this. Just FYI.? "Alignment trap" is not usually a hardware issue.? It is virtually always a software error (specifically, accessing a 16, 32, 64, 80 or 128 bit value through an insufficiently aligned pointer). A stack trace is needed to determine if this is a kernel or user mode issue, and if so where. Of cause there is the remote possibility that a hardware error caused a pointer to have a value it shouldn't have according to the code. 
However unless the error is actually in OpenSSL code, there is little that this list can do to fix the problem. Given the specific text of the other error message, I hope you are not somehow running OpenSSL itself as process 1 (init), as that would be highly unusual. Enjoy Jakob -- Jakob Bohm, CIO, Partner, WiseMo A/S. https://www.wisemo.com Transformervej 29, 2860 S?borg, Denmark. Direct +45 31 13 16 10 This public discussion message is non-binding and may contain errors. WiseMo - Remote Service Management for PCs, Phones and Embedded From beldmit at gmail.com Mon Feb 25 15:04:59 2019 From: beldmit at gmail.com (Dmitry Belyavsky) Date: Mon, 25 Feb 2019 18:04:59 +0300 Subject: Missing accessor for the EVP_PKEY.engine In-Reply-To: References: Message-ID: On Mon, Feb 25, 2019 at 5:23 PM Matt Caswell wrote: > > > On 25/02/2019 13:28, Dmitry Belyavsky wrote: > > Hello, > > > > We've started porting our 1.0.2 application to 1.1.1. > > What is a way to get an engine reference? I did not find a function like > > EVP_PKEY_get1_engine > > Seems to be a missing accessor. > https://github.com/openssl/openssl/pull/8329 -- SY, Dmitry Belyavsky -------------- next part -------------- An HTML attachment was scrubbed... URL: From levitte at openssl.org Mon Feb 25 18:57:43 2019 From: levitte at openssl.org (Richard Levitte) Date: Mon, 25 Feb 2019 19:57:43 +0100 Subject: [openssl-project] OpenSSL 3.0 and FIPS Update In-Reply-To: <10289.1551051651@localhost> References: <2c9fe037-9817-ba6f-1062-1d574264318a@openssl.org> <9c953cd0-5439-ec7c-9a25-b124f72f3328@openssl.org> <20190224203120.GG916@straasha.imrryr.org> <10289.1551051651@localhost> Message-ID: <87va17lmqg.wl-levitte@openssl.org> On Mon, 25 Feb 2019 00:40:51 +0100, Michael Richardson wrote: > I think that the #define/enum of NIDs should be made internal-only, > available as optimization to internal code only. Having asked around a bit on this, that was the original intention... However, in an old era of having everything in public headers (or at least everything that was of interest to the public *plus* everything that libssl might want to use), they slipped out. NID literally means "numeric identity" and really has no inherent meaning other than quick access, like you say. The public interface was meant to be getting stuff by name (string) or possibly special functions such as EVP_aes_128_cbc()... > Your question then becomes, "are engines internal users", and I'd like the > answer to be no. I think that the openssl 3 changes suggest the same thing. Yup. > All other users can call OBJ_obj2nid() or OBJ_txt2nid() to get a NID, > and we can figure out how to allocate things dynamically if this makes > sense. I don't know which APIs are currently NID-only. There are some new APIs in master that add such functions: EVP_MAC_CTX_new_id() EVP_KDF_CTX_new_id() I'm currently thinking that's a mistake. Cheers, Richard -- Richard Levitte levitte at openssl.org OpenSSL Project http://www.openssl.org/~levitte/ From levitte at openssl.org Mon Feb 25 18:59:55 2019 From: levitte at openssl.org (Richard Levitte) Date: Mon, 25 Feb 2019 19:59:55 +0100 Subject: [openssl-project] OpenSSL 3.0 and FIPS Update In-Reply-To: References: <2c9fe037-9817-ba6f-1062-1d574264318a@openssl.org> <9c953cd0-5439-ec7c-9a25-b124f72f3328@openssl.org> <87zhqnkqe9.wl-levitte@openssl.org> Message-ID: <87tvgrlmms.wl-levitte@openssl.org> On Sat, 23 Feb 2019 21:47:00 +0100, Dmitry Belyavsky wrote: > > > Dear Richard,? 
> > On Sat, Feb 23, 2019 at 8:47 AM Richard Levitte wrote: > > Since our RAND API is separate from the EVP API, I'm unsure how we > plan on getting custom RAND_methods from providers. > > Please note that we can add RAND to the list of provider backed APIs, > and given a foundation that we're currently building, it may even be > quite easy.? However, no one has said explicitly that we would do so. > > The other option is, of course, to move the RAND API to EVP somehow, > but that will probably be more challenging. > > I do not think it is really necessary to move RAND to EVP. > Current architecture suits our requirements, but if the possibility to overwrite > the RAND_METHOD is removed, it will cause problems for us. So it turns out that some of my collegues were assuming that the RAND API would be provider backed. I simply hadn't caught on to that... Cheers, Richard -- Richard Levitte levitte at openssl.org OpenSSL Project http://www.openssl.org/~levitte/ From john.sha.jiang at gmail.com Tue Feb 26 06:22:52 2019 From: john.sha.jiang at gmail.com (John Jiang) Date: Tue, 26 Feb 2019 14:22:52 +0800 Subject: s_server/s_client on checking middlebox compatibility Message-ID: Is it possible to check if peer implements middlebox compatibility by s_server/s_client? It looks the test tools don't care this point. For example, if a server doesn't send change_cipher_spec after HelloRetryRequest, s_client still feels fine.That's not bad. But can I setup these tools to check middlebox compatibility? -------------- next part -------------- An HTML attachment was scrubbed... URL: From matt at openssl.org Tue Feb 26 09:34:32 2019 From: matt at openssl.org (Matt Caswell) Date: Tue, 26 Feb 2019 09:34:32 +0000 Subject: s_server/s_client on checking middlebox compatibility In-Reply-To: References: Message-ID: <10b3fe51-3d37-d273-e948-59a445fc43ee@openssl.org> On 26/02/2019 06:22, John Jiang wrote: > Is it possible to check if peer implements middlebox compatibility by > s_server/s_client? > It looks the test tools don't care this point. > For example, if a server doesn't send change_cipher_spec after > HelloRetryRequest, s_client still feels fine.That's not bad. But can I setup > these tools to check middlebox compatibility? By default s_server/s_client will have middlebox compatibility on. You can turn it off using the option "-no_middlebox". There is no option to directly tell you if an endpoint is using middlebox compatibility mode or not. You could figure it out indirectly by using the "-debug" option. This shows you the raw data that is being sent/received by the endpoints. Assuming TLSv1.3 has been negotiated then a remote peer is using middlebox compatibility if you see a sequence like this during the handshake: read from 0x557afedffb60 [0x557afee057d3] (5 bytes => 5 (0x5)) 0000 - 14 03 03 00 01 ..... read from 0x557afedffb60 [0x557afee057d8] (1 bytes => 1 (0x1)) 0000 - 01 Matt From sujiknair at gmail.com Tue Feb 26 12:51:02 2019 From: sujiknair at gmail.com (Suji) Date: Tue, 26 Feb 2019 18:21:02 +0530 Subject: AES-cipher offload to engine in openssl-fips In-Reply-To: References: Message-ID: Hi, I am unable to use AES-cipher offload to my engine even though it was registered with the proper flag (EVP_CIPH_FLAG_FIPS). I was able to use RSA, digests, and ECDSA to the engine with corresponding flags. I am using openssl-fips-2.0.16 and openssl-1.0.2e. OPENSSL_FIPS is set. I come across the code snippet in crypto/evp/evp_enc.c . In function EVP_CipherInit_ex. 
At start, pointer is updated with engine function and at Line number 173, In case of fips mode, function pointer gets updated to openssl function. Which means in fips mode ciphers never gets offloaded to engine? All other functions (digest, RSA etc) , it first updates to fips function, and then engine function. Why only ciphers has this different behaviour? Please reply. Thanks, Suji -------------- next part -------------- An HTML attachment was scrubbed... URL: From andrew.lynch at atos.net Tue Feb 26 12:52:37 2019 From: andrew.lynch at atos.net (Lynch, Andrew) Date: Tue, 26 Feb 2019 12:52:37 +0000 Subject: How to not use a configured engine? Message-ID: Hi, I support two bespoke engines which have been in use with OpenSSL 1.0.2. Due to a new requirement for RSA-PSS keys we are in the process of migrating to 1.1.1a. Implementing various minor API changes was no big deal and the engines still work as expected. However the behaviour of openssl has changed when trying to _not_ use the engine. There are several environments, each with OPENSSL_CONF pointing to a configuration file with an engines section for one of the two engines. This specifies a number of PRE commands to configure the engine. Mostly requests are processed with "openssl req" or "openssl x509" using HSM keys with -keyform ENGINE and the appropriate -engine option, however there are also some use cases with local file-based keys in the same environment. Take the simplified example of creating a signature with "openssl dgst -sha256 -sign mykey_priv.pem -out foo.sig foo.txt" There is no -engine option given, but the active configuration file does include an engines section. Using OpenSSL 1.0.2 the signature is created using the given key. Debug output from the engine shows that its init, finish and destroy functions have been called, but the sign operation does not go through the engine. (Although ENGINE_set_RSA must also have been called?) The same command line with OpenSSL 1.1.1a fails in the engine's rsa_sign method because of some missing setup that happens in the load_privkey method (which has not been called as no -engine or -keyform ENGINE option were provided). Is this an intended change of behaviour? If yes, how can I prevent 1.1.1a from using the engine's RSA method without having to change the configuration file? Our current workaround is to repoint OPENSSL_CONF to a duplicate of the file in which the line "engines = engine_section" has been commented out. Then the engine is not referenced at all. As the configuration files contain a large number of other settings managing two almost identical copies is not desirable. The implementation of our engines closely follows those included in the source distribution (e.g. e_capi.c). bind_helper calls bind_enginename which registers all the functions and methods via ENGINE_set_init_function etc. including ENGINE_set_RSA and ENGINE_set_EC. It appears that OpenSSL calls the bind_helper for every engine that appears in the configuration file. If this includes two engines then always both init functions are called. With an -engine option on the command line only the specified engine's method is used for the actual operation. With no -engine option OpenSSL 1.0.2 uses its internal software method whereas OpenSSL 1.1.1a tries to use whatever engine happens to have been registered first (appears first in [engine_section]). 
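
(Aside: assuming the standard ENGINE configuration module described in
config(5) is in use, the behaviour described above is usually governed by the
default_algorithms command in the engine's own section, which registers the
engine's methods as process-wide defaults. A sketch, with a placeholder
engine id and path:

[engine_section]
myengine = myengine_section

[myengine_section]
# "myengine" and the path below are placeholders for the real engine
engine_id = myengine
dynamic_path = /path/to/libmyengine.so
init = 1
# With this line present (ALL, or e.g. "RSA"), the engine's methods are
# installed as defaults and used even without -engine on the command line;
# omitting or narrowing it changes that.
default_algorithms = ALL

If the existing configuration contains default_algorithms = ALL, removing or
narrowing that line is worth testing before duplicating whole config files;
this is a guess at the cause, not a confirmed diagnosis.)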
Assuming our engines' init function is always called, where is the right place to do any stuff that should only happen if that particular engine is actually set via the -engine option? Regards, Andrew. From hkario at redhat.com Tue Feb 26 13:56:45 2019 From: hkario at redhat.com (Hubert Kario) Date: Tue, 26 Feb 2019 14:56:45 +0100 Subject: s_server/s_client on checking middlebox compatibility In-Reply-To: References: Message-ID: <1592309.X9iGMNNUvS@pintsize.usersys.redhat.com> On Tuesday, 26 February 2019 07:22:52 CET John Jiang wrote: > Is it possible to check if peer implements middlebox compatibility by > s_server/s_client? > It looks the test tools don't care this point. > For example, if a server doesn't send change_cipher_spec after > HelloRetryRequest, s_client still feels fine.That's not bad. But can I > setup these tools to check middlebox compatibility? As Matt said, there's no human-readable output that shows that. tlsfuzzer does verify if the server sends ChangeCipherSpec and at what point in the connection (all scripts expect it right after ServerHello or right after HelloRetryRequest depending on connection). You can use https://github.com/tomato42/tlsfuzzer/blob/master/scripts/test-tls13-conversation.py https://github.com/tomato42/tlsfuzzer/blob/master/scripts/test-tls13-hrr.py and https://github.com/tomato42/tlsfuzzer/blob/master/scripts/test-tls13-session-resumption.py respectively to test regular handshake, one with HelloRetryRequest and one that performs session resumption. -- Regards, Hubert Kario Senior Quality Engineer, QE BaseOS Security team Web: www.cz.redhat.com Red Hat Czech s.r.o., Purky?ova 115, 612 00 Brno, Czech Republic -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: This is a digitally signed message part. URL: From Michael.Wojcik at microfocus.com Tue Feb 26 14:00:13 2019 From: Michael.Wojcik at microfocus.com (Michael Wojcik) Date: Tue, 26 Feb 2019 14:00:13 +0000 Subject: How to not use a configured engine? In-Reply-To: References: Message-ID: > From: openssl-users [mailto:openssl-users-bounces at openssl.org] On Behalf Of > Lynch, Andrew > Sent: Tuesday, February 26, 2019 07:53 > > Our current workaround is to repoint OPENSSL_CONF to a duplicate of > the file in which the line "engines = engine_section" has been commented out. > Then the engine is not referenced at all. As the configuration files contain > a large number of other settings managing two almost identical copies is not > desirable. Is this a case where the .include mechanism or the $ENV::name syntax could resolve the duplicate-configuration issue? That's the approach I've taken with my test CA. See https://www.openssl.org/docs/man1.1.1/man5/config.html. Unfortunately I haven't looked at how the engine system may have changed in 1.1.1, so I can't respond to your main question. -- Michael Wojcik Distinguished Engineer, Micro Focus From openssl at openssl.org Tue Feb 26 14:54:20 2019 From: openssl at openssl.org (OpenSSL) Date: Tue, 26 Feb 2019 14:54:20 +0000 Subject: OpenSSL version 1.0.2r published Message-ID: <20190226145420.GA1729@openssl.org> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA512 OpenSSL version 1.0.2r released =============================== OpenSSL - The Open Source toolkit for SSL/TLS https://www.openssl.org/ The OpenSSL project team is pleased to announce the release of version 1.0.2r of our open source toolkit for SSL/TLS. 
For details of changes and known issues see the release notes at: https://www.openssl.org/news/openssl-1.0.2-notes.html OpenSSL 1.0.2r is available for download via HTTP and FTP from the following master locations (you can find the various FTP mirrors under https://www.openssl.org/source/mirror.html): * https://www.openssl.org/source/ * ftp://ftp.openssl.org/source/ The distribution file name is: o openssl-1.0.2r.tar.gz Size: 5348369 SHA1 checksum: b9aec1fa5cedcfa433aed37c8fe06b0ab0ce748d SHA256 checksum: ae51d08bba8a83958e894946f15303ff894d75c2b8bbd44a852b64e3fe11d0d6 The checksums were calculated using the following commands: openssl sha1 openssl-1.0.2r.tar.gz openssl sha256 openssl-1.0.2r.tar.gz Yours, The OpenSSL Project Team. -----BEGIN PGP SIGNATURE----- iQEzBAEBCgAdFiEEhlersmDwVrHlGQg52cTSbQ5gRJEFAlx1S0oACgkQ2cTSbQ5g RJH9UQf9Gi2WrDyOwxtlu84f7vlcQX1zfG+Fs10OZgYi6rvD6VprJJewsWaJI9S+ O5LDv0p1aCFNgcTc57oNZCb+Or8xWdhvTOc5cNa408nFVK4wVazTdzKRFLECZEL4 E0vs22XNEIhrPHuHAJnuYaP12232Wymn9VHSbWeNl2ZR7Vj64rJ8Lqp8w+YpBU5+ eGidbLSKC29r8VV/6/9ei8PUSGEpy6ci8Tp+oMn6iVgMx6fuAnVDWDL32kWbzdAB r/OUee06D+QQFQMAJGAiDRxbC4XuNaLCiysr8a7QoltsxJjCaq7H9zRlArv3iE27 /fuwegvHE+upW2k3J1ZCL/Dlq+MuxA== =MwGd -----END PGP SIGNATURE----- From openssl at openssl.org Tue Feb 26 14:54:38 2019 From: openssl at openssl.org (OpenSSL) Date: Tue, 26 Feb 2019 14:54:38 +0000 Subject: OpenSSL version 1.1.1b published Message-ID: <20190226145438.GA1980@openssl.org> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA512 OpenSSL version 1.1.1b released =============================== OpenSSL - The Open Source toolkit for SSL/TLS https://www.openssl.org/ The OpenSSL project team is pleased to announce the release of version 1.1.1b of our open source toolkit for SSL/TLS. For details of changes and known issues see the release notes at: https://www.openssl.org/news/openssl-1.1.1-notes.html OpenSSL 1.1.1b is available for download via HTTP and FTP from the following master locations (you can find the various FTP mirrors under https://www.openssl.org/source/mirror.html): * https://www.openssl.org/source/ * ftp://ftp.openssl.org/source/ The distribution file name is: o openssl-1.1.1b.tar.gz Size: 8213737 SHA1 checksum: e9710abf5e95c48ebf47991b10cbb48c09dae102 SHA256 checksum: 5c557b023230413dfb0756f3137a13e6d726838ccd1430888ad15bfb2b43ea4b The checksums were calculated using the following commands: openssl sha1 openssl-1.1.1b.tar.gz openssl sha256 openssl-1.1.1b.tar.gz Yours, The OpenSSL Project Team. 
-----BEGIN PGP SIGNATURE----- iQEzBAEBCgAdFiEEhlersmDwVrHlGQg52cTSbQ5gRJEFAlx1SgkACgkQ2cTSbQ5g RJEc5QgAoB+R93O6fi3QBaLM6zcZQWcq0y/c2fEo+tybClP4DfUudJij5cjlfzfN W0srK+qq15PJPxbH02fUcUdIBHF5OdQv0XMIS5ueN1clvGTcvpqdmyvE7INqouFd xUGbRzNw8hN4BY/skamuc1uxMXQUFx4ek2W12q4D/oCSOuPrS411uSev3pACLyK8 Bchcs/TLSreaz46ckRC+fiQ9jgBKjcA5q4pC/kIn+KGrfoRZz+no4cQlZS84NFgN BbT4bn9mV1+f1PksSlBZ6r+YSeaFrXP/e0sfTuMGYiXUx+XPQ+uMHjiljAGuYYz3 Nr2GqL9nHLvJ5xMBJmJCes4zkd0J9g== =Wh0M -----END PGP SIGNATURE----- From openssl at openssl.org Tue Feb 26 14:59:17 2019 From: openssl at openssl.org (OpenSSL) Date: Tue, 26 Feb 2019 14:59:17 +0000 Subject: OpenSSL Security Advisory Message-ID: <20190226145917.GA5404@openssl.org> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA512 OpenSSL Security Advisory [26 February 2019] ============================================ 0-byte record padding oracle (CVE-2019-1559) ============================================ Severity: Moderate If an application encounters a fatal protocol error and then calls SSL_shutdown() twice (once to send a close_notify, and once to receive one) then OpenSSL can respond differently to the calling application if a 0 byte record is received with invalid padding compared to if a 0 byte record is received with an invalid MAC. If the application then behaves differently based on that in a way that is detectable to the remote peer, then this amounts to a padding oracle that could be used to decrypt data. In order for this to be exploitable "non-stitched" ciphersuites must be in use. Stitched ciphersuites are optimised implementations of certain commonly used ciphersuites. Also the application must call SSL_shutdown() twice even if a protocol error has occurred (applications should not do this but some do anyway). This issue does not impact OpenSSL 1.1.1 or 1.1.0. OpenSSL 1.0.2 users should upgrade to 1.0.2r. This issue was discovered by Juraj Somorovsky, Robert Merget and Nimrod Aviram, with additional investigation by Steven Collison and Andrew Hourselt. It was reported to OpenSSL on 10th December 2018. Note ==== OpenSSL 1.0.2 and 1.1.0 are currently only receiving security updates. Support for 1.0.2 will end on 31st December 2019. Support for 1.1.0 will end on 11th September 2019. Users of these versions should upgrade to OpenSSL 1.1.1. References ========== URL for this Security Advisory: https://www.openssl.org/news/secadv/20190226.txt Note: the online version of the advisory may be updated with additional details over time. For details of OpenSSL severity classifications please see: https://www.openssl.org/policies/secpolicy.html -----BEGIN PGP SIGNATURE----- iQEzBAEBCgAdFiEEhlersmDwVrHlGQg52cTSbQ5gRJEFAlx1U+gACgkQ2cTSbQ5g RJFnlAf/U9yZtCz59BjgD0Kh7Eya5KxlmUWItdBu1r3DwbY4KDgL/Wwh4UxG3Qim D7Ht5Xsta4iAywrMRI/iPEdEQct8pcpWjq4/65lEbTYjToEnNWhIeWHH/Lw3Jfza gcVpIfbWoWc7OL7U4uPQuGWcb/PO8fJXF+HcCdZ+kIuut0peMSgN5sK/wBnmSdsM +sJXCei+jwVy/9WvCBMOooX7D8oerJ6NX12n2cNAYH/K7e2deiPZ7D/HB7T9MSv/ BgOi1UqFzBxcsNhFpY5NMTHG8pl0bmS0OiZ9bThN0YHwxFVJz6ZsVX/L5cYOAbm/ mJAdDE24XMmUAOlVZrROzCZKXADx/A== =8h8L -----END PGP SIGNATURE----- From tshort at akamai.com Tue Feb 26 15:03:41 2019 From: tshort at akamai.com (Short, Todd) Date: Tue, 26 Feb 2019 15:03:41 +0000 Subject: Stitched vs non-Stitched Ciphersuites Message-ID: <55324953-2695-40D1-B260-B26CBBBD658B@akamai.com> The latest security advisory: https://www.openssl.org/news/secadv/20190226.txt mentions stitched vs. non-stitched ciphersuites, but doesn?t really elaborate on which ciphersuites are stitched and non-stitched. 
"In order for this to be exploitable "non-stitched" ciphersuites must be in use. Stitched ciphersuites are optimised implementations of certain commonly used ciphersuites." Can someone give some examples of both? -- -Todd Short // tshort at akamai.com // "One if by land, two if by sea, three if by the Internet." -------------- next part -------------- An HTML attachment was scrubbed... URL: From matt at openssl.org Tue Feb 26 15:40:21 2019 From: matt at openssl.org (Matt Caswell) Date: Tue, 26 Feb 2019 15:40:21 +0000 Subject: Stitched vs non-Stitched Ciphersuites In-Reply-To: <55324953-2695-40D1-B260-B26CBBBD658B@akamai.com> References: <55324953-2695-40D1-B260-B26CBBBD658B@akamai.com> Message-ID: <1d7843f2-a650-fb81-c59b-8027a8194841@openssl.org> On 26/02/2019 15:03, Short, Todd via openssl-users wrote: > The latest security advisory: > > https://www.openssl.org/news/secadv/20190226.txt > > mentions stitched vs. non-stitched ciphersuites, but doesn?t really elaborate on > which ciphersuites are stitched and non-stitched. The actual list in use is platform specific - the stitched ciphers are based on asm implementations. Libssl in 1.0.2 knows about these stitched ciphers: https://github.com/openssl/openssl/blob/56ff0f643482b19f7b2d7ed532dfb94ed3a4e294/ssl/ssl_ciph.c#L651-L671 Any TLS ciphersuite based on the above ciphers will use the stitched implementation if it is available on that platform. So, for example, if a stitched implementation of AES-128-CBC-HMAC-SHA1 is available on your platform then it will be used if you negotiate the AES128-SHA ciphersuite (aka TLS_RSA_WITH_AES_128_CBC_SHA). Similarly it will be used if you negotiate DH-RSA-AES128-SHA (aka TLS_DH_RSA_WITH_AES_128_CBC_SHA) The combined encrypt and mac operation will be performed in one go by the stitched implementation. If you don't have a stitched implementation then the encrypt and mac operations are performed individually. Matt > >> "In order for this to be exploitable "non-stitched" ciphersuites must be in >> use. Stitched ciphersuites are optimised implementations of certain commonly >> used ciphersuites." > > Can someone give some examples of both? > > -- > -Todd Short > // tshort at akamai.com > // "One if by land, two if by sea, three if by the Internet." > From tshort at akamai.com Tue Feb 26 15:44:35 2019 From: tshort at akamai.com (Short, Todd) Date: Tue, 26 Feb 2019 15:44:35 +0000 Subject: Stitched vs non-Stitched Ciphersuites In-Reply-To: <1d7843f2-a650-fb81-c59b-8027a8194841@openssl.org> References: <55324953-2695-40D1-B260-B26CBBBD658B@akamai.com> <1d7843f2-a650-fb81-c59b-8027a8194841@openssl.org> Message-ID: Thanks Matt, So, just the cipher+MAC matter, the authentication/key-exchange are irrelevant. What about AEAD ciphers? Are they considered "stitched"? -- -Todd Short // tshort at akamai.com // "One if by land, two if by sea, three if by the Internet." On Feb 26, 2019, at 10:40 AM, Matt Caswell > wrote: On 26/02/2019 15:03, Short, Todd via openssl-users wrote: The latest security advisory: https://www.openssl.org/news/secadv/20190226.txt mentions stitched vs. non-stitched ciphersuites, but doesn?t really elaborate on which ciphersuites are stitched and non-stitched. The actual list in use is platform specific - the stitched ciphers are based on asm implementations. 
Libssl in 1.0.2 knows about these stitched ciphers: https://github.com/openssl/openssl/blob/56ff0f643482b19f7b2d7ed532dfb94ed3a4e294/ssl/ssl_ciph.c#L651-L671 Any TLS ciphersuite based on the above ciphers will use the stitched implementation if it is available on that platform. So, for example, if a stitched implementation of AES-128-CBC-HMAC-SHA1 is available on your platform then it will be used if you negotiate the AES128-SHA ciphersuite (aka TLS_RSA_WITH_AES_128_CBC_SHA). Similarly it will be used if you negotiate DH-RSA-AES128-SHA (aka TLS_DH_RSA_WITH_AES_128_CBC_SHA) The combined encrypt and mac operation will be performed in one go by the stitched implementation. If you don't have a stitched implementation then the encrypt and mac operations are performed individually. Matt "In order for this to be exploitable "non-stitched" ciphersuites must be in use. Stitched ciphersuites are optimised implementations of certain commonly used ciphersuites." Can someone give some examples of both? -- -Todd Short // tshort at akamai.com // "One if by land, two if by sea, three if by the Internet." -------------- next part -------------- An HTML attachment was scrubbed... URL: From matt at openssl.org Tue Feb 26 16:42:10 2019 From: matt at openssl.org (Matt Caswell) Date: Tue, 26 Feb 2019 16:42:10 +0000 Subject: Stitched vs non-Stitched Ciphersuites In-Reply-To: References: <55324953-2695-40D1-B260-B26CBBBD658B@akamai.com> <1d7843f2-a650-fb81-c59b-8027a8194841@openssl.org> Message-ID: <8c801444-c340-49d9-0f92-06d582b23f15@openssl.org> On 26/02/2019 15:44, Short, Todd wrote: > Thanks Matt,? > > So, just the cipher+MAC matter, the authentication/key-exchange are irrelevant. > > What about AEAD ciphers? Are they considered "stitched"? No, they are not "stitched" but they are not impacted by this issue. We should probably make that clearer in the advisory. Matt > > -- > -Todd Short > // tshort at akamai.com > // "One if by land, two if by sea, three if by the Internet." > >> On Feb 26, 2019, at 10:40 AM, Matt Caswell > > wrote: >> >> >> >> On 26/02/2019 15:03, Short, Todd via openssl-users wrote: >>> The latest security advisory: >>> >>> https://www.openssl.org/news/secadv/20190226.txt >>> >>> mentions stitched vs. non-stitched ciphersuites, but doesn?t really elaborate on >>> which ciphersuites are stitched and non-stitched. >> >> The actual list in use is platform specific - the stitched ciphers are based on >> asm implementations. Libssl in 1.0.2 knows about these stitched ciphers: >> >> https://github.com/openssl/openssl/blob/56ff0f643482b19f7b2d7ed532dfb94ed3a4e294/ssl/ssl_ciph.c#L651-L671 >> >> Any TLS ciphersuite based on the above ciphers will use the stitched >> implementation if it is available on that platform. >> >> So, for example, if a stitched implementation of AES-128-CBC-HMAC-SHA1 is >> available on your platform then it will be used if you negotiate the AES128-SHA >> ciphersuite (aka TLS_RSA_WITH_AES_128_CBC_SHA). Similarly it will be used if you >> negotiate DH-RSA-AES128-SHA (aka TLS_DH_RSA_WITH_AES_128_CBC_SHA) The combined >> encrypt and mac operation will be performed in one go by the stitched >> implementation. If you don't have a stitched implementation then the encrypt and >> mac operations are performed individually. >> >> Matt >> >> >>> >>>> "In order for this to be exploitable "non-stitched" ciphersuites must be in >>>> use. Stitched ciphersuites are optimised implementations of certain commonly >>>> used ciphersuites." >>> >>> Can someone give some examples of both? 
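As an aside for anyone wanting to check their own build: whether a stitched implementation is actually compiled in and usable can be probed directly from libcrypto. The following is only a rough standalone sketch (not from this thread), and the exact availability rules are platform- and version-specific:

    #include <stdio.h>
    #include <openssl/evp.h>

    int main(void)
    {
        /* EVP_aes_128_cbc_hmac_sha1() returns NULL when no stitched
         * AES-128-CBC + HMAC-SHA1 implementation is usable on this
         * build/CPU, so a NULL check serves as a rough probe. */
        const EVP_CIPHER *stitched = EVP_aes_128_cbc_hmac_sha1();

        printf("stitched AES-128-CBC-HMAC-SHA1: %s\n",
               stitched != NULL ? "available" : "not available");
        return 0;
    }

On builds without the relevant assembler support this reports "not available" and the separate (non-stitched) cipher and MAC code paths are used instead.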
>>> >>> -- >>> -Todd Short >>> // tshort at akamai.com >>> // "One if by land, two if by sea, three if by the Internet." >>> > From rsalz at akamai.com Tue Feb 26 17:23:24 2019 From: rsalz at akamai.com (Salz, Rich) Date: Tue, 26 Feb 2019 17:23:24 +0000 Subject: AES-cipher offload to engine in openssl-fips In-Reply-To: References: Message-ID: * Which means in fips mode ciphers never gets offloaded to engine? * All other functions (digest, RSA etc) , it first updates to fips function, and then engine function. Why only ciphers has this different behaviour? That seems like a bug. In FIPS mode you can only use the FIPS-validated code, which means that you *have* to use the OpenSSL implementation. If you do not use the OpenSSL implementation, then you cannot claim to be FIPS validated, and you must get your validation for your implementation. -------------- next part -------------- An HTML attachment was scrubbed... URL: From joebrowning99 at gmail.com Tue Feb 26 20:18:12 2019 From: joebrowning99 at gmail.com (Joe Browning) Date: Tue, 26 Feb 2019 15:18:12 -0500 Subject: OpenSSL hash memory leak In-Reply-To: <95483a56-977c-a52a-0fb1-6eac478148fb@wisemo.com> References: <2248720.Mk6Wbyczpt@pintsize.usersys.redhat.com> <95483a56-977c-a52a-0fb1-6eac478148fb@wisemo.com> Message-ID: * EVP_DigestFinal_ex(mdctx, hash_data.md_value, &hash_data.md_len)* *Missing reference there for value?* *Joe* On Mon, Feb 25, 2019, 09:31 Jakob Bohm via openssl-users < openssl-users at openssl.org> wrote: > On 25/02/2019 15:05, Hubert Kario wrote: > > On Sunday, 24 February 2019 11:34:18 CET prithiraj das wrote: > >> If it helps, sometimes I do get the following errors for the same and > >> subsequent reboot: > >> > >> Alignment trap: sh (601) PC=0xb6e008f8 Instr=0x4589c0d7 > Address=0x000000d7 > >> FSR 0x801 > >> Alignment trap: login (584) PC=0xb6e6ab00 Instr=0xe5951000 > >> Address=0xd27cdc63 FSR 0x001 > >> Kernel panic - not syncing: Attempted to kill init! exitcode=0x0000000b > > that doesn't look like openssl problem at all, openssl may trigger it, > but > > only because it's using the system to its fullest potential, not because > there > > are issues in openssl > > > > I'd suggest trying memtest86 and trying to capture full kernel > stacktrace with > > netconsole, in this order. But this mailing list is not a good place for > > follow up on this. > > Just FYI. "Alignment trap" is not usually a hardware issue. It > is virtually always a software error (specifically, accessing a > 16, 32, 64, 80 or 128 bit value through an insufficiently aligned > pointer). > > A stack trace is needed to determine if this is a kernel or user > mode issue, and if so where. > > Of cause there is the remote possibility that a hardware error > caused a pointer to have a value it shouldn't have according to > the code. > > However unless the error is actually in OpenSSL code, there is > little that this list can do to fix the problem. > > Given the specific text of the other error message, I hope you > are not somehow running OpenSSL itself as process 1 (init), as > that would be highly unusual. > > > Enjoy > > Jakob > -- > Jakob Bohm, CIO, Partner, WiseMo A/S. https://www.wisemo.com > Transformervej 29, 2860 S?borg, Denmark. Direct +45 31 13 16 10 > This public discussion message is non-binding and may contain errors. > WiseMo - Remote Service Management for PCs, Phones and Embedded > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From walt at safelogic.com Tue Feb 26 21:24:32 2019 From: walt at safelogic.com (Walter Paley) Date: Tue, 26 Feb 2019 13:24:32 -0800 Subject: AES-cipher offload to engine in openssl-fips Message-ID: <6D4F8144-786E-4CB2-B6FE-30760520F2F0@safelogic.com> To clarify here, using the OpenSSL FIPS implementation does not allow you to claim ?FIPS Validated?, rather this would be ?FIPS Compliant?. If you want to claim ?FIPS Validated?, you must get your own validation for your implementation regardless of what you are using, OpenSSL FIPS module or otherwise. - Walt Walter Paley Walt at SafeLogic.com SafeLogic - FIPS 140-2 Simplified From hongcho at gmail.com Tue Feb 26 23:28:17 2019 From: hongcho at gmail.com (Hong Cho) Date: Wed, 27 Feb 2019 08:28:17 +0900 Subject: [openssl-project] OpenSSL version 1.0.2q published In-Reply-To: <20181120141700.GA29541@openssl.org> References: <20181120141700.GA29541@openssl.org> Message-ID: I see no code change between 1.0.2q and 1.0.2r. ------ # diff -dup openssl-1.0.2q openssl-1.0.2r |& grep '^diff' | awk '{print $4}' openssl-1.0.2r/CHANGES openssl-1.0.2r/Makefile openssl-1.0.2r/Makefile.org openssl-1.0.2r/NEWS openssl-1.0.2r/README openssl-1.0.2r/openssl.spec hongch at hongch_bldx:~/downloads> diff -dup openssl-1.0.2q openssl-1.0.2r | & grep '^Only' Only in openssl-1.0.2q: Makefile.bak ------ It's supposed have a fix for CVE-2019-1559? Am I missing something? Hong. On Tue, Nov 20, 2018 at 11:17 PM OpenSSL wrote: > -----BEGIN PGP SIGNED MESSAGE----- > Hash: SHA512 > > > OpenSSL version 1.0.2q released > =============================== > > OpenSSL - The Open Source toolkit for SSL/TLS > https://www.openssl.org/ > > The OpenSSL project team is pleased to announce the release of > version 1.0.2q of our open source toolkit for SSL/TLS. For details > of changes and known issues see the release notes at: > > https://www.openssl.org/news/openssl-1.0.2-notes.html > > OpenSSL 1.0.2q is available for download via HTTP and FTP from the > following master locations (you can find the various FTP mirrors under > https://www.openssl.org/source/mirror.html): > > * https://www.openssl.org/source/ > * ftp://ftp.openssl.org/source/ > > The distribution file name is: > > o openssl-1.0.2q.tar.gz > Size: 5345604 > SHA1 checksum: 692f5f2f1b114f8adaadaa3e7be8cce1907f38c5 > SHA256 checksum: > 5744cfcbcec2b1b48629f7354203bc1e5e9b5466998bbccc5b5fcde3b18eb684 > > The checksums were calculated using the following commands: > > openssl sha1 openssl-1.0.2q.tar.gz > openssl sha256 openssl-1.0.2q.tar.gz > > Yours, > > The OpenSSL Project Team. > > -----BEGIN PGP SIGNATURE----- > > iQEzBAEBCgAdFiEEhlersmDwVrHlGQg52cTSbQ5gRJEFAlv0D/MACgkQ2cTSbQ5g > RJHZwQf/XVVXUUPD6ybAWXzWTAhb4kECMC7ahiEuLwO82IF8dafNNGLWVKU4qD5Q > oHCBuHq8UUHPo1s+YeR+3phH0it8xZNUvpDw4BPFlLNkev16+yYJudl2YE9asVep > 1Hup97zhSVfF7YS3o4r3TFL6VeAeC0XLHNItIYznldZ7oiI4iCvSH3rZ3Sb3O6lL > EpSu3CYqgpbUI09aSZDdwYaUwj7j2KGf3D+U8U+bHY7d47GdvykSk18l1Mt2m/0K > 63gDR4Nl+dgkLu6BALuqT79vhkRdiKWV4+e0GhvZPpjpoWBveYY1Q7nkfjy0Sh7j > womsen61sS073bbdHZX6LoVuAsQbOw== > =WXDE > -----END PGP SIGNATURE----- > _______________________________________________ > openssl-project mailing list > openssl-project at openssl.org > https://mta.openssl.org/mailman/listinfo/openssl-project > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From raysatiro at yahoo.com Tue Feb 26 23:50:56 2019 From: raysatiro at yahoo.com (Ray Satiro) Date: Tue, 26 Feb 2019 18:50:56 -0500 Subject: [openssl-project] OpenSSL version 1.0.2q published In-Reply-To: References: <20181120141700.GA29541@openssl.org> Message-ID: <3595d832-be18-9079-a3ce-cf832a805660@yahoo.com> On 2/26/2019 6:28 PM, Hong Cho wrote: > I see no code change between 1.0.2q and 1.0.2r. > > ------ > # diff -dup openssl-1.0.2q openssl-1.0.2r |& grep '^diff' | awk > '{print $4}' > openssl-1.0.2r/CHANGES > openssl-1.0.2r/Makefile > openssl-1.0.2r/Makefile.org > openssl-1.0.2r/NEWS > openssl-1.0.2r/README > openssl-1.0.2r/openssl.spec > hongch at hongch_bldx:~/downloads> diff -dup openssl-1.0.2q > openssl-1.0.2r | & grep '^Only' > Only in openssl-1.0.2q: Makefile.bak > ------ > > It's supposed have a fix for CVE-2019-1559? Am I missing something? add recursive ? -r? --recursive? Recursively compare any subdirectories found. -------------- next part -------------- An HTML attachment was scrubbed... URL: From hongcho at gmail.com Tue Feb 26 23:55:53 2019 From: hongcho at gmail.com (Hong Cho) Date: Wed, 27 Feb 2019 08:55:53 +0900 Subject: [openssl-project] OpenSSL version 1.0.2q published In-Reply-To: <3595d832-be18-9079-a3ce-cf832a805660@yahoo.com> References: <20181120141700.GA29541@openssl.org> <3595d832-be18-9079-a3ce-cf832a805660@yahoo.com> Message-ID: Thanks. My mistake. Hong. On Wed, Feb 27, 2019 at 8:51 AM Ray Satiro via openssl-users < openssl-users at openssl.org> wrote: > On 2/26/2019 6:28 PM, Hong Cho wrote: > > I see no code change between 1.0.2q and 1.0.2r. > > ------ > # diff -dup openssl-1.0.2q openssl-1.0.2r |& grep '^diff' | awk '{print > $4}' > openssl-1.0.2r/CHANGES > openssl-1.0.2r/Makefile > openssl-1.0.2r/Makefile.org > openssl-1.0.2r/NEWS > openssl-1.0.2r/README > openssl-1.0.2r/openssl.spec > hongch at hongch_bldx:~/downloads> diff -dup openssl-1.0.2q openssl-1.0.2r | > & grep '^Only' > Only in openssl-1.0.2q: Makefile.bak > ------ > > It's supposed have a fix for CVE-2019-1559? Am I missing something? > > > add recursive > > -r --recursive Recursively compare any subdirectories found. > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From shinelight at shininglightpro.com Wed Feb 27 01:37:38 2019 From: shinelight at shininglightpro.com (Thomas J. Hruska) Date: Tue, 26 Feb 2019 18:37:38 -0700 Subject: OpenSSL version 1.1.1b published In-Reply-To: <20190226145438.GA1980@openssl.org> References: <20190226145438.GA1980@openssl.org> Message-ID: On 2/26/2019 7:54 AM, OpenSSL wrote: > The distribution file name is: > > o openssl-1.1.1b.tar.gz > Size: 8213737 > SHA1 checksum: e9710abf5e95c48ebf47991b10cbb48c09dae102 > SHA256 checksum: 5c557b023230413dfb0756f3137a13e6d726838ccd1430888ad15bfb2b43ea4b Unlike previous releases, this tar-gzipped file contains a 52 byte file called 'pax_global_header'. The contents of the file contain a single line of text: 52 comment=50eaac9f3337667259de725451f201e784599687 -- Thomas Hruska Shining Light Productions Home of BMP2AVI and Win32 OpenSSL. http://www.slproweb.com/ From john.sha.jiang at gmail.com Wed Feb 27 02:24:38 2019 From: john.sha.jiang at gmail.com (John Jiang) Date: Wed, 27 Feb 2019 10:24:38 +0800 Subject: s_server/s_client on checking middlebox compatibility In-Reply-To: <1592309.X9iGMNNUvS@pintsize.usersys.redhat.com> References: <1592309.X9iGMNNUvS@pintsize.usersys.redhat.com> Message-ID: I had tried TLS Fuzzer, and it worked for me. 
I just wished that OpenSSL can do the similar things. Thanks! On Tue, Feb 26, 2019 at 9:56 PM Hubert Kario wrote: > On Tuesday, 26 February 2019 07:22:52 CET John Jiang wrote: > > Is it possible to check if peer implements middlebox compatibility by > > s_server/s_client? > > It looks the test tools don't care this point. > > For example, if a server doesn't send change_cipher_spec after > > HelloRetryRequest, s_client still feels fine.That's not bad. But can I > > setup these tools to check middlebox compatibility? > > As Matt said, there's no human-readable output that shows that. > > tlsfuzzer does verify if the server sends ChangeCipherSpec and at what > point in the connection (all scripts expect it right after ServerHello or > right after HelloRetryRequest depending on connection). > > You can use > > https://github.com/tomato42/tlsfuzzer/blob/master/scripts/test-tls13-conversation.py > https://github.com/tomato42/tlsfuzzer/blob/master/scripts/test-tls13-hrr.py > and > > https://github.com/tomato42/tlsfuzzer/blob/master/scripts/test-tls13-session-resumption.py > respectively to test regular handshake, one with HelloRetryRequest > and one that performs session resumption. > > -- > Regards, > Hubert Kario > Senior Quality Engineer, QE BaseOS Security team > Web: www.cz.redhat.com > Red Hat Czech s.r.o., Purky?ova 115, 612 00 Brno, Czech Republic -------------- next part -------------- An HTML attachment was scrubbed... URL: From mksarav at gmail.com Wed Feb 27 02:56:01 2019 From: mksarav at gmail.com (M K Saravanan) Date: Wed, 27 Feb 2019 10:56:01 +0800 Subject: CVE-2019-1559 advisory - what is "non-stiched" ciphersuite means? Message-ID: Hi, In the context of https://www.openssl.org/news/secadv/20190226.txt ====== In order for this to be exploitable "non-stitched" ciphersuites must be in use. ====== what is "non-stitched" ciphersuites means? with regards, Saravanan From Matthias.St.Pierre at ncp-e.com Wed Feb 27 05:05:10 2019 From: Matthias.St.Pierre at ncp-e.com (Dr. Matthias St. Pierre) Date: Wed, 27 Feb 2019 05:05:10 +0000 Subject: AW: OpenSSL version 1.1.1b published In-Reply-To: References: <20190226145438.GA1980@openssl.org> Message-ID: Hi Thomas, > Unlike previous releases, this tar-gzipped file contains a 52 byte file > called 'pax_global_header'. The contents of the file contain a single > line of text: > > 52 comment=50eaac9f3337667259de725451f201e784599687 my extracted tarball does not contain this file. This seems to be a bug of the tar command which was fixed in 1.14. https://lkml.org/lkml/2005/6/18/5 https://marc.info/?l=linux-kernel&m=111909182607985&w=2 HTH, Matthias From shinelight at shininglightpro.com Wed Feb 27 06:07:53 2019 From: shinelight at shininglightpro.com (Thomas J. Hruska) Date: Tue, 26 Feb 2019 23:07:53 -0700 Subject: AW: OpenSSL version 1.1.1b published In-Reply-To: References: <20190226145438.GA1980@openssl.org> Message-ID: On 2/26/2019 10:05 PM, Dr. Matthias St. Pierre wrote: > Hi Thomas, > >> Unlike previous releases, this tar-gzipped file contains a 52 byte file >> called 'pax_global_header'. The contents of the file contain a single >> line of text: >> >> 52 comment=50eaac9f3337667259de725451f201e784599687 > > my extracted tarball does not contain this file. This seems to be a bug of the tar command which was fixed in 1.14. > > https://lkml.org/lkml/2005/6/18/5 > https://marc.info/?l=linux-kernel&m=111909182607985&w=2 > > HTH, > Matthias Okay. Certain versions of 7-Zip seem to be affected. Just a FYI in case anyone else brings it up on the list. 
It's minor and didn't affect the extraction in any way other than being an extra file. -- Thomas Hruska Shining Light Productions Home of BMP2AVI and Win32 OpenSSL. http://www.slproweb.com/ From public at enkore.de Wed Feb 27 09:26:47 2019 From: public at enkore.de (Marian Beermann) Date: Wed, 27 Feb 2019 10:26:47 +0100 Subject: CVE-2019-1559 advisory - what is "non-stiched" ciphersuite means? In-Reply-To: References: Message-ID: <21bb2462-2ac4-6bf6-6811-a06ed9fbb921@enkore.de> "Stitching" is an optimization where you have algorithm A (e.g. AES-CBC) and algorithm B (e.g. HMAC-SHA2) working on the same data, and you interleave the instructions of A and B. (This can improve performance by increasing port and EU utilization relative to running A and B sequentially). I believe OpenSSL uses stitched implementations in TLS for AES-CBC + HMAC-SHA1/2, if they exist for the platform. Also note that "AEAD ciphersuites are not impacted", i.e. AES-GCM and ChaPoly are not impacted. Cheers, Marian Am 27.02.19 um 03:56 schrieb M K Saravanan: > Hi, > > In the context of https://www.openssl.org/news/secadv/20190226.txt > > ====== > In order for this to be exploitable "non-stitched" ciphersuites must be in use. > ====== > > what is "non-stitched" ciphersuites means? > > with regards, > Saravanan > From phpdev at ehrhardt.nl Wed Feb 27 09:09:38 2019 From: phpdev at ehrhardt.nl (Jan Ehrhardt) Date: Wed, 27 Feb 2019 10:09:38 +0100 Subject: AW: OpenSSL version 1.1.1b published References: <20190226145438.GA1980@openssl.org> Message-ID: <5rkc7edk0pv229d3oeuq0hp5b68cos0a3m@4ax.com> Thomas J. Hruska in gmane.comp.encryption.openssl.user (Tue, 26 Feb 2019 23:07:53 -0700): >On 2/26/2019 10:05 PM, Dr. Matthias St. Pierre wrote: >> Hi Thomas, >> >>> Unlike previous releases, this tar-gzipped file contains a 52 byte file >>> called 'pax_global_header'. The contents of the file contain a single >>> line of text: >>> >>> 52 comment=50eaac9f3337667259de725451f201e784599687 >> >> my extracted tarball does not contain this file. This seems to be a bug of the tar command which was fixed in 1.14. >> >> https://lkml.org/lkml/2005/6/18/5 >> https://marc.info/?l=linux-kernel&m=111909182607985&w=2 >> >> HTH, >> Matthias > >Okay. Certain versions of 7-Zip seem to be affected. Just a FYI in >case anyone else brings it up on the list. I ran into this using 7-Zip 18.05 (x64) on Windows, which is a fairly recent version. -- Jan From mann.patidar at gmail.com Wed Feb 27 11:07:13 2019 From: mann.patidar at gmail.com (Manish Patidar) Date: Wed, 27 Feb 2019 16:37:13 +0530 Subject: Zombie poddle and Goldendoodle vulnerablity Message-ID: Hi, There has been two vulnerability reported: golden doodle and zombie poddle. Does it impact openssl 1.1.1 or 1.0.2 version ? https://www.tripwire.com/state-of-security/vulnerability-management/zombie-poodle-goldendoodle/ Regards Manish -------------- next part -------------- An HTML attachment was scrubbed... URL: From matt at openssl.org Wed Feb 27 11:18:38 2019 From: matt at openssl.org (Matt Caswell) Date: Wed, 27 Feb 2019 11:18:38 +0000 Subject: Zombie poddle and Goldendoodle vulnerablity In-Reply-To: References: Message-ID: <019f5db2-c01e-4561-447d-0252d52463d2@openssl.org> On 27/02/2019 11:07, Manish Patidar wrote: > > Hi,? > There has been two vulnerability reported: golden doodle and zombie poddle.? > Does it impact openssl 1.1.1 or 1.0.2 version ?? 
> > https://www.tripwire.com/state-of-security/vulnerability-management/zombie-poodle-goldendoodle/ These issues haven't been reported to openssl-security. From that blog bost zombie poodle only seems to affect Citrix products (https://support.citrix.com/article/CTX240139). There are very little details about the "goldennoodle" vulnerability. Given that this hasn't been reported to us I would assume that OpenSSL is not vulnerable. Matt From sujiknair at gmail.com Wed Feb 27 11:45:21 2019 From: sujiknair at gmail.com (suji) Date: Wed, 27 Feb 2019 04:45:21 -0700 (MST) Subject: AES-cipher offload to engine in openssl-fips In-Reply-To: <6D4F8144-786E-4CB2-B6FE-30760520F2F0@safelogic.com> References: <6D4F8144-786E-4CB2-B6FE-30760520F2F0@safelogic.com> Message-ID: <1551267921689-0.post@n7.nabble.com> Thanks for the reply. With non-fips openssl, it is possible to write my own fips-module. I understood. But, is it possible for me to write a fips-compliant/fips validated "dynamic engine" with openssl-fips? Which allows me to offload "fips-compilant" functions to my engine "dynamically"? -- Sent from: http://openssl.6102.n7.nabble.com/OpenSSL-User-f3.html From sujiknair at gmail.com Wed Feb 27 11:49:35 2019 From: sujiknair at gmail.com (suji) Date: Wed, 27 Feb 2019 04:49:35 -0700 (MST) Subject: AES-cipher offload to engine in openssl-fips In-Reply-To: <1551267921689-0.post@n7.nabble.com> References: <6D4F8144-786E-4CB2-B6FE-30760520F2F0@safelogic.com> <1551267921689-0.post@n7.nabble.com> Message-ID: <1551268175617-0.post@n7.nabble.com> The requirement here is, to offload my "engine supported fips-compliant methods" to engine and other "fips-complaint" functions to openssl dynamically. Here I need to use openssl-fips module I guess. -- Sent from: http://openssl.6102.n7.nabble.com/OpenSSL-User-f3.html From tshort at akamai.com Wed Feb 27 11:53:36 2019 From: tshort at akamai.com (Short, Todd) Date: Wed, 27 Feb 2019 11:53:36 +0000 Subject: AES-cipher offload to engine in openssl-fips In-Reply-To: <1551267921689-0.post@n7.nabble.com> References: <6D4F8144-786E-4CB2-B6FE-30760520F2F0@safelogic.com>, <1551267921689-0.post@n7.nabble.com> Message-ID: <67B78AA9-7ED2-49AC-8A5F-5935F894A36E@akamai.com> No. The OpenSSL FIPS Module is not written that way. It should not be permitting any non-FIPS implementations (see Rich's email regarding a bug). You could write your own engine, get that FIPS certified, and run it with plain, vanilla OpenSSL. There's a design spec out for OpenSSL 3.0.0 that may allow you to have your own FIPS provider, which, I believe, would be the closest thing to what you want. -- -Todd Short // Sent from my iPhone // "One if by land, two if by sea, three if by the Internet." > On Feb 27, 2019, at 6:45 AM, suji wrote: > > Thanks for the reply. > > With non-fips openssl, it is possible to write my own fips-module. I > understood. > > But, is it possible for me to write a fips-compliant/fips validated "dynamic > engine" with openssl-fips? Which allows me to offload "fips-compilant" > functions to my engine "dynamically"? 
> > > > -- > Sent from: http://openssl.6102.n7.nabble.com/OpenSSL-User-f3.html From hkario at redhat.com Wed Feb 27 11:58:12 2019 From: hkario at redhat.com (Hubert Kario) Date: Wed, 27 Feb 2019 12:58:12 +0100 Subject: s_server/s_client on checking middlebox compatibility In-Reply-To: References: <1592309.X9iGMNNUvS@pintsize.usersys.redhat.com> Message-ID: <3398441.EgfBsoiVNS@pintsize.usersys.redhat.com> On Wednesday, 27 February 2019 03:24:38 CET John Jiang wrote: > I had tried TLS Fuzzer, and it worked for me. > I just wished that OpenSSL can do the similar things. The problem is that the middlebox compatibility mode is not defined strictly by the standard, and while all the popular TLS libraries (OpenSSL, Mozilla NSS, GnuTLS) agree on where the CCS should be inserted in the handshake, placing it in other locations may be necessary for some specific middleboxes. IOW, there is no one correct location for CCS, so if openssl just reported that the CCS was received (or if it was received at one specific place in handshake), it could be misleading. Also, let's be clear, middlebox compatibility mode is a thing because of bugs in other implementations, so it's better to spend time on basically anything else than polishing stuff around it > On Tue, Feb 26, 2019 at 9:56 PM Hubert Kario wrote: > > On Tuesday, 26 February 2019 07:22:52 CET John Jiang wrote: > > > Is it possible to check if peer implements middlebox compatibility by > > > s_server/s_client? > > > It looks the test tools don't care this point. > > > For example, if a server doesn't send change_cipher_spec after > > > HelloRetryRequest, s_client still feels fine.That's not bad. But can I > > > setup these tools to check middlebox compatibility? > > > > As Matt said, there's no human-readable output that shows that. > > > > tlsfuzzer does verify if the server sends ChangeCipherSpec and at what > > point in the connection (all scripts expect it right after ServerHello or > > right after HelloRetryRequest depending on connection). > > > > You can use > > > > https://github.com/tomato42/tlsfuzzer/blob/master/scripts/test-tls13-conve > > rsation.py > > https://github.com/tomato42/tlsfuzzer/blob/master/scripts/test-tls13-hrr. > > py and > > > > https://github.com/tomato42/tlsfuzzer/blob/master/scripts/test-tls13-sessi > > on-resumption.py respectively to test regular handshake, one with > > HelloRetryRequest and one that performs session resumption. > > > > -- > > Regards, > > Hubert Kario > > Senior Quality Engineer, QE BaseOS Security team > > Web: www.cz.redhat.com > > Red Hat Czech s.r.o., Purky?ova 115, 612 00 Brno, Czech Republic -- Regards, Hubert Kario Senior Quality Engineer, QE BaseOS Security team Web: www.cz.redhat.com Red Hat Czech s.r.o., Purky?ova 115, 612 00 Brno, Czech Republic -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: This is a digitally signed message part. URL: From Matthias.St.Pierre at ncp-e.com Wed Feb 27 12:00:55 2019 From: Matthias.St.Pierre at ncp-e.com (Matthias St. Pierre) Date: Wed, 27 Feb 2019 13:00:55 +0100 Subject: AW: OpenSSL version 1.1.1b published In-Reply-To: <5rkc7edk0pv229d3oeuq0hp5b68cos0a3m@4ax.com> References: <20190226145438.GA1980@openssl.org> <5rkc7edk0pv229d3oeuq0hp5b68cos0a3m@4ax.com> Message-ID: On 27.02.19 10:09, Jan Ehrhardt wrote: > Thomas J. Hruska in gmane.comp.encryption.openssl.user (Tue, 26 Feb 2019 > 23:07:53 -0700): >> On 2/26/2019 10:05 PM, Dr. Matthias St. 
Pierre wrote: >>> Hi Thomas, >>> >>>> Unlike previous releases, this tar-gzipped file contains a 52 byte file >>>> called 'pax_global_header'. The contents of the file contain a single >>>> line of text: >>>> >>>> 52 comment=50eaac9f3337667259de725451f201e784599687 >>> my extracted tarball does not contain this file. This seems to be a bug of the tar command which was fixed in 1.14. >>> >>> https://lkml.org/lkml/2005/6/18/5 >>> https://marc.info/?l=linux-kernel&m=111909182607985&w=2 >>> >>> HTH, >>> Matthias >> Okay. Certain versions of 7-Zip seem to be affected. Just a FYI in >> case anyone else brings it up on the list. > I ran into this using 7-Zip 18.05 (x64) on Windows, which is a fairly > recent version. Thanks for the Updates about 7-Zip. But IMHO it is not really an issue, just a little 'manufacturing byproduct'. As Linus wrote on the LKML mailing list: this file can safely be ignored/removed. Alternatively, you can view it as a feature, because this file actually contains useful information: It's the id of the commit from whose tree the tar file was created: https://github.com/openssl/openssl/commit/50eaac9f3337667259de725451f201e784599687 If it really disturbs you, you might want to get in touch with the 7-Zip Developers on their SourceForge Forum. https://sourceforge.net/p/sevenzip/discussion/search/?q=pax_global_header Regards, Matthias From phpdev at ehrhardt.nl Wed Feb 27 12:51:54 2019 From: phpdev at ehrhardt.nl (Jan Ehrhardt) Date: Wed, 27 Feb 2019 13:51:54 +0100 Subject: AW: OpenSSL version 1.1.1b published References: <20190226145438.GA1980@openssl.org> <5rkc7edk0pv229d3oeuq0hp5b68cos0a3m@4ax.com> <5rkc7edk0pv229d3oeuq0hp5b68cos0a3m-e09XROE/p8c@public.gmane.org> Message-ID: <6m1d7eld3e5qr281qpmnqjg7vgcmnkba7r@4ax.com> Matthias St. Pierre in gmane.comp.encryption.openssl.user (Wed, 27 Feb 2019 13:00:55 +0100): > >On 27.02.19 10:09, Jan Ehrhardt wrote: >> I ran into this using 7-Zip 18.05 (x64) on Windows, which is a fairly >> recent version. > >Thanks for the Updates about 7-Zip. But IMHO it is not really an issue, just a little 'manufacturing byproduct'. It does not bother me at all. I just ignored it. But Thomas was right in observing that it was different from the previous releases: OpenSSL 1.1.1a did not create that file when it was extracted by the same 7-zip version. -- Jan From mann.patidar at gmail.com Wed Feb 27 13:46:59 2019 From: mann.patidar at gmail.com (Manish Patidar) Date: Wed, 27 Feb 2019 19:16:59 +0530 Subject: Zombie poddle and Goldendoodle vulnerablity In-Reply-To: <019f5db2-c01e-4561-447d-0252d52463d2@openssl.org> References: <019f5db2-c01e-4561-447d-0252d52463d2@openssl.org> Message-ID: Does CVE-2019-1559 is related to these vulnerability. On Wed, 27 Feb 2019, 4:48 pm Matt Caswell, wrote: > > > On 27/02/2019 11:07, Manish Patidar wrote: > > > > Hi, > > There has been two vulnerability reported: golden doodle and zombie > poddle. > > Does it impact openssl 1.1.1 or 1.0.2 version ? > > > > > https://www.tripwire.com/state-of-security/vulnerability-management/zombie-poodle-goldendoodle/ > > These issues haven't been reported to openssl-security. From that blog bost > zombie poodle only seems to affect Citrix products > (https://support.citrix.com/article/CTX240139). There are very little > details > about the "goldennoodle" vulnerability. Given that this hasn't been > reported to > us I would assume that OpenSSL is not vulnerable. > > Matt > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From matt at openssl.org Wed Feb 27 13:48:08 2019 From: matt at openssl.org (Matt Caswell) Date: Wed, 27 Feb 2019 13:48:08 +0000 Subject: Zombie poddle and Goldendoodle vulnerablity In-Reply-To: References: <019f5db2-c01e-4561-447d-0252d52463d2@openssl.org> Message-ID: On 27/02/2019 13:46, Manish Patidar wrote: > Does CVE-2019-1559? is related to these vulnerability. > No, that is entirely different. Matt > > On Wed, 27 Feb 2019, 4:48 pm Matt Caswell, > wrote: > > > > On 27/02/2019 11:07, Manish Patidar wrote: > > > > Hi,? > > There has been two vulnerability reported: golden doodle and zombie poddle.? > > Does it impact openssl 1.1.1 or 1.0.2 version ?? > > > > > https://www.tripwire.com/state-of-security/vulnerability-management/zombie-poodle-goldendoodle/ > > These issues haven't been reported to openssl-security. From that blog bost > zombie poodle only seems to affect Citrix products > (https://support.citrix.com/article/CTX240139). There are very little details > about the "goldennoodle" vulnerability. Given that this hasn't been reported to > us I would assume that OpenSSL is not vulnerable. > > Matt > From christian at python.org Wed Feb 27 15:02:32 2019 From: christian at python.org (Christian Heimes) Date: Wed, 27 Feb 2019 16:02:32 +0100 Subject: OpenSSL 3.0 vs. SSL 3.0 Message-ID: Hi, I'm concerned about the version number of the upcoming major release of OpenSSL. "OpenSSL 3.0" just sounds and looks way too close to "SSL 3.0". It took us more than a decade to teach people that SSL 3.0 is bad and should be avoided in favor of TLS. In my humble opinion, it's problematic and confusing to use "OpenSSL 3.0" for the next major version of OpenSSL and first release of OpenSSL with SSL 3.0 support. You skipped version 2.0 for technical reasons, because (IIRC) 2.0 was used / reserved for FIPS mode. May I suggest that you also skip 3.0 for UX reasons and call the upcoming version "OpenSSL 4.0". That way you can avoid any confusion with SSL 3.0. Kind regards, Christian From mcr at sandelman.ca Wed Feb 27 16:04:09 2019 From: mcr at sandelman.ca (Michael Richardson) Date: Wed, 27 Feb 2019 11:04:09 -0500 Subject: shared libraries vs test cases Message-ID: <31242.1551283449@localhost> Running LDD on the binaries in test/* shows that they appear to link against the "system" copies of libssl and libcrypto. Perhaps something I'm missing is setting up LD_PRELOAD or some such so that the tests run the local copy of libssl/libcrypto, but I can't find that. Am I missing something? This is, I think, making it very difficult for me to bisect a problem. It seems to me that the test cases ought to be statically linked to make it easiest to know what code they are running. (This also makes it slightly easier to use gdb on them) -- ] Never tell me the odds! | ipv6 mesh networks [ ] Michael Richardson, Sandelman Software Works | IoT architect [ ] mcr at sandelman.ca http://www.sandelman.ca/ | ruby on rails [ From vieuxtech at gmail.com Wed Feb 27 16:23:49 2019 From: vieuxtech at gmail.com (Sam Roberts) Date: Wed, 27 Feb 2019 08:23:49 -0800 Subject: CVE-2019-1559 advisory - what is "non-stiched" ciphersuite means? In-Reply-To: <21bb2462-2ac4-6bf6-6811-a06ed9fbb921@enkore.de> References: <21bb2462-2ac4-6bf6-6811-a06ed9fbb921@enkore.de> Message-ID: It would have been helpful if the sec announcement had contained a specific list of cipher suites affected, even without the additional list of specific architectures vulnerable. 
Its hard to communicate clearly ATM to people which suites are or are not affected, so they can know if they are affected. Sam From vieuxtech at gmail.com Wed Feb 27 16:33:22 2019 From: vieuxtech at gmail.com (Sam Roberts) Date: Wed, 27 Feb 2019 08:33:22 -0800 Subject: Stitched vs non-Stitched Ciphersuites In-Reply-To: <8c801444-c340-49d9-0f92-06d582b23f15@openssl.org> References: <55324953-2695-40D1-B260-B26CBBBD658B@akamai.com> <1d7843f2-a650-fb81-c59b-8027a8194841@openssl.org> <8c801444-c340-49d9-0f92-06d582b23f15@openssl.org> Message-ID: On Tue, Feb 26, 2019 at 8:42 AM Matt Caswell wrote: > > What about AEAD ciphers? Are they considered "stitched"? > > No, they are not "stitched" but they are not impacted by this issue. We should > probably make that clearer in the advisory. That would be helpful! Even though this is fixed, would the general advice still be "avoid CBC in favour of AESCCM and AESGCM when using TLS1.2"? Or update to TLS1.3. From matt at openssl.org Wed Feb 27 16:42:37 2019 From: matt at openssl.org (Matt Caswell) Date: Wed, 27 Feb 2019 16:42:37 +0000 Subject: Stitched vs non-Stitched Ciphersuites In-Reply-To: References: <55324953-2695-40D1-B260-B26CBBBD658B@akamai.com> <1d7843f2-a650-fb81-c59b-8027a8194841@openssl.org> <8c801444-c340-49d9-0f92-06d582b23f15@openssl.org> Message-ID: <0a447a6b-e19b-d31b-cf91-1df6433c1e8c@openssl.org> On 27/02/2019 16:33, Sam Roberts wrote: > On Tue, Feb 26, 2019 at 8:42 AM Matt Caswell wrote: >>> What about AEAD ciphers? Are they considered "stitched"? >> >> No, they are not "stitched" but they are not impacted by this issue. We should >> probably make that clearer in the advisory. > > That would be helpful! It has been updated: https://www.openssl.org/news/secadv/20190226.txt > > Even though this is fixed, would the general advice still be "avoid > CBC in favour of AESCCM and AESGCM when using TLS1.2"? Or update to > TLS1.3. IMO, and in order: - TLSv1.3 is preferable to TLSv1.2 - in TLSv1.2 forward secret ciphersuites are preferable to non-forward secret ones - in TLSv1.2 using an AEAD based ciphersuite is preferable to a CBC one Probably there is a whole bunch of other stuff that should be added to that list - but I'm sure others will chip in with their advice :-) Matt From levitte at openssl.org Wed Feb 27 16:52:08 2019 From: levitte at openssl.org (Richard Levitte) Date: Wed, 27 Feb 2019 17:52:08 +0100 Subject: shared libraries vs test cases In-Reply-To: <31242.1551283449@localhost> References: <31242.1551283449@localhost> Message-ID: <87sgw9xjgn.wl-levitte@openssl.org> On Wed, 27 Feb 2019 17:04:09 +0100, Michael Richardson wrote: > > Running LDD on the binaries in test/* shows that they appear to link against > the "system" copies of libssl and libcrypto. > > Perhaps something I'm missing is setting up LD_PRELOAD or some such so that > the tests run the local copy of libssl/libcrypto, but I can't find that. > Am I missing something? > > This is, I think, making it very difficult for me to bisect a problem. > > It seems to me that the test cases ought to be statically linked to make > it easiest to know what code they are running. 
(This also makes it slightly > easier to use gdb on them) There's a script called util/shlib_wrap.sh that you place first on the command line: ./util/shlib_wrap.sh test/whatevertest ./util/shlib_wrap.sh ldd test/whatevertest ./util/shlib_wrap.sh gdb test/whatevertest Cheers, Richard -- Richard Levitte levitte at openssl.org OpenSSL Project http://www.openssl.org/~levitte/ From Michael.Wojcik at microfocus.com Wed Feb 27 16:47:25 2019 From: Michael.Wojcik at microfocus.com (Michael Wojcik) Date: Wed, 27 Feb 2019 16:47:25 +0000 Subject: Stitched vs non-Stitched Ciphersuites In-Reply-To: References: <55324953-2695-40D1-B260-B26CBBBD658B@akamai.com> <1d7843f2-a650-fb81-c59b-8027a8194841@openssl.org> <8c801444-c340-49d9-0f92-06d582b23f15@openssl.org> Message-ID: > From: openssl-users [mailto:openssl-users-bounces at openssl.org] On Behalf Of > Sam Roberts > Sent: Wednesday, February 27, 2019 11:33 > > Even though this is fixed, would the general advice still be "avoid > CBC in favour of AESCCM and AESGCM when using TLS1.2"? Or update to > TLS1.3. The general advice is to avoid CBC mode where possible, full stop. Too many deployed implementations are still vulnerable to one form or another of padding-oracle attacks. Unless you control both ends of the conversation, you can't guarantee the peer isn't vulnerable. Frankly, this latest vulnerability in OpenSSL 1.0.2 feels pretty minor in that regard, since it depends on two different (if related) behaviors by the application to be vulnerable. The application has to incorrectly attempt a second SSL_shutdown if the first one fails (it should only do the second if the first succeeds), and it has to have different behavior that's visible to the attacker for the two cases, in order to be a useful oracle. AND it has to be using a non-stitched implementation of a vulnerable cipher. It's a relatively narrow branch of the attack tree. -- Michael Wojcik Distinguished Engineer, Micro Focus From matt at openssl.org Wed Feb 27 17:06:35 2019 From: matt at openssl.org (Matt Caswell) Date: Wed, 27 Feb 2019 17:06:35 +0000 Subject: Stitched vs non-Stitched Ciphersuites In-Reply-To: References: <55324953-2695-40D1-B260-B26CBBBD658B@akamai.com> <1d7843f2-a650-fb81-c59b-8027a8194841@openssl.org> <8c801444-c340-49d9-0f92-06d582b23f15@openssl.org> Message-ID: On 27/02/2019 16:47, Michael Wojcik wrote: >> From: openssl-users [mailto:openssl-users-bounces at openssl.org] On Behalf >> Of Sam Roberts Sent: Wednesday, February 27, 2019 11:33 >> >> Even though this is fixed, would the general advice still be "avoid CBC in >> favour of AESCCM and AESGCM when using TLS1.2"? Or update to TLS1.3. > > The general advice is to avoid CBC mode where possible, full stop. Too many > deployed implementations are still vulnerable to one form or another of > padding-oracle attacks. Unless you control both ends of the conversation, you > can't guarantee the peer isn't vulnerable. > > Frankly, this latest vulnerability in OpenSSL 1.0.2 feels pretty minor in > that regard, since it depends on two different (if related) behaviors by the > application to be vulnerable. The application has to incorrectly attempt a > second SSL_shutdown if the first one fails (it should only do the second if > the first succeeds), This is not quite correct. It requires you to incorrectly call SSL_shutdown() twice (once to send a close_notify, and once to receive one) having previously encountered a fatal error. 
For example if you call SSL_read() which returns <=0 and SSL_get_error() returns SSL_ERROR_SYSCALL or SSL_ERROR_SSL then a fatal error has occurred. You should *not* then attempt to call SSL_shutdown(). Matt From jb-openssl at wisemo.com Wed Feb 27 17:10:34 2019 From: jb-openssl at wisemo.com (Jakob Bohm) Date: Wed, 27 Feb 2019 18:10:34 +0100 Subject: shared libraries vs test cases In-Reply-To: <31242.1551283449@localhost> References: <31242.1551283449@localhost> Message-ID: <34ebe061-b7bc-c39a-1f00-a52269216df4@wisemo.com> On 27/02/2019 17:04, Michael Richardson wrote: > Running LDD on the binaries in test/* shows that they appear to link against > the "system" copies of libssl and libcrypto. > > Perhaps something I'm missing is setting up LD_PRELOAD or some such so that > the tests run the local copy of libssl/libcrypto, but I can't find that. > Am I missing something? > > This is, I think, making it very difficult for me to bisect a problem. > > It seems to me that the test cases ought to be statically linked to make > it easiest to know what code they are running. (This also makes it slightly > easier to use gdb on them) > In builds that produce shared libraries, those shared libraries (and not a similar-but-different static library) is what needs to be tested. That said, it would be beneficial if the build system set the appropriate linker flags to make the test programs (but not the user programs such as PREFIX/bin/openssl{.exe,}) link to the shared library in the build tree whenever the target allows this. Some examples: - Windows(all versions): This is already the system default ?if the shared libraries are copied into the test program ?directory, even in Windows versions that don't search the ?current directory at invocation (which is often different ?from the program directory). However some very old Windows ?versions will only search the launch-time current dir. - For many other targets, the -rpath option will do this ?for local runs of the tests, while for cross-compiled ?tests (for testing on hardware without local compilation), ?a more careful -rpath value is needed to reference the ?test dir on the target, not the host. As a further improvement, where possible, the inter-library references and the reference from the user programs compiled from the OpenSSL source should somehow tie themselves to the exact shared library versions used, e.g. by linking to versioned .so file names (such as libssl.so.3.0.2), however this does not protect recompiling and/or debugging with an unchanged .so name. Enjoy Jakob -- Jakob Bohm, CIO, Partner, WiseMo A/S. https://www.wisemo.com Transformervej 29, 2860 S?borg, Denmark. Direct +45 31 13 16 10 This public discussion message is non-binding and may contain errors. WiseMo - Remote Service Management for PCs, Phones and Embedded From Maxime.Torrelli at conduent.com Wed Feb 27 17:22:01 2019 From: Maxime.Torrelli at conduent.com (Torrelli, Maxime) Date: Wed, 27 Feb 2019 17:22:01 +0000 Subject: OpenSSL 1.1.1b for WinCE700 Message-ID: Hello, Sorry to send you again an email about the same subject but I really need some help on this topic. I will try to give as much information I can. I am using WCECOMPAT tool to compile OpenSSL 1.1.1b for WINCE700 on a ARMV4I CPU. We have to do this because the Long Time Support of OpenSSL 1.0.2 is ending in December 2019. Is VC-CE platform still supported ? 
If so you will find below what I did : My computer : Windows 7 Enterprise N (32 bits) Visual Studio 2008 Professional Edition + Windows Embedded Compact 7.5.2884.0 I. WCECOMPAT Compilation set LIB=C:\Program Files\Windows CE Tools\SDKs\SDK WEC7 for VPE420 device\Lib\ARMV4I;C:\Program Files\Microsoft SDKs\Windows\v6.0A\Lib;C:\Program Files\Microsoft Visual Studio 9.0\VC\ce\lib\ARMV4I;C:\Program Files\Microsoft Visual Studio 9.0\VC\lib set INCLUDE=C:\Program Files\Windows CE Tools\SDKs\SDK WEC7 for VPE420 device\Include\ARMV4I set OSVERSION=WCE700 set PLATFORM=VC-CE set TARGETCPU=ARMV4I set Path=C:\Program Files\Microsoft Visual Studio 9.0\Common7\IDE;C:\Program Files\Microsoft Visual Studio 9.0\VC\ce\bin\x86_arm;%Path% set LIBPATH="C:\Program Files\Windows CE Tools\SDKs\SDK WEC7 for VPE420 device\Lib\ARMV4I";C:\Program Files\Microsoft Visual Studio 9.0\VC\lib; (my WINCE700 SDK is called "SDK WEC7 for VPE420 device") In a command prompt : - Perl config.pl - Nmake -f makefile The compilation is a success. II. OpenSSL Compilation The I open another command prompt in the openssl-1.1.1b folder. * set OSVERSION=WCE700 * set PLATFORM=VC-CE * set TARGETCPU=ARMV4I * set LIB=C:\Program Files\Windows CE Tools\SDKs\SDK WEC7 for VPE420 device\Lib\ARMV4I;C:\Program Files\Microsoft SDKs\Windows\v6.0A\Lib;C:\Program Files\Microsoft Visual Studio 9.0\VC\ce\lib\ARMV4I;C:\Program Files\Microsoft Visual Studio 9.0\VC\lib * set INCLUDE=C:\Program Files\Windows CE Tools\SDKs\SDK WEC7 for VPE420 device\Include\ARMV4I;C:\Program Files\Microsoft Visual Studio 9.0\VC\atlmfc\include;C:\Program Files\Microsoft Visual Studio 9.0\VC\INCLUDE;C:\Program Files\Microsoft SDKs\Windows\v6.0A\include; * set Path=C:\Program Files\Microsoft Visual Studio 9.0\Common7\IDE;C:\Program Files\Microsoft Visual Studio 9.0\VC\ce\bin\x86_arm;%Path% * set LIBPATH=C:\Program Files\Windows CE Tools\SDKs\SDK WEC7 for VPE420 device\Lib\ARMV4I;C:\Program Files\Microsoft Visual Studio 9.0\VC\lib;C:\Program Files\Microsoft Visual Studio 9.0\VC\ce\lib\ARMV4I; * set WCECOMPAT=../wcecompat * perl Configure no-idea no-mdc2 no-rc5 no-asm no-ssl2 no-ssl3 VC-CE * nmake The output is the following : Microsoft (R) Program Maintenance Utility Version 9.00.30729.01 Copyright (C) Microsoft Corporation. All rights reserved. Microsoft (R) Program Maintenance Utility Version 9.00.30729.01 Copyright (C) Microsoft Corporation. All rights reserved. Microsoft (R) Program Maintenance Utility Version 9.00.30729.01 Copyright (C) Microsoft Corporation. All rights reserved. "C:\Strawberry\perl\bin\perl.exe" "-I." -Mconfigdata "util\dofile.pl" "-omakefile" "crypto\include\internal\bn_conf.h.in" > crypto\include\internal\bn_conf.h "C:\Strawberry\perl\bin\perl.exe" "-I." -Mconfigdata "util\dofile.pl" "-omakefile" "crypto\include\internal\dso_conf.h.in" > crypto\include\internal\dso_conf.h "C:\Strawberry\perl\bin\perl.exe" "-I." -Mconfigdata "util\dofile.pl" "-omakefile" "include\openssl\opensslconf.h.in" > include\openssl\opensslconf.h "C:\Program Files\Microsoft Visual Studio 9.0\VC\BIN\nmake.exe" / depend && "C:\Program Files\Microsoft Visual Studio 9.0\VC\BIN\nmake.exe" / _all cl /Zi /Fdossl_static.pdb /GF /Gy /MD /W3 /wd4090 /nologo /O1i /I "." 
/I "include" -D_WIN32_WCE=700 -DUNDER_CE=700 -DWCE_PLATFORM_VC-CE -DARM -D_ARM_ -DARMV4I -QRarch4T -QRinterwork-return -D"L_ENDIAN" -D"NO_CHMOD" -D"OPENSSL_SMALL_FOOTPRINT" -D"OPENSSL_PIC" -D"OPENSSLDIR=\"C:\\Program Files\\Common Files\\SSL\"" -D"ENGINESDIR=\"C:\\Program Files\\OpenSSL\\lib\\engines-1_1\"" -D_WIN32_WCE=700 -DUNDER_CE=700 -DWCE_PLATFORM_VC-CE -DARM -D_ARM_ -DARMV4I -QRarch4T -QRinterwork-return -D"OPENSSL_SYS_WIN32" -D"WIN32_LEAN_AND_MEAN" -D"UNICODE" -D"_UNICODE" -D"_CRT_SECURE_NO_DEPRECATE" -D"_WINSOCK_DEPRECATED_NO_WARNINGS" -D"NDEBUG" -I"\../wcecompat/include" -c /Foapps\app_rand.obj "apps\app_rand.c" app_rand.c C:\openssl-1.1.1b\e_os.h(287) : warning C4005: 'open' : macro redefinition C:\wcecompat\include\io.h(43) : see previous definition of 'open' C:\openssl-1.1.1b\e_os.h(289) : warning C4005: 'close' : macro redefinition C:\wcecompat\include\io.h(45) : see previous definition of 'close' C:\openssl-1.1.1b\e_os.h(293) : warning C4005: 'unlink' : macro redefinition C:\wcecompat\include\io.h(50) : see previous definition of 'unlink' cl /Zi /Fdossl_static.pdb /GF /Gy /MD /W3 /wd4090 /nologo /O1i /I "." /I "include" -D_WIN32_WCE=700 -DUNDER_CE=700 -DWCE_PLATFORM_VC-CE -DARM -D_ARM_ -DARMV4I -QRarch4T -QRinterwork-return -D"L_ENDIAN" -D"NO_CHMOD" -D"OPENSSL_SMALL_FOOTPRINT" -D"OPENSSL_PIC" -D"OPENSSLDIR=\"C:\\Program Files\\Common Files\\SSL\"" -D"ENGINESDIR=\"C:\\Program Files\\OpenSSL\\lib\\engines-1_1\"" -D_WIN32_WCE=700 -DUNDER_CE=700 -DWCE_PLATFORM_VC-CE -DARM -D_ARM_ -DARMV4I -QRarch4T -QRinterwork-return -D"OPENSSL_SYS_WIN32" -D"WIN32_LEAN_AND_MEAN" -D"UNICODE" -D"_UNICODE" -D"_CRT_SECURE_NO_DEPRECATE" -D"_WINSOCK_DEPRECATED_NO_WARNINGS" -D"NDEBUG" -I"\../wcecompat/include" /Zs /showIncludes "apps\app_rand.c" 2>&1 > apps\app_rand.d cl /Zi /Fdossl_static.pdb /GF /Gy /MD /W3 /wd4090 /nologo /O1i /I "." 
/I "include" -D_WIN32_WCE=700 -DUNDER_CE=700 -DWCE_PLATFORM_VC-CE -DARM -D_ARM_ -DARMV4I -QRarch4T -QRinterwork-return -D"L_ENDIAN" -D"NO_CHMOD" -D"OPENSSL_SMALL_FOOTPRINT" -D"OPENSSL_PIC" -D"OPENSSLDIR=\"C:\\Program Files\\Common Files\\SSL\"" -D"ENGINESDIR=\"C:\\Program Files\\OpenSSL\\lib\\engines-1_1\"" -D_WIN32_WCE=700 -DUNDER_CE=700 -DWCE_PLATFORM_VC-CE -DARM -D_ARM_ -DARMV4I -QRarch4T -QRinterwork-return -D"OPENSSL_SYS_WIN32" -D"WIN32_LEAN_AND_MEAN" -D"UNICODE" -D"_UNICODE" -D"_CRT_SECURE_NO_DEPRECATE" -D"_WINSOCK_DEPRECATED_NO_WARNINGS" -D"NDEBUG" -I"\../wcecompat/include" -c /Foapps\apps.obj "apps\apps.c" apps.c C:\openssl-1.1.1b\e_os.h(287) : warning C4005: 'open' : macro redefinition C:\wcecompat\include\io.h(43) : see previous definition of 'open' C:\openssl-1.1.1b\e_os.h(289) : warning C4005: 'close' : macro redefinition C:\wcecompat\include\io.h(45) : see previous definition of 'close' C:\openssl-1.1.1b\e_os.h(293) : warning C4005: 'unlink' : macro redefinition C:\wcecompat\include\io.h(50) : see previous definition of 'unlink' apps\apps.c(2596) : warning C4013: '_fdopen' undefined; assuming extern returning int apps\apps.c(2596) : warning C4047: '=' : 'FILE *' differs in levels of indirection from 'int' apps\apps.c(2614) : warning C4013: '_close' undefined; assuming extern returning int apps\apps.c(2696) : warning C4013: 'GetStdHandle' undefined; assuming extern returning int apps\apps.c(2696) : error C2065: 'STD_INPUT_HANDLE' : undeclared identifier apps\apps.c(2696) : warning C4047: 'initializing' : 'HANDLE' differs in levels of indirection from 'int' apps\apps.c(2698) : error C2065: 'INPUT_RECORD' : undeclared identifier apps\apps.c(2698) : error C2146: syntax error : missing ';' before identifier 'inputrec' apps\apps.c(2698) : error C2065: 'inputrec' : undeclared identifier apps\apps.c(2699) : error C2275: 'DWORD' : illegal use of this type as an expression C:\Program Files\Windows CE Tools\SDKs\SDK WEC7 for VPE420 device\Include\ARMV4I\windef.h(161) : see declaration of 'DWORD' apps\apps.c(2699) : error C2146: syntax error : missing ';' before identifier 'insize' apps\apps.c(2699) : error C2065: 'insize' : undeclared identifier apps\apps.c(2700) : error C2275: 'BOOL' : illegal use of this type as an expression C:\Program Files\Windows CE Tools\SDKs\SDK WEC7 for VPE420 device\Include\ARMV4I\windef.h(162) : see declaration of 'BOOL' apps\apps.c(2700) : error C2146: syntax error : missing ';' before identifier 'peeked' apps\apps.c(2700) : error C2065: 'peeked' : undeclared identifier apps\apps.c(2706) : error C2065: 'peeked' : undeclared identifier apps\apps.c(2706) : warning C4013: 'PeekConsoleInput' undefined; assuming extern returning int apps\apps.c(2706) : error C2065: 'inputrec' : undeclared identifier apps\apps.c(2706) : error C2065: 'insize' : undeclared identifier apps\apps.c(2707) : error C2065: 'peeked' : undeclared identifier NMAKE : fatal error U1077: '"C:\Program Files\Microsoft Visual Studio 9.0\VC\ce\ bin\x86_arm\cl.EXE"' : return code '0x2' Stop. NMAKE : fatal error U1077: '"C:\Program Files\Microsoft Visual Studio 9.0\VC\BIN \nmake.exe"' : return code '0x2' Stop. Any guess or hint would be much appreciated. Greetings, Maxime TORRELLI Embedded Software Engineer Conduent Conduent Business Solutions (France) 1 rue Claude Chappe - BP 345 07503 Guilherand Granges Cedex -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From matt at openssl.org Wed Feb 27 17:44:55 2019 From: matt at openssl.org (Matt Caswell) Date: Wed, 27 Feb 2019 17:44:55 +0000 Subject: OpenSSL 1.1.1b for WinCE700 In-Reply-To: References: Message-ID: On 27/02/2019 17:22, Torrelli, Maxime wrote: > Hello, > > ? > > Sorry to send you again an email about the same subject but I really need some > help on this topic. I will try to give as much information I can. > > ? > > I am using WCECOMPAT tool to compile OpenSSL 1.1.1b for WINCE700 on a ARMV4I > CPU. We have to do this because the Long Time Support of OpenSSL 1.0.2 is ending > in December 2019. > *_Is VC-CE platform still supported ?_* > I can't answer your main question but can attempt this one. VC-CE is not a primary or a secondary supported platform: https://www.openssl.org/policies/platformpolicy.html Support has not been *removed* and we've not done anything to actively break it, but AFAIK no one on the dev team has access to that platform. Which puts it in the "Unknown" classification (or possibly "Community"). Matt > ? > > If so you will find below what I did : > > ? > > My computer : Windows 7 Enterprise N (32 bits) > > Visual Studio 2008 Professional Edition + Windows Embedded Compact 7.5.2884.0 > > ? > > *I.??????????????????? **WCECOMPAT Compilation* > > set LIB=C:\Program Files\Windows CE Tools\SDKs\SDK WEC7 for VPE420 > device\Lib\ARMV4I;C:\Program Files\Microsoft SDKs\Windows\v6.0A\Lib;C:\Program > Files\Microsoft Visual Studio 9.0\VC\ce\lib\ARMV4I;C:\Program Files\Microsoft > Visual Studio 9.0\VC\lib > > set INCLUDE=C:\Program Files\Windows CE Tools\SDKs\SDK WEC7 for VPE420 > device\Include\ARMV4I > > set OSVERSION=WCE700 > > set PLATFORM=VC-CE > > set TARGETCPU=ARMV4I > > set Path=C:\Program Files\Microsoft Visual Studio 9.0\Common7\IDE;C:\Program > Files\Microsoft Visual Studio 9.0\VC\ce\bin\x86_arm;%Path% > > set LIBPATH="C:\Program Files\Windows CE Tools\SDKs\SDK WEC7 for VPE420 > device\Lib\ARMV4I";C:\Program Files\Microsoft Visual Studio 9.0\VC\lib; > > ? > > (my WINCE700 SDK is called ?SDK WEC7 for VPE420 device?) > > ? > > In a command prompt : > > -????????? Perl config.pl > > -????????? Nmake ?f makefile > > ? > > The compilation is a success. > > ? > > *II.????????????????? **OpenSSL Compilation* > > ? > > The I open another command prompt in the openssl-1.1.1b folder. > > ? > > ??????? set OSVERSION=WCE700 > > ??????? set PLATFORM=VC-CE > > ??????? set TARGETCPU=ARMV4I > > ??????? set LIB=C:\Program Files\Windows CE Tools\SDKs\SDK WEC7 for VPE420 > device\Lib\ARMV4I;C:\Program Files\Microsoft SDKs\Windows\v6.0A\Lib;C:\Program > Files\Microsoft Visual Studio 9.0\VC\ce\lib\ARMV4I;C:\Program Files\Microsoft > Visual Studio 9.0\VC\lib > > ??????? set INCLUDE=C:\Program Files\Windows CE Tools\SDKs\SDK WEC7 for VPE420 > device\Include\ARMV4I;C:\Program Files\Microsoft Visual Studio > 9.0\VC\atlmfc\include;C:\Program Files\Microsoft Visual Studio > 9.0\VC\INCLUDE;C:\Program Files\Microsoft SDKs\Windows\v6.0A\include; > > ??????? set Path=C:\Program Files\Microsoft Visual Studio > 9.0\Common7\IDE;C:\Program Files\Microsoft Visual Studio > 9.0\VC\ce\bin\x86_arm;%Path% > > ??????? set LIBPATH=C:\Program Files\Windows CE Tools\SDKs\SDK WEC7 for VPE420 > device\Lib\ARMV4I;C:\Program Files\Microsoft Visual Studio 9.0\VC\lib;C:\Program > Files\Microsoft Visual Studio 9.0\VC\ce\lib\ARMV4I; > > ??????? set WCECOMPAT=../wcecompat > > ? > > ??????? perl Configure no-idea no-mdc2 no-rc5 no-asm no-ssl2 no-ssl3 VC-CE > > ? > > ??????? nmake > > ? 
> > The output is the following :
>
> [full nmake output quoted above - snipped]
>
> Any guess or hint would be much appreciated.
>
> > Greetings, > > *?* > > *Maxime TORRELLI* > > Embedded Software Engineer > > ? > > *Conduent* > > Conduent Business Solutions (France) > > 1 rue Claude Chappe ? BP 345 > 07503 Guilherand Granges Cedex > > ? > From openssl-users at dukhovni.org Wed Feb 27 16:15:40 2019 From: openssl-users at dukhovni.org (Viktor Dukhovni) Date: Wed, 27 Feb 2019 11:15:40 -0500 Subject: shared libraries vs test cases In-Reply-To: <31242.1551283449@localhost> References: <31242.1551283449@localhost> Message-ID: > On Feb 27, 2019, at 11:04 AM, Michael Richardson wrote: > > Running LDD on the binaries in test/* shows that they appear to link against > the "system" copies of libssl and libcrypto. With no environment overrides of LD_LIBRARY_PATH or similar, the test cases in the build tree are expected to find the OpenSSL libraries in the install target location (if on the default system search path) or when you compile with "-R", "-rpath" or similar. So the output of ldd is not surprising. The tests run with LD_LIBRARY_PATH settings via util/shlib_wrap.sh > This is, I think, making it very difficult for me to bisect a problem. > > It seems to me that the test cases ought to be statically linked to make > it easiest to know what code they are running. (This also makes it slightly > easier to use gdb on them) The test cases exercise the code the same way it is going to be used. You can do a "no-shared" build if you like, but then some features that depend on dynamic linking/loading may not be available. If you're just trying to bisect a problem, that may be acceptable... -- Viktor. From Matthias.St.Pierre at ncp-e.com Wed Feb 27 18:15:48 2019 From: Matthias.St.Pierre at ncp-e.com (Matthias St. Pierre) Date: Wed, 27 Feb 2019 19:15:48 +0100 Subject: AW: OpenSSL version 1.1.1b published In-Reply-To: <6m1d7eld3e5qr281qpmnqjg7vgcmnkba7r@4ax.com> References: <20190226145438.GA1980@openssl.org> <5rkc7edk0pv229d3oeuq0hp5b68cos0a3m@4ax.com> <5rkc7edk0pv229d3oeuq0hp5b68cos0a3m-e09XROE/p8c@public.gmane.org> <6m1d7eld3e5qr281qpmnqjg7vgcmnkba7r@4ax.com> Message-ID: <64707992-9d8f-a9e1-1d6b-89193f22f8e9@ncp-e.com> On 27.02.19 13:51, Jan Ehrhardt wrote: > Matthias St. Pierre in gmane.comp.encryption.openssl.user (Wed, 27 Feb > 2019 13:00:55 +0100): >> On 27.02.19 10:09, Jan Ehrhardt wrote: >>> I ran into this using 7-Zip 18.05 (x64) on Windows, which is a fairly >>> recent version. >> Thanks for the Updates about 7-Zip. But IMHO it is not really an issue, just a little 'manufacturing byproduct'. > It does not bother me at all. I just ignored it. But Thomas was right in > observing that it was different from the previous releases: OpenSSL > 1.1.1a did not create that file when it was extracted by the same 7-zip > version. This change was introduced by https://github.com/openssl/openssl/pull/7692: Previously, the tarballs were created using the `tar` command, while nowadays it's done using `git archive`,? see util/mktar.sh: ??? git archive --worktree-attributes --format=tar --prefix="$NAME/" -v HEAD \ ??????? | gzip -9 > "$TARFILE.gz" And it's git that adds this comment. 
Matthias From Michael.Wojcik at microfocus.com Wed Feb 27 18:10:02 2019 From: Michael.Wojcik at microfocus.com (Michael Wojcik) Date: Wed, 27 Feb 2019 18:10:02 +0000 Subject: Stitched vs non-Stitched Ciphersuites In-Reply-To: References: <55324953-2695-40D1-B260-B26CBBBD658B@akamai.com> <1d7843f2-a650-fb81-c59b-8027a8194841@openssl.org> <8c801444-c340-49d9-0f92-06d582b23f15@openssl.org> Message-ID: > From: openssl-users [mailto:openssl-users-bounces at openssl.org] On Behalf Of > Matt Caswell > Sent: Wednesday, February 27, 2019 12:07 > > On 27/02/2019 16:47, Michael Wojcik wrote: > > > > Frankly, this latest vulnerability in OpenSSL 1.0.2 feels pretty minor in > > that regard, since it depends on two different (if related) behaviors by the > > application to be vulnerable. The application has to incorrectly attempt a > > second SSL_shutdown if the first one fails (it should only do the second if > > the first succeeds), > > This is not quite correct. It requires you to incorrectly call SSL_shutdown() > twice (once to send a close_notify, and once to receive one) having previously > encountered a fatal error. Thanks for the correction. Still the general point applies: it depends on the application having rather suspect error handling, and on having visibly different behavior for the two cases in order to provide an oracle. Perhaps that's not uncommon, but I checked some of our products which use OpenSSL, and they didn't have either behavior. -- Michael Wojcik Distinguished Engineer, Micro Focus From scott_n at xypro.com Wed Feb 27 18:43:59 2019 From: scott_n at xypro.com (Scott Neugroschl) Date: Wed, 27 Feb 2019 18:43:59 +0000 Subject: OpenSSL Security Advisory In-Reply-To: <20190226145917.GA5404@openssl.org> References: <20190226145917.GA5404@openssl.org> Message-ID: Is this a client-side or server-side vulnerability? Or does it matter? Thanks, ScottN --- Scott Neugroschl | XYPRO Technology Corporation 4100 Guardian Street | Suite 100 |Simi Valley, CA 93063 | Phone 805 583-2874|Fax 805 583-0124 | -----Original Message----- From: openssl-users On Behalf Of OpenSSL Sent: Tuesday, February 26, 2019 6:59 AM To: openssl-project at openssl.org; OpenSSL User Support ML ; OpenSSL Announce ML Subject: OpenSSL Security Advisory -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA512 OpenSSL Security Advisory [26 February 2019] ============================================ 0-byte record padding oracle (CVE-2019-1559) ============================================ Severity: Moderate If an application encounters a fatal protocol error and then calls SSL_shutdown() twice (once to send a close_notify, and once to receive one) then OpenSSL can respond differently to the calling application if a 0 byte record is received with invalid padding compared to if a 0 byte record is received with an invalid MAC. If the application then behaves differently based on that in a way that is detectable to the remote peer, then this amounts to a padding oracle that could be used to decrypt data. In order for this to be exploitable "non-stitched" ciphersuites must be in use. Stitched ciphersuites are optimised implementations of certain commonly used ciphersuites. Also the application must call SSL_shutdown() twice even if a protocol error has occurred (applications should not do this but some do anyway). This issue does not impact OpenSSL 1.1.1 or 1.1.0. OpenSSL 1.0.2 users should upgrade to 1.0.2r. 
This issue was discovered by Juraj Somorovsky, Robert Merget and Nimrod Aviram, with additional investigation by Steven Collison and Andrew Hourselt. It was reported to OpenSSL on 10th December 2018. Note ==== OpenSSL 1.0.2 and 1.1.0 are currently only receiving security updates. Support for 1.0.2 will end on 31st December 2019. Support for 1.1.0 will end on 11th September 2019. Users of these versions should upgrade to OpenSSL 1.1.1. References ========== URL for this Security Advisory: https://www.openssl.org/news/secadv/20190226.txt Note: the online version of the advisory may be updated with additional details over time. For details of OpenSSL severity classifications please see: https://www.openssl.org/policies/secpolicy.html -----BEGIN PGP SIGNATURE----- iQEzBAEBCgAdFiEEhlersmDwVrHlGQg52cTSbQ5gRJEFAlx1U+gACgkQ2cTSbQ5g RJFnlAf/U9yZtCz59BjgD0Kh7Eya5KxlmUWItdBu1r3DwbY4KDgL/Wwh4UxG3Qim D7Ht5Xsta4iAywrMRI/iPEdEQct8pcpWjq4/65lEbTYjToEnNWhIeWHH/Lw3Jfza gcVpIfbWoWc7OL7U4uPQuGWcb/PO8fJXF+HcCdZ+kIuut0peMSgN5sK/wBnmSdsM +sJXCei+jwVy/9WvCBMOooX7D8oerJ6NX12n2cNAYH/K7e2deiPZ7D/HB7T9MSv/ BgOi1UqFzBxcsNhFpY5NMTHG8pl0bmS0OiZ9bThN0YHwxFVJz6ZsVX/L5cYOAbm/ mJAdDE24XMmUAOlVZrROzCZKXADx/A== =8h8L -----END PGP SIGNATURE----- From mcr at sandelman.ca Wed Feb 27 18:53:46 2019 From: mcr at sandelman.ca (Michael Richardson) Date: Wed, 27 Feb 2019 13:53:46 -0500 Subject: OpenSSL 3.0 vs. SSL 3.0 In-Reply-To: References: Message-ID: <9153.1551293626@localhost> Christian Heimes wrote: > I'm concerned about the version number of the upcoming major release of > OpenSSL. "OpenSSL 3.0" just sounds and looks way too close to "SSL 3.0". > It took us more than a decade to teach people that SSL 3.0 is bad and > should be avoided in favor of TLS. In my humble opinion, it's > problematic and confusing to use "OpenSSL 3.0" for the next major > version of OpenSSL and first release of OpenSSL with SSL 3.0 support. You make a good point which I had not thought about, having exhumed SSLx.y From my brain. +5 > You skipped version 2.0 for technical reasons, because (IIRC) 2.0 was > used / reserved for FIPS mode. May I suggest that you also skip 3.0 for > UX reasons and call the upcoming version "OpenSSL 4.0". That way you can > avoid any confusion with SSL 3.0. Integers are cheap. And 4.0 is > 3.0, so (Open)SSL 4.0.0 must be better than SSL3. -- ] Never tell me the odds! | ipv6 mesh networks [ ] Michael Richardson, Sandelman Software Works | IoT architect [ ] mcr at sandelman.ca http://www.sandelman.ca/ | ruby on rails [ -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 487 bytes Desc: not available URL: From matt at openssl.org Wed Feb 27 19:17:30 2019 From: matt at openssl.org (Matt Caswell) Date: Wed, 27 Feb 2019 19:17:30 +0000 Subject: OpenSSL Security Advisory In-Reply-To: References: <20190226145917.GA5404@openssl.org> Message-ID: <172587d1-2139-e02e-5f0a-a2dec6113ef9@openssl.org> On 27/02/2019 18:43, Scott Neugroschl wrote: > Is this a client-side or server-side vulnerability? Or does it matter? It can apply to either side. 
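A minimal sketch of the shutdown handling the advisory assumes applications get wrong: after a fatal error (SSL_ERROR_SSL or SSL_ERROR_SYSCALL) do not call SSL_shutdown() at all, and only use the two-step close_notify exchange on a connection that ended cleanly. The helper name and buffer handling below are invented for illustration; only the OpenSSL calls are real.

#include <openssl/ssl.h>
#include <openssl/err.h>

static void close_connection(SSL *ssl)   /* illustrative helper name */
{
    char buf[256];
    int n = SSL_read(ssl, buf, sizeof(buf));

    if (n <= 0) {
        int err = SSL_get_error(ssl, n);

        if (err == SSL_ERROR_SSL || err == SSL_ERROR_SYSCALL) {
            /* Fatal error: free the SSL object, do NOT call SSL_shutdown(). */
            SSL_free(ssl);
            return;
        }
    }

    /* Clean end of the connection: send our close_notify.  A second
     * SSL_shutdown() call is only appropriate here, to wait for the
     * peer's close_notify -- never after a fatal error. */
    if (SSL_shutdown(ssl) == 0)
        SSL_shutdown(ssl);
    SSL_free(ssl);
}
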
Matt > > Thanks, > > ScottN > > --- > Scott Neugroschl | XYPRO Technology Corporation > 4100 Guardian Street | Suite 100 |Simi Valley, CA 93063 | Phone 805 583-2874|Fax 805 583-0124 | > > > > > -----Original Message----- > From: openssl-users On Behalf Of OpenSSL > Sent: Tuesday, February 26, 2019 6:59 AM > To: openssl-project at openssl.org; OpenSSL User Support ML ; OpenSSL Announce ML > Subject: OpenSSL Security Advisory > > OpenSSL Security Advisory [26 February 2019] ============================================ > > 0-byte record padding oracle (CVE-2019-1559) ============================================ > > Severity: Moderate > > If an application encounters a fatal protocol error and then calls > SSL_shutdown() twice (once to send a close_notify, and once to receive one) then OpenSSL can respond differently to the calling application if a 0 byte record is received with invalid padding compared to if a 0 byte record is received with an invalid MAC. If the application then behaves differently based on that in a way that is detectable to the remote peer, then this amounts to a padding oracle that could be used to decrypt data. > > In order for this to be exploitable "non-stitched" ciphersuites must be in use. > Stitched ciphersuites are optimised implementations of certain commonly used ciphersuites. Also the application must call SSL_shutdown() twice even if a protocol error has occurred (applications should not do this but some do anyway). > > This issue does not impact OpenSSL 1.1.1 or 1.1.0. > > OpenSSL 1.0.2 users should upgrade to 1.0.2r. > > This issue was discovered by Juraj Somorovsky, Robert Merget and Nimrod Aviram, with additional investigation by Steven Collison and Andrew Hourselt. It was reported to OpenSSL on 10th December 2018. > > Note > ==== > > OpenSSL 1.0.2 and 1.1.0 are currently only receiving security updates. Support for 1.0.2 will end on 31st December 2019. Support for 1.1.0 will end on 11th September 2019. Users of these versions should upgrade to OpenSSL 1.1.1. > > References > ========== > > URL for this Security Advisory: > https://www.openssl.org/news/secadv/20190226.txt > > Note: the online version of the advisory may be updated with additional details over time. > > For details of OpenSSL severity classifications please see: > https://www.openssl.org/policies/secpolicy.html > From rsalz at akamai.com Wed Feb 27 19:59:43 2019 From: rsalz at akamai.com (Salz, Rich) Date: Wed, 27 Feb 2019 19:59:43 +0000 Subject: AES-cipher offload to engine in openssl-fips In-Reply-To: <1551267921689-0.post@n7.nabble.com> References: <6D4F8144-786E-4CB2-B6FE-30760520F2F0@safelogic.com> <1551267921689-0.post@n7.nabble.com> Message-ID: <628393B4-CFEE-420C-8D02-33D3F4AC4B29@akamai.com> If you change a single line of code or do not build it EXACTLY as documented, you cannot claim to use the OpenSSL validation. From jb-openssl at wisemo.com Wed Feb 27 20:55:29 2019 From: jb-openssl at wisemo.com (Jakob Bohm) Date: Wed, 27 Feb 2019 21:55:29 +0100 Subject: AES-cipher offload to engine in openssl-fips In-Reply-To: <628393B4-CFEE-420C-8D02-33D3F4AC4B29@akamai.com> References: <6D4F8144-786E-4CB2-B6FE-30760520F2F0@safelogic.com> <1551267921689-0.post@n7.nabble.com> <628393B4-CFEE-420C-8D02-33D3F4AC4B29@akamai.com> Message-ID: On 27/02/2019 20:59, Salz, Rich via openssl-users wrote: > If you change a single line of code or do not build it EXACTLY as documented, you cannot claim to use the OpenSSL validation. 
> > I believe the context here is one I also mentioned in my comment on the 3.0 draft spec: - OpenSSL FIPS Module provides FIPS validated software implementations of ?all/most of the permitted algorithms. - Engine provides FIPS validated (hardware?) implementations of one or ?more implementations, under a separate FIPS validation, perhaps done ?at the hardware level. - FIPS-capable OpenSSL (outside the FIPS boundary) is somehow made to use ?both FIPS validated modules depending on various conditions (such as ?algorithm availability).? FIPS-capable OpenSSL can be changed without ?breaking the FIPS validation of the modules. - Overall application claims FIPS compliance as all crypto is done by ?FIPS validated modules. A hypothetical US gov example would be using a certificate on a FIPS validated FIPS 201 PIV ID card. Enjoy Jakob -- Jakob Bohm, CIO, Partner, WiseMo A/S. https://www.wisemo.com Transformervej 29, 2860 S?borg, Denmark. Direct +45 31 13 16 10 This public discussion message is non-binding and may contain errors. WiseMo - Remote Service Management for PCs, Phones and Embedded From levitte at openssl.org Wed Feb 27 21:18:58 2019 From: levitte at openssl.org (Richard Levitte) Date: Wed, 27 Feb 2019 22:18:58 +0100 Subject: AES-cipher offload to engine in openssl-fips In-Reply-To: References: <6D4F8144-786E-4CB2-B6FE-30760520F2F0@safelogic.com> <1551267921689-0.post@n7.nabble.com> <628393B4-CFEE-420C-8D02-33D3F4AC4B29@akamai.com> Message-ID: <87mumhx73x.wl-levitte@openssl.org> On Wed, 27 Feb 2019 21:55:29 +0100, Jakob Bohm via openssl-users wrote: > > On 27/02/2019 20:59, Salz, Rich via openssl-users wrote: > > If you change a single line of code or do not build it EXACTLY as documented, you cannot claim to use the OpenSSL validation. > > > > I believe the context here is one I also mentioned in my comment on > the 3.0 draft spec: > > - OpenSSL FIPS Module provides FIPS validated software implementations of > all/most of the permitted algorithms. > - Engine provides FIPS validated (hardware?) implementations of one or > more implementations, under a separate FIPS validation, perhaps done > at the hardware level. > - FIPS-capable OpenSSL (outside the FIPS boundary) is somehow made to use > both FIPS validated modules depending on various conditions (such as > algorithm availability). FIPS-capable OpenSSL can be changed without > breaking the FIPS validation of the modules. > - Overall application claims FIPS compliance as all crypto is done by > FIPS validated modules. Side note: "FIPS-capable OpenSSL" isn't quite right. Basically, if libcrypto is capable of loading a dynamically loadable module, it's "FIPS-capable", since it can load the FIPS provider module. I see no reason why libcrypto should be able to load two FIPS-validated modules (*) and use them both, all depending on what algorithms and properties are desired (apart from the "fips" property). However, I've come to understand that those two modules must not be made to cooperate, i.e. for a signing operation using sha256WithRSAEncryption, it's not permitted for one module to do the sha256 part and the other module to do the RSA calculations. Cheers, Richard ----- (*) an engine module is also a module... all that actually makes it an OpenSSL engine is two entry points, "bind_engine" and "v_check". 
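A minimal sketch of such a loadable module: the two macros below emit exactly those "bind_engine" and "v_check" entry points; the engine id and name are invented for illustration. Built as a shared object, it can then be loaded through the "dynamic" engine (for example from an engine section in openssl.cnf).

#include <openssl/engine.h>

static const char *engine_id   = "example";                /* invented */
static const char *engine_name = "minimal example engine"; /* invented */

/* Reached through the generated "bind_engine" entry point. */
static int bind_helper(ENGINE *e, const char *id)
{
    if (!ENGINE_set_id(e, engine_id)
        || !ENGINE_set_name(e, engine_name))
        return 0;
    /* A real engine would register its ciphers/digests/key methods here. */
    return 1;
}

/* Emit the "bind_engine" and "v_check" symbols that libcrypto's
 * dynamic ENGINE loader looks up in the shared object. */
IMPLEMENT_DYNAMIC_BIND_FN(bind_helper)
IMPLEMENT_DYNAMIC_CHECK_FN()
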
-- Richard Levitte levitte at openssl.org OpenSSL Project http://www.openssl.org/~levitte/ From scott_n at xypro.com Wed Feb 27 21:27:20 2019 From: scott_n at xypro.com (Scott Neugroschl) Date: Wed, 27 Feb 2019 21:27:20 +0000 Subject: OpenSSL Security Advisory In-Reply-To: <172587d1-2139-e02e-5f0a-a2dec6113ef9@openssl.org> References: <20190226145917.GA5404@openssl.org> <172587d1-2139-e02e-5f0a-a2dec6113ef9@openssl.org> Message-ID: Thanks. -----Original Message----- From: openssl-users On Behalf Of Matt Caswell Sent: Wednesday, February 27, 2019 11:18 AM To: openssl-users at openssl.org Subject: Re: OpenSSL Security Advisory On 27/02/2019 18:43, Scott Neugroschl wrote: > Is this a client-side or server-side vulnerability? Or does it matter? It can apply to either side. Matt > > Thanks, > > ScottN > > --- > Scott Neugroschl | XYPRO Technology Corporation > 4100 Guardian Street | Suite 100 |Simi Valley, CA 93063 | Phone 805 583-2874|Fax 805 583-0124 | > > > > > -----Original Message----- > From: openssl-users On Behalf Of OpenSSL > Sent: Tuesday, February 26, 2019 6:59 AM > To: openssl-project at openssl.org; OpenSSL User Support ML ; OpenSSL Announce ML > Subject: OpenSSL Security Advisory > > OpenSSL Security Advisory [26 February 2019] ============================================ > > 0-byte record padding oracle (CVE-2019-1559) ============================================ > > Severity: Moderate > > If an application encounters a fatal protocol error and then calls > SSL_shutdown() twice (once to send a close_notify, and once to receive one) then OpenSSL can respond differently to the calling application if a 0 byte record is received with invalid padding compared to if a 0 byte record is received with an invalid MAC. If the application then behaves differently based on that in a way that is detectable to the remote peer, then this amounts to a padding oracle that could be used to decrypt data. > > In order for this to be exploitable "non-stitched" ciphersuites must be in use. > Stitched ciphersuites are optimised implementations of certain commonly used ciphersuites. Also the application must call SSL_shutdown() twice even if a protocol error has occurred (applications should not do this but some do anyway). > > This issue does not impact OpenSSL 1.1.1 or 1.1.0. > > OpenSSL 1.0.2 users should upgrade to 1.0.2r. > > This issue was discovered by Juraj Somorovsky, Robert Merget and Nimrod Aviram, with additional investigation by Steven Collison and Andrew Hourselt. It was reported to OpenSSL on 10th December 2018. > > Note > ==== > > OpenSSL 1.0.2 and 1.1.0 are currently only receiving security updates. Support for 1.0.2 will end on 31st December 2019. Support for 1.1.0 will end on 11th September 2019. Users of these versions should upgrade to OpenSSL 1.1.1. > > References > ========== > > URL for this Security Advisory: > https://www.openssl.org/news/secadv/20190226.txt > > Note: the online version of the advisory may be updated with additional details over time. 
> > For details of OpenSSL severity classifications please see: > https://www.openssl.org/policies/secpolicy.html > From jb-openssl at wisemo.com Wed Feb 27 21:38:17 2019 From: jb-openssl at wisemo.com (Jakob Bohm) Date: Wed, 27 Feb 2019 22:38:17 +0100 Subject: AES-cipher offload to engine in openssl-fips In-Reply-To: <87mumhx73x.wl-levitte@openssl.org> References: <6D4F8144-786E-4CB2-B6FE-30760520F2F0@safelogic.com> <1551267921689-0.post@n7.nabble.com> <628393B4-CFEE-420C-8D02-33D3F4AC4B29@akamai.com> <87mumhx73x.wl-levitte@openssl.org> Message-ID: <2ce4120e-b831-3677-f4b4-e009633854c0@wisemo.com> On 27/02/2019 22:18, Richard Levitte wrote: > On Wed, 27 Feb 2019 21:55:29 +0100, > Jakob Bohm via openssl-users wrote: >> On 27/02/2019 20:59, Salz, Rich via openssl-users wrote: >>> If you change a single line of code or do not build it EXACTLY as documented, you cannot claim to use the OpenSSL validation. >>> >> I believe the context here is one I also mentioned in my comment on >> the 3.0 draft spec: >> >> - OpenSSL FIPS Module provides FIPS validated software implementations of >> all/most of the permitted algorithms. >> - Engine provides FIPS validated (hardware?) implementations of one or >> more implementations, under a separate FIPS validation, perhaps done >> at the hardware level. >> - FIPS-capable OpenSSL (outside the FIPS boundary) is somehow made to use >> both FIPS validated modules depending on various conditions (such as >> algorithm availability). FIPS-capable OpenSSL can be changed without >> breaking the FIPS validation of the modules. >> - Overall application claims FIPS compliance as all crypto is done by >> FIPS validated modules. > Side note: "FIPS-capable OpenSSL" isn't quite right. Basically, if > libcrypto is capable of loading a dynamically loadable module, it's > "FIPS-capable", since it can load the FIPS provider module. I always understood "FIPS-capable OpenSSL" to refer specifically to an OpenSSL compiled with the options to incorporate the FIPS canister module, not just any OpenSSL build that might be used in FIPS compliant applications (as that would be any OpenSSL at all). > > I see no reason why libcrypto should be able to load two > FIPS-validated modules (*) and use them both, all depending on what > algorithms and properties are desired (apart from the "fips" > property). However, I've come to understand that those two modules > must not be made to cooperate, i.e. for a signing operation using > sha256WithRSAEncryption, it's not permitted for one module to do the > sha256 part and the other module to do the RSA calculations. > > Cheers, > Richard > > ----- > (*) an engine module is also a module... all that actually makes it > an OpenSSL engine is two entry points, "bind_engine" and "v_check". > How does this understanding work for other applications that use a FIPS-validated smart card (cryptographic boundary for validation is the physical card boundary) to sign messages?? Are such applications required to pass every message (mega)byte through the smart card serial interface so the low speed smart card chip can do the hashing? Enjoy Jakob -- Jakob Bohm, CIO, Partner, WiseMo A/S. https://www.wisemo.com Transformervej 29, 2860 S?borg, Denmark. Direct +45 31 13 16 10 This public discussion message is non-binding and may contain errors. 
WiseMo - Remote Service Management for PCs, Phones and Embedded From rsalz at akamai.com Wed Feb 27 21:54:41 2019 From: rsalz at akamai.com (Salz, Rich) Date: Wed, 27 Feb 2019 21:54:41 +0000 Subject: AES-cipher offload to engine in openssl-fips In-Reply-To: <2ce4120e-b831-3677-f4b4-e009633854c0@wisemo.com> References: <6D4F8144-786E-4CB2-B6FE-30760520F2F0@safelogic.com> <1551267921689-0.post@n7.nabble.com> <628393B4-CFEE-420C-8D02-33D3F4AC4B29@akamai.com> <87mumhx73x.wl-levitte@openssl.org> <2ce4120e-b831-3677-f4b4-e009633854c0@wisemo.com> Message-ID: <4F9A6435-15A7-4AC6-B071-2C9D9489FD6C@akamai.com> > I always understood "FIPS-capable OpenSSL" to refer specifically to an OpenSSL compiled with the options to incorporate the FIPS canister module, not just any OpenSSL build that might be used in FIPS compliant applications (as that would be any OpenSSL at all). Yes, that is historically correct. I don't believe the project uses the term "FIPS-capable OpenSSL" any more. Instead, the design and such talk about a FIPS module which OpenSSL can use. > I see no reason why libcrypto should be able to load two > FIPS-validated modules (*) and use them both, all depending on what > algorithms and properties are desired (apart from the "fips" > property). Richard made a typo here. He means there is no reason why libcrypto should NOT be able to load two modules. > However, I've come to understand that those two modules > must not be made to cooperate, i.e. for a signing operation using > sha256WithRSAEncryption, it's not permitted for one module to do the > sha256 part and the other module to do the RSA calculations. I believe Richard is wrong here. Or at least his text could be misleading. If the EVP API does the digesting with one module and then calls another module to do the RSA signing, that is okay. If the "digest and sign" module calls out to another module *itself* that is probably not okay. From levitte at openssl.org Wed Feb 27 22:20:24 2019 From: levitte at openssl.org (Richard Levitte) Date: Wed, 27 Feb 2019 23:20:24 +0100 Subject: AES-cipher offload to engine in openssl-fips In-Reply-To: <4F9A6435-15A7-4AC6-B071-2C9D9489FD6C@akamai.com> References: <6D4F8144-786E-4CB2-B6FE-30760520F2F0@safelogic.com> <1551267921689-0.post@n7.nabble.com> <628393B4-CFEE-420C-8D02-33D3F4AC4B29@akamai.com> <87mumhx73x.wl-levitte@openssl.org> <2ce4120e-b831-3677-f4b4-e009633854c0@wisemo.com> <4F9A6435-15A7-4AC6-B071-2C9D9489FD6C@akamai.com> Message-ID: <87lg20yitz.wl-levitte@openssl.org> On Wed, 27 Feb 2019 22:54:41 +0100, Salz, Rich via openssl-users wrote: > > > I always understood "FIPS-capable OpenSSL" to refer specifically to an > OpenSSL compiled with the options to incorporate the FIPS canister > module, not just any OpenSSL build that might be used in FIPS compliant > applications (as that would be any OpenSSL at all). > > Yes, that is historically correct. I don't believe the project uses > the term "FIPS-capable OpenSSL" any more. Instead, the design and > such talk about a FIPS module which OpenSSL can use. Correct. > > I see no reason why libcrypto should be able to load two > > FIPS-validated modules (*) and use them both, all depending on what > > algorithms and properties are desired (apart from the "fips" > > property). > > Richard made a typo here. He means there is no reason why libcrypto > should NOT be able to load two modules. You got it right. Sorry for the confusion I caused. > > However, I've come to understand that those two modules > > must not be made to cooperate, i.e. 
for a signing operation using > > sha256WithRSAEncryption, it's not permitted for one module to do the > > sha256 part and the other module to do the RSA calculations. > > I believe Richard is wrong here. Or at least his text could be > misleading. If the EVP API does the digesting with one module and > then calls another module to do the RSA signing, that is okay. Huh? From the design document, section "Example dynamic views of algorithm selection", after the second diagram: An EVP_DigestSign* operation is more complicated because it involves two algorithms: a signing algorithm, and a digest algorithm. In general those two algorithms may come from different providers or the same one. In the case of the FIPS module the algorithms must both come from the same FIPS module provider. The operation will fail if an attempt is made to do otherwise. Ref: https://www.openssl.org/docs/OpenSSL300Design.html#example-dynamic-views-of-algorithm-selection Cheers, Richard -- Richard Levitte levitte at openssl.org OpenSSL Project http://www.openssl.org/~levitte/ From Matthias.St.Pierre at ncp-e.com Wed Feb 27 22:34:23 2019 From: Matthias.St.Pierre at ncp-e.com (Dr. Matthias St. Pierre) Date: Wed, 27 Feb 2019 22:34:23 +0000 Subject: AW: AES-cipher offload to engine in openssl-fips In-Reply-To: <87lg20yitz.wl-levitte@openssl.org> References: <6D4F8144-786E-4CB2-B6FE-30760520F2F0@safelogic.com> <1551267921689-0.post@n7.nabble.com> <628393B4-CFEE-420C-8D02-33D3F4AC4B29@akamai.com> <87mumhx73x.wl-levitte@openssl.org> <2ce4120e-b831-3677-f4b4-e009633854c0@wisemo.com> <4F9A6435-15A7-4AC6-B071-2C9D9489FD6C@akamai.com> <87lg20yitz.wl-levitte@openssl.org> Message-ID: <38f50252f1e0456592fa33b81a657091@Ex13.ncp.local> > -----Urspr?ngliche Nachricht----- > > > I always understood "FIPS-capable OpenSSL" to refer specifically to an > > OpenSSL compiled with the options to incorporate the FIPS canister > > module, not just any OpenSSL build that might be used in FIPS compliant > > applications (as that would be any OpenSSL at all). > > > > Yes, that is historically correct. I don't believe the project uses > > the term "FIPS-capable OpenSSL" any more. Instead, the design and > > such talk about a FIPS module which OpenSSL can use. > > Correct. I disagree: The term "FIPS Capable OpenSSL" is a technical term from the OpenSSL FIPS 2.0 User Guide (https://www.openssl.org/docs/fips/UserGuide-2.0.pdf) and has a very clear and precise meaning: It refers to an OpenSSL 1.0.2 (or 1.0.1) library configured and built with `./configure fips ...` in order to integrate the FIPS Object Module. Until FIPS 3.0 has been released and FIPS 2.0 is history, we should stick to that definition and not confuse FIPS users by reinterpreting it or pretend that it is not used anymore or has a different meaning nowadays. Matthias -- You find the details in Sections 4.2.3 resp. 4.3.3 of https://www.openssl.org/docs/fips/UserGuide-2.0.pdf. 4.2.3 Building a FIPS Capable OpenSSL (Unix/Linux) 4.3.3 Building a FIPS Capable OpenSSL (Windows) Here a brief excerpt: Once the validated FIPS Object Module has been generated it is usually combined with an OpenSSL distribution in order to provide the standard OpenSSL API. Any 1.0.1 or 1.0.2 release can be used for this purpose. The commands ./config fips <...other options...> make <...options...> make install will build and install the new OpenSSL without overwriting the validated FIPS Object Module files. 
The FIPSDIR environment variable or the --with?fipsdir command line option can be used to explicitly reference the location of the FIPS Object Module (fipscanister.o). The combination of the validated FIPS Object Module plus an OpenSSL distribution built in this way is referred to as a FIPS capable OpenSSL, as it can be used either as a drop-in replacement for a non-FIPS OpenSSL or for use in generating FIPS mode applications. From levitte at openssl.org Wed Feb 27 22:53:53 2019 From: levitte at openssl.org (Richard Levitte) Date: Wed, 27 Feb 2019 23:53:53 +0100 Subject: AES-cipher offload to engine in openssl-fips In-Reply-To: <38f50252f1e0456592fa33b81a657091@Ex13.ncp.local> References: <6D4F8144-786E-4CB2-B6FE-30760520F2F0@safelogic.com> <1551267921689-0.post@n7.nabble.com> <628393B4-CFEE-420C-8D02-33D3F4AC4B29@akamai.com> <87mumhx73x.wl-levitte@openssl.org> <2ce4120e-b831-3677-f4b4-e009633854c0@wisemo.com> <4F9A6435-15A7-4AC6-B071-2C9D9489FD6C@akamai.com> <87lg20yitz.wl-levitte@openssl.org> <38f50252f1e0456592fa33b81a657091@Ex13.ncp.local> Message-ID: <9D93C7E4-7D40-43D1-813B-F57EBA81E2ED@openssl.org> Uhm, I'm confused. I thought we were talking about 3.0? "Dr. Matthias St. Pierre" skrev: (27 februari 2019 23:34:23 CET) > >> -----Urspr?ngliche Nachricht----- >> > > I always understood "FIPS-capable OpenSSL" to refer >specifically to an >> > OpenSSL compiled with the options to incorporate the FIPS >canister >> > module, not just any OpenSSL build that might be used in FIPS >compliant >> > applications (as that would be any OpenSSL at all). >> > >> > Yes, that is historically correct. I don't believe the project >uses >> > the term "FIPS-capable OpenSSL" any more. Instead, the design and >> > such talk about a FIPS module which OpenSSL can use. >> >> Correct. > >I disagree: The term "FIPS Capable OpenSSL" is a technical term from >the OpenSSL FIPS 2.0 >User Guide (https://www.openssl.org/docs/fips/UserGuide-2.0.pdf) and >has a very clear and >precise meaning: > >It refers to an OpenSSL 1.0.2 (or 1.0.1) library configured and built >with `./configure fips ...` >in order to integrate the FIPS Object Module. Until FIPS 3.0 has been >released and FIPS 2.0 >is history, we should stick to that definition and not confuse FIPS >users by reinterpreting it >or pretend that it is not used anymore or has a different meaning >nowadays. > >Matthias > >-- > >You find the details in Sections 4.2.3 resp. 4.3.3 of >https://www.openssl.org/docs/fips/UserGuide-2.0.pdf. > > 4.2.3 Building a FIPS Capable OpenSSL (Unix/Linux) > 4.3.3 Building a FIPS Capable OpenSSL (Windows) > >Here a brief excerpt: > >Once the validated FIPS Object Module has been generated it is usually >combined with an >OpenSSL distribution in order to provide the standard OpenSSL API. Any >1.0.1 or 1.0.2 release >can be used for this purpose. The commands > ./config fips <...other options...> > make <...options...> > make install >will build and install the new OpenSSL without overwriting the >validated FIPS Object Module >files. The FIPSDIR environment variable or the --with?fipsdir command >line option can >be used to explicitly reference the location of the FIPS Object Module >(fipscanister.o). > >The combination of the validated FIPS Object Module plus an OpenSSL >distribution built in this >way is referred to as a FIPS capable OpenSSL, as it can be used either >as a drop-in replacement for >a non-FIPS OpenSSL or for use in generating FIPS mode applications. -- Skickat fr?n min Android-enhet med K-9 Mail. Urs?kta min f?ordighet. 
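(For reference: with the FIPS 2.0 module, an application built against such a FIPS capable OpenSSL 1.0.2 typically switches the library into FIPS mode at startup roughly as in this minimal sketch; the helper name is invented, only the OpenSSL calls are real.)

#include <stdio.h>
#include <openssl/crypto.h>
#include <openssl/err.h>

static int enter_fips_mode(void)        /* illustrative helper name */
{
    if (FIPS_mode())                    /* already in FIPS mode */
        return 1;

    /* FIPS_mode_set(1) runs the module's power-up self tests and fails
     * if the FIPS Object Module is missing or a self test fails. */
    if (!FIPS_mode_set(1)) {
        ERR_print_errors_fp(stderr);
        return 0;
    }
    return 1;
}
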
From rsalz at akamai.com Wed Feb 27 23:17:13 2019 From: rsalz at akamai.com (Salz, Rich) Date: Wed, 27 Feb 2019 23:17:13 +0000 Subject: AES-cipher offload to engine in openssl-fips In-Reply-To: <87lg20yitz.wl-levitte@openssl.org> References: <6D4F8144-786E-4CB2-B6FE-30760520F2F0@safelogic.com> <1551267921689-0.post@n7.nabble.com> <628393B4-CFEE-420C-8D02-33D3F4AC4B29@akamai.com> <87mumhx73x.wl-levitte@openssl.org> <2ce4120e-b831-3677-f4b4-e009633854c0@wisemo.com> <4F9A6435-15A7-4AC6-B071-2C9D9489FD6C@akamai.com> <87lg20yitz.wl-levitte@openssl.org> Message-ID: <5DA853F6-8438-4FE7-9E73-FD7895A3D5E8@akamai.com> > Huh? From the design document, section "Example dynamic views of algorithm selection", after the second diagram: An EVP_DigestSign* operation is more complicated because it involves two algorithms: a signing algorithm, and a digest algorithm. In general those two algorithms may come from different providers or the same one. In the case of the FIPS module the algorithms must both come from the same FIPS module provider. The operation will fail if an attempt is made to do otherwise. There are two options. First, the application does the digest and sign as two separate things. Second, the provider implementing digestSign has to be validated to use the other FIPS module. From Matthias.St.Pierre at ncp-e.com Wed Feb 27 23:51:24 2019 From: Matthias.St.Pierre at ncp-e.com (Dr. Matthias St. Pierre) Date: Wed, 27 Feb 2019 23:51:24 +0000 Subject: AW: AES-cipher offload to engine in openssl-fips In-Reply-To: <9D93C7E4-7D40-43D1-813B-F57EBA81E2ED@openssl.org> References: <6D4F8144-786E-4CB2-B6FE-30760520F2F0@safelogic.com> <1551267921689-0.post@n7.nabble.com> <628393B4-CFEE-420C-8D02-33D3F4AC4B29@akamai.com> <87mumhx73x.wl-levitte@openssl.org> <2ce4120e-b831-3677-f4b4-e009633854c0@wisemo.com> <4F9A6435-15A7-4AC6-B071-2C9D9489FD6C@akamai.com> <87lg20yitz.wl-levitte@openssl.org> <38f50252f1e0456592fa33b81a657091@Ex13.ncp.local> <9D93C7E4-7D40-43D1-813B-F57EBA81E2ED@openssl.org> Message-ID: > Uhm, I'm confused. I thought we were talking about 3.0? Well, the original post started at FIPS 2.0: > I am using openssl-fips-2.0.16 and openssl-1.0.2e. https://mta.openssl.org/pipermail/openssl-users/2019-February/009919.html But it seems like the discussion in the thread has drifted a little towards the FIPS 3.0 future, which explains our mutual confusion. For that reason it is even more important that we don't use legacy terms like "FIPS capable" in the context of FIPS 3.0 and stick to "FIPS Providers" (or whatever correct new terms are; I'm currently not 100% up-to-date) instead. Matthias From mcr at sandelman.ca Wed Feb 27 23:57:12 2019 From: mcr at sandelman.ca (Michael Richardson) Date: Wed, 27 Feb 2019 18:57:12 -0500 Subject: shared libraries vs test cases In-Reply-To: <87sgw9xjgn.wl-levitte@openssl.org> References: <31242.1551283449@localhost> <87sgw9xjgn.wl-levitte@openssl.org> Message-ID: <29269.1551311832@localhost> Richard Levitte wrote: >> Running LDD on the binaries in test/* shows that they appear to link against >> the "system" copies of libssl and libcrypto. >> >> Perhaps something I'm missing is setting up LD_PRELOAD or some such so that >> the tests run the local copy of libssl/libcrypto, but I can't find that. >> Am I missing something? >> >> This is, I think, making it very difficult for me to bisect a problem. >> >> It seems to me that the test cases ought to be statically linked to make >> it easiest to know what code they are running. 
(This also makes it slightly >> easier to use gdb on them) > There's a script called util/shlib_wrap.sh that you place first on the > command line: > ./util/shlib_wrap.sh test/whatevertest > ./util/shlib_wrap.sh ldd test/whatevertest > ./util/shlib_wrap.sh gdb test/whatevertest And another email says that this is done by default for "make test". -- ] Never tell me the odds! | ipv6 mesh networks [ ] Michael Richardson, Sandelman Software Works | IoT architect [ ] mcr at sandelman.ca http://www.sandelman.ca/ | ruby on rails [ -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 487 bytes Desc: not available URL: From Maxime.Torrelli at conduent.com Thu Feb 28 00:17:55 2019 From: Maxime.Torrelli at conduent.com (Torrelli, Maxime) Date: Thu, 28 Feb 2019 00:17:55 +0000 Subject: OpenSSL 1.1.1b for WinCE700 In-Reply-To: References: Message-ID: Thank you very much for your answer. At least now I know what to except from the generated makefile ! What do you think of this : could I try to adapt the makefile for 1.0.2 (which is compiling for 1.0.2) to the 1.1.1 release ? Is the difference between the 2 versions really big ? Greetings, Maxime TORRELLI Embedded Software Engineer Conduent Conduent Business Solutions (France) 1 rue Claude Chappe - BP 345 07503 Guilherand Granges Cedex -----Message d'origine----- De?: openssl-users De la part de Matt Caswell Envoy??: 27 February 2019 18:45 ??: openssl-users at openssl.org Objet?: Re: OpenSSL 1.1.1b for WinCE700 On 27/02/2019 17:22, Torrelli, Maxime wrote: > Hello, > > ? > > Sorry to send you again an email about the same subject but I really > need some help on this topic. I will try to give as much information I can. > > ? > > I am using WCECOMPAT tool to compile OpenSSL 1.1.1b for WINCE700 on a > ARMV4I CPU. We have to do this because the Long Time Support of > OpenSSL 1.0.2 is ending in December 2019. > *_Is VC-CE platform still supported ?_* > I can't answer your main question but can attempt this one. VC-CE is not a primary or a secondary supported platform: https://na01.safelinks.protection.outlook.com/?url=https%3A%2F%2Fwww.openssl.org%2Fpolicies%2Fplatformpolicy.html&data=02%7C01%7Cmaxime.torrelli%40conduent.com%7Ccaa2fb03f9cd49e1259b08d69cdb54e9%7C1aed4588b8ce43a8a775989538fd30d8%7C0%7C0%7C636868863174939886&sdata=RPyJsrS3T%2B5rkxxhFFlH2lRqxzIX1ool94a0CpzCeXo%3D&reserved=0 Support has not been *removed* and we've not done anything to actively break it, but AFAIK no one on the dev team has access to that platform. Which puts it in the "Unknown" classification (or possibly "Community"). Matt > ? > > If so you will find below what I did : > > ? > > My computer : Windows 7 Enterprise N (32 bits) > > Visual Studio 2008 Professional Edition + Windows Embedded Compact > 7.5.2884.0 > > ? > > *I.??????????????????? 
**WCECOMPAT Compilation* > > set LIB=C:\Program Files\Windows CE Tools\SDKs\SDK WEC7 for VPE420 > device\Lib\ARMV4I;C:\Program Files\Microsoft > SDKs\Windows\v6.0A\Lib;C:\Program Files\Microsoft Visual Studio > 9.0\VC\ce\lib\ARMV4I;C:\Program Files\Microsoft Visual Studio > 9.0\VC\lib > > set INCLUDE=C:\Program Files\Windows CE Tools\SDKs\SDK WEC7 for VPE420 > device\Include\ARMV4I > > set OSVERSION=WCE700 > > set PLATFORM=VC-CE > > set TARGETCPU=ARMV4I > > set Path=C:\Program Files\Microsoft Visual Studio > 9.0\Common7\IDE;C:\Program Files\Microsoft Visual Studio > 9.0\VC\ce\bin\x86_arm;%Path% > > set LIBPATH="C:\Program Files\Windows CE Tools\SDKs\SDK WEC7 for > VPE420 device\Lib\ARMV4I";C:\Program Files\Microsoft Visual Studio > 9.0\VC\lib; > > ? > > (my WINCE700 SDK is called "SDK WEC7 for VPE420 device") > > ? > > In a command prompt : > > -????????? Perl config.pl > > -????????? Nmake -f makefile > > ? > > The compilation is a success. > > ? > > *II.????????????????? **OpenSSL Compilation* > > ? > > The I open another command prompt in the openssl-1.1.1b folder. > > ? > > ??????? set OSVERSION=WCE700 > > ??????? set PLATFORM=VC-CE > > ??????? set TARGETCPU=ARMV4I > > ??????? set LIB=C:\Program Files\Windows CE Tools\SDKs\SDK WEC7 for > VPE420 device\Lib\ARMV4I;C:\Program Files\Microsoft > SDKs\Windows\v6.0A\Lib;C:\Program Files\Microsoft Visual Studio > 9.0\VC\ce\lib\ARMV4I;C:\Program Files\Microsoft Visual Studio > 9.0\VC\lib > > ??????? set INCLUDE=C:\Program Files\Windows CE Tools\SDKs\SDK WEC7 > for VPE420 device\Include\ARMV4I;C:\Program Files\Microsoft Visual > Studio 9.0\VC\atlmfc\include;C:\Program Files\Microsoft Visual Studio > 9.0\VC\INCLUDE;C:\Program Files\Microsoft SDKs\Windows\v6.0A\include; > > ??????? set Path=C:\Program Files\Microsoft Visual Studio > 9.0\Common7\IDE;C:\Program Files\Microsoft Visual Studio > 9.0\VC\ce\bin\x86_arm;%Path% > > ??????? set LIBPATH=C:\Program Files\Windows CE Tools\SDKs\SDK WEC7 > for VPE420 device\Lib\ARMV4I;C:\Program Files\Microsoft Visual Studio > 9.0\VC\lib;C:\Program Files\Microsoft Visual Studio > 9.0\VC\ce\lib\ARMV4I; > > ??????? set WCECOMPAT=../wcecompat > > ? > > ??????? perl Configure no-idea no-mdc2 no-rc5 no-asm no-ssl2 no-ssl3 > VC-CE > > ? > > ??????? nmake > > ? > > The output is the following : > > ? > > *Microsoft (R) Program Maintenance Utility Version 9.00.30729.01* > > *Copyright (C) Microsoft Corporation.? All rights reserved.* > > *?* > > *?* > > *Microsoft (R) Program Maintenance Utility Version 9.00.30729.01* > > *Copyright (C) Microsoft Corporation.? All rights reserved.* > > *?* > > *?* > > *Microsoft (R) Program Maintenance Utility Version 9.00.30729.01* > > *Copyright (C) Microsoft Corporation.? All rights reserved.* > > ? > > *????????????? "C:\Strawberry\perl\bin\perl.exe" "-I." -Mconfigdata > "util\dofile.pl"? "-omakefile" "crypto\include\internal\bn_conf.h.in" > > > crypto\include\internal\bn_conf.h* > > *????????????? "C:\Strawberry\perl\bin\perl.exe" "-I." -Mconfigdata > "util\dofile.pl"? "-omakefile" "crypto\include\internal\dso_conf.h.in" > > > crypto\include\internal\dso_conf.h* > > *????????????? "C:\Strawberry\perl\bin\perl.exe" "-I." -Mconfigdata > "util\dofile.pl"? "-omakefile" "include\openssl\opensslconf.h.in" > > include\openssl\opensslconf.h* > > *????????????? "C:\Program Files\Microsoft Visual Studio 9.0\VC\BIN\nmake.exe" > /?????????????????? depend && "C:\Program Files\Microsoft Visual > Studio 9.0\VC\BIN\nmake.exe" /?????????????????? _all* > > *????????????? cl? 
/Zi /Fdossl_static.pdb /GF /Gy? /MD /W3 /wd4090 > /nologo /O1i /I "." /I "include" -D_WIN32_WCE=700 -DUNDER_CE=700 > -DWCE_PLATFORM_VC-CE -DARM -D_ARM_ -DARMV4I -QRarch4T -QRinterwork-return -D"L_ENDIAN" -D"NO_CHMOD" > -D"OPENSSL_SMALL_FOOTPRINT" -D"OPENSSL_PIC" > -D"OPENSSLDIR=\"C:\\Program Files\\Common Files\\SSL\"" > -D"ENGINESDIR=\"C:\\Program Files\\OpenSSL\\lib\\engines-1_1\"" > -D_WIN32_WCE=700 -DUNDER_CE=700 -DWCE_PLATFORM_VC-CE -DARM -D_ARM_ > -DARMV4I -QRarch4T -QRinterwork-return -D"OPENSSL_SYS_WIN32" -D"WIN32_LEAN_AND_MEAN" -D"UNICODE" -D"_UNICODE" > -D"_CRT_SECURE_NO_DEPRECATE" -D"_WINSOCK_DEPRECATED_NO_WARNINGS" -D"NDEBUG" > -I"\../wcecompat/include"? -c /Foapps\app_rand.obj "apps\app_rand.c"* > > *app_rand.c* > > *C:\openssl-1.1.1b\e_os.h(287) : warning C4005: 'open' : macro > redefinition* > > *??????? C:\wcecompat\include\io.h(43) : see previous definition of > 'open'* > > *C:\openssl-1.1.1b\e_os.h(289) : warning C4005: 'close' : macro > redefinition* > > *??????? C:\wcecompat\include\io.h(45) : see previous definition of > 'close'* > > *C:\openssl-1.1.1b\e_os.h(293) : warning C4005: 'unlink' : macro > redefinition* > > *??????? C:\wcecompat\include\io.h(50) : see previous definition of > 'unlink'* > > *????????????? cl? /Zi /Fdossl_static.pdb /GF /Gy? /MD /W3 /wd4090 > /nologo /O1i /I "." /I "include" -D_WIN32_WCE=700 -DUNDER_CE=700 > -DWCE_PLATFORM_VC-CE -DARM -D_ARM_ -DARMV4I -QRarch4T -QRinterwork-return -D"L_ENDIAN" -D"NO_CHMOD" > -D"OPENSSL_SMALL_FOOTPRINT" -D"OPENSSL_PIC" > -D"OPENSSLDIR=\"C:\\Program Files\\Common Files\\SSL\"" > -D"ENGINESDIR=\"C:\\Program Files\\OpenSSL\\lib\\engines-1_1\"" > -D_WIN32_WCE=700 -DUNDER_CE=700 -DWCE_PLATFORM_VC-CE -DARM -D_ARM_ > -DARMV4I -QRarch4T -QRinterwork-return -D"OPENSSL_SYS_WIN32" -D"WIN32_LEAN_AND_MEAN" -D"UNICODE" -D"_UNICODE" > -D"_CRT_SECURE_NO_DEPRECATE" -D"_WINSOCK_DEPRECATED_NO_WARNINGS" -D"NDEBUG" > -I"\../wcecompat/include"? /Zs /showIncludes "apps\app_rand.c" 2>&1 > > apps\app_rand.d* > > *????????????? cl? /Zi /Fdossl_static.pdb /GF /Gy? /MD /W3 /wd4090 > /nologo /O1i /I "." /I "include" -D_WIN32_WCE=700 -DUNDER_CE=700 > -DWCE_PLATFORM_VC-CE -DARM -D_ARM_ -DARMV4I -QRarch4T -QRinterwork-return -D"L_ENDIAN" -D"NO_CHMOD" > -D"OPENSSL_SMALL_FOOTPRINT" -D"OPENSSL_PIC" > -D"OPENSSLDIR=\"C:\\Program Files\\Common Files\\SSL\"" > -D"ENGINESDIR=\"C:\\Program Files\\OpenSSL\\lib\\engines-1_1\"" > -D_WIN32_WCE=700 -DUNDER_CE=700 -DWCE_PLATFORM_VC-CE -DARM -D_ARM_ > -DARMV4I -QRarch4T -QRinterwork-return -D"OPENSSL_SYS_WIN32" -D"WIN32_LEAN_AND_MEAN" -D"UNICODE" -D"_UNICODE" > -D"_CRT_SECURE_NO_DEPRECATE" -D"_WINSOCK_DEPRECATED_NO_WARNINGS" -D"NDEBUG" > -I"\../wcecompat/include"? -c /Foapps\apps.obj "apps\apps.c"* > > *apps.c* > > *C:\openssl-1.1.1b\e_os.h(287) : warning C4005: 'open' : macro > redefinition* > > *??????? C:\wcecompat\include\io.h(43) : see previous definition of > 'open'* > > *C:\openssl-1.1.1b\e_os.h(289) : warning C4005: 'close' : macro > redefinition* > > *??????? C:\wcecompat\include\io.h(45) : see previous definition of > 'close'* > > *C:\openssl-1.1.1b\e_os.h(293) : warning C4005: 'unlink' : macro > redefinition* > > *??????? 
C:\wcecompat\include\io.h(50) : see previous definition of > 'unlink'* > > *apps\apps.c(2596) : warning C4013: '_fdopen' undefined; assuming > extern returning int* > > *apps\apps.c(2596) : warning C4047: '=' : 'FILE *' differs in levels > of indirection from 'int'* > > *apps\apps.c(2614) : warning C4013: '_close' undefined; assuming > extern returning int* > > *apps\apps.c(2696) : warning C4013: 'GetStdHandle' undefined; assuming > extern returning int* > > *apps\apps.c(2696) : error C2065: 'STD_INPUT_HANDLE' : undeclared > identifier* > > *apps\apps.c(2696) : warning C4047: 'initializing' : 'HANDLE' differs > in levels of indirection from 'int'* > > *apps\apps.c(2698) : error C2065: 'INPUT_RECORD' : undeclared > identifier* > > *apps\apps.c(2698) : error C2146: syntax error : missing ';' before > identifier > 'inputrec'* > > *apps\apps.c(2698) : error C2065: 'inputrec' : undeclared identifier* > > *apps\apps.c(2699) : error C2275: 'DWORD' : illegal use of this type > as an > expression* > > *??????? C:\Program Files\Windows CE Tools\SDKs\SDK WEC7 for VPE420 > device\Include\ARMV4I\windef.h(161) : see declaration of 'DWORD'* > > *apps\apps.c(2699) : error C2146: syntax error : missing ';' before > identifier > 'insize'* > > *apps\apps.c(2699) : error C2065: 'insize' : undeclared identifier* > > *apps\apps.c(2700) : error C2275: 'BOOL' : illegal use of this type as > an > expression* > > *??????? C:\Program Files\Windows CE Tools\SDKs\SDK WEC7 for VPE420 > device\Include\ARMV4I\windef.h(162) : see declaration of 'BOOL'* > > *apps\apps.c(2700) : error C2146: syntax error : missing ';' before > identifier > 'peeked'* > > *apps\apps.c(2700) : error C2065: 'peeked' : undeclared identifier* > > *apps\apps.c(2706) : error C2065: 'peeked' : undeclared identifier* > > *apps\apps.c(2706) : warning C4013: 'PeekConsoleInput' undefined; > assuming extern returning int* > > *apps\apps.c(2706) : error C2065: 'inputrec' : undeclared identifier* > > *apps\apps.c(2706) : error C2065: 'insize' : undeclared identifier* > > *apps\apps.c(2707) : error C2065: 'peeked' : undeclared identifier* > > *?* > > *NMAKE : fatal error U1077: '"C:\Program Files\Microsoft Visual Studio > 9.0\VC\ce\* > > *bin\x86_arm\cl.EXE"' : return code '0x2'* > > *Stop.* > > *NMAKE : fatal error U1077: '"C:\Program Files\Microsoft Visual Studio > 9.0\VC\BIN* > > *\nmake.exe"' : return code '0x2'* > > *Stop.* > > *?* > > Any guess or hint would be much appreciated. > > ? > > ? > > Greetings, > > *?* > > *Maxime TORRELLI* > > Embedded Software Engineer > > ? > > *Conduent* > > Conduent Business Solutions (France) > > 1 rue Claude Chappe - BP 345 > 07503 Guilherand Granges Cedex > > ? > From mksarav at gmail.com Thu Feb 28 02:35:07 2019 From: mksarav at gmail.com (M K Saravanan) Date: Thu, 28 Feb 2019 10:35:07 +0800 Subject: CVE-2019-1559 advisory - what is "non-stiched" ciphersuite means? In-Reply-To: <21bb2462-2ac4-6bf6-6811-a06ed9fbb921@enkore.de> References: <21bb2462-2ac4-6bf6-6811-a06ed9fbb921@enkore.de> Message-ID: Thanks Marian for the clarification. After your email, I also read the https://github.com/RUB-NDS/TLS-Padding-Oracles and found https://software.intel.com/en-us/articles/improving-openssl-performance#_Toc416943485 with regards, Saravanan On Wed, 27 Feb 2019 at 17:26, Marian Beermann wrote: > > "Stitching" is an optimization where you have algorithm A (e.g. AES-CBC) > and algorithm B (e.g. HMAC-SHA2) working on the same data, and you > interleave the instructions of A and B. 
(This can improve performance by > increasing port and EU utilization relative to running A and B > sequentially). > > I believe OpenSSL uses stitched implementations in TLS for AES-CBC + > HMAC-SHA1/2, if they exist for the platform. > > Also note that "AEAD ciphersuites are not impacted", i.e. AES-GCM and > ChaPoly are not impacted. > > Cheers, Marian > > Am 27.02.19 um 03:56 schrieb M K Saravanan: > > Hi, > > > > In the context of https://www.openssl.org/news/secadv/20190226.txt > > > > ====== > > In order for this to be exploitable "non-stitched" ciphersuites must be in use. > > ====== > > > > what is "non-stitched" ciphersuites means? > > > > with regards, > > Saravanan > > > From levitte at openssl.org Thu Feb 28 08:15:02 2019 From: levitte at openssl.org (Richard Levitte) Date: Thu, 28 Feb 2019 09:15:02 +0100 Subject: AES-cipher offload to engine in openssl-fips In-Reply-To: <5DA853F6-8438-4FE7-9E73-FD7895A3D5E8@akamai.com> References: <6D4F8144-786E-4CB2-B6FE-30760520F2F0@safelogic.com> <1551267921689-0.post@n7.nabble.com> <628393B4-CFEE-420C-8D02-33D3F4AC4B29@akamai.com> <87mumhx73x.wl-levitte@openssl.org> <2ce4120e-b831-3677-f4b4-e009633854c0@wisemo.com> <4F9A6435-15A7-4AC6-B071-2C9D9489FD6C@akamai.com> <87lg20yitz.wl-levitte@openssl.org> <5DA853F6-8438-4FE7-9E73-FD7895A3D5E8@akamai.com> Message-ID: <87imx4xrax.wl-levitte@openssl.org> On Thu, 28 Feb 2019 00:17:13 +0100, Salz, Rich wrote: > > > Huh? From the design document, section "Example dynamic views of > algorithm selection", after the second diagram: > > An EVP_DigestSign* operation is more complicated because it > involves two algorithms: a signing algorithm, and a digest > algorithm. In general those two algorithms may come from different > providers or the same one. In the case of the FIPS module the > algorithms must both come from the same FIPS module provider. The > operation will fail if an attempt is made to do otherwise. > > There are two options. First, the application does the digest and > sign as two separate things. My memory is a foggy surrounding that scenario, so I might be wrong, but I think it was argued that this was invalid use from a FIPS perspective. Now, we can't actually stop any application from doing this, sure! But... > Second, the provider implementing digestSign has to be validated to > use the other FIPS module. Yes, and this is, as far as I remember, a "combined FIPS module" (I don't remember the exact terminology, sorry) which is supposed to be validated together and present itself to libcrypto as one provider, not two. However, what you wrote earlier was this: > If the EVP API does the digesting with one module and then calls > another module to do the RSA signing, that is okay. That suggests to me that libcrypto could "magically" combine two different FIPS providers, which would be none of the two options mentioned above. 
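For reference, the "two separate things" flow looks roughly like this with the EVP API (a minimal sketch only: error checks are omitted, and the caller is assumed to supply an already-loaded EVP_PKEY plus a large enough signature buffer); whether both halves end up serviced by the same FIPS provider is exactly the question here:

    #include <openssl/evp.h>

    static int digest_then_sign(EVP_PKEY *pkey,
                                const unsigned char *msg, size_t msglen,
                                unsigned char *sig, size_t *siglen)
    {
        unsigned char md[EVP_MAX_MD_SIZE];
        unsigned int mdlen = 0;
        EVP_MD_CTX *mctx = EVP_MD_CTX_new();
        EVP_PKEY_CTX *pctx = EVP_PKEY_CTX_new(pkey, NULL);

        /* Step 1: digest the data as a standalone operation */
        EVP_DigestInit_ex(mctx, EVP_sha256(), NULL);
        EVP_DigestUpdate(mctx, msg, msglen);
        EVP_DigestFinal_ex(mctx, md, &mdlen);

        /* Step 2: sign the raw digest as a separate EVP_PKEY operation
         * (*siglen must hold the size of the sig buffer on input) */
        EVP_PKEY_sign_init(pctx);
        EVP_PKEY_CTX_set_signature_md(pctx, EVP_sha256());
        EVP_PKEY_sign(pctx, sig, siglen, md, mdlen);

        EVP_MD_CTX_free(mctx);
        EVP_PKEY_CTX_free(pctx);
        return 1;
    }

An EVP_DigestSign* call collapses both steps into a single operation, which is where the single-provider restriction comes into play.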
Cheers, Richard -- Richard Levitte levitte at openssl.org OpenSSL Project http://www.openssl.org/~levitte/ From levitte at openssl.org Thu Feb 28 08:27:57 2019 From: levitte at openssl.org (Richard Levitte) Date: Thu, 28 Feb 2019 09:27:57 +0100 Subject: AW: AES-cipher offload to engine in openssl-fips In-Reply-To: References: <6D4F8144-786E-4CB2-B6FE-30760520F2F0@safelogic.com> <1551267921689-0.post@n7.nabble.com> <628393B4-CFEE-420C-8D02-33D3F4AC4B29@akamai.com> <87mumhx73x.wl-levitte@openssl.org> <2ce4120e-b831-3677-f4b4-e009633854c0@wisemo.com> <4F9A6435-15A7-4AC6-B071-2C9D9489FD6C@akamai.com> <87lg20yitz.wl-levitte@openssl.org> <38f50252f1e0456592fa33b81a657091@Ex13.ncp.local> <9D93C7E4-7D40-43D1-813B-F57EBA81E2ED@openssl.org> Message-ID: <87h8coxqpe.wl-levitte@openssl.org> On Thu, 28 Feb 2019 00:51:24 +0100, Dr. Matthias St. Pierre wrote: > > > > Uhm, I'm confused. I thought we were talking about 3.0? > > Well, the original post started at FIPS 2.0: > > > I am using openssl-fips-2.0.16 and openssl-1.0.2e. > https://mta.openssl.org/pipermail/openssl-users/2019-February/009919.html Yes, it did... and then evolved, as threads on the Internet often do (or for that matter, in physical life too). > But it seems like the discussion in the thread has drifted a little > towards the FIPS 3.0 future, which explains our mutual confusion. Yup :-) > For that reason it is even more important that we don't use legacy > terms like "FIPS capable" in the context of FIPS 3.0 and stick to > "FIPS Providers" (or whatever correct new terms are; I'm currently > not 100% up-to-date) instead. Cool, we agree then :-) "FIPS provider" is what we use within the team, or sometimes "FIPS provider module". They are synonymous, but the latter is more precise. Cheers, Richard -- Richard Levitte levitte at openssl.org OpenSSL Project http://www.openssl.org/~levitte/ From matt at openssl.org Thu Feb 28 09:32:12 2019 From: matt at openssl.org (Matt Caswell) Date: Thu, 28 Feb 2019 09:32:12 +0000 Subject: AES-cipher offload to engine in openssl-fips In-Reply-To: <87lg20yitz.wl-levitte@openssl.org> References: <6D4F8144-786E-4CB2-B6FE-30760520F2F0@safelogic.com> <1551267921689-0.post@n7.nabble.com> <628393B4-CFEE-420C-8D02-33D3F4AC4B29@akamai.com> <87mumhx73x.wl-levitte@openssl.org> <2ce4120e-b831-3677-f4b4-e009633854c0@wisemo.com> <4F9A6435-15A7-4AC6-B071-2C9D9489FD6C@akamai.com> <87lg20yitz.wl-levitte@openssl.org> Message-ID: <20cda107-dc55-84b4-725a-60066885ca6f@openssl.org> On 27/02/2019 22:20, Richard Levitte wrote: >> I believe Richard is wrong here. Or at least his text could be >> misleading. If the EVP API does the digesting with one module and >> then calls another module to do the RSA signing, that is okay. > > Huh? From the design document, section "Example dynamic views of > algorithm selection", after the second diagram: > > An EVP_DigestSign* operation is more complicated because it > involves two algorithms: a signing algorithm, and a digest > algorithm. In general those two algorithms may come from different > providers or the same one. In the case of the FIPS module the > algorithms must both come from the same FIPS module provider. The > operation will fail if an attempt is made to do otherwise. > > Ref: https://www.openssl.org/docs/OpenSSL300Design.html#example-dynamic-views-of-algorithm-selection Also from the design document: "Once in a FIPS module provided algorithm, we must remain within the FIPS module for any other cryptographic operations. 
It would be allowed by the FIPS rules for one FIPS module to use another FIPS module. However, for the purposes of the 3.0 design we are making the simplifying assumption that we will not allow this. For example an EVP_DigestSign* implementation uses both a signing algorithm and digest algorithm. We will not allow one of those algorithms to come from the FIPS module, and one to come from some other provider." Note the the text Richard quotes above talks about *the* FIPS module - i.e. it is in specific reference to our FIPS module. It is not making a general statement about the FIPS rules. In general, my understanding is that it is ok for one FIPS module to do signing and another one to do digesting. However we are making the simplifying assumption that in *our* FIPS module we will not allow this. Matt From matt at openssl.org Thu Feb 28 09:47:29 2019 From: matt at openssl.org (Matt Caswell) Date: Thu, 28 Feb 2019 09:47:29 +0000 Subject: OpenSSL 1.1.1b for WinCE700 In-Reply-To: References: Message-ID: On 28/02/2019 00:17, Torrelli, Maxime wrote: > Thank you very much for your answer. At least now I know what to except from the generated makefile ! > > What do you think of this : could I try to adapt the makefile for 1.0.2 (which is compiling for 1.0.2) to the 1.1.1 release ? Is the difference between the 2 versions really big ? We would welcome patches for master and 1.1.1 for this platform. However the build system was completely rewritten in 1.1.0 so it will not be a simple case of copying Makefile changes from one branch to another. In addition it is probable that any fixes may extend beyond the build system itself and into the C code - because there are many significant internal changes between 1.0.2 and 1.1.1. Matt > > > Greetings, > > Maxime TORRELLI > Embedded Software Engineer > > Conduent > Conduent Business Solutions (France) > 1 rue Claude Chappe - BP 345 > 07503 Guilherand Granges Cedex > > -----Message d'origine----- > De?: openssl-users De la part de Matt Caswell > Envoy??: 27 February 2019 18:45 > ??: openssl-users at openssl.org > Objet?: Re: OpenSSL 1.1.1b for WinCE700 > > > > On 27/02/2019 17:22, Torrelli, Maxime wrote: >> Hello, >> >> ? >> >> Sorry to send you again an email about the same subject but I really >> need some help on this topic. I will try to give as much information I can. >> >> ? >> >> I am using WCECOMPAT tool to compile OpenSSL 1.1.1b for WINCE700 on a >> ARMV4I CPU. We have to do this because the Long Time Support of >> OpenSSL 1.0.2 is ending in December 2019. >> *_Is VC-CE platform still supported ?_* >> > > I can't answer your main question but can attempt this one. VC-CE is not a primary or a secondary supported platform: > > https://na01.safelinks.protection.outlook.com/?url=https%3A%2F%2Fwww.openssl.org%2Fpolicies%2Fplatformpolicy.html&data=02%7C01%7Cmaxime.torrelli%40conduent.com%7Ccaa2fb03f9cd49e1259b08d69cdb54e9%7C1aed4588b8ce43a8a775989538fd30d8%7C0%7C0%7C636868863174939886&sdata=RPyJsrS3T%2B5rkxxhFFlH2lRqxzIX1ool94a0CpzCeXo%3D&reserved=0 > > Support has not been *removed* and we've not done anything to actively break it, but AFAIK no one on the dev team has access to that platform. Which puts it in the "Unknown" classification (or possibly "Community"). > > Matt > >> ? >> >> If so you will find below what I did : >> >> ? >> >> My computer : Windows 7 Enterprise N (32 bits) >> >> Visual Studio 2008 Professional Edition + Windows Embedded Compact >> 7.5.2884.0 >> >> ? >> >> *I.??????????????????? 
**WCECOMPAT Compilation* >> >> set LIB=C:\Program Files\Windows CE Tools\SDKs\SDK WEC7 for VPE420 >> device\Lib\ARMV4I;C:\Program Files\Microsoft >> SDKs\Windows\v6.0A\Lib;C:\Program Files\Microsoft Visual Studio >> 9.0\VC\ce\lib\ARMV4I;C:\Program Files\Microsoft Visual Studio >> 9.0\VC\lib >> >> set INCLUDE=C:\Program Files\Windows CE Tools\SDKs\SDK WEC7 for VPE420 >> device\Include\ARMV4I >> >> set OSVERSION=WCE700 >> >> set PLATFORM=VC-CE >> >> set TARGETCPU=ARMV4I >> >> set Path=C:\Program Files\Microsoft Visual Studio >> 9.0\Common7\IDE;C:\Program Files\Microsoft Visual Studio >> 9.0\VC\ce\bin\x86_arm;%Path% >> >> set LIBPATH="C:\Program Files\Windows CE Tools\SDKs\SDK WEC7 for >> VPE420 device\Lib\ARMV4I";C:\Program Files\Microsoft Visual Studio >> 9.0\VC\lib; >> >> ? >> >> (my WINCE700 SDK is called "SDK WEC7 for VPE420 device") >> >> ? >> >> In a command prompt : >> >> -????????? Perl config.pl >> >> -????????? Nmake -f makefile >> >> ? >> >> The compilation is a success. >> >> ? >> >> *II.????????????????? **OpenSSL Compilation* >> >> ? >> >> The I open another command prompt in the openssl-1.1.1b folder. >> >> ? >> >> ??????? set OSVERSION=WCE700 >> >> ??????? set PLATFORM=VC-CE >> >> ??????? set TARGETCPU=ARMV4I >> >> ??????? set LIB=C:\Program Files\Windows CE Tools\SDKs\SDK WEC7 for >> VPE420 device\Lib\ARMV4I;C:\Program Files\Microsoft >> SDKs\Windows\v6.0A\Lib;C:\Program Files\Microsoft Visual Studio >> 9.0\VC\ce\lib\ARMV4I;C:\Program Files\Microsoft Visual Studio >> 9.0\VC\lib >> >> ??????? set INCLUDE=C:\Program Files\Windows CE Tools\SDKs\SDK WEC7 >> for VPE420 device\Include\ARMV4I;C:\Program Files\Microsoft Visual >> Studio 9.0\VC\atlmfc\include;C:\Program Files\Microsoft Visual Studio >> 9.0\VC\INCLUDE;C:\Program Files\Microsoft SDKs\Windows\v6.0A\include; >> >> ??????? set Path=C:\Program Files\Microsoft Visual Studio >> 9.0\Common7\IDE;C:\Program Files\Microsoft Visual Studio >> 9.0\VC\ce\bin\x86_arm;%Path% >> >> ??????? set LIBPATH=C:\Program Files\Windows CE Tools\SDKs\SDK WEC7 >> for VPE420 device\Lib\ARMV4I;C:\Program Files\Microsoft Visual Studio >> 9.0\VC\lib;C:\Program Files\Microsoft Visual Studio >> 9.0\VC\ce\lib\ARMV4I; >> >> ??????? set WCECOMPAT=../wcecompat >> >> ? >> >> ??????? perl Configure no-idea no-mdc2 no-rc5 no-asm no-ssl2 no-ssl3 >> VC-CE >> >> ? >> >> ??????? nmake >> >> ? >> >> The output is the following : >> >> ? >> >> *Microsoft (R) Program Maintenance Utility Version 9.00.30729.01* >> >> *Copyright (C) Microsoft Corporation.? All rights reserved.* >> >> *?* >> >> *?* >> >> *Microsoft (R) Program Maintenance Utility Version 9.00.30729.01* >> >> *Copyright (C) Microsoft Corporation.? All rights reserved.* >> >> *?* >> >> *?* >> >> *Microsoft (R) Program Maintenance Utility Version 9.00.30729.01* >> >> *Copyright (C) Microsoft Corporation.? All rights reserved.* >> >> ? >> >> *????????????? "C:\Strawberry\perl\bin\perl.exe" "-I." -Mconfigdata >> "util\dofile.pl"? "-omakefile" "crypto\include\internal\bn_conf.h.in" >>> >> crypto\include\internal\bn_conf.h* >> >> *????????????? "C:\Strawberry\perl\bin\perl.exe" "-I." -Mconfigdata >> "util\dofile.pl"? "-omakefile" "crypto\include\internal\dso_conf.h.in" >>> >> crypto\include\internal\dso_conf.h* >> >> *????????????? "C:\Strawberry\perl\bin\perl.exe" "-I." -Mconfigdata >> "util\dofile.pl"? "-omakefile" "include\openssl\opensslconf.h.in" > >> include\openssl\opensslconf.h* >> >> *????????????? "C:\Program Files\Microsoft Visual Studio 9.0\VC\BIN\nmake.exe" >> /?????????????????? 
depend && "C:\Program Files\Microsoft Visual >> Studio 9.0\VC\BIN\nmake.exe" /?????????????????? _all* >> >> *????????????? cl? /Zi /Fdossl_static.pdb /GF /Gy? /MD /W3 /wd4090 >> /nologo /O1i /I "." /I "include" -D_WIN32_WCE=700 -DUNDER_CE=700 >> -DWCE_PLATFORM_VC-CE -DARM -D_ARM_ -DARMV4I -QRarch4T -QRinterwork-return -D"L_ENDIAN" -D"NO_CHMOD" >> -D"OPENSSL_SMALL_FOOTPRINT" -D"OPENSSL_PIC" >> -D"OPENSSLDIR=\"C:\\Program Files\\Common Files\\SSL\"" >> -D"ENGINESDIR=\"C:\\Program Files\\OpenSSL\\lib\\engines-1_1\"" >> -D_WIN32_WCE=700 -DUNDER_CE=700 -DWCE_PLATFORM_VC-CE -DARM -D_ARM_ >> -DARMV4I -QRarch4T -QRinterwork-return -D"OPENSSL_SYS_WIN32" -D"WIN32_LEAN_AND_MEAN" -D"UNICODE" -D"_UNICODE" >> -D"_CRT_SECURE_NO_DEPRECATE" -D"_WINSOCK_DEPRECATED_NO_WARNINGS" -D"NDEBUG" >> -I"\../wcecompat/include"? -c /Foapps\app_rand.obj "apps\app_rand.c"* >> >> *app_rand.c* >> >> *C:\openssl-1.1.1b\e_os.h(287) : warning C4005: 'open' : macro >> redefinition* >> >> *??????? C:\wcecompat\include\io.h(43) : see previous definition of >> 'open'* >> >> *C:\openssl-1.1.1b\e_os.h(289) : warning C4005: 'close' : macro >> redefinition* >> >> *??????? C:\wcecompat\include\io.h(45) : see previous definition of >> 'close'* >> >> *C:\openssl-1.1.1b\e_os.h(293) : warning C4005: 'unlink' : macro >> redefinition* >> >> *??????? C:\wcecompat\include\io.h(50) : see previous definition of >> 'unlink'* >> >> *????????????? cl? /Zi /Fdossl_static.pdb /GF /Gy? /MD /W3 /wd4090 >> /nologo /O1i /I "." /I "include" -D_WIN32_WCE=700 -DUNDER_CE=700 >> -DWCE_PLATFORM_VC-CE -DARM -D_ARM_ -DARMV4I -QRarch4T -QRinterwork-return -D"L_ENDIAN" -D"NO_CHMOD" >> -D"OPENSSL_SMALL_FOOTPRINT" -D"OPENSSL_PIC" >> -D"OPENSSLDIR=\"C:\\Program Files\\Common Files\\SSL\"" >> -D"ENGINESDIR=\"C:\\Program Files\\OpenSSL\\lib\\engines-1_1\"" >> -D_WIN32_WCE=700 -DUNDER_CE=700 -DWCE_PLATFORM_VC-CE -DARM -D_ARM_ >> -DARMV4I -QRarch4T -QRinterwork-return -D"OPENSSL_SYS_WIN32" -D"WIN32_LEAN_AND_MEAN" -D"UNICODE" -D"_UNICODE" >> -D"_CRT_SECURE_NO_DEPRECATE" -D"_WINSOCK_DEPRECATED_NO_WARNINGS" -D"NDEBUG" >> -I"\../wcecompat/include"? /Zs /showIncludes "apps\app_rand.c" 2>&1 > >> apps\app_rand.d* >> >> *????????????? cl? /Zi /Fdossl_static.pdb /GF /Gy? /MD /W3 /wd4090 >> /nologo /O1i /I "." /I "include" -D_WIN32_WCE=700 -DUNDER_CE=700 >> -DWCE_PLATFORM_VC-CE -DARM -D_ARM_ -DARMV4I -QRarch4T -QRinterwork-return -D"L_ENDIAN" -D"NO_CHMOD" >> -D"OPENSSL_SMALL_FOOTPRINT" -D"OPENSSL_PIC" >> -D"OPENSSLDIR=\"C:\\Program Files\\Common Files\\SSL\"" >> -D"ENGINESDIR=\"C:\\Program Files\\OpenSSL\\lib\\engines-1_1\"" >> -D_WIN32_WCE=700 -DUNDER_CE=700 -DWCE_PLATFORM_VC-CE -DARM -D_ARM_ >> -DARMV4I -QRarch4T -QRinterwork-return -D"OPENSSL_SYS_WIN32" -D"WIN32_LEAN_AND_MEAN" -D"UNICODE" -D"_UNICODE" >> -D"_CRT_SECURE_NO_DEPRECATE" -D"_WINSOCK_DEPRECATED_NO_WARNINGS" -D"NDEBUG" >> -I"\../wcecompat/include"? -c /Foapps\apps.obj "apps\apps.c"* >> >> *apps.c* >> >> *C:\openssl-1.1.1b\e_os.h(287) : warning C4005: 'open' : macro >> redefinition* >> >> *??????? C:\wcecompat\include\io.h(43) : see previous definition of >> 'open'* >> >> *C:\openssl-1.1.1b\e_os.h(289) : warning C4005: 'close' : macro >> redefinition* >> >> *??????? C:\wcecompat\include\io.h(45) : see previous definition of >> 'close'* >> >> *C:\openssl-1.1.1b\e_os.h(293) : warning C4005: 'unlink' : macro >> redefinition* >> >> *??????? 
C:\wcecompat\include\io.h(50) : see previous definition of >> 'unlink'* >> >> *apps\apps.c(2596) : warning C4013: '_fdopen' undefined; assuming >> extern returning int* >> >> *apps\apps.c(2596) : warning C4047: '=' : 'FILE *' differs in levels >> of indirection from 'int'* >> >> *apps\apps.c(2614) : warning C4013: '_close' undefined; assuming >> extern returning int* >> >> *apps\apps.c(2696) : warning C4013: 'GetStdHandle' undefined; assuming >> extern returning int* >> >> *apps\apps.c(2696) : error C2065: 'STD_INPUT_HANDLE' : undeclared >> identifier* >> >> *apps\apps.c(2696) : warning C4047: 'initializing' : 'HANDLE' differs >> in levels of indirection from 'int'* >> >> *apps\apps.c(2698) : error C2065: 'INPUT_RECORD' : undeclared >> identifier* >> >> *apps\apps.c(2698) : error C2146: syntax error : missing ';' before >> identifier >> 'inputrec'* >> >> *apps\apps.c(2698) : error C2065: 'inputrec' : undeclared identifier* >> >> *apps\apps.c(2699) : error C2275: 'DWORD' : illegal use of this type >> as an >> expression* >> >> *??????? C:\Program Files\Windows CE Tools\SDKs\SDK WEC7 for VPE420 >> device\Include\ARMV4I\windef.h(161) : see declaration of 'DWORD'* >> >> *apps\apps.c(2699) : error C2146: syntax error : missing ';' before >> identifier >> 'insize'* >> >> *apps\apps.c(2699) : error C2065: 'insize' : undeclared identifier* >> >> *apps\apps.c(2700) : error C2275: 'BOOL' : illegal use of this type as >> an >> expression* >> >> *??????? C:\Program Files\Windows CE Tools\SDKs\SDK WEC7 for VPE420 >> device\Include\ARMV4I\windef.h(162) : see declaration of 'BOOL'* >> >> *apps\apps.c(2700) : error C2146: syntax error : missing ';' before >> identifier >> 'peeked'* >> >> *apps\apps.c(2700) : error C2065: 'peeked' : undeclared identifier* >> >> *apps\apps.c(2706) : error C2065: 'peeked' : undeclared identifier* >> >> *apps\apps.c(2706) : warning C4013: 'PeekConsoleInput' undefined; >> assuming extern returning int* >> >> *apps\apps.c(2706) : error C2065: 'inputrec' : undeclared identifier* >> >> *apps\apps.c(2706) : error C2065: 'insize' : undeclared identifier* >> >> *apps\apps.c(2707) : error C2065: 'peeked' : undeclared identifier* >> >> *?* >> >> *NMAKE : fatal error U1077: '"C:\Program Files\Microsoft Visual Studio >> 9.0\VC\ce\* >> >> *bin\x86_arm\cl.EXE"' : return code '0x2'* >> >> *Stop.* >> >> *NMAKE : fatal error U1077: '"C:\Program Files\Microsoft Visual Studio >> 9.0\VC\BIN* >> >> *\nmake.exe"' : return code '0x2'* >> >> *Stop.* >> >> *?* >> >> Any guess or hint would be much appreciated. >> >> ? >> >> ? >> >> Greetings, >> >> *?* >> >> *Maxime TORRELLI* >> >> Embedded Software Engineer >> >> ? >> >> *Conduent* >> >> Conduent Business Solutions (France) >> >> 1 rue Claude Chappe - BP 345 >> 07503 Guilherand Granges Cedex >> >> ? >> > From christian at python.org Thu Feb 28 10:04:55 2019 From: christian at python.org (Christian Heimes) Date: Thu, 28 Feb 2019 11:04:55 +0100 Subject: OpenSSL 3.0 vs. SSL 3.0 In-Reply-To: <9153.1551293626@localhost> References: <9153.1551293626@localhost> Message-ID: On 27/02/2019 19.53, Michael Richardson wrote: > > Christian Heimes wrote: > > I'm concerned about the version number of the upcoming major release of > > OpenSSL. "OpenSSL 3.0" just sounds and looks way too close to "SSL 3.0". > > It took us more than a decade to teach people that SSL 3.0 is bad and > > should be avoided in favor of TLS. 
In my humble opinion, it's > > problematic and confusing to use "OpenSSL 3.0" for the next major > > version of OpenSSL and first release of OpenSSL with SSL 3.0 support. > > You make a good point which I had not thought about, having exhumed SSLx.y > From my brain. +5 > > > You skipped version 2.0 for technical reasons, because (IIRC) 2.0 was > > used / reserved for FIPS mode. May I suggest that you also skip 3.0 for > > UX reasons and call the upcoming version "OpenSSL 4.0". That way you can > > avoid any confusion with SSL 3.0. > > Integers are cheap. > And 4.0 is > 3.0, so (Open)SSL 4.0.0 must be better than SSL3. Thanks for your support! I have created PR https://github.com/openssl/openssl/pull/8367 to bump the version number to 4.0.0. Christian -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: OpenPGP digital signature URL: From sujiknair at gmail.com Thu Feb 28 11:59:09 2019 From: sujiknair at gmail.com (suji) Date: Thu, 28 Feb 2019 04:59:09 -0700 (MST) Subject: AES-cipher offload to engine in openssl-fips In-Reply-To: <20cda107-dc55-84b4-725a-60066885ca6f@openssl.org> References: <6D4F8144-786E-4CB2-B6FE-30760520F2F0@safelogic.com> <1551267921689-0.post@n7.nabble.com> <628393B4-CFEE-420C-8D02-33D3F4AC4B29@akamai.com> <87mumhx73x.wl-levitte@openssl.org> <2ce4120e-b831-3677-f4b4-e009633854c0@wisemo.com> <4F9A6435-15A7-4AC6-B071-2C9D9489FD6C@akamai.com> <87lg20yitz.wl-levitte@openssl.org> <20cda107-dc55-84b4-725a-60066885ca6f@openssl.org> Message-ID: <1551355149208-0.post@n7.nabble.com> >From https://www.openssl.org/docs/fips/UserGuide-2.0.pdf I got these lines "OpenSSL provides mechanisms for interfacing with external cryptographic devices, such as accelerator cards, via ?ENGINES.? This mechanism is not disabled in FIPS mode. In general, if a FIPS validated cryptographic device is used with OpenSSL in FIPS mode so that all cryptographic operations are performed either by the device or the FIPS Object Module, then the result is still FIPS validated cryptography. However, if any cryptographic operations are performed by a non-FIPS validated device, the result is use of non-validated cryptography. It is the responsibility of the application developer to ensure that ENGINES used during FIPS mode of operation are also FIPS validated.". Then coming back to my first question, I should be able to offload AES_Ciphers to my engine right? Then can I assume that either Its a bug in openssl-1.0.2 versions or I have missed some flags/something? -- Sent from: http://openssl.6102.n7.nabble.com/OpenSSL-User-f3.html From rsalz at akamai.com Thu Feb 28 13:41:19 2019 From: rsalz at akamai.com (Salz, Rich) Date: Thu, 28 Feb 2019 13:41:19 +0000 Subject: AES-cipher offload to engine in openssl-fips In-Reply-To: <87imx4xrax.wl-levitte@openssl.org> References: <6D4F8144-786E-4CB2-B6FE-30760520F2F0@safelogic.com> <1551267921689-0.post@n7.nabble.com> <628393B4-CFEE-420C-8D02-33D3F4AC4B29@akamai.com> <87mumhx73x.wl-levitte@openssl.org> <2ce4120e-b831-3677-f4b4-e009633854c0@wisemo.com> <4F9A6435-15A7-4AC6-B071-2C9D9489FD6C@akamai.com> <87lg20yitz.wl-levitte@openssl.org> <5DA853F6-8438-4FE7-9E73-FD7895A3D5E8@akamai.com> <87imx4xrax.wl-levitte@openssl.org> Message-ID: > There are two options. First, the application does the digest and > sign as two separate things. My memory is a foggy surrounding that scenario, so I might be wrong, but I think it was argued that this was invalid use from a FIPS perspective. 
Now, we can't actually stop any application from doing this, sure! But... No, it's not illegal -- FIPS code being used for all FIPS operations. > If the EVP API does the digesting with one module and then calls > another module to do the RSA signing, that is okay. That suggests to me that libcrypto could "magically" combine two different FIPS providers, which would be none of the two options mentioned above. Yes. I believe this is okay, but also that OpenSSL is not going to support this. From levitte at openssl.org Thu Feb 28 14:06:00 2019 From: levitte at openssl.org (Richard Levitte) Date: Thu, 28 Feb 2019 15:06:00 +0100 Subject: AES-cipher offload to engine in openssl-fips In-Reply-To: References: <6D4F8144-786E-4CB2-B6FE-30760520F2F0@safelogic.com> <1551267921689-0.post@n7.nabble.com> <628393B4-CFEE-420C-8D02-33D3F4AC4B29@akamai.com> <87mumhx73x.wl-levitte@openssl.org> <2ce4120e-b831-3677-f4b4-e009633854c0@wisemo.com> <4F9A6435-15A7-4AC6-B071-2C9D9489FD6C@akamai.com> <87lg20yitz.wl-levitte@openssl.org> <5DA853F6-8438-4FE7-9E73-FD7895A3D5E8@akamai.com> <87imx4xrax.wl-levitte@openssl.org> Message-ID: <87d0ncxb1z.wl-levitte@openssl.org> On Thu, 28 Feb 2019 14:41:19 +0100, Salz, Rich wrote: > > > There are two options. First, the application does the digest and > > sign as two separate things. > > My memory is a foggy surrounding that scenario, so I might be wrong, > but I think it was argued that this was invalid use from a FIPS > perspective. Now, we can't actually stop any application from doing > this, sure! But... > > No, it's not illegal -- FIPS code being used for all FIPS operations. > > > If the EVP API does the digesting with one module and then calls > > another module to do the RSA signing, that is okay. > > That suggests to me that libcrypto could "magically" combine two > different FIPS providers, which would be none of the two options > mentioned above. > > Yes. I believe this is okay, but also that OpenSSL is not going to support this. Matt quoted a part of the design document that confirms what you're saying. I stand (*) corrected. Cheers, Richard ----- (*) actually, I sit ;-) -- Richard Levitte levitte at openssl.org OpenSSL Project http://www.openssl.org/~levitte/ From vieuxtech at gmail.com Thu Feb 28 18:35:01 2019 From: vieuxtech at gmail.com (Sam Roberts) Date: Thu, 28 Feb 2019 10:35:01 -0800 Subject: Stitched vs non-Stitched Ciphersuites In-Reply-To: <0a447a6b-e19b-d31b-cf91-1df6433c1e8c@openssl.org> References: <55324953-2695-40D1-B260-B26CBBBD658B@akamai.com> <1d7843f2-a650-fb81-c59b-8027a8194841@openssl.org> <8c801444-c340-49d9-0f92-06d582b23f15@openssl.org> <0a447a6b-e19b-d31b-cf91-1df6433c1e8c@openssl.org> Message-ID: On Wed, Feb 27, 2019 at 8:42 AM Matt Caswell wrote: > On 27/02/2019 16:33, Sam Roberts wrote: > > That would be helpful! > > It has been updated: Thank you, that is helpful. From kgoldman at us.ibm.com Thu Feb 28 20:05:43 2019 From: kgoldman at us.ibm.com (Ken Goldman) Date: Thu, 28 Feb 2019 15:05:43 -0500 Subject: ECC keypair generation with password Message-ID: I've been using this command to generate a password protected ECC keypair. 
openssl ecparam -name prime256v1 -genkey -noout | openssl pkey -aes256 -passout pass:passwd -text > tmpecprivkey.pem The output is a -----BEGIN ENCRYPTED PRIVATE KEY----- which I parsed using PEM_read_PrivateKey(pemKeyFile, NULL, NULL, (void *)password); *ecKey = EVP_PKEY_get1_EC_KEY(evpPkey); privateKeyBn = EC_KEY_get0_private_key(ecKey); Now I must send the PEM file to a crypto library that does not support -----BEGIN ENCRYPTED PRIVATE KEY----- It expects -----BEGIN EC PRIVATE KEY----- Its parser does accept a password. Is there a way to generate that PEM file? I.e. A password protected ECC keypair in -----BEGIN EC PRIVATE KEY----- format/ From paul at mad-scientist.net Thu Feb 28 19:48:02 2019 From: paul at mad-scientist.net (Paul Smith) Date: Thu, 28 Feb 2019 14:48:02 -0500 Subject: Online docs have broken links Message-ID: Not sure if anyone is aware or not, but many of the man pages on the openssl.org site contain broken links. Basically anywhere a man page refers to a man page in a different section, the link is broken because it uses the same section. So for example: https://www.openssl.org/docs/man1.1.1/man7/ssl.html is in section 7, but it refers to functions in section 3... however all the links are broken because they still point to section 7. See the link in the second paragraph of the description to SSL_CTX_NEW, which has this HTML linkage: SSL_CTX_new which does not exist; this should be .../man3/SSL_CTX_new.html instead. I've found other links in the man3 section which want to refer to this "ssl" page, and look for it in section 3 instead of section 7, also broken. Cheers! From openssl-users at dukhovni.org Thu Feb 28 20:36:25 2019 From: openssl-users at dukhovni.org (Viktor Dukhovni) Date: Thu, 28 Feb 2019 15:36:25 -0500 Subject: ECC keypair generation with password In-Reply-To: References: Message-ID: <20190228203625.GD916@straasha.imrryr.org> On Thu, Feb 28, 2019 at 03:05:43PM -0500, Ken Goldman wrote: > The output is a > -----BEGIN ENCRYPTED PRIVATE KEY----- This is PKCS8, which is the non-legacy private key format that should be used by modern libraries. This is for example output by: $ openssl genpkey -algorithm ec -pkeyopt ec_paramgen_curve:prime256v1 -aes128 Enter PEM pass phrase: Verifying - Enter PEM pass phrase: -----BEGIN ENCRYPTED PRIVATE KEY----- MIHsMFcGCSqGSIb3DQEFDTBKMCkGCSqGSIb3DQEFDDAcBAgWnV30Y37QvAICCAAw DAYIKoZIhvcNAgkFADAdBglghkgBZQMEAQIEEMx8xGM1W+W4JdPET0xj0MAEgZAp 9XvYDcsnokrXBoyWqFF73VeT/4ALgS+StQQK/84qzqjOKSUeteLiDoHkyH2GUYue WILJh+3MoqRRGyGPGaznI7yT2fCSUJNGZsvEDd8ILYGpvkS8ssfa/WXWZ0d4jwXr VE05VWx424ospaKPz8E5wsvpfuqB3/CxFnD0WUTa1cY/oLkwAUem/ps4iMWoIP8= -----END ENCRYPTED PRIVATE KEY----- [ The password is "sesame", if you want to test using the above key. ] > Now I must send the PEM file to a crypto library that does not support > > It expects > -----BEGIN EC PRIVATE KEY----- That's the legacy algorithm-specific format, your library is rather dated. > Its parser does accept a password. > > Is there a way to generate that PEM file? I.e. 
$ openssl ec -aes128 < -----BEGIN ENCRYPTED PRIVATE KEY----- > MIHsMFcGCSqGSIb3DQEFDTBKMCkGCSqGSIb3DQEFDDAcBAgWnV30Y37QvAICCAAw > DAYIKoZIhvcNAgkFADAdBglghkgBZQMEAQIEEMx8xGM1W+W4JdPET0xj0MAEgZAp > 9XvYDcsnokrXBoyWqFF73VeT/4ALgS+StQQK/84qzqjOKSUeteLiDoHkyH2GUYue > WILJh+3MoqRRGyGPGaznI7yT2fCSUJNGZsvEDd8ILYGpvkS8ssfa/WXWZ0d4jwXr > VE05VWx424ospaKPz8E5wsvpfuqB3/CxFnD0WUTa1cY/oLkwAUem/ps4iMWoIP8= > -----END ENCRYPTED PRIVATE KEY----- > EOF read EC key Enter PEM pass phrase: writing EC key Enter PEM pass phrase: Verifying - Enter PEM pass phrase: -----BEGIN EC PRIVATE KEY----- Proc-Type: 4,ENCRYPTED DEK-Info: AES-128-CBC,28ADEB740F62A9F41B2AAE09B53CD433 WbSfKUDAWwz8/6mAH9fuiBbCHrNwb7hnoRz7rfaoJ9QU5VzxZtwuZhGnAw/nKfsy b/GHtWa4ghtHf9QofQWuJukeMrC2/KAO+8K1qRsUtcH3KFsaVLcKrDk9plQ2lGdr qh3IX8vzPi+YZbdtquSse84g5GNMSE/Urv2bGdZH278= -----END EC PRIVATE KEY----- [ The password is still "sesame" ] -- Viktor. From Michael.Wojcik at microfocus.com Thu Feb 28 20:55:41 2019 From: Michael.Wojcik at microfocus.com (Michael Wojcik) Date: Thu, 28 Feb 2019 20:55:41 +0000 Subject: ECC keypair generation with password In-Reply-To: References: Message-ID: > From: openssl-users [mailto:openssl-users-bounces at openssl.org] On Behalf Of > Ken Goldman > Sent: Thursday, February 28, 2019 15:06 > > I've been using this command to generate a password protected ECC keypair. > > openssl ecparam -name prime256v1 -genkey -noout | openssl pkey -aes256 > -passout pass:passwd -text > tmpecprivkey.pem >... > > Now I must send the PEM file to a crypto library that does not support > -----BEGIN ENCRYPTED PRIVATE KEY----- > > It expects > -----BEGIN EC PRIVATE KEY----- > > Its parser does accept a password. > > Is there a way to generate that PEM file? I.e. > > A password protected ECC keypair in -----BEGIN EC PRIVATE KEY----- format You don't say what version of OpenSSL you're using. Have you tried just changing the PEM header and footer? OpenSSL doesn't like that (it expects an unencrypted EC keypair for "EC PRIVATE KEY"), but maybe this other library does. Are you sure the other library is expecting an encrypted key? Have you tried with an unencrypted one, but using the "EC PRIVATE KEY" header/footer? -- Michael Wojcik Distinguished Engineer, Micro Focus From Michael.Wojcik at microfocus.com Thu Feb 28 20:58:20 2019 From: Michael.Wojcik at microfocus.com (Michael Wojcik) Date: Thu, 28 Feb 2019 20:58:20 +0000 Subject: ECC keypair generation with password References: Message-ID: > From: Michael Wojcik > Sent: Thursday, February 28, 2019 15:55 > > Have you tried just changing the PEM header and footer? ... Whoops. Just saw Viktor's response. Never mind. -- Michael Wojcik Distinguished Engineer, Micro Focus
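The read-back side that Ken sketched works for either PEM form, with the password passed as the userdata argument of the generic PEM routine; a minimal C sketch (the file name and password are illustrative placeholders, and error checks are omitted):

    #include <stdio.h>
    #include <openssl/crypto.h>
    #include <openssl/pem.h>
    #include <openssl/ec.h>
    #include <openssl/bn.h>

    /* Load an encrypted EC key ("ENCRYPTED PRIVATE KEY" or legacy
     * "EC PRIVATE KEY") and print the private scalar as hex. */
    static void print_ec_private(const char *path, const char *password)
    {
        FILE *fp = fopen(path, "r");
        EVP_PKEY *pkey = PEM_read_PrivateKey(fp, NULL, NULL, (void *)password);
        EC_KEY *ec = EVP_PKEY_get1_EC_KEY(pkey);
        const BIGNUM *priv = EC_KEY_get0_private_key(ec); /* owned by 'ec' */
        char *hex = BN_bn2hex(priv);

        printf("%s\n", hex);

        OPENSSL_free(hex);
        EC_KEY_free(ec);
        EVP_PKEY_free(pkey);
        fclose(fp);
    }

    /* e.g. print_ec_private("tmpecprivkey.pem", "passwd"); */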