From matt at openssl.org Tue Dec 1 10:22:39 2020
From: matt at openssl.org (Matt Caswell)
Date: Tue, 1 Dec 2020 10:22:39 +0000
Subject: Fwd: Forthcoming OpenSSL Release
In-Reply-To:
References:
Message-ID: <72b2008f-919a-859a-7788-36f5f4f84587@openssl.org>

FYI

-------- Forwarded Message --------
Subject: Forthcoming OpenSSL Release
Date: Tue, 1 Dec 2020 04:15:51 -0600
From: Paul Nelson
Reply-To: openssl-users at openssl.org
To: openssl-announce at openssl.org

The OpenSSL project team would like to announce the forthcoming release of
OpenSSL version 1.1.1i. This release will be made available on Tuesday 8th
December 2020 between 1300-1700 UTC.

OpenSSL 1.1.1i is a security-fix release. The highest severity issue fixed in
this release is HIGH: https://www.openssl.org/policies/secpolicy.html#high

Yours
The OpenSSL Project Team

From tmraz at redhat.com Tue Dec 1 11:20:20 2020
From: tmraz at redhat.com (Tomas Mraz)
Date: Tue, 01 Dec 2020 12:20:20 +0100
Subject: OTC Vote proposal: Relax the implementation in regards to required public component
Message-ID:

The vote on relaxing the conceptual model in regards to required public
component for EVP_PKEY has passed with the following text:

For 3.0 EVP_PKEY keys, the OTC accepts the following resolution:
* relax the conceptual model to allow private keys to exist without public components;
* all implementations apart from EC require the public component to be present;
* relax implementation for EC key management to allow private keys that do not contain public keys and
* our decoders unconditionally generate the public key (where possible).

However since then the issue 13506 [1] was reported.

During OTC meeting we concluded that we might need to relax also other public
key algorithm implementations to allow private keys without public component.

So here is my vote proposal in regards to this:

------ proposed vote text ------
For 3.0 EVP_PKEY keys all algorithm implementations that were usable with
1.1.1 EVP_PKEY API or low level APIs without public component must stay usable.
--------------------------------

This effectively overrules the '* all implementations apart from EC require
the public component to be present' part of the previous vote.

I did not explicitly mention in the vote proposal that we do not want to
generate the public component on the fly (or even on 'fromdata' call), as I do
not think we were doing that in 1.1.1, so implementation of this vote should
not require that either.

[1] https://github.com/openssl/openssl/issues/13506

--
Tomáš Mráz
No matter how far down the wrong road you've gone, turn back.
Turkish proverb
[You'll know whether the road is wrong if you carefully listen to your conscience.]

From matt at openssl.org Tue Dec 1 12:29:15 2020
From: matt at openssl.org (Matt Caswell)
Date: Tue, 1 Dec 2020 12:29:15 +0000
Subject: OTC VOTE: Fixing missing failure exit status is a bug fix
In-Reply-To:
References:
Message-ID:

+1

On 30/11/2020 12:03, Nicola Tuveri wrote:
> Vote background
> ---------------
>
> This follows up on a [previous proposal] that was abandoned in favor of
> an OMC vote on the behavior change introduced in [PR#13359].
> Within today's OTC meeting this was further discussed with the attending
> members that also sit in the OMC.
>
> The suggestion was to improve the separation of the OTC and OMC domains
> here, by having a more generic OTC vote to qualify as bug fixes the
> changes to let any OpenSSL app return an (early) failure exit status
> when a called function fails.
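[Illustration, not part of the thread: a minimal sketch of the class of fix the vote text covers. A previously ignored failure return now propagates to the exit status; the specific call and message are invented for the example and are not taken from PR#13359.]

```c
#include <stdio.h>
#include <stdlib.h>
#include <openssl/ssl.h>

/* Hypothetical miniature "app": previously the failed call below was
 * silently ignored and the program still exited 0; the bug-fix class
 * covered by the vote makes it exit with a failure status instead. */
int main(void)
{
    int ret = EXIT_FAILURE;
    SSL_CTX *ctx = SSL_CTX_new(TLS_client_method());

    if (ctx == NULL
            || !SSL_CTX_set_min_proto_version(ctx, TLS1_2_VERSION)) {
        fprintf(stderr, "setup failed\n");
        goto end;               /* early exit, non-zero status */
    }
    ret = EXIT_SUCCESS;
 end:
    SSL_CTX_free(ctx);
    return ret;
}
```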
> > The idea is that, if we agree on this technical definition, then no OMC > vote to allow a behavior change in the apps would be required in > general, unless, on a case-by-case basis, the "OMC hold" process is > invoked for whatever reason on the specific bug fix, triggering the > usual OMC decision process. > > [previous proposal]: > > [PR#13359]: > > > > Vote text > --------- > > topic: In the context of the OpenSSL apps, the OTC qualifies as bug > fixes the changes to return a failure exit status when a called > function fails with an unhandled return value. > Even when these bug fixes change the apps behavior triggering > early exits (compared to previous versions of the apps), as bug > fixes, they do not qualify as behavior changes that require an > explicit OMC approval. > Proposed by Nicola Tuveri > Public: yes > opened: 2020-11-30 > From kurt at roeckx.be Thu Dec 3 11:37:06 2020 From: kurt at roeckx.be (Kurt Roeckx) Date: Thu, 3 Dec 2020 12:37:06 +0100 Subject: OTC VOTE: Fixing missing failure exit status is a bug fix In-Reply-To: References: Message-ID: <20201203113706.GA1361836@roeckx.be> On Mon, Nov 30, 2020 at 02:03:15PM +0200, Nicola Tuveri wrote: > Vote text > --------- > > topic: In the context of the OpenSSL apps, the OTC qualifies as bug > fixes the changes to return a failure exit status when a called > function fails with an unhandled return value. > Even when these bug fixes change the apps behavior triggering > early exits (compared to previous versions of the apps), as bug > fixes, they do not qualify as behavior changes that require an > explicit OMC approval. +1 Kurt From tmraz at redhat.com Thu Dec 3 12:47:49 2020 From: tmraz at redhat.com (Tomas Mraz) Date: Thu, 03 Dec 2020 13:47:49 +0100 Subject: OTC Vote proposal: Relax the implementation in regards to required public component In-Reply-To: References: Message-ID: <9f8e78c4619f3dc910bb3ccfe4e9225392a5255b.camel@redhat.com> There were no comments so far, so unless there is any comment today, I'll call a vote on the proposed vote text tomorrow. On Tue, 2020-12-01 at 12:20 +0100, Tomas Mraz wrote: > The vote on relaxing the conceptual model in regards to required > public > component for EVP_PKEY has passed with the following text: > > For 3.0 EVP_PKEY keys, the OTC accepts the following resolution: > * relax the conceptual model to allow private keys to exist without > public components; > * all implementations apart from EC require the public component to > be > present; > * relax implementation for EC key management to allow private keys > that > do not contain public keys and > * our decoders unconditionally generate the public key (where > possible). > > However since then the issue 13506 [1] was reported. > > During OTC meeting we concluded that we might need to relax also > other > public key algorithm implementations to allow private keys without > public component. > > So here is my vote proposal in regards to this: > > ------ proposed vote text ------ > For 3.0 EVP_PKEY keys all algorithm implementations that were usable > with 1.1.1 EVP_PKEY API or low level APIs without public component > must > stay usable. > -------------------------------- > > This effectively overrules the '* all implementations apart from EC > require the public component to be present' part of the previous > vote. 
> > I did not explicitly mention in the vote proposal that we do not want > to generate the public component on fly (or even on 'fromdata' call) > as > I do not think we were doing that in 1.1.1 so implementation of this > vote should not require that either. > > > [1] https://github.com/openssl/openssl/issues/13506 > -- Tom?? Mr?z No matter how far down the wrong road you've gone, turn back. Turkish proverb [You'll know whether the road is wrong if you carefully listen to your conscience.] From tmraz at redhat.com Fri Dec 4 12:45:07 2020 From: tmraz at redhat.com (Tomas Mraz) Date: Fri, 04 Dec 2020 13:45:07 +0100 Subject: OTC VOTE: Keeping API compatibility with missing public key Message-ID: <03937e1f3ba1077379ebdec6d7269393655481aa.camel@redhat.com> Vote background --------------- The vote on relaxing the conceptual model in regards to required public component for EVP_PKEY has passed with the following text: For 3.0 EVP_PKEY keys, the OTC accepts the following resolution: * relax the conceptual model to allow private keys to exist without public components; * all implementations apart from EC require the public component to be present; * relax implementation for EC key management to allow private keys that do not contain public keys and * our decoders unconditionally generate the public key (where possible). However since then the issue 13506 [1] was reported. During OTC meeting we concluded that we might need to relax also other public key algorithm implementations to allow private keys without public component. Vote ---- topic: For 3.0 EVP_PKEY keys all algorithm implementations that were usable with 1.1.1 EVP_PKEY API or low level APIs without public component must stay usable. This overrules the * all implementations apart from EC require the public component to be present; part of the vote closed on 2020-11-17. Proposed by Tomas Mraz Public: yes opened: 2020-12-04 Tomas Mraz From paul.dale at oracle.com Sat Dec 5 01:16:01 2020 From: paul.dale at oracle.com (Dr Paul Dale) Date: Sat, 5 Dec 2020 11:16:01 +1000 Subject: OTC VOTE: Keeping API compatibility with missing public key In-Reply-To: <03937e1f3ba1077379ebdec6d7269393655481aa.camel@redhat.com> References: <03937e1f3ba1077379ebdec6d7269393655481aa.camel@redhat.com> Message-ID: +1 Pauli -- Dr Paul Dale | Distinguished Architect | Cryptographic Foundations Phone +61 7 3031 7217 Oracle Australia > On 4 Dec 2020, at 10:45 pm, Tomas Mraz wrote: > > Vote background > --------------- > > The vote on relaxing the conceptual model in regards to required public > component for EVP_PKEY has passed with the following text: > > For 3.0 EVP_PKEY keys, the OTC accepts the following resolution: > * relax the conceptual model to allow private keys to exist without > public components; > * all implementations apart from EC require the public component to be > present; > * relax implementation for EC key management to allow private keys that > do not contain public keys and > * our decoders unconditionally generate the public key (where > possible). > > However since then the issue 13506 [1] was reported. > > During OTC meeting we concluded that we might need to relax also other > public key algorithm implementations to allow private keys without > public component. > > Vote > ---- > > topic: For 3.0 EVP_PKEY keys all algorithm implementations that were usable > with 1.1.1 EVP_PKEY API or low level APIs without public component must > stay usable. 
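[Illustration, not part of the thread: a hedged sketch of the kind of 1.1.1-era low-level usage the quoted vote topic is meant to keep working, namely an EVP_PKEY built from an EC private scalar with no public point. The curve choice and error handling are arbitrary for the example.]

```c
#include <openssl/ec.h>
#include <openssl/evp.h>
#include <openssl/obj_mac.h>

/* Build an EVP_PKEY that carries only the private scalar.  Note that
 * EC_KEY_set_public_key() is never called. */
EVP_PKEY *private_only_ec_key(const BIGNUM *priv)
{
    EVP_PKEY *pkey = NULL;
    EC_KEY *ec = EC_KEY_new_by_curve_name(NID_X9_62_prime256v1);

    if (ec == NULL || !EC_KEY_set_private_key(ec, priv))
        goto err;
    if ((pkey = EVP_PKEY_new()) == NULL || !EVP_PKEY_assign_EC_KEY(pkey, ec))
        goto err;
    return pkey;                /* pkey now owns ec */
 err:
    EC_KEY_free(ec);
    EVP_PKEY_free(pkey);
    return NULL;
}
```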
> > This overrules the > * all implementations apart from EC require the public component to be present; > part of the vote closed on 2020-11-17. > > Proposed by Tomas Mraz > Public: yes > opened: 2020-12-04 > > Tomas Mraz > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From tjh at cryptsoft.com Sat Dec 5 01:19:14 2020 From: tjh at cryptsoft.com (Tim Hudson) Date: Sat, 5 Dec 2020 11:19:14 +1000 Subject: OTC VOTE: Keeping API compatibility with missing public key In-Reply-To: <03937e1f3ba1077379ebdec6d7269393655481aa.camel@redhat.com> References: <03937e1f3ba1077379ebdec6d7269393655481aa.camel@redhat.com> Message-ID: +1 Note I support also changing all key types to be able to operate without the public component (where that is possible) which goes beyond what this vote covers (as previously noted). Having a documented conceptual model that is at odds with the code isn't a good thing and in particular this choice of conceptual model isn't one that is appropriate in my view. Tim. On Fri, Dec 4, 2020 at 10:45 PM Tomas Mraz wrote: > Vote background > --------------- > > The vote on relaxing the conceptual model in regards to required public > component for EVP_PKEY has passed with the following text: > > For 3.0 EVP_PKEY keys, the OTC accepts the following resolution: > * relax the conceptual model to allow private keys to exist without > public components; > * all implementations apart from EC require the public component to be > present; > * relax implementation for EC key management to allow private keys that > do not contain public keys and > * our decoders unconditionally generate the public key (where > possible). > > However since then the issue 13506 [1] was reported. > > During OTC meeting we concluded that we might need to relax also other > public key algorithm implementations to allow private keys without > public component. > > Vote > ---- > > topic: For 3.0 EVP_PKEY keys all algorithm implementations that were usable > with 1.1.1 EVP_PKEY API or low level APIs without public component > must > stay usable. > > This overrules the > * all implementations apart from EC require the public component > to be present; > part of the vote closed on 2020-11-17. > > Proposed by Tomas Mraz > Public: yes > opened: 2020-12-04 > > Tomas Mraz > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From shane.lontis at oracle.com Sat Dec 5 02:14:00 2020 From: shane.lontis at oracle.com (SHANE LONTIS) Date: Sat, 5 Dec 2020 12:14:00 +1000 Subject: OTC VOTE: Keeping API compatibility with missing public key In-Reply-To: <03937e1f3ba1077379ebdec6d7269393655481aa.camel@redhat.com> References: <03937e1f3ba1077379ebdec6d7269393655481aa.camel@redhat.com> Message-ID: <16F0E1AA-7AFC-469F-9F00-66B55DD178AA@oracle.com> +1 > On 4 Dec 2020, at 10:45 pm, Tomas Mraz wrote: > > Vote background > --------------- > > The vote on relaxing the conceptual model in regards to required public > component for EVP_PKEY has passed with the following text: > > For 3.0 EVP_PKEY keys, the OTC accepts the following resolution: > * relax the conceptual model to allow private keys to exist without > public components; > * all implementations apart from EC require the public component to be > present; > * relax implementation for EC key management to allow private keys that > do not contain public keys and > * our decoders unconditionally generate the public key (where > possible). > > However since then the issue 13506 [1] was reported. 
> > During OTC meeting we concluded that we might need to relax also other > public key algorithm implementations to allow private keys without > public component. > > Vote > ---- > > topic: For 3.0 EVP_PKEY keys all algorithm implementations that were usable > with 1.1.1 EVP_PKEY API or low level APIs without public component must > stay usable. > > This overrules the > * all implementations apart from EC require the public component to be present; > part of the vote closed on 2020-11-17. > > Proposed by Tomas Mraz > Public: yes > opened: 2020-12-04 > > Tomas Mraz > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From levitte at openssl.org Mon Dec 7 09:15:31 2020 From: levitte at openssl.org (Richard Levitte) Date: Mon, 07 Dec 2020 10:15:31 +0100 Subject: OTC VOTE: Keeping API compatibility with missing public key In-Reply-To: <03937e1f3ba1077379ebdec6d7269393655481aa.camel@redhat.com> References: <03937e1f3ba1077379ebdec6d7269393655481aa.camel@redhat.com> Message-ID: <87r1o2rn7w.wl-levitte@openssl.org> +1 On Fri, 04 Dec 2020 13:45:07 +0100, Tomas Mraz wrote: > > Vote background > --------------- > > The vote on relaxing the conceptual model in regards to required public > component for EVP_PKEY has passed with the following text: > > For 3.0 EVP_PKEY keys, the OTC accepts the following resolution: > * relax the conceptual model to allow private keys to exist without > public components; > * all implementations apart from EC require the public component to be > present; > * relax implementation for EC key management to allow private keys that > do not contain public keys and > * our decoders unconditionally generate the public key (where > possible). > > However since then the issue 13506 [1] was reported. > > During OTC meeting we concluded that we might need to relax also other > public key algorithm implementations to allow private keys without > public component. > > Vote > ---- > > topic: For 3.0 EVP_PKEY keys all algorithm implementations that were usable > with 1.1.1 EVP_PKEY API or low level APIs without public component must > stay usable. > > This overrules the > * all implementations apart from EC require the public component to be present; > part of the vote closed on 2020-11-17. > > Proposed by Tomas Mraz > Public: yes > opened: 2020-12-04 > > Tomas Mraz > > -- Richard Levitte levitte at openssl.org OpenSSL Project http://www.openssl.org/~levitte/ From matt at openssl.org Mon Dec 7 09:46:21 2020 From: matt at openssl.org (Matt Caswell) Date: Mon, 7 Dec 2020 09:46:21 +0000 Subject: OTC VOTE: Keeping API compatibility with missing public key In-Reply-To: <03937e1f3ba1077379ebdec6d7269393655481aa.camel@redhat.com> References: <03937e1f3ba1077379ebdec6d7269393655481aa.camel@redhat.com> Message-ID: <59af490d-28a5-50f3-97e2-4ea9ba63a2cf@openssl.org> +1 On 04/12/2020 12:45, Tomas Mraz wrote: > Vote background > --------------- > > The vote on relaxing the conceptual model in regards to required public > component for EVP_PKEY has passed with the following text: > > For 3.0 EVP_PKEY keys, the OTC accepts the following resolution: > * relax the conceptual model to allow private keys to exist without > public components; > * all implementations apart from EC require the public component to be > present; > * relax implementation for EC key management to allow private keys that > do not contain public keys and > * our decoders unconditionally generate the public key (where > possible). > > However since then the issue 13506 [1] was reported. 
> > During OTC meeting we concluded that we might need to relax also other > public key algorithm implementations to allow private keys without > public component. > > Vote > ---- > > topic: For 3.0 EVP_PKEY keys all algorithm implementations that were usable > with 1.1.1 EVP_PKEY API or low level APIs without public component must > stay usable. > > This overrules the > * all implementations apart from EC require the public component to be present; > part of the vote closed on 2020-11-17. > > Proposed by Tomas Mraz > Public: yes > opened: 2020-12-04 > > Tomas Mraz > > From Matthias.St.Pierre at ncp-e.com Mon Dec 7 11:21:00 2020 From: Matthias.St.Pierre at ncp-e.com (Dr. Matthias St. Pierre) Date: Mon, 7 Dec 2020 11:21:00 +0000 Subject: OTC VOTE: Keeping API compatibility with missing public key In-Reply-To: <59af490d-28a5-50f3-97e2-4ea9ba63a2cf@openssl.org> References: <03937e1f3ba1077379ebdec6d7269393655481aa.camel@redhat.com> <59af490d-28a5-50f3-97e2-4ea9ba63a2cf@openssl.org> Message-ID: <82904c6ba8f94e4bae2518840be4efa7@ncp-e.com> +1 > -----Original Message----- > From: openssl-project On Behalf Of Matt Caswell > Sent: Monday, December 7, 2020 10:46 AM > To: openssl-project at openssl.org > Subject: Re: OTC VOTE: Keeping API compatibility with missing public key > > +1 > > On 04/12/2020 12:45, Tomas Mraz wrote: > > Vote background > > --------------- > > > > The vote on relaxing the conceptual model in regards to required public > > component for EVP_PKEY has passed with the following text: > > > > For 3.0 EVP_PKEY keys, the OTC accepts the following resolution: > > * relax the conceptual model to allow private keys to exist without > > public components; > > * all implementations apart from EC require the public component to be > > present; > > * relax implementation for EC key management to allow private keys that > > do not contain public keys and > > * our decoders unconditionally generate the public key (where > > possible). > > > > However since then the issue 13506 [1] was reported. > > > > During OTC meeting we concluded that we might need to relax also other > > public key algorithm implementations to allow private keys without > > public component. > > > > Vote > > ---- > > > > topic: For 3.0 EVP_PKEY keys all algorithm implementations that were usable > > with 1.1.1 EVP_PKEY API or low level APIs without public component must > > stay usable. > > > > This overrules the > > * all implementations apart from EC require the public component to be present; > > part of the vote closed on 2020-11-17. > > > > Proposed by Tomas Mraz > > Public: yes > > opened: 2020-12-04 > > > > Tomas Mraz > > > > -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 7494 bytes Desc: not available URL: From paul.dale at oracle.com Tue Dec 8 00:43:59 2020 From: paul.dale at oracle.com (Dr Paul Dale) Date: Tue, 8 Dec 2020 10:43:59 +1000 Subject: #8765 Message-ID: #8765 has been sitting in an OTC hold state for a while and @DDvO has asked how it can be progressed. The PR is attempting to change the bnrand_range() function. Our existing code iterates (up to 100 times) and generates candidates which each have a 75% chance of being within the desired range. It guarantees an unbiased result but is slow and variable in its timing. It is also difficult to understand. 
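[Illustration, not part of the thread: a minimal sketch of the rejection-sampling idea described above, written against the public BIGNUM API. The real bnrand_range() is internal and more involved.]

```c
#include <openssl/bn.h>

/* Draw candidates of the same bit length as |range| and keep the first
 * one that falls inside [0, range).  Unbiased, but the number of
 * iterations (and therefore the running time) is variable. */
static int rand_range_reject(BIGNUM *r, const BIGNUM *range)
{
    int i, bits = BN_num_bits(range);

    for (i = 0; i < 100; i++) {
        if (!BN_rand(r, bits, BN_RAND_TOP_ANY, BN_RAND_BOTTOM_ANY))
            return 0;
        /* Accept only candidates already inside the range; retry otherwise. */
        if (BN_cmp(r, range) < 0)
            return 1;
    }
    return 0;   /* astronomically unlikely for a sane range */
}
```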
The code that currently stands in the PR uses the method FIPS 186-4 B.1.1 /
BSI TR-02102-1 Table B4 Method 2 to generate random numbers within a range
quickly, although there is a very slight bias introduced. This generates an
extra 64 bits of randomness, modulo reduces the result and returns it. The
bias comes from the fact that 2^(n+64) isn't exactly divisible by the range
(at least in general).

Again, the third approach takes advantage of an idea from Lemire's exquisite
Fast Random Integer Generation in an Interval to produce a similarly biased
result using a multiplication instead of a modulus. I.e. this one can be
constant time and is faster again. It is possible to implement Lemire's
algorithm completely, which gives fast and unbiased results, although it might
have to iterate on occasion.

Do RNGs need to be constant time? I'm not sure. Our BN_mod() function isn't
and exposes potential side channel attacks (we have this now). The variable
number of iterations cannot be used for a timing attack because iteration
means that the generated number is out of range and thrown away. An attacker
only learns that and nothing about the final output.

Do we want our RNG to be faster? This seems like a decent idea.

Can we live with (slightly) biased output?

As noted by @DDvO, some of the tests are failing. At one point I implemented
Lemire's algorithm and the broken tests were what stopped me. I don't remember
the precise details, I've a niggle that it might have been NIST's KATs
implicitly relying on the "standard" modulo reduce approach being used for
random range generation.

Thoughts or suggestions?

Pauli
--
Dr Paul Dale | Distinguished Architect | Cryptographic Foundations
Phone +61 7 3031 7217
Oracle Australia
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From rsalz at akamai.com Tue Dec 8 01:36:32 2020
From: rsalz at akamai.com (Salz, Rich)
Date: Tue, 8 Dec 2020 01:36:32 +0000
Subject: #8765
In-Reply-To:
References:
Message-ID: <54566794-9904-40B1-B5A6-E2E0FA8D97D9@akamai.com>

I assume nobody is surprised to see me say this: I do not see a requirement to
do this in 3.0. In particular, I hope that none of the contributors who
already have 3.0 work spend time on this.

If this is going to be considered for 3.0, I would like to know the rationale
for doing so. I don't think "the code is hard to understand" is important
enough at this point in time. I don't think making asymmetric keygen faster is
important now, either.
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From tmraz at redhat.com Tue Dec 8 10:31:16 2020
From: tmraz at redhat.com (Tomas Mraz)
Date: Tue, 08 Dec 2020 11:31:16 +0100
Subject: #8765
In-Reply-To: <54566794-9904-40B1-B5A6-E2E0FA8D97D9@akamai.com>
References: <54566794-9904-40B1-B5A6-E2E0FA8D97D9@akamai.com>
Message-ID:

On Tue, 2020-12-08 at 01:36 +0000, Salz, Rich wrote:
> I assume nobody is surprised to see me say this: I do not see a
> requirement to do this in 3.0. In particular, I hope that none of the
> contributors who already have 3.0 work spend time on this.
>
> If this is going to be considered for 3.0, I would like to know the
> rationale for doing so. I don't think "the code is hard to
> understand" is important enough at this point in time. I don't think
> making asymmetric keygen faster is important now, either.

I agree with Rich here. To me this is clearly a post-3.0.0 item.

--
Tomáš Mráz
No matter how far down the wrong road you've gone, turn back.
Turkish proverb [You'll know whether the road is wrong if you carefully listen to your conscience.] From openssl at openssl.org Tue Dec 8 15:01:33 2020 From: openssl at openssl.org (OpenSSL) Date: Tue, 8 Dec 2020 15:01:33 +0000 Subject: OpenSSL version 1.1.1i published Message-ID: <20201208150133.GA23749@openssl.org> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA256 OpenSSL version 1.1.1i released =============================== OpenSSL - The Open Source toolkit for SSL/TLS https://www.openssl.org/ The OpenSSL project team is pleased to announce the release of version 1.1.1i of our open source toolkit for SSL/TLS. For details of changes and known issues see the release notes at: https://www.openssl.org/news/openssl-1.1.1-notes.html OpenSSL 1.1.1i is available for download via HTTP and FTP from the following master locations (you can find the various FTP mirrors under https://www.openssl.org/source/mirror.html): * https://www.openssl.org/source/ * ftp://ftp.openssl.org/source/ The distribution file name is: o openssl-1.1.1i.tar.gz Size: 9808346 SHA1 checksum: eb684ba4ed31fe2c48062aead75233ecd36882a6 SHA256 checksum: e8be6a35fe41d10603c3cc635e93289ed00bf34b79671a3a4de64fcee00d5242 The checksums were calculated using the following commands: openssl sha1 openssl-1.1.1i.tar.gz openssl sha256 openssl-1.1.1i.tar.gz Yours, The OpenSSL Project Team. -----BEGIN PGP SIGNATURE----- iQEzBAEBCAAdFiEEhlersmDwVrHlGQg52cTSbQ5gRJEFAl/PfcIACgkQ2cTSbQ5g RJGTdAgAg4vCZBf6Ugf0JojEHlqfxvdYTDPaz7C8vT4KFOsXW7vYr7Flc0O7rgfH hL/N25f8Ao4AlX1mtlq5whR6adf3dA3Ny3T5r8WNXy8a2GdC/AH7zSVI1+0yQ3L8 C1ohbRYUHgP9o6DjjSBylSgJzmwSK7CfBFbiq4MX/FeEqon+fy8Er5LMW7Cor2Tq 07a5532Gb67zuRPu/U5D6fFsXBDvzeDsT/c9ZMt0eImvmpU6wJNqALC+I0qI/pKY AY6FmljuYM3gr1aWbuCeyMbcGutRCFOLGrNl/VpQZFM5m7Rs6NQsQ+c3O5EICpoU NKmPlsXfAabUZpEaWKK/4mzXLgMxfw== =MgEX -----END PGP SIGNATURE----- From openssl at openssl.org Tue Dec 8 15:13:49 2020 From: openssl at openssl.org (OpenSSL) Date: Tue, 8 Dec 2020 15:13:49 +0000 Subject: OpenSSL Security Advisory Message-ID: <20201208151349.GA7672@openssl.org> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA256 OpenSSL Security Advisory [08 December 2020] ============================================ EDIPARTYNAME NULL pointer de-reference (CVE-2020-1971) ====================================================== Severity: High The X.509 GeneralName type is a generic type for representing different types of names. One of those name types is known as EDIPartyName. OpenSSL provides a function GENERAL_NAME_cmp which compares different instances of a GENERAL_NAME to see if they are equal or not. This function behaves incorrectly when both GENERAL_NAMEs contain an EDIPARTYNAME. A NULL pointer dereference and a crash may occur leading to a possible denial of service attack. OpenSSL itself uses the GENERAL_NAME_cmp function for two purposes: 1) Comparing CRL distribution point names between an available CRL and a CRL distribution point embedded in an X509 certificate 2) When verifying that a timestamp response token signer matches the timestamp authority name (exposed via the API functions TS_RESP_verify_response and TS_RESP_verify_token) If an attacker can control both items being compared then that attacker could trigger a crash. For example if the attacker can trick a client or server into checking a malicious certificate against a malicious CRL then this may occur. Note that some applications automatically download CRLs based on a URL embedded in a certificate. 
This checking happens prior to the signatures on the certificate and CRL being verified. OpenSSL's s_server, s_client and verify tools have support for the "-crl_download" option which implements automatic CRL downloading and this attack has been demonstrated to work against those tools. Note that an unrelated bug means that affected versions of OpenSSL cannot parse or construct correct encodings of EDIPARTYNAME. However it is possible to construct a malformed EDIPARTYNAME that OpenSSL's parser will accept and hence trigger this attack. All OpenSSL 1.1.1 and 1.0.2 versions are affected by this issue. Other OpenSSL releases are out of support and have not been checked. OpenSSL 1.1.1 users should upgrade to 1.1.1i. OpenSSL 1.0.2 is out of support and no longer receiving public updates. Premium support customers of OpenSSL 1.0.2 should upgrade to 1.0.2x. Other users should upgrade to OpenSSL 1.1.1i. This issue was reported to OpenSSL on 9th November 2020 by David Benjamin (Google). Initial analysis was performed by David Benjamin with additional analysis by Matt Caswell (OpenSSL). The fix was developed by Matt Caswell. Note ==== OpenSSL 1.0.2 is out of support and no longer receiving public updates. Extended support is available for premium support customers: https://www.openssl.org/support/contracts.html OpenSSL 1.1.0 is out of support and no longer receiving updates of any kind. The impact of this issue on OpenSSL 1.1.0 has not been analysed. Users of these versions should upgrade to OpenSSL 1.1.1. References ========== URL for this Security Advisory: https://www.openssl.org/news/secadv/20201208.txt Note: the online version of the advisory may be updated with additional details over time. For details of OpenSSL severity classifications please see: https://www.openssl.org/policies/secpolicy.html -----BEGIN PGP SIGNATURE----- iQEzBAEBCAAdFiEEhlersmDwVrHlGQg52cTSbQ5gRJEFAl/PloEACgkQ2cTSbQ5g RJERNQf/d8G0r7APrOuxlwOL2j0j4JX5HZoR/ilD1eD6kSj3uZmCbl/DTZgN9uhj hMN9UTCVdF+NcWlqldwUVLLSq16/P821QLrbqKs4Q6i2NDwHIAU6VCneRZOUIOpl VOyQ+BJDavvqQ2gNziDK29sjG8JxWUqQ10fdphfrV1vS0Wd1fV1/Kk9I0ba+yv5O RiIyvbJobCEyNz52JdqbBsKjrSCtPh6qMra3IYm6EDJDnp+T8UpliB3RBIBuIPfU ALRageyqmE9+J5BFYxbd1Lx37mHXq1PZsSYd6L09Y9Wg5fJLHzWffd74SfJHwRza xZ/UTvCvkbGUbspT/U4mkuHwHzYXcg== =41vP -----END PGP SIGNATURE----- From kurt at roeckx.be Tue Dec 8 19:05:57 2020 From: kurt at roeckx.be (Kurt Roeckx) Date: Tue, 8 Dec 2020 20:05:57 +0100 Subject: OTC VOTE: Keeping API compatibility with missing public key In-Reply-To: <03937e1f3ba1077379ebdec6d7269393655481aa.camel@redhat.com> References: <03937e1f3ba1077379ebdec6d7269393655481aa.camel@redhat.com> Message-ID: On Fri, Dec 04, 2020 at 01:45:07PM +0100, Tomas Mraz wrote: > topic: For 3.0 EVP_PKEY keys all algorithm implementations that were usable > with 1.1.1 EVP_PKEY API or low level APIs without public component must > stay usable. > > This overrules the > * all implementations apart from EC require the public component to be present; > part of the vote closed on 2020-11-17. > +1 Kurt From tmraz at redhat.com Wed Dec 9 09:00:10 2020 From: tmraz at redhat.com (Tomas Mraz) Date: Wed, 09 Dec 2020 10:00:10 +0100 Subject: Remote unpack error when trying to push Message-ID: <4f3cb2388e80b26f50999a598f9805383ff38534.camel@redhat.com> It seems we are out of space or there is other similar problem on git.openssl.org. Pushing to openssl-git at git.openssl.org:openssl.git Enumerating objects: 7, done. Counting objects: 100% (7/7), done. 
Delta compression using up to 8 threads Compressing objects: 100% (4/4), done. Writing objects: 100% (4/4), 474 bytes | 474.00 KiB/s, done. Total 4 (delta 3), reused 0 (delta 0), pack-reused 0 error: remote unpack failed: unable to create temporary object directory To git.openssl.org:openssl.git ! [remote rejected] master -> master (unpacker error) error: failed to push some refs to 'openssl-git at git.openssl.org:openssl.git' -- Tom?? Mr?z No matter how far down the wrong road you've gone, turn back. Turkish proverb [You'll know whether the road is wrong if you carefully listen to your conscience.] From paul.dale at oracle.com Wed Dec 9 09:41:23 2020 From: paul.dale at oracle.com (Dr Paul Dale) Date: Wed, 9 Dec 2020 19:41:23 +1000 Subject: Remote unpack error when trying to push In-Reply-To: <4f3cb2388e80b26f50999a598f9805383ff38534.camel@redhat.com> References: <4f3cb2388e80b26f50999a598f9805383ff38534.camel@redhat.com> Message-ID: <981F4FB2-0A3A-45FB-8DBD-E20DE0651E54@oracle.com> I can confirm that there is a disc full no the machine. I?m not confident I can safely fix it ? it was the first time I?ve logged in to it. Pauli -- Dr Paul Dale | Distinguished Architect | Cryptographic Foundations Phone +61 7 3031 7217 Oracle Australia > On 9 Dec 2020, at 7:00 pm, Tomas Mraz wrote: > > It seems we are out of space or there is other similar problem on > git.openssl.org. > > > Pushing to openssl-git at git.openssl.org:openssl.git > Enumerating objects: 7, done. > Counting objects: 100% (7/7), done. > Delta compression using up to 8 threads > Compressing objects: 100% (4/4), done. > Writing objects: 100% (4/4), 474 bytes | 474.00 KiB/s, done. > Total 4 (delta 3), reused 0 (delta 0), pack-reused 0 > error: remote unpack failed: unable to create temporary object directory > To git.openssl.org:openssl.git > ! [remote rejected] master -> master (unpacker error) > error: failed to push some refs to 'openssl-git at git.openssl.org:openssl.git' > > > > -- > Tom?? Mr?z > No matter how far down the wrong road you've gone, turn back. > Turkish proverb > [You'll know whether the road is wrong if you carefully listen to your > conscience.] > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From paul.dale at oracle.com Wed Dec 9 12:32:40 2020 From: paul.dale at oracle.com (Dr Paul Dale) Date: Wed, 9 Dec 2020 22:32:40 +1000 Subject: Remote unpack error when trying to push In-Reply-To: <981F4FB2-0A3A-45FB-8DBD-E20DE0651E54@oracle.com> References: <4f3cb2388e80b26f50999a598f9805383ff38534.camel@redhat.com> <981F4FB2-0A3A-45FB-8DBD-E20DE0651E54@oracle.com> Message-ID: <96A62BD3-CB90-44C8-A1A7-30317F0AD8D4@oracle.com> Richard has fixed the space problem. PRs can be merged again. Pauli -- Dr Paul Dale | Distinguished Architect | Cryptographic Foundations Phone +61 7 3031 7217 Oracle Australia > On 9 Dec 2020, at 7:41 pm, Dr Paul Dale wrote: > > I can confirm that there is a disc full no the machine. > I?m not confident I can safely fix it ? it was the first time I?ve logged in to it. > > > Pauli > -- > Dr Paul Dale | Distinguished Architect | Cryptographic Foundations > Phone +61 7 3031 7217 > Oracle Australia > > > > >> On 9 Dec 2020, at 7:00 pm, Tomas Mraz > wrote: >> >> It seems we are out of space or there is other similar problem on >> git.openssl.org . >> >> >> Pushing to openssl-git at git.openssl.org :openssl.git >> Enumerating objects: 7, done. >> Counting objects: 100% (7/7), done. >> Delta compression using up to 8 threads >> Compressing objects: 100% (4/4), done. 
>> Writing objects: 100% (4/4), 474 bytes | 474.00 KiB/s, done. >> Total 4 (delta 3), reused 0 (delta 0), pack-reused 0 >> error: remote unpack failed: unable to create temporary object directory >> To git.openssl.org :openssl.git >> ! [remote rejected] master -> master (unpacker error) >> error: failed to push some refs to 'openssl-git at git.openssl.org :openssl.git' >> >> >> >> -- >> Tom?? Mr?z >> No matter how far down the wrong road you've gone, turn back. >> Turkish proverb >> [You'll know whether the road is wrong if you carefully listen to your >> conscience.] >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From matt at openssl.org Wed Dec 9 17:42:12 2020 From: matt at openssl.org (Matt Caswell) Date: Wed, 9 Dec 2020 17:42:12 +0000 Subject: Monthly Status Report (November) Message-ID: As well as normal reviews, responding to user queries, wiki user requests, OMC business, support customer issues, handling security reports, etc., key activities this month: - Investigated and prepared a fix where the nginx "ssl_reject_handshake" feature does not work in OpenSSL. - Completed and merged the PR to remove low-level DH use from libssl - Ongoing involvement in the regular OTC meetings (currently twice a week) - Improved the output from conf_diagnostics (some issues were being incorrectly suppressed from the error output) - Performed the alpha8 and alpha9 releases for OpenSSL 3.0 - Fixed the reading of DSA parameters files in the dsaparam app - Corrected system guessing for solaris64-x86_64-* targets - Fixed issues with the error "mark" system to enable multiple nested marks - Continued work on and merged the PR to change the default key generation type for DH/DSA - Cleaned up some functions in the apps to remove redundant error messages - Provided initial fix for clang10 issues (later superseded by a fix by Pauli) - Created a fix for RC4 based ciphersuites - Investigated and created an initial patch for the EDIPARTYNAME security issue - Investigated and fixed an issue where OSSL_STORE was forgetting the data type that we read from the PEM header when decoding the DER - Completed and merged the PR to ensure that the dhparam app no longer needs to use low level APIs - Investigated and fixed a fuzzing error in the Thawte Strong Extranet X509 extension - Removed deprecation warning suppression from genpkey - Fixed an error in missingcrypto111.txt related to ERR_load_KDF_strings - Moved some libssl global variables into SSL_CTX - Undeprecated the -dsaparam option in the dhparam app. The original motivation for this deprecation no longer applies - Implemented a Github CI solution as a replacement for Travis - Fixed no-rc2 - Fixed no-posix-io - Fixed no-err - Fixed no-engine - Completed and merged the PR to fully deprecate the DH low level APIs - Fixed the run-checker ubsan build - Fixed builds combining no-dh and no-ed Matt From nic.tuv at gmail.com Fri Dec 11 07:39:23 2020 From: nic.tuv at gmail.com (Nicola Tuveri) Date: Fri, 11 Dec 2020 09:39:23 +0200 Subject: OpenSSL Cryticality Score Message-ID: Hi all, just sharing an interesting factoid I came across today about the project. Google, as part of the Open Source Security Foundation, yesterday released a new project dubbed "Criticality Score", attempting (I am simplifying here for brevity) to create a metric of "how critical" a software is in the software ecosystem. 
You can read more accurate info about it here: https://opensource.googleblog.com/2020/12/finding-critical-open-source-projects.html They publish the collected metadata and the resulting score (based on the formula described at ) online as a CSV file. Sidenote: Notice the data seems to refer only to whatever the github API for a repo says, so for example OpenSSL is only 95 months old because that's when the github mirror was created (I opened an issue about this). Anyway, they split the data by language, and, among the analyzed C projects, OpenSSL expectedly scores quite high, being 6th in the top 200 measured C projects. Here is a link directly to the data: https://commondatastorage.googleapis.com/ossf-criticality-score/index.html Cheers, Nicola -------------- next part -------------- An HTML attachment was scrubbed... URL: From matt at openssl.org Fri Dec 11 09:23:44 2020 From: matt at openssl.org (Matt Caswell) Date: Fri, 11 Dec 2020 09:23:44 +0000 Subject: OpenSSL Cryticality Score In-Reply-To: References: Message-ID: <05975f97-f725-0a45-47d3-20e24b09807a@openssl.org> On 11/12/2020 07:39, Nicola Tuveri wrote: > Hi all, > > just sharing an interesting factoid I came across today about the project.? > > Google, as part of the Open Source Security Foundation, yesterday > released a new project dubbed "Criticality Score", attempting (I am > simplifying here for brevity) to create a metric of "how critical" a > software is in the software ecosystem.? > You can read more accurate info about it here: > https://opensource.googleblog.com/2020/12/finding-critical-open-source-projects.html > > They publish the collected metadata and the resulting score (based on > the formula described at ) > online as a CSV file. > > Sidenote: Notice the data seems to refer only to whatever the github API > for a repo says, so for example OpenSSL is only 95 months old because > that's when the github mirror was created (I opened an issue about this). > > Anyway, they split the data by language, and, among the analyzed C > projects, OpenSSL expectedly scores quite high, being 6th in the top 200 > measured C projects. This is really interesting! We've always known that OpenSSL is widely used but never had any data to back it up. Actually according to the spreadsheet we are 5th (not 6th) - line 1 in the sheet is the title row. Linux takes 2 of the top spots, with git and php taking the other spots ahead of OpenSSL. Not sure I understand the "Releases (last yr)" column which says we did 41 releases - that's a number I can't reconcile with the actual number of releases we did. Matt > > Here is a link directly to the data: > https://commondatastorage.googleapis.com/ossf-criticality-score/index.html > > > Cheers,? > > Nicola From nic.tuv at gmail.com Fri Dec 11 09:54:30 2020 From: nic.tuv at gmail.com (Nicola Tuveri) Date: Fri, 11 Dec 2020 11:54:30 +0200 Subject: OpenSSL Cryticality Score In-Reply-To: <05975f97-f725-0a45-47d3-20e24b09807a@openssl.org> References: <05975f97-f725-0a45-47d3-20e24b09807a@openssl.org> Message-ID: On Fri, Dec 11, 2020 at 11:23 AM Matt Caswell wrote: > > > Actually according to the spreadsheet we are 5th (not 6th) - line 1 in > the sheet is the title row. Linux takes 2 of the top spots, with git and > php taking the other spots ahead of OpenSSL. 
Good, it's good that the double review process catches my off-by-one errors also on the mailing list ;) > > > Not sure I understand the "Releases (last yr)" column which says we did > 41 releases - that's a number I can't reconcile with the actual number > of releases we did. > https://github.com/ossf/criticality_score/blob/59e449d5598de4f27a83070297e5779a4a3407b2/criticality_score/run.py#L96-L114 It seems to be an estimate based on the number of tags, as we don't do github releases: ``` RELEASE_LOOKBACK_DAYS=365 (total_tags / days_since_creation) * RELEASE_LOOKBACK_DAYS ``` This is definitely skewed by considering the project 95 months old (2887 days) instead of ~264 months (8026 days). Nicola From nic.tuv at gmail.com Sun Dec 13 10:31:28 2020 From: nic.tuv at gmail.com (Nicola Tuveri) Date: Sun, 13 Dec 2020 12:31:28 +0200 Subject: OpenSSL Cryticality Score In-Reply-To: References: <05975f97-f725-0a45-47d3-20e24b09807a@openssl.org> Message-ID: As an update on the issue of some fields being not entirely accurate. I am forwarding a message on behalf of @inferno-chromium, the maintainer of the https://github.com/ossf/criticality_score project that followed up on the [Github issue] I opened about this. > Thanks for notifying us of the issue with incorrect project creation > date issue, we do plan to look into it and see feasibility of picking > the first commit date for accuracy. In case of openssl, it would have > little to no-impact on criticality score, as other factors clearly > indicate it is a super-critical project. These include things like > users dependent on openssl library, number of project contributors and > user activity in terms of issues filed, updated. [Github issue]: https://github.com/ossf/criticality_score/issues/14 On Fri, Dec 11, 2020 at 11:54 AM Nicola Tuveri wrote: > > On Fri, Dec 11, 2020 at 11:23 AM Matt Caswell wrote: > > > > > > Actually according to the spreadsheet we are 5th (not 6th) - line 1 in > > the sheet is the title row. Linux takes 2 of the top spots, with git and > > php taking the other spots ahead of OpenSSL. > > > Good, it's good that the double review process catches my off-by-one > errors also on the mailing list ;) > > > > > > > Not sure I understand the "Releases (last yr)" column which says we did > > 41 releases - that's a number I can't reconcile with the actual number > > of releases we did. > > > > https://github.com/ossf/criticality_score/blob/59e449d5598de4f27a83070297e5779a4a3407b2/criticality_score/run.py#L96-L114 > > It seems to be an estimate based on the number of tags, as we don't do > github releases: > > ``` > RELEASE_LOOKBACK_DAYS=365 > (total_tags / days_since_creation) * RELEASE_LOOKBACK_DAYS > ``` > > This is definitely skewed by considering the project 95 months old > (2887 days) instead of ~264 months (8026 days). > > > Nicola From kurt at roeckx.be Mon Dec 14 14:59:37 2020 From: kurt at roeckx.be (Kurt Roeckx) Date: Mon, 14 Dec 2020 15:59:37 +0100 Subject: OSSL_PARAM behaviour for unknown keys Message-ID: Hi, doc/man3/OSSL_PARAM.pod current says: Keys that a I or I doesn't recognise should simply be ignored. That in itself isn't an error. The intention of that seems to be that you just pass all the data you have, and that it takes data it needs. So you can pass it data that it doesn't need because it's only used in case some other parameter has some specific value. For example, depending on the DRBG mode (HMAC, CTR, HASH) you have different parameters, and you can just pass all the parameters for all the modes. 
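[Illustration, not part of the thread: a hedged sketch of the "pass everything, let the implementation pick what it needs" pattern described above. EVP_RAND_CTX_set_params() and the "digest"/"mac"/"cipher" key names are assumed from the 3.0 development branch and may differ in the final API.]

```c
#include <openssl/evp.h>
#include <openssl/params.h>

/* One parameter array carrying keys for several DRBG flavours; the
 * fetched DRBG is expected to take what it knows and ignore the rest. */
static int configure_drbg(EVP_RAND_CTX *drbg)
{
    char digest[] = "SHA256";       /* used by HASH/HMAC DRBGs */
    char mac[] = "HMAC";            /* used by the HMAC DRBG only */
    char cipher[] = "AES-256-CTR";  /* used by the CTR DRBG only */
    OSSL_PARAM params[] = {
        OSSL_PARAM_construct_utf8_string("digest", digest, 0),
        OSSL_PARAM_construct_utf8_string("mac", mac, 0),
        OSSL_PARAM_construct_utf8_string("cipher", cipher, 0),
        OSSL_PARAM_construct_end()
    };

    /* As discussed above, today this can return success even if some
     * of the keys were never looked at by the implementation. */
    return EVP_RAND_CTX_set_params(drbg, params);
}
```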
I think for behaviour for a setter is not something that we want, it makes it complicated for applications to check that it will behave properly. I think that in general, if the applications wants to set something and you don't understand it, you should return an error. This is about future proofing the API. For instance, a new version supports a new mode to work in and that needs a new parameter. If it's build against a version that knows about it, but then runs against a version that doesn't know about it, everything will appear to work, but be broken. If we return an error, it will be clear that it's not supported. An alternative method of working is that the application first needs to query that it's supported. And only if it's supported it should call the function. But we don't have an API to query for that. You might be able to ask for which keys you can set, but it doesn't cover which values you can set. I hope we at least return an error for a known key with an unknown value. But it's my understanding that we currently don't always return all supported keys, and that the supported keys can depend on one of the set parameters. I suggest that we change the return value to indicate that all parameters have been used or not. For instance return 1 in case all used, return 2 in case not all used. Kurt From beldmit at gmail.com Mon Dec 14 19:20:29 2020 From: beldmit at gmail.com (Dmitry Belyavsky) Date: Mon, 14 Dec 2020 20:20:29 +0100 Subject: OSSL_PARAM behaviour for unknown keys In-Reply-To: References: Message-ID: Dear Kurt, On Mon, Dec 14, 2020 at 3:59 PM Kurt Roeckx wrote: > Hi, > > doc/man3/OSSL_PARAM.pod current says: > Keys that a I or I doesn't recognise should > simply be ignored. That in itself isn't an error. > > The intention of that seems to be that you just pass all the data > you have, and that it takes data it needs. So you can pass it data > that it doesn't need because it's only used in case some other parameter > has some specific value. For example, depending on the DRBG mode > (HMAC, CTR, HASH) you have different parameters, and you can just > pass all the parameters for all the modes. > > I think for behaviour for a setter is not something that we want, > it makes it complicated for applications to check that it will > behave properly. I think that in general, if the applications > wants to set something and you don't understand it, you should > return an error. This is about future proofing the API. For > instance, a new version supports a new mode to work in and that > needs a new parameter. If it's build against a version that knows > about it, but then runs against a version that doesn't know about > it, everything will appear to work, but be broken. If we return > an error, it will be clear that it's not supported. > > An alternative method of working is that the application first > needs to query that it's supported. And only if it's supported > it should call the function. But we don't have an API to query for > that. You might be able to ask for which keys you can set, but it > doesn't cover which values you can set. I hope we at least return > an error for a known key with an unknown value. But it's my > understanding that we currently don't always return all supported > keys, and that the supported keys can depend on one of the set > parameters. > > I suggest that we change the return value to indicate that all > parameters have been used or not. For instance return 1 in case > all used, return 2 in case not all used. 
> > >From my GOST implementor's experience, the provider can get a lot of parameters. Some of them are supported, some of them are not. The particular provider is the only subsystem that knows which parameters are supported and which are necessary for the operations. So the caller can provide some unsupported parameters, some supported and some totally wrong for the provider. These are the cases that must be distinguishable on the caller side. After that the provided EVP object should be either in a consistent state or not, assuming the upcoming operation. And the possibility to find out whether the state is consistent and suitable for the upcoming operation or not is a must and should be provided by an API. -- SY, Dmitry Belyavsky -------------- next part -------------- An HTML attachment was scrubbed... URL: From kurt at roeckx.be Mon Dec 14 21:10:05 2020 From: kurt at roeckx.be (Kurt Roeckx) Date: Mon, 14 Dec 2020 22:10:05 +0100 Subject: OSSL_PARAM behaviour for unknown keys In-Reply-To: References: Message-ID: On Mon, Dec 14, 2020 at 08:20:29PM +0100, Dmitry Belyavsky wrote: > Dear Kurt, > > > On Mon, Dec 14, 2020 at 3:59 PM Kurt Roeckx wrote: > > > Hi, > > > > doc/man3/OSSL_PARAM.pod current says: > > Keys that a I or I doesn't recognise should > > simply be ignored. That in itself isn't an error. > > > > The intention of that seems to be that you just pass all the data > > you have, and that it takes data it needs. So you can pass it data > > that it doesn't need because it's only used in case some other parameter > > has some specific value. For example, depending on the DRBG mode > > (HMAC, CTR, HASH) you have different parameters, and you can just > > pass all the parameters for all the modes. > > > > I think for behaviour for a setter is not something that we want, > > it makes it complicated for applications to check that it will > > behave properly. I think that in general, if the applications > > wants to set something and you don't understand it, you should > > return an error. This is about future proofing the API. For > > instance, a new version supports a new mode to work in and that > > needs a new parameter. If it's build against a version that knows > > about it, but then runs against a version that doesn't know about > > it, everything will appear to work, but be broken. If we return > > an error, it will be clear that it's not supported. > > > > An alternative method of working is that the application first > > needs to query that it's supported. And only if it's supported > > it should call the function. But we don't have an API to query for > > that. You might be able to ask for which keys you can set, but it > > doesn't cover which values you can set. I hope we at least return > > an error for a known key with an unknown value. But it's my > > understanding that we currently don't always return all supported > > keys, and that the supported keys can depend on one of the set > > parameters. > > > > I suggest that we change the return value to indicate that all > > parameters have been used or not. For instance return 1 in case > > all used, return 2 in case not all used. > > > > > From my GOST implementor's experience, the provider can get a lot of > parameters. > Some of them are supported, some of them are not. > > The particular provider is the only subsystem that knows which parameters > are supported and which are necessary for the operations. > > So the caller can provide some unsupported parameters, some supported and > some totally wrong for the provider. 
> These are the cases that must be distinguishable on the caller side. If I understand you correctly, what you're saying is that it's sometimes ok to ignore some parameters. For instance, if you try to create an RSA object, and you pass it CRT parameters, and the implementation doesn't do anything with them, it can ignore them if it wants to. I would say that the provider should know what those parameters mean, so that it's not an "unknown key", it just ignores them, at which points it can say that it understands all the parameters. Some might argue that they don't want to use something that doesn't make use of the CRT parameters, but then they probably shouldn't be using that provider to begin with. > After that the provided EVP object should be either in a consistent state > or not, assuming the upcoming operation. The object should always be in a consistent state. I would prefer that in case of failure the object is not created (or modified). Which brings us to some other open points about the API we have. We should not introduce new APIs where you can modify the state of the object, so it can not be in a non-consistent state. It's much more simple to get things correct in that case. But as long as we have to support old APIs where it can be modified, the prefered consistent state is to not mofify the object on error. Some APIs make this very hard, so the other acceptable state is that you can free the object. With an API that doesn't allow modification, either you get a complete object, or you get no object. Kurt From davidben at google.com Mon Dec 14 22:20:38 2020 From: davidben at google.com (David Benjamin) Date: Mon, 14 Dec 2020 17:20:38 -0500 Subject: OSSL_PARAM behaviour for unknown keys In-Reply-To: References: Message-ID: I'm not very familiar with the new providers system, but I would discourage introducing new special return values. In my experience, callers don't do a good job of handling this sort of thing. The more APIs diverge from a straightforward success/failure return, the more error-prone they are. So a 1 vs 2 return value for "success" vs "success, but..." seems likely to confuse things. It also seems safer for unexpected parameters to be an error. While sometimes they can be ignored, sometimes they cannot. For example, looking at the RSA provider interface, suppose a caller passed in rsa-factor3, rsa-factor4, etc. A provider that didn't implement multi-prime keys, or supported a different number of coefficients would not notice at the parameter level. The object wouldn't be in a consistent state. (Relatedly, I think this example is another reason that providers should validate inputs on key import. See https://github.com/openssl/openssl/issues/13615.) https://www.openssl.org/docs/manmaster/man7/EVP_PKEY-RSA.html Or consider a caller that thought they were configuring a private key, but got the parameter name wrong. That would likely result in a public key. While self-consistent, it's the wrong type of object, compared to what the caller was expecting, and may result in strange errors further down the program flow. In the other direction, the DRBG example does not seem very compelling to me. At the point the application picks the broad family of algorithm, it should also pick the parameters to instantiate the actual algorithm. They're typically a unit. The convenience of passing arguments that won't be used seems not especially valuable, especially compared against the safety and correctness cost in silently misinterpreting the caller's request. 
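[Illustration, not part of the thread: the capability query mentioned in this discussion partly exists already. Below is a hedged sketch of a caller-side guard that refuses to proceed if any key it wants to set is not advertised as settable; the function names are assumed from the 3.0 development branch.]

```c
#include <openssl/evp.h>
#include <openssl/params.h>

/* Return 1 only if every key in |wanted| is advertised as settable on
 * |drbg|.  A caller can use this to turn "silently ignored" into an
 * explicit error before EVP_RAND_CTX_set_params() is ever called. */
static int all_keys_settable(EVP_RAND_CTX *drbg, const OSSL_PARAM *wanted)
{
    const OSSL_PARAM *settable = EVP_RAND_CTX_settable_params(drbg);
    const OSSL_PARAM *p;

    if (settable == NULL)
        return 0;
    for (p = wanted; p != NULL && p->key != NULL; p++)
        if (OSSL_PARAM_locate_const(settable, p->key) == NULL)
            return 0;
    return 1;
}
```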
An RSA provider which does not implement CRT is a little more plausible, but I think optional parameters are more the exception rather than the rule. RSA-CRT is well-established and standard (RFC8017, Appendix A.1.2), so that provider can simply know to ignore CRT parameters, possibly still validating them. (If less well-established, the caller may need to query capabilities anyway, in which case it'd know the provider implements a smaller interface. Though see below about how likely this is.) We can also look to programming languages. While languages sometimes do drop unused and undeclared parameters (e.g. Python **kwargs), that's usually not the default story. Finally, these are cryptographic primitives, not a general-purpose plugin system. Cryptographic primitives aren't introduced frequently. They especially aren't extended frequently, and typically have well-defined serializations and structures. They're also security-sensitive. That suggests leaning towards safety and structure rather than ad-hoc extensibility. David On Mon, Dec 14, 2020 at 4:10 PM Kurt Roeckx wrote: > On Mon, Dec 14, 2020 at 08:20:29PM +0100, Dmitry Belyavsky wrote: > > Dear Kurt, > > > > > > On Mon, Dec 14, 2020 at 3:59 PM Kurt Roeckx wrote: > > > > > Hi, > > > > > > doc/man3/OSSL_PARAM.pod current says: > > > Keys that a I or I doesn't recognise should > > > simply be ignored. That in itself isn't an error. > > > > > > The intention of that seems to be that you just pass all the data > > > you have, and that it takes data it needs. So you can pass it data > > > that it doesn't need because it's only used in case some other > parameter > > > has some specific value. For example, depending on the DRBG mode > > > (HMAC, CTR, HASH) you have different parameters, and you can just > > > pass all the parameters for all the modes. > > > > > > I think for behaviour for a setter is not something that we want, > > > it makes it complicated for applications to check that it will > > > behave properly. I think that in general, if the applications > > > wants to set something and you don't understand it, you should > > > return an error. This is about future proofing the API. For > > > instance, a new version supports a new mode to work in and that > > > needs a new parameter. If it's build against a version that knows > > > about it, but then runs against a version that doesn't know about > > > it, everything will appear to work, but be broken. If we return > > > an error, it will be clear that it's not supported. > > > > > > An alternative method of working is that the application first > > > needs to query that it's supported. And only if it's supported > > > it should call the function. But we don't have an API to query for > > > that. You might be able to ask for which keys you can set, but it > > > doesn't cover which values you can set. I hope we at least return > > > an error for a known key with an unknown value. But it's my > > > understanding that we currently don't always return all supported > > > keys, and that the supported keys can depend on one of the set > > > parameters. > > > > > > I suggest that we change the return value to indicate that all > > > parameters have been used or not. For instance return 1 in case > > > all used, return 2 in case not all used. > > > > > > > > From my GOST implementor's experience, the provider can get a lot of > > parameters. > > Some of them are supported, some of them are not. 
> > > > The particular provider is the only subsystem that knows which parameters > > are supported and which are necessary for the operations. > > > > So the caller can provide some unsupported parameters, some supported and > > some totally wrong for the provider. > > These are the cases that must be distinguishable on the caller side. > > If I understand you correctly, what you're saying is that it's > sometimes ok to ignore some parameters. For instance, if you try > to create an RSA object, and you pass it CRT parameters, and the > implementation doesn't do anything with them, it can ignore them > if it wants to. > > I would say that the provider should know what those parameters > mean, so that it's not an "unknown key", it just ignores them, > at which points it can say that it understands all the parameters. > > Some might argue that they don't want to use something that > doesn't make use of the CRT parameters, but then they probably > shouldn't be using that provider to begin with. > > > After that the provided EVP object should be either in a consistent state > > or not, assuming the upcoming operation. > > The object should always be in a consistent state. I would prefer > that in case of failure the object is not created (or modified). > Which brings us to some other open points about the API we have. We > should not introduce new APIs where you can modify the state of the > object, so it can not be in a non-consistent state. It's much more > simple to get things correct in that case. But as long as we have > to support old APIs where it can be modified, the prefered > consistent state is to not mofify the object on error. Some APIs make > this very hard, so the other acceptable state is that you can free > the object. With an API that doesn't allow modification, either > you get a complete object, or you get no object. > > > Kurt > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From beldmit at gmail.com Tue Dec 15 07:40:03 2020 From: beldmit at gmail.com (Dmitry Belyavsky) Date: Tue, 15 Dec 2020 08:40:03 +0100 Subject: OSSL_PARAM behaviour for unknown keys In-Reply-To: References: Message-ID: Dear Kurt, On Mon, Dec 14, 2020 at 10:10 PM Kurt Roeckx wrote: > On Mon, Dec 14, 2020 at 08:20:29PM +0100, Dmitry Belyavsky wrote: > > Dear Kurt, > > > > > > On Mon, Dec 14, 2020 at 3:59 PM Kurt Roeckx wrote: > > > > > Hi, > > > > > > doc/man3/OSSL_PARAM.pod current says: > > > Keys that a I or I doesn't recognise should > > > simply be ignored. That in itself isn't an error. > > > > > > The intention of that seems to be that you just pass all the data > > > you have, and that it takes data it needs. So you can pass it data > > > that it doesn't need because it's only used in case some other > parameter > > > has some specific value. For example, depending on the DRBG mode > > > (HMAC, CTR, HASH) you have different parameters, and you can just > > > pass all the parameters for all the modes. > > > > > > I think for behaviour for a setter is not something that we want, > > > it makes it complicated for applications to check that it will > > > behave properly. I think that in general, if the applications > > > wants to set something and you don't understand it, you should > > > return an error. This is about future proofing the API. For > > > instance, a new version supports a new mode to work in and that > > > needs a new parameter. 
If it's build against a version that knows > > > about it, but then runs against a version that doesn't know about > > > it, everything will appear to work, but be broken. If we return > > > an error, it will be clear that it's not supported. > > > > > > An alternative method of working is that the application first > > > needs to query that it's supported. And only if it's supported > > > it should call the function. But we don't have an API to query for > > > that. You might be able to ask for which keys you can set, but it > > > doesn't cover which values you can set. I hope we at least return > > > an error for a known key with an unknown value. But it's my > > > understanding that we currently don't always return all supported > > > keys, and that the supported keys can depend on one of the set > > > parameters. > > > > > > I suggest that we change the return value to indicate that all > > > parameters have been used or not. For instance return 1 in case > > > all used, return 2 in case not all used. > > > > > > > > From my GOST implementor's experience, the provider can get a lot of > > parameters. > > Some of them are supported, some of them are not. > > > > The particular provider is the only subsystem that knows which parameters > > are supported and which are necessary for the operations. > > > > So the caller can provide some unsupported parameters, some supported and > > some totally wrong for the provider. > > These are the cases that must be distinguishable on the caller side. > > If I understand you correctly, what you're saying is that it's > sometimes ok to ignore some parameters. For instance, if you try > to create an RSA object, and you pass it CRT parameters, and the > implementation doesn't do anything with them, it can ignore them > if it wants to. > > I would say that the provider should know what those parameters > mean, so that it's not an "unknown key", it just ignores them, > at which points it can say that it understands all the parameters. > > Some might argue that they don't want to use something that > doesn't make use of the CRT parameters, but then they probably > shouldn't be using that provider to begin with. > > > After that the provided EVP object should be either in a consistent state > > or not, assuming the upcoming operation. > > The object should always be in a consistent state. I would prefer > that in case of failure the object is not created (or modified). > Which brings us to some other open points about the API we have. We > should not introduce new APIs where you can modify the state of the > object, so it can not be in a non-consistent state. It's much more > simple to get things correct in that case. But as long as we have > to support old APIs where it can be modified, the prefered > consistent state is to not mofify the object on error. Some APIs make > this very hard, so the other acceptable state is that you can free > the object. With an API that doesn't allow modification, either > you get a complete object, or you get no object. > > I hope I've got a specific point of our disagreement. There are 2 variants of using OpenSSL. 1. Algorithm-agnostic. We can deal with most of the algorithms in a more or less similar way. That was the way we dealt with various algorithms in libcrypto since 1.0 version. 2. Algorithm-specific. The API user should take into account which algorithms are supported by their application and provide some specific processing. These are two different approaches. 
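A rough sketch of the contrast (illustrative only, not code from the tree; verify_agnostic is a made-up helper name and error handling is trimmed):

    #include <openssl/evp.h>

    /*
     * (1) Algorithm-agnostic use: the same code verifies with an RSA, EC or
     *     DSA key (or, with md == NULL, an Ed25519/Ed448 key) - whatever
     *     |pkey| happens to be.
     */
    static int verify_agnostic(EVP_PKEY *pkey, const EVP_MD *md,
                               const unsigned char *msg, size_t msglen,
                               const unsigned char *sig, size_t siglen)
    {
        EVP_MD_CTX *ctx = EVP_MD_CTX_new();
        int ok = ctx != NULL
                 && EVP_DigestVerifyInit(ctx, NULL, md, NULL, pkey) > 0
                 && EVP_DigestVerify(ctx, sig, siglen, msg, msglen) > 0;

        EVP_MD_CTX_free(ctx);
        return ok;
    }

    /*
     * (2) Algorithm-specific use: the caller knows it is handling, say, RSA
     *     and which parameters ("n", "e", "d", "rsa-factor1", ...) it has
     *     to supply; switching to another algorithm means changing this
     *     code.
     */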
The OpenSSL itself should be more or less algorithm-agnostic. The providers (as engines before) are definitely algorithm-specific. The openssl command line utilities in fact provide flexibility leaving the burden of parameters setup to the end-user. So if you pass some RSA-specific parameters to an EC key and vice versa, you should get an (ignorable) error. But when you have set the parameters and try to do a particular operation, you either have a consistent set of parameters (and get OK checking it) or not (and get an unrecoverable failure). -- SY, Dmitry Belyavsky -------------- next part -------------- An HTML attachment was scrubbed... URL: From levitte at openssl.org Tue Dec 15 07:45:45 2020 From: levitte at openssl.org (Richard Levitte) Date: Tue, 15 Dec 2020 08:45:45 +0100 Subject: OSSL_PARAM behaviour for unknown keys In-Reply-To: References: Message-ID: <874kknh5qu.wl-levitte@openssl.org> Whatever we decide on this, I would rather not burden provider authors with having to check for all sorts of things they aren't interested in. I've often had the fictitious algorithm BLARGH (someone should invent it, just 'cause), and while everyone with access to specs could write a provider, some might extend it as well, with extra parameters (like the CRT params for RSA keys) or flags or whatnot. If we burden the providers with the sort of checks that are discussed here, it would require every BLARGH implementation to be constantly in sync with every other BLARGH implementation. That's not a very good idea (*) So, I'm thinking that control should remain with the application / libcrypto. However, I'll also maintain that, as a matter of protocol, we can ask the providers to set the return_size field for any parameter they use, as a way to help out. That would enable the use of functions like OSSL_PARAM_modified() on the application / libcrypto side when setting parameters, just at it currently does for getting them. From a provider author point of view, I'd say that's a much lesser burden than having to have knowledge of all sorts of params any other provider might support. Of course, checking the gettable and settable tables beforehand works as well. They were originally never meant to be mandatory, but I guess we're moving that way... Cheers, Richard On Mon, 14 Dec 2020 23:20:38 +0100, David Benjamin wrote: > > I'm not very familiar with the new providers system, but I would discourage introducing new > special return values. In my experience, callers don't do a good job of handling this sort of > thing. The more APIs diverge from a straightforward success/failure return, the more error-prone > they are. So a 1 vs 2 return value for "success" vs "success, but..." seems likely to confuse > things. > > It also seems safer for unexpected parameters to be an error. While sometimes they can be ignored, > sometimes they cannot. For example, looking at the RSA provider interface, suppose a caller passed > in rsa-factor3, rsa-factor4, etc. A provider that didn't implement multi-prime keys, or supported > a different number of coefficients would not notice at the parameter level. The object wouldn't?be > in a consistent state. (Relatedly, I think this example is another reason that providers should > validate inputs on key import. See https://github.com/openssl/openssl/issues/13615.) > https://www.openssl.org/docs/manmaster/man7/EVP_PKEY-RSA.html > > Or consider a caller that thought they were configuring a private key, but got the parameter name > wrong. That would likely result in a public key. 
While self-consistent, it's the wrong type of > object, compared to what the caller was expecting, and may result in strange errors further down > the program flow. > > In the other direction, the DRBG example does not seem very compelling to me. At the point the > application picks the broad family of algorithm, it should also?pick the parameters to instantiate > the actual algorithm. They're typically a unit. The convenience of passing arguments that won't be > used seems not especially valuable,?especially compared against the safety?and correctness cost in > silently misinterpreting the caller's request. An RSA provider which?does not implement CRT is a > little more plausible, but I think optional parameters are more the exception rather than the > rule. RSA-CRT is well-established and standard (RFC8017, Appendix A.1.2), so that?provider can > simply know to ignore CRT parameters, possibly still validating them. (If less well-established, > the caller may need to query capabilities anyway, in which case it'd know the provider implements > a smaller interface. Though see below about how likely this is.) > > We can also look to programming languages. While languages sometimes do drop unused and undeclared > parameters (e.g. Python **kwargs), that's usually not the default story. > > Finally, these are cryptographic primitives,?not a general-purpose plugin system. Cryptographic > primitives aren't introduced frequently. They especially aren't extended frequently, and typically > have well-defined serializations and structures. They're also security-sensitive. That suggests > leaning towards safety and structure rather than ad-hoc extensibility. > > David > > On Mon, Dec 14, 2020 at 4:10 PM Kurt Roeckx wrote: > > On Mon, Dec 14, 2020 at 08:20:29PM +0100, Dmitry Belyavsky wrote: > > Dear Kurt, > > > > > > On Mon, Dec 14, 2020 at 3:59 PM Kurt Roeckx wrote: > > > > > Hi, > > > > > > doc/man3/OSSL_PARAM.pod current says: > > > Keys that a I or I doesn't recognise should > > > simply be ignored. That in itself isn't an error. > > > > > > The intention of that seems to be that you just pass all the data > > > you have, and that it takes data it needs. So you can pass it data > > > that it doesn't need because it's only used in case some other parameter > > > has some specific value. For example, depending on the DRBG mode > > > (HMAC, CTR, HASH) you have different parameters, and you can just > > > pass all the parameters for all the modes. > > > > > > I think for behaviour for a setter is not something that we want, > > > it makes it complicated for applications to check that it will > > > behave properly. I think that in general, if the applications > > > wants to set something and you don't understand it, you should > > > return an error. This is about future proofing the API. For > > > instance, a new version supports a new mode to work in and that > > > needs a new parameter. If it's build against a version that knows > > > about it, but then runs against a version that doesn't know about > > > it, everything will appear to work, but be broken. If we return > > > an error, it will be clear that it's not supported. > > > > > > An alternative method of working is that the application first > > > needs to query that it's supported. And only if it's supported > > > it should call the function. But we don't have an API to query for > > > that. You might be able to ask for which keys you can set, but it > > > doesn't cover which values you can set. 
I hope we at least return > > > an error for a known key with an unknown value. But it's my > > > understanding that we currently don't always return all supported > > > keys, and that the supported keys can depend on one of the set > > > parameters. > > > > > > I suggest that we change the return value to indicate that all > > > parameters have been used or not. For instance return 1 in case > > > all used, return 2 in case not all used. > > > > > > > > From my GOST implementor's experience, the provider can get a lot of > > parameters. > > Some of them are supported, some of them are not. > > > > The particular provider is the only subsystem that knows which parameters > > are supported and which are necessary for the operations. > > > > So the caller can provide some unsupported parameters, some supported and > > some totally wrong for the provider. > > These are the cases that must be distinguishable on the caller side. > > If I understand you correctly, what you're saying is that it's > sometimes ok to ignore some parameters. For instance, if you try > to create an RSA object, and you pass it CRT parameters, and the > implementation doesn't do anything with them, it can ignore them > if it wants to. > > I would say that the provider should know what those parameters > mean, so that it's not an "unknown key", it just ignores them, > at which points it can say that it understands all the parameters. > > Some might argue that they don't want to use something that > doesn't make use of the CRT parameters, but then they probably > shouldn't be using that provider to begin with. > > > After that the provided EVP object should be either in a consistent state > > or not, assuming the upcoming operation. > > The object should always be in a consistent state. I would prefer > that in case of failure the object is not created (or modified). > Which brings us to some other open points about the API we have. We > should not introduce new APIs where you can modify the state of the > object, so it can not be in a non-consistent state. It's much more > simple to get things correct in that case. But as long as we have > to support old APIs where it can be modified, the prefered > consistent state is to not mofify the object on error. Some APIs make > this very hard, so the other acceptable state is that you can free > the object. With an API that doesn't allow modification, either > you get a complete object, or you get no object. > > Kurt > > -- Richard Levitte levitte at openssl.org OpenSSL Project http://www.openssl.org/~levitte/ From tjh at cryptsoft.com Tue Dec 15 07:53:34 2020 From: tjh at cryptsoft.com (Tim Hudson) Date: Tue, 15 Dec 2020 17:53:34 +1000 Subject: OSSL_PARAM behaviour for unknown keys In-Reply-To: <874kknh5qu.wl-levitte@openssl.org> References: <874kknh5qu.wl-levitte@openssl.org> Message-ID: On Tue, Dec 15, 2020 at 5:46 PM Richard Levitte wrote: > Of course, checking the gettable and settable tables beforehand works > as well. They were originally never meant to be mandatory, but I > guess we're moving that way... > The only one who knows whether or not a given parameter is critically important to have been used is the application. The gettable and settable interfaces provide the ability to check that. For forward and backward compatibility it makes no sense to wire in a requirement for complete knowledge of everything that is provided. 
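As a minimal sketch of what that application-side check can look like (illustrative only, not code from the tree; param_is_settable is a made-up helper name, and similar settable tables exist for the other operation types):

    #include <openssl/evp.h>
    #include <openssl/params.h>

    /*
     * Return 1 if the implementation behind |ctx| advertises |key| as a
     * settable parameter, 0 otherwise.  The check is only as good as the
     * settable table the provider chooses to expose.
     */
    static int param_is_settable(EVP_PKEY_CTX *ctx, const char *key)
    {
        const OSSL_PARAM *settable = EVP_PKEY_CTX_settable_params(ctx);

        return settable != NULL
               && OSSL_PARAM_locate_const(settable, key) != NULL;
    }

Whether a missing key is fatal is then the application's decision, which keeps that call where the knowledge is.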
To have extensibility wired into the APIs, you need to be able to provide extra optional parameters that some implementations want to consume and that are entirely irrelevant to other implementations. That was one of the purposes of the new plumbing - to be able to handle things going forward. If you change things at this late stage to basically say everything has to know everything, then we lose that ability. In practical terms too, we need later releases of applications to be able to work with earlier releases of providers (specifically, but not limited to, the FIPS provider), and requiring awareness of all parameters would mean you could never reach the interchangeable provider context that the stable ABI is there for - it will simply lead to provider implementations needing to ignore things to achieve the necessary outcome. If you want to know whether a specific implementation is aware of something, the interface is already there. In short - I don't see an issue, as there is a way to check, and the interface is designed for forward and backward compatibility, which is more important than the various items raised here so far, IMHO. Tim -------------- next part -------------- An HTML attachment was scrubbed... URL: From kurt at roeckx.be Tue Dec 15 12:23:58 2020 From: kurt at roeckx.be (Kurt Roeckx) Date: Tue, 15 Dec 2020 13:23:58 +0100 Subject: OSSL_PARAM behaviour for unknown keys In-Reply-To: References: Message-ID: On Tue, Dec 15, 2020 at 08:40:03AM +0100, Dmitry Belyavsky wrote: > > There are 2 variants of using OpenSSL. > 1. Algorithm-agnostic. We can deal with most of the algorithms in a more or > less similar way. > That was the way we dealt with various algorithms in libcrypto since 1.0 > version. > > 2. Algorithm-specific. The API user should take into account which > algorithms are > supported by their application and provide some specific processing. > > These are two different approaches. I'm not really sure what you're trying to say. 1) seems to be things like EVP_CIPHER, EVP_MAC, and so on, while 2) seems to be the RSA, ECDSA, AES APIs. And 2) is then something we want to get rid of. What we want is that the interface to the algorithm doesn't depend on the algorithm. But even within the same type of algorithm, some might need more parameters, support different modes, and things like that, so we need a way to set all those options. We now have OSSL_PARAM, which can make this much more flexible, and we don't need to add a new function/macro each time we want to do something new. If an application wants to switch from one algorithm to another, it should be as easy as possible. But the application might need to change, and might need to be aware of which parameters are needed. If the application passes the RSA parameters itself to OpenSSL and it wants to switch to EdDSA, it will not continue to pass the primes, exponent, and so on. I think if it did try that, we should return an error. Kurt From tjh at cryptsoft.com Tue Dec 15 12:34:37 2020 From: tjh at cryptsoft.com (Tim Hudson) Date: Tue, 15 Dec 2020 22:34:37 +1000 Subject: OSSL_PARAM behaviour for unknown keys In-Reply-To: References: Message-ID: On Tue, Dec 15, 2020 at 10:24 PM Kurt Roeckx wrote: > If an application wants to switch from one algorithm to another, > it should be as easy as possible. But the application > might need to change, and might need to be aware of which parameters > are needed. A provider may not need any of those parameters - it might just need (for example) a label or key name.
That could be entirely sufficient and valid for an HSM usage scenario, and setting up a key in that manner should be permitted. Then you don't have any of the sort of parameters you are talking about, and it remains perfectly valid - for that provider. For other providers the list may be different. This is one of the areas where there is a conceptual difference - the parameter set is a collection of things a provider needs to do its work - it isn't necessarily a complete, standalone, portable definition of a cryptographic object with all elements available and provided by the application. Part of the point of this is that you should be able to use different algorithms without the application having to change - that is part of the point of the sort of APIs we have - so that applications can work with whatever the user wants to work with, and you don't always have to go and add extra code to every application when something new comes along that we want to support. Tim. -------------- next part -------------- An HTML attachment was scrubbed... URL: From kurt at roeckx.be Tue Dec 15 13:43:55 2020 From: kurt at roeckx.be (Kurt Roeckx) Date: Tue, 15 Dec 2020 14:43:55 +0100 Subject: OSSL_PARAM behaviour for unknown keys In-Reply-To: <874kknh5qu.wl-levitte@openssl.org> References: <874kknh5qu.wl-levitte@openssl.org> Message-ID: On Tue, Dec 15, 2020 at 08:45:45AM +0100, Richard Levitte wrote: > Whatever we decide on this, I would rather not burden provider authors > with having to check for all sorts of things they aren't interested in. I think you would write the provider just a little bit differently than you might be doing now. > I've often had the fictitious algorithm BLARGH (someone should invent > it, just 'cause), and while everyone with access to specs could write > a provider, some might extend it as well, with extra parameters (like > the CRT params for RSA keys) or flags or whatnot. If we burden the > providers with the sort of checks that are discussed here, it would > require every BLARGH implementation to be constantly in sync with > every other BLARGH implementation. That's not a very good idea (*) So you're saying that the application should change depending on which provider it has loaded? One implementation can name the parameter for the same functionality differently than another? I guess it's then up to the application to query which provider is loaded, and see which parameter it should set for that provider? But it's unlikely that all applications will properly check that the provider provides all the functionality it needs. And we should do what we can to prevent problems. > So, I'm thinking that control should remain with the application / > libcrypto. However, I'll also maintain that, as a matter of protocol, > we can ask the providers to set the return_size field for any > parameter they use, as a way to help out. That would enable the use > of functions like OSSL_PARAM_modified() on the application / libcrypto > side when setting parameters, just at it currently does for getting > them. From a provider author point of view, I'd say that's a much > lesser burden than having to have knowledge of all sorts of params any > other provider might support. To get back to the RSA / CRT example: if you write a provider to do RSA but don't use the CRT parameters, and the application loads a key that comes with CRT parameters, it's not hard for the provider to know that some other provider might use the CRT parameters, and so it can ignore them.
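A sketch of what that could look like in a provider's key import (purely illustrative; MY_RSA and my_rsa_import are made-up names, and the parameter names are the ones documented for RSA keys):

    #include <openssl/bn.h>
    #include <openssl/params.h>

    /* Made-up provider key type; a real provider has its own structure. */
    typedef struct {
        BIGNUM *n, *e, *d;
    } MY_RSA;

    /*
     * Sketch of an OSSL_FUNC_keymgmt_import-style function for an RSA
     * implementation that only uses n, e and d.  The CRT components
     * ("rsa-factor1", "rsa-exponent1", "rsa-coefficient1", ...) are known
     * names from EVP_PKEY-RSA(7); this implementation simply does not use
     * them, so their presence is not treated as an error here (it could
     * also validate them against n, e and d before discarding them).
     */
    static int my_rsa_import(void *keydata, int selection,
                             const OSSL_PARAM params[])
    {
        MY_RSA *rsa = keydata;
        const OSSL_PARAM *p;

        (void)selection;            /* selection handling elided */

        if ((p = OSSL_PARAM_locate_const(params, "n")) == NULL
                || !OSSL_PARAM_get_BN(p, &rsa->n))
            return 0;
        if ((p = OSSL_PARAM_locate_const(params, "e")) == NULL
                || !OSSL_PARAM_get_BN(p, &rsa->e))
            return 0;
        if ((p = OSSL_PARAM_locate_const(params, "d")) != NULL
                && !OSSL_PARAM_get_BN(p, &rsa->d))
            return 0;

        return 1;
    }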
But it's also fine if it returns an error at first: the application will either switch to another provider, or the provider will get fixed to ignore it. If it's for a parameter that enables a new mode, and the mode is not supported by the provider, we need a way to check that that mode is supported. For instance, blake2 has an optional key, but we never implemented it because the API didn't support setting it. If at some point it is added, an application needs to have a way to make sure it is supported. The options I see are: 1) The fetch function should support requesting a version that has support for it. 2) You fetch something and: a) You set the parameter, and check for an error. b) You ask the settable parameters and check that it supports it. It's my understanding that you can't get all the parameters in some cases, so this isn't always possible. I don't actually see many applications do 2b); they'll do 2a) and currently don't get an error. It's also likely they will not do 1) correctly, because the application was written against a provider that did support it, so it was not clear they needed to request that feature, and so again they will currently not get an error, and it will be hard to debug. We can detect that the application is trying to do something that's not going to work, and we should return an error in that case. We need 1), but the question is how fine-grained we want to make that, or what we expect a provider to implement when a feature is requested. For instance, if we want to load a multiprime RSA key, not all providers will support that, and as far as I know, the FIPS provider will not support that. Not all providers will even support loading an RSA private key, so when fetching RSA, we really need to say that we should be able to set the key. We need documentation that says what you can expect to do depending on the features you requested. As far as I know, if you currently try to load a multiprime key into the FIPS provider, it will not give you any error, it will just not do what you want it to do. We need to define how we're going to deal with all this. I prefer that it's done in some consistent way. Kurt From levitte at openssl.org Thu Dec 17 04:09:53 2020 From: levitte at openssl.org (Richard Levitte) Date: Thu, 17 Dec 2020 05:09:53 +0100 Subject: OMC VOTE result: No release of 1.1.1j next week Message-ID: <87o8itf4z2.wl-levitte@openssl.org> As per the regular cadence, 1.1.1j would have been released next week. However, seeing that we released 1.1.1i last week, the OMC made a quick vote. Vote: As an exception to the regular cadence, we will not release 1.1.1j on 22 December For: 5, Against: 0, Abstain: 0, Didn't vote yet: 2 The vote passes. -- Richard Levitte levitte at openssl.org OpenSSL Project http://www.openssl.org/~levitte/ From nic.tuv at gmail.com Mon Dec 21 18:39:49 2020 From: nic.tuv at gmail.com (Nicola Tuveri) Date: Mon, 21 Dec 2020 20:39:49 +0200 Subject: OTC VOTE: Fixing missing failure exit status is a bug fix In-Reply-To: References: Message-ID: The vote is now closed, and accepted! > topic: In the context of the OpenSSL apps, the OTC qualifies as bug > fixes the changes to return a failure exit status when a called > function fails with an unhandled return value. > Even when these bug fixes change the apps behavior triggering > early exits (compared to previous versions of the apps), as bug > fixes, they do not qualify as behavior changes that require an > explicit OMC approval.
> Proposed by Nicola Tuveri > Public: yes > opened: 2020-11-30 > closed: 2020-12-21 > accepted: yes (for: 9, against: 0, abstained: 0, not voted: 2) > > Matt [+1] > Mark [ ] > Pauli [+1] > Viktor [ ] > Tim [+1] > Richard [+1] > Shane [+1] > Tomas [+1] > Kurt [+1] > Matthias [+1] > Nicola [+1] On Mon, Nov 30, 2020 at 2:03 PM Nicola Tuveri wrote: > > Vote background > --------------- > > This follows up on a [previous proposal] that was abandoned in favor of > an OMC vote on the behavior change introduced in [PR#13359]. > Within today's OTC meeting this was further discussed with the attending > members that also sit in the OMC. > > The suggestion was to improve the separation of the OTC and OMC domains > here, by having a more generic OTC vote to qualify as bug fixes the > changes to let any OpenSSL app return an (early) failure exit status > when a called function fails. > > The idea is that, if we agree on this technical definition, then no OMC > vote to allow a behavior change in the apps would be required in > general, unless, on a case-by-case basis, the "OMC hold" process is > invoked for whatever reason on the specific bug fix, triggering the > usual OMC decision process. > > [previous proposal]: > > [PR#13359]: > > > > Vote text > --------- > > topic: In the context of the OpenSSL apps, the OTC qualifies as bug > fixes the changes to return a failure exit status when a called > function fails with an unhandled return value. > Even when these bug fixes change the apps behavior triggering > early exits (compared to previous versions of the apps), as bug > fixes, they do not qualify as behavior changes that require an > explicit OMC approval. > Proposed by Nicola Tuveri > Public: yes > opened: 2020-11-30