Removing difference between CLI and FFI use for computing a message digest

Sage Gerard sage at sagegerard.com
Wed Sep 16 04:18:36 UTC 2020


Thank you. I resolved the issue. The root cause was an incorrect type cast when crossing the FFI boundary.
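
For anyone who finds this thread later, here is a minimal C sketch of the EVP call sequence that digest bindings have to mirror, with the exact types the FFI declarations must match. Note that the length out-parameter of EVP_DigestFinal_ex is an unsigned int *, not a size_t *; SHA-256 is just an example choice here.

    #include <stdio.h>
    #include <openssl/evp.h>

    int main(void)
    {
        static const unsigned char msg[] = "abc";
        unsigned char md[EVP_MAX_MD_SIZE];
        unsigned int md_len = 0; /* unsigned int, not size_t: a width
                                    mismatch here is the kind of bug that
                                    only surfaces across an FFI boundary */
        EVP_MD_CTX *ctx = EVP_MD_CTX_new();

        if (ctx == NULL)
            return 1;
        if (EVP_DigestInit_ex(ctx, EVP_sha256(), NULL) != 1
                || EVP_DigestUpdate(ctx, msg, sizeof(msg) - 1) != 1
                || EVP_DigestFinal_ex(ctx, md, &md_len) != 1) {
            EVP_MD_CTX_free(ctx);
            return 1;
        }
        EVP_MD_CTX_free(ctx);

        for (unsigned int i = 0; i < md_len; i++)
            printf("%02x", md[i]);
        printf("\n");
        return 0;
    }

Comparing a binding's output against `echo -n abc | openssl dgst -sha256` is a quick end-to-end check; the -n matters, for the reason Matt gives in point (2) below.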


~slg

------- Original Message -------
On Tuesday, September 15, 2020 6:06 PM, Matt Caswell <matt at openssl.org> wrote:

>
>
> On 15/09/2020 22:48, Sage Gerard wrote:
>
> > I have a Racket program that uses libcrypto through FFI bindings to
> > compute digests. It's wrong because it returns different answers than
> > `openssl dgst`, regardless of hash algorithm.
> > The code is here:
> > https://github.com/zyrolasting/xiden/blob/libcrypto/openssl.rkt#L76
> > It is based on the example in:
> > https://wiki.openssl.org/index.php/EVP_Message_Digests.
> > I'm not expecting anyone to run this program or review Racket code in
> > detail. The links are just there for context. I just want to know if
> > there are common C-level mistakes libcrypto users make that would make
> > their digests disagree with the CLI. As far as I can tell, I replicated
> > the example on wiki.openssl.org well enough to deterministically compute
> > a digest with any byte string.
> > Let me know if there is any other context I can provide.
>
> Common "rookie" errors that spring to mind are:
>
> 1.  Use strlen on binary data and end up passing the wrong length of data
>     to the functions (see the sketch after this list).
>
> 2.  Include carriage return/line feed in the input data in one context
>     but not in another.
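>
> To illustrate (1): strlen stops at the first NUL byte, so a length
> computed with it undercounts binary input and the digest is taken over
> truncated data. A minimal sketch:
>
>     #include <stdio.h>
>     #include <string.h>
>
>     int main(void)
>     {
>         /* Binary data with an embedded NUL byte. */
>         const unsigned char data[] = { 'a', 'b', 0x00, 'c', 'd' };
>
>         printf("strlen says %zu bytes\n", strlen((const char *)data));
>         printf("actual size %zu bytes\n", sizeof(data));
>         return 0;
>     }
>
> Hashing strlen(data) bytes here digests only "ab", while the CLI,
> given the same five bytes in a file, hashes all of them, so the
> outputs disagree.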
>
> Matt
>



