# ASN1 integer conversion - why is this correct?

Dirk-Willem van Gulik dirkx at webweaving.org
Sun Aug 30 13:23:00 UTC 2020

I am converting an unsigned integer (P and Q of an ECDSA 256-bit curve) from a 32-byte array (as provided by Microsoft's .NET cryptographic framework) to an ASN1_INTEGER.

The steps taken are:

```c
unsigned char in[32] = ..

r = BN_bin2bn(in, 32, NULL);
BN_to_ASN1_INTEGER(r, asn1intptr);
```

All works well, except for these two test cases:

```
in[32] =
FF F0 00 00 00 00 00 00   00 00 00 00 00 00 00 00
00 00 00 00 00 00 00 00   00 00 00 00 00 00 00 00
```

```
in[32] =
FF F0 00 00 00 00 00 00   00 00 00 00 00 00 00 00
00 00 00 00 00 00 00 00   00 00 00 00 00 00 00 FF // < last byte set
```

Which both yield:

```
2:d=1  hl=2 l=  33 prim: INTEGER           :EBFFF00000000000000000000000000000000000000000000000000000000000
```

And

```
in[32] =
03 00 00 00 00 00 00 00   00 00 00 00 00 00 00 00
00 00 00 00 00 00 00 00   00 00 00 00 00 00 00 FF
```

Which yields:

```
37:d=1  hl=2 l=  33 prim: INTEGER           :FF03000000000000000000000000000000000000000000000000000000000000
```

Could someone explain to me what happens here, especially to the trailing 0xFF byte?

With kind regards,

The actual code is at [1]; the output of the test script gen-tc.sh [2] is in [3].

Dw.

1: https://github.com/minvws/nl-contact-tracing-odds-and-ends/tree/master/dotNet_ms64_to_x962
2: https://github.com/minvws/nl-contact-tracing-odds-and-ends/blob/master/dotNet_ms64_to_x962/gen-tc.sh
3: https://github.com/minvws/nl-contact-tracing-odds-and-ends/blob/master/dotNet_ms64_to_x962/test.txt