[openssl-commits] [openssl] master update
Matt Caswell
matt at openssl.org
Mon Oct 10 22:38:42 UTC 2016
The branch master has been updated
via 0e831db0a6d89a0074d910fb6d4994a3137cfb48 (commit)
via 609b0852e4d50251857dbbac3141ba042e35a9ae (commit)
from 11542af65a82242b47e97506695fa0d306d24fb6 (commit)
- Log -----------------------------------------------------------------
commit 0e831db0a6d89a0074d910fb6d4994a3137cfb48
Author: David Benjamin <davidben at google.com>
Date: Mon Oct 10 17:33:51 2016 -0400
Fix up bn_prime.pl formatting.
Align at 5 characters, not 4. There are 5-digit numbers in the output.
Also avoid emitting an extra blank line and trailing whitespace.
Reviewed-by: Kurt Roeckx <kurt at openssl.org>
Reviewed-by: Matt Caswell <matt at openssl.org>
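For illustration, a minimal Perl sketch of the alignment rule described above (an assumed reconstruction, not the actual bn_prime.pl code): five-digit primes such as 10007 overflow a 4-character field, so the generator right-aligns each value in 5 characters and closes every row without trailing whitespace or an extra blank line.

    # Hypothetical sketch of the output loop implied by the commit message.
    use strict;
    use warnings;

    my @primes = (2, 3, 5, 7, 11, 13, 9973, 10007, 10009);
    for (my $i = 0; $i < @primes; $i++) {
        printf("%5d,", $primes[$i]);   # align at 5 characters, not 4
        print "\n" if $i % 8 == 7;     # 8 values per row
    }
    print "\n" if @primes % 8;         # close a short last row; never emit a blank line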
commit 609b0852e4d50251857dbbac3141ba042e35a9ae
Author: David Benjamin <davidben at google.com>
Date: Mon Oct 10 12:01:24 2016 -0400
Remove trailing whitespace from some files.
The prevailing style seems to not have trailing whitespace, but a few
lines do. This is mostly in the perlasm files, but a few C files got
them after the reformat. This is the result of:
find . -name '*.pl' | xargs sed -E -i '' -e 's/( |'$'\t'')*$//'
find . -name '*.c' | xargs sed -E -i '' -e 's/( |'$'\t'')*$//'
find . -name '*.h' | xargs sed -E -i '' -e 's/( |'$'\t'')*$//'
Then bn_prime.h was excluded since this is a generated file.
Note mkerr.pl has some changes in a heredoc for some help output, but
other lines there lack trailing whitespace too.
Reviewed-by: Kurt Roeckx <kurt at openssl.org>
Reviewed-by: Matt Caswell <matt at openssl.org>
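As an aside (not part of either commit), the same cleanup can be expressed as a single Perl in-place edit, which sidesteps the BSD-specific "sed -i ''" form used above; the script name and the find invocation in the comment are assumptions for illustration, not the commands actually run, and bn_prime.h would still need to be excluded by hand as noted.

    # strip_ws.pl (hypothetical): strip trailing spaces/tabs from the files
    # given on the command line, editing them in place, e.g.
    #   find . -name '*.pl' -o -name '*.c' -o -name '*.h' | xargs perl strip_ws.pl
    use strict;
    use warnings;

    $^I = '';            # in-place edit of @ARGV files, no backup (like sed -i '')
    while (<>) {
        s/[ \t]+$//;     # drop trailing spaces and tabs before the newline
        print;           # write the cleaned line back to the file
    }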
-----------------------------------------------------------------------
Summary of changes:
apps/cms.c | 2 +-
apps/smime.c | 2 +-
apps/speed.c | 4 +-
crypto/aes/asm/aes-586.pl | 12 +-
crypto/aes/asm/aes-ppc.pl | 8 +-
crypto/aes/asm/aes-s390x.pl | 16 +-
crypto/aes/asm/aes-x86_64.pl | 4 +-
crypto/aes/asm/aesni-mb-x86_64.pl | 28 +-
crypto/aes/asm/aesni-sha1-x86_64.pl | 2 +-
crypto/aes/asm/aesni-sha256-x86_64.pl | 2 +-
crypto/aes/asm/aesni-x86.pl | 2 +-
crypto/aes/asm/aesni-x86_64.pl | 10 +-
crypto/aes/asm/aesp8-ppc.pl | 2 +-
crypto/aes/asm/aesv8-armx.pl | 6 +-
crypto/aes/asm/bsaes-armv7.pl | 2 +-
crypto/aes/asm/bsaes-x86_64.pl | 4 +-
crypto/aes/asm/vpaes-armv8.pl | 8 +-
crypto/aes/asm/vpaes-ppc.pl | 8 +-
crypto/aes/asm/vpaes-x86.pl | 8 +-
crypto/aes/asm/vpaes-x86_64.pl | 20 +-
crypto/bn/asm/armv4-gf2m.pl | 2 +-
crypto/bn/asm/armv4-mont.pl | 2 +-
crypto/bn/asm/bn-586.pl | 24 +-
crypto/bn/asm/co-586.pl | 12 +-
crypto/bn/asm/ia64-mont.pl | 4 +-
crypto/bn/asm/mips.pl | 6 +-
crypto/bn/asm/parisc-mont.pl | 4 +-
crypto/bn/asm/ppc-mont.pl | 6 +-
crypto/bn/asm/ppc.pl | 264 ++++++++---------
crypto/bn/asm/rsaz-avx2.pl | 8 +-
crypto/bn/asm/rsaz-x86_64.pl | 36 +--
crypto/bn/asm/s390x-gf2m.pl | 2 +-
crypto/bn/asm/via-mont.pl | 2 +-
crypto/bn/asm/x86-mont.pl | 2 +-
crypto/bn/asm/x86_64-mont5.pl | 10 +-
crypto/bn/bn_prime.h | 513 +++++++++++++++++-----------------
crypto/bn/bn_prime.pl | 6 +-
crypto/camellia/asm/cmll-x86.pl | 6 +-
crypto/cast/asm/cast-586.pl | 6 +-
crypto/chacha/asm/chacha-armv4.pl | 4 +-
crypto/chacha/asm/chacha-armv8.pl | 4 +-
crypto/chacha/asm/chacha-ppc.pl | 4 +-
crypto/des/asm/crypt586.pl | 4 +-
crypto/des/asm/des-586.pl | 6 +-
crypto/des/asm/desboth.pl | 2 +-
crypto/ec/asm/ecp_nistz256-armv8.pl | 2 +-
crypto/ec/asm/ecp_nistz256-sparcv9.pl | 2 +-
crypto/ec/asm/ecp_nistz256-x86.pl | 2 +-
crypto/ec/asm/ecp_nistz256-x86_64.pl | 16 +-
crypto/md5/asm/md5-586.pl | 2 +-
crypto/md5/asm/md5-sparcv9.pl | 2 +-
crypto/mips_arch.h | 2 +-
crypto/modes/asm/ghash-armv4.pl | 4 +-
crypto/modes/asm/ghash-s390x.pl | 2 +-
crypto/modes/asm/ghash-x86.pl | 6 +-
crypto/modes/asm/ghash-x86_64.pl | 8 +-
crypto/perlasm/cbc.pl | 10 +-
crypto/perlasm/ppc-xlate.pl | 2 +-
crypto/perlasm/sparcv9_modes.pl | 16 +-
crypto/perlasm/x86_64-xlate.pl | 12 +-
crypto/perlasm/x86nasm.pl | 2 +-
crypto/rc4/asm/rc4-c64xplus.pl | 2 +-
crypto/rc4/asm/rc4-md5-x86_64.pl | 4 +-
crypto/rc4/asm/rc4-parisc.pl | 4 +-
crypto/rc4/asm/rc4-x86_64.pl | 2 +-
crypto/ripemd/asm/rmd-586.pl | 12 +-
crypto/sha/asm/sha1-586.pl | 2 +-
crypto/sha/asm/sha1-mb-x86_64.pl | 4 +-
crypto/sha/asm/sha1-sparcv9.pl | 2 +-
crypto/sha/asm/sha1-sparcv9a.pl | 2 +-
crypto/sha/asm/sha1-x86_64.pl | 2 +-
crypto/sha/asm/sha256-586.pl | 6 +-
crypto/sha/asm/sha256-mb-x86_64.pl | 2 +-
crypto/sha/asm/sha512-586.pl | 2 +-
crypto/sha/asm/sha512-armv8.pl | 2 +-
crypto/sha/asm/sha512-parisc.pl | 4 +-
crypto/sha/asm/sha512-s390x.pl | 2 +-
crypto/sha/asm/sha512-sparcv9.pl | 4 +-
crypto/sha/asm/sha512-x86_64.pl | 2 +-
crypto/ts/ts_rsp_verify.c | 2 +-
crypto/whrlpool/asm/wp-mmx.pl | 4 +-
crypto/x509v3/v3_enum.c | 2 +-
crypto/x509v3/v3_skey.c | 2 +-
crypto/x86cpuid.pl | 2 +-
engines/asm/e_padlock-x86_64.pl | 2 +-
include/openssl/x509.h | 2 +-
ssl/packet.c | 2 +-
ssl/packet_locl.h | 2 +-
test/pkits-test.pl | 2 +-
test/recipes/tconversion.pl | 2 +-
test/wpackettest.c | 2 +-
util/ck_errf.pl | 2 +-
util/copy.pl | 4 +-
util/fipslink.pl | 2 +-
util/mkdef.pl | 8 +-
util/mkerr.pl | 16 +-
util/su-filter.pl | 2 +-
97 files changed, 649 insertions(+), 650 deletions(-)
diff --git a/apps/cms.c b/apps/cms.c
index 133dc02..21f0961 100644
--- a/apps/cms.c
+++ b/apps/cms.c
@@ -146,7 +146,7 @@ OPTIONS cms_options[] = {
"Do not load certificates from the default certificates directory"},
{"content", OPT_CONTENT, '<',
"Supply or override content for detached signature"},
- {"print", OPT_PRINT, '-',
+ {"print", OPT_PRINT, '-',
"For the -cmsout operation print out all fields of the CMS structure"},
{"secretkey", OPT_SECRETKEY, 's'},
{"secretkeyid", OPT_SECRETKEYID, 's'},
diff --git a/apps/smime.c b/apps/smime.c
index 1f4091f..0c8660f 100644
--- a/apps/smime.c
+++ b/apps/smime.c
@@ -89,7 +89,7 @@ OPTIONS smime_options[] = {
{"no-CApath", OPT_NOCAPATH, '-',
"Do not load certificates from the default certificates directory"},
{"resign", OPT_RESIGN, '-', "Resign a signed message"},
- {"nochain", OPT_NOCHAIN, '-',
+ {"nochain", OPT_NOCHAIN, '-',
"set PKCS7_NOCHAIN so certificates contained in the message are not used as untrusted CAs" },
{"nosmimecap", OPT_NOSMIMECAP, '-', "Omit the SMIMECapabilities attribute"},
{"stream", OPT_STREAM, '-', "Enable CMS streaming" },
diff --git a/apps/speed.c b/apps/speed.c
index e6bdc5d..e9dc8a9 100644
--- a/apps/speed.c
+++ b/apps/speed.c
@@ -1187,8 +1187,8 @@ static int run_benchmark(int async_jobs,
continue;
#endif
- ret = ASYNC_start_job(&loopargs[i].inprogress_job,
- loopargs[i].wait_ctx, &job_op_count, loop_function,
+ ret = ASYNC_start_job(&loopargs[i].inprogress_job,
+ loopargs[i].wait_ctx, &job_op_count, loop_function,
(void *)(loopargs + i), sizeof(loopargs_t));
switch (ret) {
case ASYNC_PAUSE:
diff --git a/crypto/aes/asm/aes-586.pl b/crypto/aes/asm/aes-586.pl
index 1ba3565..61bdce8 100755
--- a/crypto/aes/asm/aes-586.pl
+++ b/crypto/aes/asm/aes-586.pl
@@ -123,7 +123,7 @@
# words every cache-line is *guaranteed* to be accessed within ~50
# cycles window. Why just SSE? Because it's needed on hyper-threading
# CPU! Which is also why it's prefetched with 64 byte stride. Best
-# part is that it has no negative effect on performance:-)
+# part is that it has no negative effect on performance:-)
#
# Version 4.3 implements switch between compact and non-compact block
# functions in AES_cbc_encrypt depending on how much data was asked
@@ -585,7 +585,7 @@ sub enctransform()
# +--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+
# | mm4 | mm0 |
# +--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+
-# | s3 | s2 | s1 | s0 |
+# | s3 | s2 | s1 | s0 |
# +--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+
# |15|14|13|12|11|10| 9| 8| 7| 6| 5| 4| 3| 2| 1| 0|
# +--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+
@@ -805,7 +805,7 @@ sub encstep()
if ($i==3) { $tmp=$s[3]; &mov ($s[2],$__s1); }##%ecx
elsif($i==2){ &movz ($tmp,&HB($s[3])); }#%ebx[2]
- else { &mov ($tmp,$s[3]);
+ else { &mov ($tmp,$s[3]);
&shr ($tmp,24) }
&xor ($out,&DWP(1,$te,$tmp,8));
if ($i<2) { &mov (&DWP(4+4*$i,"esp"),$out); }
@@ -1558,7 +1558,7 @@ sub sse_deccompact()
&pxor ("mm1","mm3"); &pxor ("mm5","mm7"); # tp4
&pshufw ("mm3","mm1",0xb1); &pshufw ("mm7","mm5",0xb1);
&pxor ("mm0","mm1"); &pxor ("mm4","mm5"); # ^= tp4
- &pxor ("mm0","mm3"); &pxor ("mm4","mm7"); # ^= ROTATE(tp4,16)
+ &pxor ("mm0","mm3"); &pxor ("mm4","mm7"); # ^= ROTATE(tp4,16)
&pxor ("mm3","mm3"); &pxor ("mm7","mm7");
&pcmpgtb("mm3","mm1"); &pcmpgtb("mm7","mm5");
@@ -2028,7 +2028,7 @@ sub declast()
{
# stack frame layout
# -4(%esp) # return address 0(%esp)
-# 0(%esp) # s0 backing store 4(%esp)
+# 0(%esp) # s0 backing store 4(%esp)
# 4(%esp) # s1 backing store 8(%esp)
# 8(%esp) # s2 backing store 12(%esp)
# 12(%esp) # s3 backing store 16(%esp)
@@ -2738,7 +2738,7 @@ sub enckey()
&mov (&DWP(80,"edi"),10); # setup number of rounds
&xor ("eax","eax");
&jmp (&label("exit"));
-
+
&set_label("12rounds");
&mov ("eax",&DWP(0,"esi")); # copy first 6 dwords
&mov ("ebx",&DWP(4,"esi"));
diff --git a/crypto/aes/asm/aes-ppc.pl b/crypto/aes/asm/aes-ppc.pl
index 1558d8e..184c28a 100644
--- a/crypto/aes/asm/aes-ppc.pl
+++ b/crypto/aes/asm/aes-ppc.pl
@@ -1433,10 +1433,10 @@ $code.=<<___;
xor $s1,$s1,$acc05
xor $s2,$s2,$acc06
xor $s3,$s3,$acc07
- xor $s0,$s0,$acc08 # ^= ROTATE(r8,8)
- xor $s1,$s1,$acc09
- xor $s2,$s2,$acc10
- xor $s3,$s3,$acc11
+ xor $s0,$s0,$acc08 # ^= ROTATE(r8,8)
+ xor $s1,$s1,$acc09
+ xor $s2,$s2,$acc10
+ xor $s3,$s3,$acc11
b Ldec_compact_loop
.align 4
diff --git a/crypto/aes/asm/aes-s390x.pl b/crypto/aes/asm/aes-s390x.pl
index a93d601..9c17f0e 100644
--- a/crypto/aes/asm/aes-s390x.pl
+++ b/crypto/aes/asm/aes-s390x.pl
@@ -404,7 +404,7 @@ _s390x_AES_encrypt:
or $s1,$t1
or $t2,$i2
or $t3,$i3
-
+
srlg $i1,$s2,`8-3` # i0
srlg $i2,$s2,`16-3` # i1
nr $i1,$mask
@@ -457,7 +457,7 @@ _s390x_AES_encrypt:
x $s2,24($key)
x $s3,28($key)
- br $ra
+ br $ra
.size _s390x_AES_encrypt,.-_s390x_AES_encrypt
___
@@ -779,7 +779,7 @@ _s390x_AES_decrypt:
x $s2,24($key)
x $s3,28($key)
- br $ra
+ br $ra
.size _s390x_AES_decrypt,.-_s390x_AES_decrypt
___
@@ -1297,7 +1297,7 @@ $code.=<<___;
.Lcbc_enc_done:
l${g} $ivp,6*$SIZE_T($sp)
st $s0,0($ivp)
- st $s1,4($ivp)
+ st $s1,4($ivp)
st $s2,8($ivp)
st $s3,12($ivp)
@@ -1635,7 +1635,7 @@ $code.=<<___ if(1);
llgc $len,2*$SIZE_T-1($sp)
nill $len,0x0f # $len%=16
br $ra
-
+
.align 16
.Lxts_km_vanilla:
___
@@ -1862,7 +1862,7 @@ $code.=<<___;
xgr $s1,%r1
lrvgr $s1,$s1 # flip byte order
lrvgr $s3,$s3
- srlg $s0,$s1,32 # smash the tweak to 4x32-bits
+ srlg $s0,$s1,32 # smash the tweak to 4x32-bits
stg $s1,$tweak+0($sp) # save the tweak
llgfr $s1,$s1
srlg $s2,$s3,32
@@ -1913,7 +1913,7 @@ $code.=<<___;
xgr $s1,%r1
lrvgr $s1,$s1 # flip byte order
lrvgr $s3,$s3
- srlg $s0,$s1,32 # smash the tweak to 4x32-bits
+ srlg $s0,$s1,32 # smash the tweak to 4x32-bits
stg $s1,$tweak+0($sp) # save the tweak
llgfr $s1,$s1
srlg $s2,$s3,32
@@ -2105,7 +2105,7 @@ $code.=<<___;
xgr $s1,%r1
lrvgr $s1,$s1 # flip byte order
lrvgr $s3,$s3
- srlg $s0,$s1,32 # smash the tweak to 4x32-bits
+ srlg $s0,$s1,32 # smash the tweak to 4x32-bits
stg $s1,$tweak+0($sp) # save the tweak
llgfr $s1,$s1
srlg $s2,$s3,32
diff --git a/crypto/aes/asm/aes-x86_64.pl b/crypto/aes/asm/aes-x86_64.pl
index ce4ca30..ae7fde2 100755
--- a/crypto/aes/asm/aes-x86_64.pl
+++ b/crypto/aes/asm/aes-x86_64.pl
@@ -1298,7 +1298,7 @@ $code.=<<___;
AES_set_encrypt_key:
push %rbx
push %rbp
- push %r12 # redundant, but allows to share
+ push %r12 # redundant, but allows to share
push %r13 # exception handler...
push %r14
push %r15
@@ -1424,7 +1424,7 @@ $code.=<<___;
xor %rax,%rax
jmp .Lexit
-.L14rounds:
+.L14rounds:
mov 0(%rsi),%rax # copy first 8 dwords
mov 8(%rsi),%rbx
mov 16(%rsi),%rcx
diff --git a/crypto/aes/asm/aesni-mb-x86_64.pl b/crypto/aes/asm/aesni-mb-x86_64.pl
index aa2735e..fcef7c6 100644
--- a/crypto/aes/asm/aesni-mb-x86_64.pl
+++ b/crypto/aes/asm/aesni-mb-x86_64.pl
@@ -134,7 +134,7 @@ $code.=<<___ if ($win64);
movaps %xmm10,0x40(%rsp)
movaps %xmm11,0x50(%rsp)
movaps %xmm12,0x60(%rsp)
- movaps %xmm13,-0x68(%rax) # not used, saved to share se_handler
+ movaps %xmm13,-0x68(%rax) # not used, saved to share se_handler
movaps %xmm14,-0x58(%rax)
movaps %xmm15,-0x48(%rax)
___
@@ -308,9 +308,9 @@ $code.=<<___;
movups @out[0],-16(@outptr[0],$offset)
pxor @inp[0],@out[0]
- movups @out[1],-16(@outptr[1],$offset)
+ movups @out[1],-16(@outptr[1],$offset)
pxor @inp[1],@out[1]
- movups @out[2],-16(@outptr[2],$offset)
+ movups @out[2],-16(@outptr[2],$offset)
pxor @inp[2],@out[2]
movups @out[3],-16(@outptr[3],$offset)
pxor @inp[3],@out[3]
@@ -393,7 +393,7 @@ $code.=<<___ if ($win64);
movaps %xmm10,0x40(%rsp)
movaps %xmm11,0x50(%rsp)
movaps %xmm12,0x60(%rsp)
- movaps %xmm13,-0x68(%rax) # not used, saved to share se_handler
+ movaps %xmm13,-0x68(%rax) # not used, saved to share se_handler
movaps %xmm14,-0x58(%rax)
movaps %xmm15,-0x48(%rax)
___
@@ -563,10 +563,10 @@ $code.=<<___;
movups @out[0],-16(@outptr[0],$offset)
movdqu (@inptr[0],$offset),@out[0]
- movups @out[1],-16(@outptr[1],$offset)
+ movups @out[1],-16(@outptr[1],$offset)
movdqu (@inptr[1],$offset),@out[1]
pxor $zero,@out[0]
- movups @out[2],-16(@outptr[2],$offset)
+ movups @out[2],-16(@outptr[2],$offset)
movdqu (@inptr[2],$offset),@out[2]
pxor $zero,@out[1]
movups @out[3],-16(@outptr[3],$offset)
@@ -835,10 +835,10 @@ $code.=<<___;
vmovups @out[0],-16(@ptr[0]) # write output
sub $offset,@ptr[0] # switch to input
vpxor 0x00($offload),@out[0],@out[0]
- vmovups @out[1],-16(@ptr[1])
+ vmovups @out[1],-16(@ptr[1])
sub `64+1*8`(%rsp),@ptr[1]
vpxor 0x10($offload),@out[1],@out[1]
- vmovups @out[2],-16(@ptr[2])
+ vmovups @out[2],-16(@ptr[2])
sub `64+2*8`(%rsp),@ptr[2]
vpxor 0x20($offload),@out[2],@out[2]
vmovups @out[3],-16(@ptr[3])
@@ -847,10 +847,10 @@ $code.=<<___;
vmovups @out[4],-16(@ptr[4])
sub `64+4*8`(%rsp),@ptr[4]
vpxor @inp[0],@out[4],@out[4]
- vmovups @out[5],-16(@ptr[5])
+ vmovups @out[5],-16(@ptr[5])
sub `64+5*8`(%rsp),@ptr[5]
vpxor @inp[1],@out[5],@out[5]
- vmovups @out[6],-16(@ptr[6])
+ vmovups @out[6],-16(@ptr[6])
sub `64+6*8`(%rsp),@ptr[6]
vpxor @inp[2],@out[6],@out[6]
vmovups @out[7],-16(@ptr[7])
@@ -1128,12 +1128,12 @@ $code.=<<___;
sub $offset,@ptr[0] # switch to input
vmovdqu 128+0(%rsp),@out[0]
vpxor 0x70($offload),@out[7],@out[7]
- vmovups @out[1],-16(@ptr[1])
+ vmovups @out[1],-16(@ptr[1])
sub `64+1*8`(%rsp),@ptr[1]
vmovdqu @out[0],0x00($offload)
vpxor $zero,@out[0],@out[0]
vmovdqu 128+16(%rsp),@out[1]
- vmovups @out[2],-16(@ptr[2])
+ vmovups @out[2],-16(@ptr[2])
sub `64+2*8`(%rsp),@ptr[2]
vmovdqu @out[1],0x10($offload)
vpxor $zero,@out[1],@out[1]
@@ -1149,11 +1149,11 @@ $code.=<<___;
vpxor $zero,@out[3],@out[3]
vmovdqu @inp[0],0x40($offload)
vpxor @inp[0],$zero,@out[4]
- vmovups @out[5],-16(@ptr[5])
+ vmovups @out[5],-16(@ptr[5])
sub `64+5*8`(%rsp),@ptr[5]
vmovdqu @inp[1],0x50($offload)
vpxor @inp[1],$zero,@out[5]
- vmovups @out[6],-16(@ptr[6])
+ vmovups @out[6],-16(@ptr[6])
sub `64+6*8`(%rsp),@ptr[6]
vmovdqu @inp[2],0x60($offload)
vpxor @inp[2],$zero,@out[6]
diff --git a/crypto/aes/asm/aesni-sha1-x86_64.pl b/crypto/aes/asm/aesni-sha1-x86_64.pl
index 4b979a7..a50795e 100644
--- a/crypto/aes/asm/aesni-sha1-x86_64.pl
+++ b/crypto/aes/asm/aesni-sha1-x86_64.pl
@@ -793,7 +793,7 @@ sub body_00_19_dec () { # ((c^d)&b)^d
sub body_20_39_dec () { # b^d^c
# on entry @T[0]=b^d
return &body_40_59_dec() if ($rx==39);
-
+
my @r=@body_20_39;
unshift (@r,@aes256_dec[$rx]) if (@aes256_dec[$rx]);
diff --git a/crypto/aes/asm/aesni-sha256-x86_64.pl b/crypto/aes/asm/aesni-sha256-x86_64.pl
index a5fde2e..ba4964a 100644
--- a/crypto/aes/asm/aesni-sha256-x86_64.pl
+++ b/crypto/aes/asm/aesni-sha256-x86_64.pl
@@ -884,7 +884,7 @@ if ($avx>1) {{
######################################################################
# AVX2+BMI code path
#
-my $a5=$SZ==4?"%esi":"%rsi"; # zap $inp
+my $a5=$SZ==4?"%esi":"%rsi"; # zap $inp
my $PUSH8=8*2*$SZ;
use integer;
diff --git a/crypto/aes/asm/aesni-x86.pl b/crypto/aes/asm/aesni-x86.pl
index ed1a47c..c34d9bf 100644
--- a/crypto/aes/asm/aesni-x86.pl
+++ b/crypto/aes/asm/aesni-x86.pl
@@ -1051,7 +1051,7 @@ if ($PREFIX eq "aesni") {
&set_label("ctr32_one_shortcut",16);
&movups ($inout0,&QWP(0,$rounds_)); # load ivec
&mov ($rounds,&DWP(240,$key));
-
+
&set_label("ctr32_one");
if ($inline)
{ &aesni_inline_generate1("enc"); }
diff --git a/crypto/aes/asm/aesni-x86_64.pl b/crypto/aes/asm/aesni-x86_64.pl
index 25dd120..397e82f 100644
--- a/crypto/aes/asm/aesni-x86_64.pl
+++ b/crypto/aes/asm/aesni-x86_64.pl
@@ -34,7 +34,7 @@
# ECB 4.25/4.25 1.38/1.38 1.28/1.28 1.26/1.26 1.26/1.26
# CTR 5.42/5.42 1.92/1.92 1.44/1.44 1.28/1.28 1.26/1.26
# CBC 4.38/4.43 4.15/1.43 4.07/1.32 4.07/1.29 4.06/1.28
-# CCM 5.66/9.42 4.42/5.41 4.16/4.40 4.09/4.15 4.06/4.07
+# CCM 5.66/9.42 4.42/5.41 4.16/4.40 4.09/4.15 4.06/4.07
# OFB 5.42/5.42 4.64/4.64 4.44/4.44 4.39/4.39 4.38/4.38
# CFB 5.73/5.85 5.56/5.62 5.48/5.56 5.47/5.55 5.47/5.55
#
@@ -118,7 +118,7 @@
# performance is achieved by interleaving instructions working on
# independent blocks. In which case asymptotic limit for such modes
# can be obtained by dividing above mentioned numbers by AES
-# instructions' interleave factor. Westmere can execute at most 3
+# instructions' interleave factor. Westmere can execute at most 3
# instructions at a time, meaning that optimal interleave factor is 3,
# and that's where the "magic" number of 1.25 come from. "Optimal
# interleave factor" means that increase of interleave factor does
@@ -312,7 +312,7 @@ ___
# on 2x subroutine on Atom Silvermont account. For processors that
# can schedule aes[enc|dec] every cycle optimal interleave factor
# equals to corresponding instructions latency. 8x is optimal for
-# * Bridge and "super-optimal" for other Intel CPUs...
+# * Bridge and "super-optimal" for other Intel CPUs...
sub aesni_generate2 {
my $dir=shift;
@@ -1271,7 +1271,7 @@ $code.=<<___;
lea 7($ctr),%r9
mov %r10d,0x60+12(%rsp)
bswap %r9d
- mov OPENSSL_ia32cap_P+4(%rip),%r10d
+ mov OPENSSL_ia32cap_P+4(%rip),%r10d
xor $key0,%r9d
and \$`1<<26|1<<22`,%r10d # isolate XSAVE+MOVBE
mov %r9d,0x70+12(%rsp)
@@ -1551,7 +1551,7 @@ $code.=<<___;
.Lctr32_tail:
# note that at this point $inout0..5 are populated with
- # counter values xor-ed with 0-round key
+ # counter values xor-ed with 0-round key
lea 16($key),$key
cmp \$4,$len
jb .Lctr32_loop3
diff --git a/crypto/aes/asm/aesp8-ppc.pl b/crypto/aes/asm/aesp8-ppc.pl
index 3fdf1ec..0497953 100755
--- a/crypto/aes/asm/aesp8-ppc.pl
+++ b/crypto/aes/asm/aesp8-ppc.pl
@@ -3773,7 +3773,7 @@ foreach(split("\n",$code)) {
if ($flavour =~ /le$/o) {
SWITCH: for($conv) {
/\?inv/ && do { @bytes=map($_^0xf, at bytes); last; };
- /\?rev/ && do { @bytes=reverse(@bytes); last; };
+ /\?rev/ && do { @bytes=reverse(@bytes); last; };
}
}
diff --git a/crypto/aes/asm/aesv8-armx.pl b/crypto/aes/asm/aesv8-armx.pl
index 9246dbb..954c041 100755
--- a/crypto/aes/asm/aesv8-armx.pl
+++ b/crypto/aes/asm/aesv8-armx.pl
@@ -961,21 +961,21 @@ if ($flavour =~ /64/) { ######## 64-bit code
$arg =~ m/q([0-9]+),\s*\{q([0-9]+)\},\s*q([0-9]+)/o &&
sprintf "vtbl.8 d%d,{q%d},d%d\n\t".
- "vtbl.8 d%d,{q%d},d%d", 2*$1,$2,2*$3, 2*$1+1,$2,2*$3+1;
+ "vtbl.8 d%d,{q%d},d%d", 2*$1,$2,2*$3, 2*$1+1,$2,2*$3+1;
}
sub unvdup32 {
my $arg=shift;
$arg =~ m/q([0-9]+),\s*q([0-9]+)\[([0-3])\]/o &&
- sprintf "vdup.32 q%d,d%d[%d]",$1,2*$2+($3>>1),$3&1;
+ sprintf "vdup.32 q%d,d%d[%d]",$1,2*$2+($3>>1),$3&1;
}
sub unvmov32 {
my $arg=shift;
$arg =~ m/q([0-9]+)\[([0-3])\],(.*)/o &&
- sprintf "vmov.32 d%d[%d],%s",2*$1+($2>>1),$2&1,$3;
+ sprintf "vmov.32 d%d[%d],%s",2*$1+($2>>1),$2&1,$3;
}
foreach(split("\n",$code)) {
diff --git a/crypto/aes/asm/bsaes-armv7.pl b/crypto/aes/asm/bsaes-armv7.pl
index 12091ef..3329588 100644
--- a/crypto/aes/asm/bsaes-armv7.pl
+++ b/crypto/aes/asm/bsaes-armv7.pl
@@ -91,7 +91,7 @@ my @s=@_[12..15];
sub InBasisChange {
# input in lsb > [b0, b1, b2, b3, b4, b5, b6, b7] < msb
-# output in lsb > [b6, b5, b0, b3, b7, b1, b4, b2] < msb
+# output in lsb > [b6, b5, b0, b3, b7, b1, b4, b2] < msb
my @b=@_[0..7];
$code.=<<___;
veor @b[2], @b[2], @b[1]
diff --git a/crypto/aes/asm/bsaes-x86_64.pl b/crypto/aes/asm/bsaes-x86_64.pl
index 6b14a51..d257a3e 100644
--- a/crypto/aes/asm/bsaes-x86_64.pl
+++ b/crypto/aes/asm/bsaes-x86_64.pl
@@ -129,7 +129,7 @@ my @s=@_[12..15];
sub InBasisChange {
# input in lsb > [b0, b1, b2, b3, b4, b5, b6, b7] < msb
-# output in lsb > [b6, b5, b0, b3, b7, b1, b4, b2] < msb
+# output in lsb > [b6, b5, b0, b3, b7, b1, b4, b2] < msb
my @b=@_[0..7];
$code.=<<___;
pxor @b[6], @b[5]
@@ -379,7 +379,7 @@ $code.=<<___;
pxor @s[0], @t[3]
pxor @s[1], @t[2]
pxor @s[2], @t[1]
- pxor @s[3], @t[0]
+ pxor @s[3], @t[0]
#Inv_GF16 \t0, \t1, \t2, \t3, \s0, \s1, \s2, \s3
diff --git a/crypto/aes/asm/vpaes-armv8.pl b/crypto/aes/asm/vpaes-armv8.pl
index d6b5f56..2e704a2 100755
--- a/crypto/aes/asm/vpaes-armv8.pl
+++ b/crypto/aes/asm/vpaes-armv8.pl
@@ -769,7 +769,7 @@ _vpaes_schedule_core:
ld1 {v0.16b}, [$inp] // vmovdqu 16(%rdi),%xmm0 # load key part 2 (unaligned)
bl _vpaes_schedule_transform // input transform
mov $inp, #7 // mov \$7, %esi
-
+
.Loop_schedule_256:
sub $inp, $inp, #1 // dec %esi
bl _vpaes_schedule_mangle // output low result
@@ -778,7 +778,7 @@ _vpaes_schedule_core:
// high round
bl _vpaes_schedule_round
cbz $inp, .Lschedule_mangle_last
- bl _vpaes_schedule_mangle
+ bl _vpaes_schedule_mangle
// low round. swap xmm7 and xmm6
dup v0.4s, v0.s[3] // vpshufd \$0xFF, %xmm0, %xmm0
@@ -787,7 +787,7 @@ _vpaes_schedule_core:
mov v7.16b, v6.16b // vmovdqa %xmm6, %xmm7
bl _vpaes_schedule_low_round
mov v7.16b, v5.16b // vmovdqa %xmm5, %xmm7
-
+
b .Loop_schedule_256
##
@@ -814,7 +814,7 @@ _vpaes_schedule_core:
.Lschedule_mangle_last_dec:
ld1 {v20.2d-v21.2d}, [x11] // reload constants
- sub $out, $out, #16 // add \$-16, %rdx
+ sub $out, $out, #16 // add \$-16, %rdx
eor v0.16b, v0.16b, v16.16b // vpxor .Lk_s63(%rip), %xmm0, %xmm0
bl _vpaes_schedule_transform // output transform
st1 {v0.2d}, [$out] // vmovdqu %xmm0, (%rdx) # save last key
diff --git a/crypto/aes/asm/vpaes-ppc.pl b/crypto/aes/asm/vpaes-ppc.pl
index 7f81209..3563f6d 100644
--- a/crypto/aes/asm/vpaes-ppc.pl
+++ b/crypto/aes/asm/vpaes-ppc.pl
@@ -1074,7 +1074,7 @@ Loop_schedule_256:
# high round
bl _vpaes_schedule_round
bdz Lschedule_mangle_last # dec %esi
- bl _vpaes_schedule_mangle
+ bl _vpaes_schedule_mangle
# low round. swap xmm7 and xmm6
?vspltw v0, v0, 3 # vpshufd \$0xFF, %xmm0, %xmm0
@@ -1082,7 +1082,7 @@ Loop_schedule_256:
vmr v7, v6 # vmovdqa %xmm6, %xmm7
bl _vpaes_schedule_low_round
vmr v7, v5 # vmovdqa %xmm5, %xmm7
-
+
b Loop_schedule_256
##
## .aes_schedule_mangle_last
@@ -1130,7 +1130,7 @@ Lschedule_mangle_last:
Lschedule_mangle_last_dec:
lvx $iptlo, r11, r12 # reload $ipt
lvx $ipthi, r9, r12
- addi $out, $out, -16 # add \$-16, %rdx
+ addi $out, $out, -16 # add \$-16, %rdx
vxor v0, v0, v26 # vpxor .Lk_s63(%rip), %xmm0, %xmm0
bl _vpaes_schedule_transform # output transform
@@ -1565,7 +1565,7 @@ foreach (split("\n",$code)) {
if ($flavour =~ /le$/o) {
SWITCH: for($conv) {
/\?inv/ && do { @bytes=map($_^0xf, at bytes); last; };
- /\?rev/ && do { @bytes=reverse(@bytes); last; };
+ /\?rev/ && do { @bytes=reverse(@bytes); last; };
}
}
diff --git a/crypto/aes/asm/vpaes-x86.pl b/crypto/aes/asm/vpaes-x86.pl
index 47615c0..d9157fa 100644
--- a/crypto/aes/asm/vpaes-x86.pl
+++ b/crypto/aes/asm/vpaes-x86.pl
@@ -445,7 +445,7 @@ $k_dsbo=0x2c0; # decryption sbox final output
##
&set_label("schedule_192",16);
&movdqu ("xmm0",&QWP(8,$inp)); # load key part 2 (very unaligned)
- &call ("_vpaes_schedule_transform"); # input transform
+ &call ("_vpaes_schedule_transform"); # input transform
&movdqa ("xmm6","xmm0"); # save short part
&pxor ("xmm4","xmm4"); # clear 4
&movhlps("xmm6","xmm4"); # clobber low side with zeros
@@ -476,7 +476,7 @@ $k_dsbo=0x2c0; # decryption sbox final output
##
&set_label("schedule_256",16);
&movdqu ("xmm0",&QWP(16,$inp)); # load key part 2 (unaligned)
- &call ("_vpaes_schedule_transform"); # input transform
+ &call ("_vpaes_schedule_transform"); # input transform
&mov ($round,7);
&set_label("loop_schedule_256");
@@ -487,7 +487,7 @@ $k_dsbo=0x2c0; # decryption sbox final output
&call ("_vpaes_schedule_round");
&dec ($round);
&jz (&label("schedule_mangle_last"));
- &call ("_vpaes_schedule_mangle");
+ &call ("_vpaes_schedule_mangle");
# low round. swap xmm7 and xmm6
&pshufd ("xmm0","xmm0",0xFF);
@@ -610,7 +610,7 @@ $k_dsbo=0x2c0; # decryption sbox final output
# subbyte
&movdqa ("xmm4",&QWP($k_s0F,$const));
&movdqa ("xmm5",&QWP($k_inv,$const)); # 4 : 1/j
- &movdqa ("xmm1","xmm4");
+ &movdqa ("xmm1","xmm4");
&pandn ("xmm1","xmm0");
&psrld ("xmm1",4); # 1 = i
&pand ("xmm0","xmm4"); # 0 = k
diff --git a/crypto/aes/asm/vpaes-x86_64.pl b/crypto/aes/asm/vpaes-x86_64.pl
index 265b6aa..dd1f13a 100644
--- a/crypto/aes/asm/vpaes-x86_64.pl
+++ b/crypto/aes/asm/vpaes-x86_64.pl
@@ -171,7 +171,7 @@ _vpaes_encrypt_core:
pshufb %xmm1, %xmm0
ret
.size _vpaes_encrypt_core,.-_vpaes_encrypt_core
-
+
##
## Decryption core
##
@@ -332,7 +332,7 @@ _vpaes_schedule_core:
##
.Lschedule_128:
mov \$10, %esi
-
+
.Loop_schedule_128:
call _vpaes_schedule_round
dec %rsi
@@ -366,7 +366,7 @@ _vpaes_schedule_core:
.Loop_schedule_192:
call _vpaes_schedule_round
- palignr \$8,%xmm6,%xmm0
+ palignr \$8,%xmm6,%xmm0
call _vpaes_schedule_mangle # save key n
call _vpaes_schedule_192_smear
call _vpaes_schedule_mangle # save key n+1
@@ -392,7 +392,7 @@ _vpaes_schedule_core:
movdqu 16(%rdi),%xmm0 # load key part 2 (unaligned)
call _vpaes_schedule_transform # input transform
mov \$7, %esi
-
+
.Loop_schedule_256:
call _vpaes_schedule_mangle # output low result
movdqa %xmm0, %xmm6 # save cur_lo in xmm6
@@ -401,7 +401,7 @@ _vpaes_schedule_core:
call _vpaes_schedule_round
dec %rsi
jz .Lschedule_mangle_last
- call _vpaes_schedule_mangle
+ call _vpaes_schedule_mangle
# low round. swap xmm7 and xmm6
pshufd \$0xFF, %xmm0, %xmm0
@@ -409,10 +409,10 @@ _vpaes_schedule_core:
movdqa %xmm6, %xmm7
call _vpaes_schedule_low_round
movdqa %xmm5, %xmm7
-
+
jmp .Loop_schedule_256
-
+
##
## .aes_schedule_mangle_last
##
@@ -511,9 +511,9 @@ _vpaes_schedule_round:
# rotate
pshufd \$0xFF, %xmm0, %xmm0
palignr \$1, %xmm0, %xmm0
-
+
# fall through...
-
+
# low round: same as high round, but no rotation and no rcon.
_vpaes_schedule_low_round:
# smear xmm7
@@ -552,7 +552,7 @@ _vpaes_schedule_low_round:
pxor %xmm4, %xmm0 # 0 = sbox output
# add in smeared stuff
- pxor %xmm7, %xmm0
+ pxor %xmm7, %xmm0
movdqa %xmm0, %xmm7
ret
.size _vpaes_schedule_round,.-_vpaes_schedule_round
diff --git a/crypto/bn/asm/armv4-gf2m.pl b/crypto/bn/asm/armv4-gf2m.pl
index 0bb5433..7a0cdb2 100644
--- a/crypto/bn/asm/armv4-gf2m.pl
+++ b/crypto/bn/asm/armv4-gf2m.pl
@@ -36,7 +36,7 @@
#
# Câmara, D.; Gouvêa, C. P. L.; López, J. & Dahab, R.: Fast Software
# Polynomial Multiplication on ARM Processors using the NEON Engine.
-#
+#
# http://conradoplg.cryptoland.net/files/2010/12/mocrysen13.pdf
$flavour = shift;
diff --git a/crypto/bn/asm/armv4-mont.pl b/crypto/bn/asm/armv4-mont.pl
index 0dc4fe9..75a36f6 100644
--- a/crypto/bn/asm/armv4-mont.pl
+++ b/crypto/bn/asm/armv4-mont.pl
@@ -23,7 +23,7 @@
# [depending on key length, less for longer keys] on ARM920T, and
# +115-80% on Intel IXP425. This is compared to pre-bn_mul_mont code
# base and compiler generated code with in-lined umull and even umlal
-# instructions. The latter means that this code didn't really have an
+# instructions. The latter means that this code didn't really have an
# "advantage" of utilizing some "secret" instruction.
#
# The code is interoperable with Thumb ISA and is rather compact, less
diff --git a/crypto/bn/asm/bn-586.pl b/crypto/bn/asm/bn-586.pl
index 1ca1bbf..1350bcd 100644
--- a/crypto/bn/asm/bn-586.pl
+++ b/crypto/bn/asm/bn-586.pl
@@ -54,7 +54,7 @@ sub bn_mul_add_words
&movd("mm0",&wparam(3)); # mm0 = w
&pxor("mm1","mm1"); # mm1 = carry_in
&jmp(&label("maw_sse2_entry"));
-
+
&set_label("maw_sse2_unrolled",16);
&movd("mm3",&DWP(0,$r,"",0)); # mm3 = r[0]
&paddq("mm1","mm3"); # mm1 = carry_in + r[0]
@@ -675,20 +675,20 @@ sub bn_sub_part_words
&adc($c,0);
&mov(&DWP($i*4,$r,"",0),$tmp1); # *r
}
-
+
&comment("");
&add($b,32);
&add($r,32);
&sub($num,8);
&jnz(&label("pw_neg_loop"));
-
+
&set_label("pw_neg_finish",0);
&mov($tmp2,&wparam(4)); # get dl
&mov($num,0);
&sub($num,$tmp2);
&and($num,7);
&jz(&label("pw_end"));
-
+
for ($i=0; $i<7; $i++)
{
&comment("dl<0 Tail Round $i");
@@ -705,9 +705,9 @@ sub bn_sub_part_words
}
&jmp(&label("pw_end"));
-
+
&set_label("pw_pos",0);
-
+
&and($num,0xfffffff8); # num / 8
&jz(&label("pw_pos_finish"));
@@ -722,18 +722,18 @@ sub bn_sub_part_words
&mov(&DWP($i*4,$r,"",0),$tmp1); # *r
&jnc(&label("pw_nc".$i));
}
-
+
&comment("");
&add($a,32);
&add($r,32);
&sub($num,8);
&jnz(&label("pw_pos_loop"));
-
+
&set_label("pw_pos_finish",0);
&mov($num,&wparam(4)); # get dl
&and($num,7);
&jz(&label("pw_end"));
-
+
for ($i=0; $i<7; $i++)
{
&comment("dl>0 Tail Round $i");
@@ -754,17 +754,17 @@ sub bn_sub_part_words
&mov(&DWP($i*4,$r,"",0),$tmp1); # *r
&set_label("pw_nc".$i,0);
}
-
+
&comment("");
&add($a,32);
&add($r,32);
&sub($num,8);
&jnz(&label("pw_nc_loop"));
-
+
&mov($num,&wparam(4)); # get dl
&and($num,7);
&jz(&label("pw_nc_end"));
-
+
for ($i=0; $i<7; $i++)
{
&mov($tmp1,&DWP($i*4,$a,"",0)); # *a
diff --git a/crypto/bn/asm/co-586.pl b/crypto/bn/asm/co-586.pl
index 60d0363..6f34c37 100644
--- a/crypto/bn/asm/co-586.pl
+++ b/crypto/bn/asm/co-586.pl
@@ -47,7 +47,7 @@ sub mul_add_c
&mov("edx",&DWP(($nb)*4,$b,"",0)) if $pos == 1; # laod next b
###
&adc($c2,0);
- # is pos > 1, it means it is the last loop
+ # is pos > 1, it means it is the last loop
&mov(&DWP($i*4,"eax","",0),$c0) if $pos > 0; # save r[];
&mov("eax",&DWP(($na)*4,$a,"",0)) if $pos == 1; # laod next a
}
@@ -76,7 +76,7 @@ sub sqr_add_c
&mov("edx",&DWP(($nb)*4,$a,"",0)) if ($pos == 1) && ($na != $nb);
###
&adc($c2,0);
- # is pos > 1, it means it is the last loop
+ # is pos > 1, it means it is the last loop
&mov(&DWP($i*4,$r,"",0),$c0) if $pos > 0; # save r[];
&mov("eax",&DWP(($na)*4,$a,"",0)) if $pos == 1; # load next b
}
@@ -127,7 +127,7 @@ sub bn_mul_comba
$c2="ebp";
$a="esi";
$b="edi";
-
+
$as=0;
$ae=0;
$bs=0;
@@ -142,9 +142,9 @@ sub bn_mul_comba
&push("ebx");
&xor($c0,$c0);
- &mov("eax",&DWP(0,$a,"",0)); # load the first word
+ &mov("eax",&DWP(0,$a,"",0)); # load the first word
&xor($c1,$c1);
- &mov("edx",&DWP(0,$b,"",0)); # load the first second
+ &mov("edx",&DWP(0,$b,"",0)); # load the first second
for ($i=0; $i<$tot; $i++)
{
@@ -152,7 +152,7 @@ sub bn_mul_comba
$bi=$bs;
$end=$be+1;
- &comment("################## Calculate word $i");
+ &comment("################## Calculate word $i");
for ($j=$bs; $j<$end; $j++)
{
diff --git a/crypto/bn/asm/ia64-mont.pl b/crypto/bn/asm/ia64-mont.pl
index 5cc5c59..233fdd4 100644
--- a/crypto/bn/asm/ia64-mont.pl
+++ b/crypto/bn/asm/ia64-mont.pl
@@ -80,7 +80,7 @@ $code=<<___;
// int bn_mul_mont (BN_ULONG *rp,const BN_ULONG *ap,
// const BN_ULONG *bp,const BN_ULONG *np,
-// const BN_ULONG *n0p,int num);
+// const BN_ULONG *n0p,int num);
.align 64
.global bn_mul_mont#
.proc bn_mul_mont#
@@ -203,7 +203,7 @@ bn_mul_mont_general:
{ .mmi; .pred.rel "mutex",p39,p41
(p39) add topbit=r0,r0
(p41) add topbit=r0,r0,1
- nop.i 0 }
+ nop.i 0 }
{ .mmi; st8 [tp_1]=n[0]
add tptr=16,sp
add tp_1=8,sp };;
diff --git a/crypto/bn/asm/mips.pl b/crypto/bn/asm/mips.pl
index 102b656..5093177 100644
--- a/crypto/bn/asm/mips.pl
+++ b/crypto/bn/asm/mips.pl
@@ -603,13 +603,13 @@ $code.=<<___;
sltu $v0,$t2,$ta2
$ST $t2,-2*$BNSZ($a0)
$ADDU $v0,$t8
-
+
$ADDU $ta3,$t3
sltu $t9,$ta3,$t3
$ADDU $t3,$ta3,$v0
sltu $v0,$t3,$ta3
$ST $t3,-$BNSZ($a0)
-
+
.set noreorder
bgtz $at,.L_bn_add_words_loop
$ADDU $v0,$t9
@@ -808,7 +808,7 @@ bn_div_3_words:
# so that we can save two arguments
# and return address in registers
# instead of stack:-)
-
+
$LD $a0,($a3)
move $ta2,$a1
bne $a0,$a2,bn_div_3_words_internal
diff --git a/crypto/bn/asm/parisc-mont.pl b/crypto/bn/asm/parisc-mont.pl
index 8aa94e8..61c3625 100644
--- a/crypto/bn/asm/parisc-mont.pl
+++ b/crypto/bn/asm/parisc-mont.pl
@@ -546,7 +546,7 @@ L\$copy
ldd $idx($np),$hi0
std,ma %r0,8($tp)
addib,<> 8,$idx,.-8 ; L\$copy
- std,ma $hi0,8($rp)
+ std,ma $hi0,8($rp)
___
if ($BN_SZ==4) { # PA-RISC 1.1 code-path
@@ -868,7 +868,7 @@ L\$copy_pa11
ldwx $idx($np),$hi0
stws,ma %r0,4($tp)
addib,<> 4,$idx,L\$copy_pa11
- stws,ma $hi0,4($rp)
+ stws,ma $hi0,4($rp)
nop ; alignment
L\$done
diff --git a/crypto/bn/asm/ppc-mont.pl b/crypto/bn/asm/ppc-mont.pl
index 5802260..7a25b1e 100644
--- a/crypto/bn/asm/ppc-mont.pl
+++ b/crypto/bn/asm/ppc-mont.pl
@@ -26,7 +26,7 @@
# So far RSA *sign* performance improvement over pre-bn_mul_mont asm
# for 64-bit application running on PPC970/G5 is:
#
-# 512-bit +65%
+# 512-bit +65%
# 1024-bit +35%
# 2048-bit +18%
# 4096-bit +4%
@@ -49,7 +49,7 @@ if ($flavour =~ /32/) {
$UMULL= "mullw"; # unsigned multiply low
$UMULH= "mulhwu"; # unsigned multiply high
$UCMP= "cmplw"; # unsigned compare
- $SHRI= "srwi"; # unsigned shift right by immediate
+ $SHRI= "srwi"; # unsigned shift right by immediate
$PUSH= $ST;
$POP= $LD;
} elsif ($flavour =~ /64/) {
@@ -69,7 +69,7 @@ if ($flavour =~ /32/) {
$UMULL= "mulld"; # unsigned multiply low
$UMULH= "mulhdu"; # unsigned multiply high
$UCMP= "cmpld"; # unsigned compare
- $SHRI= "srdi"; # unsigned shift right by immediate
+ $SHRI= "srdi"; # unsigned shift right by immediate
$PUSH= $ST;
$POP= $LD;
} else { die "nonsense $flavour"; }
diff --git a/crypto/bn/asm/ppc.pl b/crypto/bn/asm/ppc.pl
index e9262df..1a03f45 100644
--- a/crypto/bn/asm/ppc.pl
+++ b/crypto/bn/asm/ppc.pl
@@ -38,7 +38,7 @@
#rsa 2048 bits 0.3036s 0.0085s 3.3 117.1
#rsa 4096 bits 2.0040s 0.0299s 0.5 33.4
#dsa 512 bits 0.0087s 0.0106s 114.3 94.5
-#dsa 1024 bits 0.0256s 0.0313s 39.0 32.0
+#dsa 1024 bits 0.0256s 0.0313s 39.0 32.0
#
# Same bechmark with this assembler code:
#
@@ -74,7 +74,7 @@
#rsa 4096 bits 0.3700s 0.0058s 2.7 171.0
#dsa 512 bits 0.0016s 0.0020s 610.7 507.1
#dsa 1024 bits 0.0047s 0.0058s 212.5 173.2
-#
+#
# Again, performance increases by at about 75%
#
# Mac OS X, Apple G5 1.8GHz (Note this is 32 bit code)
@@ -125,7 +125,7 @@ if ($flavour =~ /32/) {
$CNTLZ= "cntlzw"; # count leading zeros
$SHL= "slw"; # shift left
$SHR= "srw"; # unsigned shift right
- $SHRI= "srwi"; # unsigned shift right by immediate
+ $SHRI= "srwi"; # unsigned shift right by immediate
$SHLI= "slwi"; # shift left by immediate
$CLRU= "clrlwi"; # clear upper bits
$INSR= "insrwi"; # insert right
@@ -149,10 +149,10 @@ if ($flavour =~ /32/) {
$CNTLZ= "cntlzd"; # count leading zeros
$SHL= "sld"; # shift left
$SHR= "srd"; # unsigned shift right
- $SHRI= "srdi"; # unsigned shift right by immediate
+ $SHRI= "srdi"; # unsigned shift right by immediate
$SHLI= "sldi"; # shift left by immediate
$CLRU= "clrldi"; # clear upper bits
- $INSR= "insrdi"; # insert right
+ $INSR= "insrdi"; # insert right
$ROTL= "rotldi"; # rotate left by immediate
$TR= "td"; # conditional trap
} else { die "nonsense $flavour"; }
@@ -189,7 +189,7 @@ $data=<<EOF;
# below.
# 12/05/03 Suresh Chari
# (with lots of help from) Andy Polyakov
-##
+##
# 1. Initial version 10/20/02 Suresh Chari
#
#
@@ -202,7 +202,7 @@ $data=<<EOF;
# be done in the build process.
#
# Hand optimized assembly code for the following routines
-#
+#
# bn_sqr_comba4
# bn_sqr_comba8
# bn_mul_comba4
@@ -225,10 +225,10 @@ $data=<<EOF;
#--------------------------------------------------------------------------
#
# Defines to be used in the assembly code.
-#
+#
#.set r0,0 # we use it as storage for value of 0
#.set SP,1 # preserved
-#.set RTOC,2 # preserved
+#.set RTOC,2 # preserved
#.set r3,3 # 1st argument/return value
#.set r4,4 # 2nd argument/volatile register
#.set r5,5 # 3rd argument/volatile register
@@ -246,7 +246,7 @@ $data=<<EOF;
# the first . i.e. for example change ".bn_sqr_comba4"
# to "bn_sqr_comba4". This should be automatically done
# in the build.
-
+
.globl .bn_sqr_comba4
.globl .bn_sqr_comba8
.globl .bn_mul_comba4
@@ -257,9 +257,9 @@ $data=<<EOF;
.globl .bn_sqr_words
.globl .bn_mul_words
.globl .bn_mul_add_words
-
+
# .text section
-
+
.machine "any"
#
@@ -278,8 +278,8 @@ $data=<<EOF;
# r3 contains r
# r4 contains a
#
-# Freely use registers r5,r6,r7,r8,r9,r10,r11 as follows:
-#
+# Freely use registers r5,r6,r7,r8,r9,r10,r11 as follows:
+#
# r5,r6 are the two BN_ULONGs being multiplied.
# r7,r8 are the results of the 32x32 giving 64 bit multiply.
# r9,r10, r11 are the equivalents of c1,c2, c3.
@@ -288,10 +288,10 @@ $data=<<EOF;
#
xor r0,r0,r0 # set r0 = 0. Used in the addze
# instructions below
-
+
#sqr_add_c(a,0,c1,c2,c3)
- $LD r5,`0*$BNSZ`(r4)
- $UMULL r9,r5,r5
+ $LD r5,`0*$BNSZ`(r4)
+ $UMULL r9,r5,r5
$UMULH r10,r5,r5 #in first iteration. No need
#to add since c1=c2=c3=0.
# Note c3(r11) is NOT set to 0
@@ -299,20 +299,20 @@ $data=<<EOF;
$ST r9,`0*$BNSZ`(r3) # r[0]=c1;
# sqr_add_c2(a,1,0,c2,c3,c1);
- $LD r6,`1*$BNSZ`(r4)
+ $LD r6,`1*$BNSZ`(r4)
$UMULL r7,r5,r6
$UMULH r8,r5,r6
-
+
addc r7,r7,r7 # compute (r7,r8)=2*(r7,r8)
adde r8,r8,r8
addze r9,r0 # catch carry if any.
- # r9= r0(=0) and carry
-
+ # r9= r0(=0) and carry
+
addc r10,r7,r10 # now add to temp result.
- addze r11,r8 # r8 added to r11 which is 0
+ addze r11,r8 # r8 added to r11 which is 0
addze r9,r9
-
- $ST r10,`1*$BNSZ`(r3) #r[1]=c2;
+
+ $ST r10,`1*$BNSZ`(r3) #r[1]=c2;
#sqr_add_c(a,1,c3,c1,c2)
$UMULL r7,r6,r6
$UMULH r8,r6,r6
@@ -323,23 +323,23 @@ $data=<<EOF;
$LD r6,`2*$BNSZ`(r4)
$UMULL r7,r5,r6
$UMULH r8,r5,r6
-
+
addc r7,r7,r7
adde r8,r8,r8
addze r10,r10
-
+
addc r11,r7,r11
adde r9,r8,r9
addze r10,r10
- $ST r11,`2*$BNSZ`(r3) #r[2]=c3
+ $ST r11,`2*$BNSZ`(r3) #r[2]=c3
#sqr_add_c2(a,3,0,c1,c2,c3);
- $LD r6,`3*$BNSZ`(r4)
+ $LD r6,`3*$BNSZ`(r4)
$UMULL r7,r5,r6
$UMULH r8,r5,r6
addc r7,r7,r7
adde r8,r8,r8
addze r11,r0
-
+
addc r9,r7,r9
adde r10,r8,r10
addze r11,r11
@@ -348,7 +348,7 @@ $data=<<EOF;
$LD r6,`2*$BNSZ`(r4)
$UMULL r7,r5,r6
$UMULH r8,r5,r6
-
+
addc r7,r7,r7
adde r8,r8,r8
addze r11,r11
@@ -363,31 +363,31 @@ $data=<<EOF;
adde r11,r8,r11
addze r9,r0
#sqr_add_c2(a,3,1,c2,c3,c1);
- $LD r6,`3*$BNSZ`(r4)
+ $LD r6,`3*$BNSZ`(r4)
$UMULL r7,r5,r6
$UMULH r8,r5,r6
addc r7,r7,r7
adde r8,r8,r8
addze r9,r9
-
+
addc r10,r7,r10
adde r11,r8,r11
addze r9,r9
$ST r10,`4*$BNSZ`(r3) #r[4]=c2
#sqr_add_c2(a,3,2,c3,c1,c2);
- $LD r5,`2*$BNSZ`(r4)
+ $LD r5,`2*$BNSZ`(r4)
$UMULL r7,r5,r6
$UMULH r8,r5,r6
addc r7,r7,r7
adde r8,r8,r8
addze r10,r0
-
+
addc r11,r7,r11
adde r9,r8,r9
addze r10,r10
$ST r11,`5*$BNSZ`(r3) #r[5] = c3
#sqr_add_c(a,3,c1,c2,c3);
- $UMULL r7,r6,r6
+ $UMULL r7,r6,r6
$UMULH r8,r6,r6
addc r9,r7,r9
adde r10,r8,r10
@@ -406,7 +406,7 @@ $data=<<EOF;
# for the gcc compiler. This should be automatically
# done in the build
#
-
+
.align 4
.bn_sqr_comba8:
#
@@ -418,15 +418,15 @@ $data=<<EOF;
# r3 contains r
# r4 contains a
#
-# Freely use registers r5,r6,r7,r8,r9,r10,r11 as follows:
-#
+# Freely use registers r5,r6,r7,r8,r9,r10,r11 as follows:
+#
# r5,r6 are the two BN_ULONGs being multiplied.
# r7,r8 are the results of the 32x32 giving 64 bit multiply.
# r9,r10, r11 are the equivalents of c1,c2, c3.
#
# Possible optimization of loading all 8 longs of a into registers
# doesn't provide any speedup
-#
+#
xor r0,r0,r0 #set r0 = 0.Used in addze
#instructions below.
@@ -439,18 +439,18 @@ $data=<<EOF;
#sqr_add_c2(a,1,0,c2,c3,c1);
$LD r6,`1*$BNSZ`(r4)
$UMULL r7,r5,r6
- $UMULH r8,r5,r6
-
+ $UMULH r8,r5,r6
+
addc r10,r7,r10 #add the two register number
adde r11,r8,r0 # (r8,r7) to the three register
addze r9,r0 # number (r9,r11,r10).NOTE:r0=0
-
+
addc r10,r7,r10 #add the two register number
adde r11,r8,r11 # (r8,r7) to the three register
addze r9,r9 # number (r9,r11,r10).
-
+
$ST r10,`1*$BNSZ`(r3) # r[1]=c2
-
+
#sqr_add_c(a,1,c3,c1,c2);
$UMULL r7,r6,r6
$UMULH r8,r6,r6
@@ -461,25 +461,25 @@ $data=<<EOF;
$LD r6,`2*$BNSZ`(r4)
$UMULL r7,r5,r6
$UMULH r8,r5,r6
-
+
addc r11,r7,r11
adde r9,r8,r9
addze r10,r10
-
+
addc r11,r7,r11
adde r9,r8,r9
addze r10,r10
-
+
$ST r11,`2*$BNSZ`(r3) #r[2]=c3
#sqr_add_c2(a,3,0,c1,c2,c3);
$LD r6,`3*$BNSZ`(r4) #r6 = a[3]. r5 is already a[0].
$UMULL r7,r5,r6
$UMULH r8,r5,r6
-
+
addc r9,r7,r9
adde r10,r8,r10
addze r11,r0
-
+
addc r9,r7,r9
adde r10,r8,r10
addze r11,r11
@@ -488,20 +488,20 @@ $data=<<EOF;
$LD r6,`2*$BNSZ`(r4)
$UMULL r7,r5,r6
$UMULH r8,r5,r6
-
+
addc r9,r7,r9
adde r10,r8,r10
addze r11,r11
-
+
addc r9,r7,r9
adde r10,r8,r10
addze r11,r11
-
+
$ST r9,`3*$BNSZ`(r3) #r[3]=c1;
#sqr_add_c(a,2,c2,c3,c1);
$UMULL r7,r6,r6
$UMULH r8,r6,r6
-
+
addc r10,r7,r10
adde r11,r8,r11
addze r9,r0
@@ -509,11 +509,11 @@ $data=<<EOF;
$LD r6,`3*$BNSZ`(r4)
$UMULL r7,r5,r6
$UMULH r8,r5,r6
-
+
addc r10,r7,r10
adde r11,r8,r11
addze r9,r9
-
+
addc r10,r7,r10
adde r11,r8,r11
addze r9,r9
@@ -522,11 +522,11 @@ $data=<<EOF;
$LD r6,`4*$BNSZ`(r4)
$UMULL r7,r5,r6
$UMULH r8,r5,r6
-
+
addc r10,r7,r10
adde r11,r8,r11
addze r9,r9
-
+
addc r10,r7,r10
adde r11,r8,r11
addze r9,r9
@@ -535,11 +535,11 @@ $data=<<EOF;
$LD r6,`5*$BNSZ`(r4)
$UMULL r7,r5,r6
$UMULH r8,r5,r6
-
+
addc r11,r7,r11
adde r9,r8,r9
addze r10,r0
-
+
addc r11,r7,r11
adde r9,r8,r9
addze r10,r10
@@ -548,11 +548,11 @@ $data=<<EOF;
$LD r6,`4*$BNSZ`(r4)
$UMULL r7,r5,r6
$UMULH r8,r5,r6
-
+
addc r11,r7,r11
adde r9,r8,r9
addze r10,r10
-
+
addc r11,r7,r11
adde r9,r8,r9
addze r10,r10
@@ -561,11 +561,11 @@ $data=<<EOF;
$LD r6,`3*$BNSZ`(r4)
$UMULL r7,r5,r6
$UMULH r8,r5,r6
-
+
addc r11,r7,r11
adde r9,r8,r9
addze r10,r10
-
+
addc r11,r7,r11
adde r9,r8,r9
addze r10,r10
@@ -580,11 +580,11 @@ $data=<<EOF;
$LD r6,`4*$BNSZ`(r4)
$UMULL r7,r5,r6
$UMULH r8,r5,r6
-
+
addc r9,r7,r9
adde r10,r8,r10
addze r11,r11
-
+
addc r9,r7,r9
adde r10,r8,r10
addze r11,r11
@@ -593,11 +593,11 @@ $data=<<EOF;
$LD r6,`5*$BNSZ`(r4)
$UMULL r7,r5,r6
$UMULH r8,r5,r6
-
+
addc r9,r7,r9
adde r10,r8,r10
addze r11,r11
-
+
addc r9,r7,r9
adde r10,r8,r10
addze r11,r11
@@ -617,7 +617,7 @@ $data=<<EOF;
$LD r6,`7*$BNSZ`(r4)
$UMULL r7,r5,r6
$UMULH r8,r5,r6
-
+
addc r10,r7,r10
adde r11,r8,r11
addze r9,r0
@@ -629,7 +629,7 @@ $data=<<EOF;
$LD r6,`6*$BNSZ`(r4)
$UMULL r7,r5,r6
$UMULH r8,r5,r6
-
+
addc r10,r7,r10
adde r11,r8,r11
addze r9,r9
@@ -652,7 +652,7 @@ $data=<<EOF;
$LD r6,`4*$BNSZ`(r4)
$UMULL r7,r5,r6
$UMULH r8,r5,r6
-
+
addc r10,r7,r10
adde r11,r8,r11
addze r9,r9
@@ -684,7 +684,7 @@ $data=<<EOF;
addc r11,r7,r11
adde r9,r8,r9
addze r10,r10
-
+
addc r11,r7,r11
adde r9,r8,r9
addze r10,r10
@@ -704,7 +704,7 @@ $data=<<EOF;
$LD r5,`2*$BNSZ`(r4)
$UMULL r7,r5,r6
$UMULH r8,r5,r6
-
+
addc r9,r7,r9
adde r10,r8,r10
addze r11,r0
@@ -801,7 +801,7 @@ $data=<<EOF;
adde r10,r8,r10
addze r11,r11
$ST r9,`12*$BNSZ`(r3) #r[12]=c1;
-
+
#sqr_add_c2(a,7,6,c2,c3,c1)
$LD r5,`6*$BNSZ`(r4)
$UMULL r7,r5,r6
@@ -850,21 +850,21 @@ $data=<<EOF;
#
xor r0,r0,r0 #r0=0. Used in addze below.
#mul_add_c(a[0],b[0],c1,c2,c3);
- $LD r6,`0*$BNSZ`(r4)
- $LD r7,`0*$BNSZ`(r5)
- $UMULL r10,r6,r7
- $UMULH r11,r6,r7
+ $LD r6,`0*$BNSZ`(r4)
+ $LD r7,`0*$BNSZ`(r5)
+ $UMULL r10,r6,r7
+ $UMULH r11,r6,r7
$ST r10,`0*$BNSZ`(r3) #r[0]=c1
#mul_add_c(a[0],b[1],c2,c3,c1);
- $LD r7,`1*$BNSZ`(r5)
+ $LD r7,`1*$BNSZ`(r5)
$UMULL r8,r6,r7
$UMULH r9,r6,r7
addc r11,r8,r11
adde r12,r9,r0
addze r10,r0
#mul_add_c(a[1],b[0],c2,c3,c1);
- $LD r6, `1*$BNSZ`(r4)
- $LD r7, `0*$BNSZ`(r5)
+ $LD r6, `1*$BNSZ`(r4)
+ $LD r7, `0*$BNSZ`(r5)
$UMULL r8,r6,r7
$UMULH r9,r6,r7
addc r11,r8,r11
@@ -872,23 +872,23 @@ $data=<<EOF;
addze r10,r10
$ST r11,`1*$BNSZ`(r3) #r[1]=c2
#mul_add_c(a[2],b[0],c3,c1,c2);
- $LD r6,`2*$BNSZ`(r4)
+ $LD r6,`2*$BNSZ`(r4)
$UMULL r8,r6,r7
$UMULH r9,r6,r7
addc r12,r8,r12
adde r10,r9,r10
addze r11,r0
#mul_add_c(a[1],b[1],c3,c1,c2);
- $LD r6,`1*$BNSZ`(r4)
- $LD r7,`1*$BNSZ`(r5)
+ $LD r6,`1*$BNSZ`(r4)
+ $LD r7,`1*$BNSZ`(r5)
$UMULL r8,r6,r7
$UMULH r9,r6,r7
addc r12,r8,r12
adde r10,r9,r10
addze r11,r11
#mul_add_c(a[0],b[2],c3,c1,c2);
- $LD r6,`0*$BNSZ`(r4)
- $LD r7,`2*$BNSZ`(r5)
+ $LD r6,`0*$BNSZ`(r4)
+ $LD r7,`2*$BNSZ`(r5)
$UMULL r8,r6,r7
$UMULH r9,r6,r7
addc r12,r8,r12
@@ -896,7 +896,7 @@ $data=<<EOF;
addze r11,r11
$ST r12,`2*$BNSZ`(r3) #r[2]=c3
#mul_add_c(a[0],b[3],c1,c2,c3);
- $LD r7,`3*$BNSZ`(r5)
+ $LD r7,`3*$BNSZ`(r5)
$UMULL r8,r6,r7
$UMULH r9,r6,r7
addc r10,r8,r10
@@ -928,7 +928,7 @@ $data=<<EOF;
addze r12,r12
$ST r10,`3*$BNSZ`(r3) #r[3]=c1
#mul_add_c(a[3],b[1],c2,c3,c1);
- $LD r7,`1*$BNSZ`(r5)
+ $LD r7,`1*$BNSZ`(r5)
$UMULL r8,r6,r7
$UMULH r9,r6,r7
addc r11,r8,r11
@@ -952,7 +952,7 @@ $data=<<EOF;
addze r10,r10
$ST r11,`4*$BNSZ`(r3) #r[4]=c2
#mul_add_c(a[2],b[3],c3,c1,c2);
- $LD r6,`2*$BNSZ`(r4)
+ $LD r6,`2*$BNSZ`(r4)
$UMULL r8,r6,r7
$UMULH r9,r6,r7
addc r12,r8,r12
@@ -968,7 +968,7 @@ $data=<<EOF;
addze r11,r11
$ST r12,`5*$BNSZ`(r3) #r[5]=c3
#mul_add_c(a[3],b[3],c1,c2,c3);
- $LD r7,`3*$BNSZ`(r5)
+ $LD r7,`3*$BNSZ`(r5)
$UMULL r8,r6,r7
$UMULH r9,r6,r7
addc r10,r8,r10
@@ -988,7 +988,7 @@ $data=<<EOF;
# for the gcc compiler. This should be automatically
# done in the build
#
-
+
.align 4
.bn_mul_comba8:
#
@@ -1003,7 +1003,7 @@ $data=<<EOF;
# r10, r11, r12 are the equivalents of c1, c2, and c3.
#
xor r0,r0,r0 #r0=0. Used in addze below.
-
+
#mul_add_c(a[0],b[0],c1,c2,c3);
$LD r6,`0*$BNSZ`(r4) #a[0]
$LD r7,`0*$BNSZ`(r5) #b[0]
@@ -1065,7 +1065,7 @@ $data=<<EOF;
addc r10,r10,r8
adde r11,r11,r9
addze r12,r12
-
+
#mul_add_c(a[2],b[1],c1,c2,c3);
$LD r6,`2*$BNSZ`(r4)
$LD r7,`1*$BNSZ`(r5)
@@ -1131,7 +1131,7 @@ $data=<<EOF;
adde r10,r10,r9
addze r11,r0
#mul_add_c(a[1],b[4],c3,c1,c2);
- $LD r6,`1*$BNSZ`(r4)
+ $LD r6,`1*$BNSZ`(r4)
$LD r7,`4*$BNSZ`(r5)
$UMULL r8,r6,r7
$UMULH r9,r6,r7
@@ -1139,7 +1139,7 @@ $data=<<EOF;
adde r10,r10,r9
addze r11,r11
#mul_add_c(a[2],b[3],c3,c1,c2);
- $LD r6,`2*$BNSZ`(r4)
+ $LD r6,`2*$BNSZ`(r4)
$LD r7,`3*$BNSZ`(r5)
$UMULL r8,r6,r7
$UMULH r9,r6,r7
@@ -1147,7 +1147,7 @@ $data=<<EOF;
adde r10,r10,r9
addze r11,r11
#mul_add_c(a[3],b[2],c3,c1,c2);
- $LD r6,`3*$BNSZ`(r4)
+ $LD r6,`3*$BNSZ`(r4)
$LD r7,`2*$BNSZ`(r5)
$UMULL r8,r6,r7
$UMULH r9,r6,r7
@@ -1155,7 +1155,7 @@ $data=<<EOF;
adde r10,r10,r9
addze r11,r11
#mul_add_c(a[4],b[1],c3,c1,c2);
- $LD r6,`4*$BNSZ`(r4)
+ $LD r6,`4*$BNSZ`(r4)
$LD r7,`1*$BNSZ`(r5)
$UMULL r8,r6,r7
$UMULH r9,r6,r7
@@ -1163,7 +1163,7 @@ $data=<<EOF;
adde r10,r10,r9
addze r11,r11
#mul_add_c(a[5],b[0],c3,c1,c2);
- $LD r6,`5*$BNSZ`(r4)
+ $LD r6,`5*$BNSZ`(r4)
$LD r7,`0*$BNSZ`(r5)
$UMULL r8,r6,r7
$UMULH r9,r6,r7
@@ -1555,7 +1555,7 @@ $data=<<EOF;
addi r3,r3,-$BNSZ
addi r5,r5,-$BNSZ
mtctr r6
-Lppcasm_sub_mainloop:
+Lppcasm_sub_mainloop:
$LDU r7,$BNSZ(r4)
$LDU r8,$BNSZ(r5)
subfe r6,r8,r7 # r6 = r7+carry bit + onescomplement(r8)
@@ -1563,7 +1563,7 @@ Lppcasm_sub_mainloop:
# is r7-r8 -1 as we need.
$STU r6,$BNSZ(r3)
bdnz Lppcasm_sub_mainloop
-Lppcasm_sub_adios:
+Lppcasm_sub_adios:
subfze r3,r0 # if carry bit is set then r3 = 0 else -1
andi. r3,r3,1 # keep only last bit.
blr
@@ -1604,13 +1604,13 @@ Lppcasm_sub_adios:
addi r3,r3,-$BNSZ
addi r5,r5,-$BNSZ
mtctr r6
-Lppcasm_add_mainloop:
+Lppcasm_add_mainloop:
$LDU r7,$BNSZ(r4)
$LDU r8,$BNSZ(r5)
adde r8,r7,r8
$STU r8,$BNSZ(r3)
bdnz Lppcasm_add_mainloop
-Lppcasm_add_adios:
+Lppcasm_add_adios:
addze r3,r0 #return carry bit.
blr
.long 0
@@ -1633,11 +1633,11 @@ Lppcasm_add_adios:
# the PPC instruction to count leading zeros instead
# of call to num_bits_word. Since this was compiled
# only at level -O2 we can possibly squeeze it more?
-#
+#
# r3 = h
# r4 = l
# r5 = d
-
+
$UCMPI 0,r5,0 # compare r5 and 0
bne Lppcasm_div1 # proceed if d!=0
li r3,-1 # d=0 return -1
@@ -1653,7 +1653,7 @@ Lppcasm_div1:
Lppcasm_div2:
$UCMP 0,r3,r5 #h>=d?
blt Lppcasm_div3 #goto Lppcasm_div3 if not
- subf r3,r5,r3 #h-=d ;
+ subf r3,r5,r3 #h-=d ;
Lppcasm_div3: #r7 = BN_BITS2-i. so r7=i
cmpi 0,0,r7,0 # is (i == 0)?
beq Lppcasm_div4
@@ -1668,7 +1668,7 @@ Lppcasm_div4:
# as it saves registers.
li r6,2 #r6=2
mtctr r6 #counter will be in count.
-Lppcasm_divouterloop:
+Lppcasm_divouterloop:
$SHRI r8,r3,`$BITS/2` #r8 = (h>>BN_BITS4)
$SHRI r11,r4,`$BITS/2` #r11= (l&BN_MASK2h)>>BN_BITS4
# compute here for innerloop.
@@ -1676,7 +1676,7 @@ Lppcasm_divouterloop:
bne Lppcasm_div5 # goto Lppcasm_div5 if not
li r8,-1
- $CLRU r8,r8,`$BITS/2` #q = BN_MASK2l
+ $CLRU r8,r8,`$BITS/2` #q = BN_MASK2l
b Lppcasm_div6
Lppcasm_div5:
$UDIV r8,r3,r9 #q = h/dh
@@ -1684,7 +1684,7 @@ Lppcasm_div6:
$UMULL r12,r9,r8 #th = q*dh
$CLRU r10,r5,`$BITS/2` #r10=dl
$UMULL r6,r8,r10 #tl = q*dl
-
+
Lppcasm_divinnerloop:
subf r10,r12,r3 #t = h -th
$SHRI r7,r10,`$BITS/2` #r7= (t &BN_MASK2H), sort of...
@@ -1761,7 +1761,7 @@ Lppcasm_div9:
addi r4,r4,-$BNSZ
addi r3,r3,-$BNSZ
mtctr r5
-Lppcasm_sqr_mainloop:
+Lppcasm_sqr_mainloop:
#sqr(r[0],r[1],a[0]);
$LDU r6,$BNSZ(r4)
$UMULL r7,r6,r6
@@ -1769,7 +1769,7 @@ Lppcasm_sqr_mainloop:
$STU r7,$BNSZ(r3)
$STU r8,$BNSZ(r3)
bdnz Lppcasm_sqr_mainloop
-Lppcasm_sqr_adios:
+Lppcasm_sqr_adios:
blr
.long 0
.byte 0,12,0x14,0,0,0,3,0
@@ -1783,7 +1783,7 @@ Lppcasm_sqr_adios:
# done in the build
#
-.align 4
+.align 4
.bn_mul_words:
#
# BN_ULONG bn_mul_words(BN_ULONG *rp, BN_ULONG *ap, int num, BN_ULONG w)
@@ -1797,7 +1797,7 @@ Lppcasm_sqr_adios:
rlwinm. r7,r5,30,2,31 # num >> 2
beq Lppcasm_mw_REM
mtctr r7
-Lppcasm_mw_LOOP:
+Lppcasm_mw_LOOP:
#mul(rp[0],ap[0],w,c1);
$LD r8,`0*$BNSZ`(r4)
$UMULL r9,r6,r8
@@ -1809,7 +1809,7 @@ Lppcasm_mw_LOOP:
#using adde.
$ST r9,`0*$BNSZ`(r3)
#mul(rp[1],ap[1],w,c1);
- $LD r8,`1*$BNSZ`(r4)
+ $LD r8,`1*$BNSZ`(r4)
$UMULL r11,r6,r8
$UMULH r12,r6,r8
adde r11,r11,r10
@@ -1830,7 +1830,7 @@ Lppcasm_mw_LOOP:
addze r12,r12 #this spin we collect carry into
#r12
$ST r11,`3*$BNSZ`(r3)
-
+
addi r3,r3,`4*$BNSZ`
addi r4,r4,`4*$BNSZ`
bdnz Lppcasm_mw_LOOP
@@ -1846,25 +1846,25 @@ Lppcasm_mw_REM:
addze r10,r10
$ST r9,`0*$BNSZ`(r3)
addi r12,r10,0
-
+
addi r5,r5,-1
cmpli 0,0,r5,0
beq Lppcasm_mw_OVER
-
+
#mul(rp[1],ap[1],w,c1);
- $LD r8,`1*$BNSZ`(r4)
+ $LD r8,`1*$BNSZ`(r4)
$UMULL r9,r6,r8
$UMULH r10,r6,r8
addc r9,r9,r12
addze r10,r10
$ST r9,`1*$BNSZ`(r3)
addi r12,r10,0
-
+
addi r5,r5,-1
cmpli 0,0,r5,0
beq Lppcasm_mw_OVER
-
+
#mul_add(rp[2],ap[2],w,c1);
$LD r8,`2*$BNSZ`(r4)
$UMULL r9,r6,r8
@@ -1873,8 +1873,8 @@ Lppcasm_mw_REM:
addze r10,r10
$ST r9,`2*$BNSZ`(r3)
addi r12,r10,0
-
-Lppcasm_mw_OVER:
+
+Lppcasm_mw_OVER:
addi r3,r12,0
blr
.long 0
@@ -1902,11 +1902,11 @@ Lppcasm_mw_OVER:
# empirical evidence suggests that unrolled version performs best!!
#
xor r0,r0,r0 #r0 = 0
- xor r12,r12,r12 #r12 = 0 . used for carry
+ xor r12,r12,r12 #r12 = 0 . used for carry
rlwinm. r7,r5,30,2,31 # num >> 2
beq Lppcasm_maw_leftover # if (num < 4) go LPPCASM_maw_leftover
mtctr r7
-Lppcasm_maw_mainloop:
+Lppcasm_maw_mainloop:
#mul_add(rp[0],ap[0],w,c1);
$LD r8,`0*$BNSZ`(r4)
$LD r11,`0*$BNSZ`(r3)
@@ -1922,9 +1922,9 @@ Lppcasm_maw_mainloop:
#by multiply and will be collected
#in the next spin
$ST r9,`0*$BNSZ`(r3)
-
+
#mul_add(rp[1],ap[1],w,c1);
- $LD r8,`1*$BNSZ`(r4)
+ $LD r8,`1*$BNSZ`(r4)
$LD r9,`1*$BNSZ`(r3)
$UMULL r11,r6,r8
$UMULH r12,r6,r8
@@ -1933,7 +1933,7 @@ Lppcasm_maw_mainloop:
addc r11,r11,r9
#addze r12,r12
$ST r11,`1*$BNSZ`(r3)
-
+
#mul_add(rp[2],ap[2],w,c1);
$LD r8,`2*$BNSZ`(r4)
$UMULL r9,r6,r8
@@ -1944,7 +1944,7 @@ Lppcasm_maw_mainloop:
addc r9,r9,r11
#addze r10,r10
$ST r9,`2*$BNSZ`(r3)
-
+
#mul_add(rp[3],ap[3],w,c1);
$LD r8,`3*$BNSZ`(r4)
$UMULL r11,r6,r8
@@ -1958,7 +1958,7 @@ Lppcasm_maw_mainloop:
addi r3,r3,`4*$BNSZ`
addi r4,r4,`4*$BNSZ`
bdnz Lppcasm_maw_mainloop
-
+
Lppcasm_maw_leftover:
andi. r5,r5,0x3
beq Lppcasm_maw_adios
@@ -1975,10 +1975,10 @@ Lppcasm_maw_leftover:
addc r9,r9,r12
addze r12,r10
$ST r9,0(r3)
-
+
bdz Lppcasm_maw_adios
#mul_add(rp[1],ap[1],w,c1);
- $LDU r8,$BNSZ(r4)
+ $LDU r8,$BNSZ(r4)
$UMULL r9,r6,r8
$UMULH r10,r6,r8
$LDU r11,$BNSZ(r3)
@@ -1987,7 +1987,7 @@ Lppcasm_maw_leftover:
addc r9,r9,r12
addze r12,r10
$ST r9,0(r3)
-
+
bdz Lppcasm_maw_adios
#mul_add(rp[2],ap[2],w,c1);
$LDU r8,$BNSZ(r4)
@@ -1999,8 +1999,8 @@ Lppcasm_maw_leftover:
addc r9,r9,r12
addze r12,r10
$ST r9,0(r3)
-
-Lppcasm_maw_adios:
+
+Lppcasm_maw_adios:
addi r3,r12,0
blr
.long 0
diff --git a/crypto/bn/asm/rsaz-avx2.pl b/crypto/bn/asm/rsaz-avx2.pl
index 0c1b236..f34c84f 100755
--- a/crypto/bn/asm/rsaz-avx2.pl
+++ b/crypto/bn/asm/rsaz-avx2.pl
@@ -382,7 +382,7 @@ $code.=<<___;
vpaddq $TEMP1, $ACC1, $ACC1
vpmuludq 32*7-128($aap), $B2, $ACC2
vpbroadcastq 32*5-128($tpa), $B2
- vpaddq 32*11-448($tp1), $ACC2, $ACC2
+ vpaddq 32*11-448($tp1), $ACC2, $ACC2
vmovdqu $ACC6, 32*6-192($tp0)
vmovdqu $ACC7, 32*7-192($tp0)
@@ -441,7 +441,7 @@ $code.=<<___;
vmovdqu $ACC7, 32*16-448($tp1)
lea 8($tp1), $tp1
- dec $i
+ dec $i
jnz .LOOP_SQR_1024
___
$ZERO = $ACC9;
@@ -786,7 +786,7 @@ $code.=<<___;
vpblendd \$3, $TEMP4, $TEMP5, $TEMP4
vpaddq $TEMP3, $ACC7, $ACC7
vpaddq $TEMP4, $ACC8, $ACC8
-
+
vpsrlq \$29, $ACC4, $TEMP1
vpand $AND_MASK, $ACC4, $ACC4
vpsrlq \$29, $ACC5, $TEMP2
@@ -1451,7 +1451,7 @@ $code.=<<___;
vpaddq $TEMP4, $ACC8, $ACC8
vmovdqu $ACC4, 128-128($rp)
- vmovdqu $ACC5, 160-128($rp)
+ vmovdqu $ACC5, 160-128($rp)
vmovdqu $ACC6, 192-128($rp)
vmovdqu $ACC7, 224-128($rp)
vmovdqu $ACC8, 256-128($rp)
diff --git a/crypto/bn/asm/rsaz-x86_64.pl b/crypto/bn/asm/rsaz-x86_64.pl
index 6f3b664..7bcfafe 100755
--- a/crypto/bn/asm/rsaz-x86_64.pl
+++ b/crypto/bn/asm/rsaz-x86_64.pl
@@ -282,9 +282,9 @@ $code.=<<___;
movq %r9, 16(%rsp)
movq %r10, 24(%rsp)
shrq \$63, %rbx
-
+
#third iteration
- movq 16($inp), %r9
+ movq 16($inp), %r9
movq 24($inp), %rax
mulq %r9
addq %rax, %r12
@@ -532,7 +532,7 @@ $code.=<<___;
movl $times,128+8(%rsp)
movq $out, %xmm0 # off-load
movq %rbp, %xmm1 # off-load
-#first iteration
+#first iteration
mulx %rax, %r8, %r9
mulx 16($inp), %rcx, %r10
@@ -568,7 +568,7 @@ $code.=<<___;
mov %rax, (%rsp)
mov %r8, 8(%rsp)
-#second iteration
+#second iteration
mulx 16($inp), %rax, %rbx
adox %rax, %r10
adcx %rbx, %r11
@@ -607,8 +607,8 @@ $code.=<<___;
mov %r9, 16(%rsp)
.byte 0x4c,0x89,0x94,0x24,0x18,0x00,0x00,0x00 # mov %r10, 24(%rsp)
-
-#third iteration
+
+#third iteration
.byte 0xc4,0x62,0xc3,0xf6,0x8e,0x18,0x00,0x00,0x00 # mulx 24($inp), $out, %r9
adox $out, %r12
adcx %r9, %r13
@@ -643,8 +643,8 @@ $code.=<<___;
mov %r11, 32(%rsp)
.byte 0x4c,0x89,0xa4,0x24,0x28,0x00,0x00,0x00 # mov %r12, 40(%rsp)
-
-#fourth iteration
+
+#fourth iteration
.byte 0xc4,0xe2,0xfb,0xf6,0x9e,0x20,0x00,0x00,0x00 # mulx 32($inp), %rax, %rbx
adox %rax, %r14
adcx %rbx, %r15
@@ -676,8 +676,8 @@ $code.=<<___;
mov %r13, 48(%rsp)
mov %r14, 56(%rsp)
-
-#fifth iteration
+
+#fifth iteration
.byte 0xc4,0x62,0xc3,0xf6,0x9e,0x28,0x00,0x00,0x00 # mulx 40($inp), $out, %r11
adox $out, %r8
adcx %r11, %r9
@@ -704,8 +704,8 @@ $code.=<<___;
mov %r15, 64(%rsp)
mov %r8, 72(%rsp)
-
-#sixth iteration
+
+#sixth iteration
.byte 0xc4,0xe2,0xfb,0xf6,0x9e,0x30,0x00,0x00,0x00 # mulx 48($inp), %rax, %rbx
adox %rax, %r10
adcx %rbx, %r11
@@ -1048,7 +1048,7 @@ $code.=<<___;
movq 56($ap), %rax
movq %rdx, %r14
adcq \$0, %r14
-
+
mulq %rbx
addq %rax, %r14
movq ($ap), %rax
@@ -1150,7 +1150,7 @@ $code.=<<___;
movq ($ap), %rax
adcq \$0, %rdx
addq %r15, %r14
- movq %rdx, %r15
+ movq %rdx, %r15
adcq \$0, %r15
leaq 8(%rdi), %rdi
@@ -1212,7 +1212,7 @@ $code.=<<___ if ($addx);
mulx 48($ap), %rbx, %r14
adcx %rax, %r12
-
+
mulx 56($ap), %rax, %r15
adcx %rbx, %r13
adcx %rax, %r14
@@ -1411,7 +1411,7 @@ $code.=<<___;
___
$code.=<<___ if ($addx);
jmp .Lmul_scatter_tail
-
+
.align 32
.Lmulx_scatter:
movq ($out), %rdx # pass b[0]
@@ -1824,7 +1824,7 @@ __rsaz_512_mul:
movq 56($ap), %rax
movq %rdx, %r14
adcq \$0, %r14
-
+
mulq %rbx
addq %rax, %r14
movq ($ap), %rax
@@ -1901,7 +1901,7 @@ __rsaz_512_mul:
movq ($ap), %rax
adcq \$0, %rdx
addq %r15, %r14
- movq %rdx, %r15
+ movq %rdx, %r15
adcq \$0, %r15
leaq 8(%rdi), %rdi
diff --git a/crypto/bn/asm/s390x-gf2m.pl b/crypto/bn/asm/s390x-gf2m.pl
index cbd16f4..57b0032 100644
--- a/crypto/bn/asm/s390x-gf2m.pl
+++ b/crypto/bn/asm/s390x-gf2m.pl
@@ -198,7 +198,7 @@ $code.=<<___;
xgr $hi,@r[1]
xgr $lo,@r[0]
xgr $hi,@r[2]
- xgr $lo,@r[3]
+ xgr $lo,@r[3]
xgr $hi,@r[3]
xgr $lo,$hi
stg $hi,16($rp)
diff --git a/crypto/bn/asm/via-mont.pl b/crypto/bn/asm/via-mont.pl
index 9f81bc8..558501c 100644
--- a/crypto/bn/asm/via-mont.pl
+++ b/crypto/bn/asm/via-mont.pl
@@ -76,7 +76,7 @@
# dsa 1024 bits 0.001346s 0.001595s 742.7 627.0
# dsa 2048 bits 0.004745s 0.005582s 210.7 179.1
#
-# Conclusions:
+# Conclusions:
# - VIA SDK leaves a *lot* of room for improvement (which this
# implementation successfully fills:-);
# - 'rep montmul' gives up to >3x performance improvement depending on
diff --git a/crypto/bn/asm/x86-mont.pl b/crypto/bn/asm/x86-mont.pl
index 6787503..a8b402d 100755
--- a/crypto/bn/asm/x86-mont.pl
+++ b/crypto/bn/asm/x86-mont.pl
@@ -39,7 +39,7 @@ require "x86asm.pl";
$output = pop;
open STDOUT,">$output";
-
+
&asm_init($ARGV[0],$0);
$sse2=0;
diff --git a/crypto/bn/asm/x86_64-mont5.pl b/crypto/bn/asm/x86_64-mont5.pl
index 3278dc6..8f49391 100755
--- a/crypto/bn/asm/x86_64-mont5.pl
+++ b/crypto/bn/asm/x86_64-mont5.pl
@@ -1049,7 +1049,7 @@ my $bptr="%rdx"; # const void *table,
my $nptr="%rcx"; # const BN_ULONG *nptr,
my $n0 ="%r8"; # const BN_ULONG *n0);
my $num ="%r9"; # int num, has to be divisible by 8
- # int pwr
+ # int pwr
my ($i,$j,$tptr)=("%rbp","%rcx",$rptr);
my @A0=("%r10","%r11");
@@ -1126,7 +1126,7 @@ $code.=<<___;
ja .Lpwr_page_walk
.Lpwr_page_walk_done:
- mov $num,%r10
+ mov $num,%r10
neg $num
##############################################################
@@ -2036,7 +2036,7 @@ __bn_post4x_internal:
jnz .Lsqr4x_sub
mov $num,%r10 # prepare for back-to-back call
- neg $num # restore $num
+ neg $num # restore $num
ret
.size __bn_post4x_internal,.-__bn_post4x_internal
___
@@ -2259,7 +2259,7 @@ bn_mulx4x_mont_gather5:
mov \$0,%r10
cmovc %r10,%r11
sub %r11,%rbp
-.Lmulx4xsp_done:
+.Lmulx4xsp_done:
and \$-64,%rbp # ensure alignment
mov %rsp,%r11
sub %rbp,%r11
@@ -2741,7 +2741,7 @@ bn_powerx5:
ja .Lpwrx_page_walk
.Lpwrx_page_walk_done:
- mov $num,%r10
+ mov $num,%r10
neg $num
##############################################################
diff --git a/crypto/bn/bn_prime.h b/crypto/bn/bn_prime.h
index 41440fa..3bee718 100644
--- a/crypto/bn/bn_prime.h
+++ b/crypto/bn/bn_prime.h
@@ -14,261 +14,260 @@ typedef unsigned short prime_t;
# define NUMPRIMES 2048
static const prime_t primes[2048] = {
-
- 2, 3, 5, 7, 11, 13, 17, 19,
- 23, 29, 31, 37, 41, 43, 47, 53,
- 59, 61, 67, 71, 73, 79, 83, 89,
- 97, 101, 103, 107, 109, 113, 127, 131,
- 137, 139, 149, 151, 157, 163, 167, 173,
- 179, 181, 191, 193, 197, 199, 211, 223,
- 227, 229, 233, 239, 241, 251, 257, 263,
- 269, 271, 277, 281, 283, 293, 307, 311,
- 313, 317, 331, 337, 347, 349, 353, 359,
- 367, 373, 379, 383, 389, 397, 401, 409,
- 419, 421, 431, 433, 439, 443, 449, 457,
- 461, 463, 467, 479, 487, 491, 499, 503,
- 509, 521, 523, 541, 547, 557, 563, 569,
- 571, 577, 587, 593, 599, 601, 607, 613,
- 617, 619, 631, 641, 643, 647, 653, 659,
- 661, 673, 677, 683, 691, 701, 709, 719,
- 727, 733, 739, 743, 751, 757, 761, 769,
- 773, 787, 797, 809, 811, 821, 823, 827,
- 829, 839, 853, 857, 859, 863, 877, 881,
- 883, 887, 907, 911, 919, 929, 937, 941,
- 947, 953, 967, 971, 977, 983, 991, 997,
- 1009, 1013, 1019, 1021, 1031, 1033, 1039, 1049,
- 1051, 1061, 1063, 1069, 1087, 1091, 1093, 1097,
- 1103, 1109, 1117, 1123, 1129, 1151, 1153, 1163,
- 1171, 1181, 1187, 1193, 1201, 1213, 1217, 1223,
- 1229, 1231, 1237, 1249, 1259, 1277, 1279, 1283,
- 1289, 1291, 1297, 1301, 1303, 1307, 1319, 1321,
- 1327, 1361, 1367, 1373, 1381, 1399, 1409, 1423,
- 1427, 1429, 1433, 1439, 1447, 1451, 1453, 1459,
- 1471, 1481, 1483, 1487, 1489, 1493, 1499, 1511,
- 1523, 1531, 1543, 1549, 1553, 1559, 1567, 1571,
- 1579, 1583, 1597, 1601, 1607, 1609, 1613, 1619,
- 1621, 1627, 1637, 1657, 1663, 1667, 1669, 1693,
- 1697, 1699, 1709, 1721, 1723, 1733, 1741, 1747,
- 1753, 1759, 1777, 1783, 1787, 1789, 1801, 1811,
- 1823, 1831, 1847, 1861, 1867, 1871, 1873, 1877,
- 1879, 1889, 1901, 1907, 1913, 1931, 1933, 1949,
- 1951, 1973, 1979, 1987, 1993, 1997, 1999, 2003,
- 2011, 2017, 2027, 2029, 2039, 2053, 2063, 2069,
- 2081, 2083, 2087, 2089, 2099, 2111, 2113, 2129,
- 2131, 2137, 2141, 2143, 2153, 2161, 2179, 2203,
- 2207, 2213, 2221, 2237, 2239, 2243, 2251, 2267,
- 2269, 2273, 2281, 2287, 2293, 2297, 2309, 2311,
- 2333, 2339, 2341, 2347, 2351, 2357, 2371, 2377,
- 2381, 2383, 2389, 2393, 2399, 2411, 2417, 2423,
- 2437, 2441, 2447, 2459, 2467, 2473, 2477, 2503,
- 2521, 2531, 2539, 2543, 2549, 2551, 2557, 2579,
- 2591, 2593, 2609, 2617, 2621, 2633, 2647, 2657,
- 2659, 2663, 2671, 2677, 2683, 2687, 2689, 2693,
- 2699, 2707, 2711, 2713, 2719, 2729, 2731, 2741,
- 2749, 2753, 2767, 2777, 2789, 2791, 2797, 2801,
- 2803, 2819, 2833, 2837, 2843, 2851, 2857, 2861,
- 2879, 2887, 2897, 2903, 2909, 2917, 2927, 2939,
- 2953, 2957, 2963, 2969, 2971, 2999, 3001, 3011,
- 3019, 3023, 3037, 3041, 3049, 3061, 3067, 3079,
- 3083, 3089, 3109, 3119, 3121, 3137, 3163, 3167,
- 3169, 3181, 3187, 3191, 3203, 3209, 3217, 3221,
- 3229, 3251, 3253, 3257, 3259, 3271, 3299, 3301,
- 3307, 3313, 3319, 3323, 3329, 3331, 3343, 3347,
- 3359, 3361, 3371, 3373, 3389, 3391, 3407, 3413,
- 3433, 3449, 3457, 3461, 3463, 3467, 3469, 3491,
- 3499, 3511, 3517, 3527, 3529, 3533, 3539, 3541,
- 3547, 3557, 3559, 3571, 3581, 3583, 3593, 3607,
- 3613, 3617, 3623, 3631, 3637, 3643, 3659, 3671,
- 3673, 3677, 3691, 3697, 3701, 3709, 3719, 3727,
- 3733, 3739, 3761, 3767, 3769, 3779, 3793, 3797,
- 3803, 3821, 3823, 3833, 3847, 3851, 3853, 3863,
- 3877, 3881, 3889, 3907, 3911, 3917, 3919, 3923,
- 3929, 3931, 3943, 3947, 3967, 3989, 4001, 4003,
- 4007, 4013, 4019, 4021, 4027, 4049, 4051, 4057,
- 4073, 4079, 4091, 4093, 4099, 4111, 4127, 4129,
- 4133, 4139, 4153, 4157, 4159, 4177, 4201, 4211,
- 4217, 4219, 4229, 4231, 4241, 4243, 4253, 4259,
- 4261, 4271, 4273, 4283, 4289, 4297, 4327, 4337,
- 4339, 4349, 4357, 4363, 4373, 4391, 4397, 4409,
- 4421, 4423, 4441, 4447, 4451, 4457, 4463, 4481,
- 4483, 4493, 4507, 4513, 4517, 4519, 4523, 4547,
- 4549, 4561, 4567, 4583, 4591, 4597, 4603, 4621,
- 4637, 4639, 4643, 4649, 4651, 4657, 4663, 4673,
- 4679, 4691, 4703, 4721, 4723, 4729, 4733, 4751,
- 4759, 4783, 4787, 4789, 4793, 4799, 4801, 4813,
- 4817, 4831, 4861, 4871, 4877, 4889, 4903, 4909,
- 4919, 4931, 4933, 4937, 4943, 4951, 4957, 4967,
- 4969, 4973, 4987, 4993, 4999, 5003, 5009, 5011,
- 5021, 5023, 5039, 5051, 5059, 5077, 5081, 5087,
- 5099, 5101, 5107, 5113, 5119, 5147, 5153, 5167,
- 5171, 5179, 5189, 5197, 5209, 5227, 5231, 5233,
- 5237, 5261, 5273, 5279, 5281, 5297, 5303, 5309,
- 5323, 5333, 5347, 5351, 5381, 5387, 5393, 5399,
- 5407, 5413, 5417, 5419, 5431, 5437, 5441, 5443,
- 5449, 5471, 5477, 5479, 5483, 5501, 5503, 5507,
- 5519, 5521, 5527, 5531, 5557, 5563, 5569, 5573,
- 5581, 5591, 5623, 5639, 5641, 5647, 5651, 5653,
- 5657, 5659, 5669, 5683, 5689, 5693, 5701, 5711,
- 5717, 5737, 5741, 5743, 5749, 5779, 5783, 5791,
- 5801, 5807, 5813, 5821, 5827, 5839, 5843, 5849,
- 5851, 5857, 5861, 5867, 5869, 5879, 5881, 5897,
- 5903, 5923, 5927, 5939, 5953, 5981, 5987, 6007,
- 6011, 6029, 6037, 6043, 6047, 6053, 6067, 6073,
- 6079, 6089, 6091, 6101, 6113, 6121, 6131, 6133,
- 6143, 6151, 6163, 6173, 6197, 6199, 6203, 6211,
- 6217, 6221, 6229, 6247, 6257, 6263, 6269, 6271,
- 6277, 6287, 6299, 6301, 6311, 6317, 6323, 6329,
- 6337, 6343, 6353, 6359, 6361, 6367, 6373, 6379,
- 6389, 6397, 6421, 6427, 6449, 6451, 6469, 6473,
- 6481, 6491, 6521, 6529, 6547, 6551, 6553, 6563,
- 6569, 6571, 6577, 6581, 6599, 6607, 6619, 6637,
- 6653, 6659, 6661, 6673, 6679, 6689, 6691, 6701,
- 6703, 6709, 6719, 6733, 6737, 6761, 6763, 6779,
- 6781, 6791, 6793, 6803, 6823, 6827, 6829, 6833,
- 6841, 6857, 6863, 6869, 6871, 6883, 6899, 6907,
- 6911, 6917, 6947, 6949, 6959, 6961, 6967, 6971,
- 6977, 6983, 6991, 6997, 7001, 7013, 7019, 7027,
- 7039, 7043, 7057, 7069, 7079, 7103, 7109, 7121,
- 7127, 7129, 7151, 7159, 7177, 7187, 7193, 7207,
- 7211, 7213, 7219, 7229, 7237, 7243, 7247, 7253,
- 7283, 7297, 7307, 7309, 7321, 7331, 7333, 7349,
- 7351, 7369, 7393, 7411, 7417, 7433, 7451, 7457,
- 7459, 7477, 7481, 7487, 7489, 7499, 7507, 7517,
- 7523, 7529, 7537, 7541, 7547, 7549, 7559, 7561,
- 7573, 7577, 7583, 7589, 7591, 7603, 7607, 7621,
- 7639, 7643, 7649, 7669, 7673, 7681, 7687, 7691,
- 7699, 7703, 7717, 7723, 7727, 7741, 7753, 7757,
- 7759, 7789, 7793, 7817, 7823, 7829, 7841, 7853,
- 7867, 7873, 7877, 7879, 7883, 7901, 7907, 7919,
- 7927, 7933, 7937, 7949, 7951, 7963, 7993, 8009,
- 8011, 8017, 8039, 8053, 8059, 8069, 8081, 8087,
- 8089, 8093, 8101, 8111, 8117, 8123, 8147, 8161,
- 8167, 8171, 8179, 8191, 8209, 8219, 8221, 8231,
- 8233, 8237, 8243, 8263, 8269, 8273, 8287, 8291,
- 8293, 8297, 8311, 8317, 8329, 8353, 8363, 8369,
- 8377, 8387, 8389, 8419, 8423, 8429, 8431, 8443,
- 8447, 8461, 8467, 8501, 8513, 8521, 8527, 8537,
- 8539, 8543, 8563, 8573, 8581, 8597, 8599, 8609,
- 8623, 8627, 8629, 8641, 8647, 8663, 8669, 8677,
- 8681, 8689, 8693, 8699, 8707, 8713, 8719, 8731,
- 8737, 8741, 8747, 8753, 8761, 8779, 8783, 8803,
- 8807, 8819, 8821, 8831, 8837, 8839, 8849, 8861,
- 8863, 8867, 8887, 8893, 8923, 8929, 8933, 8941,
- 8951, 8963, 8969, 8971, 8999, 9001, 9007, 9011,
- 9013, 9029, 9041, 9043, 9049, 9059, 9067, 9091,
- 9103, 9109, 9127, 9133, 9137, 9151, 9157, 9161,
- 9173, 9181, 9187, 9199, 9203, 9209, 9221, 9227,
- 9239, 9241, 9257, 9277, 9281, 9283, 9293, 9311,
- 9319, 9323, 9337, 9341, 9343, 9349, 9371, 9377,
- 9391, 9397, 9403, 9413, 9419, 9421, 9431, 9433,
- 9437, 9439, 9461, 9463, 9467, 9473, 9479, 9491,
- 9497, 9511, 9521, 9533, 9539, 9547, 9551, 9587,
- 9601, 9613, 9619, 9623, 9629, 9631, 9643, 9649,
- 9661, 9677, 9679, 9689, 9697, 9719, 9721, 9733,
- 9739, 9743, 9749, 9767, 9769, 9781, 9787, 9791,
- 9803, 9811, 9817, 9829, 9833, 9839, 9851, 9857,
- 9859, 9871, 9883, 9887, 9901, 9907, 9923, 9929,
- 9931, 9941, 9949, 9967, 9973, 10007, 10009, 10037,
- 10039, 10061, 10067, 10069, 10079, 10091, 10093, 10099,
- 10103, 10111, 10133, 10139, 10141, 10151, 10159, 10163,
- 10169, 10177, 10181, 10193, 10211, 10223, 10243, 10247,
- 10253, 10259, 10267, 10271, 10273, 10289, 10301, 10303,
- 10313, 10321, 10331, 10333, 10337, 10343, 10357, 10369,
- 10391, 10399, 10427, 10429, 10433, 10453, 10457, 10459,
- 10463, 10477, 10487, 10499, 10501, 10513, 10529, 10531,
- 10559, 10567, 10589, 10597, 10601, 10607, 10613, 10627,
- 10631, 10639, 10651, 10657, 10663, 10667, 10687, 10691,
- 10709, 10711, 10723, 10729, 10733, 10739, 10753, 10771,
- 10781, 10789, 10799, 10831, 10837, 10847, 10853, 10859,
- 10861, 10867, 10883, 10889, 10891, 10903, 10909, 10937,
- 10939, 10949, 10957, 10973, 10979, 10987, 10993, 11003,
- 11027, 11047, 11057, 11059, 11069, 11071, 11083, 11087,
- 11093, 11113, 11117, 11119, 11131, 11149, 11159, 11161,
- 11171, 11173, 11177, 11197, 11213, 11239, 11243, 11251,
- 11257, 11261, 11273, 11279, 11287, 11299, 11311, 11317,
- 11321, 11329, 11351, 11353, 11369, 11383, 11393, 11399,
- 11411, 11423, 11437, 11443, 11447, 11467, 11471, 11483,
- 11489, 11491, 11497, 11503, 11519, 11527, 11549, 11551,
- 11579, 11587, 11593, 11597, 11617, 11621, 11633, 11657,
- 11677, 11681, 11689, 11699, 11701, 11717, 11719, 11731,
- 11743, 11777, 11779, 11783, 11789, 11801, 11807, 11813,
- 11821, 11827, 11831, 11833, 11839, 11863, 11867, 11887,
- 11897, 11903, 11909, 11923, 11927, 11933, 11939, 11941,
- 11953, 11959, 11969, 11971, 11981, 11987, 12007, 12011,
- 12037, 12041, 12043, 12049, 12071, 12073, 12097, 12101,
- 12107, 12109, 12113, 12119, 12143, 12149, 12157, 12161,
- 12163, 12197, 12203, 12211, 12227, 12239, 12241, 12251,
- 12253, 12263, 12269, 12277, 12281, 12289, 12301, 12323,
- 12329, 12343, 12347, 12373, 12377, 12379, 12391, 12401,
- 12409, 12413, 12421, 12433, 12437, 12451, 12457, 12473,
- 12479, 12487, 12491, 12497, 12503, 12511, 12517, 12527,
- 12539, 12541, 12547, 12553, 12569, 12577, 12583, 12589,
- 12601, 12611, 12613, 12619, 12637, 12641, 12647, 12653,
- 12659, 12671, 12689, 12697, 12703, 12713, 12721, 12739,
- 12743, 12757, 12763, 12781, 12791, 12799, 12809, 12821,
- 12823, 12829, 12841, 12853, 12889, 12893, 12899, 12907,
- 12911, 12917, 12919, 12923, 12941, 12953, 12959, 12967,
- 12973, 12979, 12983, 13001, 13003, 13007, 13009, 13033,
- 13037, 13043, 13049, 13063, 13093, 13099, 13103, 13109,
- 13121, 13127, 13147, 13151, 13159, 13163, 13171, 13177,
- 13183, 13187, 13217, 13219, 13229, 13241, 13249, 13259,
- 13267, 13291, 13297, 13309, 13313, 13327, 13331, 13337,
- 13339, 13367, 13381, 13397, 13399, 13411, 13417, 13421,
- 13441, 13451, 13457, 13463, 13469, 13477, 13487, 13499,
- 13513, 13523, 13537, 13553, 13567, 13577, 13591, 13597,
- 13613, 13619, 13627, 13633, 13649, 13669, 13679, 13681,
- 13687, 13691, 13693, 13697, 13709, 13711, 13721, 13723,
- 13729, 13751, 13757, 13759, 13763, 13781, 13789, 13799,
- 13807, 13829, 13831, 13841, 13859, 13873, 13877, 13879,
- 13883, 13901, 13903, 13907, 13913, 13921, 13931, 13933,
- 13963, 13967, 13997, 13999, 14009, 14011, 14029, 14033,
- 14051, 14057, 14071, 14081, 14083, 14087, 14107, 14143,
- 14149, 14153, 14159, 14173, 14177, 14197, 14207, 14221,
- 14243, 14249, 14251, 14281, 14293, 14303, 14321, 14323,
- 14327, 14341, 14347, 14369, 14387, 14389, 14401, 14407,
- 14411, 14419, 14423, 14431, 14437, 14447, 14449, 14461,
- 14479, 14489, 14503, 14519, 14533, 14537, 14543, 14549,
- 14551, 14557, 14561, 14563, 14591, 14593, 14621, 14627,
- 14629, 14633, 14639, 14653, 14657, 14669, 14683, 14699,
- 14713, 14717, 14723, 14731, 14737, 14741, 14747, 14753,
- 14759, 14767, 14771, 14779, 14783, 14797, 14813, 14821,
- 14827, 14831, 14843, 14851, 14867, 14869, 14879, 14887,
- 14891, 14897, 14923, 14929, 14939, 14947, 14951, 14957,
- 14969, 14983, 15013, 15017, 15031, 15053, 15061, 15073,
- 15077, 15083, 15091, 15101, 15107, 15121, 15131, 15137,
- 15139, 15149, 15161, 15173, 15187, 15193, 15199, 15217,
- 15227, 15233, 15241, 15259, 15263, 15269, 15271, 15277,
- 15287, 15289, 15299, 15307, 15313, 15319, 15329, 15331,
- 15349, 15359, 15361, 15373, 15377, 15383, 15391, 15401,
- 15413, 15427, 15439, 15443, 15451, 15461, 15467, 15473,
- 15493, 15497, 15511, 15527, 15541, 15551, 15559, 15569,
- 15581, 15583, 15601, 15607, 15619, 15629, 15641, 15643,
- 15647, 15649, 15661, 15667, 15671, 15679, 15683, 15727,
- 15731, 15733, 15737, 15739, 15749, 15761, 15767, 15773,
- 15787, 15791, 15797, 15803, 15809, 15817, 15823, 15859,
- 15877, 15881, 15887, 15889, 15901, 15907, 15913, 15919,
- 15923, 15937, 15959, 15971, 15973, 15991, 16001, 16007,
- 16033, 16057, 16061, 16063, 16067, 16069, 16073, 16087,
- 16091, 16097, 16103, 16111, 16127, 16139, 16141, 16183,
- 16187, 16189, 16193, 16217, 16223, 16229, 16231, 16249,
- 16253, 16267, 16273, 16301, 16319, 16333, 16339, 16349,
- 16361, 16363, 16369, 16381, 16411, 16417, 16421, 16427,
- 16433, 16447, 16451, 16453, 16477, 16481, 16487, 16493,
- 16519, 16529, 16547, 16553, 16561, 16567, 16573, 16603,
- 16607, 16619, 16631, 16633, 16649, 16651, 16657, 16661,
- 16673, 16691, 16693, 16699, 16703, 16729, 16741, 16747,
- 16759, 16763, 16787, 16811, 16823, 16829, 16831, 16843,
- 16871, 16879, 16883, 16889, 16901, 16903, 16921, 16927,
- 16931, 16937, 16943, 16963, 16979, 16981, 16987, 16993,
- 17011, 17021, 17027, 17029, 17033, 17041, 17047, 17053,
- 17077, 17093, 17099, 17107, 17117, 17123, 17137, 17159,
- 17167, 17183, 17189, 17191, 17203, 17207, 17209, 17231,
- 17239, 17257, 17291, 17293, 17299, 17317, 17321, 17327,
- 17333, 17341, 17351, 17359, 17377, 17383, 17387, 17389,
- 17393, 17401, 17417, 17419, 17431, 17443, 17449, 17467,
- 17471, 17477, 17483, 17489, 17491, 17497, 17509, 17519,
- 17539, 17551, 17569, 17573, 17579, 17581, 17597, 17599,
- 17609, 17623, 17627, 17657, 17659, 17669, 17681, 17683,
- 17707, 17713, 17729, 17737, 17747, 17749, 17761, 17783,
- 17789, 17791, 17807, 17827, 17837, 17839, 17851, 17863,
+ 2, 3, 5, 7, 11, 13, 17, 19,
+ 23, 29, 31, 37, 41, 43, 47, 53,
+ 59, 61, 67, 71, 73, 79, 83, 89,
+ 97, 101, 103, 107, 109, 113, 127, 131,
+ 137, 139, 149, 151, 157, 163, 167, 173,
+ 179, 181, 191, 193, 197, 199, 211, 223,
+ 227, 229, 233, 239, 241, 251, 257, 263,
+ 269, 271, 277, 281, 283, 293, 307, 311,
+ 313, 317, 331, 337, 347, 349, 353, 359,
+ 367, 373, 379, 383, 389, 397, 401, 409,
+ 419, 421, 431, 433, 439, 443, 449, 457,
+ 461, 463, 467, 479, 487, 491, 499, 503,
+ 509, 521, 523, 541, 547, 557, 563, 569,
+ 571, 577, 587, 593, 599, 601, 607, 613,
+ 617, 619, 631, 641, 643, 647, 653, 659,
+ 661, 673, 677, 683, 691, 701, 709, 719,
+ 727, 733, 739, 743, 751, 757, 761, 769,
+ 773, 787, 797, 809, 811, 821, 823, 827,
+ 829, 839, 853, 857, 859, 863, 877, 881,
+ 883, 887, 907, 911, 919, 929, 937, 941,
+ 947, 953, 967, 971, 977, 983, 991, 997,
+ 1009, 1013, 1019, 1021, 1031, 1033, 1039, 1049,
+ 1051, 1061, 1063, 1069, 1087, 1091, 1093, 1097,
+ 1103, 1109, 1117, 1123, 1129, 1151, 1153, 1163,
+ 1171, 1181, 1187, 1193, 1201, 1213, 1217, 1223,
+ 1229, 1231, 1237, 1249, 1259, 1277, 1279, 1283,
+ 1289, 1291, 1297, 1301, 1303, 1307, 1319, 1321,
+ 1327, 1361, 1367, 1373, 1381, 1399, 1409, 1423,
+ 1427, 1429, 1433, 1439, 1447, 1451, 1453, 1459,
+ 1471, 1481, 1483, 1487, 1489, 1493, 1499, 1511,
+ 1523, 1531, 1543, 1549, 1553, 1559, 1567, 1571,
+ 1579, 1583, 1597, 1601, 1607, 1609, 1613, 1619,
+ 1621, 1627, 1637, 1657, 1663, 1667, 1669, 1693,
+ 1697, 1699, 1709, 1721, 1723, 1733, 1741, 1747,
+ 1753, 1759, 1777, 1783, 1787, 1789, 1801, 1811,
+ 1823, 1831, 1847, 1861, 1867, 1871, 1873, 1877,
+ 1879, 1889, 1901, 1907, 1913, 1931, 1933, 1949,
+ 1951, 1973, 1979, 1987, 1993, 1997, 1999, 2003,
+ 2011, 2017, 2027, 2029, 2039, 2053, 2063, 2069,
+ 2081, 2083, 2087, 2089, 2099, 2111, 2113, 2129,
+ 2131, 2137, 2141, 2143, 2153, 2161, 2179, 2203,
+ 2207, 2213, 2221, 2237, 2239, 2243, 2251, 2267,
+ 2269, 2273, 2281, 2287, 2293, 2297, 2309, 2311,
+ 2333, 2339, 2341, 2347, 2351, 2357, 2371, 2377,
+ 2381, 2383, 2389, 2393, 2399, 2411, 2417, 2423,
+ 2437, 2441, 2447, 2459, 2467, 2473, 2477, 2503,
+ 2521, 2531, 2539, 2543, 2549, 2551, 2557, 2579,
+ 2591, 2593, 2609, 2617, 2621, 2633, 2647, 2657,
+ 2659, 2663, 2671, 2677, 2683, 2687, 2689, 2693,
+ 2699, 2707, 2711, 2713, 2719, 2729, 2731, 2741,
+ 2749, 2753, 2767, 2777, 2789, 2791, 2797, 2801,
+ 2803, 2819, 2833, 2837, 2843, 2851, 2857, 2861,
+ 2879, 2887, 2897, 2903, 2909, 2917, 2927, 2939,
+ 2953, 2957, 2963, 2969, 2971, 2999, 3001, 3011,
+ 3019, 3023, 3037, 3041, 3049, 3061, 3067, 3079,
+ 3083, 3089, 3109, 3119, 3121, 3137, 3163, 3167,
+ 3169, 3181, 3187, 3191, 3203, 3209, 3217, 3221,
+ 3229, 3251, 3253, 3257, 3259, 3271, 3299, 3301,
+ 3307, 3313, 3319, 3323, 3329, 3331, 3343, 3347,
+ 3359, 3361, 3371, 3373, 3389, 3391, 3407, 3413,
+ 3433, 3449, 3457, 3461, 3463, 3467, 3469, 3491,
+ 3499, 3511, 3517, 3527, 3529, 3533, 3539, 3541,
+ 3547, 3557, 3559, 3571, 3581, 3583, 3593, 3607,
+ 3613, 3617, 3623, 3631, 3637, 3643, 3659, 3671,
+ 3673, 3677, 3691, 3697, 3701, 3709, 3719, 3727,
+ 3733, 3739, 3761, 3767, 3769, 3779, 3793, 3797,
+ 3803, 3821, 3823, 3833, 3847, 3851, 3853, 3863,
+ 3877, 3881, 3889, 3907, 3911, 3917, 3919, 3923,
+ 3929, 3931, 3943, 3947, 3967, 3989, 4001, 4003,
+ 4007, 4013, 4019, 4021, 4027, 4049, 4051, 4057,
+ 4073, 4079, 4091, 4093, 4099, 4111, 4127, 4129,
+ 4133, 4139, 4153, 4157, 4159, 4177, 4201, 4211,
+ 4217, 4219, 4229, 4231, 4241, 4243, 4253, 4259,
+ 4261, 4271, 4273, 4283, 4289, 4297, 4327, 4337,
+ 4339, 4349, 4357, 4363, 4373, 4391, 4397, 4409,
+ 4421, 4423, 4441, 4447, 4451, 4457, 4463, 4481,
+ 4483, 4493, 4507, 4513, 4517, 4519, 4523, 4547,
+ 4549, 4561, 4567, 4583, 4591, 4597, 4603, 4621,
+ 4637, 4639, 4643, 4649, 4651, 4657, 4663, 4673,
+ 4679, 4691, 4703, 4721, 4723, 4729, 4733, 4751,
+ 4759, 4783, 4787, 4789, 4793, 4799, 4801, 4813,
+ 4817, 4831, 4861, 4871, 4877, 4889, 4903, 4909,
+ 4919, 4931, 4933, 4937, 4943, 4951, 4957, 4967,
+ 4969, 4973, 4987, 4993, 4999, 5003, 5009, 5011,
+ 5021, 5023, 5039, 5051, 5059, 5077, 5081, 5087,
+ 5099, 5101, 5107, 5113, 5119, 5147, 5153, 5167,
+ 5171, 5179, 5189, 5197, 5209, 5227, 5231, 5233,
+ 5237, 5261, 5273, 5279, 5281, 5297, 5303, 5309,
+ 5323, 5333, 5347, 5351, 5381, 5387, 5393, 5399,
+ 5407, 5413, 5417, 5419, 5431, 5437, 5441, 5443,
+ 5449, 5471, 5477, 5479, 5483, 5501, 5503, 5507,
+ 5519, 5521, 5527, 5531, 5557, 5563, 5569, 5573,
+ 5581, 5591, 5623, 5639, 5641, 5647, 5651, 5653,
+ 5657, 5659, 5669, 5683, 5689, 5693, 5701, 5711,
+ 5717, 5737, 5741, 5743, 5749, 5779, 5783, 5791,
+ 5801, 5807, 5813, 5821, 5827, 5839, 5843, 5849,
+ 5851, 5857, 5861, 5867, 5869, 5879, 5881, 5897,
+ 5903, 5923, 5927, 5939, 5953, 5981, 5987, 6007,
+ 6011, 6029, 6037, 6043, 6047, 6053, 6067, 6073,
+ 6079, 6089, 6091, 6101, 6113, 6121, 6131, 6133,
+ 6143, 6151, 6163, 6173, 6197, 6199, 6203, 6211,
+ 6217, 6221, 6229, 6247, 6257, 6263, 6269, 6271,
+ 6277, 6287, 6299, 6301, 6311, 6317, 6323, 6329,
+ 6337, 6343, 6353, 6359, 6361, 6367, 6373, 6379,
+ 6389, 6397, 6421, 6427, 6449, 6451, 6469, 6473,
+ 6481, 6491, 6521, 6529, 6547, 6551, 6553, 6563,
+ 6569, 6571, 6577, 6581, 6599, 6607, 6619, 6637,
+ 6653, 6659, 6661, 6673, 6679, 6689, 6691, 6701,
+ 6703, 6709, 6719, 6733, 6737, 6761, 6763, 6779,
+ 6781, 6791, 6793, 6803, 6823, 6827, 6829, 6833,
+ 6841, 6857, 6863, 6869, 6871, 6883, 6899, 6907,
+ 6911, 6917, 6947, 6949, 6959, 6961, 6967, 6971,
+ 6977, 6983, 6991, 6997, 7001, 7013, 7019, 7027,
+ 7039, 7043, 7057, 7069, 7079, 7103, 7109, 7121,
+ 7127, 7129, 7151, 7159, 7177, 7187, 7193, 7207,
+ 7211, 7213, 7219, 7229, 7237, 7243, 7247, 7253,
+ 7283, 7297, 7307, 7309, 7321, 7331, 7333, 7349,
+ 7351, 7369, 7393, 7411, 7417, 7433, 7451, 7457,
+ 7459, 7477, 7481, 7487, 7489, 7499, 7507, 7517,
+ 7523, 7529, 7537, 7541, 7547, 7549, 7559, 7561,
+ 7573, 7577, 7583, 7589, 7591, 7603, 7607, 7621,
+ 7639, 7643, 7649, 7669, 7673, 7681, 7687, 7691,
+ 7699, 7703, 7717, 7723, 7727, 7741, 7753, 7757,
+ 7759, 7789, 7793, 7817, 7823, 7829, 7841, 7853,
+ 7867, 7873, 7877, 7879, 7883, 7901, 7907, 7919,
+ 7927, 7933, 7937, 7949, 7951, 7963, 7993, 8009,
+ 8011, 8017, 8039, 8053, 8059, 8069, 8081, 8087,
+ 8089, 8093, 8101, 8111, 8117, 8123, 8147, 8161,
+ 8167, 8171, 8179, 8191, 8209, 8219, 8221, 8231,
+ 8233, 8237, 8243, 8263, 8269, 8273, 8287, 8291,
+ 8293, 8297, 8311, 8317, 8329, 8353, 8363, 8369,
+ 8377, 8387, 8389, 8419, 8423, 8429, 8431, 8443,
+ 8447, 8461, 8467, 8501, 8513, 8521, 8527, 8537,
+ 8539, 8543, 8563, 8573, 8581, 8597, 8599, 8609,
+ 8623, 8627, 8629, 8641, 8647, 8663, 8669, 8677,
+ 8681, 8689, 8693, 8699, 8707, 8713, 8719, 8731,
+ 8737, 8741, 8747, 8753, 8761, 8779, 8783, 8803,
+ 8807, 8819, 8821, 8831, 8837, 8839, 8849, 8861,
+ 8863, 8867, 8887, 8893, 8923, 8929, 8933, 8941,
+ 8951, 8963, 8969, 8971, 8999, 9001, 9007, 9011,
+ 9013, 9029, 9041, 9043, 9049, 9059, 9067, 9091,
+ 9103, 9109, 9127, 9133, 9137, 9151, 9157, 9161,
+ 9173, 9181, 9187, 9199, 9203, 9209, 9221, 9227,
+ 9239, 9241, 9257, 9277, 9281, 9283, 9293, 9311,
+ 9319, 9323, 9337, 9341, 9343, 9349, 9371, 9377,
+ 9391, 9397, 9403, 9413, 9419, 9421, 9431, 9433,
+ 9437, 9439, 9461, 9463, 9467, 9473, 9479, 9491,
+ 9497, 9511, 9521, 9533, 9539, 9547, 9551, 9587,
+ 9601, 9613, 9619, 9623, 9629, 9631, 9643, 9649,
+ 9661, 9677, 9679, 9689, 9697, 9719, 9721, 9733,
+ 9739, 9743, 9749, 9767, 9769, 9781, 9787, 9791,
+ 9803, 9811, 9817, 9829, 9833, 9839, 9851, 9857,
+ 9859, 9871, 9883, 9887, 9901, 9907, 9923, 9929,
+ 9931, 9941, 9949, 9967, 9973, 10007, 10009, 10037,
+ 10039, 10061, 10067, 10069, 10079, 10091, 10093, 10099,
+ 10103, 10111, 10133, 10139, 10141, 10151, 10159, 10163,
+ 10169, 10177, 10181, 10193, 10211, 10223, 10243, 10247,
+ 10253, 10259, 10267, 10271, 10273, 10289, 10301, 10303,
+ 10313, 10321, 10331, 10333, 10337, 10343, 10357, 10369,
+ 10391, 10399, 10427, 10429, 10433, 10453, 10457, 10459,
+ 10463, 10477, 10487, 10499, 10501, 10513, 10529, 10531,
+ 10559, 10567, 10589, 10597, 10601, 10607, 10613, 10627,
+ 10631, 10639, 10651, 10657, 10663, 10667, 10687, 10691,
+ 10709, 10711, 10723, 10729, 10733, 10739, 10753, 10771,
+ 10781, 10789, 10799, 10831, 10837, 10847, 10853, 10859,
+ 10861, 10867, 10883, 10889, 10891, 10903, 10909, 10937,
+ 10939, 10949, 10957, 10973, 10979, 10987, 10993, 11003,
+ 11027, 11047, 11057, 11059, 11069, 11071, 11083, 11087,
+ 11093, 11113, 11117, 11119, 11131, 11149, 11159, 11161,
+ 11171, 11173, 11177, 11197, 11213, 11239, 11243, 11251,
+ 11257, 11261, 11273, 11279, 11287, 11299, 11311, 11317,
+ 11321, 11329, 11351, 11353, 11369, 11383, 11393, 11399,
+ 11411, 11423, 11437, 11443, 11447, 11467, 11471, 11483,
+ 11489, 11491, 11497, 11503, 11519, 11527, 11549, 11551,
+ 11579, 11587, 11593, 11597, 11617, 11621, 11633, 11657,
+ 11677, 11681, 11689, 11699, 11701, 11717, 11719, 11731,
+ 11743, 11777, 11779, 11783, 11789, 11801, 11807, 11813,
+ 11821, 11827, 11831, 11833, 11839, 11863, 11867, 11887,
+ 11897, 11903, 11909, 11923, 11927, 11933, 11939, 11941,
+ 11953, 11959, 11969, 11971, 11981, 11987, 12007, 12011,
+ 12037, 12041, 12043, 12049, 12071, 12073, 12097, 12101,
+ 12107, 12109, 12113, 12119, 12143, 12149, 12157, 12161,
+ 12163, 12197, 12203, 12211, 12227, 12239, 12241, 12251,
+ 12253, 12263, 12269, 12277, 12281, 12289, 12301, 12323,
+ 12329, 12343, 12347, 12373, 12377, 12379, 12391, 12401,
+ 12409, 12413, 12421, 12433, 12437, 12451, 12457, 12473,
+ 12479, 12487, 12491, 12497, 12503, 12511, 12517, 12527,
+ 12539, 12541, 12547, 12553, 12569, 12577, 12583, 12589,
+ 12601, 12611, 12613, 12619, 12637, 12641, 12647, 12653,
+ 12659, 12671, 12689, 12697, 12703, 12713, 12721, 12739,
+ 12743, 12757, 12763, 12781, 12791, 12799, 12809, 12821,
+ 12823, 12829, 12841, 12853, 12889, 12893, 12899, 12907,
+ 12911, 12917, 12919, 12923, 12941, 12953, 12959, 12967,
+ 12973, 12979, 12983, 13001, 13003, 13007, 13009, 13033,
+ 13037, 13043, 13049, 13063, 13093, 13099, 13103, 13109,
+ 13121, 13127, 13147, 13151, 13159, 13163, 13171, 13177,
+ 13183, 13187, 13217, 13219, 13229, 13241, 13249, 13259,
+ 13267, 13291, 13297, 13309, 13313, 13327, 13331, 13337,
+ 13339, 13367, 13381, 13397, 13399, 13411, 13417, 13421,
+ 13441, 13451, 13457, 13463, 13469, 13477, 13487, 13499,
+ 13513, 13523, 13537, 13553, 13567, 13577, 13591, 13597,
+ 13613, 13619, 13627, 13633, 13649, 13669, 13679, 13681,
+ 13687, 13691, 13693, 13697, 13709, 13711, 13721, 13723,
+ 13729, 13751, 13757, 13759, 13763, 13781, 13789, 13799,
+ 13807, 13829, 13831, 13841, 13859, 13873, 13877, 13879,
+ 13883, 13901, 13903, 13907, 13913, 13921, 13931, 13933,
+ 13963, 13967, 13997, 13999, 14009, 14011, 14029, 14033,
+ 14051, 14057, 14071, 14081, 14083, 14087, 14107, 14143,
+ 14149, 14153, 14159, 14173, 14177, 14197, 14207, 14221,
+ 14243, 14249, 14251, 14281, 14293, 14303, 14321, 14323,
+ 14327, 14341, 14347, 14369, 14387, 14389, 14401, 14407,
+ 14411, 14419, 14423, 14431, 14437, 14447, 14449, 14461,
+ 14479, 14489, 14503, 14519, 14533, 14537, 14543, 14549,
+ 14551, 14557, 14561, 14563, 14591, 14593, 14621, 14627,
+ 14629, 14633, 14639, 14653, 14657, 14669, 14683, 14699,
+ 14713, 14717, 14723, 14731, 14737, 14741, 14747, 14753,
+ 14759, 14767, 14771, 14779, 14783, 14797, 14813, 14821,
+ 14827, 14831, 14843, 14851, 14867, 14869, 14879, 14887,
+ 14891, 14897, 14923, 14929, 14939, 14947, 14951, 14957,
+ 14969, 14983, 15013, 15017, 15031, 15053, 15061, 15073,
+ 15077, 15083, 15091, 15101, 15107, 15121, 15131, 15137,
+ 15139, 15149, 15161, 15173, 15187, 15193, 15199, 15217,
+ 15227, 15233, 15241, 15259, 15263, 15269, 15271, 15277,
+ 15287, 15289, 15299, 15307, 15313, 15319, 15329, 15331,
+ 15349, 15359, 15361, 15373, 15377, 15383, 15391, 15401,
+ 15413, 15427, 15439, 15443, 15451, 15461, 15467, 15473,
+ 15493, 15497, 15511, 15527, 15541, 15551, 15559, 15569,
+ 15581, 15583, 15601, 15607, 15619, 15629, 15641, 15643,
+ 15647, 15649, 15661, 15667, 15671, 15679, 15683, 15727,
+ 15731, 15733, 15737, 15739, 15749, 15761, 15767, 15773,
+ 15787, 15791, 15797, 15803, 15809, 15817, 15823, 15859,
+ 15877, 15881, 15887, 15889, 15901, 15907, 15913, 15919,
+ 15923, 15937, 15959, 15971, 15973, 15991, 16001, 16007,
+ 16033, 16057, 16061, 16063, 16067, 16069, 16073, 16087,
+ 16091, 16097, 16103, 16111, 16127, 16139, 16141, 16183,
+ 16187, 16189, 16193, 16217, 16223, 16229, 16231, 16249,
+ 16253, 16267, 16273, 16301, 16319, 16333, 16339, 16349,
+ 16361, 16363, 16369, 16381, 16411, 16417, 16421, 16427,
+ 16433, 16447, 16451, 16453, 16477, 16481, 16487, 16493,
+ 16519, 16529, 16547, 16553, 16561, 16567, 16573, 16603,
+ 16607, 16619, 16631, 16633, 16649, 16651, 16657, 16661,
+ 16673, 16691, 16693, 16699, 16703, 16729, 16741, 16747,
+ 16759, 16763, 16787, 16811, 16823, 16829, 16831, 16843,
+ 16871, 16879, 16883, 16889, 16901, 16903, 16921, 16927,
+ 16931, 16937, 16943, 16963, 16979, 16981, 16987, 16993,
+ 17011, 17021, 17027, 17029, 17033, 17041, 17047, 17053,
+ 17077, 17093, 17099, 17107, 17117, 17123, 17137, 17159,
+ 17167, 17183, 17189, 17191, 17203, 17207, 17209, 17231,
+ 17239, 17257, 17291, 17293, 17299, 17317, 17321, 17327,
+ 17333, 17341, 17351, 17359, 17377, 17383, 17387, 17389,
+ 17393, 17401, 17417, 17419, 17431, 17443, 17449, 17467,
+ 17471, 17477, 17483, 17489, 17491, 17497, 17509, 17519,
+ 17539, 17551, 17569, 17573, 17579, 17581, 17597, 17599,
+ 17609, 17623, 17627, 17657, 17659, 17669, 17681, 17683,
+ 17707, 17713, 17729, 17737, 17747, 17749, 17761, 17783,
+ 17789, 17791, 17807, 17827, 17837, 17839, 17851, 17863,
};
diff --git a/crypto/bn/bn_prime.pl b/crypto/bn/bn_prime.pl
index 163d4a9..b1e4707 100644
--- a/crypto/bn/bn_prime.pl
+++ b/crypto/bn/bn_prime.pl
@@ -38,9 +38,9 @@ loop: while ($#primes < $num-1) {
print "typedef unsigned short prime_t;\n";
printf "# define NUMPRIMES %d\n\n", $num;
-printf "static const prime_t primes[%d] = {\n", $num;
+printf "static const prime_t primes[%d] = {", $num;
for (my $i = 0; $i <= $#primes; $i++) {
- printf "\n " if ($i % 8) == 0;
- printf "%4d, ", $primes[$i];
+ printf "\n " if ($i % 8) == 0;
+ printf " %5d,", $primes[$i];
}
print "\n};\n";
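[Editor's note: a minimal standalone Perl sketch, not part of the commit, illustrating what the reworked bn_prime.pl output loop above produces -- the array opens on the brace line, a new row starts every eight primes, and each value is printed in a 5-wide field so five-digit entries stay aligned with no trailing blanks. The prime list and exact indentation here are illustrative assumptions only.]

#!/usr/bin/perl
use strict;
use warnings;

# Illustrative data only; the real script sieves NUMPRIMES values first.
my @primes = (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47, 53);

printf "static const prime_t primes[%d] = {", scalar @primes;
for (my $i = 0; $i <= $#primes; $i++) {
    print "\n   " if ($i % 8) == 0;   # start a new row every 8 entries
    printf " %5d,", $primes[$i];      # 5-wide field, comma, no trailing space
}
print "\n};\n";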
diff --git a/crypto/camellia/asm/cmll-x86.pl b/crypto/camellia/asm/cmll-x86.pl
index 59f9ed9..26afad8 100644
--- a/crypto/camellia/asm/cmll-x86.pl
+++ b/crypto/camellia/asm/cmll-x86.pl
@@ -792,9 +792,9 @@ if ($OPENSSL) {
64, 40,211,123,187,201, 67,193, 21,227,173,244,119,199,128,158);
sub S1110 { my $i=shift; $i=@SBOX[$i]; return $i<<24|$i<<16|$i<<8; }
-sub S4404 { my $i=shift; $i=($i<<1|$i>>7)&0xff; $i=@SBOX[$i]; return $i<<24|$i<<16|$i; }
-sub S0222 { my $i=shift; $i=@SBOX[$i]; $i=($i<<1|$i>>7)&0xff; return $i<<16|$i<<8|$i; }
-sub S3033 { my $i=shift; $i=@SBOX[$i]; $i=($i>>1|$i<<7)&0xff; return $i<<24|$i<<8|$i; }
+sub S4404 { my $i=shift; $i=($i<<1|$i>>7)&0xff; $i=@SBOX[$i]; return $i<<24|$i<<16|$i; }
+sub S0222 { my $i=shift; $i=@SBOX[$i]; $i=($i<<1|$i>>7)&0xff; return $i<<16|$i<<8|$i; }
+sub S3033 { my $i=shift; $i=@SBOX[$i]; $i=($i>>1|$i<<7)&0xff; return $i<<24|$i<<8|$i; }
&set_label("Camellia_SIGMA",64);
&data_word(
diff --git a/crypto/cast/asm/cast-586.pl b/crypto/cast/asm/cast-586.pl
index 6beb9c5..1fc2b1a 100644
--- a/crypto/cast/asm/cast-586.pl
+++ b/crypto/cast/asm/cast-586.pl
@@ -7,7 +7,7 @@
# https://www.openssl.org/source/license.html
-# This flag makes the inner loop one cycle longer, but generates
+# This flag makes the inner loop one cycle longer, but generates
# code that runs %30 faster on the pentium pro/II, 44% faster
# of PIII, while only %7 slower on the pentium.
# By default, this flag is on.
@@ -157,7 +157,7 @@ sub E_CAST {
if ($ppro) {
&xor( $tmp1, $tmp1);
&mov( $tmp2, 0xff);
-
+
&movb( &LB($tmp1), &HB($tmp4)); # A
&and( $tmp2, $tmp4);
@@ -166,7 +166,7 @@ sub E_CAST {
} else {
&mov( $tmp2, $tmp4); # B
&movb( &LB($tmp1), &HB($tmp4)); # A # BAD BAD BAD
-
+
&shr( $tmp4, 16); #
&and( $tmp2, 0xff);
}
diff --git a/crypto/chacha/asm/chacha-armv4.pl b/crypto/chacha/asm/chacha-armv4.pl
index b5e21e4..c90306e 100755
--- a/crypto/chacha/asm/chacha-armv4.pl
+++ b/crypto/chacha/asm/chacha-armv4.pl
@@ -15,7 +15,7 @@
# ====================================================================
#
# December 2014
-#
+#
# ChaCha20 for ARMv4.
#
# Performance in cycles per byte out of large buffer.
@@ -720,7 +720,7 @@ ChaCha20_neon:
vadd.i32 $d2,$d1,$t0 @ counter+2
str @t[3], [sp,#4*(16+15)]
mov @t[3],#10
- add @x[12],@x[12],#3 @ counter+3
+ add @x[12],@x[12],#3 @ counter+3
b .Loop_neon
.align 4
diff --git a/crypto/chacha/asm/chacha-armv8.pl b/crypto/chacha/asm/chacha-armv8.pl
index f7e1074..db3776a 100755
--- a/crypto/chacha/asm/chacha-armv8.pl
+++ b/crypto/chacha/asm/chacha-armv8.pl
@@ -15,7 +15,7 @@
# ====================================================================
#
# June 2015
-#
+#
# ChaCha20 for ARMv8.
#
# Performance in cycles per byte out of large buffer.
@@ -201,7 +201,7 @@ ChaCha20_ctr32:
mov $ctr,#10
subs $len,$len,#64
.Loop:
- sub $ctr,$ctr,#1
+ sub $ctr,$ctr,#1
___
foreach (&ROUND(0, 4, 8,12)) { eval; }
foreach (&ROUND(0, 5,10,15)) { eval; }
diff --git a/crypto/chacha/asm/chacha-ppc.pl b/crypto/chacha/asm/chacha-ppc.pl
index 8a54cba..7da99e0 100755
--- a/crypto/chacha/asm/chacha-ppc.pl
+++ b/crypto/chacha/asm/chacha-ppc.pl
@@ -15,7 +15,7 @@
# ====================================================================
#
# October 2015
-#
+#
# ChaCha20 for PowerPC/AltiVec.
#
# Performance in cycles per byte out of large buffer.
@@ -524,7 +524,7 @@ $code.=<<___;
lwz @d[3],12($ctr)
vadduwm @K[5],@K[4],@K[5]
- vspltisw $twenty,-12 # synthesize constants
+ vspltisw $twenty,-12 # synthesize constants
vspltisw $twelve,12
vspltisw $twenty5,-7
#vspltisw $seven,7 # synthesized in the loop
diff --git a/crypto/des/asm/crypt586.pl b/crypto/des/asm/crypt586.pl
index d5911a1..ad89eeb 100644
--- a/crypto/des/asm/crypt586.pl
+++ b/crypto/des/asm/crypt586.pl
@@ -111,7 +111,7 @@ sub D_ENCRYPT
&and( $u, "0xfcfcfcfc" ); # 2
&xor( $tmp1, $tmp1); # 1
&and( $t, "0xcfcfcfcf" ); # 2
- &xor( $tmp2, $tmp2);
+ &xor( $tmp2, $tmp2);
&movb( &LB($tmp1), &LB($u) );
&movb( &LB($tmp2), &HB($u) );
&rotr( $t, 4 );
@@ -175,7 +175,7 @@ sub IP_new
&R_PERM_OP($l,$tt,$r,14,"0x33333333",$r);
&R_PERM_OP($tt,$r,$l,22,"0x03fc03fc",$r);
&R_PERM_OP($l,$r,$tt, 9,"0xaaaaaaaa",$r);
-
+
if ($lr != 3)
{
if (($lr-3) < 0)
diff --git a/crypto/des/asm/des-586.pl b/crypto/des/asm/des-586.pl
index 3d7c7f1..d45102c 100644
--- a/crypto/des/asm/des-586.pl
+++ b/crypto/des/asm/des-586.pl
@@ -85,7 +85,7 @@ sub DES_encrypt_internal()
&function_end_B("_x86_DES_encrypt");
}
-
+
sub DES_decrypt_internal()
{
&function_begin_B("_x86_DES_decrypt");
@@ -122,7 +122,7 @@ sub DES_decrypt_internal()
&function_end_B("_x86_DES_decrypt");
}
-
+
sub DES_encrypt
{
local($name,$do_ip)=@_;
@@ -283,7 +283,7 @@ sub IP_new
&R_PERM_OP($l,$tt,$r,14,"0x33333333",$r);
&R_PERM_OP($tt,$r,$l,22,"0x03fc03fc",$r);
&R_PERM_OP($l,$r,$tt, 9,"0xaaaaaaaa",$r);
-
+
if ($lr != 3)
{
if (($lr-3) < 0)
diff --git a/crypto/des/asm/desboth.pl b/crypto/des/asm/desboth.pl
index 76759fb..ef7054e 100644
--- a/crypto/des/asm/desboth.pl
+++ b/crypto/des/asm/desboth.pl
@@ -34,7 +34,7 @@ sub DES_encrypt3
&IP_new($L,$R,"edx",0);
# put them back
-
+
if ($enc)
{
&mov(&DWP(4,"ebx","",0),$R);
diff --git a/crypto/ec/asm/ecp_nistz256-armv8.pl b/crypto/ec/asm/ecp_nistz256-armv8.pl
index cdc9161..d93c4fe 100644
--- a/crypto/ec/asm/ecp_nistz256-armv8.pl
+++ b/crypto/ec/asm/ecp_nistz256-armv8.pl
@@ -660,7 +660,7 @@ __ecp_nistz256_div_by_2:
adc $ap,xzr,xzr // zap $ap
tst $acc0,#1 // is a even?
- csel $acc0,$acc0,$t0,eq // ret = even ? a : a+modulus
+ csel $acc0,$acc0,$t0,eq // ret = even ? a : a+modulus
csel $acc1,$acc1,$t1,eq
csel $acc2,$acc2,$t2,eq
csel $acc3,$acc3,$t3,eq
diff --git a/crypto/ec/asm/ecp_nistz256-sparcv9.pl b/crypto/ec/asm/ecp_nistz256-sparcv9.pl
index 97201cb..ee11069 100755
--- a/crypto/ec/asm/ecp_nistz256-sparcv9.pl
+++ b/crypto/ec/asm/ecp_nistz256-sparcv9.pl
@@ -1874,7 +1874,7 @@ $code.=<<___ if ($i<3);
ldx [$bp+8*($i+1)],$bi ! bp[$i+1]
___
$code.=<<___;
- addcc $acc1,$t0,$acc1 ! accumulate high parts of multiplication
+ addcc $acc1,$t0,$acc1 ! accumulate high parts of multiplication
sllx $acc0,32,$t0
addxccc $acc2,$t1,$acc2
srlx $acc0,32,$t1
diff --git a/crypto/ec/asm/ecp_nistz256-x86.pl b/crypto/ec/asm/ecp_nistz256-x86.pl
index 1d9e006..f637c84 100755
--- a/crypto/ec/asm/ecp_nistz256-x86.pl
+++ b/crypto/ec/asm/ecp_nistz256-x86.pl
@@ -443,7 +443,7 @@ for(1..37) {
&mov (&DWP(20,"esp"),"eax");
&mov (&DWP(24,"esp"),"eax");
&mov (&DWP(28,"esp"),"eax");
-
+
&call ("_ecp_nistz256_sub");
&stack_pop(8);
diff --git a/crypto/ec/asm/ecp_nistz256-x86_64.pl b/crypto/ec/asm/ecp_nistz256-x86_64.pl
index 16b6639..adb49f3 100755
--- a/crypto/ec/asm/ecp_nistz256-x86_64.pl
+++ b/crypto/ec/asm/ecp_nistz256-x86_64.pl
@@ -611,7 +611,7 @@ __ecp_nistz256_mul_montq:
adc \$0, $acc0
########################################################################
- # Second reduction step
+ # Second reduction step
mov $acc1, $t1
shl \$32, $acc1
mulq $poly3
@@ -658,7 +658,7 @@ __ecp_nistz256_mul_montq:
adc \$0, $acc1
########################################################################
- # Third reduction step
+ # Third reduction step
mov $acc2, $t1
shl \$32, $acc2
mulq $poly3
@@ -705,7 +705,7 @@ __ecp_nistz256_mul_montq:
adc \$0, $acc2
########################################################################
- # Final reduction step
+ # Final reduction step
mov $acc3, $t1
shl \$32, $acc3
mulq $poly3
@@ -718,7 +718,7 @@ __ecp_nistz256_mul_montq:
mov $acc5, $t1
adc \$0, $acc2
- ########################################################################
+ ########################################################################
# Branch-less conditional subtraction of P
sub \$-1, $acc4 # .Lpoly[0]
mov $acc0, $t2
@@ -2118,7 +2118,7 @@ $code.=<<___;
movq %xmm1, $r_ptr
call __ecp_nistz256_sqr_mont$x # p256_sqr_mont(res_y, S);
___
-{
+{
######## ecp_nistz256_div_by_2(res_y, res_y); ##########################
# operate in 4-5-6-7 "name space" that matches squaring output
#
@@ -2207,7 +2207,7 @@ $code.=<<___;
lea $M(%rsp), $b_ptr
mov $acc4, $acc6 # harmonize sub output and mul input
xor %ecx, %ecx
- mov $acc4, $S+8*0(%rsp) # have to save:-(
+ mov $acc4, $S+8*0(%rsp) # have to save:-(
mov $acc5, $acc2
mov $acc5, $S+8*1(%rsp)
cmovz $acc0, $acc3
@@ -3055,8 +3055,8 @@ ___
########################################################################
# Convert ecp_nistz256_table.c to layout expected by ecp_nistz_gather_w7
#
-open TABLE,"<ecp_nistz256_table.c" or
-open TABLE,"<${dir}../ecp_nistz256_table.c" or
+open TABLE,"<ecp_nistz256_table.c" or
+open TABLE,"<${dir}../ecp_nistz256_table.c" or
die "failed to open ecp_nistz256_table.c:",$!;
use integer;
diff --git a/crypto/md5/asm/md5-586.pl b/crypto/md5/asm/md5-586.pl
index 24f68af..de5758f 100644
--- a/crypto/md5/asm/md5-586.pl
+++ b/crypto/md5/asm/md5-586.pl
@@ -57,7 +57,7 @@ sub R0
local($pos,$a,$b,$c,$d,$K,$ki,$s,$t)=@_;
&mov($tmp1,$C) if $pos < 0;
- &mov($tmp2,&DWP($xo[$ki]*4,$K,"",0)) if $pos < 0; # very first one
+ &mov($tmp2,&DWP($xo[$ki]*4,$K,"",0)) if $pos < 0; # very first one
# body proper
diff --git a/crypto/md5/asm/md5-sparcv9.pl b/crypto/md5/asm/md5-sparcv9.pl
index 09e6d71..6cfb044 100644
--- a/crypto/md5/asm/md5-sparcv9.pl
+++ b/crypto/md5/asm/md5-sparcv9.pl
@@ -242,7 +242,7 @@ md5_block_asm_data_order:
ldd [%o1 + 0x20], %f16
ldd [%o1 + 0x28], %f18
ldd [%o1 + 0x30], %f20
- subcc %o2, 1, %o2 ! done yet?
+ subcc %o2, 1, %o2 ! done yet?
ldd [%o1 + 0x38], %f22
add %o1, 0x40, %o1
prefetch [%o1 + 63], 20
diff --git a/crypto/mips_arch.h b/crypto/mips_arch.h
index e0cd905..75043e7 100644
--- a/crypto/mips_arch.h
+++ b/crypto/mips_arch.h
@@ -15,7 +15,7 @@
&& !defined(_MIPS_ARCH_MIPS32R2)
# define _MIPS_ARCH_MIPS32R2
# endif
-
+
# if (defined(_MIPS_ARCH_MIPS64R3) || defined(_MIPS_ARCH_MIPS64R5) || \
defined(_MIPS_ARCH_MIPS64R6)) \
&& !defined(_MIPS_ARCH_MIPS64R2)
diff --git a/crypto/modes/asm/ghash-armv4.pl b/crypto/modes/asm/ghash-armv4.pl
index 7d880c9..9cc072e 100644
--- a/crypto/modes/asm/ghash-armv4.pl
+++ b/crypto/modes/asm/ghash-armv4.pl
@@ -54,7 +54,7 @@
#
# Câmara, D.; Gouvêa, C. P. L.; López, J. & Dahab, R.: Fast Software
# Polynomial Multiplication on ARM Processors using the NEON Engine.
-#
+#
# http://conradoplg.cryptoland.net/files/2010/12/mocrysen13.pdf
# ====================================================================
@@ -528,7 +528,7 @@ $code.=<<___;
#ifdef __ARMEL__
vrev64.8 $Xl,$Xl
#endif
- sub $Xi,#16
+ sub $Xi,#16
vst1.64 $Xl#hi,[$Xi]! @ write out Xi
vst1.64 $Xl#lo,[$Xi]
diff --git a/crypto/modes/asm/ghash-s390x.pl b/crypto/modes/asm/ghash-s390x.pl
index 65ffaf9..f8b038c 100644
--- a/crypto/modes/asm/ghash-s390x.pl
+++ b/crypto/modes/asm/ghash-s390x.pl
@@ -158,7 +158,7 @@ $code.=<<___;
lg $Zhi,0+1($Xi)
lghi $tmp,0
.Louter:
- xg $Zhi,0($inp) # Xi ^= inp
+ xg $Zhi,0($inp) # Xi ^= inp
xg $Zlo,8($inp)
xgr $Zhi,$tmp
stg $Zlo,8+1($Xi)
diff --git a/crypto/modes/asm/ghash-x86.pl b/crypto/modes/asm/ghash-x86.pl
index cd84582..66307ae 100644
--- a/crypto/modes/asm/ghash-x86.pl
+++ b/crypto/modes/asm/ghash-x86.pl
@@ -811,7 +811,7 @@ sub mmx_loop() {
&bswap ($dat);
&pshufw ($Zhi,$Zhi,0b00011011); # 76543210
&bswap ("ebx");
-
+
&cmp ("ecx",&DWP(528+16+8,"esp")); # are we done?
&jne (&label("outer"));
}
@@ -915,7 +915,7 @@ my ($Xhi,$Xi) = @_;
&psllq ($Xi,57); #
&movdqa ($T1,$Xi); #
&pslldq ($Xi,8);
- &psrldq ($T1,8); #
+ &psrldq ($T1,8); #
&pxor ($Xi,$T2);
&pxor ($Xhi,$T1); #
@@ -1085,7 +1085,7 @@ my ($Xhi,$Xi) = @_;
&psllq ($Xi,57); #
&movdqa ($T1,$Xi); #
&pslldq ($Xi,8);
- &psrldq ($T1,8); #
+ &psrldq ($T1,8); #
&pxor ($Xi,$T2);
&pxor ($Xhi,$T1); #
&pshufd ($T1,$Xhn,0b01001110);
diff --git a/crypto/modes/asm/ghash-x86_64.pl b/crypto/modes/asm/ghash-x86_64.pl
index b4a8ddb..6941b3c 100644
--- a/crypto/modes/asm/ghash-x86_64.pl
+++ b/crypto/modes/asm/ghash-x86_64.pl
@@ -468,7 +468,7 @@ $code.=<<___;
psllq \$57,$Xi #
movdqa $Xi,$T1 #
pslldq \$8,$Xi
- psrldq \$8,$T1 #
+ psrldq \$8,$T1 #
pxor $T2,$Xi
pxor $T1,$Xhi #
@@ -582,7 +582,7 @@ ___
&clmul64x64_T2 ($Xhi,$Xi,$Hkey,$T2);
$code.=<<___ if (0 || (&reduction_alg9($Xhi,$Xi)&&0));
# experimental alternative. special thing about is that there
- # no dependency between the two multiplications...
+ # no dependency between the two multiplications...
mov \$`0xE1<<1`,%eax
mov \$0xA040608020C0E000,%r10 # ((7..0)·0xE0)&0xff
mov \$0x07,%r11d
@@ -757,7 +757,7 @@ $code.=<<___;
movdqa $T2,$T1 #
pslldq \$8,$T2
pclmulqdq \$0x00,$Hkey2,$Xln
- psrldq \$8,$T1 #
+ psrldq \$8,$T1 #
pxor $T2,$Xi
pxor $T1,$Xhi #
movdqu 0($inp),$T1
@@ -893,7 +893,7 @@ $code.=<<___;
psllq \$57,$Xi #
movdqa $Xi,$T1 #
pslldq \$8,$Xi
- psrldq \$8,$T1 #
+ psrldq \$8,$T1 #
pxor $T2,$Xi
pshufd \$0b01001110,$Xhn,$Xmn
pxor $T1,$Xhi #
diff --git a/crypto/perlasm/cbc.pl b/crypto/perlasm/cbc.pl
index ad79b24..01bafe4 100644
--- a/crypto/perlasm/cbc.pl
+++ b/crypto/perlasm/cbc.pl
@@ -15,7 +15,7 @@
# des_cblock (*ivec);
# int enc;
#
-# calls
+# calls
# des_encrypt((DES_LONG *)tin,schedule,DES_ENCRYPT);
#
@@ -36,7 +36,7 @@ sub cbc
# name is the function name
# enc_func and dec_func and the functions to call for encrypt/decrypt
# swap is true if byte order needs to be reversed
- # iv_off is parameter number for the iv
+ # iv_off is parameter number for the iv
# enc_off is parameter number for the encrypt/decrypt flag
# p1,p2,p3 are the offsets for parameters to be passed to the
# underlying calls.
@@ -114,7 +114,7 @@ sub cbc
#############################################################
&set_label("encrypt_loop");
- # encrypt start
+ # encrypt start
# "eax" and "ebx" hold iv (or the last cipher text)
&mov("ecx", &DWP(0,$in,"",0)); # load first 4 bytes
@@ -208,7 +208,7 @@ sub cbc
#############################################################
#############################################################
&set_label("decrypt",1);
- # decrypt start
+ # decrypt start
&and($count,0xfffffff8);
# The next 2 instructions are only for if the jz is taken
&mov("eax", &DWP($data_off+8,"esp","",0)); # get iv[0]
@@ -350,7 +350,7 @@ sub cbc
&align(64);
&function_end_B($name);
-
+
}
1;
diff --git a/crypto/perlasm/ppc-xlate.pl b/crypto/perlasm/ppc-xlate.pl
index 55b02bc..de796d7 100755
--- a/crypto/perlasm/ppc-xlate.pl
+++ b/crypto/perlasm/ppc-xlate.pl
@@ -36,7 +36,7 @@ my $globl = sub {
my $ret;
$name =~ s|^\.||;
-
+
SWITCH: for ($flavour) {
/aix/ && do { if (!$$type) {
$$type = "\@function";
diff --git a/crypto/perlasm/sparcv9_modes.pl b/crypto/perlasm/sparcv9_modes.pl
index bfdada8..b9922e0 100644
--- a/crypto/perlasm/sparcv9_modes.pl
+++ b/crypto/perlasm/sparcv9_modes.pl
@@ -117,7 +117,7 @@ $::code.=<<___;
brnz,pn $ooff, 2f
sub $len, 1, $len
-
+
std %f0, [$out + 0]
std %f2, [$out + 8]
brnz,pt $len, .L${bits}_cbc_enc_loop
@@ -224,7 +224,7 @@ $::code.=<<___;
call _${alg}${bits}_encrypt_1x
add $inp, 16, $inp
sub $len, 1, $len
-
+
stda %f0, [$out]0xe2 ! ASI_BLK_INIT, T4-specific
add $out, 8, $out
stda %f2, [$out]0xe2 ! ASI_BLK_INIT, T4-specific
@@ -339,7 +339,7 @@ $::code.=<<___;
brnz,pn $ooff, 2f
sub $len, 1, $len
-
+
std %f0, [$out + 0]
std %f2, [$out + 8]
brnz,pt $len, .L${bits}_cbc_dec_loop2x
@@ -445,7 +445,7 @@ $::code.=<<___;
brnz,pn $ooff, 2f
sub $len, 2, $len
-
+
std %f0, [$out + 0]
std %f2, [$out + 8]
std %f4, [$out + 16]
@@ -702,7 +702,7 @@ $::code.=<<___;
brnz,pn $ooff, 2f
sub $len, 1, $len
-
+
std %f0, [$out + 0]
std %f2, [$out + 8]
brnz,pt $len, .L${bits}_ctr32_loop2x
@@ -791,7 +791,7 @@ $::code.=<<___;
brnz,pn $ooff, 2f
sub $len, 2, $len
-
+
std %f0, [$out + 0]
std %f2, [$out + 8]
std %f4, [$out + 16]
@@ -1024,7 +1024,7 @@ $code.=<<___;
brnz,pn $ooff, 2f
sub $len, 1, $len
-
+
std %f0, [$out + 0]
std %f2, [$out + 8]
brnz,pt $len, .L${bits}_xts_${dir}loop2x
@@ -1135,7 +1135,7 @@ $code.=<<___;
brnz,pn $ooff, 2f
sub $len, 2, $len
-
+
std %f0, [$out + 0]
std %f2, [$out + 8]
std %f4, [$out + 16]
diff --git a/crypto/perlasm/x86_64-xlate.pl b/crypto/perlasm/x86_64-xlate.pl
index 617adf9..f615fff 100755
--- a/crypto/perlasm/x86_64-xlate.pl
+++ b/crypto/perlasm/x86_64-xlate.pl
@@ -151,7 +151,7 @@ my %globals;
if ($gas) {
if ($self->{op} eq "movz") { # movz is pain...
sprintf "%s%s%s",$self->{op},$self->{sz},shift;
- } elsif ($self->{op} =~ /^set/) {
+ } elsif ($self->{op} =~ /^set/) {
"$self->{op}";
} elsif ($self->{op} eq "ret") {
my $epilogue = "";
@@ -178,7 +178,7 @@ my %globals;
$self->{op} .= $self->{sz};
} elsif ($self->{op} eq "call" && $current_segment eq ".CRT\$XCU") {
$self->{op} = "\tDQ";
- }
+ }
$self->{op};
}
}
@@ -639,7 +639,7 @@ my %globals;
if ($sz eq "D" && ($current_segment=~/.[px]data/ || $dir eq ".rva"))
{ $var=~s/([_a-z\$\@][_a-z0-9\$\@]*)/$nasm?"$1 wrt ..imagebase":"imagerel $1"/egi; }
$var;
- };
+ };
$sz =~ tr/bvlrq/BWDDQ/;
$self->{value} = "\tD$sz\t";
@@ -649,7 +649,7 @@ my %globals;
};
/\.byte/ && do { my @str=split(/,\s*/,$$line);
map(s/(0b[0-1]+)/oct($1)/eig,@str);
- map(s/0x([0-9a-f]+)/0$1h/ig,@str) if ($masm);
+ map(s/0x([0-9a-f]+)/0$1h/ig,@str) if ($masm);
while ($#str>15) {
$self->{value}.="DB\t"
.join(",", at str[0..15])."\n";
@@ -896,7 +896,7 @@ while(defined(my $line=<>)) {
printf "%s",$directive->out();
} elsif (my $opcode=opcode->re(\$line)) {
my $asm = eval("\$".$opcode->mnemonic());
-
+
if ((ref($asm) eq 'CODE') && scalar(my @bytes=&$asm($line))) {
print $gas?".byte\t":"DB\t",join(',', at bytes),"\n";
next;
@@ -974,7 +974,7 @@ close STDOUT;
# %r13 - -
# %r14 - -
# %r15 - -
-#
+#
# (*) volatile register
# (-) preserved by callee
# (#) Nth argument, volatile
diff --git a/crypto/perlasm/x86nasm.pl b/crypto/perlasm/x86nasm.pl
index 4b664a8..b4d4e2a 100644
--- a/crypto/perlasm/x86nasm.pl
+++ b/crypto/perlasm/x86nasm.pl
@@ -132,7 +132,7 @@ ___
grep {s/(^extern\s+${nmdecor}OPENSSL_ia32cap_P)/\;$1/} @out;
push (@out,$comm)
}
- push (@out,$initseg) if ($initseg);
+ push (@out,$initseg) if ($initseg);
}
sub ::comment { foreach (@_) { push(@out,"\t; $_\n"); } }
diff --git a/crypto/rc4/asm/rc4-c64xplus.pl b/crypto/rc4/asm/rc4-c64xplus.pl
index daed75c..9f282fe 100644
--- a/crypto/rc4/asm/rc4-c64xplus.pl
+++ b/crypto/rc4/asm/rc4-c64xplus.pl
@@ -89,7 +89,7 @@ _RC4:
|| NOP 5
STB $XX,*${KEYA}[-2] ; key->x
|| SUB4 $YY,$TX,$YY
-|| BNOP B3
+|| BNOP B3
STB $YY,*${KEYB}[-1] ; key->y
|| NOP 5
.endasmfunc
diff --git a/crypto/rc4/asm/rc4-md5-x86_64.pl b/crypto/rc4/asm/rc4-md5-x86_64.pl
index 890161b..433ed85 100644
--- a/crypto/rc4/asm/rc4-md5-x86_64.pl
+++ b/crypto/rc4/asm/rc4-md5-x86_64.pl
@@ -51,7 +51,7 @@ my ($rc4,$md5)=(1,1); # what to generate?
my $D="#" if (!$md5); # if set to "#", MD5 is stitched into RC4(),
# but its result is discarded. Idea here is
# to be able to use 'openssl speed rc4' for
- # benchmarking the stitched subroutine...
+ # benchmarking the stitched subroutine...
my $flavour = shift;
my $output = shift;
@@ -419,7 +419,7 @@ $code.=<<___ if ($rc4 && (!$md5 || $D));
and \$63,$len # remaining bytes
jnz .Loop1
jmp .Ldone
-
+
.align 16
.Loop1:
add $TX[0]#b,$YY#b
diff --git a/crypto/rc4/asm/rc4-parisc.pl b/crypto/rc4/asm/rc4-parisc.pl
index 006b6b0..81ec098 100644
--- a/crypto/rc4/asm/rc4-parisc.pl
+++ b/crypto/rc4/asm/rc4-parisc.pl
@@ -98,7 +98,7 @@ sub unrolledloopbody {
for ($i=0;$i<4;$i++) {
$code.=<<___;
ldo 1($XX[0]),$XX[1]
- `sprintf("$LDX %$TY(%$key),%$dat1") if ($i>0)`
+ `sprintf("$LDX %$TY(%$key),%$dat1") if ($i>0)`
and $mask,$XX[1],$XX[1]
$LDX $YY($key),$TY
$MKX $YY,$key,$ix
@@ -166,7 +166,7 @@ RC4
ldo `2*$SZ`($key),$key
ldi 0xff,$mask
- ldi 3,$dat0
+ ldi 3,$dat0
ldo 1($XX[0]),$XX[0] ; warm up loop
and $mask,$XX[0],$XX[0]
diff --git a/crypto/rc4/asm/rc4-x86_64.pl b/crypto/rc4/asm/rc4-x86_64.pl
index aaed2b1..6e07c7c 100755
--- a/crypto/rc4/asm/rc4-x86_64.pl
+++ b/crypto/rc4/asm/rc4-x86_64.pl
@@ -48,7 +48,7 @@
# April 2005
#
-# P4 EM64T core appears to be "allergic" to 64-bit inc/dec. Replacing
+# P4 EM64T core appears to be "allergic" to 64-bit inc/dec. Replacing
# those with add/sub results in 50% performance improvement of folded
# loop...
diff --git a/crypto/ripemd/asm/rmd-586.pl b/crypto/ripemd/asm/rmd-586.pl
index 544c496..f19fdc9 100644
--- a/crypto/ripemd/asm/rmd-586.pl
+++ b/crypto/ripemd/asm/rmd-586.pl
@@ -34,7 +34,7 @@ $KL2=0x6ED9EBA1;
$KL3=0x8F1BBCDC;
$KL4=0xA953FD4E;
$KR0=0x50A28BE6;
-$KR1=0x5C4DD124;
+$KR1=0x5C4DD124;
$KR2=0x6D703EF3;
$KR3=0x7A6D76E9;
@@ -543,28 +543,28 @@ sub ripemd160_block
# &mov($tmp2, &wparam(0)); # Moved into last round
&mov($tmp1, &DWP( 4,$tmp2,"",0)); # ctx->B
- &add($D, $tmp1);
+ &add($D, $tmp1);
&mov($tmp1, &swtmp(16+2)); # $c
&add($D, $tmp1);
&mov($tmp1, &DWP( 8,$tmp2,"",0)); # ctx->C
- &add($E, $tmp1);
+ &add($E, $tmp1);
&mov($tmp1, &swtmp(16+3)); # $d
&add($E, $tmp1);
&mov($tmp1, &DWP(12,$tmp2,"",0)); # ctx->D
- &add($A, $tmp1);
+ &add($A, $tmp1);
&mov($tmp1, &swtmp(16+4)); # $e
&add($A, $tmp1);
&mov($tmp1, &DWP(16,$tmp2,"",0)); # ctx->E
- &add($B, $tmp1);
+ &add($B, $tmp1);
&mov($tmp1, &swtmp(16+0)); # $a
&add($B, $tmp1);
&mov($tmp1, &DWP( 0,$tmp2,"",0)); # ctx->A
- &add($C, $tmp1);
+ &add($C, $tmp1);
&mov($tmp1, &swtmp(16+1)); # $b
&add($C, $tmp1);
diff --git a/crypto/sha/asm/sha1-586.pl b/crypto/sha/asm/sha1-586.pl
index 0efed70..3bf8200 100644
--- a/crypto/sha/asm/sha1-586.pl
+++ b/crypto/sha/asm/sha1-586.pl
@@ -133,7 +133,7 @@ $ymm=1 if ($xmm &&
=~ /GNU assembler version ([2-9]\.[0-9]+)/ &&
$1>=2.19); # first version supporting AVX
-$ymm=1 if ($xmm && !$ymm && $ARGV[0] eq "win32n" &&
+$ymm=1 if ($xmm && !$ymm && $ARGV[0] eq "win32n" &&
`nasm -v 2>&1` =~ /NASM version ([2-9]\.[0-9]+)/ &&
$1>=2.03); # first version supporting AVX
diff --git a/crypto/sha/asm/sha1-mb-x86_64.pl b/crypto/sha/asm/sha1-mb-x86_64.pl
index 51c73c0..2f6b35f 100644
--- a/crypto/sha/asm/sha1-mb-x86_64.pl
+++ b/crypto/sha/asm/sha1-mb-x86_64.pl
@@ -95,7 +95,7 @@ $K="%xmm15";
if (1) {
# Atom-specific optimization aiming to eliminate pshufb with high
- # registers [and thus get rid of 48 cycles accumulated penalty]
+ # registers [and thus get rid of 48 cycles accumulated penalty]
@Xi=map("%xmm$_",(0..4));
($tx,$t0,$t1,$t2,$t3)=map("%xmm$_",(5..9));
@V=($A,$B,$C,$D,$E)=map("%xmm$_",(10..14));
@@ -126,7 +126,7 @@ my $k=$i+2;
# ...
# $i==13: 14,15,15,15,
# $i==14: 15
-#
+#
# Then at $i==15 Xupdate is applied one iteration in advance...
$code.=<<___ if ($i==0);
movd (@ptr[0]),@Xi[0]
diff --git a/crypto/sha/asm/sha1-sparcv9.pl b/crypto/sha/asm/sha1-sparcv9.pl
index 7437ff4..cdd5b9a 100644
--- a/crypto/sha/asm/sha1-sparcv9.pl
+++ b/crypto/sha/asm/sha1-sparcv9.pl
@@ -227,7 +227,7 @@ sha1_block_data_order:
ldd [%o1 + 0x20], %f16
ldd [%o1 + 0x28], %f18
ldd [%o1 + 0x30], %f20
- subcc %o2, 1, %o2 ! done yet?
+ subcc %o2, 1, %o2 ! done yet?
ldd [%o1 + 0x38], %f22
add %o1, 0x40, %o1
prefetch [%o1 + 63], 20
diff --git a/crypto/sha/asm/sha1-sparcv9a.pl b/crypto/sha/asm/sha1-sparcv9a.pl
index f9ed563..8dfde46 100644
--- a/crypto/sha/asm/sha1-sparcv9a.pl
+++ b/crypto/sha/asm/sha1-sparcv9a.pl
@@ -519,7 +519,7 @@ $code.=<<___;
mov $Cctx,$C
mov $Dctx,$D
mov $Ectx,$E
- alignaddr %g0,$tmp0,%g0
+ alignaddr %g0,$tmp0,%g0
dec 1,$len
ba .Loop
mov $nXfer,$Xfer
diff --git a/crypto/sha/asm/sha1-x86_64.pl b/crypto/sha/asm/sha1-x86_64.pl
index 97baae3..66054ce 100755
--- a/crypto/sha/asm/sha1-x86_64.pl
+++ b/crypto/sha/asm/sha1-x86_64.pl
@@ -262,7 +262,7 @@ sha1_block_data_order:
jz .Lialu
___
$code.=<<___ if ($shaext);
- test \$`1<<29`,%r10d # check SHA bit
+ test \$`1<<29`,%r10d # check SHA bit
jnz _shaext_shortcut
___
$code.=<<___ if ($avx>1);
diff --git a/crypto/sha/asm/sha256-586.pl b/crypto/sha/asm/sha256-586.pl
index 6af1d84..8e7f4ee 100644
--- a/crypto/sha/asm/sha256-586.pl
+++ b/crypto/sha/asm/sha256-586.pl
@@ -47,7 +47,7 @@
#
# Performance in clock cycles per processed byte (less is better):
#
-# gcc icc x86 asm(*) SIMD x86_64 asm(**)
+# gcc icc x86 asm(*) SIMD x86_64 asm(**)
# Pentium 46 57 40/38 - -
# PIII 36 33 27/24 - -
# P4 41 38 28 - 17.3
@@ -276,7 +276,7 @@ my $suffix=shift;
&mov ($Coff,"ecx");
&mov ($Doff,"edi");
&mov (&DWP(0,"esp"),"ebx"); # magic
- &mov ($E,&DWP(16,"esi"));
+ &mov ($E,&DWP(16,"esi"));
&mov ("ebx",&DWP(20,"esi"));
&mov ("ecx",&DWP(24,"esi"));
&mov ("edi",&DWP(28,"esi"));
@@ -385,7 +385,7 @@ my @AH=($A,$K256);
&xor ($AH[1],"ecx"); # magic
&mov (&DWP(8,"esp"),"ecx");
&mov (&DWP(12,"esp"),"ebx");
- &mov ($E,&DWP(16,"esi"));
+ &mov ($E,&DWP(16,"esi"));
&mov ("ebx",&DWP(20,"esi"));
&mov ("ecx",&DWP(24,"esi"));
&mov ("esi",&DWP(28,"esi"));
diff --git a/crypto/sha/asm/sha256-mb-x86_64.pl b/crypto/sha/asm/sha256-mb-x86_64.pl
index fbcd29f..b8a77c7 100644
--- a/crypto/sha/asm/sha256-mb-x86_64.pl
+++ b/crypto/sha/asm/sha256-mb-x86_64.pl
@@ -36,7 +36,7 @@
# (iii) "this" is for n=8, when we gather twice as much data, result
# for n=4 is 20.3+4.44=24.7;
# (iv) presented improvement coefficients are asymptotic limits and
-# in real-life application are somewhat lower, e.g. for 2KB
+# in real-life application are somewhat lower, e.g. for 2KB
# fragments they range from 75% to 130% (on Haswell);
$flavour = shift;
diff --git a/crypto/sha/asm/sha512-586.pl b/crypto/sha/asm/sha512-586.pl
index 0887e06..94cc011 100644
--- a/crypto/sha/asm/sha512-586.pl
+++ b/crypto/sha/asm/sha512-586.pl
@@ -383,7 +383,7 @@ if ($sse2) {
&set_label("16_79_sse2",16);
for ($j=0;$j<2;$j++) { # 2x unroll
- #&movq ("mm7",&QWP(8*(9+16-1),"esp")); # prefetched in BODY_00_15
+ #&movq ("mm7",&QWP(8*(9+16-1),"esp")); # prefetched in BODY_00_15
&movq ("mm5",&QWP(8*(9+16-14),"esp"));
&movq ("mm1","mm7");
&psrlq ("mm7",1);
diff --git a/crypto/sha/asm/sha512-armv8.pl b/crypto/sha/asm/sha512-armv8.pl
index c1aaf77..620aa39 100644
--- a/crypto/sha/asm/sha512-armv8.pl
+++ b/crypto/sha/asm/sha512-armv8.pl
@@ -26,7 +26,7 @@
# Denver 2.01 10.5 (+26%) 6.70 (+8%)
# X-Gene 20.0 (+100%) 12.8 (+300%(***))
# Mongoose 2.36 13.0 (+50%) 8.36 (+33%)
-#
+#
# (*) Software SHA256 results are of lesser relevance, presented
# mostly for informational purposes.
# (**) The result is a trade-off: it's possible to improve it by
diff --git a/crypto/sha/asm/sha512-parisc.pl b/crypto/sha/asm/sha512-parisc.pl
index fcb6157..d28a5af 100755
--- a/crypto/sha/asm/sha512-parisc.pl
+++ b/crypto/sha/asm/sha512-parisc.pl
@@ -368,7 +368,7 @@ L\$parisc1
___
@V=( $Ahi, $Alo, $Bhi, $Blo, $Chi, $Clo, $Dhi, $Dlo,
- $Ehi, $Elo, $Fhi, $Flo, $Ghi, $Glo, $Hhi, $Hlo) =
+ $Ehi, $Elo, $Fhi, $Flo, $Ghi, $Glo, $Hhi, $Hlo) =
( "%r1", "%r2", "%r3", "%r4", "%r5", "%r6", "%r7", "%r8",
"%r9","%r10","%r11","%r12","%r13","%r14","%r15","%r16");
$a0 ="%r17";
@@ -419,7 +419,7 @@ $code.=<<___;
add $t0,$hlo,$hlo
shd $ahi,$alo,$Sigma0[0],$t0
addc $t1,$hhi,$hhi ; h += Sigma1(e)
- shd $alo,$ahi,$Sigma0[0],$t1
+ shd $alo,$ahi,$Sigma0[0],$t1
add $a0,$hlo,$hlo
shd $ahi,$alo,$Sigma0[1],$t2
addc $a1,$hhi,$hhi ; h += Ch(e,f,g)
diff --git a/crypto/sha/asm/sha512-s390x.pl b/crypto/sha/asm/sha512-s390x.pl
index 582d393..92d7a77 100644
--- a/crypto/sha/asm/sha512-s390x.pl
+++ b/crypto/sha/asm/sha512-s390x.pl
@@ -311,7 +311,7 @@ $code.=<<___;
cl${g} $inp,`$frame+4*$SIZE_T`($sp)
jne .Lloop
- lm${g} %r6,%r15,`$frame+6*$SIZE_T`($sp)
+ lm${g} %r6,%r15,`$frame+6*$SIZE_T`($sp)
br %r14
.size $Func,.-$Func
.string "SHA${label} block transform for s390x, CRYPTOGAMS by <appro\@openssl.org>"
diff --git a/crypto/sha/asm/sha512-sparcv9.pl b/crypto/sha/asm/sha512-sparcv9.pl
index 4a1ce5f..098c2a1 100644
--- a/crypto/sha/asm/sha512-sparcv9.pl
+++ b/crypto/sha/asm/sha512-sparcv9.pl
@@ -102,7 +102,7 @@ if ($output =~ /512/) {
$locals=0; # X[16] is register resident
@X=("%o0","%o1","%o2","%o3","%o4","%o5","%g1","%o7");
-
+
$A="%l0";
$B="%l1";
$C="%l2";
@@ -254,7 +254,7 @@ $code.=<<___;
$SLL $a,`$SZ*8-@Sigma0[1]`,$tmp1
xor $tmp0,$h,$h
$SRL $a,@Sigma0[2],$tmp0
- xor $tmp1,$h,$h
+ xor $tmp1,$h,$h
$SLL $a,`$SZ*8-@Sigma0[0]`,$tmp1
xor $tmp0,$h,$h
xor $tmp1,$h,$h ! Sigma0(a)
diff --git a/crypto/sha/asm/sha512-x86_64.pl b/crypto/sha/asm/sha512-x86_64.pl
index 63a6265..01bbb77 100755
--- a/crypto/sha/asm/sha512-x86_64.pl
+++ b/crypto/sha/asm/sha512-x86_64.pl
@@ -1782,7 +1782,7 @@ if ($avx>1) {{
######################################################################
# AVX2+BMI code path
#
-my $a5=$SZ==4?"%esi":"%rsi"; # zap $inp
+my $a5=$SZ==4?"%esi":"%rsi"; # zap $inp
my $PUSH8=8*2*$SZ;
use integer;
diff --git a/crypto/ts/ts_rsp_verify.c b/crypto/ts/ts_rsp_verify.c
index 2755dd0..66f5be6 100644
--- a/crypto/ts/ts_rsp_verify.c
+++ b/crypto/ts/ts_rsp_verify.c
@@ -480,7 +480,7 @@ static char *ts_get_status_text(STACK_OF(ASN1_UTF8STRING) *text)
return result;
}
-static int ts_check_policy(const ASN1_OBJECT *req_oid,
+static int ts_check_policy(const ASN1_OBJECT *req_oid,
const TS_TST_INFO *tst_info)
{
const ASN1_OBJECT *resp_oid = tst_info->policy_id;
diff --git a/crypto/whrlpool/asm/wp-mmx.pl b/crypto/whrlpool/asm/wp-mmx.pl
index f63945c..d628d56 100644
--- a/crypto/whrlpool/asm/wp-mmx.pl
+++ b/crypto/whrlpool/asm/wp-mmx.pl
@@ -31,7 +31,7 @@
# multiplying 64 by CPU clock frequency and dividing by relevant
# value from the given table:
#
-# $SCALE=2/8 icc8 gcc3
+# $SCALE=2/8 icc8 gcc3
# Intel P4 3200/4600 4600(*) 6400
# Intel PIII 2900/3000 4900 5400
# AMD K[78] 2500/1800 9900 8200(**)
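
To make that rule of thumb concrete (an illustration only, with an assumed 3.2 GHz clock that is not part of the table): for the Intel P4 row, the $SCALE=2 assembler figure gives 64 * 3.2e9 / 3200, i.e. roughly 64 million per second (on the order of 64 MB/s), while the gcc3-built C figure of 6400 works out to about half that.
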
@@ -502,6 +502,6 @@ for($i=0;$i<8;$i++) {
&L(0xca,0x2d,0xbf,0x07,0xad,0x5a,0x83,0x33);
&function_end_B("whirlpool_block_mmx");
-&asm_finish();
+&asm_finish();
close STDOUT;
diff --git a/crypto/x509v3/v3_enum.c b/crypto/x509v3/v3_enum.c
index f39cb5a..3b0f197 100644
--- a/crypto/x509v3/v3_enum.c
+++ b/crypto/x509v3/v3_enum.c
@@ -38,7 +38,7 @@ const X509V3_EXT_METHOD v3_crl_reason = {
crl_reasons
};
-char *i2s_ASN1_ENUMERATED_TABLE(X509V3_EXT_METHOD *method,
+char *i2s_ASN1_ENUMERATED_TABLE(X509V3_EXT_METHOD *method,
const ASN1_ENUMERATED *e)
{
ENUMERATED_NAMES *enam;
diff --git a/crypto/x509v3/v3_skey.c b/crypto/x509v3/v3_skey.c
index 39597dc..749f51b 100644
--- a/crypto/x509v3/v3_skey.c
+++ b/crypto/x509v3/v3_skey.c
@@ -24,7 +24,7 @@ const X509V3_EXT_METHOD v3_skey_id = {
NULL
};
-char *i2s_ASN1_OCTET_STRING(X509V3_EXT_METHOD *method,
+char *i2s_ASN1_OCTET_STRING(X509V3_EXT_METHOD *method,
const ASN1_OCTET_STRING *oct)
{
return OPENSSL_buf2hexstr(oct->data, oct->length);
diff --git a/crypto/x86cpuid.pl b/crypto/x86cpuid.pl
index c45b183..176e8e3 100644
--- a/crypto/x86cpuid.pl
+++ b/crypto/x86cpuid.pl
@@ -89,7 +89,7 @@ for (@ARGV) { $sse2=1 if (/-DOPENSSL_IA32_SSE2/); }
&ja (&label("generic"));
&and ("edx",0xefffffff); # clear hyper-threading bit
&jmp (&label("generic"));
-
+
&set_label("intel");
&cmp ("edi",7);
&jb (&label("cacheinfo"));
diff --git a/engines/asm/e_padlock-x86_64.pl b/engines/asm/e_padlock-x86_64.pl
index da285ab..834b1ea 100644
--- a/engines/asm/e_padlock-x86_64.pl
+++ b/engines/asm/e_padlock-x86_64.pl
@@ -535,7 +535,7 @@ $code.=<<___ if ($PADLOCK_PREFETCH{$mode});
sub $len,%rsp
shr \$3,$len
lea (%rsp),$out
- .byte 0xf3,0x48,0xa5 # rep movsq
+ .byte 0xf3,0x48,0xa5 # rep movsq
lea (%r8),$out
lea (%rsp),$inp
mov $chunk,$len
diff --git a/include/openssl/x509.h b/include/openssl/x509.h
index c8996f3..038cef9 100644
--- a/include/openssl/x509.h
+++ b/include/openssl/x509.h
@@ -805,7 +805,7 @@ X509_NAME_ENTRY *X509_NAME_ENTRY_create_by_txt(X509_NAME_ENTRY **ne,
const unsigned char *bytes,
int len);
X509_NAME_ENTRY *X509_NAME_ENTRY_create_by_NID(X509_NAME_ENTRY **ne, int nid,
- int type,
+ int type,
const unsigned char *bytes,
int len);
int X509_NAME_add_entry_by_txt(X509_NAME *name, const char *field, int type,
diff --git a/ssl/packet.c b/ssl/packet.c
index 2a8fe25..27462e9 100644
--- a/ssl/packet.c
+++ b/ssl/packet.c
@@ -178,7 +178,7 @@ static int wpacket_intern_close(WPACKET *pkt)
}
/* Write out the WPACKET length if needed */
- if (sub->lenbytes > 0
+ if (sub->lenbytes > 0
&& !put_value((unsigned char *)&pkt->buf->data[sub->packet_len],
packlen, sub->lenbytes))
return 0;
diff --git a/ssl/packet_locl.h b/ssl/packet_locl.h
index 55e41bb..cee1400 100644
--- a/ssl/packet_locl.h
+++ b/ssl/packet_locl.h
@@ -707,7 +707,7 @@ int WPACKET_sub_allocate_bytes__(WPACKET *pkt, size_t len,
* maximum size will be. If this function is used, then it should be immediately
* followed by a WPACKET_allocate_bytes() call before any other WPACKET
* functions are called (unless the write to the allocated bytes is abandoned).
- *
+ *
* For example: If we are generating a signature, then the size of that
* signature may not be known in advance. We can use WPACKET_reserve_bytes() to
* handle this:
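
A minimal sketch of the reserve-then-allocate pattern this comment describes (an illustrative sketch rather than anything taken from the patch; it assumes the WPACKET_reserve_bytes()/WPACKET_allocate_bytes() declarations from ssl/packet_locl.h, and sign_cb() plus the EVP_PKEY argument are hypothetical placeholders for the real signing code):

    /* Sketch: write a signature of initially unknown length into pkt. */
    static int write_signature(WPACKET *pkt, EVP_PKEY *pkey,
                               int (*sign_cb)(unsigned char *out, size_t *outlen))
    {
        unsigned char *sig1, *sig2;
        size_t siglen;

        /* Reserve worst-case space; nothing is counted as written yet. */
        if (!WPACKET_reserve_bytes(pkt, EVP_PKEY_size(pkey), &sig1)
                /* Produce the signature directly into the reserved area. */
                || !sign_cb(sig1, &siglen)
                /* Commit exactly the bytes produced; the allocation must
                 * start where the reservation did. */
                || !WPACKET_allocate_bytes(pkt, siglen, &sig2)
                || sig1 != sig2)
            return 0;
        return 1;
    }

Comparing sig1 and sig2 simply checks that no other WPACKET call slipped in between the reserve and the allocate, which the comment above warns against.
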
diff --git a/test/pkits-test.pl b/test/pkits-test.pl
index ae7279c..41444f1 100644
--- a/test/pkits-test.pl
+++ b/test/pkits-test.pl
@@ -6,7 +6,7 @@
# in the file LICENSE in the source distribution or at
# https://www.openssl.org/source/license.html
-# Perl utility to run PKITS tests for RFC3280 compliance.
+# Perl utility to run PKITS tests for RFC3280 compliance.
my $ossl_path;
diff --git a/test/recipes/tconversion.pl b/test/recipes/tconversion.pl
index e5fa9de..1655cd4 100644
--- a/test/recipes/tconversion.pl
+++ b/test/recipes/tconversion.pl
@@ -23,7 +23,7 @@ my %conversionforms = (
sub tconversion {
my $testtype = shift;
my $t = shift;
- my @conversionforms =
+ my @conversionforms =
defined($conversionforms{$testtype}) ?
@{$conversionforms{$testtype}} :
@{$conversionforms{"*"}};
diff --git a/test/wpackettest.c b/test/wpackettest.c
index 2cb9e72..aabf781 100644
--- a/test/wpackettest.c
+++ b/test/wpackettest.c
@@ -115,7 +115,7 @@ static int test_WPACKET_set_max_size(void)
|| !WPACKET_set_max_size(&pkt, SIZE_MAX)
|| !WPACKET_finish(&pkt)) {
testfail("test_WPACKET_set_max_size():1 failed\n", &pkt);
- return 0;
+ return 0;
}
if (!WPACKET_init_len(&pkt, buf, 1)
diff --git a/util/ck_errf.pl b/util/ck_errf.pl
index 7fc5367..01ed905 100755
--- a/util/ck_errf.pl
+++ b/util/ck_errf.pl
@@ -8,7 +8,7 @@
# This is just a quick script to scan for cases where the 'error'
# function name in a XXXerr() macro is wrong.
-#
+#
# Run in the top level by going
# perl util/ck_errf.pl */*.c */*/*.c
#
diff --git a/util/copy.pl b/util/copy.pl
index ef4d870..c4aeea6 100644
--- a/util/copy.pl
+++ b/util/copy.pl
@@ -40,7 +40,7 @@ if ($fnum <= 1)
}
$dest = pop @filelist;
-
+
if ($fnum > 2 && ! -d $dest)
{
die "Destination must be a directory";
@@ -73,5 +73,5 @@ foreach (@filelist)
close(OUT);
print "Copying: $_ to $dfile\n";
}
-
+
diff --git a/util/fipslink.pl b/util/fipslink.pl
index 18a9153..bb685bf 100644
--- a/util/fipslink.pl
+++ b/util/fipslink.pl
@@ -109,7 +109,7 @@ sub check_hash
$hashval =~ s/^.*=\s+//;
die "Invalid hash syntax in file" if (length($hashfile) != 40);
die "Invalid hash received for file" if (length($hashval) != 40);
- die "***HASH VALUE MISMATCH FOR FILE $filename ***" if ($hashval ne $hashfile);
+ die "***HASH VALUE MISMATCH FOR FILE $filename ***" if ($hashval ne $hashfile);
}
diff --git a/util/mkdef.pl b/util/mkdef.pl
index b54c925..7640293 100755
--- a/util/mkdef.pl
+++ b/util/mkdef.pl
@@ -24,7 +24,7 @@
# existence:platform:kind:algorithms
#
# - "existence" can be "EXIST" or "NOEXIST" depending on if the symbol is
-# found somewhere in the source,
+# found somewhere in the source,
# - "platforms" is empty if it exists on all platforms, otherwise it contains
# comma-separated list of the platform, just as they are if the symbol exists
# for those platforms, or prepended with a "!" if not. This helps resolve
@@ -172,7 +172,7 @@ foreach (@ARGV, split(/ /, $config{options}))
$do_ssl=1 if $_ eq "libssl";
if ($_ eq "ssl") {
- $do_ssl=1;
+ $do_ssl=1;
$libname=$_
}
$do_crypto=1 if $_ eq "libcrypto";
@@ -211,7 +211,7 @@ foreach (@ARGV, split(/ /, $config{options}))
}
-if (!$libname) {
+if (!$libname) {
if ($do_ssl) {
$libname="LIBSSL";
}
@@ -339,7 +339,7 @@ if($do_crypto == 1) {
}
&update_numbers(*OUT,"LIBCRYPTO",*crypto_list,$max_crypto, at crypto_symbols);
close OUT;
-}
+}
} elsif ($do_checkexist) {
&check_existing(*ssl_list, @ssl_symbols)
diff --git a/util/mkerr.pl b/util/mkerr.pl
index 79c8cfc..4645658 100644
--- a/util/mkerr.pl
+++ b/util/mkerr.pl
@@ -97,7 +97,7 @@ Options:
Default: keep previously assigned numbers. (You are warned
when collisions are detected.)
- -nostatic Generates a different source code, where these additional
+ -nostatic Generates a different source code, where these additional
functions are generated for each library specified in the
config file:
void ERR_load_<LIB>_strings(void);
@@ -105,7 +105,7 @@ Options:
void ERR_<LIB>_error(int f, int r, char *fn, int ln);
#define <LIB>err(f,r) ERR_<LIB>_error(f,r,OPENSSL_FILE,OPENSSL_LINE)
while the code facilitates the use of these in an environment
- where the error support routines are dynamically loaded at
+ where the error support routines are dynamically loaded at
runtime.
Default: 'static' code generation.
@@ -114,8 +114,8 @@ Options:
-unref Print out unreferenced function and reason codes.
- -write Actually (over)write the generated code to the header and C
- source files as assigned to each library through the config
+ -write Actually (over)write the generated code to the header and C
+ source files as assigned to each library through the config
file.
Default: don't write.
@@ -196,7 +196,7 @@ while (($hdr, $lib) = each %libinc)
if(/\/\*/) {
if (not /\*\//) { # multiline comment...
$line = $_; # ... just accumulate
- next;
+ next;
} else {
s/\/\*.*?\*\///gs; # wipe it
}
@@ -370,7 +370,7 @@ foreach $file (@source) {
print STDERR "ERROR: mismatch $file:$linenr $func:$3\n";
$errcount++;
}
- print STDERR "Function: $1\t= $fcodes{$1} (lib: $2, name: $3)\n" if $debug;
+ print STDERR "Function: $1\t= $fcodes{$1} (lib: $2, name: $3)\n" if $debug;
}
if(/(([A-Z0-9]+)_R_[A-Z0-9_]+)/) {
next unless exists $csrc{$2};
@@ -379,8 +379,8 @@ foreach $file (@source) {
$rcodes{$1} = "X";
$rnew{$2}++;
}
- print STDERR "Reason: $1\t= $rcodes{$1} (lib: $2)\n" if $debug;
- }
+ print STDERR "Reason: $1\t= $rcodes{$1} (lib: $2)\n" if $debug;
+ }
}
close IN;
}
diff --git a/util/su-filter.pl b/util/su-filter.pl
index 5996f58..389c7c3 100644
--- a/util/su-filter.pl
+++ b/util/su-filter.pl
@@ -108,7 +108,7 @@ sub structureData {
if($inbrace) {
if($item eq "}") {
$inbrace --;
-
+
if(!$inbrace) {
$substruc = structureData($dataitem);
$dataitem = $substruc;