On Sun, Apr 24, 2022 at 1:54 PM Mikulas Patocka <[email protected]> wrote:
>
> + *
> + * Explanation of the logic:
> + * (ch - '9' - 1) is negative if ch <= '9'
> + * ('0' - 1 - ch) is negative if ch >= '0'
True, but...
Please, just to make me happier, make the sign of 'ch' be something
very explicit. Right now that code uses 'char ch', which could be
signed or unsigned.
It doesn't really matter in this case, since the arithmetic will be
done in 'int', and as long as 'int' is larger than 'char' as a type
(to be really nit-picky), it all ends up working ok regardless.
But just to make me happier, and to make the algorithm actually do the
_same_ thing on every architecture, please use an explicit signedness
for that 'ch' type.
Because then that 'ch >= X' is well-defined.
Again - your code _works_. That's not what I worry about. But when
playing these kinds of tricks, please make it have the same behavior
across architectures, not just "the end result will be the same
regardless".
Yes, a 'ch' with the high bit set will always be either >= '0' or <=
'9', but note how *which* one it is depends on the exact type, and
'char' is simply not well-defined.
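To make that concrete, here's a throwaway user-space example (not the
kernel code, just an illustration) of how the two subtractions flip sign
for the byte 0x80 depending on the signedness of 'char':

#include <stdio.h>

int main(void)
{
	signed char s = -128;	/* the byte 0x80 seen through a signed char */
	unsigned char u = 0x80;	/* the same byte seen through an unsigned char */

	/* the arithmetic is done in 'int' after the usual promotions */
	printf("signed:   ch - '9' - 1 = %d, '0' - 1 - ch = %d\n",
	       s - '9' - 1, '0' - 1 - s);	/* -186 and 175: "ch <= '9'" */
	printf("unsigned: ch - '9' - 1 = %d, '0' - 1 - ch = %d\n",
	       u - '9' - 1, '0' - 1 - u);	/* 70 and -81: "ch >= '0'" */
	return 0;
}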
Finally, for the same reason - please don't use ">> 8". Because I do
not believe that bit 8 is well-defined in your arithmetic. The *sign*
bit will be, but I'm not convinced bit 8 is.
So use ">> 31" or similar.
Also, I do worry that this is *exactly* the kind of trick that a
compiler could easily turn back into a conditional. Usually compilers
tend to go the other way (ie turn conditionals into arithmetic if
possible), but..
Linus
On Sun, 24 Apr 2022, Linus Torvalds wrote:
> On Sun, Apr 24, 2022 at 1:54 PM Mikulas Patocka <[email protected]> wrote:
> >
> > + *
> > + * Explanation of the logic:
> > + * (ch - '9' - 1) is negative if ch <= '9'
> > + * ('0' - 1 - ch) is negative if ch >= '0'
>
> True, but...
>
> Please, just to make me happier, make the sign of 'ch' be something
> very explicit. Right now that code uses 'char ch', which could be
> signed or unsigned.
OK, I fixed it; here is the second version.
> Finally, for the same reason - please don't use ">> 8". Because I do
> not believe that bit 8 is well-defined in your arithmetic. The *sign*
We are subtracting values that are in the 0 ... 255 range. So, the result
will be in the -255 ... 255 range, which means that bits 8 to 31 of the
result are all the same - they are just copies of the sign bit.
> bit will be, but I'm not convinced bit 8 is.
>
> So use ">> 31" or similar.
We can pick any shift amount between 8 and 27 - the mask still has to
cover the largest value we "and" with it, which is 16 for 'F'. 31 won't
work because the C standard doesn't specify that a right shift of a
negative value keeps the sign bit.
To make it standard-compliant, I added a cast of the int value to unsigned
(with that cast in place, a shift by 31 would also be well-defined, but it
would leave only a single bit rather than a mask). We then have an unsigned
value whose bits 8 to 31 are all equal. When we shift it 8 bits to the
right, we get either 0 or 0xffffff - and this value is suitable for masking.
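For illustration, this is how the digit half of the expression behaves for
two sample characters (a throwaway userspace sketch, not the patch itself;
the 'A' ... 'F' half works the same way):

#include <stdio.h>

static void trace(unsigned char ch)
{
	int x = (ch - '9' - 1) & ('0' - 1 - ch);	/* negative iff ch is '0' ... '9' */
	unsigned int mask = (unsigned int)x >> 8;	/* 0x00ffffff or 0 */
	int val = -1 + (int)((ch - '0' + 1) & mask);	/* digit half of hex_to_bin() */

	printf("'%c': x = %#x, mask = %#x, val = %d\n",
	       ch, (unsigned int)x, mask, val);
}

int main(void)
{
	trace('5');	/* x = 0xfffffffa, mask = 0xffffff, val = 5 */
	trace('z');	/* x = 0, mask = 0, val = -1 */
	return 0;
}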
> Also, I do worry that this is *exactly* the kind of trick that a
> compiler could easily turn back into a conditional. Usually compilers
> tend to go the other way (ie turn conditionals into arithmetic if
> possible), but..
Some old version that I tried used "(ch - '0' + 1) * ((unsigned)(ch - '0')
<= 9)" - it worked with gcc, but clang was too smart and turned it into a
cmov when compiling for i686 and to a conditional branch when compiling
for i586.
Another version used "-(c - '0' + 1) * (((unsigned)(c - '0') >= 10) - 1)"
- it almost worked, except that clang still turned it into a conditional
jump on sparc32 and powerpc32.
So, I came up with this version that avoids comparison operators altogether
and tested it with gcc and clang on all architectures that I could try.
Mikulas
> Linus
>
From: Mikulas Patocka <[email protected]>
The function hex2bin is used to load cryptographic keys into the device mapper
targets dm-crypt and dm-integrity. It should take constant time,
independent of the processed data, so that concurrently running
unprivileged code can't infer any information about the keys via
microarchitectural covert channels.
This patch changes the function hex_to_bin so that it contains no branches
and no memory accesses.
Note that this shouldn't cause performance degradation because the size of
the new function is the same as the size of the old function (on x86-64) -
and the new function causes no branch misprediction penalties.
I compile-tested this function with gcc on aarch64 alpha arm hppa hppa64
i386 ia64 m68k mips32 mips64 powerpc powerpc64 riscv sh4 s390x sparc32
sparc64 x86_64 and with clang on aarch64 arm hexagon i386 mips32 mips64
powerpc powerpc64 s390x sparc32 sparc64 x86_64 to verify that there are no
branches in the generated code.
Signed-off-by: Mikulas Patocka <[email protected]>
Cc: [email protected]
---
include/linux/kernel.h | 2 +-
lib/hexdump.c | 32 +++++++++++++++++++++++++-------
2 files changed, 26 insertions(+), 8 deletions(-)
Index: linux-2.6/lib/hexdump.c
===================================================================
--- linux-2.6.orig/lib/hexdump.c 2022-04-24 18:51:20.000000000 +0200
+++ linux-2.6/lib/hexdump.c 2022-04-25 13:15:26.000000000 +0200
@@ -22,15 +22,33 @@ EXPORT_SYMBOL(hex_asc_upper);
*
* hex_to_bin() converts one hex digit to its actual value or -1 in case of bad
* input.
+ *
+ * This function is used to load cryptographic keys, so it is coded in such a
+ * way that there are no conditions or memory accesses that depend on data.
+ *
+ * Explanation of the logic:
+ * (ch - '9' - 1) is negative if ch <= '9'
+ * ('0' - 1 - ch) is negative if ch >= '0'
+ * we "and" these two values, so the result is negative if ch is in the range
+ * '0' ... '9'
+ * we are only interested in the sign, so we do a shift ">> 8"; note that right
+ * shift of a negative value is implementation-defined, so we cast the
+ * value to (unsigned) before the shift --- we have 0xffffff if ch is in
+ * the range '0' ... '9', 0 otherwise
+ * we "and" this value with (ch - '0' + 1) --- we have a value 1 ... 10 if ch is
+ * in the range '0' ... '9', 0 otherwise
+ * we add this value to -1 --- we have a value 0 ... 9 if ch is in the range '0'
+ * ... '9', -1 otherwise
+ * the next line is similar to the previous one, but we need to decode both
+ * uppercase and lowercase letters, so we use (ch & 0xdf), which converts
+ * lowercase to uppercase
*/
-int hex_to_bin(char ch)
+int hex_to_bin(unsigned char ch)
{
- if ((ch >= '0') && (ch <= '9'))
- return ch - '0';
- ch = tolower(ch);
- if ((ch >= 'a') && (ch <= 'f'))
- return ch - 'a' + 10;
- return -1;
+ unsigned char cu = ch & 0xdf;
+ return -1 +
+ ((ch - '0' + 1) & (unsigned)((ch - '9' - 1) & ('0' - 1 - ch)) >> 8) +
+ ((cu - 'A' + 11) & (unsigned)((cu - 'F' - 1) & ('A' - 1 - cu)) >> 8);
}
EXPORT_SYMBOL(hex_to_bin);
Index: linux-2.6/include/linux/kernel.h
===================================================================
--- linux-2.6.orig/include/linux/kernel.h 2022-04-21 17:31:39.000000000 +0200
+++ linux-2.6/include/linux/kernel.h 2022-04-25 07:33:43.000000000 +0200
@@ -285,7 +285,7 @@ static inline char *hex_byte_pack_upper(
return buf;
}
-extern int hex_to_bin(char ch);
+extern int hex_to_bin(unsigned char ch);
extern int __must_check hex2bin(u8 *dst, const char *src, size_t count);
extern char *bin2hex(char *dst, const void *src, size_t count);
On Mon, Apr 25, 2022 at 5:07 AM Mikulas Patocka <[email protected]> wrote:
>
> We are subtracting values that are in the 0 ... 255 range.
Well, except that's not what the original patch did.
It was subtracting values that were in the -128 ... 255 range (where
the exact range depended on the sign of 'char').
But yeah, I think bit 8 was always safe. Probably. Particularly as the
possible range across different architectures is bigger than the range
within one _particular_ architecture (so you'll never see "255 - -128"
even when the sign wasn't defined ;)
> > Also, I do worry that this is *exactly* the kind of trick that a
> > compiler could easily turn back into a conditional. Usually compilers
> > tend to go the other way (ie turn conditionals into arithmetic if
> > possible), but..
>
> Some old version that I tried used "(ch - '0' + 1) * ((unsigned)(ch - '0')
> <= 9)" - it worked with gcc, but clang was too smart and turned it into a
> cmov when compiling for i686 and to a conditional branch when compiling
> for i586.
>
> Another version used "-(c - '0' + 1) * (((unsigned)(c - '0') >= 10) - 1)"
> - it almost worked, except that clang still turned it into a conditional
> jump on sparc32 and powerpc32.
>
> So, I came up with this version that avoids comparison operators altogether
> and tested it with gcc and clang on all architectures that I could try.
Yeah, the thing about those compiler heuristics is that they are often
based on almost arbitrary patterns that just happen to be what
somebody has found in some benchmark.
Hopefully nobody ever uses something like this as a benchmark.
Linus
On Wed, May 4, 2022 at 11:47 AM Milan Broz <[email protected]> wrote:
> BTW we use exactly the same code from Mikulas in cryptsetup now (actually the report
> was initiated from here :) and I added some tests for this code,
> you can probably adapt it (we just use generic wrapper around it):
I use something pretty similar in wireguard-tools:
https://git.zx2c4.com/wireguard-tools/tree/src/encoding.c#n74
The code is fine. This is looking like a different issue somewhere
else in the OpenRISC stack...
On Wed, 4 May 2022, Andy Shevchenko wrote:
> On Wed, May 04, 2022 at 04:57:35AM -0400, Mikulas Patocka wrote:
> > On Wed, 4 May 2022, Stafford Horne wrote:
> > > On Mon, Apr 25, 2022 at 08:07:48AM -0400, Mikulas Patocka wrote:
>
> ...
>
> > > Just a heads up it seems this patch is causing some instability with crypto self
> > > tests on OpenRISC when using a PREEMPT kernel (no SMP).
> > >
> > > This was reported by Jason A. Donenfeld as it came up in wireguard testing.
> > >
> > > I am trying to figure out if this is an OpenRISC PREEMPT issue or something
> > > else.
>
> > That patch is so simple that I can't imagine how it could break the
> > curve25519 test. Are you sure that you bisected it correctly?
>
> Can you provide a test case for hex_to_bin()?
I tested it with this:
#include <stdio.h>
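/* link with a userspace copy of the new hex_to_bin() */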
int hex_to_bin(unsigned char c);
int main(void)
{
	int i;
	for (i = 0; i < 256; i++)
		printf("%02x - %d\n", i, hex_to_bin(i));
	return 0;
}
Mikulas
On Wed, May 4, 2022 at 1:26 PM Linus Torvalds
<[email protected]> wrote:
>
> Could be anywhere. Xfinity, Nest WiFi, or the cable router. They all
> are doing their own dns thing.
>
> Probably my cable box, since it's likely the oldest thing in the chain.
No, it seems to be my Nest WiFi router. I told it to use google DNS to
avoid Xfinity or the cable box, and it still shows the same behavior.
Not that I care much, since I consider those IDN names to be dangerous
anyway, but I think it would have been less sad if it had been some
old cable modem.
Linus
On Wed, May 4, 2022 at 3:15 AM Jason A. Donenfeld <[email protected]> wrote:
>
> > Alignment? Compiler bug? HW issue?
>
> Probably one of those, yea. Removing the instruction addresses, the only
> difference between the two compiles is: https://xn--4db.cc/Rrn8usaX/diff#line-440
Well, that address doesn't work for me at all. It turns into א.cc.
I'd love to see the compiler problem, since I find those fascinating
(mainly because they scare the hell out of me), but those web
addresses you use are not working for me.
It most definitely looks like an OpenRISC compiler bug - that code
doesn't look like it does anything remotely undefined (and with the
"unsigned char", nothing implementation-defined either).
Linus
On Wed, May 4, 2022 at 12:51 PM Linus Torvalds
<[email protected]> wrote:
>
> But I don't think that it's the browser, actually. Even 'nslookup'
> refuses to touch it with
>
> ** server can't find א.cc: SERVFAIL
>
> and it seems it's literally the local dns caching (dnsmasq?)
Looks like Fedora builds dnsmasq with 'no-i18n', although "dnsmasq -v"
also shows "IDN2", so who knows.. Maybe it's some default config issue
rather than the build configuration.
Linus
On Thu, May 05, 2022 at 05:38:58AM +0900, Stafford Horne wrote:
> On Wed, May 04, 2022 at 01:10:03PM -0700, Linus Torvalds wrote:
> > On Wed, May 4, 2022 at 12:58 PM Stafford Horne <[email protected]> wrote:
> > >
> > > I have uploaded a diff I created here:
> > > https://gist.github.com/54334556f2907104cd12374872a0597c
> > >
> > > It shows the same output.
> >
> > In hex_to_bin itself it seems to only be a difference due to some
> > register allocation (r19 and r3 switched around).
> >
> > But then it gets inlined into hex2bin and the changes there seem to
> > be about instruction and basic block scheduling, so it's a lot harder
> > to see what's going on.
> >
> > And a lot of constant changes, which honestly look just like code
> > moved around by 16 bytes and offsets changed due to that.
> >
> > So I doubt it's hex_to_bin() that is causing problems, I think it's
> > purely code movement. Which explains why adding a nop or a fake printk
> > fixes things.
> >
> > Some alignment assumption that got broken?
>
> This is what it looks like to me too. I will have to do a deep dive on what is
> going on with this particular build combination as I can't figure out what it is
> off the top of my head.
>
> This test is using a gcc 11 compiler; I tried with my gcc 12 toolchain and the
> issue cannot be reproduced.
>
> - musl gcc 11 - https://musl.cc/or1k-linux-musl-cross.tgz
> - openrisc gcc 12 - https://github.com/openrisc/or1k-gcc/releases/tag/or1k-12.0.1-20220210-20220304
>
> But again, the difference between the two compiler outputs is a lot of register
> allocation and offset changes. It's not easy to see anything that stands out.
> I checked the change log for the OpenRISC-specific changes from gcc 11 to gcc
> 12. Nothing seems to stand out: an mcount profiler fix for PIC, a new
> large-binary link flag.
Hello,
Just an update on this. The issue so far has been traced to the alignment of
the crypto multiplication function fe_mul_impl in lib/crypto/curve25519-fiat32.c.
Patch e5be15767e7e ("hex2bin: make the function hex_to_bin constant-time")
happened to make the alignment just right to trigger the failure. Even without
this patch, forcing the same alignment reproduces the issue. So although the
bisect is correct, this patch is not the root cause of the issue.
Using some l.nop sliding techniques and some strategically placed .align
statements I have been able to reproduce the issue on:
- gcc 11 and gcc 12
- preempt and non-preempt kernels
I have not been able to reproduce this on my FPGA, so far only in QEMU. My
hunch now is that, since the fe_mul_impl function contains some rather long basic
blocks, timer interrupts that hit QEMU in the middle of a basic block are somehow
causing this. The hypothesis is that it may be QEMU's basic-block resumption
behavior that causes the inconsistent results.
It will take a bit more time to trace this. Since I maintain OpenRISC QEMU the
issue is on me.
Again, it's safe to say that patch e5be15767e7e ("hex2bin: make the function
hex_to_bin constant-time") is not an issue.
-Stafford