2019-04-03 05:46:30

by George Spelvin

Subject: [PATCH v2] ubsan: Avoid unnecessary 128-bit shifts

If CONFIG_ARCH_SUPPORTS_INT128 is enabled, s_max is 128 bits, and variable
sign-extending shifts of such a double-word data type require a non-trivial
amount of code and complexity. Do a single-word shift *before* the cast
to (s_max), greatly simplifying the object code.

(Yes, I know "signed long" is redundant. It's there for emphasis.)

On s390 (and perhaps some other arches), gcc implements variable
128-bit shifts using an __ashrti3 helper function which the kernel
doesn't provide, causing a link error. In that case, this patch is
a prerequisite for enabling INT128 support. Andrey Ryabinin has given
permission for any arch that needs it to cherry-pick it so they don't
have to wait for ubsan to be merged into Linus' tree.

We *could*, alternatively, implement __ashrti3, but it would become dead
code as soon as this patch is merged, so that seems like a waste of effort,
and its absence discourages people from adding inefficient code. Note that
the shifts in <math64.h> (unsigned, and by a compile-time constant amount)
are simpler and are generated inline.

Signed-off-by: George Spelvin <[email protected]>
Acked-by: Andrey Ryabinin <[email protected]>
Cc: [email protected]
Cc: Heiko Carstens <[email protected]>
---
lib/ubsan.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)

v1->v2: Eliminated redundant cast to (s_max).
        Rewrote commit message without "is this the right thing to do?" verbiage.
        Incorporated ack from Andrey Ryabinin.

diff --git a/lib/ubsan.c b/lib/ubsan.c
index e4162f59a81c..a7eb55fbeede 100644
--- a/lib/ubsan.c
+++ b/lib/ubsan.c
@@ -89,8 +89,8 @@ static bool is_inline_int(struct type_descriptor *type)
 static s_max get_signed_val(struct type_descriptor *type, unsigned long val)
 {
 	if (is_inline_int(type)) {
-		unsigned extra_bits = sizeof(s_max)*8 - type_bit_width(type);
-		return ((s_max)val) << extra_bits >> extra_bits;
+		unsigned extra_bits = sizeof(val)*8 - type_bit_width(type);
+		return (signed long)val << extra_bits >> extra_bits;
 	}
 
 	if (type_bit_width(type) == 64)
--
2.20.1


2019-04-03 06:52:39

by Rasmus Villemoes

Subject: Re: [PATCH v2] ubsan: Avoid unnecessary 128-bit shifts

On 03/04/2019 07.45, George Spelvin wrote:
> 
> diff --git a/lib/ubsan.c b/lib/ubsan.c
> index e4162f59a81c..a7eb55fbeede 100644
> --- a/lib/ubsan.c
> +++ b/lib/ubsan.c
> @@ -89,8 +89,8 @@ static bool is_inline_int(struct type_descriptor *type)
>  static s_max get_signed_val(struct type_descriptor *type, unsigned long val)
>  {
>  	if (is_inline_int(type)) {
> -		unsigned extra_bits = sizeof(s_max)*8 - type_bit_width(type);
> -		return ((s_max)val) << extra_bits >> extra_bits;
> +		unsigned extra_bits = sizeof(val)*8 - type_bit_width(type);
> +		return (signed long)val << extra_bits >> extra_bits;
>  	}

Maybe add some "#if BITS_PER_LONG == 64" / "#define sign_extend_long
sign_extend[32/64]" stuff to linux/bitops.h and write this as
sign_extend_long(val, type_bit_width(type) - 1)? Or do it locally in
lib/ubsan.c, so that "git grep" will show that it's available once the
next potential user comes along.
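
Something along these lines (untested; the sign_extend_long name is
invented here, while sign_extend32()/sign_extend64() already exist in
linux/bitops.h):

#include <linux/bitops.h>

#if BITS_PER_LONG == 64
#define sign_extend_long(value, index)	sign_extend64(value, index)
#else
#define sign_extend_long(value, index)	sign_extend32(value, index)
#endif

so that get_signed_val() becomes

	if (is_inline_int(type))
		return sign_extend_long(val, type_bit_width(type) - 1);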

Btw., ubsan.c is probably compiled without instrumentation, but it would
be a nice touch to avoid UB in the implementation anyway (i.e., the left
shift should be done in the unsigned type, then cast to signed and
right-shifted).
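
I.e. something like (untested):

	unsigned extra_bits = sizeof(val)*8 - type_bit_width(type);
	return (long)(val << extra_bits) >> extra_bits;

since val is already unsigned long, only the final right shift is done on
a signed value, which is also what sign_extend64() does internally.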

Rasmus