Date: Fri, 5 Apr 2019 01:58:53 GMT
Message-Id: <201904050158.x351wr9f016512@sdf.org>
From: George Spelvin
Subject: [PATCH v3] ubsan: Avoid unnecessary 128-bit shifts
To: Andrey Ryabinin
Cc: linux-kernel@vger.kernel.org, linux-s390@vger.kernel.org,
	Heiko Carstens, Rasmus Villemoes, George Spelvin

If CONFIG_ARCH_SUPPORTS_INT128, s_max is 128 bits, and variable
sign-extending shifts of such a double-word data type are a non-trivial
amount of code and complexity.  Do a single-word sign-extension *before*
the cast to (s_max), greatly simplifying the object code.

Rasmus Villemoes suggested using sign_extend* from <linux/bitops.h>.

On s390 (and perhaps some other arches), gcc implements variable
128-bit shifts using an __ashrti3 helper function which the kernel
doesn't provide, causing a link error.  In that case, this patch is a
prerequisite for enabling INT128 support.  Andrey Ryabinin has given
permission for any arch that needs it to cherry-pick it so they don't
have to wait for ubsan to be merged into Linus' tree.

We *could*, alternatively, implement __ashrti3, but that would become
dead code as soon as this patch is merged, so it seems like a waste of
time, and its absence discourages people from adding inefficient code.

Note that the remaining shifts in lib/ubsan.c (unsigned, and by a
compile-time constant amount) are simpler and are generated inline.

Signed-off-by: George Spelvin
Acked-by: Andrey Ryabinin
Feedback-from: Rasmus Villemoes
Cc: linux-s390@vger.kernel.org
Cc: Heiko Carstens
---
 include/linux/bitops.h |  7 +++++++
 lib/ubsan.c            | 13 +++++--------
 2 files changed, 12 insertions(+), 8 deletions(-)

v3: Added sign_extend_long(), alongside sign_extend{32,64}, in
    <linux/bitops.h>.  Used sign_extend_long() rather than hand-rolling
    the sign extension.  Changed to a more uniform
    if ... else if ... else ... structure.
v2: Eliminated redundant cast to (s_max).  Rewrote commit message
    without "is this the right thing to do?" verbiage.  Incorporated
    ack from Andrey Ryabinin.

diff --git a/include/linux/bitops.h b/include/linux/bitops.h
index 705f7c442691..8d33c2bfe6c5 100644
--- a/include/linux/bitops.h
+++ b/include/linux/bitops.h
@@ -157,6 +157,13 @@ static inline __s64 sign_extend64(__u64 value, int index)
 	return (__s64)(value << shift) >> shift;
 }
 
+static inline long sign_extend_long(unsigned long value, int index)
+{
+	if (sizeof(value) == 4)
+		return sign_extend32(value, index);
+	return sign_extend64(value, index);
+}
+
 static inline unsigned fls_long(unsigned long l)
 {
 	if (sizeof(l) == 4)
diff --git a/lib/ubsan.c b/lib/ubsan.c
index e4162f59a81c..24d4920317e4 100644
--- a/lib/ubsan.c
+++ b/lib/ubsan.c
@@ -88,15 +88,12 @@ static bool is_inline_int(struct type_descriptor *type)
 
 static s_max get_signed_val(struct type_descriptor *type, unsigned long val)
 {
-	if (is_inline_int(type)) {
-		unsigned extra_bits = sizeof(s_max)*8 - type_bit_width(type);
-		return ((s_max)val) << extra_bits >> extra_bits;
-	}
+	if (is_inline_int(type))
+		return sign_extend_long(val, type_bit_width(type) - 1);
-
-	if (type_bit_width(type) == 64)
+	else if (type_bit_width(type) == 64)
 		return *(s64 *)val;
-
-	return *(s_max *)val;
+	else
+		return *(s_max *)val;
 }
 
 static bool val_is_negative(struct type_descriptor *type, unsigned long val)
-- 
2.20.1
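
P.S. For anyone who wants to see the transformation concretely, here is a
stand-alone user-space sketch (not kernel code, and not part of the patch).
It assumes a compiler with __int128 and a 64-bit unsigned long; the
sign_extend64() copy matches the <linux/bitops.h> definition, while
old_get_signed_val()/new_get_signed_val() are names invented for this
illustration only.

#include <assert.h>
#include <stdio.h>

typedef __int128 s_max;	/* what lib/ubsan.c uses when INT128 is enabled */

/* Copy of the <linux/bitops.h> helper: sign bit at position 'index'. */
static inline long long sign_extend64(unsigned long long value, int index)
{
	unsigned char shift = 63 - index;
	return (long long)(value << shift) >> shift;
}

/* Old code: variable 128-bit shifts; on s390 the right shift
 * becomes a call to the missing __ashrti3 helper. */
static s_max old_get_signed_val(unsigned long val, unsigned bits)
{
	unsigned extra_bits = sizeof(s_max)*8 - bits;
	return ((s_max)val) << extra_bits >> extra_bits;
}

/* New code: sign-extend within a single machine word first; the
 * cast to s_max then widens the already-correct sign for free. */
static s_max new_get_signed_val(unsigned long val, unsigned bits)
{
	return sign_extend64(val, bits - 1);
}

int main(void)
{
	/* 0x7d is -3 when read as a 7-bit two's complement value. */
	unsigned long val = 0x7d;

	assert(old_get_signed_val(val, 7) == -3);
	assert(new_get_signed_val(val, 7) == -3);
	printf("both agree: %lld\n", (long long)new_get_signed_val(val, 7));
	return 0;
}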
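
And for completeness, a rough sketch of what the rejected alternative
would have to look like: a generic __ashrti3 built from 64-bit word
operations, loosely in the style of the libgcc version.  This is only an
illustration of the cost being avoided, not a proposed implementation;
the constant-amount 128-bit shifts it contains are exactly the kind the
compiler expands inline.

typedef          __int128 TItype;
typedef unsigned __int128 UTItype;

/* Arithmetic right shift of a 128-bit value by 0 <= b < 128. */
TItype __ashrti3(TItype u, int b)
{
	long long          hi = (long long)(u >> 64); /* constant shift: inlined */
	unsigned long long lo = (unsigned long long)u;
	unsigned long long rhi, rlo;

	if (b == 0)
		return u;
	if (b >= 64) {			/* only the high word survives */
		rlo = (unsigned long long)(hi >> (b - 64));
		rhi = (unsigned long long)(hi >> 63);	/* sign fill */
	} else {			/* bits cross the word boundary */
		rlo = (lo >> b) | ((unsigned long long)hi << (64 - b));
		rhi = (unsigned long long)(hi >> b);
	}
	return (TItype)(((UTItype)rhi << 64) | rlo);
}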