Date: Wed, 3 Apr 2019 05:45:26 GMT
Message-Id: <201904030545.x335jQJO015258@sdf.org>
From: George Spelvin
Subject: [PATCH v2] ubsan: Avoid unnecessary 128-bit shifts
To: Andrey Ryabinin
Cc: linux-kernel@vger.kernel.org, linux-s390@vger.kernel.org, Heiko Carstens, George Spelvin

If CONFIG_ARCH_SUPPORTS_INT128 is enabled, s_max is a 128-bit type, and
variable sign-extending shifts of such a double-word data type require a
non-trivial amount of code and complexity.  Do a single-word shift
*before* the cast to (s_max), greatly simplifying the object code.

(Yes, I know "signed long" is redundant.  It's there for emphasis.)
On s390 (and perhaps some other arches), gcc implements variable 128-bit
shifts using an __ashrti3 helper function which the kernel doesn't
provide, causing a link error.  In that case, this patch is a
prerequisite for enabling INT128 support.  Andrey Ryabinin has given
permission for any arch that needs it to cherry-pick it, so they don't
have to wait for ubsan to be merged into Linus' tree.

We *could*, alternatively, implement __ashrti3, but it would become dead
code as soon as this patch is merged, so it seems like a waste of time,
and its absence discourages people from adding inefficient code.

Note that the remaining shifts here (unsigned, and by a compile-time
constant amount) are simpler and generated inline.

Signed-off-by: George Spelvin
Acked-by: Andrey Ryabinin
Cc: linux-s390@vger.kernel.org
Cc: Heiko Carstens
---
 lib/ubsan.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

v1->v2:
- Eliminated redundant cast to (s_max).
- Rewrote commit message without "is this the right thing to do?"
  verbiage.
- Incorporated ack from Andrey Ryabinin.

diff --git a/lib/ubsan.c b/lib/ubsan.c
index e4162f59a81c..a7eb55fbeede 100644
--- a/lib/ubsan.c
+++ b/lib/ubsan.c
@@ -89,8 +89,8 @@ static bool is_inline_int(struct type_descriptor *type)
 static s_max get_signed_val(struct type_descriptor *type, unsigned long val)
 {
 	if (is_inline_int(type)) {
-		unsigned extra_bits = sizeof(s_max)*8 - type_bit_width(type);
-		return ((s_max)val) << extra_bits >> extra_bits;
+		unsigned extra_bits = sizeof(val)*8 - type_bit_width(type);
+		return (signed long)val << extra_bits >> extra_bits;
 	}
 
 	if (type_bit_width(type) == 64)
-- 
2.20.1