Subject: Re: [PATCH] ipv6: sr: fix useless rol32 call on hash
From: Joe Perches <joe@perches.com>
To: David Lebrun, Colin King, "David S. Miller", Alexey Kuznetsov,
 Hideaki YOSHIFUJI, netdev@vger.kernel.org
Cc: kernel-janitors@vger.kernel.org, linux-kernel@vger.kernel.org
Date: Tue, 17 Jul 2018 10:08:45 -0700
Message-ID: <99f5bcb9b5a3604264d43e7305950dcf7f6cfcc7.camel@perches.com>
In-Reply-To: <6ab9af97-33b6-d5cc-d1e9-6e5fbd9158a8@gmail.com>
References: <20180717155254.30367-1-colin.king@canonical.com>
 <6ab9af97-33b6-d5cc-d1e9-6e5fbd9158a8@gmail.com>

On Tue, 2018-07-17 at 17:03 +0100, David Lebrun wrote:
> On 07/17/2018 04:52 PM, Colin King wrote:
> > From: Colin Ian King <colin.king@canonical.com>
> > 
> > The rol32 call is currently rotating hash but the rol'd value is
> > being discarded. I believe the current code is incorrect and hash
> > should be assigned the rotated value returned from rol32.
> > 
> > Detected by CoverityScan, CID#1468411 ("Useless call")
> > 
> > Fixes: b5facfdba14c ("ipv6: sr: Compute flowlabel for outer IPv6 header of seg6 encap mode")
> > Signed-off-by: Colin Ian King <colin.king@canonical.com>
> 
> Acked-by: dlebrun@google.com
> 
> Good catch, thanks!
> 
> In that case, the same issue is present in
> include/net/ipv6.h:ip6_make_flowlabel().

Perhaps all of the ror and rol definitions should add __must_check.

Something like the below, and perhaps many more of the functions that
return some value should have __must_check added as well.
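[Editor's note: to make the pattern concrete before the diff, here is a
minimal userspace sketch of the bug Colin's patch fixes, using a
standalone copy of the kernel's rol32() from include/linux/bitops.h.
The hash value is made up for illustration; this is not the actual
seg6 code.]

#include <stdint.h>
#include <stdio.h>

/* Same body as the kernel's rol32(): it returns the rotated value
 * and never modifies its argument in place. */
static inline uint32_t rol32(uint32_t word, unsigned int shift)
{
	return (word << shift) | (word >> ((-shift) & 31));
}

int main(void)
{
	uint32_t hash = 0xdeadbeef;	/* made-up value */

	rol32(hash, 16);		/* buggy: result silently discarded */
	printf("%08x\n", hash);		/* still deadbeef */

	hash = rol32(hash, 16);		/* fixed: assign the return value */
	printf("%08x\n", hash);		/* beefdead */

	return 0;
}

Because rol32() has no side effects, the buggy call compiles cleanly
and is pure dead code, which is exactly the class of error the
__must_check annotation proposed below makes the compiler report.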
---
 include/linux/bitops.h | 18 +++++++++---------
 1 file changed, 9 insertions(+), 9 deletions(-)

diff --git a/include/linux/bitops.h b/include/linux/bitops.h
index af419012d77d..3cddde65c8bb 100644
--- a/include/linux/bitops.h
+++ b/include/linux/bitops.h
@@ -67,7 +67,7 @@ static inline __u64 rol64(__u64 word, unsigned int shift)
  * @word: value to rotate
  * @shift: bits to roll
  */
-static inline __u64 ror64(__u64 word, unsigned int shift)
+static inline __must_check __u64 ror64(__u64 word, unsigned int shift)
 {
 	return (word >> shift) | (word << (64 - shift));
 }
@@ -77,7 +77,7 @@ static inline __u64 ror64(__u64 word, unsigned int shift)
  * @word: value to rotate
  * @shift: bits to roll
  */
-static inline __u32 rol32(__u32 word, unsigned int shift)
+static inline __must_check __u32 rol32(__u32 word, unsigned int shift)
 {
 	return (word << shift) | (word >> ((-shift) & 31));
 }
@@ -87,7 +87,7 @@ static inline __u32 rol32(__u32 word, unsigned int shift)
  * @word: value to rotate
  * @shift: bits to roll
  */
-static inline __u32 ror32(__u32 word, unsigned int shift)
+static inline __must_check __u32 ror32(__u32 word, unsigned int shift)
 {
 	return (word >> shift) | (word << (32 - shift));
 }
@@ -97,7 +97,7 @@ static inline __u32 ror32(__u32 word, unsigned int shift)
  * @word: value to rotate
  * @shift: bits to roll
  */
-static inline __u16 rol16(__u16 word, unsigned int shift)
+static inline __must_check __u16 rol16(__u16 word, unsigned int shift)
 {
 	return (word << shift) | (word >> (16 - shift));
 }
@@ -107,7 +107,7 @@ static inline __u16 rol16(__u16 word, unsigned int shift)
  * @word: value to rotate
  * @shift: bits to roll
  */
-static inline __u16 ror16(__u16 word, unsigned int shift)
+static inline __must_check __u16 ror16(__u16 word, unsigned int shift)
 {
 	return (word >> shift) | (word << (16 - shift));
 }
@@ -117,7 +117,7 @@ static inline __u16 ror16(__u16 word, unsigned int shift)
  * @word: value to rotate
  * @shift: bits to roll
  */
-static inline __u8 rol8(__u8 word, unsigned int shift)
+static inline __must_check __u8 rol8(__u8 word, unsigned int shift)
 {
 	return (word << shift) | (word >> (8 - shift));
 }
@@ -127,7 +127,7 @@ static inline __u8 rol8(__u8 word, unsigned int shift)
  * @word: value to rotate
  * @shift: bits to roll
  */
-static inline __u8 ror8(__u8 word, unsigned int shift)
+static inline __must_check __u8 ror8(__u8 word, unsigned int shift)
 {
 	return (word >> shift) | (word << (8 - shift));
 }
@@ -139,7 +139,7 @@ static inline __u8 ror8(__u8 word, unsigned int shift)
  *
  * This is safe to use for 16- and 8-bit types as well.
  */
-static inline __s32 sign_extend32(__u32 value, int index)
+static inline __must_check __s32 sign_extend32(__u32 value, int index)
 {
 	__u8 shift = 31 - index;
 	return (__s32)(value << shift) >> shift;
@@ -150,7 +150,7 @@ static inline __s32 sign_extend32(__u32 value, int index)
  * @value: value to sign extend
  * @index: 0 based bit index (0<=index<64) to sign bit
  */
-static inline __s64 sign_extend64(__u64 value, int index)
+static inline __must_check __s64 sign_extend64(__u64 value, int index)
 {
 	__u8 shift = 63 - index;
 	return (__s64)(value << shift) >> shift;
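[Editor's note: for reference, the kernel defines __must_check as
__attribute__((warn_unused_result)) under gcc/clang, so with the diff
above the original bug would no longer compile silently. A standalone
userspace approximation follows (illustrative values; the attribute is
placed as in the diff above):]

#include <stdint.h>
#include <stdio.h>

/* Userspace stand-in for the kernel's __must_check definition. */
#define __must_check __attribute__((warn_unused_result))

static inline __must_check uint32_t rol32(uint32_t word, unsigned int shift)
{
	return (word << shift) | (word >> ((-shift) & 31));
}

int main(void)
{
	uint32_t hash = 0xdeadbeef;	/* made-up value */

	rol32(hash, 16);	/* gcc/clang now warn, roughly:
				 * "ignoring return value of 'rol32',
				 * declared with attribute
				 * warn_unused_result [-Wunused-result]" */

	hash = rol32(hash, 16);	/* assigning the result silences it */
	printf("%08x\n", hash);
	return 0;
}

Note that gcc enables -Wunused-result by default, so the annotation
catches this class of bug without any extra warning flags.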