Date: Fri, 20 Feb 2015 00:47:21 +0100
From: Karl Beldan
To: David Laight
Cc: 'Jiri Slaby', stable@vger.kernel.org, linux-kernel@vger.kernel.org,
	Karl Beldan, Al Viro, Eric Dumazet, Arnd Bergmann, Mike Frysinger,
	netdev@vger.kernel.org, Eric Dumazet, "David S. Miller"
Subject: Re: [PATCH 3.12 065/122] lib/checksum.c: fix carry in csum_tcpudp_nofold
Message-ID: <20150219234721.GA22013@magnum.frso.rivierawaves.com>
In-Reply-To: <063D6719AE5E284EB5DD2968C1650D6D1CAE56AE@AcuExch.aculab.com>

On Wed, Feb 18, 2015 at 09:40:23AM +0000, David Laight wrote:
> From: Karl Beldan
> > On Tue, Feb 17, 2015 at 12:04:22PM +0000, David Laight wrote:
> > > > +static inline u32 from64to32(u64 x)
> > > > +{
> > > > +	/* add up 32-bit and 32-bit for 32+c bit */
> > > > +	x = (x & 0xffffffff) + (x >> 32);
> > > > +	/* add up carry.. */
> > > > +	x = (x & 0xffffffff) + (x >> 32);
> > > > +	return (u32)x;
> > > > +}
> > >
> > > As a matter of interest, does the compiler optimise away the
> > > second (x & 0xffffffff) ?
> > > The code could just be:
> > > 	x = (x & 0xffffffff) + (x >> 32);
> > > 	return x + (x >> 32);
> > >
> >
> > On my side, from what I've seen so far, your version results in better
> > assembly, esp. with clang, but my first version
> > http://article.gmane.org/gmane.linux.kernel/1875407:
> > 	x += (x << 32) + (x >> 32);
> > 	return (__force __wsum)(x >> 32);
> > resulted in even better assembly, I just verified with gcc/clang,
> > x86_64/ARM and -O1,2,3.
>
> The latter looks to have a shorter dependency chain as well.
> Although I'd definitely include a comment saying that it is equivalent
> to the two lines in the current patch.
>
> Does either compiler manage to use a rotate for the two shifts?
> Using '(x << 32) | (x >> 32)' might convince it to do so.
> That would reduce it to three 'real' instructions and a register rename.
>

gcc and clang rotate for tile (just checked gcc) and x86_64, not for arm
(and IMHO rightly so).
Both '|' and '+' yielded the same asm for those 3 archs.

Karl
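
[Editor's note: the whole exchange turns on whether the three foldings are
equivalent, so here is a minimal standalone check of that claim. The
function names fold_twice/fold_short/fold_rot are illustrative, not from
the kernel tree, and uint32_t/uint64_t stand in for the kernel's u32/u64
so the program compiles in userspace. It is a sketch of the reasoning in
the thread, not the kernel implementation.]

	#include <assert.h>
	#include <stdint.h>
	#include <stdio.h>

	/* Variant from the patch under discussion: mask-and-fold twice. */
	static uint32_t fold_twice(uint64_t x)
	{
		x = (x & 0xffffffff) + (x >> 32);	/* at most 33 significant bits */
		x = (x & 0xffffffff) + (x >> 32);	/* fold the carry back in */
		return (uint32_t)x;
	}

	/* David's shorter form: the second mask is redundant because the
	 * first fold leaves at most 33 bits, so truncating to u32 at the
	 * return absorbs it. */
	static uint32_t fold_short(uint64_t x)
	{
		x = (x & 0xffffffff) + (x >> 32);
		return (uint32_t)(x + (x >> 32));
	}

	/* Karl's variant: one 64-bit add whose high half is the folded sum.
	 * '|' and '+' are interchangeable in the inner expression because
	 * (x << 32) and (x >> 32) have no overlapping set bits, which is
	 * what lets the compiler emit a rotate. */
	static uint32_t fold_rot(uint64_t x)
	{
		x += (x << 32) | (x >> 32);
		return (uint32_t)(x >> 32);
	}

	int main(void)
	{
		/* Edge cases where the carry handling matters. */
		static const uint64_t edge[] = {
			0, 1, 0xffffffffULL, 0x100000000ULL,
			0x00000001ffffffffULL, 0xffffffff00000001ULL,
			0xfffffffeffffffffULL, 0xffffffffffffffffULL,
		};
		uint64_t x = 0x0123456789abcdefULL;
		size_t i;

		for (i = 0; i < sizeof(edge) / sizeof(edge[0]); i++) {
			assert(fold_twice(edge[i]) == fold_short(edge[i]));
			assert(fold_twice(edge[i]) == fold_rot(edge[i]));
		}

		/* A few million pseudo-random inputs (xorshift64). */
		for (i = 0; i < 10000000; i++) {
			x ^= x << 13;
			x ^= x >> 7;
			x ^= x << 17;
			assert(fold_twice(x) == fold_short(x));
			assert(fold_twice(x) == fold_rot(x));
		}
		printf("all three variants agree\n");
		return 0;
	}

[The equivalence argument in brief: after the first fold the value is at
most 0xffffffff + 0xffffffff, i.e. 33 bits, so a second fold can no longer
carry out of bit 31; fold_rot computes the same two folds in one 64-bit
add, with the carry between the halves landing in the high word that it
returns.]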