Subject: Re: [PATCH 1/3] futex: remove duplicated code
From: Jiri Slaby
To: Stafford Horne, "H. Peter Anvin"
Cc: Russell King - ARM Linux, akpm@linux-foundation.org,
    linux-kernel@vger.kernel.org, Richard Henderson, Ivan Kokshaysky,
    Matt Turner, Vineet Gupta, Catalin Marinas, Will Deacon, Richard Kuo,
    Tony Luck, Fenghua Yu, Michal Simek, Ralf Baechle, Jonas Bonn,
    Stefan Kristiansson, "James E.J. Bottomley", Helge Deller,
    Benjamin Herrenschmidt, Paul Mackerras, Michael Ellerman,
    Martin Schwidefsky, Heiko Carstens, Yoshinori Sato, Rich Felker,
    David S. Miller
Date: Mon, 6 Mar 2017 09:46:26 +0100
Message-ID: <3662dd60-2467-f858-dc32-0b0fb6abb33b@suse.cz>
In-Reply-To: <20170304233919.GB2449@lianli.shorne-pla.net>
References: <20170303122712.13353-1-jslaby@suse.cz>
 <20170304130550.GT21222@n2100.armlinux.org.uk>
 <3994975e-89a5-d2b5-60be-a8633ddc3733@zytor.com>
 <20170304213805.GA2449@lianli.shorne-pla.net>
 <201703042308.v24N8wvh012716@mail.zytor.com>
 <20170304233919.GB2449@lianli.shorne-pla.net>

On 03/05/2017, 12:39 AM, Stafford Horne wrote:
> On Sat, Mar 04, 2017 at 03:08:50PM -0800, H. Peter Anvin wrote:
>> On March 4, 2017 1:38:05 PM PST, Stafford Horne wrote:
>>> On Sat, Mar 04, 2017 at 11:15:17AM -0800, H. Peter Anvin wrote:
>>>> On 03/04/17 05:05, Russell King - ARM Linux wrote:
>>>>>>
>>>>>> +static int futex_atomic_op_inuser(int encoded_op, u32 __user *uaddr)
>>>>>> +{
>>>>>> +	int op = (encoded_op >> 28) & 7;
>>>>>> +	int cmp = (encoded_op >> 24) & 15;
>>>>>> +	int oparg = (encoded_op << 8) >> 20;
>>>>>> +	int cmparg = (encoded_op << 20) >> 20;
>>>>>
>>>>> Hmm. oparg and cmparg look like they're doing these shifts to get sign
>>>>> extension of the 12-bit values by assuming that "int" is 32-bit -
>>>>> probably worth a comment, or for safety, they should be "s32" so it's
>>>>> not dependent on the bit-width of "int".
>>>>>
>>>>
>>>> For readability, perhaps we should make sign- and zero-extension an
>>>> explicit facility?
>>>
>>> There is some of this in here already, 32- and 64-bit versions:
>>>
>>>   include/linux/bitops.h
>>>
>>> Do we really need zero extension? It seems the same.
>>>
>>> Example implementation from bitops.h:
>>>
>>> static inline __s32 sign_extend32(__u32 value, int index)
>>> {
>>>         __u8 shift = 31 - index;
>>>         return (__s32)(value << shift) >> shift;
>>> }
>>>
>>>> /*
>>>>  * Truncate an integer x to n bits, using sign- or
>>>>  * zero-extension, respectively.
>>>>  */
>>>> static inline __const_func__ s32 sex32(s32 x, int n)
>>>> {
>>>>         return (x << (32-n)) >> (32-n);
>>>> }
>>>>
>>>> static inline __const_func__ s64 sex64(s64 x, int n)
>>>> {
>>>>         return (x << (64-n)) >> (64-n);
>>>> }
>>>>
>>>> #define sex(x,y) \
>>>>         ((__typeof__(x)) \
>>>>          (((__builtin_constant_p(y) && ((y) <= 32)) || \
>>>>            (sizeof(x) <= sizeof(s32))) \
>>>>           ? sex32((x),(y)) : sex64((x),(y))))
>>>>
>>>> static inline __const_func__ u32 zex32(u32 x, int n)
>>>> {
>>>>         return (x << (32-n)) >> (32-n);
>>>> }
>>>>
>>>> static inline __const_func__ u64 zex64(u64 x, int n)
>>>> {
>>>>         return (x << (64-n)) >> (64-n);
>>>> }
>>>>
>>>> #define zex(x,y) \
>>>>         ((__typeof__(x)) \
>>>>          (((__builtin_constant_p(y) && ((y) <= 32)) || \
>>>>            (sizeof(x) <= sizeof(u32))) \
>>>>           ? zex32((x),(y)) : zex64((x),(y))))
>>>>
>>
>> Also, I strongly believe that making it syntactically cumbersome
>> encourages people to open-code it, which is bad...
>
> Right, I missed the signed vs unsigned bit.

What about this?

commit 811c8c60ea83727e77f92117e3301a4f30a66e8c
Author: Jiri Slaby
Date:   Fri Nov 4 13:38:34 2016 +0100

    futex: make the encoded_op decoding readable

    Decoding of encoded_op is a bit unreadable. It contains shifts to the
    left and to the right by some constants. Make it clearly visible which
    part of the bit mask is taken, and shift the values only to the right,
    as appropriate. And make sure sign extension takes place using
    sign_extend32.

    Signed-off-by: Jiri Slaby

diff --git a/kernel/futex.c b/kernel/futex.c
index 0ead0756a593..f90314bd42cb 100644
--- a/kernel/futex.c
+++ b/kernel/futex.c
@@ -1461,10 +1461,10 @@ futex_wake(u32 __user *uaddr, unsigned int flags, int nr_wake, u32 bitset)
 
 static int futex_atomic_op_inuser(int encoded_op, u32 __user *uaddr)
 {
-        int op = (encoded_op >> 28) & 7;
-        int cmp = (encoded_op >> 24) & 15;
-        int oparg = (encoded_op << 8) >> 20;
-        int cmparg = (encoded_op << 20) >> 20;
+        int op = (encoded_op & 0x70000000) >> 28;
+        int cmp = (encoded_op & 0x0f000000) >> 24;
+        int oparg = sign_extend32((encoded_op & 0x00fff000) >> 12, 11);
+        int cmparg = sign_extend32(encoded_op & 0x00000fff, 11);
         int oldval, ret;
 
         if (encoded_op & (FUTEX_OP_OPARG_SHIFT << 28)) {

thanks,
--
js
suse labs
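For anyone who wants to double-check the conversion, below is a quick
userspace sanity test, just a sketch: the encoded_op value is an arbitrary
test pattern (not taken from any real caller), and sign_extend32() is a
local copy of the bitops.h helper quoted above.

#include <assert.h>
#include <stdint.h>
#include <stdio.h>

/* Local copy of sign_extend32() as defined in include/linux/bitops.h. */
static inline int32_t sign_extend32(uint32_t value, int index)
{
        uint8_t shift = 31 - index;
        return (int32_t)(value << shift) >> shift;
}

int main(void)
{
        /* Arbitrary test pattern: op=3, cmp=10, negative oparg and cmparg. */
        int encoded_op = 0x3a801805;

        /* Old decoding: shift left to drop high bits, right to sign-extend. */
        int old_op     = (encoded_op >> 28) & 7;
        int old_cmp    = (encoded_op >> 24) & 15;
        int old_oparg  = (encoded_op << 8) >> 20;
        int old_cmparg = (encoded_op << 20) >> 20;

        /* New decoding: mask each field, sign-extend the 12-bit ones. */
        int new_op     = (encoded_op & 0x70000000) >> 28;
        int new_cmp    = (encoded_op & 0x0f000000) >> 24;
        int new_oparg  = sign_extend32((encoded_op & 0x00fff000) >> 12, 11);
        int new_cmparg = sign_extend32(encoded_op & 0x00000fff, 11);

        assert(old_op == new_op && old_cmp == new_cmp);
        assert(old_oparg == new_oparg && old_cmparg == new_cmparg);

        printf("op=%d cmp=%d oparg=%d cmparg=%d\n",
               new_op, new_cmp, new_oparg, new_cmparg);
        return 0;
}

For this sample both decodings agree and print op=3 cmp=10 oparg=-2047
cmparg=-2043. Note that the old expressions left-shift a 1 into the sign
bit of a signed int, which the C standard does not define; in practice
the compilers used for the kernel produce the expected two's-complement
result, so the comparison is meaningful.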