From: Andy Lutomirski <luto@kernel.org>
Date: Thu, 14 Sep 2017 14:05:04 -0700
Subject: Re: [PATCH] x86/asm/64: do not clear high 32 bits of syscall number when CONFIG_X86_X32=y
To: Thomas Gleixner, Ingo Molnar, "H. Peter Anvin", X86 ML, Andy Lutomirski, Oleg Nesterov, Eugene Syromyatnikov, "linux-kernel@vger.kernel.org"
In-Reply-To: <20170912225756.GA19364@altlinux.org>
References: <20170912225756.GA19364@altlinux.org>

On Tue, Sep 12, 2017 at 3:57 PM, Dmitry V. Levin wrote:
> Before this change, CONFIG_X86_X32=y fastpath behaviour was different
> from slowpath:
>
> $ gcc -xc -Wall -O2 - <<'EOF'
> #include <unistd.h>
> #include <sys/syscall.h>
> int main(void) {
> 	unsigned long nr = ~0xffffffffUL | __NR_exit;
> 	return !!syscall(nr, 42, 1, 2, 3, 4, 5);
> }
> EOF
> $ ./a.out; echo \$?=$?
> $?=42
> $ strace -enone ./a.out
> syscall_18446744069414584380(0x2a, 0x1, 0x2, 0x3, 0x4, 0x5) = -1 (errno 38)
> +++ exited with 1 +++
>
> This change syncs CONFIG_X86_X32=y fastpath behaviour with the case
> when CONFIG_X86_X32 is not enabled.

Do you see real brokenness here, or is it just weird?

>
> Fixes: fca460f95e92 ("x32: Handle the x32 system call flag")
> Cc: stable@vger.kernel.org
> Signed-off-by: Dmitry V. Levin
> ---
>  arch/x86/entry/entry_64.S | 8 +++-----
>  1 file changed, 3 insertions(+), 5 deletions(-)
>
> diff --git a/arch/x86/entry/entry_64.S b/arch/x86/entry/entry_64.S
> index 4916725..3bab6af 100644
> --- a/arch/x86/entry/entry_64.S
> +++ b/arch/x86/entry/entry_64.S
> @@ -185,12 +185,10 @@ entry_SYSCALL_64_fastpath:
>  	 */
>  	TRACE_IRQS_ON
>  	ENABLE_INTERRUPTS(CLBR_NONE)
> -#if __SYSCALL_MASK == ~0
> -	cmpq	$__NR_syscall_max, %rax
> -#else
> -	andl	$__SYSCALL_MASK, %eax
> -	cmpl	$__NR_syscall_max, %eax
> +#if __SYSCALL_MASK != ~0
> +	andq	$__SYSCALL_MASK, %rax
>  #endif
> +	cmpq	$__NR_syscall_max, %rax

I don't know much about x32 userspace, but there's an argument that the
high bits *should* be masked off if the x32 bit is set.  Of course,
that's slower, but it could be done without performance loss, I think.

--Andy
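
For context, the difference between the old andl and the new andq is an
operand-size subtlety.  The annotated pair below is illustrative only,
assuming __SYSCALL_MASK expands to ~0x40000000, i.e. ~__X32_SYSCALL_BIT,
which is its value when CONFIG_X86_X32=y:

	andl	$__SYSCALL_MASK, %eax	/* old: a 32-bit write to %eax is
					 * zero-extended, so this clears the
					 * x32 bit *and* bits 63:32 of %rax */
	andq	$__SYSCALL_MASK, %rax	/* new: the assembler evaluates
					 * ~0x40000000 in 64-bit arithmetic
					 * to 0xffffffffbfffffff (encodable
					 * as a sign-extended imm32), so only
					 * the x32 bit is cleared; garbage in
					 * bits 63:32 survives into the cmpq
					 * and the call fails with ENOSYS
					 * (errno 38), matching the slowpath */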
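
A minimal sketch of the alternative Andy floats (mask the high bits only
when the x32 bit is set) might look like the following.  This is
hypothetical, not from any posted patch; it assumes __X32_SYSCALL_BIT is
bit 30 (0x40000000), and it uses a branch, whereas a version with "no
performance loss" would presumably be branchless:

#if __SYSCALL_MASK != ~0
	btl	$30, %eax		/* CF = x32 bit (__X32_SYSCALL_BIT) */
	jnc	1f			/* not an x32 call: leave %rax as is */
	andl	$__SYSCALL_MASK, %eax	/* x32 call: clear the x32 bit; the
					 * 32-bit write also zeroes bits 63:32 */
1:
#endif
	cmpq	$__NR_syscall_max, %rax	/* non-x32 calls with high bits set
					 * still fail here and get ENOSYS */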