From: Sebastian Andrzej Siewior
To: linux-kernel@vger.kernel.org
Cc: x86@kernel.org, Andy Lutomirski, Paolo Bonzini, Radim Krčmář,
    kvm@vger.kernel.org,
Donenfeld" , Rik van Riel , Dave Hansen , Sebastian Andrzej Siewior Subject: [PATCH 22/23] x86/fpu: Don't restore the FPU state directly from userland in __fpu__restore_sig() Date: Wed, 7 Nov 2018 20:48:57 +0100 Message-Id: <20181107194858.9380-23-bigeasy@linutronix.de> X-Mailer: git-send-email 2.19.1 In-Reply-To: <20181107194858.9380-1-bigeasy@linutronix.de> References: <20181107194858.9380-1-bigeasy@linutronix.de> MIME-Version: 1.0 Content-Transfer-Encoding: 8bit Sender: linux-kernel-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org __fpu__restore_sig() restores the CPU's FPU state directly from userland. If we restore registers on return to userland then we can't load them directly from userland because a context switch/BH could destroy them. Restore the FPU registers after they have been copied from userland. __fpregs_changes_begin() ensures that they are not modified while beeing worked on. TIF_NEED_FPU_LOAD is clreared we want to keep our state, not the saved state. Signed-off-by: Sebastian Andrzej Siewior --- arch/x86/include/asm/fpu/internal.h | 34 ----------------------------- arch/x86/kernel/fpu/signal.c | 33 ++++++++++++++++++---------- 2 files changed, 22 insertions(+), 45 deletions(-) diff --git a/arch/x86/include/asm/fpu/internal.h b/arch/x86/include/asm/fpu/internal.h index 9e213a6703c84..5e86ff60a3a5c 100644 --- a/arch/x86/include/asm/fpu/internal.h +++ b/arch/x86/include/asm/fpu/internal.h @@ -137,28 +137,11 @@ static inline void copy_kernel_to_fxregs(struct fxregs_state *fx) } } -static inline int copy_user_to_fxregs(struct fxregs_state __user *fx) -{ - if (IS_ENABLED(CONFIG_X86_32)) - return user_insn(fxrstor %[fx], "=m" (*fx), [fx] "m" (*fx)); - else if (IS_ENABLED(CONFIG_AS_FXSAVEQ)) - return user_insn(fxrstorq %[fx], "=m" (*fx), [fx] "m" (*fx)); - - /* See comment in copy_fxregs_to_kernel() below. */ - return user_insn(rex64/fxrstor (%[fx]), "=m" (*fx), [fx] "R" (fx), - "m" (*fx)); -} - static inline void copy_kernel_to_fregs(struct fregs_state *fx) { kernel_insn(frstor %[fx], "=m" (*fx), [fx] "m" (*fx)); } -static inline int copy_user_to_fregs(struct fregs_state __user *fx) -{ - return user_insn(frstor %[fx], "=m" (*fx), [fx] "m" (*fx)); -} - static inline void copy_fxregs_to_kernel(struct fpu *fpu) { if (IS_ENABLED(CONFIG_X86_32)) @@ -333,23 +316,6 @@ static inline void copy_kernel_to_xregs(struct xregs_state *xstate, u64 mask) XSTATE_XRESTORE(xstate, lmask, hmask); } -/* - * Restore xstate from user space xsave area. - */ -static inline int copy_user_to_xregs(struct xregs_state __user *buf, u64 mask) -{ - struct xregs_state *xstate = ((__force struct xregs_state *)buf); - u32 lmask = mask; - u32 hmask = mask >> 32; - int err; - - stac(); - XSTATE_OP(XRSTOR, xstate, lmask, hmask, err); - clac(); - - return err; -} - /* * These must be called with preempt disabled. Returns * 'true' if the FPU state is still intact and we can diff --git a/arch/x86/kernel/fpu/signal.c b/arch/x86/kernel/fpu/signal.c index 179e2b19976ad..9720529859483 100644 --- a/arch/x86/kernel/fpu/signal.c +++ b/arch/x86/kernel/fpu/signal.c @@ -228,23 +228,30 @@ sanitize_restored_xstate(union fpregs_state *state, /* * Restore the extended state if present. Otherwise, restore the FP/SSE state. 
diff --git a/arch/x86/include/asm/fpu/internal.h b/arch/x86/include/asm/fpu/internal.h
index 9e213a6703c84..5e86ff60a3a5c 100644
--- a/arch/x86/include/asm/fpu/internal.h
+++ b/arch/x86/include/asm/fpu/internal.h
@@ -137,28 +137,11 @@ static inline void copy_kernel_to_fxregs(struct fxregs_state *fx)
 	}
 }
 
-static inline int copy_user_to_fxregs(struct fxregs_state __user *fx)
-{
-	if (IS_ENABLED(CONFIG_X86_32))
-		return user_insn(fxrstor %[fx], "=m" (*fx), [fx] "m" (*fx));
-	else if (IS_ENABLED(CONFIG_AS_FXSAVEQ))
-		return user_insn(fxrstorq %[fx], "=m" (*fx), [fx] "m" (*fx));
-
-	/* See comment in copy_fxregs_to_kernel() below. */
-	return user_insn(rex64/fxrstor (%[fx]), "=m" (*fx), [fx] "R" (fx),
-			 "m" (*fx));
-}
-
 static inline void copy_kernel_to_fregs(struct fregs_state *fx)
 {
 	kernel_insn(frstor %[fx], "=m" (*fx), [fx] "m" (*fx));
 }
 
-static inline int copy_user_to_fregs(struct fregs_state __user *fx)
-{
-	return user_insn(frstor %[fx], "=m" (*fx), [fx] "m" (*fx));
-}
-
 static inline void copy_fxregs_to_kernel(struct fpu *fpu)
 {
 	if (IS_ENABLED(CONFIG_X86_32))
@@ -333,23 +316,6 @@ static inline void copy_kernel_to_xregs(struct xregs_state *xstate, u64 mask)
 	XSTATE_XRESTORE(xstate, lmask, hmask);
 }
 
-/*
- * Restore xstate from user space xsave area.
- */
-static inline int copy_user_to_xregs(struct xregs_state __user *buf, u64 mask)
-{
-	struct xregs_state *xstate = ((__force struct xregs_state *)buf);
-	u32 lmask = mask;
-	u32 hmask = mask >> 32;
-	int err;
-
-	stac();
-	XSTATE_OP(XRSTOR, xstate, lmask, hmask, err);
-	clac();
-
-	return err;
-}
-
 /*
  * These must be called with preempt disabled. Returns
  * 'true' if the FPU state is still intact and we can
diff --git a/arch/x86/kernel/fpu/signal.c b/arch/x86/kernel/fpu/signal.c
index 179e2b19976ad..9720529859483 100644
--- a/arch/x86/kernel/fpu/signal.c
+++ b/arch/x86/kernel/fpu/signal.c
@@ -228,23 +228,30 @@ sanitize_restored_xstate(union fpregs_state *state,
 /*
  * Restore the extended state if present. Otherwise, restore the FP/SSE state.
  */
-static inline int copy_user_to_fpregs_zeroing(void __user *buf, u64 xbv, int fx_only)
+static void copy_to_fpregs_zeroing(struct fpu *fpu, u64 xbv, int fx_only)
 {
+	__fpregs_changes_begin();
 	if (use_xsave()) {
-		if ((unsigned long)buf % 64 || fx_only) {
+		if (fx_only) {
 			u64 init_bv = xfeatures_mask & ~XFEATURE_MASK_FPSSE;
+
 			copy_kernel_to_xregs(&init_fpstate.xsave, init_bv);
-			return copy_user_to_fxregs(buf);
+			copy_kernel_to_fxregs(&fpu->state.fxsave);
 		} else {
 			u64 init_bv = xfeatures_mask & ~xbv;
+
 			if (unlikely(init_bv))
 				copy_kernel_to_xregs(&init_fpstate.xsave, init_bv);
-			return copy_user_to_xregs(buf, xbv);
+			copy_kernel_to_xregs(&fpu->state.xsave, xbv);
 		}
 	} else if (use_fxsr()) {
-		return copy_user_to_fxregs(buf);
-	} else
-		return copy_user_to_fregs(buf);
+		copy_kernel_to_fxregs(&fpu->state.fxsave);
+	} else {
+		copy_kernel_to_fregs(&fpu->state.fsave);
+	}
+	clear_thread_flag(TIF_NEED_FPU_LOAD);
+	fpregs_activate(fpu);
+	__fpregs_changes_end();
 }
 
 static int __fpu__restore_sig(void __user *buf, void __user *buf_fx, int size)
@@ -255,6 +262,7 @@ static int __fpu__restore_sig(void __user *buf, void __user *buf_fx, int size)
 	int state_size = fpu_kernel_xstate_size;
 	u64 xfeatures = 0;
 	int fx_only = 0;
+	int err = 0;
 
 	ia32_fxstate &= (IS_ENABLED(CONFIG_X86_32) ||
 			 IS_ENABLED(CONFIG_IA32_EMULATION));
@@ -298,7 +306,6 @@ static int __fpu__restore_sig(void __user *buf, void __user *buf_fx, int size)
 		union fpregs_state *state;
 		void *tmp;
 		struct user_i387_ia32_struct env;
-		int err = 0;
 
 		tmp = kmalloc(sizeof(*state) + fpu_kernel_xstate_size + 64, GFP_KERNEL);
 		if (!tmp)
@@ -327,12 +334,16 @@ static int __fpu__restore_sig(void __user *buf, void __user *buf_fx, int size)
 	} else {
 		/*
 		 * For 64-bit frames and 32-bit fsave frames, restore the user
-		 * state to the registers directly (with exceptions handled).
+		 * state from a copy in thread's fpu state.
 		 */
-		if (copy_user_to_fpregs_zeroing(buf_fx, xfeatures, fx_only)) {
+		err = __copy_from_user(&fpu->state.xsave, buf_fx, state_size);
+		if (err) {
 			fpu__clear(fpu);
-			return -1;
+			return -EFAULT;
 		}
+		if ((unsigned long)buf_fx % 64)
+			fx_only = 1;
+		copy_to_fpregs_zeroing(fpu, xfeatures, fx_only);
 	}
 
 	return 0;
-- 
2.19.1