From: "Chang S. Bae"
To: bp@suse.de, luto@kernel.org, tglx@linutronix.de, mingo@kernel.org,
	x86@kernel.org
Cc: len.brown@intel.com, dave.hansen@intel.com, jing2.liu@intel.com,
	ravi.v.shankar@intel.com, linux-kernel@vger.kernel.org,
	chang.seok.bae@intel.com
Subject: [PATCH v5 02/28] x86/fpu/xstate: Modify state copy helpers to handle both static and dynamic buffers
Date: Sun, 23 May 2021 12:32:33 -0700
Message-Id: <20210523193259.26200-3-chang.seok.bae@intel.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20210523193259.26200-1-chang.seok.bae@intel.com>
References: <20210523193259.26200-1-chang.seok.bae@intel.com>
X-Mailing-List: linux-kernel@vger.kernel.org

Have all the functions copying xstate take a struct fpu * pointer in
preparation for dynamic state buffer support.

No functional change.

Signed-off-by: Chang S. Bae
Reviewed-by: Len Brown
Cc: x86@kernel.org
Cc: linux-kernel@vger.kernel.org
---
Changes from v3:
* Updated the changelog. (Borislav Petkov)

Changes from v2:
* Updated the changelog with task->fpu removed. (Borislav Petkov)
---
 arch/x86/include/asm/fpu/xstate.h |  8 ++++----
 arch/x86/kernel/fpu/regset.c      |  6 +++---
 arch/x86/kernel/fpu/signal.c      | 16 +++++++---------
 arch/x86/kernel/fpu/xstate.c      | 19 +++++++++++++++----
 4 files changed, 29 insertions(+), 20 deletions(-)

diff --git a/arch/x86/include/asm/fpu/xstate.h b/arch/x86/include/asm/fpu/xstate.h
index 47a92232d595..e0f1b22f53ce 100644
--- a/arch/x86/include/asm/fpu/xstate.h
+++ b/arch/x86/include/asm/fpu/xstate.h
@@ -105,10 +105,10 @@ const void *get_xsave_field_ptr(int xfeature_nr);
 int using_compacted_format(void);
 int xfeature_size(int xfeature_nr);
 struct membuf;
-void copy_xstate_to_kernel(struct membuf to, struct xregs_state *xsave);
-int copy_kernel_to_xstate(struct xregs_state *xsave, const void *kbuf);
-int copy_user_to_xstate(struct xregs_state *xsave, const void __user *ubuf);
-void copy_supervisor_to_kernel(struct xregs_state *xsave);
+void copy_xstate_to_kernel(struct membuf to, struct fpu *fpu);
+int copy_kernel_to_xstate(struct fpu *fpu, const void *kbuf);
+int copy_user_to_xstate(struct fpu *fpu, const void __user *ubuf);
+void copy_supervisor_to_kernel(struct fpu *fpu);
 void copy_dynamic_supervisor_to_kernel(struct xregs_state *xstate, u64 mask);
 void copy_kernel_to_dynamic_supervisor(struct xregs_state *xstate, u64 mask);
diff --git a/arch/x86/kernel/fpu/regset.c b/arch/x86/kernel/fpu/regset.c
index 4c4d9059ff36..5e13e58d11d4 100644
--- a/arch/x86/kernel/fpu/regset.c
+++ b/arch/x86/kernel/fpu/regset.c
@@ -85,7 +85,7 @@ int xstateregs_get(struct task_struct *target, const struct user_regset *regset,
 	fpu__prepare_read(fpu);
 
 	if (using_compacted_format()) {
-		copy_xstate_to_kernel(to, xsave);
+		copy_xstate_to_kernel(to, fpu);
 		return 0;
 	} else {
 		fpstate_sanitize_xstate(fpu);
@@ -126,9 +126,9 @@ int xstateregs_set(struct task_struct *target, const struct user_regset *regset,
 
 	if (using_compacted_format()) {
 		if (kbuf)
-			ret = copy_kernel_to_xstate(xsave, kbuf);
+			ret = copy_kernel_to_xstate(fpu, kbuf);
 		else
-			ret = copy_user_to_xstate(xsave, ubuf);
+			ret = copy_user_to_xstate(fpu, ubuf);
 	} else {
 		ret = user_regset_copyin(&pos, &count, &kbuf, &ubuf, xsave, 0, -1);
 		if (!ret)
diff --git a/arch/x86/kernel/fpu/signal.c b/arch/x86/kernel/fpu/signal.c
index a4ec65317a7f..0d6deb75c507 100644
--- a/arch/x86/kernel/fpu/signal.c
+++ b/arch/x86/kernel/fpu/signal.c
@@ -212,11 +212,11 @@ int copy_fpstate_to_sigframe(void __user *buf, void __user *buf_fx, int size)
 }
 
 static inline void
-sanitize_restored_user_xstate(union fpregs_state *state,
+sanitize_restored_user_xstate(struct fpu *fpu,
 			      struct user_i387_ia32_struct *ia32_env,
 			      u64 user_xfeatures, int fx_only)
 {
-	struct xregs_state *xsave = &state->xsave;
+	struct xregs_state *xsave = &fpu->state.xsave;
 	struct xstate_header *header = &xsave->header;
 
 	if (use_xsave()) {
@@ -253,7 +253,7 @@ sanitize_restored_user_xstate(union fpregs_state *state,
 		xsave->i387.mxcsr &= mxcsr_feature_mask;
 
 		if (ia32_env)
-			convert_to_fxsr(&state->fxsave, ia32_env);
+			convert_to_fxsr(&fpu->state.fxsave, ia32_env);
 	}
 }
 
@@ -396,7 +396,7 @@ static int __fpu__restore_sig(void __user *buf, void __user *buf_fx, int size)
 		 * current supervisor states first and invalidate the FPU regs.
 		 */
 		if (xfeatures_mask_supervisor())
-			copy_supervisor_to_kernel(&fpu->state.xsave);
+			copy_supervisor_to_kernel(fpu);
 		set_thread_flag(TIF_NEED_FPU_LOAD);
 	}
 	__fpu_invalidate_fpregs_state(fpu);
@@ -406,7 +406,7 @@ static int __fpu__restore_sig(void __user *buf, void __user *buf_fx, int size)
 		u64 init_bv = xfeatures_mask_user() & ~user_xfeatures;
 
 		if (using_compacted_format()) {
-			ret = copy_user_to_xstate(&fpu->state.xsave, buf_fx);
+			ret = copy_user_to_xstate(fpu, buf_fx);
 		} else {
 			ret = __copy_from_user(&fpu->state.xsave, buf_fx, state_size);
 
@@ -416,8 +416,7 @@ static int __fpu__restore_sig(void __user *buf, void __user *buf_fx, int size)
 		if (ret)
 			goto err_out;
 
-		sanitize_restored_user_xstate(&fpu->state, envp, user_xfeatures,
-					      fx_only);
+		sanitize_restored_user_xstate(fpu, envp, user_xfeatures, fx_only);
 
 		fpregs_lock();
 		if (unlikely(init_bv))
@@ -437,8 +436,7 @@ static int __fpu__restore_sig(void __user *buf, void __user *buf_fx, int size)
 			goto err_out;
 		}
 
-		sanitize_restored_user_xstate(&fpu->state, envp, user_xfeatures,
-					      fx_only);
+		sanitize_restored_user_xstate(fpu, envp, user_xfeatures, fx_only);
 
 		fpregs_lock();
 		if (use_xsave()) {
diff --git a/arch/x86/kernel/fpu/xstate.c b/arch/x86/kernel/fpu/xstate.c
index 767ad6b008c2..cb634c6afbb2 100644
--- a/arch/x86/kernel/fpu/xstate.c
+++ b/arch/x86/kernel/fpu/xstate.c
@@ -1071,14 +1071,17 @@ static void copy_part(struct membuf *to, unsigned *last, unsigned offset,
  * It supports partial copy but pos always starts from zero. This is called
  * from xstateregs_get() and there we check the CPU has XSAVES.
  */
-void copy_xstate_to_kernel(struct membuf to, struct xregs_state *xsave)
+void copy_xstate_to_kernel(struct membuf to, struct fpu *fpu)
 {
 	struct xstate_header header;
 	const unsigned off_mxcsr = offsetof(struct fxregs_state, mxcsr);
+	struct xregs_state *xsave;
 	unsigned size = to.left;
 	unsigned last = 0;
 	int i;
 
+	xsave = &fpu->state.xsave;
+
 	/*
 	 * The destination is a ptrace buffer; we put in only user xstates:
 	 */
@@ -1127,8 +1130,9 @@ void copy_xstate_to_kernel(struct membuf to, struct xregs_state *xsave)
 * Convert from a ptrace standard-format kernel buffer to kernel XSAVES format
 * and copy to the target thread. This is called from xstateregs_set().
 */
-int copy_kernel_to_xstate(struct xregs_state *xsave, const void *kbuf)
+int copy_kernel_to_xstate(struct fpu *fpu, const void *kbuf)
 {
+	struct xregs_state *xsave;
 	unsigned int offset, size;
 	int i;
 	struct xstate_header hdr;
@@ -1141,6 +1145,8 @@ int copy_kernel_to_xstate(struct xregs_state *xsave, const void *kbuf)
 	if (validate_user_xstate_header(&hdr))
 		return -EINVAL;
 
+	xsave = &fpu->state.xsave;
+
 	for (i = 0; i < XFEATURE_MAX; i++) {
 		u64 mask = ((u64)1 << i);
 
@@ -1180,8 +1186,9 @@ int copy_kernel_to_xstate(struct xregs_state *xsave, const void *kbuf)
 * xstateregs_set(), as well as potentially from the sigreturn() and
 * rt_sigreturn() system calls.
 */
-int copy_user_to_xstate(struct xregs_state *xsave, const void __user *ubuf)
+int copy_user_to_xstate(struct fpu *fpu, const void __user *ubuf)
 {
+	struct xregs_state *xsave;
 	unsigned int offset, size;
 	int i;
 	struct xstate_header hdr;
@@ -1195,6 +1202,8 @@ int copy_user_to_xstate(struct xregs_state *xsave, const void __user *ubuf)
 	if (validate_user_xstate_header(&hdr))
 		return -EINVAL;
 
+	xsave = &fpu->state.xsave;
+
 	for (i = 0; i < XFEATURE_MAX; i++) {
 		u64 mask = ((u64)1 << i);
 
@@ -1235,9 +1244,10 @@ int copy_user_to_xstate(struct xregs_state *xsave, const void __user *ubuf)
 * old states, and is intended to be used only in __fpu__restore_sig(), where
 * user states are restored from the user buffer.
 */
-void copy_supervisor_to_kernel(struct xregs_state *xstate)
+void copy_supervisor_to_kernel(struct fpu *fpu)
 {
 	struct xstate_header *header;
+	struct xregs_state *xstate;
 	u64 max_bit, min_bit;
 	u32 lmask, hmask;
 	int err, i;
@@ -1251,6 +1261,7 @@ void copy_supervisor_to_kernel(struct xregs_state *xstate)
 	max_bit = __fls(xfeatures_mask_supervisor());
 	min_bit = __ffs(xfeatures_mask_supervisor());
 
+	xstate = &fpu->state.xsave;
 	lmask = xfeatures_mask_supervisor();
 	hmask = xfeatures_mask_supervisor() >> 32;
 	XSTATE_OP(XSAVES, xstate, lmask, hmask, err);
-- 
2.17.1