From: Rick Edgecombe <rick.p.edgecombe@intel.com>
To: x86@kernel.org, H. Peter Anvin, Thomas Gleixner, Ingo Molnar,
 linux-kernel@vger.kernel.org, linux-doc@vger.kernel.org,
 linux-mm@kvack.org, linux-arch@vger.kernel.org, linux-api@vger.kernel.org,
 Arnd Bergmann, Andy Lutomirski, Balbir Singh, Borislav Petkov,
 Cyrill Gorcunov, Dave Hansen, Eugene Syromiatnikov, Florian Weimer,
 H. J. Lu, Jann Horn, Jonathan Corbet, Kees Cook, Mike Kravetz,
 Nadav Amit, Oleg Nesterov, Pavel Machek, Peter Zijlstra, Randy Dunlap,
 Ravi V. Shankar, Dave Martin, Weijiang Yang, Kirill A. Shutemov,
 joao.moreira@intel.com, John Allen, kcc@google.com, eranian@google.com
Cc: rick.p.edgecombe@intel.com
Subject: [PATCH 27/35] x86/fpu: Add unsafe xsave buffer helpers
Date: Sun, 30 Jan 2022 13:18:30 -0800
Message-Id: <20220130211838.8382-28-rick.p.edgecombe@intel.com>
In-Reply-To: <20220130211838.8382-1-rick.p.edgecombe@intel.com>
References: <20220130211838.8382-1-rick.p.edgecombe@intel.com>

CET will need to modify the xsave buffer of a new FPU that was just
created in the process of copying a thread. In this case the normal
helpers will not work, because they operate on the current thread's FPU.
So add unsafe helpers to allow for this kind of modification. Make the
unsafe helpers operate on the MSR like the safe helpers for symmetry and
to avoid exposing the underlying xsave structures. Don't add a read
helper because it is not needed at this time.
Signed-off-by: Rick Edgecombe <rick.p.edgecombe@intel.com>
---
 arch/x86/include/asm/fpu/api.h |  9 ++++++---
 arch/x86/kernel/fpu/xstate.c   | 27 ++++++++++++++++++++++-----
 2 files changed, 28 insertions(+), 8 deletions(-)

diff --git a/arch/x86/include/asm/fpu/api.h b/arch/x86/include/asm/fpu/api.h
index 6aec27984b62..5cb557b9d118 100644
--- a/arch/x86/include/asm/fpu/api.h
+++ b/arch/x86/include/asm/fpu/api.h
@@ -167,7 +167,10 @@ extern long fpu_xstate_prctl(struct task_struct *tsk, int option, unsigned long
 void *start_update_xsave_msrs(int xfeature_nr);
 void end_update_xsave_msrs(void);
-int xsave_rdmsrl(void *state, unsigned int msr, unsigned long long *p);
-int xsave_wrmsrl(void *state, u32 msr, u64 val);
-int xsave_set_clear_bits_msrl(void *state, u32 msr, u64 set, u64 clear);
+int xsave_rdmsrl(void *xstate, unsigned int msr, unsigned long long *p);
+int xsave_wrmsrl(void *xstate, u32 msr, u64 val);
+int xsave_set_clear_bits_msrl(void *xstate, u32 msr, u64 set, u64 clear);
+
+void *get_xsave_buffer_unsafe(struct fpu *fpu, int xfeature_nr);
+int xsave_wrmsrl_unsafe(void *xstate, u32 msr, u64 val);
 
 #endif /* _ASM_X86_FPU_API_H */
diff --git a/arch/x86/kernel/fpu/xstate.c b/arch/x86/kernel/fpu/xstate.c
index 25b1b0c417fd..71b08026474c 100644
--- a/arch/x86/kernel/fpu/xstate.c
+++ b/arch/x86/kernel/fpu/xstate.c
@@ -1881,6 +1881,17 @@ static u64 *__get_xsave_member(void *xstate, u32 msr)
 	}
 }
 
+/*
+ * Operate on the xsave buffer directly. It makes no guarantees that the
+ * buffer will stay valid now or in the future. This function is pretty
+ * much only useful when the caller knows the fpu's thread can't be
+ * scheduled or otherwise operated on concurrently.
+ */
+void *get_xsave_buffer_unsafe(struct fpu *fpu, int xfeature_nr)
+{
+	return get_xsave_addr(&fpu->fpstate->regs.xsave, xfeature_nr);
+}
+
 /*
  * Return a pointer to the xstate for the feature if it should be used, or NULL
  * if the MSRs should be written to directly. To do this safely, using the
@@ -1971,14 +1982,11 @@ int xsave_rdmsrl(void *xstate, unsigned int msr, unsigned long long *p)
 	return 0;
 }
 
-int xsave_wrmsrl(void *xstate, u32 msr, u64 val)
+
+int xsave_wrmsrl_unsafe(void *xstate, u32 msr, u64 val)
 {
 	u64 *member_ptr;
 
-	__xsave_msrl_prepare_write();
-	if (!xstate)
-		return wrmsrl_safe(msr, val);
-
 	member_ptr = __get_xsave_member(xstate, msr);
 	if (!member_ptr)
 		return 1;
@@ -1988,6 +1996,15 @@ int xsave_wrmsrl(void *xstate, u32 msr, u64 val)
 	return 0;
 }
 
+int xsave_wrmsrl(void *xstate, u32 msr, u64 val)
+{
+	__xsave_msrl_prepare_write();
+	if (!xstate)
+		return wrmsrl_safe(msr, val);
+
+	return xsave_wrmsrl_unsafe(xstate, msr, val);
+}
+
 int xsave_set_clear_bits_msrl(void *xstate, u32 msr, u64 set, u64 clear)
 {
 	u64 val, new_val;
-- 
2.17.1