From: "Chang S. Bae" <chang.seok.bae@intel.com>
Bae" To: bp@suse.de, luto@kernel.org, tglx@linutronix.de, mingo@kernel.org, x86@kernel.org Cc: len.brown@intel.com, dave.hansen@intel.com, jing2.liu@intel.com, ravi.v.shankar@intel.com, linux-kernel@vger.kernel.org, chang.seok.bae@intel.com Subject: [PATCH v4 09/22] x86/fpu/xstate: Introduce helpers to manage the xstate buffer dynamically Date: Sun, 21 Feb 2021 10:56:24 -0800 Message-Id: <20210221185637.19281-10-chang.seok.bae@intel.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20210221185637.19281-1-chang.seok.bae@intel.com> References: <20210221185637.19281-1-chang.seok.bae@intel.com> Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org The static per-task xstate buffer contains the extended register states -- but it is not expandable at runtime. Introduce runtime methods and a new fpu struct field to support the expansion. fpu->state_mask indicates which state components are reserved to be saved in the xstate buffer. alloc_xstate_buffer() uses vmalloc(). If use of this mechanism grows to allocate buffers larger than 64KB, a more sophisticated allocation scheme that includes purpose-built reclaim capability might be justified. Introduce a new helper -- get_xstate_size() to calculate the buffer size. Also, use the new field and helper to initialize the buffer. Signed-off-by: Chang S. Bae Reviewed-by: Len Brown Cc: x86@kernel.org Cc: linux-kernel@vger.kernel.org --- Changes from v3: * Updated code comments. (Borislav Petkov) * Used vzalloc() instead of vmalloc() with memset(). (Borislav Petkov) * Removed the max size check for >64KB. (Borislav Petkov) * Removed the allocation size check in the helper. (Borislav Petkov) * Switched the function description in the kernel-doc style. * Used them for buffer initialization -- moved from the next patch. Changes from v2: * Updated the changelog with task->fpu removed. (Borislav Petkov) * Replaced 'area' with 'buffer' in the comments and the changelog. * Updated the code comments. Changes from v1: * Removed unneeded interrupt masking (Andy Lutomirski) * Added vmalloc() error tracing (Dave Hansen, PeterZ, and Andy Lutomirski) --- arch/x86/include/asm/fpu/types.h | 7 ++ arch/x86/include/asm/fpu/xstate.h | 4 + arch/x86/include/asm/trace/fpu.h | 5 ++ arch/x86/kernel/fpu/core.c | 14 ++-- arch/x86/kernel/fpu/xstate.c | 125 ++++++++++++++++++++++++++++++ 5 files changed, 148 insertions(+), 7 deletions(-) diff --git a/arch/x86/include/asm/fpu/types.h b/arch/x86/include/asm/fpu/types.h index dcd28a545377..6fc707c14350 100644 --- a/arch/x86/include/asm/fpu/types.h +++ b/arch/x86/include/asm/fpu/types.h @@ -336,6 +336,13 @@ struct fpu { */ unsigned long avx512_timestamp; + /* + * @state_mask: + * + * The bitmap represents state components reserved to be saved in ->state. 
---
 arch/x86/include/asm/fpu/types.h  |   7 ++
 arch/x86/include/asm/fpu/xstate.h |   4 +
 arch/x86/include/asm/trace/fpu.h  |   5 ++
 arch/x86/kernel/fpu/core.c        |  14 ++--
 arch/x86/kernel/fpu/xstate.c      | 125 ++++++++++++++++++++++++++++++
 5 files changed, 148 insertions(+), 7 deletions(-)

diff --git a/arch/x86/include/asm/fpu/types.h b/arch/x86/include/asm/fpu/types.h
index dcd28a545377..6fc707c14350 100644
--- a/arch/x86/include/asm/fpu/types.h
+++ b/arch/x86/include/asm/fpu/types.h
@@ -336,6 +336,13 @@ struct fpu {
 	 */
 	unsigned long			avx512_timestamp;
 
+	/*
+	 * @state_mask:
+	 *
+	 * The bitmap represents state components reserved to be saved in
+	 * ->state.
+	 */
+	u64				state_mask;
+
 	/*
 	 * @state:
 	 *
diff --git a/arch/x86/include/asm/fpu/xstate.h b/arch/x86/include/asm/fpu/xstate.h
index 1fba2ca15874..cbb4795d2b45 100644
--- a/arch/x86/include/asm/fpu/xstate.h
+++ b/arch/x86/include/asm/fpu/xstate.h
@@ -112,6 +112,10 @@ extern unsigned int get_xstate_config(enum xstate_config cfg);
 void set_xstate_config(enum xstate_config cfg, unsigned int value);
 
 void *get_xsave_addr(struct fpu *fpu, int xfeature_nr);
+unsigned int get_xstate_size(u64 mask);
+int alloc_xstate_buffer(struct fpu *fpu, u64 mask);
+void free_xstate_buffer(struct fpu *fpu);
+
 const void *get_xsave_field_ptr(int xfeature_nr);
 int using_compacted_format(void);
 int xfeature_size(int xfeature_nr);
diff --git a/arch/x86/include/asm/trace/fpu.h b/arch/x86/include/asm/trace/fpu.h
index ef82f4824ce7..b691c2db47c7 100644
--- a/arch/x86/include/asm/trace/fpu.h
+++ b/arch/x86/include/asm/trace/fpu.h
@@ -89,6 +89,11 @@ DEFINE_EVENT(x86_fpu, x86_fpu_xstate_check_failed,
 	TP_ARGS(fpu)
 );
 
+DEFINE_EVENT(x86_fpu, x86_fpu_xstate_alloc_failed,
+	TP_PROTO(struct fpu *fpu),
+	TP_ARGS(fpu)
+);
+
 #undef TRACE_INCLUDE_PATH
 #define TRACE_INCLUDE_PATH asm/trace/
 #undef TRACE_INCLUDE_FILE
diff --git a/arch/x86/kernel/fpu/core.c b/arch/x86/kernel/fpu/core.c
index 60a581aa0be8..5debb1cd3c74 100644
--- a/arch/x86/kernel/fpu/core.c
+++ b/arch/x86/kernel/fpu/core.c
@@ -203,9 +203,8 @@ void fpstate_init(struct fpu *fpu)
 
 	if (fpu) {
 		state = fpu->state;
-		/* The dynamic user states are not prepared yet. */
-		mask = xfeatures_mask_all & ~xfeatures_mask_user_dynamic;
-		size = get_xstate_config(XSTATE_MIN_SIZE);
+		mask = fpu->state_mask;
+		size = get_xstate_size(fpu->state_mask);
 	} else {
 		state = &init_fpstate;
 		mask = xfeatures_mask_all;
@@ -241,14 +240,15 @@ int fpu__copy(struct task_struct *dst, struct task_struct *src)
 
 	WARN_ON_FPU(src_fpu != &current->thread.fpu);
 
+	/*
+	 * The child does not inherit the dynamic states. Thus, use the buffer
+	 * embedded in struct task_struct, which has the minimum size.
+	 */
+	dst_fpu->state_mask = (xfeatures_mask_all & ~xfeatures_mask_user_dynamic);
 	dst_fpu->state = &dst_fpu->__default_state;
-
 	/*
 	 * Don't let 'init optimized' areas of the XSAVE area
 	 * leak into the child task:
-	 *
-	 * The child does not inherit the dynamic states. So,
-	 * the xstate buffer has the minimum size.
 	 */
 	memset(&dst_fpu->state->xsave, 0, get_xstate_config(XSTATE_MIN_SIZE));
diff --git a/arch/x86/kernel/fpu/xstate.c b/arch/x86/kernel/fpu/xstate.c
index 8c067a7a0eec..86251b947403 100644
--- a/arch/x86/kernel/fpu/xstate.c
+++ b/arch/x86/kernel/fpu/xstate.c
@@ -10,6 +10,7 @@
 #include
 #include
 #include
+#include <linux/vmalloc.h>
 #include
 #include
 
@@ -19,6 +20,7 @@
 #include
 #include
+#include <asm/trace/fpu.h>
 
 /*
  * Although we spell it out in here, the Processor Trace
@@ -71,6 +73,11 @@ static unsigned int xstate_offsets[XFEATURE_MAX] = { [ 0 ... XFEATURE_MAX - 1]
 static unsigned int xstate_sizes[XFEATURE_MAX]   = { [ 0 ... XFEATURE_MAX - 1] = -1};
 static unsigned int xstate_comp_offsets[XFEATURE_MAX] = { [ 0 ... XFEATURE_MAX - 1] = -1};
 static unsigned int xstate_supervisor_only_offsets[XFEATURE_MAX] = { [ 0 ... XFEATURE_MAX - 1] = -1};
+/*
+ * True if the buffer of the corresponding XFEATURE is located on the next
+ * 64-byte boundary. Otherwise, it follows the preceding component
+ * immediately.
+ */
+static bool xstate_aligns[XFEATURE_MAX] = { [ 0 ... XFEATURE_MAX - 1] = false};
 
 /**
  * struct fpu_xstate_buffer_config - xstate per-task buffer configuration
@@ -168,6 +175,58 @@ static bool xfeature_is_supervisor(int xfeature_nr)
 	return ecx & 1;
 }
 
+/**
+ * get_xstate_size() - calculate an xstate buffer size
+ * @mask:	This bitmap tells which components are reserved in the buffer.
+ *
+ * Available once the offset, size, and alignment arrays are set up by
+ * setup_xstate_features().
+ *
+ * Returns:	The buffer size
+ */
+unsigned int get_xstate_size(u64 mask)
+{
+	unsigned int size;
+	u64 xmask;
+	int i, nr;
+
+	if (!mask)
+		return 0;
+
+	/*
+	 * The minimum buffer size excludes the dynamic user states. When a
+	 * task uses them, the buffer can grow up to the max size.
+	 */
+	if (mask == (xfeatures_mask_all & ~xfeatures_mask_user_dynamic))
+		return get_xstate_config(XSTATE_MIN_SIZE);
+	else if (mask == xfeatures_mask_all)
+		return get_xstate_config(XSTATE_MAX_SIZE);
+
+	nr = fls64(mask) - 1;
+
+	if (!using_compacted_format())
+		return xstate_offsets[nr] + xstate_sizes[nr];
+
+	xmask = BIT_ULL(nr + 1) - 1;
+
+	if (mask == (xmask & xfeatures_mask_all))
+		return xstate_comp_offsets[nr] + xstate_sizes[nr];
+
+	/*
+	 * With the given mask, no relevant size is found so far. So,
+	 * calculate it by summing up each state size.
+	 */
+	for (size = FXSAVE_SIZE + XSAVE_HDR_SIZE, i = FIRST_EXTENDED_XFEATURE; i <= nr; i++) {
+		if (!(mask & BIT_ULL(i)))
+			continue;
+
+		if (xstate_aligns[i])
+			size = ALIGN(size, 64);
+		size += xstate_sizes[i];
+	}
+	return size;
+}
+
 /*
  * When executing XSAVEOPT (or other optimized XSAVE instructions), if
  * a processor implementation detects that an FPU state component is still
@@ -308,10 +367,12 @@ static void __init setup_xstate_features(void)
 
 	xstate_offsets[XFEATURE_FP]	= 0;
 	xstate_sizes[XFEATURE_FP]	= offsetof(struct fxregs_state, xmm_space);
+	xstate_aligns[XFEATURE_FP]	= true;
 
 	xstate_offsets[XFEATURE_SSE]	= xstate_sizes[XFEATURE_FP];
 	xstate_sizes[XFEATURE_SSE]	= sizeof_field(struct fxregs_state, xmm_space);
+	xstate_aligns[XFEATURE_SSE]	= true;
 
 	for (i = FIRST_EXTENDED_XFEATURE; i < XFEATURE_MAX; i++) {
 		if (!xfeature_enabled(i))
@@ -329,6 +390,7 @@ static void __init setup_xstate_features(void)
 			continue;
 
 		xstate_offsets[i] = ebx;
+		xstate_aligns[i] = (ecx & 2) ? true : false;
 
 		/*
 		 * In our xstate size checks, we assume that the highest-numbered
@@ -915,6 +977,9 @@ void __init fpu__init_system_xstate(void)
 	if (err)
 		goto out_disable;
 
+	/* Make sure init_task does not include the dynamic user states. */
+	current->thread.fpu.state_mask = (xfeatures_mask_all & ~xfeatures_mask_user_dynamic);
+
 	/*
 	 * Update info used for ptrace frames; use standard-format size and no
 	 * supervisor xstates:
@@ -1135,6 +1200,66 @@ static inline bool xfeatures_mxcsr_quirk(u64 xfeatures)
 	return true;
 }
 
+void free_xstate_buffer(struct fpu *fpu)
+{
+	/* Free up only the dynamically-allocated memory. */
+	if (fpu->state != &fpu->__default_state)
+		vfree(fpu->state);
+}
+
+/**
+ * alloc_xstate_buffer() - allocate an xstate buffer with the size
+ *			   calculated based on @mask
+ *
+ * @fpu:	A struct fpu * pointer
+ * @mask:	The bitmap tells which components are to be reserved in the
+ *		new buffer.
+ *
+ * Simply use vmalloc() here. If a task with a vmalloc()-allocated buffer
+ * tends to terminate quickly, vfree()-induced IPIs may be a concern.
+ * Caching may be helpful for this. But a task with a large state is likely
+ * to live longer.
+ *
+ * Also, this method does not shrink or reclaim the buffer.
+ *
+ * Returns 0 on success, -ENOMEM on allocation error.
+ */
+int alloc_xstate_buffer(struct fpu *fpu, u64 mask)
+{
+	union fpregs_state *state;
+	unsigned int oldsz, newsz;
+	u64 state_mask;
+
+	state_mask = fpu->state_mask | mask;
+
+	oldsz = get_xstate_size(fpu->state_mask);
+	newsz = get_xstate_size(state_mask);
+
+	if (oldsz >= newsz)
+		return 0;
+
+	state = vzalloc(newsz);
+	if (!state) {
+		/*
+		 * When the allocation is requested from #NM, the error code
+		 * may not be populated well. Then, this tracepoint is useful
+		 * for providing the failure context.
+		 */
+		trace_x86_fpu_xstate_alloc_failed(fpu);
+		return -ENOMEM;
+	}
+
+	if (using_compacted_format())
+		fpstate_init_xstate(&state->xsave, state_mask);
+
+	/*
+	 * As long as the register state is intact, save the xstate in the
+	 * new buffer at the next context copy/switch or potentially
+	 * ptrace-driven xstate writing.
+	 */
+
+	free_xstate_buffer(fpu);
+	fpu->state = state;
+	fpu->state_mask = state_mask;
+	return 0;
+}
+
 static void fill_gap(struct membuf *to, unsigned *last, unsigned offset)
 {
 	if (*last >= offset)
-- 
2.17.1