Date: Tue, 18 Apr 2023 19:44:43 +0300
From: Mikko Ylinen
To: Anand Krishnamoorthi
Cc: Kristen Carlson Accardi, jarkko@kernel.org, dave.hansen@linux.intel.com,
    tj@kernel.org, linux-kernel@vger.kernel.org, linux-sgx@vger.kernel.org,
    cgroups@vger.kernel.org, "Bo Zhang (ACC)", zhiquan1.li@intel.com
Subject: Re: [EXTERNAL] [PATCH v2 00/18] Add Cgroup support for SGX EPC memory
References: <20221202183655.3767674-1-kristen@linux.intel.com>

Hi,

On Thu, Apr 13, 2023 at 06:49:53PM +0000, Anand Krishnamoorthi wrote:
> For Azure, the SGX cgroup support feature is very useful.
> It is needed to enforce the EPC resource limits of Kubernetes pods on
> SGX nodes.

I've been working on enabling the same use case, with the difference
that I'm setting per-container EPC limits (instead of per-pod limits).
The Open Container Initiative (OCI) runtime spec [1] defines how this
is done, and with the misc controller implemented here, a
"misc.max": "sgx_epc 42" setting for a container is supported by runc
out of the box. (A sketch of such a config fragment is at the end of
this mail.)

In addition to making it possible to set limits per container/pod, the
cgroup for SGX EPC helps to build better telemetry/monitoring for EPC
consumption.

[1] https://github.com/opencontainers/runtime-spec/blob/main/config-linux.md#unified

> Today, in Azure Kubernetes Service, each pod on an SGX node claims a
> nominal EPC memory requirement, and K8s tracks the unclaimed EPC
> memory on SGX nodes to schedule pods. However, there is no
> enforcement on the node when a pod uses more EPC memory than it
> claims. If EPC runs out on the node, the kernel will do EPC paging,
> causing all pods to suffer performance degradation.
>
> Cgroup support for EPC will enforce EPC resource limits at the pod
> level, so that when a pod tries to use more EPC than it claims, it is
> EPC paged while other pods are not affected.
>
> -Anand
>
> From: Anand Krishnamoorthi
> Sent: Monday, April 3, 2023 2:26 PM
> To: Kristen Carlson Accardi; jarkko@kernel.org;
>     dave.hansen@linux.intel.com; tj@kernel.org;
>     linux-kernel@vger.kernel.org; linux-sgx@vger.kernel.org;
>     cgroups@vger.kernel.org; Bo Zhang (ACC)
> Cc: zhiquan1.li@intel.com
> Subject: Re: [EXTERNAL] [PATCH v2 00/18] Add Cgroup support for SGX EPC memory
>
> Adding Bo Zhang to thread.
>
> -Anand
>
> From: Kristen Carlson Accardi
> Sent: Friday, December 2, 2022 10:36 AM
> To: jarkko@kernel.org; dave.hansen@linux.intel.com; tj@kernel.org;
>     linux-kernel@vger.kernel.org; linux-sgx@vger.kernel.org;
>     cgroups@vger.kernel.org
> Cc: zhiquan1.li@intel.com
> Subject: [EXTERNAL] [PATCH v2 00/18] Add Cgroup support for SGX EPC memory
>
> Utilize the Miscellaneous cgroup controller to regulate the
> distribution of SGX EPC memory, which is a subset of system RAM that
> is used to provide SGX-enabled applications with protected memory,
> and is otherwise inaccessible.
>
> SGX EPC memory allocations are separate from normal RAM allocations
> and are managed solely by the SGX subsystem. The existing cgroup
> memory controller cannot be used to limit or account for SGX EPC
> memory.
>
> This patchset implements support for sgx_epc memory within the misc
> cgroup controller, and then utilizes the misc cgroup controller to
> provide support for setting the total system capacity, a max limit
> per cgroup, and events.
>
> This work was originally authored by Sean Christopherson a few years
> ago and was modified to work with more recent kernels and to utilize
> the misc cgroup controller rather than a custom controller. It is
> currently based on top of the MCA patches.
>
> Here's the MCA patchset for reference:
> https://lore.kernel.org/linux-sgx/2d52c8c4-8ed0-6df2-2911-da5b9fcc9ae4@intel.com/T/#t
>
> The patchset adds support for multiple LRUs to track both reclaimable
> EPC pages (i.e. pages the reclaimer knows about) and unreclaimable
> EPC pages (i.e. pages the reclaimer isn't aware of, such as VA
> pages). These pages are assigned to an LRU, as well as an enclave, so
> that an enclave's full EPC usage can be tracked and limited to a max
> value.
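
The two-list structure described here might be easier to follow with a
minimal sketch. This is my reading of the cover letter and the v2
changelog, not the patch's exact code, so the field names are
assumptions:

    #include <linux/list.h>
    #include <linux/spinlock.h>

    /*
     * One pair of LRU lists per tracking domain: pages the reclaimer
     * scans, and pages it never reclaims on its own (e.g. VA pages),
     * so that an enclave's full EPC footprint stays accounted for.
     */
    struct sgx_epc_lru_lists {
            spinlock_t lock;
            struct list_head reclaimable;
            struct list_head unreclaimable;
    };
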
> During OOM events, an enclave can have its memory zapped, and all the
> EPC pages not tracked by the reclaimer can be freed.
>
> I appreciate your comments and feedback.
>
> Changelog:
>
> v2:
>  * Rename struct sgx_epc_lru to sgx_epc_lru_lists to make it clearer
>    that this struct contains 2 lists.
>  * Use inline functions rather than macros for sgx_epc_page_list*
>    wrappers.
>  * Remove flags macros and open code all flags.
>  * Improve the commit message for the RECLAIM_IN_PROGRESS patch to
>    make it clearer what the patch does.
>  * Remove notifier_block from the misc cgroup changes and use a set
>    of ops for callbacks instead.
>  * Rename root_misc to misc_cg_root and parent_misc to misc_cg_parent.
>  * Consolidate the misc cgroup changes to 2 patches and remove most
>    of the previous helper functions.
>
> Kristen Carlson Accardi (7):
>   x86/sgx: Add 'struct sgx_epc_lru_lists' to encapsulate lru list(s)
>   x86/sgx: Use sgx_epc_lru_lists for existing active page list
>   x86/sgx: Track epc pages on reclaimable or unreclaimable lists
>   cgroup/misc: Add per resource callbacks for css events
>   cgroup/misc: Prepare for SGX usage
>   x86/sgx: Add support for misc cgroup controller
>   Docs/x86/sgx: Add description for cgroup support
>
> Sean Christopherson (11):
>   x86/sgx: Call cond_resched() at the end of sgx_reclaim_pages()
>   x86/sgx: Store struct sgx_encl when allocating new VA pages
>   x86/sgx: Introduce RECLAIM_IN_PROGRESS flag for EPC pages
>   x86/sgx: Use a list to track to-be-reclaimed pages during reclaim
>   x86/sgx: Allow reclaiming up to 32 pages, but scan 16 by default
>   x86/sgx: Return the number of EPC pages that were successfully
>     reclaimed
>   x86/sgx: Add option to ignore age of page during EPC reclaim
>   x86/sgx: Prepare for multiple LRUs
>   x86/sgx: Expose sgx_reclaim_pages() for use by EPC cgroup
>   x86/sgx: Add helper to grab pages from an arbitrary EPC LRU
>   x86/sgx: Add EPC OOM path to forcefully reclaim EPC
>
>  Documentation/x86/sgx.rst            |  77 ++++
>  arch/x86/Kconfig                     |  13 +
>  arch/x86/kernel/cpu/sgx/Makefile     |   1 +
>  arch/x86/kernel/cpu/sgx/encl.c       |  90 ++++-
>  arch/x86/kernel/cpu/sgx/encl.h       |   4 +-
>  arch/x86/kernel/cpu/sgx/epc_cgroup.c | 539 +++++++++++++++++++++++++++
>  arch/x86/kernel/cpu/sgx/epc_cgroup.h |  59 +++
>  arch/x86/kernel/cpu/sgx/ioctl.c      |  14 +-
>  arch/x86/kernel/cpu/sgx/main.c       | 412 ++++++++++++++++----
>  arch/x86/kernel/cpu/sgx/sgx.h        | 122 +++++-
>  arch/x86/kernel/cpu/sgx/virt.c       |  28 +-
>  include/linux/misc_cgroup.h          |  35 ++
>  kernel/cgroup/misc.c                 |  76 +++-
>  13 files changed, 1341 insertions(+), 129 deletions(-)
>  create mode 100644 arch/x86/kernel/cpu/sgx/epc_cgroup.c
>  create mode 100644 arch/x86/kernel/cpu/sgx/epc_cgroup.h
>
> --
> 2.38.1

--
Regards, Mikko
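
P.S. For concreteness, the OCI config fragment I referred to above
looks roughly like the following. This is a sketch based on the
runtime-spec "unified" section [1]; the "42" limit is just the example
value from above, and the rest of a real config.json is omitted:

    {
      "linux": {
        "resources": {
          "unified": {
            "misc.max": "sgx_epc 42"
          }
        }
      }
    }

As I understand it, runc writes each key in "unified" verbatim into
the container's cgroup directory, so this should end up as
"sgx_epc 42" in that cgroup's misc.max file.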