From: Haitao Huang
To: jarkko@kernel.org, dave.hansen@linux.intel.com, kai.huang@intel.com,
    tj@kernel.org, mkoutny@suse.com, linux-kernel@vger.kernel.org,
    linux-sgx@vger.kernel.org, x86@kernel.org, cgroups@vger.kernel.org,
    tglx@linutronix.de, mingo@redhat.com, bp@alien8.de, hpa@zytor.com,
    sohil.mehta@intel.com, tim.c.chen@linux.intel.com
Cc: zhiquan1.li@intel.com, kristen@linux.intel.com, seanjc@google.com,
    zhanb@microsoft.com, anakrish@microsoft.com, mikko.ylinen@linux.intel.com,
    yangjie@microsoft.com, chrisyan@microsoft.com
Subject: [PATCH v10 08/14] x86/sgx: Add basic EPC reclamation flow for cgroup
Date: Wed, 27 Mar 2024 17:22:23 -0700
Message-Id: <20240328002229.30264-9-haitao.huang@linux.intel.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20240328002229.30264-1-haitao.huang@linux.intel.com>
References: <20240328002229.30264-1-haitao.huang@linux.intel.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Kristen Carlson Accardi

When a cgroup's usage reaches its limit and it is to be charged, i.e.,
sgx_cgroup_try_charge() is called for a new allocation, the cgroup needs to
reclaim pages from its LRU or the LRUs of its descendants to make room for
the new allocation. This patch adds the basic building block for the
per-cgroup reclamation flow and uses it for synchronous reclamation in
sgx_cgroup_try_charge().

First, modify sgx_reclaim_pages() to let callers pass in the LRU from which
pages are reclaimed, so it can be reused by both the global and cgroup
reclaimers. Also return the number of pages attempted, so a cgroup reclaimer
can use it to track reclamation progress across its descendants.

For the global reclaimer, replace all call sites of sgx_reclaim_pages() with
calls to a newly created wrapper, sgx_reclaim_pages_global(), which just
calls sgx_reclaim_pages() with the global LRU passed in.

For cgroup reclamation, implement a basic reclamation flow, encapsulated in
the top-level function sgx_cgroup_reclaim_pages(). It performs a pre-order
walk on a given cgroup subtree and calls sgx_reclaim_pages() at each node,
passing in the LRU of that node. It keeps track of the total number of pages
attempted and stops the walk once the desired number of pages has been
attempted.

Finally, pass a parameter to sgx_cgroup_try_charge() to indicate whether
synchronous reclamation is allowed. If the caller allows it and cgroup usage
is at its limit, trigger synchronous reclamation by calling
sgx_cgroup_reclaim_pages() in a loop, with cond_resched() between iterations.
A later patch will add support for asynchronous reclamation, reusing
sgx_cgroup_reclaim_pages().
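For illustration, the limit whose exhaustion triggers this reclamation path
is set through the standard misc controller files. A minimal sketch, assuming
cgroup v2 is mounted at /sys/fs/cgroup, the EPC resource is exposed under the
name "sgx_epc" as registered earlier in this series, and "sgx_test" is just an
example group name:

  # create a group and cap its EPC usage at 64 MiB
  # (resource name "sgx_epc" assumed from this series)
  mkdir /sys/fs/cgroup/sgx_test
  echo "sgx_epc 67108864" > /sys/fs/cgroup/sgx_test/misc.max

  # misc.current shows per-resource usage; once it reaches the limit,
  # sgx_cgroup_try_charge() reclaims from the group's LRUs, or fails when
  # reclaim is not allowed, as described above
  cat /sys/fs/cgroup/sgx_test/misc.current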
Co-developed-by: Sean Christopherson
Signed-off-by: Sean Christopherson
Signed-off-by: Kristen Carlson Accardi
Co-developed-by: Haitao Huang
Signed-off-by: Haitao Huang
---
V10:
- Simplify the signature by removing a pointer to nr_to_scan (Kai)
- Return pages attempted instead of reclaimed as it is really what the cgroup
  caller needs to track progress. This further simplifies the design.
- Merge patch for exposing sgx_reclaim_pages() with basic synchronous
  reclamation. (Kai)
- Shorten names for EPC cgroup functions. (Jarkko)
- Fix/add comments to justify the design (Kai)
- Separate out a helper for addressing a single iteration of the loop in
  sgx_cgroup_try_charge(). (Jarkko)

V9:
- Add comments for static variables. (Jarkko)

V8:
- Use width of 80 characters in text paragraphs. (Jarkko)
- Remove alignment for substructure variables. (Jarkko)

V7:
- Reworked from patch 9 of V6, "x86/sgx: Restructure top-level EPC reclaim
  function". Do not split the top level function (Kai)
- Dropped patches 7 and 8 of V6.
- Split this out from the big patch, #10 in V6. (Dave, Kai)
---
 arch/x86/kernel/cpu/sgx/epc_cgroup.c | 127 ++++++++++++++++++++++++++-
 arch/x86/kernel/cpu/sgx/epc_cgroup.h |   5 +-
 arch/x86/kernel/cpu/sgx/main.c       |  45 ++++++----
 arch/x86/kernel/cpu/sgx/sgx.h        |   1 +
 4 files changed, 156 insertions(+), 22 deletions(-)

diff --git a/arch/x86/kernel/cpu/sgx/epc_cgroup.c b/arch/x86/kernel/cpu/sgx/epc_cgroup.c
index a1dd43c195b2..f7a487a29ed1 100644
--- a/arch/x86/kernel/cpu/sgx/epc_cgroup.c
+++ b/arch/x86/kernel/cpu/sgx/epc_cgroup.c
@@ -9,16 +9,136 @@
 static struct sgx_cgroup sgx_cg_root;
 
 /**
- * sgx_cgroup_try_charge() - try to charge cgroup for a single EPC page
+ * sgx_cgroup_lru_empty() - check if a cgroup tree has no pages on its LRUs
+ * @root:	Root of the tree to check
+ *
+ * Used to avoid livelocks due to a cgroup having a non-zero charge count but
+ * no pages on its LRUs, e.g. due to a dead enclave waiting to be released or
+ * because all pages in the cgroup are unreclaimable.
+ *
+ * Return: %true if all cgroups under the specified root have empty LRU lists.
+ */
+static bool sgx_cgroup_lru_empty(struct misc_cg *root)
+{
+	struct cgroup_subsys_state *css_root;
+	struct cgroup_subsys_state *pos;
+	struct sgx_cgroup *sgx_cg;
+	bool ret = true;
+
+	/*
+	 * Caller ensure css_root ref acquired
+	 */
+	css_root = &root->css;
+
+	rcu_read_lock();
+	css_for_each_descendant_pre(pos, css_root) {
+		if (!css_tryget(pos))
+			break;
+
+		rcu_read_unlock();
+
+		sgx_cg = sgx_cgroup_from_misc_cg(css_misc(pos));
+
+		spin_lock(&sgx_cg->lru.lock);
+		ret = list_empty(&sgx_cg->lru.reclaimable);
+		spin_unlock(&sgx_cg->lru.lock);
+
+		rcu_read_lock();
+		css_put(pos);
+		if (!ret)
+			break;
+	}
+
+	rcu_read_unlock();
+
+	return ret;
+}
+
+/**
+ * sgx_cgroup_reclaim_pages() - reclaim EPC from a cgroup tree
+ * @root:	The root of cgroup tree to reclaim from.
  *
+ * This function performs a pre-order walk in the cgroup tree under the given
+ * root, attempting to reclaim pages at each node until a fixed number of pages
+ * (%SGX_NR_TO_SCAN) are attempted for reclamation. No guarantee of success on
+ * the actual reclamation process. In extreme cases, if all pages in front of
+ * the LRUs are recently accessed, i.e., considered "too young" to reclaim, no
+ * page will actually be reclaimed after walking the whole tree.
+ *
+ * Callers check for the need for reclamation before calling this function.
+ * Some callers may run this function in a loop guarded by some criteria for
+ * triggering reclamation, and call cond_resched() in between iterations to
+ * avoid indefinite blocking.
+ */
+static void sgx_cgroup_reclaim_pages(struct misc_cg *root)
+{
+	struct cgroup_subsys_state *css_root;
+	struct cgroup_subsys_state *pos;
+	struct sgx_cgroup *sgx_cg;
+	unsigned int cnt = 0;
+
+	/* Caller ensure css_root ref acquired */
+	css_root = &root->css;
+
+	rcu_read_lock();
+	css_for_each_descendant_pre(pos, css_root) {
+		if (!css_tryget(pos))
+			break;
+		rcu_read_unlock();
+
+		sgx_cg = sgx_cgroup_from_misc_cg(css_misc(pos));
+		cnt += sgx_reclaim_pages(&sgx_cg->lru);
+
+		rcu_read_lock();
+		css_put(pos);
+
+		if (cnt >= SGX_NR_TO_SCAN)
+			break;
+	}
+
+	rcu_read_unlock();
+}
+
+static int __sgx_cgroup_try_charge(struct sgx_cgroup *epc_cg)
+{
+	if (!misc_cg_try_charge(MISC_CG_RES_SGX_EPC, epc_cg->cg, PAGE_SIZE))
+		return 0;
+
+	if (sgx_cgroup_lru_empty(epc_cg->cg))
+		return -ENOMEM;
+
+	if (signal_pending(current))
+		return -ERESTARTSYS;
+
+	return -EBUSY;
+}
+
+/**
+ * sgx_cgroup_try_charge() - try to charge cgroup for a single EPC page
  * @sgx_cg:	The EPC cgroup to be charged for the page.
+ * @reclaim:	Whether or not synchronous EPC reclaim is allowed.
  * Return:
  * * %0 - If successfully charged.
  * * -errno - for failures.
  */
-int sgx_cgroup_try_charge(struct sgx_cgroup *sgx_cg)
+int sgx_cgroup_try_charge(struct sgx_cgroup *sgx_cg, enum sgx_reclaim reclaim)
 {
-	return misc_cg_try_charge(MISC_CG_RES_SGX_EPC, sgx_cg->cg, PAGE_SIZE);
+	int ret;
+
+	for (;;) {
+		ret = __sgx_cgroup_try_charge(sgx_cg);
+
+		if (ret != -EBUSY)
+			return ret;
+
+		if (reclaim == SGX_NO_RECLAIM)
+			return -ENOMEM;
+
+		sgx_cgroup_reclaim_pages(sgx_cg->cg);
+		cond_resched();
+	}
+
+	return 0;
 }
 
 /**
@@ -50,6 +170,7 @@ const struct misc_res_ops sgx_cgroup_ops = {
 
 static void sgx_cgroup_misc_init(struct misc_cg *cg, struct sgx_cgroup *sgx_cg)
 {
+	sgx_lru_init(&sgx_cg->lru);
 	cg->res[MISC_CG_RES_SGX_EPC].priv = sgx_cg;
 	sgx_cg->cg = cg;
 }
diff --git a/arch/x86/kernel/cpu/sgx/epc_cgroup.h b/arch/x86/kernel/cpu/sgx/epc_cgroup.h
index 8f794e23fad6..f62dce0cac51 100644
--- a/arch/x86/kernel/cpu/sgx/epc_cgroup.h
+++ b/arch/x86/kernel/cpu/sgx/epc_cgroup.h
@@ -20,7 +20,7 @@ static inline struct sgx_cgroup *sgx_get_current_cg(void)
 
 static inline void sgx_put_cg(struct sgx_cgroup *sgx_cg) { }
 
-static inline int sgx_cgroup_try_charge(struct sgx_cgroup *sgx_cg)
+static inline int sgx_cgroup_try_charge(struct sgx_cgroup *sgx_cg, enum sgx_reclaim r)
 {
 	return 0;
 }
@@ -31,6 +31,7 @@ static inline void sgx_cgroup_init(void) { }
 #else
 struct sgx_cgroup {
 	struct misc_cg *cg;
+	struct sgx_epc_lru_list lru;
 };
 
 static inline struct sgx_cgroup *sgx_cgroup_from_misc_cg(struct misc_cg *cg)
@@ -61,7 +62,7 @@ static inline void sgx_put_cg(struct sgx_cgroup *sgx_cg)
 	put_misc_cg(sgx_cg->cg);
 }
 
-int sgx_cgroup_try_charge(struct sgx_cgroup *sgx_cg);
+int sgx_cgroup_try_charge(struct sgx_cgroup *sgx_cg, enum sgx_reclaim reclaim);
 void sgx_cgroup_uncharge(struct sgx_cgroup *sgx_cg);
 void sgx_cgroup_init(void);
 
diff --git a/arch/x86/kernel/cpu/sgx/main.c b/arch/x86/kernel/cpu/sgx/main.c
index 8f83f7ac386e..aaf341abc641 100644
--- a/arch/x86/kernel/cpu/sgx/main.c
+++ b/arch/x86/kernel/cpu/sgx/main.c
@@ -286,11 +286,14 @@ static void sgx_reclaimer_write(struct sgx_epc_page *epc_page,
 	mutex_unlock(&encl->lock);
 }
 
-/*
- * Take a fixed number of pages from the head of the active page pool and
- * reclaim them to the enclave's private shmem files. Skip the pages, which have
- * been accessed since the last scan. Move those pages to the tail of active
- * page pool so that the pages get scanned in LRU like fashion.
+/**
+ * sgx_reclaim_pages() - Attempt to reclaim a fixed number of pages from an LRU
+ * @lru:	The LRU from which pages are reclaimed.
+ *
+ * Take a fixed number of pages from the head of a given LRU and reclaim them to
+ * the enclave's private shmem files. Skip the pages, which have been accessed
+ * since the last scan. Move those pages to the tail of the list so that the
+ * pages get scanned in LRU like fashion.
  *
  * Batch process a chunk of pages (at the moment 16) in order to degrade amount
  * of IPI's and ETRACK's potentially required. sgx_encl_ewb() does degrade a bit
@@ -298,8 +301,10 @@ static void sgx_reclaimer_write(struct sgx_epc_page *epc_page,
  * + EWB) but not sufficiently. Reclaiming one page at a time would also be
  * problematic as it would increase the lock contention too much, which would
  * halt forward progress.
+ *
+ * Return: Number of pages attempted for reclamation.
  */
-static void sgx_reclaim_pages(void)
+unsigned int sgx_reclaim_pages(struct sgx_epc_lru_list *lru)
 {
 	struct sgx_epc_page *chunk[SGX_NR_TO_SCAN];
 	struct sgx_backing backing[SGX_NR_TO_SCAN];
@@ -310,10 +315,9 @@ static void sgx_reclaim_pages(void)
 	int ret;
 	int i;
 
-	spin_lock(&sgx_global_lru.lock);
+	spin_lock(&lru->lock);
 	for (i = 0; i < SGX_NR_TO_SCAN; i++) {
-		epc_page = list_first_entry_or_null(&sgx_global_lru.reclaimable,
-						    struct sgx_epc_page, list);
+		epc_page = list_first_entry_or_null(&lru->reclaimable, struct sgx_epc_page, list);
 		if (!epc_page)
 			break;
 
@@ -328,7 +332,7 @@ static void sgx_reclaim_pages(void)
 		 */
 		epc_page->flags &= ~SGX_EPC_PAGE_RECLAIMER_TRACKED;
 	}
-	spin_unlock(&sgx_global_lru.lock);
+	spin_unlock(&lru->lock);
 
 	for (i = 0; i < cnt; i++) {
 		epc_page = chunk[i];
@@ -351,9 +355,9 @@ static void sgx_reclaim_pages(void)
 		continue;
 
 skip:
-		spin_lock(&sgx_global_lru.lock);
-		list_add_tail(&epc_page->list, &sgx_global_lru.reclaimable);
-		spin_unlock(&sgx_global_lru.lock);
+		spin_lock(&lru->lock);
+		list_add_tail(&epc_page->list, &lru->reclaimable);
+		spin_unlock(&lru->lock);
 
 		kref_put(&encl_page->encl->refcount, sgx_encl_release);
 
@@ -379,6 +383,8 @@ static void sgx_reclaim_pages(void)
 
 		sgx_free_epc_page(epc_page);
 	}
+
+	return cnt;
 }
 
 static bool sgx_should_reclaim(unsigned long watermark)
@@ -387,6 +393,11 @@ static bool sgx_should_reclaim(unsigned long watermark)
 	       !list_empty(&sgx_global_lru.reclaimable);
 }
 
+static void sgx_reclaim_pages_global(void)
+{
+	sgx_reclaim_pages(&sgx_global_lru);
+}
+
 /*
  * sgx_reclaim_direct() should be called (without enclave's mutex held)
  * in locations where SGX memory resources might be low and might be
@@ -395,7 +406,7 @@ static bool sgx_should_reclaim(unsigned long watermark)
 void sgx_reclaim_direct(void)
 {
 	if (sgx_should_reclaim(SGX_NR_LOW_PAGES))
-		sgx_reclaim_pages();
+		sgx_reclaim_pages_global();
 }
 
 static int ksgxd(void *p)
@@ -418,7 +429,7 @@ static int ksgxd(void *p)
 				     sgx_should_reclaim(SGX_NR_HIGH_PAGES));
 
 		if (sgx_should_reclaim(SGX_NR_HIGH_PAGES))
-			sgx_reclaim_pages();
+			sgx_reclaim_pages_global();
 
 		cond_resched();
 	}
@@ -572,7 +583,7 @@ struct sgx_epc_page *sgx_alloc_epc_page(void *owner, enum sgx_reclaim reclaim)
 	int ret;
 
 	sgx_cg = sgx_get_current_cg();
-	ret = sgx_cgroup_try_charge(sgx_cg);
+	ret = sgx_cgroup_try_charge(sgx_cg, reclaim);
 	if (ret) {
 		sgx_put_cg(sgx_cg);
 		return ERR_PTR(ret);
@@ -604,7 +615,7 @@ struct sgx_epc_page *sgx_alloc_epc_page(void *owner, enum sgx_reclaim reclaim)
 		 * Need to do a global reclamation if cgroup was not full but free
 		 * physical pages run out, causing __sgx_alloc_epc_page() to fail.
 		 */
-		sgx_reclaim_pages();
+		sgx_reclaim_pages_global();
 		cond_resched();
 	}
 
diff --git a/arch/x86/kernel/cpu/sgx/sgx.h b/arch/x86/kernel/cpu/sgx/sgx.h
index fd28e2e98620..954f02efb516 100644
--- a/arch/x86/kernel/cpu/sgx/sgx.h
+++ b/arch/x86/kernel/cpu/sgx/sgx.h
@@ -119,6 +119,7 @@ void sgx_reclaim_direct(void);
 void sgx_mark_page_reclaimable(struct sgx_epc_page *page);
 int sgx_unmark_page_reclaimable(struct sgx_epc_page *page);
 struct sgx_epc_page *sgx_alloc_epc_page(void *owner, enum sgx_reclaim reclaim);
+unsigned int sgx_reclaim_pages(struct sgx_epc_lru_list *lru);
 
 void sgx_ipi_cb(void *info);
 
-- 
2.25.1