From: Haitao Huang
To: jarkko@kernel.org, dave.hansen@linux.intel.com, kai.huang@intel.com,
    tj@kernel.org, mkoutny@suse.com, linux-kernel@vger.kernel.org,
    linux-sgx@vger.kernel.org, x86@kernel.org, cgroups@vger.kernel.org,
    tglx@linutronix.de, mingo@redhat.com, bp@alien8.de, hpa@zytor.com,
    sohil.mehta@intel.com, tim.c.chen@linux.intel.com
Cc: zhiquan1.li@intel.com, kristen@linux.intel.com, seanjc@google.com,
    zhanb@microsoft.com, anakrish@microsoft.com,
    mikko.ylinen@linux.intel.com, yangjie@microsoft.com,
    chrisyan@microsoft.com
Subject: [PATCH v11 06/14] x86/sgx: Add sgx_epc_lru_list to encapsulate LRU list
Date: Wed, 10 Apr 2024 11:25:50 -0700
Message-Id: <20240410182558.41467-7-haitao.huang@linux.intel.com>
In-Reply-To: <20240410182558.41467-1-haitao.huang@linux.intel.com>
References: <20240410182558.41467-1-haitao.huang@linux.intel.com>

From: Sean Christopherson

Introduce a data structure to wrap the existing reclaimable list and its
spinlock. Each cgroup will later have one instance of this structure to
track EPC pages allocated for processes associated with that cgroup. Just
like the global SGX reclaimer (ksgxd), an EPC cgroup reclaims pages from
the reclaimable list in this structure when its usage approaches its
limit.

Use this structure to encapsulate the LRU list and its lock used by the
global reclaimer.

Signed-off-by: Sean Christopherson
Co-developed-by: Kristen Carlson Accardi
Signed-off-by: Kristen Carlson Accardi
Co-developed-by: Haitao Huang
Signed-off-by: Haitao Huang
Cc: Sean Christopherson
Reviewed-by: Jarkko Sakkinen
---
V6:
- Removed introduction to unreclaimables in the commit message.

V4:
- Removed unneeded comments for the spinlock and the non-reclaimables.
  (Kai, Jarkko)
- Revised the commit to add introduction comments for unreclaimables and
  multiple LRU lists. (Kai)
- Reordered the patches: delay all changes for unreclaimables to later,
  and this one becomes the first change in the SGX subsystem.

V3:
- Removed the helper functions and revised commit messages.
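For context, a minimal sketch of how a later patch in this series could
embed one such LRU per cgroup, as described above. It assumes the
struct sgx_epc_lru_list and sgx_lru_init() added to sgx.h by the diff
below; the sgx_cgroup wrapper and the sgx_lru_push() helper are
hypothetical names used only for illustration, not part of this patch:

        /*
         * Hypothetical per-cgroup state: each cgroup owns its own LRU
         * instance, initialized the same way as the global one.
         */
        struct sgx_cgroup {
                struct sgx_epc_lru_list lru;    /* reclaimable EPC pages charged to this cgroup */
        };

        static void sgx_cgroup_lru_init(struct sgx_cgroup *sgx_cg)
        {
                sgx_lru_init(&sgx_cg->lru);     /* same init as sgx_global_lru */
        }

        /* Add a page to a given LRU under its lock (mirrors sgx_mark_page_reclaimable()). */
        static void sgx_lru_push(struct sgx_epc_lru_list *lru, struct sgx_epc_page *page)
        {
                spin_lock(&lru->lock);
                page->flags |= SGX_EPC_PAGE_RECLAIMER_TRACKED;
                list_add_tail(&page->list, &lru->reclaimable);
                spin_unlock(&lru->lock);
        }

Because the list and its lock travel together, the same helper works for
the global LRU and for any per-cgroup LRU added later.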
---
 arch/x86/kernel/cpu/sgx/main.c | 39 +++++++++++++++++-----------------
 arch/x86/kernel/cpu/sgx/sgx.h  | 15 +++++++++++++
 2 files changed, 35 insertions(+), 19 deletions(-)

diff --git a/arch/x86/kernel/cpu/sgx/main.c b/arch/x86/kernel/cpu/sgx/main.c
index d482ae7fdabf..b782207d41b6 100644
--- a/arch/x86/kernel/cpu/sgx/main.c
+++ b/arch/x86/kernel/cpu/sgx/main.c
@@ -28,10 +28,9 @@ static DEFINE_XARRAY(sgx_epc_address_space);
 
 /*
  * These variables are part of the state of the reclaimer, and must be accessed
- * with sgx_reclaimer_lock acquired.
+ * with sgx_global_lru.lock acquired.
  */
-static LIST_HEAD(sgx_active_page_list);
-static DEFINE_SPINLOCK(sgx_reclaimer_lock);
+static struct sgx_epc_lru_list sgx_global_lru;
 
 static atomic_long_t sgx_nr_free_pages = ATOMIC_LONG_INIT(0);
 
@@ -306,13 +305,13 @@ static void sgx_reclaim_pages(void)
 	int ret;
 	int i;
 
-	spin_lock(&sgx_reclaimer_lock);
+	spin_lock(&sgx_global_lru.lock);
 	for (i = 0; i < SGX_NR_TO_SCAN; i++) {
-		if (list_empty(&sgx_active_page_list))
+		epc_page = list_first_entry_or_null(&sgx_global_lru.reclaimable,
+						    struct sgx_epc_page, list);
+		if (!epc_page)
 			break;
 
-		epc_page = list_first_entry(&sgx_active_page_list,
-					    struct sgx_epc_page, list);
 		list_del_init(&epc_page->list);
 		encl_page = epc_page->owner;
 
@@ -324,7 +323,7 @@ static void sgx_reclaim_pages(void)
 			 */
 			epc_page->flags &= ~SGX_EPC_PAGE_RECLAIMER_TRACKED;
 	}
-	spin_unlock(&sgx_reclaimer_lock);
+	spin_unlock(&sgx_global_lru.lock);
 
 	for (i = 0; i < cnt; i++) {
 		epc_page = chunk[i];
@@ -347,9 +346,9 @@ static void sgx_reclaim_pages(void)
 		continue;
 
 skip:
-		spin_lock(&sgx_reclaimer_lock);
-		list_add_tail(&epc_page->list, &sgx_active_page_list);
-		spin_unlock(&sgx_reclaimer_lock);
+		spin_lock(&sgx_global_lru.lock);
+		list_add_tail(&epc_page->list, &sgx_global_lru.reclaimable);
+		spin_unlock(&sgx_global_lru.lock);
 
 		kref_put(&encl_page->encl->refcount, sgx_encl_release);
 
@@ -380,7 +379,7 @@ static void sgx_reclaim_pages(void)
 static bool sgx_should_reclaim(unsigned long watermark)
 {
 	return atomic_long_read(&sgx_nr_free_pages) < watermark &&
-	       !list_empty(&sgx_active_page_list);
+	       !list_empty(&sgx_global_lru.reclaimable);
 }
 
 /*
@@ -432,6 +431,8 @@ static bool __init sgx_page_reclaimer_init(void)
 
 	ksgxd_tsk = tsk;
 
+	sgx_lru_init(&sgx_global_lru);
+
 	return true;
 }
 
@@ -507,10 +508,10 @@ static struct sgx_epc_page *__sgx_alloc_epc_page(void)
  */
 void sgx_mark_page_reclaimable(struct sgx_epc_page *page)
 {
-	spin_lock(&sgx_reclaimer_lock);
+	spin_lock(&sgx_global_lru.lock);
 	page->flags |= SGX_EPC_PAGE_RECLAIMER_TRACKED;
-	list_add_tail(&page->list, &sgx_active_page_list);
-	spin_unlock(&sgx_reclaimer_lock);
+	list_add_tail(&page->list, &sgx_global_lru.reclaimable);
+	spin_unlock(&sgx_global_lru.lock);
 }
 
 /**
@@ -525,18 +526,18 @@ void sgx_mark_page_reclaimable(struct sgx_epc_page *page)
  */
 int sgx_unmark_page_reclaimable(struct sgx_epc_page *page)
 {
-	spin_lock(&sgx_reclaimer_lock);
+	spin_lock(&sgx_global_lru.lock);
 	if (page->flags & SGX_EPC_PAGE_RECLAIMER_TRACKED) {
 		/* The page is being reclaimed. */
 		if (list_empty(&page->list)) {
-			spin_unlock(&sgx_reclaimer_lock);
+			spin_unlock(&sgx_global_lru.lock);
 			return -EBUSY;
 		}
 
 		list_del(&page->list);
 		page->flags &= ~SGX_EPC_PAGE_RECLAIMER_TRACKED;
 	}
-	spin_unlock(&sgx_reclaimer_lock);
+	spin_unlock(&sgx_global_lru.lock);
 
 	return 0;
 }
@@ -578,7 +579,7 @@ struct sgx_epc_page *sgx_alloc_epc_page(void *owner, enum sgx_reclaim reclaim)
 			break;
 		}
 
-		if (list_empty(&sgx_active_page_list)) {
+		if (list_empty(&sgx_global_lru.reclaimable)) {
 			page = ERR_PTR(-ENOMEM);
 			break;
 		}
diff --git a/arch/x86/kernel/cpu/sgx/sgx.h b/arch/x86/kernel/cpu/sgx/sgx.h
index 79337dd39348..81bdff099d69 100644
--- a/arch/x86/kernel/cpu/sgx/sgx.h
+++ b/arch/x86/kernel/cpu/sgx/sgx.h
@@ -114,6 +114,21 @@ static inline void *sgx_get_epc_virt_addr(struct sgx_epc_page *page)
 	return section->virt_addr + index * PAGE_SIZE;
 }
 
+/*
+ * Contains EPC pages tracked by the global reclaimer (ksgxd) or an EPC
+ * cgroup.
+ */
+struct sgx_epc_lru_list {
+	spinlock_t lock;
+	struct list_head reclaimable;
+};
+
+static inline void sgx_lru_init(struct sgx_epc_lru_list *lru)
+{
+	spin_lock_init(&lru->lock);
+	INIT_LIST_HEAD(&lru->reclaimable);
+}
+
 void sgx_free_epc_page(struct sgx_epc_page *page);
 void sgx_reclaim_direct(void);

-- 
2.25.1
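
A brief illustration of why the encapsulation helps: once the list and its
lock travel together, the isolation loop at the top of sgx_reclaim_pages()
can be written against an arbitrary LRU instance instead of referencing
sgx_global_lru directly, so the same code path could serve a per-cgroup
LRU later. A simplified sketch, assuming the sgx.h definitions from this
patch; the sgx_lru_isolate() name is hypothetical and the per-page
refcount handling of the real reclaimer is elided:

        /* Detach up to nr_to_scan pages from the given LRU, under its lock. */
        static int sgx_lru_isolate(struct sgx_epc_lru_list *lru,
                                   struct sgx_epc_page **chunk, int nr_to_scan)
        {
                struct sgx_epc_page *epc_page;
                int cnt = 0;
                int i;

                spin_lock(&lru->lock);
                for (i = 0; i < nr_to_scan; i++) {
                        epc_page = list_first_entry_or_null(&lru->reclaimable,
                                                            struct sgx_epc_page, list);
                        if (!epc_page)
                                break;

                        list_del_init(&epc_page->list);
                        chunk[cnt++] = epc_page;
                }
                spin_unlock(&lru->lock);

                return cnt;
        }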