Date: Fri, 23 Sep 2022 16:20:28 +0300
From: Jarkko Sakkinen
To: Kristen Carlson Accardi
Cc: linux-kernel@vger.kernel.org, linux-sgx@vger.kernel.org,
	cgroups@vger.kernel.org, Dave Hansen, Thomas Gleixner, Ingo Molnar,
	Borislav Petkov, x86@kernel.org, "H. Peter Anvin", Sean Christopherson
Subject: Re: [RFC PATCH 04/20] x86/sgx: Add 'struct sgx_epc_lru' to encapsulate lru list(s)
References: <20220922171057.1236139-1-kristen@linux.intel.com>
	<20220922171057.1236139-5-kristen@linux.intel.com>
In-Reply-To: <20220922171057.1236139-5-kristen@linux.intel.com>

On Thu, Sep 22, 2022 at 10:10:41AM -0700, Kristen Carlson Accardi wrote:
> From: Sean Christopherson
>
> Wrap the existing reclaimable list and its spinlock in a struct to
> minimize the code changes needed to handle multiple LRUs as well as
> reclaimable and non-reclaimable lists, both of which will be introduced
> and used by SGX EPC cgroups.
>
> Signed-off-by: Sean Christopherson
> Signed-off-by: Kristen Carlson Accardi
> Cc: Sean Christopherson

The commit message could explicitly state the added data type.

The data type itself is not an LRU: together with the FIFO list, i.e. a
queue, the code implements an LRU-like policy. I would name the data
type sgx_epc_queue, as it is a less confusing name.
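I.e. something like this; a sketch of my suggestion, not what the patch
currently adds (the names here are mine):

struct sgx_epc_queue {
	spinlock_t lock;
	struct list_head list;
};

static inline void sgx_epc_queue_init(struct sgx_epc_queue *queue)
{
	spin_lock_init(&queue->lock);
	INIT_LIST_HEAD(&queue->list);
}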
> ---
>  arch/x86/kernel/cpu/sgx/main.c | 37 +++++++++++++++++-----------------
>  arch/x86/kernel/cpu/sgx/sgx.h  | 11 ++++++++++
>  2 files changed, 30 insertions(+), 18 deletions(-)
>
> diff --git a/arch/x86/kernel/cpu/sgx/main.c b/arch/x86/kernel/cpu/sgx/main.c
> index 4cdeb915dc86..af68dc1c677b 100644
> --- a/arch/x86/kernel/cpu/sgx/main.c
> +++ b/arch/x86/kernel/cpu/sgx/main.c
> @@ -26,10 +26,9 @@ static DEFINE_XARRAY(sgx_epc_address_space);
>
>  /*
>   * These variables are part of the state of the reclaimer, and must be accessed
> - * with sgx_reclaimer_lock acquired.
> + * with sgx_global_lru.lock acquired.
>   */
> -static LIST_HEAD(sgx_active_page_list);
> -static DEFINE_SPINLOCK(sgx_reclaimer_lock);
> +static struct sgx_epc_lru sgx_global_lru;
>
>  static atomic_long_t sgx_nr_free_pages = ATOMIC_LONG_INIT(0);
>
> @@ -298,12 +297,12 @@ static void sgx_reclaim_pages(void)
>  	int ret;
>  	int i;
>
> -	spin_lock(&sgx_reclaimer_lock);
> +	spin_lock(&sgx_global_lru.lock);
>  	for (i = 0; i < SGX_NR_TO_SCAN; i++) {
> -		if (list_empty(&sgx_active_page_list))
> +		if (list_empty(&sgx_global_lru.reclaimable))
>  			break;
>
> -		epc_page = list_first_entry(&sgx_active_page_list,
> +		epc_page = list_first_entry(&sgx_global_lru.reclaimable,
>  					    struct sgx_epc_page, list);
>  		list_del_init(&epc_page->list);
>  		encl_page = epc_page->owner;
> @@ -316,7 +315,7 @@ static void sgx_reclaim_pages(void)
>  		 */
>  		epc_page->flags &= ~SGX_EPC_PAGE_RECLAIMER_TRACKED;
>  	}
> -	spin_unlock(&sgx_reclaimer_lock);
> +	spin_unlock(&sgx_global_lru.lock);
>
>  	for (i = 0; i < cnt; i++) {
>  		epc_page = chunk[i];
> @@ -339,9 +338,9 @@ static void sgx_reclaim_pages(void)
>  			continue;
>
>  skip:
> -		spin_lock(&sgx_reclaimer_lock);
> -		list_add_tail(&epc_page->list, &sgx_active_page_list);
> -		spin_unlock(&sgx_reclaimer_lock);
> +		spin_lock(&sgx_global_lru.lock);
> +		list_add_tail(&epc_page->list, &sgx_global_lru.reclaimable);
> +		spin_unlock(&sgx_global_lru.lock);
>
>  		kref_put(&encl_page->encl->refcount, sgx_encl_release);
>
> @@ -374,7 +373,7 @@ static void sgx_reclaim_pages(void)
>  static bool sgx_should_reclaim(unsigned long watermark)
>  {
>  	return atomic_long_read(&sgx_nr_free_pages) < watermark &&
> -	       !list_empty(&sgx_active_page_list);
> +	       !list_empty(&sgx_global_lru.reclaimable);
>  }
>
>  /*
> @@ -427,6 +426,8 @@ static bool __init sgx_page_reclaimer_init(void)
>
>  	ksgxd_tsk = tsk;
>
> +	sgx_lru_init(&sgx_global_lru);
> +
>  	return true;
>  }
>
> @@ -502,10 +503,10 @@ struct sgx_epc_page *__sgx_alloc_epc_page(void)
>   */
>  void sgx_mark_page_reclaimable(struct sgx_epc_page *page)
>  {
> -	spin_lock(&sgx_reclaimer_lock);
> +	spin_lock(&sgx_global_lru.lock);
>  	page->flags |= SGX_EPC_PAGE_RECLAIMER_TRACKED;
> -	list_add_tail(&page->list, &sgx_active_page_list);
> -	spin_unlock(&sgx_reclaimer_lock);
> +	list_add_tail(&page->list, &sgx_global_lru.reclaimable);
> +	spin_unlock(&sgx_global_lru.lock);
>  }
>
>  /**
> @@ -520,18 +521,18 @@ void sgx_mark_page_reclaimable(struct sgx_epc_page *page)
>   */
>  int sgx_unmark_page_reclaimable(struct sgx_epc_page *page)
>  {
> -	spin_lock(&sgx_reclaimer_lock);
> +	spin_lock(&sgx_global_lru.lock);
>  	if (page->flags & SGX_EPC_PAGE_RECLAIMER_TRACKED) {
>  		/* The page is being reclaimed. */
>  		if (list_empty(&page->list)) {
> -			spin_unlock(&sgx_reclaimer_lock);
> +			spin_unlock(&sgx_global_lru.lock);
>  			return -EBUSY;
>  		}
>
>  		list_del(&page->list);
>  		page->flags &= ~SGX_EPC_PAGE_RECLAIMER_TRACKED;
>  	}
> -	spin_unlock(&sgx_reclaimer_lock);
> +	spin_unlock(&sgx_global_lru.lock);
>
>  	return 0;
>  }
> @@ -564,7 +565,7 @@ struct sgx_epc_page *sgx_alloc_epc_page(void *owner, bool reclaim)
>  			break;
>  		}
>
> -		if (list_empty(&sgx_active_page_list))
> +		if (list_empty(&sgx_global_lru.reclaimable))
>  			return ERR_PTR(-ENOMEM);
>
>  		if (!reclaim) {
> diff --git a/arch/x86/kernel/cpu/sgx/sgx.h b/arch/x86/kernel/cpu/sgx/sgx.h
> index 5a7e858a8f98..7b208ee8eb45 100644
> --- a/arch/x86/kernel/cpu/sgx/sgx.h
> +++ b/arch/x86/kernel/cpu/sgx/sgx.h
> @@ -83,6 +83,17 @@ static inline void *sgx_get_epc_virt_addr(struct sgx_epc_page *page)
>  	return section->virt_addr + index * PAGE_SIZE;
>  }
>
> +struct sgx_epc_lru {
> +	spinlock_t lock;
> +	struct list_head reclaimable;

s/reclaimable/list/

> +};
> +
> +static inline void sgx_lru_init(struct sgx_epc_lru *lru)
> +{
> +	spin_lock_init(&lru->lock);
> +	INIT_LIST_HEAD(&lru->reclaimable);
> +}
> +
>  struct sgx_epc_page *__sgx_alloc_epc_page(void);
>  void sgx_free_epc_page(struct sgx_epc_page *page);
>
> --
> 2.37.3
>

Please also add these:

/*
 * Must be called with queue->lock acquired.
 */
static inline void __sgx_epc_queue_push(struct sgx_epc_queue *queue,
					struct sgx_epc_page *page)
{
	list_add_tail(&page->list, &queue->list);
}

/*
 * Must be called with queue->lock acquired.
 */
static inline struct sgx_epc_page *__sgx_epc_queue_pop(struct sgx_epc_queue *queue)
{
	struct sgx_epc_page *page;

	if (list_empty(&queue->list))
		return NULL;

	page = list_first_entry(&queue->list, struct sgx_epc_page, list);
	list_del_init(&page->list);

	return page;
}

And use them at the existing call sites; that ensures coherent behavior.
You should be able to replace all of the existing uses with one of them,
or with a combination of the two (i.e. a list_move).
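For example, sgx_mark_page_reclaimable() would then reduce to something
like this (an untested sketch, assuming the sgx_epc_queue type and the
'list' member rename suggested above):

void sgx_mark_page_reclaimable(struct sgx_epc_page *page)
{
	spin_lock(&sgx_global_lru.lock);
	page->flags |= SGX_EPC_PAGE_RECLAIMER_TRACKED;
	/* Append to the tail of the global queue while holding its lock. */
	__sgx_epc_queue_push(&sgx_global_lru, page);
	spin_unlock(&sgx_global_lru.lock);
}

BR, Jarkko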