From: James Morse <james.morse@arm.com>
To: x86@kernel.org, linux-kernel@vger.kernel.org
Cc: Fenghua Yu, Reinette Chatre, Thomas Gleixner, Ingo Molnar,
    Borislav Petkov, H Peter Anvin, Babu Moger, James Morse,
    shameerali.kolothum.thodi@huawei.com, D Scott Phillips OS,
    carl@os.amperecomputing.com, lcherian@marvell.com,
    bobo.shaobowang@huawei.com, tan.shaopeng@fujitsu.com,
    xingxin.hx@openanolis.org, baolin.wang@linux.alibaba.com,
    Jamie Iles, Xin Hao, peternewman@google.com, dfustini@baylibre.com,
    amitsinght@marvell.com
Subject: [PATCH v6 07/24] x86/resctrl: Allow RMID allocation to be scoped by CLOSID
Date: Thu, 14 Sep 2023 17:21:21 +0000
Message-Id: <20230914172138.11977-8-james.morse@arm.com>
In-Reply-To: <20230914172138.11977-1-james.morse@arm.com>
References: <20230914172138.11977-1-james.morse@arm.com>

MPAM's RMID values are not unique unless the CLOSID is considered as well.

alloc_rmid() expects the RMID to be an independent number.

Pass the CLOSID in to alloc_rmid(). Use this to compare indexes when
allocating. If the CLOSID is not relevant to the index, this ends up
comparing the free RMID with itself, and the first free entry will be
used. With MPAM the CLOSID is included in the index, so this becomes a
walk of the free RMID entries, until one that matches the supplied
CLOSID is found.

Reviewed-by: Shaopeng Tan
Tested-by: Shaopeng Tan
Tested-by: Peter Newman
Signed-off-by: James Morse
---
Changes since v2:
 * Rephrased comment in resctrl_find_free_rmid() to describe this in
   terms of list_entry_first()
 * Rephrased comment above alloc_rmid()

Changes since v3:
 * Flipped conditions in alloc_rmid()

Changes since v4:
 * Typo in comment

Changes since v5:
 * Reworded two comments.
---
 arch/x86/kernel/cpu/resctrl/internal.h    |  2 +-
 arch/x86/kernel/cpu/resctrl/monitor.c     | 52 +++++++++++++++++------
 arch/x86/kernel/cpu/resctrl/pseudo_lock.c |  2 +-
 arch/x86/kernel/cpu/resctrl/rdtgroup.c    |  2 +-
 4 files changed, 42 insertions(+), 16 deletions(-)

diff --git a/arch/x86/kernel/cpu/resctrl/internal.h b/arch/x86/kernel/cpu/resctrl/internal.h
index ab96af8d9953..ad6e874d9ed2 100644
--- a/arch/x86/kernel/cpu/resctrl/internal.h
+++ b/arch/x86/kernel/cpu/resctrl/internal.h
@@ -535,7 +535,7 @@ void rdtgroup_pseudo_lock_remove(struct rdtgroup *rdtgrp);
 struct rdt_domain *get_domain_from_cpu(int cpu, struct rdt_resource *r);
 int closids_supported(void);
 void closid_free(int closid);
-int alloc_rmid(void);
+int alloc_rmid(u32 closid);
 void free_rmid(u32 closid, u32 rmid);
 int rdt_get_mon_l3_config(struct rdt_resource *r);
 void resctrl_exit_mon_l3_config(struct rdt_resource *r);
diff --git a/arch/x86/kernel/cpu/resctrl/monitor.c b/arch/x86/kernel/cpu/resctrl/monitor.c
index be0b7cb6e1f5..d286aba1ee63 100644
--- a/arch/x86/kernel/cpu/resctrl/monitor.c
+++ b/arch/x86/kernel/cpu/resctrl/monitor.c
@@ -345,24 +345,50 @@ bool has_busy_rmid(struct rdt_domain *d)
 	return find_first_bit(d->rmid_busy_llc, idx_limit) != idx_limit;
 }
 
-/*
- * As of now the RMIDs allocation is global.
- * However we keep track of which packages the RMIDs
- * are used to optimize the limbo list management.
- */
-int alloc_rmid(void)
+static struct rmid_entry *resctrl_find_free_rmid(u32 closid)
 {
-	struct rmid_entry *entry;
-
-	lockdep_assert_held(&rdtgroup_mutex);
+	struct rmid_entry *itr;
+	u32 itr_idx, cmp_idx;
 
 	if (list_empty(&rmid_free_lru))
-		return rmid_limbo_count ? -EBUSY : -ENOSPC;
+		return rmid_limbo_count ? ERR_PTR(-EBUSY) : ERR_PTR(-ENOSPC);
+
+	list_for_each_entry(itr, &rmid_free_lru, list) {
+		/*
+		 * Get the index of this free RMID, and the index it would need
+		 * to be if it were used with this CLOSID.
+		 * If the CLOSID is irrelevant on this architecture, the two
+		 * index values are always the same on every entry and thus the
+		 * very first entry will be returned.
+		 */
+		itr_idx = resctrl_arch_rmid_idx_encode(itr->closid, itr->rmid);
+		cmp_idx = resctrl_arch_rmid_idx_encode(closid, itr->rmid);
+
+		if (itr_idx == cmp_idx)
+			return itr;
+	}
+
+	return ERR_PTR(-ENOSPC);
+}
+
+/*
+ * For MPAM the RMID value is not unique, and has to be considered with
+ * the CLOSID. The (CLOSID, RMID) pair is allocated on all domains, which
+ * allows all domains to be managed by a single free list.
+ * Each domain also has a rmid_busy_llc to reduce the work of the limbo handler.
+ */
+int alloc_rmid(u32 closid)
+{
+	struct rmid_entry *entry;
+
+	lockdep_assert_held(&rdtgroup_mutex);
+
+	entry = resctrl_find_free_rmid(closid);
+	if (IS_ERR(entry))
+		return PTR_ERR(entry);
 
-	entry = list_first_entry(&rmid_free_lru,
-				 struct rmid_entry, list);
 	list_del(&entry->list);
-
 	return entry->rmid;
 }
 
diff --git a/arch/x86/kernel/cpu/resctrl/pseudo_lock.c b/arch/x86/kernel/cpu/resctrl/pseudo_lock.c
index 65bee6f11015..d8f44113ed1f 100644
--- a/arch/x86/kernel/cpu/resctrl/pseudo_lock.c
+++ b/arch/x86/kernel/cpu/resctrl/pseudo_lock.c
@@ -777,7 +777,7 @@ int rdtgroup_locksetup_exit(struct rdtgroup *rdtgrp)
 	int ret;
 
 	if (rdt_mon_capable) {
-		ret = alloc_rmid();
+		ret = alloc_rmid(rdtgrp->closid);
 		if (ret < 0) {
 			rdt_last_cmd_puts("Out of RMIDs\n");
 			return ret;
diff --git a/arch/x86/kernel/cpu/resctrl/rdtgroup.c b/arch/x86/kernel/cpu/resctrl/rdtgroup.c
index 61f338d96906..ac1a6437469f 100644
--- a/arch/x86/kernel/cpu/resctrl/rdtgroup.c
+++ b/arch/x86/kernel/cpu/resctrl/rdtgroup.c
@@ -3172,7 +3172,7 @@ static int mkdir_rdt_prepare_rmid_alloc(struct rdtgroup *rdtgrp)
 	if (!rdt_mon_capable)
 		return 0;
 
-	ret = alloc_rmid();
+	ret = alloc_rmid(rdtgrp->closid);
 	if (ret < 0) {
 		rdt_last_cmd_puts("Out of RMIDs\n");
 		return ret;
-- 
2.39.2