From: James Morse <james.morse@arm.com>
To: x86@kernel.org, linux-kernel@vger.kernel.org
Cc: Fenghua Yu, Reinette Chatre, Thomas Gleixner, Ingo Molnar,
	Borislav Petkov, H Peter Anvin, Babu Moger, James Morse,
	shameerali.kolothum.thodi@huawei.com, D Scott Phillips OS,
	carl@os.amperecomputing.com, lcherian@marvell.com,
	bobo.shaobowang@huawei.com, tan.shaopeng@fujitsu.com,
	xingxin.hx@openanolis.org, baolin.wang@linux.alibaba.com,
	Jamie Iles, Xin Hao, peternewman@google.com,
	dfustini@baylibre.com, amitsinght@marvell.com
Subject: [PATCH v6 08/24] x86/resctrl: Track the number of dirty RMID a CLOSID has
Date: Thu, 14 Sep 2023 17:21:22 +0000
Message-Id: <20230914172138.11977-9-james.morse@arm.com>
In-Reply-To: <20230914172138.11977-1-james.morse@arm.com>
References: <20230914172138.11977-1-james.morse@arm.com>

MPAM's PMG bits extend its PARTID space, meaning the same PMG value can be
used for different control groups.

This means once a CLOSID is allocated, all its monitoring ids may still be
dirty, and held in limbo.

Keep track of the number of RMID held in limbo each CLOSID has. This will
allow a future helper to find the 'cleanest' CLOSID when allocating.
The array is only needed when CONFIG_RESCTRL_RMID_DEPENDS_ON_CLOSID is
defined. This will never be the case on x86.

Reviewed-by: Shaopeng Tan
Tested-by: Shaopeng Tan
Tested-by: Peter Newman
Signed-off-by: James Morse
---
Changes since v4:
 * Moved closid_num_dirty_rmid[] update under entry->busy check.
 * Take the mutex in dom_data_init() as the caller doesn't.

Changes since v5:
 * Added braces after an else.
 * Made closid_num_dirty_rmid an unsigned int.
 * Moved mutex_lock() in dom_data_init() to cover the whole function.
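As an illustration of how a later patch in the series could consume this
array when allocating, here is a rough sketch, not part of this patch. The
helper name resctrl_find_cleanest_closid() and the reuse of resctrl's
closid_allocated()/closids_supported() helpers are assumptions made for
the example:

/*
 * Sketch only: pick the unallocated CLOSID with the fewest dirty RMID.
 * Relies on closid_num_dirty_rmid[] being protected by rdtgroup_mutex,
 * as described by the kernel-doc comment in this patch.
 */
static int resctrl_find_cleanest_closid(void)
{
	int cleanest = -1;
	int i;

	lockdep_assert_held(&rdtgroup_mutex);

	if (!IS_ENABLED(CONFIG_RESCTRL_RMID_DEPENDS_ON_CLOSID))
		return -EIO;

	for (i = 0; i < closids_supported(); i++) {
		/* CLOSID still in use by a control group: not a candidate. */
		if (closid_allocated(i))
			continue;

		/* No RMID in limbo at all: can't do better than this. */
		if (closid_num_dirty_rmid[i] == 0)
			return i;

		if (cleanest < 0 ||
		    closid_num_dirty_rmid[i] < closid_num_dirty_rmid[cleanest])
			cleanest = i;
	}

	return cleanest < 0 ? -ENOSPC : cleanest;
}

Picking the CLOSID with the fewest limbo RMID minimises the stale cache
occupancy that a freshly created control group inherits.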
---
 arch/x86/kernel/cpu/resctrl/monitor.c | 66 +++++++++++++++++++++++----
 1 file changed, 56 insertions(+), 10 deletions(-)

diff --git a/arch/x86/kernel/cpu/resctrl/monitor.c b/arch/x86/kernel/cpu/resctrl/monitor.c
index d286aba1ee63..0c783301d106 100644
--- a/arch/x86/kernel/cpu/resctrl/monitor.c
+++ b/arch/x86/kernel/cpu/resctrl/monitor.c
@@ -51,6 +51,13 @@ struct rmid_entry {
  */
 static LIST_HEAD(rmid_free_lru);
 
+/**
+ * @closid_num_dirty_rmid	The number of dirty RMID each CLOSID has.
+ *	Only allocated when CONFIG_RESCTRL_RMID_DEPENDS_ON_CLOSID is defined.
+ *	Indexed by CLOSID. Protected by rdtgroup_mutex.
+ */
+static unsigned int *closid_num_dirty_rmid;
+
 /**
  * @rmid_limbo_count	count of currently unused but (potentially)
  *	dirty RMIDs.
@@ -293,6 +300,17 @@ int resctrl_arch_rmid_read(struct rdt_resource *r, struct rdt_domain *d,
 	return 0;
 }
 
+static void limbo_release_entry(struct rmid_entry *entry)
+{
+	lockdep_assert_held(&rdtgroup_mutex);
+
+	rmid_limbo_count--;
+	list_add_tail(&entry->list, &rmid_free_lru);
+
+	if (IS_ENABLED(CONFIG_RESCTRL_RMID_DEPENDS_ON_CLOSID))
+		closid_num_dirty_rmid[entry->closid]--;
+}
+
 /*
  * Check the RMIDs that are marked as busy for this domain. If the
  * reported LLC occupancy is below the threshold clear the busy bit and
@@ -329,10 +347,8 @@ void __check_limbo(struct rdt_domain *d, bool force_free)
 
 		if (force_free || !rmid_dirty) {
 			clear_bit(idx, d->rmid_busy_llc);
-			if (!--entry->busy) {
-				rmid_limbo_count--;
-				list_add_tail(&entry->list, &rmid_free_lru);
-			}
+			if (!--entry->busy)
+				limbo_release_entry(entry);
 		}
 		cur_idx = idx + 1;
 	}
@@ -400,6 +416,8 @@ static void add_rmid_to_limbo(struct rmid_entry *entry)
 	u64 val = 0;
 	u32 idx;
 
+	lockdep_assert_held(&rdtgroup_mutex);
+
 	idx = resctrl_arch_rmid_idx_encode(entry->closid, entry->rmid);
 
 	entry->busy = 0;
@@ -425,10 +443,13 @@ static void add_rmid_to_limbo(struct rmid_entry *entry)
 	}
 	put_cpu();
 
-	if (entry->busy)
+	if (entry->busy) {
 		rmid_limbo_count++;
-	else
+		if (IS_ENABLED(CONFIG_RESCTRL_RMID_DEPENDS_ON_CLOSID))
+			closid_num_dirty_rmid[entry->closid]++;
+	} else {
 		list_add_tail(&entry->list, &rmid_free_lru);
+	}
 }
 
 void free_rmid(u32 closid, u32 rmid)
@@ -796,13 +817,30 @@ void mbm_setup_overflow_handler(struct rdt_domain *dom, unsigned long delay_ms)
 static int dom_data_init(struct rdt_resource *r)
 {
 	u32 idx_limit = resctrl_arch_system_num_rmid_idx();
+	u32 num_closid = resctrl_arch_get_num_closid(r);
 	struct rmid_entry *entry = NULL;
+	int err = 0, i;
 	u32 idx;
-	int i;
+
+	mutex_lock(&rdtgroup_mutex);
+	if (IS_ENABLED(CONFIG_RESCTRL_RMID_DEPENDS_ON_CLOSID)) {
+		unsigned int *tmp;
+
+		tmp = kcalloc(num_closid, sizeof(*tmp), GFP_KERNEL);
+		if (!tmp) {
+			err = -ENOMEM;
+			goto out_unlock;
+		}
+
+		closid_num_dirty_rmid = tmp;
+	}
 
 	rmid_ptrs = kcalloc(idx_limit, sizeof(struct rmid_entry), GFP_KERNEL);
-	if (!rmid_ptrs)
-		return -ENOMEM;
+	if (!rmid_ptrs) {
+		kfree(closid_num_dirty_rmid);
+		err = -ENOMEM;
+		goto out_unlock;
+	}
 
 	for (i = 0; i < idx_limit; i++) {
 		entry = &rmid_ptrs[i];
@@ -822,13 +860,21 @@ static int dom_data_init(struct rdt_resource *r)
 	entry = __rmid_entry(idx);
 	list_del(&entry->list);
 
-	return 0;
+out_unlock:
+	mutex_unlock(&rdtgroup_mutex);
+
+	return err;
 }
 
 void resctrl_exit_mon_l3_config(struct rdt_resource *r)
 {
 	mutex_lock(&rdtgroup_mutex);
 
+	if (IS_ENABLED(CONFIG_RESCTRL_RMID_DEPENDS_ON_CLOSID)) {
+		kfree(closid_num_dirty_rmid);
+		closid_num_dirty_rmid = NULL;
+	}
+
 	kfree(rmid_ptrs);
 	rmid_ptrs = NULL;
 
-- 
2.39.2