From: James Morse
To: x86@kernel.org, linux-kernel@vger.kernel.org
Cc: Fenghua Yu, Reinette Chatre, Thomas Gleixner, Ingo Molnar,
    Borislav Petkov, H Peter Anvin, Babu Moger, James Morse,
    shameerali.kolothum.thodi@huawei.com, D Scott Phillips OS,
    carl@os.amperecomputing.com, lcherian@marvell.com,
    bobo.shaobowang@huawei.com, tan.shaopeng@fujitsu.com,
    xingxin.hx@openanolis.org, baolin.wang@linux.alibaba.com,
    Jamie Iles, Xin Hao, peternewman@google.com
Subject: [PATCH v2 10/18] x86/resctrl: Allow arch to allocate memory needed in resctrl_arch_rmid_read()
Date: Fri, 13 Jan 2023 17:54:51 +0000
Message-Id: <20230113175459.14825-11-james.morse@arm.com>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20230113175459.14825-1-james.morse@arm.com>
References: <20230113175459.14825-1-james.morse@arm.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Depending on the number of monitors available, Arm's MPAM may need to
allocate a monitor prior to reading the counter value. Allocating a
contended resource may involve sleeping.

All callers of resctrl_arch_rmid_read() read the counter on more than
one domain. If the monitor is allocated globally, there is no need to
allocate and free it for each call to resctrl_arch_rmid_read().

Add arch hooks for this allocation, which need calling before
resctrl_arch_rmid_read(). The allocated monitor is passed to
resctrl_arch_rmid_read(), then freed again afterwards. The helper
can be called on any CPU, and can sleep.

Tested-by: Shaopeng Tan
Signed-off-by: James Morse
---
 arch/x86/include/asm/resctrl.h         | 11 +++++++
 arch/x86/kernel/cpu/resctrl/internal.h |  1 +
 arch/x86/kernel/cpu/resctrl/monitor.c  | 40 +++++++++++++++++++++++---
 include/linux/resctrl.h                |  4 +--
 4 files changed, 50 insertions(+), 6 deletions(-)

diff --git a/arch/x86/include/asm/resctrl.h b/arch/x86/include/asm/resctrl.h
index d589a82995ac..194a1570af7b 100644
--- a/arch/x86/include/asm/resctrl.h
+++ b/arch/x86/include/asm/resctrl.h
@@ -136,6 +136,17 @@ static inline u32 resctrl_arch_rmid_idx_encode(u32 closid, u32 rmid)
 	return rmid;
 }
 
+/* x86 can always read an rmid, nothing needs allocating */
+struct rdt_resource;
+static inline int resctrl_arch_mon_ctx_alloc(struct rdt_resource *r, int evtid)
+{
+	might_sleep();
+	return 0;
+};
+
+static inline void resctrl_arch_mon_ctx_free(struct rdt_resource *r, int evtid,
+					     int ctx) { };
+
 void resctrl_cpu_detect(struct cpuinfo_x86 *c);
 
 #else
diff --git a/arch/x86/kernel/cpu/resctrl/internal.h b/arch/x86/kernel/cpu/resctrl/internal.h
index 1f90a10b75a1..e85e454bec72 100644
--- a/arch/x86/kernel/cpu/resctrl/internal.h
+++ b/arch/x86/kernel/cpu/resctrl/internal.h
@@ -88,6 +88,7 @@ struct rmid_read {
 	bool			first;
 	int			err;
 	u64			val;
+	int			arch_mon_ctx;
 };
 
 extern bool rdt_alloc_capable;
diff --git a/arch/x86/kernel/cpu/resctrl/monitor.c b/arch/x86/kernel/cpu/resctrl/monitor.c
index d6ae4b713801..4e248f4a5f59 100644
--- a/arch/x86/kernel/cpu/resctrl/monitor.c
+++ b/arch/x86/kernel/cpu/resctrl/monitor.c
@@ -15,6 +15,7 @@
  * Software Developer Manual June 2016, volume 3, section 17.17.
  */
 
+#include <linux/cpu.h>
 #include <linux/module.h>
 #include <linux/sizes.h>
 #include <linux/slab.h>
@@ -236,7 +237,7 @@ static void __rmid_read(void *arg)
 
 int resctrl_arch_rmid_read(struct rdt_resource *r, struct rdt_domain *d,
 			   u32 closid, u32 rmid, enum resctrl_event_id eventid,
-			   u64 *val)
+			   u64 *val, int ignored)
 {
 	struct rdt_hw_resource *hw_res = resctrl_to_arch_res(r);
 	struct rdt_hw_domain *hw_dom = resctrl_to_arch_dom(d);
@@ -285,9 +286,14 @@ void __check_limbo(struct rdt_domain *d, bool force_free)
 	u32 idx_limit = resctrl_arch_system_num_rmid_idx();
 	struct rmid_entry *entry;
 	u32 idx, cur_idx = 1;
+	int arch_mon_ctx;
 	bool rmid_dirty;
 	u64 val = 0;
 
+	arch_mon_ctx = resctrl_arch_mon_ctx_alloc(r, QOS_L3_OCCUP_EVENT_ID);
+	if (arch_mon_ctx < 0)
+		return;
+
 	/*
 	 * Skip RMID 0 and start from RMID 1 and check all the RMIDs that
 	 * are marked as busy for occupancy < threshold. If the occupancy
@@ -301,7 +307,8 @@ void __check_limbo(struct rdt_domain *d, bool force_free)
 		entry = __rmid_entry(idx);
 
 		if (resctrl_arch_rmid_read(r, d, entry->closid, entry->rmid,
-					   QOS_L3_OCCUP_EVENT_ID, &val)) {
+					   QOS_L3_OCCUP_EVENT_ID, &val,
+					   arch_mon_ctx)) {
 			rmid_dirty = true;
 		} else {
 			rmid_dirty = (val >= resctrl_rmid_realloc_threshold);
@@ -316,6 +323,8 @@ void __check_limbo(struct rdt_domain *d, bool force_free)
 		}
 		cur_idx = idx + 1;
 	}
+
+	resctrl_arch_mon_ctx_free(r, QOS_L3_OCCUP_EVENT_ID, arch_mon_ctx);
 }
 
 bool has_busy_rmid(struct rdt_resource *r, struct rdt_domain *d)
@@ -407,16 +416,22 @@ static void add_rmid_to_limbo(struct rmid_entry *entry)
 {
 	struct rdt_resource *r = &rdt_resources_all[RDT_RESOURCE_L3].r_resctrl;
 	struct rdt_domain *d;
+	int arch_mon_ctx;
 	u64 val = 0;
 	u32 idx;
 	int err;
 
 	idx = resctrl_arch_rmid_idx_encode(entry->closid, entry->rmid);
 
+	arch_mon_ctx = resctrl_arch_mon_ctx_alloc(r, QOS_L3_OCCUP_EVENT_ID);
+	if (arch_mon_ctx < 0)
+		return;
+
 	entry->busy = 0;
 	list_for_each_entry(d, &r->domains, list) {
 		err = resctrl_arch_rmid_read(r, d, entry->closid, entry->rmid,
-					     QOS_L3_OCCUP_EVENT_ID, &val);
+					     QOS_L3_OCCUP_EVENT_ID, &val,
+					     arch_mon_ctx);
 		if (err || val <= resctrl_rmid_realloc_threshold)
 			continue;
@@ -429,6 +444,7 @@ static void add_rmid_to_limbo(struct rmid_entry *entry)
 		set_bit(idx, d->rmid_busy_llc);
 		entry->busy++;
 	}
+	resctrl_arch_mon_ctx_free(r, QOS_L3_OCCUP_EVENT_ID, arch_mon_ctx);
 
 	if (entry->busy)
 		rmid_limbo_count++;
@@ -465,7 +481,7 @@ static int __mon_event_count(u32 closid, u32 rmid, struct rmid_read *rr)
 		resctrl_arch_reset_rmid(rr->r, rr->d, closid, rmid, rr->evtid);
 
 	rr->err = resctrl_arch_rmid_read(rr->r, rr->d, closid, rmid, rr->evtid,
-					 &tval);
+					 &tval, rr->arch_mon_ctx);
 	if (rr->err)
 		return rr->err;
@@ -538,6 +554,9 @@ int mon_event_count(void *info)
 	int ret;
 
 	rdtgrp = rr->rgrp;
+	rr->arch_mon_ctx = resctrl_arch_mon_ctx_alloc(rr->r, rr->evtid);
+	if (rr->arch_mon_ctx < 0)
+		return rr->arch_mon_ctx;
 
 	ret = __mon_event_count(rdtgrp->closid, rdtgrp->mon.rmid, rr);
@@ -564,6 +583,8 @@ int mon_event_count(void *info)
 	if (ret == 0)
 		rr->err = 0;
 
+	resctrl_arch_mon_ctx_free(rr->r, rr->evtid, rr->arch_mon_ctx);
+
 	return 0;
 }
@@ -700,11 +721,21 @@ static void mbm_update(struct rdt_resource *r, struct rdt_domain *d,
 	if (is_mbm_total_enabled()) {
 		rr.evtid = QOS_L3_MBM_TOTAL_EVENT_ID;
 		rr.val = 0;
+		rr.arch_mon_ctx = resctrl_arch_mon_ctx_alloc(rr.r, rr.evtid);
+		if (rr.arch_mon_ctx < 0)
+			return;
+
 		__mon_event_count(closid, rmid, &rr);
+
+		resctrl_arch_mon_ctx_free(rr.r, rr.evtid, rr.arch_mon_ctx);
 	}
 	if (is_mbm_local_enabled()) {
 		rr.evtid = QOS_L3_MBM_LOCAL_EVENT_ID;
 		rr.val = 0;
+		rr.arch_mon_ctx = resctrl_arch_mon_ctx_alloc(rr.r, rr.evtid);
+		if (rr.arch_mon_ctx < 0)
+			return;
+
 		__mon_event_count(closid, rmid, &rr);
 
 		/*
@@ -714,6 +745,7 @@ static void mbm_update(struct rdt_resource *r, struct rdt_domain *d,
 		 */
 		if (is_mba_sc(NULL))
 			mbm_bw_count(closid, rmid, &rr);
+		resctrl_arch_mon_ctx_free(rr.r, rr.evtid, rr.arch_mon_ctx);
 	}
 }
diff --git a/include/linux/resctrl.h b/include/linux/resctrl.h
index 57d32c3ce06f..d90d3dca48e9 100644
--- a/include/linux/resctrl.h
+++ b/include/linux/resctrl.h
@@ -230,6 +230,7 @@ void resctrl_offline_domain(struct rdt_resource *r, struct rdt_domain *d);
  * @rmid:		rmid of the counter to read.
  * @eventid:		eventid to read, e.g. L3 occupancy.
  * @val:		result of the counter read in bytes.
+ * @arch_mon_ctx:	An allocated context from resctrl_arch_mon_ctx_alloc().
  *
  * Call from process context on a CPU that belongs to domain @d.
  *
@@ -238,8 +239,7 @@ void resctrl_offline_domain(struct rdt_resource *r, struct rdt_domain *d);
  */
 int resctrl_arch_rmid_read(struct rdt_resource *r, struct rdt_domain *d,
 			   u32 closid, u32 rmid, enum resctrl_event_id eventid,
-			   u64 *val);
-
+			   u64 *val, int arch_mon_ctx);
 /**
  * resctrl_arch_reset_rmid() - Reset any private state associated with rmid
-- 
2.30.2
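
[Editor's note, illustration only and not part of the patch: the calling pattern
the new hooks introduce, abridged from the __check_limbo() hunk above. Here
'r', 'd', 'closid' and 'rmid' stand in for whatever resource, domain and IDs the
caller already holds, and the pr_debug() is placeholder error handling.]

	int arch_mon_ctx;
	u64 val = 0;

	/* May sleep while a monitor is allocated (e.g. MPAM); a no-op stub on x86. */
	arch_mon_ctx = resctrl_arch_mon_ctx_alloc(r, QOS_L3_OCCUP_EVENT_ID);
	if (arch_mon_ctx < 0)
		return;

	/* The same context can be reused for reads on every domain. */
	if (resctrl_arch_rmid_read(r, d, closid, rmid, QOS_L3_OCCUP_EVENT_ID,
				   &val, arch_mon_ctx))
		pr_debug("counter read failed\n");

	resctrl_arch_mon_ctx_free(r, QOS_L3_OCCUP_EVENT_ID, arch_mon_ctx);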