Date: Wed, 1 Jun 2022 13:45:01 +0100
From: Sudeep Holla
To: Gavin Shan
Cc: linux-kernel@vger.kernel.org, Atish Patra, Atish Patra, Vincent Guittot,
	Morten Rasmussen, Dietmar Eggemann, Qing Wang,
	linux-arm-kernel@lists.infradead.org, linux-riscv@lists.infradead.org,
	Rob Herring
Subject: Re: [PATCH v3 02/16] cacheinfo: Add helper to access any cache index for a given CPU
Message-ID: <20220601124501.v4ylb4oqs4uq2oxj@bogus>
References: <20220525081416.3306043-1-sudeep.holla@arm.com>
	<20220525081416.3306043-2-sudeep.holla@arm.com>
	<20220525081416.3306043-3-sudeep.holla@arm.com>
	<9a70f2b2-e45c-b125-1c14-d5a06097be58@redhat.com>
In-Reply-To: <9a70f2b2-e45c-b125-1c14-d5a06097be58@redhat.com>

On Wed, Jun 01, 2022 at 10:44:20AM +0800, Gavin Shan wrote:
> Hi Sudeep,
> 
> On 5/25/22 4:14 PM, Sudeep Holla wrote:
> > The cacheinfo for a given CPU at a given index is used at quite a few
> > places by fetching the base point for index 0 using the helper
> > per_cpu_cacheinfo(cpu) and offseting it by the required index.
> > 
> > Instead, add another helper to fetch the required pointer directly and
> > use it to simplify and improve readability.
> > 
> > Signed-off-by: Sudeep Holla
> > ---
> >   drivers/base/cacheinfo.c | 14 +++++++-------
> >   1 file changed, 7 insertions(+), 7 deletions(-)
> > 
> 
> s/offseting/offsetting
> 
> It looks good to me with below nits fixed:
> 
> Reviewed-by: Gavin Shan
> 
> > diff --git a/drivers/base/cacheinfo.c b/drivers/base/cacheinfo.c
> > index b0bde272e2ae..c4547d8ac6f3 100644
> > --- a/drivers/base/cacheinfo.c
> > +++ b/drivers/base/cacheinfo.c
> > @@ -25,6 +25,8 @@ static DEFINE_PER_CPU(struct cpu_cacheinfo, ci_cpu_cacheinfo);
> >  #define ci_cacheinfo(cpu)	(&per_cpu(ci_cpu_cacheinfo, cpu))
> >  #define cache_leaves(cpu)	(ci_cacheinfo(cpu)->num_leaves)
> >  #define per_cpu_cacheinfo(cpu)	(ci_cacheinfo(cpu)->info_list)
> > +#define per_cpu_cacheinfo_idx(cpu, idx)		\
> > +				(per_cpu_cacheinfo(cpu) + (idx))
> >  
> >  struct cpu_cacheinfo *get_cpu_cacheinfo(unsigned int cpu)
> >  {
> > @@ -172,7 +174,7 @@ static int cache_setup_of_node(unsigned int cpu)
> >  	}
> >  
> >  	while (index < cache_leaves(cpu)) {
> > -		this_leaf = this_cpu_ci->info_list + index;
> > +		this_leaf = per_cpu_cacheinfo_idx(cpu, index);
> >  		if (this_leaf->level != 1)
> >  			np = of_find_next_cache_node(np);
> >  		else
> > @@ -231,7 +233,7 @@ static int cache_shared_cpu_map_setup(unsigned int cpu)
> >  	for (index = 0; index < cache_leaves(cpu); index++) {
> >  		unsigned int i;
> >  
> > -		this_leaf = this_cpu_ci->info_list + index;
> > +		this_leaf = per_cpu_cacheinfo_idx(cpu, index);
> >  		/* skip if shared_cpu_map is already populated */
> >  		if (!cpumask_empty(&this_leaf->shared_cpu_map))
> >  			continue;
> > @@ -242,7 +244,7 @@ static int cache_shared_cpu_map_setup(unsigned int cpu)
> >  			if (i == cpu || !sib_cpu_ci->info_list)
> >  				continue;/* skip if itself or no cacheinfo */
> >  
> > -			sib_leaf = sib_cpu_ci->info_list + index;
> > +			sib_leaf = per_cpu_cacheinfo_idx(i, index);
> >  			if (cache_leaves_are_shared(this_leaf, sib_leaf)) {
> >  				cpumask_set_cpu(cpu, &sib_leaf->shared_cpu_map);
> >  				cpumask_set_cpu(i, &this_leaf->shared_cpu_map);
> > @@ -258,12 +260,11 @@ static int cache_shared_cpu_map_setup(unsigned int cpu)
> >  
> >  static void cache_shared_cpu_map_remove(unsigned int cpu)
> >  {
> > -	struct cpu_cacheinfo *this_cpu_ci = get_cpu_cacheinfo(cpu);
> >  	struct cacheinfo *this_leaf, *sib_leaf;
> >  	unsigned int sibling, index;
> >  
> >  	for (index = 0; index < cache_leaves(cpu); index++) {
> > -		this_leaf = this_cpu_ci->info_list + index;
> > +		this_leaf = per_cpu_cacheinfo_idx(cpu, index);
> >  		for_each_cpu(sibling, &this_leaf->shared_cpu_map) {
> >  			struct cpu_cacheinfo *sib_cpu_ci;
> >  
> 
> In cache_shared_cpu_map_remove(), the newly introduced macro
> can be applied when the sibling CPU's cache info is fetched.
> 
>    sib_leaf = sib_cpu_ci->info_list + index;
> 
> to
> 
>    sib_leaf = per_cpu_cacheinfo_idx(sibling, index);
> 

Right, I clearly missed that one. Thanks again for the review; all the
other comments are now fixed locally and pushed at [1], and I will post
them as part of the next version.

-- 
Regards,
Sudeep

[1] https://git.kernel.org/sudeep.holla/h/arch_topology
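
For readers outside the kernel tree, the sketch below illustrates the access
pattern the new helper wraps. It is a self-contained, user-space mock built on
assumed, simplified structures (a plain static array stands in for the per-CPU
data, and only info_list, num_leaves, and level are modelled), so it shows the
idiom from the diff rather than the kernel implementation itself.

/*
 * Minimal sketch of the pointer arithmetic wrapped by per_cpu_cacheinfo_idx().
 * The structures and the static per-CPU table are simplified mocks, not the
 * kernel's cacheinfo code.
 */
#include <stdio.h>

#define NR_CPUS		4
#define MAX_LEAVES	3

struct cacheinfo {
	unsigned int level;			/* 1 = L1, 2 = L2, ... */
};

struct cpu_cacheinfo {
	struct cacheinfo *info_list;		/* base pointer, leaf index 0 */
	unsigned int num_leaves;
};

static struct cacheinfo leaves[NR_CPUS][MAX_LEAVES];
static struct cpu_cacheinfo ci_cpu_cacheinfo[NR_CPUS];

/* Mirrors the helper macros shown in the diff */
#define ci_cacheinfo(cpu)	(&ci_cpu_cacheinfo[cpu])
#define cache_leaves(cpu)	(ci_cacheinfo(cpu)->num_leaves)
#define per_cpu_cacheinfo(cpu)	(ci_cacheinfo(cpu)->info_list)
#define per_cpu_cacheinfo_idx(cpu, idx) \
	(per_cpu_cacheinfo(cpu) + (idx))

int main(void)
{
	unsigned int cpu, index;

	/* Fake topology: every CPU gets three leaves, levels 1..3 */
	for (cpu = 0; cpu < NR_CPUS; cpu++) {
		ci_cpu_cacheinfo[cpu].info_list = leaves[cpu];
		ci_cpu_cacheinfo[cpu].num_leaves = MAX_LEAVES;
		for (index = 0; index < MAX_LEAVES; index++)
			leaves[cpu][index].level = index + 1;
	}

	for (cpu = 0; cpu < NR_CPUS; cpu++) {
		for (index = 0; index < cache_leaves(cpu); index++) {
			/* Old pattern: fetch the base pointer, then offset it */
			struct cacheinfo *old = per_cpu_cacheinfo(cpu) + index;
			/* New pattern: single helper, as in the patch */
			struct cacheinfo *leaf = per_cpu_cacheinfo_idx(cpu, index);

			printf("cpu%u leaf%u level=%u same=%d\n",
			       cpu, index, leaf->level, leaf == old);
		}
	}

	return 0;
}

Because the helper takes only a CPU number and an index, a caller never needs
the intermediate struct cpu_cacheinfo pointer: that is what lets
cache_shared_cpu_map_remove() drop its local this_cpu_ci variable in the diff,
and it is also why the sibling lookup Gavin points out can become
per_cpu_cacheinfo_idx(sibling, index) directly.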