Received: by 2002:a05:7412:d8a:b0:e2:908c:2ebd with SMTP id b10csp701176rdg; Thu, 12 Oct 2023 20:09:24 -0700 (PDT) X-Google-Smtp-Source: AGHT+IGViLt8LKRd/P7sqETt8mDs9QjGuEv4KFRLw+rK2Iyil4dEhGWf2wwmDDNsS6Y6NhjiFuxZ X-Received: by 2002:a05:6870:17a3:b0:1d6:5c40:11b6 with SMTP id r35-20020a05687017a300b001d65c4011b6mr32471814oae.14.1697166563943; Thu, 12 Oct 2023 20:09:23 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1697166563; cv=none; d=google.com; s=arc-20160816; b=BtoE8sQodjlKa197FNycEpAVmQ3MEv75zuMh6dMsFtO4llfvcMHsKf8PVqJbaiuLWA U2IVFh4g6zLi5Usp2gPxJobdZJ9/6MFUTDfZysXRX61xSDDTPA9i7xgFG1zkmAqSYAkg w/Dc1xd0ssxl8QJkDEmU2qaWD/QTe21lfgmCLoC/Oux+N4i4Z1nu1Xomcx5By4PI9uA/ Ahc+j6SVFCiW9qvr6tfSh00h1LwAK1dcYKQiCDEb3B2IEwTLuP1zMGg38JULMdS+/eUp E2txY+WelgM26h8noCPejmgVaqiS6p8vt+HgSKbmxa5v+VU6nAfQhnXf+PL1CljQhyje mKXA== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:mime-version:user-agent:message-id:in-reply-to :date:references:subject:cc:to:from:dkim-signature; bh=gEASk/yU/FH14pK9TMd3jMSlcdS9K+ffrXQEBJfXVZA=; fh=oLaCincPtxcWhHjRPwSBejOHzXwM+e/1MTyqsbWMSLI=; b=0Am6sHrcGA7cryGxt6PMeUP7SHo0t1Ew2oK3aGF8dhvvgWo/vaNue6eZbbW9ZFDJlj B6qiR00hPiXEheym6/ArNMbwqOzypOf1v1TfZaEWNg32nxlH1kFqPeSu2EHfe+PkRxm9 k/mevxPFFP1CyzUsbe6uhE+sahWoxkJE2IP1pcPAaBY8aP7BOqeIWuim2Aywf+q3mvtu KFzv6deAReayCBGp5oofHJ3vqcrqkoUD1FLJ38Rr0m1qI8J6ywUdnWd1PEdC4wSJdBQP MXOucDUEwQQAyDajvPKNfQiHQG+xSj2EIH5dT3K/JNsoyh8jLEIiu3pTjbULFVIh9ESj YnIA== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@intel.com header.s=Intel header.b=ScHmWAWW; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::3:2 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=intel.com Return-Path: Received: from agentk.vger.email (agentk.vger.email. [2620:137:e000::3:2]) by mx.google.com with ESMTPS id 15-20020a630d4f000000b005898cf1c6a0si3570054pgn.324.2023.10.12.20.09.23 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Thu, 12 Oct 2023 20:09:23 -0700 (PDT) Received-SPF: pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::3:2 as permitted sender) client-ip=2620:137:e000::3:2; Authentication-Results: mx.google.com; dkim=pass header.i=@intel.com header.s=Intel header.b=ScHmWAWW; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::3:2 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=intel.com Received: from out1.vger.email (depot.vger.email [IPv6:2620:137:e000::3:0]) by agentk.vger.email (Postfix) with ESMTP id 0AF6581A9B45; Thu, 12 Oct 2023 20:09:21 -0700 (PDT) X-Virus-Status: Clean X-Virus-Scanned: clamav-milter 0.103.10 at agentk.vger.email Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S229504AbjJMDJI (ORCPT + 99 others); Thu, 12 Oct 2023 23:09:08 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:56938 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229436AbjJMDJH (ORCPT ); Thu, 12 Oct 2023 23:09:07 -0400 Received: from mgamail.intel.com (mgamail.intel.com [134.134.136.126]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 86FAC91 for ; Thu, 12 Oct 2023 20:09:05 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1697166545; x=1728702545; h=from:to:cc:subject:references:date:in-reply-to: message-id:mime-version; bh=o/31H4jmTsoKxypwaT/p0YWnSttnLl3/KzbSB3YQFtU=; b=ScHmWAWWC8tn9vMkrNVxv7lhxPhLLjkLBXTKhv3u4ElQSvCX8uErI03e 09D93kLTq5fel9xDOg5QxkuKeg3Ef+//D8EyM3HaJF5eZyGc4VTy70JA6 8Kkkk34UojdXsmKAkZ6N3rCO5M7DX4HfFFNSkQpDw75DN6/u0IWawYPaT WwGsTLS1wzQULRT+CVlGIPGjH5kzmlKoeG0N0/NWGfUaxRvPPu6z0h0we 4tMCU5zWSKbfEo8MVPRRm8uCCNn8yBWkVyEYa2F9Z9azisgMdgA3Qa1DM ur/AwCClyC9ip32B5lxj/yhr5WPgLIBKWEBAnwtTNxzs7odtOTGlxYxcj A==; X-IronPort-AV: E=McAfee;i="6600,9927,10861"; a="370163248" X-IronPort-AV: E=Sophos;i="6.03,219,1694761200"; d="scan'208";a="370163248" Received: from orsmga007.jf.intel.com ([10.7.209.58]) by orsmga106.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 12 Oct 2023 20:09:04 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=McAfee;i="6600,9927,10861"; a="748168202" X-IronPort-AV: E=Sophos;i="6.03,219,1694761200"; d="scan'208";a="748168202" Received: from yhuang6-desk2.sh.intel.com (HELO yhuang6-desk2.ccr.corp.intel.com) ([10.238.208.55]) by orsmga007-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 12 Oct 2023 20:08:59 -0700 From: "Huang, Ying" To: Mel Gorman Cc: , , Arjan Van De Ven , Sudeep Holla , Andrew Morton , Vlastimil Babka , "David Hildenbrand" , Johannes Weiner , "Dave Hansen" , Michal Hocko , "Pavel Tatashin" , Matthew Wilcox , Christoph Lameter Subject: Re: [PATCH 02/10] cacheinfo: calculate per-CPU data cache size References: <20230920061856.257597-1-ying.huang@intel.com> <20230920061856.257597-3-ying.huang@intel.com> <20231011122027.pw3uw32sdxxqjsrq@techsingularity.net> <87h6mwf3gf.fsf@yhuang6-desk2.ccr.corp.intel.com> <20231012125253.fpeehd6362c5v2sj@techsingularity.net> <87v8bcdly7.fsf@yhuang6-desk2.ccr.corp.intel.com> <20231012152250.xuu5mvghwtonpvp2@techsingularity.net> Date: Fri, 13 Oct 2023 11:06:51 +0800 In-Reply-To: <20231012152250.xuu5mvghwtonpvp2@techsingularity.net> (Mel Gorman's message of "Thu, 12 Oct 2023 16:22:50 +0100") Message-ID: <87pm1jcjas.fsf@yhuang6-desk2.ccr.corp.intel.com> User-Agent: Gnus/5.13 (Gnus v5.13) Emacs/28.2 (gnu/linux) MIME-Version: 1.0 Content-Type: text/plain; charset=ascii X-Spam-Status: No, score=-0.9 required=5.0 tests=DKIMWL_WL_HIGH,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,HEADER_FROM_DIFFERENT_DOMAINS, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS autolearn=unavailable autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on agentk.vger.email Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-Greylist: Sender passed SPF test, not delayed by milter-greylist-4.6.4 (agentk.vger.email [0.0.0.0]); Thu, 12 Oct 2023 20:09:21 -0700 (PDT) Mel Gorman writes: > On Thu, Oct 12, 2023 at 09:12:00PM +0800, Huang, Ying wrote: >> Mel Gorman writes: >> >> > On Thu, Oct 12, 2023 at 08:08:32PM +0800, Huang, Ying wrote: >> >> Mel Gorman writes: >> >> >> >> > On Wed, Sep 20, 2023 at 02:18:48PM +0800, Huang Ying wrote: >> >> >> Per-CPU data cache size is useful information. For example, it can be >> >> >> used to determine per-CPU cache size. So, in this patch, the data >> >> >> cache size for each CPU is calculated via data_cache_size / >> >> >> shared_cpu_weight. >> >> >> >> >> >> A brute-force algorithm to iterate all online CPUs is used to avoid >> >> >> to allocate an extra cpumask, especially in offline callback. >> >> >> >> >> >> Signed-off-by: "Huang, Ying" >> >> > >> >> > It's not necessarily relevant to the patch, but at least the scheduler >> >> > also stores some per-cpu topology information such as sd_llc_size -- the >> >> > number of CPUs sharing the same last-level-cache as this CPU. It may be >> >> > worth unifying this at some point if it's common that per-cpu >> >> > information is too fine and per-zone or per-node information is too >> >> > coarse. This would be particularly true when considering locking >> >> > granularity, >> >> > >> >> >> Cc: Sudeep Holla >> >> >> Cc: Andrew Morton >> >> >> Cc: Mel Gorman >> >> >> Cc: Vlastimil Babka >> >> >> Cc: David Hildenbrand >> >> >> Cc: Johannes Weiner >> >> >> Cc: Dave Hansen >> >> >> Cc: Michal Hocko >> >> >> Cc: Pavel Tatashin >> >> >> Cc: Matthew Wilcox >> >> >> Cc: Christoph Lameter >> >> >> --- >> >> >> drivers/base/cacheinfo.c | 42 ++++++++++++++++++++++++++++++++++++++- >> >> >> include/linux/cacheinfo.h | 1 + >> >> >> 2 files changed, 42 insertions(+), 1 deletion(-) >> >> >> >> >> >> diff --git a/drivers/base/cacheinfo.c b/drivers/base/cacheinfo.c >> >> >> index cbae8be1fe52..3e8951a3fbab 100644 >> >> >> --- a/drivers/base/cacheinfo.c >> >> >> +++ b/drivers/base/cacheinfo.c >> >> >> @@ -898,6 +898,41 @@ static int cache_add_dev(unsigned int cpu) >> >> >> return rc; >> >> >> } >> >> >> >> >> >> +static void update_data_cache_size_cpu(unsigned int cpu) >> >> >> +{ >> >> >> + struct cpu_cacheinfo *ci; >> >> >> + struct cacheinfo *leaf; >> >> >> + unsigned int i, nr_shared; >> >> >> + unsigned int size_data = 0; >> >> >> + >> >> >> + if (!per_cpu_cacheinfo(cpu)) >> >> >> + return; >> >> >> + >> >> >> + ci = ci_cacheinfo(cpu); >> >> >> + for (i = 0; i < cache_leaves(cpu); i++) { >> >> >> + leaf = per_cpu_cacheinfo_idx(cpu, i); >> >> >> + if (leaf->type != CACHE_TYPE_DATA && >> >> >> + leaf->type != CACHE_TYPE_UNIFIED) >> >> >> + continue; >> >> >> + nr_shared = cpumask_weight(&leaf->shared_cpu_map); >> >> >> + if (!nr_shared) >> >> >> + continue; >> >> >> + size_data += leaf->size / nr_shared; >> >> >> + } >> >> >> + ci->size_data = size_data; >> >> >> +} >> >> > >> >> > This needs comments. >> >> > >> >> > It would be nice to add a comment on top describing the limitation of >> >> > CACHE_TYPE_UNIFIED here in the context of >> >> > update_data_cache_size_cpu(). >> >> >> >> Sure. Will do that. >> >> >> > >> > Thanks. >> > >> >> > The L2 cache could be unified but much smaller than a L3 or other >> >> > last-level-cache. It's not clear from the code what level of cache is being >> >> > used due to a lack of familiarity of the cpu_cacheinfo code but size_data >> >> > is not the size of a cache, it appears to be the share of a cache a CPU >> >> > would have under ideal circumstances. >> >> >> >> Yes. And it isn't for one specific level of cache. It's sum of per-CPU >> >> shares of all levels of cache. But the calculation is inaccurate. More >> >> details are in the below reply. >> >> >> >> > However, as it appears to also be >> >> > iterating hierarchy then this may not be accurate. Caches may or may not >> >> > allow data to be duplicated between levels so the value may be inaccurate. >> >> >> >> Thank you very much for pointing this out! The cache can be inclusive >> >> or not. So, we cannot calculate the per-CPU slice of all-level caches >> >> via adding them together blindly. I will change this in a follow-on >> >> patch. >> >> >> > >> > Please do, I would strongly suggest basing this on LLC only because it's >> > the only value you can be sure of. This change is the only change that may >> > warrant a respin of the series as the history will be somewhat confusing >> > otherwise. >> >> I am still checking whether it's possible to get cache inclusive >> information via cpuid. >> > > cpuid may be x86-specific so that potentially leads to different behaviours > on different architectures. > >> If there's no reliable way to do that. We can use the max value of >> per-CPU share of each level of cache. For inclusive cache, that will be >> the value of LLC. For non-inclusive cache, the value will be more >> accurate. For example, on Intel Sapphire Rapids, the L2 cache is 2 MB >> per core, while LLC is 1.875 MB per core according to [1]. >> > > Be that as it may, it still opens the possibility of significantly different > behaviour depending on the CPU family. I would strongly recommend that you > start with LLC only because LLC is also the topology level of interest used > by the scheduler and it's information that is generally available. Trying > to get accurate information on every level and the complexity of dealing > with inclusive vs exclusive cache or write-back vs write-through should > be a separate patch, with separate justification and notes on how it can > lead to behaviour specific to the CPU family or architecture. IMHO, we should try to optimize for as many CPUs as possible. The size of the per-CPU (HW thread for SMT) slice of LLC of latest Intel server CPUs is as follows, Icelake: 0.75 MB Sapphire Rapids: 0.9375 MB While pcp->batch is 63 * 4 / 1024 = 0.2461 MB. In [03/10], only if "per_cpu_cache_slice > 4 * pcp->batch", we will cache pcp->batch before draining the PCP. This makes the optimization unavailable for a significant portion of the server CPUs. In theory, if "per_cpu_cache_slice > 2 * pcp->batch", we can reuse cache-hot pages between CPUs. So, if we change the condition to "per_cpu_cache_slice > 3 * pcp->batch", I think that we are still safe. As for other CPUs, according to [2], AMD CPUs have larger per-CPU LLC. So, it's OK for them. ARM CPUs has much smaller per-CPU LLC, so some further optimization is needed. [2] https://www.anandtech.com/show/16594/intel-3rd-gen-xeon-scalable-review/2 So, I suggest to use "per_cpu_cache_slice > 3 * pcp->batch" in [03/10], and use LLC in this patch [02/10]. Then, we can optimize the per-CPU slice of cache calculation in the follow-up patches. -- Best Regards, Huang, Ying