Subject: Re: [RFC PATCH v5 4/4] scheduler: Add cluster scheduler level for x86
To: "Song Bao Hua (Barry Song)", "catalin.marinas@arm.com", "will@kernel.org", "rjw@rjwysocki.net",
"vincent.guittot@linaro.org" , "bp@alien8.de" , "tglx@linutronix.de" , "mingo@redhat.com" , "lenb@kernel.org" , "peterz@infradead.org" , "dietmar.eggemann@arm.com" , "rostedt@goodmis.org" , "bsegall@google.com" , "mgorman@suse.de" Cc: "msys.mizuma@gmail.com" , "valentin.schneider@arm.com" , "gregkh@linuxfoundation.org" , Jonathan Cameron , "juri.lelli@redhat.com" , "mark.rutland@arm.com" , "sudeep.holla@arm.com" , "aubrey.li@linux.intel.com" , "linux-arm-kernel@lists.infradead.org" , "linux-kernel@vger.kernel.org" , "linux-acpi@vger.kernel.org" , "x86@kernel.org" , "xuwei (O)" , "Zengtao (B)" , "guodong.xu@linaro.org" , yangyicong , "Liguozhu (Kenneth)" , "linuxarm@openeuler.org" , "hpa@zytor.com" References: <20210319041618.14316-1-song.bao.hua@hisilicon.com> <20210319041618.14316-5-song.bao.hua@hisilicon.com> <110234d1-22ce-8a9a-eabb-c15ac29a5dcd@linux.intel.com> <67cc380019fd40d88d7a493b6cbc0852@hisilicon.com> From: Tim Chen Message-ID: <422b5d06-ec0e-f064-32fe-15df5b2957dd@linux.intel.com> Date: Tue, 20 Apr 2021 11:31:38 -0700 User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101 Thunderbird/68.6.0 MIME-Version: 1.0 In-Reply-To: <67cc380019fd40d88d7a493b6cbc0852@hisilicon.com> Content-Type: text/plain; charset=utf-8 Content-Language: en-US Content-Transfer-Encoding: 7bit Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org On 3/23/21 4:21 PM, Song Bao Hua (Barry Song) wrote: >> >> On 3/18/21 9:16 PM, Barry Song wrote: >>> From: Tim Chen >>> >>> There are x86 CPU architectures (e.g. Jacobsville) where L2 cahce >>> is shared among a cluster of cores instead of being exclusive >>> to one single core. >>> >>> To prevent oversubscription of L2 cache, load should be >>> balanced between such L2 clusters, especially for tasks with >>> no shared data. >>> >>> Also with cluster scheduling policy where tasks are woken up >>> in the same L2 cluster, we will benefit from keeping tasks >>> related to each other and likely sharing data in the same L2 >>> cluster. >>> >>> Add CPU masks of CPUs sharing the L2 cache so we can build such >>> L2 cluster scheduler domain. >>> >>> Signed-off-by: Tim Chen >>> Signed-off-by: Barry Song >> >> >> Barry, >> >> Can you also add this chunk to the patch. >> Thanks. > > Sure, Tim, Thanks. I'll put that into patch 4/4 in v6. > Barry, This chunk will also need to be added to return cluster id for x86. Please add it in your next rev. Thanks. Tim --- diff --git a/arch/x86/include/asm/topology.h b/arch/x86/include/asm/topology.h index 800fa48c9fcd..2548d824f103 100644 --- a/arch/x86/include/asm/topology.h +++ b/arch/x86/include/asm/topology.h @@ -109,6 +109,7 @@ extern const struct cpumask *cpu_clustergroup_mask(int cpu); #define topology_physical_package_id(cpu) (cpu_data(cpu).phys_proc_id) #define topology_logical_die_id(cpu) (cpu_data(cpu).logical_die_id) #define topology_die_id(cpu) (cpu_data(cpu).cpu_die_id) +#define topology_cluster_id(cpu) (per_cpu(cpu_l2c_id, cpu)) #define topology_core_id(cpu) (cpu_data(cpu).cpu_core_id) extern unsigned int __max_die_per_package;