From: Valentin Schneider
To: Mel Gorman, Srikar Dronamraju
Cc: Gautham R. Shenoy, Michael Ellerman, Michael Neuling, Rik van Riel,
 Vincent Guittot, Dietmar Eggemann, Nicholas Piggin, Anton Blanchard,
 Parth Shah, Vaidyanathan Srinivasan, LKML, linuxppc-dev@lists.ozlabs.org
Subject: Re: [RFC/PATCH] powerpc/smp: Add SD_SHARE_PKG_RESOURCES flag to MC
 sched-domain
In-Reply-To: <20210412093722.GS3697@techsingularity.net>
References: <1617341874-1205-1-git-send-email-ego@linux.vnet.ibm.com>
 <20210412062436.GB2633526@linux.vnet.ibm.com>
 <20210412093722.GS3697@techsingularity.net>
Date: Mon, 12 Apr 2021 11:06:19 +0100
Message-ID: <871rbfom04.mognet@arm.com>

On 12/04/21 10:37, Mel Gorman wrote:
> On Mon, Apr 12, 2021 at 11:54:36AM +0530, Srikar Dronamraju wrote:
>> * Gautham R. Shenoy [2021-04-02 11:07:54]:
>>
>> >
>> > To remedy this, this patch proposes that the LLC be moved to the MC
>> > level, which is a group of cores in one half of the chip.
>> >
>> > SMT (SMT4) --> MC (Hemisphere)[LLC] --> DIE
>> >
>>
>> I think marking the Hemisphere as an LLC in a P10 scenario is a good
>> idea.
>>
>> > While there is no cache being shared at this level, this is still the
>> > level where some amount of cache snooping takes place, and it is
>> > relatively faster to access data from the caches of the cores within
>> > this domain. With this change, we no longer see regressions on P10
>> > for applications which require single-threaded performance.
>>
>> Peter, Valentin, Vincent, Mel, et al.,
>>
>> On architectures where we have multiple levels of cache-access latency
>> within a DIE (for example: one within the current LLC or SMT core,
>> another at the MC or Hemisphere level, and finally across hemispheres),
>> do you have any suggestions on how we could handle this in the core
>> scheduler?
>>
>
> Minimally, I think it would be worth detecting when there are multiple
> LLCs per node and exposing that in generic code as a static branch. In
> select_idle_cpu, consider taking two passes -- first on the LLC domain,
> and, if no idle CPU is found and the search depth allows, a second pass
> within the node with the LLC CPUs masked out.

I think that's actually a decent approach. Tying SD_SHARE_PKG_RESOURCES
to something other than pure cache topology in a generic manner is tough,
as it relies on murky, ill-defined hardware fabric properties.

Last I tried thinking about that, I stopped at having a core-to-core
latency matrix, building domains off of that, and having some knob
specifying the highest distance value below which we'd set
SD_SHARE_PKG_RESOURCES. There are a few things I 'hate' about that; for
one, it makes cpus_share_cache() somewhat questionable.

> While there would be a latency hit because the cache is not shared, it
> would still be an idle CPU local to memory. That would potentially be
> beneficial on Zen* as well, without having to introduce new domains in
> the topology hierarchy.
>
> --
> Mel Gorman
> SUSE Labs
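
To make the two-pass idea concrete, a rough kernel-style sketch follows.
This is purely illustrative, not actual kernel code:
select_idle_cpu_two_pass() and the sched_multi_llc_per_node static key are
made-up names, the detection of multiple LLCs per node is elided, and a
real version would have to plug into the existing SIS_PROP search-depth
accounting rather than scan the remainder of the node unconditionally.

/* Sketch only -- names and detection logic are hypothetical. */
DEFINE_STATIC_KEY_FALSE(sched_multi_llc_per_node);

static int select_idle_cpu_two_pass(struct task_struct *p, int target)
{
	struct cpumask *cpus = this_cpu_cpumask_var_ptr(select_idle_mask);
	struct sched_domain *sd = rcu_dereference(per_cpu(sd_llc, target));
	int cpu;

	if (!sd)
		return -1;

	/* Pass 1: the target's LLC domain, as select_idle_cpu() scans today. */
	cpumask_and(cpus, sched_domain_span(sd), p->cpus_ptr);
	for_each_cpu_wrap(cpu, cpus, target) {
		if (available_idle_cpu(cpu))
			return cpu;
	}

	/* Only take a second pass when a node is known to hold several LLCs. */
	if (!static_branch_unlikely(&sched_multi_llc_per_node))
		return -1;

	/* Pass 2: the rest of the node, with the pass-1 CPUs masked out. */
	cpumask_andnot(cpus, cpumask_of_node(cpu_to_node(target)),
		       sched_domain_span(sd));
	cpumask_and(cpus, cpus, p->cpus_ptr);
	for_each_cpu_wrap(cpu, cpus, target) {
		if (available_idle_cpu(cpu))
			return cpu;
	}

	return -1;
}

The appeal is that the pass-2 fallback CPU, while missing cache locality,
is still local to memory, which is the Zen* argument quoted above.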
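
For the latency-matrix musing, the knob could in principle look something
like the sketch below. Everything here is hypothetical: core_latency[]
would have to be populated from firmware tables or boot-time measurement,
and a dense NR_CPUS x NR_CPUS array is for illustration only, not a
sensible representation.

/* Hypothetical: a per-core-pair latency table and a user-tunable knob. */
static unsigned int core_latency[NR_CPUS][NR_CPUS];	/* e.g. in ns */
static unsigned int llc_distance_threshold __read_mostly = 100;

static int llc_flags_for_span(const struct cpumask *span)
{
	unsigned int worst = 0;
	int i, j;

	/* Worst-case core-to-core latency within the candidate domain. */
	for_each_cpu(i, span)
		for_each_cpu(j, span)
			worst = max(worst, core_latency[i][j]);

	/* Below the knob, treat the span as if it shared an LLC. */
	return worst <= llc_distance_threshold ? SD_SHARE_PKG_RESOURCES : 0;
}

Which is exactly where the objection above bites: once
SD_SHARE_PKG_RESOURCES no longer means "these CPUs share a cache",
cpus_share_cache() stops saying what its name claims.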