Date: Tue, 18 May 2021 16:47:56 +0100
From: Beata Michalska
To: Vincent Guittot
Cc: linux-kernel, Peter Zijlstra, Ingo Molnar, Juri Lelli, Valentin Schneider,
    Dietmar Eggemann, "corbet@lwn.net", Randy Dunlap, Linux Doc Mailing List
Subject: Re: [PATCH v4 1/3] sched/core: Introduce SD_ASYM_CPUCAPACITY_FULL sched_domain flag
Message-ID: <20210518154756.GD3993@e120325.cambridge.arm.com>
References: <1621239831-5870-1-git-send-email-beata.michalska@arm.com>
 <1621239831-5870-2-git-send-email-beata.michalska@arm.com>
 <20210518142746.GA3993@e120325.cambridge.arm.com>
 <20210518150947.GC3993@e120325.cambridge.arm.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To:
User-Agent: Mutt/1.9.4
X-Mailing-List: linux-kernel@vger.kernel.org

On Tue, May 18, 2021 at 05:28:11PM +0200, Vincent Guittot wrote:
> On Tue, 18 May 2021 at 17:09, Beata Michalska wrote:
> >
> > On Tue, May 18, 2021 at 04:53:09PM +0200, Vincent Guittot wrote:
> > > On Tue, 18 May 2021 at 16:27, Beata Michalska wrote:
> > > >
> > > > On Tue, May 18, 2021 at 03:39:27PM +0200, Vincent Guittot wrote:
> > > > > On Mon, 17 May 2021 at 10:24, Beata Michalska wrote:
> > > > > >
> > > > > > Introducing a new sched_domain topology flag, complementary to
> > > > > > SD_ASYM_CPUCAPACITY, to distinguish between sched_domains where any
> > > > > > CPU capacity asymmetry is detected (SD_ASYM_CPUCAPACITY) and ones
> > > > > > where a full range of CPU capacities is visible to all domain
> > > > > > members (SD_ASYM_CPUCAPACITY_FULL).
> > > > >
> > > > > I'm not sure about what you want to detect:
> > > > >
> > > > > Is it a sched_domain level with a full range of cpu capacity, i.e.
> > > > > with at least 1 min capacity and 1 max capacity?
> > > > > Or do you want to get at least 1 cpu of each capacity?
> > > > That would be at least one CPU of each available capacity within a given
> > > > domain, so the full set of available capacities within a domain.
> > >
> > > Would be good to add that precision.
> > Will do.
> > >
> > > Although I'm not sure if that's the best policy compared to only
> > > getting the range, which would be far simpler to implement.
> > > Do you have some topology example?
> >
> > An example from the second patch in the series:
> >
> > DIE [                                        ]
> > MC  [                 ] [                    ]
> >
> > CPU       [0]  [1]  [2]  [3]  [4]  [5]  [6]  [7]
> > Capacity  |.....|  |.....|  |.....|  |.....|
> >              L        M        B        B
>
> The one above, which is described in your patchset, works with the range policy.
Yeap, but that is just one variation of all the possibilities....
>
> >
> > Where:
> > arch_scale_cpu_capacity(L) = 512
> > arch_scale_cpu_capacity(M) = 871
> > arch_scale_cpu_capacity(B) = 1024
> >
> > which could also look like:
> >
> > DIE [                                                  ]
> > MC  [                 ] [                              ]
> >
> > CPU       [0]  [1]  [2]  [3]  [4]  [5]  [6]  [7]  [8]  [9]
> > Capacity  |.....|  |.....|  |.....|  |.....|  |.....|
> >              L        M        B        L        B
>
> I know that HW guys can come up with crazy ideas, but they would
> probably add M instead of L with B in the 2nd cluster, as a boost of
> performance at the cost of powering up another "cluster", in which case
> the range policy works as well.
> >
> > Considering only the range would mean losing the two (M) CPUs out of
> > sight for feec() in some cases.
>
> Is it realistic? Considering all the code and complexity added by
> patch 2, will we really use it in the end?
>
I do completely agree that the first approach was slightly .... blown out of
proportion, but with Peter's idea the complexity has dropped significantly.
With the range being considered, we are back to per-domain tracking of
available capacities (min/max), plus additional cycles spent on comparing
capacities. Unless I fail to see the simplicity of that approach?

---
BR
B.

> Regards,
> Vincent
>
> >
> > ---
> > BR.
> > B
> >
> > > >
> > > > ---
> > > > BR
> > > > B.
> > > >
> > > > > >
> > > > > > With the distinction between full and partial CPU capacity asymmetry,
> > > > > > brought in by the newly introduced flag, the scope of the original
> > > > > > SD_ASYM_CPUCAPACITY flag gets shifted, still maintaining the existing
> > > > > > behaviour when one is detected on a given sched domain, allowing
> > > > > > misfit migrations within sched domains that do not observe the full
> > > > > > range of CPU capacities but still do have members with different
> > > > > > capacity values.
> > > > > > It loses, though, its meaning when it comes to the lowest CPU
> > > > > > asymmetry sched_domain level per-cpu pointer, which is now to be
> > > > > > denoted by the SD_ASYM_CPUCAPACITY_FULL flag.
> > > > > >
> > > > > > Signed-off-by: Beata Michalska
> > > > > > Reviewed-by: Valentin Schneider
> > > > > > ---
> > > > > >  include/linux/sched/sd_flags.h | 10 ++++++++++
> > > > > >  1 file changed, 10 insertions(+)
> > > > > >
> > > > > > diff --git a/include/linux/sched/sd_flags.h b/include/linux/sched/sd_flags.h
> > > > > > index 34b21e9..57bde66 100644
> > > > > > --- a/include/linux/sched/sd_flags.h
> > > > > > +++ b/include/linux/sched/sd_flags.h
> > > > > > @@ -91,6 +91,16 @@ SD_FLAG(SD_WAKE_AFFINE, SDF_SHARED_CHILD)
> > > > > >  SD_FLAG(SD_ASYM_CPUCAPACITY, SDF_SHARED_PARENT | SDF_NEEDS_GROUPS)
> > > > > >
> > > > > >  /*
> > > > > > + * Domain members have different CPU capacities spanning all unique CPU
> > > > > > + * capacity values.
> > > > > > + *
> > > > > > + * SHARED_PARENT: Set from the topmost domain down to the first domain where
> > > > > > + *                all available CPU capacities are visible
> > > > > > + * NEEDS_GROUPS:  Per-CPU capacity is asymmetric between groups.
> > > > > > + */
> > > > > > +SD_FLAG(SD_ASYM_CPUCAPACITY_FULL, SDF_SHARED_PARENT | SDF_NEEDS_GROUPS)
> > > > > > +
> > > > > > +/*
> > > > > >   * Domain members share CPU capacity (i.e. SMT)
> > > > > >   *
> > > > > >   * SHARED_CHILD: Set from the base domain up until spanned CPUs no longer share
> > > > > > --
> > > > > > 2.7.4