From: Valentin Schneider
To: linux-kernel@vger.kernel.org
Cc: peterz@infradead.org, mingo@kernel.org, vincent.guittot@linaro.org,
    dietmar.eggemann@arm.com, morten.rasmussen@arm.com,
    mgorman@techsingularity.net
Subject: Re: [PATCH] sched/topology: Fix overlapping sched_group build
Date: Wed, 25 Mar 2020 18:09:44 +0000
References: <20200324125533.17447-1-valentin.schneider@arm.com>
X-Mailing-List: linux-kernel@vger.kernel.org

On Wed, Mar 25 2020, Valentin Schneider wrote:
> On Tue, Mar 24 2020, Valentin Schneider wrote:
>>  kernel/sched/topology.c | 23 ++++++++++++++++++++---
>>  1 file changed, 20 insertions(+), 3 deletions(-)
>>
>> diff --git a/kernel/sched/topology.c b/kernel/sched/topology.c
>> index 8344757bba6e..7033b27e5162 100644
>> --- a/kernel/sched/topology.c
>> +++ b/kernel/sched/topology.c
>> @@ -866,7 +866,7 @@ build_balance_mask(struct sched_domain *sd, struct sched_group *sg, struct cpuma
>>  			continue;
>>
>>  		/* If we would not end up here, we can't continue from here */
>> -		if (!cpumask_equal(sg_span, sched_domain_span(sibling->child)))
>> +		if (!cpumask_subset(sg_span, sched_domain_span(sibling->child)))
>
> So this is one source of issues; what I've done here is a bit stupid
> since we include CPUs that *cannot* end up there. What I should've done
> is something like:
>
>   cpumask_and(tmp, sched_domain_span(sibling->child), sched_domain_span(sd));
>   if (!cpumask_equal(sg_span, tmp))
>       ...
>
> But even with that I just unfold even more horrors: this breaks the
> overlapping sched_group_capacity (see 1676330ecfa8 ("sched/topology: Fix
> overlapping sched_group_capacity")).
>
> For instance, here I would have
>
>   CPU0-domain2-group4: span=4-5
>   CPU4-domain2-group4: span=4-7 mask=4-5

^ That's using Dietmar's qemu setup; on the D06 that is:

  CPU0-domain2-group48:  span=48-71
  CPU48-domain2-group48: span=48-95 mask=48-71

> Both groups are at the same topology level and have the same first CPU,
> so they point to the same sched_group_capacity structure - but they
> don't have the same span. They would without my "fix", but then the
> group spans are back to being wrong. I'm starting to think this is
> doomed, at least in the current state of things :/