From: Valentin Schneider
To: Dietmar Eggemann, Qais Yousef
Cc: linux-kernel@vger.kernel.org, Peter Zijlstra, Ingo Molnar, Vincent Guittot, Morten Rasmussen, Quentin Perret, Pavan Kondeti, Rik van Riel
Subject: Re: [PATCH 5/8] sched/fair: Make check_misfit_status() only compare dynamic capacities
References: <20210128183141.28097-1-valentin.schneider@arm.com> <20210128183141.28097-6-valentin.schneider@arm.com> <20210203151546.rwkbdjxc2vgiodvx@e107158-lin>
Date: Thu, 04 Feb 2021 11:34:38 +0000
On 04/02/21 11:49, Dietmar Eggemann wrote:
> On 03/02/2021 16:15, Qais Yousef wrote:
>> On 01/28/21 18:31, Valentin Schneider wrote:
>
> [...]
>
>>> @@ -10238,7 +10236,7 @@ static void nohz_balancer_kick(struct rq *rq)
>>>  	 * When ASYM_CPUCAPACITY; see if there's a higher capacity CPU
>>>  	 * to run the misfit task on.
>>>  	 */
>>> -	if (check_misfit_status(rq, sd)) {
>>> +	if (check_misfit_status(rq)) {
>
> Since check_misfit_status() doesn't need sd anymore it looks like that
> rcu_dereference(per_cpu(sd_asym_cpucapacity, cpu)) could be replaced by
> static_branch_unlikely(&sched_asym_cpucapacity)) in nohz_balancer_kick().
>
> But as you mentioned in an earlier conversation we do need to check sd
> because of asymmetric CPU capacity systems w/ exclusive cpusets which
> could create symmetric islands (unique capacity_orig among CPUs).
>
> Maybe worth putting a comment here (similar to the one in sis()) so
> people don't try to optimize?

How about:

--->8---

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index c2351b87824f..4b71f4d1d324 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -6322,15 +6322,8 @@ static int select_idle_sibling(struct task_struct *p, int prev, int target)
 	 * sd_asym_cpucapacity rather than sd_llc.
 	 */
 	if (static_branch_unlikely(&sched_asym_cpucapacity)) {
+		/* See sd_has_asym_cpucapacity() */
 		sd = rcu_dereference(per_cpu(sd_asym_cpucapacity, target));
-		/*
-		 * On an asymmetric CPU capacity system where an exclusive
-		 * cpuset defines a symmetric island (i.e. one unique
-		 * capacity_orig value through the cpuset), the key will be set
-		 * but the CPUs within that cpuset will not have a domain with
-		 * SD_ASYM_CPUCAPACITY. These should follow the usual symmetric
-		 * capacity path.
-		 */
 		if (sd) {
 			i = select_idle_capacity(p, sd, target);
 			return ((unsigned)i < nr_cpumask_bits) ? i : target;
@@ -10274,6 +10267,10 @@ static void nohz_balancer_kick(struct rq *rq)
 		}
 	}
 
+	/*
+	 * Below checks don't actually use the sd, but they still hinge on its
+	 * presence. See sd_has_asym_cpucapacity().
+	 */
 	sd = rcu_dereference(per_cpu(sd_asym_cpucapacity, cpu));
 	if (sd) {
 		/*
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 21bd71f58c06..ea7f0155e268 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -1482,6 +1482,33 @@ DECLARE_PER_CPU(struct sched_domain __rcu *, sd_asym_packing);
 DECLARE_PER_CPU(struct sched_domain __rcu *, sd_asym_cpucapacity);
 extern struct static_key_false sched_asym_cpucapacity;
 
+/*
+ * Note that the static key is system-wide, but the visibility of
+ * SD_ASYM_CPUCAPACITY isn't. Thus the static key being enabled does not
+ * imply all CPUs can see asymmetry.
+ *
+ * Consider an asymmetric CPU capacity system such as:
+ *
+ * MC [              ]
+ *     0 1 2 3 4 5
+ *     L L L L B B
+ *
+ * w/ arch_scale_cpu_capacity(L) < arch_scale_cpu_capacity(B)
+ *
+ * By default, booting this system will enable the sched_asym_cpucapacity
+ * static key, and all CPUs will see SD_ASYM_CPUCAPACITY set at their MC
+ * sched_domain.
+ *
+ * Further consider exclusive cpusets creating a "symmetric island":
+ *
+ * MC [   ][        ]
+ *     0 1  2 3 4 5
+ *     L L  L L B B
+ *
+ * Again, booting this will enable the static key, but CPUs 0-1 will *not* have
+ * SD_ASYM_CPUCAPACITY set in any of their sched_domain. This is the intended
+ * behaviour, as CPUs 0-1 should be treated as a regular, isolated SMP system.
+ */
 static inline bool sd_has_asym_cpucapacity(struct sched_domain *sd)
 {
 	return static_branch_unlikely(&sched_asym_cpucapacity) &&
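
As an aside for readers less familiar with the topology code, the "symmetric
island" argument above can be illustrated with a small userspace sketch. This
is plain C, not kernel code: build_domains(), the capacity values and the
cpuset spans below are all made up for illustration, and the strings/pointers
only stand in for the real static key and per-CPU sched_domain pointer. It
merely models the point the proposed comment documents: the key is
system-wide, the pointer is per-CPU.

#include <stdbool.h>
#include <stdio.h>

#define NR_CPUS 6

/* Hypothetical capacities: four LITTLEs and two bigs, as in the diagrams above. */
static const unsigned long capacity_orig[NR_CPUS] = { 400, 400, 400, 400, 1024, 1024 };

/* Stand-in for the system-wide sched_asym_cpucapacity static key. */
static bool asym_key_enabled;

/* Stand-in for per_cpu(sd_asym_cpucapacity, cpu); NULL means "no asymmetry visible". */
static const char *sd_asym[NR_CPUS];

/* "Rebuild domains": each exclusive cpuset is a half-open span [start, end) of CPUs. */
static void build_domains(const int *start, const int *end, int nr_spans)
{
	for (int s = 0; s < nr_spans; s++) {
		unsigned long lo = capacity_orig[start[s]], hi = lo;

		for (int cpu = start[s]; cpu < end[s]; cpu++) {
			if (capacity_orig[cpu] < lo)
				lo = capacity_orig[cpu];
			if (capacity_orig[cpu] > hi)
				hi = capacity_orig[cpu];
		}

		/* Any asymmetric span enables the (global) key... */
		if (lo != hi)
			asym_key_enabled = true;

		/* ...but only CPUs inside an asymmetric span get a non-NULL pointer. */
		for (int cpu = start[s]; cpu < end[s]; cpu++)
			sd_asym[cpu] = (lo != hi) ? "MC" : NULL;
	}
}

int main(void)
{
	/* Exclusive cpusets: {0,1} is a symmetric island, {2..5} still sees L and B. */
	const int start[] = { 0, 2 }, end[] = { 2, 6 };

	build_domains(start, end, 2);

	printf("sched_asym_cpucapacity key: %s\n", asym_key_enabled ? "enabled" : "disabled");
	for (int cpu = 0; cpu < NR_CPUS; cpu++)
		printf("cpu%d: sd_asym_cpucapacity = %s\n",
		       cpu, sd_asym[cpu] ? sd_asym[cpu] : "NULL");

	return 0;
}

Running it reports the key as enabled while CPUs 0-1 print NULL, which is the
situation both the select_idle_sibling() comment and the proposed
nohz_balancer_kick() comment point at: the per-CPU sd lookup cannot simply be
replaced by a check of the static key.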