References: <20201021150335.1103231-1-aubrey.li@linux.intel.com>
User-agent: mu4e 0.9.17; emacs 26.3
From: Valentin Schneider
To: Aubrey Li
Cc: mingo@redhat.com, peterz@infradead.org, juri.lelli@redhat.com,
    vincent.guittot@linaro.org, dietmar.eggemann@arm.com, rostedt@goodmis.org,
    bsegall@google.com, mgorman@suse.de, tim.c.chen@linux.intel.com,
    linux-kernel@vger.kernel.org, Aubrey Li, Qais Yousef, Jiang Biao
Subject: Re: [RFC PATCH v3] sched/fair: select idle cpu from idle cpumask for task wakeup
In-reply-to: <20201021150335.1103231-1-aubrey.li@linux.intel.com>
Date: Tue, 03 Nov 2020 19:27:56 +0000

Hi,

On 21/10/20 16:03, Aubrey Li wrote:
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index 6b3b59cc51d6..088d1995594f 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -6023,6 +6023,38 @@ void __update_idle_core(struct rq *rq)
> 	rcu_read_unlock();
> }
>
> +static DEFINE_PER_CPU(bool, cpu_idle_state);

I would've expected this to be far less compact than a cpumask, but that's
not the story readelf is telling me. Objdump tells me this is recouping some
of the padding in .data..percpu, at least with the arm64 defconfig.

In any case this ought to be better wrt cacheline bouncing, which I suppose
is what we ultimately want here. Also, see rambling about init value below.

> @@ -10070,6 +10107,12 @@ static void nohz_balancer_kick(struct rq *rq)
> 	if (unlikely(rq->idle_balance))
> 		return;
>
> +	/* The CPU is not in idle, update idle cpumask */
> +	if (unlikely(sched_idle_cpu(cpu))) {
> +		/* Allow SCHED_IDLE cpu as a wakeup target */
> +		update_idle_cpumask(rq, true);
> +	} else
> +		update_idle_cpumask(rq, false);

This means that without CONFIG_NO_HZ_COMMON, a CPU going into idle will
never be accounted as going out of it, right? Eventually the cpumask should
end up full, which conceptually implements the previous behaviour of
select_idle_cpu() but in a fairly roundabout way...

> diff --git a/kernel/sched/topology.c b/kernel/sched/topology.c
> index 9079d865a935..f14a6ef4de57 100644
> --- a/kernel/sched/topology.c
> +++ b/kernel/sched/topology.c
> @@ -1407,6 +1407,7 @@ sd_init(struct sched_domain_topology_level *tl,
> 	sd->shared = *per_cpu_ptr(sdd->sds, sd_id);
> 	atomic_inc(&sd->shared->ref);
> 	atomic_set(&sd->shared->nr_busy_cpus, sd_weight);
> +	cpumask_copy(sds_idle_cpus(sd->shared), sched_domain_span(sd));

So at init you would have (single LLC for sake of simplicity):

  \all cpu : cpu_idle_state[cpu] == false
             cpumask_full(sds_idle_cpus) == true

IOW it'll require all CPUs to go idle at some point for these two states to
be properly aligned.
Shouldn't cpu_idle_state then be init'd to true? The same misalignment
happens again on hotplug, except that there cpu_idle_state[cpu] may be
either true or false at the point where the sds_idle_cpus mask is reset to
all 1's.