Date: Fri, 11 Sep 2020 17:28:54 +0100
From: Qais Yousef
To: Aubrey Li
Cc: mingo@redhat.com, peterz@infradead.org, juri.lelli@redhat.com, vincent.guittot@linaro.org, dietmar.eggemann@arm.com, rostedt@goodmis.org, bsegall@google.com, mgorman@suse.de, valentin.schneider@arm.com, tim.c.chen@linux.intel.com, linux-kernel@vger.kernel.org, Aubrey Li
Subject: Re: [RFC PATCH v1 1/1] sched/fair: select idle cpu from idle cpumask in sched domain
Message-ID: <20200911162853.xldy6fvvqph2lahj@e107158-lin.cambridge.arm.com>
References: <20200910054203.525420-1-aubrey.li@intel.com> <20200910054203.525420-2-aubrey.li@intel.com>
In-Reply-To: <20200910054203.525420-2-aubrey.li@intel.com>
On 09/10/20 13:42, Aubrey Li wrote:
> Added idle cpumask to track idle cpus in sched domain. When a CPU
> enters idle, its corresponding bit in the idle cpumask will be set,
> and when the CPU exits idle, its bit will be cleared.
>
> When a task wakes up to select an idle cpu, scanning the idle cpumask
> has lower cost than scanning all the cpus in the last level cache
> domain, especially when the system is heavily loaded.
>
> Signed-off-by: Aubrey Li
> ---
>  include/linux/sched/topology.h | 13 +++++++++++++
>  kernel/sched/fair.c            |  4 +++-
>  kernel/sched/topology.c        |  2 +-
>  3 files changed, 17 insertions(+), 2 deletions(-)
>
> diff --git a/include/linux/sched/topology.h b/include/linux/sched/topology.h
> index fb11091129b3..43a641d26154 100644
> --- a/include/linux/sched/topology.h
> +++ b/include/linux/sched/topology.h
> @@ -65,8 +65,21 @@ struct sched_domain_shared {
> 	atomic_t	ref;
> 	atomic_t	nr_busy_cpus;
> 	int		has_idle_cores;
> +	/*
> +	 * Span of all idle CPUs in this domain.
> +	 *
> +	 * NOTE: this field is variable length. (Allocated dynamically
> +	 * by attaching extra space to the end of the structure,
> +	 * depending on how many CPUs the kernel has booted up with)
> +	 */
> +	unsigned long	idle_cpus_span[];

Can't you use cpumask_var_t and zalloc_cpumask_var() instead?

The patch looks useful. Did it help you with any particular workload? It'd be good to expand on that in the commit message.

Thanks

--
Qais Yousef