References: <20200910054203.525420-1-aubrey.li@intel.com>
 <20200910054203.525420-2-aubrey.li@intel.com>
 <20200911162853.xldy6fvvqph2lahj@e107158-lin.cambridge.arm.com>
 <3f1571ea-b74c-fc40-2696-39ef3fe8b968@linux.intel.com>
 <20200914110809.2nu7vt2s3lzlvxoz@e107158-lin.cambridge.arm.com>
User-agent: mu4e 0.9.17; emacs 26.3
From: Valentin Schneider
To: Qais Yousef
Cc: "Li, Aubrey", Aubrey Li, mingo@redhat.com, peterz@infradead.org,
 juri.lelli@redhat.com, vincent.guittot@linaro.org, dietmar.eggemann@arm.com,
 rostedt@goodmis.org, bsegall@google.com, mgorman@suse.de,
 tim.c.chen@linux.intel.com, linux-kernel@vger.kernel.org
Subject: Re: [RFC PATCH v1 1/1] sched/fair: select idle cpu from idle cpumask in sched domain
In-reply-to: <20200914110809.2nu7vt2s3lzlvxoz@e107158-lin.cambridge.arm.com>
Date: Mon, 14 Sep 2020
12:26:47 +0100
Message-ID:
MIME-Version: 1.0
Content-Type: text/plain
Sender: linux-kernel-owner@vger.kernel.org
Precedence: bulk
List-ID:
X-Mailing-List: linux-kernel@vger.kernel.org

On 14/09/20 12:08, Qais Yousef wrote:
> On 09/14/20 11:31, Valentin Schneider wrote:
>>
>> On 12/09/20 00:04, Li, Aubrey wrote:
>> >>> +++ b/include/linux/sched/topology.h
>> >>> @@ -65,8 +65,21 @@ struct sched_domain_shared {
>> >>> 	atomic_t	ref;
>> >>> 	atomic_t	nr_busy_cpus;
>> >>> 	int		has_idle_cores;
>> >>> +	/*
>> >>> +	 * Span of all idle CPUs in this domain.
>> >>> +	 *
>> >>> +	 * NOTE: this field is variable length. (Allocated dynamically
>> >>> +	 * by attaching extra space to the end of the structure,
>> >>> +	 * depending on how many CPUs the kernel has booted up with)
>> >>> +	 */
>> >>> +	unsigned long	idle_cpus_span[];
>> >>
>> >> Can't you use cpumask_var_t and zalloc_cpumask_var() instead?
>> >
>> > I can use the existing free code. Do we have a problem of this?
>> >
>> Nah, flexible array members are the preferred approach here; this also
>
> Is this your opinion or a rule written somewhere I missed?

I don't think there's a written rule, but AIUI it is preferred by at least
Peter:

https://lore.kernel.org/linux-pm/20180612125930.GP12217@hirez.programming.kicks-ass.net/
https://lore.kernel.org/lkml/20180619110734.GO2458@hirez.programming.kicks-ass.net/

And my opinion is that, if you can, having fewer separate allocations is
better.

>
>> means we don't let CONFIG_CPUMASK_OFFSTACK dictate where this gets
>> allocated.
>>
>> See struct numa_group, struct sched_group, struct sched_domain, struct
>> em_perf_domain...
>
> struct root_domain, struct cpupri_vec, struct generic_pm_domain,
> struct irq_common_data..
>
> Use cpumask_var_t.
>
> Both approach look correct to me, so no objection in principle. cpumask_var_t
> looks neater IMO and will be necessary once more than one cpumask are required
> in a struct.
>

You're right in that cpumask_var_t becomes necessary when you need more than
one mask. For those that use it despite requiring only one mask (cpupri
stuff, struct nohz too), I'm not sure.