From: Valentin Schneider
To: "Li, Aubrey"
Cc: Qais Yousef, Aubrey Li, mingo@redhat.com, peterz@infradead.org,
    juri.lelli@redhat.com, vincent.guittot@linaro.org,
    dietmar.eggemann@arm.com, rostedt@goodmis.org, bsegall@google.com,
    mgorman@suse.de, tim.c.chen@linux.intel.com,
    linux-kernel@vger.kernel.org
Subject: Re: [RFC PATCH v1 1/1] sched/fair: select idle cpu from idle cpumask in sched domain
Date: Mon, 14 Sep 2020 11:31:42 +0100
In-Reply-To: <3f1571ea-b74c-fc40-2696-39ef3fe8b968@linux.intel.com>
References: <20200910054203.525420-1-aubrey.li@intel.com>
 <20200910054203.525420-2-aubrey.li@intel.com>
 <20200911162853.xldy6fvvqph2lahj@e107158-lin.cambridge.arm.com>
 <3f1571ea-b74c-fc40-2696-39ef3fe8b968@linux.intel.com>
User-Agent: mu4e 0.9.17; emacs 26.3
On 12/09/20 00:04, Li, Aubrey wrote:
>>> +++ b/include/linux/sched/topology.h
>>> @@ -65,8 +65,21 @@ struct sched_domain_shared {
>>>  	atomic_t	ref;
>>>  	atomic_t	nr_busy_cpus;
>>>  	int		has_idle_cores;
>>> +	/*
>>> +	 * Span of all idle CPUs in this domain.
>>> +	 *
>>> +	 * NOTE: this field is variable length. (Allocated dynamically
>>> +	 * by attaching extra space to the end of the structure,
>>> +	 * depending on how many CPUs the kernel has booted up with)
>>> +	 */
>>> +	unsigned long	idle_cpus_span[];
>>
>> Can't you use cpumask_var_t and zalloc_cpumask_var() instead?
>
> I can use the existing free code. Do we have a problem of this?
>

Nah, flexible array members are the preferred approach here; this also
means we don't let CONFIG_CPUMASK_OFFSTACK dictate where this gets
allocated. See struct numa_group, struct sched_group, struct sched_domain,
struct em_perf_domain... (a rough sketch of the allocation pattern follows
below the quoted text).

>>
>> The patch looks useful. Did it help you with any particular workload? It'd be
>> good to expand on that in the commit message.
>>
> Odd, that included in patch v1 0/1, did you receive it?
>
> Thanks,
> -Aubrey
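
For reference, a minimal sketch of the flexible-array cpumask pattern
discussed above, assuming the patch's idle_cpus_span[] field is present in
struct sched_domain_shared; sds_idle_cpus() and alloc_sds() are invented
helper names used purely for illustration, not part of the patch or the
kernel:

	#include <linux/cpumask.h>
	#include <linux/slab.h>
	#include <linux/sched/topology.h>

	/* Treat the trailing unsigned long[] storage as an ordinary cpumask. */
	static inline struct cpumask *sds_idle_cpus(struct sched_domain_shared *sds)
	{
		return to_cpumask(sds->idle_cpus_span);
	}

	static struct sched_domain_shared *alloc_sds(int node)
	{
		/*
		 * One allocation covers the struct plus enough trailing space
		 * for nr_cpu_ids bits, so CONFIG_CPUMASK_OFFSTACK never forces
		 * the mask into a separate allocation.
		 */
		return kzalloc_node(sizeof(struct sched_domain_shared) + cpumask_size(),
				    GFP_KERNEL, node);
	}

Callers can then use the mask as usual, e.g.
cpumask_set_cpu(cpu, sds_idle_cpus(sds)). Compared with a cpumask_var_t
member, which becomes a separately allocated pointer when
CONFIG_CPUMASK_OFFSTACK=y, this keeps the mask storage in the same
allocation as the rest of the structure either way.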