From: Valentin Schneider
To: Tim Chen, Peter Zijlstra
Cc: Tim C Chen, Juri Lelli, Vincent Guittot, Ricardo Neri,
	Ravi V. Shankar, Ben Segall, Daniel Bristot de Oliveira,
	Dietmar Eggemann, Len Brown, Mel Gorman, Rafael J. Wysocki,
	Srinivas Pandruvada, Steven Rostedt, Ionela Voinescu,
	x86@kernel.org, linux-kernel@vger.kernel.org, Shrikanth Hegde,
	Srikar Dronamraju, naveen.n.rao@linux.vnet.ibm.com,
	Yicong Yang, Barry Song, Chen Yu, Hillf Danton
Subject: Re: [Patch v3 2/6] sched/topology: Record number of cores in sched group
In-Reply-To: <04641eeb0e95c21224352f5743ecb93dfac44654.1688770494.git.tim.c.chen@linux.intel.com>
References: <04641eeb0e95c21224352f5743ecb93dfac44654.1688770494.git.tim.c.chen@linux.intel.com>
Date: Mon, 10 Jul 2023 21:33:47 +0100

On 07/07/23 15:57, Tim Chen wrote:
> From: Tim C Chen
>
> When balancing sibling domains that have different number of cores,
> tasks in respective sibling domain should be proportional to the number
> of cores in each domain. In preparation of implementing such a policy,
> record the number of cores in a scheduling group.
>
> Signed-off-by: Tim Chen
> ---
>  kernel/sched/sched.h    |  1 +
>  kernel/sched/topology.c | 10 +++++++++-
>  2 files changed, 10 insertions(+), 1 deletion(-)
>
> diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
> index 3d0eb36350d2..5f7f36e45b87 100644
> --- a/kernel/sched/sched.h
> +++ b/kernel/sched/sched.h
> @@ -1860,6 +1860,7 @@ struct sched_group {
>  	atomic_t		ref;
>
>  	unsigned int		group_weight;
> +	unsigned int		cores;
>  	struct sched_group_capacity *sgc;
>  	int			asym_prefer_cpu;	/* CPU of highest priority in group */
>  	int			flags;
> diff --git a/kernel/sched/topology.c b/kernel/sched/topology.c
> index 6d5628fcebcf..6b099dbdfb39 100644
> --- a/kernel/sched/topology.c
> +++ b/kernel/sched/topology.c
> @@ -1275,14 +1275,22 @@ build_sched_groups(struct sched_domain *sd, int cpu)
>  static void init_sched_groups_capacity(int cpu, struct sched_domain *sd)
>  {
>  	struct sched_group *sg = sd->groups;
> +	struct cpumask *mask = sched_domains_tmpmask2;
>
>  	WARN_ON(!sg);
>
>  	do {
> -		int cpu, max_cpu = -1;
> +		int cpu, cores = 0, max_cpu = -1;
>
>  		sg->group_weight = cpumask_weight(sched_group_span(sg));
>
> +		cpumask_copy(mask, sched_group_span(sg));
> +		for_each_cpu(cpu, mask) {
> +			cores++;
> +			cpumask_andnot(mask, mask, cpu_smt_mask(cpu));
> +		}

This rekindled my desire for an SMT core cpumask/iterator. I played around
with a global mask, but that's a headache: what if we end up with a core
whose SMT threads are split across two exclusive cpusets?

I ended up necro'ing a patch from Peter [1], but didn't get anywhere nice
(the LLC shared storage caused me issues).

All that to say, I couldn't think of a nicer way :(

[1]: https://lore.kernel.org/all/20180530143106.082002139@infradead.org/#t
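To make the pattern concrete, the counting hunk above could be factored into
a small helper along these lines. A sketch only: the helper name and the
scratch-mask parameter are made up, while cpumask_copy(), cpumask_andnot(),
for_each_cpu() and cpu_smt_mask() are the existing kernel primitives the
patch itself uses.

#include <linux/cpumask.h>
#include <linux/topology.h>

/*
 * Count distinct SMT cores in @span. Each iteration counts the current
 * CPU as one core, then strips that CPU's SMT siblings from the scratch
 * mask so the same core is never visited again. @tmp must be a
 * caller-owned scratch cpumask (e.g. sched_domains_tmpmask2 above).
 */
static inline int cpumask_nr_cores(const struct cpumask *span,
				   struct cpumask *tmp)
{
	int cpu, cores = 0;

	cpumask_copy(tmp, span);
	for_each_cpu(cpu, tmp) {
		cores++;
		cpumask_andnot(tmp, tmp, cpu_smt_mask(cpu));
	}
	return cores;
}

Clearing bits of the mask being iterated is fine here: for_each_cpu()
re-evaluates the mask at each step, and only the current CPU and
not-yet-visited siblings get cleared. With a helper of that shape, the
loop body in init_sched_groups_capacity() would reduce to something like
sg->cores = cpumask_nr_cores(sched_group_span(sg), mask).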