From: Valentin Schneider
To: Tim Chen, Peter Zijlstra
Cc: Juri Lelli, Vincent Guittot, Ricardo Neri, "Ravi V. Shankar", Ben Segall,
    Daniel Bristot de Oliveira, Dietmar Eggemann, Len Brown, Mel Gorman,
    "Rafael J. Wysocki", Srinivas Pandruvada, Steven Rostedt, Ionela Voinescu,
    x86@kernel.org, linux-kernel@vger.kernel.org, Shrikanth Hegde,
    Srikar Dronamraju, naveen.n.rao@linux.vnet.ibm.com, Yicong Yang,
    Barry Song, Chen Yu, Hillf Danton
Subject: Re: [Patch v3 2/6] sched/topology: Record number of cores in sched group
In-Reply-To: <0b20535f4bd6908942c91be86bd17bc3c07514f2.camel@linux.intel.com>
References: <04641eeb0e95c21224352f5743ecb93dfac44654.1688770494.git.tim.c.chen@linux.intel.com>
 <0b20535f4bd6908942c91be86bd17bc3c07514f2.camel@linux.intel.com>
Date: Wed, 12 Jul 2023 10:27:08 +0100

On 10/07/23 15:13, Tim Chen wrote:
> On Mon, 2023-07-10 at 21:33 +0100, Valentin Schneider wrote:
>> On 07/07/23 15:57, Tim Chen wrote:
>> > From: Tim C Chen
>> >
>> > When balancing sibling domains that have different numbers of cores,
>> > the tasks in each sibling domain should be proportional to the number
>> > of cores in that domain. In preparation for implementing such a policy,
>> > record the number of cores in a scheduling group.
>> >
>> > Signed-off-by: Tim Chen
>> > ---
>> >  kernel/sched/sched.h    |  1 +
>> >  kernel/sched/topology.c | 10 +++++++++-
>> >  2 files changed, 10 insertions(+), 1 deletion(-)
>> >
>> > diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
>> > index 3d0eb36350d2..5f7f36e45b87 100644
>> > --- a/kernel/sched/sched.h
>> > +++ b/kernel/sched/sched.h
>> > @@ -1860,6 +1860,7 @@ struct sched_group {
>> >  	atomic_t		ref;
>> >
>> >  	unsigned int		group_weight;
>> > +	unsigned int		cores;
>> >  	struct sched_group_capacity *sgc;
>> >  	int			asym_prefer_cpu;	/* CPU of highest priority in group */
>> >  	int			flags;
>> > diff --git a/kernel/sched/topology.c b/kernel/sched/topology.c
>> > index 6d5628fcebcf..6b099dbdfb39 100644
>> > --- a/kernel/sched/topology.c
>> > +++ b/kernel/sched/topology.c
>> > @@ -1275,14 +1275,22 @@ build_sched_groups(struct sched_domain *sd, int cpu)
>> >  static void init_sched_groups_capacity(int cpu, struct sched_domain *sd)
>> >  {
>> >  	struct sched_group *sg = sd->groups;
>> > +	struct cpumask *mask = sched_domains_tmpmask2;
>> >
>> >  	WARN_ON(!sg);
>> >
>> >  	do {
>> > -		int cpu, max_cpu = -1;
>> > +		int cpu, cores = 0, max_cpu = -1;
>> >
>> >  		sg->group_weight = cpumask_weight(sched_group_span(sg));
>> >
>> > +		cpumask_copy(mask, sched_group_span(sg));
>> > +		for_each_cpu(cpu, mask) {
>> > +			cores++;
>> > +			cpumask_andnot(mask, mask, cpu_smt_mask(cpu));
>> > +		}
>>
>> This rekindled my desire for an SMT core cpumask/iterator. I played around
>> with a global mask, but that's a headache: what if we end up with a core
>> whose SMT threads are split across two exclusive cpusets?
>
> Peter and I pondered that for a while.
> But it seems like partitioning threads in a core between two different
> sched domains is not a very reasonable thing to do.
>
> https://lore.kernel.org/all/20230612112945.GK4253@hirez.programming.kicks-ass.net/
>

Thanks for the link. I'll poke at this a bit more, but regardless:

Reviewed-by: Valentin Schneider
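
For illustration, a rough sketch of the kind of per-SMT-core iterator mentioned
above. The for_each_core_cpu() macro and the sg_nr_cores() wrapper are
hypothetical names, not existing kernel APIs, and like the snippet in the patch
this assumes CONFIG_SCHED_SMT so that cpu_smt_mask() is available:

/*
 * Hypothetical helper (not an existing kernel API): iterate over @mask,
 * visiting exactly one CPU per SMT core. @tmp is a scratch cpumask that
 * gets clobbered. Relies on <linux/cpumask.h> and <linux/topology.h>.
 */
#define for_each_core_cpu(cpu, mask, tmp)				\
	for (cpumask_copy((tmp), (mask)), (cpu) = cpumask_first(tmp);	\
	     (cpu) < nr_cpu_ids;					\
	     cpumask_andnot((tmp), (tmp), cpu_smt_mask(cpu)),		\
	     (cpu) = cpumask_first(tmp))

/* The core count from the patch could then be written as: */
static int sg_nr_cores(struct sched_group *sg, struct cpumask *tmp)
{
	int cpu, cores = 0;

	for_each_core_cpu(cpu, sched_group_span(sg), tmp)
		cores++;

	return cores;
}

Whether the scratch mask is global or passed in doesn't change the iteration
itself; the open question above is only what happens when a core's siblings
end up split across exclusive cpusets.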