Message-ID: <3249da56-b5b8-4f10-a148-8a221cdf269a@arm.com>
Date: Thu, 16 Nov 2023 17:34:01 +0100
Subject: Re: [PATCH] sched/fair: Use all little CPUs for CPU-bound workload
To: Pierre Gondois, linux-kernel@vger.kernel.org
Cc: Ingo Molnar, Peter Zijlstra, Juri Lelli, Vincent Guittot,
    Steven Rostedt, Ben Segall, Mel Gorman, Daniel Bristot de Oliveira,
    Valentin Schneider
References: <20231110125902.2152380-1-pierre.gondois@arm.com>
From: Dietmar Eggemann
In-Reply-To: <20231110125902.2152380-1-pierre.gondois@arm.com>

On 10/11/2023 13:59, Pierre Gondois wrote:
> Running n CPU-bound tasks on an n-CPU platform with asymmetric CPU
> capacity might result in a task placement where two tasks run on a
> big CPU and none on a little CPU. This placement could be more optimal
> by using all CPUs.
>
> Testing platform:
> Juno-r2:
> - 2 big CPUs (1-2), maximum capacity of 1024
> - 4 little CPUs (0,3-5), maximum capacity of 383
>
> Testing workload ([1]):
> Spawn 6 CPU-bound tasks. During the first 100ms (step 1), each task
> is affine to a CPU, except for:
> - one little CPU which is left idle.
> - one big CPU which has 2 tasks affine.
> After the 100ms (step 2), remove the cpumask affinity.

I used your workload on my Juno-r0 with LISA (rt-app overwrites the
mainline CPU capacity values [446 1024 1024 446 446 446] to
[675 1024 1024 675 675 675] to adapt to rt-app busy loop's instruction
mix). Here I can't see the issue you bring up. The two tasks sharing a
CPU have util_avg = ~512 and the load balance of one task to the idle
little CPU does happen. I assume it's due to the difference in CPU
capacity of the little CPUs: 383 < 512 < 675?

>
> Before patch:
> During step 2, the load balancer running from the idle CPU tags sched
> domains as:
> - little CPUs: 'group_has_spare'. Indeed, 3 CPU-bound tasks run on a
>   4-CPU sched domain, and the idle CPU provides enough spare
>   capacity.
> - big CPUs: 'group_overloaded'. Indeed, 3 tasks run on a 2-CPU
>   sched domain, so the following path is used:
>   group_is_overloaded()
>   \-if (sgs->sum_nr_running <= sgs->group_weight) return true;
>
> The following path which would change the migration type to
> 'migrate_task' is not taken:
> calculate_imbalance()
> \-if (env->idle != CPU_NOT_IDLE && env->imbalance == 0)
> as the local group has some spare capacity, so the imbalance
> is not 0.
>
> The migration type requested is 'migrate_util' and the busiest
> runqueue is the big CPU's runqueue having 2 tasks (each having a
> utilization of 512). The idle little CPU cannot pull one of these
> tasks as its capacity is too small for the task. The following path
> is used:

Ah, here you're describing the issue I mentioned above.

> detach_tasks()
> \-case migrate_util:
>   \-if (util > env->imbalance) goto next;
>
> After patch:
> When the local group has spare capacity and the busiest group is at
> least tagged as 'group_fully_busy', if the local group has more CPUs
> than CFS tasks and the busiest group more CFS tasks than CPUs,
> request a 'migrate_task' type migration.
>
> Improvement:
> Running the testing workload [1] with the step 2 representing
> a ~10s load for a big CPU:
>   Before patch: ~19.3s
>   After patch: ~18s (-6.7%)
>
> The issue only happens at the DIE level on platforms able to have
> 'migrate_util' migration types, i.e. no DynamIQ systems where
> SD_SHARE_PKG_RESOURCES is set.

Right, mainline Arm DynamIQ should only have 1 SD level (MC). Android
might still be affected since it runs MC and DIE.

>
> Signed-off-by: Pierre Gondois
> ---
>  kernel/sched/fair.c | 17 +++++++++++++++++
>  1 file changed, 17 insertions(+)
>
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index df348aa55d3c..5a215c96d420 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -10495,6 +10495,23 @@ static inline void calculate_imbalance(struct lb_env *env, struct sd_lb_stats *s
>  			env->imbalance = max(local->group_capacity, local->group_util) -
>  					 local->group_util;
>
> +			/*
> +			 * On an asymmetric system with CPU-bound tasks, a
> +			 * migrate_util balance might not be able to migrate a
> +			 * task from a big to a little CPU, letting a little
> +			 * CPU unused.
> +			 * If local has an empty CPU and busiest is overloaded,
> +			 * balance one task with a migrate_task migration type
> +			 * instead.
> +			 */
> +			if (env->sd->flags & SD_ASYM_CPUCAPACITY &&
> +			    local->sum_nr_running < local->group_weight &&
> +			    busiest->sum_nr_running > busiest->group_weight) {
> +				env->migration_type = migrate_task;
> +				env->imbalance = 1;
> +				return;
> +			}
> +
>  			/*
>  			 * In some cases, the group's utilization is max or even
>  			 * higher than capacity because of migrations but the
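In case someone wants to replay this without the script referenced as
[1] (not included in this excerpt), below is a minimal user-space
sketch of the workload described above. It is only an illustration
under stated assumptions, not the actual test case: the CPU numbering
follows the Juno-r2 layout from the commit message (big: 1-2, little:
0,3-5), the choice of which little CPU stays idle (CPU 4) and which big
CPU gets two threads (CPU 1) is arbitrary, and it runs for a fixed
wall-clock time instead of a fixed amount of work.

/* Hypothetical reproduction sketch -- not the script referenced as [1]. */
#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>
#include <unistd.h>

#define NR_TASKS 6
#define NR_CPUS  6

static volatile int stop;

/* CPU-bound busy loop, terminated via the 'stop' flag. */
static void *busy_loop(void *arg)
{
	(void)arg;
	while (!stop)
		;
	return NULL;
}

/* Affine thread 't' to the 'nr' CPUs listed in 'cpus'. */
static void pin(pthread_t t, const int *cpus, int nr)
{
	cpu_set_t set;
	int i;

	CPU_ZERO(&set);
	for (i = 0; i < nr; i++)
		CPU_SET(cpus[i], &set);
	pthread_setaffinity_np(t, sizeof(set), &set);
}

int main(void)
{
	/* Step 1 placement: little CPU4 left idle, big CPU1 gets 2 threads. */
	int initial_cpu[NR_TASKS] = { 0, 1, 1, 2, 3, 5 };
	int all_cpus[NR_CPUS] = { 0, 1, 2, 3, 4, 5 };
	pthread_t threads[NR_TASKS];
	int i;

	for (i = 0; i < NR_TASKS; i++) {
		pthread_create(&threads[i], NULL, busy_loop, NULL);
		pin(threads[i], &initial_cpu[i], 1);
	}

	usleep(100 * 1000);		/* step 1: 100ms with affinities set */

	for (i = 0; i < NR_TASKS; i++)	/* step 2: drop the affinity */
		pin(threads[i], all_cpus, NR_CPUS);

	sleep(10);			/* fixed runtime instead of fixed work */
	stop = 1;

	for (i = 0; i < NR_TASKS; i++)
		pthread_join(threads[i], NULL);

	return 0;
}

Build with 'gcc -O2 -pthread' and watch the task placement during
step 2 with e.g. trace-cmd or htop.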