From: Valentin Schneider
To: Vincent Guittot
Cc: Ingo Molnar, Peter Zijlstra, Juri Lelli, Dietmar Eggemann,
    Steven Rostedt, Ben Segall, Mel Gorman, linux-kernel
Subject: Re: [PATCH] sched/fair: improve spreading of utilization
Date: Fri, 13 Mar 2020 15:47:17 +0000
References: <20200312165429.990-1-vincent.guittot@linaro.org>
X-Mailing-List: linux-kernel@vger.kernel.org

On Fri, Mar 13 2020, Vincent Guittot wrote:

>> > And with more coffee that's another Doh, ASYM_PACKING would end up as
>> > migrate_task.
>> > So this only affects the reduced capacity migration, which
>> > might be hard to notice in benchmarks.
>>
>> yes ASYM_PACKING uses migrate_task, and the case of reduced capacity
>> would use it too and would not be impacted by this patch. I say
>> "would" because the original rework of load balance got rid of this
>> case. I'm going to prepare a separate fix for this.
>
> After more thought, I think that we are safe for reduced capacity too,
> because this is handled in the migrate_load case. In my previous
> reply, I was thinking of the case where the rq is not overloaded but the
> CPU has reduced capacity, which is not handled. But in that case we don't
> have to force the migration of the task, because there is still enough
> capacity; otherwise the rq would be overloaded and we would be back to
> the case already handled.

Good point on the capacity reduction vs group_is_overloaded.

That said, can't we also reach this with migrate_task? Say the local group
is entirely idle, and the busiest group has a few non-idle CPUs, but they
all have at most 1 running task. AFAICT we would still go to
calculate_imbalance() and try to balance out the number of idle CPUs.

If the migration_type is migrate_util, that can't happen because of this
change. Since we have this progressive balancing strategy (tasks -> util ->
load), it's a bit odd to have this "gap" in the middle where we get one
less possibility to trigger active balance, don't you think? That is,
provided I didn't say nonsense again :)

It's not a super big deal, but I think it's nice if we can maintain a
consistent / gradual migration policy.