From: Valentin Schneider
To: Vincent Guittot
Cc: Ingo Molnar, Peter Zijlstra, Juri Lelli, Dietmar Eggemann,
	Steven Rostedt, Ben Segall, Mel Gorman, linux-kernel
Subject: Re: [PATCH] sched/fair: improve spreading of utilization
Date: Fri, 13 Mar 2020 16:57:54 +0000
References: <20200312165429.990-1-vincent.guittot@linaro.org>

On Fri, Mar 13 2020, Vincent Guittot wrote:

>> Good point on the capacity reduction vs group_is_overloaded.
>>
>> That said, can't we also reach this with migrate_task? Say the local
>
> The test has only been added for migrate_util so migrate_task is not impacted
>
>> group is entirely idle, and the busiest group has a few non-idle CPUs
>> but they all have at most 1 running task. AFAICT we would still go to
>> calculate_imbalance(), and try to balance out the number of idle CPUs.
>
> such case is handled by migrate_task when we try to even the number of
> tasks between groups
>
>>
>> If the migration_type is migrate_util, that can't happen because of this
>> change. Since we have this progressive balancing strategy (tasks -> util
>> -> load), it's a bit odd to have this "gap" in the middle where we get
>> one less possibility to trigger active balance, don't you think? That
>> is, providing I didn't say nonsense again :)
>
> Right now, I can't think of a use case that could trigger such
> situation because we use migrate_util when source is overloaded which
> means that there is at least one waiting task and we favor this task
> in priority
>

Right, what I was trying to say is that AIUI migration_type ==
migrate_task with <= 1 running task per CPU in the busiest group can
*currently* lead to a balance attempt, and thus a potential active
balance.

Consider a local group of 4 idle CPUs, and a busiest group of 3 busy and
1 idle CPUs, each busy CPU having only 1 running task. That busiest
group would be group_has_spare, so we would compute an imbalance of
(4-1) / 2 == 1 task to move (see the sketch appended at the end of this
mail). We'll proceed with the load balance, but we'll only move things
if we go through an active_balance.

My point is that if we prevent this for migrate_util, it would make
sense to prevent it for migrate_task, but it's not straightforward since
we have things like ASYM_PACKING.

>>
>> It's not a super big deal, but I think it's nice if we can maintain a
>> consistent / gradual migration policy.
>>
>
> might be hard to notice in benchmarks.
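
---

Appended: a minimal standalone sketch of the idle-CPU balancing arithmetic
in the scenario above. This is only an illustration of the (4-1) / 2
computation, not a quote of calculate_imbalance(); the struct and function
names below are made up for the example.

	/*
	 * Standalone sketch (not kernel code): models the group_has_spare
	 * case discussed above, where the balancer tries to even out the
	 * number of idle CPUs between the local and the busiest group.
	 */
	#include <stdio.h>

	struct group_stats {
		int idle_cpus;
		int running_tasks;
	};

	/* Half the idle-CPU difference, floored at zero: (4-1)/2 == 1 here. */
	static int idle_cpu_imbalance(const struct group_stats *local,
				      const struct group_stats *busiest)
	{
		int diff = local->idle_cpus - busiest->idle_cpus;

		return diff > 0 ? diff / 2 : 0;
	}

	int main(void)
	{
		/* Local group: 4 idle CPUs. Busiest: 3 busy + 1 idle, 1 task per busy CPU. */
		struct group_stats local = { .idle_cpus = 4, .running_tasks = 0 };
		struct group_stats busiest = { .idle_cpus = 1, .running_tasks = 3 };

		printf("tasks to migrate: %d\n", idle_cpu_imbalance(&local, &busiest));
		return 0;
	}

This prints an imbalance of 1 task, yet with only one running task per busy
CPU nothing can be pulled without an active balance, which is the gap being
discussed.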