From: Vincent Guittot
Date: Fri, 13 Mar 2020 17:09:55 +0100
Subject: Re: [PATCH] sched/fair: improve spreading of utilization
To: Valentin Schneider
Cc: Ingo Molnar, Peter Zijlstra, Juri Lelli, Dietmar Eggemann,
    Steven Rostedt, Ben Segall, Mel Gorman, linux-kernel
References: <20200312165429.990-1-vincent.guittot@linaro.org>

On Fri, 13 Mar 2020 at 16:47, Valentin Schneider wrote:
>
> On Fri, Mar 13 2020, Vincent Guittot wrote:
> >> > And with more coffee that's another Doh, ASYM_PACKING would end up as
> >> > migrate_task. So this only affects the reduced capacity migration, which
> >>
> >> yes ASYM_PACKING uses migrate_task and the case of reduced capacity
> >> would use it too and would not be impacted by this patch. I say
> >> "would" because the original rework of load balance got rid of this
> >> case. I'm going to prepare a separate fix for this
> >
> > After more thought, I think that we are safe for reduced capacity too
> > because this is handled in the migrate_load case. In my previous
> > reply, I was thinking of the case where rq is not overloaded but the cpu
> > has reduced capacity, which is not handled. But in such a case, we don't
> > have to force the migration of the task because there is still enough
> > capacity; otherwise the rq would be overloaded and we are back to the
> > case already handled.
> >
> Good point on the capacity reduction vs group_is_overloaded.
>
> That said, can't we also reach this with migrate_task? Say the local

The test has only been added for migrate_util, so migrate_task is not
impacted.

> group is entirely idle, and the busiest group has a few non-idle CPUs
> but they all have at most 1 running task. AFAICT we would still go to
> calculate_imbalance(), and try to balance out the number of idle CPUs.

Such a case is handled by migrate_task, when we try to even out the
number of tasks between groups.

> If the migration_type is migrate_util, that can't happen because of this
> change. Since we have this progressive balancing strategy (tasks -> util
> -> load), it's a bit odd to have this "gap" in the middle where we get
> one less possibility to trigger active balance, don't you think? That
> is, providing I didn't say nonsense again :)

Right now, I can't think of a use case that could trigger such a
situation, because we use migrate_util when the source is overloaded,
which means that there is at least one waiting task and we favor that
task in priority.

> It's not a super big deal, but I think it's nice if we can maintain a
> consistent / gradual migration policy.
>
> >>
> >> > might be hard to notice in benchmarks.