From: Valentin Schneider
To: Vincent Guittot
Cc: Ingo Molnar, Peter Zijlstra, Juri Lelli, Dietmar Eggemann,
    Steven Rostedt, Ben Segall, Mel Gorman, linux-kernel
Subject: Re: [PATCH 4/4] sched/fair: reduce busy load balance interval
Date: Wed, 16 Sep 2020 09:34:04 +0100
References: <20200914100340.17608-1-vincent.guittot@linaro.org>
            <20200914100340.17608-5-vincent.guittot@linaro.org>
User-agent: mu4e 0.9.17; emacs 26.3

On 16/09/20 08:02, Vincent Guittot wrote:
> On Tue, 15 Sep 2020 at 21:04, Valentin Schneider wrote:
>>
>> On 14/09/20 11:03, Vincent Guittot wrote:
>> > The busy_factor, which increases load balance interval when a cpu is busy,
>> > is set to 32 by default. This value generates some huge LB interval on
>> > large system like the THX2 made of 2 node x 28 cores x 4 threads.
>> > For such system, the interval increases from 112ms to 3584ms at MC level.
>> > And from 228ms to 7168ms at NUMA level.
>> >
>> > Even on smaller system, a lower busy factor has shown improvement on the
>> > fair distribution of the running time so let reduce it for all.
>> >
>>
>> ISTR you mentioned taking this one step further and making
>> (interval * busy_factor) scale logarithmically with the number of CPUs to
>> avoid reaching outrageous numbers. Did you experiment with that already?
>
> Yes, I tried the logarithmic scaling, but it didn't give any benefit
> over this solution for the fairness problem, and it impacted other use
> cases because it affects the idle interval too. It also adds more
> constraints on the computation of the interval and busy_factor,
> because we can end up with the same interval for 2 consecutive levels.
>

Right, I suppose we could frob a topology level index in there to
prevent that if we really wanted to...

> That being said, it might be useful for other cases, but I haven't
> looked further into this.
>

Fair enough!

>>
>> > Signed-off-by: Vincent Guittot
>> > ---
>> >  kernel/sched/topology.c | 2 +-
>> >  1 file changed, 1 insertion(+), 1 deletion(-)
>> >
>> > diff --git a/kernel/sched/topology.c b/kernel/sched/topology.c
>> > index 1a84b778755d..a8477c9e8569 100644
>> > --- a/kernel/sched/topology.c
>> > +++ b/kernel/sched/topology.c
>> > @@ -1336,7 +1336,7 @@ sd_init(struct sched_domain_topology_level *tl,
>> >  	*sd = (struct sched_domain){
>> >  		.min_interval		= sd_weight,
>> >  		.max_interval		= 2*sd_weight,
>> > -		.busy_factor		= 32,
>> > +		.busy_factor		= 16,
>> >  		.imbalance_pct		= 117,
>> >
>> >  		.cache_nice_tries	= 0,