From: Valentin Schneider
To: Vincent Guittot
Cc: mingo@redhat.com, peterz@infradead.org, juri.lelli@redhat.com,
    dietmar.eggemann@arm.com, rostedt@goodmis.org, bsegall@google.com,
    mgorman@suse.de, linux-kernel@vger.kernel.org
Subject: Re: [PATCH 4/4] sched/fair: reduce busy load balance interval
Date: Tue, 15 Sep 2020 20:04:56 +0100
In-reply-to: <20200914100340.17608-5-vincent.guittot@linaro.org>
References: <20200914100340.17608-1-vincent.guittot@linaro.org>
 <20200914100340.17608-5-vincent.guittot@linaro.org>
User-agent: mu4e 0.9.17; emacs 26.3

On 14/09/20 11:03, Vincent Guittot wrote:
> The busy_factor, which increases load balance interval when a cpu is busy,
> is set to 32 by default. This value generates some huge LB interval on
> large system like the THX2 made of 2 node x 28 cores x 4 threads.
> For such system, the interval increases from 112ms to 3584ms at MC level.
> And from 228ms to 7168ms at NUMA level.
>
> Even on smaller system, a lower busy factor has shown improvement on the
> fair distribution of the running time so let reduce it for all.
>

ISTR you mentioned taking this one step further and making
(interval * busy_factor) scale logarithmically with the number of CPUs
to avoid reaching outrageous numbers. Did you experiment with that
already?

> Signed-off-by: Vincent Guittot
> ---
>  kernel/sched/topology.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/kernel/sched/topology.c b/kernel/sched/topology.c
> index 1a84b778755d..a8477c9e8569 100644
> --- a/kernel/sched/topology.c
> +++ b/kernel/sched/topology.c
> @@ -1336,7 +1336,7 @@ sd_init(struct sched_domain_topology_level *tl,
> 	*sd = (struct sched_domain){
> 		.min_interval		= sd_weight,
> 		.max_interval		= 2*sd_weight,
> -		.busy_factor		= 32,
> +		.busy_factor		= 16,
> 		.imbalance_pct		= 117,
>
> 		.cache_nice_tries	= 0,
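
Just to put rough numbers on the above, here is a throwaway userspace
sketch of the arithmetic (not kernel code): it reproduces the
changelog's 3584ms / 7168ms figures with busy_factor = 32, the halved
values with 16, and one possible reading of the "scale logarithmically
with the number of CPUs" idea, where the multiplier is ilog2(sd_weight)
instead of a constant. The log2 variant is purely illustrative, not
something this patch proposes.

/*
 * Throwaway userspace model of the busy-interval arithmetic; the
 * log2-scaled variant is a hypothetical illustration, not kernel code.
 */
#include <stdio.h>

/* sd_init() sets min_interval = sd_weight (in ms) */
static unsigned int busy_interval(unsigned int sd_weight,
				  unsigned int busy_factor)
{
	return sd_weight * busy_factor;
}

/* hypothetical: multiply by ilog2(sd_weight) rather than a constant */
static unsigned int busy_interval_log2(unsigned int sd_weight)
{
	unsigned int factor = 0;

	for (unsigned int w = sd_weight; w > 1; w >>= 1)
		factor++;

	return sd_weight * (factor ? factor : 1);
}

int main(void)
{
	/* THX2 from the changelog: MC weight 112, NUMA weight 224 */
	unsigned int weights[] = { 112, 224 };

	for (int i = 0; i < 2; i++) {
		unsigned int w = weights[i];

		printf("weight %3u: x32 -> %4ums, x16 -> %4ums, xlog2 -> %4ums\n",
		       w, busy_interval(w, 32), busy_interval(w, 16),
		       busy_interval_log2(w));
	}

	return 0;
}

With busy_factor = 16 the NUMA-level busy interval on that box drops
from ~7.2s to ~3.6s; the log2 variant would keep it around 1.6s, at the
cost of more frequent balancing on big machines.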