From: bsegall@google.com
To: 王贇
Cc: bsegall@google.com, Peter Zijlstra, Ingo Molnar, Juri Lelli,
    Vincent Guittot, Dietmar Eggemann, Steven Rostedt, Mel Gorman,
    "open list:SCHEDULER"
Subject: Re: [RFC PATCH] sched: fix the nonsense shares when load of cfs_rq is too small
References: <44fa1cee-08db-e4ab-e5ab-08d6fbd421d7@linux.alibaba.com>
            <20200303195245.GF2596@hirez.programming.kicks-ass.net>
            <1180c6cd-ff61-2c9f-d689-ffe58f8c5a68@linux.alibaba.com>
Date: Fri, 06 Mar 2020 11:17:10 -0800
In-Reply-To: <1180c6cd-ff61-2c9f-d689-ffe58f8c5a68@linux.alibaba.com>
             (王贇's message of "Thu, 5 Mar 2020 09:14:47 +0800")
王贇 writes:

> On 2020/3/5 2:47 AM, bsegall@google.com wrote:
> [snip]
>>> Argh, because A->cfs_rq.load.weight is B->se.load.weight, which is
>>> B->shares/nr_cpus.
>>>
>>>> While the se of D on the root cfs_rq is far bigger than 2, so it
>>>> wins the battle.
>>>>
>>>> This patch adds a check for the zero load and makes it MIN_SHARES
>>>> to fix the nonsense shares; after applying it, group C wins as
>>>> expected.
>>>>
>>>> Signed-off-by: Michael Wang
>>>> ---
>>>>  kernel/sched/fair.c | 2 ++
>>>>  1 file changed, 2 insertions(+)
>>>>
>>>> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
>>>> index 84594f8aeaf8..53d705f75fa4 100644
>>>> --- a/kernel/sched/fair.c
>>>> +++ b/kernel/sched/fair.c
>>>> @@ -3182,6 +3182,8 @@ static long calc_group_shares(struct cfs_rq *cfs_rq)
>>>>  	tg_shares = READ_ONCE(tg->shares);
>>>>
>>>>  	load = max(scale_load_down(cfs_rq->load.weight), cfs_rq->avg.load_avg);
>>>> +	if (!load && cfs_rq->load.weight)
>>>> +		load = MIN_SHARES;
>>>>
>>>>  	tg_weight = atomic_long_read(&tg->load_avg);
>>>
>>> Yeah, I suppose that'll do. Hurmph, wants a comment though.
>>>
>>> But that has me looking at other users of scale_load_down(), and doesn't
>>> at least update_tg_cfs_load() suffer the same problem?
>>
>> I think instead we should probably scale_load_down(tg_shares) and
>> scale_load(load_avg). tg_shares is always a scaled integer, so just
>> moving the source of the scaling in the multiply should do the job.
>>
>> i.e.
>>
>> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
>> index fcc968669aea..6d7a9d72d742 100644
>> --- a/kernel/sched/fair.c
>> +++ b/kernel/sched/fair.c
>> @@ -3179,9 +3179,9 @@ static long calc_group_shares(struct cfs_rq *cfs_rq)
>>  	long tg_weight, tg_shares, load, shares;
>>  	struct task_group *tg = cfs_rq->tg;
>>
>> -	tg_shares = READ_ONCE(tg->shares);
>> +	tg_shares = scale_load_down(READ_ONCE(tg->shares));
>>
>> -	load = max(scale_load_down(cfs_rq->load.weight), cfs_rq->avg.load_avg);
>> +	load = max(cfs_rq->load.weight, scale_load(cfs_rq->avg.load_avg));
>>
>>  	tg_weight = atomic_long_read(&tg->load_avg);
>
> I get the point, but IMHO fixing scale_load_down() sounds better, to
> cover all the similar cases; let's first try that way and see if it
> works :-)

Yeah, that might not be a bad idea as well; it's just that doing this
fix would keep you from losing all your precision (and I'd have to
think about whether that would result in fairness issues, like all the
group ses getting the full tg shares, or something like that).
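[Editor's note: the arithmetic being argued about is easy to reproduce
outside the kernel. Below is a stand-alone user-space sketch of the
calc_group_shares() computation — not the kernel code itself: the PELT
bookkeeping is collapsed into plain parameters, and the inputs
(cpu.shares = 2, one runnable se of weight 600, an otherwise idle
group) are made up for illustration. scale_load_down_clamped() is one
possible reading of the "fix scale_load_down()" suggestion above,
assuming SCHED_FIXEDPOINT_SHIFT == 10 and MIN_SHARES == 2 as on a
64-bit build.]

#include <stdio.h>

#define SCHED_FIXEDPOINT_SHIFT	10
#define MIN_SHARES		2L

/* Current behaviour: weights below 1024 truncate to zero. */
static long scale_load_down_old(long w)
{
	return w >> SCHED_FIXEDPOINT_SHIFT;
}

/* Sketch of the suggested fix: never let a non-zero weight reach zero. */
static long scale_load_down_clamped(long w)
{
	long down = w >> SCHED_FIXEDPOINT_SHIFT;

	return (w && down < MIN_SHARES) ? MIN_SHARES : down;
}

/* Simplified calc_group_shares(): tg state passed in as parameters. */
static long calc_group_shares(long weight, long load_avg,
			      long tg_load_avg, long contrib,
			      long tg_shares, long (*down)(long))
{
	long load = down(weight) > load_avg ? down(weight) : load_avg;
	long tg_weight = tg_load_avg - contrib + load;
	long shares = tg_shares * load;

	if (tg_weight)
		shares /= tg_weight;

	/* clamp_t(long, shares, MIN_SHARES, tg_shares) */
	if (shares < MIN_SHARES)
		shares = MIN_SHARES;
	if (shares > tg_shares)
		shares = tg_shares;
	return shares;
}

int main(void)
{
	/* cpu.shares = 2 from userspace is stored scaled: 2 << 10 = 2048. */
	long tg_shares = 2L << SCHED_FIXEDPOINT_SHIFT;
	/* One runnable se of weight 600 (< 1024), no load_avg yet. */
	long weight = 600, load_avg = 0, tg_load_avg = 0, contrib = 0;

	printf("old:     shares = %ld\n",
	       calc_group_shares(weight, load_avg, tg_load_avg, contrib,
				 tg_shares, scale_load_down_old));
	printf("clamped: shares = %ld\n",
	       calc_group_shares(weight, load_avg, tg_load_avg, contrib,
				 tg_shares, scale_load_down_clamped));
	return 0;
}

Built with plain gcc, the old path prints shares = 2 while the clamped
path prints shares = 2048: with the truncation in place the group se
is handed a weight three orders of magnitude below even the minimal
cpu.shares, which is why group C loses to D in the scenario quoted
above. Note the clamped run also hands this single-cfs_rq group its
full tg->shares — here the right answer, but exactly the kind of
rounding-up behaviour Ben wonders about in his closing paragraph.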