From: Vincent Guittot
Date: Wed, 4 Mar 2020 10:43:51 +0100
Subject: Re: [RFC PATCH] sched: fix the nonsense shares when load of cfs_rq is too small
To: 王贇
Cc: Peter Zijlstra, Ingo Molnar, Juri Lelli, Dietmar Eggemann, Steven Rostedt, Ben Segall, Mel Gorman, "open list:SCHEDULER"
References: <44fa1cee-08db-e4ab-e5ab-08d6fbd421d7@linux.alibaba.com> <20200303195245.GF2596@hirez.programming.kicks-ass.net> <241603dd-1149-58aa-85cf-43f3da2de43f@linux.alibaba.com>
X-Mailing-List: linux-kernel@vger.kernel.org

On Wed, 4 Mar 2020 at 09:47, Vincent Guittot wrote:
>
> On Wed, 4 Mar 2020 at 02:19, 王贇 wrote:
> >
> >
> > On 2020/3/4 at 3:52 AM, Peter Zijlstra wrote:
> > [snip]
> > >> The reason is that we have group B with shares of 2, which makes
> > >> group A's 'cfs_rq->load.weight' very small.
> > >>
> > >> And in calc_group_shares() we calculate shares as:
> > >>
> > >>   load = max(scale_load_down(cfs_rq->load.weight), cfs_rq->avg.load_avg);
> > >>   shares = (tg_shares * load) / tg_weight;
> > >>
> > >> Since the 'cfs_rq->load.weight' is too small, the load becomes 0
> > >> here; although 'tg_shares' is 102400, the shares of the se which
> > >> stands for group A on the root cfs_rq become 2.
> > >
> > > Argh, because A->cfs_rq.load.weight is B->se.load.weight which is
> > > B->shares/nr_cpus.
> >
> > Yeah, that's exactly why it happens: even though the shares of 2 scale up to 2048,
> > on a 96-CPU platform each CPU gets only 21 in the equal case.
> >
> > >
> > >> While the se of D on the root cfs_rq is far bigger than 2, so it
> > >> wins the battle.
> > >>
> > >> This patch adds a check on the zero load and makes it MIN_SHARES
> > >> to fix the nonsense shares; after it is applied, group C wins as
> > >> expected.
> > >>
> > >> Signed-off-by: Michael Wang
> > >> ---
> > >>  kernel/sched/fair.c | 2 ++
> > >>  1 file changed, 2 insertions(+)
> > >>
> > >> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> > >> index 84594f8aeaf8..53d705f75fa4 100644
> > >> --- a/kernel/sched/fair.c
> > >> +++ b/kernel/sched/fair.c
> > >> @@ -3182,6 +3182,8 @@ static long calc_group_shares(struct cfs_rq *cfs_rq)
> > >>      tg_shares = READ_ONCE(tg->shares);
> > >>
> > >>      load = max(scale_load_down(cfs_rq->load.weight), cfs_rq->avg.load_avg);
> > >> +    if (!load && cfs_rq->load.weight)
> > >> +            load = MIN_SHARES;
> > >>
> > >>      tg_weight = atomic_long_read(&tg->load_avg);
> > >
> > > Yeah, I suppose that'll do. Hurmph, wants a comment though.
> > >
> > > But that has me looking at other users of scale_load_down(), and doesn't
> > > at least update_tg_cfs_load() suffer the same problem?
> >
> > Good point :-) I'm not sure, but is scale_load_down() supposed to scale a small
> > value into 0? If not, maybe we should fix the helper to make sure it at
> > least returns some real load, like:
> >
> > # define scale_load_down(w) ((w + (1 << SCHED_FIXEDPOINT_SHIFT)) >> SCHED_FIXEDPOINT_SHIFT)
>
> you will add +1 of nice prio for each device

Of course, it's not prio but only weight, which is different.

> should we use instead
> # define scale_load_down(w) ((w >> SCHED_FIXEDPOINT_SHIFT) ? (w >> SCHED_FIXEDPOINT_SHIFT) : MIN_SHARES)
>
> Regards,
> Vincent
>
> >
> > Regards,
> > Michael Wang
> >
> > >
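
[Editor's note: below is a small user-space sketch of the truncation being discussed, not kernel code. SCHED_FIXEDPOINT_SHIFT and MIN_SHARES mirror their 64-bit kernel values, the per-CPU entity weight of ~21 and tg_shares of 102400 come from the thread above, while load_avg and tg_weight are illustrative assumptions. It feeds both the current scale_load_down() behaviour and the min-clamped variant suggested above into a simplified version of the calc_group_shares() formula.]

#include <stdio.h>

#define SCHED_FIXEDPOINT_SHIFT  10UL
#define MIN_SHARES              2UL

/* Current 64-bit helper: weights below 1024 truncate to 0. */
static unsigned long scale_down_current(unsigned long w)
{
	return w >> SCHED_FIXEDPOINT_SHIFT;
}

/* Min-clamped variant quoted above: never return 0 for a non-zero weight. */
static unsigned long scale_down_min(unsigned long w)
{
	unsigned long s = w >> SCHED_FIXEDPOINT_SHIFT;

	return s ? s : MIN_SHARES;
}

static unsigned long max_ul(unsigned long a, unsigned long b)
{
	return a > b ? a : b;
}

/*
 * Simplified shares computation: tg_shares * load / tg_weight, clamped to
 * MIN_SHARES.  The real calc_group_shares() also adjusts tg_weight with the
 * local cfs_rq contribution, which is omitted here.
 */
static unsigned long group_shares(unsigned long tg_shares, unsigned long load,
				  unsigned long tg_weight)
{
	unsigned long shares = tg_shares * load / tg_weight;

	return shares < MIN_SHARES ? MIN_SHARES : shares;
}

int main(void)
{
	unsigned long weight = 2048 / 96;  /* ~21: B's per-CPU entity weight from the thread */
	unsigned long load_avg = 0;        /* assume the PELT average has decayed away */
	unsigned long tg_shares = 102400;  /* group A's tg->shares, quoted in the thread */
	unsigned long tg_weight = 2048;    /* illustrative assumption */

	unsigned long cur = group_shares(tg_shares,
			max_ul(scale_down_current(weight), load_avg), tg_weight);
	unsigned long min = group_shares(tg_shares,
			max_ul(scale_down_min(weight), load_avg), tg_weight);

	printf("current scale_load_down(): shares = %lu\n", cur); /* 2 (MIN_SHARES) */
	printf("min-clamped variant      : shares = %lu\n", min); /* 100 */
	return 0;
}

With these inputs the current helper truncates the weight to 0 and group A's entity ends up clamped to MIN_SHARES (2), while the min-clamped variant keeps a non-zero load and yields a proportional share, which is the behaviour the thread is after without the constant offset that the "+1" version would add to every weight.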