Date: Tue, 3 Mar 2020 20:52:45 +0100
From: Peter Zijlstra
To: 王贇
Cc: Ingo Molnar, Juri Lelli, Vincent Guittot, Dietmar Eggemann,
    Steven Rostedt, Ben Segall, Mel Gorman, "open list:SCHEDULER"
Subject: Re: [RFC PATCH] sched: fix the nonsense shares when load of cfs_rq is too, small
Message-ID: <20200303195245.GF2596@hirez.programming.kicks-ass.net>
In-Reply-To: <44fa1cee-08db-e4ab-e5ab-08d6fbd421d7@linux.alibaba.com>

On Tue, Mar 03, 2020 at 10:17:03PM +0800, 王贇 wrote:
> During our testing we found a case where shares no longer work
> correctly; the cgroup topology is:
>
> /sys/fs/cgroup/cpu/A      (shares=102400)
> /sys/fs/cgroup/cpu/A/B    (shares=2)
> /sys/fs/cgroup/cpu/A/B/C  (shares=1024)
>
> /sys/fs/cgroup/cpu/D      (shares=1024)
> /sys/fs/cgroup/cpu/D/E    (shares=1024)
> /sys/fs/cgroup/cpu/D/E/F  (shares=1024)
>
> The same benchmark runs in groups C and F, no other tasks are running,
> and the benchmark is able to consume all of the CPUs.
>
> We expected group C to win more CPU resources, since it can enjoy all
> the shares of group A, but it is F that wins by far.
>
> The reason is that group B has shares of only 2, which makes group A's
> 'cfs_rq->load.weight' very small.
>
> In calc_group_shares() we calculate shares as:
>
>   load = max(scale_load_down(cfs_rq->load.weight), cfs_rq->avg.load_avg);
>   shares = (tg_shares * load) / tg_weight;
>
> Since 'cfs_rq->load.weight' is so small, 'load' becomes 0 here;
> although 'tg_shares' is 102400, the shares of the se that stands for
> group A on the root cfs_rq become 2.

Argh, because A->cfs_rq.load.weight is B->se.load.weight, which is
B->shares/nr_cpus.
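To make the scale_load_down() underflow concrete, here is a minimal
userspace sketch of the arithmetic. It assumes the 64-bit fixed-point
definitions (SCHED_FIXEDPOINT_SHIFT == 10, MIN_SHARES == 2); the CPU
count, the non-zero tg_weight, and the "B's weight spreads evenly
across CPUs" simplification are illustrative assumptions, not numbers
taken from the thread:

  /* shares-underflow sketch, illustrative only */
  #include <stdio.h>

  #define SCHED_FIXEDPOINT_SHIFT  10
  #define scale_load(w)           ((unsigned long)(w) << SCHED_FIXEDPOINT_SHIFT)
  #define scale_load_down(w)      ((unsigned long)(w) >> SCHED_FIXEDPOINT_SHIFT)
  #define MIN_SHARES              2UL

  int main(void)
  {
          unsigned long nr_cpus   = 64;                  /* assumed machine size */
          unsigned long b_shares  = scale_load(2);       /* B: cpu.shares = 2  -> 2048 */
          unsigned long tg_shares = scale_load(102400);  /* A: cpu.shares = 102400 */
          unsigned long tg_weight = 2048;                /* assumed non-zero tg->load_avg */

          /* A's cfs_rq weight is B's group se weight: roughly B->shares / nr_cpus */
          unsigned long cfs_rq_weight = b_shares / nr_cpus;      /* 2048 / 64 = 32 */

          /*
           * calc_group_shares(): scaling the small weight down gives 0
           * (cfs_rq->avg.load_avg is taken as 0 here, as in the report).
           */
          unsigned long load   = scale_load_down(cfs_rq_weight); /* 32 >> 10 = 0 */
          unsigned long shares = tg_shares * load / tg_weight;   /* 0 */

          if (shares < MIN_SHARES)
                  shares = MIN_SHARES;   /* clamp, as calc_group_shares() does */

          printf("weight of A's se on the root cfs_rq: %lu\n", shares);  /* prints 2 */
          return 0;
  }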
> While the se of D on the root cfs_rq is far bigger than 2, so it wins
> the battle.
>
> This patch adds a check for the zero load and raises it to MIN_SHARES
> to fix the nonsense shares; with it applied, group C wins as expected.
>
> Signed-off-by: Michael Wang
> ---
>  kernel/sched/fair.c | 2 ++
>  1 file changed, 2 insertions(+)
>
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index 84594f8aeaf8..53d705f75fa4 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -3182,6 +3182,8 @@ static long calc_group_shares(struct cfs_rq *cfs_rq)
>  	tg_shares = READ_ONCE(tg->shares);
>
>  	load = max(scale_load_down(cfs_rq->load.weight), cfs_rq->avg.load_avg);
> +	if (!load && cfs_rq->load.weight)
> +		load = MIN_SHARES;
>
>  	tg_weight = atomic_long_read(&tg->load_avg);

Yeah, I suppose that'll do. Hurmph, wants a comment though.

But that has me looking at other users of scale_load_down(), and doesn't
at least update_tg_cfs_load() suffer the same problem?
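For reference, here is a sketch of how the hunk might read with a
comment folded in, as Peter asks; the comment wording below is
illustrative only and is not something proposed in the thread:

  tg_shares = READ_ONCE(tg->shares);

  load = max(scale_load_down(cfs_rq->load.weight), cfs_rq->avg.load_avg);
  /*
   * A cfs_rq with a small but non-zero weight (e.g. a group entity
   * whose shares are only a few units, spread across CPUs) scales
   * down to a load of 0 here; don't let that zero wipe out
   * tg_shares entirely, fall back to MIN_SHARES instead.
   */
  if (!load && cfs_rq->load.weight)
          load = MIN_SHARES;

  tg_weight = atomic_long_read(&tg->load_avg);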