From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, Peter Zijlstra, Michael Wang, Vincent Guittot, Sasha Levin
Subject: [PATCH 4.14 014/199] sched: Avoid scale real weight down to zero
Date: Wed, 22 Apr 2020 11:55:40 +0200
Message-Id: <20200422095059.621982077@linuxfoundation.org>
In-Reply-To: <20200422095057.806111593@linuxfoundation.org>
References:
 <20200422095057.806111593@linuxfoundation.org>

From: Michael Wang

[ Upstream commit 26cf52229efc87e2effa9d788f9b33c40fb3358a ]

During our testing, we found a case in which shares were no longer working
correctly. The cgroup topology is:

  /sys/fs/cgroup/cpu/A		(shares=102400)
  /sys/fs/cgroup/cpu/A/B	(shares=2)
  /sys/fs/cgroup/cpu/A/B/C	(shares=1024)

  /sys/fs/cgroup/cpu/D		(shares=1024)
  /sys/fs/cgroup/cpu/D/E	(shares=1024)
  /sys/fs/cgroup/cpu/D/E/F	(shares=1024)

The same benchmark is running in groups C and F, no other tasks are
running, and the benchmark is capable of consuming all the CPUs.

We would expect group C to win more CPU resources, since it can enjoy
all the shares of group A, but it is F that wins much more.

The reason is that group B has shares set to 2. Since
A->cfs_rq.load.weight == B->se.load.weight == B->shares/nr_cpus,
A->cfs_rq.load.weight becomes very small.

In calc_group_shares() we calculate shares as:

  load = max(scale_load_down(cfs_rq->load.weight), cfs_rq->avg.load_avg);
  shares = (tg_shares * load) / tg_weight;

Since 'cfs_rq->load.weight' is so small, the load becomes 0 after the
scale-down, and although 'tg_shares' is 102400, the shares of the se
that stands for group A on the root cfs_rq become 2. The se of D on the
root cfs_rq is far bigger than 2, so it wins the battle.

Thus, when scale_load_down() scales the real weight down to 0, it is no
longer telling the real story: the caller gets the wrong information and
the calculation goes wrong.

This patch adds a check in scale_load_down() so that the real weight is
>= MIN_SHARES after scaling; with it applied, group C wins as expected.
Suggested-by: Peter Zijlstra
Signed-off-by: Michael Wang
Signed-off-by: Peter Zijlstra (Intel)
Reviewed-by: Vincent Guittot
Link: https://lkml.kernel.org/r/38e8e212-59a1-64b2-b247-b6d0b52d8dc1@linux.alibaba.com
Signed-off-by: Sasha Levin
---
 kernel/sched/sched.h | 8 +++++++-
 1 file changed, 7 insertions(+), 1 deletion(-)

diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 268f560ec9986..391d73a12ad72 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -89,7 +89,13 @@ static inline void cpu_load_update_active(struct rq *this_rq) { }
 #ifdef CONFIG_64BIT
 # define NICE_0_LOAD_SHIFT	(SCHED_FIXEDPOINT_SHIFT + SCHED_FIXEDPOINT_SHIFT)
 # define scale_load(w)		((w) << SCHED_FIXEDPOINT_SHIFT)
-# define scale_load_down(w)	((w) >> SCHED_FIXEDPOINT_SHIFT)
+# define scale_load_down(w) \
+({ \
+	unsigned long __w = (w); \
+	if (__w) \
+		__w = max(2UL, __w >> SCHED_FIXEDPOINT_SHIFT); \
+	__w; \
+})
 #else
 # define NICE_0_LOAD_SHIFT	(SCHED_FIXEDPOINT_SHIFT)
 # define scale_load(w)	(w)
-- 
2.20.1