From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, Peter Zijlstra,
 Michael Wang, Vincent Guittot, Sasha Levin
Subject: [PATCH 5.4 043/232] sched: Avoid scale real weight down to zero
Date: Thu, 16 Apr 2020 15:22:17 +0200
Message-Id: <20200416131321.208020899@linuxfoundation.org>
In-Reply-To: <20200416131316.640996080@linuxfoundation.org>
References: <20200416131316.640996080@linuxfoundation.org>

From: Michael Wang

[ Upstream commit 26cf52229efc87e2effa9d788f9b33c40fb3358a ]

During our testing we found a case where shares were no longer working
correctly. The cgroup topology is:

  /sys/fs/cgroup/cpu/A		(shares=102400)
  /sys/fs/cgroup/cpu/A/B	(shares=2)
  /sys/fs/cgroup/cpu/A/B/C	(shares=1024)

  /sys/fs/cgroup/cpu/D		(shares=1024)
  /sys/fs/cgroup/cpu/D/E	(shares=1024)
  /sys/fs/cgroup/cpu/D/E/F	(shares=1024)

The same benchmark runs in groups C and F, no other tasks are running,
and the benchmark is capable of consuming all of the CPUs.

We would expect group C to win more CPU resources, since it can enjoy
all the shares of group A, but it is F that wins by far.

The reason is group B, whose shares are 2: since
A->cfs_rq.load.weight == B->se.load.weight == B->shares/nr_cpus,
A->cfs_rq.load.weight becomes very small. In calc_group_shares() we
calculate shares as:

  load = max(scale_load_down(cfs_rq->load.weight), cfs_rq->avg.load_avg);
  shares = (tg_shares * load) / tg_weight;

Because 'cfs_rq->load.weight' is so small, the load becomes 0 after the
scale down, and although 'tg_shares' is 102400, the shares of the se
that stands for group A on the root cfs_rq become 2. The se of D on the
root cfs_rq is far bigger than 2, so it wins the battle.

Thus, when scale_load_down() scales a real weight down to 0, it no
longer tells the real story: the caller gets the wrong information and
the calculation goes wrong.
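To make the truncation concrete, here is the arithmetic for this
topology (an illustration, assuming SCHED_FIXEDPOINT_SHIFT == 10,
MIN_SHARES == 2, and, say, nr_cpus == 8):

  weight = scale_load(2) / 8 == 2048 / 8 == 256
  scale_load_down(256) == 256 >> 10 == 0
  shares == (102400 * 0) / tg_weight == 0   ->  clamped up to MIN_SHARES == 2

Any cfs_rq weight below 1024 collapses to a load of 0, so the resulting
shares hit the MIN_SHARES floor no matter how large 'tg_shares' is.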
This patch adds a check in scale_load_down() so that the real weight is
>= MIN_SHARES after the scale; with it applied, group C wins as
expected.

Suggested-by: Peter Zijlstra
Signed-off-by: Michael Wang
Signed-off-by: Peter Zijlstra (Intel)
Reviewed-by: Vincent Guittot
Link: https://lkml.kernel.org/r/38e8e212-59a1-64b2-b247-b6d0b52d8dc1@linux.alibaba.com
Signed-off-by: Sasha Levin
---
 kernel/sched/sched.h | 8 +++++++-
 1 file changed, 7 insertions(+), 1 deletion(-)

diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index e5e2605778c97..c7e7481968bfa 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -118,7 +118,13 @@ extern long calc_load_fold_active(struct rq *this_rq, long adjust);
 #ifdef CONFIG_64BIT
 # define NICE_0_LOAD_SHIFT	(SCHED_FIXEDPOINT_SHIFT + SCHED_FIXEDPOINT_SHIFT)
 # define scale_load(w)		((w) << SCHED_FIXEDPOINT_SHIFT)
-# define scale_load_down(w)	((w) >> SCHED_FIXEDPOINT_SHIFT)
+# define scale_load_down(w) \
+({ \
+	unsigned long __w = (w); \
+	if (__w) \
+		__w = max(2UL, __w >> SCHED_FIXEDPOINT_SHIFT); \
+	__w; \
+})
 #else
 # define NICE_0_LOAD_SHIFT	(SCHED_FIXEDPOINT_SHIFT)
 # define scale_load(w)		(w)
--
2.20.1
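To see the effect of the fixed macro outside the kernel tree, here is a
minimal userspace sketch (an illustration: the _old/_new names are
invented for the comparison, SCHED_FIXEDPOINT_SHIFT is assumed to be 10
as on 64-bit, and the kernel's max() is open-coded as a ternary). Like
the kernel macro, it relies on a GCC/clang statement expression:

  #include <stdio.h>

  #define SCHED_FIXEDPOINT_SHIFT	10

  /* Old behaviour: any weight below 1024 truncates to 0. */
  #define scale_load_down_old(w)	((w) >> SCHED_FIXEDPOINT_SHIFT)

  /* New behaviour, mirroring the patch: a non-zero weight is
     floored at 2 (MIN_SHARES) after the shift. */
  #define scale_load_down_new(w)					\
  ({									\
	unsigned long __w = (w);					\
	if (__w) {							\
		unsigned long __s = __w >> SCHED_FIXEDPOINT_SHIFT;	\
		__w = __s > 2UL ? __s : 2UL;				\
	}								\
	__w;								\
  })

  int main(void)
  {
	unsigned long w[] = { 0, 2, 1024, 102400 };

	for (unsigned int i = 0; i < sizeof(w) / sizeof(w[0]); i++)
		printf("w=%6lu  old=%4lu  new=%4lu\n", w[i],
		       scale_load_down_old(w[i]),
		       scale_load_down_new(w[i]));
	return 0;
  }

For w == 2 this prints old=0 new=2, and for w == 1024 it prints old=1
new=2: a small but real weight can no longer vanish, which is exactly
what calc_group_shares() needs.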