From: Zhang Qiao <zhangqiao22@huawei.com>
Subject: [RESEND PATCH v4] sched: Dec cfs_bandwidth_used in destroy_cfs_bandwidth()
Date: Fri, 10 Sep 2021 17:41:39 +0800
Message-ID: <20210910094139.184582-1-zhangqiao22@huawei.com>
X-Mailer: git-send-email 2.18.0
X-Mailing-List: linux-kernel@vger.kernel.org
cfs_bandwidth_used is a static key that gates the CFS bandwidth control
feature. When a CFS bandwidth group is added, the key must be
incremented, and it must be decremented again when the group is removed.
Currently, however, removing a CFS bandwidth group does not decrement
the key, so the switch stays on even when no CFS bandwidth group is
left in the system.

Fix the problem in two steps:

1. Rename cfs_bandwidth_usage_{dec,inc}() to
   cfs_bandwidth_usage_{dec,inc}_cpuslocked(); their callers must hold
   the hotplug lock.

2. Add cfs_bandwidth_usage_dec(), whose callers need not hold the
   hotplug lock, and call it when removing a CFS bandwidth group so
   that cfs_bandwidth_used is decremented; see the sketch below.
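For illustration, the leak can be modeled with a plain reference count.
The following is a self-contained userspace sketch, not kernel code:
key_inc(), key_dec() and key_enabled() are hypothetical stand-ins for
static_key_slow_inc(), static_key_slow_dec() and cfs_bandwidth_used(),
and an integer counter stands in for the jump label.

  /* toy_key.c - userspace model of the cfs_bandwidth_used leak
   * (illustrative sketch only; build with: cc -o toy_key toy_key.c)
   */
  #include <stdio.h>
  #include <stdbool.h>

  static int key_count;		/* models the static key's refcount */

  static bool key_enabled(void)	/* models cfs_bandwidth_used() */
  {
  	return key_count > 0;
  }

  static void key_inc(void)	/* models static_key_slow_inc() */
  {
  	key_count++;
  }

  static void key_dec(void)	/* models static_key_slow_dec() */
  {
  	key_count--;
  }

  int main(void)
  {
  	key_inc();	/* group A is given a quota: switch turns on */
  	key_inc();	/* group B is given a quota */

  	key_dec();	/* group A's quota is set back to RUNTIME_INF */

  	/*
  	 * Group B is destroyed, but before this patch nothing on the
  	 * destroy path dropped its reference, so the switch stays on
  	 * although no bandwidth group is left:
  	 */
  	printf("switch enabled: %d (should be 0)\n", key_enabled());
  	return 0;
  }

With this patch, destroy_cfs_bandwidth() drops the missing reference,
but only when cfs_b->quota != RUNTIME_INF, since a group whose quota
was never enabled took no reference in the first place. The destroy
path uses the plain cfs_bandwidth_usage_dec() variant because the
hotplug lock is not held there.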
Fixes: 56f570e512ee ("sched: use jump labels to reduce overhead when bandwidth control is inactive")
Signed-off-by: Zhang Qiao
Reviewed-by: Daniel Jordan
---
Changes since v3:
- Rebase on the latest code
- Add Reviewed-by tag from Daniel
- Link to v3: https://lkml.org/lkml/2021/7/15/2101

Changes since v2:
- Remove cfs_bandwidth_usage_inc()
- Make cfs_bandwidth_usage_dec() static

Changes since v1:
- Add the suffix _cpuslocked to cfs_bandwidth_usage_{dec,inc}.
- Use static_key_slow_{dec,inc}_cpuslocked() in the original
  cfs_bandwidth_usage_{dec,inc}().
---
 kernel/sched/core.c  |  4 ++--
 kernel/sched/fair.c  | 18 ++++++++++++++----
 kernel/sched/sched.h |  4 ++--
 3 files changed, 18 insertions(+), 8 deletions(-)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index c4462c454ab9..0e8c789e84a9 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -10221,7 +10221,7 @@ static int tg_set_cfs_bandwidth(struct task_group *tg, u64 period, u64 quota,
 	 * before making related changes, and on->off must occur afterwards
 	 */
 	if (runtime_enabled && !runtime_was_enabled)
-		cfs_bandwidth_usage_inc();
+		cfs_bandwidth_usage_inc_cpuslocked();
 	raw_spin_lock_irq(&cfs_b->lock);
 	cfs_b->period = ns_to_ktime(period);
 	cfs_b->quota = quota;
@@ -10249,7 +10249,7 @@ static int tg_set_cfs_bandwidth(struct task_group *tg, u64 period, u64 quota,
 		rq_unlock_irq(rq, &rf);
 	}
 	if (runtime_was_enabled && !runtime_enabled)
-		cfs_bandwidth_usage_dec();
+		cfs_bandwidth_usage_dec_cpuslocked();
 out_unlock:
 	mutex_unlock(&cfs_constraints_mutex);
 	cpus_read_unlock();
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index ff69f245b939..28f2e7600b53 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -4645,23 +4645,30 @@ static inline bool cfs_bandwidth_used(void)
 	return static_key_false(&__cfs_bandwidth_used);
 }
 
-void cfs_bandwidth_usage_inc(void)
+void cfs_bandwidth_usage_inc_cpuslocked(void)
 {
 	static_key_slow_inc_cpuslocked(&__cfs_bandwidth_used);
 }
 
-void cfs_bandwidth_usage_dec(void)
+void cfs_bandwidth_usage_dec_cpuslocked(void)
 {
 	static_key_slow_dec_cpuslocked(&__cfs_bandwidth_used);
 }
+
+static void cfs_bandwidth_usage_dec(void)
+{
+	static_key_slow_dec(&__cfs_bandwidth_used);
+}
 #else /* CONFIG_JUMP_LABEL */
 static bool cfs_bandwidth_used(void)
 {
 	return true;
 }
 
-void cfs_bandwidth_usage_inc(void) {}
-void cfs_bandwidth_usage_dec(void) {}
+void cfs_bandwidth_usage_inc_cpuslocked(void) {}
+void cfs_bandwidth_usage_dec_cpuslocked(void) {}
+
+static void cfs_bandwidth_usage_dec(void) {}
 #endif /* CONFIG_JUMP_LABEL */
 
 /*
@@ -5389,6 +5396,9 @@ static void destroy_cfs_bandwidth(struct cfs_bandwidth *cfs_b)
 	if (!cfs_b->throttled_cfs_rq.next)
 		return;
 
+	if (cfs_b->quota != RUNTIME_INF)
+		cfs_bandwidth_usage_dec();
+
 	hrtimer_cancel(&cfs_b->period_timer);
 	hrtimer_cancel(&cfs_b->slack_timer);
 }
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 3d3e5793e117..05fed8f5e15c 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -2702,8 +2702,8 @@ extern void init_cfs_rq(struct cfs_rq *cfs_rq);
 extern void init_rt_rq(struct rt_rq *rt_rq);
 extern void init_dl_rq(struct dl_rq *dl_rq);
 
-extern void cfs_bandwidth_usage_inc(void);
-extern void cfs_bandwidth_usage_dec(void);
+extern void cfs_bandwidth_usage_inc_cpuslocked(void);
+extern void cfs_bandwidth_usage_dec_cpuslocked(void);
 
 #ifdef CONFIG_NO_HZ_COMMON
 #define NOHZ_BALANCE_KICK_BIT	0
-- 
2.18.0