From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, "Peter Zijlstra (Intel)",
    Christian Borntraeger, Johannes Weiner, Li Zefan, Linus Torvalds,
    Oleg Nesterov, "Paul E. McKenney", Tejun Heo, Thomas Gleixner,
    Ingo Molnar, Sasha Levin
Subject: [PATCH 4.4 032/114] sched/cgroup: Fix cgroup entity load tracking tear-down
Date: Thu, 8 Nov 2018 13:50:47 -0800
Message-Id: <20181108215101.968460491@linuxfoundation.org>
X-Mailer: git-send-email 2.19.1
In-Reply-To: <20181108215059.051093652@linuxfoundation.org>
References: <20181108215059.051093652@linuxfoundation.org>
User-Agent: quilt/0.65
X-stable: review
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-Mailing-List: linux-kernel@vger.kernel.org

4.4-stable review patch. If anyone has any objections, please let me know.

------------------

[ Upstream commit 6fe1f348b3dd1f700f9630562b7d38afd6949568 ]

When a cgroup's CPU runqueue is destroyed, it should remove its
remaining load accounting from its parent cgroup.

The current site for doing so is unsuited because it is far too late and
unordered against other cgroup removal (->css_free() will be, but we're
also in an RCU callback).

Put it in the ->css_offline() callback, which is the start of cgroup
destruction, right after the group has been made unavailable to
userspace. The ->css_offline() callbacks are called in hierarchical
order after the following v4.4 commit:

  aa226ff4a1ce ("cgroup: make sure a parent css isn't offlined before its children")

Signed-off-by: Peter Zijlstra (Intel)
Cc: Christian Borntraeger
Cc: Johannes Weiner
Cc: Li Zefan
Cc: Linus Torvalds
Cc: Oleg Nesterov
Cc: Paul E. McKenney
Cc: Peter Zijlstra
Cc: Tejun Heo
Cc: Thomas Gleixner
Link: http://lkml.kernel.org/r/20160121212416.GL6357@twins.programming.kicks-ass.net
Signed-off-by: Ingo Molnar
Signed-off-by: Sasha Levin
---
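For orientation, this is roughly the path through which the relocated
teardown is reached. The two callbacks below are an abridged sketch of
the existing v4.4 code in kernel/sched/core.c (not part of this patch);
the comments describing the new behaviour are added here for review
purposes only.

/* Start of cgroup destruction: the group is already gone from userspace. */
static void cpu_cgroup_css_offline(struct cgroup_subsys_state *css)
{
	struct task_group *tg = css_tg(css);

	/*
	 * With this patch, sched_offline_group() calls
	 * unregister_fair_sched_group(tg) a single time; that function now
	 * walks every possible CPU itself, removes each group entity's
	 * remaining load from the parent's cfs_rq via
	 * remove_entity_load_avg(), and unlinks each cfs_rq from the leaf
	 * list under the runqueue lock.
	 */
	sched_offline_group(tg);
}

/* End of cgroup destruction: the actual freeing is deferred. */
static void cpu_cgroup_css_free(struct cgroup_subsys_state *css)
{
	struct task_group *tg = css_tg(css);

	/*
	 * sched_destroy_group() defers the freeing to an RCU callback; by
	 * that point free_fair_sched_group() only has to kfree() the
	 * per-CPU cfs_rq and sched_entity structures, since the load
	 * accounting was already removed in ->css_offline() above.
	 */
	sched_destroy_group(tg);
}
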
 kernel/sched/core.c  |  4 +---
 kernel/sched/fair.c  | 37 +++++++++++++++++++++----------------
 kernel/sched/sched.h |  2 +-
 3 files changed, 23 insertions(+), 20 deletions(-)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 65ed3501c2ca..4743e1f2a3d1 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -7817,11 +7817,9 @@ void sched_destroy_group(struct task_group *tg)
 void sched_offline_group(struct task_group *tg)
 {
 	unsigned long flags;
-	int i;
 
 	/* end participation in shares distribution */
-	for_each_possible_cpu(i)
-		unregister_fair_sched_group(tg, i);
+	unregister_fair_sched_group(tg);
 
 	spin_lock_irqsave(&task_group_lock, flags);
 	list_del_rcu(&tg->list);
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 3b136fb4422c..a0c5bb93a3ab 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -8154,11 +8154,8 @@ void free_fair_sched_group(struct task_group *tg)
 	for_each_possible_cpu(i) {
 		if (tg->cfs_rq)
 			kfree(tg->cfs_rq[i]);
-		if (tg->se) {
-			if (tg->se[i])
-				remove_entity_load_avg(tg->se[i]);
+		if (tg->se)
 			kfree(tg->se[i]);
-		}
 	}
 
 	kfree(tg->cfs_rq);
@@ -8206,21 +8203,29 @@ err:
 	return 0;
 }
 
-void unregister_fair_sched_group(struct task_group *tg, int cpu)
+void unregister_fair_sched_group(struct task_group *tg)
 {
-	struct rq *rq = cpu_rq(cpu);
 	unsigned long flags;
+	struct rq *rq;
+	int cpu;
 
-	/*
-	 * Only empty task groups can be destroyed; so we can speculatively
-	 * check on_list without danger of it being re-added.
-	 */
-	if (!tg->cfs_rq[cpu]->on_list)
-		return;
+	for_each_possible_cpu(cpu) {
+		if (tg->se[cpu])
+			remove_entity_load_avg(tg->se[cpu]);
 
-	raw_spin_lock_irqsave(&rq->lock, flags);
-	list_del_leaf_cfs_rq(tg->cfs_rq[cpu]);
-	raw_spin_unlock_irqrestore(&rq->lock, flags);
+		/*
+		 * Only empty task groups can be destroyed; so we can speculatively
+		 * check on_list without danger of it being re-added.
+		 */
+		if (!tg->cfs_rq[cpu]->on_list)
+			continue;
+
+		rq = cpu_rq(cpu);
+
+		raw_spin_lock_irqsave(&rq->lock, flags);
+		list_del_leaf_cfs_rq(tg->cfs_rq[cpu]);
+		raw_spin_unlock_irqrestore(&rq->lock, flags);
+	}
 }
 
 void init_tg_cfs_entry(struct task_group *tg, struct cfs_rq *cfs_rq,
@@ -8302,7 +8307,7 @@ int alloc_fair_sched_group(struct task_group *tg, struct task_group *parent)
 	return 1;
 }
 
-void unregister_fair_sched_group(struct task_group *tg, int cpu) { }
+void unregister_fair_sched_group(struct task_group *tg) { }
 
 #endif /* CONFIG_FAIR_GROUP_SCHED */
 
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 0c9ebd82a684..af8d8c3eb8ab 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -308,7 +308,7 @@ extern int tg_nop(struct task_group *tg, void *data);
 
 extern void free_fair_sched_group(struct task_group *tg);
 extern int alloc_fair_sched_group(struct task_group *tg, struct task_group *parent);
-extern void unregister_fair_sched_group(struct task_group *tg, int cpu);
+extern void unregister_fair_sched_group(struct task_group *tg);
 extern void init_tg_cfs_entry(struct task_group *tg, struct cfs_rq *cfs_rq,
 			struct sched_entity *se, int cpu,
 			struct sched_entity *parent);
-- 
2.17.1