Date: Wed, 11 Nov 2020 08:23:23 -0000
From: "tip-bot2 for Peter Zijlstra"
Sender: tip-bot2@linutronix.de
Reply-to: linux-kernel@vger.kernel.org
To: linux-tip-commits@vger.kernel.org
Subject: [tip: sched/core] sched: Fix balance_callback()
Cc: Scott Wood, "Peter Zijlstra (Intel)", Valentin Schneider,
    Daniel Bristot de Oliveira, x86@kernel.org, linux-kernel@vger.kernel.org
In-Reply-To: <20201023102346.203901269@infradead.org>
References: <20201023102346.203901269@infradead.org>
Message-ID: <160508300397.11244.13967684821070428528.tip-bot2@tip-bot2>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 7bit
Precedence: bulk
X-Mailing-List: linux-kernel@vger.kernel.org

The following commit has been merged into the sched/core branch of tip:

Commit-ID:     565790d28b1e33ee2f77bad5348b99f6dfc366fd
Gitweb:        https://git.kernel.org/tip/565790d28b1e33ee2f77bad5348b99f6dfc366fd
Author:        Peter Zijlstra
AuthorDate:    Mon, 11 May 2020 14:13:00 +02:00
Committer:     Peter Zijlstra
CommitterDate: Tue, 10 Nov 2020 18:38:57 +01:00

sched: Fix balance_callback()

The intent of balance_callback() has always been to delay executing
balancing operations until the end of the current rq->lock section.
This is because balance operations must often drop rq->lock, and that
isn't safe in general.

However, as noted by Scott, there were a few holes in that scheme;
balance_callback() was called after rq->lock was dropped, which means
another CPU can interleave and touch the callback list.
Rework code to call the balance callbacks before dropping rq->lock
where possible, and otherwise splice the balance list onto a local
stack. This guarantees that the balance list must be empty when we
take rq->lock. IOW, we'll only ever run our own balance callbacks.

Reported-by: Scott Wood
Signed-off-by: Peter Zijlstra (Intel)
Reviewed-by: Valentin Schneider
Reviewed-by: Daniel Bristot de Oliveira
Link: https://lkml.kernel.org/r/20201023102346.203901269@infradead.org
---
 kernel/sched/core.c  | 119 ++++++++++++++++++++++++++----------------
 kernel/sched/sched.h |   3 +-
 2 files changed, 78 insertions(+), 44 deletions(-)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 5e24104..0196a3f 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -3485,6 +3485,69 @@ static inline void finish_task(struct task_struct *prev)
 #endif
 }
 
+#ifdef CONFIG_SMP
+
+static void do_balance_callbacks(struct rq *rq, struct callback_head *head)
+{
+	void (*func)(struct rq *rq);
+	struct callback_head *next;
+
+	lockdep_assert_held(&rq->lock);
+
+	while (head) {
+		func = (void (*)(struct rq *))head->func;
+		next = head->next;
+		head->next = NULL;
+		head = next;
+
+		func(rq);
+	}
+}
+
+static inline struct callback_head *splice_balance_callbacks(struct rq *rq)
+{
+	struct callback_head *head = rq->balance_callback;
+
+	lockdep_assert_held(&rq->lock);
+	if (head)
+		rq->balance_callback = NULL;
+
+	return head;
+}
+
+static void __balance_callbacks(struct rq *rq)
+{
+	do_balance_callbacks(rq, splice_balance_callbacks(rq));
+}
+
+static inline void balance_callbacks(struct rq *rq, struct callback_head *head)
+{
+	unsigned long flags;
+
+	if (unlikely(head)) {
+		raw_spin_lock_irqsave(&rq->lock, flags);
+		do_balance_callbacks(rq, head);
+		raw_spin_unlock_irqrestore(&rq->lock, flags);
+	}
+}
+
+#else
+
+static inline void __balance_callbacks(struct rq *rq)
+{
+}
+
+static inline struct callback_head *splice_balance_callbacks(struct rq *rq)
+{
+	return NULL;
+}
+
+static inline void
+balance_callbacks(struct rq *rq, struct callback_head *head)
+{
+}
+
+#endif
+
 static inline void
 prepare_lock_switch(struct rq *rq, struct task_struct *next, struct rq_flags *rf)
 {
@@ -3510,6 +3573,7 @@ static inline void finish_lock_switch(struct rq *rq)
 	 * prev into current:
 	 */
 	spin_acquire(&rq->lock.dep_map, 0, 0, _THIS_IP_);
+	__balance_callbacks(rq);
 	raw_spin_unlock_irq(&rq->lock);
 }
 
@@ -3651,43 +3715,6 @@ static struct rq *finish_task_switch(struct task_struct *prev)
 	return rq;
 }
 
-#ifdef CONFIG_SMP
-
-/* rq->lock is NOT held, but preemption is disabled */
-static void __balance_callback(struct rq *rq)
-{
-	struct callback_head *head, *next;
-	void (*func)(struct rq *rq);
-	unsigned long flags;
-
-	raw_spin_lock_irqsave(&rq->lock, flags);
-	head = rq->balance_callback;
-	rq->balance_callback = NULL;
-	while (head) {
-		func = (void (*)(struct rq *))head->func;
-		next = head->next;
-		head->next = NULL;
-		head = next;
-
-		func(rq);
-	}
-	raw_spin_unlock_irqrestore(&rq->lock, flags);
-}
-
-static inline void balance_callback(struct rq *rq)
-{
-	if (unlikely(rq->balance_callback))
-		__balance_callback(rq);
-}
-
-#else
-
-static inline void balance_callback(struct rq *rq)
-{
-}
-
-#endif
-
 /**
  * schedule_tail - first thing a freshly forked thread must call.
  * @prev: the thread we just switched away from.
@@ -3707,7 +3734,6 @@ asmlinkage __visible void schedule_tail(struct task_struct *prev)
 	 */
 
 	rq = finish_task_switch(prev);
-	balance_callback(rq);
 	preempt_enable();
 
 	if (current->set_child_tid)
@@ -4523,10 +4549,11 @@ static void __sched notrace __schedule(bool preempt)
 		rq = context_switch(rq, prev, next, &rf);
 	} else {
 		rq->clock_update_flags &= ~(RQCF_ACT_SKIP|RQCF_REQ_SKIP);
-		rq_unlock_irq(rq, &rf);
-	}
-
-	balance_callback(rq);
+
+		rq_unpin_lock(rq, &rf);
+		__balance_callbacks(rq);
+		raw_spin_unlock_irq(&rq->lock);
+	}
 }
 
 void __noreturn do_task_dead(void)
@@ -4937,9 +4964,11 @@ void rt_mutex_setprio(struct task_struct *p, struct task_struct *pi_task)
 out_unlock:
 	/* Avoid rq from going away on us: */
 	preempt_disable();
-	__task_rq_unlock(rq, &rf);
 
-	balance_callback(rq);
+	rq_unpin_lock(rq, &rf);
+	__balance_callbacks(rq);
+	raw_spin_unlock(&rq->lock);
+
 	preempt_enable();
 }
 #else
@@ -5213,6 +5242,7 @@ static int __sched_setscheduler(struct task_struct *p,
 	int retval, oldprio, oldpolicy = -1, queued, running;
 	int new_effective_prio, policy = attr->sched_policy;
 	const struct sched_class *prev_class;
+	struct callback_head *head;
 	struct rq_flags rf;
 	int reset_on_fork;
 	int queue_flags = DEQUEUE_SAVE | DEQUEUE_MOVE | DEQUEUE_NOCLOCK;
@@ -5451,6 +5481,7 @@ change:
 
 	/* Avoid rq from going away on us: */
 	preempt_disable();
+	head = splice_balance_callbacks(rq);
 	task_rq_unlock(rq, p, &rf);
 
 	if (pi) {
@@ -5459,7 +5490,7 @@ change:
 	}
 
 	/* Run balance callbacks after we've adjusted the PI chain: */
-	balance_callback(rq);
+	balance_callbacks(rq, head);
 	preempt_enable();
 
 	return 0;
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index df80bfc..738a00b 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -1221,6 +1221,9 @@ static inline void rq_pin_lock(struct rq *rq, struct rq_flags *rf)
 	rq->clock_update_flags &= (RQCF_REQ_SKIP|RQCF_ACT_SKIP);
 	rf->clock_update_flags = 0;
 #endif
+#ifdef CONFIG_SMP
+	SCHED_WARN_ON(rq->balance_callback);
+#endif
 }
 
 static inline void rq_unpin_lock(struct rq *rq, struct rq_flags *rf)