From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, Pavan Kondeti,
    "Steven Rostedt (VMware)", "Peter Zijlstra (Intel)", Andrew Morton,
    Linus Torvalds, Mike Galbraith, Thomas Gleixner, Ingo Molnar
Subject: [PATCH 4.14 014/195] sched/rt: Up the root domain ref count when passing it around via IPIs
Date: Thu, 15 Feb 2018 16:15:05 +0100
Message-Id: <20180215151706.446001854@linuxfoundation.org>
In-Reply-To: <20180215151705.738773577@linuxfoundation.org>
References: <20180215151705.738773577@linuxfoundation.org>
4.14-stable review patch.  If anyone has any objections, please let me know.

------------------

From: Steven Rostedt (VMware)

commit 364f56653708ba8bcdefd4f0da2a42904baa8eeb upstream.

When issuing an IPI RT push, where an IPI is sent to each CPU that has
more than one RT task scheduled on it, the push logic references the
root domain's rto_mask, which contains all the CPUs within the root
domain that have more than one RT task in the runnable state.

The problem is that after the IPIs are initiated, the rq->lock is
released. This means that the root domain associated with the run queue
could be freed while the IPIs are still making their rounds.

Add sched_get_rd() and sched_put_rd(), which increment and decrement
the root domain's ref count respectively. This way, when initiating the
IPIs, the scheduler ups the root domain's ref count before releasing
the rq->lock, ensuring that the root domain does not go away until the
IPI round is complete.

Reported-by: Pavan Kondeti
Signed-off-by: Steven Rostedt (VMware)
Signed-off-by: Peter Zijlstra (Intel)
Cc: Andrew Morton
Cc: Linus Torvalds
Cc: Mike Galbraith
Cc: Peter Zijlstra
Cc: Thomas Gleixner
Fixes: 4bdced5c9a292 ("sched/rt: Simplify the IPI based RT balancing logic")
Link: http://lkml.kernel.org/r/CAEU1=PkiHO35Dzna8EQqNSKW1fr1y1zRQ5y66X117MG06sQtNA@mail.gmail.com
Signed-off-by: Ingo Molnar
Signed-off-by: Greg Kroah-Hartman

---
 kernel/sched/rt.c       |    9 +++++++--
 kernel/sched/sched.h    |    2 ++
 kernel/sched/topology.c |   13 +++++++++++++
 3 files changed, 22 insertions(+), 2 deletions(-)

--- a/kernel/sched/rt.c
+++ b/kernel/sched/rt.c
@@ -1990,8 +1990,11 @@ static void tell_cpu_to_push(struct rq *
 
 	rto_start_unlock(&rq->rd->rto_loop_start);
 
-	if (cpu >= 0)
+	if (cpu >= 0) {
+		/* Make sure the rd does not get freed while pushing */
+		sched_get_rd(rq->rd);
 		irq_work_queue_on(&rq->rd->rto_push_work, cpu);
+	}
 }
 
 /* Called from hardirq context */
@@ -2021,8 +2024,10 @@ void rto_push_irq_work_func(struct irq_w
 
 	raw_spin_unlock(&rd->rto_lock);
 
-	if (cpu < 0)
+	if (cpu < 0) {
+		sched_put_rd(rd);
 		return;
+	}
 
 	/* Try the next RT overloaded CPU */
 	irq_work_queue_on(&rd->rto_push_work, cpu);
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -661,6 +661,8 @@ extern struct mutex sched_domains_mutex;
 extern void init_defrootdomain(void);
 extern int sched_init_domains(const struct cpumask *cpu_map);
 extern void rq_attach_root(struct rq *rq, struct root_domain *rd);
+extern void sched_get_rd(struct root_domain *rd);
+extern void sched_put_rd(struct root_domain *rd);
 
 #ifdef HAVE_RT_PUSH_IPI
 extern void rto_push_irq_work_func(struct irq_work *work);
--- a/kernel/sched/topology.c
+++ b/kernel/sched/topology.c
@@ -258,6 +258,19 @@ void rq_attach_root(struct rq *rq, struc
 		call_rcu_sched(&old_rd->rcu, free_rootdomain);
 }
 
+void sched_get_rd(struct root_domain *rd)
+{
+	atomic_inc(&rd->refcount);
+}
+
+void sched_put_rd(struct root_domain *rd)
+{
+	if (!atomic_dec_and_test(&rd->refcount))
+		return;
+
+	call_rcu_sched(&rd->rcu, free_rootdomain);
+}
+
 static int init_rootdomain(struct root_domain *rd)
 {
 	if (!zalloc_cpumask_var(&rd->span, GFP_KERNEL))
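
As a standalone illustration (not part of the patch), below is a minimal
user-space sketch of the get/put pattern the change introduces: take a
reference on the root domain while the lock is still held, hand the domain
to the asynchronous work, and let whichever side drops the last reference
free it. The struct and function names mirror the patch, but the internals
are assumptions made for the sketch: C11 <stdatomic.h> stands in for the
kernel's atomic_t helpers, and a plain free() stands in for the
RCU-deferred call_rcu_sched()/free_rootdomain().

/*
 * Illustrative user-space sketch of the sched_get_rd()/sched_put_rd()
 * refcount pattern.  Assumptions: C11 atomics replace the kernel's
 * atomic_t, and free() replaces call_rcu_sched()/free_rootdomain().
 */
#include <stdatomic.h>
#include <stdlib.h>

struct root_domain {
	atomic_int refcount;		/* mirrors rd->refcount */
};

/* Take a reference before handing rd to asynchronous work. */
static void sched_get_rd(struct root_domain *rd)
{
	atomic_fetch_add(&rd->refcount, 1);
}

/* Drop a reference; the holder that drops it to zero frees rd. */
static void sched_put_rd(struct root_domain *rd)
{
	/* Equivalent of atomic_dec_and_test(): old value 1 means now 0. */
	if (atomic_fetch_sub(&rd->refcount, 1) != 1)
		return;

	free(rd);			/* kernel: call_rcu_sched() */
}

int main(void)
{
	struct root_domain *rd = calloc(1, sizeof(*rd));

	atomic_init(&rd->refcount, 1);	/* reference held by the runqueue */

	/*
	 * tell_cpu_to_push(): pin the domain while rq->lock is still
	 * held, so releasing the lock cannot let rd be freed under the
	 * in-flight IPI work.
	 */
	sched_get_rd(rd);

	/* rto_push_irq_work_func(): the last CPU in the round drops it. */
	sched_put_rd(rd);

	/* The runqueue eventually drops its own reference as well. */
	sched_put_rd(rd);
	return 0;
}

The ordering is the heart of the fix: the get happens before rq->lock is
released, closing the window in which the domain's last reference could
vanish between queueing the IPI work and that work actually running.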