From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, Pavan Kondeti, "Steven Rostedt (VMware)", "Peter Zijlstra (Intel)", Andrew Morton, Linus Torvalds, Mike Galbraith, Thomas Gleixner, Ingo Molnar
Subject: [PATCH 4.15 010/202] sched/rt: Use container_of() to get root domain in rto_push_irq_work_func()
Date: Thu, 15 Feb 2018 16:15:10 +0100
Message-Id: <20180215151713.352610375@linuxfoundation.org>
X-Mailer: git-send-email 2.16.1
In-Reply-To: <20180215151712.768794354@linuxfoundation.org>
References: <20180215151712.768794354@linuxfoundation.org>
User-Agent: quilt/0.65
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
X-Mailing-List: linux-kernel@vger.kernel.org

4.15-stable review patch.  If anyone has any objections, please let me know.
------------------

From: Steven Rostedt (VMware)

commit ad0f1d9d65938aec72a698116cd73a980916895e upstream.

When the rto_push_irq_work_func() is called, it looks at the RT
overloaded bitmask in the root domain via the runqueue (rq->rd). The
problem is that during CPU up and down, nothing here stops rq->rd from
changing between taking the rq->rd->rto_lock and releasing it. That
means the lock that is released is not the same lock that was taken.

Instead of using this_rq()->rd to get the root domain, as the irq work
is part of the root domain, we can simply get the root domain from the
irq work that is passed to the routine:

 container_of(work, struct root_domain, rto_push_work)

This keeps the root domain consistent.

Reported-by: Pavan Kondeti
Signed-off-by: Steven Rostedt (VMware)
Signed-off-by: Peter Zijlstra (Intel)
Cc: Andrew Morton
Cc: Linus Torvalds
Cc: Mike Galbraith
Cc: Peter Zijlstra
Cc: Thomas Gleixner
Fixes: 4bdced5c9a292 ("sched/rt: Simplify the IPI based RT balancing logic")
Link: http://lkml.kernel.org/r/CAEU1=PkiHO35Dzna8EQqNSKW1fr1y1zRQ5y66X117MG06sQtNA@mail.gmail.com
Signed-off-by: Ingo Molnar
Signed-off-by: Greg Kroah-Hartman

---
 kernel/sched/rt.c |   15 ++++++++-------
 1 file changed, 8 insertions(+), 7 deletions(-)

--- a/kernel/sched/rt.c
+++ b/kernel/sched/rt.c
@@ -1907,9 +1907,8 @@ static void push_rt_tasks(struct rq *rq)
  * the rt_loop_next will cause the iterator to perform another scan.
  *
  */
-static int rto_next_cpu(struct rq *rq)
+static int rto_next_cpu(struct root_domain *rd)
 {
-	struct root_domain *rd = rq->rd;
 	int next;
 	int cpu;
 
@@ -1985,7 +1984,7 @@ static void tell_cpu_to_push(struct rq *
 	 * Otherwise it is finishing up and an ipi needs to be sent.
 	 */
 	if (rq->rd->rto_cpu < 0)
-		cpu = rto_next_cpu(rq);
+		cpu = rto_next_cpu(rq->rd);
 
 	raw_spin_unlock(&rq->rd->rto_lock);
 
@@ -1998,6 +1997,8 @@ static void tell_cpu_to_push(struct rq *
 /* Called from hardirq context */
 void rto_push_irq_work_func(struct irq_work *work)
 {
+	struct root_domain *rd =
+		container_of(work, struct root_domain, rto_push_work);
 	struct rq *rq;
 	int cpu;
 
@@ -2013,18 +2014,18 @@ void rto_push_irq_work_func(struct irq_w
 		raw_spin_unlock(&rq->lock);
 	}
 
-	raw_spin_lock(&rq->rd->rto_lock);
+	raw_spin_lock(&rd->rto_lock);
 
 	/* Pass the IPI to the next rt overloaded queue */
-	cpu = rto_next_cpu(rq);
+	cpu = rto_next_cpu(rd);
 
-	raw_spin_unlock(&rq->rd->rto_lock);
+	raw_spin_unlock(&rd->rto_lock);
 
 	if (cpu < 0)
 		return;
 
 	/* Try the next RT overloaded CPU */
-	irq_work_queue_on(&rq->rd->rto_push_work, cpu);
+	irq_work_queue_on(&rd->rto_push_work, cpu);
 }
 #endif /* HAVE_RT_PUSH_IPI */