From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, Pavan Kondeti,
    "Steven Rostedt (VMware)", "Peter Zijlstra (Intel)", Andrew Morton,
    Linus Torvalds, Mike Galbraith, Thomas Gleixner, Ingo Molnar
Subject: [PATCH 4.14 013/195] sched/rt: Use container_of() to get root domain in rto_push_irq_work_func()
Date: Thu, 15 Feb 2018 16:15:04 +0100
Message-Id: <20180215151706.405191812@linuxfoundation.org>
In-Reply-To: <20180215151705.738773577@linuxfoundation.org>
References: <20180215151705.738773577@linuxfoundation.org>
User-Agent: quilt/0.65
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
X-Mailing-List: linux-kernel@vger.kernel.org
4.14-stable review patch.  If anyone has any objections, please let me know.

------------------

From: Steven Rostedt (VMware)

commit ad0f1d9d65938aec72a698116cd73a980916895e upstream.

When rto_push_irq_work_func() is called, it looks at the RT overloaded
bitmask in the root domain via the runqueue (rq->rd). The problem is
that during CPU up and down, nothing here stops rq->rd from changing
between taking the rq->rd->rto_lock and releasing it. That means the
lock that is released is not the same lock that was taken.

Instead of using this_rq()->rd to get the root domain, as the irq work
is part of the root domain, we can simply get the root domain from the
irq work that is passed to the routine:

	container_of(work, struct root_domain, rto_push_work)

This keeps the root domain consistent.

Reported-by: Pavan Kondeti
Signed-off-by: Steven Rostedt (VMware)
Signed-off-by: Peter Zijlstra (Intel)
Cc: Andrew Morton
Cc: Linus Torvalds
Cc: Mike Galbraith
Cc: Peter Zijlstra
Cc: Thomas Gleixner
Fixes: 4bdced5c9a292 ("sched/rt: Simplify the IPI based RT balancing logic")
Link: http://lkml.kernel.org/r/CAEU1=PkiHO35Dzna8EQqNSKW1fr1y1zRQ5y66X117MG06sQtNA@mail.gmail.com
Signed-off-by: Ingo Molnar
Signed-off-by: Greg Kroah-Hartman

---
 kernel/sched/rt.c |   15 ++++++++-------
 1 file changed, 8 insertions(+), 7 deletions(-)

--- a/kernel/sched/rt.c
+++ b/kernel/sched/rt.c
@@ -1907,9 +1907,8 @@ static void push_rt_tasks(struct rq *rq)
  * the rt_loop_next will cause the iterator to perform another scan.
  *
  */
-static int rto_next_cpu(struct rq *rq)
+static int rto_next_cpu(struct root_domain *rd)
 {
-	struct root_domain *rd = rq->rd;
 	int next;
 	int cpu;
 
@@ -1985,7 +1984,7 @@ static void tell_cpu_to_push(struct rq *
 	 * Otherwise it is finishing up and an ipi needs to be sent.
 	 */
 	if (rq->rd->rto_cpu < 0)
-		cpu = rto_next_cpu(rq);
+		cpu = rto_next_cpu(rq->rd);
 
 	raw_spin_unlock(&rq->rd->rto_lock);
 
@@ -1998,6 +1997,8 @@ static void tell_cpu_to_push(struct rq *
 /* Called from hardirq context */
 void rto_push_irq_work_func(struct irq_work *work)
 {
+	struct root_domain *rd =
+		container_of(work, struct root_domain, rto_push_work);
 	struct rq *rq;
 	int cpu;
 
@@ -2013,18 +2014,18 @@ void rto_push_irq_work_func(struct irq_w
 		raw_spin_unlock(&rq->lock);
 	}
 
-	raw_spin_lock(&rq->rd->rto_lock);
+	raw_spin_lock(&rd->rto_lock);
 
 	/* Pass the IPI to the next rt overloaded queue */
-	cpu = rto_next_cpu(rq);
+	cpu = rto_next_cpu(rd);
 
-	raw_spin_unlock(&rq->rd->rto_lock);
+	raw_spin_unlock(&rd->rto_lock);
 
 	if (cpu < 0)
 		return;
 
 	/* Try the next RT overloaded CPU */
-	irq_work_queue_on(&rq->rd->rto_push_work, cpu);
+	irq_work_queue_on(&rd->rto_push_work, cpu);
 }
 
 #endif /* HAVE_RT_PUSH_IPI */
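
As an aside, for readers unfamiliar with the pattern the fix relies on:
container_of() recovers a pointer to an enclosing structure from a pointer
to one of its embedded members by subtracting the member's offset. Below is
a minimal user-space sketch of that pattern; the names my_domain, push_work
and handler are hypothetical stand-ins, not the kernel's, and the macro is
a simplified version of the kernel's container_of().

#include <stddef.h>
#include <stdio.h>

/* Simplified version of the kernel macro: subtract the member's
 * offset within the enclosing type from the member's address. */
#define container_of(ptr, type, member) \
	((type *)((char *)(ptr) - offsetof(type, member)))

struct irq_work { int pending; };	/* stand-in for the kernel type */

struct my_domain {			/* hypothetical enclosing struct */
	int id;
	struct irq_work push_work;	/* embedded member, like rto_push_work */
};

/* The handler receives only a pointer to the embedded member, just as
 * rto_push_irq_work_func() receives only the struct irq_work pointer. */
static void handler(struct irq_work *work)
{
	struct my_domain *d = container_of(work, struct my_domain, push_work);
	printf("domain id = %d\n", d->id);	/* prints 42 */
}

int main(void)
{
	struct my_domain dom = { .id = 42 };
	handler(&dom.push_work);	/* pass only the member's address */
	return 0;
}

This is why the fix holds up: the irq_work is embedded in the root_domain
that queued it, so container_of() always yields the root domain the work
item belongs to, regardless of what rq->rd points to by the time the
handler runs.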