Date: Wed, 24 Jul 2019 16:59:20 +0900
From: Byungchul Park <byungchul.park@lge.com>
To: "Paul E. McKenney"
Cc: Joel Fernandes, Byungchul Park, rcu, LKML, kernel-team@lge.com
Subject: Re: [PATCH] rcu: Make jiffies_till_sched_qs writable
Message-ID: <20190724075919.GB14712@X58A-UD3R>
References: <20190713174111.GG26519@linux.ibm.com>
 <20190719003942.GA28226@X58A-UD3R>
 <20190719074329.GY14271@linux.ibm.com>
 <20190719195728.GF14271@linux.ibm.com>
 <20190723110521.GA28883@X58A-UD3R>
 <20190723134717.GT14271@linux.ibm.com>
In-Reply-To: <20190723134717.GT14271@linux.ibm.com>
X-Mailing-List: linux-kernel@vger.kernel.org

On Tue, Jul 23, 2019 at 06:47:17AM -0700, Paul E. McKenney wrote:
> On Tue, Jul 23, 2019 at 08:05:21PM +0900, Byungchul Park wrote:
> > On Fri, Jul 19, 2019 at 04:33:56PM -0400, Joel Fernandes wrote:
> > > On Fri, Jul 19, 2019 at 3:57 PM Paul E. McKenney wrote:
> > > >
> > > > On Fri, Jul 19, 2019 at 06:57:58PM +0900, Byungchul Park wrote:
> > > > > On Fri, Jul 19, 2019 at 4:43 PM Paul E. McKenney wrote:
> > > > > >
> > > > > > On Thu, Jul 18, 2019 at 08:52:52PM -0400, Joel Fernandes wrote:
> > > > > > > On Thu, Jul 18, 2019 at 8:40 PM Byungchul Park wrote:
> > > > > > > [snip]
> > > > > > > > > - There is a bug in the CPU stopper machinery itself preventing it
> > > > > > > > >   from scheduling the stopper on Y, even though Y is not holding up
> > > > > > > > >   the grace period.
> > > > > > > >
> > > > > > > > Or any thread on Y is busy with preemption/irqs disabled, preventing
> > > > > > > > the stopper from being scheduled on Y.
> > > > > > > >
> > > > > > > > Or something is stuck in ttwu() trying to wake up the stopper on Y
> > > > > > > > due to scheduler locks such as pi_lock or rq->lock or something.
> > > > > > > >
> > > > > > > > I think what you mentioned can happen easily.
> > > > > > > >
> > > > > > > > Basically we would need information about preemption/irq-disabled
> > > > > > > > sections on Y and the scheduler's current activity on every CPU at
> > > > > > > > that time.
> > > > > > >
> > > > > > > I think all that's needed is an NMI backtrace on all CPUs. On ARM we
> > > > > > > don't have NMI solutions and only IPI- or interrupt-based backtraces
> > > > > > > work, which should at least catch the preempt-disable and
> > > > > > > softirq-disable cases.
> > > > > >
> > > > > > True, though people with systems having hundreds of CPUs might not
> > > > > > thank you for forcing an NMI backtrace on each of them. Is it possible
> > > > > > to NMI only the ones that are holding up the CPU stopper?
> > > > >
> > > > > What a good idea! I think it's possible!
> > > > >
> > > > > But we need to think about the case where NMI doesn't work, when the
> > > > > holdup was caused by IRQs being disabled.
> > > > >
> > > > > Though the weekend is just around the corner, I will keep thinking
> > > > > about it over the weekend!
> > > >
> > > > Very good!
> > >
> > > I too will think more about it ;-) Agreed with the point about the
> > > hundreds-of-CPUs use case.
> > >
> > > Thanks, have a great weekend,
> >
> > BTW, if there's any long code section with irqs/preemption disabled, then
> > the problem would be about more than just an RCU stall. And we can also
> > use the latency tracer or something to detect the bad situation.
> >
> > So in this case, sending an IPI/NMI to the CPUs where the stoppers cannot
> > be scheduled does not give us additional meaningful information.
> >
> > I think Paul started to think about this to solve some real problem. I
> > seriously love to help RCU and it's my pleasure to dig deep into this
> > kind of RCU stuff, but I've yet to define exactly what the problem is.
> > Sorry.
> >
> > Could you share the real issue? I think you don't have to reproduce it.
> > Just sharing the issue that inspired you is enough. Then I might be
> > able to develop the 'how' with Joel! :-) It's our pleasure!
>
> It is unfortunately quite intermittent. I was hoping to find a way
> to make it happen more often. Part of the underlying problem appears
> to be lock contention, in that reducing contention made it even more
> intermittent. Which is good in general, but not for exercising the
> CPU-stopper issue.
>
> But perhaps your hardware will make this happen more readily than does
> mine. The repeat-by is simple, namely run TREE04 on branch "dev" on an
> eight-CPU system. It appears that the number of CPUs used by the test
> should match the number available on the system that you are running on,
> though perhaps affinity could allow mismatches.
>
> So why not try it and see what happens?

Thank you. I'll try it too.
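[For readers following along: Paul's repeat-by above can be sketched as a shell session using the rcutorture wrapper shipped in the mainline kernel tree. The checkout location (KSRC) and the duration are assumptions, not part of the thread.]

```shell
#!/bin/sh
# Hedged sketch of the repeat-by: run the TREE04 rcutorture scenario from
# branch "dev" on an 8-CPU box. KSRC (kernel checkout location) is an
# assumption; adjust it for your setup.
KSRC=${KSRC:-$HOME/linux}

if [ -x "$KSRC/tools/testing/selftests/rcutorture/bin/kvm.sh" ]; then
    cd "$KSRC" &&
    git checkout dev &&
    tools/testing/selftests/rcutorture/bin/kvm.sh \
        --cpus 8 --configs TREE04 --duration 120
else
    # Fall back gracefully when no kernel tree is available.
    echo "no rcutorture scripts under $KSRC; skipping" >&2
fi
```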
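[The knob the patch in the Subject line makes writable is an RCU module parameter, so on a kernel with the patch applied it should be tunable via sysfs roughly as below. The exact sysfs path depends on the kernel's module naming and is an assumption; verify on your kernel.]

```shell
#!/bin/sh
# Hedged sketch: read (and, as root, tune) jiffies_till_sched_qs once the
# patch makes it writable. The sysfs path is an assumption.
param=/sys/module/rcutree/parameters/jiffies_till_sched_qs

if [ -r "$param" ]; then
    printf 'jiffies_till_sched_qs = %s\n' "$(cat "$param")"
    # echo 200 > "$param"   # requires root; value is in jiffies
else
    echo "parameter not exposed on this kernel; skipping" >&2
fi
```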
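[Byungchul's suggestion to use the latency tracer to catch long irq/preemption-disabled sections can be sketched with ftrace's preemptirqsoff tracer. This requires root and a kernel built with the irqsoff/preempt latency tracers, and assumes debugfs is mounted at the usual location.]

```shell
#!/bin/sh
# Hedged sketch: flag long sections with irqs/preemption disabled using
# ftrace's preemptirqsoff latency tracer, as suggested in the thread.
T=/sys/kernel/debug/tracing

if [ -w "$T/current_tracer" ]; then
    echo preemptirqsoff > "$T/current_tracer"
    echo 1 > "$T/tracing_on"
    sleep 1
    # Worst-case irq/preempt-off latency observed so far, in microseconds.
    cat "$T/tracing_max_latency"
else
    echo "ftrace not writable here (not root, or tracer not built); skipping" >&2
fi
```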