Date: Wed, 24 Jul 2019 16:58:41 +0900
From: Byungchul Park
To: "Paul E. McKenney"
McKenney" Cc: Joel Fernandes , Byungchul Park , rcu , LKML , kernel-team@lge.com Subject: Re: [PATCH] rcu: Make jiffies_till_sched_qs writable Message-ID: <20190724075841.GA14712@X58A-UD3R> References: <20190719003942.GA28226@X58A-UD3R> <20190719074329.GY14271@linux.ibm.com> <20190719195728.GF14271@linux.ibm.com> <20190723110521.GA28883@X58A-UD3R> <20190723134717.GT14271@linux.ibm.com> <20190723165403.GA7239@linux.ibm.com> MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <20190723165403.GA7239@linux.ibm.com> User-Agent: Mutt/1.5.21 (2010-09-15) Sender: linux-kernel-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org On Tue, Jul 23, 2019 at 09:54:03AM -0700, Paul E. McKenney wrote: > On Tue, Jul 23, 2019 at 06:47:17AM -0700, Paul E. McKenney wrote: > > On Tue, Jul 23, 2019 at 08:05:21PM +0900, Byungchul Park wrote: > > > On Fri, Jul 19, 2019 at 04:33:56PM -0400, Joel Fernandes wrote: > > > > On Fri, Jul 19, 2019 at 3:57 PM Paul E. McKenney wrote: > > > > > > > > > > On Fri, Jul 19, 2019 at 06:57:58PM +0900, Byungchul Park wrote: > > > > > > On Fri, Jul 19, 2019 at 4:43 PM Paul E. McKenney wrote: > > > > > > > > > > > > > > On Thu, Jul 18, 2019 at 08:52:52PM -0400, Joel Fernandes wrote: > > > > > > > > On Thu, Jul 18, 2019 at 8:40 PM Byungchul Park wrote: > > > > > > > > [snip] > > > > > > > > > > - There is a bug in the CPU stopper machinery itself preventing it > > > > > > > > > > from scheduling the stopper on Y. Even though Y is not holding up the > > > > > > > > > > grace period. > > > > > > > > > > > > > > > > > > Or any thread on Y is busy with preemption/irq disabled preventing the > > > > > > > > > stopper from being scheduled on Y. > > > > > > > > > > > > > > > > > > Or something is stuck in ttwu() to wake up the stopper on Y due to any > > > > > > > > > scheduler locks such as pi_lock or rq->lock or something. > > > > > > > > > > > > > > > > > > I think what you mentioned can happen easily. > > > > > > > > > > > > > > > > > > Basically we would need information about preemption/irq disabled > > > > > > > > > sections on Y and scheduler's current activity on every cpu at that time. > > > > > > > > > > > > > > > > I think all that's needed is an NMI backtrace on all CPUs. An ARM we > > > > > > > > don't have NMI solutions and only IPI or interrupt based backtrace > > > > > > > > works which should at least catch and the preempt disable and softirq > > > > > > > > disable cases. > > > > > > > > > > > > > > True, though people with systems having hundreds of CPUs might not > > > > > > > thank you for forcing an NMI backtrace on each of them. Is it possible > > > > > > > to NMI only the ones that are holding up the CPU stopper? > > > > > > > > > > > > What a good idea! I think it's possible! > > > > > > > > > > > > But we need to think about the case NMI doesn't work when the > > > > > > holding-up was caused by IRQ disabled. > > > > > > > > > > > > Though it's just around the corner of weekend, I will keep thinking > > > > > > on it during weekend! > > > > > > > > > > Very good! > > > > > > > > Me too will think more about it ;-) Agreed with point about 100s of > > > > CPUs usecase, > > > > > > > > Thanks, have a great weekend, > > > > > > BTW, if there's any long code section with irq/preemption disabled, then > > > the problem would be not only about RCU stall. And we can also use > > > latency tracer or something to detect the bad situation. 
> > >
> > > So in this case, sending an IPI/NMI to the CPUs where the stoppers cannot
> > > be scheduled does not give us additional meaningful information.
> > >
> > > I think Paul started thinking about this to solve some real problem. I
> > > seriously love to help RCU and it's my pleasure to dig deep into this
> > > kind of RCU stuff, but I have yet to define exactly what the problem is.
> > > Sorry.
> > >
> > > Could you share the real issue? I don't think you have to reproduce it.
> > > Just sharing the issue that inspired you is enough. Then I might be able
> > > to develop the 'how' with Joel! :-) It's our pleasure!
> >
> > It is unfortunately quite intermittent. I was hoping to find a way
> > to make it happen more often. Part of the underlying problem appears
> > to be lock contention, in that reducing contention made it even more
> > intermittent. Which is good in general, but not for exercising the
> > CPU-stopper issue.
> >
> > But perhaps your hardware will make this happen more readily than does
> > mine. The repeat-by is simple, namely run TREE04 on branch "dev" on an
> > eight-CPU system. It appears that the number of CPUs used by the test
> > should match the number available on the system that you are running on,
> > though perhaps affinity could allow mismatches.
> >
> > So why not try it and see what happens?
>
> And another potential issue causing this is a CONFIG_NO_HZ_FULL=y
> kernel running in kernel mode (rcutorture on the one hand and callback
> invocation on the other) for extended periods of time with the scheduling
> clock disabled. Just started the tests for this. They will be running
> for quite some time, which this week is a good thing. ;-)
>
> Thanx, Paul

I see. This provides more insight into the problem.

Thanks,
Byungchul
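As a rough illustration of the "NMI only the CPUs that are holding up the
CPU stopper" idea discussed above, a minimal sketch could look like the code
below. The stalled_mask cpumask and the helper name are hypothetical (the
mask would have to be built by whatever code queued the stop works and
noticed they have not completed within some timeout), and this is only one
possible shape for the idea, not something the thread settled on.

#include <linux/cpumask.h>
#include <linux/nmi.h>
#include <linux/printk.h>

/*
 * Sketch only: dump backtraces of just the CPUs whose stopper work has
 * not completed, instead of NMI-ing every CPU in the system.  The
 * caller is assumed to have filled in @stalled_mask.
 */
static void dump_stalled_stopper_cpus(const struct cpumask *stalled_mask)
{
	int cpu;

	for_each_cpu(cpu, stalled_mask) {
		/*
		 * trigger_single_cpu_backtrace() returns false when the
		 * architecture has no NMI-based backtrace (the ARM
		 * limitation Joel mentions); an IPI-based dump would be
		 * the only fallback there.
		 */
		if (!trigger_single_cpu_backtrace(cpu))
			pr_warn("No NMI backtrace support for CPU %d\n", cpu);
	}
}

trigger_cpumask_backtrace(stalled_mask) would request all of them in one
call; the per-CPU loop above just makes the no-NMI fallback explicit.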
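And purely as orientation for the subject line, the kind of one-liner it
suggests might look roughly like the hunk below against kernel/rcu/tree.c,
flipping the module parameter from read-only to read-write so it can be
tuned at runtime under /sys/module/rcutree/parameters/. The excerpt above
does not show the actual patch, so treat this as a guess at its shape
rather than the submitted change.

/* Sketch, not the submitted patch: make the knob writable at runtime. */
-module_param(jiffies_till_sched_qs, ulong, 0444);
+module_param(jiffies_till_sched_qs, ulong, 0644);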