Date: Wed, 11 Nov 2020 16:11:29 -0800
From: "Paul E. McKenney" <paulmck@kernel.org>
To: Marco Elver
Cc: Steven Rostedt, Anders Roxell, Andrew Morton, Alexander Potapenko,
	Dmitry Vyukov, Jann Horn, Mark Rutland,
	Linux Kernel Mailing List, Linux-MM, kasan-dev,
	rcu@vger.kernel.org, peterz@infradead.org
Subject: Re: [PATCH] kfence: Avoid stalling work queue task without allocations
Message-ID: <20201112001129.GD3249@paulmck-ThinkPad-P72>
Reply-To: paulmck@kernel.org
References: <20201110135320.3309507-1-elver@google.com>
 <20201111133813.GA81547@elver.google.com>
 <20201111130543.27d29462@gandalf.local.home>
 <20201111182333.GA3249@paulmck-ThinkPad-P72>
 <20201111183430.GN517454@elver.google.com>
 <20201111192123.GB3249@paulmck-ThinkPad-P72>
 <20201111202153.GT517454@elver.google.com>
In-Reply-To: <20201111202153.GT517454@elver.google.com>
List-ID: X-Mailing-List: linux-kernel@vger.kernel.org

On Wed, Nov 11, 2020 at 09:21:53PM +0100, Marco Elver wrote:
> On Wed, Nov 11, 2020 at 11:21AM -0800, Paul E. McKenney wrote:
> [...]
> > > > rcu: Don't invoke try_invoke_on_locked_down_task() with irqs disabled
> > >
> > > Sadly, no, next-20201110 already included that one, and that's what I
> > > tested and got me all those warnings above.
> >
> > Hey, I had to ask!  The only uncertainty I see is the acquisition of
> > the lock in rcu_iw_handler(), for which I add a lockdep check in the
> > (untested) patch below.  The other thing I could do is sprinkle such
> > checks through the stall-warning code on the assumption that something
> > RCU is calling is enabling interrupts.
> >
> > Other thoughts?
> >
> > 							Thanx, Paul
> >
> > ------------------------------------------------------------------------
> >
> > diff --git a/kernel/rcu/tree_stall.h b/kernel/rcu/tree_stall.h
> > index 70d48c5..3d67650 100644
> > --- a/kernel/rcu/tree_stall.h
> > +++ b/kernel/rcu/tree_stall.h
> > @@ -189,6 +189,7 @@ static void rcu_iw_handler(struct irq_work *iwp)
> >
> >  	rdp = container_of(iwp, struct rcu_data, rcu_iw);
> >  	rnp = rdp->mynode;
> > +	lockdep_assert_irqs_disabled();
> >  	raw_spin_lock_rcu_node(rnp);
> >  	if (!WARN_ON_ONCE(!rdp->rcu_iw_pending)) {
> >  		rdp->rcu_iw_gp_seq = rnp->gp_seq;
>
> This assert didn't fire yet, I just get more of the below. I'll keep
> rerunning, but am not too hopeful...

Is bisection a possibility?

Failing that, please see the updated patch below.  This adds a few more
calls to lockdep_assert_irqs_disabled(), but perhaps more helpfully dumps
the current stack of the CPU that the RCU grace-period kthread wants to
run on in the case where this kthread has been starved of CPU.

							Thanx, Paul

------------------------------------------------------------------------

diff --git a/kernel/rcu/tree_stall.h b/kernel/rcu/tree_stall.h
index 70d48c5..d203ea0 100644
--- a/kernel/rcu/tree_stall.h
+++ b/kernel/rcu/tree_stall.h
@@ -189,6 +189,7 @@ static void rcu_iw_handler(struct irq_work *iwp)
 
 	rdp = container_of(iwp, struct rcu_data, rcu_iw);
 	rnp = rdp->mynode;
+	lockdep_assert_irqs_disabled();
 	raw_spin_lock_rcu_node(rnp);
 	if (!WARN_ON_ONCE(!rdp->rcu_iw_pending)) {
 		rdp->rcu_iw_gp_seq = rnp->gp_seq;
@@ -449,21 +450,32 @@ static void print_cpu_stall_info(int cpu)
 /* Complain about starvation of grace-period kthread. */
 static void rcu_check_gp_kthread_starvation(void)
 {
+	int cpu;
 	struct task_struct *gpk = rcu_state.gp_kthread;
 	unsigned long j;
 
 	if (rcu_is_gp_kthread_starving(&j)) {
+		cpu = gpk ? task_cpu(gpk) : -1;
 		pr_err("%s kthread starved for %ld jiffies! g%ld f%#x %s(%d) ->state=%#lx ->cpu=%d\n",
 		       rcu_state.name, j,
 		       (long)rcu_seq_current(&rcu_state.gp_seq),
 		       data_race(rcu_state.gp_flags),
 		       gp_state_getname(rcu_state.gp_state), rcu_state.gp_state,
-		       gpk ? gpk->state : ~0, gpk ? task_cpu(gpk) : -1);
+		       gpk ? gpk->state : ~0, cpu);
 		if (gpk) {
 			pr_err("\tUnless %s kthread gets sufficient CPU time, OOM is now expected behavior.\n", rcu_state.name);
 			pr_err("RCU grace-period kthread stack dump:\n");
+			lockdep_assert_irqs_disabled();
 			sched_show_task(gpk);
+			lockdep_assert_irqs_disabled();
+			if (cpu >= 0) {
+				pr_err("Stack dump where RCU grace-period kthread last ran:\n");
+				if (!trigger_single_cpu_backtrace(cpu))
+					dump_cpu_task(cpu);
+			}
+			lockdep_assert_irqs_disabled();
 			wake_up_process(gpk);
+			lockdep_assert_irqs_disabled();
 		}
 	}
 }