Date: Tue, 30 Jun 2020 11:35:34 -0700
From: "Paul E.
McKenney"
To: Sebastian Andrzej Siewior
Cc: joel@joelfernandes.org, rcu@vger.kernel.org, linux-kernel@vger.kernel.org,
	kernel-team@fb.com, mingo@kernel.org, jiangshanlai@gmail.com,
	dipankar@in.ibm.com, akpm@linux-foundation.org,
	mathieu.desnoyers@efficios.com, josh@joshtriplett.org,
	tglx@linutronix.de, peterz@infradead.org, rostedt@goodmis.org,
	dhowells@redhat.com, edumazet@google.com, fweisbec@gmail.com,
	oleg@redhat.com, Uladzislau Rezki
Subject: Re: [PATCH tip/core/rcu 03/17] rcu/tree: Skip entry into the page allocator for PREEMPT_RT
Message-ID: <20200630183534.GG9247@paulmck-ThinkPad-P72>
Reply-To: paulmck@kernel.org
References: <20200624201200.GA28901@paulmck-ThinkPad-P72>
 <20200624201226.21197-3-paulmck@kernel.org>
 <20200630164543.4mdcf6zb4zfclhln@linutronix.de>
In-Reply-To: <20200630164543.4mdcf6zb4zfclhln@linutronix.de>

On Tue, Jun 30, 2020 at 06:45:43PM +0200, Sebastian Andrzej Siewior wrote:
> On 2020-06-24 13:12:12 [-0700], paulmck@kernel.org wrote:
> > From: "Joel Fernandes (Google)"
> >
> > To keep the kfree_rcu() code working in purely atomic sections on RT,
> > such as non-threaded IRQ handlers and raw spinlock sections, avoid
> > calling into the page allocator which uses sleeping locks on RT.
> >
> > In fact, even if the caller is preemptible, the kfree_rcu() code is
> > not, as the krcp->lock is a raw spinlock.
> >
> > Calling into the page allocator is optional and avoiding it should be
> > Ok, especially with the page pre-allocation support in future patches.
> > Such pre-allocation would further avoid the need for a dynamically
> > allocated page in the first place.
> >
> > Cc: Sebastian Andrzej Siewior
> > Reviewed-by: Uladzislau Rezki
> > Co-developed-by: Uladzislau Rezki
> > Signed-off-by: Uladzislau Rezki
> > Signed-off-by: Joel Fernandes (Google)
> > Signed-off-by: Uladzislau Rezki (Sony)
> > Signed-off-by: Paul E. McKenney
> > ---
> >  kernel/rcu/tree.c | 12 ++++++++++++
> >  1 file changed, 12 insertions(+)
> >
> > diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
> > index 64592b4..dbdd509 100644
> > --- a/kernel/rcu/tree.c
> > +++ b/kernel/rcu/tree.c
> > @@ -3184,6 +3184,18 @@ kfree_call_rcu_add_ptr_to_bulk(struct kfree_rcu_cpu *krcp,
> >  		if (!bnode) {
> >  			WARN_ON_ONCE(sizeof(struct kfree_rcu_bulk_data) > PAGE_SIZE);
> >
> > +			/*
> > +			 * To keep this path working on raw non-preemptible
> > +			 * sections, prevent the optional entry into the
> > +			 * allocator as it uses sleeping locks. In fact, even
> > +			 * if the caller of kfree_rcu() is preemptible, this
> > +			 * path still is not, as krcp->lock is a raw spinlock.
> > +			 * With additional page pre-allocation in the works,
> > +			 * hitting this return is going to be much less likely.
> > +			 */
> > +			if (IS_ENABLED(CONFIG_PREEMPT_RT))
> > +				return false;
>
> This is not going to work together with the "wait context validator"
> (CONFIG_PROVE_RAW_LOCK_NESTING). As of -rc3 it should complain about
> printk() which is why it is still disabled by default.

Fixing that should be "interesting".  In particular, RCU CPU stall
warnings rely on the raw spin lock to reduce false positives due to
race conditions.  Some thought will be required here.

> So assume that this is fixed and enabled then on !PREEMPT_RT it will
> complain that you have a raw_spinlock_t acquired (the one from patch
> 02/17) and attempt to acquire a spinlock_t in the memory allocator.

Given that the slab allocator doesn't acquire any locks until it gets
a fair way in, wouldn't it make sense to allow a "shallow" allocation
while a raw spinlock is held?
This would require yet another GFP_ flag, but that won't make all that
much of a difference in the total.  ;-)

							Thanx, Paul

> >  			bnode = (struct kfree_rcu_bulk_data *)
> >  				__get_free_page(GFP_NOWAIT | __GFP_NOWARN);
> >  	}
>
> Sebastian