Date: Wed, 22 Apr 2020 06:28:53 -0700
From: "Paul E. McKenney"
To: joel@joelfernandes.org
Cc: Uladzislau Rezki, linux-kernel@vger.kernel.org,
	Sebastian Andrzej Siewior, Josh Triplett, Lai Jiangshan,
	Mathieu Desnoyers, rcu@vger.kernel.org, Steven Rostedt
Subject: Re: [PATCH rcu/dev -fixes 2/4] rcu/tree: Skip entry into the page allocator for PREEMPT_RT
Message-ID: <20200422132853.GM17661@paulmck-ThinkPad-P72>
Reply-To: paulmck@kernel.org
References: <20200420153837.194532-1-joel@joelfernandes.org>
 <20200420153837.194532-3-joel@joelfernandes.org>
 <20200422103536.GA16263@pc636>
 <51A69C78-FDD4-4A15-8969-EEA17CBA5BBD@joelfernandes.org>
In-Reply-To: <51A69C78-FDD4-4A15-8969-EEA17CBA5BBD@joelfernandes.org>

On Wed, Apr 22, 2020 at 09:18:41AM -0400, joel@joelfernandes.org wrote:
> 
> 
> On April 22, 2020 6:35:36 AM EDT, Uladzislau Rezki wrote:
> >On Mon, Apr 20, 2020 at 11:38:35AM -0400, Joel Fernandes (Google) wrote:
> >> To keep kfree_rcu() path working on raw non-preemptible sections,
> >> prevent the optional entry into the allocator as it uses sleeping locks.
> >> In fact, even if the caller of kfree_rcu() is preemptible, this path
> >> still is not, as krcp->lock is a raw spinlock as done in previous
> >> patches. With additional page pre-allocation in the works, hitting this
> >> return is going to be much less likely soon so just prevent it for now
> >> so that PREEMPT_RT does not break. Note that page allocation here is an
> >> optimization and skipping it still makes kfree_rcu() work.
> >>
> >> Cc: Sebastian Andrzej Siewior
> >> Co-developed-by: Uladzislau Rezki
> >> Signed-off-by: Uladzislau Rezki
> >> Signed-off-by: Joel Fernandes (Google)
> >> ---
> >>  kernel/rcu/tree.c | 12 ++++++++++++
> >>  1 file changed, 12 insertions(+)
> >>
> >> diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
> >> index cf68d3d9f5b81..cd61649e1b001 100644
> >> --- a/kernel/rcu/tree.c
> >> +++ b/kernel/rcu/tree.c
> >> @@ -3092,6 +3092,18 @@ kfree_call_rcu_add_ptr_to_bulk(struct kfree_rcu_cpu *krcp,
> >>  	if (!bnode) {
> >>  		WARN_ON_ONCE(sizeof(struct kfree_rcu_bulk_data) > PAGE_SIZE);
> >>
> >> +		/*
> >> +		 * To keep this path working on raw non-preemptible
> >> +		 * sections, prevent the optional entry into the
> >> +		 * allocator as it uses sleeping locks. In fact, even
> >> +		 * if the caller of kfree_rcu() is preemptible, this
> >> +		 * path still is not, as krcp->lock is a raw spinlock.
> >> +		 * With additional page pre-allocation in the works,
> >> +		 * hitting this return is going to be much less likely.
> >> +		 */
> >> +		if (IS_ENABLED(CONFIG_PREEMPT_RT))
> >> +			return false;
> >> +
> >>  		bnode = (struct kfree_rcu_bulk_data *)
> >>  			__get_free_page(GFP_NOWAIT | __GFP_NOWARN);
> >>  	}
> >This will not be correctly working by just reverting everything to the
> >"rcu_head path" for CONFIG_PREEMPT_RT kernel. We need to preallocate at
> >least once. I can add caching on top of this series to address it.
> 
> I discussed with Vlad offline, this patch is fine for mainline as we
> don't have headless support yet. So this patch is good. Future patches
> adding caching will also add the headless support after additional
> caching, so skipping the allocation here is ok.

Sounds good!  But would one of the realtime guys be willing to give
a Tested-by?

							Thanx, Paul

> Thanks.
> 
>  - Joel
> 
> 
> 
> >--
> >Vlad Rezki
> 
> -- 
> Sent from my Android device with K-9 Mail. Please excuse my brevity.
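
For readers following the caching discussion above ("we need to preallocate at
least once"), a minimal sketch of a per-CPU page cache could look roughly like
the following. This is illustrative only, not code from this thread or from the
later caching series: the kfree_rcu_page_cache structure and the
krc_page_cache_refill()/krc_page_cache_get() helpers are hypothetical names.
The point is that pages are obtained in a sleepable context and only consumed,
never allocated, while krcp->lock (a raw spinlock) is held, which is what keeps
the bulk path usable on PREEMPT_RT.

#include <linux/gfp.h>
#include <linux/kernel.h>
#include <linux/percpu.h>

/*
 * Illustrative sketch only.  Keep a small per-CPU stash of pages filled
 * from process context, so the kfree_rcu() bulk path never has to call
 * __get_free_page() under a raw spinlock.
 */
struct kfree_rcu_page_cache {
	void *pages[2];		/* two cached pages per CPU (arbitrary size) */
	int nr;
};

static DEFINE_PER_CPU(struct kfree_rcu_page_cache, krc_page_cache);

/* Refill from process context, where GFP_KERNEL allocation may sleep. */
static void krc_page_cache_refill(void)
{
	struct kfree_rcu_page_cache *pc = &get_cpu_var(krc_page_cache);

	while (pc->nr < ARRAY_SIZE(pc->pages)) {
		void *page = (void *)__get_free_page(GFP_KERNEL | __GFP_NOWARN);

		if (!page)
			break;
		pc->pages[pc->nr++] = page;
	}
	put_cpu_var(krc_page_cache);
}

/* Pop a cached page; safe under a raw spinlock because it never allocates. */
static void *krc_page_cache_get(void)
{
	struct kfree_rcu_page_cache *pc = this_cpu_ptr(&krc_page_cache);

	return pc->nr ? pc->pages[--pc->nr] : NULL;
}

In such a scheme the allocation path in kfree_call_rcu_add_ptr_to_bulk() would
first try the cache and fall back to returning false (the rcu_head path) only
when the cache is empty, rather than skipping allocation unconditionally on
PREEMPT_RT as the patch above does for now.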