Date: Tue, 11 Apr 2023 09:42:37 -0700
From: "Paul E. McKenney"
McKenney" To: Uladzislau Rezki Cc: "Zhang, Qiang1" , "frederic@kernel.org" , "joel@joelfernandes.org" , "qiang.zhang1211@gmail.com" , "rcu@vger.kernel.org" , "linux-kernel@vger.kernel.org" Subject: Re: [PATCH v3] rcu/kvfree: Prevents cache growing when the backoff_page_cache_fill is set Message-ID: <2159c88e-ec99-4ad6-a166-baf4199d138f@paulmck-laptop> Reply-To: paulmck@kernel.org References: MIME-Version: 1.0 Content-Type: text/plain; charset=utf-8 Content-Disposition: inline Content-Transfer-Encoding: 8bit In-Reply-To: X-Spam-Status: No, score=-2.5 required=5.0 tests=DKIMWL_WL_HIGH,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,DKIM_VALID_EF,RCVD_IN_DNSWL_MED,SPF_HELO_NONE, SPF_PASS autolearn=unavailable autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org On Tue, Apr 11, 2023 at 04:58:22PM +0200, Uladzislau Rezki wrote: > On Tue, Apr 11, 2023 at 02:42:27PM +0000, Zhang, Qiang1 wrote: > > > > Currently, in kfree_rcu_shrink_scan(), the drain_page_cache() is > > > > executed before kfree_rcu_monitor() to drain page cache, if the bnode > > > > structure's->gp_snap has done, the kvfree_rcu_bulk() will fill the > > > > page cache again in kfree_rcu_monitor(), this commit add a check > > > > for krcp structure's->backoff_page_cache_fill in put_cached_bnode(), > > > > if the krcp structure's->backoff_page_cache_fill is set, prevent page > > > > cache growing and disable allocated page in fill_page_cache_func(). > > > > > > > > Signed-off-by: Zqiang > > > > > > > >Much improved! But still some questions below... > > > > > > > > Thanx, Paul > > > > > > > > --- > > > > kernel/rcu/tree.c | 4 +++- > > > > 1 file changed, 3 insertions(+), 1 deletion(-) > > > > > > > > diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c > > > > index cc34d13be181..9d9d3772cc45 100644 > > > > --- a/kernel/rcu/tree.c > > > > +++ b/kernel/rcu/tree.c > > > > @@ -2908,6 +2908,8 @@ static inline bool > > > > put_cached_bnode(struct kfree_rcu_cpu *krcp, > > > > struct kvfree_rcu_bulk_data *bnode) > > > > { > > > > + if (atomic_read(&krcp->backoff_page_cache_fill)) > > > > + return false; > > > > > > > >This will mean that under low-memory conditions, we will keep zero > > > >pages in ->bkvcache. All attempts to put something there will fail. > > > > > > > >This is probably not an issue for structures containing an rcu_head > > > >that are passed to kfree_rcu(p, field), but doesn't this mean that > > > >kfree_rcu_mightsleep() unconditionally invokes synchronize_rcu()? > > > >This could seriously slow up freeing under low-memory conditions, > > > >which might exacerbate the low-memory conditions. > > > > > > Thanks for mentioning this, I didn't think of this before????. > > > > > > > > > > >Is this really what we want? Zero cached rather than just fewer cached? > > > > > > > > > > > > > > > > // Check the limit. > > > > if (krcp->nr_bkv_objs >= rcu_min_cached_objs) > > > > return false; > > > > @@ -3221,7 +3223,7 @@ static void fill_page_cache_func(struct work_struct *work) > > > > int i; > > > > > > > > nr_pages = atomic_read(&krcp->backoff_page_cache_fill) ? > > > > - 1 : rcu_min_cached_objs; > > > > + 0 : rcu_min_cached_objs; > > > > > > > > for (i = 0; i < nr_pages; i++) { > > > > > > > >I am still confused as to why we start "i" at zero rather than at > > > >->nr_bkv_objs. What am I missing here? > > > > > > > > > No, you are right, I missed this place. 
> > >
> > > --- a/kernel/rcu/tree.c
> > > +++ b/kernel/rcu/tree.c
> > > @@ -2908,6 +2908,8 @@ static inline bool
> > > put_cached_bnode(struct kfree_rcu_cpu *krcp,
> > > 	struct kvfree_rcu_bulk_data *bnode)
> > > {
> > > +	if (atomic_read(&krcp->backoff_page_cache_fill))
> > > +		return false;
> > >
> > >This is broken, unfortunately.  Under a low-memory condition we fill
> > >the cache with at least one page anyway, because we do not want to
> > >hit a slow path.
> >
> > Thanks for the reminder; please ignore my v4 patch. How about the following?
> >
> > diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
> > index 41daae3239b5..e2e8412e687f 100644
> > --- a/kernel/rcu/tree.c
> > +++ b/kernel/rcu/tree.c
> > @@ -3238,6 +3238,9 @@ static void fill_page_cache_func(struct work_struct *work)
> > 			free_page((unsigned long) bnode);
> > 			break;
> > 		}
> > +
> > +		if (atomic_read(&krcp->backoff_page_cache_fill))
> > +			break;
> > 	}
> It does not fix the "issue" you are reporting.  The kvfree_rcu_bulk()
> function can still fill it back.  IMHO, the solution here is to disable
> the cache under a low-memory condition and enable it back later on.
>
> The cache size is controlled by the rcu_min_cached_objs variable.  We can
> set it to 1 and restore it back to the original value to make the cache
> operate as before.

It would be best to use a second variable for this.  Users might get
annoyed if their changes to rcu_min_cached_objs got overwritten once
things got set back to normal operation.

							Thanx, Paul
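
To make that last suggestion concrete, here is a minimal user-space sketch
(not the actual kernel code) of the "second variable" idea: the shrinker
lowers a separate internal cap and later restores it from the tunable, so
the user-visible rcu_min_cached_objs value itself is never rewritten.  The
names cache_cap, backoff_page_cache_fill(), and restore_page_cache_fill()
are illustrative assumptions, not existing kernel symbols.

/*
 * Sketch only: models the cache-cap idea in plain C so it can be
 * compiled and run; it is not the kernel implementation.
 */
#include <stdbool.h>
#include <stdio.h>

#define RCU_MIN_CACHED_OBJS 5	/* stands in for the module parameter */

struct krc {
	int nr_bkv_objs;	/* pages currently sitting in the cache */
	int cache_cap;		/* second variable: current cache limit */
};

/* Refuse to cache another page once the current cap is reached. */
static bool put_cached_bnode(struct krc *krcp)
{
	if (krcp->nr_bkv_objs >= krcp->cache_cap)
		return false;
	krcp->nr_bkv_objs++;
	return true;
}

/* Shrinker path: squeeze the cache down to one page, not zero. */
static void backoff_page_cache_fill(struct krc *krcp)
{
	krcp->cache_cap = 1;
}

/* Memory pressure gone: re-read the tunable and restore the limit. */
static void restore_page_cache_fill(struct krc *krcp)
{
	krcp->cache_cap = RCU_MIN_CACHED_OBJS;
}

int main(void)
{
	struct krc krcp = { .nr_bkv_objs = 0, .cache_cap = RCU_MIN_CACHED_OBJS };
	int i;

	/* Under backoff, only the first put succeeds. */
	backoff_page_cache_fill(&krcp);
	for (i = 0; i < RCU_MIN_CACHED_OBJS; i++)
		put_cached_bnode(&krcp);
	printf("pages cached under backoff: %d\n", krcp.nr_bkv_objs);	/* 1 */

	/* Refill starts from what is already cached, not from zero. */
	restore_page_cache_fill(&krcp);
	for (i = krcp.nr_bkv_objs; i < krcp.cache_cap; i++)
		put_cached_bnode(&krcp);
	printf("pages cached after restore: %d\n", krcp.nr_bkv_objs);	/* 5 */

	return 0;
}

Because the shrinker only ever touches cache_cap, a later restore re-reads
the tunable rather than overwriting it, so any change the user made to
rcu_min_cached_objs in the meantime is preserved.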