Subject: Re: [PATCH v2] f2fs: shrink spinlock coverage
To: Jaegeuk Kim
From: Chao Yu
Date: Thu, 7 May 2020 11:03:25 +0800
In-Reply-To: <20200506150521.GE107238@google.com>
References: <20200506104542.123575-1-yuchao0@huawei.com> <20200506150521.GE107238@google.com>
X-Mailing-List: linux-kernel@vger.kernel.org

On 2020/5/6 23:05, Jaegeuk Kim wrote:
> On 05/06, Chao Yu wrote:
>> In f2fs_try_to_free_nids(), the .nid_list_lock spinlock critical region
>> grows with the expected shrink count; to avoid spinning other CPUs for a
>> long time, it's better to implement it like the extent cache and NAT
>> shrinkers.
>>
>> Signed-off-by: Chao Yu
>> ---
>> v2:
>> - fix unlocking the wrong spinlock.
>>
>>  fs/f2fs/node.c | 15 +++++++++++----
>>  1 file changed, 11 insertions(+), 4 deletions(-)
>>
>> diff --git a/fs/f2fs/node.c b/fs/f2fs/node.c
>> index 4da0d8713df5..ad0b14f4dab8 100644
>> --- a/fs/f2fs/node.c
>> +++ b/fs/f2fs/node.c
>> @@ -2488,7 +2488,6 @@ void f2fs_alloc_nid_failed(struct f2fs_sb_info *sbi, nid_t nid)
>>  int f2fs_try_to_free_nids(struct f2fs_sb_info *sbi, int nr_shrink)
>>  {
>>  	struct f2fs_nm_info *nm_i = NM_I(sbi);
>> -	struct free_nid *i, *next;
>>  	int nr = nr_shrink;
>>
>>  	if (nm_i->nid_cnt[FREE_NID] <= MAX_FREE_NIDS)
>> @@ -2498,14 +2497,22 @@ int f2fs_try_to_free_nids(struct f2fs_sb_info *sbi, int nr_shrink)
>>  		return 0;
>>
>>  	spin_lock(&nm_i->nid_list_lock);
>> -	list_for_each_entry_safe(i, next, &nm_i->free_nid_list, list) {
>> -		if (nr_shrink <= 0 ||
>> -				nm_i->nid_cnt[FREE_NID] <= MAX_FREE_NIDS)
>> +	while (nr_shrink) {
>> +		struct free_nid *i;
>> +
>> +		if (nm_i->nid_cnt[FREE_NID] <= MAX_FREE_NIDS)
>>  			break;
>>
>> +		i = list_first_entry(&nm_i->free_nid_list,
>> +				struct free_nid, list);
>> +		list_del(&i->list);
>> +		spin_unlock(&nm_i->nid_list_lock);
>> +
>>  		__remove_free_nid(sbi, i, FREE_NID);
>
> __remove_free_nid() will do list_del again. btw, how about just splitting out

Oh, my bad. How about moving __remove_free_nid() into .nid_list_lock coverage?

> given nr_shrink into multiple trials?

Like this?

while (shrink) {
	batch = DEFAULT_BATCH_NUMBER;	// 16
	spin_lock();
	list_for_each_entry_safe() {
		if (!shrink || !batch)
			break;
		remove_item_from_list;
		shrink--;
		batch--;
	}
	spin_unlock();
}

Thanks,

>
>>  		kmem_cache_free(free_nid_slab, i);
>>  		nr_shrink--;
>> +
>> +		spin_lock(&nm_i->nid_list_lock);
>>  	}
>>  	spin_unlock(&nm_i->nid_list_lock);
>>  	mutex_unlock(&nm_i->build_lock);
>> --
>> 2.18.0.rc1
> .
>
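For reference, a rough, untested sketch of how the batched loop discussed above might look once fleshed out, combining the two review points: __remove_free_nid() stays under nid_list_lock (so there is no duplicate list_del()), and the lock is dropped and re-taken between fixed-size passes so other CPUs are not kept spinning for the whole shrink. DEFAULT_BATCH_NUMBER is only the placeholder name from the pseudocode above, not an existing f2fs constant.

#define DEFAULT_BATCH_NUMBER	16	/* placeholder batch size from the pseudocode */

int f2fs_try_to_free_nids(struct f2fs_sb_info *sbi, int nr_shrink)
{
	struct f2fs_nm_info *nm_i = NM_I(sbi);
	int nr = nr_shrink;

	if (nm_i->nid_cnt[FREE_NID] <= MAX_FREE_NIDS)
		return 0;

	if (!mutex_trylock(&nm_i->build_lock))
		return 0;

	while (nr_shrink > 0) {
		struct free_nid *i, *next;
		int batch = DEFAULT_BATCH_NUMBER;

		spin_lock(&nm_i->nid_list_lock);
		list_for_each_entry_safe(i, next, &nm_i->free_nid_list, list) {
			if (!nr_shrink || !batch ||
			    nm_i->nid_cnt[FREE_NID] <= MAX_FREE_NIDS)
				break;
			/*
			 * __remove_free_nid() already unlinks @i from
			 * free_nid_list, so no separate list_del() here.
			 */
			__remove_free_nid(sbi, i, FREE_NID);
			kmem_cache_free(free_nid_slab, i);
			nr_shrink--;
			batch--;
		}
		spin_unlock(&nm_i->nid_list_lock);

		/*
		 * A pass that did not use up its batch stopped early
		 * (count low enough or list drained), so we are done.
		 */
		if (batch)
			break;
	}
	mutex_unlock(&nm_i->build_lock);

	return nr - nr_shrink;
}

The per-pass bound keeps the worst-case hold time of nid_list_lock at DEFAULT_BATCH_NUMBER frees regardless of nr_shrink, and the leftover-batch check after unlocking makes the outer loop terminate even if another thread drains the list in between passes.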