From: Chao Yu <yuchao0@huawei.com>
Subject: [PATCH v2] f2fs: shrink spinlock coverage
Date: Wed, 6 May 2020 18:45:42 +0800
Message-ID: <20200506104542.123575-1-yuchao0@huawei.com>
X-Mailing-List: linux-kernel@vger.kernel.org

In f2fs_try_to_free_nids(), the time spent in the .nid_list_lock spinlock
critical region grows with the requested shrink count. To avoid spinning
other CPUs for a long time, drop and re-acquire the spinlock while freeing
entries, as the extent cache and NAT shrinkers already do.

Signed-off-by: Chao Yu <yuchao0@huawei.com>
---
v2:
- fix unlocking the wrong spinlock.
 fs/f2fs/node.c | 15 +++++++++++----
 1 file changed, 11 insertions(+), 4 deletions(-)

diff --git a/fs/f2fs/node.c b/fs/f2fs/node.c
index 4da0d8713df5..ad0b14f4dab8 100644
--- a/fs/f2fs/node.c
+++ b/fs/f2fs/node.c
@@ -2488,7 +2488,6 @@ void f2fs_alloc_nid_failed(struct f2fs_sb_info *sbi, nid_t nid)
 int f2fs_try_to_free_nids(struct f2fs_sb_info *sbi, int nr_shrink)
 {
 	struct f2fs_nm_info *nm_i = NM_I(sbi);
-	struct free_nid *i, *next;
 	int nr = nr_shrink;
 
 	if (nm_i->nid_cnt[FREE_NID] <= MAX_FREE_NIDS)
@@ -2498,14 +2497,22 @@ int f2fs_try_to_free_nids(struct f2fs_sb_info *sbi, int nr_shrink)
 		return 0;
 
 	spin_lock(&nm_i->nid_list_lock);
-	list_for_each_entry_safe(i, next, &nm_i->free_nid_list, list) {
-		if (nr_shrink <= 0 ||
-				nm_i->nid_cnt[FREE_NID] <= MAX_FREE_NIDS)
+	while (nr_shrink) {
+		struct free_nid *i;
+
+		if (nm_i->nid_cnt[FREE_NID] <= MAX_FREE_NIDS)
 			break;
 
+		i = list_first_entry(&nm_i->free_nid_list,
+					struct free_nid, list);
+		list_del(&i->list);
+		spin_unlock(&nm_i->nid_list_lock);
+
 		__remove_free_nid(sbi, i, FREE_NID);
 		kmem_cache_free(free_nid_slab, i);
 		nr_shrink--;
+
+		spin_lock(&nm_i->nid_list_lock);
 	}
 	spin_unlock(&nm_i->nid_list_lock);
 	mutex_unlock(&nm_i->build_lock);
-- 
2.18.0.rc1