Subject: Re: [f2fs-dev] [PATCH] f2fs: avoid race condition for shinker count
From: Chao Yu
To: Jaegeuk Kim
CC: Light Hsieh
Date: Tue, 10 Nov 2020 10:15:43 +0800
In-Reply-To: <20201109170012.2129411-1-jaegeuk@kernel.org>
X-Mailing-List: linux-kernel@vger.kernel.org

On 2020/11/10 1:00, Jaegeuk Kim wrote:
> Light reported sometimes shinker gets nat_cnt < dirty_nat_cnt resulting in

I didn't get the problem clearly; did you mean __count_nat_entries()
will give the wrong shrink count due to a race condition? Should there
be a lock held while reading these two variables?

> wrong do_shinker work. Basically the two counts should not happen like that.
>
> So, I suspect this race condtion where:
> - f2fs_try_to_free_nats		__flush_nat_entry_set
>    nat_cnt=2, dirty_nat_cnt=2
>				__clear_nat_cache_dirty
>				 spin_lock(nat_list_lock)
>				 list_move()
>				 spin_unlock(nat_list_lock)
>    spin_lock(nat_list_lock)
>    list_del()
>    spin_unlock(nat_list_lock)
>    nat_cnt=1, dirty_nat_cnt=2
>				nat_cnt=1, dirty_nat_cnt=1

nm_i->nat_cnt and nm_i->dirty_nat_cnt are protected by nm_i->nat_tree_lock,
so I don't see why expanding the nat_list_lock range will help... there are
still places where nat_list_lock doesn't cover these two reference counts.

Thanks,

>
> Reported-by: Light Hsieh
> Signed-off-by: Jaegeuk Kim
> ---
>  fs/f2fs/node.c | 3 +--
>  1 file changed, 1 insertion(+), 2 deletions(-)
>
> diff --git a/fs/f2fs/node.c b/fs/f2fs/node.c
> index 42394de6c7eb..e8ec65e40f06 100644
> --- a/fs/f2fs/node.c
> +++ b/fs/f2fs/node.c
> @@ -269,11 +269,10 @@ static void __clear_nat_cache_dirty(struct f2fs_nm_info *nm_i,
>  {
>  	spin_lock(&nm_i->nat_list_lock);
>  	list_move_tail(&ne->list, &nm_i->nat_entries);
> -	spin_unlock(&nm_i->nat_list_lock);
> -
>  	set_nat_flag(ne, IS_DIRTY, false);
>  	set->entry_cnt--;
>  	nm_i->dirty_nat_cnt--;
> +	spin_unlock(&nm_i->nat_list_lock);
>  }
>
>  static unsigned int __gang_lookup_nat_set(struct f2fs_nm_info *nm_i,
>