Message-ID: <6bc551d2-15fc-5d17-c99b-8db588c6b671@linux.alibaba.com>
Date: Mon, 21 Mar 2022 22:08:47 +0800
Subject: Re: [PATCH v5 03/22] cachefiles: introduce on-demand read mode
From: JeffleXu
To: Matthew Wilcox
Cc: dhowells@redhat.com, linux-cachefs@redhat.com, xiang@kernel.org,
    chao@kernel.org, linux-erofs@lists.ozlabs.org,
    torvalds@linux-foundation.org, gregkh@linuxfoundation.org,
    linux-fsdevel@vger.kernel.org, joseph.qi@linux.alibaba.com,
    bo.liu@linux.alibaba.com, tao.peng@linux.alibaba.com,
    gerry@linux.alibaba.com, eguan@linux.alibaba.com,
    linux-kernel@vger.kernel.org, luodaowen.backend@bytedance.com
References: <20220316131723.111553-1-jefflexu@linux.alibaba.com>
            <20220316131723.111553-4-jefflexu@linux.alibaba.com>

On 3/21/22 9:40 PM, Matthew Wilcox wrote:
> On Wed, Mar 16, 2022 at 09:17:04PM +0800, Jeffle Xu wrote:
>> +#ifdef CONFIG_CACHEFILES_ONDEMAND
>> +	struct xarray	reqs;		/* xarray of pending on-demand requests */
>> +	rwlock_t	reqs_lock;	/* Lock for reqs xarray */
>
> Why do you have a separate rwlock when the xarray already has its own
> spinlock?  This is usually a really bad idea.

Hi,

Thanks for reviewing.

reqs_lock is also used to protect the check of cache->flags. Please
refer to patch 4 [1] of this patchset.

```
+	/*
+	 * Enqueue the pending request.
+	 *
+	 * Stop enqueuing the request when daemon is dying. So we need to
+	 * 1) check cache state, and 2) enqueue request if cache is alive.
+	 *
+	 * The above two ops need to be atomic as a whole. @reqs_lock is used
+	 * here to ensure that.
+	 * Otherwise, request may be enqueued after xarray
+	 * has been flushed, in which case the orphan request will never be
+	 * completed and thus netfs will hang there forever.
+	 */
+	read_lock(&cache->reqs_lock);
+
+	/* recheck dead state under lock */
+	if (test_bit(CACHEFILES_DEAD, &cache->flags)) {
+		read_unlock(&cache->reqs_lock);
+		ret = -EIO;
+		goto out;
+	}
+
+	xa_lock(xa);
+	ret = __xa_alloc(xa, &id, req, xa_limit_32b, GFP_KERNEL);
+	if (!ret)
+		__xa_set_mark(xa, id, CACHEFILES_REQ_NEW);
+	xa_unlock(xa);
+
+	read_unlock(&cache->reqs_lock);
```

It's mainly used to protect against the xarray flush. Besides, IMHO a
read-write lock should be more performance friendly, since most cases
are on the read side.

[1] https://lkml.org/lkml/2022/3/16/351

-- 
Thanks,
Jeffle