Date: Thu, 15 Aug 2019 15:54:04 -0400
From: bfields@fieldses.org (J. Bruce Fields)
To: linux-nfs@vger.kernel.org
Subject: [PATCH] nfsd: use i_wrlock instead of rcu for nfsdfs i_private
Message-ID: <20190815195404.GA19554@fieldses.org>

From: "J. Bruce Fields"

synchronize_rcu() gets called multiple times each time a client is
destroyed.  If the laundromat thread has a lot of clients to destroy,
the delay can be noticeable.  This was causing the pynfs RENEW3 test to
fail.

We could embed an rcu_head in each inode and do the kref_put in an rcu
callback, but the simplest fix is just to take a lock here.

(I also wonder if the laundromat thread would be better replaced by a
bunch of scheduled work items or timers or something.)
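For reference, the rcu_head alternative mentioned above would look
roughly like the sketch below.  It is not what this patch does, and the
cl_rcu field and callback name are hypothetical; the message says "in
each inode", but the natural place to embed the rcu_head is the
nfsdfs_client that already hangs off i_private, which gives the same
deferral without blocking in synchronize_rcu():

static void ncl_release_rcu(struct rcu_head *head)
{
	/* cl_rcu would be a new rcu_head field in struct nfsdfs_client */
	struct nfsdfs_client *ncl =
		container_of(head, struct nfsdfs_client, cl_rcu);

	kref_put(&ncl->cl_ref, ncl->cl_release);
}

static void clear_ncl(struct inode *inode)
{
	struct nfsdfs_client *ncl = inode->i_private;

	inode->i_private = NULL;
	/* defer the put until rcu_read_lock() readers are done */
	call_rcu(&ncl->cl_rcu, ncl_release_rcu);
}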
Signed-off-by: J. Bruce Fields
---
 fs/nfsd/nfsctl.c | 6 ++----
 1 file changed, 2 insertions(+), 4 deletions(-)

diff --git a/fs/nfsd/nfsctl.c b/fs/nfsd/nfsctl.c
index 928a0b2c05dc..b14f825c62fe 100644
--- a/fs/nfsd/nfsctl.c
+++ b/fs/nfsd/nfsctl.c
@@ -1215,11 +1215,9 @@ static void clear_ncl(struct inode *inode)
 	struct nfsdfs_client *ncl = inode->i_private;
 
 	inode->i_private = NULL;
-	synchronize_rcu();
 	kref_put(&ncl->cl_ref, ncl->cl_release);
 }
 
-
 static struct nfsdfs_client *__get_nfsdfs_client(struct inode *inode)
 {
 	struct nfsdfs_client *nc = inode->i_private;
@@ -1233,9 +1231,9 @@ struct nfsdfs_client *get_nfsdfs_client(struct inode *inode)
 {
 	struct nfsdfs_client *nc;
 
-	rcu_read_lock();
+	inode_lock_shared(inode);
 	nc = __get_nfsdfs_client(inode);
-	rcu_read_unlock();
+	inode_unlock_shared(inode);
 	return nc;
 }
 /* from __rpc_unlink */
-- 
2.21.0
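A note on why the shared lock on the read side is enough: with this
change get_nfsdfs_client() takes i_rwsem shared, so the path that clears
i_private has to hold the same rwsem exclusively (the "i_wrlock" in the
subject) for the i_private lookup and reference grab to stay safe.  A
minimal illustration of that pairing, with a hypothetical caller; where
the exclusive lock is actually taken in the nfsdfs unlink path is not
shown by this diff:

	/* hypothetical caller on the teardown side */
	inode_lock(inode);	/* down_write(&inode->i_rwsem) */
	clear_ncl(inode);	/* i_private = NULL; kref_put(...) */
	inode_unlock(inode);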