Date: Wed, 28 Aug 2019 12:54:29 -0400
From: "J. Bruce Fields"
To: Alex Lyakas
Cc: chuck.lever@oracle.com, linux-nfs@vger.kernel.org, Shyam Kaushik
Subject: Re: [RFC-PATCH] nfsd: when unhashing openowners, increment openowner's refcount
Message-ID: <20190828165429.GC26284@fieldses.org>
References: <1566406146-7887-1-git-send-email-alex@zadara.com> <20190826133951.GC22759@fieldses.org> <20190827205158.GB13198@fieldses.org>
X-Mailing-List: linux-nfs@vger.kernel.org

On Wed, Aug 28, 2019 at 06:20:22PM +0300, Alex Lyakas wrote:
> On Tue, Aug 27, 2019 at 11:51 PM J. Bruce Fields wrote:
> >
> > On Tue, Aug 27, 2019 at 12:05:28PM +0300, Alex Lyakas wrote:
> > > Is the described issue familiar to you?
> >
> > Yep, got it, but I haven't seen anyone try to solve it using the fault
> > injection code, that's interesting!
> >
> > There's also fs/nfsd/unlock_filesystem. It only unlocks NLM (NFSv3)
> > locks. But it'd probably be reasonable to teach it to get NFSv4 state
> > too (locks, opens, delegations, and layouts).
> >
> > But my feeling's always been that the cleanest way to do it is to create
> > two containers with separate net namespaces and run nfsd in both of
> > them. You can start and stop the servers in the different containers
> > independently.
>
> I am looking at the code, and currently nfsd creates a single
> namespace subsystem in init_nfsd. All nfs4_clients run in this
> subsystem.
>
> So the proposal is to use register_pernet_subsys() for every
> filesystem that is exported?

No, I'm not proposing any kernel changes. Just create separate net
namespaces from userspace and start nfsd from within them. And you'll
also need to arrange for the different nfsds to get different exports.
In practice, the best way to do this may be using some container
management service, I'm not sure.

> I presume that the current nfsd code cannot do this, and some rework
> is required to move away from a single subsystem to a per-export
> subsystem. Also, grepping through kernel code, I see that namespace
> subsystems are created by different modules as part of module
> initialization, rather than dynamically. Furthermore, in our case the
> same nfsd machine S can export tens or even hundreds of local
> filesystems. Is it fine to have hundreds of subsystems?

I haven't done it myself, but I suspect hundreds of containers should
be OK. It may depend on available resources, of course.

> Otherwise, I understand that the current behavior is a "won't fix",
> and it is expected for the client machine to unmount the export before
> un-exporting the filesystem at the nfsd machine. Is this correct?

You're definitely not the only ones to request this, so I'd like to
have a working solution. My preference would be to try the
namespace/container approach first.
And if that turns out not to work well for some reason, to update
fs/nfsd/unlock_filesystem to handle NFSv4 stuff.

The fault injection code isn't the right interface for this. Even if
we did decide it was worth fixing up and maintaining, it's really only
designed for testing clients. I'd expect distros not to build it into
their default kernels.

--b.
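[Editorial note: the userspace-only setup described above, in which each net namespace gets its own independently startable nfsd, can be sketched roughly as follows. This is untested and requires root and nfs-utils; the namespace name (ns1), addresses, and export file paths are all hypothetical, and a real deployment would more likely use a container runtime to do the same thing.]

```shell
# Create a network namespace with its own veth interface and address.
ip netns add ns1
ip link add veth-ns1 type veth peer name veth-ns1-host
ip link set veth-ns1 netns ns1
ip netns exec ns1 ip addr add 10.0.1.2/24 dev veth-ns1
ip netns exec ns1 ip link set veth-ns1 up

# Inside the namespace, give this nfsd instance its own export list by
# bind-mounting a per-namespace exports file in a private mount
# namespace, then start the usual daemons.
ip netns exec ns1 unshare --mount sh -c '
    mount --bind /etc/exports.ns1 /etc/exports  # per-namespace exports
    rpcbind
    exportfs -a
    rpc.nfsd 8                                  # start 8 nfsd threads
'

# Setting the thread count to zero stops the server in this namespace
# only; nfsd instances in other namespaces keep running.
ip netns exec ns1 rpc.nfsd 0
```

The point of the exercise is that shutting down the server in one namespace releases its NFSv4 state (opens, locks, delegations) without disturbing exports served from the other namespaces, which is the effect the per-filesystem teardown was after.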