Date: Fri, 30 Aug 2019 15:08:04 -0400
From: "J. Bruce Fields"
To: Alex Lyakas
Cc: chuck.lever@oracle.com, linux-nfs@vger.kernel.org, Shyam Kaushik
Subject: Re: [RFC-PATCH] nfsd: when unhashing openowners, increment openowner's refcount
Message-ID: <20190830190804.GB5053@fieldses.org>
References: <1566406146-7887-1-git-send-email-alex@zadara.com> <20190826133951.GC22759@fieldses.org> <20190827205158.GB13198@fieldses.org> <20190828165429.GC26284@fieldses.org>
List-ID: <linux-nfs.vger.kernel.org>

On Thu, Aug 29, 2019 at 09:12:49PM +0300, Alex Lyakas wrote:
> We evaluated the network namespaces approach. But, unfortunately, it
> doesn't fit easily into how our system is currently structured. We
> would have to create and configure interfaces for every namespace, and
> have a separate IP address (presumably) for every namespace.
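(For context on the per-namespace overhead described above, a rough sketch of what each export would need; the namespace, interface, and address names here are illustrative, not from the thread:)

```shell
# Hypothetical per-export namespace setup: a veth pair, a dedicated
# IP address, and an nfsd instance running inside the namespace.
ip netns add exportns
ip link add veth-host type veth peer name veth-ns
ip link set veth-ns netns exportns

# Address the host side and the namespace side.
ip addr add 192.168.100.1/24 dev veth-host
ip link set veth-host up
ip netns exec exportns ip addr add 192.168.100.2/24 dev veth-ns
ip netns exec exportns ip link set veth-ns up

# Start nfsd threads inside the namespace to serve that one export.
ip netns exec exportns rpc.nfsd 8
```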
Yes.

> All this seems a bit of an overkill, to just have several local
> filesystems exported to the same client (which is when we hit the
> issue). I would assume that some other users would argue as well that
> creating a separate network namespace for every local filesystem is
> not the way to go from the administration point of view.

OK, makes sense. And I take it you don't want to go around to each
client and shut things down cleanly. And you're fine with the client
applications erroring out when you yank their filesystem out from
underneath them.

(I wonder what happens these days when that happens on a linux client
when there are dirty pages. I think you may just end up with a useless
mount that you can't get rid of till you power down the client.)

> Regarding the failure injection code, we did not actually enable and
> use it. We instead wrote some custom code that is highly modeled after
> the failure injection code.

Sounds interesting.... I'll try to understand it and give some comments
later.

...

> Currently this code is invoked from a custom procfs entry, by
> user-space application, before unmounting the local file system.
>
> Would moving this code into the "unlock_filesystem" infrastructure be
> acceptable?

Yes. I'd be interested in patches.

> Since the "share_id" approach is very custom for our usage, what
> criteria would you suggest for selecting the openowners to be
> "forgotten"?

The share_id shouldn't be necessary. I'll think about it.

--b.
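(For reference, the "unlock_filesystem" infrastructure mentioned above is nfsd's existing proc interface for dropping NLM locks on a filesystem before it is unexported, e.g. during HA failover; the proposal is to hook the forget-openowners logic into the same place. The export path and client address below are illustrative:)

```shell
# Existing nfsd interface: release NLM locks held on a filesystem,
# typically run just before unexporting/unmounting it.
echo /export/myfs > /proc/fs/nfsd/unlock_filesystem

# There is also a per-client variant, keyed by client IP address.
echo 192.168.1.50 > /proc/fs/nfsd/unlock_ip
```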