Date: Sun, 28 Apr 2019 15:48:50 +0100
From: Al Viro
To: Jeff Layton
Cc: Linus Torvalds, Ilya Dryomov, ceph-devel@vger.kernel.org,
    Linux List Kernel Mailing, linux-cifs
Subject: Re: [GIT PULL] Ceph fixes for 5.1-rc7
Message-ID: <20190428144850.GA23075@ZenIV.linux.org.uk>
References: <20190425174739.27604-1-idryomov@gmail.com>
            <342ef35feb1110197108068d10e518742823a210.camel@kernel.org>
            <20190425200941.GW2217@ZenIV.linux.org.uk>
            <86674e79e9f24e81feda75bc3c0dd4215604ffa5.camel@kernel.org>
            <20190426165055.GY2217@ZenIV.linux.org.uk>
            <20190428043801.GE2217@ZenIV.linux.org.uk>
            <7bac7ba5655a8e783a70f915853a0846e7ff143b.camel@kernel.org>
In-Reply-To: <7bac7ba5655a8e783a70f915853a0846e7ff143b.camel@kernel.org>
On Sun, Apr 28, 2019 at 09:27:20AM -0400, Jeff Layton wrote:
> I don't see a problem doing what you suggest.  An offset + fixed-length
> buffer would be fine there.
>
> Is there a real benefit to using __getname though?  It sucks when we
> have to reallocate, but I doubt that happens with any frequency.  Most
> of these paths will end up being much shorter than PATH_MAX, and that
> slims down the memory footprint a bit.

AFAICS, they are all short-lived; don't forget that slabs have caches,
so in that situation allocations are cheap.

> Also, FWIW -- this code was originally copied from cifs'
> build_path_from_dentry().  Should we aim to put something in common
> infrastructure that both can call?
>
> There are some significant logic differences in the two functions,
> though, so we might need some sort of callback function or something
> to know when to stop walking.

Not if you want it fast...  Indirect calls are not cheap; the cost of
those callbacks would be considerable.  Besides, you want more than
"where do I stop", right?  It's also "what output do I use for this
dentry", both for you and for cifs (there it's "which separator to
use"; in ceph it's "these we want represented as //")...

Can it be called on a detached subtree, e.g. during open_by_handle()?
There it can get really fishy; you end up with the base being at a
random point on the way towards the root.  How does that work, and if
it *does* work, why do we need the whole path in the first place?

BTW, for cifs there's no need to play with ->d_lock as we go.  For
ceph, the only need comes from looking at d_inode(), and I wonder if it
would be better to duplicate that information ("is that a
snapdir/nosnap") into the dentry itself - that would certainly be
cheaper.  OTOH, we are getting short on spare bits in ->d_flags...
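
[Editor's note: for readers following the thread, below is a minimal,
self-contained userspace sketch of the "offset + fixed-length buffer"
approach being discussed - walk from the target entry toward the root
and fill a PATH_MAX-sized buffer from the tail, so the path comes out
in the right order in one pass.  The toy_dentry struct, function names
and separator handling are purely illustrative assumptions; this is
not the actual ceph_mdsc_build_path() / build_path_from_dentry() code,
which also has to cope with concurrent renames, locking, and the
snapdir handling mentioned above.]

/*
 * Illustrative userspace sketch only.  A toy "dentry" stands in for
 * the real thing; the real kernel walk would take ->d_lock / retry on
 * rename races, which is exactly the part under discussion here.
 */
#include <stdio.h>
#include <string.h>

#define TOY_PATH_MAX 4096

struct toy_dentry {
	const char *name;
	struct toy_dentry *parent;	/* NULL at the root */
};

/*
 * Build "/a/b/c" for @d into @buf (TOY_PATH_MAX bytes).  Returns a
 * pointer *into* @buf at the start of the path, or NULL if it does
 * not fit - the "return buffer + offset" style, which avoids a second
 * pass to shift the result to the front of the buffer.
 */
static char *toy_build_path(struct toy_dentry *d, char *buf)
{
	int pos = TOY_PATH_MAX;

	buf[--pos] = '\0';
	for (; d && d->parent; d = d->parent) {
		size_t len = strlen(d->name);

		if (pos < (int)len + 1)
			return NULL;		/* would not fit */
		pos -= len;
		memcpy(buf + pos, d->name, len);
		buf[--pos] = '/';	/* cifs would pick its own separator here */
	}
	if (pos == TOY_PATH_MAX - 1)
		buf[--pos] = '/';		/* path of the root itself */
	return buf + pos;
}

int main(void)
{
	struct toy_dentry root = { "", NULL };
	struct toy_dentry a = { "a", &root };
	struct toy_dentry b = { "b", &a };
	char buf[TOY_PATH_MAX];

	printf("%s\n", toy_build_path(&b, buf));	/* prints /a/b */
	return 0;
}

[The per-component decisions the thread talks about - which separator
to emit, whether to stop early, whether a component is a snapdir - all
land at the two commented spots inside the loop, which is why an
indirect callback per component would sit on the hot path.]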