Subject: Re: [GIT PULL] Ceph fixes for 5.1-rc7
From: Jeff Layton
To: Al Viro
Cc: Linus Torvalds, Ilya Dryomov, ceph-devel@vger.kernel.org,
    Linux List Kernel Mailing, linux-cifs
Date: Sun, 28 Apr 2019 11:47:58 -0400
In-Reply-To: <20190428144850.GA23075@ZenIV.linux.org.uk>
References: <20190425174739.27604-1-idryomov@gmail.com>
    <342ef35feb1110197108068d10e518742823a210.camel@kernel.org>
    <20190425200941.GW2217@ZenIV.linux.org.uk>
    <86674e79e9f24e81feda75bc3c0dd4215604ffa5.camel@kernel.org>
    <20190426165055.GY2217@ZenIV.linux.org.uk>
    <20190428043801.GE2217@ZenIV.linux.org.uk>
    <7bac7ba5655a8e783a70f915853a0846e7ff143b.camel@kernel.org>
    <20190428144850.GA23075@ZenIV.linux.org.uk>
User-Agent: Evolution 3.30.5 (3.30.5-1.fc29)
X-Mailing-List: linux-kernel@vger.kernel.org

On Sun, 2019-04-28 at 15:48 +0100, Al Viro wrote:
> On Sun, Apr 28, 2019 at 09:27:20AM -0400, Jeff Layton wrote:
> 
> > I don't see a problem doing what you suggest. An offset + fixed-length
> > buffer would be fine there.
> > 
> > Is there a real benefit to using __getname though? It sucks when we
> > have to reallocate, but I doubt that happens with any frequency. Most
> > of these paths will end up being much shorter than PATH_MAX, and that
> > slims down the memory footprint a bit.
> 
> AFAICS, they are all short-lived; don't forget that slabs have a cache,
> so in that situation allocations are cheap.

Fair enough. Al also pointed out on IRC that the __getname()/__putname()
caches are likely to be hot, so using them may be less costly CPU-wise.
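For anyone following along, the pattern under discussion looks roughly
like the sketch below. This is hand-waving only (build_path_sketch() is
a made-up name), not the real ceph_mdsc_build_path(), which also has to
cope with concurrent renames, snapdir naming, and the ->d_lock rules Al
mentions below:

    #include <linux/dcache.h>
    #include <linux/err.h>
    #include <linux/fs.h>       /* __getname()/__putname(), PATH_MAX */
    #include <linux/string.h>

    /* Build "/a/b/c" for a dentry by filling a PATH_MAX buffer backwards. */
    static char *build_path_sketch(struct dentry *dentry, int *plen)
    {
            char *buf, *pos;
            struct dentry *d;

            buf = __getname();  /* PATH_MAX buffer from the names_cachep slab */
            if (!buf)
                    return ERR_PTR(-ENOMEM);

            pos = buf + PATH_MAX - 1;
            *pos = '\0';

            /* NB: no locking here; real code must guard against renames. */
            for (d = dentry; !IS_ROOT(d); d = d->d_parent) {
                    int len = d->d_name.len;

                    if (pos - buf <= len) {
                            __putname(buf);
                            return ERR_PTR(-ENAMETOOLONG);
                    }
                    pos -= len;
                    memcpy(pos, d->d_name.name, len);
                    *--pos = '/';
            }

            *plen = buf + PATH_MAX - 1 - pos;
            /* Caller must free the underlying buffer, i.e.
             * __putname(ret - (PATH_MAX - 1 - *plen)). */
            return pos;
    }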
> > Also, FWIW -- this code was originally copied from cifs'
> > build_path_from_dentry(). Should we aim to put something in common
> > infrastructure that both can call?
> > 
> > There are some significant logic differences in the two functions
> > though, so we might need some sort of callback function or something
> > to know when to stop walking.
> 
> Not if you want it fast... Indirect calls are not cheap; the cost of
> those callbacks would be considerable. Besides, you want more than
> "where do I stop", right? It's also "what output do I use for this
> dentry", both for you and for cifs (there it's "which separator to use";
> in ceph it's "these we want represented as //")...
> 
> Can it be called on a detached subtree, during e.g. open_by_handle()?
> There it can get really fishy; you end up with the base being at a
> random point on the way towards the root. How does that work, and if
> it *does* work, why do we need the whole path in the first place?

This I'm not sure of. Commit 79b33c8874334e ("ceph: snapshot nfs
re-export") explains it a bit, but I'm not sure it really covers this
case. Zheng/Sage, feel free to correct me here:

My understanding is that for snapshots you need the base inode number,
the snapid, and the full path from there to the dentry for a ceph MDS
call. There is a filehandle type for a snapshotted inode:

    struct ceph_nfs_snapfh {
            u64 ino;
            u64 snapid;
            u64 parent_ino;
            u32 hash;
    } __attribute__ ((packed));

So I guess it is possible. You could do name_to_handle_at() for an inode
deep down in a snapshotted tree, and then try open_by_handle_at() after
the dcache gets cleaned out for some other reason.

What I'm not clear on is why we need to build paths at all for
snapshots. Why is a parent inode number (inside the snapshot) + a snapid
+ a dentry name not sufficient?

> BTW, for cifs there's no need to play with ->d_lock as we go. For
> ceph, the only need comes from looking at d_inode(), and I wonder if
> it would be better to duplicate that information ("is that a
> snapdir/nosnap") into the dentry itself - would certainly be cheaper.
> OTOH, we are getting short on spare bits in ->d_flags...

We could stick that in ceph_dentry_info (->d_fsdata). We have a flags
field in there already.
-- 
Jeff Layton
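To make the detached-subtree scenario concrete, the userspace side of
the handle round trip looks roughly like this (generic Linux syscalls
only, nothing ceph-specific; error handling trimmed):

    #define _GNU_SOURCE
    #include <fcntl.h>          /* name_to_handle_at(), open_by_handle_at() */
    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>

    int main(int argc, char **argv)
    {
            struct file_handle *fh;
            int mount_id, mount_fd, fd;

            if (argc != 3) {
                    fprintf(stderr, "usage: %s <path> <mountpoint>\n", argv[0]);
                    return 1;
            }

            fh = malloc(sizeof(*fh) + MAX_HANDLE_SZ);
            fh->handle_bytes = MAX_HANDLE_SZ;

            /* The fs encodes an opaque handle (e.g. ceph_nfs_snapfh). */
            if (name_to_handle_at(AT_FDCWD, argv[1], fh, &mount_id, 0) == -1) {
                    perror("name_to_handle_at");
                    return 1;
            }

            /* ... later, possibly after the dcache has been pruned ... */

            /* Decoding can leave the fs holding a dentry chain that is not
             * connected all the way up to the root -- the "detached subtree"
             * case above. open_by_handle_at() needs CAP_DAC_READ_SEARCH. */
            mount_fd = open(argv[2], O_RDONLY | O_DIRECTORY);
            fd = open_by_handle_at(mount_fd, fh, O_RDONLY);
            if (fd == -1) {
                    perror("open_by_handle_at");
                    return 1;
            }
            close(fd);
            return 0;
    }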