Date: Fri, 26 Apr 2019 17:50:55 +0100
From: Al Viro
To: Jeff Layton
Cc: Linus Torvalds, Ilya Dryomov, ceph-devel@vger.kernel.org, Linux List Kernel Mailing
Subject: Re: [GIT PULL] Ceph fixes for 5.1-rc7
Message-ID: <20190426165055.GY2217@ZenIV.linux.org.uk>
References: <20190425174739.27604-1-idryomov@gmail.com> <342ef35feb1110197108068d10e518742823a210.camel@kernel.org> <20190425200941.GW2217@ZenIV.linux.org.uk> <86674e79e9f24e81feda75bc3c0dd4215604ffa5.camel@kernel.org>
In-Reply-To: <86674e79e9f24e81feda75bc3c0dd4215604ffa5.camel@kernel.org>

On Fri, Apr 26, 2019 at 12:25:03PM -0400, Jeff Layton wrote:
> It turns out though that using name_snapshot from ceph is a bit more
> tricky. In some cases, we have to call ceph_mdsc_build_path to build up
> a full path string. We can't easily populate a name_snapshot from there
> because struct external_name is only defined in fs/dcache.c.

Explain, please.  For ceph_mdsc_build_path() you don't need name
snapshots at all, and the existing code is, AFAICS, just fine, except
for the pointless pr_err() there.

I _probably_ would take the allocation out of the loop (e.g. make it
__getname(), called unconditionally) and turn it into the d_path.c-style
read_seqbegin_or_lock()/need_seqretry()/done_seqretry() loop, so that
the first pass would go under rcu_read_lock(), while the second (if
needed) would just hold rename_lock exclusive (without bumping the
refcount).  But that's a matter of (theoretical) livelock avoidance,
not of locking correctness for ->d_name accesses.

Oh, and

	*base = ceph_ino(d_inode(temp));
	*plen = len;

probably belongs in the critical section - _that_ might be a correctness
issue, since temp is not held by anything once you are out of there.
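Something like the following is the shape I mean - completely untested,
with ceph's snapdir/nosnap special-casing elided and the error handling
approximate; rename_lock and read_seqbegin_or_lock()/need_seqretry()/
done_seqretry() are the real primitives, the rest is a sketch.  Note the
ino/len being picked up before we leave the walk:

	char *path;
	struct dentry *temp;
	int pos, err, len = 0;
	u64 ino = 0;
	int seq = 0;

	path = __getname();		/* one allocation, outside the loop */
	if (!path)
		return ERR_PTR(-ENOMEM);

	rcu_read_lock();
restart:
	err = 0;
	pos = PATH_MAX - 1;
	path[pos] = '\0';
	read_seqbegin_or_lock(&rename_lock, &seq);
	for (temp = dentry; !IS_ROOT(temp); temp = temp->d_parent) {
		spin_lock(&temp->d_lock);	/* stabilize d_name for the copy */
		if (pos < (int)temp->d_name.len + 1) {
			err = -ENAMETOOLONG;
			spin_unlock(&temp->d_lock);
			break;
		}
		pos -= temp->d_name.len;
		memcpy(path + pos, temp->d_name.name, temp->d_name.len);
		path[--pos] = '/';
		spin_unlock(&temp->d_lock);
	}
	if (!err) {
		/* temp is not pinned - read these before dropping rcu/lock */
		ino = ceph_ino(d_inode(temp));
		len = PATH_MAX - 1 - pos;
	}
	if (need_seqretry(&rename_lock, seq)) {
		seq = 1;		/* second pass: rename_lock exclusive */
		goto restart;
	}
	done_seqretry(&rename_lock, seq);
	rcu_read_unlock();

	if (err) {
		__putname(path);
		return ERR_PTR(err);
	}
	*base = ino;
	*plen = len;
	return path + pos;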
> I could add some routines to do this, but it feels a lot like I'm
> abusing internal dcache interfaces. I'll keep thinking about it though.
>
> While we're on the subject though:
>
> struct external_name {
> 	union {
> 		atomic_t count;
> 		struct rcu_head head;
> 	} u;
> 	unsigned char name[];
> };
>
> Is it really ok to union the count and rcu_head there?
>
> I haven't trawled through all of the code yet, but what prevents someone
> from trying to access the count inside an RCU critical section, after
> call_rcu has been called on it?

The fact that no lockless accesses to ->count are ever done?
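Every bump of ->u.count happens under the owning dentry's ->d_lock,
while the name is still reachable (so the count is guaranteed positive),
and every drop is done by a holder of a counted reference; ->u.head is
touched only after the count has hit zero, at which point nothing can
find the object any more.  Schematically - illustrative helpers, not
the actual fs/dcache.c functions:

	static void name_get(struct external_name *p)
	{
		/* caller holds ->d_lock and p is still some dentry's
		   name, so p->u.count is known to be >= 1 here */
		atomic_inc(&p->u.count);
	}

	static void name_put(struct external_name *p)
	{
		/* last reference gone: p is unreachable from now on,
		   so the storage can be reused as the rcu_head */
		if (atomic_dec_and_test(&p->u.count))
			kfree_rcu(p, u.head);
	}

The two members of the union are never live at the same time; that's
what makes the overlay safe.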