Date: Wed, 9 Jun 2021 08:18:15 +1000
From: Dave Chinner
To: Jan Kara
Cc: linux-fsdevel@vger.kernel.org, Christoph Hellwig, ceph-devel@vger.kernel.org,
	Chao Yu, Damien Le Moal, "Darrick J. Wong", Jaegeuk Kim, Jeff Layton,
	Johannes Thumshirn, linux-cifs@vger.kernel.org, linux-ext4@vger.kernel.org,
	linux-f2fs-devel@lists.sourceforge.net, linux-mm@kvack.org,
	linux-xfs@vger.kernel.org, Miklos Szeredi, Steve French, Ted Tso,
	Matthew Wilcox, Pavel Reichl, Dave Chinner, Eric Sandeen,
	Christoph Hellwig
Subject: Re: [PATCH 07/14] xfs: Refactor xfs_isilocked()
Message-ID: <20210608221815.GM664593@dread.disaster.area>
References: <20210607144631.8717-1-jack@suse.cz> <20210607145236.31852-7-jack@suse.cz>
In-Reply-To: <20210607145236.31852-7-jack@suse.cz>
X-Mailing-List: linux-ext4@vger.kernel.org

On Mon, Jun 07, 2021 at 04:52:17PM +0200, Jan Kara wrote:
> From: Pavel Reichl
>
> Refactor xfs_isilocked() to use the newly introduced
> __xfs_rwsem_islocked(), a helper function which encapsulates checking
> the state of the rw_semaphores held by an inode.
>
> Signed-off-by: Pavel Reichl
> Suggested-by: Dave Chinner
> Suggested-by: Eric Sandeen
> Suggested-by: Darrick J. Wong
> Reviewed-by: Darrick J. Wong
> Reviewed-by: Christoph Hellwig
> Signed-off-by: Jan Kara
> ---
>  fs/xfs/xfs_inode.c | 39 +++++++++++++++++++++++++++++++--------
>  fs/xfs/xfs_inode.h | 21 ++++++++++++++-------
>  2 files changed, 45 insertions(+), 15 deletions(-)

As a standalone patch, this is overly elaborate and way more complex
than it needs to be. It's not really just a refactor, either, because
of the unnecessary shifting games it adds.
> diff --git a/fs/xfs/xfs_inode.c b/fs/xfs/xfs_inode.c
> index 0369eb22c1bb..6247977870bd 100644
> --- a/fs/xfs/xfs_inode.c
> +++ b/fs/xfs/xfs_inode.c
> @@ -342,9 +342,34 @@ xfs_ilock_demote(
>  }
>
>  #if defined(DEBUG) || defined(XFS_WARN)
> -int
> +static inline bool
> +__xfs_rwsem_islocked(
> +	struct rw_semaphore	*rwsem,
> +	int			lock_flags,
> +	int			shift)
> +{
> +	lock_flags >>= shift;
> +
> +	if (!debug_locks)
> +		return rwsem_is_locked(rwsem);
> +
> +	/*
> +	 * If the shared flag is not set, pass 0 to explicitly check for
> +	 * exclusive access to the lock. If the shared flag is set, we typically
> +	 * want to make sure the lock is at least held in shared mode
> +	 * (i.e., shared | excl) but we don't necessarily care that it might
> +	 * actually be held exclusive. Therefore, pass -1 to check whether the
> +	 * lock is held in any mode rather than one of the explicit shared mode
> +	 * values (1 or 2).
> +	 */
> +	if (lock_flags & (1 << XFS_SHARED_LOCK_SHIFT)) {
> +		return lockdep_is_held_type(rwsem, -1);
> +	}
> +	return lockdep_is_held_type(rwsem, 0);
> +}

Pass in a boolean value for shared/exclusive and you can get rid of
passing in the lock flags as well:

static bool
__xfs_rwsem_islocked(
	struct rw_semaphore	*rwsem,
	bool			shared)
{
	if (!debug_locks)
		return rwsem_is_locked(rwsem);

	if (!shared)
		return lockdep_is_held_type(rwsem, 0);

	/*
	 * We are checking that the lock is held at least in shared
	 * mode but don't care that it might be held exclusively
	 * (i.e. shared | excl). Hence we check if the lock is held
	 * in any mode rather than an explicit shared mode.
	 */
	return lockdep_is_held_type(rwsem, -1);
}

> +
> +bool
>  xfs_isilocked(
> -	xfs_inode_t		*ip,
> +	struct xfs_inode	*ip,
>  	uint			lock_flags)
>  {
>  	if (lock_flags & (XFS_ILOCK_EXCL|XFS_ILOCK_SHARED)) {
> @@ -359,15 +384,13 @@ xfs_isilocked(
>  		return rwsem_is_locked(&ip->i_mmaplock.mr_lock);
>  	}
>
> -	if (lock_flags & (XFS_IOLOCK_EXCL|XFS_IOLOCK_SHARED)) {
> -		if (!(lock_flags & XFS_IOLOCK_SHARED))
> -			return !debug_locks ||
> -				lockdep_is_held_type(&VFS_I(ip)->i_rwsem, 0);
> -		return rwsem_is_locked(&VFS_I(ip)->i_rwsem);
> +	if (lock_flags & (XFS_IOLOCK_EXCL | XFS_IOLOCK_SHARED)) {
> +		return __xfs_rwsem_islocked(&VFS_I(ip)->i_rwsem, lock_flags,
> +				XFS_IOLOCK_FLAG_SHIFT);

Then this is simply:

	return __xfs_rwsem_islocked(&VFS_I(ip)->i_rwsem,
				(lock_flags & XFS_IOLOCK_SHARED));

And the conversion for the MMAPLOCK in the next patch is equally
simple.

> diff --git a/fs/xfs/xfs_inode.h b/fs/xfs/xfs_inode.h
> index ca826cfba91c..1c0e15c480bc 100644
> --- a/fs/xfs/xfs_inode.h
> +++ b/fs/xfs/xfs_inode.h
> @@ -262,12 +262,19 @@ static inline bool xfs_inode_has_bigtime(struct xfs_inode *ip)
>   * Bit ranges:	1<<1 - 1<<16-1 -- iolock/ilock modes (bitfield)
>   *		1<<16 - 1<<32-1 -- lockdep annotation (integers)
>   */
> -#define XFS_IOLOCK_EXCL		(1<<0)
> -#define XFS_IOLOCK_SHARED	(1<<1)
> -#define XFS_ILOCK_EXCL		(1<<2)
> -#define XFS_ILOCK_SHARED	(1<<3)
> -#define XFS_MMAPLOCK_EXCL	(1<<4)
> -#define XFS_MMAPLOCK_SHARED	(1<<5)
> +
> +#define XFS_IOLOCK_FLAG_SHIFT	0
> +#define XFS_ILOCK_FLAG_SHIFT	2
> +#define XFS_MMAPLOCK_FLAG_SHIFT	4
> +
> +#define XFS_SHARED_LOCK_SHIFT	1
> +
> +#define XFS_IOLOCK_EXCL		(1 << XFS_IOLOCK_FLAG_SHIFT)
> +#define XFS_IOLOCK_SHARED	(XFS_IOLOCK_EXCL << XFS_SHARED_LOCK_SHIFT)
> +#define XFS_ILOCK_EXCL		(1 << XFS_ILOCK_FLAG_SHIFT)
> +#define XFS_ILOCK_SHARED	(XFS_ILOCK_EXCL << XFS_SHARED_LOCK_SHIFT)
> +#define XFS_MMAPLOCK_EXCL	(1 << XFS_MMAPLOCK_FLAG_SHIFT)
> +#define XFS_MMAPLOCK_SHARED	(XFS_MMAPLOCK_EXCL << XFS_SHARED_LOCK_SHIFT)
>
>  #define XFS_LOCK_MASK
> 		(XFS_IOLOCK_EXCL | XFS_IOLOCK_SHARED \
> 		| XFS_ILOCK_EXCL | XFS_ILOCK_SHARED \

And all this shifting goes away and the change is much, much simpler.

If/when other changes are made to this code that require shifting like
this, then we can make these modifications. But in this patch, for this
usage, they don't really make much sense at all.

Cheers,

Dave.
-- 
Dave Chinner
david@fromorbit.com