Long ago, file locks hung off a singly-linked list in struct inode.
Because of this, when leases were added they were put on the same list,
and so they had to be tracked using the same sort of structure.
Several years ago, we added struct file_lock_context, which allowed us
to use separate lists to track different types of file locks. Given
that, leases no longer need to be tracked using struct file_lock.
That said, a lot of the underlying infrastructure _is_ the same between
file leases and locks, so we can't completely separate everything.
This patchset first splits a group of fields used by both file locks and
leases into a new struct file_lock_core, which is then embedded in struct
file_lock. Coccinelle was then used to convert most of the callers to
deal with the move, with the remaining 25% or so converted by hand.
It then converts several internal functions in fs/locks.c to work
with struct file_lock_core. Lastly, struct file_lock is split into
struct file_lock and struct file_lease, and the lease-related APIs are
converted to take struct file_lease.
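To give a rough picture of where this ends up (a simplified sketch rather
than the exact final definitions -- see the later patches for those), the
common fields move into struct file_lock_core and the byte-range-specific
fields stay behind in struct file_lock:

    struct file_lock_core {
            struct file_lock *flc_blocker;  /* the lock that is blocking us */
            struct list_head flc_list;      /* link into file_lock_context */
            /* ... other fields shared by locks and leases ... */
            fl_owner_t flc_owner;
            unsigned int flc_flags;
            unsigned char flc_type;
            struct file *flc_file;
    };

    struct file_lock {
            struct file_lock_core fl_core;  /* embedded common part */
            loff_t fl_start;                /* byte-range fields stay here */
            loff_t fl_end;
            /* ... */
    };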
After the first few patches (which I left split up for easier review),
the set should be bisectable. I plan to squash those first few together
before merge to make sure the resulting set stays bisectable.
Finally, I left the coccinelle scripts I used in-tree. I had heard it
was preferable to merge those along with the patches that they
generate, but I wasn't sure where they should go. I can either move them
to a more appropriate location, or we can just drop that commit if it's
not needed.
Signed-off-by: Jeff Layton <[email protected]>
---
Changes in v2:
- renamed file_lock_core fields to have "flc_" prefix
- used macros to more easily do the change piecemeal
- broke up patches into per-subsystem ones
- Link to v1: https://lore.kernel.org/r/[email protected]
---
Jeff Layton (41):
filelock: rename some fields in tracepoints
filelock: rename fl_pid variable in lock_get_status
dlm: rename fl_flags variable in dlm_posix_unlock
nfs: rename fl_flags variable in nfs4_proc_unlck
nfsd: rename fl_type and fl_flags variables in nfsd4_lock
lockd: rename fl_flags and fl_type variables in nlmclnt_lock
9p: rename fl_type variable in v9fs_file_do_lock
afs: rename fl_type variable in afs_next_locker
filelock: drop the IS_* macros
filelock: split common fields into struct file_lock_core
filelock: add coccinelle scripts to move fields to struct file_lock_core
filelock: have fs/locks.c deal with file_lock_core directly
filelock: convert some internal functions to use file_lock_core instead
filelock: convert more internal functions to use file_lock_core
filelock: make posix_same_owner take file_lock_core pointers
filelock: convert posix_owner_key to take file_lock_core arg
filelock: make locks_{insert,delete}_global_locks take file_lock_core arg
filelock: convert locks_{insert,delete}_global_blocked
filelock: make __locks_delete_block and __locks_wake_up_blocks take file_lock_core
filelock: convert __locks_insert_block, conflict and deadlock checks to use file_lock_core
filelock: convert fl_blocker to file_lock_core
filelock: clean up locks_delete_block internals
filelock: reorganize locks_delete_block and __locks_insert_block
filelock: make assign_type helper take a file_lock_core pointer
filelock: convert locks_wake_up_blocks to take a file_lock_core pointer
filelock: convert locks_insert_lock_ctx and locks_delete_lock_ctx
filelock: convert locks_translate_pid to take file_lock_core
filelock: convert seqfile handling to use file_lock_core
9p: adapt to breakup of struct file_lock
afs: adapt to breakup of struct file_lock
ceph: adapt to breakup of struct file_lock
dlm: adapt to breakup of struct file_lock
gfs2: adapt to breakup of struct file_lock
lockd: adapt to breakup of struct file_lock
nfs: adapt to breakup of struct file_lock
nfsd: adapt to breakup of struct file_lock
ocfs2: adapt to breakup of struct file_lock
smb/client: adapt to breakup of struct file_lock
smb/server: adapt to breakup of struct file_lock
filelock: remove temporary compatibility macros
filelock: split leases out of struct file_lock
cocci/filelock.cocci | 88 +++++
cocci/nlm.cocci | 81 ++++
fs/9p/vfs_file.c | 40 +-
fs/afs/flock.c | 59 +--
fs/ceph/locks.c | 74 ++--
fs/dlm/plock.c | 44 +--
fs/gfs2/file.c | 16 +-
fs/libfs.c | 2 +-
fs/lockd/clnt4xdr.c | 14 +-
fs/lockd/clntlock.c | 2 +-
fs/lockd/clntproc.c | 65 +--
fs/lockd/clntxdr.c | 14 +-
fs/lockd/svc4proc.c | 10 +-
fs/lockd/svclock.c | 64 +--
fs/lockd/svcproc.c | 10 +-
fs/lockd/svcsubs.c | 24 +-
fs/lockd/xdr.c | 14 +-
fs/lockd/xdr4.c | 14 +-
fs/locks.c | 848 ++++++++++++++++++++++------------------
fs/nfs/delegation.c | 4 +-
fs/nfs/file.c | 22 +-
fs/nfs/nfs3proc.c | 2 +-
fs/nfs/nfs4_fs.h | 2 +-
fs/nfs/nfs4file.c | 2 +-
fs/nfs/nfs4proc.c | 39 +-
fs/nfs/nfs4state.c | 22 +-
fs/nfs/nfs4trace.h | 4 +-
fs/nfs/nfs4xdr.c | 8 +-
fs/nfs/write.c | 8 +-
fs/nfsd/filecache.c | 4 +-
fs/nfsd/nfs4callback.c | 2 +-
fs/nfsd/nfs4layouts.c | 34 +-
fs/nfsd/nfs4state.c | 118 +++---
fs/ocfs2/locks.c | 12 +-
fs/ocfs2/stack_user.c | 2 +-
fs/open.c | 2 +-
fs/posix_acl.c | 4 +-
fs/smb/client/cifsfs.c | 2 +-
fs/smb/client/cifssmb.c | 8 +-
fs/smb/client/file.c | 76 ++--
fs/smb/client/smb2file.c | 2 +-
fs/smb/server/smb2pdu.c | 44 +--
fs/smb/server/vfs.c | 14 +-
include/linux/filelock.h | 80 ++--
include/linux/fs.h | 5 +-
include/linux/lockd/lockd.h | 8 +-
include/linux/lockd/xdr.h | 2 +-
include/trace/events/afs.h | 4 +-
include/trace/events/filelock.h | 102 ++---
49 files changed, 1198 insertions(+), 923 deletions(-)
---
base-commit: 615d300648869c774bd1fe54b4627bb0c20faed4
change-id: 20240116-flsplit-bdb46824db68
Best regards,
--
Jeff Layton <[email protected]>
In later patches we're going to introduce some temporary macros with
names that clash with the variable names here. Rename them.
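For context, the transitional macros added later in the series take
roughly this form, so a local variable that happens to be called fl_flags
would be mangled by the preprocessor and no longer compile (illustrative
sketch only):

    /* from the later filelock.h change */
    #define fl_flags fl_core.flc_flags

    /* ...which would turn this local declaration into
     * "unsigned char fl_core.flc_flags = ...", a syntax error: */
    unsigned char fl_flags = request->fl_flags;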
Signed-off-by: Jeff Layton <[email protected]>
---
fs/nfs/nfs4proc.c | 10 +++++-----
fs/nfs/nfs4state.c | 16 ++++++++--------
2 files changed, 13 insertions(+), 13 deletions(-)
diff --git a/fs/nfs/nfs4proc.c b/fs/nfs/nfs4proc.c
index 23819a756508..5dd936a403f9 100644
--- a/fs/nfs/nfs4proc.c
+++ b/fs/nfs/nfs4proc.c
@@ -7045,7 +7045,7 @@ static int nfs4_proc_unlck(struct nfs4_state *state, int cmd, struct file_lock *
struct rpc_task *task;
struct nfs_seqid *(*alloc_seqid)(struct nfs_seqid_counter *, gfp_t);
int status = 0;
- unsigned char fl_flags = request->fl_flags;
+ unsigned char saved_flags = request->fl_flags;
status = nfs4_set_lock_state(state, request);
/* Unlock _before_ we do the RPC call */
@@ -7080,7 +7080,7 @@ static int nfs4_proc_unlck(struct nfs4_state *state, int cmd, struct file_lock *
status = rpc_wait_for_completion_task(task);
rpc_put_task(task);
out:
- request->fl_flags = fl_flags;
+ request->fl_flags = saved_flags;
trace_nfs4_unlock(request, state, F_SETLK, status);
return status;
}
@@ -7398,7 +7398,7 @@ static int _nfs4_proc_setlk(struct nfs4_state *state, int cmd, struct file_lock
{
struct nfs_inode *nfsi = NFS_I(state->inode);
struct nfs4_state_owner *sp = state->owner;
- unsigned char fl_flags = request->fl_flags;
+ unsigned char flags = request->fl_flags;
int status;
request->fl_flags |= FL_ACCESS;
@@ -7410,7 +7410,7 @@ static int _nfs4_proc_setlk(struct nfs4_state *state, int cmd, struct file_lock
if (test_bit(NFS_DELEGATED_STATE, &state->flags)) {
/* Yes: cache locks! */
/* ...but avoid races with delegation recall... */
- request->fl_flags = fl_flags & ~FL_SLEEP;
+ request->fl_flags = flags & ~FL_SLEEP;
status = locks_lock_inode_wait(state->inode, request);
up_read(&nfsi->rwsem);
mutex_unlock(&sp->so_delegreturn_mutex);
@@ -7420,7 +7420,7 @@ static int _nfs4_proc_setlk(struct nfs4_state *state, int cmd, struct file_lock
mutex_unlock(&sp->so_delegreturn_mutex);
status = _nfs4_do_setlk(state, cmd, request, NFS_LOCK_NEW);
out:
- request->fl_flags = fl_flags;
+ request->fl_flags = flags;
return status;
}
diff --git a/fs/nfs/nfs4state.c b/fs/nfs/nfs4state.c
index 9a5d911a7edc..471caf06fa7b 100644
--- a/fs/nfs/nfs4state.c
+++ b/fs/nfs/nfs4state.c
@@ -847,15 +847,15 @@ void nfs4_close_sync(struct nfs4_state *state, fmode_t fmode)
*/
static struct nfs4_lock_state *
__nfs4_find_lock_state(struct nfs4_state *state,
- fl_owner_t fl_owner, fl_owner_t fl_owner2)
+ fl_owner_t owner, fl_owner_t owner2)
{
struct nfs4_lock_state *pos, *ret = NULL;
list_for_each_entry(pos, &state->lock_states, ls_locks) {
- if (pos->ls_owner == fl_owner) {
+ if (pos->ls_owner == owner) {
ret = pos;
break;
}
- if (pos->ls_owner == fl_owner2)
+ if (pos->ls_owner == owner2)
ret = pos;
}
if (ret)
@@ -868,7 +868,7 @@ __nfs4_find_lock_state(struct nfs4_state *state,
* exists, return an uninitialized one.
*
*/
-static struct nfs4_lock_state *nfs4_alloc_lock_state(struct nfs4_state *state, fl_owner_t fl_owner)
+static struct nfs4_lock_state *nfs4_alloc_lock_state(struct nfs4_state *state, fl_owner_t owner)
{
struct nfs4_lock_state *lsp;
struct nfs_server *server = state->owner->so_server;
@@ -879,7 +879,7 @@ static struct nfs4_lock_state *nfs4_alloc_lock_state(struct nfs4_state *state, f
nfs4_init_seqid_counter(&lsp->ls_seqid);
refcount_set(&lsp->ls_count, 1);
lsp->ls_state = state;
- lsp->ls_owner = fl_owner;
+ lsp->ls_owner = owner;
lsp->ls_seqid.owner_id = ida_alloc(&server->lockowner_id, GFP_KERNEL_ACCOUNT);
if (lsp->ls_seqid.owner_id < 0)
goto out_free;
@@ -993,7 +993,7 @@ static int nfs4_copy_lock_stateid(nfs4_stateid *dst,
const struct nfs_lock_context *l_ctx)
{
struct nfs4_lock_state *lsp;
- fl_owner_t fl_owner, fl_flock_owner;
+ fl_owner_t owner, fl_flock_owner;
int ret = -ENOENT;
if (l_ctx == NULL)
@@ -1002,11 +1002,11 @@ static int nfs4_copy_lock_stateid(nfs4_stateid *dst,
if (test_bit(LK_STATE_IN_USE, &state->flags) == 0)
goto out;
- fl_owner = l_ctx->lockowner;
+ owner = l_ctx->lockowner;
fl_flock_owner = l_ctx->open_context->flock_owner;
spin_lock(&state->state_lock);
- lsp = __nfs4_find_lock_state(state, fl_owner, fl_flock_owner);
+ lsp = __nfs4_find_lock_state(state, owner, fl_flock_owner);
if (lsp && test_bit(NFS_LOCK_LOST, &lsp->ls_flags))
ret = -EIO;
else if (lsp != NULL && test_bit(NFS_LOCK_INITIALIZED, &lsp->ls_flags) != 0) {
--
2.43.0
In later patches we're going to introduce some macros with names that
clash with the variable names here. Rename them.
Signed-off-by: Jeff Layton <[email protected]>
---
fs/nfsd/nfs4state.c | 24 ++++++++++++------------
1 file changed, 12 insertions(+), 12 deletions(-)
diff --git a/fs/nfsd/nfs4state.c b/fs/nfsd/nfs4state.c
index 2fa54cfd4882..f66e67394157 100644
--- a/fs/nfsd/nfs4state.c
+++ b/fs/nfsd/nfs4state.c
@@ -7493,8 +7493,8 @@ nfsd4_lock(struct svc_rqst *rqstp, struct nfsd4_compound_state *cstate,
int lkflg;
int err;
bool new = false;
- unsigned char fl_type;
- unsigned int fl_flags = FL_POSIX;
+ unsigned char type;
+ unsigned int flags = FL_POSIX;
struct net *net = SVC_NET(rqstp);
struct nfsd_net *nn = net_generic(net, nfsd_net_id);
@@ -7557,14 +7557,14 @@ nfsd4_lock(struct svc_rqst *rqstp, struct nfsd4_compound_state *cstate,
goto out;
if (lock->lk_reclaim)
- fl_flags |= FL_RECLAIM;
+ flags |= FL_RECLAIM;
fp = lock_stp->st_stid.sc_file;
switch (lock->lk_type) {
case NFS4_READW_LT:
if (nfsd4_has_session(cstate) ||
exportfs_lock_op_is_async(sb->s_export_op))
- fl_flags |= FL_SLEEP;
+ flags |= FL_SLEEP;
fallthrough;
case NFS4_READ_LT:
spin_lock(&fp->fi_lock);
@@ -7572,12 +7572,12 @@ nfsd4_lock(struct svc_rqst *rqstp, struct nfsd4_compound_state *cstate,
if (nf)
get_lock_access(lock_stp, NFS4_SHARE_ACCESS_READ);
spin_unlock(&fp->fi_lock);
- fl_type = F_RDLCK;
+ type = F_RDLCK;
break;
case NFS4_WRITEW_LT:
if (nfsd4_has_session(cstate) ||
exportfs_lock_op_is_async(sb->s_export_op))
- fl_flags |= FL_SLEEP;
+ flags |= FL_SLEEP;
fallthrough;
case NFS4_WRITE_LT:
spin_lock(&fp->fi_lock);
@@ -7585,7 +7585,7 @@ nfsd4_lock(struct svc_rqst *rqstp, struct nfsd4_compound_state *cstate,
if (nf)
get_lock_access(lock_stp, NFS4_SHARE_ACCESS_WRITE);
spin_unlock(&fp->fi_lock);
- fl_type = F_WRLCK;
+ type = F_WRLCK;
break;
default:
status = nfserr_inval;
@@ -7605,7 +7605,7 @@ nfsd4_lock(struct svc_rqst *rqstp, struct nfsd4_compound_state *cstate,
* on those filesystems:
*/
if (!exportfs_lock_op_is_async(sb->s_export_op))
- fl_flags &= ~FL_SLEEP;
+ flags &= ~FL_SLEEP;
nbl = find_or_allocate_block(lock_sop, &fp->fi_fhandle, nn);
if (!nbl) {
@@ -7615,11 +7615,11 @@ nfsd4_lock(struct svc_rqst *rqstp, struct nfsd4_compound_state *cstate,
}
file_lock = &nbl->nbl_lock;
- file_lock->fl_type = fl_type;
+ file_lock->fl_type = type;
file_lock->fl_owner = (fl_owner_t)lockowner(nfs4_get_stateowner(&lock_sop->lo_owner));
file_lock->fl_pid = current->tgid;
file_lock->fl_file = nf->nf_file;
- file_lock->fl_flags = fl_flags;
+ file_lock->fl_flags = flags;
file_lock->fl_lmops = &nfsd_posix_mng_ops;
file_lock->fl_start = lock->lk_offset;
file_lock->fl_end = last_byte_offset(lock->lk_offset, lock->lk_length);
@@ -7632,7 +7632,7 @@ nfsd4_lock(struct svc_rqst *rqstp, struct nfsd4_compound_state *cstate,
goto out;
}
- if (fl_flags & FL_SLEEP) {
+ if (flags & FL_SLEEP) {
nbl->nbl_time = ktime_get_boottime_seconds();
spin_lock(&nn->blocked_locks_lock);
list_add_tail(&nbl->nbl_list, &lock_sop->lo_blocked);
@@ -7669,7 +7669,7 @@ nfsd4_lock(struct svc_rqst *rqstp, struct nfsd4_compound_state *cstate,
out:
if (nbl) {
/* dequeue it if we queued it before */
- if (fl_flags & FL_SLEEP) {
+ if (flags & FL_SLEEP) {
spin_lock(&nn->blocked_locks_lock);
if (!list_empty(&nbl->nbl_list) &&
!list_empty(&nbl->nbl_lru)) {
--
2.43.0
In later patches, we're going to introduce some macros that conflict
with the variable name here. Rename it.
Signed-off-by: Jeff Layton <[email protected]>
---
fs/9p/vfs_file.c | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/fs/9p/vfs_file.c b/fs/9p/vfs_file.c
index bae330c2f0cf..3df8aa1b5996 100644
--- a/fs/9p/vfs_file.c
+++ b/fs/9p/vfs_file.c
@@ -121,7 +121,6 @@ static int v9fs_file_do_lock(struct file *filp, int cmd, struct file_lock *fl)
struct p9_fid *fid;
uint8_t status = P9_LOCK_ERROR;
int res = 0;
- unsigned char fl_type;
struct v9fs_session_info *v9ses;
fid = filp->private_data;
@@ -208,11 +207,12 @@ static int v9fs_file_do_lock(struct file *filp, int cmd, struct file_lock *fl)
* it locally
*/
if (res < 0 && fl->fl_type != F_UNLCK) {
- fl_type = fl->fl_type;
+ unsigned char type = fl->fl_type;
+
fl->fl_type = F_UNLCK;
/* Even if this fails we want to return the remote error */
locks_lock_file_wait(filp, fl);
- fl->fl_type = fl_type;
+ fl->fl_type = type;
}
if (flock.client_id != fid->clnt->name)
kfree(flock.client_id);
--
2.43.0
In a future patch, we're going to split file leases into their own
structure. Since a lot of the underlying machinery uses the same fields,
move those into a new struct file_lock_core and embed that inside struct
file_lock.
For now, add some macros to ensure that we can continue to build while
the conversion is in progress.
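The pattern, visible throughout the diff below, is that any file still
using the old fl_* field names opts in by defining the guard macro before
including the header:

    #define _NEED_FILE_LOCK_FIELD_MACROS
    #include <linux/filelock.h>

    /* after which old-style accesses such as fl->fl_flags continue to
     * build, expanding to fl->fl_core.flc_flags */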
Signed-off-by: Jeff Layton <[email protected]>
---
fs/9p/vfs_file.c | 1 +
fs/afs/internal.h | 1 +
fs/ceph/locks.c | 1 +
fs/dlm/plock.c | 1 +
fs/gfs2/file.c | 1 +
fs/lockd/clntproc.c | 1 +
fs/locks.c | 1 +
fs/nfs/file.c | 1 +
fs/nfs/nfs4_fs.h | 1 +
fs/nfs/write.c | 1 +
fs/nfsd/netns.h | 1 +
fs/ocfs2/locks.c | 1 +
fs/ocfs2/stack_user.c | 1 +
fs/open.c | 2 +-
fs/posix_acl.c | 4 ++--
fs/smb/client/cifsglob.h | 1 +
fs/smb/client/cifssmb.c | 1 +
fs/smb/client/file.c | 3 ++-
fs/smb/client/smb2file.c | 1 +
fs/smb/server/smb2pdu.c | 1 +
fs/smb/server/vfs.c | 1 +
include/linux/filelock.h | 47 ++++++++++++++++++++++++++++++++++-------------
include/linux/lockd/xdr.h | 3 ++-
23 files changed, 59 insertions(+), 18 deletions(-)
diff --git a/fs/9p/vfs_file.c b/fs/9p/vfs_file.c
index 3df8aa1b5996..a1dabcf73380 100644
--- a/fs/9p/vfs_file.c
+++ b/fs/9p/vfs_file.c
@@ -9,6 +9,7 @@
#include <linux/module.h>
#include <linux/errno.h>
#include <linux/fs.h>
+#define _NEED_FILE_LOCK_FIELD_MACROS
#include <linux/filelock.h>
#include <linux/sched.h>
#include <linux/file.h>
diff --git a/fs/afs/internal.h b/fs/afs/internal.h
index 9c03fcf7ffaa..f5dd428e40f4 100644
--- a/fs/afs/internal.h
+++ b/fs/afs/internal.h
@@ -9,6 +9,7 @@
#include <linux/kernel.h>
#include <linux/ktime.h>
#include <linux/fs.h>
+#define _NEED_FILE_LOCK_FIELD_MACROS
#include <linux/filelock.h>
#include <linux/pagemap.h>
#include <linux/rxrpc.h>
diff --git a/fs/ceph/locks.c b/fs/ceph/locks.c
index e07ad29ff8b9..ccb358c398ca 100644
--- a/fs/ceph/locks.c
+++ b/fs/ceph/locks.c
@@ -7,6 +7,7 @@
#include "super.h"
#include "mds_client.h"
+#define _NEED_FILE_LOCK_FIELD_MACROS
#include <linux/filelock.h>
#include <linux/ceph/pagelist.h>
diff --git a/fs/dlm/plock.c b/fs/dlm/plock.c
index 1b66b2d2b801..b89dca1d51b0 100644
--- a/fs/dlm/plock.c
+++ b/fs/dlm/plock.c
@@ -4,6 +4,7 @@
*/
#include <linux/fs.h>
+#define _NEED_FILE_LOCK_FIELD_MACROS
#include <linux/filelock.h>
#include <linux/miscdevice.h>
#include <linux/poll.h>
diff --git a/fs/gfs2/file.c b/fs/gfs2/file.c
index 992ca4effb50..9e7cd054e924 100644
--- a/fs/gfs2/file.c
+++ b/fs/gfs2/file.c
@@ -15,6 +15,7 @@
#include <linux/mm.h>
#include <linux/mount.h>
#include <linux/fs.h>
+#define _NEED_FILE_LOCK_FIELD_MACROS
#include <linux/filelock.h>
#include <linux/gfs2_ondisk.h>
#include <linux/falloc.h>
diff --git a/fs/lockd/clntproc.c b/fs/lockd/clntproc.c
index cc596748e359..1f71260603b7 100644
--- a/fs/lockd/clntproc.c
+++ b/fs/lockd/clntproc.c
@@ -12,6 +12,7 @@
#include <linux/types.h>
#include <linux/errno.h>
#include <linux/fs.h>
+#define _NEED_FILE_LOCK_FIELD_MACROS
#include <linux/filelock.h>
#include <linux/nfs_fs.h>
#include <linux/utsname.h>
diff --git a/fs/locks.c b/fs/locks.c
index 87212f86eca9..cee3f183a872 100644
--- a/fs/locks.c
+++ b/fs/locks.c
@@ -48,6 +48,7 @@
* children.
*
*/
+#define _NEED_FILE_LOCK_FIELD_MACROS
#include <linux/capability.h>
#include <linux/file.h>
diff --git a/fs/nfs/file.c b/fs/nfs/file.c
index 8577ccf621f5..3c9a8ad91540 100644
--- a/fs/nfs/file.c
+++ b/fs/nfs/file.c
@@ -31,6 +31,7 @@
#include <linux/swap.h>
#include <linux/uaccess.h>
+#define _NEED_FILE_LOCK_FIELD_MACROS
#include <linux/filelock.h>
#include "delegation.h"
diff --git a/fs/nfs/nfs4_fs.h b/fs/nfs/nfs4_fs.h
index 581698f1b7b2..752224a48f1c 100644
--- a/fs/nfs/nfs4_fs.h
+++ b/fs/nfs/nfs4_fs.h
@@ -23,6 +23,7 @@
#define NFS4_MAX_LOOP_ON_RECOVER (10)
#include <linux/seqlock.h>
+#define _NEED_FILE_LOCK_FIELD_MACROS
#include <linux/filelock.h>
struct idmap;
diff --git a/fs/nfs/write.c b/fs/nfs/write.c
index bb79d3a886ae..ed837a3675cf 100644
--- a/fs/nfs/write.c
+++ b/fs/nfs/write.c
@@ -25,6 +25,7 @@
#include <linux/freezer.h>
#include <linux/wait.h>
#include <linux/iversion.h>
+#define _NEED_FILE_LOCK_FIELD_MACROS
#include <linux/filelock.h>
#include <linux/uaccess.h>
diff --git a/fs/nfsd/netns.h b/fs/nfsd/netns.h
index 74b4360779a1..fd91125208be 100644
--- a/fs/nfsd/netns.h
+++ b/fs/nfsd/netns.h
@@ -10,6 +10,7 @@
#include <net/net_namespace.h>
#include <net/netns/generic.h>
+#define _NEED_FILE_LOCK_FIELD_MACROS
#include <linux/filelock.h>
#include <linux/percpu_counter.h>
#include <linux/siphash.h>
diff --git a/fs/ocfs2/locks.c b/fs/ocfs2/locks.c
index f37174e79fad..8a9970dc852e 100644
--- a/fs/ocfs2/locks.c
+++ b/fs/ocfs2/locks.c
@@ -8,6 +8,7 @@
*/
#include <linux/fs.h>
+#define _NEED_FILE_LOCK_FIELD_MACROS
#include <linux/filelock.h>
#include <linux/fcntl.h>
diff --git a/fs/ocfs2/stack_user.c b/fs/ocfs2/stack_user.c
index 9b76ee66aeb2..460c882c5384 100644
--- a/fs/ocfs2/stack_user.c
+++ b/fs/ocfs2/stack_user.c
@@ -9,6 +9,7 @@
#include <linux/module.h>
#include <linux/fs.h>
+#define _NEED_FILE_LOCK_FIELD_MACROS
#include <linux/filelock.h>
#include <linux/miscdevice.h>
#include <linux/mutex.h>
diff --git a/fs/open.c b/fs/open.c
index a84d21e55c39..0a73afe04d34 100644
--- a/fs/open.c
+++ b/fs/open.c
@@ -1364,7 +1364,7 @@ struct file *filp_open(const char *filename, int flags, umode_t mode)
{
struct filename *name = getname_kernel(filename);
struct file *file = ERR_CAST(name);
-
+
if (!IS_ERR(name)) {
file = file_open_name(name, flags, mode);
putname(name);
diff --git a/fs/posix_acl.c b/fs/posix_acl.c
index e1af20893ebe..6bf587d1a9b8 100644
--- a/fs/posix_acl.c
+++ b/fs/posix_acl.c
@@ -786,12 +786,12 @@ struct posix_acl *posix_acl_from_xattr(struct user_namespace *userns,
return ERR_PTR(count);
if (count == 0)
return NULL;
-
+
acl = posix_acl_alloc(count, GFP_NOFS);
if (!acl)
return ERR_PTR(-ENOMEM);
acl_e = acl->a_entries;
-
+
for (end = entry + count; entry != end; acl_e++, entry++) {
acl_e->e_tag = le16_to_cpu(entry->e_tag);
acl_e->e_perm = le16_to_cpu(entry->e_perm);
diff --git a/fs/smb/client/cifsglob.h b/fs/smb/client/cifsglob.h
index 20036fb16cec..fcda4c77c649 100644
--- a/fs/smb/client/cifsglob.h
+++ b/fs/smb/client/cifsglob.h
@@ -26,6 +26,7 @@
#include <uapi/linux/cifs/cifs_mount.h>
#include "../common/smb2pdu.h"
#include "smb2pdu.h"
+#define _NEED_FILE_LOCK_FIELD_MACROS
#include <linux/filelock.h>
#define SMB_PATH_MAX 260
diff --git a/fs/smb/client/cifssmb.c b/fs/smb/client/cifssmb.c
index 01e89070df5a..e19ecf692c20 100644
--- a/fs/smb/client/cifssmb.c
+++ b/fs/smb/client/cifssmb.c
@@ -15,6 +15,7 @@
/* want to reuse a stale file handle and only the caller knows the file info */
#include <linux/fs.h>
+#define _NEED_FILE_LOCK_FIELD_MACROS
#include <linux/filelock.h>
#include <linux/kernel.h>
#include <linux/vfs.h>
diff --git a/fs/smb/client/file.c b/fs/smb/client/file.c
index 3a213432775b..dd87b2ef24dc 100644
--- a/fs/smb/client/file.c
+++ b/fs/smb/client/file.c
@@ -9,6 +9,7 @@
*
*/
#include <linux/fs.h>
+#define _NEED_FILE_LOCK_FIELD_MACROS
#include <linux/filelock.h>
#include <linux/backing-dev.h>
#include <linux/stat.h>
@@ -2951,7 +2952,7 @@ static int cifs_writepages_region(struct address_space *mapping,
continue;
}
- folio_batch_release(&fbatch);
+ folio_batch_release(&fbatch);
cond_resched();
} while (wbc->nr_to_write > 0);
diff --git a/fs/smb/client/smb2file.c b/fs/smb/client/smb2file.c
index e0ee96d69d49..cd225d15a7c5 100644
--- a/fs/smb/client/smb2file.c
+++ b/fs/smb/client/smb2file.c
@@ -7,6 +7,7 @@
*
*/
#include <linux/fs.h>
+#define _NEED_FILE_LOCK_FIELD_MACROS
#include <linux/filelock.h>
#include <linux/stat.h>
#include <linux/slab.h>
diff --git a/fs/smb/server/smb2pdu.c b/fs/smb/server/smb2pdu.c
index ba7a72a6a4f4..d12d11cdea29 100644
--- a/fs/smb/server/smb2pdu.c
+++ b/fs/smb/server/smb2pdu.c
@@ -12,6 +12,7 @@
#include <linux/ethtool.h>
#include <linux/falloc.h>
#include <linux/mount.h>
+#define _NEED_FILE_LOCK_FIELD_MACROS
#include <linux/filelock.h>
#include "glob.h"
diff --git a/fs/smb/server/vfs.c b/fs/smb/server/vfs.c
index a6961bfe3e13..d0686ec344f5 100644
--- a/fs/smb/server/vfs.c
+++ b/fs/smb/server/vfs.c
@@ -6,6 +6,7 @@
#include <linux/kernel.h>
#include <linux/fs.h>
+#define _NEED_FILE_LOCK_FIELD_MACROS
#include <linux/filelock.h>
#include <linux/uaccess.h>
#include <linux/backing-dev.h>
diff --git a/include/linux/filelock.h b/include/linux/filelock.h
index 95e868e09e29..0c0db7f20ff6 100644
--- a/include/linux/filelock.h
+++ b/include/linux/filelock.h
@@ -85,23 +85,44 @@ bool opens_in_grace(struct net *);
*
* Obviously, the last two criteria only matter for POSIX locks.
*/
-struct file_lock {
- struct file_lock *fl_blocker; /* The lock, that is blocking us */
- struct list_head fl_list; /* link into file_lock_context */
- struct hlist_node fl_link; /* node in global lists */
- struct list_head fl_blocked_requests; /* list of requests with
+
+struct file_lock_core {
+ struct file_lock *flc_blocker; /* The lock that is blocking us */
+ struct list_head flc_list; /* link into file_lock_context */
+ struct hlist_node flc_link; /* node in global lists */
+ struct list_head flc_blocked_requests; /* list of requests with
* ->fl_blocker pointing here
*/
- struct list_head fl_blocked_member; /* node in
+ struct list_head flc_blocked_member; /* node in
* ->fl_blocker->fl_blocked_requests
*/
- fl_owner_t fl_owner;
- unsigned int fl_flags;
- unsigned char fl_type;
- unsigned int fl_pid;
- int fl_link_cpu; /* what cpu's list is this on? */
- wait_queue_head_t fl_wait;
- struct file *fl_file;
+ fl_owner_t flc_owner;
+ unsigned int flc_flags;
+ unsigned char flc_type;
+ unsigned int flc_pid;
+ int flc_link_cpu; /* what cpu's list is this on? */
+ wait_queue_head_t flc_wait;
+ struct file *flc_file;
+};
+
+/* Temporary macros to allow building during coccinelle conversion */
+#ifdef _NEED_FILE_LOCK_FIELD_MACROS
+#define fl_list fl_core.flc_list
+#define fl_blocker fl_core.flc_blocker
+#define fl_link fl_core.flc_link
+#define fl_blocked_requests fl_core.flc_blocked_requests
+#define fl_blocked_member fl_core.flc_blocked_member
+#define fl_owner fl_core.flc_owner
+#define fl_flags fl_core.flc_flags
+#define fl_type fl_core.flc_type
+#define fl_pid fl_core.flc_pid
+#define fl_link_cpu fl_core.flc_link_cpu
+#define fl_wait fl_core.flc_wait
+#define fl_file fl_core.flc_file
+#endif
+
+struct file_lock {
+ struct file_lock_core fl_core;
loff_t fl_start;
loff_t fl_end;
diff --git a/include/linux/lockd/xdr.h b/include/linux/lockd/xdr.h
index b60fbcd8cdfa..a3f068b0ca86 100644
--- a/include/linux/lockd/xdr.h
+++ b/include/linux/lockd/xdr.h
@@ -11,6 +11,7 @@
#define LOCKD_XDR_H
#include <linux/fs.h>
+#define _NEED_FILE_LOCK_FIELD_MACROS
#include <linux/filelock.h>
#include <linux/nfs.h>
#include <linux/sunrpc/xdr.h>
@@ -52,7 +53,7 @@ struct nlm_lock {
* FreeBSD uses 16, Apple Mac OS X 10.3 uses 20. Therefore we set it to
* 32 bytes.
*/
-
+
struct nlm_cookie
{
unsigned char data[NLM_MAXCOOKIELEN];
--
2.43.0
This patch creates two ".cocci" semantic patches in a top-level cocci/
directory. These patches were used to help generate several of the
following patches. We can drop this patch or move the files to a more
appropriate location before merging.
Signed-off-by: Jeff Layton <[email protected]>
---
cocci/filelock.cocci | 88 ++++++++++++++++++++++++++++++++++++++++++++++++++++
cocci/nlm.cocci | 81 +++++++++++++++++++++++++++++++++++++++++++++++
2 files changed, 169 insertions(+)
diff --git a/cocci/filelock.cocci b/cocci/filelock.cocci
new file mode 100644
index 000000000000..93fb4ed8341a
--- /dev/null
+++ b/cocci/filelock.cocci
@@ -0,0 +1,88 @@
+@@
+struct file_lock *fl;
+@@
+(
+- fl->fl_blocker
++ fl->fl_core.flc_blocker
+|
+- fl->fl_list
++ fl->fl_core.flc_list
+|
+- fl->fl_link
++ fl->fl_core.flc_link
+|
+- fl->fl_blocked_requests
++ fl->fl_core.flc_blocked_requests
+|
+- fl->fl_blocked_member
++ fl->fl_core.flc_blocked_member
+|
+- fl->fl_owner
++ fl->fl_core.flc_owner
+|
+- fl->fl_flags
++ fl->fl_core.flc_flags
+|
+- fl->fl_type
++ fl->fl_core.flc_type
+|
+- fl->fl_pid
++ fl->fl_core.flc_pid
+|
+- fl->fl_link_cpu
++ fl->fl_core.flc_link_cpu
+|
+- fl->fl_wait
++ fl->fl_core.flc_wait
+|
+- fl->fl_file
++ fl->fl_core.flc_file
+)
+
+@@
+struct file_lock fl;
+@@
+(
+- fl.fl_blocker
++ fl.fl_core.flc_blocker
+|
+- fl.fl_list
++ fl.fl_core.flc_list
+|
+- fl.fl_link
++ fl.fl_core.flc_link
+|
+- fl.fl_blocked_requests
++ fl.fl_core.flc_blocked_requests
+|
+- fl.fl_blocked_member
++ fl.fl_core.flc_blocked_member
+|
+- fl.fl_owner
++ fl.fl_core.flc_owner
+|
+- fl.fl_flags
++ fl.fl_core.flc_flags
+|
+- fl.fl_type
++ fl.fl_core.flc_type
+|
+- fl.fl_pid
++ fl.fl_core.flc_pid
+|
+- fl.fl_link_cpu
++ fl.fl_core.flc_link_cpu
+|
+- fl.fl_wait
++ fl.fl_core.flc_wait
+|
+- fl.fl_file
++ fl.fl_core.flc_file
+)
+
+@@
+struct file_lock *fl;
+struct list_head *li;
+@@
+- list_for_each_entry(fl, li, fl_list)
++ list_for_each_entry(fl, li, fl_core.flc_list)
diff --git a/cocci/nlm.cocci b/cocci/nlm.cocci
new file mode 100644
index 000000000000..bf22f0a75812
--- /dev/null
+++ b/cocci/nlm.cocci
@@ -0,0 +1,81 @@
+@@
+struct nlm_lock *nlck;
+@@
+(
+- nlck->fl.fl_blocker
++ nlck->fl.fl_core.flc_blocker
+|
+- nlck->fl.fl_list
++ nlck->fl.fl_core.flc_list
+|
+- nlck->fl.fl_link
++ nlck->fl.fl_core.flc_link
+|
+- nlck->fl.fl_blocked_requests
++ nlck->fl.fl_core.flc_blocked_requests
+|
+- nlck->fl.fl_blocked_member
++ nlck->fl.fl_core.flc_blocked_member
+|
+- nlck->fl.fl_owner
++ nlck->fl.fl_core.flc_owner
+|
+- nlck->fl.fl_flags
++ nlck->fl.fl_core.flc_flags
+|
+- nlck->fl.fl_type
++ nlck->fl.fl_core.flc_type
+|
+- nlck->fl.fl_pid
++ nlck->fl.fl_core.flc_pid
+|
+- nlck->fl.fl_link_cpu
++ nlck->fl.fl_core.flc_link_cpu
+|
+- nlck->fl.fl_wait
++ nlck->fl.fl_core.flc_wait
+|
+- nlck->fl.fl_file
++ nlck->fl.fl_core.flc_file
+)
+
+@@
+struct nlm_args *argp;
+@@
+(
+- argp->lock.fl.fl_blocker
++ argp->lock.fl.fl_core.flc_blocker
+|
+- argp->lock.fl.fl_list
++ argp->lock.fl.fl_core.flc_list
+|
+- argp->lock.fl.fl_link
++ argp->lock.fl.fl_core.flc_link
+|
+- argp->lock.fl.fl_blocked_requests
++ argp->lock.fl.fl_core.flc_blocked_requests
+|
+- argp->lock.fl.fl_blocked_member
++ argp->lock.fl.fl_core.flc_blocked_member
+|
+- argp->lock.fl.fl_owner
++ argp->lock.fl.fl_core.flc_owner
+|
+- argp->lock.fl.fl_flags
++ argp->lock.fl.fl_core.flc_flags
+|
+- argp->lock.fl.fl_type
++ argp->lock.fl.fl_core.flc_type
+|
+- argp->lock.fl.fl_pid
++ argp->lock.fl.fl_core.flc_pid
+|
+- argp->lock.fl.fl_link_cpu
++ argp->lock.fl.fl_core.flc_link_cpu
+|
+- argp->lock.fl.fl_wait
++ argp->lock.fl.fl_core.flc_wait
+|
+- argp->lock.fl.fl_file
++ argp->lock.fl.fl_core.flc_file
+)
--
2.43.0
Convert fs/locks.c to access fl_core fields directly rather than using
the backward-compatibility macros. Most of this was done with
coccinelle, with a few by-hand fixups.
Signed-off-by: Jeff Layton <[email protected]>
---
fs/locks.c | 479 ++++++++++++++++++++--------------------
include/trace/events/filelock.h | 32 +--
2 files changed, 260 insertions(+), 251 deletions(-)
diff --git a/fs/locks.c b/fs/locks.c
index cee3f183a872..b06fa4dea298 100644
--- a/fs/locks.c
+++ b/fs/locks.c
@@ -48,8 +48,6 @@
* children.
*
*/
-#define _NEED_FILE_LOCK_FIELD_MACROS
-
#include <linux/capability.h>
#include <linux/file.h>
#include <linux/fdtable.h>
@@ -73,16 +71,16 @@
static bool lease_breaking(struct file_lock *fl)
{
- return fl->fl_flags & (FL_UNLOCK_PENDING | FL_DOWNGRADE_PENDING);
+ return fl->fl_core.flc_flags & (FL_UNLOCK_PENDING | FL_DOWNGRADE_PENDING);
}
static int target_leasetype(struct file_lock *fl)
{
- if (fl->fl_flags & FL_UNLOCK_PENDING)
+ if (fl->fl_core.flc_flags & FL_UNLOCK_PENDING)
return F_UNLCK;
- if (fl->fl_flags & FL_DOWNGRADE_PENDING)
+ if (fl->fl_core.flc_flags & FL_DOWNGRADE_PENDING)
return F_RDLCK;
- return fl->fl_type;
+ return fl->fl_core.flc_type;
}
static int leases_enable = 1;
@@ -201,8 +199,10 @@ locks_dump_ctx_list(struct list_head *list, char *list_type)
{
struct file_lock *fl;
- list_for_each_entry(fl, list, fl_list) {
- pr_warn("%s: fl_owner=%p fl_flags=0x%x fl_type=0x%x fl_pid=%u\n", list_type, fl->fl_owner, fl->fl_flags, fl->fl_type, fl->fl_pid);
+ list_for_each_entry(fl, list, fl_core.flc_list) {
+ pr_warn("%s: fl_owner=%p fl_flags=0x%x fl_type=0x%x fl_pid=%u\n", list_type,
+ fl->fl_core.flc_owner, fl->fl_core.flc_flags,
+ fl->fl_core.flc_type, fl->fl_core.flc_pid);
}
}
@@ -230,13 +230,14 @@ locks_check_ctx_file_list(struct file *filp, struct list_head *list,
struct file_lock *fl;
struct inode *inode = file_inode(filp);
- list_for_each_entry(fl, list, fl_list)
- if (fl->fl_file == filp)
+ list_for_each_entry(fl, list, fl_core.flc_list)
+ if (fl->fl_core.flc_file == filp)
pr_warn("Leaked %s lock on dev=0x%x:0x%x ino=0x%lx "
" fl_owner=%p fl_flags=0x%x fl_type=0x%x fl_pid=%u\n",
list_type, MAJOR(inode->i_sb->s_dev),
MINOR(inode->i_sb->s_dev), inode->i_ino,
- fl->fl_owner, fl->fl_flags, fl->fl_type, fl->fl_pid);
+ fl->fl_core.flc_owner, fl->fl_core.flc_flags,
+ fl->fl_core.flc_type, fl->fl_core.flc_pid);
}
void
@@ -252,11 +253,11 @@ locks_free_lock_context(struct inode *inode)
static void locks_init_lock_heads(struct file_lock *fl)
{
- INIT_HLIST_NODE(&fl->fl_link);
- INIT_LIST_HEAD(&fl->fl_list);
- INIT_LIST_HEAD(&fl->fl_blocked_requests);
- INIT_LIST_HEAD(&fl->fl_blocked_member);
- init_waitqueue_head(&fl->fl_wait);
+ INIT_HLIST_NODE(&fl->fl_core.flc_link);
+ INIT_LIST_HEAD(&fl->fl_core.flc_list);
+ INIT_LIST_HEAD(&fl->fl_core.flc_blocked_requests);
+ INIT_LIST_HEAD(&fl->fl_core.flc_blocked_member);
+ init_waitqueue_head(&fl->fl_core.flc_wait);
}
/* Allocate an empty lock structure. */
@@ -273,11 +274,11 @@ EXPORT_SYMBOL_GPL(locks_alloc_lock);
void locks_release_private(struct file_lock *fl)
{
- BUG_ON(waitqueue_active(&fl->fl_wait));
- BUG_ON(!list_empty(&fl->fl_list));
- BUG_ON(!list_empty(&fl->fl_blocked_requests));
- BUG_ON(!list_empty(&fl->fl_blocked_member));
- BUG_ON(!hlist_unhashed(&fl->fl_link));
+ BUG_ON(waitqueue_active(&fl->fl_core.flc_wait));
+ BUG_ON(!list_empty(&fl->fl_core.flc_list));
+ BUG_ON(!list_empty(&fl->fl_core.flc_blocked_requests));
+ BUG_ON(!list_empty(&fl->fl_core.flc_blocked_member));
+ BUG_ON(!hlist_unhashed(&fl->fl_core.flc_link));
if (fl->fl_ops) {
if (fl->fl_ops->fl_release_private)
@@ -287,8 +288,8 @@ void locks_release_private(struct file_lock *fl)
if (fl->fl_lmops) {
if (fl->fl_lmops->lm_put_owner) {
- fl->fl_lmops->lm_put_owner(fl->fl_owner);
- fl->fl_owner = NULL;
+ fl->fl_lmops->lm_put_owner(fl->fl_core.flc_owner);
+ fl->fl_core.flc_owner = NULL;
}
fl->fl_lmops = NULL;
}
@@ -310,10 +311,10 @@ bool locks_owner_has_blockers(struct file_lock_context *flctx,
struct file_lock *fl;
spin_lock(&flctx->flc_lock);
- list_for_each_entry(fl, &flctx->flc_posix, fl_list) {
- if (fl->fl_owner != owner)
+ list_for_each_entry(fl, &flctx->flc_posix, fl_core.flc_list) {
+ if (fl->fl_core.flc_owner != owner)
continue;
- if (!list_empty(&fl->fl_blocked_requests)) {
+ if (!list_empty(&fl->fl_core.flc_blocked_requests)) {
spin_unlock(&flctx->flc_lock);
return true;
}
@@ -337,8 +338,8 @@ locks_dispose_list(struct list_head *dispose)
struct file_lock *fl;
while (!list_empty(dispose)) {
- fl = list_first_entry(dispose, struct file_lock, fl_list);
- list_del_init(&fl->fl_list);
+ fl = list_first_entry(dispose, struct file_lock, fl_core.flc_list);
+ list_del_init(&fl->fl_core.flc_list);
locks_free_lock(fl);
}
}
@@ -355,11 +356,11 @@ EXPORT_SYMBOL(locks_init_lock);
*/
void locks_copy_conflock(struct file_lock *new, struct file_lock *fl)
{
- new->fl_owner = fl->fl_owner;
- new->fl_pid = fl->fl_pid;
- new->fl_file = NULL;
- new->fl_flags = fl->fl_flags;
- new->fl_type = fl->fl_type;
+ new->fl_core.flc_owner = fl->fl_core.flc_owner;
+ new->fl_core.flc_pid = fl->fl_core.flc_pid;
+ new->fl_core.flc_file = NULL;
+ new->fl_core.flc_flags = fl->fl_core.flc_flags;
+ new->fl_core.flc_type = fl->fl_core.flc_type;
new->fl_start = fl->fl_start;
new->fl_end = fl->fl_end;
new->fl_lmops = fl->fl_lmops;
@@ -367,7 +368,7 @@ void locks_copy_conflock(struct file_lock *new, struct file_lock *fl)
if (fl->fl_lmops) {
if (fl->fl_lmops->lm_get_owner)
- fl->fl_lmops->lm_get_owner(fl->fl_owner);
+ fl->fl_lmops->lm_get_owner(fl->fl_core.flc_owner);
}
}
EXPORT_SYMBOL(locks_copy_conflock);
@@ -379,7 +380,7 @@ void locks_copy_lock(struct file_lock *new, struct file_lock *fl)
locks_copy_conflock(new, fl);
- new->fl_file = fl->fl_file;
+ new->fl_core.flc_file = fl->fl_core.flc_file;
new->fl_ops = fl->fl_ops;
if (fl->fl_ops) {
@@ -398,12 +399,14 @@ static void locks_move_blocks(struct file_lock *new, struct file_lock *fl)
* ->fl_blocked_requests, so we don't need a lock to check if it
* is empty.
*/
- if (list_empty(&fl->fl_blocked_requests))
+ if (list_empty(&fl->fl_core.flc_blocked_requests))
return;
spin_lock(&blocked_lock_lock);
- list_splice_init(&fl->fl_blocked_requests, &new->fl_blocked_requests);
- list_for_each_entry(f, &new->fl_blocked_requests, fl_blocked_member)
- f->fl_blocker = new;
+ list_splice_init(&fl->fl_core.flc_blocked_requests,
+ &new->fl_core.flc_blocked_requests);
+ list_for_each_entry(f, &new->fl_core.flc_blocked_requests,
+ fl_core.flc_blocked_member)
+ f->fl_core.flc_blocker = new;
spin_unlock(&blocked_lock_lock);
}
@@ -424,11 +427,11 @@ static void flock_make_lock(struct file *filp, struct file_lock *fl, int type)
{
locks_init_lock(fl);
- fl->fl_file = filp;
- fl->fl_owner = filp;
- fl->fl_pid = current->tgid;
- fl->fl_flags = FL_FLOCK;
- fl->fl_type = type;
+ fl->fl_core.flc_file = filp;
+ fl->fl_core.flc_owner = filp;
+ fl->fl_core.flc_pid = current->tgid;
+ fl->fl_core.flc_flags = FL_FLOCK;
+ fl->fl_core.flc_type = type;
fl->fl_end = OFFSET_MAX;
}
@@ -438,7 +441,7 @@ static int assign_type(struct file_lock *fl, int type)
case F_RDLCK:
case F_WRLCK:
case F_UNLCK:
- fl->fl_type = type;
+ fl->fl_core.flc_type = type;
break;
default:
return -EINVAL;
@@ -483,10 +486,10 @@ static int flock64_to_posix_lock(struct file *filp, struct file_lock *fl,
} else
fl->fl_end = OFFSET_MAX;
- fl->fl_owner = current->files;
- fl->fl_pid = current->tgid;
- fl->fl_file = filp;
- fl->fl_flags = FL_POSIX;
+ fl->fl_core.flc_owner = current->files;
+ fl->fl_core.flc_pid = current->tgid;
+ fl->fl_core.flc_file = filp;
+ fl->fl_core.flc_flags = FL_POSIX;
fl->fl_ops = NULL;
fl->fl_lmops = NULL;
@@ -520,7 +523,7 @@ lease_break_callback(struct file_lock *fl)
static void
lease_setup(struct file_lock *fl, void **priv)
{
- struct file *filp = fl->fl_file;
+ struct file *filp = fl->fl_core.flc_file;
struct fasync_struct *fa = *priv;
/*
@@ -548,11 +551,11 @@ static int lease_init(struct file *filp, int type, struct file_lock *fl)
if (assign_type(fl, type) != 0)
return -EINVAL;
- fl->fl_owner = filp;
- fl->fl_pid = current->tgid;
+ fl->fl_core.flc_owner = filp;
+ fl->fl_core.flc_pid = current->tgid;
- fl->fl_file = filp;
- fl->fl_flags = FL_LEASE;
+ fl->fl_core.flc_file = filp;
+ fl->fl_core.flc_flags = FL_LEASE;
fl->fl_start = 0;
fl->fl_end = OFFSET_MAX;
fl->fl_ops = NULL;
@@ -590,7 +593,7 @@ static inline int locks_overlap(struct file_lock *fl1, struct file_lock *fl2)
*/
static int posix_same_owner(struct file_lock *fl1, struct file_lock *fl2)
{
- return fl1->fl_owner == fl2->fl_owner;
+ return fl1->fl_core.flc_owner == fl2->fl_core.flc_owner;
}
/* Must be called with the flc_lock held! */
@@ -601,8 +604,8 @@ static void locks_insert_global_locks(struct file_lock *fl)
percpu_rwsem_assert_held(&file_rwsem);
spin_lock(&fll->lock);
- fl->fl_link_cpu = smp_processor_id();
- hlist_add_head(&fl->fl_link, &fll->hlist);
+ fl->fl_core.flc_link_cpu = smp_processor_id();
+ hlist_add_head(&fl->fl_core.flc_link, &fll->hlist);
spin_unlock(&fll->lock);
}
@@ -618,33 +621,34 @@ static void locks_delete_global_locks(struct file_lock *fl)
* is done while holding the flc_lock, and new insertions into the list
* also require that it be held.
*/
- if (hlist_unhashed(&fl->fl_link))
+ if (hlist_unhashed(&fl->fl_core.flc_link))
return;
- fll = per_cpu_ptr(&file_lock_list, fl->fl_link_cpu);
+ fll = per_cpu_ptr(&file_lock_list, fl->fl_core.flc_link_cpu);
spin_lock(&fll->lock);
- hlist_del_init(&fl->fl_link);
+ hlist_del_init(&fl->fl_core.flc_link);
spin_unlock(&fll->lock);
}
static unsigned long
posix_owner_key(struct file_lock *fl)
{
- return (unsigned long)fl->fl_owner;
+ return (unsigned long) fl->fl_core.flc_owner;
}
static void locks_insert_global_blocked(struct file_lock *waiter)
{
lockdep_assert_held(&blocked_lock_lock);
- hash_add(blocked_hash, &waiter->fl_link, posix_owner_key(waiter));
+ hash_add(blocked_hash, &waiter->fl_core.flc_link,
+ posix_owner_key(waiter));
}
static void locks_delete_global_blocked(struct file_lock *waiter)
{
lockdep_assert_held(&blocked_lock_lock);
- hash_del(&waiter->fl_link);
+ hash_del(&waiter->fl_core.flc_link);
}
/* Remove waiter from blocker's block list.
@@ -655,28 +659,28 @@ static void locks_delete_global_blocked(struct file_lock *waiter)
static void __locks_delete_block(struct file_lock *waiter)
{
locks_delete_global_blocked(waiter);
- list_del_init(&waiter->fl_blocked_member);
+ list_del_init(&waiter->fl_core.flc_blocked_member);
}
static void __locks_wake_up_blocks(struct file_lock *blocker)
{
- while (!list_empty(&blocker->fl_blocked_requests)) {
+ while (!list_empty(&blocker->fl_core.flc_blocked_requests)) {
struct file_lock *waiter;
- waiter = list_first_entry(&blocker->fl_blocked_requests,
- struct file_lock, fl_blocked_member);
+ waiter = list_first_entry(&blocker->fl_core.flc_blocked_requests,
+ struct file_lock, fl_core.flc_blocked_member);
__locks_delete_block(waiter);
if (waiter->fl_lmops && waiter->fl_lmops->lm_notify)
waiter->fl_lmops->lm_notify(waiter);
else
- wake_up(&waiter->fl_wait);
+ wake_up(&waiter->fl_core.flc_wait);
/*
* The setting of fl_blocker to NULL marks the "done"
* point in deleting a block. Paired with acquire at the top
* of locks_delete_block().
*/
- smp_store_release(&waiter->fl_blocker, NULL);
+ smp_store_release(&waiter->fl_core.flc_blocker, NULL);
}
}
@@ -711,12 +715,12 @@ int locks_delete_block(struct file_lock *waiter)
* no new locks can be inserted into its fl_blocked_requests list, and
* can avoid doing anything further if the list is empty.
*/
- if (!smp_load_acquire(&waiter->fl_blocker) &&
- list_empty(&waiter->fl_blocked_requests))
+ if (!smp_load_acquire(&waiter->fl_core.flc_blocker) &&
+ list_empty(&waiter->fl_core.flc_blocked_requests))
return status;
spin_lock(&blocked_lock_lock);
- if (waiter->fl_blocker)
+ if (waiter->fl_core.flc_blocker)
status = 0;
__locks_wake_up_blocks(waiter);
__locks_delete_block(waiter);
@@ -725,7 +729,7 @@ int locks_delete_block(struct file_lock *waiter)
* The setting of fl_blocker to NULL marks the "done" point in deleting
* a block. Paired with acquire at the top of this function.
*/
- smp_store_release(&waiter->fl_blocker, NULL);
+ smp_store_release(&waiter->fl_core.flc_blocker, NULL);
spin_unlock(&blocked_lock_lock);
return status;
}
@@ -752,17 +756,19 @@ static void __locks_insert_block(struct file_lock *blocker,
struct file_lock *))
{
struct file_lock *fl;
- BUG_ON(!list_empty(&waiter->fl_blocked_member));
+ BUG_ON(!list_empty(&waiter->fl_core.flc_blocked_member));
new_blocker:
- list_for_each_entry(fl, &blocker->fl_blocked_requests, fl_blocked_member)
+ list_for_each_entry(fl, &blocker->fl_core.flc_blocked_requests,
+ fl_core.flc_blocked_member)
if (conflict(fl, waiter)) {
blocker = fl;
goto new_blocker;
}
- waiter->fl_blocker = blocker;
- list_add_tail(&waiter->fl_blocked_member, &blocker->fl_blocked_requests);
- if ((blocker->fl_flags & (FL_POSIX|FL_OFDLCK)) == FL_POSIX)
+ waiter->fl_core.flc_blocker = blocker;
+ list_add_tail(&waiter->fl_core.flc_blocked_member,
+ &blocker->fl_core.flc_blocked_requests);
+ if ((blocker->fl_core.flc_flags & (FL_POSIX|FL_OFDLCK)) == FL_POSIX)
locks_insert_global_blocked(waiter);
/* The requests in waiter->fl_blocked are known to conflict with
@@ -797,7 +803,7 @@ static void locks_wake_up_blocks(struct file_lock *blocker)
* fl_blocked_requests list does not require the flc_lock, so we must
* recheck list_empty() after acquiring the blocked_lock_lock.
*/
- if (list_empty(&blocker->fl_blocked_requests))
+ if (list_empty(&blocker->fl_core.flc_blocked_requests))
return;
spin_lock(&blocked_lock_lock);
@@ -808,7 +814,7 @@ static void locks_wake_up_blocks(struct file_lock *blocker)
static void
locks_insert_lock_ctx(struct file_lock *fl, struct list_head *before)
{
- list_add_tail(&fl->fl_list, before);
+ list_add_tail(&fl->fl_core.flc_list, before);
locks_insert_global_locks(fl);
}
@@ -816,7 +822,7 @@ static void
locks_unlink_lock_ctx(struct file_lock *fl)
{
locks_delete_global_locks(fl);
- list_del_init(&fl->fl_list);
+ list_del_init(&fl->fl_core.flc_list);
locks_wake_up_blocks(fl);
}
@@ -825,7 +831,7 @@ locks_delete_lock_ctx(struct file_lock *fl, struct list_head *dispose)
{
locks_unlink_lock_ctx(fl);
if (dispose)
- list_add(&fl->fl_list, dispose);
+ list_add(&fl->fl_core.flc_list, dispose);
else
locks_free_lock(fl);
}
@@ -836,9 +842,9 @@ locks_delete_lock_ctx(struct file_lock *fl, struct list_head *dispose)
static bool locks_conflict(struct file_lock *caller_fl,
struct file_lock *sys_fl)
{
- if (sys_fl->fl_type == F_WRLCK)
+ if (sys_fl->fl_core.flc_type == F_WRLCK)
return true;
- if (caller_fl->fl_type == F_WRLCK)
+ if (caller_fl->fl_core.flc_type == F_WRLCK)
return true;
return false;
}
@@ -869,7 +875,7 @@ static bool posix_test_locks_conflict(struct file_lock *caller_fl,
struct file_lock *sys_fl)
{
/* F_UNLCK checks any locks on the same fd. */
- if (caller_fl->fl_type == F_UNLCK) {
+ if (caller_fl->fl_core.flc_type == F_UNLCK) {
if (!posix_same_owner(caller_fl, sys_fl))
return false;
return locks_overlap(caller_fl, sys_fl);
@@ -886,7 +892,7 @@ static bool flock_locks_conflict(struct file_lock *caller_fl,
/* FLOCK locks referring to the same filp do not conflict with
* each other.
*/
- if (caller_fl->fl_file == sys_fl->fl_file)
+ if (caller_fl->fl_core.flc_file == sys_fl->fl_core.flc_file)
return false;
return locks_conflict(caller_fl, sys_fl);
@@ -903,13 +909,13 @@ posix_test_lock(struct file *filp, struct file_lock *fl)
ctx = locks_inode_context(inode);
if (!ctx || list_empty_careful(&ctx->flc_posix)) {
- fl->fl_type = F_UNLCK;
+ fl->fl_core.flc_type = F_UNLCK;
return;
}
retry:
spin_lock(&ctx->flc_lock);
- list_for_each_entry(cfl, &ctx->flc_posix, fl_list) {
+ list_for_each_entry(cfl, &ctx->flc_posix, fl_core.flc_list) {
if (!posix_test_locks_conflict(fl, cfl))
continue;
if (cfl->fl_lmops && cfl->fl_lmops->lm_lock_expirable
@@ -925,7 +931,7 @@ posix_test_lock(struct file *filp, struct file_lock *fl)
locks_copy_conflock(fl, cfl);
goto out;
}
- fl->fl_type = F_UNLCK;
+ fl->fl_core.flc_type = F_UNLCK;
out:
spin_unlock(&ctx->flc_lock);
return;
@@ -972,10 +978,10 @@ static struct file_lock *what_owner_is_waiting_for(struct file_lock *block_fl)
{
struct file_lock *fl;
- hash_for_each_possible(blocked_hash, fl, fl_link, posix_owner_key(block_fl)) {
+ hash_for_each_possible(blocked_hash, fl, fl_core.flc_link, posix_owner_key(block_fl)) {
if (posix_same_owner(fl, block_fl)) {
- while (fl->fl_blocker)
- fl = fl->fl_blocker;
+ while (fl->fl_core.flc_blocker)
+ fl = fl->fl_core.flc_blocker;
return fl;
}
}
@@ -994,7 +1000,7 @@ static int posix_locks_deadlock(struct file_lock *caller_fl,
* This deadlock detector can't reasonably detect deadlocks with
* FL_OFDLCK locks, since they aren't owned by a process, per-se.
*/
- if (caller_fl->fl_flags & FL_OFDLCK)
+ if (caller_fl->fl_core.flc_flags & FL_OFDLCK)
return 0;
while ((block_fl = what_owner_is_waiting_for(block_fl))) {
@@ -1022,14 +1028,14 @@ static int flock_lock_inode(struct inode *inode, struct file_lock *request)
bool found = false;
LIST_HEAD(dispose);
- ctx = locks_get_lock_context(inode, request->fl_type);
+ ctx = locks_get_lock_context(inode, request->fl_core.flc_type);
if (!ctx) {
- if (request->fl_type != F_UNLCK)
+ if (request->fl_core.flc_type != F_UNLCK)
return -ENOMEM;
- return (request->fl_flags & FL_EXISTS) ? -ENOENT : 0;
+ return (request->fl_core.flc_flags & FL_EXISTS) ? -ENOENT : 0;
}
- if (!(request->fl_flags & FL_ACCESS) && (request->fl_type != F_UNLCK)) {
+ if (!(request->fl_core.flc_flags & FL_ACCESS) && (request->fl_core.flc_type != F_UNLCK)) {
new_fl = locks_alloc_lock();
if (!new_fl)
return -ENOMEM;
@@ -1037,37 +1043,37 @@ static int flock_lock_inode(struct inode *inode, struct file_lock *request)
percpu_down_read(&file_rwsem);
spin_lock(&ctx->flc_lock);
- if (request->fl_flags & FL_ACCESS)
+ if (request->fl_core.flc_flags & FL_ACCESS)
goto find_conflict;
- list_for_each_entry(fl, &ctx->flc_flock, fl_list) {
- if (request->fl_file != fl->fl_file)
+ list_for_each_entry(fl, &ctx->flc_flock, fl_core.flc_list) {
+ if (request->fl_core.flc_file != fl->fl_core.flc_file)
continue;
- if (request->fl_type == fl->fl_type)
+ if (request->fl_core.flc_type == fl->fl_core.flc_type)
goto out;
found = true;
locks_delete_lock_ctx(fl, &dispose);
break;
}
- if (request->fl_type == F_UNLCK) {
- if ((request->fl_flags & FL_EXISTS) && !found)
+ if (request->fl_core.flc_type == F_UNLCK) {
+ if ((request->fl_core.flc_flags & FL_EXISTS) && !found)
error = -ENOENT;
goto out;
}
find_conflict:
- list_for_each_entry(fl, &ctx->flc_flock, fl_list) {
+ list_for_each_entry(fl, &ctx->flc_flock, fl_core.flc_list) {
if (!flock_locks_conflict(request, fl))
continue;
error = -EAGAIN;
- if (!(request->fl_flags & FL_SLEEP))
+ if (!(request->fl_core.flc_flags & FL_SLEEP))
goto out;
error = FILE_LOCK_DEFERRED;
locks_insert_block(fl, request, flock_locks_conflict);
goto out;
}
- if (request->fl_flags & FL_ACCESS)
+ if (request->fl_core.flc_flags & FL_ACCESS)
goto out;
locks_copy_lock(new_fl, request);
locks_move_blocks(new_fl, request);
@@ -1100,9 +1106,9 @@ static int posix_lock_inode(struct inode *inode, struct file_lock *request,
void *owner;
void (*func)(void);
- ctx = locks_get_lock_context(inode, request->fl_type);
+ ctx = locks_get_lock_context(inode, request->fl_core.flc_type);
if (!ctx)
- return (request->fl_type == F_UNLCK) ? 0 : -ENOMEM;
+ return (request->fl_core.flc_type == F_UNLCK) ? 0 : -ENOMEM;
/*
* We may need two file_lock structures for this operation,
@@ -1110,8 +1116,8 @@ static int posix_lock_inode(struct inode *inode, struct file_lock *request,
*
* In some cases we can be sure, that no new locks will be needed
*/
- if (!(request->fl_flags & FL_ACCESS) &&
- (request->fl_type != F_UNLCK ||
+ if (!(request->fl_core.flc_flags & FL_ACCESS) &&
+ (request->fl_core.flc_type != F_UNLCK ||
request->fl_start != 0 || request->fl_end != OFFSET_MAX)) {
new_fl = locks_alloc_lock();
new_fl2 = locks_alloc_lock();
@@ -1125,8 +1131,8 @@ static int posix_lock_inode(struct inode *inode, struct file_lock *request,
* there are any, either return error or put the request on the
* blocker's list of waiters and the global blocked_hash.
*/
- if (request->fl_type != F_UNLCK) {
- list_for_each_entry(fl, &ctx->flc_posix, fl_list) {
+ if (request->fl_core.flc_type != F_UNLCK) {
+ list_for_each_entry(fl, &ctx->flc_posix, fl_core.flc_list) {
if (!posix_locks_conflict(request, fl))
continue;
if (fl->fl_lmops && fl->fl_lmops->lm_lock_expirable
@@ -1143,7 +1149,7 @@ static int posix_lock_inode(struct inode *inode, struct file_lock *request,
if (conflock)
locks_copy_conflock(conflock, fl);
error = -EAGAIN;
- if (!(request->fl_flags & FL_SLEEP))
+ if (!(request->fl_core.flc_flags & FL_SLEEP))
goto out;
/*
* Deadlock detection and insertion into the blocked
@@ -1168,22 +1174,22 @@ static int posix_lock_inode(struct inode *inode, struct file_lock *request,
/* If we're just looking for a conflict, we're done. */
error = 0;
- if (request->fl_flags & FL_ACCESS)
+ if (request->fl_core.flc_flags & FL_ACCESS)
goto out;
/* Find the first old lock with the same owner as the new lock */
- list_for_each_entry(fl, &ctx->flc_posix, fl_list) {
+ list_for_each_entry(fl, &ctx->flc_posix, fl_core.flc_list) {
if (posix_same_owner(request, fl))
break;
}
/* Process locks with this owner. */
- list_for_each_entry_safe_from(fl, tmp, &ctx->flc_posix, fl_list) {
+ list_for_each_entry_safe_from(fl, tmp, &ctx->flc_posix, fl_core.flc_list) {
if (!posix_same_owner(request, fl))
break;
/* Detect adjacent or overlapping regions (if same lock type) */
- if (request->fl_type == fl->fl_type) {
+ if (request->fl_core.flc_type == fl->fl_core.flc_type) {
/* In all comparisons of start vs end, use
* "start - 1" rather than "end + 1". If end
* is OFFSET_MAX, end + 1 will become negative.
@@ -1223,7 +1229,7 @@ static int posix_lock_inode(struct inode *inode, struct file_lock *request,
continue;
if (fl->fl_start > request->fl_end)
break;
- if (request->fl_type == F_UNLCK)
+ if (request->fl_core.flc_type == F_UNLCK)
added = true;
if (fl->fl_start < request->fl_start)
left = fl;
@@ -1256,7 +1262,8 @@ static int posix_lock_inode(struct inode *inode, struct file_lock *request,
locks_move_blocks(new_fl, request);
request = new_fl;
new_fl = NULL;
- locks_insert_lock_ctx(request, &fl->fl_list);
+ locks_insert_lock_ctx(request,
+ &fl->fl_core.flc_list);
locks_delete_lock_ctx(fl, &dispose);
added = true;
}
@@ -1274,8 +1281,8 @@ static int posix_lock_inode(struct inode *inode, struct file_lock *request,
error = 0;
if (!added) {
- if (request->fl_type == F_UNLCK) {
- if (request->fl_flags & FL_EXISTS)
+ if (request->fl_core.flc_type == F_UNLCK) {
+ if (request->fl_core.flc_flags & FL_EXISTS)
error = -ENOENT;
goto out;
}
@@ -1286,7 +1293,7 @@ static int posix_lock_inode(struct inode *inode, struct file_lock *request,
}
locks_copy_lock(new_fl, request);
locks_move_blocks(new_fl, request);
- locks_insert_lock_ctx(new_fl, &fl->fl_list);
+ locks_insert_lock_ctx(new_fl, &fl->fl_core.flc_list);
fl = new_fl;
new_fl = NULL;
}
@@ -1298,7 +1305,7 @@ static int posix_lock_inode(struct inode *inode, struct file_lock *request,
left = new_fl2;
new_fl2 = NULL;
locks_copy_lock(left, right);
- locks_insert_lock_ctx(left, &fl->fl_list);
+ locks_insert_lock_ctx(left, &fl->fl_core.flc_list);
}
right->fl_start = request->fl_end + 1;
locks_wake_up_blocks(right);
@@ -1359,8 +1366,8 @@ static int posix_lock_inode_wait(struct inode *inode, struct file_lock *fl)
error = posix_lock_inode(inode, fl, NULL);
if (error != FILE_LOCK_DEFERRED)
break;
- error = wait_event_interruptible(fl->fl_wait,
- list_empty(&fl->fl_blocked_member));
+ error = wait_event_interruptible(fl->fl_core.flc_wait,
+ list_empty(&fl->fl_core.flc_blocked_member));
if (error)
break;
}
@@ -1372,10 +1379,10 @@ static void lease_clear_pending(struct file_lock *fl, int arg)
{
switch (arg) {
case F_UNLCK:
- fl->fl_flags &= ~FL_UNLOCK_PENDING;
+ fl->fl_core.flc_flags &= ~FL_UNLOCK_PENDING;
fallthrough;
case F_RDLCK:
- fl->fl_flags &= ~FL_DOWNGRADE_PENDING;
+ fl->fl_core.flc_flags &= ~FL_DOWNGRADE_PENDING;
}
}
@@ -1389,11 +1396,11 @@ int lease_modify(struct file_lock *fl, int arg, struct list_head *dispose)
lease_clear_pending(fl, arg);
locks_wake_up_blocks(fl);
if (arg == F_UNLCK) {
- struct file *filp = fl->fl_file;
+ struct file *filp = fl->fl_core.flc_file;
f_delown(filp);
filp->f_owner.signum = 0;
- fasync_helper(0, fl->fl_file, 0, &fl->fl_fasync);
+ fasync_helper(0, fl->fl_core.flc_file, 0, &fl->fl_fasync);
if (fl->fl_fasync != NULL) {
printk(KERN_ERR "locks_delete_lock: fasync == %p\n", fl->fl_fasync);
fl->fl_fasync = NULL;
@@ -1419,7 +1426,7 @@ static void time_out_leases(struct inode *inode, struct list_head *dispose)
lockdep_assert_held(&ctx->flc_lock);
- list_for_each_entry_safe(fl, tmp, &ctx->flc_lease, fl_list) {
+ list_for_each_entry_safe(fl, tmp, &ctx->flc_lease, fl_core.flc_list) {
trace_time_out_leases(inode, fl);
if (past_time(fl->fl_downgrade_time))
lease_modify(fl, F_RDLCK, dispose);
@@ -1435,11 +1442,11 @@ static bool leases_conflict(struct file_lock *lease, struct file_lock *breaker)
if (lease->fl_lmops->lm_breaker_owns_lease
&& lease->fl_lmops->lm_breaker_owns_lease(lease))
return false;
- if ((breaker->fl_flags & FL_LAYOUT) != (lease->fl_flags & FL_LAYOUT)) {
+ if ((breaker->fl_core.flc_flags & FL_LAYOUT) != (lease->fl_core.flc_flags & FL_LAYOUT)) {
rc = false;
goto trace;
}
- if ((breaker->fl_flags & FL_DELEG) && (lease->fl_flags & FL_LEASE)) {
+ if ((breaker->fl_core.flc_flags & FL_DELEG) && (lease->fl_core.flc_flags & FL_LEASE)) {
rc = false;
goto trace;
}
@@ -1458,7 +1465,7 @@ any_leases_conflict(struct inode *inode, struct file_lock *breaker)
lockdep_assert_held(&ctx->flc_lock);
- list_for_each_entry(fl, &ctx->flc_lease, fl_list) {
+ list_for_each_entry(fl, &ctx->flc_lease, fl_core.flc_list) {
if (leases_conflict(fl, breaker))
return true;
}
@@ -1490,7 +1497,7 @@ int __break_lease(struct inode *inode, unsigned int mode, unsigned int type)
new_fl = lease_alloc(NULL, want_write ? F_WRLCK : F_RDLCK);
if (IS_ERR(new_fl))
return PTR_ERR(new_fl);
- new_fl->fl_flags = type;
+ new_fl->fl_core.flc_flags = type;
/* typically we will check that ctx is non-NULL before calling */
ctx = locks_inode_context(inode);
@@ -1514,18 +1521,18 @@ int __break_lease(struct inode *inode, unsigned int mode, unsigned int type)
break_time++; /* so that 0 means no break time */
}
- list_for_each_entry_safe(fl, tmp, &ctx->flc_lease, fl_list) {
+ list_for_each_entry_safe(fl, tmp, &ctx->flc_lease, fl_core.flc_list) {
if (!leases_conflict(fl, new_fl))
continue;
if (want_write) {
- if (fl->fl_flags & FL_UNLOCK_PENDING)
+ if (fl->fl_core.flc_flags & FL_UNLOCK_PENDING)
continue;
- fl->fl_flags |= FL_UNLOCK_PENDING;
+ fl->fl_core.flc_flags |= FL_UNLOCK_PENDING;
fl->fl_break_time = break_time;
} else {
if (lease_breaking(fl))
continue;
- fl->fl_flags |= FL_DOWNGRADE_PENDING;
+ fl->fl_core.flc_flags |= FL_DOWNGRADE_PENDING;
fl->fl_downgrade_time = break_time;
}
if (fl->fl_lmops->lm_break(fl))
@@ -1542,7 +1549,7 @@ int __break_lease(struct inode *inode, unsigned int mode, unsigned int type)
}
restart:
- fl = list_first_entry(&ctx->flc_lease, struct file_lock, fl_list);
+ fl = list_first_entry(&ctx->flc_lease, struct file_lock, fl_core.flc_list);
break_time = fl->fl_break_time;
if (break_time != 0)
break_time -= jiffies;
@@ -1554,9 +1561,9 @@ int __break_lease(struct inode *inode, unsigned int mode, unsigned int type)
percpu_up_read(&file_rwsem);
locks_dispose_list(&dispose);
- error = wait_event_interruptible_timeout(new_fl->fl_wait,
- list_empty(&new_fl->fl_blocked_member),
- break_time);
+ error = wait_event_interruptible_timeout(new_fl->fl_core.flc_wait,
+ list_empty(&new_fl->fl_core.flc_blocked_member),
+ break_time);
percpu_down_read(&file_rwsem);
spin_lock(&ctx->flc_lock);
@@ -1602,8 +1609,8 @@ void lease_get_mtime(struct inode *inode, struct timespec64 *time)
if (ctx && !list_empty_careful(&ctx->flc_lease)) {
spin_lock(&ctx->flc_lock);
fl = list_first_entry_or_null(&ctx->flc_lease,
- struct file_lock, fl_list);
- if (fl && (fl->fl_type == F_WRLCK))
+ struct file_lock, fl_core.flc_list);
+ if (fl && (fl->fl_core.flc_type == F_WRLCK))
has_lease = true;
spin_unlock(&ctx->flc_lock);
}
@@ -1649,8 +1656,8 @@ int fcntl_getlease(struct file *filp)
percpu_down_read(&file_rwsem);
spin_lock(&ctx->flc_lock);
time_out_leases(inode, &dispose);
- list_for_each_entry(fl, &ctx->flc_lease, fl_list) {
- if (fl->fl_file != filp)
+ list_for_each_entry(fl, &ctx->flc_lease, fl_core.flc_list) {
+ if (fl->fl_core.flc_file != filp)
continue;
type = target_leasetype(fl);
break;
@@ -1715,7 +1722,7 @@ generic_add_lease(struct file *filp, int arg, struct file_lock **flp, void **pri
struct file_lock *fl, *my_fl = NULL, *lease;
struct inode *inode = file_inode(filp);
struct file_lock_context *ctx;
- bool is_deleg = (*flp)->fl_flags & FL_DELEG;
+ bool is_deleg = (*flp)->fl_core.flc_flags & FL_DELEG;
int error;
LIST_HEAD(dispose);
@@ -1741,7 +1748,7 @@ generic_add_lease(struct file *filp, int arg, struct file_lock **flp, void **pri
percpu_down_read(&file_rwsem);
spin_lock(&ctx->flc_lock);
time_out_leases(inode, &dispose);
- error = check_conflicting_open(filp, arg, lease->fl_flags);
+ error = check_conflicting_open(filp, arg, lease->fl_core.flc_flags);
if (error)
goto out;
@@ -1754,9 +1761,9 @@ generic_add_lease(struct file *filp, int arg, struct file_lock **flp, void **pri
* except for this filp.
*/
error = -EAGAIN;
- list_for_each_entry(fl, &ctx->flc_lease, fl_list) {
- if (fl->fl_file == filp &&
- fl->fl_owner == lease->fl_owner) {
+ list_for_each_entry(fl, &ctx->flc_lease, fl_core.flc_list) {
+ if (fl->fl_core.flc_file == filp &&
+ fl->fl_core.flc_owner == lease->fl_core.flc_owner) {
my_fl = fl;
continue;
}
@@ -1771,7 +1778,7 @@ generic_add_lease(struct file *filp, int arg, struct file_lock **flp, void **pri
* Modifying our existing lease is OK, but no getting a
* new lease if someone else is opening for write:
*/
- if (fl->fl_flags & FL_UNLOCK_PENDING)
+ if (fl->fl_core.flc_flags & FL_UNLOCK_PENDING)
goto out;
}
@@ -1798,7 +1805,7 @@ generic_add_lease(struct file *filp, int arg, struct file_lock **flp, void **pri
* precedes these checks.
*/
smp_mb();
- error = check_conflicting_open(filp, arg, lease->fl_flags);
+ error = check_conflicting_open(filp, arg, lease->fl_core.flc_flags);
if (error) {
locks_unlink_lock_ctx(lease);
goto out;
@@ -1834,9 +1841,9 @@ static int generic_delete_lease(struct file *filp, void *owner)
percpu_down_read(&file_rwsem);
spin_lock(&ctx->flc_lock);
- list_for_each_entry(fl, &ctx->flc_lease, fl_list) {
- if (fl->fl_file == filp &&
- fl->fl_owner == owner) {
+ list_for_each_entry(fl, &ctx->flc_lease, fl_core.flc_list) {
+ if (fl->fl_core.flc_file == filp &&
+ fl->fl_core.flc_owner == owner) {
victim = fl;
break;
}
@@ -2012,8 +2019,8 @@ static int flock_lock_inode_wait(struct inode *inode, struct file_lock *fl)
error = flock_lock_inode(inode, fl);
if (error != FILE_LOCK_DEFERRED)
break;
- error = wait_event_interruptible(fl->fl_wait,
- list_empty(&fl->fl_blocked_member));
+ error = wait_event_interruptible(fl->fl_core.flc_wait,
+ list_empty(&fl->fl_core.flc_blocked_member));
if (error)
break;
}
@@ -2031,7 +2038,7 @@ static int flock_lock_inode_wait(struct inode *inode, struct file_lock *fl)
int locks_lock_inode_wait(struct inode *inode, struct file_lock *fl)
{
int res = 0;
- switch (fl->fl_flags & (FL_POSIX|FL_FLOCK)) {
+ switch (fl->fl_core.flc_flags & (FL_POSIX|FL_FLOCK)) {
case FL_POSIX:
res = posix_lock_inode_wait(inode, fl);
break;
@@ -2093,13 +2100,13 @@ SYSCALL_DEFINE2(flock, unsigned int, fd, unsigned int, cmd)
flock_make_lock(f.file, &fl, type);
- error = security_file_lock(f.file, fl.fl_type);
+ error = security_file_lock(f.file, fl.fl_core.flc_type);
if (error)
goto out_putf;
can_sleep = !(cmd & LOCK_NB);
if (can_sleep)
- fl.fl_flags |= FL_SLEEP;
+ fl.fl_core.flc_flags |= FL_SLEEP;
if (f.file->f_op->flock)
error = f.file->f_op->flock(f.file,
@@ -2125,7 +2132,7 @@ SYSCALL_DEFINE2(flock, unsigned int, fd, unsigned int, cmd)
*/
int vfs_test_lock(struct file *filp, struct file_lock *fl)
{
- WARN_ON_ONCE(filp != fl->fl_file);
+ WARN_ON_ONCE(filp != fl->fl_core.flc_file);
if (filp->f_op->lock)
return filp->f_op->lock(filp, F_GETLK, fl);
posix_test_lock(filp, fl);
@@ -2145,12 +2152,12 @@ static pid_t locks_translate_pid(struct file_lock *fl, struct pid_namespace *ns)
pid_t vnr;
struct pid *pid;
- if (fl->fl_flags & FL_OFDLCK)
+ if (fl->fl_core.flc_flags & FL_OFDLCK)
return -1;
/* Remote locks report a negative pid value */
- if (fl->fl_pid <= 0)
- return fl->fl_pid;
+ if (fl->fl_core.flc_pid <= 0)
+ return fl->fl_core.flc_pid;
/*
* If the flock owner process is dead and its pid has been already
@@ -2158,10 +2165,10 @@ static pid_t locks_translate_pid(struct file_lock *fl, struct pid_namespace *ns)
* flock owner pid number in init pidns.
*/
if (ns == &init_pid_ns)
- return (pid_t)fl->fl_pid;
+ return (pid_t) fl->fl_core.flc_pid;
rcu_read_lock();
- pid = find_pid_ns(fl->fl_pid, &init_pid_ns);
+ pid = find_pid_ns(fl->fl_core.flc_pid, &init_pid_ns);
vnr = pid_nr_ns(pid, ns);
rcu_read_unlock();
return vnr;
@@ -2184,7 +2191,7 @@ static int posix_lock_to_flock(struct flock *flock, struct file_lock *fl)
flock->l_len = fl->fl_end == OFFSET_MAX ? 0 :
fl->fl_end - fl->fl_start + 1;
flock->l_whence = 0;
- flock->l_type = fl->fl_type;
+ flock->l_type = fl->fl_core.flc_type;
return 0;
}
@@ -2196,7 +2203,7 @@ static void posix_lock_to_flock64(struct flock64 *flock, struct file_lock *fl)
flock->l_len = fl->fl_end == OFFSET_MAX ? 0 :
fl->fl_end - fl->fl_start + 1;
flock->l_whence = 0;
- flock->l_type = fl->fl_type;
+ flock->l_type = fl->fl_core.flc_type;
}
#endif
@@ -2225,16 +2232,16 @@ int fcntl_getlk(struct file *filp, unsigned int cmd, struct flock *flock)
if (flock->l_pid != 0)
goto out;
- fl->fl_flags |= FL_OFDLCK;
- fl->fl_owner = filp;
+ fl->fl_core.flc_flags |= FL_OFDLCK;
+ fl->fl_core.flc_owner = filp;
}
error = vfs_test_lock(filp, fl);
if (error)
goto out;
- flock->l_type = fl->fl_type;
- if (fl->fl_type != F_UNLCK) {
+ flock->l_type = fl->fl_core.flc_type;
+ if (fl->fl_core.flc_type != F_UNLCK) {
error = posix_lock_to_flock(flock, fl);
if (error)
goto out;
@@ -2281,7 +2288,7 @@ int fcntl_getlk(struct file *filp, unsigned int cmd, struct flock *flock)
*/
int vfs_lock_file(struct file *filp, unsigned int cmd, struct file_lock *fl, struct file_lock *conf)
{
- WARN_ON_ONCE(filp != fl->fl_file);
+ WARN_ON_ONCE(filp != fl->fl_core.flc_file);
if (filp->f_op->lock)
return filp->f_op->lock(filp, cmd, fl);
else
@@ -2294,7 +2301,7 @@ static int do_lock_file_wait(struct file *filp, unsigned int cmd,
{
int error;
- error = security_file_lock(filp, fl->fl_type);
+ error = security_file_lock(filp, fl->fl_core.flc_type);
if (error)
return error;
@@ -2302,8 +2309,8 @@ static int do_lock_file_wait(struct file *filp, unsigned int cmd,
error = vfs_lock_file(filp, cmd, fl, NULL);
if (error != FILE_LOCK_DEFERRED)
break;
- error = wait_event_interruptible(fl->fl_wait,
- list_empty(&fl->fl_blocked_member));
+ error = wait_event_interruptible(fl->fl_core.flc_wait,
+ list_empty(&fl->fl_core.flc_blocked_member));
if (error)
break;
}
@@ -2316,13 +2323,13 @@ static int do_lock_file_wait(struct file *filp, unsigned int cmd,
static int
check_fmode_for_setlk(struct file_lock *fl)
{
- switch (fl->fl_type) {
+ switch (fl->fl_core.flc_type) {
case F_RDLCK:
- if (!(fl->fl_file->f_mode & FMODE_READ))
+ if (!(fl->fl_core.flc_file->f_mode & FMODE_READ))
return -EBADF;
break;
case F_WRLCK:
- if (!(fl->fl_file->f_mode & FMODE_WRITE))
+ if (!(fl->fl_core.flc_file->f_mode & FMODE_WRITE))
return -EBADF;
}
return 0;
@@ -2361,8 +2368,8 @@ int fcntl_setlk(unsigned int fd, struct file *filp, unsigned int cmd,
goto out;
cmd = F_SETLK;
- file_lock->fl_flags |= FL_OFDLCK;
- file_lock->fl_owner = filp;
+ file_lock->fl_core.flc_flags |= FL_OFDLCK;
+ file_lock->fl_core.flc_owner = filp;
break;
case F_OFD_SETLKW:
error = -EINVAL;
@@ -2370,11 +2377,11 @@ int fcntl_setlk(unsigned int fd, struct file *filp, unsigned int cmd,
goto out;
cmd = F_SETLKW;
- file_lock->fl_flags |= FL_OFDLCK;
- file_lock->fl_owner = filp;
+ file_lock->fl_core.flc_flags |= FL_OFDLCK;
+ file_lock->fl_core.flc_owner = filp;
fallthrough;
case F_SETLKW:
- file_lock->fl_flags |= FL_SLEEP;
+ file_lock->fl_core.flc_flags |= FL_SLEEP;
}
error = do_lock_file_wait(filp, cmd, file_lock);
@@ -2384,8 +2391,8 @@ int fcntl_setlk(unsigned int fd, struct file *filp, unsigned int cmd,
* lock that was just acquired. There is no need to do that when we're
* unlocking though, or for OFD locks.
*/
- if (!error && file_lock->fl_type != F_UNLCK &&
- !(file_lock->fl_flags & FL_OFDLCK)) {
+ if (!error && file_lock->fl_core.flc_type != F_UNLCK &&
+ !(file_lock->fl_core.flc_flags & FL_OFDLCK)) {
struct files_struct *files = current->files;
/*
* We need that spin_lock here - it prevents reordering between
@@ -2396,7 +2403,7 @@ int fcntl_setlk(unsigned int fd, struct file *filp, unsigned int cmd,
f = files_lookup_fd_locked(files, fd);
spin_unlock(&files->file_lock);
if (f != filp) {
- file_lock->fl_type = F_UNLCK;
+ file_lock->fl_core.flc_type = F_UNLCK;
error = do_lock_file_wait(filp, cmd, file_lock);
WARN_ON_ONCE(error);
error = -EBADF;
@@ -2435,16 +2442,16 @@ int fcntl_getlk64(struct file *filp, unsigned int cmd, struct flock64 *flock)
if (flock->l_pid != 0)
goto out;
- fl->fl_flags |= FL_OFDLCK;
- fl->fl_owner = filp;
+ fl->fl_core.flc_flags |= FL_OFDLCK;
+ fl->fl_core.flc_owner = filp;
}
error = vfs_test_lock(filp, fl);
if (error)
goto out;
- flock->l_type = fl->fl_type;
- if (fl->fl_type != F_UNLCK)
+ flock->l_type = fl->fl_core.flc_type;
+ if (fl->fl_core.flc_type != F_UNLCK)
posix_lock_to_flock64(flock, fl);
out:
@@ -2484,8 +2491,8 @@ int fcntl_setlk64(unsigned int fd, struct file *filp, unsigned int cmd,
goto out;
cmd = F_SETLK64;
- file_lock->fl_flags |= FL_OFDLCK;
- file_lock->fl_owner = filp;
+ file_lock->fl_core.flc_flags |= FL_OFDLCK;
+ file_lock->fl_core.flc_owner = filp;
break;
case F_OFD_SETLKW:
error = -EINVAL;
@@ -2493,11 +2500,11 @@ int fcntl_setlk64(unsigned int fd, struct file *filp, unsigned int cmd,
goto out;
cmd = F_SETLKW64;
- file_lock->fl_flags |= FL_OFDLCK;
- file_lock->fl_owner = filp;
+ file_lock->fl_core.flc_flags |= FL_OFDLCK;
+ file_lock->fl_core.flc_owner = filp;
fallthrough;
case F_SETLKW64:
- file_lock->fl_flags |= FL_SLEEP;
+ file_lock->fl_core.flc_flags |= FL_SLEEP;
}
error = do_lock_file_wait(filp, cmd, file_lock);
@@ -2507,8 +2514,8 @@ int fcntl_setlk64(unsigned int fd, struct file *filp, unsigned int cmd,
* lock that was just acquired. There is no need to do that when we're
* unlocking though, or for OFD locks.
*/
- if (!error && file_lock->fl_type != F_UNLCK &&
- !(file_lock->fl_flags & FL_OFDLCK)) {
+ if (!error && file_lock->fl_core.flc_type != F_UNLCK &&
+ !(file_lock->fl_core.flc_flags & FL_OFDLCK)) {
struct files_struct *files = current->files;
/*
* We need that spin_lock here - it prevents reordering between
@@ -2519,7 +2526,7 @@ int fcntl_setlk64(unsigned int fd, struct file *filp, unsigned int cmd,
f = files_lookup_fd_locked(files, fd);
spin_unlock(&files->file_lock);
if (f != filp) {
- file_lock->fl_type = F_UNLCK;
+ file_lock->fl_core.flc_type = F_UNLCK;
error = do_lock_file_wait(filp, cmd, file_lock);
WARN_ON_ONCE(error);
error = -EBADF;
@@ -2553,13 +2560,13 @@ void locks_remove_posix(struct file *filp, fl_owner_t owner)
return;
locks_init_lock(&lock);
- lock.fl_type = F_UNLCK;
- lock.fl_flags = FL_POSIX | FL_CLOSE;
+ lock.fl_core.flc_type = F_UNLCK;
+ lock.fl_core.flc_flags = FL_POSIX | FL_CLOSE;
lock.fl_start = 0;
lock.fl_end = OFFSET_MAX;
- lock.fl_owner = owner;
- lock.fl_pid = current->tgid;
- lock.fl_file = filp;
+ lock.fl_core.flc_owner = owner;
+ lock.fl_core.flc_pid = current->tgid;
+ lock.fl_core.flc_file = filp;
lock.fl_ops = NULL;
lock.fl_lmops = NULL;
@@ -2582,7 +2589,7 @@ locks_remove_flock(struct file *filp, struct file_lock_context *flctx)
return;
flock_make_lock(filp, &fl, F_UNLCK);
- fl.fl_flags |= FL_CLOSE;
+ fl.fl_core.flc_flags |= FL_CLOSE;
if (filp->f_op->flock)
filp->f_op->flock(filp, F_SETLKW, &fl);
@@ -2605,8 +2612,8 @@ locks_remove_lease(struct file *filp, struct file_lock_context *ctx)
percpu_down_read(&file_rwsem);
spin_lock(&ctx->flc_lock);
- list_for_each_entry_safe(fl, tmp, &ctx->flc_lease, fl_list)
- if (filp == fl->fl_file)
+ list_for_each_entry_safe(fl, tmp, &ctx->flc_lease, fl_core.flc_list)
+ if (filp == fl->fl_core.flc_file)
lease_modify(fl, F_UNLCK, &dispose);
spin_unlock(&ctx->flc_lock);
percpu_up_read(&file_rwsem);
@@ -2650,7 +2657,7 @@ void locks_remove_file(struct file *filp)
*/
int vfs_cancel_lock(struct file *filp, struct file_lock *fl)
{
- WARN_ON_ONCE(filp != fl->fl_file);
+ WARN_ON_ONCE(filp != fl->fl_core.flc_file);
if (filp->f_op->lock)
return filp->f_op->lock(filp, F_CANCELLK, fl);
return 0;
@@ -2695,7 +2702,7 @@ static void lock_get_status(struct seq_file *f, struct file_lock *fl,
struct inode *inode = NULL;
unsigned int pid;
struct pid_namespace *proc_pidns = proc_pid_ns(file_inode(f->file)->i_sb);
- int type = fl->fl_type;
+ int type = fl->fl_core.flc_type;
pid = locks_translate_pid(fl, proc_pidns);
/*
@@ -2704,37 +2711,37 @@ static void lock_get_status(struct seq_file *f, struct file_lock *fl,
* init_pid_ns to get saved lock pid value.
*/
- if (fl->fl_file != NULL)
- inode = file_inode(fl->fl_file);
+ if (fl->fl_core.flc_file != NULL)
+ inode = file_inode(fl->fl_core.flc_file);
seq_printf(f, "%lld: ", id);
if (repeat)
seq_printf(f, "%*s", repeat - 1 + (int)strlen(pfx), pfx);
- if (fl->fl_flags & FL_POSIX) {
- if (fl->fl_flags & FL_ACCESS)
+ if (fl->fl_core.flc_flags & FL_POSIX) {
+ if (fl->fl_core.flc_flags & FL_ACCESS)
seq_puts(f, "ACCESS");
- else if (fl->fl_flags & FL_OFDLCK)
+ else if (fl->fl_core.flc_flags & FL_OFDLCK)
seq_puts(f, "OFDLCK");
else
seq_puts(f, "POSIX ");
seq_printf(f, " %s ",
(inode == NULL) ? "*NOINODE*" : "ADVISORY ");
- } else if (fl->fl_flags & FL_FLOCK) {
+ } else if (fl->fl_core.flc_flags & FL_FLOCK) {
seq_puts(f, "FLOCK ADVISORY ");
- } else if (fl->fl_flags & (FL_LEASE|FL_DELEG|FL_LAYOUT)) {
+ } else if (fl->fl_core.flc_flags & (FL_LEASE|FL_DELEG|FL_LAYOUT)) {
type = target_leasetype(fl);
- if (fl->fl_flags & FL_DELEG)
+ if (fl->fl_core.flc_flags & FL_DELEG)
seq_puts(f, "DELEG ");
else
seq_puts(f, "LEASE ");
if (lease_breaking(fl))
seq_puts(f, "BREAKING ");
- else if (fl->fl_file)
+ else if (fl->fl_core.flc_file)
seq_puts(f, "ACTIVE ");
else
seq_puts(f, "BREAKER ");
@@ -2752,7 +2759,7 @@ static void lock_get_status(struct seq_file *f, struct file_lock *fl,
} else {
seq_printf(f, "%d <none>:0 ", pid);
}
- if (fl->fl_flags & FL_POSIX) {
+ if (fl->fl_core.flc_flags & FL_POSIX) {
if (fl->fl_end == OFFSET_MAX)
seq_printf(f, "%Ld EOF\n", fl->fl_start);
else
@@ -2767,13 +2774,14 @@ static struct file_lock *get_next_blocked_member(struct file_lock *node)
struct file_lock *tmp;
/* NULL node or root node */
- if (node == NULL || node->fl_blocker == NULL)
+ if (node == NULL || node->fl_core.flc_blocker == NULL)
return NULL;
/* Next member in the linked list could be itself */
- tmp = list_next_entry(node, fl_blocked_member);
- if (list_entry_is_head(tmp, &node->fl_blocker->fl_blocked_requests, fl_blocked_member)
- || tmp == node) {
+ tmp = list_next_entry(node, fl_core.flc_blocked_member);
+ if (list_entry_is_head(tmp, &node->fl_core.flc_blocker->fl_core.flc_blocked_requests,
+ fl_core.flc_blocked_member)
+ || tmp == node) {
return NULL;
}
@@ -2787,13 +2795,13 @@ static int locks_show(struct seq_file *f, void *v)
struct pid_namespace *proc_pidns = proc_pid_ns(file_inode(f->file)->i_sb);
int level = 0;
- cur = hlist_entry(v, struct file_lock, fl_link);
+ cur = hlist_entry(v, struct file_lock, fl_core.flc_link);
if (locks_translate_pid(cur, proc_pidns) == 0)
return 0;
/* View this crossed linked list as a binary tree, the first member of fl_blocked_requests
- * is the left child of current node, the next silibing in fl_blocked_member is the
+ * is the left child of current node, the next silibing in flc_blocked_member is the
* right child, we can alse get the parent of current node from fl_blocker, so this
* question becomes traversal of a binary tree
*/
@@ -2803,17 +2811,18 @@ static int locks_show(struct seq_file *f, void *v)
else
lock_get_status(f, cur, iter->li_pos, "", level);
- if (!list_empty(&cur->fl_blocked_requests)) {
+ if (!list_empty(&cur->fl_core.flc_blocked_requests)) {
/* Turn left */
- cur = list_first_entry_or_null(&cur->fl_blocked_requests,
- struct file_lock, fl_blocked_member);
+ cur = list_first_entry_or_null(&cur->fl_core.flc_blocked_requests,
+ struct file_lock,
+ fl_core.flc_blocked_member);
level++;
} else {
/* Turn right */
tmp = get_next_blocked_member(cur);
/* Fall back to parent node */
- while (tmp == NULL && cur->fl_blocker != NULL) {
- cur = cur->fl_blocker;
+ while (tmp == NULL && cur->fl_core.flc_blocker != NULL) {
+ cur = cur->fl_core.flc_blocker;
level--;
tmp = get_next_blocked_member(cur);
}
@@ -2830,12 +2839,12 @@ static void __show_fd_locks(struct seq_file *f,
{
struct file_lock *fl;
- list_for_each_entry(fl, head, fl_list) {
+ list_for_each_entry(fl, head, fl_core.flc_list) {
- if (filp != fl->fl_file)
+ if (filp != fl->fl_core.flc_file)
continue;
- if (fl->fl_owner != files &&
- fl->fl_owner != filp)
+ if (fl->fl_core.flc_owner != files &&
+ fl->fl_core.flc_owner != filp)
continue;
(*id)++;
diff --git a/include/trace/events/filelock.h b/include/trace/events/filelock.h
index 8fb1d41b1c67..9efd7205460c 100644
--- a/include/trace/events/filelock.h
+++ b/include/trace/events/filelock.h
@@ -82,11 +82,11 @@ DECLARE_EVENT_CLASS(filelock_lock,
__entry->fl = fl ? fl : NULL;
__entry->s_dev = inode->i_sb->s_dev;
__entry->i_ino = inode->i_ino;
- __entry->blocker = fl ? fl->fl_blocker : NULL;
- __entry->owner = fl ? fl->fl_owner : NULL;
- __entry->pid = fl ? fl->fl_pid : 0;
- __entry->flags = fl ? fl->fl_flags : 0;
- __entry->type = fl ? fl->fl_type : 0;
+ __entry->blocker = fl ? fl->fl_core.flc_blocker : NULL;
+ __entry->owner = fl ? fl->fl_core.flc_owner : NULL;
+ __entry->pid = fl ? fl->fl_core.flc_pid : 0;
+ __entry->flags = fl ? fl->fl_core.flc_flags : 0;
+ __entry->type = fl ? fl->fl_core.flc_type : 0;
__entry->fl_start = fl ? fl->fl_start : 0;
__entry->fl_end = fl ? fl->fl_end : 0;
__entry->ret = ret;
@@ -137,10 +137,10 @@ DECLARE_EVENT_CLASS(filelock_lease,
__entry->fl = fl ? fl : NULL;
__entry->s_dev = inode->i_sb->s_dev;
__entry->i_ino = inode->i_ino;
- __entry->blocker = fl ? fl->fl_blocker : NULL;
- __entry->owner = fl ? fl->fl_owner : NULL;
- __entry->flags = fl ? fl->fl_flags : 0;
- __entry->type = fl ? fl->fl_type : 0;
+ __entry->blocker = fl ? fl->fl_core.flc_blocker : NULL;
+ __entry->owner = fl ? fl->fl_core.flc_owner : NULL;
+ __entry->flags = fl ? fl->fl_core.flc_flags : 0;
+ __entry->type = fl ? fl->fl_core.flc_type : 0;
__entry->break_time = fl ? fl->fl_break_time : 0;
__entry->downgrade_time = fl ? fl->fl_downgrade_time : 0;
),
@@ -190,9 +190,9 @@ TRACE_EVENT(generic_add_lease,
__entry->wcount = atomic_read(&inode->i_writecount);
__entry->rcount = atomic_read(&inode->i_readcount);
__entry->icount = atomic_read(&inode->i_count);
- __entry->owner = fl->fl_owner;
- __entry->flags = fl->fl_flags;
- __entry->type = fl->fl_type;
+ __entry->owner = fl->fl_core.flc_owner;
+ __entry->flags = fl->fl_core.flc_flags;
+ __entry->type = fl->fl_core.flc_type;
),
TP_printk("dev=0x%x:0x%x ino=0x%lx wcount=%d rcount=%d icount=%d fl_owner=%p fl_flags=%s fl_type=%s",
@@ -220,11 +220,11 @@ TRACE_EVENT(leases_conflict,
TP_fast_assign(
__entry->lease = lease;
- __entry->l_fl_flags = lease->fl_flags;
- __entry->l_fl_type = lease->fl_type;
+ __entry->l_fl_flags = lease->fl_core.flc_flags;
+ __entry->l_fl_type = lease->fl_core.flc_type;
__entry->breaker = breaker;
- __entry->b_fl_flags = breaker->fl_flags;
- __entry->b_fl_type = breaker->fl_type;
+ __entry->b_fl_flags = breaker->fl_core.flc_flags;
+ __entry->b_fl_type = breaker->fl_core.flc_type;
__entry->conflict = conflict;
),
--
2.43.0
Convert some internal fs/locks.c functions to take and deal with struct
file_lock_core instead of struct file_lock:
- locks_init_lock_heads
- locks_alloc_lock
- locks_init_lock
Signed-off-by: Jeff Layton <[email protected]>
---
fs/locks.c | 16 ++++++++--------
1 file changed, 8 insertions(+), 8 deletions(-)
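To illustrate the shape of these conversions, here is a minimal,
self-contained userspace sketch of the embedding pattern (simplified
layouts and made-up field subsets, not the actual kernel definitions):
a helper that only touches the shared fields can take the embedded core
directly, and file_lock-based callers just pass &fl->fl_core, much as
locks_alloc_lock() and locks_init_lock() now do.

#include <stdio.h>

/* Simplified stand-ins for the kernel structures (assumption: the real
 * definitions have many more fields and use kernel primitives). */
struct file_lock_core {
    unsigned int  flc_flags;
    unsigned char flc_type;
    int           flc_pid;
};

struct file_lock {
    struct file_lock_core fl_core;  /* fields shared with leases */
    long long fl_start;             /* byte-range fields stay lock-only */
    long long fl_end;
};

/* A helper that only touches shared state can take the core directly... */
static void init_core(struct file_lock_core *flc)
{
    flc->flc_flags = 0;
    flc->flc_type  = 0;
    flc->flc_pid   = 0;
}

/* ...and a file_lock-based caller simply passes &fl->fl_core. */
int main(void)
{
    struct file_lock fl = { .fl_start = 0, .fl_end = -1 };

    init_core(&fl.fl_core);
    printf("type=%d start=%lld\n", fl.fl_core.flc_type, fl.fl_start);
    return 0;
}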
diff --git a/fs/locks.c b/fs/locks.c
index b06fa4dea298..3a91515dbccd 100644
--- a/fs/locks.c
+++ b/fs/locks.c
@@ -251,13 +251,13 @@ locks_free_lock_context(struct inode *inode)
}
}
-static void locks_init_lock_heads(struct file_lock *fl)
+static void locks_init_lock_heads(struct file_lock_core *flc)
{
- INIT_HLIST_NODE(&fl->fl_core.flc_link);
- INIT_LIST_HEAD(&fl->fl_core.flc_list);
- INIT_LIST_HEAD(&fl->fl_core.flc_blocked_requests);
- INIT_LIST_HEAD(&fl->fl_core.flc_blocked_member);
- init_waitqueue_head(&fl->fl_core.flc_wait);
+ INIT_HLIST_NODE(&flc->flc_link);
+ INIT_LIST_HEAD(&flc->flc_list);
+ INIT_LIST_HEAD(&flc->flc_blocked_requests);
+ INIT_LIST_HEAD(&flc->flc_blocked_member);
+ init_waitqueue_head(&flc->flc_wait);
}
/* Allocate an empty lock structure. */
@@ -266,7 +266,7 @@ struct file_lock *locks_alloc_lock(void)
struct file_lock *fl = kmem_cache_zalloc(filelock_cache, GFP_KERNEL);
if (fl)
- locks_init_lock_heads(fl);
+ locks_init_lock_heads(&fl->fl_core);
return fl;
}
@@ -347,7 +347,7 @@ locks_dispose_list(struct list_head *dispose)
void locks_init_lock(struct file_lock *fl)
{
memset(fl, 0, sizeof(struct file_lock));
- locks_init_lock_heads(fl);
+ locks_init_lock_heads(&fl->fl_core);
}
EXPORT_SYMBOL(locks_init_lock);
--
2.43.0
Convert more internal fs/locks.c functions to take and deal with struct
file_lock_core instead of struct file_lock:
- locks_dump_ctx_list
- locks_check_ctx_file_list
- locks_release_private
- locks_owner_has_blockers
Signed-off-by: Jeff Layton <[email protected]>
---
fs/locks.c | 51 +++++++++++++++++++++++++--------------------------
1 file changed, 25 insertions(+), 26 deletions(-)
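The same idea applies to list walking: once the list linkage lives in
struct file_lock_core, loops like the ones converted here can iterate
the context lists as cores without knowing about struct file_lock at
all. A rough userspace sketch of that pattern (toy intrusive list and
simplified structs, not the kernel's <linux/list.h>):

#include <stdio.h>
#include <stddef.h>

/* Toy singly-linked intrusive list, standing in for the kernel list API. */
struct list_node {
    struct list_node *next;
};

#define container_of(ptr, type, member) \
    ((type *)((char *)(ptr) - offsetof(type, member)))

struct file_lock_core {
    struct list_node flc_list;      /* list linkage now lives in the core */
    int flc_type;
};

struct file_lock {
    struct file_lock_core fl_core;
    long long fl_start, fl_end;     /* byte-range fields stay lock-only */
};

/* Because flc_list is in the core, a list walker needs no knowledge of
 * struct file_lock at all. */
static void dump_types(struct list_node *head)
{
    for (struct list_node *n = head->next; n; n = n->next) {
        struct file_lock_core *flc =
            container_of(n, struct file_lock_core, flc_list);
        printf("type=%d\n", flc->flc_type);
    }
}

int main(void)
{
    struct file_lock a = { .fl_core.flc_type = 1 };
    struct file_lock b = { .fl_core.flc_type = 2 };
    struct list_node head = { .next = &a.fl_core.flc_list };

    a.fl_core.flc_list.next = &b.fl_core.flc_list;
    b.fl_core.flc_list.next = NULL;
    dump_types(&head);
    return 0;
}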
diff --git a/fs/locks.c b/fs/locks.c
index 3a91515dbccd..a0d6fc0e043a 100644
--- a/fs/locks.c
+++ b/fs/locks.c
@@ -197,13 +197,12 @@ locks_get_lock_context(struct inode *inode, int type)
static void
locks_dump_ctx_list(struct list_head *list, char *list_type)
{
- struct file_lock *fl;
+ struct file_lock_core *flc;
- list_for_each_entry(fl, list, fl_core.flc_list) {
- pr_warn("%s: fl_owner=%p fl_flags=0x%x fl_type=0x%x fl_pid=%u\n", list_type,
- fl->fl_core.flc_owner, fl->fl_core.flc_flags,
- fl->fl_core.flc_type, fl->fl_core.flc_pid);
- }
+ list_for_each_entry(flc, list, flc_list)
+ pr_warn("%s: fl_owner=%p fl_flags=0x%x fl_type=0x%x fl_pid=%u\n",
+ list_type, flc->flc_owner, flc->flc_flags,
+ flc->flc_type, flc->flc_pid);
}
static void
@@ -224,20 +223,19 @@ locks_check_ctx_lists(struct inode *inode)
}
static void
-locks_check_ctx_file_list(struct file *filp, struct list_head *list,
- char *list_type)
+locks_check_ctx_file_list(struct file *filp, struct list_head *list, char *list_type)
{
- struct file_lock *fl;
+ struct file_lock_core *flc;
struct inode *inode = file_inode(filp);
- list_for_each_entry(fl, list, fl_core.flc_list)
- if (fl->fl_core.flc_file == filp)
+ list_for_each_entry(flc, list, flc_list)
+ if (flc->flc_file == filp)
pr_warn("Leaked %s lock on dev=0x%x:0x%x ino=0x%lx "
" fl_owner=%p fl_flags=0x%x fl_type=0x%x fl_pid=%u\n",
list_type, MAJOR(inode->i_sb->s_dev),
MINOR(inode->i_sb->s_dev), inode->i_ino,
- fl->fl_core.flc_owner, fl->fl_core.flc_flags,
- fl->fl_core.flc_type, fl->fl_core.flc_pid);
+ flc->flc_owner, flc->flc_flags,
+ flc->flc_type, flc->flc_pid);
}
void
@@ -274,11 +272,13 @@ EXPORT_SYMBOL_GPL(locks_alloc_lock);
void locks_release_private(struct file_lock *fl)
{
- BUG_ON(waitqueue_active(&fl->fl_core.flc_wait));
- BUG_ON(!list_empty(&fl->fl_core.flc_list));
- BUG_ON(!list_empty(&fl->fl_core.flc_blocked_requests));
- BUG_ON(!list_empty(&fl->fl_core.flc_blocked_member));
- BUG_ON(!hlist_unhashed(&fl->fl_core.flc_link));
+ struct file_lock_core *flc = &fl->fl_core;
+
+ BUG_ON(waitqueue_active(&flc->flc_wait));
+ BUG_ON(!list_empty(&flc->flc_list));
+ BUG_ON(!list_empty(&flc->flc_blocked_requests));
+ BUG_ON(!list_empty(&flc->flc_blocked_member));
+ BUG_ON(!hlist_unhashed(&flc->flc_link));
if (fl->fl_ops) {
if (fl->fl_ops->fl_release_private)
@@ -288,8 +288,8 @@ void locks_release_private(struct file_lock *fl)
if (fl->fl_lmops) {
if (fl->fl_lmops->lm_put_owner) {
- fl->fl_lmops->lm_put_owner(fl->fl_core.flc_owner);
- fl->fl_core.flc_owner = NULL;
+ fl->fl_lmops->lm_put_owner(flc->flc_owner);
+ flc->flc_owner = NULL;
}
fl->fl_lmops = NULL;
}
@@ -305,16 +305,15 @@ EXPORT_SYMBOL_GPL(locks_release_private);
* %true: @owner has at least one blocker
* %false: @owner has no blockers
*/
-bool locks_owner_has_blockers(struct file_lock_context *flctx,
- fl_owner_t owner)
+bool locks_owner_has_blockers(struct file_lock_context *flctx, fl_owner_t owner)
{
- struct file_lock *fl;
+ struct file_lock_core *flc;
spin_lock(&flctx->flc_lock);
- list_for_each_entry(fl, &flctx->flc_posix, fl_core.flc_list) {
- if (fl->fl_core.flc_owner != owner)
+ list_for_each_entry(flc, &flctx->flc_posix, flc_list) {
+ if (flc->flc_owner != owner)
continue;
- if (!list_empty(&fl->fl_core.flc_blocked_requests)) {
+ if (!list_empty(&flc->flc_blocked_requests)) {
spin_unlock(&flctx->flc_lock);
return true;
}
--
2.43.0
Change posix_same_owner to take struct file_lock_core pointers, and
convert the callers to pass those in.
Signed-off-by: Jeff Layton <[email protected]>
---
fs/locks.c | 16 ++++++++--------
1 file changed, 8 insertions(+), 8 deletions(-)
diff --git a/fs/locks.c b/fs/locks.c
index a0d6fc0e043a..bd0cfee230ae 100644
--- a/fs/locks.c
+++ b/fs/locks.c
@@ -590,9 +590,9 @@ static inline int locks_overlap(struct file_lock *fl1, struct file_lock *fl2)
/*
* Check whether two locks have the same owner.
*/
-static int posix_same_owner(struct file_lock *fl1, struct file_lock *fl2)
+static int posix_same_owner(struct file_lock_core *fl1, struct file_lock_core *fl2)
{
- return fl1->fl_core.flc_owner == fl2->fl_core.flc_owner;
+ return fl1->flc_owner == fl2->flc_owner;
}
/* Must be called with the flc_lock held! */
@@ -857,7 +857,7 @@ static bool posix_locks_conflict(struct file_lock *caller_fl,
/* POSIX locks owned by the same process do not conflict with
* each other.
*/
- if (posix_same_owner(caller_fl, sys_fl))
+ if (posix_same_owner(&caller_fl->fl_core, &sys_fl->fl_core))
return false;
/* Check whether they overlap */
@@ -875,7 +875,7 @@ static bool posix_test_locks_conflict(struct file_lock *caller_fl,
{
/* F_UNLCK checks any locks on the same fd. */
if (caller_fl->fl_core.flc_type == F_UNLCK) {
- if (!posix_same_owner(caller_fl, sys_fl))
+ if (!posix_same_owner(&caller_fl->fl_core, &sys_fl->fl_core))
return false;
return locks_overlap(caller_fl, sys_fl);
}
@@ -978,7 +978,7 @@ static struct file_lock *what_owner_is_waiting_for(struct file_lock *block_fl)
struct file_lock *fl;
hash_for_each_possible(blocked_hash, fl, fl_core.flc_link, posix_owner_key(block_fl)) {
- if (posix_same_owner(fl, block_fl)) {
+ if (posix_same_owner(&fl->fl_core, &block_fl->fl_core)) {
while (fl->fl_core.flc_blocker)
fl = fl->fl_core.flc_blocker;
return fl;
@@ -1005,7 +1005,7 @@ static int posix_locks_deadlock(struct file_lock *caller_fl,
while ((block_fl = what_owner_is_waiting_for(block_fl))) {
if (i++ > MAX_DEADLK_ITERATIONS)
return 0;
- if (posix_same_owner(caller_fl, block_fl))
+ if (posix_same_owner(&caller_fl->fl_core, &block_fl->fl_core))
return 1;
}
return 0;
@@ -1178,13 +1178,13 @@ static int posix_lock_inode(struct inode *inode, struct file_lock *request,
/* Find the first old lock with the same owner as the new lock */
list_for_each_entry(fl, &ctx->flc_posix, fl_core.flc_list) {
- if (posix_same_owner(request, fl))
+ if (posix_same_owner(&request->fl_core, &fl->fl_core))
break;
}
/* Process locks with this owner. */
list_for_each_entry_safe_from(fl, tmp, &ctx->flc_posix, fl_core.flc_list) {
- if (!posix_same_owner(request, fl))
+ if (!posix_same_owner(&request->fl_core, &fl->fl_core))
break;
/* Detect adjacent or overlapping regions (if same lock type) */
--
2.43.0
Convert posix_owner_key to take a struct file_lock_core pointer, and fix
up the callers to pass one in.
Signed-off-by: Jeff Layton <[email protected]>
---
fs/locks.c | 8 ++++----
1 file changed, 4 insertions(+), 4 deletions(-)
diff --git a/fs/locks.c b/fs/locks.c
index bd0cfee230ae..effe84f954f9 100644
--- a/fs/locks.c
+++ b/fs/locks.c
@@ -630,9 +630,9 @@ static void locks_delete_global_locks(struct file_lock *fl)
}
static unsigned long
-posix_owner_key(struct file_lock *fl)
+posix_owner_key(struct file_lock_core *flc)
{
- return (unsigned long) fl->fl_core.flc_owner;
+ return (unsigned long) flc->flc_owner;
}
static void locks_insert_global_blocked(struct file_lock *waiter)
@@ -640,7 +640,7 @@ static void locks_insert_global_blocked(struct file_lock *waiter)
lockdep_assert_held(&blocked_lock_lock);
hash_add(blocked_hash, &waiter->fl_core.flc_link,
- posix_owner_key(waiter));
+ posix_owner_key(&waiter->fl_core));
}
static void locks_delete_global_blocked(struct file_lock *waiter)
@@ -977,7 +977,7 @@ static struct file_lock *what_owner_is_waiting_for(struct file_lock *block_fl)
{
struct file_lock *fl;
- hash_for_each_possible(blocked_hash, fl, fl_core.flc_link, posix_owner_key(block_fl)) {
+ hash_for_each_possible(blocked_hash, fl, fl_core.flc_link, posix_owner_key(&block_fl->fl_core)) {
if (posix_same_owner(&fl->fl_core, &block_fl->fl_core)) {
while (fl->fl_core.flc_blocker)
fl = fl->fl_core.flc_blocker;
--
2.43.0
Convert locks_insert_global_locks and locks_delete_global_locks to take a
struct file_lock_core pointer instead of a struct file_lock.
Signed-off-by: Jeff Layton <[email protected]>
---
fs/locks.c | 18 +++++++++---------
1 file changed, 9 insertions(+), 9 deletions(-)
diff --git a/fs/locks.c b/fs/locks.c
index effe84f954f9..ad4bb9cd4c9d 100644
--- a/fs/locks.c
+++ b/fs/locks.c
@@ -596,20 +596,20 @@ static int posix_same_owner(struct file_lock_core *fl1, struct file_lock_core *f
}
/* Must be called with the flc_lock held! */
-static void locks_insert_global_locks(struct file_lock *fl)
+static void locks_insert_global_locks(struct file_lock_core *flc)
{
struct file_lock_list_struct *fll = this_cpu_ptr(&file_lock_list);
percpu_rwsem_assert_held(&file_rwsem);
spin_lock(&fll->lock);
- fl->fl_core.flc_link_cpu = smp_processor_id();
- hlist_add_head(&fl->fl_core.flc_link, &fll->hlist);
+ flc->flc_link_cpu = smp_processor_id();
+ hlist_add_head(&flc->flc_link, &fll->hlist);
spin_unlock(&fll->lock);
}
/* Must be called with the flc_lock held! */
-static void locks_delete_global_locks(struct file_lock *fl)
+static void locks_delete_global_locks(struct file_lock_core *flc)
{
struct file_lock_list_struct *fll;
@@ -620,12 +620,12 @@ static void locks_delete_global_locks(struct file_lock *fl)
* is done while holding the flc_lock, and new insertions into the list
* also require that it be held.
*/
- if (hlist_unhashed(&fl->fl_core.flc_link))
+ if (hlist_unhashed(&flc->flc_link))
return;
- fll = per_cpu_ptr(&file_lock_list, fl->fl_core.flc_link_cpu);
+ fll = per_cpu_ptr(&file_lock_list, flc->flc_link_cpu);
spin_lock(&fll->lock);
- hlist_del_init(&fl->fl_core.flc_link);
+ hlist_del_init(&flc->flc_link);
spin_unlock(&fll->lock);
}
@@ -814,13 +814,13 @@ static void
locks_insert_lock_ctx(struct file_lock *fl, struct list_head *before)
{
list_add_tail(&fl->fl_core.flc_list, before);
- locks_insert_global_locks(fl);
+ locks_insert_global_locks(&fl->fl_core);
}
static void
locks_unlink_lock_ctx(struct file_lock *fl)
{
- locks_delete_global_locks(fl);
+ locks_delete_global_locks(&fl->fl_core);
list_del_init(&fl->fl_core.flc_list);
locks_wake_up_blocks(fl);
}
--
2.43.0
Have locks_insert_global_blocked and locks_delete_global_blocked take a
struct file_lock_core pointer.
Signed-off-by: Jeff Layton <[email protected]>
---
fs/locks.c | 13 ++++++-------
1 file changed, 6 insertions(+), 7 deletions(-)
diff --git a/fs/locks.c b/fs/locks.c
index ad4bb9cd4c9d..d6d47612527c 100644
--- a/fs/locks.c
+++ b/fs/locks.c
@@ -635,19 +635,18 @@ posix_owner_key(struct file_lock_core *flc)
return (unsigned long) flc->flc_owner;
}
-static void locks_insert_global_blocked(struct file_lock *waiter)
+static void locks_insert_global_blocked(struct file_lock_core *waiter)
{
lockdep_assert_held(&blocked_lock_lock);
- hash_add(blocked_hash, &waiter->fl_core.flc_link,
- posix_owner_key(&waiter->fl_core));
+ hash_add(blocked_hash, &waiter->flc_link, posix_owner_key(waiter));
}
-static void locks_delete_global_blocked(struct file_lock *waiter)
+static void locks_delete_global_blocked(struct file_lock_core *waiter)
{
lockdep_assert_held(&blocked_lock_lock);
- hash_del(&waiter->fl_core.flc_link);
+ hash_del(&waiter->flc_link);
}
/* Remove waiter from blocker's block list.
@@ -657,7 +656,7 @@ static void locks_delete_global_blocked(struct file_lock *waiter)
*/
static void __locks_delete_block(struct file_lock *waiter)
{
- locks_delete_global_blocked(waiter);
+ locks_delete_global_blocked(&waiter->fl_core);
list_del_init(&waiter->fl_core.flc_blocked_member);
}
@@ -768,7 +767,7 @@ static void __locks_insert_block(struct file_lock *blocker,
list_add_tail(&waiter->fl_core.flc_blocked_member,
&blocker->fl_core.flc_blocked_requests);
if ((blocker->fl_core.flc_flags & (FL_POSIX|FL_OFDLCK)) == FL_POSIX)
- locks_insert_global_blocked(waiter);
+ locks_insert_global_blocked(&waiter->fl_core);
/* The requests in waiter->fl_blocked are known to conflict with
* waiter, but might not conflict with blocker, or the requests
--
2.43.0
Convert __locks_delete_block and __locks_wake_up_blocks to take a struct
file_lock_core pointer.
While we could do this another way, we're going to need a file_lock()
helper function later anyway, so introduce and use it now.
Signed-off-by: Jeff Layton <[email protected]>
---
fs/locks.c | 45 +++++++++++++++++++++++++++------------------
1 file changed, 27 insertions(+), 18 deletions(-)
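For reference, the file_lock() helper is just a container_of()
conversion from the embedded core back to its containing lock. A small
userspace sketch of the idea (simplified structs, hand-rolled
container_of via offsetof, assumed field names), showing why generic
code can hold only a core pointer and still reach lock-specific state
when it has to:

#include <stdio.h>
#include <stddef.h>

/* Simplified layouts (assumption: the real structs differ). */
struct file_lock_core {
    int flc_flags;
};

struct file_lock {
    struct file_lock_core fl_core;           /* embedded core */
    void (*lm_notify)(struct file_lock *);   /* lock-only callback */
};

/* Analogue of the file_lock() helper added here: recover the containing
 * file_lock from a pointer to its embedded core. */
static struct file_lock *file_lock(struct file_lock_core *flc)
{
    return (struct file_lock *)((char *)flc -
                                offsetof(struct file_lock, fl_core));
}

static void notify(struct file_lock *fl)
{
    printf("waking waiter, flags=%d\n", fl->fl_core.flc_flags);
}

int main(void)
{
    struct file_lock waiter = { .fl_core.flc_flags = 1, .lm_notify = notify };
    struct file_lock_core *flc = &waiter.fl_core;  /* what generic code holds */

    /* Convert back to the containing file_lock only when a lock-specific
     * field (here the notify callback) is actually needed. */
    file_lock(flc)->lm_notify(file_lock(flc));
    return 0;
}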
diff --git a/fs/locks.c b/fs/locks.c
index d6d47612527c..fb113103dc1b 100644
--- a/fs/locks.c
+++ b/fs/locks.c
@@ -69,6 +69,11 @@
#include <linux/uaccess.h>
+static struct file_lock *file_lock(struct file_lock_core *flc)
+{
+ return container_of(flc, struct file_lock, fl_core);
+}
+
static bool lease_breaking(struct file_lock *fl)
{
return fl->fl_core.flc_flags & (FL_UNLOCK_PENDING | FL_DOWNGRADE_PENDING);
@@ -654,31 +659,35 @@ static void locks_delete_global_blocked(struct file_lock_core *waiter)
*
* Must be called with blocked_lock_lock held.
*/
-static void __locks_delete_block(struct file_lock *waiter)
+static void __locks_delete_block(struct file_lock_core *waiter)
{
- locks_delete_global_blocked(&waiter->fl_core);
- list_del_init(&waiter->fl_core.flc_blocked_member);
+ locks_delete_global_blocked(waiter);
+ list_del_init(&waiter->flc_blocked_member);
}
-static void __locks_wake_up_blocks(struct file_lock *blocker)
+static void __locks_wake_up_blocks(struct file_lock_core *blocker)
{
- while (!list_empty(&blocker->fl_core.flc_blocked_requests)) {
- struct file_lock *waiter;
+ while (!list_empty(&blocker->flc_blocked_requests)) {
+ struct file_lock_core *waiter;
+ struct file_lock *fl;
+
+ waiter = list_first_entry(&blocker->flc_blocked_requests,
+ struct file_lock_core, flc_blocked_member);
- waiter = list_first_entry(&blocker->fl_core.flc_blocked_requests,
- struct file_lock, fl_core.flc_blocked_member);
+ fl = file_lock(waiter);
__locks_delete_block(waiter);
- if (waiter->fl_lmops && waiter->fl_lmops->lm_notify)
- waiter->fl_lmops->lm_notify(waiter);
+ if ((waiter->flc_flags & (FL_POSIX | FL_FLOCK)) &&
+ fl->fl_lmops && fl->fl_lmops->lm_notify)
+ fl->fl_lmops->lm_notify(fl);
else
- wake_up(&waiter->fl_core.flc_wait);
+ wake_up(&waiter->flc_wait);
/*
- * The setting of fl_blocker to NULL marks the "done"
+ * The setting of flc_blocker to NULL marks the "done"
* point in deleting a block. Paired with acquire at the top
* of locks_delete_block().
*/
- smp_store_release(&waiter->fl_core.flc_blocker, NULL);
+ smp_store_release(&waiter->flc_blocker, NULL);
}
}
@@ -720,8 +729,8 @@ int locks_delete_block(struct file_lock *waiter)
spin_lock(&blocked_lock_lock);
if (waiter->fl_core.flc_blocker)
status = 0;
- __locks_wake_up_blocks(waiter);
- __locks_delete_block(waiter);
+ __locks_wake_up_blocks(&waiter->fl_core);
+ __locks_delete_block(&waiter->fl_core);
/*
* The setting of fl_blocker to NULL marks the "done" point in deleting
@@ -773,7 +782,7 @@ static void __locks_insert_block(struct file_lock *blocker,
* waiter, but might not conflict with blocker, or the requests
* and lock which block it. So they all need to be woken.
*/
- __locks_wake_up_blocks(waiter);
+ __locks_wake_up_blocks(&waiter->fl_core);
}
/* Must be called with flc_lock held. */
@@ -805,7 +814,7 @@ static void locks_wake_up_blocks(struct file_lock *blocker)
return;
spin_lock(&blocked_lock_lock);
- __locks_wake_up_blocks(blocker);
+ __locks_wake_up_blocks(&blocker->fl_core);
spin_unlock(&blocked_lock_lock);
}
@@ -1159,7 +1168,7 @@ static int posix_lock_inode(struct inode *inode, struct file_lock *request,
* Ensure that we don't find any locks blocked on this
* request during deadlock detection.
*/
- __locks_wake_up_blocks(request);
+ __locks_wake_up_blocks(&request->fl_core);
if (likely(!posix_locks_deadlock(request, fl))) {
error = FILE_LOCK_DEFERRED;
__locks_insert_block(fl, request,
--
2.43.0
Have both __locks_insert_block and the deadlock and conflict checking
functions take a struct file_lock_core pointer instead of a struct
file_lock one. Also, change posix_locks_deadlock to return bool.
Signed-off-by: Jeff Layton <[email protected]>
---
fs/locks.c | 134 +++++++++++++++++++++++++++++++++----------------------------
1 file changed, 73 insertions(+), 61 deletions(-)
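A compact userspace sketch of the callback shape this patch settles on
(simplified structs, an invented type encoding, and the byte-range
overlap test omitted): the blocking-tree code only ever sees
file_lock_core pointers plus a conflict predicate whose arguments match.

#include <stdbool.h>
#include <stdio.h>

struct file_lock_core {
    void *flc_owner;
    int   flc_type;   /* 0 = read, 1 = write; made-up encoding */
};

/* Generic check: two lock cores conflict if either one is a write. */
static bool locks_conflict(struct file_lock_core *a, struct file_lock_core *b)
{
    return a->flc_type == 1 || b->flc_type == 1;
}

/* POSIX-flavoured wrapper: same-owner locks never conflict.
 * (The real check also tests byte-range overlap; omitted here.) */
static bool posix_conflict(struct file_lock_core *caller,
                           struct file_lock_core *sys)
{
    if (caller->flc_owner == sys->flc_owner)
        return false;
    return locks_conflict(caller, sys);
}

/* A caller that is handed a predicate, like __locks_insert_block(). */
static void report(struct file_lock_core *a, struct file_lock_core *b,
                   bool (*conflict)(struct file_lock_core *,
                                    struct file_lock_core *))
{
    printf("conflict: %s\n", conflict(a, b) ? "yes" : "no");
}

int main(void)
{
    int owner1, owner2;
    struct file_lock_core held = { .flc_owner = &owner1, .flc_type = 1 };
    struct file_lock_core req  = { .flc_owner = &owner2, .flc_type = 0 };

    report(&req, &held, posix_conflict);
    return 0;
}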
diff --git a/fs/locks.c b/fs/locks.c
index fb113103dc1b..a86841fc8220 100644
--- a/fs/locks.c
+++ b/fs/locks.c
@@ -757,39 +757,41 @@ EXPORT_SYMBOL(locks_delete_block);
* waiters, and add beneath any waiter that blocks the new waiter.
* Thus wakeups don't happen until needed.
*/
-static void __locks_insert_block(struct file_lock *blocker,
- struct file_lock *waiter,
- bool conflict(struct file_lock *,
- struct file_lock *))
+static void __locks_insert_block(struct file_lock *blocker_fl,
+ struct file_lock *waiter_fl,
+ bool conflict(struct file_lock_core *,
+ struct file_lock_core *))
{
- struct file_lock *fl;
- BUG_ON(!list_empty(&waiter->fl_core.flc_blocked_member));
+ struct file_lock_core *blocker = &blocker_fl->fl_core;
+ struct file_lock_core *waiter = &waiter_fl->fl_core;
+ struct file_lock_core *flc;
+ BUG_ON(!list_empty(&waiter->flc_blocked_member));
new_blocker:
- list_for_each_entry(fl, &blocker->fl_core.flc_blocked_requests,
- fl_core.flc_blocked_member)
- if (conflict(fl, waiter)) {
- blocker = fl;
+ list_for_each_entry(flc, &blocker->flc_blocked_requests, flc_blocked_member)
+ if (conflict(flc, waiter)) {
+ blocker = flc;
goto new_blocker;
}
- waiter->fl_core.flc_blocker = blocker;
- list_add_tail(&waiter->fl_core.flc_blocked_member,
- &blocker->fl_core.flc_blocked_requests);
- if ((blocker->fl_core.flc_flags & (FL_POSIX|FL_OFDLCK)) == FL_POSIX)
- locks_insert_global_blocked(&waiter->fl_core);
+ waiter->flc_blocker = file_lock(blocker);
+ list_add_tail(&waiter->flc_blocked_member,
+ &blocker->flc_blocked_requests);
- /* The requests in waiter->fl_blocked are known to conflict with
+ if ((blocker->flc_flags & (FL_POSIX|FL_OFDLCK)) == (FL_POSIX|FL_OFDLCK))
+ locks_insert_global_blocked(waiter);
+
+ /* The requests in waiter->flc_blocked are known to conflict with
* waiter, but might not conflict with blocker, or the requests
* and lock which block it. So they all need to be woken.
*/
- __locks_wake_up_blocks(&waiter->fl_core);
+ __locks_wake_up_blocks(waiter);
}
/* Must be called with flc_lock held. */
static void locks_insert_block(struct file_lock *blocker,
struct file_lock *waiter,
- bool conflict(struct file_lock *,
- struct file_lock *))
+ bool conflict(struct file_lock_core *,
+ struct file_lock_core *))
{
spin_lock(&blocked_lock_lock);
__locks_insert_block(blocker, waiter, conflict);
@@ -846,12 +848,12 @@ locks_delete_lock_ctx(struct file_lock *fl, struct list_head *dispose)
/* Determine if lock sys_fl blocks lock caller_fl. Common functionality
* checks for shared/exclusive status of overlapping locks.
*/
-static bool locks_conflict(struct file_lock *caller_fl,
- struct file_lock *sys_fl)
+static bool locks_conflict(struct file_lock_core *caller_fl,
+ struct file_lock_core *sys_fl)
{
- if (sys_fl->fl_core.flc_type == F_WRLCK)
+ if (sys_fl->flc_type == F_WRLCK)
return true;
- if (caller_fl->fl_core.flc_type == F_WRLCK)
+ if (caller_fl->flc_type == F_WRLCK)
return true;
return false;
}
@@ -859,20 +861,23 @@ static bool locks_conflict(struct file_lock *caller_fl,
/* Determine if lock sys_fl blocks lock caller_fl. POSIX specific
* checking before calling the locks_conflict().
*/
-static bool posix_locks_conflict(struct file_lock *caller_fl,
- struct file_lock *sys_fl)
+static bool posix_locks_conflict(struct file_lock_core *caller_flc,
+ struct file_lock_core *sys_flc)
{
+ struct file_lock *caller_fl = file_lock(caller_flc);
+ struct file_lock *sys_fl = file_lock(sys_flc);
+
/* POSIX locks owned by the same process do not conflict with
* each other.
*/
- if (posix_same_owner(&caller_fl->fl_core, &sys_fl->fl_core))
+ if (posix_same_owner(caller_flc, sys_flc))
return false;
/* Check whether they overlap */
if (!locks_overlap(caller_fl, sys_fl))
return false;
- return locks_conflict(caller_fl, sys_fl);
+ return locks_conflict(caller_flc, sys_flc);
}
/* Determine if lock sys_fl blocks lock caller_fl. Used on xx_GETLK
@@ -881,28 +886,31 @@ static bool posix_locks_conflict(struct file_lock *caller_fl,
static bool posix_test_locks_conflict(struct file_lock *caller_fl,
struct file_lock *sys_fl)
{
+ struct file_lock_core *caller = &caller_fl->fl_core;
+ struct file_lock_core *sys = &sys_fl->fl_core;
+
/* F_UNLCK checks any locks on the same fd. */
- if (caller_fl->fl_core.flc_type == F_UNLCK) {
- if (!posix_same_owner(&caller_fl->fl_core, &sys_fl->fl_core))
+ if (caller->flc_type == F_UNLCK) {
+ if (!posix_same_owner(caller, sys))
return false;
return locks_overlap(caller_fl, sys_fl);
}
- return posix_locks_conflict(caller_fl, sys_fl);
+ return posix_locks_conflict(caller, sys);
}
/* Determine if lock sys_fl blocks lock caller_fl. FLOCK specific
* checking before calling the locks_conflict().
*/
-static bool flock_locks_conflict(struct file_lock *caller_fl,
- struct file_lock *sys_fl)
+static bool flock_locks_conflict(struct file_lock_core *caller_flc,
+ struct file_lock_core *sys_flc)
{
/* FLOCK locks referring to the same filp do not conflict with
* each other.
*/
- if (caller_fl->fl_core.flc_file == sys_fl->fl_core.flc_file)
+ if (caller_flc->flc_file == sys_flc->flc_file)
return false;
- return locks_conflict(caller_fl, sys_fl);
+ return locks_conflict(caller_flc, sys_flc);
}
void
@@ -980,25 +988,27 @@ EXPORT_SYMBOL(posix_test_lock);
#define MAX_DEADLK_ITERATIONS 10
-/* Find a lock that the owner of the given block_fl is blocking on. */
-static struct file_lock *what_owner_is_waiting_for(struct file_lock *block_fl)
+/* Find a lock that the owner of the given @blocker is blocking on. */
+static struct file_lock_core *what_owner_is_waiting_for(struct file_lock_core *blocker)
{
- struct file_lock *fl;
+ struct file_lock_core *flc;
- hash_for_each_possible(blocked_hash, fl, fl_core.flc_link, posix_owner_key(&block_fl->fl_core)) {
- if (posix_same_owner(&fl->fl_core, &block_fl->fl_core)) {
- while (fl->fl_core.flc_blocker)
- fl = fl->fl_core.flc_blocker;
- return fl;
+ hash_for_each_possible(blocked_hash, flc, flc_link, posix_owner_key(blocker)) {
+ if (posix_same_owner(flc, blocker)) {
+ while (flc->flc_blocker)
+ flc = &flc->flc_blocker->fl_core;
+ return flc;
}
}
return NULL;
}
/* Must be called with the blocked_lock_lock held! */
-static int posix_locks_deadlock(struct file_lock *caller_fl,
- struct file_lock *block_fl)
+static bool posix_locks_deadlock(struct file_lock *caller_fl,
+ struct file_lock *block_fl)
{
+ struct file_lock_core *caller = &caller_fl->fl_core;
+ struct file_lock_core *blocker = &block_fl->fl_core;
int i = 0;
lockdep_assert_held(&blocked_lock_lock);
@@ -1007,16 +1017,16 @@ static int posix_locks_deadlock(struct file_lock *caller_fl,
* This deadlock detector can't reasonably detect deadlocks with
* FL_OFDLCK locks, since they aren't owned by a process, per-se.
*/
- if (caller_fl->fl_core.flc_flags & FL_OFDLCK)
- return 0;
+ if (caller->flc_flags & FL_OFDLCK)
+ return false;
- while ((block_fl = what_owner_is_waiting_for(block_fl))) {
+ while ((blocker = what_owner_is_waiting_for(blocker))) {
if (i++ > MAX_DEADLK_ITERATIONS)
- return 0;
- if (posix_same_owner(&caller_fl->fl_core, &block_fl->fl_core))
- return 1;
+ return false;
+ if (posix_same_owner(caller, blocker))
+ return true;
}
- return 0;
+ return false;
}
/* Try to create a FLOCK lock on filp. We always insert new FLOCK locks
@@ -1071,7 +1081,7 @@ static int flock_lock_inode(struct inode *inode, struct file_lock *request)
find_conflict:
list_for_each_entry(fl, &ctx->flc_flock, fl_core.flc_list) {
- if (!flock_locks_conflict(request, fl))
+ if (!flock_locks_conflict(&request->fl_core, &fl->fl_core))
continue;
error = -EAGAIN;
if (!(request->fl_core.flc_flags & FL_SLEEP))
@@ -1140,7 +1150,7 @@ static int posix_lock_inode(struct inode *inode, struct file_lock *request,
*/
if (request->fl_core.flc_type != F_UNLCK) {
list_for_each_entry(fl, &ctx->flc_posix, fl_core.flc_list) {
- if (!posix_locks_conflict(request, fl))
+ if (!posix_locks_conflict(&request->fl_core, &fl->fl_core))
continue;
if (fl->fl_lmops && fl->fl_lmops->lm_lock_expirable
&& (*fl->fl_lmops->lm_lock_expirable)(fl)) {
@@ -1442,23 +1452,25 @@ static void time_out_leases(struct inode *inode, struct list_head *dispose)
}
}
-static bool leases_conflict(struct file_lock *lease, struct file_lock *breaker)
+static bool leases_conflict(struct file_lock_core *lc, struct file_lock_core *bc)
{
bool rc;
+ struct file_lock *lease = file_lock(lc);
+ struct file_lock *breaker = file_lock(bc);
if (lease->fl_lmops->lm_breaker_owns_lease
&& lease->fl_lmops->lm_breaker_owns_lease(lease))
return false;
- if ((breaker->fl_core.flc_flags & FL_LAYOUT) != (lease->fl_core.flc_flags & FL_LAYOUT)) {
+ if ((bc->flc_flags & FL_LAYOUT) != (lc->flc_flags & FL_LAYOUT)) {
rc = false;
goto trace;
}
- if ((breaker->fl_core.flc_flags & FL_DELEG) && (lease->fl_core.flc_flags & FL_LEASE)) {
+ if ((bc->flc_flags & FL_DELEG) && (lc->flc_flags & FL_LEASE)) {
rc = false;
goto trace;
}
- rc = locks_conflict(breaker, lease);
+ rc = locks_conflict(bc, lc);
trace:
trace_leases_conflict(rc, lease, breaker);
return rc;
@@ -1468,12 +1480,12 @@ static bool
any_leases_conflict(struct inode *inode, struct file_lock *breaker)
{
struct file_lock_context *ctx = inode->i_flctx;
- struct file_lock *fl;
+ struct file_lock_core *flc;
lockdep_assert_held(&ctx->flc_lock);
- list_for_each_entry(fl, &ctx->flc_lease, fl_core.flc_list) {
- if (leases_conflict(fl, breaker))
+ list_for_each_entry(flc, &ctx->flc_lease, flc_list) {
+ if (leases_conflict(flc, &breaker->fl_core))
return true;
}
return false;
@@ -1529,7 +1541,7 @@ int __break_lease(struct inode *inode, unsigned int mode, unsigned int type)
}
list_for_each_entry_safe(fl, tmp, &ctx->flc_lease, fl_core.flc_list) {
- if (!leases_conflict(fl, new_fl))
+ if (!leases_conflict(&fl->fl_core, &new_fl->fl_core))
continue;
if (want_write) {
if (fl->fl_core.flc_flags & FL_UNLOCK_PENDING)
--
2.43.0
Rework the internals of locks_delete_block to use struct file_lock_core
(mostly just for clarity's sake). The prototype is not changed.
Signed-off-by: Jeff Layton <[email protected]>
---
fs/locks.c | 15 ++++++++-------
1 file changed, 8 insertions(+), 7 deletions(-)
diff --git a/fs/locks.c b/fs/locks.c
index 8e320c95c416..739af36d98df 100644
--- a/fs/locks.c
+++ b/fs/locks.c
@@ -697,9 +697,10 @@ static void __locks_wake_up_blocks(struct file_lock_core *blocker)
*
* lockd/nfsd need to disconnect the lock while working on it.
*/
-int locks_delete_block(struct file_lock *waiter)
+int locks_delete_block(struct file_lock *waiter_fl)
{
int status = -ENOENT;
+ struct file_lock_core *waiter = &waiter_fl->fl_core;
/*
* If fl_blocker is NULL, it won't be set again as this thread "owns"
@@ -722,21 +723,21 @@ int locks_delete_block(struct file_lock *waiter)
* no new locks can be inserted into its fl_blocked_requests list, and
* can avoid doing anything further if the list is empty.
*/
- if (!smp_load_acquire(&waiter->fl_core.flc_blocker) &&
- list_empty(&waiter->fl_core.flc_blocked_requests))
+ if (!smp_load_acquire(&waiter->flc_blocker) &&
+ list_empty(&waiter->flc_blocked_requests))
return status;
spin_lock(&blocked_lock_lock);
- if (waiter->fl_core.flc_blocker)
+ if (waiter->flc_blocker)
status = 0;
- __locks_wake_up_blocks(&waiter->fl_core);
- __locks_delete_block(&waiter->fl_core);
+ __locks_wake_up_blocks(waiter);
+ __locks_delete_block(waiter);
/*
* The setting of fl_blocker to NULL marks the "done" point in deleting
* a block. Paired with acquire at the top of this function.
*/
- smp_store_release(&waiter->fl_core.flc_blocker, NULL);
+ smp_store_release(&waiter->flc_blocker, NULL);
spin_unlock(&blocked_lock_lock);
return status;
}
--
2.43.0
Rename the old __locks_delete_block to __locks_unlink_block. Rename the
old locks_delete_block function to __locks_delete_block and have it take
a file_lock_core. Make locks_delete_block a simple wrapper around
__locks_delete_block.
Also, change __locks_insert_block to take struct file_lock_core, and
fix up its callers.
Signed-off-by: Jeff Layton <[email protected]>
---
fs/locks.c | 42 ++++++++++++++++++++++--------------------
1 file changed, 22 insertions(+), 20 deletions(-)
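The end result is a thin-wrapper pattern; a minimal userspace sketch
(simplified structs and stand-in fields): the exported entry point keeps
its struct file_lock prototype so existing callers need no changes,
while the real work happens on the embedded core.

#include <stdio.h>

struct file_lock_core {
    int flc_blocker_set;   /* stand-in for the flc_blocker pointer */
};

struct file_lock {
    struct file_lock_core fl_core;
};

/* Internal helper: operates purely on the core. */
static int __delete_block(struct file_lock_core *waiter)
{
    int status = waiter->flc_blocker_set ? 0 : -2;  /* -ENOENT stand-in */

    waiter->flc_blocker_set = 0;
    return status;
}

/* Exported wrapper: keeps the old prototype so callers are untouched. */
static int delete_block(struct file_lock *waiter)
{
    return __delete_block(&waiter->fl_core);
}

int main(void)
{
    struct file_lock fl = { .fl_core.flc_blocker_set = 1 };

    printf("status=%d\n", delete_block(&fl));
    return 0;
}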
diff --git a/fs/locks.c b/fs/locks.c
index 739af36d98df..647a778d2c85 100644
--- a/fs/locks.c
+++ b/fs/locks.c
@@ -659,7 +659,7 @@ static void locks_delete_global_blocked(struct file_lock_core *waiter)
*
* Must be called with blocked_lock_lock held.
*/
-static void __locks_delete_block(struct file_lock_core *waiter)
+static void __locks_unlink_block(struct file_lock_core *waiter)
{
locks_delete_global_blocked(waiter);
list_del_init(&waiter->flc_blocked_member);
@@ -675,7 +675,7 @@ static void __locks_wake_up_blocks(struct file_lock_core *blocker)
struct file_lock_core, flc_blocked_member);
fl = file_lock(waiter);
- __locks_delete_block(waiter);
+ __locks_unlink_block(waiter);
if ((waiter->flc_flags & (FL_POSIX | FL_FLOCK)) &&
fl->fl_lmops && fl->fl_lmops->lm_notify)
fl->fl_lmops->lm_notify(fl);
@@ -691,16 +691,9 @@ static void __locks_wake_up_blocks(struct file_lock_core *blocker)
}
}
-/**
- * locks_delete_block - stop waiting for a file lock
- * @waiter: the lock which was waiting
- *
- * lockd/nfsd need to disconnect the lock while working on it.
- */
-int locks_delete_block(struct file_lock *waiter_fl)
+static int __locks_delete_block(struct file_lock_core *waiter)
{
int status = -ENOENT;
- struct file_lock_core *waiter = &waiter_fl->fl_core;
/*
* If fl_blocker is NULL, it won't be set again as this thread "owns"
@@ -731,7 +724,7 @@ int locks_delete_block(struct file_lock *waiter_fl)
if (waiter->flc_blocker)
status = 0;
__locks_wake_up_blocks(waiter);
- __locks_delete_block(waiter);
+ __locks_unlink_block(waiter);
/*
* The setting of fl_blocker to NULL marks the "done" point in deleting
@@ -741,6 +734,17 @@ int locks_delete_block(struct file_lock *waiter_fl)
spin_unlock(&blocked_lock_lock);
return status;
}
+
+/**
+ * locks_delete_block - stop waiting for a file lock
+ * @waiter: the lock which was waiting
+ *
+ * lockd/nfsd need to disconnect the lock while working on it.
+ */
+int locks_delete_block(struct file_lock *waiter)
+{
+ return __locks_delete_block(&waiter->fl_core);
+}
EXPORT_SYMBOL(locks_delete_block);
/* Insert waiter into blocker's block list.
@@ -758,13 +762,11 @@ EXPORT_SYMBOL(locks_delete_block);
* waiters, and add beneath any waiter that blocks the new waiter.
* Thus wakeups don't happen until needed.
*/
-static void __locks_insert_block(struct file_lock *blocker_fl,
- struct file_lock *waiter_fl,
+static void __locks_insert_block(struct file_lock_core *blocker,
+ struct file_lock_core *waiter,
bool conflict(struct file_lock_core *,
struct file_lock_core *))
{
- struct file_lock_core *blocker = &blocker_fl->fl_core;
- struct file_lock_core *waiter = &waiter_fl->fl_core;
struct file_lock_core *flc;
BUG_ON(!list_empty(&waiter->flc_blocked_member));
@@ -789,8 +791,8 @@ static void __locks_insert_block(struct file_lock *blocker_fl,
}
/* Must be called with flc_lock held. */
-static void locks_insert_block(struct file_lock *blocker,
- struct file_lock *waiter,
+static void locks_insert_block(struct file_lock_core *blocker,
+ struct file_lock_core *waiter,
bool conflict(struct file_lock_core *,
struct file_lock_core *))
{
@@ -1088,7 +1090,7 @@ static int flock_lock_inode(struct inode *inode, struct file_lock *request)
if (!(request->fl_core.flc_flags & FL_SLEEP))
goto out;
error = FILE_LOCK_DEFERRED;
- locks_insert_block(fl, request, flock_locks_conflict);
+ locks_insert_block(&fl->fl_core, &request->fl_core, flock_locks_conflict);
goto out;
}
if (request->fl_core.flc_flags & FL_ACCESS)
@@ -1182,7 +1184,7 @@ static int posix_lock_inode(struct inode *inode, struct file_lock *request,
__locks_wake_up_blocks(&request->fl_core);
if (likely(!posix_locks_deadlock(request, fl))) {
error = FILE_LOCK_DEFERRED;
- __locks_insert_block(fl, request,
+ __locks_insert_block(&fl->fl_core, &request->fl_core,
posix_locks_conflict);
}
spin_unlock(&blocked_lock_lock);
@@ -1575,7 +1577,7 @@ int __break_lease(struct inode *inode, unsigned int mode, unsigned int type)
break_time -= jiffies;
if (break_time == 0)
break_time++;
- locks_insert_block(fl, new_fl, leases_conflict);
+ locks_insert_block(&fl->fl_core, &new_fl->fl_core, leases_conflict);
trace_break_lease_block(inode, new_fl);
spin_unlock(&ctx->flc_lock);
percpu_up_read(&file_rwsem);
--
2.43.0
Have assign_type take a struct file_lock_core pointer instead of a
struct file_lock.
Signed-off-by: Jeff Layton <[email protected]>
---
fs/locks.c | 10 +++++-----
1 file changed, 5 insertions(+), 5 deletions(-)
diff --git a/fs/locks.c b/fs/locks.c
index 647a778d2c85..6182f5c5e7b4 100644
--- a/fs/locks.c
+++ b/fs/locks.c
@@ -439,13 +439,13 @@ static void flock_make_lock(struct file *filp, struct file_lock *fl, int type)
fl->fl_end = OFFSET_MAX;
}
-static int assign_type(struct file_lock *fl, int type)
+static int assign_type(struct file_lock_core *flc, int type)
{
switch (type) {
case F_RDLCK:
case F_WRLCK:
case F_UNLCK:
- fl->fl_core.flc_type = type;
+ flc->flc_type = type;
break;
default:
return -EINVAL;
@@ -497,7 +497,7 @@ static int flock64_to_posix_lock(struct file *filp, struct file_lock *fl,
fl->fl_ops = NULL;
fl->fl_lmops = NULL;
- return assign_type(fl, l->l_type);
+ return assign_type(&fl->fl_core, l->l_type);
}
/* Verify a "struct flock" and copy it to a "struct file_lock" as a POSIX
@@ -552,7 +552,7 @@ static const struct lock_manager_operations lease_manager_ops = {
*/
static int lease_init(struct file *filp, int type, struct file_lock *fl)
{
- if (assign_type(fl, type) != 0)
+ if (assign_type(&fl->fl_core, type) != 0)
return -EINVAL;
fl->fl_core.flc_owner = filp;
@@ -1409,7 +1409,7 @@ static void lease_clear_pending(struct file_lock *fl, int arg)
/* We already had a lease on this file; just change its type */
int lease_modify(struct file_lock *fl, int arg, struct list_head *dispose)
{
- int error = assign_type(fl, arg);
+ int error = assign_type(&fl->fl_core, arg);
if (error)
return error;
--
2.43.0
Have locks_wake_up_blocks take a file_lock_core pointer, and fix up the
callers to pass one in.
Signed-off-by: Jeff Layton <[email protected]>
---
fs/locks.c | 14 +++++++-------
1 file changed, 7 insertions(+), 7 deletions(-)
diff --git a/fs/locks.c b/fs/locks.c
index 6182f5c5e7b4..03985cfb7eff 100644
--- a/fs/locks.c
+++ b/fs/locks.c
@@ -806,7 +806,7 @@ static void locks_insert_block(struct file_lock_core *blocker,
*
* Must be called with the inode->flc_lock held!
*/
-static void locks_wake_up_blocks(struct file_lock *blocker)
+static void locks_wake_up_blocks(struct file_lock_core *blocker)
{
/*
* Avoid taking global lock if list is empty. This is safe since new
@@ -815,11 +815,11 @@ static void locks_wake_up_blocks(struct file_lock *blocker)
* fl_blocked_requests list does not require the flc_lock, so we must
* recheck list_empty() after acquiring the blocked_lock_lock.
*/
- if (list_empty(&blocker->fl_core.flc_blocked_requests))
+ if (list_empty(&blocker->flc_blocked_requests))
return;
spin_lock(&blocked_lock_lock);
- __locks_wake_up_blocks(&blocker->fl_core);
+ __locks_wake_up_blocks(blocker);
spin_unlock(&blocked_lock_lock);
}
@@ -835,7 +835,7 @@ locks_unlink_lock_ctx(struct file_lock *fl)
{
locks_delete_global_locks(&fl->fl_core);
list_del_init(&fl->fl_core.flc_list);
- locks_wake_up_blocks(fl);
+ locks_wake_up_blocks(&fl->fl_core);
}
static void
@@ -1328,11 +1328,11 @@ static int posix_lock_inode(struct inode *inode, struct file_lock *request,
locks_insert_lock_ctx(left, &fl->fl_core.flc_list);
}
right->fl_start = request->fl_end + 1;
- locks_wake_up_blocks(right);
+ locks_wake_up_blocks(&right->fl_core);
}
if (left) {
left->fl_end = request->fl_start - 1;
- locks_wake_up_blocks(left);
+ locks_wake_up_blocks(&left->fl_core);
}
out:
spin_unlock(&ctx->flc_lock);
@@ -1414,7 +1414,7 @@ int lease_modify(struct file_lock *fl, int arg, struct list_head *dispose)
if (error)
return error;
lease_clear_pending(fl, arg);
- locks_wake_up_blocks(fl);
+ locks_wake_up_blocks(&fl->fl_core);
if (arg == F_UNLCK) {
struct file *filp = fl->fl_core.flc_file;
--
2.43.0
Have locks_insert_lock_ctx, locks_unlink_lock_ctx and
locks_delete_lock_ctx take a file_lock_core pointer instead of a
file_lock, and fix up the callers.
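For reference, locks_delete_lock_ctx now needs to get from an embedded
file_lock_core back to its containing struct file_lock before freeing
it. A minimal sketch of such an accessor (assuming the embedded member
is named fl_core, as elsewhere in this series; the real helper may
differ):

    /* Sketch only: map an embedded file_lock_core back to its container. */
    static inline struct file_lock *file_lock(struct file_lock_core *flc)
    {
            return container_of(flc, struct file_lock, fl_core);
    }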
Signed-off-by: Jeff Layton <[email protected]>
---
fs/locks.c | 44 ++++++++++++++++++++++----------------------
1 file changed, 22 insertions(+), 22 deletions(-)
diff --git a/fs/locks.c b/fs/locks.c
index 03985cfb7eff..0491d621417d 100644
--- a/fs/locks.c
+++ b/fs/locks.c
@@ -824,28 +824,28 @@ static void locks_wake_up_blocks(struct file_lock_core *blocker)
}
static void
-locks_insert_lock_ctx(struct file_lock *fl, struct list_head *before)
+locks_insert_lock_ctx(struct file_lock_core *fl, struct list_head *before)
{
- list_add_tail(&fl->fl_core.flc_list, before);
- locks_insert_global_locks(&fl->fl_core);
+ list_add_tail(&fl->flc_list, before);
+ locks_insert_global_locks(fl);
}
static void
-locks_unlink_lock_ctx(struct file_lock *fl)
+locks_unlink_lock_ctx(struct file_lock_core *fl)
{
- locks_delete_global_locks(&fl->fl_core);
- list_del_init(&fl->fl_core.flc_list);
- locks_wake_up_blocks(&fl->fl_core);
+ locks_delete_global_locks(fl);
+ list_del_init(&fl->flc_list);
+ locks_wake_up_blocks(fl);
}
static void
-locks_delete_lock_ctx(struct file_lock *fl, struct list_head *dispose)
+locks_delete_lock_ctx(struct file_lock_core *fl, struct list_head *dispose)
{
locks_unlink_lock_ctx(fl);
if (dispose)
- list_add(&fl->fl_core.flc_list, dispose);
+ list_add(&fl->flc_list, dispose);
else
- locks_free_lock(fl);
+ locks_free_lock(file_lock(fl));
}
/* Determine if lock sys_fl blocks lock caller_fl. Common functionality
@@ -1072,7 +1072,7 @@ static int flock_lock_inode(struct inode *inode, struct file_lock *request)
if (request->fl_core.flc_type == fl->fl_core.flc_type)
goto out;
found = true;
- locks_delete_lock_ctx(fl, &dispose);
+ locks_delete_lock_ctx(&fl->fl_core, &dispose);
break;
}
@@ -1097,7 +1097,7 @@ static int flock_lock_inode(struct inode *inode, struct file_lock *request)
goto out;
locks_copy_lock(new_fl, request);
locks_move_blocks(new_fl, request);
- locks_insert_lock_ctx(new_fl, &ctx->flc_flock);
+ locks_insert_lock_ctx(&new_fl->fl_core, &ctx->flc_flock);
new_fl = NULL;
error = 0;
@@ -1236,7 +1236,7 @@ static int posix_lock_inode(struct inode *inode, struct file_lock *request,
else
request->fl_end = fl->fl_end;
if (added) {
- locks_delete_lock_ctx(fl, &dispose);
+ locks_delete_lock_ctx(&fl->fl_core, &dispose);
continue;
}
request = fl;
@@ -1265,7 +1265,7 @@ static int posix_lock_inode(struct inode *inode, struct file_lock *request,
* one (This may happen several times).
*/
if (added) {
- locks_delete_lock_ctx(fl, &dispose);
+ locks_delete_lock_ctx(&fl->fl_core, &dispose);
continue;
}
/*
@@ -1282,9 +1282,9 @@ static int posix_lock_inode(struct inode *inode, struct file_lock *request,
locks_move_blocks(new_fl, request);
request = new_fl;
new_fl = NULL;
- locks_insert_lock_ctx(request,
+ locks_insert_lock_ctx(&request->fl_core,
&fl->fl_core.flc_list);
- locks_delete_lock_ctx(fl, &dispose);
+ locks_delete_lock_ctx(&fl->fl_core, &dispose);
added = true;
}
}
@@ -1313,7 +1313,7 @@ static int posix_lock_inode(struct inode *inode, struct file_lock *request,
}
locks_copy_lock(new_fl, request);
locks_move_blocks(new_fl, request);
- locks_insert_lock_ctx(new_fl, &fl->fl_core.flc_list);
+ locks_insert_lock_ctx(&new_fl->fl_core, &fl->fl_core.flc_list);
fl = new_fl;
new_fl = NULL;
}
@@ -1325,7 +1325,7 @@ static int posix_lock_inode(struct inode *inode, struct file_lock *request,
left = new_fl2;
new_fl2 = NULL;
locks_copy_lock(left, right);
- locks_insert_lock_ctx(left, &fl->fl_core.flc_list);
+ locks_insert_lock_ctx(&left->fl_core, &fl->fl_core.flc_list);
}
right->fl_start = request->fl_end + 1;
locks_wake_up_blocks(&right->fl_core);
@@ -1425,7 +1425,7 @@ int lease_modify(struct file_lock *fl, int arg, struct list_head *dispose)
printk(KERN_ERR "locks_delete_lock: fasync == %p\n", fl->fl_fasync);
fl->fl_fasync = NULL;
}
- locks_delete_lock_ctx(fl, dispose);
+ locks_delete_lock_ctx(&fl->fl_core, dispose);
}
return 0;
}
@@ -1558,7 +1558,7 @@ int __break_lease(struct inode *inode, unsigned int mode, unsigned int type)
fl->fl_downgrade_time = break_time;
}
if (fl->fl_lmops->lm_break(fl))
- locks_delete_lock_ctx(fl, &dispose);
+ locks_delete_lock_ctx(&fl->fl_core, &dispose);
}
if (list_empty(&ctx->flc_lease))
@@ -1816,7 +1816,7 @@ generic_add_lease(struct file *filp, int arg, struct file_lock **flp, void **pri
if (!leases_enable)
goto out;
- locks_insert_lock_ctx(lease, &ctx->flc_lease);
+ locks_insert_lock_ctx(&lease->fl_core, &ctx->flc_lease);
/*
* The check in break_lease() is lockless. It's possible for another
* open to race in after we did the earlier check for a conflicting
@@ -1829,7 +1829,7 @@ generic_add_lease(struct file *filp, int arg, struct file_lock **flp, void **pri
smp_mb();
error = check_conflicting_open(filp, arg, lease->fl_core.flc_flags);
if (error) {
- locks_unlink_lock_ctx(lease);
+ locks_unlink_lock_ctx(&lease->fl_core);
goto out;
}
--
2.43.0
Convert locks_translate_pid to take a struct file_lock_core pointer
instead of a struct file_lock, and fix up the callers.
Signed-off-by: Jeff Layton <[email protected]>
---
fs/locks.c | 20 ++++++++++----------
1 file changed, 10 insertions(+), 10 deletions(-)
diff --git a/fs/locks.c b/fs/locks.c
index 0491d621417d..e8afdd084245 100644
--- a/fs/locks.c
+++ b/fs/locks.c
@@ -2169,17 +2169,17 @@ EXPORT_SYMBOL_GPL(vfs_test_lock);
*
* Used to translate a fl_pid into a namespace virtual pid number
*/
-static pid_t locks_translate_pid(struct file_lock *fl, struct pid_namespace *ns)
+static pid_t locks_translate_pid(struct file_lock_core *fl, struct pid_namespace *ns)
{
pid_t vnr;
struct pid *pid;
- if (fl->fl_core.flc_flags & FL_OFDLCK)
+ if (fl->flc_flags & FL_OFDLCK)
return -1;
/* Remote locks report a negative pid value */
- if (fl->fl_core.flc_pid <= 0)
- return fl->fl_core.flc_pid;
+ if (fl->flc_pid <= 0)
+ return fl->flc_pid;
/*
* If the flock owner process is dead and its pid has been already
@@ -2187,10 +2187,10 @@ static pid_t locks_translate_pid(struct file_lock *fl, struct pid_namespace *ns)
* flock owner pid number in init pidns.
*/
if (ns == &init_pid_ns)
- return (pid_t) fl->fl_core.flc_pid;
+ return (pid_t) fl->flc_pid;
rcu_read_lock();
- pid = find_pid_ns(fl->fl_core.flc_pid, &init_pid_ns);
+ pid = find_pid_ns(fl->flc_pid, &init_pid_ns);
vnr = pid_nr_ns(pid, ns);
rcu_read_unlock();
return vnr;
@@ -2198,7 +2198,7 @@ static pid_t locks_translate_pid(struct file_lock *fl, struct pid_namespace *ns)
static int posix_lock_to_flock(struct flock *flock, struct file_lock *fl)
{
- flock->l_pid = locks_translate_pid(fl, task_active_pid_ns(current));
+ flock->l_pid = locks_translate_pid(&fl->fl_core, task_active_pid_ns(current));
#if BITS_PER_LONG == 32
/*
* Make sure we can represent the posix lock via
@@ -2220,7 +2220,7 @@ static int posix_lock_to_flock(struct flock *flock, struct file_lock *fl)
#if BITS_PER_LONG == 32
static void posix_lock_to_flock64(struct flock64 *flock, struct file_lock *fl)
{
- flock->l_pid = locks_translate_pid(fl, task_active_pid_ns(current));
+ flock->l_pid = locks_translate_pid(&fl->fl_core, task_active_pid_ns(current));
flock->l_start = fl->fl_start;
flock->l_len = fl->fl_end == OFFSET_MAX ? 0 :
fl->fl_end - fl->fl_start + 1;
@@ -2726,7 +2726,7 @@ static void lock_get_status(struct seq_file *f, struct file_lock *fl,
struct pid_namespace *proc_pidns = proc_pid_ns(file_inode(f->file)->i_sb);
int type = fl->fl_core.flc_type;
- pid = locks_translate_pid(fl, proc_pidns);
+ pid = locks_translate_pid(&fl->fl_core, proc_pidns);
/*
* If lock owner is dead (and pid is freed) or not visible in current
* pidns, zero is shown as a pid value. Check lock info from
@@ -2819,7 +2819,7 @@ static int locks_show(struct seq_file *f, void *v)
cur = hlist_entry(v, struct file_lock, fl_core.flc_link);
- if (locks_translate_pid(cur, proc_pidns) == 0)
+ if (locks_translate_pid(&cur->fl_core, proc_pidns) == 0)
return 0;
/* View this crossed linked list as a binary tree, the first member of fl_blocked_requests
--
2.43.0
Reduce some pointer manipulation by using struct file_lock_core
directly where possible, and only converting back to a struct file_lock
when needed.
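The general pattern is to pass a struct file_lock_core through the
generic code and only convert back to the containing struct file_lock
when a lock-only field such as fl_start or fl_end is needed. A
hypothetical helper, for illustration only (example_print_range is not
part of the patch):

    /* Hypothetical: print the byte range of a POSIX lock given its core. */
    static void example_print_range(struct seq_file *f,
                                    struct file_lock_core *flc)
    {
            if (flc->flc_flags & FL_POSIX) {
                    struct file_lock *fl = file_lock(flc);

                    seq_printf(f, "%lld-%lld\n",
                               (long long)fl->fl_start,
                               (long long)fl->fl_end);
            }
    }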
Signed-off-by: Jeff Layton <[email protected]>
---
fs/locks.c | 71 +++++++++++++++++++++++++++++++-------------------------------
1 file changed, 36 insertions(+), 35 deletions(-)
diff --git a/fs/locks.c b/fs/locks.c
index e8afdd084245..de93d38da2f9 100644
--- a/fs/locks.c
+++ b/fs/locks.c
@@ -2718,52 +2718,54 @@ struct locks_iterator {
loff_t li_pos;
};
-static void lock_get_status(struct seq_file *f, struct file_lock *fl,
+static void lock_get_status(struct seq_file *f, struct file_lock_core *flc,
loff_t id, char *pfx, int repeat)
{
struct inode *inode = NULL;
unsigned int pid;
struct pid_namespace *proc_pidns = proc_pid_ns(file_inode(f->file)->i_sb);
- int type = fl->fl_core.flc_type;
+ int type = flc->flc_type;
+ struct file_lock *fl = file_lock(flc);
+
+ pid = locks_translate_pid(flc, proc_pidns);
- pid = locks_translate_pid(&fl->fl_core, proc_pidns);
/*
* If lock owner is dead (and pid is freed) or not visible in current
* pidns, zero is shown as a pid value. Check lock info from
* init_pid_ns to get saved lock pid value.
*/
- if (fl->fl_core.flc_file != NULL)
- inode = file_inode(fl->fl_core.flc_file);
+ if (flc->flc_file != NULL)
+ inode = file_inode(flc->flc_file);
seq_printf(f, "%lld: ", id);
if (repeat)
seq_printf(f, "%*s", repeat - 1 + (int)strlen(pfx), pfx);
- if (fl->fl_core.flc_flags & FL_POSIX) {
- if (fl->fl_core.flc_flags & FL_ACCESS)
+ if (flc->flc_flags & FL_POSIX) {
+ if (flc->flc_flags & FL_ACCESS)
seq_puts(f, "ACCESS");
- else if (fl->fl_core.flc_flags & FL_OFDLCK)
+ else if (flc->flc_flags & FL_OFDLCK)
seq_puts(f, "OFDLCK");
else
seq_puts(f, "POSIX ");
seq_printf(f, " %s ",
(inode == NULL) ? "*NOINODE*" : "ADVISORY ");
- } else if (fl->fl_core.flc_flags & FL_FLOCK) {
+ } else if (flc->flc_flags & FL_FLOCK) {
seq_puts(f, "FLOCK ADVISORY ");
- } else if (fl->fl_core.flc_flags & (FL_LEASE|FL_DELEG|FL_LAYOUT)) {
+ } else if (flc->flc_flags & (FL_LEASE|FL_DELEG|FL_LAYOUT)) {
type = target_leasetype(fl);
- if (fl->fl_core.flc_flags & FL_DELEG)
+ if (flc->flc_flags & FL_DELEG)
seq_puts(f, "DELEG ");
else
seq_puts(f, "LEASE ");
if (lease_breaking(fl))
seq_puts(f, "BREAKING ");
- else if (fl->fl_core.flc_file)
+ else if (flc->flc_file)
seq_puts(f, "ACTIVE ");
else
seq_puts(f, "BREAKER ");
@@ -2781,7 +2783,7 @@ static void lock_get_status(struct seq_file *f, struct file_lock *fl,
} else {
seq_printf(f, "%d <none>:0 ", pid);
}
- if (fl->fl_core.flc_flags & FL_POSIX) {
+ if (flc->flc_flags & FL_POSIX) {
if (fl->fl_end == OFFSET_MAX)
seq_printf(f, "%Ld EOF\n", fl->fl_start);
else
@@ -2791,18 +2793,18 @@ static void lock_get_status(struct seq_file *f, struct file_lock *fl,
}
}
-static struct file_lock *get_next_blocked_member(struct file_lock *node)
+static struct file_lock_core *get_next_blocked_member(struct file_lock_core *node)
{
- struct file_lock *tmp;
+ struct file_lock_core *tmp;
/* NULL node or root node */
- if (node == NULL || node->fl_core.flc_blocker == NULL)
+ if (node == NULL || node->flc_blocker == NULL)
return NULL;
/* Next member in the linked list could be itself */
- tmp = list_next_entry(node, fl_core.flc_blocked_member);
- if (list_entry_is_head(tmp, &node->fl_core.flc_blocker->flc_blocked_requests,
- fl_core.flc_blocked_member)
+ tmp = list_next_entry(node, flc_blocked_member);
+ if (list_entry_is_head(tmp, &node->flc_blocker->flc_blocked_requests,
+ flc_blocked_member)
|| tmp == node) {
return NULL;
}
@@ -2813,18 +2815,18 @@ static struct file_lock *get_next_blocked_member(struct file_lock *node)
static int locks_show(struct seq_file *f, void *v)
{
struct locks_iterator *iter = f->private;
- struct file_lock *cur, *tmp;
+ struct file_lock_core *cur, *tmp;
struct pid_namespace *proc_pidns = proc_pid_ns(file_inode(f->file)->i_sb);
int level = 0;
- cur = hlist_entry(v, struct file_lock, fl_core.flc_link);
+ cur = hlist_entry(v, struct file_lock_core, flc_link);
- if (locks_translate_pid(&cur->fl_core, proc_pidns) == 0)
+ if (locks_translate_pid(cur, proc_pidns) == 0)
return 0;
- /* View this crossed linked list as a binary tree, the first member of fl_blocked_requests
+ /* View this crossed linked list as a binary tree, the first member of flc_blocked_requests
* is the left child of current node, the next silibing in flc_blocked_member is the
- * right child, we can alse get the parent of current node from fl_blocker, so this
+ * right child, we can alse get the parent of current node from flc_blocker, so this
* question becomes traversal of a binary tree
*/
while (cur != NULL) {
@@ -2833,18 +2835,18 @@ static int locks_show(struct seq_file *f, void *v)
else
lock_get_status(f, cur, iter->li_pos, "", level);
- if (!list_empty(&cur->fl_core.flc_blocked_requests)) {
+ if (!list_empty(&cur->flc_blocked_requests)) {
/* Turn left */
- cur = list_first_entry_or_null(&cur->fl_core.flc_blocked_requests,
- struct file_lock,
- fl_core.flc_blocked_member);
+ cur = list_first_entry_or_null(&cur->flc_blocked_requests,
+ struct file_lock_core,
+ flc_blocked_member);
level++;
} else {
/* Turn right */
tmp = get_next_blocked_member(cur);
/* Fall back to parent node */
- while (tmp == NULL && cur->fl_core.flc_blocker != NULL) {
- cur = file_lock(cur->fl_core.flc_blocker);
+ while (tmp == NULL && cur->flc_blocker != NULL) {
+ cur = cur->flc_blocker;
level--;
tmp = get_next_blocked_member(cur);
}
@@ -2859,14 +2861,13 @@ static void __show_fd_locks(struct seq_file *f,
struct list_head *head, int *id,
struct file *filp, struct files_struct *files)
{
- struct file_lock *fl;
+ struct file_lock_core *fl;
- list_for_each_entry(fl, head, fl_core.flc_list) {
+ list_for_each_entry(fl, head, flc_list) {
- if (filp != fl->fl_core.flc_file)
+ if (filp != fl->flc_file)
continue;
- if (fl->fl_core.flc_owner != files &&
- fl->fl_core.flc_owner != filp)
+ if (fl->flc_owner != files && fl->flc_owner != filp)
continue;
(*id)++;
--
2.43.0
Most of the existing APIs have remained the same, but subsystems that
access file_lock fields directly now need to reach into struct
file_lock_core.
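The change for 9p is mechanical: the generic fields (type, flags, pid,
owner, file) now live in the embedded fl_core member, while byte-range
fields such as fl_start and fl_end stay in struct file_lock. Roughly,
with a hypothetical helper (is_posix_unlock is only illustrative):

    /* Illustrative only: generic lock state is now under fl->fl_core. */
    static bool is_posix_unlock(const struct file_lock *fl)
    {
            return (fl->fl_core.flc_flags & FL_POSIX) &&
                   fl->fl_core.flc_type == F_UNLCK;
    }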
Signed-off-by: Jeff Layton <[email protected]>
---
fs/9p/vfs_file.c | 39 +++++++++++++++++++--------------------
1 file changed, 19 insertions(+), 20 deletions(-)
diff --git a/fs/9p/vfs_file.c b/fs/9p/vfs_file.c
index a1dabcf73380..4e4f555e0c8b 100644
--- a/fs/9p/vfs_file.c
+++ b/fs/9p/vfs_file.c
@@ -9,7 +9,6 @@
#include <linux/module.h>
#include <linux/errno.h>
#include <linux/fs.h>
-#define _NEED_FILE_LOCK_FIELD_MACROS
#include <linux/filelock.h>
#include <linux/sched.h>
#include <linux/file.h>
@@ -108,7 +107,7 @@ static int v9fs_file_lock(struct file *filp, int cmd, struct file_lock *fl)
p9_debug(P9_DEBUG_VFS, "filp: %p lock: %p\n", filp, fl);
- if ((IS_SETLK(cmd) || IS_SETLKW(cmd)) && fl->fl_type != F_UNLCK) {
+ if ((IS_SETLK(cmd) || IS_SETLKW(cmd)) && fl->fl_core.flc_type != F_UNLCK) {
filemap_write_and_wait(inode->i_mapping);
invalidate_mapping_pages(&inode->i_data, 0, -1);
}
@@ -127,7 +126,7 @@ static int v9fs_file_do_lock(struct file *filp, int cmd, struct file_lock *fl)
fid = filp->private_data;
BUG_ON(fid == NULL);
- BUG_ON((fl->fl_flags & FL_POSIX) != FL_POSIX);
+ BUG_ON((fl->fl_core.flc_flags & FL_POSIX) != FL_POSIX);
res = locks_lock_file_wait(filp, fl);
if (res < 0)
@@ -136,7 +135,7 @@ static int v9fs_file_do_lock(struct file *filp, int cmd, struct file_lock *fl)
/* convert posix lock to p9 tlock args */
memset(&flock, 0, sizeof(flock));
/* map the lock type */
- switch (fl->fl_type) {
+ switch (fl->fl_core.flc_type) {
case F_RDLCK:
flock.type = P9_LOCK_TYPE_RDLCK;
break;
@@ -152,7 +151,7 @@ static int v9fs_file_do_lock(struct file *filp, int cmd, struct file_lock *fl)
flock.length = 0;
else
flock.length = fl->fl_end - fl->fl_start + 1;
- flock.proc_id = fl->fl_pid;
+ flock.proc_id = fl->fl_core.flc_pid;
flock.client_id = fid->clnt->name;
if (IS_SETLKW(cmd))
flock.flags = P9_LOCK_FLAGS_BLOCK;
@@ -207,13 +206,13 @@ static int v9fs_file_do_lock(struct file *filp, int cmd, struct file_lock *fl)
* incase server returned error for lock request, revert
* it locally
*/
- if (res < 0 && fl->fl_type != F_UNLCK) {
- unsigned char type = fl->fl_type;
+ if (res < 0 && fl->fl_core.flc_type != F_UNLCK) {
+ unsigned char type = fl->fl_core.flc_type;
- fl->fl_type = F_UNLCK;
+ fl->fl_core.flc_type = F_UNLCK;
/* Even if this fails we want to return the remote error */
locks_lock_file_wait(filp, fl);
- fl->fl_type = type;
+ fl->fl_core.flc_type = type;
}
if (flock.client_id != fid->clnt->name)
kfree(flock.client_id);
@@ -235,7 +234,7 @@ static int v9fs_file_getlock(struct file *filp, struct file_lock *fl)
* if we have a conflicting lock locally, no need to validate
* with server
*/
- if (fl->fl_type != F_UNLCK)
+ if (fl->fl_core.flc_type != F_UNLCK)
return res;
/* convert posix lock to p9 tgetlock args */
@@ -246,7 +245,7 @@ static int v9fs_file_getlock(struct file *filp, struct file_lock *fl)
glock.length = 0;
else
glock.length = fl->fl_end - fl->fl_start + 1;
- glock.proc_id = fl->fl_pid;
+ glock.proc_id = fl->fl_core.flc_pid;
glock.client_id = fid->clnt->name;
res = p9_client_getlock_dotl(fid, &glock);
@@ -255,13 +254,13 @@ static int v9fs_file_getlock(struct file *filp, struct file_lock *fl)
/* map 9p lock type to os lock type */
switch (glock.type) {
case P9_LOCK_TYPE_RDLCK:
- fl->fl_type = F_RDLCK;
+ fl->fl_core.flc_type = F_RDLCK;
break;
case P9_LOCK_TYPE_WRLCK:
- fl->fl_type = F_WRLCK;
+ fl->fl_core.flc_type = F_WRLCK;
break;
case P9_LOCK_TYPE_UNLCK:
- fl->fl_type = F_UNLCK;
+ fl->fl_core.flc_type = F_UNLCK;
break;
}
if (glock.type != P9_LOCK_TYPE_UNLCK) {
@@ -270,7 +269,7 @@ static int v9fs_file_getlock(struct file *filp, struct file_lock *fl)
fl->fl_end = OFFSET_MAX;
else
fl->fl_end = glock.start + glock.length - 1;
- fl->fl_pid = -glock.proc_id;
+ fl->fl_core.flc_pid = -glock.proc_id;
}
out:
if (glock.client_id != fid->clnt->name)
@@ -294,7 +293,7 @@ static int v9fs_file_lock_dotl(struct file *filp, int cmd, struct file_lock *fl)
p9_debug(P9_DEBUG_VFS, "filp: %p cmd:%d lock: %p name: %pD\n",
filp, cmd, fl, filp);
- if ((IS_SETLK(cmd) || IS_SETLKW(cmd)) && fl->fl_type != F_UNLCK) {
+ if ((IS_SETLK(cmd) || IS_SETLKW(cmd)) && fl->fl_core.flc_type != F_UNLCK) {
filemap_write_and_wait(inode->i_mapping);
invalidate_mapping_pages(&inode->i_data, 0, -1);
}
@@ -325,16 +324,16 @@ static int v9fs_file_flock_dotl(struct file *filp, int cmd,
p9_debug(P9_DEBUG_VFS, "filp: %p cmd:%d lock: %p name: %pD\n",
filp, cmd, fl, filp);
- if (!(fl->fl_flags & FL_FLOCK))
+ if (!(fl->fl_core.flc_flags & FL_FLOCK))
goto out_err;
- if ((IS_SETLK(cmd) || IS_SETLKW(cmd)) && fl->fl_type != F_UNLCK) {
+ if ((IS_SETLK(cmd) || IS_SETLKW(cmd)) && fl->fl_core.flc_type != F_UNLCK) {
filemap_write_and_wait(inode->i_mapping);
invalidate_mapping_pages(&inode->i_data, 0, -1);
}
/* Convert flock to posix lock */
- fl->fl_flags |= FL_POSIX;
- fl->fl_flags ^= FL_FLOCK;
+ fl->fl_core.flc_flags |= FL_POSIX;
+ fl->fl_core.flc_flags ^= FL_FLOCK;
if (IS_SETLK(cmd) | IS_SETLKW(cmd))
ret = v9fs_file_do_lock(filp, cmd, fl);
--
2.43.0
Most of the existing APIs have remained the same, but subsystems that
access file_lock fields directly now need to reach into struct
file_lock_core.
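Besides the field renames, the per-lock wait queue that afs sleeps on
is now the one embedded in file_lock_core (flc_wait). A hypothetical
sketch of the wake-up side (example_grant is illustrative only):

    /* Illustrative only: mark an afs lock granted and wake its waiter. */
    static void example_grant(struct file_lock *p)
    {
            p->fl_u.afs.state = AFS_LOCK_GRANTED;
            wake_up(&p->fl_core.flc_wait);
    }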
Signed-off-by: Jeff Layton <[email protected]>
---
fs/afs/flock.c | 55 +++++++++++++++++++++++-----------------------
fs/afs/internal.h | 1 -
include/trace/events/afs.h | 4 ++--
3 files changed, 30 insertions(+), 30 deletions(-)
diff --git a/fs/afs/flock.c b/fs/afs/flock.c
index e7feaf66bddf..34e7510526b9 100644
--- a/fs/afs/flock.c
+++ b/fs/afs/flock.c
@@ -93,13 +93,13 @@ static void afs_grant_locks(struct afs_vnode *vnode)
bool exclusive = (vnode->lock_type == AFS_LOCK_WRITE);
list_for_each_entry_safe(p, _p, &vnode->pending_locks, fl_u.afs.link) {
- if (!exclusive && p->fl_type == F_WRLCK)
+ if (!exclusive && p->fl_core.flc_type == F_WRLCK)
continue;
list_move_tail(&p->fl_u.afs.link, &vnode->granted_locks);
p->fl_u.afs.state = AFS_LOCK_GRANTED;
trace_afs_flock_op(vnode, p, afs_flock_op_grant);
- wake_up(&p->fl_wait);
+ wake_up(&p->fl_core.flc_wait);
}
}
@@ -121,16 +121,16 @@ static void afs_next_locker(struct afs_vnode *vnode, int error)
list_for_each_entry_safe(p, _p, &vnode->pending_locks, fl_u.afs.link) {
if (error &&
- p->fl_type == type &&
- afs_file_key(p->fl_file) == key) {
+ p->fl_core.flc_type == type &&
+ afs_file_key(p->fl_core.flc_file) == key) {
list_del_init(&p->fl_u.afs.link);
p->fl_u.afs.state = error;
- wake_up(&p->fl_wait);
+ wake_up(&p->fl_core.flc_wait);
}
/* Select the next locker to hand off to. */
if (next &&
- (next->fl_type == F_WRLCK || p->fl_type == F_RDLCK))
+ (next->fl_core.flc_type == F_WRLCK || p->fl_core.flc_type == F_RDLCK))
continue;
next = p;
}
@@ -142,7 +142,7 @@ static void afs_next_locker(struct afs_vnode *vnode, int error)
afs_set_lock_state(vnode, AFS_VNODE_LOCK_SETTING);
next->fl_u.afs.state = AFS_LOCK_YOUR_TRY;
trace_afs_flock_op(vnode, next, afs_flock_op_wake);
- wake_up(&next->fl_wait);
+ wake_up(&next->fl_core.flc_wait);
} else {
afs_set_lock_state(vnode, AFS_VNODE_LOCK_NONE);
trace_afs_flock_ev(vnode, NULL, afs_flock_no_lockers, 0);
@@ -166,7 +166,7 @@ static void afs_kill_lockers_enoent(struct afs_vnode *vnode)
struct file_lock, fl_u.afs.link);
list_del_init(&p->fl_u.afs.link);
p->fl_u.afs.state = -ENOENT;
- wake_up(&p->fl_wait);
+ wake_up(&p->fl_core.flc_wait);
}
key_put(vnode->lock_key);
@@ -464,14 +464,14 @@ static int afs_do_setlk(struct file *file, struct file_lock *fl)
_enter("{%llx:%llu},%llu-%llu,%u,%u",
vnode->fid.vid, vnode->fid.vnode,
- fl->fl_start, fl->fl_end, fl->fl_type, mode);
+ fl->fl_start, fl->fl_end, fl->fl_core.flc_type, mode);
fl->fl_ops = &afs_lock_ops;
INIT_LIST_HEAD(&fl->fl_u.afs.link);
fl->fl_u.afs.state = AFS_LOCK_PENDING;
partial = (fl->fl_start != 0 || fl->fl_end != OFFSET_MAX);
- type = (fl->fl_type == F_RDLCK) ? AFS_LOCK_READ : AFS_LOCK_WRITE;
+ type = (fl->fl_core.flc_type == F_RDLCK) ? AFS_LOCK_READ : AFS_LOCK_WRITE;
if (mode == afs_flock_mode_write && partial)
type = AFS_LOCK_WRITE;
@@ -524,7 +524,7 @@ static int afs_do_setlk(struct file *file, struct file_lock *fl)
}
if (vnode->lock_state == AFS_VNODE_LOCK_NONE &&
- !(fl->fl_flags & FL_SLEEP)) {
+ !(fl->fl_core.flc_flags & FL_SLEEP)) {
ret = -EAGAIN;
if (type == AFS_LOCK_READ) {
if (vnode->status.lock_count == -1)
@@ -621,7 +621,7 @@ static int afs_do_setlk(struct file *file, struct file_lock *fl)
return 0;
lock_is_contended:
- if (!(fl->fl_flags & FL_SLEEP)) {
+ if (!(fl->fl_core.flc_flags & FL_SLEEP)) {
list_del_init(&fl->fl_u.afs.link);
afs_next_locker(vnode, 0);
ret = -EAGAIN;
@@ -641,7 +641,7 @@ static int afs_do_setlk(struct file *file, struct file_lock *fl)
spin_unlock(&vnode->lock);
trace_afs_flock_ev(vnode, fl, afs_flock_waiting, 0);
- ret = wait_event_interruptible(fl->fl_wait,
+ ret = wait_event_interruptible(fl->fl_core.flc_wait,
fl->fl_u.afs.state != AFS_LOCK_PENDING);
trace_afs_flock_ev(vnode, fl, afs_flock_waited, ret);
@@ -704,7 +704,8 @@ static int afs_do_unlk(struct file *file, struct file_lock *fl)
struct afs_vnode *vnode = AFS_FS_I(file_inode(file));
int ret;
- _enter("{%llx:%llu},%u", vnode->fid.vid, vnode->fid.vnode, fl->fl_type);
+ _enter("{%llx:%llu},%u", vnode->fid.vid, vnode->fid.vnode,
+ fl->fl_core.flc_type);
trace_afs_flock_op(vnode, fl, afs_flock_op_unlock);
@@ -730,11 +731,11 @@ static int afs_do_getlk(struct file *file, struct file_lock *fl)
if (vnode->lock_state == AFS_VNODE_LOCK_DELETED)
return -ENOENT;
- fl->fl_type = F_UNLCK;
+ fl->fl_core.flc_type = F_UNLCK;
/* check local lock records first */
posix_test_lock(file, fl);
- if (fl->fl_type == F_UNLCK) {
+ if (fl->fl_core.flc_type == F_UNLCK) {
/* no local locks; consult the server */
ret = afs_fetch_status(vnode, key, false, NULL);
if (ret < 0)
@@ -743,18 +744,18 @@ static int afs_do_getlk(struct file *file, struct file_lock *fl)
lock_count = READ_ONCE(vnode->status.lock_count);
if (lock_count != 0) {
if (lock_count > 0)
- fl->fl_type = F_RDLCK;
+ fl->fl_core.flc_type = F_RDLCK;
else
- fl->fl_type = F_WRLCK;
+ fl->fl_core.flc_type = F_WRLCK;
fl->fl_start = 0;
fl->fl_end = OFFSET_MAX;
- fl->fl_pid = 0;
+ fl->fl_core.flc_pid = 0;
}
}
ret = 0;
error:
- _leave(" = %d [%hd]", ret, fl->fl_type);
+ _leave(" = %d [%hd]", ret, fl->fl_core.flc_type);
return ret;
}
@@ -769,7 +770,7 @@ int afs_lock(struct file *file, int cmd, struct file_lock *fl)
_enter("{%llx:%llu},%d,{t=%x,fl=%x,r=%Ld:%Ld}",
vnode->fid.vid, vnode->fid.vnode, cmd,
- fl->fl_type, fl->fl_flags,
+ fl->fl_core.flc_type, fl->fl_core.flc_flags,
(long long) fl->fl_start, (long long) fl->fl_end);
if (IS_GETLK(cmd))
@@ -778,7 +779,7 @@ int afs_lock(struct file *file, int cmd, struct file_lock *fl)
fl->fl_u.afs.debug_id = atomic_inc_return(&afs_file_lock_debug_id);
trace_afs_flock_op(vnode, fl, afs_flock_op_lock);
- if (fl->fl_type == F_UNLCK)
+ if (fl->fl_core.flc_type == F_UNLCK)
ret = afs_do_unlk(file, fl);
else
ret = afs_do_setlk(file, fl);
@@ -804,7 +805,7 @@ int afs_flock(struct file *file, int cmd, struct file_lock *fl)
_enter("{%llx:%llu},%d,{t=%x,fl=%x}",
vnode->fid.vid, vnode->fid.vnode, cmd,
- fl->fl_type, fl->fl_flags);
+ fl->fl_core.flc_type, fl->fl_core.flc_flags);
/*
* No BSD flocks over NFS allowed.
@@ -813,14 +814,14 @@ int afs_flock(struct file *file, int cmd, struct file_lock *fl)
* Not sure whether that would be unique, though, or whether
* that would break in other places.
*/
- if (!(fl->fl_flags & FL_FLOCK))
+ if (!(fl->fl_core.flc_flags & FL_FLOCK))
return -ENOLCK;
fl->fl_u.afs.debug_id = atomic_inc_return(&afs_file_lock_debug_id);
trace_afs_flock_op(vnode, fl, afs_flock_op_flock);
/* we're simulating flock() locks using posix locks on the server */
- if (fl->fl_type == F_UNLCK)
+ if (fl->fl_core.flc_type == F_UNLCK)
ret = afs_do_unlk(file, fl);
else
ret = afs_do_setlk(file, fl);
@@ -843,7 +844,7 @@ int afs_flock(struct file *file, int cmd, struct file_lock *fl)
*/
static void afs_fl_copy_lock(struct file_lock *new, struct file_lock *fl)
{
- struct afs_vnode *vnode = AFS_FS_I(file_inode(fl->fl_file));
+ struct afs_vnode *vnode = AFS_FS_I(file_inode(fl->fl_core.flc_file));
_enter("");
@@ -861,7 +862,7 @@ static void afs_fl_copy_lock(struct file_lock *new, struct file_lock *fl)
*/
static void afs_fl_release_private(struct file_lock *fl)
{
- struct afs_vnode *vnode = AFS_FS_I(file_inode(fl->fl_file));
+ struct afs_vnode *vnode = AFS_FS_I(file_inode(fl->fl_core.flc_file));
_enter("");
diff --git a/fs/afs/internal.h b/fs/afs/internal.h
index f5dd428e40f4..9c03fcf7ffaa 100644
--- a/fs/afs/internal.h
+++ b/fs/afs/internal.h
@@ -9,7 +9,6 @@
#include <linux/kernel.h>
#include <linux/ktime.h>
#include <linux/fs.h>
-#define _NEED_FILE_LOCK_FIELD_MACROS
#include <linux/filelock.h>
#include <linux/pagemap.h>
#include <linux/rxrpc.h>
diff --git a/include/trace/events/afs.h b/include/trace/events/afs.h
index 8d73171cb9f0..51b41d957c66 100644
--- a/include/trace/events/afs.h
+++ b/include/trace/events/afs.h
@@ -1164,8 +1164,8 @@ TRACE_EVENT(afs_flock_op,
__entry->from = fl->fl_start;
__entry->len = fl->fl_end - fl->fl_start + 1;
__entry->op = op;
- __entry->type = fl->fl_type;
- __entry->flags = fl->fl_flags;
+ __entry->type = fl->fl_core.flc_type;
+ __entry->flags = fl->fl_core.flc_flags;
__entry->debug_id = fl->fl_u.afs.debug_id;
),
--
2.43.0
Most of the existing APIs have remained the same, but subsystems that
access file_lock fields directly now need to reach into struct
file_lock_core.
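For ceph the list walks over the inode's lock context now use the
flc_list member inside fl_core as the list linkage, as ceph_count_locks
does below. For illustration only (example_count_posix is
hypothetical):

    /* Illustrative only: count POSIX locks in a file_lock_context. */
    static int example_count_posix(struct file_lock_context *ctx)
    {
            struct file_lock *lock;
            int n = 0;

            spin_lock(&ctx->flc_lock);
            list_for_each_entry(lock, &ctx->flc_posix, fl_core.flc_list)
                    n++;
            spin_unlock(&ctx->flc_lock);
            return n;
    }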
Signed-off-by: Jeff Layton <[email protected]>
---
fs/ceph/locks.c | 75 +++++++++++++++++++++++++++++----------------------------
1 file changed, 38 insertions(+), 37 deletions(-)
diff --git a/fs/ceph/locks.c b/fs/ceph/locks.c
index ccb358c398ca..89e44e7543eb 100644
--- a/fs/ceph/locks.c
+++ b/fs/ceph/locks.c
@@ -7,7 +7,6 @@
#include "super.h"
#include "mds_client.h"
-#define _NEED_FILE_LOCK_FIELD_MACROS
#include <linux/filelock.h>
#include <linux/ceph/pagelist.h>
@@ -34,7 +33,7 @@ void __init ceph_flock_init(void)
static void ceph_fl_copy_lock(struct file_lock *dst, struct file_lock *src)
{
- struct inode *inode = file_inode(dst->fl_file);
+ struct inode *inode = file_inode(dst->fl_core.flc_file);
atomic_inc(&ceph_inode(inode)->i_filelock_ref);
dst->fl_u.ceph.inode = igrab(inode);
}
@@ -111,17 +110,18 @@ static int ceph_lock_message(u8 lock_type, u16 operation, struct inode *inode,
else
length = fl->fl_end - fl->fl_start + 1;
- owner = secure_addr(fl->fl_owner);
+ owner = secure_addr(fl->fl_core.flc_owner);
doutc(cl, "rule: %d, op: %d, owner: %llx, pid: %llu, "
"start: %llu, length: %llu, wait: %d, type: %d\n",
- (int)lock_type, (int)operation, owner, (u64)fl->fl_pid,
- fl->fl_start, length, wait, fl->fl_type);
+ (int)lock_type, (int)operation, owner,
+ (u64) fl->fl_core.flc_pid,
+ fl->fl_start, length, wait, fl->fl_core.flc_type);
req->r_args.filelock_change.rule = lock_type;
req->r_args.filelock_change.type = cmd;
req->r_args.filelock_change.owner = cpu_to_le64(owner);
- req->r_args.filelock_change.pid = cpu_to_le64((u64)fl->fl_pid);
+ req->r_args.filelock_change.pid = cpu_to_le64((u64) fl->fl_core.flc_pid);
req->r_args.filelock_change.start = cpu_to_le64(fl->fl_start);
req->r_args.filelock_change.length = cpu_to_le64(length);
req->r_args.filelock_change.wait = wait;
@@ -131,13 +131,13 @@ static int ceph_lock_message(u8 lock_type, u16 operation, struct inode *inode,
err = ceph_mdsc_wait_request(mdsc, req, wait ?
ceph_lock_wait_for_completion : NULL);
if (!err && operation == CEPH_MDS_OP_GETFILELOCK) {
- fl->fl_pid = -le64_to_cpu(req->r_reply_info.filelock_reply->pid);
+ fl->fl_core.flc_pid = -le64_to_cpu(req->r_reply_info.filelock_reply->pid);
if (CEPH_LOCK_SHARED == req->r_reply_info.filelock_reply->type)
- fl->fl_type = F_RDLCK;
+ fl->fl_core.flc_type = F_RDLCK;
else if (CEPH_LOCK_EXCL == req->r_reply_info.filelock_reply->type)
- fl->fl_type = F_WRLCK;
+ fl->fl_core.flc_type = F_WRLCK;
else
- fl->fl_type = F_UNLCK;
+ fl->fl_core.flc_type = F_UNLCK;
fl->fl_start = le64_to_cpu(req->r_reply_info.filelock_reply->start);
length = le64_to_cpu(req->r_reply_info.filelock_reply->start) +
@@ -151,8 +151,8 @@ static int ceph_lock_message(u8 lock_type, u16 operation, struct inode *inode,
ceph_mdsc_put_request(req);
doutc(cl, "rule: %d, op: %d, pid: %llu, start: %llu, "
"length: %llu, wait: %d, type: %d, err code %d\n",
- (int)lock_type, (int)operation, (u64)fl->fl_pid,
- fl->fl_start, length, wait, fl->fl_type, err);
+ (int)lock_type, (int)operation, (u64) fl->fl_core.flc_pid,
+ fl->fl_start, length, wait, fl->fl_core.flc_type, err);
return err;
}
@@ -228,10 +228,10 @@ static int ceph_lock_wait_for_completion(struct ceph_mds_client *mdsc,
static int try_unlock_file(struct file *file, struct file_lock *fl)
{
int err;
- unsigned int orig_flags = fl->fl_flags;
- fl->fl_flags |= FL_EXISTS;
+ unsigned int orig_flags = fl->fl_core.flc_flags;
+ fl->fl_core.flc_flags |= FL_EXISTS;
err = locks_lock_file_wait(file, fl);
- fl->fl_flags = orig_flags;
+ fl->fl_core.flc_flags = orig_flags;
if (err == -ENOENT) {
if (!(orig_flags & FL_EXISTS))
err = 0;
@@ -254,13 +254,13 @@ int ceph_lock(struct file *file, int cmd, struct file_lock *fl)
u8 wait = 0;
u8 lock_cmd;
- if (!(fl->fl_flags & FL_POSIX))
+ if (!(fl->fl_core.flc_flags & FL_POSIX))
return -ENOLCK;
if (ceph_inode_is_shutdown(inode))
return -ESTALE;
- doutc(cl, "fl_owner: %p\n", fl->fl_owner);
+ doutc(cl, "fl_owner: %p\n", fl->fl_core.flc_owner);
/* set wait bit as appropriate, then make command as Ceph expects it*/
if (IS_GETLK(cmd))
@@ -274,19 +274,19 @@ int ceph_lock(struct file *file, int cmd, struct file_lock *fl)
}
spin_unlock(&ci->i_ceph_lock);
if (err < 0) {
- if (op == CEPH_MDS_OP_SETFILELOCK && F_UNLCK == fl->fl_type)
+ if (op == CEPH_MDS_OP_SETFILELOCK && F_UNLCK == fl->fl_core.flc_type)
posix_lock_file(file, fl, NULL);
return err;
}
- if (F_RDLCK == fl->fl_type)
+ if (F_RDLCK == fl->fl_core.flc_type)
lock_cmd = CEPH_LOCK_SHARED;
- else if (F_WRLCK == fl->fl_type)
+ else if (F_WRLCK == fl->fl_core.flc_type)
lock_cmd = CEPH_LOCK_EXCL;
else
lock_cmd = CEPH_LOCK_UNLOCK;
- if (op == CEPH_MDS_OP_SETFILELOCK && F_UNLCK == fl->fl_type) {
+ if (op == CEPH_MDS_OP_SETFILELOCK && F_UNLCK == fl->fl_core.flc_type) {
err = try_unlock_file(file, fl);
if (err <= 0)
return err;
@@ -294,7 +294,7 @@ int ceph_lock(struct file *file, int cmd, struct file_lock *fl)
err = ceph_lock_message(CEPH_LOCK_FCNTL, op, inode, lock_cmd, wait, fl);
if (!err) {
- if (op == CEPH_MDS_OP_SETFILELOCK && F_UNLCK != fl->fl_type) {
+ if (op == CEPH_MDS_OP_SETFILELOCK && F_UNLCK != fl->fl_core.flc_type) {
doutc(cl, "locking locally\n");
err = posix_lock_file(file, fl, NULL);
if (err) {
@@ -320,13 +320,13 @@ int ceph_flock(struct file *file, int cmd, struct file_lock *fl)
u8 wait = 0;
u8 lock_cmd;
- if (!(fl->fl_flags & FL_FLOCK))
+ if (!(fl->fl_core.flc_flags & FL_FLOCK))
return -ENOLCK;
if (ceph_inode_is_shutdown(inode))
return -ESTALE;
- doutc(cl, "fl_file: %p\n", fl->fl_file);
+ doutc(cl, "fl_file: %p\n", fl->fl_core.flc_file);
spin_lock(&ci->i_ceph_lock);
if (ci->i_ceph_flags & CEPH_I_ERROR_FILELOCK) {
@@ -334,7 +334,7 @@ int ceph_flock(struct file *file, int cmd, struct file_lock *fl)
}
spin_unlock(&ci->i_ceph_lock);
if (err < 0) {
- if (F_UNLCK == fl->fl_type)
+ if (F_UNLCK == fl->fl_core.flc_type)
locks_lock_file_wait(file, fl);
return err;
}
@@ -342,14 +342,14 @@ int ceph_flock(struct file *file, int cmd, struct file_lock *fl)
if (IS_SETLKW(cmd))
wait = 1;
- if (F_RDLCK == fl->fl_type)
+ if (F_RDLCK == fl->fl_core.flc_type)
lock_cmd = CEPH_LOCK_SHARED;
- else if (F_WRLCK == fl->fl_type)
+ else if (F_WRLCK == fl->fl_core.flc_type)
lock_cmd = CEPH_LOCK_EXCL;
else
lock_cmd = CEPH_LOCK_UNLOCK;
- if (F_UNLCK == fl->fl_type) {
+ if (F_UNLCK == fl->fl_core.flc_type) {
err = try_unlock_file(file, fl);
if (err <= 0)
return err;
@@ -357,7 +357,7 @@ int ceph_flock(struct file *file, int cmd, struct file_lock *fl)
err = ceph_lock_message(CEPH_LOCK_FLOCK, CEPH_MDS_OP_SETFILELOCK,
inode, lock_cmd, wait, fl);
- if (!err && F_UNLCK != fl->fl_type) {
+ if (!err && F_UNLCK != fl->fl_core.flc_type) {
err = locks_lock_file_wait(file, fl);
if (err) {
ceph_lock_message(CEPH_LOCK_FLOCK,
@@ -386,9 +386,9 @@ void ceph_count_locks(struct inode *inode, int *fcntl_count, int *flock_count)
ctx = locks_inode_context(inode);
if (ctx) {
spin_lock(&ctx->flc_lock);
- list_for_each_entry(lock, &ctx->flc_posix, fl_list)
+ list_for_each_entry(lock, &ctx->flc_posix, fl_core.flc_list)
++(*fcntl_count);
- list_for_each_entry(lock, &ctx->flc_flock, fl_list)
+ list_for_each_entry(lock, &ctx->flc_flock, fl_core.flc_list)
++(*flock_count);
spin_unlock(&ctx->flc_lock);
}
@@ -409,10 +409,10 @@ static int lock_to_ceph_filelock(struct inode *inode,
cephlock->start = cpu_to_le64(lock->fl_start);
cephlock->length = cpu_to_le64(lock->fl_end - lock->fl_start + 1);
cephlock->client = cpu_to_le64(0);
- cephlock->pid = cpu_to_le64((u64)lock->fl_pid);
- cephlock->owner = cpu_to_le64(secure_addr(lock->fl_owner));
+ cephlock->pid = cpu_to_le64((u64) lock->fl_core.flc_pid);
+ cephlock->owner = cpu_to_le64(secure_addr(lock->fl_core.flc_owner));
- switch (lock->fl_type) {
+ switch (lock->fl_core.flc_type) {
case F_RDLCK:
cephlock->type = CEPH_LOCK_SHARED;
break;
@@ -423,7 +423,8 @@ static int lock_to_ceph_filelock(struct inode *inode,
cephlock->type = CEPH_LOCK_UNLOCK;
break;
default:
- doutc(cl, "Have unknown lock type %d\n", lock->fl_type);
+ doutc(cl, "Have unknown lock type %d\n",
+ lock->fl_core.flc_type);
err = -EINVAL;
}
@@ -454,7 +455,7 @@ int ceph_encode_locks_to_buffer(struct inode *inode,
return 0;
spin_lock(&ctx->flc_lock);
- list_for_each_entry(lock, &ctx->flc_posix, fl_list) {
+ list_for_each_entry(lock, &ctx->flc_posix, fl_core.flc_list) {
++seen_fcntl;
if (seen_fcntl > num_fcntl_locks) {
err = -ENOSPC;
@@ -465,7 +466,7 @@ int ceph_encode_locks_to_buffer(struct inode *inode,
goto fail;
++l;
}
- list_for_each_entry(lock, &ctx->flc_flock, fl_list) {
+ list_for_each_entry(lock, &ctx->flc_flock, fl_core.flc_list) {
++seen_flock;
if (seen_flock > num_flock_locks) {
err = -ENOSPC;
--
2.43.0
Most of the existing APIs have remained the same, but subsystems that
access file_lock fields directly now need to reach into struct
file_lock_core.
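In lockd the file_lock is embedded in struct nlm_lock, so the generic
fields end up two levels down (lock->fl.fl_core.flc_*). The temporary
unlock request built in nlm_unlock_files follows the same layout; a
hypothetical sketch of that initialization (example_init_unlock is
illustrative only):

    /* Illustrative only: set up an unlock request for a whole file. */
    static void example_init_unlock(struct file_lock *lock, fl_owner_t owner)
    {
            locks_init_lock(lock);
            lock->fl_core.flc_type = F_UNLCK;
            lock->fl_core.flc_flags = FL_POSIX;
            lock->fl_core.flc_owner = owner;
            lock->fl_start = 0;
            lock->fl_end = OFFSET_MAX;
    }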
Signed-off-by: Jeff Layton <[email protected]>
---
fs/lockd/clnt4xdr.c | 14 +++++-----
fs/lockd/clntlock.c | 2 +-
fs/lockd/clntproc.c | 62 +++++++++++++++++++++++--------------------
fs/lockd/clntxdr.c | 14 +++++-----
fs/lockd/svc4proc.c | 10 +++----
fs/lockd/svclock.c | 64 +++++++++++++++++++++++----------------------
fs/lockd/svcproc.c | 10 +++----
fs/lockd/svcsubs.c | 24 ++++++++---------
fs/lockd/xdr.c | 14 +++++-----
fs/lockd/xdr4.c | 14 +++++-----
include/linux/lockd/lockd.h | 8 +++---
include/linux/lockd/xdr.h | 1 -
12 files changed, 121 insertions(+), 116 deletions(-)
diff --git a/fs/lockd/clnt4xdr.c b/fs/lockd/clnt4xdr.c
index 8161667c976f..de58ec4ff374 100644
--- a/fs/lockd/clnt4xdr.c
+++ b/fs/lockd/clnt4xdr.c
@@ -243,7 +243,7 @@ static void encode_nlm4_holder(struct xdr_stream *xdr,
u64 l_offset, l_len;
__be32 *p;
- encode_bool(xdr, lock->fl.fl_type == F_RDLCK);
+ encode_bool(xdr, lock->fl.fl_core.flc_type == F_RDLCK);
encode_int32(xdr, lock->svid);
encode_netobj(xdr, lock->oh.data, lock->oh.len);
@@ -270,7 +270,7 @@ static int decode_nlm4_holder(struct xdr_stream *xdr, struct nlm_res *result)
goto out_overflow;
exclusive = be32_to_cpup(p++);
lock->svid = be32_to_cpup(p);
- fl->fl_pid = (pid_t)lock->svid;
+ fl->fl_core.flc_pid = (pid_t)lock->svid;
error = decode_netobj(xdr, &lock->oh);
if (unlikely(error))
@@ -280,8 +280,8 @@ static int decode_nlm4_holder(struct xdr_stream *xdr, struct nlm_res *result)
if (unlikely(p == NULL))
goto out_overflow;
- fl->fl_flags = FL_POSIX;
- fl->fl_type = exclusive != 0 ? F_WRLCK : F_RDLCK;
+ fl->fl_core.flc_flags = FL_POSIX;
+ fl->fl_core.flc_type = exclusive != 0 ? F_WRLCK : F_RDLCK;
p = xdr_decode_hyper(p, &l_offset);
xdr_decode_hyper(p, &l_len);
nlm4svc_set_file_lock_range(fl, l_offset, l_len);
@@ -357,7 +357,7 @@ static void nlm4_xdr_enc_testargs(struct rpc_rqst *req,
const struct nlm_lock *lock = &args->lock;
encode_cookie(xdr, &args->cookie);
- encode_bool(xdr, lock->fl.fl_type == F_WRLCK);
+ encode_bool(xdr, lock->fl.fl_core.flc_type == F_WRLCK);
encode_nlm4_lock(xdr, lock);
}
@@ -380,7 +380,7 @@ static void nlm4_xdr_enc_lockargs(struct rpc_rqst *req,
encode_cookie(xdr, &args->cookie);
encode_bool(xdr, args->block);
- encode_bool(xdr, lock->fl.fl_type == F_WRLCK);
+ encode_bool(xdr, lock->fl.fl_core.flc_type == F_WRLCK);
encode_nlm4_lock(xdr, lock);
encode_bool(xdr, args->reclaim);
encode_int32(xdr, args->state);
@@ -403,7 +403,7 @@ static void nlm4_xdr_enc_cancargs(struct rpc_rqst *req,
encode_cookie(xdr, &args->cookie);
encode_bool(xdr, args->block);
- encode_bool(xdr, lock->fl.fl_type == F_WRLCK);
+ encode_bool(xdr, lock->fl.fl_core.flc_type == F_WRLCK);
encode_nlm4_lock(xdr, lock);
}
diff --git a/fs/lockd/clntlock.c b/fs/lockd/clntlock.c
index 5d85715be763..eaa463f2d44d 100644
--- a/fs/lockd/clntlock.c
+++ b/fs/lockd/clntlock.c
@@ -185,7 +185,7 @@ __be32 nlmclnt_grant(const struct sockaddr *addr, const struct nlm_lock *lock)
continue;
if (!rpc_cmp_addr(nlm_addr(block->b_host), addr))
continue;
- if (nfs_compare_fh(NFS_FH(file_inode(fl_blocked->fl_file)), fh) != 0)
+ if (nfs_compare_fh(NFS_FH(file_inode(fl_blocked->fl_core.flc_file)), fh) != 0)
continue;
/* Alright, we found a lock. Set the return status
* and wake up the caller
diff --git a/fs/lockd/clntproc.c b/fs/lockd/clntproc.c
index 1f71260603b7..0b8d0297523f 100644
--- a/fs/lockd/clntproc.c
+++ b/fs/lockd/clntproc.c
@@ -12,7 +12,6 @@
#include <linux/types.h>
#include <linux/errno.h>
#include <linux/fs.h>
-#define _NEED_FILE_LOCK_FIELD_MACROS
#include <linux/filelock.h>
#include <linux/nfs_fs.h>
#include <linux/utsname.h>
@@ -134,7 +133,8 @@ static void nlmclnt_setlockargs(struct nlm_rqst *req, struct file_lock *fl)
char *nodename = req->a_host->h_rpcclnt->cl_nodename;
nlmclnt_next_cookie(&argp->cookie);
- memcpy(&lock->fh, NFS_FH(file_inode(fl->fl_file)), sizeof(struct nfs_fh));
+ memcpy(&lock->fh, NFS_FH(file_inode(fl->fl_core.flc_file)),
+ sizeof(struct nfs_fh));
lock->caller = nodename;
lock->oh.data = req->a_owner;
lock->oh.len = snprintf(req->a_owner, sizeof(req->a_owner), "%u@%s",
@@ -143,7 +143,7 @@ static void nlmclnt_setlockargs(struct nlm_rqst *req, struct file_lock *fl)
lock->svid = fl->fl_u.nfs_fl.owner->pid;
lock->fl.fl_start = fl->fl_start;
lock->fl.fl_end = fl->fl_end;
- lock->fl.fl_type = fl->fl_type;
+ lock->fl.fl_core.flc_type = fl->fl_core.flc_type;
}
static void nlmclnt_release_lockargs(struct nlm_rqst *req)
@@ -183,7 +183,7 @@ int nlmclnt_proc(struct nlm_host *host, int cmd, struct file_lock *fl, void *dat
call->a_callback_data = data;
if (IS_SETLK(cmd) || IS_SETLKW(cmd)) {
- if (fl->fl_type != F_UNLCK) {
+ if (fl->fl_core.flc_type != F_UNLCK) {
call->a_args.block = IS_SETLKW(cmd) ? 1 : 0;
status = nlmclnt_lock(call, fl);
} else
@@ -433,13 +433,14 @@ nlmclnt_test(struct nlm_rqst *req, struct file_lock *fl)
{
int status;
- status = nlmclnt_call(nfs_file_cred(fl->fl_file), req, NLMPROC_TEST);
+ status = nlmclnt_call(nfs_file_cred(fl->fl_core.flc_file), req,
+ NLMPROC_TEST);
if (status < 0)
goto out;
switch (req->a_res.status) {
case nlm_granted:
- fl->fl_type = F_UNLCK;
+ fl->fl_core.flc_type = F_UNLCK;
break;
case nlm_lck_denied:
/*
@@ -447,8 +448,8 @@ nlmclnt_test(struct nlm_rqst *req, struct file_lock *fl)
*/
fl->fl_start = req->a_res.lock.fl.fl_start;
fl->fl_end = req->a_res.lock.fl.fl_end;
- fl->fl_type = req->a_res.lock.fl.fl_type;
- fl->fl_pid = -req->a_res.lock.fl.fl_pid;
+ fl->fl_core.flc_type = req->a_res.lock.fl.fl_core.flc_type;
+ fl->fl_core.flc_pid = -req->a_res.lock.fl.fl_core.flc_pid;
break;
default:
status = nlm_stat_to_errno(req->a_res.status);
@@ -486,14 +487,15 @@ static const struct file_lock_operations nlmclnt_lock_ops = {
static void nlmclnt_locks_init_private(struct file_lock *fl, struct nlm_host *host)
{
fl->fl_u.nfs_fl.state = 0;
- fl->fl_u.nfs_fl.owner = nlmclnt_find_lockowner(host, fl->fl_owner);
+ fl->fl_u.nfs_fl.owner = nlmclnt_find_lockowner(host,
+ fl->fl_core.flc_owner);
INIT_LIST_HEAD(&fl->fl_u.nfs_fl.list);
fl->fl_ops = &nlmclnt_lock_ops;
}
static int do_vfs_lock(struct file_lock *fl)
{
- return locks_lock_file_wait(fl->fl_file, fl);
+ return locks_lock_file_wait(fl->fl_core.flc_file, fl);
}
/*
@@ -519,11 +521,11 @@ static int do_vfs_lock(struct file_lock *fl)
static int
nlmclnt_lock(struct nlm_rqst *req, struct file_lock *fl)
{
- const struct cred *cred = nfs_file_cred(fl->fl_file);
+ const struct cred *cred = nfs_file_cred(fl->fl_core.flc_file);
struct nlm_host *host = req->a_host;
struct nlm_res *resp = &req->a_res;
struct nlm_wait block;
- unsigned char flags = fl->fl_flags;
+ unsigned char flags = fl->fl_core.flc_flags;
unsigned char type;
__be32 b_status;
int status = -ENOLCK;
@@ -532,9 +534,9 @@ nlmclnt_lock(struct nlm_rqst *req, struct file_lock *fl)
goto out;
req->a_args.state = nsm_local_state;
- fl->fl_flags |= FL_ACCESS;
+ fl->fl_core.flc_flags |= FL_ACCESS;
status = do_vfs_lock(fl);
- fl->fl_flags = flags;
+ fl->fl_core.flc_flags = flags;
if (status < 0)
goto out;
@@ -592,11 +594,11 @@ nlmclnt_lock(struct nlm_rqst *req, struct file_lock *fl)
goto again;
}
/* Ensure the resulting lock will get added to granted list */
- fl->fl_flags |= FL_SLEEP;
+ fl->fl_core.flc_flags |= FL_SLEEP;
if (do_vfs_lock(fl) < 0)
printk(KERN_WARNING "%s: VFS is out of sync with lock manager!\n", __func__);
up_read(&host->h_rwsem);
- fl->fl_flags = flags;
+ fl->fl_core.flc_flags = flags;
status = 0;
}
if (status < 0)
@@ -623,13 +625,13 @@ nlmclnt_lock(struct nlm_rqst *req, struct file_lock *fl)
req->a_host->h_addrlen, req->a_res.status);
dprintk("lockd: lock attempt ended in fatal error.\n"
" Attempting to unlock.\n");
- type = fl->fl_type;
- fl->fl_type = F_UNLCK;
+ type = fl->fl_core.flc_type;
+ fl->fl_core.flc_type = F_UNLCK;
down_read(&host->h_rwsem);
do_vfs_lock(fl);
up_read(&host->h_rwsem);
- fl->fl_type = type;
- fl->fl_flags = flags;
+ fl->fl_core.flc_type = type;
+ fl->fl_core.flc_flags = flags;
nlmclnt_async_call(cred, req, NLMPROC_UNLOCK, &nlmclnt_unlock_ops);
return status;
}
@@ -652,12 +654,14 @@ nlmclnt_reclaim(struct nlm_host *host, struct file_lock *fl,
nlmclnt_setlockargs(req, fl);
req->a_args.reclaim = 1;
- status = nlmclnt_call(nfs_file_cred(fl->fl_file), req, NLMPROC_LOCK);
+ status = nlmclnt_call(nfs_file_cred(fl->fl_core.flc_file), req,
+ NLMPROC_LOCK);
if (status >= 0 && req->a_res.status == nlm_granted)
return 0;
printk(KERN_WARNING "lockd: failed to reclaim lock for pid %d "
- "(errno %d, status %d)\n", fl->fl_pid,
+ "(errno %d, status %d)\n",
+ fl->fl_core.flc_pid,
status, ntohl(req->a_res.status));
/*
@@ -684,26 +688,26 @@ nlmclnt_unlock(struct nlm_rqst *req, struct file_lock *fl)
struct nlm_host *host = req->a_host;
struct nlm_res *resp = &req->a_res;
int status;
- unsigned char flags = fl->fl_flags;
+ unsigned char flags = fl->fl_core.flc_flags;
/*
* Note: the server is supposed to either grant us the unlock
* request, or to deny it with NLM_LCK_DENIED_GRACE_PERIOD. In either
* case, we want to unlock.
*/
- fl->fl_flags |= FL_EXISTS;
+ fl->fl_core.flc_flags |= FL_EXISTS;
down_read(&host->h_rwsem);
status = do_vfs_lock(fl);
up_read(&host->h_rwsem);
- fl->fl_flags = flags;
+ fl->fl_core.flc_flags = flags;
if (status == -ENOENT) {
status = 0;
goto out;
}
refcount_inc(&req->a_count);
- status = nlmclnt_async_call(nfs_file_cred(fl->fl_file), req,
- NLMPROC_UNLOCK, &nlmclnt_unlock_ops);
+ status = nlmclnt_async_call(nfs_file_cred(fl->fl_core.flc_file), req,
+ NLMPROC_UNLOCK, &nlmclnt_unlock_ops);
if (status < 0)
goto out;
@@ -796,8 +800,8 @@ static int nlmclnt_cancel(struct nlm_host *host, int block, struct file_lock *fl
req->a_args.block = block;
refcount_inc(&req->a_count);
- status = nlmclnt_async_call(nfs_file_cred(fl->fl_file), req,
- NLMPROC_CANCEL, &nlmclnt_cancel_ops);
+ status = nlmclnt_async_call(nfs_file_cred(fl->fl_core.flc_file), req,
+ NLMPROC_CANCEL, &nlmclnt_cancel_ops);
if (status == 0 && req->a_res.status == nlm_lck_denied)
status = -ENOLCK;
nlmclnt_release_call(req);
diff --git a/fs/lockd/clntxdr.c b/fs/lockd/clntxdr.c
index 4df62f635529..e6081fe3a51c 100644
--- a/fs/lockd/clntxdr.c
+++ b/fs/lockd/clntxdr.c
@@ -238,7 +238,7 @@ static void encode_nlm_holder(struct xdr_stream *xdr,
u32 l_offset, l_len;
__be32 *p;
- encode_bool(xdr, lock->fl.fl_type == F_RDLCK);
+ encode_bool(xdr, lock->fl.fl_core.flc_type == F_RDLCK);
encode_int32(xdr, lock->svid);
encode_netobj(xdr, lock->oh.data, lock->oh.len);
@@ -265,7 +265,7 @@ static int decode_nlm_holder(struct xdr_stream *xdr, struct nlm_res *result)
goto out_overflow;
exclusive = be32_to_cpup(p++);
lock->svid = be32_to_cpup(p);
- fl->fl_pid = (pid_t)lock->svid;
+ fl->fl_core.flc_pid = (pid_t)lock->svid;
error = decode_netobj(xdr, &lock->oh);
if (unlikely(error))
@@ -275,8 +275,8 @@ static int decode_nlm_holder(struct xdr_stream *xdr, struct nlm_res *result)
if (unlikely(p == NULL))
goto out_overflow;
- fl->fl_flags = FL_POSIX;
- fl->fl_type = exclusive != 0 ? F_WRLCK : F_RDLCK;
+ fl->fl_core.flc_flags = FL_POSIX;
+ fl->fl_core.flc_type = exclusive != 0 ? F_WRLCK : F_RDLCK;
l_offset = be32_to_cpup(p++);
l_len = be32_to_cpup(p);
end = l_offset + l_len - 1;
@@ -357,7 +357,7 @@ static void nlm_xdr_enc_testargs(struct rpc_rqst *req,
const struct nlm_lock *lock = &args->lock;
encode_cookie(xdr, &args->cookie);
- encode_bool(xdr, lock->fl.fl_type == F_WRLCK);
+ encode_bool(xdr, lock->fl.fl_core.flc_type == F_WRLCK);
encode_nlm_lock(xdr, lock);
}
@@ -380,7 +380,7 @@ static void nlm_xdr_enc_lockargs(struct rpc_rqst *req,
encode_cookie(xdr, &args->cookie);
encode_bool(xdr, args->block);
- encode_bool(xdr, lock->fl.fl_type == F_WRLCK);
+ encode_bool(xdr, lock->fl.fl_core.flc_type == F_WRLCK);
encode_nlm_lock(xdr, lock);
encode_bool(xdr, args->reclaim);
encode_int32(xdr, args->state);
@@ -403,7 +403,7 @@ static void nlm_xdr_enc_cancargs(struct rpc_rqst *req,
encode_cookie(xdr, &args->cookie);
encode_bool(xdr, args->block);
- encode_bool(xdr, lock->fl.fl_type == F_WRLCK);
+ encode_bool(xdr, lock->fl.fl_core.flc_type == F_WRLCK);
encode_nlm_lock(xdr, lock);
}
diff --git a/fs/lockd/svc4proc.c b/fs/lockd/svc4proc.c
index b72023a6b4c1..b0a68d8856f9 100644
--- a/fs/lockd/svc4proc.c
+++ b/fs/lockd/svc4proc.c
@@ -52,16 +52,16 @@ nlm4svc_retrieve_args(struct svc_rqst *rqstp, struct nlm_args *argp,
*filp = file;
/* Set up the missing parts of the file_lock structure */
- lock->fl.fl_flags = FL_POSIX;
- lock->fl.fl_file = file->f_file[mode];
- lock->fl.fl_pid = current->tgid;
+ lock->fl.fl_core.flc_flags = FL_POSIX;
+ lock->fl.fl_core.flc_file = file->f_file[mode];
+ lock->fl.fl_core.flc_pid = current->tgid;
lock->fl.fl_start = (loff_t)lock->lock_start;
lock->fl.fl_end = lock->lock_len ?
(loff_t)(lock->lock_start + lock->lock_len - 1) :
OFFSET_MAX;
lock->fl.fl_lmops = &nlmsvc_lock_operations;
nlmsvc_locks_init_private(&lock->fl, host, (pid_t)lock->svid);
- if (!lock->fl.fl_owner) {
+ if (!lock->fl.fl_core.flc_owner) {
/* lockowner allocation has failed */
nlmsvc_release_host(host);
return nlm_lck_denied_nolocks;
@@ -106,7 +106,7 @@ __nlm4svc_proc_test(struct svc_rqst *rqstp, struct nlm_res *resp)
if ((resp->status = nlm4svc_retrieve_args(rqstp, argp, &host, &file)))
return resp->status == nlm_drop_reply ? rpc_drop_reply :rpc_success;
- test_owner = argp->lock.fl.fl_owner;
+ test_owner = argp->lock.fl.fl_core.flc_owner;
/* Now check for conflicting locks */
resp->status = nlmsvc_testlock(rqstp, file, host, &argp->lock, &resp->lock, &resp->cookie);
if (resp->status == nlm_drop_reply)
diff --git a/fs/lockd/svclock.c b/fs/lockd/svclock.c
index 2dc10900ad1c..e371ec1354dd 100644
--- a/fs/lockd/svclock.c
+++ b/fs/lockd/svclock.c
@@ -150,16 +150,17 @@ nlmsvc_lookup_block(struct nlm_file *file, struct nlm_lock *lock)
struct file_lock *fl;
dprintk("lockd: nlmsvc_lookup_block f=%p pd=%d %Ld-%Ld ty=%d\n",
- file, lock->fl.fl_pid,
+ file, lock->fl.fl_core.flc_pid,
(long long)lock->fl.fl_start,
- (long long)lock->fl.fl_end, lock->fl.fl_type);
+ (long long)lock->fl.fl_end,
+ lock->fl.fl_core.flc_type);
spin_lock(&nlm_blocked_lock);
list_for_each_entry(block, &nlm_blocked, b_list) {
fl = &block->b_call->a_args.lock.fl;
dprintk("lockd: check f=%p pd=%d %Ld-%Ld ty=%d cookie=%s\n",
- block->b_file, fl->fl_pid,
+ block->b_file, fl->fl_core.flc_pid,
(long long)fl->fl_start,
- (long long)fl->fl_end, fl->fl_type,
+ (long long)fl->fl_end, fl->fl_core.flc_type,
nlmdbg_cookie2a(&block->b_call->a_args.cookie));
if (block->b_file == file && nlm_compare_locks(fl, &lock->fl)) {
kref_get(&block->b_count);
@@ -244,7 +245,7 @@ nlmsvc_create_block(struct svc_rqst *rqstp, struct nlm_host *host,
goto failed_free;
/* Set notifier function for VFS, and init args */
- call->a_args.lock.fl.fl_flags |= FL_SLEEP;
+ call->a_args.lock.fl.fl_core.flc_flags |= FL_SLEEP;
call->a_args.lock.fl.fl_lmops = &nlmsvc_lock_operations;
nlmclnt_next_cookie(&call->a_args.cookie);
@@ -402,14 +403,14 @@ static struct nlm_lockowner *nlmsvc_find_lockowner(struct nlm_host *host, pid_t
void
nlmsvc_release_lockowner(struct nlm_lock *lock)
{
- if (lock->fl.fl_owner)
- nlmsvc_put_lockowner(lock->fl.fl_owner);
+ if (lock->fl.fl_core.flc_owner)
+ nlmsvc_put_lockowner(lock->fl.fl_core.flc_owner);
}
void nlmsvc_locks_init_private(struct file_lock *fl, struct nlm_host *host,
pid_t pid)
{
- fl->fl_owner = nlmsvc_find_lockowner(host, pid);
+ fl->fl_core.flc_owner = nlmsvc_find_lockowner(host, pid);
}
/*
@@ -425,7 +426,7 @@ static int nlmsvc_setgrantargs(struct nlm_rqst *call, struct nlm_lock *lock)
/* set default data area */
call->a_args.lock.oh.data = call->a_owner;
- call->a_args.lock.svid = ((struct nlm_lockowner *)lock->fl.fl_owner)->pid;
+ call->a_args.lock.svid = ((struct nlm_lockowner *) lock->fl.fl_core.flc_owner)->pid;
if (lock->oh.len > NLMCLNT_OHSIZE) {
void *data = kmalloc(lock->oh.len, GFP_KERNEL);
@@ -489,7 +490,8 @@ nlmsvc_lock(struct svc_rqst *rqstp, struct nlm_file *file,
dprintk("lockd: nlmsvc_lock(%s/%ld, ty=%d, pi=%d, %Ld-%Ld, bl=%d)\n",
inode->i_sb->s_id, inode->i_ino,
- lock->fl.fl_type, lock->fl.fl_pid,
+ lock->fl.fl_core.flc_type,
+ lock->fl.fl_core.flc_pid,
(long long)lock->fl.fl_start,
(long long)lock->fl.fl_end,
wait);
@@ -512,7 +514,7 @@ nlmsvc_lock(struct svc_rqst *rqstp, struct nlm_file *file,
goto out;
lock = &block->b_call->a_args.lock;
} else
- lock->fl.fl_flags &= ~FL_SLEEP;
+ lock->fl.fl_core.flc_flags &= ~FL_SLEEP;
if (block->b_flags & B_QUEUED) {
dprintk("lockd: nlmsvc_lock deferred block %p flags %d\n",
@@ -560,10 +562,10 @@ nlmsvc_lock(struct svc_rqst *rqstp, struct nlm_file *file,
spin_unlock(&nlm_blocked_lock);
if (!wait)
- lock->fl.fl_flags &= ~FL_SLEEP;
+ lock->fl.fl_core.flc_flags &= ~FL_SLEEP;
mode = lock_to_openmode(&lock->fl);
error = vfs_lock_file(file->f_file[mode], F_SETLK, &lock->fl, NULL);
- lock->fl.fl_flags &= ~FL_SLEEP;
+ lock->fl.fl_core.flc_flags &= ~FL_SLEEP;
dprintk("lockd: vfs_lock_file returned %d\n", error);
switch (error) {
@@ -616,7 +618,7 @@ nlmsvc_testlock(struct svc_rqst *rqstp, struct nlm_file *file,
dprintk("lockd: nlmsvc_testlock(%s/%ld, ty=%d, %Ld-%Ld)\n",
nlmsvc_file_inode(file)->i_sb->s_id,
nlmsvc_file_inode(file)->i_ino,
- lock->fl.fl_type,
+ lock->fl.fl_core.flc_type,
(long long)lock->fl.fl_start,
(long long)lock->fl.fl_end);
@@ -636,19 +638,19 @@ nlmsvc_testlock(struct svc_rqst *rqstp, struct nlm_file *file,
goto out;
}
- if (lock->fl.fl_type == F_UNLCK) {
+ if (lock->fl.fl_core.flc_type == F_UNLCK) {
ret = nlm_granted;
goto out;
}
dprintk("lockd: conflicting lock(ty=%d, %Ld-%Ld)\n",
- lock->fl.fl_type, (long long)lock->fl.fl_start,
+ lock->fl.fl_core.flc_type, (long long)lock->fl.fl_start,
(long long)lock->fl.fl_end);
conflock->caller = "somehost"; /* FIXME */
conflock->len = strlen(conflock->caller);
conflock->oh.len = 0; /* don't return OH info */
- conflock->svid = lock->fl.fl_pid;
- conflock->fl.fl_type = lock->fl.fl_type;
+ conflock->svid = lock->fl.fl_core.flc_pid;
+ conflock->fl.fl_core.flc_type = lock->fl.fl_core.flc_type;
conflock->fl.fl_start = lock->fl.fl_start;
conflock->fl.fl_end = lock->fl.fl_end;
locks_release_private(&lock->fl);
@@ -673,21 +675,21 @@ nlmsvc_unlock(struct net *net, struct nlm_file *file, struct nlm_lock *lock)
dprintk("lockd: nlmsvc_unlock(%s/%ld, pi=%d, %Ld-%Ld)\n",
nlmsvc_file_inode(file)->i_sb->s_id,
nlmsvc_file_inode(file)->i_ino,
- lock->fl.fl_pid,
+ lock->fl.fl_core.flc_pid,
(long long)lock->fl.fl_start,
(long long)lock->fl.fl_end);
/* First, cancel any lock that might be there */
nlmsvc_cancel_blocked(net, file, lock);
- lock->fl.fl_type = F_UNLCK;
- lock->fl.fl_file = file->f_file[O_RDONLY];
- if (lock->fl.fl_file)
- error = vfs_lock_file(lock->fl.fl_file, F_SETLK,
+ lock->fl.fl_core.flc_type = F_UNLCK;
+ lock->fl.fl_core.flc_file = file->f_file[O_RDONLY];
+ if (lock->fl.fl_core.flc_file)
+ error = vfs_lock_file(lock->fl.fl_core.flc_file, F_SETLK,
&lock->fl, NULL);
- lock->fl.fl_file = file->f_file[O_WRONLY];
- if (lock->fl.fl_file)
- error |= vfs_lock_file(lock->fl.fl_file, F_SETLK,
+ lock->fl.fl_core.flc_file = file->f_file[O_WRONLY];
+ if (lock->fl.fl_core.flc_file)
+ error |= vfs_lock_file(lock->fl.fl_core.flc_file, F_SETLK,
&lock->fl, NULL);
return (error < 0)? nlm_lck_denied_nolocks : nlm_granted;
@@ -710,7 +712,7 @@ nlmsvc_cancel_blocked(struct net *net, struct nlm_file *file, struct nlm_lock *l
dprintk("lockd: nlmsvc_cancel(%s/%ld, pi=%d, %Ld-%Ld)\n",
nlmsvc_file_inode(file)->i_sb->s_id,
nlmsvc_file_inode(file)->i_ino,
- lock->fl.fl_pid,
+ lock->fl.fl_core.flc_pid,
(long long)lock->fl.fl_start,
(long long)lock->fl.fl_end);
@@ -863,12 +865,12 @@ nlmsvc_grant_blocked(struct nlm_block *block)
/* vfs_lock_file() can mangle fl_start and fl_end, but we need
* them unchanged for the GRANT_MSG
*/
- lock->fl.fl_flags |= FL_SLEEP;
+ lock->fl.fl_core.flc_flags |= FL_SLEEP;
fl_start = lock->fl.fl_start;
fl_end = lock->fl.fl_end;
mode = lock_to_openmode(&lock->fl);
error = vfs_lock_file(file->f_file[mode], F_SETLK, &lock->fl, NULL);
- lock->fl.fl_flags &= ~FL_SLEEP;
+ lock->fl.fl_core.flc_flags &= ~FL_SLEEP;
lock->fl.fl_start = fl_start;
lock->fl.fl_end = fl_end;
@@ -993,8 +995,8 @@ nlmsvc_grant_reply(struct nlm_cookie *cookie, __be32 status)
/* Client doesn't want it, just unlock it */
nlmsvc_unlink_block(block);
fl = &block->b_call->a_args.lock.fl;
- fl->fl_type = F_UNLCK;
- error = vfs_lock_file(fl->fl_file, F_SETLK, fl, NULL);
+ fl->fl_core.flc_type = F_UNLCK;
+ error = vfs_lock_file(fl->fl_core.flc_file, F_SETLK, fl, NULL);
if (error)
pr_warn("lockd: unable to unlock lock rejected by client!\n");
break;
diff --git a/fs/lockd/svcproc.c b/fs/lockd/svcproc.c
index 32784f508c81..16013be0d8ae 100644
--- a/fs/lockd/svcproc.c
+++ b/fs/lockd/svcproc.c
@@ -77,12 +77,12 @@ nlmsvc_retrieve_args(struct svc_rqst *rqstp, struct nlm_args *argp,
/* Set up the missing parts of the file_lock structure */
mode = lock_to_openmode(&lock->fl);
- lock->fl.fl_flags = FL_POSIX;
- lock->fl.fl_file = file->f_file[mode];
- lock->fl.fl_pid = current->tgid;
+ lock->fl.fl_core.flc_flags = FL_POSIX;
+ lock->fl.fl_core.flc_file = file->f_file[mode];
+ lock->fl.fl_core.flc_pid = current->tgid;
lock->fl.fl_lmops = &nlmsvc_lock_operations;
nlmsvc_locks_init_private(&lock->fl, host, (pid_t)lock->svid);
- if (!lock->fl.fl_owner) {
+ if (!lock->fl.fl_core.flc_owner) {
/* lockowner allocation has failed */
nlmsvc_release_host(host);
return nlm_lck_denied_nolocks;
@@ -127,7 +127,7 @@ __nlmsvc_proc_test(struct svc_rqst *rqstp, struct nlm_res *resp)
if ((resp->status = nlmsvc_retrieve_args(rqstp, argp, &host, &file)))
return resp->status == nlm_drop_reply ? rpc_drop_reply :rpc_success;
- test_owner = argp->lock.fl.fl_owner;
+ test_owner = argp->lock.fl.fl_core.flc_owner;
/* Now check for conflicting locks */
resp->status = cast_status(nlmsvc_testlock(rqstp, file, host, &argp->lock, &resp->lock, &resp->cookie));
diff --git a/fs/lockd/svcsubs.c b/fs/lockd/svcsubs.c
index e3b6229e7ae5..4b55d0d9365a 100644
--- a/fs/lockd/svcsubs.c
+++ b/fs/lockd/svcsubs.c
@@ -73,7 +73,7 @@ static inline unsigned int file_hash(struct nfs_fh *f)
int lock_to_openmode(struct file_lock *lock)
{
- return (lock->fl_type == F_WRLCK) ? O_WRONLY : O_RDONLY;
+ return (lock->fl_core.flc_type == F_WRLCK) ? O_WRONLY : O_RDONLY;
}
/*
@@ -181,18 +181,18 @@ static int nlm_unlock_files(struct nlm_file *file, const struct file_lock *fl)
struct file_lock lock;
locks_init_lock(&lock);
- lock.fl_type = F_UNLCK;
+ lock.fl_core.flc_type = F_UNLCK;
lock.fl_start = 0;
lock.fl_end = OFFSET_MAX;
- lock.fl_owner = fl->fl_owner;
- lock.fl_pid = fl->fl_pid;
- lock.fl_flags = FL_POSIX;
+ lock.fl_core.flc_owner = fl->fl_core.flc_owner;
+ lock.fl_core.flc_pid = fl->fl_core.flc_pid;
+ lock.fl_core.flc_flags = FL_POSIX;
- lock.fl_file = file->f_file[O_RDONLY];
- if (lock.fl_file && vfs_lock_file(lock.fl_file, F_SETLK, &lock, NULL))
+ lock.fl_core.flc_file = file->f_file[O_RDONLY];
+ if (lock.fl_core.flc_file && vfs_lock_file(lock.fl_core.flc_file, F_SETLK, &lock, NULL))
goto out_err;
- lock.fl_file = file->f_file[O_WRONLY];
- if (lock.fl_file && vfs_lock_file(lock.fl_file, F_SETLK, &lock, NULL))
+ lock.fl_core.flc_file = file->f_file[O_WRONLY];
+ if (lock.fl_core.flc_file && vfs_lock_file(lock.fl_core.flc_file, F_SETLK, &lock, NULL))
goto out_err;
return 0;
out_err:
@@ -218,14 +218,14 @@ nlm_traverse_locks(struct nlm_host *host, struct nlm_file *file,
again:
file->f_locks = 0;
spin_lock(&flctx->flc_lock);
- list_for_each_entry(fl, &flctx->flc_posix, fl_list) {
+ list_for_each_entry(fl, &flctx->flc_posix, fl_core.flc_list) {
if (fl->fl_lmops != &nlmsvc_lock_operations)
continue;
/* update current lock count */
file->f_locks++;
- lockhost = ((struct nlm_lockowner *)fl->fl_owner)->host;
+ lockhost = ((struct nlm_lockowner *) fl->fl_core.flc_owner)->host;
if (match(lockhost, host)) {
spin_unlock(&flctx->flc_lock);
@@ -272,7 +272,7 @@ nlm_file_inuse(struct nlm_file *file)
if (flctx && !list_empty_careful(&flctx->flc_posix)) {
spin_lock(&flctx->flc_lock);
- list_for_each_entry(fl, &flctx->flc_posix, fl_list) {
+ list_for_each_entry(fl, &flctx->flc_posix, fl_core.flc_list) {
if (fl->fl_lmops == &nlmsvc_lock_operations) {
spin_unlock(&flctx->flc_lock);
return 1;
diff --git a/fs/lockd/xdr.c b/fs/lockd/xdr.c
index 2fb5748dae0c..0331278a62ae 100644
--- a/fs/lockd/xdr.c
+++ b/fs/lockd/xdr.c
@@ -88,8 +88,8 @@ svcxdr_decode_lock(struct xdr_stream *xdr, struct nlm_lock *lock)
return false;
locks_init_lock(fl);
- fl->fl_flags = FL_POSIX;
- fl->fl_type = F_RDLCK;
+ fl->fl_core.flc_flags = FL_POSIX;
+ fl->fl_core.flc_type = F_RDLCK;
end = start + len - 1;
fl->fl_start = s32_to_loff_t(start);
if (len == 0 || end < 0)
@@ -107,7 +107,7 @@ svcxdr_encode_holder(struct xdr_stream *xdr, const struct nlm_lock *lock)
s32 start, len;
/* exclusive */
- if (xdr_stream_encode_bool(xdr, fl->fl_type != F_RDLCK) < 0)
+ if (xdr_stream_encode_bool(xdr, fl->fl_core.flc_type != F_RDLCK) < 0)
return false;
if (xdr_stream_encode_u32(xdr, lock->svid) < 0)
return false;
@@ -164,7 +164,7 @@ nlmsvc_decode_testargs(struct svc_rqst *rqstp, struct xdr_stream *xdr)
if (!svcxdr_decode_lock(xdr, &argp->lock))
return false;
if (exclusive)
- argp->lock.fl.fl_type = F_WRLCK;
+ argp->lock.fl.fl_core.flc_type = F_WRLCK;
return true;
}
@@ -184,7 +184,7 @@ nlmsvc_decode_lockargs(struct svc_rqst *rqstp, struct xdr_stream *xdr)
if (!svcxdr_decode_lock(xdr, &argp->lock))
return false;
if (exclusive)
- argp->lock.fl.fl_type = F_WRLCK;
+ argp->lock.fl.fl_core.flc_type = F_WRLCK;
if (xdr_stream_decode_bool(xdr, &argp->reclaim) < 0)
return false;
if (xdr_stream_decode_u32(xdr, &argp->state) < 0)
@@ -209,7 +209,7 @@ nlmsvc_decode_cancargs(struct svc_rqst *rqstp, struct xdr_stream *xdr)
if (!svcxdr_decode_lock(xdr, &argp->lock))
return false;
if (exclusive)
- argp->lock.fl.fl_type = F_WRLCK;
+ argp->lock.fl.fl_core.flc_type = F_WRLCK;
return true;
}
@@ -223,7 +223,7 @@ nlmsvc_decode_unlockargs(struct svc_rqst *rqstp, struct xdr_stream *xdr)
return false;
if (!svcxdr_decode_lock(xdr, &argp->lock))
return false;
- argp->lock.fl.fl_type = F_UNLCK;
+ argp->lock.fl.fl_core.flc_type = F_UNLCK;
return true;
}
diff --git a/fs/lockd/xdr4.c b/fs/lockd/xdr4.c
index 5fcbf30cd275..54e30fd064a2 100644
--- a/fs/lockd/xdr4.c
+++ b/fs/lockd/xdr4.c
@@ -89,8 +89,8 @@ svcxdr_decode_lock(struct xdr_stream *xdr, struct nlm_lock *lock)
return false;
locks_init_lock(fl);
- fl->fl_flags = FL_POSIX;
- fl->fl_type = F_RDLCK;
+ fl->fl_core.flc_flags = FL_POSIX;
+ fl->fl_core.flc_type = F_RDLCK;
nlm4svc_set_file_lock_range(fl, lock->lock_start, lock->lock_len);
return true;
}
@@ -102,7 +102,7 @@ svcxdr_encode_holder(struct xdr_stream *xdr, const struct nlm_lock *lock)
s64 start, len;
/* exclusive */
- if (xdr_stream_encode_bool(xdr, fl->fl_type != F_RDLCK) < 0)
+ if (xdr_stream_encode_bool(xdr, fl->fl_core.flc_type != F_RDLCK) < 0)
return false;
if (xdr_stream_encode_u32(xdr, lock->svid) < 0)
return false;
@@ -159,7 +159,7 @@ nlm4svc_decode_testargs(struct svc_rqst *rqstp, struct xdr_stream *xdr)
if (!svcxdr_decode_lock(xdr, &argp->lock))
return false;
if (exclusive)
- argp->lock.fl.fl_type = F_WRLCK;
+ argp->lock.fl.fl_core.flc_type = F_WRLCK;
return true;
}
@@ -179,7 +179,7 @@ nlm4svc_decode_lockargs(struct svc_rqst *rqstp, struct xdr_stream *xdr)
if (!svcxdr_decode_lock(xdr, &argp->lock))
return false;
if (exclusive)
- argp->lock.fl.fl_type = F_WRLCK;
+ argp->lock.fl.fl_core.flc_type = F_WRLCK;
if (xdr_stream_decode_bool(xdr, &argp->reclaim) < 0)
return false;
if (xdr_stream_decode_u32(xdr, &argp->state) < 0)
@@ -204,7 +204,7 @@ nlm4svc_decode_cancargs(struct svc_rqst *rqstp, struct xdr_stream *xdr)
if (!svcxdr_decode_lock(xdr, &argp->lock))
return false;
if (exclusive)
- argp->lock.fl.fl_type = F_WRLCK;
+ argp->lock.fl.fl_core.flc_type = F_WRLCK;
return true;
}
@@ -218,7 +218,7 @@ nlm4svc_decode_unlockargs(struct svc_rqst *rqstp, struct xdr_stream *xdr)
return false;
if (!svcxdr_decode_lock(xdr, &argp->lock))
return false;
- argp->lock.fl.fl_type = F_UNLCK;
+ argp->lock.fl.fl_core.flc_type = F_UNLCK;
return true;
}
diff --git a/include/linux/lockd/lockd.h b/include/linux/lockd/lockd.h
index 9f565416d186..bc438007a7fb 100644
--- a/include/linux/lockd/lockd.h
+++ b/include/linux/lockd/lockd.h
@@ -375,12 +375,12 @@ static inline int nlm_privileged_requester(const struct svc_rqst *rqstp)
static inline int nlm_compare_locks(const struct file_lock *fl1,
const struct file_lock *fl2)
{
- return file_inode(fl1->fl_file) == file_inode(fl2->fl_file)
- && fl1->fl_pid == fl2->fl_pid
- && fl1->fl_owner == fl2->fl_owner
+ return file_inode(fl1->fl_core.flc_file) == file_inode(fl2->fl_core.flc_file)
+ && fl1->fl_core.flc_pid == fl2->fl_core.flc_pid
+ && fl1->fl_core.flc_owner == fl2->fl_core.flc_owner
&& fl1->fl_start == fl2->fl_start
&& fl1->fl_end == fl2->fl_end
- &&(fl1->fl_type == fl2->fl_type || fl2->fl_type == F_UNLCK);
+ &&(fl1->fl_core.flc_type == fl2->fl_core.flc_type || fl2->fl_core.flc_type == F_UNLCK);
}
extern const struct lock_manager_operations nlmsvc_lock_operations;
diff --git a/include/linux/lockd/xdr.h b/include/linux/lockd/xdr.h
index a3f068b0ca86..80cca9426761 100644
--- a/include/linux/lockd/xdr.h
+++ b/include/linux/lockd/xdr.h
@@ -11,7 +11,6 @@
#define LOCKD_XDR_H
#include <linux/fs.h>
-#define _NEED_FILE_LOCK_FIELD_MACROS
#include <linux/filelock.h>
#include <linux/nfs.h>
#include <linux/sunrpc/xdr.h>
--
2.43.0
Most of the existing APIs have remained the same, but subsystems that
access file_lock fields directly need to reach into struct
file_lock_core now.
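For reference, the conversion itself is mechanical. The sketch below is
illustrative only (the helper name is made up and is not part of this
patch); it shows the before/after shape of a caller that walks the
per-inode lock list and inspects fields directly:

    #include <linux/filelock.h>

    /* Illustrative helper (not in this patch): count the POSIX locks a
     * given owner holds on an inode. Fields formerly accessed as
     * fl->fl_owner, fl->fl_type, etc. now live in the embedded
     * struct file_lock_core ("fl_core"). */
    static int example_count_owner_locks(struct file_lock_context *flctx,
                                         fl_owner_t owner)
    {
        struct file_lock *fl;
        int n = 0;

        spin_lock(&flctx->flc_lock);
        /* was: list_for_each_entry(fl, &flctx->flc_posix, fl_list) */
        list_for_each_entry(fl, &flctx->flc_posix, fl_core.flc_list) {
            /* was: fl->fl_owner == owner && fl->fl_type != F_UNLCK */
            if (fl->fl_core.flc_owner == owner &&
                fl->fl_core.flc_type != F_UNLCK)
                n++;
        }
        spin_unlock(&flctx->flc_lock);
        return n;
    }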
Signed-off-by: Jeff Layton <[email protected]>
---
fs/nfs/delegation.c | 4 ++--
fs/nfs/file.c | 23 +++++++++++------------
fs/nfs/nfs3proc.c | 2 +-
fs/nfs/nfs4_fs.h | 1 -
fs/nfs/nfs4proc.c | 35 +++++++++++++++++++----------------
fs/nfs/nfs4state.c | 6 +++---
fs/nfs/nfs4trace.h | 4 ++--
fs/nfs/nfs4xdr.c | 8 ++++----
fs/nfs/write.c | 9 ++++-----
9 files changed, 46 insertions(+), 46 deletions(-)
diff --git a/fs/nfs/delegation.c b/fs/nfs/delegation.c
index fa1a14def45c..c308db36e932 100644
--- a/fs/nfs/delegation.c
+++ b/fs/nfs/delegation.c
@@ -156,8 +156,8 @@ static int nfs_delegation_claim_locks(struct nfs4_state *state, const nfs4_state
list = &flctx->flc_posix;
spin_lock(&flctx->flc_lock);
restart:
- list_for_each_entry(fl, list, fl_list) {
- if (nfs_file_open_context(fl->fl_file)->state != state)
+ list_for_each_entry(fl, list, fl_core.flc_list) {
+ if (nfs_file_open_context(fl->fl_core.flc_file)->state != state)
continue;
spin_unlock(&flctx->flc_lock);
status = nfs4_lock_delegation_recall(fl, state, stateid);
diff --git a/fs/nfs/file.c b/fs/nfs/file.c
index 3c9a8ad91540..fb3cd614e36e 100644
--- a/fs/nfs/file.c
+++ b/fs/nfs/file.c
@@ -31,7 +31,6 @@
#include <linux/swap.h>
#include <linux/uaccess.h>
-#define _NEED_FILE_LOCK_FIELD_MACROS
#include <linux/filelock.h>
#include "delegation.h"
@@ -721,15 +720,15 @@ do_getlk(struct file *filp, int cmd, struct file_lock *fl, int is_local)
{
struct inode *inode = filp->f_mapping->host;
int status = 0;
- unsigned int saved_type = fl->fl_type;
+ unsigned int saved_type = fl->fl_core.flc_type;
/* Try local locking first */
posix_test_lock(filp, fl);
- if (fl->fl_type != F_UNLCK) {
+ if (fl->fl_core.flc_type != F_UNLCK) {
/* found a conflict */
goto out;
}
- fl->fl_type = saved_type;
+ fl->fl_core.flc_type = saved_type;
if (NFS_PROTO(inode)->have_delegation(inode, FMODE_READ))
goto out_noconflict;
@@ -741,7 +740,7 @@ do_getlk(struct file *filp, int cmd, struct file_lock *fl, int is_local)
out:
return status;
out_noconflict:
- fl->fl_type = F_UNLCK;
+ fl->fl_core.flc_type = F_UNLCK;
goto out;
}
@@ -766,7 +765,7 @@ do_unlk(struct file *filp, int cmd, struct file_lock *fl, int is_local)
* If we're signalled while cleaning up locks on process exit, we
* still need to complete the unlock.
*/
- if (status < 0 && !(fl->fl_flags & FL_CLOSE))
+ if (status < 0 && !(fl->fl_core.flc_flags & FL_CLOSE))
return status;
}
@@ -833,12 +832,12 @@ int nfs_lock(struct file *filp, int cmd, struct file_lock *fl)
int is_local = 0;
dprintk("NFS: lock(%pD2, t=%x, fl=%x, r=%lld:%lld)\n",
- filp, fl->fl_type, fl->fl_flags,
+ filp, fl->fl_core.flc_type, fl->fl_core.flc_flags,
(long long)fl->fl_start, (long long)fl->fl_end);
nfs_inc_stats(inode, NFSIOS_VFSLOCK);
- if (fl->fl_flags & FL_RECLAIM)
+ if (fl->fl_core.flc_flags & FL_RECLAIM)
return -ENOGRACE;
if (NFS_SERVER(inode)->flags & NFS_MOUNT_LOCAL_FCNTL)
@@ -852,7 +851,7 @@ int nfs_lock(struct file *filp, int cmd, struct file_lock *fl)
if (IS_GETLK(cmd))
ret = do_getlk(filp, cmd, fl, is_local);
- else if (fl->fl_type == F_UNLCK)
+ else if (fl->fl_core.flc_type == F_UNLCK)
ret = do_unlk(filp, cmd, fl, is_local);
else
ret = do_setlk(filp, cmd, fl, is_local);
@@ -870,16 +869,16 @@ int nfs_flock(struct file *filp, int cmd, struct file_lock *fl)
int is_local = 0;
dprintk("NFS: flock(%pD2, t=%x, fl=%x)\n",
- filp, fl->fl_type, fl->fl_flags);
+ filp, fl->fl_core.flc_type, fl->fl_core.flc_flags);
- if (!(fl->fl_flags & FL_FLOCK))
+ if (!(fl->fl_core.flc_flags & FL_FLOCK))
return -ENOLCK;
if (NFS_SERVER(inode)->flags & NFS_MOUNT_LOCAL_FLOCK)
is_local = 1;
/* We're simulating flock() locks using posix locks on the server */
- if (fl->fl_type == F_UNLCK)
+ if (fl->fl_core.flc_type == F_UNLCK)
return do_unlk(filp, cmd, fl, is_local);
return do_setlk(filp, cmd, fl, is_local);
}
diff --git a/fs/nfs/nfs3proc.c b/fs/nfs/nfs3proc.c
index 2de66e4e8280..650ec250d7e5 100644
--- a/fs/nfs/nfs3proc.c
+++ b/fs/nfs/nfs3proc.c
@@ -963,7 +963,7 @@ nfs3_proc_lock(struct file *filp, int cmd, struct file_lock *fl)
struct nfs_open_context *ctx = nfs_file_open_context(filp);
int status;
- if (fl->fl_flags & FL_CLOSE) {
+ if (fl->fl_core.flc_flags & FL_CLOSE) {
l_ctx = nfs_get_lock_context(ctx);
if (IS_ERR(l_ctx))
l_ctx = NULL;
diff --git a/fs/nfs/nfs4_fs.h b/fs/nfs/nfs4_fs.h
index 752224a48f1c..581698f1b7b2 100644
--- a/fs/nfs/nfs4_fs.h
+++ b/fs/nfs/nfs4_fs.h
@@ -23,7 +23,6 @@
#define NFS4_MAX_LOOP_ON_RECOVER (10)
#include <linux/seqlock.h>
-#define _NEED_FILE_LOCK_FIELD_MACROS
#include <linux/filelock.h>
struct idmap;
diff --git a/fs/nfs/nfs4proc.c b/fs/nfs/nfs4proc.c
index 5dd936a403f9..bdf9fa468982 100644
--- a/fs/nfs/nfs4proc.c
+++ b/fs/nfs/nfs4proc.c
@@ -6800,7 +6800,7 @@ static int _nfs4_proc_getlk(struct nfs4_state *state, int cmd, struct file_lock
status = nfs4_call_sync(server->client, server, &msg, &arg.seq_args, &res.seq_res, 1);
switch (status) {
case 0:
- request->fl_type = F_UNLCK;
+ request->fl_core.flc_type = F_UNLCK;
break;
case -NFS4ERR_DENIED:
status = 0;
@@ -7018,8 +7018,8 @@ static struct rpc_task *nfs4_do_unlck(struct file_lock *fl,
/* Ensure this is an unlock - when canceling a lock, the
* canceled lock is passed in, and it won't be an unlock.
*/
- fl->fl_type = F_UNLCK;
- if (fl->fl_flags & FL_CLOSE)
+ fl->fl_core.flc_type = F_UNLCK;
+ if (fl->fl_core.flc_flags & FL_CLOSE)
set_bit(NFS_CONTEXT_UNLOCK, &ctx->flags);
data = nfs4_alloc_unlockdata(fl, ctx, lsp, seqid);
@@ -7045,11 +7045,11 @@ static int nfs4_proc_unlck(struct nfs4_state *state, int cmd, struct file_lock *
struct rpc_task *task;
struct nfs_seqid *(*alloc_seqid)(struct nfs_seqid_counter *, gfp_t);
int status = 0;
- unsigned char saved_flags = request->fl_flags;
+ unsigned char saved_flags = request->fl_core.flc_flags;
status = nfs4_set_lock_state(state, request);
/* Unlock _before_ we do the RPC call */
- request->fl_flags |= FL_EXISTS;
+ request->fl_core.flc_flags |= FL_EXISTS;
/* Exclude nfs_delegation_claim_locks() */
mutex_lock(&sp->so_delegreturn_mutex);
/* Exclude nfs4_reclaim_open_stateid() - note nesting! */
@@ -7073,14 +7073,16 @@ static int nfs4_proc_unlck(struct nfs4_state *state, int cmd, struct file_lock *
status = -ENOMEM;
if (IS_ERR(seqid))
goto out;
- task = nfs4_do_unlck(request, nfs_file_open_context(request->fl_file), lsp, seqid);
+ task = nfs4_do_unlck(request,
+ nfs_file_open_context(request->fl_core.flc_file),
+ lsp, seqid);
status = PTR_ERR(task);
if (IS_ERR(task))
goto out;
status = rpc_wait_for_completion_task(task);
rpc_put_task(task);
out:
- request->fl_flags = saved_flags;
+ request->fl_core.flc_flags = saved_flags;
trace_nfs4_unlock(request, state, F_SETLK, status);
return status;
}
@@ -7191,7 +7193,7 @@ static void nfs4_lock_done(struct rpc_task *task, void *calldata)
renew_lease(NFS_SERVER(d_inode(data->ctx->dentry)),
data->timestamp);
if (data->arg.new_lock && !data->cancelled) {
- data->fl.fl_flags &= ~(FL_SLEEP | FL_ACCESS);
+ data->fl.fl_core.flc_flags &= ~(FL_SLEEP | FL_ACCESS);
if (locks_lock_inode_wait(lsp->ls_state->inode, &data->fl) < 0)
goto out_restart;
}
@@ -7292,7 +7294,8 @@ static int _nfs4_do_setlk(struct nfs4_state *state, int cmd, struct file_lock *f
if (nfs_server_capable(state->inode, NFS_CAP_MOVEABLE))
task_setup_data.flags |= RPC_TASK_MOVEABLE;
- data = nfs4_alloc_lockdata(fl, nfs_file_open_context(fl->fl_file),
+ data = nfs4_alloc_lockdata(fl,
+ nfs_file_open_context(fl->fl_core.flc_file),
fl->fl_u.nfs4_fl.owner, GFP_KERNEL);
if (data == NULL)
return -ENOMEM;
@@ -7398,10 +7401,10 @@ static int _nfs4_proc_setlk(struct nfs4_state *state, int cmd, struct file_lock
{
struct nfs_inode *nfsi = NFS_I(state->inode);
struct nfs4_state_owner *sp = state->owner;
- unsigned char flags = request->fl_flags;
+ unsigned char flags = request->fl_core.flc_flags;
int status;
- request->fl_flags |= FL_ACCESS;
+ request->fl_core.flc_flags |= FL_ACCESS;
status = locks_lock_inode_wait(state->inode, request);
if (status < 0)
goto out;
@@ -7410,7 +7413,7 @@ static int _nfs4_proc_setlk(struct nfs4_state *state, int cmd, struct file_lock
if (test_bit(NFS_DELEGATED_STATE, &state->flags)) {
/* Yes: cache locks! */
/* ...but avoid races with delegation recall... */
- request->fl_flags = flags & ~FL_SLEEP;
+ request->fl_core.flc_flags = flags & ~FL_SLEEP;
status = locks_lock_inode_wait(state->inode, request);
up_read(&nfsi->rwsem);
mutex_unlock(&sp->so_delegreturn_mutex);
@@ -7420,7 +7423,7 @@ static int _nfs4_proc_setlk(struct nfs4_state *state, int cmd, struct file_lock
mutex_unlock(&sp->so_delegreturn_mutex);
status = _nfs4_do_setlk(state, cmd, request, NFS_LOCK_NEW);
out:
- request->fl_flags = flags;
+ request->fl_core.flc_flags = flags;
return status;
}
@@ -7562,7 +7565,7 @@ nfs4_proc_lock(struct file *filp, int cmd, struct file_lock *request)
if (!(IS_SETLK(cmd) || IS_SETLKW(cmd)))
return -EINVAL;
- if (request->fl_type == F_UNLCK) {
+ if (request->fl_core.flc_type == F_UNLCK) {
if (state != NULL)
return nfs4_proc_unlck(state, cmd, request);
return 0;
@@ -7571,7 +7574,7 @@ nfs4_proc_lock(struct file *filp, int cmd, struct file_lock *request)
if (state == NULL)
return -ENOLCK;
- if ((request->fl_flags & FL_POSIX) &&
+ if ((request->fl_core.flc_flags & FL_POSIX) &&
!test_bit(NFS_STATE_POSIX_LOCKS, &state->flags))
return -ENOLCK;
@@ -7579,7 +7582,7 @@ nfs4_proc_lock(struct file *filp, int cmd, struct file_lock *request)
* Don't rely on the VFS having checked the file open mode,
* since it won't do this for flock() locks.
*/
- switch (request->fl_type) {
+ switch (request->fl_core.flc_type) {
case F_RDLCK:
if (!(filp->f_mode & FMODE_READ))
return -EBADF;
diff --git a/fs/nfs/nfs4state.c b/fs/nfs/nfs4state.c
index 471caf06fa7b..dfa844ff76b8 100644
--- a/fs/nfs/nfs4state.c
+++ b/fs/nfs/nfs4state.c
@@ -980,7 +980,7 @@ int nfs4_set_lock_state(struct nfs4_state *state, struct file_lock *fl)
if (fl->fl_ops != NULL)
return 0;
- lsp = nfs4_get_lock_state(state, fl->fl_owner);
+ lsp = nfs4_get_lock_state(state, fl->fl_core.flc_owner);
if (lsp == NULL)
return -ENOMEM;
fl->fl_u.nfs4_fl.owner = lsp;
@@ -1529,8 +1529,8 @@ static int nfs4_reclaim_locks(struct nfs4_state *state, const struct nfs4_state_
down_write(&nfsi->rwsem);
spin_lock(&flctx->flc_lock);
restart:
- list_for_each_entry(fl, list, fl_list) {
- if (nfs_file_open_context(fl->fl_file)->state != state)
+ list_for_each_entry(fl, list, fl_core.flc_list) {
+ if (nfs_file_open_context(fl->fl_core.flc_file)->state != state)
continue;
spin_unlock(&flctx->flc_lock);
status = ops->recover_lock(state, fl);
diff --git a/fs/nfs/nfs4trace.h b/fs/nfs/nfs4trace.h
index d27919d7241d..8cdafca2bb7f 100644
--- a/fs/nfs/nfs4trace.h
+++ b/fs/nfs/nfs4trace.h
@@ -699,7 +699,7 @@ DECLARE_EVENT_CLASS(nfs4_lock_event,
__entry->error = error < 0 ? -error : 0;
__entry->cmd = cmd;
- __entry->type = request->fl_type;
+ __entry->type = request->fl_core.flc_type;
__entry->start = request->fl_start;
__entry->end = request->fl_end;
__entry->dev = inode->i_sb->s_dev;
@@ -771,7 +771,7 @@ TRACE_EVENT(nfs4_set_lock,
__entry->error = error < 0 ? -error : 0;
__entry->cmd = cmd;
- __entry->type = request->fl_type;
+ __entry->type = request->fl_core.flc_type;
__entry->start = request->fl_start;
__entry->end = request->fl_end;
__entry->dev = inode->i_sb->s_dev;
diff --git a/fs/nfs/nfs4xdr.c b/fs/nfs/nfs4xdr.c
index 69406e60f391..5ff343cd4813 100644
--- a/fs/nfs/nfs4xdr.c
+++ b/fs/nfs/nfs4xdr.c
@@ -1305,7 +1305,7 @@ static void encode_link(struct xdr_stream *xdr, const struct qstr *name, struct
static inline int nfs4_lock_type(struct file_lock *fl, int block)
{
- if (fl->fl_type == F_RDLCK)
+ if (fl->fl_core.flc_type == F_RDLCK)
return block ? NFS4_READW_LT : NFS4_READ_LT;
return block ? NFS4_WRITEW_LT : NFS4_WRITE_LT;
}
@@ -5052,10 +5052,10 @@ static int decode_lock_denied (struct xdr_stream *xdr, struct file_lock *fl)
fl->fl_end = fl->fl_start + (loff_t)length - 1;
if (length == ~(uint64_t)0)
fl->fl_end = OFFSET_MAX;
- fl->fl_type = F_WRLCK;
+ fl->fl_core.flc_type = F_WRLCK;
if (type & 1)
- fl->fl_type = F_RDLCK;
- fl->fl_pid = 0;
+ fl->fl_core.flc_type = F_RDLCK;
+ fl->fl_core.flc_pid = 0;
}
p = xdr_decode_hyper(p, &clientid); /* read 8 bytes */
namelen = be32_to_cpup(p); /* read 4 bytes */ /* have read all 32 bytes now */
diff --git a/fs/nfs/write.c b/fs/nfs/write.c
index ed837a3675cf..627700e03371 100644
--- a/fs/nfs/write.c
+++ b/fs/nfs/write.c
@@ -25,7 +25,6 @@
#include <linux/freezer.h>
#include <linux/wait.h>
#include <linux/iversion.h>
-#define _NEED_FILE_LOCK_FIELD_MACROS
#include <linux/filelock.h>
#include <linux/uaccess.h>
@@ -1302,7 +1301,7 @@ static bool
is_whole_file_wrlock(struct file_lock *fl)
{
return fl->fl_start == 0 && fl->fl_end == OFFSET_MAX &&
- fl->fl_type == F_WRLCK;
+ fl->fl_core.flc_type == F_WRLCK;
}
/* If we know the page is up to date, and we're not using byte range locks (or
@@ -1336,13 +1335,13 @@ static int nfs_can_extend_write(struct file *file, struct folio *folio,
spin_lock(&flctx->flc_lock);
if (!list_empty(&flctx->flc_posix)) {
fl = list_first_entry(&flctx->flc_posix, struct file_lock,
- fl_list);
+ fl_core.flc_list);
if (is_whole_file_wrlock(fl))
ret = 1;
} else if (!list_empty(&flctx->flc_flock)) {
fl = list_first_entry(&flctx->flc_flock, struct file_lock,
- fl_list);
- if (fl->fl_type == F_WRLCK)
+ fl_core.flc_list);
+ if (fl->fl_core.flc_type == F_WRLCK)
ret = 1;
}
spin_unlock(&flctx->flc_lock);
--
2.43.0
Most of the existing APIs have remained the same, but subsystems that
access file_lock fields directly need to reach into struct
file_lock_core now.
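The pattern is the same where nfsd stashes its own pointers in the lock
owner field; the accessor below is a made-up illustration, not part of
this patch:

    #include <linux/filelock.h>

    struct nfs4_lockowner;  /* opaque here; defined elsewhere in nfsd */

    /* was: lo = (struct nfs4_lockowner *)fl->fl_owner; */
    static inline struct nfs4_lockowner *
    example_lockowner(struct file_lock *fl)
    {
        return (struct nfs4_lockowner *)fl->fl_core.flc_owner;
    }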
Signed-off-by: Jeff Layton <[email protected]>
---
fs/nfsd/filecache.c | 4 +--
fs/nfsd/netns.h | 1 -
fs/nfsd/nfs4callback.c | 2 +-
fs/nfsd/nfs4layouts.c | 15 +++++-----
fs/nfsd/nfs4state.c | 77 +++++++++++++++++++++++++-------------------------
5 files changed, 50 insertions(+), 49 deletions(-)
diff --git a/fs/nfsd/filecache.c b/fs/nfsd/filecache.c
index 9cb7f0c33df5..cdd36758c692 100644
--- a/fs/nfsd/filecache.c
+++ b/fs/nfsd/filecache.c
@@ -662,8 +662,8 @@ nfsd_file_lease_notifier_call(struct notifier_block *nb, unsigned long arg,
struct file_lock *fl = data;
/* Only close files for F_SETLEASE leases */
- if (fl->fl_flags & FL_LEASE)
- nfsd_file_close_inode(file_inode(fl->fl_file));
+ if (fl->fl_core.flc_flags & FL_LEASE)
+ nfsd_file_close_inode(file_inode(fl->fl_core.flc_file));
return 0;
}
diff --git a/fs/nfsd/netns.h b/fs/nfsd/netns.h
index fd91125208be..74b4360779a1 100644
--- a/fs/nfsd/netns.h
+++ b/fs/nfsd/netns.h
@@ -10,7 +10,6 @@
#include <net/net_namespace.h>
#include <net/netns/generic.h>
-#define _NEED_FILE_LOCK_FIELD_MACROS
#include <linux/filelock.h>
#include <linux/percpu_counter.h>
#include <linux/siphash.h>
diff --git a/fs/nfsd/nfs4callback.c b/fs/nfsd/nfs4callback.c
index 926c29879c6a..3513c94481b4 100644
--- a/fs/nfsd/nfs4callback.c
+++ b/fs/nfsd/nfs4callback.c
@@ -674,7 +674,7 @@ static void nfs4_xdr_enc_cb_notify_lock(struct rpc_rqst *req,
const struct nfsd4_callback *cb = data;
const struct nfsd4_blocked_lock *nbl =
container_of(cb, struct nfsd4_blocked_lock, nbl_cb);
- struct nfs4_lockowner *lo = (struct nfs4_lockowner *)nbl->nbl_lock.fl_owner;
+ struct nfs4_lockowner *lo = (struct nfs4_lockowner *)nbl->nbl_lock.fl_core.flc_owner;
struct nfs4_cb_compound_hdr hdr = {
.ident = 0,
.minorversion = cb->cb_clp->cl_minorversion,
diff --git a/fs/nfsd/nfs4layouts.c b/fs/nfsd/nfs4layouts.c
index 5e8096bc5eaa..ddf221d31acf 100644
--- a/fs/nfsd/nfs4layouts.c
+++ b/fs/nfsd/nfs4layouts.c
@@ -193,14 +193,15 @@ nfsd4_layout_setlease(struct nfs4_layout_stateid *ls)
return -ENOMEM;
locks_init_lock(fl);
fl->fl_lmops = &nfsd4_layouts_lm_ops;
- fl->fl_flags = FL_LAYOUT;
- fl->fl_type = F_RDLCK;
+ fl->fl_core.flc_flags = FL_LAYOUT;
+ fl->fl_core.flc_type = F_RDLCK;
fl->fl_end = OFFSET_MAX;
- fl->fl_owner = ls;
- fl->fl_pid = current->tgid;
- fl->fl_file = ls->ls_file->nf_file;
+ fl->fl_core.flc_owner = ls;
+ fl->fl_core.flc_pid = current->tgid;
+ fl->fl_core.flc_file = ls->ls_file->nf_file;
- status = vfs_setlease(fl->fl_file, fl->fl_type, &fl, NULL);
+ status = vfs_setlease(fl->fl_core.flc_file, fl->fl_core.flc_type, &fl,
+ NULL);
if (status) {
locks_free_lock(fl);
return status;
@@ -731,7 +732,7 @@ nfsd4_layout_lm_break(struct file_lock *fl)
* in time:
*/
fl->fl_break_time = 0;
- nfsd4_recall_file_layout(fl->fl_owner);
+ nfsd4_recall_file_layout(fl->fl_core.flc_owner);
return false;
}
diff --git a/fs/nfsd/nfs4state.c b/fs/nfsd/nfs4state.c
index f66e67394157..5899e5778fe7 100644
--- a/fs/nfsd/nfs4state.c
+++ b/fs/nfsd/nfs4state.c
@@ -4924,7 +4924,7 @@ static void nfsd_break_one_deleg(struct nfs4_delegation *dp)
static bool
nfsd_break_deleg_cb(struct file_lock *fl)
{
- struct nfs4_delegation *dp = (struct nfs4_delegation *)fl->fl_owner;
+ struct nfs4_delegation *dp = (struct nfs4_delegation *) fl->fl_core.flc_owner;
struct nfs4_file *fp = dp->dl_stid.sc_file;
struct nfs4_client *clp = dp->dl_stid.sc_client;
struct nfsd_net *nn;
@@ -4962,7 +4962,7 @@ nfsd_break_deleg_cb(struct file_lock *fl)
*/
static bool nfsd_breaker_owns_lease(struct file_lock *fl)
{
- struct nfs4_delegation *dl = fl->fl_owner;
+ struct nfs4_delegation *dl = fl->fl_core.flc_owner;
struct svc_rqst *rqst;
struct nfs4_client *clp;
@@ -4980,7 +4980,7 @@ static int
nfsd_change_deleg_cb(struct file_lock *onlist, int arg,
struct list_head *dispose)
{
- struct nfs4_delegation *dp = (struct nfs4_delegation *)onlist->fl_owner;
+ struct nfs4_delegation *dp = (struct nfs4_delegation *) onlist->fl_core.flc_owner;
struct nfs4_client *clp = dp->dl_stid.sc_client;
if (arg & F_UNLCK) {
@@ -5340,12 +5340,12 @@ static struct file_lock *nfs4_alloc_init_lease(struct nfs4_delegation *dp,
if (!fl)
return NULL;
fl->fl_lmops = &nfsd_lease_mng_ops;
- fl->fl_flags = FL_DELEG;
- fl->fl_type = flag == NFS4_OPEN_DELEGATE_READ? F_RDLCK: F_WRLCK;
+ fl->fl_core.flc_flags = FL_DELEG;
+ fl->fl_core.flc_type = flag == NFS4_OPEN_DELEGATE_READ? F_RDLCK: F_WRLCK;
fl->fl_end = OFFSET_MAX;
- fl->fl_owner = (fl_owner_t)dp;
- fl->fl_pid = current->tgid;
- fl->fl_file = dp->dl_stid.sc_file->fi_deleg_file->nf_file;
+ fl->fl_core.flc_owner = (fl_owner_t)dp;
+ fl->fl_core.flc_pid = current->tgid;
+ fl->fl_core.flc_file = dp->dl_stid.sc_file->fi_deleg_file->nf_file;
return fl;
}
@@ -5533,7 +5533,8 @@ nfs4_set_delegation(struct nfsd4_open *open, struct nfs4_ol_stateid *stp,
if (!fl)
goto out_clnt_odstate;
- status = vfs_setlease(fp->fi_deleg_file->nf_file, fl->fl_type, &fl, NULL);
+ status = vfs_setlease(fp->fi_deleg_file->nf_file,
+ fl->fl_core.flc_type, &fl, NULL);
if (fl)
locks_free_lock(fl);
if (status)
@@ -7149,7 +7150,7 @@ nfsd4_lm_put_owner(fl_owner_t owner)
static bool
nfsd4_lm_lock_expirable(struct file_lock *cfl)
{
- struct nfs4_lockowner *lo = (struct nfs4_lockowner *)cfl->fl_owner;
+ struct nfs4_lockowner *lo = (struct nfs4_lockowner *) cfl->fl_core.flc_owner;
struct nfs4_client *clp = lo->lo_owner.so_client;
struct nfsd_net *nn;
@@ -7171,7 +7172,7 @@ nfsd4_lm_expire_lock(void)
static void
nfsd4_lm_notify(struct file_lock *fl)
{
- struct nfs4_lockowner *lo = (struct nfs4_lockowner *)fl->fl_owner;
+ struct nfs4_lockowner *lo = (struct nfs4_lockowner *) fl->fl_core.flc_owner;
struct net *net = lo->lo_owner.so_client->net;
struct nfsd_net *nn = net_generic(net, nfsd_net_id);
struct nfsd4_blocked_lock *nbl = container_of(fl,
@@ -7208,7 +7209,7 @@ nfs4_set_lock_denied(struct file_lock *fl, struct nfsd4_lock_denied *deny)
struct nfs4_lockowner *lo;
if (fl->fl_lmops == &nfsd_posix_mng_ops) {
- lo = (struct nfs4_lockowner *) fl->fl_owner;
+ lo = (struct nfs4_lockowner *) fl->fl_core.flc_owner;
xdr_netobj_dup(&deny->ld_owner, &lo->lo_owner.so_owner,
GFP_KERNEL);
if (!deny->ld_owner.data)
@@ -7227,7 +7228,7 @@ nfs4_set_lock_denied(struct file_lock *fl, struct nfsd4_lock_denied *deny)
if (fl->fl_end != NFS4_MAX_UINT64)
deny->ld_length = fl->fl_end - fl->fl_start + 1;
deny->ld_type = NFS4_READ_LT;
- if (fl->fl_type != F_RDLCK)
+ if (fl->fl_core.flc_type != F_RDLCK)
deny->ld_type = NFS4_WRITE_LT;
}
@@ -7615,11 +7616,11 @@ nfsd4_lock(struct svc_rqst *rqstp, struct nfsd4_compound_state *cstate,
}
file_lock = &nbl->nbl_lock;
- file_lock->fl_type = type;
- file_lock->fl_owner = (fl_owner_t)lockowner(nfs4_get_stateowner(&lock_sop->lo_owner));
- file_lock->fl_pid = current->tgid;
- file_lock->fl_file = nf->nf_file;
- file_lock->fl_flags = flags;
+ file_lock->fl_core.flc_type = type;
+ file_lock->fl_core.flc_owner = (fl_owner_t)lockowner(nfs4_get_stateowner(&lock_sop->lo_owner));
+ file_lock->fl_core.flc_pid = current->tgid;
+ file_lock->fl_core.flc_file = nf->nf_file;
+ file_lock->fl_core.flc_flags = flags;
file_lock->fl_lmops = &nfsd_posix_mng_ops;
file_lock->fl_start = lock->lk_offset;
file_lock->fl_end = last_byte_offset(lock->lk_offset, lock->lk_length);
@@ -7737,9 +7738,9 @@ static __be32 nfsd_test_lock(struct svc_rqst *rqstp, struct svc_fh *fhp, struct
err = nfserrno(nfsd_open_break_lease(inode, NFSD_MAY_READ));
if (err)
goto out;
- lock->fl_file = nf->nf_file;
+ lock->fl_core.flc_file = nf->nf_file;
err = nfserrno(vfs_test_lock(nf->nf_file, lock));
- lock->fl_file = NULL;
+ lock->fl_core.flc_file = NULL;
out:
inode_unlock(inode);
nfsd_file_put(nf);
@@ -7784,11 +7785,11 @@ nfsd4_lockt(struct svc_rqst *rqstp, struct nfsd4_compound_state *cstate,
switch (lockt->lt_type) {
case NFS4_READ_LT:
case NFS4_READW_LT:
- file_lock->fl_type = F_RDLCK;
+ file_lock->fl_core.flc_type = F_RDLCK;
break;
case NFS4_WRITE_LT:
case NFS4_WRITEW_LT:
- file_lock->fl_type = F_WRLCK;
+ file_lock->fl_core.flc_type = F_WRLCK;
break;
default:
dprintk("NFSD: nfs4_lockt: bad lock type!\n");
@@ -7798,9 +7799,9 @@ nfsd4_lockt(struct svc_rqst *rqstp, struct nfsd4_compound_state *cstate,
lo = find_lockowner_str(cstate->clp, &lockt->lt_owner);
if (lo)
- file_lock->fl_owner = (fl_owner_t)lo;
- file_lock->fl_pid = current->tgid;
- file_lock->fl_flags = FL_POSIX;
+ file_lock->fl_core.flc_owner = (fl_owner_t)lo;
+ file_lock->fl_core.flc_pid = current->tgid;
+ file_lock->fl_core.flc_flags = FL_POSIX;
file_lock->fl_start = lockt->lt_offset;
file_lock->fl_end = last_byte_offset(lockt->lt_offset, lockt->lt_length);
@@ -7811,7 +7812,7 @@ nfsd4_lockt(struct svc_rqst *rqstp, struct nfsd4_compound_state *cstate,
if (status)
goto out;
- if (file_lock->fl_type != F_UNLCK) {
+ if (file_lock->fl_core.flc_type != F_UNLCK) {
status = nfserr_denied;
nfs4_set_lock_denied(file_lock, &lockt->lt_denied);
}
@@ -7867,11 +7868,11 @@ nfsd4_locku(struct svc_rqst *rqstp, struct nfsd4_compound_state *cstate,
goto put_file;
}
- file_lock->fl_type = F_UNLCK;
- file_lock->fl_owner = (fl_owner_t)lockowner(nfs4_get_stateowner(stp->st_stateowner));
- file_lock->fl_pid = current->tgid;
- file_lock->fl_file = nf->nf_file;
- file_lock->fl_flags = FL_POSIX;
+ file_lock->fl_core.flc_type = F_UNLCK;
+ file_lock->fl_core.flc_owner = (fl_owner_t)lockowner(nfs4_get_stateowner(stp->st_stateowner));
+ file_lock->fl_core.flc_pid = current->tgid;
+ file_lock->fl_core.flc_file = nf->nf_file;
+ file_lock->fl_core.flc_flags = FL_POSIX;
file_lock->fl_lmops = &nfsd_posix_mng_ops;
file_lock->fl_start = locku->lu_offset;
@@ -7926,8 +7927,8 @@ check_for_locks(struct nfs4_file *fp, struct nfs4_lockowner *lowner)
if (flctx && !list_empty_careful(&flctx->flc_posix)) {
spin_lock(&flctx->flc_lock);
- list_for_each_entry(fl, &flctx->flc_posix, fl_list) {
- if (fl->fl_owner == (fl_owner_t)lowner) {
+ list_for_each_entry(fl, &flctx->flc_posix, fl_core.flc_list) {
+ if (fl->fl_core.flc_owner == (fl_owner_t)lowner) {
status = true;
break;
}
@@ -8455,8 +8456,8 @@ nfsd4_deleg_getattr_conflict(struct svc_rqst *rqstp, struct inode *inode)
if (!ctx)
return 0;
spin_lock(&ctx->flc_lock);
- list_for_each_entry(fl, &ctx->flc_lease, fl_list) {
- if (fl->fl_flags == FL_LAYOUT)
+ list_for_each_entry(fl, &ctx->flc_lease, fl_core.flc_list) {
+ if (fl->fl_core.flc_flags == FL_LAYOUT)
continue;
if (fl->fl_lmops != &nfsd_lease_mng_ops) {
/*
@@ -8464,12 +8465,12 @@ nfsd4_deleg_getattr_conflict(struct svc_rqst *rqstp, struct inode *inode)
* we are done; there isn't any write delegation
* on this inode
*/
- if (fl->fl_type == F_RDLCK)
+ if (fl->fl_core.flc_type == F_RDLCK)
break;
goto break_lease;
}
- if (fl->fl_type == F_WRLCK) {
- dp = fl->fl_owner;
+ if (fl->fl_core.flc_type == F_WRLCK) {
+ dp = fl->fl_core.flc_owner;
if (dp->dl_recall.cb_clp == *(rqstp->rq_lease_breaker)) {
spin_unlock(&ctx->flc_lock);
return 0;
--
2.43.0
Most of the existing APIs have remained the same, but subsystems that
access file_lock fields directly need to reach into struct
file_lock_core now.
Signed-off-by: Jeff Layton <[email protected]>
---
fs/smb/client/cifsglob.h | 1 -
fs/smb/client/cifssmb.c | 9 +++---
fs/smb/client/file.c | 75 ++++++++++++++++++++++++------------------------
fs/smb/client/smb2file.c | 3 +-
4 files changed, 43 insertions(+), 45 deletions(-)
diff --git a/fs/smb/client/cifsglob.h b/fs/smb/client/cifsglob.h
index fcda4c77c649..20036fb16cec 100644
--- a/fs/smb/client/cifsglob.h
+++ b/fs/smb/client/cifsglob.h
@@ -26,7 +26,6 @@
#include <uapi/linux/cifs/cifs_mount.h>
#include "../common/smb2pdu.h"
#include "smb2pdu.h"
-#define _NEED_FILE_LOCK_FIELD_MACROS
#include <linux/filelock.h>
#define SMB_PATH_MAX 260
diff --git a/fs/smb/client/cifssmb.c b/fs/smb/client/cifssmb.c
index e19ecf692c20..aae4e9ddc59d 100644
--- a/fs/smb/client/cifssmb.c
+++ b/fs/smb/client/cifssmb.c
@@ -15,7 +15,6 @@
/* want to reuse a stale file handle and only the caller knows the file info */
#include <linux/fs.h>
-#define _NEED_FILE_LOCK_FIELD_MACROS
#include <linux/filelock.h>
#include <linux/kernel.h>
#include <linux/vfs.h>
@@ -2067,20 +2066,20 @@ CIFSSMBPosixLock(const unsigned int xid, struct cifs_tcon *tcon,
parm_data = (struct cifs_posix_lock *)
((char *)&pSMBr->hdr.Protocol + data_offset);
if (parm_data->lock_type == cpu_to_le16(CIFS_UNLCK))
- pLockData->fl_type = F_UNLCK;
+ pLockData->fl_core.flc_type = F_UNLCK;
else {
if (parm_data->lock_type ==
cpu_to_le16(CIFS_RDLCK))
- pLockData->fl_type = F_RDLCK;
+ pLockData->fl_core.flc_type = F_RDLCK;
else if (parm_data->lock_type ==
cpu_to_le16(CIFS_WRLCK))
- pLockData->fl_type = F_WRLCK;
+ pLockData->fl_core.flc_type = F_WRLCK;
pLockData->fl_start = le64_to_cpu(parm_data->start);
pLockData->fl_end = pLockData->fl_start +
(le64_to_cpu(parm_data->length) ?
le64_to_cpu(parm_data->length) - 1 : 0);
- pLockData->fl_pid = -le32_to_cpu(parm_data->pid);
+ pLockData->fl_core.flc_pid = -le32_to_cpu(parm_data->pid);
}
}
diff --git a/fs/smb/client/file.c b/fs/smb/client/file.c
index dd87b2ef24dc..9a977ec0fb2f 100644
--- a/fs/smb/client/file.c
+++ b/fs/smb/client/file.c
@@ -9,7 +9,6 @@
*
*/
#include <linux/fs.h>
-#define _NEED_FILE_LOCK_FIELD_MACROS
#include <linux/filelock.h>
#include <linux/backing-dev.h>
#include <linux/stat.h>
@@ -1313,20 +1312,20 @@ cifs_lock_test(struct cifsFileInfo *cfile, __u64 offset, __u64 length,
down_read(&cinode->lock_sem);
exist = cifs_find_lock_conflict(cfile, offset, length, type,
- flock->fl_flags, &conf_lock,
+ flock->fl_core.flc_flags, &conf_lock,
CIFS_LOCK_OP);
if (exist) {
flock->fl_start = conf_lock->offset;
flock->fl_end = conf_lock->offset + conf_lock->length - 1;
- flock->fl_pid = conf_lock->pid;
+ flock->fl_core.flc_pid = conf_lock->pid;
if (conf_lock->type & server->vals->shared_lock_type)
- flock->fl_type = F_RDLCK;
+ flock->fl_core.flc_type = F_RDLCK;
else
- flock->fl_type = F_WRLCK;
+ flock->fl_core.flc_type = F_WRLCK;
} else if (!cinode->can_cache_brlcks)
rc = 1;
else
- flock->fl_type = F_UNLCK;
+ flock->fl_core.flc_type = F_UNLCK;
up_read(&cinode->lock_sem);
return rc;
@@ -1402,16 +1401,16 @@ cifs_posix_lock_test(struct file *file, struct file_lock *flock)
{
int rc = 0;
struct cifsInodeInfo *cinode = CIFS_I(file_inode(file));
- unsigned char saved_type = flock->fl_type;
+ unsigned char saved_type = flock->fl_core.flc_type;
- if ((flock->fl_flags & FL_POSIX) == 0)
+ if ((flock->fl_core.flc_flags & FL_POSIX) == 0)
return 1;
down_read(&cinode->lock_sem);
posix_test_lock(file, flock);
- if (flock->fl_type == F_UNLCK && !cinode->can_cache_brlcks) {
- flock->fl_type = saved_type;
+ if (flock->fl_core.flc_type == F_UNLCK && !cinode->can_cache_brlcks) {
+ flock->fl_core.flc_type = saved_type;
rc = 1;
}
@@ -1432,7 +1431,7 @@ cifs_posix_lock_set(struct file *file, struct file_lock *flock)
struct cifsInodeInfo *cinode = CIFS_I(file_inode(file));
int rc = FILE_LOCK_DEFERRED + 1;
- if ((flock->fl_flags & FL_POSIX) == 0)
+ if ((flock->fl_core.flc_flags & FL_POSIX) == 0)
return rc;
cifs_down_write(&cinode->lock_sem);
@@ -1582,7 +1581,7 @@ cifs_push_posix_locks(struct cifsFileInfo *cfile)
el = locks_to_send.next;
spin_lock(&flctx->flc_lock);
- list_for_each_entry(flock, &flctx->flc_posix, fl_list) {
+ list_for_each_entry(flock, &flctx->flc_posix, fl_core.flc_list) {
if (el == &locks_to_send) {
/*
* The list ended. We don't have enough allocated
@@ -1592,12 +1591,12 @@ cifs_push_posix_locks(struct cifsFileInfo *cfile)
break;
}
length = cifs_flock_len(flock);
- if (flock->fl_type == F_RDLCK || flock->fl_type == F_SHLCK)
+ if (flock->fl_core.flc_type == F_RDLCK || flock->fl_core.flc_type == F_SHLCK)
type = CIFS_RDLCK;
else
type = CIFS_WRLCK;
lck = list_entry(el, struct lock_to_push, llist);
- lck->pid = hash_lockowner(flock->fl_owner);
+ lck->pid = hash_lockowner(flock->fl_core.flc_owner);
lck->netfid = cfile->fid.netfid;
lck->length = length;
lck->type = type;
@@ -1664,42 +1663,43 @@ static void
cifs_read_flock(struct file_lock *flock, __u32 *type, int *lock, int *unlock,
bool *wait_flag, struct TCP_Server_Info *server)
{
- if (flock->fl_flags & FL_POSIX)
+ if (flock->fl_core.flc_flags & FL_POSIX)
cifs_dbg(FYI, "Posix\n");
- if (flock->fl_flags & FL_FLOCK)
+ if (flock->fl_core.flc_flags & FL_FLOCK)
cifs_dbg(FYI, "Flock\n");
- if (flock->fl_flags & FL_SLEEP) {
+ if (flock->fl_core.flc_flags & FL_SLEEP) {
cifs_dbg(FYI, "Blocking lock\n");
*wait_flag = true;
}
- if (flock->fl_flags & FL_ACCESS)
+ if (flock->fl_core.flc_flags & FL_ACCESS)
cifs_dbg(FYI, "Process suspended by mandatory locking - not implemented yet\n");
- if (flock->fl_flags & FL_LEASE)
+ if (flock->fl_core.flc_flags & FL_LEASE)
cifs_dbg(FYI, "Lease on file - not implemented yet\n");
- if (flock->fl_flags &
+ if (flock->fl_core.flc_flags &
(~(FL_POSIX | FL_FLOCK | FL_SLEEP |
FL_ACCESS | FL_LEASE | FL_CLOSE | FL_OFDLCK)))
- cifs_dbg(FYI, "Unknown lock flags 0x%x\n", flock->fl_flags);
+ cifs_dbg(FYI, "Unknown lock flags 0x%x\n",
+ flock->fl_core.flc_flags);
*type = server->vals->large_lock_type;
- if (flock->fl_type == F_WRLCK) {
+ if (flock->fl_core.flc_type == F_WRLCK) {
cifs_dbg(FYI, "F_WRLCK\n");
*type |= server->vals->exclusive_lock_type;
*lock = 1;
- } else if (flock->fl_type == F_UNLCK) {
+ } else if (flock->fl_core.flc_type == F_UNLCK) {
cifs_dbg(FYI, "F_UNLCK\n");
*type |= server->vals->unlock_lock_type;
*unlock = 1;
/* Check if unlock includes more than one lock range */
- } else if (flock->fl_type == F_RDLCK) {
+ } else if (flock->fl_core.flc_type == F_RDLCK) {
cifs_dbg(FYI, "F_RDLCK\n");
*type |= server->vals->shared_lock_type;
*lock = 1;
- } else if (flock->fl_type == F_EXLCK) {
+ } else if (flock->fl_core.flc_type == F_EXLCK) {
cifs_dbg(FYI, "F_EXLCK\n");
*type |= server->vals->exclusive_lock_type;
*lock = 1;
- } else if (flock->fl_type == F_SHLCK) {
+ } else if (flock->fl_core.flc_type == F_SHLCK) {
cifs_dbg(FYI, "F_SHLCK\n");
*type |= server->vals->shared_lock_type;
*lock = 1;
@@ -1731,7 +1731,7 @@ cifs_getlk(struct file *file, struct file_lock *flock, __u32 type,
else
posix_lock_type = CIFS_WRLCK;
rc = CIFSSMBPosixLock(xid, tcon, netfid,
- hash_lockowner(flock->fl_owner),
+ hash_lockowner(flock->fl_core.flc_owner),
flock->fl_start, length, flock,
posix_lock_type, wait_flag);
return rc;
@@ -1748,7 +1748,7 @@ cifs_getlk(struct file *file, struct file_lock *flock, __u32 type,
if (rc == 0) {
rc = server->ops->mand_lock(xid, cfile, flock->fl_start, length,
type, 0, 1, false);
- flock->fl_type = F_UNLCK;
+ flock->fl_core.flc_type = F_UNLCK;
if (rc != 0)
cifs_dbg(VFS, "Error unlocking previously locked range %d during test of lock\n",
rc);
@@ -1756,7 +1756,7 @@ cifs_getlk(struct file *file, struct file_lock *flock, __u32 type,
}
if (type & server->vals->shared_lock_type) {
- flock->fl_type = F_WRLCK;
+ flock->fl_core.flc_type = F_WRLCK;
return 0;
}
@@ -1768,12 +1768,12 @@ cifs_getlk(struct file *file, struct file_lock *flock, __u32 type,
if (rc == 0) {
rc = server->ops->mand_lock(xid, cfile, flock->fl_start, length,
type | server->vals->shared_lock_type, 0, 1, false);
- flock->fl_type = F_RDLCK;
+ flock->fl_core.flc_type = F_RDLCK;
if (rc != 0)
cifs_dbg(VFS, "Error unlocking previously locked range %d during test of lock\n",
rc);
} else
- flock->fl_type = F_WRLCK;
+ flock->fl_core.flc_type = F_WRLCK;
return 0;
}
@@ -1941,7 +1941,7 @@ cifs_setlk(struct file *file, struct file_lock *flock, __u32 type,
posix_lock_type = CIFS_UNLCK;
rc = CIFSSMBPosixLock(xid, tcon, cfile->fid.netfid,
- hash_lockowner(flock->fl_owner),
+ hash_lockowner(flock->fl_core.flc_owner),
flock->fl_start, length,
NULL, posix_lock_type, wait_flag);
goto out;
@@ -1951,7 +1951,7 @@ cifs_setlk(struct file *file, struct file_lock *flock, __u32 type,
struct cifsLockInfo *lock;
lock = cifs_lock_init(flock->fl_start, length, type,
- flock->fl_flags);
+ flock->fl_core.flc_flags);
if (!lock)
return -ENOMEM;
@@ -1990,7 +1990,7 @@ cifs_setlk(struct file *file, struct file_lock *flock, __u32 type,
rc = server->ops->mand_unlock_range(cfile, flock, xid);
out:
- if ((flock->fl_flags & FL_POSIX) || (flock->fl_flags & FL_FLOCK)) {
+ if ((flock->fl_core.flc_flags & FL_POSIX) || (flock->fl_core.flc_flags & FL_FLOCK)) {
/*
* If this is a request to remove all locks because we
* are closing the file, it doesn't matter if the
@@ -1999,7 +1999,7 @@ cifs_setlk(struct file *file, struct file_lock *flock, __u32 type,
*/
if (rc) {
cifs_dbg(VFS, "%s failed rc=%d\n", __func__, rc);
- if (!(flock->fl_flags & FL_CLOSE))
+ if (!(flock->fl_core.flc_flags & FL_CLOSE))
return rc;
}
rc = locks_lock_file_wait(file, flock);
@@ -2020,7 +2020,7 @@ int cifs_flock(struct file *file, int cmd, struct file_lock *fl)
xid = get_xid();
- if (!(fl->fl_flags & FL_FLOCK)) {
+ if (!(fl->fl_core.flc_flags & FL_FLOCK)) {
rc = -ENOLCK;
free_xid(xid);
return rc;
@@ -2071,7 +2071,8 @@ int cifs_lock(struct file *file, int cmd, struct file_lock *flock)
xid = get_xid();
cifs_dbg(FYI, "%s: %pD2 cmd=0x%x type=0x%x flags=0x%x r=%lld:%lld\n", __func__, file, cmd,
- flock->fl_flags, flock->fl_type, (long long)flock->fl_start,
+ flock->fl_core.flc_flags, flock->fl_core.flc_type,
+ (long long)flock->fl_start,
(long long)flock->fl_end);
cfile = (struct cifsFileInfo *)file->private_data;
diff --git a/fs/smb/client/smb2file.c b/fs/smb/client/smb2file.c
index cd225d15a7c5..53c855b3df5a 100644
--- a/fs/smb/client/smb2file.c
+++ b/fs/smb/client/smb2file.c
@@ -7,7 +7,6 @@
*
*/
#include <linux/fs.h>
-#define _NEED_FILE_LOCK_FIELD_MACROS
#include <linux/filelock.h>
#include <linux/stat.h>
#include <linux/slab.h>
@@ -229,7 +228,7 @@ smb2_unlock_range(struct cifsFileInfo *cfile, struct file_lock *flock,
* flock and OFD lock are associated with an open
* file description, not the process.
*/
- if (!(flock->fl_flags & (FL_FLOCK | FL_OFDLCK)))
+ if (!(flock->fl_core.flc_flags & (FL_FLOCK | FL_OFDLCK)))
continue;
if (cinode->can_cache_brlcks) {
/*
--
2.43.0
Most of the existing APIs have remained the same, but subsystems that
access file_lock fields directly need to reach into struct
file_lock_core now.
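The wait-queue and blocker fields move into the embedded core as well.
A minimal sketch, mirroring the ksmbd_vfs_posix_lock_wait() hunk below
(illustrative only, not part of this patch):

    #include <linux/filelock.h>
    #include <linux/wait.h>

    /* was: wait_event(flock->fl_wait, !flock->fl_blocker); */
    static void example_posix_lock_wait(struct file_lock *flock)
    {
        wait_event(flock->fl_core.flc_wait,
                   !flock->fl_core.flc_blocker);
    }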
Signed-off-by: Jeff Layton <[email protected]>
---
fs/smb/server/smb2pdu.c | 45 ++++++++++++++++++++++-----------------------
fs/smb/server/vfs.c | 15 +++++++--------
2 files changed, 29 insertions(+), 31 deletions(-)
diff --git a/fs/smb/server/smb2pdu.c b/fs/smb/server/smb2pdu.c
index d12d11cdea29..1a1ce70c7b2d 100644
--- a/fs/smb/server/smb2pdu.c
+++ b/fs/smb/server/smb2pdu.c
@@ -12,7 +12,6 @@
#include <linux/ethtool.h>
#include <linux/falloc.h>
#include <linux/mount.h>
-#define _NEED_FILE_LOCK_FIELD_MACROS
#include <linux/filelock.h>
#include "glob.h"
@@ -6761,10 +6760,10 @@ struct file_lock *smb_flock_init(struct file *f)
locks_init_lock(fl);
- fl->fl_owner = f;
- fl->fl_pid = current->tgid;
- fl->fl_file = f;
- fl->fl_flags = FL_POSIX;
+ fl->fl_core.flc_owner = f;
+ fl->fl_core.flc_pid = current->tgid;
+ fl->fl_core.flc_file = f;
+ fl->fl_core.flc_flags = FL_POSIX;
fl->fl_ops = NULL;
fl->fl_lmops = NULL;
@@ -6781,30 +6780,30 @@ static int smb2_set_flock_flags(struct file_lock *flock, int flags)
case SMB2_LOCKFLAG_SHARED:
ksmbd_debug(SMB, "received shared request\n");
cmd = F_SETLKW;
- flock->fl_type = F_RDLCK;
- flock->fl_flags |= FL_SLEEP;
+ flock->fl_core.flc_type = F_RDLCK;
+ flock->fl_core.flc_flags |= FL_SLEEP;
break;
case SMB2_LOCKFLAG_EXCLUSIVE:
ksmbd_debug(SMB, "received exclusive request\n");
cmd = F_SETLKW;
- flock->fl_type = F_WRLCK;
- flock->fl_flags |= FL_SLEEP;
+ flock->fl_core.flc_type = F_WRLCK;
+ flock->fl_core.flc_flags |= FL_SLEEP;
break;
case SMB2_LOCKFLAG_SHARED | SMB2_LOCKFLAG_FAIL_IMMEDIATELY:
ksmbd_debug(SMB,
"received shared & fail immediately request\n");
cmd = F_SETLK;
- flock->fl_type = F_RDLCK;
+ flock->fl_core.flc_type = F_RDLCK;
break;
case SMB2_LOCKFLAG_EXCLUSIVE | SMB2_LOCKFLAG_FAIL_IMMEDIATELY:
ksmbd_debug(SMB,
"received exclusive & fail immediately request\n");
cmd = F_SETLK;
- flock->fl_type = F_WRLCK;
+ flock->fl_core.flc_type = F_WRLCK;
break;
case SMB2_LOCKFLAG_UNLOCK:
ksmbd_debug(SMB, "received unlock request\n");
- flock->fl_type = F_UNLCK;
+ flock->fl_core.flc_type = F_UNLCK;
cmd = F_SETLK;
break;
}
@@ -6842,13 +6841,13 @@ static void smb2_remove_blocked_lock(void **argv)
struct file_lock *flock = (struct file_lock *)argv[0];
ksmbd_vfs_posix_lock_unblock(flock);
- wake_up(&flock->fl_wait);
+ wake_up(&flock->fl_core.flc_wait);
}
static inline bool lock_defer_pending(struct file_lock *fl)
{
/* check pending lock waiters */
- return waitqueue_active(&fl->fl_wait);
+ return waitqueue_active(&fl->fl_core.flc_wait);
}
/**
@@ -6939,8 +6938,8 @@ int smb2_lock(struct ksmbd_work *work)
list_for_each_entry(cmp_lock, &lock_list, llist) {
if (cmp_lock->fl->fl_start <= flock->fl_start &&
cmp_lock->fl->fl_end >= flock->fl_end) {
- if (cmp_lock->fl->fl_type != F_UNLCK &&
- flock->fl_type != F_UNLCK) {
+ if (cmp_lock->fl->fl_core.flc_type != F_UNLCK &&
+ flock->fl_core.flc_type != F_UNLCK) {
pr_err("conflict two locks in one request\n");
err = -EINVAL;
locks_free_lock(flock);
@@ -6988,12 +6987,12 @@ int smb2_lock(struct ksmbd_work *work)
list_for_each_entry(conn, &conn_list, conns_list) {
spin_lock(&conn->llist_lock);
list_for_each_entry_safe(cmp_lock, tmp2, &conn->lock_list, clist) {
- if (file_inode(cmp_lock->fl->fl_file) !=
- file_inode(smb_lock->fl->fl_file))
+ if (file_inode(cmp_lock->fl->fl_core.flc_file) !=
+ file_inode(smb_lock->fl->fl_core.flc_file))
continue;
- if (smb_lock->fl->fl_type == F_UNLCK) {
- if (cmp_lock->fl->fl_file == smb_lock->fl->fl_file &&
+ if (smb_lock->fl->fl_core.flc_type == F_UNLCK) {
+ if (cmp_lock->fl->fl_core.flc_file == smb_lock->fl->fl_core.flc_file &&
cmp_lock->start == smb_lock->start &&
cmp_lock->end == smb_lock->end &&
!lock_defer_pending(cmp_lock->fl)) {
@@ -7010,7 +7009,7 @@ int smb2_lock(struct ksmbd_work *work)
continue;
}
- if (cmp_lock->fl->fl_file == smb_lock->fl->fl_file) {
+ if (cmp_lock->fl->fl_core.flc_file == smb_lock->fl->fl_core.flc_file) {
if (smb_lock->flags & SMB2_LOCKFLAG_SHARED)
continue;
} else {
@@ -7052,7 +7051,7 @@ int smb2_lock(struct ksmbd_work *work)
}
up_read(&conn_list_lock);
out_check_cl:
- if (smb_lock->fl->fl_type == F_UNLCK && nolock) {
+ if (smb_lock->fl->fl_core.flc_type == F_UNLCK && nolock) {
pr_err("Try to unlock nolocked range\n");
rsp->hdr.Status = STATUS_RANGE_NOT_LOCKED;
goto out;
@@ -7176,7 +7175,7 @@ int smb2_lock(struct ksmbd_work *work)
struct file_lock *rlock = NULL;
rlock = smb_flock_init(filp);
- rlock->fl_type = F_UNLCK;
+ rlock->fl_core.flc_type = F_UNLCK;
rlock->fl_start = smb_lock->start;
rlock->fl_end = smb_lock->end;
diff --git a/fs/smb/server/vfs.c b/fs/smb/server/vfs.c
index d0686ec344f5..2b67cccea91c 100644
--- a/fs/smb/server/vfs.c
+++ b/fs/smb/server/vfs.c
@@ -6,7 +6,6 @@
#include <linux/kernel.h>
#include <linux/fs.h>
-#define _NEED_FILE_LOCK_FIELD_MACROS
#include <linux/filelock.h>
#include <linux/uaccess.h>
#include <linux/backing-dev.h>
@@ -338,18 +337,18 @@ static int check_lock_range(struct file *filp, loff_t start, loff_t end,
return 0;
spin_lock(&ctx->flc_lock);
- list_for_each_entry(flock, &ctx->flc_posix, fl_list) {
+ list_for_each_entry(flock, &ctx->flc_posix, fl_core.flc_list) {
/* check conflict locks */
if (flock->fl_end >= start && end >= flock->fl_start) {
- if (flock->fl_type == F_RDLCK) {
+ if (flock->fl_core.flc_type == F_RDLCK) {
if (type == WRITE) {
pr_err("not allow write by shared lock\n");
error = 1;
goto out;
}
- } else if (flock->fl_type == F_WRLCK) {
+ } else if (flock->fl_core.flc_type == F_WRLCK) {
/* check owner in lock */
- if (flock->fl_file != filp) {
+ if (flock->fl_core.flc_file != filp) {
error = 1;
pr_err("not allow rw access by exclusive lock from other opens\n");
goto out;
@@ -1838,13 +1837,13 @@ int ksmbd_vfs_copy_file_ranges(struct ksmbd_work *work,
void ksmbd_vfs_posix_lock_wait(struct file_lock *flock)
{
- wait_event(flock->fl_wait, !flock->fl_blocker);
+ wait_event(flock->fl_core.flc_wait, !flock->fl_core.flc_blocker);
}
int ksmbd_vfs_posix_lock_wait_timeout(struct file_lock *flock, long timeout)
{
- return wait_event_interruptible_timeout(flock->fl_wait,
- !flock->fl_blocker,
+ return wait_event_interruptible_timeout(flock->fl_core.flc_wait,
+ !flock->fl_core.flc_blocker,
timeout);
}
--
2.43.0
Everything has been converted to access fl_core fields directly, so we
can now drop the temporary _NEED_FILE_LOCK_FIELD_MACROS accessor macros.
Signed-off-by: Jeff Layton <[email protected]>
---
include/linux/filelock.h | 16 ----------------
1 file changed, 16 deletions(-)
diff --git a/include/linux/filelock.h b/include/linux/filelock.h
index 9ddf27faba94..c887fce6dbf9 100644
--- a/include/linux/filelock.h
+++ b/include/linux/filelock.h
@@ -105,22 +105,6 @@ struct file_lock_core {
struct file *flc_file;
};
-/* Temporary macros to allow building during coccinelle conversion */
-#ifdef _NEED_FILE_LOCK_FIELD_MACROS
-#define fl_list fl_core.flc_list
-#define fl_blocker fl_core.flc_blocker
-#define fl_link fl_core.flc_link
-#define fl_blocked_requests fl_core.flc_blocked_requests
-#define fl_blocked_member fl_core.flc_blocked_member
-#define fl_owner fl_core.flc_owner
-#define fl_flags fl_core.flc_flags
-#define fl_type fl_core.flc_type
-#define fl_pid fl_core.flc_pid
-#define fl_link_cpu fl_core.flc_link_cpu
-#define fl_wait fl_core.flc_wait
-#define fl_file fl_core.flc_file
-#endif
-
struct file_lock {
struct file_lock_core fl_core;
loff_t fl_start;
--
2.43.0
Add a new struct file_lease and move the lease-specific fields from
struct file_lock into it. Convert the lease-related APIs to take a
struct file_lease instead, and update their callers to match.
There is zero overlap between the lock manager operations for file
locks and the ones for file leases, so split the lease-related
operations off into a new lease_manager_operations struct.
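Roughly, the result looks like the abridged sketch below (field list
trimmed for brevity; the diff has the authoritative layout):

    struct file_lease {
        struct file_lock_core fl_core;  /* fields shared with file_lock */
        struct fasync_struct *fl_fasync;
        /* lease break/downgrade timestamps, etc. */
        const struct lease_manager_operations *fl_lmops;
    };

    struct lease_manager_operations {
        bool (*lm_break)(struct file_lease *);
        int (*lm_change)(struct file_lease *, int, struct list_head *);
        void (*lm_setup)(struct file_lease *, void **);
        bool (*lm_breaker_owns_lease)(struct file_lease *);
    };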
Signed-off-by: Jeff Layton <[email protected]>
---
fs/libfs.c | 2 +-
fs/locks.c | 119 ++++++++++++++++++++++++++--------------
fs/nfs/nfs4_fs.h | 2 +-
fs/nfs/nfs4file.c | 2 +-
fs/nfs/nfs4proc.c | 4 +-
fs/nfsd/nfs4layouts.c | 17 +++---
fs/nfsd/nfs4state.c | 21 ++++---
fs/smb/client/cifsfs.c | 2 +-
include/linux/filelock.h | 49 +++++++++++------
include/linux/fs.h | 5 +-
include/trace/events/filelock.h | 18 +++---
11 files changed, 147 insertions(+), 94 deletions(-)
diff --git a/fs/libfs.c b/fs/libfs.c
index eec6031b0155..8b67cb4655d5 100644
--- a/fs/libfs.c
+++ b/fs/libfs.c
@@ -1580,7 +1580,7 @@ EXPORT_SYMBOL(alloc_anon_inode);
* All arguments are ignored and it just returns -EINVAL.
*/
int
-simple_nosetlease(struct file *filp, int arg, struct file_lock **flp,
+simple_nosetlease(struct file *filp, int arg, struct file_lease **flp,
void **priv)
{
return -EINVAL;
diff --git a/fs/locks.c b/fs/locks.c
index de93d38da2f9..c6c2b2e173fb 100644
--- a/fs/locks.c
+++ b/fs/locks.c
@@ -74,12 +74,17 @@ static struct file_lock *file_lock(struct file_lock_core *flc)
return container_of(flc, struct file_lock, fl_core);
}
-static bool lease_breaking(struct file_lock *fl)
+static struct file_lease *file_lease(struct file_lock_core *flc)
+{
+ return container_of(flc, struct file_lease, fl_core);
+}
+
+static bool lease_breaking(struct file_lease *fl)
{
return fl->fl_core.flc_flags & (FL_UNLOCK_PENDING | FL_DOWNGRADE_PENDING);
}
-static int target_leasetype(struct file_lock *fl)
+static int target_leasetype(struct file_lease *fl)
{
if (fl->fl_core.flc_flags & FL_UNLOCK_PENDING)
return F_UNLCK;
@@ -166,6 +171,7 @@ static DEFINE_SPINLOCK(blocked_lock_lock);
static struct kmem_cache *flctx_cache __ro_after_init;
static struct kmem_cache *filelock_cache __ro_after_init;
+static struct kmem_cache *filelease_cache __ro_after_init;
static struct file_lock_context *
locks_get_lock_context(struct inode *inode, int type)
@@ -275,6 +281,18 @@ struct file_lock *locks_alloc_lock(void)
}
EXPORT_SYMBOL_GPL(locks_alloc_lock);
+/* Allocate an empty lock structure. */
+struct file_lease *locks_alloc_lease(void)
+{
+ struct file_lease *fl = kmem_cache_zalloc(filelease_cache, GFP_KERNEL);
+
+ if (fl)
+ locks_init_lock_heads(&fl->fl_core);
+
+ return fl;
+}
+EXPORT_SYMBOL_GPL(locks_alloc_lease);
+
void locks_release_private(struct file_lock *fl)
{
struct file_lock_core *flc = &fl->fl_core;
@@ -336,15 +354,25 @@ void locks_free_lock(struct file_lock *fl)
}
EXPORT_SYMBOL(locks_free_lock);
+/* Free a lease which is not in use. */
+void locks_free_lease(struct file_lease *fl)
+{
+ kmem_cache_free(filelease_cache, fl);
+}
+EXPORT_SYMBOL(locks_free_lease);
+
static void
locks_dispose_list(struct list_head *dispose)
{
- struct file_lock *fl;
+ struct file_lock_core *flc;
while (!list_empty(dispose)) {
- fl = list_first_entry(dispose, struct file_lock, fl_core.flc_list);
- list_del_init(&fl->fl_core.flc_list);
- locks_free_lock(fl);
+ flc = list_first_entry(dispose, struct file_lock_core, flc_list);
+ list_del_init(&flc->flc_list);
+ if (flc->flc_flags & (FL_LEASE|FL_DELEG|FL_LAYOUT))
+ locks_free_lease(file_lease(flc));
+ else
+ locks_free_lock(file_lock(flc));
}
}
@@ -355,6 +383,13 @@ void locks_init_lock(struct file_lock *fl)
}
EXPORT_SYMBOL(locks_init_lock);
+void locks_init_lease(struct file_lease *fl)
+{
+ memset(fl, 0, sizeof(*fl));
+ locks_init_lock_heads(&fl->fl_core);
+}
+EXPORT_SYMBOL(locks_init_lease);
+
/*
* Initialize a new lock from an existing file_lock structure.
*/
@@ -518,14 +553,14 @@ static int flock_to_posix_lock(struct file *filp, struct file_lock *fl,
/* default lease lock manager operations */
static bool
-lease_break_callback(struct file_lock *fl)
+lease_break_callback(struct file_lease *fl)
{
kill_fasync(&fl->fl_fasync, SIGIO, POLL_MSG);
return false;
}
static void
-lease_setup(struct file_lock *fl, void **priv)
+lease_setup(struct file_lease *fl, void **priv)
{
struct file *filp = fl->fl_core.flc_file;
struct fasync_struct *fa = *priv;
@@ -541,7 +576,7 @@ lease_setup(struct file_lock *fl, void **priv)
__f_setown(filp, task_pid(current), PIDTYPE_TGID, 0);
}
-static const struct lock_manager_operations lease_manager_ops = {
+static const struct lease_manager_operations lease_manager_ops = {
.lm_break = lease_break_callback,
.lm_change = lease_modify,
.lm_setup = lease_setup,
@@ -550,7 +585,7 @@ static const struct lock_manager_operations lease_manager_ops = {
/*
* Initialize a lease, use the default lock manager operations
*/
-static int lease_init(struct file *filp, int type, struct file_lock *fl)
+static int lease_init(struct file *filp, int type, struct file_lease *fl)
{
if (assign_type(&fl->fl_core, type) != 0)
return -EINVAL;
@@ -560,17 +595,14 @@ static int lease_init(struct file *filp, int type, struct file_lock *fl)
fl->fl_core.flc_file = filp;
fl->fl_core.flc_flags = FL_LEASE;
- fl->fl_start = 0;
- fl->fl_end = OFFSET_MAX;
- fl->fl_ops = NULL;
fl->fl_lmops = &lease_manager_ops;
return 0;
}
/* Allocate a file_lock initialised to this type of lease */
-static struct file_lock *lease_alloc(struct file *filp, int type)
+static struct file_lease *lease_alloc(struct file *filp, int type)
{
- struct file_lock *fl = locks_alloc_lock();
+ struct file_lease *fl = locks_alloc_lease();
int error = -ENOMEM;
if (fl == NULL)
@@ -578,7 +610,7 @@ static struct file_lock *lease_alloc(struct file *filp, int type)
error = lease_init(filp, type, fl);
if (error) {
- locks_free_lock(fl);
+ locks_free_lease(fl);
return ERR_PTR(error);
}
return fl;
@@ -1395,7 +1427,7 @@ static int posix_lock_inode_wait(struct inode *inode, struct file_lock *fl)
return error;
}
-static void lease_clear_pending(struct file_lock *fl, int arg)
+static void lease_clear_pending(struct file_lease *fl, int arg)
{
switch (arg) {
case F_UNLCK:
@@ -1407,7 +1439,7 @@ static void lease_clear_pending(struct file_lock *fl, int arg)
}
/* We already had a lease on this file; just change its type */
-int lease_modify(struct file_lock *fl, int arg, struct list_head *dispose)
+int lease_modify(struct file_lease *fl, int arg, struct list_head *dispose)
{
int error = assign_type(&fl->fl_core, arg);
@@ -1442,7 +1474,7 @@ static bool past_time(unsigned long then)
static void time_out_leases(struct inode *inode, struct list_head *dispose)
{
struct file_lock_context *ctx = inode->i_flctx;
- struct file_lock *fl, *tmp;
+ struct file_lease *fl, *tmp;
lockdep_assert_held(&ctx->flc_lock);
@@ -1458,8 +1490,8 @@ static void time_out_leases(struct inode *inode, struct list_head *dispose)
static bool leases_conflict(struct file_lock_core *lc, struct file_lock_core *bc)
{
bool rc;
- struct file_lock *lease = file_lock(lc);
- struct file_lock *breaker = file_lock(bc);
+ struct file_lease *lease = file_lease(lc);
+ struct file_lease *breaker = file_lease(bc);
if (lease->fl_lmops->lm_breaker_owns_lease
&& lease->fl_lmops->lm_breaker_owns_lease(lease))
@@ -1480,7 +1512,7 @@ static bool leases_conflict(struct file_lock_core *lc, struct file_lock_core *bc
}
static bool
-any_leases_conflict(struct inode *inode, struct file_lock *breaker)
+any_leases_conflict(struct inode *inode, struct file_lease *breaker)
{
struct file_lock_context *ctx = inode->i_flctx;
struct file_lock_core *flc;
@@ -1511,7 +1543,7 @@ int __break_lease(struct inode *inode, unsigned int mode, unsigned int type)
{
int error = 0;
struct file_lock_context *ctx;
- struct file_lock *new_fl, *fl, *tmp;
+ struct file_lease *new_fl, *fl, *tmp;
unsigned long break_time;
int want_write = (mode & O_ACCMODE) != O_RDONLY;
LIST_HEAD(dispose);
@@ -1571,7 +1603,7 @@ int __break_lease(struct inode *inode, unsigned int mode, unsigned int type)
}
restart:
- fl = list_first_entry(&ctx->flc_lease, struct file_lock, fl_core.flc_list);
+ fl = list_first_entry(&ctx->flc_lease, struct file_lease, fl_core.flc_list);
break_time = fl->fl_break_time;
if (break_time != 0)
break_time -= jiffies;
@@ -1590,7 +1622,7 @@ int __break_lease(struct inode *inode, unsigned int mode, unsigned int type)
percpu_down_read(&file_rwsem);
spin_lock(&ctx->flc_lock);
trace_break_lease_unblock(inode, new_fl);
- locks_delete_block(new_fl);
+ __locks_delete_block(&new_fl->fl_core);
if (error >= 0) {
/*
* Wait for the next conflicting lease that has not been
@@ -1607,7 +1639,7 @@ int __break_lease(struct inode *inode, unsigned int mode, unsigned int type)
percpu_up_read(&file_rwsem);
locks_dispose_list(&dispose);
free_lock:
- locks_free_lock(new_fl);
+ locks_free_lease(new_fl);
return error;
}
EXPORT_SYMBOL(__break_lease);
@@ -1625,13 +1657,13 @@ void lease_get_mtime(struct inode *inode, struct timespec64 *time)
{
bool has_lease = false;
struct file_lock_context *ctx;
- struct file_lock *fl;
+ struct file_lease *fl;
ctx = locks_inode_context(inode);
if (ctx && !list_empty_careful(&ctx->flc_lease)) {
spin_lock(&ctx->flc_lock);
fl = list_first_entry_or_null(&ctx->flc_lease,
- struct file_lock, fl_core.flc_list);
+ struct file_lease, fl_core.flc_list);
if (fl && (fl->fl_core.flc_type == F_WRLCK))
has_lease = true;
spin_unlock(&ctx->flc_lock);
@@ -1667,7 +1699,7 @@ EXPORT_SYMBOL(lease_get_mtime);
*/
int fcntl_getlease(struct file *filp)
{
- struct file_lock *fl;
+ struct file_lease *fl;
struct inode *inode = file_inode(filp);
struct file_lock_context *ctx;
int type = F_UNLCK;
@@ -1739,9 +1771,9 @@ check_conflicting_open(struct file *filp, const int arg, int flags)
}
static int
-generic_add_lease(struct file *filp, int arg, struct file_lock **flp, void **priv)
+generic_add_lease(struct file *filp, int arg, struct file_lease **flp, void **priv)
{
- struct file_lock *fl, *my_fl = NULL, *lease;
+ struct file_lease *fl, *my_fl = NULL, *lease;
struct inode *inode = file_inode(filp);
struct file_lock_context *ctx;
bool is_deleg = (*flp)->fl_core.flc_flags & FL_DELEG;
@@ -1850,7 +1882,7 @@ generic_add_lease(struct file *filp, int arg, struct file_lock **flp, void **pri
static int generic_delete_lease(struct file *filp, void *owner)
{
int error = -EAGAIN;
- struct file_lock *fl, *victim = NULL;
+ struct file_lease *fl, *victim = NULL;
struct inode *inode = file_inode(filp);
struct file_lock_context *ctx;
LIST_HEAD(dispose);
@@ -1890,7 +1922,7 @@ static int generic_delete_lease(struct file *filp, void *owner)
* The (input) flp->fl_lmops->lm_break function is required
* by break_lease().
*/
-int generic_setlease(struct file *filp, int arg, struct file_lock **flp,
+int generic_setlease(struct file *filp, int arg, struct file_lease **flp,
void **priv)
{
struct inode *inode = file_inode(filp);
@@ -1937,7 +1969,7 @@ lease_notifier_chain_init(void)
}
static inline void
-setlease_notifier(int arg, struct file_lock *lease)
+setlease_notifier(int arg, struct file_lease *lease)
{
if (arg != F_UNLCK)
srcu_notifier_call_chain(&lease_notifier_chain, arg, lease);
@@ -1973,7 +2005,7 @@ EXPORT_SYMBOL_GPL(lease_unregister_notifier);
* may be NULL if the lm_setup operation doesn't require it.
*/
int
-vfs_setlease(struct file *filp, int arg, struct file_lock **lease, void **priv)
+vfs_setlease(struct file *filp, int arg, struct file_lease **lease, void **priv)
{
if (lease)
setlease_notifier(arg, *lease);
@@ -1986,7 +2018,7 @@ EXPORT_SYMBOL_GPL(vfs_setlease);
static int do_fcntl_add_lease(unsigned int fd, struct file *filp, int arg)
{
- struct file_lock *fl;
+ struct file_lease *fl;
struct fasync_struct *new;
int error;
@@ -1996,14 +2028,14 @@ static int do_fcntl_add_lease(unsigned int fd, struct file *filp, int arg)
new = fasync_alloc();
if (!new) {
- locks_free_lock(fl);
+ locks_free_lease(fl);
return -ENOMEM;
}
new->fa_fd = fd;
error = vfs_setlease(filp, arg, &fl, (void **)&new);
if (fl)
- locks_free_lock(fl);
+ locks_free_lease(fl);
if (new)
fasync_free(new);
return error;
@@ -2626,7 +2658,7 @@ locks_remove_flock(struct file *filp, struct file_lock_context *flctx)
static void
locks_remove_lease(struct file *filp, struct file_lock_context *ctx)
{
- struct file_lock *fl, *tmp;
+ struct file_lease *fl, *tmp;
LIST_HEAD(dispose);
if (list_empty(&ctx->flc_lease))
@@ -2756,14 +2788,16 @@ static void lock_get_status(struct seq_file *f, struct file_lock_core *flc,
} else if (flc->flc_flags & FL_FLOCK) {
seq_puts(f, "FLOCK ADVISORY ");
} else if (flc->flc_flags & (FL_LEASE|FL_DELEG|FL_LAYOUT)) {
- type = target_leasetype(fl);
+ struct file_lease *lease = file_lease(flc);
+
+ type = target_leasetype(lease);
if (flc->flc_flags & FL_DELEG)
seq_puts(f, "DELEG ");
else
seq_puts(f, "LEASE ");
- if (lease_breaking(fl))
+ if (lease_breaking(lease))
seq_puts(f, "BREAKING ");
else if (flc->flc_file)
seq_puts(f, "ACTIVE ");
@@ -2946,6 +2980,9 @@ static int __init filelock_init(void)
filelock_cache = kmem_cache_create("file_lock_cache",
sizeof(struct file_lock), 0, SLAB_PANIC, NULL);
+	filelease_cache = kmem_cache_create("file_lease_cache",
+ sizeof(struct file_lease), 0, SLAB_PANIC, NULL);
+
for_each_possible_cpu(i) {
struct file_lock_list_struct *fll = per_cpu_ptr(&file_lock_list, i);
diff --git a/fs/nfs/nfs4_fs.h b/fs/nfs/nfs4_fs.h
index 581698f1b7b2..6ff41ceb9f1c 100644
--- a/fs/nfs/nfs4_fs.h
+++ b/fs/nfs/nfs4_fs.h
@@ -330,7 +330,7 @@ extern int update_open_stateid(struct nfs4_state *state,
const nfs4_stateid *deleg_stateid,
fmode_t fmode);
extern int nfs4_proc_setlease(struct file *file, int arg,
- struct file_lock **lease, void **priv);
+ struct file_lease **lease, void **priv);
extern int nfs4_proc_get_lease_time(struct nfs_client *clp,
struct nfs_fsinfo *fsinfo);
extern void nfs4_update_changeattr(struct inode *dir,
diff --git a/fs/nfs/nfs4file.c b/fs/nfs/nfs4file.c
index e238abc78a13..1cd9652f3c28 100644
--- a/fs/nfs/nfs4file.c
+++ b/fs/nfs/nfs4file.c
@@ -439,7 +439,7 @@ void nfs42_ssc_unregister_ops(void)
}
#endif /* CONFIG_NFS_V4_2 */
-static int nfs4_setlease(struct file *file, int arg, struct file_lock **lease,
+static int nfs4_setlease(struct file *file, int arg, struct file_lease **lease,
void **priv)
{
return nfs4_proc_setlease(file, arg, lease, priv);
diff --git a/fs/nfs/nfs4proc.c b/fs/nfs/nfs4proc.c
index bdf9fa468982..62b6906d5c01 100644
--- a/fs/nfs/nfs4proc.c
+++ b/fs/nfs/nfs4proc.c
@@ -7604,7 +7604,7 @@ static int nfs4_delete_lease(struct file *file, void **priv)
return generic_setlease(file, F_UNLCK, NULL, priv);
}
-static int nfs4_add_lease(struct file *file, int arg, struct file_lock **lease,
+static int nfs4_add_lease(struct file *file, int arg, struct file_lease **lease,
void **priv)
{
struct inode *inode = file_inode(file);
@@ -7622,7 +7622,7 @@ static int nfs4_add_lease(struct file *file, int arg, struct file_lock **lease,
return -EAGAIN;
}
-int nfs4_proc_setlease(struct file *file, int arg, struct file_lock **lease,
+int nfs4_proc_setlease(struct file *file, int arg, struct file_lease **lease,
void **priv)
{
switch (arg) {
diff --git a/fs/nfsd/nfs4layouts.c b/fs/nfsd/nfs4layouts.c
index ddf221d31acf..3a187bff7deb 100644
--- a/fs/nfsd/nfs4layouts.c
+++ b/fs/nfsd/nfs4layouts.c
@@ -25,7 +25,7 @@ static struct kmem_cache *nfs4_layout_cache;
static struct kmem_cache *nfs4_layout_stateid_cache;
static const struct nfsd4_callback_ops nfsd4_cb_layout_ops;
-static const struct lock_manager_operations nfsd4_layouts_lm_ops;
+static const struct lease_manager_operations nfsd4_layouts_lm_ops;
const struct nfsd4_layout_ops *nfsd4_layout_ops[LAYOUT_TYPE_MAX] = {
#ifdef CONFIG_NFSD_FLEXFILELAYOUT
@@ -182,20 +182,19 @@ nfsd4_free_layout_stateid(struct nfs4_stid *stid)
static int
nfsd4_layout_setlease(struct nfs4_layout_stateid *ls)
{
- struct file_lock *fl;
+ struct file_lease *fl;
int status;
if (nfsd4_layout_ops[ls->ls_layout_type]->disable_recalls)
return 0;
- fl = locks_alloc_lock();
+ fl = locks_alloc_lease();
if (!fl)
return -ENOMEM;
- locks_init_lock(fl);
+ locks_init_lease(fl);
fl->fl_lmops = &nfsd4_layouts_lm_ops;
fl->fl_core.flc_flags = FL_LAYOUT;
fl->fl_core.flc_type = F_RDLCK;
- fl->fl_end = OFFSET_MAX;
fl->fl_core.flc_owner = ls;
fl->fl_core.flc_pid = current->tgid;
fl->fl_core.flc_file = ls->ls_file->nf_file;
@@ -203,7 +202,7 @@ nfsd4_layout_setlease(struct nfs4_layout_stateid *ls)
status = vfs_setlease(fl->fl_core.flc_file, fl->fl_core.flc_type, &fl,
NULL);
if (status) {
- locks_free_lock(fl);
+ locks_free_lease(fl);
return status;
}
BUG_ON(fl != NULL);
@@ -724,7 +723,7 @@ static const struct nfsd4_callback_ops nfsd4_cb_layout_ops = {
};
static bool
-nfsd4_layout_lm_break(struct file_lock *fl)
+nfsd4_layout_lm_break(struct file_lease *fl)
{
/*
* We don't want the locks code to timeout the lease for us;
@@ -737,14 +736,14 @@ nfsd4_layout_lm_break(struct file_lock *fl)
}
static int
-nfsd4_layout_lm_change(struct file_lock *onlist, int arg,
+nfsd4_layout_lm_change(struct file_lease *onlist, int arg,
struct list_head *dispose)
{
BUG_ON(!(arg & F_UNLCK));
return lease_modify(onlist, arg, dispose);
}
-static const struct lock_manager_operations nfsd4_layouts_lm_ops = {
+static const struct lease_manager_operations nfsd4_layouts_lm_ops = {
.lm_break = nfsd4_layout_lm_break,
.lm_change = nfsd4_layout_lm_change,
};
diff --git a/fs/nfsd/nfs4state.c b/fs/nfsd/nfs4state.c
index 5899e5778fe7..a80e8e41d9ff 100644
--- a/fs/nfsd/nfs4state.c
+++ b/fs/nfsd/nfs4state.c
@@ -4922,7 +4922,7 @@ static void nfsd_break_one_deleg(struct nfs4_delegation *dp)
/* Called from break_lease() with flc_lock held. */
static bool
-nfsd_break_deleg_cb(struct file_lock *fl)
+nfsd_break_deleg_cb(struct file_lease *fl)
{
struct nfs4_delegation *dp = (struct nfs4_delegation *) fl->fl_core.flc_owner;
struct nfs4_file *fp = dp->dl_stid.sc_file;
@@ -4960,7 +4960,7 @@ nfsd_break_deleg_cb(struct file_lock *fl)
* %true: Lease conflict was resolved
* %false: Lease conflict was not resolved.
*/
-static bool nfsd_breaker_owns_lease(struct file_lock *fl)
+static bool nfsd_breaker_owns_lease(struct file_lease *fl)
{
struct nfs4_delegation *dl = fl->fl_core.flc_owner;
struct svc_rqst *rqst;
@@ -4977,7 +4977,7 @@ static bool nfsd_breaker_owns_lease(struct file_lock *fl)
}
static int
-nfsd_change_deleg_cb(struct file_lock *onlist, int arg,
+nfsd_change_deleg_cb(struct file_lease *onlist, int arg,
struct list_head *dispose)
{
struct nfs4_delegation *dp = (struct nfs4_delegation *) onlist->fl_core.flc_owner;
@@ -4991,7 +4991,7 @@ nfsd_change_deleg_cb(struct file_lock *onlist, int arg,
return -EAGAIN;
}
-static const struct lock_manager_operations nfsd_lease_mng_ops = {
+static const struct lease_manager_operations nfsd_lease_mng_ops = {
.lm_breaker_owns_lease = nfsd_breaker_owns_lease,
.lm_break = nfsd_break_deleg_cb,
.lm_change = nfsd_change_deleg_cb,
@@ -5331,18 +5331,17 @@ static bool nfsd4_cb_channel_good(struct nfs4_client *clp)
return clp->cl_minorversion && clp->cl_cb_state == NFSD4_CB_UNKNOWN;
}
-static struct file_lock *nfs4_alloc_init_lease(struct nfs4_delegation *dp,
+static struct file_lease *nfs4_alloc_init_lease(struct nfs4_delegation *dp,
int flag)
{
- struct file_lock *fl;
+ struct file_lease *fl;
- fl = locks_alloc_lock();
+ fl = locks_alloc_lease();
if (!fl)
return NULL;
fl->fl_lmops = &nfsd_lease_mng_ops;
fl->fl_core.flc_flags = FL_DELEG;
fl->fl_core.flc_type = flag == NFS4_OPEN_DELEGATE_READ? F_RDLCK: F_WRLCK;
- fl->fl_end = OFFSET_MAX;
fl->fl_core.flc_owner = (fl_owner_t)dp;
fl->fl_core.flc_pid = current->tgid;
fl->fl_core.flc_file = dp->dl_stid.sc_file->fi_deleg_file->nf_file;
@@ -5463,7 +5462,7 @@ nfs4_set_delegation(struct nfsd4_open *open, struct nfs4_ol_stateid *stp,
struct nfs4_clnt_odstate *odstate = stp->st_clnt_odstate;
struct nfs4_delegation *dp;
struct nfsd_file *nf = NULL;
- struct file_lock *fl;
+ struct file_lease *fl;
u32 dl_type;
/*
@@ -5536,7 +5535,7 @@ nfs4_set_delegation(struct nfsd4_open *open, struct nfs4_ol_stateid *stp,
status = vfs_setlease(fp->fi_deleg_file->nf_file,
fl->fl_core.flc_type, &fl, NULL);
if (fl)
- locks_free_lock(fl);
+ locks_free_lease(fl);
if (status)
goto out_clnt_odstate;
@@ -8449,7 +8448,7 @@ nfsd4_deleg_getattr_conflict(struct svc_rqst *rqstp, struct inode *inode)
{
__be32 status;
struct file_lock_context *ctx;
- struct file_lock *fl;
+ struct file_lease *fl;
struct nfs4_delegation *dp;
ctx = locks_inode_context(inode);
diff --git a/fs/smb/client/cifsfs.c b/fs/smb/client/cifsfs.c
index e902de4e475a..5eee5b00547f 100644
--- a/fs/smb/client/cifsfs.c
+++ b/fs/smb/client/cifsfs.c
@@ -1085,7 +1085,7 @@ static loff_t cifs_llseek(struct file *file, loff_t offset, int whence)
}
static int
-cifs_setlease(struct file *file, int arg, struct file_lock **lease, void **priv)
+cifs_setlease(struct file *file, int arg, struct file_lease **lease, void **priv)
{
/*
* Note that this is called by vfs setlease with i_lock held to
diff --git a/include/linux/filelock.h b/include/linux/filelock.h
index c887fce6dbf9..4f1d1a32dc50 100644
--- a/include/linux/filelock.h
+++ b/include/linux/filelock.h
@@ -27,6 +27,7 @@
#define FILE_LOCK_DEFERRED 1
struct file_lock;
+struct file_lease;
struct file_lock_operations {
void (*fl_copy_lock)(struct file_lock *, struct file_lock *);
@@ -39,14 +40,17 @@ struct lock_manager_operations {
void (*lm_put_owner)(fl_owner_t);
void (*lm_notify)(struct file_lock *); /* unblock callback */
int (*lm_grant)(struct file_lock *, int);
- bool (*lm_break)(struct file_lock *);
- int (*lm_change)(struct file_lock *, int, struct list_head *);
- void (*lm_setup)(struct file_lock *, void **);
- bool (*lm_breaker_owns_lease)(struct file_lock *);
bool (*lm_lock_expirable)(struct file_lock *cfl);
void (*lm_expire_lock)(void);
};
+struct lease_manager_operations {
+ bool (*lm_break)(struct file_lease *);
+ int (*lm_change)(struct file_lease *, int, struct list_head *);
+ void (*lm_setup)(struct file_lease *, void **);
+ bool (*lm_breaker_owns_lease)(struct file_lease *);
+};
+
struct lock_manager {
struct list_head list;
/*
@@ -110,11 +114,6 @@ struct file_lock {
loff_t fl_start;
loff_t fl_end;
- struct fasync_struct * fl_fasync; /* for lease break notifications */
- /* for lease breaks: */
- unsigned long fl_break_time;
- unsigned long fl_downgrade_time;
-
const struct file_lock_operations *fl_ops; /* Callbacks for filesystems */
const struct lock_manager_operations *fl_lmops; /* Callbacks for lockmanagers */
union {
@@ -131,6 +130,15 @@ struct file_lock {
} fl_u;
} __randomize_layout;
+struct file_lease {
+ struct file_lock_core fl_core;
+ struct fasync_struct * fl_fasync; /* for lease break notifications */
+ /* for lease breaks: */
+ unsigned long fl_break_time;
+ unsigned long fl_downgrade_time;
+ const struct lease_manager_operations *fl_lmops; /* Callbacks for lease managers */
+} __randomize_layout;
+
struct file_lock_context {
spinlock_t flc_lock;
struct list_head flc_flock;
@@ -156,7 +164,7 @@ int fcntl_getlease(struct file *filp);
void locks_free_lock_context(struct inode *inode);
void locks_free_lock(struct file_lock *fl);
void locks_init_lock(struct file_lock *);
-struct file_lock * locks_alloc_lock(void);
+struct file_lock *locks_alloc_lock(void);
void locks_copy_lock(struct file_lock *, struct file_lock *);
void locks_copy_conflock(struct file_lock *, struct file_lock *);
void locks_remove_posix(struct file *, fl_owner_t);
@@ -170,11 +178,15 @@ int vfs_lock_file(struct file *, unsigned int, struct file_lock *, struct file_l
int vfs_cancel_lock(struct file *filp, struct file_lock *fl);
bool vfs_inode_has_locks(struct inode *inode);
int locks_lock_inode_wait(struct inode *inode, struct file_lock *fl);
+
+void locks_init_lease(struct file_lease *);
+void locks_free_lease(struct file_lease *fl);
+struct file_lease *locks_alloc_lease(void);
int __break_lease(struct inode *inode, unsigned int flags, unsigned int type);
void lease_get_mtime(struct inode *, struct timespec64 *time);
-int generic_setlease(struct file *, int, struct file_lock **, void **priv);
-int vfs_setlease(struct file *, int, struct file_lock **, void **);
-int lease_modify(struct file_lock *, int, struct list_head *);
+int generic_setlease(struct file *, int, struct file_lease **, void **priv);
+int vfs_setlease(struct file *, int, struct file_lease **, void **);
+int lease_modify(struct file_lease *, int, struct list_head *);
struct notifier_block;
int lease_register_notifier(struct notifier_block *);
@@ -238,6 +250,11 @@ static inline void locks_init_lock(struct file_lock *fl)
return;
}
+static inline void locks_init_lease(struct file_lease *fl)
+{
+ return;
+}
+
static inline void locks_copy_conflock(struct file_lock *new, struct file_lock *fl)
{
return;
@@ -312,18 +329,18 @@ static inline void lease_get_mtime(struct inode *inode,
}
static inline int generic_setlease(struct file *filp, int arg,
- struct file_lock **flp, void **priv)
+ struct file_lease **flp, void **priv)
{
return -EINVAL;
}
static inline int vfs_setlease(struct file *filp, int arg,
- struct file_lock **lease, void **priv)
+ struct file_lease **lease, void **priv)
{
return -EINVAL;
}
-static inline int lease_modify(struct file_lock *fl, int arg,
+static inline int lease_modify(struct file_lease *fl, int arg,
struct list_head *dispose)
{
return -EINVAL;
diff --git a/include/linux/fs.h b/include/linux/fs.h
index ed5966a70495..162877197bf1 100644
--- a/include/linux/fs.h
+++ b/include/linux/fs.h
@@ -1064,6 +1064,7 @@ struct file *get_file_active(struct file **f);
typedef void *fl_owner_t;
struct file_lock;
+struct file_lease;
/* The following constant reflects the upper bound of the file/locking space */
#ifndef OFFSET_MAX
@@ -2005,7 +2006,7 @@ struct file_operations {
ssize_t (*splice_write)(struct pipe_inode_info *, struct file *, loff_t *, size_t, unsigned int);
ssize_t (*splice_read)(struct file *, loff_t *, struct pipe_inode_info *, size_t, unsigned int);
void (*splice_eof)(struct file *file);
- int (*setlease)(struct file *, int, struct file_lock **, void **);
+ int (*setlease)(struct file *, int, struct file_lease **, void **);
long (*fallocate)(struct file *file, int mode, loff_t offset,
loff_t len);
void (*show_fdinfo)(struct seq_file *m, struct file *f);
@@ -3238,7 +3239,7 @@ extern int simple_write_begin(struct file *file, struct address_space *mapping,
extern const struct address_space_operations ram_aops;
extern int always_delete_dentry(const struct dentry *);
extern struct inode *alloc_anon_inode(struct super_block *);
-extern int simple_nosetlease(struct file *, int, struct file_lock **, void **);
+extern int simple_nosetlease(struct file *, int, struct file_lease **, void **);
extern const struct dentry_operations simple_dentry_operations;
extern struct dentry *simple_lookup(struct inode *, struct dentry *, unsigned int flags);
diff --git a/include/trace/events/filelock.h b/include/trace/events/filelock.h
index c0b92e888d16..c19d0e2ae677 100644
--- a/include/trace/events/filelock.h
+++ b/include/trace/events/filelock.h
@@ -117,12 +117,12 @@ DEFINE_EVENT(filelock_lock, flock_lock_inode,
TP_ARGS(inode, fl, ret));
DECLARE_EVENT_CLASS(filelock_lease,
- TP_PROTO(struct inode *inode, struct file_lock *fl),
+ TP_PROTO(struct inode *inode, struct file_lease *fl),
TP_ARGS(inode, fl),
TP_STRUCT__entry(
- __field(struct file_lock *, fl)
+ __field(struct file_lease *, fl)
__field(unsigned long, i_ino)
__field(dev_t, s_dev)
__field(struct file_lock_core *, blocker)
@@ -153,23 +153,23 @@ DECLARE_EVENT_CLASS(filelock_lease,
__entry->break_time, __entry->downgrade_time)
);
-DEFINE_EVENT(filelock_lease, break_lease_noblock, TP_PROTO(struct inode *inode, struct file_lock *fl),
+DEFINE_EVENT(filelock_lease, break_lease_noblock, TP_PROTO(struct inode *inode, struct file_lease *fl),
TP_ARGS(inode, fl));
-DEFINE_EVENT(filelock_lease, break_lease_block, TP_PROTO(struct inode *inode, struct file_lock *fl),
+DEFINE_EVENT(filelock_lease, break_lease_block, TP_PROTO(struct inode *inode, struct file_lease *fl),
TP_ARGS(inode, fl));
-DEFINE_EVENT(filelock_lease, break_lease_unblock, TP_PROTO(struct inode *inode, struct file_lock *fl),
+DEFINE_EVENT(filelock_lease, break_lease_unblock, TP_PROTO(struct inode *inode, struct file_lease *fl),
TP_ARGS(inode, fl));
-DEFINE_EVENT(filelock_lease, generic_delete_lease, TP_PROTO(struct inode *inode, struct file_lock *fl),
+DEFINE_EVENT(filelock_lease, generic_delete_lease, TP_PROTO(struct inode *inode, struct file_lease *fl),
TP_ARGS(inode, fl));
-DEFINE_EVENT(filelock_lease, time_out_leases, TP_PROTO(struct inode *inode, struct file_lock *fl),
+DEFINE_EVENT(filelock_lease, time_out_leases, TP_PROTO(struct inode *inode, struct file_lease *fl),
TP_ARGS(inode, fl));
TRACE_EVENT(generic_add_lease,
- TP_PROTO(struct inode *inode, struct file_lock *fl),
+ TP_PROTO(struct inode *inode, struct file_lease *fl),
TP_ARGS(inode, fl),
@@ -204,7 +204,7 @@ TRACE_EVENT(generic_add_lease,
);
TRACE_EVENT(leases_conflict,
- TP_PROTO(bool conflict, struct file_lock *lease, struct file_lock *breaker),
+ TP_PROTO(bool conflict, struct file_lease *lease, struct file_lease *breaker),
TP_ARGS(conflict, lease, breaker),
--
2.43.0
These don't add a lot of value over just open-coding the flag check.
Suggested-by: NeilBrown <[email protected]>
Signed-off-by: Jeff Layton <[email protected]>
---
fs/locks.c | 32 +++++++++++++++-----------------
1 file changed, 15 insertions(+), 17 deletions(-)
diff --git a/fs/locks.c b/fs/locks.c
index 1eceaa56e47f..87212f86eca9 100644
--- a/fs/locks.c
+++ b/fs/locks.c
@@ -70,12 +70,6 @@
#include <linux/uaccess.h>
-#define IS_POSIX(fl) (fl->fl_flags & FL_POSIX)
-#define IS_FLOCK(fl) (fl->fl_flags & FL_FLOCK)
-#define IS_LEASE(fl) (fl->fl_flags & (FL_LEASE|FL_DELEG|FL_LAYOUT))
-#define IS_OFDLCK(fl) (fl->fl_flags & FL_OFDLCK)
-#define IS_REMOTELCK(fl) (fl->fl_pid <= 0)
-
static bool lease_breaking(struct file_lock *fl)
{
return fl->fl_flags & (FL_UNLOCK_PENDING | FL_DOWNGRADE_PENDING);
@@ -767,7 +761,7 @@ static void __locks_insert_block(struct file_lock *blocker,
}
waiter->fl_blocker = blocker;
list_add_tail(&waiter->fl_blocked_member, &blocker->fl_blocked_requests);
- if (IS_POSIX(blocker) && !IS_OFDLCK(blocker))
+ if ((blocker->fl_flags & (FL_POSIX|FL_OFDLCK)) == FL_POSIX)
locks_insert_global_blocked(waiter);
/* The requests in waiter->fl_blocked are known to conflict with
@@ -999,7 +993,7 @@ static int posix_locks_deadlock(struct file_lock *caller_fl,
* This deadlock detector can't reasonably detect deadlocks with
* FL_OFDLCK locks, since they aren't owned by a process, per-se.
*/
- if (IS_OFDLCK(caller_fl))
+ if (caller_fl->fl_flags & FL_OFDLCK)
return 0;
while ((block_fl = what_owner_is_waiting_for(block_fl))) {
@@ -2150,10 +2144,13 @@ static pid_t locks_translate_pid(struct file_lock *fl, struct pid_namespace *ns)
pid_t vnr;
struct pid *pid;
- if (IS_OFDLCK(fl))
+ if (fl->fl_flags & FL_OFDLCK)
return -1;
- if (IS_REMOTELCK(fl))
+
+ /* Remote locks report a negative pid value */
+ if (fl->fl_pid <= 0)
return fl->fl_pid;
+
/*
* If the flock owner process is dead and its pid has been already
* freed, the translation below won't work, but we still want to show
@@ -2697,7 +2694,7 @@ static void lock_get_status(struct seq_file *f, struct file_lock *fl,
struct inode *inode = NULL;
unsigned int pid;
struct pid_namespace *proc_pidns = proc_pid_ns(file_inode(f->file)->i_sb);
- int type;
+ int type = fl->fl_type;
pid = locks_translate_pid(fl, proc_pidns);
/*
@@ -2714,19 +2711,21 @@ static void lock_get_status(struct seq_file *f, struct file_lock *fl,
if (repeat)
seq_printf(f, "%*s", repeat - 1 + (int)strlen(pfx), pfx);
- if (IS_POSIX(fl)) {
+ if (fl->fl_flags & FL_POSIX) {
if (fl->fl_flags & FL_ACCESS)
seq_puts(f, "ACCESS");
- else if (IS_OFDLCK(fl))
+ else if (fl->fl_flags & FL_OFDLCK)
seq_puts(f, "OFDLCK");
else
seq_puts(f, "POSIX ");
seq_printf(f, " %s ",
(inode == NULL) ? "*NOINODE*" : "ADVISORY ");
- } else if (IS_FLOCK(fl)) {
+ } else if (fl->fl_flags & FL_FLOCK) {
seq_puts(f, "FLOCK ADVISORY ");
- } else if (IS_LEASE(fl)) {
+ } else if (fl->fl_flags & (FL_LEASE|FL_DELEG|FL_LAYOUT)) {
+ type = target_leasetype(fl);
+
if (fl->fl_flags & FL_DELEG)
seq_puts(f, "DELEG ");
else
@@ -2741,7 +2740,6 @@ static void lock_get_status(struct seq_file *f, struct file_lock *fl,
} else {
seq_puts(f, "UNKNOWN UNKNOWN ");
}
- type = IS_LEASE(fl) ? target_leasetype(fl) : fl->fl_type;
seq_printf(f, "%s ", (type == F_WRLCK) ? "WRITE" :
(type == F_RDLCK) ? "READ" : "UNLCK");
@@ -2753,7 +2751,7 @@ static void lock_get_status(struct seq_file *f, struct file_lock *fl,
} else {
seq_printf(f, "%d <none>:0 ", pid);
}
- if (IS_POSIX(fl)) {
+ if (fl->fl_flags & FL_POSIX) {
if (fl->fl_end == OFFSET_MAX)
seq_printf(f, "%Ld EOF\n", fl->fl_start);
else
--
2.43.0
Most of the existing APIs have remained the same, but subsystems that
access file_lock fields directly need to reach into struct
file_lock_core now.
Signed-off-by: Jeff Layton <[email protected]>
---
fs/gfs2/file.c | 17 ++++++++---------
1 file changed, 8 insertions(+), 9 deletions(-)
diff --git a/fs/gfs2/file.c b/fs/gfs2/file.c
index 9e7cd054e924..dc0c4f7d7cc7 100644
--- a/fs/gfs2/file.c
+++ b/fs/gfs2/file.c
@@ -15,7 +15,6 @@
#include <linux/mm.h>
#include <linux/mount.h>
#include <linux/fs.h>
-#define _NEED_FILE_LOCK_FIELD_MACROS
#include <linux/filelock.h>
#include <linux/gfs2_ondisk.h>
#include <linux/falloc.h>
@@ -1441,10 +1440,10 @@ static int gfs2_lock(struct file *file, int cmd, struct file_lock *fl)
struct gfs2_sbd *sdp = GFS2_SB(file->f_mapping->host);
struct lm_lockstruct *ls = &sdp->sd_lockstruct;
- if (!(fl->fl_flags & FL_POSIX))
+ if (!(fl->fl_core.flc_flags & FL_POSIX))
return -ENOLCK;
if (gfs2_withdrawing_or_withdrawn(sdp)) {
- if (fl->fl_type == F_UNLCK)
+ if (fl->fl_core.flc_type == F_UNLCK)
locks_lock_file_wait(file, fl);
return -EIO;
}
@@ -1452,7 +1451,7 @@ static int gfs2_lock(struct file *file, int cmd, struct file_lock *fl)
return dlm_posix_cancel(ls->ls_dlm, ip->i_no_addr, file, fl);
else if (IS_GETLK(cmd))
return dlm_posix_get(ls->ls_dlm, ip->i_no_addr, file, fl);
- else if (fl->fl_type == F_UNLCK)
+ else if (fl->fl_core.flc_type == F_UNLCK)
return dlm_posix_unlock(ls->ls_dlm, ip->i_no_addr, file, fl);
else
return dlm_posix_lock(ls->ls_dlm, ip->i_no_addr, file, cmd, fl);
@@ -1484,7 +1483,7 @@ static int do_flock(struct file *file, int cmd, struct file_lock *fl)
int error = 0;
int sleeptime;
- state = (fl->fl_type == F_WRLCK) ? LM_ST_EXCLUSIVE : LM_ST_SHARED;
+ state = (fl->fl_core.flc_type == F_WRLCK) ? LM_ST_EXCLUSIVE : LM_ST_SHARED;
flags = GL_EXACT | GL_NOPID;
if (!IS_SETLKW(cmd))
flags |= LM_FLAG_TRY_1CB;
@@ -1496,8 +1495,8 @@ static int do_flock(struct file *file, int cmd, struct file_lock *fl)
if (fl_gh->gh_state == state)
goto out;
locks_init_lock(&request);
- request.fl_type = F_UNLCK;
- request.fl_flags = FL_FLOCK;
+ request.fl_core.flc_type = F_UNLCK;
+ request.fl_core.flc_flags = FL_FLOCK;
locks_lock_file_wait(file, &request);
gfs2_glock_dq(fl_gh);
gfs2_holder_reinit(state, flags, fl_gh);
@@ -1558,10 +1557,10 @@ static void do_unflock(struct file *file, struct file_lock *fl)
static int gfs2_flock(struct file *file, int cmd, struct file_lock *fl)
{
- if (!(fl->fl_flags & FL_FLOCK))
+ if (!(fl->fl_core.flc_flags & FL_FLOCK))
return -ENOLCK;
- if (fl->fl_type == F_UNLCK) {
+ if (fl->fl_core.flc_type == F_UNLCK) {
do_unflock(file, fl);
return 0;
} else {
--
2.43.0
On Thu, Jan 25, 2024 at 05:42:41AM -0500, Jeff Layton wrote:
> [...]
v2 looks nicer.
I would add a few list handling primitives, as I see enough
instances of list_for_each_entry, list_for_each_entry_safe,
list_first_entry, and list_first_entry_or_null on fl_core.flc_list
to make it worth having those.
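For example, a wrapper along these lines (the names here are just made
up to illustrate the idea):

	#define for_each_file_lock(_fl, _head) \
		list_for_each_entry(_fl, _head, fl_core.flc_list)

plus a _safe variant to match, so callers never have to spell out
"fl_core.flc_list" themselves.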
Also, there doesn't seem to be benefit for API consumers to have to
understand the internal structure of struct file_lock/lease to reach
into fl_core. Having accessor functions for common fields like
fl_type and fl_flags could be cleaner.
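Something like this, say (helper names and signatures invented for the
example, not something already in the series):

	static inline unsigned int lock_flags(const struct file_lock *fl)
	{
		return fl->fl_core.flc_flags;
	}

	static inline unsigned char lock_type(const struct file_lock *fl)
	{
		return fl->fl_core.flc_type;
	}

with struct file_lease equivalents to match, so filesystems never need
to mention fl_core at all.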
For the series:
Reviewed-by: Chuck Lever <[email protected]>
For the nfsd and lockd parts:
Acked-by: Chuck Lever <[email protected]>
--
Chuck Lever
On Thu, 2024-01-25 at 09:57 -0500, Chuck Lever wrote:
> On Thu, Jan 25, 2024 at 05:42:41AM -0500, Jeff Layton wrote:
> > [...]
>
> v2 looks nicer.
>
> I would add a few list handling primitives, as I see enough
> instances of list_for_each_entry, list_for_each_entry_safe,
> list_first_entry, and list_first_entry_or_null on fl_core.flc_list
> to make it worth having those.
>
> Also, there doesn't seem to be benefit for API consumers to have to
> understand the internal structure of struct file_lock/lease to reach
> into fl_core. Having accessor functions for common fields like
> fl_type and fl_flags could be cleaner.
>
That is a good suggestion. I had considered it before and figured "why
bother", but I think that would make things simpler.
I'll plan to do a v3 that has more helpers. Possibly we can just convert
some of the subsystems ahead of time and avoid some churn. Stay tuned...
--
Jeff Layton <[email protected]>
On Fri, 26 Jan 2024, Chuck Lever wrote:
> On Thu, Jan 25, 2024 at 05:42:41AM -0500, Jeff Layton wrote:
> > [...]
>
> v2 looks nicer.
>
> I would add a few list handling primitives, as I see enough
> instances of list_for_each_entry, list_for_each_entry_safe,
> list_first_entry, and list_first_entry_or_null on fl_core.flc_list
> to make it worth having those.
>
> Also, there doesn't seem to be benefit for API consumers to have to
> understand the internal structure of struct file_lock/lease to reach
> into fl_core. Having accessor functions for common fields like
> fl_type and fl_flags could be cleaner.
I'm not a big fan of accessor functions. They don't *look* like normal
field access, so a casual reader has to go find out what the function
does, just to find that it doesn't really do anything.
But neither am I a fan of requiring filesystems to use
"fl_core.flc_foo". As you say, reaching into fl_core isn't ideal.
It would be nice if we could make fl_core an anonymous structure, but
that really requires -fplan9-extensions which Linus is on-record as not
liking.
Unless...
How horrible would it be to use
union {
struct file_lock_core flc_core;
struct file_lock_core;
};
I think that only requires -fms-extensions, which Linus was less
negative towards. That would allow access to the members of
file_lock_core without the "flc_core." prefix, but would still allow
getting the address of 'flc_core'.
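Concretely, I'm thinking of something like this (completely untested):

	struct file_lock {
		union {
			struct file_lock_core flc_core;
			struct file_lock_core;	/* anonymous: flc_* fields visible directly */
		};
		loff_t fl_start;
		loff_t fl_end;
		/* ... remaining lock-only fields ... */
	};

so filesystems could write fl->flc_flags or fl->flc_type directly, while
fs/locks.c could still pass &fl->flc_core around where it needs the
embedded struct.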
Maybe it's too ugly.
While fl_type and fl_flags are most common, fl_pid, fl_owner, fl_file
and even fl_wait are also used. Having accessor functions for all of those
would be too much I think.
Maybe higher-level functions which meet the real need of the filesystem
might be a useful approach:
locks_wakeup(lock)
locks_wait_interruptible(lock, condition)
locks_posix_init(lock, type, pid, ...) ??
locks_is_unlock() - fl_type is compared with F_UNLCK 22 times.
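For a couple of those, e.g. (untested, and assuming the v2 flc_ field
names):

	static inline bool locks_is_unlock(const struct file_lock *fl)
	{
		return fl->fl_core.flc_type == F_UNLCK;
	}

	static inline void locks_wakeup(struct file_lock *fl)
	{
		wake_up(&fl->fl_core.flc_wait);
	}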
While those are probably a good idea, they don't really help much
with reducing the need for accessor functions.
I don't suppose we could just leave the #defines in place? Probably not
a good idea.
Maybe spell "fl_core" as "c"? lk->c.flc_flags ???
And I wonder if we could have a new fl_flag for 'FOREIGN' locks rather
than encoding that flag in the sign of the pid. That seems a bit ...
clunky?
NeilBrown