Return-Path:
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S1752806AbdGRX1n (ORCPT );
	Tue, 18 Jul 2017 19:27:43 -0400
Received: from mx2.suse.de ([195.135.220.15]:58849 "EHLO mx1.suse.de"
	rhost-flags-OK-OK-OK-FAIL) by vger.kernel.org with ESMTP
	id S1751846AbdGRX1l (ORCPT );
	Tue, 18 Jul 2017 19:27:41 -0400
From: NeilBrown
To: Oleg Drokin , Greg Kroah-Hartman , Andreas Dilger
Date: Wed, 19 Jul 2017 09:26:47 +1000
Subject: [PATCH 04/12] staging: lustre: ldlm: remove 'first_enq' arg from
	ldlm_process_flock_lock()
Cc: Linux Kernel Mailing List , Lustre Development List
Message-ID: <150042040748.20736.16657316764497795112.stgit@noble>
In-Reply-To: <150041997277.20736.17112251996623587423.stgit@noble>
References: <150041997277.20736.17112251996623587423.stgit@noble>
User-Agent: StGit/0.17.1-dirty
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 7bit
Sender: linux-kernel-owner@vger.kernel.org
List-ID:
X-Mailing-List: linux-kernel@vger.kernel.org
Content-Length: 1933
Lines: 50

it is only ever set to '1', so we can just assume that and remove the code.

Signed-off-by: NeilBrown
---
 drivers/staging/lustre/lustre/ldlm/ldlm_flock.c | 15 ++-------------
 1 file changed, 2 insertions(+), 13 deletions(-)

diff --git a/drivers/staging/lustre/lustre/ldlm/ldlm_flock.c b/drivers/staging/lustre/lustre/ldlm/ldlm_flock.c
index b7f28b39c7b3..8ba3eaf49c65 100644
--- a/drivers/staging/lustre/lustre/ldlm/ldlm_flock.c
+++ b/drivers/staging/lustre/lustre/ldlm/ldlm_flock.c
@@ -121,15 +121,9 @@ ldlm_flock_destroy(struct ldlm_lock *lock, enum ldlm_mode mode, __u64 flags)
  * It is also responsible for splitting a lock if a portion of the lock
  * is released.
  *
- * If \a first_enq is 0 (ie, called from ldlm_reprocess_queue):
- *   - blocking ASTs have already been sent
- *
- * If \a first_enq is 1 (ie, called from ldlm_lock_enqueue):
- *   - blocking ASTs have not been sent yet, so list of conflicting locks
- *     would be collected and ASTs sent.
  */
 static int ldlm_process_flock_lock(struct ldlm_lock *req, __u64 *flags,
-				   int first_enq, enum ldlm_error *err,
+				   enum ldlm_error *err,
 				   struct list_head *work_list)
 {
 	struct ldlm_resource *res = req->l_resource;
@@ -197,11 +191,6 @@ static int ldlm_process_flock_lock(struct ldlm_lock *req, __u64 *flags,
 		if (!ldlm_flocks_overlap(lock, req))
 			continue;
 
-		if (!first_enq) {
-			reprocess_failed = 1;
-			continue;
-		}
-
 		if (*flags & LDLM_FL_BLOCK_NOWAIT) {
 			ldlm_flock_destroy(req, mode, *flags);
 			*err = -EAGAIN;
@@ -605,7 +594,7 @@ ldlm_flock_completion_ast(struct ldlm_lock *lock, __u64 flags, void *data)
 		/* We need to reprocess the lock to do merges or splits
 		 * with existing locks owned by this process.
 		 */
-		ldlm_process_flock_lock(lock, &noreproc, 1, &err, NULL);
+		ldlm_process_flock_lock(lock, &noreproc, &err, NULL);
 	}
 	unlock_res_and_lock(lock);
 	return rc;