From: Trond Myklebust
To: Chuck Lever, linux-nfs@vger.kernel.org
Subject: [PATCH v2 06/28] NFS: Fix an ABBA issue in nfs_lock_and_join_requests()
Date: Thu, 3 Aug 2017 09:45:01 -0400
Message-Id: <20170803134523.4922-7-trond.myklebust@primarydata.com>
In-Reply-To: <20170803134523.4922-6-trond.myklebust@primarydata.com>
References: <20170803134523.4922-1-trond.myklebust@primarydata.com>
 <20170803134523.4922-2-trond.myklebust@primarydata.com>
 <20170803134523.4922-3-trond.myklebust@primarydata.com>
 <20170803134523.4922-4-trond.myklebust@primarydata.com>
 <20170803134523.4922-5-trond.myklebust@primarydata.com>
 <20170803134523.4922-6-trond.myklebust@primarydata.com>

All other callers of nfs_page_group_lock() appear to already hold the
page lock on the head page, so taking the locks in the opposite order
here is inefficient, although not deadlock prone, since we roll back
all locks on contention.

Signed-off-by: Trond Myklebust
---
 fs/nfs/write.c | 29 +++++++++++++++++------------
 1 file changed, 17 insertions(+), 12 deletions(-)

diff --git a/fs/nfs/write.c b/fs/nfs/write.c
index 1ca759719429..c940e615f5dc 100644
--- a/fs/nfs/write.c
+++ b/fs/nfs/write.c
@@ -383,7 +383,7 @@ nfs_unroll_locks_and_wait(struct inode *inode, struct nfs_page *head,
 	int ret;
 
 	/* relinquish all the locks successfully grabbed this run */
-	for (tmp = head ; tmp != req; tmp = tmp->wb_this_page)
+	for (tmp = head->wb_this_page ; tmp != req; tmp = tmp->wb_this_page)
 		nfs_unlock_request(tmp);
 
 	WARN_ON_ONCE(test_bit(PG_TEARDOWN, &req->wb_flags));
@@ -395,7 +395,7 @@ nfs_unroll_locks_and_wait(struct inode *inode, struct nfs_page *head,
 	spin_unlock(&inode->i_lock);
 
 	/* release ref from nfs_page_find_head_request_locked */
-	nfs_release_request(head);
+	nfs_unlock_and_release_request(head);
 
 	ret = nfs_wait_on_request(req);
 	nfs_release_request(req);
@@ -484,10 +484,6 @@ nfs_lock_and_join_requests(struct page *page)
 	int ret;
 
 try_again:
-	total_bytes = 0;
-
-	WARN_ON_ONCE(destroy_list);
-
 	spin_lock(&inode->i_lock);
 
 	/*
@@ -502,6 +498,16 @@ nfs_lock_and_join_requests(struct page *page)
 		return NULL;
 	}
 
+	/* lock the page head first in order to avoid an ABBA inefficiency */
+	if (!nfs_lock_request(head)) {
+		spin_unlock(&inode->i_lock);
+		ret = nfs_wait_on_request(head);
+		nfs_release_request(head);
+		if (ret < 0)
+			return ERR_PTR(ret);
+		goto try_again;
+	}
+
 	/* holding inode lock, so always make a non-blocking call to try the
 	 * page group lock */
 	ret = nfs_page_group_lock(head, true);
@@ -509,13 +515,14 @@ nfs_lock_and_join_requests(struct page *page)
 		spin_unlock(&inode->i_lock);
 
 		nfs_page_group_lock_wait(head);
-		nfs_release_request(head);
+		nfs_unlock_and_release_request(head);
 		goto try_again;
 	}
 
 	/* lock each request in the page group */
-	subreq = head;
-	do {
+	total_bytes = head->wb_bytes;
+	for (subreq = head->wb_this_page; subreq != head;
+			subreq = subreq->wb_this_page) {
 		/*
 		 * Subrequests are always contiguous, non overlapping
 		 * and in order - but may be repeated (mirrored writes).
@@ -541,9 +548,7 @@ nfs_lock_and_join_requests(struct page *page)
 			return ERR_PTR(ret);
 		}
 
-
-		subreq = subreq->wb_this_page;
-	} while (subreq != head);
+	}
 
 	/* Now that all requests are locked, make sure they aren't on any list.
 	 * Commit list removal accounting is done after locks are dropped */
--
2.13.3
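
For anyone reading the series from the archive: the locking discipline the
patch settles on is "take the head request lock first, then try the group
lock non-blocking under the inode spinlock, and unroll everything on
contention". A minimal standalone sketch of that trylock-and-unroll idiom,
written as userspace C with pthreads (illustrative only, not part of the
patch; lock_a, lock_b and lock_in_order are made-up names, nothing below is
from fs/nfs):

/*
 * Sketch only: lock_a stands in for the head request lock, lock_b for
 * the page group lock. The point is that we never block on B while
 * holding A; on contention we drop A and retry, so an ABBA deadlock
 * cannot occur - the opposite ordering merely costs a retry.
 */
#include <pthread.h>
#include <sched.h>
#include <stdio.h>

static pthread_mutex_t lock_a = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t lock_b = PTHREAD_MUTEX_INITIALIZER;

static void lock_in_order(void)
{
	for (;;) {
		pthread_mutex_lock(&lock_a);
		/* non-blocking attempt, like nfs_page_group_lock(head, true) */
		if (pthread_mutex_trylock(&lock_b) == 0)
			return;	/* holding A then B, the agreed order */
		/* contention: unroll and retry instead of blocking */
		pthread_mutex_unlock(&lock_a);
		sched_yield();
	}
}

int main(void)
{
	lock_in_order();
	printf("took A then B without risking an ABBA deadlock\n");
	pthread_mutex_unlock(&lock_b);
	pthread_mutex_unlock(&lock_a);
	return 0;
}

Compile with -pthread. The patch makes nfs_lock_and_join_requests() take the
locks in the same order the other nfs_page_group_lock() callers already use,
so the unroll path above becomes the rare case rather than a routine source
of retries.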