From: NeilBrown
To: Oleg Drokin, Greg Kroah-Hartman, James Simmons, Andreas Dilger
Cc: Linux Kernel Mailing List, Lustre Development List
Date: Wed, 06 Jun 2018 16:05:19 +1000
Subject: [PATCH 04/11] staging: lustre: convert range_lock to linux interval_trees.
Message-ID: <152826511902.16761.12169723574817283239.stgit@noble>
In-Reply-To: <152826510267.16761.14361003167157833896.stgit@noble>
References: <152826510267.16761.14361003167157833896.stgit@noble>
User-Agent: StGit/0.17.1-dirty

Linux has a fully-generic interval tree implementation which can be
tailored to different use cases.  Use it for range_lock rather than
the lustre version.
This allows us to get rid of some callbacks and generally
simplifies the code.

We cannot use the pre-built version in lib/interval_tree.c as we
need 64-bit endpoints and it only provides "unsigned long".

Signed-off-by: NeilBrown
---
 drivers/staging/lustre/lustre/llite/file.c       |  8 +-
 drivers/staging/lustre/lustre/llite/range_lock.c | 94 ++++++++--------------
 drivers/staging/lustre/lustre/llite/range_lock.h | 17 ++--
 3 files changed, 47 insertions(+), 72 deletions(-)

diff --git a/drivers/staging/lustre/lustre/llite/file.c b/drivers/staging/lustre/lustre/llite/file.c
index 02295931883b..e888ed6e74bc 100644
--- a/drivers/staging/lustre/lustre/llite/file.c
+++ b/drivers/staging/lustre/lustre/llite/file.c
@@ -1086,8 +1086,8 @@ ll_file_io_generic(const struct lu_env *env, struct vvp_io_args *args,
 	      (iot == CIT_READ && (file->f_flags & O_DIRECT))) &&
 	    !(vio->vui_fd->fd_flags & LL_FILE_GROUP_LOCKED)) {
 		CDEBUG(D_VFSTRACE, "Range lock [%llu, %llu]\n",
-		       range.rl_node.in_extent.start,
-		       range.rl_node.in_extent.end);
+		       range.rl_start,
+		       range.rl_last);
 		rc = range_lock(&lli->lli_write_tree, &range);
 		if (rc < 0)
 			goto out;
@@ -1099,8 +1099,8 @@ ll_file_io_generic(const struct lu_env *env, struct vvp_io_args *args,
 		ll_cl_remove(file, env);
 		if (range_locked) {
 			CDEBUG(D_VFSTRACE, "Range unlock [%llu, %llu]\n",
-			       range.rl_node.in_extent.start,
-			       range.rl_node.in_extent.end);
+			       range.rl_start,
+			       range.rl_last);
 			range_unlock(&lli->lli_write_tree, &range);
 		}
 	} else {
diff --git a/drivers/staging/lustre/lustre/llite/range_lock.c b/drivers/staging/lustre/lustre/llite/range_lock.c
index eaa23f4c414e..acdb0dc00a89 100644
--- a/drivers/staging/lustre/lustre/llite/range_lock.c
+++ b/drivers/staging/lustre/lustre/llite/range_lock.c
@@ -37,7 +37,13 @@
 #include "range_lock.h"
 #include
 #include
+#include
+#define START(node) ((node)->rl_start)
+#define LAST(node) ((node)->rl_last)
+
+INTERVAL_TREE_DEFINE(struct range_lock, rl_rb, __u64, __subtree_last,
+		     START, LAST, static, range);

 /**
  * Initialize a range lock tree
  *
@@ -48,7 +54,7 @@
  */
 void range_lock_tree_init(struct range_lock_tree *tree)
 {
-	tree->rlt_root = NULL;
+	tree->rlt_root = RB_ROOT_CACHED;
 	tree->rlt_sequence = 0;
 	spin_lock_init(&tree->rlt_lock);
 }
@@ -65,43 +71,19 @@ void range_lock_tree_init(struct range_lock_tree *tree)
  */
 int range_lock_init(struct range_lock *lock, __u64 start, __u64 end)
 {
-	int rc;
+	RB_CLEAR_NODE(&lock->rl_rb);

-	memset(&lock->rl_node, 0, sizeof(lock->rl_node));
 	if (end != LUSTRE_EOF)
 		end >>= PAGE_SHIFT;
-	rc = interval_set(&lock->rl_node, start >> PAGE_SHIFT, end);
-	if (rc)
-		return rc;
+	lock->rl_start = start >> PAGE_SHIFT;
+	lock->rl_last = end;
+	if (lock->rl_start > lock->rl_last)
+		return -ERANGE;

 	lock->rl_task = NULL;
 	lock->rl_blocking_ranges = 0;
 	lock->rl_sequence = 0;
-	return rc;
-}
-
-/**
- * Helper function of range_unlock()
- *
- * \param node [in]	a range lock found overlapped during interval node
- *			search
- * \param arg [in]	the range lock to be tested
- *
- * \retval INTERVAL_ITER_CONT	indicate to continue the search for next
- *				overlapping range node
- * \retval INTERVAL_ITER_STOP	indicate to stop the search
- */
-static enum interval_iter range_unlock_cb(struct interval_node *node, void *arg)
-{
-	struct range_lock *lock = arg;
-	struct range_lock *overlap = node2rangelock(node);
-
-	if (overlap->rl_sequence > lock->rl_sequence) {
-		--overlap->rl_blocking_ranges;
-		if (overlap->rl_blocking_ranges == 0)
-			wake_up_process(overlap->rl_task);
-	}
-	return INTERVAL_ITER_CONT;
+	return 0;
 }

 /**
@@ -112,36 +94,27 @@ static enum interval_iter range_unlock_cb(struct interval_node *node, void *arg)
  *
  * If this lock has been granted, relase it; if not, just delete it from
  * the tree or the same region lock list. Wake up those locks only blocked
- * by this lock through range_unlock_cb().
+ * by this lock.
  */
 void range_unlock(struct range_lock_tree *tree, struct range_lock *lock)
 {
-	spin_lock(&tree->rlt_lock);
-	LASSERT(interval_is_intree(&lock->rl_node));
-	interval_erase(&lock->rl_node, &tree->rlt_root);
+	struct range_lock *overlap;

-	interval_search(tree->rlt_root, &lock->rl_node.in_extent,
-			range_unlock_cb, lock);
-	spin_unlock(&tree->rlt_lock);
-}
+	spin_lock(&tree->rlt_lock);
+	LASSERT(!RB_EMPTY_NODE(&lock->rl_rb));
+	range_remove(lock, &tree->rlt_root);

-/**
- * Helper function of range_lock()
- *
- * \param node [in]	a range lock found overlapped during interval node
- *			search
- * \param arg [in]	the range lock to be tested
- *
- * \retval INTERVAL_ITER_CONT	indicate to continue the search for next
- *				overlapping range node
- * \retval INTERVAL_ITER_STOP	indicate to stop the search
- */
-static enum interval_iter range_lock_cb(struct interval_node *node, void *arg)
-{
-	struct range_lock *lock = arg;
+	for (overlap = range_iter_first(&tree->rlt_root,
+					lock->rl_start, lock->rl_last);
+	     overlap;
+	     overlap = range_iter_next(overlap, lock->rl_start, lock->rl_last))
+		if (overlap->rl_sequence > lock->rl_sequence) {
+			--overlap->rl_blocking_ranges;
+			if (overlap->rl_blocking_ranges == 0)
+				wake_up_process(overlap->rl_task);
+		}

-	lock->rl_blocking_ranges++;
-	return INTERVAL_ITER_CONT;
+	spin_unlock(&tree->rlt_lock);
 }

 /**
@@ -160,15 +133,20 @@ static enum interval_iter range_lock_cb(struct interval_node *node, void *arg)
 int range_lock(struct range_lock_tree *tree, struct range_lock *lock)
 {
 	int rc = 0;
+	struct range_lock *it;

 	spin_lock(&tree->rlt_lock);
 	/*
 	 * We need to check for all conflicting intervals
 	 * already in the tree.
 	 */
-	interval_search(tree->rlt_root, &lock->rl_node.in_extent,
-			range_lock_cb, lock);
-	interval_insert(&lock->rl_node, &tree->rlt_root);
+	for (it = range_iter_first(&tree->rlt_root,
+				   lock->rl_start, lock->rl_last);
+	     it;
+	     it = range_iter_next(it, lock->rl_start, lock->rl_last))
+		lock->rl_blocking_ranges++;
+
+	range_insert(lock, &tree->rlt_root);
 	lock->rl_sequence = ++tree->rlt_sequence;

 	while (lock->rl_blocking_ranges > 0) {
diff --git a/drivers/staging/lustre/lustre/llite/range_lock.h b/drivers/staging/lustre/lustre/llite/range_lock.h
index 10ef1a995d26..2a0704d21481 100644
--- a/drivers/staging/lustre/lustre/llite/range_lock.h
+++ b/drivers/staging/lustre/lustre/llite/range_lock.h
@@ -38,10 +38,12 @@
 #define _RANGE_LOCK_H

 #include
-#include
+#include

 struct range_lock {
-	struct interval_node	rl_node;
+	struct rb_node		rl_rb;
+	__u64			rl_start, rl_last;
+	__u64			__subtree_last;
 	/**
 	 * Process to enqueue this lock.
 	 */
@@ -57,15 +59,10 @@ struct range_lock {
 	__u64			rl_sequence;
 };

-static inline struct range_lock *node2rangelock(const struct interval_node *n)
-{
-	return container_of(n, struct range_lock, rl_node);
-}
-
 struct range_lock_tree {
-	struct interval_node	*rlt_root;
-	spinlock_t		 rlt_lock;	/* protect range lock tree */
-	__u64			 rlt_sequence;
+	struct rb_root_cached	rlt_root;
+	spinlock_t		rlt_lock;	/* protect range lock tree */
+	__u64			rlt_sequence;
 };

 void range_lock_tree_init(struct range_lock_tree *tree);