From: Matthew Wilcox
To: linux-fsdevel@vger.kernel.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org
Cc: Matthew Wilcox
Subject: [PATCH 47/62] xfs: Convert mru cache to XArray
Date: Wed, 22 Nov 2017 13:07:24 -0800
Message-Id: <20171122210739.29916-48-willy@infradead.org>
In-Reply-To: <20171122210739.29916-1-willy@infradead.org>
References: <20171122210739.29916-1-willy@infradead.org>

From: Matthew Wilcox

This eliminates a call to radix_tree_preload().

Signed-off-by: Matthew Wilcox
---
 fs/xfs/xfs_mru_cache.c | 71 +++++++++++++++++++++++++-------------------------
 1 file changed, 35 insertions(+), 36 deletions(-)

diff --git a/fs/xfs/xfs_mru_cache.c b/fs/xfs/xfs_mru_cache.c
index f8a674d7f092..d665ac490045 100644
--- a/fs/xfs/xfs_mru_cache.c
+++ b/fs/xfs/xfs_mru_cache.c
@@ -101,10 +101,9 @@
  * an infinite loop in the code.
  */
 struct xfs_mru_cache {
-	struct radix_tree_root	store;     /* Core storage data structure.  */
+	struct xarray		store;     /* Core storage data structure.  */
 	struct list_head	*lists;    /* Array of lists, one per grp.  */
 	struct list_head	reap_list; /* Elements overdue for reaping. */
-	spinlock_t		lock;      /* Lock to protect this struct.  */
 	unsigned int		grp_count; /* Number of discrete groups.    */
 	unsigned int		grp_time;  /* Time period spanned by grps.  */
 	unsigned int		lru_grp;   /* Group containing time zero.   */
@@ -232,22 +231,24 @@ _xfs_mru_cache_list_insert(
  * data store, removing it from the reap list, calling the client's free
  * function and deleting the element from the element zone.
  *
- * We get called holding the mru->lock, which we drop and then reacquire.
- * Sparse need special help with this to tell it we know what we are doing.
+ * We get called holding the mru->store lock, which we drop and then reacquire.
+ * Sparse needs special help with this to tell it we know what we are doing.
  */
 STATIC void
 _xfs_mru_cache_clear_reap_list(
 	struct xfs_mru_cache	*mru)
-		__releases(mru->lock) __acquires(mru->lock)
+		__releases(mru->store) __acquires(mru->store)
 {
+	XA_STATE(xas, 0);
 	struct xfs_mru_cache_elem *elem, *next;
 	struct list_head	tmp;
 
 	INIT_LIST_HEAD(&tmp);
 	list_for_each_entry_safe(elem, next, &mru->reap_list, list_node) {
+		xas_set(&xas, elem->key);
 
 		/* Remove the element from the data store. */
-		radix_tree_delete(&mru->store, elem->key);
+		xas_store(&mru->store, &xas, NULL);
 
 		/*
 		 * remove to temp list so it can be freed without
@@ -255,14 +256,14 @@ _xfs_mru_cache_clear_reap_list(
 		 */
 		list_move(&elem->list_node, &tmp);
 	}
 
-	spin_unlock(&mru->lock);
+	xa_unlock(&mru->store);
 	list_for_each_entry_safe(elem, next, &tmp, list_node) {
 		list_del_init(&elem->list_node);
 		mru->free_func(elem);
 	}
 
-	spin_lock(&mru->lock);
+	xa_lock(&mru->store);
 }
 
 /*
@@ -284,7 +285,7 @@ _xfs_mru_cache_reap(
 	if (!mru || !mru->lists)
 		return;
 
-	spin_lock(&mru->lock);
+	xa_lock(&mru->store);
 	next = _xfs_mru_cache_migrate(mru, jiffies);
 	_xfs_mru_cache_clear_reap_list(mru);
 
@@ -298,7 +299,7 @@ _xfs_mru_cache_reap(
 		queue_delayed_work(xfs_mru_reap_wq, &mru->work, next);
 	}
 
-	spin_unlock(&mru->lock);
+	xa_unlock(&mru->store);
 }
 
 int
@@ -358,13 +359,8 @@ xfs_mru_cache_create(
 	for (grp = 0; grp < mru->grp_count; grp++)
 		INIT_LIST_HEAD(mru->lists + grp);
 
-	/*
-	 * We use GFP_KERNEL radix tree preload and do inserts under a
-	 * spinlock so GFP_ATOMIC is appropriate for the radix tree itself.
-	 */
-	INIT_RADIX_TREE(&mru->store, GFP_ATOMIC);
+	xa_init(&mru->store);
 	INIT_LIST_HEAD(&mru->reap_list);
-	spin_lock_init(&mru->lock);
 	INIT_DELAYED_WORK(&mru->work, _xfs_mru_cache_reap);
 
 	mru->grp_time = grp_time;
@@ -394,17 +390,17 @@ xfs_mru_cache_flush(
 	if (!mru || !mru->lists)
 		return;
 
-	spin_lock(&mru->lock);
+	xa_lock(&mru->store);
 	if (mru->queued) {
-		spin_unlock(&mru->lock);
+		xa_unlock(&mru->store);
 		cancel_delayed_work_sync(&mru->work);
-		spin_lock(&mru->lock);
+		xa_lock(&mru->store);
 	}
 
 	_xfs_mru_cache_migrate(mru, jiffies + mru->grp_count * mru->grp_time);
 	_xfs_mru_cache_clear_reap_list(mru);
 
-	spin_unlock(&mru->lock);
+	xa_unlock(&mru->store);
 }
 
 void
@@ -431,24 +427,25 @@ xfs_mru_cache_insert(
 	unsigned long		key,
 	struct xfs_mru_cache_elem *elem)
 {
+	XA_STATE(xas, key);
 	int			error;
 
 	ASSERT(mru && mru->lists);
 	if (!mru || !mru->lists)
 		return -EINVAL;
 
-	if (radix_tree_preload(GFP_NOFS))
-		return -ENOMEM;
-
 	INIT_LIST_HEAD(&elem->list_node);
 	elem->key = key;
 
-	spin_lock(&mru->lock);
-	error = radix_tree_insert(&mru->store, key, elem);
-	radix_tree_preload_end();
+retry:
+	xa_lock(&mru->store);
+	xas_store(&mru->store, &xas, elem);
+	error = xas_error(&xas);
 	if (!error)
 		_xfs_mru_cache_list_insert(mru, elem);
-	spin_unlock(&mru->lock);
+	xa_unlock(&mru->store);
+	if (xas_nomem(&xas, GFP_NOFS))
+		goto retry;
 
 	return error;
 }
@@ -464,17 +461,18 @@ xfs_mru_cache_remove(
 	struct xfs_mru_cache	*mru,
 	unsigned long		key)
 {
+	XA_STATE(xas, key);
 	struct xfs_mru_cache_elem *elem;
 
 	ASSERT(mru && mru->lists);
 	if (!mru || !mru->lists)
 		return NULL;
 
-	spin_lock(&mru->lock);
-	elem = radix_tree_delete(&mru->store, key);
+	xa_lock(&mru->store);
+	elem = xas_store(&mru->store, &xas, NULL);
 	if (elem)
 		list_del(&elem->list_node);
-	spin_unlock(&mru->lock);
+	xa_unlock(&mru->store);
 
 	return elem;
 }
@@ -520,20 +518,21 @@ xfs_mru_cache_lookup(
 	struct xfs_mru_cache	*mru,
 	unsigned long		key)
 {
+	XA_STATE(xas, key);
 	struct xfs_mru_cache_elem *elem;
 
 	ASSERT(mru && mru->lists);
 	if (!mru || !mru->lists)
 		return NULL;
 
-	spin_lock(&mru->lock);
-	elem = radix_tree_lookup(&mru->store, key);
+	xa_lock(&mru->store);
+	elem = xas_load(&mru->store, &xas);
 	if (elem) {
 		list_del(&elem->list_node);
 		_xfs_mru_cache_list_insert(mru, elem);
-		__release(mru_lock); /* help sparse not be stupid */
+		__release(&mru->store); /* help sparse not be stupid */
 	} else
-		spin_unlock(&mru->lock);
+		xa_unlock(&mru->store);
 
 	return elem;
 }
@@ -546,7 +545,7 @@ xfs_mru_cache_lookup(
 void
 xfs_mru_cache_done(
 	struct xfs_mru_cache	*mru)
-		__releases(mru->lock)
+		__releases(mru->store)
 {
-	spin_unlock(&mru->lock);
+	xa_unlock(&mru->store);
 }
-- 
2.15.0
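
For readers unfamiliar with the allocation model the commit message refers to, the sketch below illustrates it in plain userspace C. It is not part of the patch and does not use the real XArray or radix tree API; the store, the one-node "prealloc" cache and the helper names are hypothetical stand-ins. The point is the control flow xfs_mru_cache_insert() adopts: try the store under the lock with whatever memory is already in hand, and if that fails with -ENOMEM, drop the lock, allocate, and retry, instead of preallocating nodes up front as radix_tree_preload() did.

	/* Minimal userspace sketch of the lock/try/alloc-outside-lock/retry
	 * pattern. All names here are illustrative, not kernel APIs. */
	#include <errno.h>
	#include <pthread.h>
	#include <stdio.h>
	#include <stdlib.h>

	struct node { unsigned long key; void *value; struct node *next; };

	struct store {
		pthread_mutex_t lock;
		struct node *head;	/* toy linked-list "tree" */
	};

	struct state {
		unsigned long key;
		struct node *prealloc;	/* node carried across retries */
		int error;
	};

	/* Attempt the insert using only memory already in hand
	 * (the role xas_store() plays in the patch). */
	static void state_store(struct store *s, struct state *st, void *value)
	{
		struct node *n = st->prealloc;

		if (!n) {
			st->error = -ENOMEM;	/* caller must allocate and retry */
			return;
		}
		st->prealloc = NULL;
		n->key = st->key;
		n->value = value;
		n->next = s->head;
		s->head = n;
		st->error = 0;
	}

	/* Allocate outside the lock if the last attempt ran out of memory
	 * (the role xas_nomem() plays in the patch). Returns nonzero to retry. */
	static int state_nomem(struct state *st)
	{
		if (st->error != -ENOMEM)
			return 0;
		st->prealloc = malloc(sizeof(*st->prealloc));
		return st->prealloc != NULL;
	}

	static int store_insert(struct store *s, unsigned long key, void *value)
	{
		struct state st = { .key = key };

	retry:
		pthread_mutex_lock(&s->lock);
		state_store(s, &st, value);
		pthread_mutex_unlock(&s->lock);
		if (state_nomem(&st))
			goto retry;
		return st.error;
	}

	int main(void)
	{
		struct store s = { .lock = PTHREAD_MUTEX_INITIALIZER };

		printf("insert: %d\n", store_insert(&s, 42, "hello"));
		return 0;
	}

The first pass through store_insert() fails with -ENOMEM because nothing is preallocated, the node is then allocated with the lock dropped, and the second pass succeeds; that is why the patch can delete both radix_tree_preload() and the separate spinlock while still doing the insert under the store's own lock.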