From: Matthew Wilcox
To: linux-kernel@vger.kernel.org
Cc: Matthew
Wilcox, linux-mm@kvack.org, linux-fsdevel@vger.kernel.org, linux-f2fs-devel@lists.sourceforge.net, linux-nilfs@vger.kernel.org, linux-btrfs@vger.kernel.org, linux-xfs@vger.kernel.org, linux-usb@vger.kernel.org, Bjorn Andersson, Stefano Stabellini, iommu@lists.linux-foundation.org, linux-remoteproc@vger.kernel.org, linux-s390@vger.kernel.org, intel-gfx@lists.freedesktop.org, cgroups@vger.kernel.org, linux-sh@vger.kernel.org, David Howells
Subject: [PATCH v6 69/99] brd: Convert to XArray
Date: Wed, 17 Jan 2018 12:21:33 -0800
Message-Id: <20180117202203.19756-70-willy@infradead.org>
In-Reply-To: <20180117202203.19756-1-willy@infradead.org>
References: <20180117202203.19756-1-willy@infradead.org>

From: Matthew Wilcox

Convert brd_pages from a radix tree to an XArray. Simpler and smaller
code; in particular, another user of radix_tree_preload is eliminated.

Signed-off-by: Matthew Wilcox
---
 drivers/block/brd.c | 93 ++++++++++++++++------------------------------------
 1 file changed, 28 insertions(+), 65 deletions(-)

diff --git a/drivers/block/brd.c b/drivers/block/brd.c
index 8028a3a7e7fd..59a1af7aaa79 100644
--- a/drivers/block/brd.c
+++ b/drivers/block/brd.c
@@ -17,7 +17,7 @@
 #include
 #include
 #include
-#include <linux/radix-tree.h>
+#include <linux/xarray.h>
 #include
 #include
 #include
@@ -29,9 +29,9 @@
 #define PAGE_SECTORS		(1 << PAGE_SECTORS_SHIFT)
 
 /*
- * Each block ramdisk device has a radix_tree brd_pages of pages that stores
- * the pages containing the block device's contents. A brd page's ->index is
- * its offset in PAGE_SIZE units. This is similar to, but in no way connected
+ * Each block ramdisk device has an xarray brd_pages that stores the pages
+ * containing the block device's contents. A brd page's ->index is its
+ * offset in PAGE_SIZE units. This is similar to, but in no way connected
  * with, the kernel's pagecache or buffer cache (which sit above our block
  * device).
  */
@@ -41,13 +41,7 @@ struct brd_device {
 	struct request_queue	*brd_queue;
 	struct gendisk		*brd_disk;
 	struct list_head	brd_list;
-
-	/*
-	 * Backing store of pages and lock to protect it. This is the contents
-	 * of the block device.
-	 */
-	spinlock_t		brd_lock;
-	struct radix_tree_root	brd_pages;
+	struct xarray		brd_pages;
 };
 
 /*
@@ -62,17 +56,9 @@ static struct page *brd_lookup_page(struct brd_device *brd, sector_t sector)
 	 * The page lifetime is protected by the fact that we have opened the
 	 * device node -- brd pages will never be deleted under us, so we
 	 * don't need any further locking or refcounting.
-	 *
-	 * This is strictly true for the radix-tree nodes as well (ie. we
-	 * don't actually need the rcu_read_lock()), however that is not a
-	 * documented feature of the radix-tree API so it is better to be
-	 * safe here (we don't have total exclusion from radix tree updates
-	 * here, only deletes).
 	 */
-	rcu_read_lock();
 	idx = sector >> PAGE_SECTORS_SHIFT; /* sector to page index */
-	page = radix_tree_lookup(&brd->brd_pages, idx);
-	rcu_read_unlock();
+	page = xa_load(&brd->brd_pages, idx);
 
 	BUG_ON(page && page->index != idx);
 
@@ -87,7 +73,7 @@ static struct page *brd_lookup_page(struct brd_device *brd, sector_t sector)
 static struct page *brd_insert_page(struct brd_device *brd, sector_t sector)
 {
 	pgoff_t idx;
-	struct page *page;
+	struct page *curr, *page;
 	gfp_t gfp_flags;
 
 	page = brd_lookup_page(brd, sector);
@@ -108,62 +94,40 @@ static struct page *brd_insert_page(struct brd_device *brd, sector_t sector)
 	if (!page)
 		return NULL;
 
-	if (radix_tree_preload(GFP_NOIO)) {
-		__free_page(page);
-		return NULL;
-	}
-
-	spin_lock(&brd->brd_lock);
 	idx = sector >> PAGE_SECTORS_SHIFT;
 	page->index = idx;
-	if (radix_tree_insert(&brd->brd_pages, idx, page)) {
+	curr = xa_cmpxchg(&brd->brd_pages, idx, NULL, page, GFP_NOIO);
+	if (curr) {
 		__free_page(page);
-		page = radix_tree_lookup(&brd->brd_pages, idx);
-		BUG_ON(!page);
-		BUG_ON(page->index != idx);
+		if (xa_err(curr)) {
+			page = NULL;
+		} else {
+			page = curr;
+			BUG_ON(!page);
+			BUG_ON(page->index != idx);
+		}
 	}
-	spin_unlock(&brd->brd_lock);
-
-	radix_tree_preload_end();
 
 	return page;
 }
 
 /*
- * Free all backing store pages and radix tree. This must only be called when
+ * Free all backing store pages and xarray. This must only be called when
  * there are no other users of the device.
  */
-#define FREE_BATCH 16
 static void brd_free_pages(struct brd_device *brd)
 {
-	unsigned long pos = 0;
-	struct page *pages[FREE_BATCH];
-	int nr_pages;
-
-	do {
-		int i;
-
-		nr_pages = radix_tree_gang_lookup(&brd->brd_pages,
-				(void **)pages, pos, FREE_BATCH);
-
-		for (i = 0; i < nr_pages; i++) {
-			void *ret;
-
-			BUG_ON(pages[i]->index < pos);
-			pos = pages[i]->index;
-			ret = radix_tree_delete(&brd->brd_pages, pos);
-			BUG_ON(!ret || ret != pages[i]);
-			__free_page(pages[i]);
-		}
-
-		pos++;
+	XA_STATE(xas, &brd->brd_pages, 0);
+	struct page *page;
 
-		/*
-		 * This assumes radix_tree_gang_lookup always returns as
-		 * many pages as possible. If the radix-tree code changes,
-		 * so will this have to.
-		 */
-	} while (nr_pages == FREE_BATCH);
+	/* lockdep can't know there are no other users */
+	xas_lock(&xas);
+	xas_for_each(&xas, page, ULONG_MAX) {
+		BUG_ON(page->index != xas.xa_index);
+		__free_page(page);
+		xas_store(&xas, NULL);
+	}
+	xas_unlock(&xas);
 }
 
 /*
@@ -373,8 +337,7 @@ static struct brd_device *brd_alloc(int i)
 	if (!brd)
 		goto out;
 	brd->brd_number	= i;
-	spin_lock_init(&brd->brd_lock);
-	INIT_RADIX_TREE(&brd->brd_pages, GFP_ATOMIC);
+	xa_init(&brd->brd_pages);
 
 	brd->brd_queue = blk_alloc_queue(GFP_KERNEL);
 	if (!brd->brd_queue)
-- 
2.15.1