From: Matthew Wilcox
To: linux-fsdevel@vger.kernel.org,
	linux-mm@kvack.org, linux-kernel@vger.kernel.org
Cc: Matthew Wilcox
Subject: [PATCH 43/62] brd: Convert to XArray
Date: Wed, 22 Nov 2017 13:07:20 -0800
Message-Id: <20171122210739.29916-44-willy@infradead.org>
In-Reply-To: <20171122210739.29916-1-willy@infradead.org>
References: <20171122210739.29916-1-willy@infradead.org>

From: Matthew Wilcox

Convert brd_pages from a radix tree to an XArray.  Simpler and smaller
code; in particular another user of radix_tree_preload is eliminated.

Signed-off-by: Matthew Wilcox
---
 drivers/block/brd.c | 87 ++++++++++++++---------------------------------
 1 file changed, 23 insertions(+), 64 deletions(-)

diff --git a/drivers/block/brd.c b/drivers/block/brd.c
index 8028a3a7e7fd..c9fda03810ea 100644
--- a/drivers/block/brd.c
+++ b/drivers/block/brd.c
@@ -17,7 +17,7 @@
 #include <...>
 #include <...>
 #include <...>
-#include <linux/radix-tree.h>
+#include <linux/xarray.h>
 #include <...>
 #include <...>
 #include <...>
@@ -29,9 +29,9 @@
 #define PAGE_SECTORS		(1 << PAGE_SECTORS_SHIFT)
 
 /*
- * Each block ramdisk device has a radix_tree brd_pages of pages that stores
- * the pages containing the block device's contents. A brd page's ->index is
- * its offset in PAGE_SIZE units. This is similar to, but in no way connected
+ * Each block ramdisk device has an xarray brd_pages that stores the pages
+ * containing the block device's contents. A brd page's ->index is its
+ * offset in PAGE_SIZE units. This is similar to, but in no way connected
  * with, the kernel's pagecache or buffer cache (which sit above our block
  * device).
  */
@@ -41,13 +41,7 @@ struct brd_device {
 	struct request_queue	*brd_queue;
 	struct gendisk		*brd_disk;
 	struct list_head	brd_list;
-
-	/*
-	 * Backing store of pages and lock to protect it. This is the contents
-	 * of the block device.
-	 */
-	spinlock_t		brd_lock;
-	struct radix_tree_root	brd_pages;
+	struct xarray		brd_pages;
 };
 
 /*
@@ -62,17 +56,9 @@ static struct page *brd_lookup_page(struct brd_device *brd, sector_t sector)
 	 * The page lifetime is protected by the fact that we have opened the
 	 * device node -- brd pages will never be deleted under us, so we
 	 * don't need any further locking or refcounting.
-	 *
-	 * This is strictly true for the radix-tree nodes as well (ie. we
-	 * don't actually need the rcu_read_lock()), however that is not a
-	 * documented feature of the radix-tree API so it is better to be
-	 * safe here (we don't have total exclusion from radix tree updates
-	 * here, only deletes).
 	 */
-	rcu_read_lock();
 	idx = sector >> PAGE_SECTORS_SHIFT; /* sector to page index */
-	page = radix_tree_lookup(&brd->brd_pages, idx);
-	rcu_read_unlock();
+	page = xa_load(&brd->brd_pages, idx);
 
 	BUG_ON(page && page->index != idx);
 
@@ -87,7 +73,7 @@ static struct page *brd_lookup_page(struct brd_device *brd, sector_t sector)
 static struct page *brd_insert_page(struct brd_device *brd, sector_t sector)
 {
 	pgoff_t idx;
-	struct page *page;
+	struct page *curr, *page;
 	gfp_t gfp_flags;
 
 	page = brd_lookup_page(brd, sector);
@@ -108,62 +94,36 @@ static struct page *brd_insert_page(struct brd_device *brd, sector_t sector)
 	if (!page)
 		return NULL;
 
-	if (radix_tree_preload(GFP_NOIO)) {
-		__free_page(page);
-		return NULL;
-	}
-
-	spin_lock(&brd->brd_lock);
 	idx = sector >> PAGE_SECTORS_SHIFT;
 	page->index = idx;
-	if (radix_tree_insert(&brd->brd_pages, idx, page)) {
+	curr = xa_cmpxchg(&brd->brd_pages, idx, NULL, page, GFP_NOIO);
+	if (curr) {
 		__free_page(page);
-		page = radix_tree_lookup(&brd->brd_pages, idx);
+		page = curr;
 		BUG_ON(!page);
 		BUG_ON(page->index != idx);
 	}
-	spin_unlock(&brd->brd_lock);
-
-	radix_tree_preload_end();
 
 	return page;
 }
 
 /*
- * Free all backing store pages and radix tree. This must only be called when
+ * Free all backing store pages and xarray. This must only be called when
  * there are no other users of the device.
  */
-#define FREE_BATCH 16
 static void brd_free_pages(struct brd_device *brd)
 {
-	unsigned long pos = 0;
-	struct page *pages[FREE_BATCH];
-	int nr_pages;
-
-	do {
-		int i;
-
-		nr_pages = radix_tree_gang_lookup(&brd->brd_pages,
-				(void **)pages, pos, FREE_BATCH);
-
-		for (i = 0; i < nr_pages; i++) {
-			void *ret;
-
-			BUG_ON(pages[i]->index < pos);
-			pos = pages[i]->index;
-			ret = radix_tree_delete(&brd->brd_pages, pos);
-			BUG_ON(!ret || ret != pages[i]);
-			__free_page(pages[i]);
-		}
-
-		pos++;
-
-		/*
-		 * This assumes radix_tree_gang_lookup always returns as
-		 * many pages as possible. If the radix-tree code changes,
-		 * so will this have to.
-		 */
-	} while (nr_pages == FREE_BATCH);
+	XA_STATE(xas, 0);
+	struct page *page;
+
+	/* lockdep can't know there are no other users */
+	xa_lock(&brd->brd_pages);
+	xas_for_each(&brd->brd_pages, &xas, page, ULONG_MAX) {
+		BUG_ON(page->index != xas.xa_index);
+		__free_page(page);
+		xas_store(&brd->brd_pages, &xas, NULL);
+	}
+	xa_unlock(&brd->brd_pages);
 }
 
 /*
@@ -373,8 +333,7 @@ static struct brd_device *brd_alloc(int i)
 	if (!brd)
 		goto out;
 	brd->brd_number = i;
-	spin_lock_init(&brd->brd_lock);
-	INIT_RADIX_TREE(&brd->brd_pages, GFP_ATOMIC);
+	xa_init(&brd->brd_pages);
 
 	brd->brd_queue = blk_alloc_queue(GFP_KERNEL);
 	if (!brd->brd_queue)
-- 
2.15.0