From: Matthew Wilcox
To: linux-mm@kvack.org,
	linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org
Cc: Matthew Wilcox, Jan Kara, Jeff Layton, Lukas Czerner, Ross Zwisler,
	Christoph Hellwig, Goldwyn Rodrigues, Nicholas Piggin,
	Ryusuke Konishi, linux-nilfs@vger.kernel.org, Jaegeuk Kim,
	Chao Yu, linux-f2fs-devel@lists.sourceforge.net
Subject: [PATCH v14 60/74] nilfs2: Convert to XArray
Date: Sat, 16 Jun 2018 19:00:38 -0700
Message-Id: <20180617020052.4759-61-willy@infradead.org>
In-Reply-To: <20180617020052.4759-1-willy@infradead.org>
References: <20180617020052.4759-1-willy@infradead.org>

This is close to a 1:1 replacement of radix tree APIs with their XArray
equivalents. It would be possible to optimise nilfs_copy_back_pages(),
but that doesn't seem to be in the performance path. Also, I think it
has a pre-existing bug, and I've added a note to that effect in the
source code.

Signed-off-by: Matthew Wilcox
---
 fs/nilfs2/btnode.c | 26 +++++++++-----------------
 fs/nilfs2/page.c   | 29 +++++++++++++----------------
 2 files changed, 22 insertions(+), 33 deletions(-)

diff --git a/fs/nilfs2/btnode.c b/fs/nilfs2/btnode.c
index dec98cab729d..e9fcffb3a15c 100644
--- a/fs/nilfs2/btnode.c
+++ b/fs/nilfs2/btnode.c
@@ -177,24 +177,18 @@ int nilfs_btnode_prepare_change_key(struct address_space *btnc,
 	ctxt->newbh = NULL;
 
 	if (inode->i_blkbits == PAGE_SHIFT) {
-		lock_page(obh->b_page);
-		/*
-		 * We cannot call radix_tree_preload for the kernels older
-		 * than 2.6.23, because it is not exported for modules.
-		 */
+		struct page *opage = obh->b_page;
+		lock_page(opage);
 retry:
-		err = radix_tree_preload(GFP_NOFS & ~__GFP_HIGHMEM);
-		if (err)
-			goto failed_unlock;
 		/* BUG_ON(oldkey != obh->b_page->index); */
-		if (unlikely(oldkey != obh->b_page->index))
-			NILFS_PAGE_BUG(obh->b_page,
+		if (unlikely(oldkey != opage->index))
+			NILFS_PAGE_BUG(opage,
 				       "invalid oldkey %lld (newkey=%lld)",
 				       (unsigned long long)oldkey,
 				       (unsigned long long)newkey);
 
 		xa_lock_irq(&btnc->i_pages);
-		err = radix_tree_insert(&btnc->i_pages, newkey, obh->b_page);
+		err = __xa_insert(&btnc->i_pages, newkey, opage, GFP_NOFS);
 		xa_unlock_irq(&btnc->i_pages);
 		/*
 		 * Note: page->index will not change to newkey until
@@ -202,7 +196,6 @@ int nilfs_btnode_prepare_change_key(struct address_space *btnc,
 		 * To protect the page in intermediate state, the page lock
 		 * is held.
 		 */
-		radix_tree_preload_end();
 		if (!err)
 			return 0;
 		else if (err != -EEXIST)
@@ -212,7 +205,7 @@ int nilfs_btnode_prepare_change_key(struct address_space *btnc,
 		if (!err)
 			goto retry;
 		/* fallback to copy mode */
-		unlock_page(obh->b_page);
+		unlock_page(opage);
 	}
 
 	nbh = nilfs_btnode_create_block(btnc, newkey);
@@ -252,9 +245,8 @@ void nilfs_btnode_commit_change_key(struct address_space *btnc,
 		mark_buffer_dirty(obh);
 
 		xa_lock_irq(&btnc->i_pages);
-		radix_tree_delete(&btnc->i_pages, oldkey);
-		radix_tree_tag_set(&btnc->i_pages, newkey,
-				   PAGECACHE_TAG_DIRTY);
+		__xa_erase(&btnc->i_pages, oldkey);
+		__xa_set_tag(&btnc->i_pages, newkey, PAGECACHE_TAG_DIRTY);
 		xa_unlock_irq(&btnc->i_pages);
 
 		opage->index = obh->b_blocknr = newkey;
@@ -284,7 +276,7 @@ void nilfs_btnode_abort_change_key(struct address_space *btnc,
 
 	if (nbh == NULL) {	/* blocksize == pagesize */
 		xa_lock_irq(&btnc->i_pages);
-		radix_tree_delete(&btnc->i_pages, newkey);
+		__xa_erase(&btnc->i_pages, newkey);
 		xa_unlock_irq(&btnc->i_pages);
 		unlock_page(ctxt->bh->b_page);
 	} else
diff --git a/fs/nilfs2/page.c b/fs/nilfs2/page.c
index 4cb850a6f1c2..8384473b98b8 100644
--- a/fs/nilfs2/page.c
+++ b/fs/nilfs2/page.c
@@ -298,7 +298,7 @@ int nilfs_copy_dirty_pages(struct address_space *dmap,
  * @dmap: destination page cache
  * @smap: source page cache
  *
- * No pages must no be added to the cache during this process.
+ * No pages must be added to the cache during this process.
  * This must be ensured by the caller.
  */
 void nilfs_copy_back_pages(struct address_space *dmap,
@@ -307,7 +307,6 @@ void nilfs_copy_back_pages(struct address_space *dmap,
 	struct pagevec pvec;
 	unsigned int i, n;
 	pgoff_t index = 0;
-	int err;
 
 	pagevec_init(&pvec);
 repeat:
@@ -322,35 +321,34 @@ void nilfs_copy_back_pages(struct address_space *dmap,
 		lock_page(page);
 		dpage = find_lock_page(dmap, offset);
 		if (dpage) {
-			/* override existing page on the destination cache */
+			/* overwrite existing page in the destination cache */
 			WARN_ON(PageDirty(dpage));
 			nilfs_copy_page(dpage, page, 0);
 			unlock_page(dpage);
 			put_page(dpage);
+			/* Do we not need to remove page from smap here? */
 		} else {
-			struct page *page2;
+			struct page *p;
 
 			/* move the page to the destination cache */
 			xa_lock_irq(&smap->i_pages);
-			page2 = radix_tree_delete(&smap->i_pages, offset);
-			WARN_ON(page2 != page);
-
+			p = __xa_erase(&smap->i_pages, offset);
+			WARN_ON(page != p);
 			smap->nrpages--;
 			xa_unlock_irq(&smap->i_pages);
 
 			xa_lock_irq(&dmap->i_pages);
-			err = radix_tree_insert(&dmap->i_pages, offset, page);
-			if (unlikely(err < 0)) {
-				WARN_ON(err == -EEXIST);
+			p = __xa_store(&dmap->i_pages, offset, page, GFP_NOFS);
+			if (unlikely(p)) {
+				/* Probably -ENOMEM */
 				page->mapping = NULL;
-				put_page(page); /* for cache */
+				put_page(page);
 			} else {
 				page->mapping = dmap;
 				dmap->nrpages++;
 				if (PageDirty(page))
-					radix_tree_tag_set(&dmap->i_pages,
-							   offset,
-							   PAGECACHE_TAG_DIRTY);
+					__xa_set_tag(&dmap->i_pages, offset,
+							PAGECACHE_TAG_DIRTY);
 			}
 			xa_unlock_irq(&dmap->i_pages);
 		}
@@ -476,8 +474,7 @@ int __nilfs_clear_page_dirty(struct page *page)
 	if (mapping) {
 		xa_lock_irq(&mapping->i_pages);
 		if (test_bit(PG_dirty, &page->flags)) {
-			radix_tree_tag_clear(&mapping->i_pages,
-					     page_index(page),
+			__xa_clear_tag(&mapping->i_pages, page_index(page),
 					     PAGECACHE_TAG_DIRTY);
 			xa_unlock_irq(&mapping->i_pages);
 			return clear_page_dirty_for_io(page);
-- 
2.17.1
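
P.S. For readers following the series, the conversions in this patch all
follow one pattern. A rough summary (kernel-context sketch only, not a
buildable example; `mapping` and `index` stand in for the callers'
variables above):

	/*
	 * The XArray embeds its own lock in i_pages, so the existing
	 * xa_lock_irq()/xa_unlock_irq() calls stay, and the __xa_*
	 * variants assume that lock is already held.
	 */

	/* Before: preload outside the lock, then insert under it. */
	err = radix_tree_preload(GFP_NOFS & ~__GFP_HIGHMEM);
	if (!err) {
		xa_lock_irq(&mapping->i_pages);
		err = radix_tree_insert(&mapping->i_pages, index, page);
		xa_unlock_irq(&mapping->i_pages);
		radix_tree_preload_end();
	}

	/* After: gfp flags are passed in, so no preload step exists. */
	xa_lock_irq(&mapping->i_pages);
	err = __xa_insert(&mapping->i_pages, index, page, GFP_NOFS);
	xa_unlock_irq(&mapping->i_pages);

	/*
	 * Likewise: radix_tree_delete()    -> __xa_erase()
	 *           radix_tree_tag_set()   -> __xa_set_tag()
	 *           radix_tree_tag_clear() -> __xa_clear_tag()
	 */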