From: Matthew Wilcox
To: linux-fsdevel@vger.kernel.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org
Cc: Matthew Wilcox
Subject: [PATCH 62/62] mm: Convert page-writeback to XArray
Date: Wed, 22 Nov 2017 13:07:39 -0800
Message-Id: <20171122210739.29916-63-willy@infradead.org>
X-Mailer: git-send-email 2.9.5
In-Reply-To: <20171122210739.29916-1-willy@infradead.org>
References: <20171122210739.29916-1-willy@infradead.org>

From: Matthew Wilcox

Includes moving mapping_tagged() to fs.h as a static inline, and
changing it to return bool.

Signed-off-by: Matthew Wilcox
---
 include/linux/fs.h  | 17 +++++++++------
 mm/page-writeback.c | 63 +++++++++++++++++++----------------------------------
 2 files changed, 33 insertions(+), 47 deletions(-)

diff --git a/include/linux/fs.h b/include/linux/fs.h
index a5c105d292a7..14b7406e03e0 100644
--- a/include/linux/fs.h
+++ b/include/linux/fs.h
@@ -470,15 +470,18 @@ struct block_device {
 	struct mutex		bd_fsfreeze_mutex;
 } __randomize_layout;
 
+/* XArray tags, for tagging dirty and writeback pages in the pagecache. */
+#define PAGECACHE_TAG_DIRTY	XA_TAG_0
+#define PAGECACHE_TAG_WRITEBACK	XA_TAG_1
+#define PAGECACHE_TAG_TOWRITE	XA_TAG_2
+
 /*
- * Radix-tree tags, for tagging dirty and writeback pages within the pagecache
- * radix trees
+ * Returns true if any of the pages in the mapping are marked with the tag.
  */
-#define PAGECACHE_TAG_DIRTY	0
-#define PAGECACHE_TAG_WRITEBACK	1
-#define PAGECACHE_TAG_TOWRITE	2
-
-int mapping_tagged(struct address_space *mapping, int tag);
+static inline bool mapping_tagged(struct address_space *mapping, xa_tag_t tag)
+{
+	return xa_tagged(&mapping->pages, tag);
+}
 
 static inline void i_mmap_lock_write(struct address_space *mapping)
 {
diff --git a/mm/page-writeback.c b/mm/page-writeback.c
index 6a5b92629727..28175cba7e72 100644
--- a/mm/page-writeback.c
+++ b/mm/page-writeback.c
@@ -2100,33 +2100,26 @@ void __init page_writeback_init(void)
  * dirty pages in the file (thus it is important for this function to be quick
  * so that it can tag pages faster than a dirtying process can create them).
  */
-/*
- * We tag pages in batches of WRITEBACK_TAG_BATCH to reduce xa_lock latency.
- */
 void tag_pages_for_writeback(struct address_space *mapping,
			     pgoff_t start, pgoff_t end)
 {
-#define WRITEBACK_TAG_BATCH 4096
-	unsigned long tagged = 0;
-	struct radix_tree_iter iter;
-	void **slot;
+	XA_STATE(xas, start);
+	struct xarray *xa = &mapping->pages;
+	unsigned int tagged = 0;
+	void *page;
 
-	xa_lock_irq(&mapping->pages);
-	radix_tree_for_each_tagged(slot, &mapping->pages, &iter, start,
-					PAGECACHE_TAG_DIRTY) {
-		if (iter.index > end)
-			break;
-		radix_tree_iter_tag_set(&mapping->pages, &iter,
-					PAGECACHE_TAG_TOWRITE);
-		tagged++;
-		if ((tagged % WRITEBACK_TAG_BATCH) != 0)
+	xa_lock_irq(xa);
+	xas_for_each_tag(xa, &xas, page, end, PAGECACHE_TAG_DIRTY) {
+		xas_set_tag(xa, &xas, PAGECACHE_TAG_TOWRITE);
+		if (++tagged % XA_CHECK_SCHED)
 			continue;
-		slot = radix_tree_iter_resume(slot, &iter);
-		xa_unlock_irq(&mapping->pages);
+
+		xas_pause(&xas);
+		xa_unlock_irq(xa);
 		cond_resched();
-		xa_lock_irq(&mapping->pages);
+		xa_lock_irq(xa);
 	}
-	xa_unlock_irq(&mapping->pages);
+	xa_unlock_irq(xa);
 }
 EXPORT_SYMBOL(tag_pages_for_writeback);
 
@@ -2166,7 +2159,7 @@ int write_cache_pages(struct address_space *mapping,
 	pgoff_t done_index;
 	int cycled;
 	int range_whole = 0;
-	int tag;
+	xa_tag_t tag;
 
 	pagevec_init(&pvec);
 	if (wbc->range_cyclic) {
@@ -2447,7 +2440,7 @@ void account_page_cleaned(struct page *page, struct address_space *mapping,
 
 /*
  * For address_spaces which do not use buffers.  Just tag the page as dirty in
- * its radix tree.
+ * the xarray.
  *
  * This is also used when a single buffer is being dirtied: we want to set the
  * page dirty in that case, but not all the buffers.  This is a "bottom-up"
@@ -2473,7 +2466,7 @@ int __set_page_dirty_nobuffers(struct page *page)
 		BUG_ON(page_mapping(page) != mapping);
 		WARN_ON_ONCE(!PagePrivate(page) && !PageUptodate(page));
 		account_page_dirtied(page, mapping);
-		radix_tree_tag_set(&mapping->pages, page_index(page),
+		__xa_set_tag(&mapping->pages, page_index(page),
 				   PAGECACHE_TAG_DIRTY);
 		xa_unlock_irqrestore(&mapping->pages, flags);
 		unlock_page_memcg(page);
@@ -2636,13 +2629,13 @@ EXPORT_SYMBOL(__cancel_dirty_page);
  * Returns true if the page was previously dirty.
 *
 * This is for preparing to put the page under writeout.  We leave the page
- * tagged as dirty in the radix tree so that a concurrent write-for-sync
+ * tagged as dirty in the xarray so that a concurrent write-for-sync
 * can discover it via a PAGECACHE_TAG_DIRTY walk.  The ->writepage
 * implementation will run either set_page_writeback() or set_page_dirty(),
- * at which stage we bring the page's dirty flag and radix-tree dirty tag
+ * at which stage we bring the page's dirty flag and xarray dirty tag
 * back into sync.
 *
- * This incoherency between the page's dirty flag and radix-tree tag is
+ * This incoherency between the page's dirty flag and xarray tag is
 * unfortunate, but it only exists while the page is locked.
 */
 int clear_page_dirty_for_io(struct page *page)
@@ -2723,7 +2716,7 @@ int test_clear_page_writeback(struct page *page)
 		xa_lock_irqsave(&mapping->pages, flags);
 		ret = TestClearPageWriteback(page);
 		if (ret) {
-			radix_tree_tag_clear(&mapping->pages, page_index(page),
+			__xa_clear_tag(&mapping->pages, page_index(page),
 						PAGECACHE_TAG_WRITEBACK);
 			if (bdi_cap_account_writeback(bdi)) {
 				struct bdi_writeback *wb = inode_to_wb(inode);
@@ -2775,7 +2768,7 @@ int __test_set_page_writeback(struct page *page, bool keep_write)
 			on_wblist = mapping_tagged(mapping,
 						   PAGECACHE_TAG_WRITEBACK);
 
-			radix_tree_tag_set(&mapping->pages, page_index(page),
+			__xa_set_tag(&mapping->pages, page_index(page),
 						PAGECACHE_TAG_WRITEBACK);
 			if (bdi_cap_account_writeback(bdi))
 				inc_wb_stat(inode_to_wb(inode), WB_WRITEBACK);
@@ -2789,10 +2782,10 @@ int __test_set_page_writeback(struct page *page, bool keep_write)
 				sb_mark_inode_writeback(mapping->host);
 		}
 		if (!PageDirty(page))
-			radix_tree_tag_clear(&mapping->pages, page_index(page),
+			__xa_clear_tag(&mapping->pages, page_index(page),
 						PAGECACHE_TAG_DIRTY);
 		if (!keep_write)
-			radix_tree_tag_clear(&mapping->pages, page_index(page),
+			__xa_clear_tag(&mapping->pages, page_index(page),
 						PAGECACHE_TAG_TOWRITE);
 		xa_unlock_irqrestore(&mapping->pages, flags);
 	} else {
@@ -2808,16 +2801,6 @@ int __test_set_page_writeback(struct page *page, bool keep_write)
 }
 EXPORT_SYMBOL(__test_set_page_writeback);
 
-/*
- * Return true if any of the pages in the mapping are marked with the
- * passed tag.
- */
-int mapping_tagged(struct address_space *mapping, int tag)
-{
-	return radix_tree_tagged(&mapping->pages, tag);
-}
-EXPORT_SYMBOL(mapping_tagged);
-
 /**
  * wait_for_stable_page() - wait for writeback to finish, if necessary.
  * @page:	The page to wait on.
-- 
2.15.0
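
[Not part of the patch] For readers unfamiliar with the new interfaces, here is a
minimal hypothetical sketch of how a caller could use the bool-returning
mapping_tagged() together with the XA_STATE()/xas_for_each_tag() pattern the
patch introduces in tag_pages_for_writeback(). The helper name
count_dirty_pages() is invented purely for illustration; only mapping_tagged(),
XA_STATE(), xas_for_each_tag(), xa_lock_irq()/xa_unlock_irq() and
PAGECACHE_TAG_DIRTY come from the patch itself, using the signatures shown in
the diff above.

/* Illustrative only: count pages tagged dirty in [start, end]. */
static unsigned long count_dirty_pages(struct address_space *mapping,
				       pgoff_t start, pgoff_t end)
{
	XA_STATE(xas, start);			/* iteration state, begins at @start */
	struct xarray *xa = &mapping->pages;
	unsigned long count = 0;
	void *page;

	/* Cheap root-level check before taking the lock or walking anything. */
	if (!mapping_tagged(mapping, PAGECACHE_TAG_DIRTY))
		return 0;

	xa_lock_irq(xa);
	/* Visit only entries up to @end that carry the dirty tag. */
	xas_for_each_tag(xa, &xas, page, end, PAGECACHE_TAG_DIRTY)
		count++;
	xa_unlock_irq(xa);

	return count;
}

The up-front mapping_tagged() call is the same cheap "is any tag set at all"
test the new static inline provides via xa_tagged(), so a mapping with no dirty
pages is rejected without touching the lock.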