From: Matthew Wilcox
To: Andrew Morton
Cc: Matthew Wilcox, linux-kernel@vger.kernel.org, linux-mm@kvack.org,
	linux-fsdevel@vger.kernel.org
Subject: [PATCH v7 29/61] page cache: Convert page cache lookups to XArray
Date: Mon, 19 Feb 2018 11:45:24 -0800
Message-Id: <20180219194556.6575-30-willy@infradead.org>
X-Mailer: git-send-email 2.14.3
In-Reply-To: <20180219194556.6575-1-willy@infradead.org>
References: <20180219194556.6575-1-willy@infradead.org>

From: Matthew Wilcox

Introduce page_cache_pin() to factor out the common logic between the
various lookup routines:

 find_get_entry
 find_get_entries
 find_get_pages_range
 find_get_pages_contig
 find_get_pages_range_tag
 find_get_entries_tag
 filemap_map_pages

By using the xa_state to control the iteration, we can remove most of
the gotos and just use the normal break/continue loop control flow.

Also convert the regression1 read-side to XArray since that simulates
the functions being modified here.

Signed-off-by: Matthew Wilcox
---
 include/linux/pagemap.h                |   6 +-
 mm/filemap.c                           | 380 +++++++++------------------
 tools/testing/radix-tree/regression1.c |  68 +++---
 3 files changed, 129 insertions(+), 325 deletions(-)
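The seven conversions below all follow the same shape; here is a minimal
sketch of the pattern for review convenience (illustrative only, not part
of the applied diff -- it reuses XA_STATE, xas_for_each, xas_retry,
xa_is_value and page_cache_pin exactly as introduced by this series):

	XA_STATE(xas, &mapping->pages, start);
	struct page *page;

	rcu_read_lock();
	xas_for_each(&xas, page, end) {
		/* Raced with an entry move/split; state was reset, walk again */
		if (xas_retry(&xas, page))
			continue;
		/* A shadow, swap or DAX entry rather than a struct page */
		if (xa_is_value(page))
			continue;
		/* Speculative ref failed or the page moved; retry this index */
		if (!page_cache_pin(&xas, page))
			continue;
		/* page is now pinned; use it, then drop the ref with put_page() */
	}
	rcu_read_unlock();

The callers differ only in how they treat value entries (skip them, report
them to the caller, or stop the walk) and in when they terminate.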
diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h
index 34d4fa3ad1c5..1a59f4a5424a 100644
--- a/include/linux/pagemap.h
+++ b/include/linux/pagemap.h
@@ -365,17 +365,17 @@ static inline unsigned find_get_pages(struct address_space *mapping,
 unsigned find_get_pages_contig(struct address_space *mapping, pgoff_t start,
 			       unsigned int nr_pages, struct page **pages);
 unsigned find_get_pages_range_tag(struct address_space *mapping, pgoff_t *index,
-			pgoff_t end, int tag, unsigned int nr_pages,
+			pgoff_t end, xa_tag_t tag, unsigned int nr_pages,
 			struct page **pages);
 static inline unsigned find_get_pages_tag(struct address_space *mapping,
-			pgoff_t *index, int tag, unsigned int nr_pages,
+			pgoff_t *index, xa_tag_t tag, unsigned int nr_pages,
 			struct page **pages)
 {
 	return find_get_pages_range_tag(mapping, index, (pgoff_t)-1, tag,
 					nr_pages, pages);
 }
 unsigned find_get_entries_tag(struct address_space *mapping, pgoff_t start,
-			int tag, unsigned int nr_entries,
+			xa_tag_t tag, unsigned int nr_entries,
 			struct page **entries, pgoff_t *indices);
 
 struct page *grab_cache_page_write_begin(struct address_space *mapping,
diff --git a/mm/filemap.c b/mm/filemap.c
index 0223f8054e3a..fa8aa015096e 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -1400,6 +1400,32 @@ bool page_cache_range_empty(struct address_space *mapping, pgoff_t index,
 }
 EXPORT_SYMBOL_GPL(page_cache_range_empty);
 
+/*
+ * page_cache_pin() - Try to pin a page in the page cache.
+ * @xas: The XArray operation state.
+ * @pagep: The page which has been previously found at this location.
+ *
+ * On success, the page has an elevated refcount, but is not locked.
+ * This implements the lockless pagecache protocol as described in
+ * include/linux/pagemap.h; see page_cache_get_speculative().
+ *
+ * Return: True if the page is still in the cache.
+ */
+static bool page_cache_pin(struct xa_state *xas, struct page *page)
+{
+	struct page *head = compound_head(page);
+	bool got = page_cache_get_speculative(head);
+
+	if (likely(got && (xas_reload(xas) == page) &&
+				(compound_head(page) == head)))
+		return true;
+
+	if (got)
+		put_page(head);
+	xas_reset(xas);
+	return false;
+}
+
 /**
  * find_get_entry - find and get a page cache entry
  * @mapping: the address_space to search
@@ -1415,51 +1441,21 @@ EXPORT_SYMBOL_GPL(page_cache_range_empty);
  */
 struct page *find_get_entry(struct address_space *mapping, pgoff_t offset)
 {
-	void **pagep;
-	struct page *head, *page;
+	XA_STATE(xas, &mapping->pages, offset);
+	struct page *page;
 
 	rcu_read_lock();
-repeat:
-	page = NULL;
-	pagep = radix_tree_lookup_slot(&mapping->pages, offset);
-	if (pagep) {
-		page = radix_tree_deref_slot(pagep);
-		if (unlikely(!page))
-			goto out;
-		if (radix_tree_exception(page)) {
-			if (radix_tree_deref_retry(page))
-				goto repeat;
-			/*
-			 * A shadow entry of a recently evicted page,
-			 * or a swap entry from shmem/tmpfs.  Return
-			 * it without attempting to raise page count.
-			 */
-			goto out;
-		}
-
-		head = compound_head(page);
-		if (!page_cache_get_speculative(head))
-			goto repeat;
-
-		/* The page was split under us? */
-		if (compound_head(page) != head) {
-			put_page(head);
-			goto repeat;
-		}
+	do {
+		page = xas_load(&xas);
+		if (xas_retry(&xas, page))
+			continue;
+		if (!page || xa_is_value(page))
+			break;
+		if (!page_cache_pin(&xas, page))
+			continue;
+	} while (0);
 
-		/*
-		 * Has the page moved?
-		 * This is part of the lockless pagecache protocol. See
-		 * include/linux/pagemap.h for details.
-		 */
-		if (unlikely(page != *pagep)) {
-			put_page(head);
-			goto repeat;
-		}
-	}
-out:
 	rcu_read_unlock();
-
 	return page;
 }
 EXPORT_SYMBOL(find_get_entry);
@@ -1486,7 +1482,7 @@ struct page *find_lock_entry(struct address_space *mapping, pgoff_t offset)
 
 repeat:
 	page = find_get_entry(mapping, offset);
-	if (page && !radix_tree_exception(page)) {
+	if (page && !xa_is_value(page)) {
 		lock_page(page);
 		/* Has the page been truncated? */
 		if (unlikely(page_mapping(page) != mapping)) {
@@ -1619,50 +1615,21 @@ unsigned find_get_entries(struct address_space *mapping,
 			  pgoff_t start, unsigned int nr_entries,
 			  struct page **entries, pgoff_t *indices)
 {
-	void **slot;
+	XA_STATE(xas, &mapping->pages, start);
+	struct page *page;
 	unsigned int ret = 0;
-	struct radix_tree_iter iter;
 
 	if (!nr_entries)
 		return 0;
 
 	rcu_read_lock();
-	radix_tree_for_each_slot(slot, &mapping->pages, &iter, start) {
-		struct page *head, *page;
-repeat:
-		page = radix_tree_deref_slot(slot);
-		if (unlikely(!page))
+	xas_for_each(&xas, page, ULONG_MAX) {
+		if (xas_retry(&xas, page))
+			continue;
+		if (!xa_is_value(page) && !page_cache_pin(&xas, page))
 			continue;
-		if (radix_tree_exception(page)) {
-			if (radix_tree_deref_retry(page)) {
-				slot = radix_tree_iter_retry(&iter);
-				continue;
-			}
-			/*
-			 * A shadow entry of a recently evicted page, a swap
-			 * entry from shmem/tmpfs or a DAX entry.  Return it
-			 * without attempting to raise page count.
-			 */
-			goto export;
-		}
-
-		head = compound_head(page);
-		if (!page_cache_get_speculative(head))
-			goto repeat;
-
-		/* The page was split under us? */
-		if (compound_head(page) != head) {
-			put_page(head);
-			goto repeat;
-		}
-		/* Has the page moved? */
-		if (unlikely(page != *slot)) {
-			put_page(head);
-			goto repeat;
-		}
-export:
-		indices[ret] = iter.index;
+		indices[ret] = xas.xa_index;
 		entries[ret] = page;
 		if (++ret == nr_entries)
 			break;
@@ -1696,56 +1663,26 @@ unsigned find_get_pages_range(struct address_space *mapping, pgoff_t *start,
 			      pgoff_t end, unsigned int nr_pages,
 			      struct page **pages)
 {
-	struct radix_tree_iter iter;
-	void **slot;
+	XA_STATE(xas, &mapping->pages, *start);
+	struct page *page;
 	unsigned ret = 0;
 
 	if (unlikely(!nr_pages))
 		return 0;
 
 	rcu_read_lock();
-	radix_tree_for_each_slot(slot, &mapping->pages, &iter, *start) {
-		struct page *head, *page;
-
-		if (iter.index > end)
-			break;
-repeat:
-		page = radix_tree_deref_slot(slot);
-		if (unlikely(!page))
+	xas_for_each(&xas, page, end) {
+		if (xas_retry(&xas, page))
 			continue;
-
-		if (radix_tree_exception(page)) {
-			if (radix_tree_deref_retry(page)) {
-				slot = radix_tree_iter_retry(&iter);
-				continue;
-			}
-			/*
-			 * A shadow entry of a recently evicted page,
-			 * or a swap entry from shmem/tmpfs.  Skip
-			 * over it.
-			 */
+		/* Skip over shadow or swap entries */
+		if (xa_is_value(page))
+			continue;
+		if (!page_cache_pin(&xas, page))
 			continue;
-		}
-
-		head = compound_head(page);
-		if (!page_cache_get_speculative(head))
-			goto repeat;
-
-		/* The page was split under us? */
-		if (compound_head(page) != head) {
-			put_page(head);
-			goto repeat;
-		}
-
-		/* Has the page moved? */
-		if (unlikely(page != *slot)) {
-			put_page(head);
-			goto repeat;
-		}
 
 		pages[ret] = page;
 		if (++ret == nr_pages) {
-			*start = pages[ret - 1]->index + 1;
+			*start = page->index + 1;
 			goto out;
 		}
 	}
@@ -1753,7 +1690,7 @@ unsigned find_get_pages_range(struct address_space *mapping, pgoff_t *start,
 	/*
 	 * We come here when there is no page beyond @end. We take care to not
 	 * overflow the index @start as it confuses some of the callers. This
-	 * breaks the iteration when there is page at index -1 but that is
+	 * breaks the iteration when there is a page at index -1 but that is
 	 * already broken anyway.
 	 */
 	if (end == (pgoff_t)-1)
@@ -1781,57 +1718,28 @@ unsigned find_get_pages_range(struct address_space *mapping, pgoff_t *start,
 unsigned find_get_pages_contig(struct address_space *mapping, pgoff_t index,
 			       unsigned int nr_pages, struct page **pages)
 {
-	struct radix_tree_iter iter;
-	void **slot;
+	XA_STATE(xas, &mapping->pages, index);
+	struct page *page;
 	unsigned int ret = 0;
 
 	if (unlikely(!nr_pages))
 		return 0;
 
 	rcu_read_lock();
-	radix_tree_for_each_contig(slot, &mapping->pages, &iter, index) {
-		struct page *head, *page;
-repeat:
-		page = radix_tree_deref_slot(slot);
-		/* The hole, there no reason to continue */
-		if (unlikely(!page))
-			break;
-
-		if (radix_tree_exception(page)) {
-			if (radix_tree_deref_retry(page)) {
-				slot = radix_tree_iter_retry(&iter);
-				continue;
-			}
-			/*
-			 * A shadow entry of a recently evicted page,
-			 * or a swap entry from shmem/tmpfs.  Stop
-			 * looking for contiguous pages.
-			 */
+	for (page = xas_load(&xas); page; page = xas_next(&xas)) {
+		if (xas_retry(&xas, page))
+			continue;
+		if (xa_is_value(page))
 			break;
-		}
-
-		head = compound_head(page);
-		if (!page_cache_get_speculative(head))
-			goto repeat;
-
-		/* The page was split under us? */
-		if (compound_head(page) != head) {
-			put_page(head);
-			goto repeat;
-		}
-
-		/* Has the page moved? */
-		if (unlikely(page != *slot)) {
-			put_page(head);
-			goto repeat;
-		}
+		if (!page_cache_pin(&xas, page))
+			continue;
 
 		/*
 		 * must check mapping and index after taking the ref.
 		 * otherwise we can get both false positives and false
 		 * negatives, which is just confusing to the caller.
 		 */
-		if (page->mapping == NULL || page_to_pgoff(page) != iter.index) {
+		if (!page->mapping || page_to_pgoff(page) != xas.xa_index) {
 			put_page(page);
 			break;
 		}
@@ -1858,74 +1766,42 @@ EXPORT_SYMBOL(find_get_pages_contig);
  * @tag.  We update @index to index the next page for the traversal.
  */
 unsigned find_get_pages_range_tag(struct address_space *mapping, pgoff_t *index,
-			pgoff_t end, int tag, unsigned int nr_pages,
+			pgoff_t end, xa_tag_t tag, unsigned int nr_pages,
 			struct page **pages)
 {
-	struct radix_tree_iter iter;
-	void **slot;
+	XA_STATE(xas, &mapping->pages, *index);
+	struct page *page;
 	unsigned ret = 0;
 
 	if (unlikely(!nr_pages))
 		return 0;
 
 	rcu_read_lock();
-	radix_tree_for_each_tagged(slot, &mapping->pages, &iter, *index, tag) {
-		struct page *head, *page;
-
-		if (iter.index > end)
-			break;
-repeat:
-		page = radix_tree_deref_slot(slot);
-		if (unlikely(!page))
+	xas_for_each_tag(&xas, page, end, tag) {
+		if (xas_retry(&xas, page))
 			continue;
-
-		if (radix_tree_exception(page)) {
-			if (radix_tree_deref_retry(page)) {
-				slot = radix_tree_iter_retry(&iter);
-				continue;
-			}
-			/*
-			 * A shadow entry of a recently evicted page.
-			 *
-			 * Those entries should never be tagged, but
-			 * this tree walk is lockless and the tags are
-			 * looked up in bulk, one radix tree node at a
-			 * time, so there is a sizable window for page
-			 * reclaim to evict a page we saw tagged.
-			 *
-			 * Skip over it.
-			 */
+		/*
+		 * Shadow entries should never be tagged, but this iteration
+		 * is lockless so there is a window for page reclaim to evict
+		 * a page we saw tagged.  Skip over it.
+		 */
+		if (xa_is_value(page))
+			continue;
+		if (!page_cache_pin(&xas, page))
 			continue;
-		}
-
-		head = compound_head(page);
-		if (!page_cache_get_speculative(head))
-			goto repeat;
-
-		/* The page was split under us? */
-		if (compound_head(page) != head) {
-			put_page(head);
-			goto repeat;
-		}
-
-		/* Has the page moved? */
-		if (unlikely(page != *slot)) {
-			put_page(head);
-			goto repeat;
-		}
 
 		pages[ret] = page;
 		if (++ret == nr_pages) {
-			*index = pages[ret - 1]->index + 1;
+			*index = page->index + 1;
 			goto out;
 		}
 	}
 
 	/*
-	 * We come here when we got at @end. We take care to not overflow the
+	 * We come here when we got to @end. We take care to not overflow the
 	 * index @index as it confuses some of the callers. This breaks the
-	 * iteration when there is page at index -1 but that is already broken
-	 * anyway.
+	 * iteration when there is a page at index -1 but that is already
+	 * broken anyway.
 	 */
 	if (end == (pgoff_t)-1)
 		*index = (pgoff_t)-1;
@@ -1951,54 +1827,24 @@ EXPORT_SYMBOL(find_get_pages_range_tag);
  * @tag.
  */
 unsigned find_get_entries_tag(struct address_space *mapping, pgoff_t start,
-			int tag, unsigned int nr_entries,
+			xa_tag_t tag, unsigned int nr_entries,
 			struct page **entries, pgoff_t *indices)
 {
-	void **slot;
+	XA_STATE(xas, &mapping->pages, start);
+	struct page *page;
 	unsigned int ret = 0;
-	struct radix_tree_iter iter;
 
 	if (!nr_entries)
 		return 0;
 
 	rcu_read_lock();
-	radix_tree_for_each_tagged(slot, &mapping->pages, &iter, start, tag) {
-		struct page *head, *page;
-repeat:
-		page = radix_tree_deref_slot(slot);
-		if (unlikely(!page))
+	xas_for_each_tag(&xas, page, ULONG_MAX, tag) {
+		if (xas_retry(&xas, page))
+			continue;
+		if (!xa_is_value(page) && !page_cache_pin(&xas, page))
 			continue;
-		if (radix_tree_exception(page)) {
-			if (radix_tree_deref_retry(page)) {
-				slot = radix_tree_iter_retry(&iter);
-				continue;
-			}
-
-			/*
-			 * A shadow entry of a recently evicted page, a swap
-			 * entry from shmem/tmpfs or a DAX entry.  Return it
-			 * without attempting to raise page count.
-			 */
-			goto export;
-		}
-
-		head = compound_head(page);
-		if (!page_cache_get_speculative(head))
-			goto repeat;
-
-		/* The page was split under us? */
-		if (compound_head(page) != head) {
-			put_page(head);
-			goto repeat;
-		}
-		/* Has the page moved? */
-		if (unlikely(page != *slot)) {
-			put_page(head);
-			goto repeat;
-		}
-export:
-		indices[ret] = iter.index;
+		indices[ret] = xas.xa_index;
 		entries[ret] = page;
 		if (++ret == nr_entries)
 			break;
@@ -2607,45 +2453,21 @@ EXPORT_SYMBOL(filemap_fault);
 void filemap_map_pages(struct vm_fault *vmf,
 		pgoff_t start_pgoff, pgoff_t end_pgoff)
 {
-	struct radix_tree_iter iter;
-	void **slot;
 	struct file *file = vmf->vma->vm_file;
 	struct address_space *mapping = file->f_mapping;
 	pgoff_t last_pgoff = start_pgoff;
 	unsigned long max_idx;
-	struct page *head, *page;
+	XA_STATE(xas, &mapping->pages, start_pgoff);
+	struct page *page;
 
 	rcu_read_lock();
-	radix_tree_for_each_slot(slot, &mapping->pages, &iter, start_pgoff) {
-		if (iter.index > end_pgoff)
-			break;
-repeat:
-		page = radix_tree_deref_slot(slot);
-		if (unlikely(!page))
-			goto next;
-		if (radix_tree_exception(page)) {
-			if (radix_tree_deref_retry(page)) {
-				slot = radix_tree_iter_retry(&iter);
-				continue;
-			}
+	xas_for_each(&xas, page, end_pgoff) {
+		if (xas_retry(&xas, page))
+			continue;
+		if (xa_is_value(page))
 			goto next;
-		}
-
-		head = compound_head(page);
-		if (!page_cache_get_speculative(head))
-			goto repeat;
-
-		/* The page was split under us? */
-		if (compound_head(page) != head) {
-			put_page(head);
-			goto repeat;
-		}
-
-		/* Has the page moved? */
-		if (unlikely(page != *slot)) {
-			put_page(head);
-			goto repeat;
-		}
+		if (!page_cache_pin(&xas, page))
+			continue;
 
 		if (!PageUptodate(page) ||
 				PageReadahead(page) ||
@@ -2664,10 +2486,10 @@ void filemap_map_pages(struct vm_fault *vmf,
 		if (file->f_ra.mmap_miss > 0)
 			file->f_ra.mmap_miss--;
 
-		vmf->address += (iter.index - last_pgoff) << PAGE_SHIFT;
+		vmf->address += (xas.xa_index - last_pgoff) << PAGE_SHIFT;
 		if (vmf->pte)
-			vmf->pte += iter.index - last_pgoff;
-		last_pgoff = iter.index;
+			vmf->pte += xas.xa_index - last_pgoff;
+		last_pgoff = xas.xa_index;
 		if (alloc_set_pte(vmf, NULL, page))
 			goto unlock;
 		unlock_page(page);
@@ -2680,8 +2502,6 @@ void filemap_map_pages(struct vm_fault *vmf,
 		/* Huge page is mapped? No need to proceed. */
 		if (pmd_trans_huge(*vmf->pmd))
 			break;
-		if (iter.index == end_pgoff)
-			break;
 	}
 	rcu_read_unlock();
 }
diff --git a/tools/testing/radix-tree/regression1.c b/tools/testing/radix-tree/regression1.c
index 0aece092f40e..cd84d7de45e6 100644
--- a/tools/testing/radix-tree/regression1.c
+++ b/tools/testing/radix-tree/regression1.c
@@ -58,7 +58,7 @@ static struct page *page_alloc(void)
 	struct page *p;
 	p = malloc(sizeof(struct page));
 	p->count = 1;
-	p->index = 1;
+	p->index = (unsigned long)p;
 	pthread_mutex_init(&p->lock, NULL);
 
 	return p;
@@ -77,53 +77,37 @@ static void page_free(struct page *p)
 	call_rcu(&p->rcu, page_rcu_free);
 }
 
+static bool page_cache_pin(struct xa_state *xas, struct page *page)
+{
+	pthread_mutex_lock(&page->lock);
+	if (!page->count) {
+		pthread_mutex_unlock(&page->lock);
+		goto fail;
+	}
+	/* don't actually update page refcount */
+	pthread_mutex_unlock(&page->lock);
+
+	/* Has the page moved? */
+	if (xas_reload(xas) == page)
+		return true;
+fail:
+	xas_reset(xas);
+	return false;
+}
+
 static unsigned find_get_pages(unsigned long start,
 			    unsigned int nr_pages, struct page **pages)
 {
-	unsigned int i;
-	unsigned int ret;
-	unsigned int nr_found;
+	XA_STATE(xas, &mt_tree, start);
+	struct page *page;
+	unsigned int ret = 0;
 
 	rcu_read_lock();
-restart:
-	nr_found = radix_tree_gang_lookup_slot(&mt_tree,
-				(void ***)pages, NULL, start, nr_pages);
-	ret = 0;
-	for (i = 0; i < nr_found; i++) {
-		struct page *page;
-repeat:
-		page = radix_tree_deref_slot((void **)pages[i]);
-		if (unlikely(!page))
+	xas_for_each(&xas, page, ULONG_MAX) {
+		if (xas_retry(&xas, page))
+			continue;
+		if (!page_cache_pin(&xas, page))
 			continue;
-
-		if (radix_tree_exception(page)) {
-			if (radix_tree_deref_retry(page)) {
-				/*
-				 * Transient condition which can only trigger
-				 * when entry at index 0 moves out of or back
-				 * to root: none yet gotten, safe to restart.
-				 */
-				assert((start | i) == 0);
-				goto restart;
-			}
-			/*
-			 * No exceptional entries are inserted in this test.
-			 */
-			assert(0);
-		}
-
-		pthread_mutex_lock(&page->lock);
-		if (!page->count) {
-			pthread_mutex_unlock(&page->lock);
-			goto repeat;
-		}
-		/* don't actually update page refcount */
-		pthread_mutex_unlock(&page->lock);
-
-		/* Has the page moved? */
-		if (unlikely(page != *((void **)pages[i]))) {
-			goto repeat;
-		}
 
 		pages[ret] = page;
 		ret++;
-- 
2.16.1