From: Matthew Wilcox <willy@infradead.org>
To: linux-mm@kvack.org, linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org
Cc: Matthew Wilcox, Jan Kara, Jeff Layton, Lukas Czerner, Ross Zwisler,
    Christoph Hellwig, Goldwyn Rodrigues, Nicholas Piggin, Ryusuke Konishi,
    linux-nilfs@vger.kernel.org, Jaegeuk Kim, Chao Yu,
    linux-f2fs-devel@lists.sourceforge.net
Subject: [PATCH v14 41/74] mm: Convert page migration to XArray
Date: Sat, 16 Jun 2018 19:00:19 -0700
Message-Id: <20180617020052.4759-42-willy@infradead.org>
X-Mailer: git-send-email 2.14.3
In-Reply-To: <20180617020052.4759-1-willy@infradead.org>
References: <20180617020052.4759-1-willy@infradead.org>

Signed-off-by: Matthew Wilcox <willy@infradead.org>
---
 mm/migrate.c | 48 ++++++++++++++++++------------------------------
 1 file changed, 18 insertions(+), 30 deletions(-)

diff --git a/mm/migrate.c b/mm/migrate.c
index 8c0af0f7cab1..80dc5b8c9738 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -323,7 +323,7 @@ void __migration_entry_wait(struct mm_struct *mm, pte_t *ptep,
 	page = migration_entry_to_page(entry);
 
 	/*
-	 * Once radix-tree replacement of page migration started, page_count
+	 * Once page cache replacement of page migration started, page_count
 	 * *must* be zero. And, we don't want to call wait_on_page_locked()
 	 * against a page without get_page().
 	 * So, we use get_page_unless_zero(), here. Even failed, page fault
@@ -438,10 +438,10 @@ int migrate_page_move_mapping(struct address_space *mapping,
 		struct buffer_head *head, enum migrate_mode mode,
 		int extra_count)
 {
+	XA_STATE(xas, &mapping->i_pages, page_index(page));
 	struct zone *oldzone, *newzone;
 	int dirty;
 	int expected_count = 1 + extra_count;
-	void **pslot;
 
 	/*
 	 * Device public or private pages have an extra refcount as they are
@@ -467,21 +467,16 @@ int migrate_page_move_mapping(struct address_space *mapping,
 	oldzone = page_zone(page);
 	newzone = page_zone(newpage);
 
-	xa_lock_irq(&mapping->i_pages);
-
-	pslot = radix_tree_lookup_slot(&mapping->i_pages,
-					page_index(page));
+	xas_lock_irq(&xas);
 
 	expected_count += hpage_nr_pages(page) + page_has_private(page);
-	if (page_count(page) != expected_count ||
-		radix_tree_deref_slot_protected(pslot,
-					&mapping->i_pages.xa_lock) != page) {
-		xa_unlock_irq(&mapping->i_pages);
+	if (page_count(page) != expected_count || xas_load(&xas) != page) {
+		xas_unlock_irq(&xas);
 		return -EAGAIN;
 	}
 
 	if (!page_ref_freeze(page, expected_count)) {
-		xa_unlock_irq(&mapping->i_pages);
+		xas_unlock_irq(&xas);
 		return -EAGAIN;
 	}
 
@@ -495,7 +490,7 @@ int migrate_page_move_mapping(struct address_space *mapping,
 	if (mode == MIGRATE_ASYNC && head &&
 			!buffer_migrate_lock_buffers(head, mode)) {
 		page_ref_unfreeze(page, expected_count);
-		xa_unlock_irq(&mapping->i_pages);
+		xas_unlock_irq(&xas);
 		return -EAGAIN;
 	}
 
@@ -523,16 +518,13 @@ int migrate_page_move_mapping(struct address_space *mapping,
 		SetPageDirty(newpage);
 	}
 
-	radix_tree_replace_slot(&mapping->i_pages, pslot, newpage);
+	xas_store(&xas, newpage);
 	if (PageTransHuge(page)) {
 		int i;
-		int index = page_index(page);
 
 		for (i = 1; i < HPAGE_PMD_NR; i++) {
-			pslot = radix_tree_lookup_slot(&mapping->i_pages,
-						       index + i);
-			radix_tree_replace_slot(&mapping->i_pages, pslot,
-						newpage + i);
+			xas_next(&xas);
+			xas_store(&xas, newpage + i);
 		}
 	}
 
@@ -543,7 +535,7 @@ int migrate_page_move_mapping(struct address_space *mapping,
 	 */
 	page_ref_unfreeze(page, expected_count - hpage_nr_pages(page));
 
-	xa_unlock(&mapping->i_pages);
+	xas_unlock(&xas);
 	/* Leave irq disabled to prevent preemption while updating stats */
 
 	/*
@@ -583,22 +575,18 @@ EXPORT_SYMBOL(migrate_page_move_mapping);
 int migrate_huge_page_move_mapping(struct address_space *mapping,
 				   struct page *newpage, struct page *page)
 {
+	XA_STATE(xas, &mapping->i_pages, page_index(page));
 	int expected_count;
-	void **pslot;
-
-	xa_lock_irq(&mapping->i_pages);
-
-	pslot = radix_tree_lookup_slot(&mapping->i_pages, page_index(page));
 
+	xas_lock_irq(&xas);
 	expected_count = 2 + page_has_private(page);
-	if (page_count(page) != expected_count ||
-		radix_tree_deref_slot_protected(pslot, &mapping->i_pages.xa_lock) != page) {
-		xa_unlock_irq(&mapping->i_pages);
+	if (page_count(page) != expected_count || xas_load(&xas) != page) {
+		xas_unlock_irq(&xas);
 		return -EAGAIN;
 	}
 
 	if (!page_ref_freeze(page, expected_count)) {
-		xa_unlock_irq(&mapping->i_pages);
+		xas_unlock_irq(&xas);
 		return -EAGAIN;
 	}
 
@@ -607,11 +595,11 @@ int migrate_huge_page_move_mapping(struct address_space *mapping,
 
 	get_page(newpage);
 
-	radix_tree_replace_slot(&mapping->i_pages, pslot, newpage);
+	xas_store(&xas, newpage);
 
 	page_ref_unfreeze(page, expected_count - 1);
 
-	xa_unlock_irq(&mapping->i_pages);
+	xas_unlock_irq(&xas);
 
 	return MIGRATEPAGE_SUCCESS;
 }
-- 
2.17.1
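
Note for readers new to the XArray: the conversion above uses the xas_* "advanced" API, where an XA_STATE on the stack remembers the lookup position, so an entry can be verified with xas_load() and then replaced with xas_store() without walking the tree a second time. The snippet below is a minimal sketch of that pattern, not part of the patch; the function name and its caller contract are hypothetical, and only XA_STATE, xas_lock_irq, xas_load, xas_store and xas_unlock_irq are the real kernel API.

/*
 * Hypothetical illustration: replace @old with @new at @index only if
 * @old is still present, mirroring the check-then-store sequence in
 * migrate_page_move_mapping() above.
 */
#include <linux/xarray.h>

static int example_replace_entry(struct xarray *xa, unsigned long index,
				 void *old, void *new)
{
	XA_STATE(xas, xa, index);	/* cursor positioned at @index */
	int err = 0;

	xas_lock_irq(&xas);		/* take xa_lock and disable irqs */
	if (xas_load(&xas) != old) {	/* walk once, cache the slot */
		err = -EAGAIN;
		goto unlock;
	}
	xas_store(&xas, new);		/* reuse the cached slot */
unlock:
	xas_unlock_irq(&xas);
	return err;
}

In the patch itself, the PageTransHuge() loop additionally calls xas_next() to step the same cursor to the following indices, rather than issuing a fresh lookup for every subpage as the radix tree code did.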