From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman , stable@vger.kernel.org, Hugh Dickins ,
Shutemov" , Jerome Glisse , Konstantin Khlebnikov , Matthew Wilcox , Andrew Morton , Linus Torvalds , Sasha Levin Subject: [PATCH 4.19 007/139] mm/khugepaged: minor reorderings in collapse_shmem() Date: Tue, 4 Dec 2018 11:48:08 +0100 Message-Id: <20181204103650.262956844@linuxfoundation.org> X-Mailer: git-send-email 2.19.2 In-Reply-To: <20181204103649.950154335@linuxfoundation.org> References: <20181204103649.950154335@linuxfoundation.org> User-Agent: quilt/0.65 X-stable: review X-Patchwork-Hint: ignore MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit Sender: linux-kernel-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org 4.19-stable review patch. If anyone has any objections, please let me know. ------------------ commit 042a30824871fa3149b0127009074b75cc25863c upstream. Several cleanups in collapse_shmem(): most of which probably do not really matter, beyond doing things in a more familiar and reassuring order. Simplify the failure gotos in the main loop, and on success update stats while interrupts still disabled from the last iteration. Link: http://lkml.kernel.org/r/alpine.LSU.2.11.1811261526400.2275@eggly.anvils Fixes: f3f0e1d2150b2 ("khugepaged: add support of collapse for tmpfs/shmem pages") Signed-off-by: Hugh Dickins Acked-by: Kirill A. Shutemov Cc: Jerome Glisse Cc: Konstantin Khlebnikov Cc: Matthew Wilcox Cc: [4.8+] Signed-off-by: Andrew Morton Signed-off-by: Linus Torvalds Signed-off-by: Sasha Levin --- mm/khugepaged.c | 73 ++++++++++++++++++++----------------------------- 1 file changed, 30 insertions(+), 43 deletions(-) diff --git a/mm/khugepaged.c b/mm/khugepaged.c index 068868763b78..d0a347e6fd08 100644 --- a/mm/khugepaged.c +++ b/mm/khugepaged.c @@ -1330,13 +1330,12 @@ static void collapse_shmem(struct mm_struct *mm, goto out; } + __SetPageLocked(new_page); + __SetPageSwapBacked(new_page); new_page->index = start; new_page->mapping = mapping; - __SetPageSwapBacked(new_page); - __SetPageLocked(new_page); BUG_ON(!page_ref_freeze(new_page, 1)); - /* * At this point the new_page is 'frozen' (page_count() is zero), locked * and not up-to-date. It's safe to insert it into radix tree, because @@ -1365,13 +1364,13 @@ static void collapse_shmem(struct mm_struct *mm, */ if (n && !shmem_charge(mapping->host, n)) { result = SCAN_FAIL; - break; + goto tree_locked; } - nr_none += n; for (; index < min(iter.index, end); index++) { radix_tree_insert(&mapping->i_pages, index, new_page + (index % HPAGE_PMD_NR)); } + nr_none += n; /* We are done. 
 		/* We are done. */
 		if (index >= end)
@@ -1387,12 +1386,12 @@ static void collapse_shmem(struct mm_struct *mm,
 				result = SCAN_FAIL;
 				goto tree_unlocked;
 			}
-			xa_lock_irq(&mapping->i_pages);
 		} else if (trylock_page(page)) {
 			get_page(page);
+			xa_unlock_irq(&mapping->i_pages);
 		} else {
 			result = SCAN_PAGE_LOCK;
-			break;
+			goto tree_locked;
 		}
 
 		/*
@@ -1407,11 +1406,10 @@ static void collapse_shmem(struct mm_struct *mm,
 			result = SCAN_TRUNCATED;
 			goto out_unlock;
 		}
-		xa_unlock_irq(&mapping->i_pages);
 
 		if (isolate_lru_page(page)) {
 			result = SCAN_DEL_PAGE_LRU;
-			goto out_isolate_failed;
+			goto out_unlock;
 		}
 
 		if (page_mapped(page))
@@ -1432,7 +1430,9 @@ static void collapse_shmem(struct mm_struct *mm,
 		 */
 		if (!page_ref_freeze(page, 3)) {
 			result = SCAN_PAGE_COUNT;
-			goto out_lru;
+			xa_unlock_irq(&mapping->i_pages);
+			putback_lru_page(page);
+			goto out_unlock;
 		}
 
 		/*
@@ -1448,17 +1448,10 @@ static void collapse_shmem(struct mm_struct *mm,
 		slot = radix_tree_iter_resume(slot, &iter);
 		index++;
 		continue;
-out_lru:
-		xa_unlock_irq(&mapping->i_pages);
-		putback_lru_page(page);
-out_isolate_failed:
-		unlock_page(page);
-		put_page(page);
-		goto tree_unlocked;
 out_unlock:
 		unlock_page(page);
 		put_page(page);
-		break;
+		goto tree_unlocked;
 	}
 
 	/*
@@ -1466,7 +1459,7 @@ static void collapse_shmem(struct mm_struct *mm,
 	 * This code only triggers if there's nothing in radix tree
 	 * beyond 'end'.
 	 */
-	if (result == SCAN_SUCCEED && index < end) {
+	if (index < end) {
 		int n = end - index;
 
 		/* Stop if extent has been truncated, and is now empty */
@@ -1478,7 +1471,6 @@ static void collapse_shmem(struct mm_struct *mm,
 			result = SCAN_FAIL;
 			goto tree_locked;
 		}
-
 		for (; index < end; index++) {
 			radix_tree_insert(&mapping->i_pages, index,
 					new_page + (index % HPAGE_PMD_NR));
@@ -1486,14 +1478,19 @@ static void collapse_shmem(struct mm_struct *mm,
 		nr_none += n;
 	}
 
+	__inc_node_page_state(new_page, NR_SHMEM_THPS);
+	if (nr_none) {
+		struct zone *zone = page_zone(new_page);
+
+		__mod_node_page_state(zone->zone_pgdat, NR_FILE_PAGES, nr_none);
+		__mod_node_page_state(zone->zone_pgdat, NR_SHMEM, nr_none);
+	}
+
 tree_locked:
 	xa_unlock_irq(&mapping->i_pages);
 tree_unlocked:
 
 	if (result == SCAN_SUCCEED) {
-		unsigned long flags;
-		struct zone *zone = page_zone(new_page);
-
 		/*
 		 * Replacing old pages with new one has succeed, now we need to
 		 * copy the content and free old pages.
@@ -1507,11 +1504,11 @@ static void collapse_shmem(struct mm_struct *mm,
 			copy_highpage(new_page + (page->index % HPAGE_PMD_NR),
 					page);
 			list_del(&page->lru);
-			unlock_page(page);
-			page_ref_unfreeze(page, 1);
 			page->mapping = NULL;
+			page_ref_unfreeze(page, 1);
 			ClearPageActive(page);
 			ClearPageUnevictable(page);
+			unlock_page(page);
 			put_page(page);
 			index++;
 		}
@@ -1520,28 +1517,17 @@ static void collapse_shmem(struct mm_struct *mm,
 			index++;
 		}
 
-		local_irq_save(flags);
-		__inc_node_page_state(new_page, NR_SHMEM_THPS);
-		if (nr_none) {
-			__mod_node_page_state(zone->zone_pgdat, NR_FILE_PAGES, nr_none);
-			__mod_node_page_state(zone->zone_pgdat, NR_SHMEM, nr_none);
-		}
-		local_irq_restore(flags);
-
-		/*
-		 * Remove pte page tables, so we can re-faulti
-		 * the page as huge.
-		 */
-		retract_page_tables(mapping, start);
-
 		/* Everything is ready, let's unfreeze the new_page */
-		set_page_dirty(new_page);
 		SetPageUptodate(new_page);
 		page_ref_unfreeze(new_page, HPAGE_PMD_NR);
+		set_page_dirty(new_page);
 		mem_cgroup_commit_charge(new_page, memcg, false, true);
 		lru_cache_add_anon(new_page);
-		unlock_page(new_page);
 
+		/*
+		 * Remove pte page tables, so we can re-fault the page as huge.
+		 */
+		retract_page_tables(mapping, start);
 		*hpage = NULL;
 
 		khugepaged_pages_collapsed++;
@@ -1573,8 +1559,8 @@ static void collapse_shmem(struct mm_struct *mm,
 			radix_tree_replace_slot(&mapping->i_pages, slot, page);
 			slot = radix_tree_iter_resume(slot, &iter);
 			xa_unlock_irq(&mapping->i_pages);
-			putback_lru_page(page);
 			unlock_page(page);
+			putback_lru_page(page);
 			xa_lock_irq(&mapping->i_pages);
 		}
 		VM_BUG_ON(nr_none);
@@ -1583,9 +1569,10 @@ static void collapse_shmem(struct mm_struct *mm,
 		/* Unfreeze new_page, caller would take care about freeing it */
 		page_ref_unfreeze(new_page, 1);
 		mem_cgroup_cancel_charge(new_page, memcg, true);
-		unlock_page(new_page);
 		new_page->mapping = NULL;
 	}
+
+	unlock_page(new_page);
 out:
 	VM_BUG_ON(!list_empty(&pagelist));
 	/* TODO: tracepoints */
-- 
2.17.1
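
[Editorial aside, not part of the patch] To illustrate the error-path pattern the commit message describes -- every failure inside the main loop jumps to a single label while the lock is still held, and success-path statistics are updated before the lock is dropped -- here is a minimal stand-alone C sketch. It is not kernel code; all names in it (fake_lock, grab_item, collect_items, items_collected) are hypothetical.

/*
 * Minimal user-space sketch of the goto-based error handling used above:
 * every failure inside the loop jumps to one label with the "lock" still
 * held, and the summary stat is updated before unlocking on success.
 */
#include <stdbool.h>
#include <stdio.h>

static int items_collected;	/* stat updated while "locked" */

static void fake_lock(void)   { /* pretend to take a lock */ }
static void fake_unlock(void) { /* pretend to release it  */ }

/* Pretend to grab one item; fail on item 7 to exercise the error path. */
static bool grab_item(int i)
{
	return i != 7;
}

static int collect_items(int nr)
{
	int result = 0;
	int i;

	fake_lock();
	for (i = 0; i < nr; i++) {
		if (!grab_item(i)) {
			result = -1;
			goto out_locked;	/* single failure exit, lock held */
		}
		items_collected++;
	}
	/* Success: report the stat while still "locked". */
	printf("collected %d items under lock\n", items_collected);

out_locked:
	fake_unlock();
	return result;
}

int main(void)
{
	if (collect_items(10))
		fprintf(stderr, "failed after %d items\n", items_collected);
	return 0;
}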