From: Alex Sierra <alex.sierra@amd.com>
Subject: [PATCH v1 13/15] mm: handling Non-LRU pages returned by vm_normal_pages
Date: Thu, 5 May 2022 16:34:36 -0500
Message-ID: <20220505213438.25064-14-alex.sierra@amd.com>
X-Mailer: git-send-email 2.32.0
In-Reply-To: <20220505213438.25064-1-alex.sierra@amd.com>
References: <20220505213438.25064-1-alex.sierra@amd.com>
MIME-Version: 1.0
Content-Type: text/plain
X-Mailing-List: linux-ext4@vger.kernel.org

With DEVICE_COHERENT, we'll soon have vm_normal_pages() return
device-managed anonymous pages that are not LRU pages. Although they
behave like normal pages for purposes of mapping in CPU page tables and
for COW, they do not support LRU lists, NUMA migration or THP.

We also introduce a FOLL_LRU flag that adds the same behaviour to
follow_page() and related APIs, to allow callers to specify that they
expect to put pages on an LRU list.

Signed-off-by: Alex Sierra
Acked-by: Felix Kuehling
---
 fs/proc/task_mmu.c | 2 +-
 include/linux/mm.h | 3 ++-
 mm/gup.c           | 2 ++
 mm/huge_memory.c   | 2 +-
 mm/khugepaged.c    | 9 ++++++---
 mm/ksm.c           | 6 +++---
 mm/madvise.c       | 4 ++--
 mm/memory.c        | 9 ++++++++-
 mm/mempolicy.c     | 2 +-
 mm/migrate.c       | 4 ++--
 mm/mlock.c         | 2 +-
 mm/mprotect.c      | 2 +-
 12 files changed, 30 insertions(+), 17 deletions(-)

diff --git a/fs/proc/task_mmu.c b/fs/proc/task_mmu.c
index f46060eb91b5..5d620733f173 100644
--- a/fs/proc/task_mmu.c
+++ b/fs/proc/task_mmu.c
@@ -1785,7 +1785,7 @@ static struct page *can_gather_numa_stats(pte_t pte, struct vm_area_struct *vma,
 		return NULL;

 	page = vm_normal_page(vma, addr, pte);
-	if (!page)
+	if (!page || is_zone_device_page(page))
 		return NULL;

 	if (PageReserved(page))
diff --git a/include/linux/mm.h b/include/linux/mm.h
index 9f44254af8ce..d7f253a0c41e 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -601,7 +601,7 @@ struct vm_operations_struct {
 #endif
 	/*
 	 * Called by vm_normal_page() for special PTEs to find the
-	 * page for @addr. This is useful if the default behavior
+	 * page for @addr. This is useful if the default behavior
 	 * (using pte_page()) would not find the correct page.
	 */
	struct page *(*find_special_page)(struct vm_area_struct *vma,
@@ -2929,6 +2929,7 @@ struct page *follow_page(struct vm_area_struct *vma, unsigned long address,
 #define FOLL_NUMA	0x200	/* force NUMA hinting page fault */
 #define FOLL_MIGRATION	0x400	/* wait for page to replace migration entry */
 #define FOLL_TRIED	0x800	/* a retry, previous pass started an IO */
+#define FOLL_LRU	0x1000	/* return only LRU (anon or page cache) */
 #define FOLL_REMOTE	0x2000	/* we are working on non-current tsk/mm */
 #define FOLL_COW	0x4000	/* internal GUP flag */
 #define FOLL_ANON	0x8000	/* don't do file mappings */
diff --git a/mm/gup.c b/mm/gup.c
index a214c8df7140..13bf04bce132 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -479,6 +479,8 @@ static struct page *follow_page_pte(struct vm_area_struct *vma,
 	}

 	page = vm_normal_page(vma, address, pte);
+	if ((flags & FOLL_LRU) && page && is_zone_device_page(page))
+		page = NULL;
 	if (!page && pte_devmap(pte) && (flags & (FOLL_GET | FOLL_PIN))) {
 		/*
 		 * Only return device mapping pages in the FOLL_GET or FOLL_PIN
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index c468fee595ff..dc572bc9fa1c 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -2851,7 +2851,7 @@ static int split_huge_pages_pid(int pid, unsigned long vaddr_start,
 		}

 		/* FOLL_DUMP to ignore special (like zero) pages */
-		page = follow_page(vma, addr, FOLL_GET | FOLL_DUMP);
+		page = follow_page(vma, addr, FOLL_GET | FOLL_DUMP | FOLL_LRU);

 		if (IS_ERR(page))
 			continue;
diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index a4e5eaf3eb01..eb3cfd679800 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -627,7 +627,7 @@ static int __collapse_huge_page_isolate(struct vm_area_struct *vma,
 			goto out;
 		}
 		page = vm_normal_page(vma, address, pteval);
-		if (unlikely(!page)) {
+		if (unlikely(!page) || unlikely(is_zone_device_page(page))) {
 			result = SCAN_PAGE_NULL;
 			goto out;
 		}
@@ -1276,7 +1276,7 @@ static int khugepaged_scan_pmd(struct mm_struct *mm,
 		writable = true;

 		page = vm_normal_page(vma, _address, pteval);
-		if (unlikely(!page)) {
+		if (unlikely(!page) || unlikely(is_zone_device_page(page))) {
 			result = SCAN_PAGE_NULL;
 			goto out_unmap;
 		}
@@ -1484,7 +1484,8 @@ void collapse_pte_mapped_thp(struct mm_struct *mm, unsigned long addr)
 		goto abort;

 	page = vm_normal_page(vma, addr, *pte);
-
+	if (page && is_zone_device_page(page))
+		page = NULL;
 	/*
 	 * Note that uprobe, debugger, or MAP_PRIVATE may change the
 	 * page table, but the new page will not be a subpage of hpage.
@@ -1502,6 +1503,8 @@ void collapse_pte_mapped_thp(struct mm_struct *mm, unsigned long addr)
 		if (pte_none(*pte))
 			continue;
 		page = vm_normal_page(vma, addr, *pte);
+		if (page && is_zone_device_page(page))
+			goto abort;
 		page_remove_rmap(page, vma, false);
 	}
diff --git a/mm/ksm.c b/mm/ksm.c
index 063a48eeb5ee..f16056efca21 100644
--- a/mm/ksm.c
+++ b/mm/ksm.c
@@ -474,7 +474,7 @@ static int break_ksm(struct vm_area_struct *vma, unsigned long addr)
 	do {
 		cond_resched();
 		page = follow_page(vma, addr,
-				FOLL_GET | FOLL_MIGRATION | FOLL_REMOTE);
+				FOLL_GET | FOLL_MIGRATION | FOLL_REMOTE | FOLL_LRU);
 		if (IS_ERR_OR_NULL(page))
 			break;
 		if (PageKsm(page))
@@ -559,7 +559,7 @@ static struct page *get_mergeable_page(struct rmap_item *rmap_item)
 	if (!vma)
 		goto out;

-	page = follow_page(vma, addr, FOLL_GET);
+	page = follow_page(vma, addr, FOLL_GET | FOLL_LRU);
 	if (IS_ERR_OR_NULL(page))
 		goto out;
 	if (PageAnon(page)) {
@@ -2288,7 +2288,7 @@ static struct rmap_item *scan_get_next_rmap_item(struct page **page)
 		while (ksm_scan.address < vma->vm_end) {
 			if (ksm_test_exit(mm))
 				break;
-			*page = follow_page(vma, ksm_scan.address, FOLL_GET);
+			*page = follow_page(vma, ksm_scan.address, FOLL_GET | FOLL_LRU);
 			if (IS_ERR_OR_NULL(*page)) {
 				ksm_scan.address += PAGE_SIZE;
 				cond_resched();
diff --git a/mm/madvise.c b/mm/madvise.c
index 1873616a37d2..e9c24c834e98 100644
--- a/mm/madvise.c
+++ b/mm/madvise.c
@@ -413,7 +413,7 @@ static int madvise_cold_or_pageout_pte_range(pmd_t *pmd,
 			continue;

 		page = vm_normal_page(vma, addr, ptent);
-		if (!page)
+		if (!page || is_zone_device_page(page))
 			continue;

 		/*
@@ -628,7 +628,7 @@ static int madvise_free_pte_range(pmd_t *pmd, unsigned long addr,
 		}

 		page = vm_normal_page(vma, addr, ptent);
-		if (!page)
+		if (!page || is_zone_device_page(page))
 			continue;

 		/*
diff --git a/mm/memory.c b/mm/memory.c
index 76e3af9639d9..892c4cc54dc2 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -621,6 +621,13 @@ struct page *vm_normal_page(struct vm_area_struct *vma, unsigned long addr,
 		if (is_zero_pfn(pfn))
 			return NULL;
 		if (pte_devmap(pte))
+/*
+ * NOTE: Technically this should goto check_pfn label. However, page->_mapcount
+ * is never incremented for device pages that are mmapped through the DAX
+ * mechanism using the pmem driver mounted into an ext4 filesystem. When these
+ * pages are unmapped, zap_pte_range is called and vm_normal_page returns a
+ * valid page with page_mapcount() = 0, before page_remove_rmap is called.
+ */
 			return NULL;

 		print_bad_pte(vma, addr, pte, NULL);
@@ -4422,7 +4429,7 @@ static vm_fault_t do_numa_page(struct vm_fault *vmf)
 	pte = pte_modify(old_pte, vma->vm_page_prot);

 	page = vm_normal_page(vma, vmf->address, pte);
-	if (!page)
+	if (!page || is_zone_device_page(page))
 		goto out_map;

 	/* TODO: handle PTE-mapped THP */
diff --git a/mm/mempolicy.c b/mm/mempolicy.c
index 8c74107a2b15..e32edbecb0cd 100644
--- a/mm/mempolicy.c
+++ b/mm/mempolicy.c
@@ -528,7 +528,7 @@ static int queue_pages_pte_range(pmd_t *pmd, unsigned long addr,
 		if (!pte_present(*pte))
 			continue;
 		page = vm_normal_page(vma, addr, *pte);
-		if (!page)
+		if (!page || is_zone_device_page(page))
 			continue;
 		/*
 		 * vm_normal_page() filters out zero pages, but there might
diff --git a/mm/migrate.c b/mm/migrate.c
index e18ddee56f37..ed7ea77de539 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -1612,7 +1612,7 @@ static int add_page_for_migration(struct mm_struct *mm, unsigned long addr,
 		goto out;

 	/* FOLL_DUMP to ignore special (like zero) pages */
-	page = follow_page(vma, addr, FOLL_GET | FOLL_DUMP);
+	page = follow_page(vma, addr, FOLL_GET | FOLL_DUMP | FOLL_LRU);

 	err = PTR_ERR(page);
 	if (IS_ERR(page))
@@ -1803,7 +1803,7 @@ static void do_pages_stat_array(struct mm_struct *mm, unsigned long nr_pages,
 			goto set_status;

 		/* FOLL_DUMP to ignore special (like zero) pages */
-		page = follow_page(vma, addr, FOLL_DUMP);
+		page = follow_page(vma, addr, FOLL_DUMP | FOLL_LRU);

 		err = PTR_ERR(page);
 		if (IS_ERR(page))
diff --git a/mm/mlock.c b/mm/mlock.c
index 716caf851043..b14e929084cc 100644
--- a/mm/mlock.c
+++ b/mm/mlock.c
@@ -333,7 +333,7 @@ static int mlock_pte_range(pmd_t *pmd, unsigned long addr,
 		if (!pte_present(*pte))
 			continue;
 		page = vm_normal_page(vma, addr, *pte);
-		if (!page)
+		if (!page || is_zone_device_page(page))
 			continue;
 		if (PageTransCompound(page))
 			continue;
diff --git a/mm/mprotect.c b/mm/mprotect.c
index b69ce7a7b2b7..a6f3587ea29a 100644
--- a/mm/mprotect.c
+++ b/mm/mprotect.c
@@ -91,7 +91,7 @@ static unsigned long change_pte_range(struct vm_area_struct *vma, pmd_t *pmd,
 				continue;

 			page = vm_normal_page(vma, addr, oldpte);
-			if (!page || PageKsm(page))
+			if (!page || is_zone_device_page(page) || PageKsm(page))
 				continue;

 			/* Also skip shared copy-on-write pages */
--
2.32.0