From: Jérôme Glisse
To: akpm@linux-foundation.org, linux-mm@kvack.org
Cc: John Hubbard, Jérôme Glisse
Subject: [HMM v13 14/18] mm/hmm/migrate: support un-addressable ZONE_DEVICE page in migration
Date: Fri, 18 Nov 2016 13:18:23 -0500
Message-Id: <1479493107-982-15-git-send-email-jglisse@redhat.com>
In-Reply-To: <1479493107-982-1-git-send-email-jglisse@redhat.com>
References: <1479493107-982-1-git-send-email-jglisse@redhat.com>

Allow unmapping and restoring the special swap entries used for
un-addressable ZONE_DEVICE memory. During migration, try_to_unmap_one()
replaces such a device swap entry with a migration entry, and
remove_migration_pte() installs a device swap entry (instead of a
present pte) when the new page is un-addressable.

Signed-off-by: Jérôme Glisse
---
 mm/migrate.c | 11 ++++++++++-
 mm/rmap.c    | 47 +++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 57 insertions(+), 1 deletion(-)

diff --git a/mm/migrate.c b/mm/migrate.c
index 66ce6b4..6b6b457 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -40,6 +40,7 @@
 #include
 #include
 #include
+#include
 
 #include
@@ -248,7 +249,15 @@ static int remove_migration_pte(struct page *new, struct vm_area_struct *vma,
 		pte = arch_make_huge_pte(pte, vma, new, 0);
 	}
 #endif
-	flush_dcache_page(new);
+
+	if (unlikely(is_zone_device_page(new)) && !is_addressable_page(new)) {
+		entry = make_device_entry(new, pte_write(pte));
+		pte = swp_entry_to_pte(entry);
+		if (pte_swp_soft_dirty(*ptep))
+			pte = pte_mksoft_dirty(pte);
+	} else
+		flush_dcache_page(new);
+
 	set_pte_at(mm, addr, ptep, pte);
 
 	if (PageHuge(new)) {
diff --git a/mm/rmap.c b/mm/rmap.c
index 1ef3640..fff3578 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -61,6 +61,7 @@
 #include
 #include
 #include
+#include
 
 #include
@@ -1455,6 +1456,52 @@ static int try_to_unmap_one(struct page *page, struct vm_area_struct *vma,
 		goto out;
 	}
 
+	if ((flags & TTU_MIGRATION) && is_zone_device_page(page)) {
+		swp_entry_t entry;
+		pte_t swp_pte;
+		pmd_t *pmdp;
+
+		if (!(page->pgmap->flags & MEMORY_MOVABLE))
+			goto out;
+
+		pmdp = mm_find_pmd(mm, address);
+		if (!pmdp)
+			goto out;
+
+		pte = pte_offset_map_lock(mm, pmdp, address, &ptl);
+		if (!pte)
+			goto out;
+
+		pteval = ptep_get_and_clear(mm, address, pte);
+		if (pte_present(pteval) || pte_none(pteval)) {
+			set_pte_at(mm, address, pte, pteval);
+			goto out_unmap;
+		}
+
+		entry = pte_to_swp_entry(pteval);
+		if (!is_device_entry(entry)) {
+			set_pte_at(mm, address, pte, pteval);
+			goto out_unmap;
+		}
+
+		if (device_entry_to_page(entry) != page) {
+			set_pte_at(mm, address, pte, pteval);
+			goto out_unmap;
+		}
+
+		/*
+		 * Store the pfn of the page in a special migration
+		 * pte. do_swap_page() will wait until the migration
+		 * pte is removed and then restart fault handling.
+		 */
+		entry = make_migration_entry(page, 0);
+		swp_pte = swp_entry_to_pte(entry);
+		if (pte_soft_dirty(*pte))
+			swp_pte = pte_swp_mksoft_dirty(swp_pte);
+		set_pte_at(mm, address, pte, swp_pte);
+		goto discard;
+	}
+
 	pte = page_check_address(page, mm, address, &ptl,
 				 PageTransCompound(page));
 	if (!pte)
--
2.4.3
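
For reviewers, here is a condensed sketch of the restore-side decision
the mm/migrate.c hunk adds, pulled out into a standalone helper so the
intent is visible in one place. This is illustration only, not part of
the patch; the helper name is invented for the sketch, while
is_addressable_page() and make_device_entry() are the helpers
introduced earlier in this series.

/*
 * Sketch only: what remove_migration_pte() now does once the new pte
 * value has been built, before set_pte_at().
 */
static pte_t sketch_restore_migrated_pte(struct page *new, pte_t pte,
					 pte_t *ptep)
{
	if (is_zone_device_page(new) && !is_addressable_page(new)) {
		/*
		 * The CPU cannot map un-addressable device memory, so
		 * rebuild a device swap entry, carrying write permission
		 * over from the new pte and soft-dirty state from the
		 * old migration pte.
		 */
		swp_entry_t entry = make_device_entry(new, pte_write(pte));

		pte = swp_entry_to_pte(entry);
		if (pte_swp_soft_dirty(*ptep))
			pte = pte_mksoft_dirty(pte);
	} else {
		/* Addressable page: flush the data cache as before. */
		flush_dcache_page(new);
	}
	return pte;	/* caller still does set_pte_at(mm, addr, ptep, pte) */
}

The rmap.c side mirrors this: try_to_unmap_one() only proceeds when the
pte already holds a device swap entry for the page being migrated, and
converts it into a migration entry so do_swap_page() waits until
migration completes before restarting the fault.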