From: "Huang, Ying"
To: Andrew Morton
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, Huang Ying,
Shutemov" , Andrea Arcangeli , Michal Hocko , Johannes Weiner , Shaohua Li , Hugh Dickins , Minchan Kim , Rik van Riel , Dave Hansen , Naoya Horiguchi , Zi Yan , Daniel Jordan Subject: [PATCH -mm -v4 19/21] mm, THP, swap: Support PMD swap mapping in common path Date: Fri, 22 Jun 2018 11:51:49 +0800 Message-Id: <20180622035151.6676-20-ying.huang@intel.com> X-Mailer: git-send-email 2.16.4 In-Reply-To: <20180622035151.6676-1-ying.huang@intel.com> References: <20180622035151.6676-1-ying.huang@intel.com> Sender: linux-kernel-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org From: Huang Ying Original code is only for PMD migration entry, it is revised to support PMD swap mapping. Signed-off-by: "Huang, Ying" Cc: "Kirill A. Shutemov" Cc: Andrea Arcangeli Cc: Michal Hocko Cc: Johannes Weiner Cc: Shaohua Li Cc: Hugh Dickins Cc: Minchan Kim Cc: Rik van Riel Cc: Dave Hansen Cc: Naoya Horiguchi Cc: Zi Yan Cc: Daniel Jordan --- fs/proc/task_mmu.c | 8 ++++---- mm/gup.c | 34 ++++++++++++++++++++++------------ mm/huge_memory.c | 6 +++--- mm/mempolicy.c | 2 +- 4 files changed, 30 insertions(+), 20 deletions(-) diff --git a/fs/proc/task_mmu.c b/fs/proc/task_mmu.c index e9679016271f..afcf6ac57219 100644 --- a/fs/proc/task_mmu.c +++ b/fs/proc/task_mmu.c @@ -978,7 +978,7 @@ static inline void clear_soft_dirty_pmd(struct vm_area_struct *vma, pmd = pmd_clear_soft_dirty(pmd); set_pmd_at(vma->vm_mm, addr, pmdp, pmd); - } else if (is_migration_entry(pmd_to_swp_entry(pmd))) { + } else if (is_swap_pmd(pmd)) { pmd = pmd_swp_clear_soft_dirty(pmd); set_pmd_at(vma->vm_mm, addr, pmdp, pmd); } @@ -1309,7 +1309,7 @@ static int pagemap_pmd_range(pmd_t *pmdp, unsigned long addr, unsigned long end, frame = pmd_pfn(pmd) + ((addr & ~PMD_MASK) >> PAGE_SHIFT); } -#ifdef CONFIG_ARCH_ENABLE_THP_MIGRATION +#if defined(CONFIG_ARCH_ENABLE_THP_MIGRATION) || defined(CONFIG_THP_SWAP) else if (is_swap_pmd(pmd)) { swp_entry_t entry = pmd_to_swp_entry(pmd); unsigned long offset; @@ -1323,8 +1323,8 @@ static int pagemap_pmd_range(pmd_t *pmdp, unsigned long addr, unsigned long end, flags |= PM_SWAP; if (pmd_swp_soft_dirty(pmd)) flags |= PM_SOFT_DIRTY; - VM_BUG_ON(!is_pmd_migration_entry(pmd)); - page = migration_entry_to_page(entry); + if (is_pmd_migration_entry(pmd)) + page = migration_entry_to_page(entry); } #endif diff --git a/mm/gup.c b/mm/gup.c index b70d7ba7cc13..84ba4ad8120d 100644 --- a/mm/gup.c +++ b/mm/gup.c @@ -216,6 +216,7 @@ static struct page *follow_pmd_mask(struct vm_area_struct *vma, spinlock_t *ptl; struct page *page; struct mm_struct *mm = vma->vm_mm; + swp_entry_t entry; pmd = pmd_offset(pudp, address); /* @@ -243,18 +244,21 @@ static struct page *follow_pmd_mask(struct vm_area_struct *vma, if (!pmd_present(pmdval)) { if (likely(!(flags & FOLL_MIGRATION))) return no_page_table(vma, flags); - VM_BUG_ON(thp_migration_supported() && - !is_pmd_migration_entry(pmdval)); - if (is_pmd_migration_entry(pmdval)) + entry = pmd_to_swp_entry(pmdval); + if (thp_migration_supported() && is_migration_entry(entry)) { pmd_migration_entry_wait(mm, pmd); - pmdval = READ_ONCE(*pmd); - /* - * MADV_DONTNEED may convert the pmd to null because - * mmap_sem is held in read mode - */ - if (pmd_none(pmdval)) + pmdval = READ_ONCE(*pmd); + /* + * MADV_DONTNEED may convert the pmd to null because + * mmap_sem is held in read mode + */ + if (pmd_none(pmdval)) + return no_page_table(vma, flags); + goto retry; + } + if (thp_swap_supported() && !non_swap_entry(entry)) return no_page_table(vma, flags); - goto retry; 
+		VM_BUG_ON(1);
 	}
 	if (pmd_devmap(pmdval)) {
 		ptl = pmd_lock(mm, pmd);
@@ -276,11 +280,17 @@ static struct page *follow_pmd_mask(struct vm_area_struct *vma,
 		return no_page_table(vma, flags);
 	}
 	if (unlikely(!pmd_present(*pmd))) {
+		entry = pmd_to_swp_entry(*pmd);
 		spin_unlock(ptl);
 		if (likely(!(flags & FOLL_MIGRATION)))
 			return no_page_table(vma, flags);
-		pmd_migration_entry_wait(mm, pmd);
-		goto retry_locked;
+		if (thp_migration_supported() && is_migration_entry(entry)) {
+			pmd_migration_entry_wait(mm, pmd);
+			goto retry_locked;
+		}
+		if (thp_swap_supported() && !non_swap_entry(entry))
+			return no_page_table(vma, flags);
+		VM_BUG_ON(1);
 	}
 	if (unlikely(!pmd_trans_huge(*pmd))) {
 		spin_unlock(ptl);

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 6b9ca1c14500..e50adc6b59b2 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -2074,7 +2074,7 @@ static inline int pmd_move_must_withdraw(spinlock_t *new_pmd_ptl,
 static pmd_t move_soft_dirty_pmd(pmd_t pmd)
 {
 #ifdef CONFIG_MEM_SOFT_DIRTY
-	if (unlikely(is_pmd_migration_entry(pmd)))
+	if (unlikely(is_swap_pmd(pmd)))
 		pmd = pmd_swp_mksoft_dirty(pmd);
 	else if (pmd_present(pmd))
 		pmd = pmd_mksoft_dirty(pmd);
@@ -2160,11 +2160,11 @@ int change_huge_pmd(struct vm_area_struct *vma, pmd_t *pmd,
 	preserve_write = prot_numa && pmd_write(*pmd);
 	ret = 1;

-#ifdef CONFIG_ARCH_ENABLE_THP_MIGRATION
+#if defined(CONFIG_ARCH_ENABLE_THP_MIGRATION) || defined(CONFIG_THP_SWAP)
 	if (is_swap_pmd(*pmd)) {
 		swp_entry_t entry = pmd_to_swp_entry(*pmd);

-		VM_BUG_ON(!is_pmd_migration_entry(*pmd));
+		VM_BUG_ON(!thp_swap_supported() && !is_migration_entry(entry));
 		if (is_write_migration_entry(entry)) {
 			pmd_t newpmd;
 			/*
diff --git a/mm/mempolicy.c b/mm/mempolicy.c
index 9ac49ef17b4e..180d7c08f6cc 100644
--- a/mm/mempolicy.c
+++ b/mm/mempolicy.c
@@ -436,7 +436,7 @@ static int queue_pages_pmd(pmd_t *pmd, spinlock_t *ptl, unsigned long addr,
 	struct queue_pages *qp = walk->private;
 	unsigned long flags;

-	if (unlikely(is_pmd_migration_entry(*pmd))) {
+	if (unlikely(is_swap_pmd(*pmd))) {
 		ret = 1;
 		goto unlock;
 	}
-- 
2.16.4
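
For illustration only, here is a minimal standalone sketch (not kernel code)
of the decision follow_pmd_mask() makes for a non-present PMD after this
patch, assuming both thp_migration_supported() and thp_swap_supported() are
true: a PMD migration entry is waited on and the lookup retried, while a
genuine PMD swap mapping simply yields "no page".  The enum and helper names
below (pmd_kind, follow_result, handle_nonpresent_pmd) are invented
stand-ins for this sketch, not kernel identifiers.

/*
 * Standalone sketch -- NOT kernel code.  Models the non-present PMD branch
 * of follow_pmd_mask() after this patch; see the mm/gup.c hunks above.
 */
#include <stdbool.h>
#include <stdio.h>

enum pmd_kind {                 /* stand-in for what pmd_to_swp_entry() finds */
	PMD_MIGRATION_ENTRY,    /* is_migration_entry(entry) */
	PMD_SWAP_ENTRY,         /* !non_swap_entry(entry): a PMD swap mapping */
	PMD_OTHER,
};

enum follow_result {
	FOLLOW_WAIT_AND_RETRY,  /* pmd_migration_entry_wait() + goto retry */
	FOLLOW_NO_PAGE,         /* return no_page_table(vma, flags) */
	FOLLOW_BUG,             /* VM_BUG_ON(1) */
};

static enum follow_result handle_nonpresent_pmd(enum pmd_kind kind,
						bool foll_migration)
{
	if (!foll_migration)                    /* !(flags & FOLL_MIGRATION) */
		return FOLLOW_NO_PAGE;
	if (kind == PMD_MIGRATION_ENTRY)        /* wait for migration, retry */
		return FOLLOW_WAIT_AND_RETRY;
	if (kind == PMD_SWAP_ENTRY)             /* new with this patch */
		return FOLLOW_NO_PAGE;
	return FOLLOW_BUG;                      /* anything else is a bug */
}

int main(void)
{
	/* A PMD swap mapping no longer trips a BUG; it just yields no page. */
	printf("swap mapping handled: %s\n",
	       handle_nonpresent_pmd(PMD_SWAP_ENTRY, true) == FOLLOW_NO_PAGE ?
	       "no page" : "unexpected");
	return 0;
}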