From: "Huang, Ying"
To: Andrew Morton
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, Huang Ying,
    "Kirill A. Shutemov", Andrea Arcangeli, Michal Hocko, Johannes Weiner,
    Shaohua Li, Hugh Dickins, Minchan Kim, Rik van Riel, Dave Hansen,
    Naoya Horiguchi, Zi Yan
Subject: [PATCH -mm -V3 13/21] mm, THP, swap: Support PMD swap mapping in madvise_free()
Date: Wed, 23 May 2018 16:26:17 +0800
Message-Id: <20180523082625.6897-14-ying.huang@intel.com>
X-Mailer: git-send-email 2.16.1
In-Reply-To: <20180523082625.6897-1-ying.huang@intel.com>
References: <20180523082625.6897-1-ying.huang@intel.com>

From: Huang Ying

When madvise_free() finds a PMD swap mapping and only part of the huge
swap cluster is operated on, the PMD swap mapping is split and the
operation falls back to PTE swap mapping processing.  Otherwise, when
the whole huge swap cluster is operated on, free_swap_and_cache() is
called to decrease the PMD swap mapping count and possibly free the
swap space and the THP in the swap cache as well.

Signed-off-by: "Huang, Ying"
Cc: "Kirill A. Shutemov"
Cc: Andrea Arcangeli
Cc: Michal Hocko
Cc: Johannes Weiner
Cc: Shaohua Li
Cc: Hugh Dickins
Cc: Minchan Kim
Cc: Rik van Riel
Cc: Dave Hansen
Cc: Naoya Horiguchi
Cc: Zi Yan
---
 mm/huge_memory.c | 50 +++++++++++++++++++++++++++++++++++---------------
 mm/madvise.c     |  2 +-
 2 files changed, 36 insertions(+), 16 deletions(-)

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 668d77cec14d..a8af2ddc578a 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -1842,6 +1842,15 @@ static inline void __split_huge_swap_pmd(struct vm_area_struct *vma,
 }
 #endif
 
+static inline void zap_deposited_table(struct mm_struct *mm, pmd_t *pmd)
+{
+	pgtable_t pgtable;
+
+	pgtable = pgtable_trans_huge_withdraw(mm, pmd);
+	pte_free(mm, pgtable);
+	mm_dec_nr_ptes(mm);
+}
+
 /*
  * Return true if we do MADV_FREE successfully on entire pmd page.
  * Otherwise, return false.
@@ -1862,15 +1871,35 @@ bool madvise_free_huge_pmd(struct mmu_gather *tlb, struct vm_area_struct *vma,
 		goto out_unlocked;
 
 	orig_pmd = *pmd;
-	if (is_huge_zero_pmd(orig_pmd))
-		goto out;
-
 	if (unlikely(!pmd_present(orig_pmd))) {
-		VM_BUG_ON(thp_migration_supported() &&
-			  !is_pmd_migration_entry(orig_pmd));
-		goto out;
+		swp_entry_t entry = pmd_to_swp_entry(orig_pmd);
+
+		if (is_migration_entry(entry)) {
+			VM_BUG_ON(!thp_migration_supported());
+			goto out;
+		} else if (thp_swap_supported() && !non_swap_entry(entry)) {
+			/* If part of THP is discarded */
+			if (next - addr != HPAGE_PMD_SIZE) {
+				unsigned long haddr = addr & HPAGE_PMD_MASK;
+
+				__split_huge_swap_pmd(vma, haddr, pmd);
+				goto out;
+			}
+			free_swap_and_cache(entry, true);
+			pmd_clear(pmd);
+			zap_deposited_table(mm, pmd);
+			if (current->mm == mm)
+				sync_mm_rss(mm);
+			add_mm_counter(mm, MM_SWAPENTS, -HPAGE_PMD_NR);
+			ret = true;
+			goto out;
+		} else
+			VM_BUG_ON(1);
 	}
 
+	if (is_huge_zero_pmd(orig_pmd))
+		goto out;
+
 	page = pmd_page(orig_pmd);
 	/*
 	 * If other processes are mapping this page, we couldn't discard
@@ -1916,15 +1945,6 @@ bool madvise_free_huge_pmd(struct mmu_gather *tlb, struct vm_area_struct *vma,
 	return ret;
 }
 
-static inline void zap_deposited_table(struct mm_struct *mm, pmd_t *pmd)
-{
-	pgtable_t pgtable;
-
-	pgtable = pgtable_trans_huge_withdraw(mm, pmd);
-	pte_free(mm, pgtable);
-	mm_dec_nr_ptes(mm);
-}
-
 int zap_huge_pmd(struct mmu_gather *tlb, struct vm_area_struct *vma,
 		 pmd_t *pmd, unsigned long addr)
 {
diff --git a/mm/madvise.c b/mm/madvise.c
index d180000c626b..e03e85a20fb4 100644
--- a/mm/madvise.c
+++ b/mm/madvise.c
@@ -321,7 +321,7 @@ static int madvise_free_pte_range(pmd_t *pmd, unsigned long addr,
 	unsigned long next;
 
 	next = pmd_addr_end(addr, end);
-	if (pmd_trans_huge(*pmd))
+	if (pmd_trans_huge(*pmd) || is_swap_pmd(*pmd))
 		if (madvise_free_huge_pmd(tlb, vma, pmd, addr, next))
 			goto next;
 
-- 
2.16.1