From: "Huang, Ying"
To: Andrew Morton
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, Huang Ying, "Kirill A.
 Shutemov", Andrea Arcangeli, Michal Hocko, Johannes Weiner, Shaohua Li,
 Hugh Dickins, Minchan Kim, Rik van Riel, Dave Hansen, Naoya Horiguchi,
 Zi Yan, Daniel Jordan
Subject: [PATCH -mm -v4 16/21] mm, THP, swap: Free PMD swap mapping when zap_huge_pmd()
Date: Fri, 22 Jun 2018 11:51:46 +0800
Message-Id: <20180622035151.6676-17-ying.huang@intel.com>
X-Mailer: git-send-email 2.16.4
In-Reply-To: <20180622035151.6676-1-ying.huang@intel.com>
References: <20180622035151.6676-1-ying.huang@intel.com>

From: Huang Ying

For a PMD swap mapping, zap_huge_pmd() will clear the PMD and call
free_swap_and_cache() to decrease the swap reference count and, if
necessary, free or split the huge swap cluster and the THP in the swap
cache.

Signed-off-by: "Huang, Ying"
Cc: "Kirill A. Shutemov"
Cc: Andrea Arcangeli
Cc: Michal Hocko
Cc: Johannes Weiner
Cc: Shaohua Li
Cc: Hugh Dickins
Cc: Minchan Kim
Cc: Rik van Riel
Cc: Dave Hansen
Cc: Naoya Horiguchi
Cc: Zi Yan
Cc: Daniel Jordan
---
 mm/huge_memory.c | 32 +++++++++++++++++++++-----------
 1 file changed, 21 insertions(+), 11 deletions(-)

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 38c247a38f67..6b9ca1c14500 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -2007,7 +2007,7 @@ int zap_huge_pmd(struct mmu_gather *tlb, struct vm_area_struct *vma,
 		spin_unlock(ptl);
 		if (is_huge_zero_pmd(orig_pmd))
 			tlb_remove_page_size(tlb, pmd_page(orig_pmd), HPAGE_PMD_SIZE);
-	} else if (is_huge_zero_pmd(orig_pmd)) {
+	} else if (pmd_present(orig_pmd) && is_huge_zero_pmd(orig_pmd)) {
 		zap_deposited_table(tlb->mm, pmd);
 		spin_unlock(ptl);
 		tlb_remove_page_size(tlb, pmd_page(orig_pmd), HPAGE_PMD_SIZE);
@@ -2020,17 +2020,27 @@ int zap_huge_pmd(struct mmu_gather *tlb, struct vm_area_struct *vma,
 		page_remove_rmap(page, true);
 		VM_BUG_ON_PAGE(page_mapcount(page) < 0, page);
 		VM_BUG_ON_PAGE(!PageHead(page), page);
-	} else if (thp_migration_supported()) {
-		swp_entry_t entry;
-
-		VM_BUG_ON(!is_pmd_migration_entry(orig_pmd));
-		entry = pmd_to_swp_entry(orig_pmd);
-		page = pfn_to_page(swp_offset(entry));
+	} else {
+		swp_entry_t entry = pmd_to_swp_entry(orig_pmd);
+
+		if (thp_migration_supported() &&
+		    is_migration_entry(entry))
+			page = pfn_to_page(swp_offset(entry));
+		else if (thp_swap_supported() &&
+			 !non_swap_entry(entry))
+			free_swap_and_cache(entry, true);
+		else {
+			WARN_ONCE(1,
+"Non present huge pmd without pmd migration or swap enabled!");
+			goto unlock;
+		}
 		flush_needed = 0;
-	} else
-		WARN_ONCE(1, "Non present huge pmd without pmd migration enabled!");
+	}
 
-	if (PageAnon(page)) {
+	if (!page) {
+		zap_deposited_table(tlb->mm, pmd);
+		add_mm_counter(tlb->mm, MM_SWAPENTS, -HPAGE_PMD_NR);
+	} else if (PageAnon(page)) {
 		zap_deposited_table(tlb->mm, pmd);
 		add_mm_counter(tlb->mm, MM_ANONPAGES, -HPAGE_PMD_NR);
 	} else {
@@ -2038,7 +2048,7 @@ int zap_huge_pmd(struct mmu_gather *tlb, struct vm_area_struct *vma,
 		zap_deposited_table(tlb->mm, pmd);
 		add_mm_counter(tlb->mm, MM_FILEPAGES, -HPAGE_PMD_NR);
 	}
-
+unlock:
 	spin_unlock(ptl);
 	if (flush_needed)
 		tlb_remove_page_size(tlb, page, HPAGE_PMD_SIZE);
-- 
2.16.4