From: "Huang, Ying"
To: Andrew Morton
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, Huang Ying,
    "Kirill A. Shutemov", Andrea Arcangeli, Michal Hocko, Johannes Weiner,
    Shaohua Li, Hugh Dickins, Minchan Kim, Rik van Riel, Dave Hansen,
    Naoya Horiguchi, Zi Yan
Subject: [PATCH -mm -V3 16/21] mm, THP, swap: Free PMD swap mapping when zap_huge_pmd()
Date: Wed, 23 May 2018 16:26:20 +0800
Message-Id: <20180523082625.6897-17-ying.huang@intel.com>
X-Mailer: git-send-email 2.16.1
In-Reply-To: <20180523082625.6897-1-ying.huang@intel.com>
References: <20180523082625.6897-1-ying.huang@intel.com>

From: Huang Ying

For a PMD swap mapping, zap_huge_pmd() will clear the PMD and call
free_swap_and_cache() to decrease the swap reference count and maybe
free or split the huge swap cluster and the THP in swap cache.

Signed-off-by: "Huang, Ying"
Cc: "Kirill A. Shutemov"
Cc: Andrea Arcangeli
Cc: Michal Hocko
Cc: Johannes Weiner
Cc: Shaohua Li
Cc: Hugh Dickins
Cc: Minchan Kim
Cc: Rik van Riel
Cc: Dave Hansen
Cc: Naoya Horiguchi
Cc: Zi Yan
---
 mm/huge_memory.c | 32 +++++++++++++++++++++-----------
 1 file changed, 21 insertions(+), 11 deletions(-)

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 01fdd59fe6d4..e057b966ea68 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -2007,7 +2007,7 @@ int zap_huge_pmd(struct mmu_gather *tlb, struct vm_area_struct *vma,
 		spin_unlock(ptl);
 		if (is_huge_zero_pmd(orig_pmd))
 			tlb_remove_page_size(tlb, pmd_page(orig_pmd), HPAGE_PMD_SIZE);
-	} else if (is_huge_zero_pmd(orig_pmd)) {
+	} else if (pmd_present(orig_pmd) && is_huge_zero_pmd(orig_pmd)) {
 		zap_deposited_table(tlb->mm, pmd);
 		spin_unlock(ptl);
 		tlb_remove_page_size(tlb, pmd_page(orig_pmd), HPAGE_PMD_SIZE);
@@ -2020,17 +2020,27 @@ int zap_huge_pmd(struct mmu_gather *tlb, struct vm_area_struct *vma,
 			page_remove_rmap(page, true);
 			VM_BUG_ON_PAGE(page_mapcount(page) < 0, page);
 			VM_BUG_ON_PAGE(!PageHead(page), page);
-		} else if (thp_migration_supported()) {
-			swp_entry_t entry;
-
-			VM_BUG_ON(!is_pmd_migration_entry(orig_pmd));
-			entry = pmd_to_swp_entry(orig_pmd);
-			page = pfn_to_page(swp_offset(entry));
+		} else {
+			swp_entry_t entry = pmd_to_swp_entry(orig_pmd);
+
+			if (thp_migration_supported() &&
+			    is_migration_entry(entry))
+				page = pfn_to_page(swp_offset(entry));
+			else if (thp_swap_supported() &&
+				 !non_swap_entry(entry))
+				free_swap_and_cache(entry, true);
+			else {
+				WARN_ONCE(1,
+"Non present huge pmd without pmd migration or swap enabled!");
+				goto unlock;
+			}
 			flush_needed = 0;
-		} else
-			WARN_ONCE(1, "Non present huge pmd without pmd migration enabled!");
+		}
 
-		if (PageAnon(page)) {
+		if (!page) {
+			zap_deposited_table(tlb->mm, pmd);
+			add_mm_counter(tlb->mm, MM_SWAPENTS, -HPAGE_PMD_NR);
+		} else if (PageAnon(page)) {
 			zap_deposited_table(tlb->mm, pmd);
 			add_mm_counter(tlb->mm, MM_ANONPAGES, -HPAGE_PMD_NR);
 		} else {
@@ -2038,7 +2048,7 @@ int zap_huge_pmd(struct mmu_gather *tlb, struct vm_area_struct *vma,
 			zap_deposited_table(tlb->mm, pmd);
 			add_mm_counter(tlb->mm, MM_FILEPAGES, -HPAGE_PMD_NR);
 		}
-
+unlock:
 		spin_unlock(ptl);
 	if (flush_needed)
 		tlb_remove_page_size(tlb, page, HPAGE_PMD_SIZE);
-- 
2.16.1