From: "Huang, Ying"
To: Andrew Morton
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, Huang Ying,
    "Kirill A. Shutemov", Andrea Arcangeli, Michal Hocko, Johannes Weiner,
    Shaohua Li, Hugh Dickins, Minchan Kim, Rik van Riel, Dave Hansen,
    Naoya Horiguchi, Zi Yan
Subject: [PATCH -mm -V3 18/21] mm, THP, swap: Support PMD swap mapping in mincore()
Date: Wed, 23 May 2018 16:26:22 +0800
Message-Id: <20180523082625.6897-19-ying.huang@intel.com>
In-Reply-To: <20180523082625.6897-1-ying.huang@intel.com>
References: <20180523082625.6897-1-ying.huang@intel.com>

From: Huang Ying

During mincore(), for a PMD swap mapping, the swap cache is looked up.
If the page found there is not a compound page, the huge swap cluster
has already been split; in that case the PMD swap mapping is split too,
and we fall back to the PTE swap mapping processing.

Signed-off-by: "Huang, Ying"
Cc: "Kirill A. Shutemov"
Cc: Andrea Arcangeli
Cc: Michal Hocko
Cc: Johannes Weiner
Cc: Shaohua Li
Cc: Hugh Dickins
Cc: Minchan Kim
Cc: Rik van Riel
Cc: Dave Hansen
Cc: Naoya Horiguchi
Cc: Zi Yan
---
 mm/mincore.c | 37 +++++++++++++++++++++++++++++++------
 1 file changed, 31 insertions(+), 6 deletions(-)

diff --git a/mm/mincore.c b/mm/mincore.c
index a66f2052c7b1..897dd2c187e8 100644
--- a/mm/mincore.c
+++ b/mm/mincore.c
@@ -48,7 +48,8 @@ static int mincore_hugetlb(pte_t *pte, unsigned long hmask, unsigned long addr,
  * and is up to date; i.e. that no page-in operation would be required
  * at this time if an application were to map and access this page.
  */
-static unsigned char mincore_page(struct address_space *mapping, pgoff_t pgoff)
+static unsigned char mincore_page(struct address_space *mapping, pgoff_t pgoff,
+				  bool *compound)
 {
 	unsigned char present = 0;
 	struct page *page;
@@ -86,6 +87,8 @@ static unsigned char mincore_page(struct address_space *mapping, pgoff_t pgoff)
 #endif
 	if (page) {
 		present = PageUptodate(page);
+		if (compound)
+			*compound = PageCompound(page);
 		put_page(page);
 	}
 
@@ -103,7 +106,8 @@ static int __mincore_unmapped_range(unsigned long addr, unsigned long end,
 		pgoff = linear_page_index(vma, addr);
 		for (i = 0; i < nr; i++, pgoff++)
-			vec[i] = mincore_page(vma->vm_file->f_mapping, pgoff);
+			vec[i] = mincore_page(vma->vm_file->f_mapping,
+					      pgoff, NULL);
 	} else {
 		for (i = 0; i < nr; i++)
 			vec[i] = 0;
@@ -127,14 +131,36 @@ static int mincore_pte_range(pmd_t *pmd, unsigned long addr, unsigned long end,
 	pte_t *ptep;
 	unsigned char *vec = walk->private;
 	int nr = (end - addr) >> PAGE_SHIFT;
+	swp_entry_t entry;
 
 	ptl = pmd_trans_huge_lock(pmd, vma);
 	if (ptl) {
-		memset(vec, 1, nr);
+		unsigned char val = 1;
+		bool compound;
+
+		if (thp_swap_supported() && is_swap_pmd(*pmd)) {
+			entry = pmd_to_swp_entry(*pmd);
+			if (!non_swap_entry(entry)) {
+				val = mincore_page(swap_address_space(entry),
+						   swp_offset(entry),
+						   &compound);
+				/*
+				 * The huge swap cluster has been
+				 * split under us
+				 */
+				if (!compound) {
+					__split_huge_swap_pmd(vma, addr, pmd);
+					spin_unlock(ptl);
+					goto fallback;
+				}
+			}
+		}
+		memset(vec, val, nr);
 		spin_unlock(ptl);
 		goto out;
 	}
 
+fallback:
 	if (pmd_trans_unstable(pmd)) {
 		__mincore_unmapped_range(addr, end, vma, vec);
 		goto out;
@@ -150,8 +176,7 @@ static int mincore_pte_range(pmd_t *pmd, unsigned long addr, unsigned long end,
 	else if (pte_present(pte))
 		*vec = 1;
 	else { /* pte is a swap entry */
-		swp_entry_t entry = pte_to_swp_entry(pte);
-
+		entry = pte_to_swp_entry(pte);
 		if (non_swap_entry(entry)) {
 			/*
 			 * migration or hwpoison entries are always
@@ -161,7 +186,7 @@ static int mincore_pte_range(pmd_t *pmd, unsigned long addr, unsigned long end,
 		} else {
 #ifdef CONFIG_SWAP
 			*vec = mincore_page(swap_address_space(entry),
-					    swp_offset(entry));
+					    swp_offset(entry), NULL);
 #else
 			WARN_ON(1);
 			*vec = 1;
-- 
2.16.1
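
For readers unfamiliar with the syscall this patch extends: from userspace, mincore(2) fills one byte per page, with bit 0 indicating residency. The following is a minimal sketch (not part of the patch) of that contract; the helper name `resident_after_touch()` is illustrative, not from the kernel or libc.

```c
/*
 * Userspace sketch of the mincore(2) contract: map one anonymous page,
 * fault it in, and check that mincore() reports it resident.
 */
#define _DEFAULT_SOURCE
#include <sys/mman.h>
#include <unistd.h>

/* Returns 1 if the freshly touched page is resident, 0 if not, -1 on error. */
static int resident_after_touch(void)
{
	long page = sysconf(_SC_PAGESIZE);
	unsigned char vec[1];
	char *p;
	int ret;

	p = mmap(NULL, page, PROT_READ | PROT_WRITE,
		 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (p == MAP_FAILED)
		return -1;

	p[0] = 1;			/* fault the page in */
	if (mincore(p, page, vec)) {
		munmap(p, page);
		return -1;
	}
	ret = vec[0] & 1;		/* bit 0: page resident in core */
	munmap(p, page);
	return ret;
}
```

The patch above changes only how the kernel computes that residency byte when the range is covered by a PMD swap mapping; the userspace interface is unchanged.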