From: "Huang, Ying" <ying.huang@intel.com>
To: Andrew Morton
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, Huang Ying,
	"Kirill A. Shutemov", Andrea Arcangeli, Michal Hocko,
	Johannes Weiner, Shaohua Li, Hugh Dickins, Minchan Kim,
	Rik van Riel, Dave Hansen, Naoya Horiguchi, Zi Yan, Daniel Jordan
Subject: [PATCH -mm -v4 18/21] mm, THP, swap: Support PMD swap mapping in mincore()
Date: Fri, 22 Jun 2018 11:51:48 +0800
Message-Id: <20180622035151.6676-19-ying.huang@intel.com>
X-Mailer: git-send-email 2.16.4
In-Reply-To: <20180622035151.6676-1-ying.huang@intel.com>
References: <20180622035151.6676-1-ying.huang@intel.com>

From: Huang Ying

During mincore(), for a PMD swap mapping, the swap cache is looked up.
If the page found there is not a compound page, the huge swap cluster
has been split under us; in that case the PMD swap mapping is split
and we fall back to the PTE swap mapping processing.

Signed-off-by: "Huang, Ying" <ying.huang@intel.com>
Cc: "Kirill A. Shutemov"
Cc: Andrea Arcangeli
Cc: Michal Hocko
Cc: Johannes Weiner
Cc: Shaohua Li
Cc: Hugh Dickins
Cc: Minchan Kim
Cc: Rik van Riel
Cc: Dave Hansen
Cc: Naoya Horiguchi
Cc: Zi Yan
Cc: Daniel Jordan
---
 mm/mincore.c | 37 +++++++++++++++++++++++++++++++------
 1 file changed, 31 insertions(+), 6 deletions(-)

diff --git a/mm/mincore.c b/mm/mincore.c
index a66f2052c7b1..897dd2c187e8 100644
--- a/mm/mincore.c
+++ b/mm/mincore.c
@@ -48,7 +48,8 @@ static int mincore_hugetlb(pte_t *pte, unsigned long hmask, unsigned long addr,
  * and is up to date; i.e. that no page-in operation would be required
  * at this time if an application were to map and access this page.
  */
-static unsigned char mincore_page(struct address_space *mapping, pgoff_t pgoff)
+static unsigned char mincore_page(struct address_space *mapping, pgoff_t pgoff,
+				  bool *compound)
 {
 	unsigned char present = 0;
 	struct page *page;
@@ -86,6 +87,8 @@ static unsigned char mincore_page(struct address_space *mapping, pgoff_t pgoff)
 #endif
 	if (page) {
 		present = PageUptodate(page);
+		if (compound)
+			*compound = PageCompound(page);
 		put_page(page);
 	}
 
@@ -103,7 +106,8 @@ static int __mincore_unmapped_range(unsigned long addr, unsigned long end,
 		pgoff = linear_page_index(vma, addr);
 		for (i = 0; i < nr; i++, pgoff++)
-			vec[i] = mincore_page(vma->vm_file->f_mapping, pgoff);
+			vec[i] = mincore_page(vma->vm_file->f_mapping,
+					      pgoff, NULL);
 	} else {
 		for (i = 0; i < nr; i++)
 			vec[i] = 0;
@@ -127,14 +131,36 @@ static int mincore_pte_range(pmd_t *pmd, unsigned long addr, unsigned long end,
 	pte_t *ptep;
 	unsigned char *vec = walk->private;
 	int nr = (end - addr) >> PAGE_SHIFT;
+	swp_entry_t entry;
 
 	ptl = pmd_trans_huge_lock(pmd, vma);
 	if (ptl) {
-		memset(vec, 1, nr);
+		unsigned char val = 1;
+		bool compound;
+
+		if (thp_swap_supported() && is_swap_pmd(*pmd)) {
+			entry = pmd_to_swp_entry(*pmd);
+			if (!non_swap_entry(entry)) {
+				val = mincore_page(swap_address_space(entry),
+						   swp_offset(entry),
+						   &compound);
+				/*
+				 * The huge swap cluster has been
+				 * split under us
+				 */
+				if (!compound) {
+					__split_huge_swap_pmd(vma, addr, pmd);
+					spin_unlock(ptl);
+					goto fallback;
+				}
+			}
+		}
+		memset(vec, val, nr);
 		spin_unlock(ptl);
 		goto out;
 	}
 
+fallback:
 	if (pmd_trans_unstable(pmd)) {
 		__mincore_unmapped_range(addr, end, vma, vec);
 		goto out;
@@ -150,8 +176,7 @@ static int mincore_pte_range(pmd_t *pmd, unsigned long addr, unsigned long end,
 	else if (pte_present(pte))
 		*vec = 1;
 	else { /* pte is a swap entry */
-		swp_entry_t entry = pte_to_swp_entry(pte);
-
+		entry = pte_to_swp_entry(pte);
 		if (non_swap_entry(entry)) {
 			/*
 			 * migration or hwpoison entries are always
@@ -161,7 +186,7 @@ static int mincore_pte_range(pmd_t *pmd, unsigned long addr, unsigned long end,
 		} else {
 #ifdef CONFIG_SWAP
 			*vec = mincore_page(swap_address_space(entry),
-					    swp_offset(entry));
+					    swp_offset(entry), NULL);
 #else
 			WARN_ON(1);
 			*vec = 1;
-- 
2.16.4