From: Naoya Horiguchi
To: linux-mm@kvack.org
Cc: Andrew Morton, David Hildenbrand, Mike Kravetz, Miaohe Lin, Liu Shixin, Yang Shi, Oscar Salvador, Muchun Song, Naoya Horiguchi, linux-kernel@vger.kernel.org
Subject: [mm-unstable PATCH v4 3/9] mm/hugetlb: make pud_huge() and follow_huge_pud() aware of non-present pud entry
Date: Mon, 4 Jul 2022 10:33:06 +0900
Message-Id: <20220704013312.2415700-4-naoya.horiguchi@linux.dev>
In-Reply-To: <20220704013312.2415700-1-naoya.horiguchi@linux.dev>
References: <20220704013312.2415700-1-naoya.horiguchi@linux.dev>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Naoya Horiguchi

follow_pud_mask() does not support non-present pud entries yet. As far as I tested on an x86_64 server, follow_pud_mask() still simply returns no_page_table() for a non-present pud entry due to pud_bad(), so no severe user-visible effect should happen. But generally we should call follow_huge_pud() for a non-present pud entry of a 1GB hugetlb page.
Update pud_huge() and follow_huge_pud() to handle non-present pud entries. The changes are similar to the previous work for pmd entries in commit e66f17ff7177 ("mm/hugetlb: take page table lock in follow_huge_pmd()") and commit cbef8478bee5 ("mm/hugetlb: pmd_huge() returns true for non-present hugepage").

Signed-off-by: Naoya Horiguchi
---
v2 -> v3:
- fixed typos in subject and description,
- added comment on pud_huge(),
- added comment about fallback for hwpoisoned entry,
- updated initial check about FOLL_{PIN,GET} flags.
---
 arch/x86/mm/hugetlbpage.c |  8 +++++++-
 mm/hugetlb.c              | 32 ++++++++++++++++++++++++++++++--
 2 files changed, 37 insertions(+), 3 deletions(-)

diff --git a/arch/x86/mm/hugetlbpage.c b/arch/x86/mm/hugetlbpage.c
index 509408da0da1..6b3033845c6d 100644
--- a/arch/x86/mm/hugetlbpage.c
+++ b/arch/x86/mm/hugetlbpage.c
@@ -30,9 +30,15 @@ int pmd_huge(pmd_t pmd)
 	       (pmd_val(pmd) & (_PAGE_PRESENT|_PAGE_PSE)) != _PAGE_PRESENT;
 }
 
+/*
+ * pud_huge() returns 1 if @pud is hugetlb related entry, that is normal
+ * hugetlb entry or non-present (migration or hwpoisoned) hugetlb entry.
+ * Otherwise, returns 0.
+ */
 int pud_huge(pud_t pud)
 {
-	return !!(pud_val(pud) & _PAGE_PSE);
+	return !pud_none(pud) &&
+		(pud_val(pud) & (_PAGE_PRESENT|_PAGE_PSE)) != _PAGE_PRESENT;
 }
 
 #ifdef CONFIG_HUGETLB_PAGE
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index ad621688370b..66bb39e0fce8 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -6994,10 +6994,38 @@ struct page * __weak
 follow_huge_pud(struct mm_struct *mm, unsigned long address,
 		pud_t *pud, int flags)
 {
-	if (flags & (FOLL_GET | FOLL_PIN))
+	struct page *page = NULL;
+	spinlock_t *ptl;
+	pte_t pte;
+
+	if (WARN_ON_ONCE(flags & FOLL_PIN))
 		return NULL;
 
-	return pte_page(*(pte_t *)pud) + ((address & ~PUD_MASK) >> PAGE_SHIFT);
+retry:
+	ptl = huge_pte_lock(hstate_sizelog(PUD_SHIFT), mm, (pte_t *)pud);
+	if (!pud_huge(*pud))
+		goto out;
+	pte = huge_ptep_get((pte_t *)pud);
+	if (pte_present(pte)) {
+		page = pud_page(*pud) + ((address & ~PUD_MASK) >> PAGE_SHIFT);
+		if (WARN_ON_ONCE(!try_grab_page(page, flags))) {
+			page = NULL;
+			goto out;
+		}
+	} else {
+		if (is_hugetlb_entry_migration(pte)) {
+			spin_unlock(ptl);
+			__migration_entry_wait(mm, (pte_t *)pud, ptl);
+			goto retry;
+		}
+		/*
+		 * hwpoisoned entry is treated as no_page_table in
+		 * follow_page_mask().
+		 */
+	}
+out:
+	spin_unlock(ptl);
+	return page;
 }
 
 struct page * __weak
-- 
2.25.1