From: Peter Xu <peterx@redhat.com>
To: linux-kernel@vger.kernel.org, linux-mm@kvack.org
Cc: Mike Kravetz, "Kirill A. Shutemov", Lorenzo Stoakes, Axel Rasmussen, Matthew Wilcox, John Hubbard, Mike Rapoport, peterx@redhat.com, Hugh Dickins, David Hildenbrand, Andrea Arcangeli, Rik van Riel, James Houghton, Yang Shi, Jason Gunthorpe, Vlastimil Babka, Andrew Morton
Subject: [PATCH RFC 08/12] mm/gup: Handle hugetlb for no_page_table()
Date: Wed, 15 Nov 2023 20:29:04 -0500
Message-ID: <20231116012908.392077-9-peterx@redhat.com>
In-Reply-To: <20231116012908.392077-1-peterx@redhat.com>
References: <20231116012908.392077-1-peterx@redhat.com>

no_page_table() is not yet used by hugetlb code paths; prepare it for that.

The major difference here is that hugetlb will return -EFAULT as long as the page cache does not exist, even for VM_SHARED mappings; see hugetlb_follow_page_mask().

Pass "address" into no_page_table() too, as hugetlb will need it.
Signed-off-by: Peter Xu <peterx@redhat.com>
---
 mm/gup.c | 40 ++++++++++++++++++++++++----------------
 1 file changed, 24 insertions(+), 16 deletions(-)

diff --git a/mm/gup.c b/mm/gup.c
index 69dae51f3eb1..89c1584d68f0 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -501,19 +501,27 @@ static inline void mm_set_has_pinned_flag(unsigned long *mm_flags)
 
 #ifdef CONFIG_MMU
 static struct page *no_page_table(struct vm_area_struct *vma,
-				  unsigned int flags)
+				  unsigned int flags, unsigned long address)
 {
+	if (!(flags & FOLL_DUMP))
+		return NULL;
+
 	/*
-	 * When core dumping an enormous anonymous area that nobody
-	 * has touched so far, we don't want to allocate unnecessary pages or
+	 * When core dumping, we don't want to allocate unnecessary pages or
	 * page tables.  Return error instead of NULL to skip handle_mm_fault,
	 * then get_dump_page() will return NULL to leave a hole in the dump.
	 * But we can only make this optimization where a hole would surely
	 * be zero-filled if handle_mm_fault() actually did handle it.
	 */
-	if ((flags & FOLL_DUMP) &&
-	    (vma_is_anonymous(vma) || !vma->vm_ops->fault))
+	if (is_vm_hugetlb_page(vma)) {
+		struct hstate *h = hstate_vma(vma);
+
+		if (!hugetlbfs_pagecache_present(h, vma, address))
+			return ERR_PTR(-EFAULT);
+	} else if ((vma_is_anonymous(vma) || !vma->vm_ops->fault)) {
		return ERR_PTR(-EFAULT);
+	}
+
	return NULL;
 }
 
@@ -593,7 +601,7 @@ static struct page *follow_page_pte(struct vm_area_struct *vma,
 
 	ptep = pte_offset_map_lock(mm, pmd, address, &ptl);
 	if (!ptep)
-		return no_page_table(vma, flags);
+		return no_page_table(vma, flags, address);
 	pte = ptep_get(ptep);
 	if (!pte_present(pte))
 		goto no_page;
@@ -685,7 +693,7 @@ static struct page *follow_page_pte(struct vm_area_struct *vma,
 	pte_unmap_unlock(ptep, ptl);
 	if (!pte_none(pte))
 		return NULL;
-	return no_page_table(vma, flags);
+	return no_page_table(vma, flags, address);
 }
 
 static struct page *follow_pmd_mask(struct vm_area_struct *vma,
@@ -701,9 +709,9 @@ static struct page *follow_pmd_mask(struct vm_area_struct *vma,
 	pmd = pmd_offset(pudp, address);
 	pmdval = pmdp_get_lockless(pmd);
 	if (pmd_none(pmdval))
-		return no_page_table(vma, flags);
+		return no_page_table(vma, flags, address);
 	if (!pmd_present(pmdval))
-		return no_page_table(vma, flags);
+		return no_page_table(vma, flags, address);
 	if (pmd_devmap(pmdval)) {
 		ptl = pmd_lock(mm, pmd);
 		page = follow_devmap_pmd(vma, address, pmd, flags, &ctx->pgmap);
@@ -714,12 +722,12 @@ static struct page *follow_pmd_mask(struct vm_area_struct *vma,
 		return follow_page_pte(vma, address, pmd, flags, &ctx->pgmap);
 
 	if (pmd_protnone(pmdval) && !gup_can_follow_protnone(vma, flags))
-		return no_page_table(vma, flags);
+		return no_page_table(vma, flags, address);
 
 	ptl = pmd_lock(mm, pmd);
 	if (unlikely(!pmd_present(*pmd))) {
 		spin_unlock(ptl);
-		return no_page_table(vma, flags);
+		return no_page_table(vma, flags, address);
 	}
 	if (unlikely(!pmd_trans_huge(*pmd))) {
 		spin_unlock(ptl);
@@ -750,7 +758,7 @@ static struct page *follow_pud_mask(struct vm_area_struct *vma,
 
 	pud = pud_offset(p4dp, address);
 	if (pud_none(*pud))
-		return no_page_table(vma, flags);
+		return no_page_table(vma, flags, address);
 	if (pud_devmap(*pud)) {
 		ptl = pud_lock(mm, pud);
 		page = follow_devmap_pud(vma, address, pud, flags, &ctx->pgmap);
@@ -758,7 +766,7 @@ static struct page *follow_pud_mask(struct vm_area_struct *vma,
 		return page;
 	}
 	if (unlikely(pud_bad(*pud)))
-		return no_page_table(vma, flags);
+		return no_page_table(vma, flags, address);
 
 	return follow_pmd_mask(vma, address, pud, flags, ctx);
 }
@@ -772,10 +780,10 @@ static struct page *follow_p4d_mask(struct vm_area_struct *vma,
 
 	p4d = p4d_offset(pgdp, address);
 	if (p4d_none(*p4d))
-		return no_page_table(vma, flags);
+		return no_page_table(vma, flags, address);
 	BUILD_BUG_ON(p4d_huge(*p4d));
 	if (unlikely(p4d_bad(*p4d)))
-		return no_page_table(vma, flags);
+		return no_page_table(vma, flags, address);
 
 	return follow_pud_mask(vma, address, p4d, flags, ctx);
 }
@@ -825,7 +833,7 @@ static struct page *follow_page_mask(struct vm_area_struct *vma,
 
 	pgd = pgd_offset(mm, address);
 	if (pgd_none(*pgd) || unlikely(pgd_bad(*pgd)))
-		return no_page_table(vma, flags);
+		return no_page_table(vma, flags, address);
 
 	return follow_p4d_mask(vma, address, pgd, flags, ctx);
 }
-- 
2.41.0