From: James Houghton
Date: Tue, 19 Jul 2022 09:19:29 -0700
Subject: Re: [RFC PATCH 17/26] hugetlb: update follow_hugetlb_page to support HGM
To: "manish.mishra"
Cc: Mike Kravetz, Muchun Song, Peter Xu, David Hildenbrand, David Rientjes,
 Axel Rasmussen, Mina Almasry, Jue Wang, "Dr. David Alan Gilbert",
 linux-mm@kvack.org, linux-kernel@vger.kernel.org
In-Reply-To: <673a3024-bf82-3770-b737-4c7e53e70fe5@nutanix.com>
References: <20220624173656.2033256-1-jthoughton@google.com>
 <20220624173656.2033256-18-jthoughton@google.com>
 <673a3024-bf82-3770-b737-4c7e53e70fe5@nutanix.com>
David Alan Gilbert" , linux-mm@kvack.org, linux-kernel@vger.kernel.org Content-Type: text/plain; charset="UTF-8" X-Spam-Status: No, score=-17.6 required=5.0 tests=BAYES_00,DKIMWL_WL_MED, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,DKIM_VALID_EF, ENV_AND_HDR_SPF_MATCH,RCVD_IN_DNSWL_NONE,SPF_HELO_NONE,SPF_PASS, USER_IN_DEF_DKIM_WL,USER_IN_DEF_SPF_WL autolearn=ham autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org On Tue, Jul 19, 2022 at 3:48 AM manish.mishra wrote: > > > On 24/06/22 11:06 pm, James Houghton wrote: > > This enables support for GUP, and it is needed for the KVM demand paging > > self-test to work. > > > > One important change here is that, before, we never needed to grab the > > i_mmap_sem, but now, to prevent someone from collapsing the page tables > > out from under us, we grab it for reading when doing high-granularity PT > > walks. > > > > Signed-off-by: James Houghton > > --- > > mm/hugetlb.c | 70 ++++++++++++++++++++++++++++++++++++++++++---------- > > 1 file changed, 57 insertions(+), 13 deletions(-) > > > > diff --git a/mm/hugetlb.c b/mm/hugetlb.c > > index f9c7daa6c090..aadfcee947cf 100644 > > --- a/mm/hugetlb.c > > +++ b/mm/hugetlb.c > > @@ -6298,14 +6298,18 @@ long follow_hugetlb_page(struct mm_struct *mm, struct vm_area_struct *vma, > > unsigned long vaddr = *position; > > unsigned long remainder = *nr_pages; > > struct hstate *h = hstate_vma(vma); > > + struct address_space *mapping = vma->vm_file->f_mapping; > > int err = -EFAULT, refs; > > + bool has_i_mmap_sem = false; > > > > while (vaddr < vma->vm_end && remainder) { > > pte_t *pte; > > spinlock_t *ptl = NULL; > > bool unshare = false; > > int absent; > > + unsigned long pages_per_hpte; > > struct page *page; > > + struct hugetlb_pte hpte; > > > > /* > > * If we have a pending SIGKILL, don't keep faulting pages and > > @@ -6325,9 +6329,23 @@ long follow_hugetlb_page(struct mm_struct *mm, struct vm_area_struct *vma, > > */ > > pte = huge_pte_offset(mm, vaddr & huge_page_mask(h), > > huge_page_size(h)); > > - if (pte) > > - ptl = huge_pte_lock(h, mm, pte); > > - absent = !pte || huge_pte_none(huge_ptep_get(pte)); > > + if (pte) { > > + hugetlb_pte_populate(&hpte, pte, huge_page_shift(h)); > > + if (hugetlb_hgm_enabled(vma)) { > > + BUG_ON(has_i_mmap_sem); > > Just thinking can we do without i_mmap_lock_read in most cases. Like earlier > > this function was good without i_mmap_lock_read doing almost everything > > which is happening now? We need something to prevent the page tables from being rearranged while we're walking them. In this RFC, I used the i_mmap_lock. I'm going to change it, probably to a per-VMA lock (or maybe a per-hpage lock. I'm trying to figure out if a system with PTLs/hugetlb_pte_lock could work too :)). > > > + i_mmap_lock_read(mapping); > > + /* > > + * Need to hold the mapping semaphore for > > + * reading to do a HGM walk. 
> > +				 */
> > +				has_i_mmap_sem = true;
> > +				hugetlb_walk_to(mm, &hpte, vaddr, PAGE_SIZE,
> > +						/*stop_at_none=*/true);
> > +			}
> > +			ptl = hugetlb_pte_lock(mm, &hpte);
> > +		}
> > +
> > +		absent = !pte || hugetlb_pte_none(&hpte);
> >
> >  		/*
> >  		 * When coredumping, it suits get_dump_page if we just return
> > @@ -6338,8 +6356,13 @@ long follow_hugetlb_page(struct mm_struct *mm, struct vm_area_struct *vma,
> >  		 */
> >  		if (absent && (flags & FOLL_DUMP) &&
> >  		    !hugetlbfs_pagecache_present(h, vma, vaddr)) {
> > -			if (pte)
> > +			if (pte) {
> > +				if (has_i_mmap_sem) {
> > +					i_mmap_unlock_read(mapping);
> > +					has_i_mmap_sem = false;
> > +				}
> >  				spin_unlock(ptl);
> > +			}
> >  			remainder = 0;
> >  			break;
> >  		}
> > @@ -6359,8 +6382,13 @@ long follow_hugetlb_page(struct mm_struct *mm, struct vm_area_struct *vma,
> >  			vm_fault_t ret;
> >  			unsigned int fault_flags = 0;
> >
> > -			if (pte)
> > +			if (pte) {
> > +				if (has_i_mmap_sem) {
> > +					i_mmap_unlock_read(mapping);
> > +					has_i_mmap_sem = false;
> > +				}
> >  				spin_unlock(ptl);
> > +			}
> >  			if (flags & FOLL_WRITE)
> >  				fault_flags |= FAULT_FLAG_WRITE;
> >  			else if (unshare)
> > @@ -6403,8 +6431,11 @@ long follow_hugetlb_page(struct mm_struct *mm, struct vm_area_struct *vma,
> >  			continue;
> >  		}
> >
> > -		pfn_offset = (vaddr & ~huge_page_mask(h)) >> PAGE_SHIFT;
> > -		page = pte_page(huge_ptep_get(pte));
> > +		pfn_offset = (vaddr & ~hugetlb_pte_mask(&hpte)) >> PAGE_SHIFT;
> > +		page = pte_page(hugetlb_ptep_get(&hpte));
> > +		pages_per_hpte = hugetlb_pte_size(&hpte) / PAGE_SIZE;
> > +		if (hugetlb_hgm_enabled(vma))
> > +			page = compound_head(page);
> >
> >  		VM_BUG_ON_PAGE((flags & FOLL_PIN) && PageAnon(page) &&
> >  			       !PageAnonExclusive(page), page);
> > @@ -6414,17 +6445,21 @@ long follow_hugetlb_page(struct mm_struct *mm, struct vm_area_struct *vma,
> >  		 * and skip the same_page loop below.
> >  		 */
> >  		if (!pages && !vmas && !pfn_offset &&
> > -		    (vaddr + huge_page_size(h) < vma->vm_end) &&
> > -		    (remainder >= pages_per_huge_page(h))) {
> > -			vaddr += huge_page_size(h);
> > -			remainder -= pages_per_huge_page(h);
> > -			i += pages_per_huge_page(h);
> > +		    (vaddr + pages_per_hpte < vma->vm_end) &&
> > +		    (remainder >= pages_per_hpte)) {
> > +			vaddr += pages_per_hpte;
> > +			remainder -= pages_per_hpte;
> > +			i += pages_per_hpte;
> >  			spin_unlock(ptl);
> > +			if (has_i_mmap_sem) {
> > +				has_i_mmap_sem = false;
> > +				i_mmap_unlock_read(mapping);
> > +			}
> >  			continue;
> >  		}
> >
> >  		/* vaddr may not be aligned to PAGE_SIZE */
> > -		refs = min3(pages_per_huge_page(h) - pfn_offset, remainder,
> > +		refs = min3(pages_per_hpte - pfn_offset, remainder,
> >  			    (vma->vm_end - ALIGN_DOWN(vaddr, PAGE_SIZE)) >> PAGE_SHIFT);
> >
> >  		if (pages || vmas)
> > @@ -6447,6 +6482,10 @@ long follow_hugetlb_page(struct mm_struct *mm, struct vm_area_struct *vma,
> >  			if (WARN_ON_ONCE(!try_grab_folio(pages[i], refs,
> >  							 flags))) {
> >  				spin_unlock(ptl);
> > +				if (has_i_mmap_sem) {
> > +					has_i_mmap_sem = false;
> > +					i_mmap_unlock_read(mapping);
> > +				}
> >  				remainder = 0;
> >  				err = -ENOMEM;
> >  				break;
> > @@ -6458,8 +6497,13 @@ long follow_hugetlb_page(struct mm_struct *mm, struct vm_area_struct *vma,
> >  		i += refs;
> >
> >  		spin_unlock(ptl);
> > +		if (has_i_mmap_sem) {
> > +			has_i_mmap_sem = false;
> > +			i_mmap_unlock_read(mapping);
> > +		}
> >  	}
> >  	*nr_pages = remainder;
> > +	BUG_ON(has_i_mmap_sem);
> >  	/*
> >  	 * setting position is actually required only if remainder is
> >  	 * not zero but it's faster not to add a "if (remainder)"
>
> Thanks
>
> Manish Mishra
>
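For readers following the locking discussion above: whichever object the
lock ultimately hangs off (the mapping's i_mmap_sem as in this RFC, a
per-VMA lock, or a per-hpage lock), the shape of the synchronization is a
reader/writer lock, with GUP-style walkers as readers and page-table
collapse as the writer. The sketch below is only a self-contained
userspace model of that pattern; the names hgm_walk and
collapse_page_table are invented for illustration, and none of this code
comes from the patch series.

/*
 * Toy userspace model of the locking pattern discussed above: any number
 * of "walkers" may hold the lock for reading, so a "collapse" (which
 * rewrites the page-table layout) cannot run underneath them; collapse
 * takes the lock exclusively. Purely illustrative, not kernel code.
 */
#include <pthread.h>
#include <stdio.h>

static pthread_rwlock_t hgm_lock = PTHREAD_RWLOCK_INITIALIZER;
static int pt_layout_version;	/* stands in for the page-table layout */

/* A GUP-style walker: must see a stable layout for the whole walk. */
static void *hgm_walk(void *arg)
{
	pthread_rwlock_rdlock(&hgm_lock);	/* like i_mmap_lock_read() */
	int before = pt_layout_version;
	/* ... walk the page tables here ... */
	int after = pt_layout_version;
	pthread_rwlock_unlock(&hgm_lock);
	printf("walker %ld: layout stable: %s\n", (long)arg,
	       before == after ? "yes" : "NO");
	return NULL;
}

/* Collapsing high-granularity mappings changes the layout. */
static void *collapse_page_table(void *arg)
{
	(void)arg;
	pthread_rwlock_wrlock(&hgm_lock);	/* like i_mmap_lock_write() */
	pt_layout_version++;			/* rewrite the layout */
	pthread_rwlock_unlock(&hgm_lock);
	return NULL;
}

int main(void)
{
	pthread_t walkers[4], collapser;

	for (long i = 0; i < 4; i++)
		pthread_create(&walkers[i], NULL, hgm_walk, (void *)i);
	pthread_create(&collapser, NULL, collapse_page_table, NULL);

	for (int i = 0; i < 4; i++)
		pthread_join(walkers[i], NULL);
	pthread_join(collapser, NULL);
	return 0;
}

Build with "cc -pthread"; the point is simply that walkers can proceed
concurrently with each other, but a collapse can never overlap a walk.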