From: Yang Shi
Date: Tue, 14 Jun 2022 10:40:24 -0700
Subject: Re: [v3 PATCH 4/7] mm: khugepaged: use transhuge_vma_suitable replace open-code
To: "Zach O'Keefe"
Cc: Vlastimil Babka, "Kirill A. Shutemov", Matthew Wilcox, Andrew Morton,
    Linux MM, Linux Kernel Mailing List
References: <20220606214414.736109-1-shy828301@gmail.com> <20220606214414.736109-5-shy828301@gmail.com>
X-Mailing-List: linux-kernel@vger.kernel.org

On Sat, Jun 11, 2022 at 2:43 PM Zach O'Keefe wrote:
>
> On 10 Jun 20:25, Yang Shi wrote:
> > On Fri, Jun 10, 2022 at 5:28 PM Zach O'Keefe wrote:
> > >
> > > On Fri, Jun 10, 2022 at 3:04 PM Yang Shi wrote:
> > > >
> > > > On Fri, Jun 10, 2022 at 9:59 AM Yang Shi wrote:
> > > > >
> > > > > On Thu, Jun 9, 2022 at 6:52 PM Zach O'Keefe wrote:
> > > > > >
> > > > > > On Mon, Jun 6, 2022 at 2:44 PM Yang Shi wrote:
> > > > > > >
> > > > > > > The hugepage_vma_revalidate() needs to check if the address is still in
> > > > > > > the aligned HPAGE_PMD_SIZE area of the vma when reacquiring mmap_lock,
> > > > > > > but it was open-coded, use transhuge_vma_suitable() to do the job. And
> > > > > > > add proper comments for transhuge_vma_suitable().
> > > > > > >
> > > > > > > Signed-off-by: Yang Shi
> > > > > > > ---
> > > > > > >  include/linux/huge_mm.h | 6 ++++++
> > > > > > >  mm/khugepaged.c         | 5 +----
> > > > > > >  2 files changed, 7 insertions(+), 4 deletions(-)
> > > > > > >
> > > > > > > diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
> > > > > > > index a8f61db47f2a..79d5919beb83 100644
> > > > > > > --- a/include/linux/huge_mm.h
> > > > > > > +++ b/include/linux/huge_mm.h
> > > > > > > @@ -128,6 +128,12 @@ static inline bool transhuge_vma_size_ok(struct vm_area_struct *vma)
> > > > > > >  	return false;
> > > > > > >  }
> > > > > > >
> > > > > > > +/*
> > > > > > > + * Do the below checks:
> > > > > > > + * - For non-anon vma, check if the vm_pgoff is HPAGE_PMD_NR aligned.
> > > > > > > + * - For all vmas, check if the haddr is in an aligned HPAGE_PMD_SIZE
> > > > > > > + *   area.
> > > > > > > + */
> > > > > >
> > > > > > AFAIK we aren't checking if vm_pgoff is HPAGE_PMD_NR aligned, but
> > > > > > rather that linear_page_index(vma, round_up(vma->vm_start,
> > > > > > HPAGE_PMD_SIZE)) is HPAGE_PMD_NR aligned within vma->vm_file. I was
> > > > >
> > > > > Yeah, you are right.
> > > > >
> > > > > > pretty confused about this (hopefully I have it right now - if not -
> > > > > > case and point :) ), so it might be a good opportunity to add some
> > > > > > extra commentary to help future travelers understand why this
> > > > > > constraint exists.
> > > > >
> > > > > I'm not fully sure I understand this 100%. I think this is related to
> > > > > how page cache is structured. I will try to add more comments.
> > > >
> > > > How's about "The underlying THP is always properly aligned in page
> > > > cache, but it may be across the boundary of VMA if the VMA is
> > > > misaligned, so the THP can't be PMD mapped for this case."
> > >
> > > I could certainly still be wrong / am learning here - but I *thought*
> > > the reason for this check was to make sure that the hugepage
> > > to-be-collapsed is naturally aligned within the file (since, AFAIK,
> > > without this constraint, different mm's might have different ideas
> > > about where hugepages in the file should be).
> >
> > The hugepage is definitely naturally aligned within the file, this is
> > guaranteed by how page cache is organized, you could find some example
> > code from shmem fault, for example, the below code snippet:
> >
> >     hindex = round_down(index, folio_nr_pages(folio));
> >     error = shmem_add_to_page_cache(folio, mapping, hindex, NULL,
> >                                     gfp & GFP_RECLAIM_MASK, charge_mm);
> >
> > The index is actually rounded down to HPAGE_PMD_NR aligned.
>
> Thanks for the reference here.
>
> > The check in hugepage_vma_check() is used to guarantee there is an PMD
> > aligned area in the vma exactly overlapping with a PMD range in the
> > page cache. For example, you have a vma starting from 0x1000 maps to
> > the file's page offset of 0, even though you get THP for the file, it
> > can not be PMD mapped to the vma. But if it maps to the file's page
> > offset of 1, then starting from 0x200000 (assuming the vma is big
> > enough) it can PMD map the second THP in the page cache. Does it make
> > sense?
>
> Yes, this makes sense - thanks for providing your insight. I think I was
> basically thinking the same thing; except your description is more accurate
> (namely, that it is *some* pmd-aligned range covered by the vma that maps to a
> hugepage-aligned offset in the file (I mistakenly took this to be the *first*
> pmd-aligned address >= vma->vm_start)).
>
> Also, with this in mind, your previous suggested comment makes sense.
> If I had to take a stab at it, I would say something like:
>
> "The hugepage is guaranteed to be hugepage-aligned within the file, but we must
> check that the PMD-aligned addresses in the VMA map to PMD-aligned offsets
> within the file, else the hugepage will not be PMD-mappable".
>
> WDYT?

Looks good to me. Thanks for the wording.

> > > > > >
> > > > > > Also I wonder while we're at it if we can rename this to
> > > > > > transhuge_addr_aligned() or transhuge_addr_suitable() or something.
> > > > >
> > > > > I think it is still actually used to check vma.
> > > > >
> > > > > > Otherwise I think the change is a nice cleanup.
> > > > > >
> > > > > > >  static inline bool transhuge_vma_suitable(struct vm_area_struct *vma,
> > > > > > >  		unsigned long addr)
> > > > > > >  {
> > > > > > > diff --git a/mm/khugepaged.c b/mm/khugepaged.c
> > > > > > > index 7a5d1c1a1833..ca1754d3a827 100644
> > > > > > > --- a/mm/khugepaged.c
> > > > > > > +++ b/mm/khugepaged.c
> > > > > > > @@ -951,7 +951,6 @@ static int hugepage_vma_revalidate(struct mm_struct *mm, unsigned long address,
> > > > > > >  					struct vm_area_struct **vmap)
> > > > > > >  {
> > > > > > >  	struct vm_area_struct *vma;
> > > > > > > -	unsigned long hstart, hend;
> > > > > > >
> > > > > > >  	if (unlikely(khugepaged_test_exit(mm)))
> > > > > > >  		return SCAN_ANY_PROCESS;
> > > > > > > @@ -960,9 +959,7 @@ static int hugepage_vma_revalidate(struct mm_struct *mm, unsigned long address,
> > > > > > >  	if (!vma)
> > > > > > >  		return SCAN_VMA_NULL;
> > > > > > >
> > > > > > > -	hstart = (vma->vm_start + ~HPAGE_PMD_MASK) & HPAGE_PMD_MASK;
> > > > > > > -	hend = vma->vm_end & HPAGE_PMD_MASK;
> > > > > > > -	if (address < hstart || address + HPAGE_PMD_SIZE > hend)
> > > > > > > +	if (!transhuge_vma_suitable(vma, address))
> > > > > > >  		return SCAN_ADDRESS_RANGE;
> > > > > > >  	if (!hugepage_vma_check(vma, vma->vm_flags))
> > > > > > >  		return SCAN_VMA_CHECK;
> > > > > > > --
> > > > > > > 2.26.3