Date: Sat, 11 Jun 2022 14:43:37 -0700
From: Zach O'Keefe
To: Yang Shi
Shutemov" , Matthew Wilcox , Andrew Morton , Linux MM , Linux Kernel Mailing List Subject: Re: [v3 PATCH 4/7] mm: khugepaged: use transhuge_vma_suitable replace open-code Message-ID: References: <20220606214414.736109-1-shy828301@gmail.com> <20220606214414.736109-5-shy828301@gmail.com> MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: X-Spam-Status: No, score=-17.6 required=5.0 tests=BAYES_00,DKIMWL_WL_MED, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,DKIM_VALID_EF, ENV_AND_HDR_SPF_MATCH,RCVD_IN_DNSWL_NONE,SPF_HELO_NONE,SPF_PASS, T_SCC_BODY_TEXT_LINE,USER_IN_DEF_DKIM_WL,USER_IN_DEF_SPF_WL autolearn=ham autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org On 10 Jun 20:25, Yang Shi wrote: > On Fri, Jun 10, 2022 at 5:28 PM Zach O'Keefe wrote: > > > > On Fri, Jun 10, 2022 at 3:04 PM Yang Shi wrote: > > > > > > On Fri, Jun 10, 2022 at 9:59 AM Yang Shi wrote: > > > > > > > > On Thu, Jun 9, 2022 at 6:52 PM Zach O'Keefe wrote: > > > > > > > > > > On Mon, Jun 6, 2022 at 2:44 PM Yang Shi wrote: > > > > > > > > > > > > The hugepage_vma_revalidate() needs to check if the address is still in > > > > > > the aligned HPAGE_PMD_SIZE area of the vma when reacquiring mmap_lock, > > > > > > but it was open-coded, use transhuge_vma_suitable() to do the job. And > > > > > > add proper comments for transhuge_vma_suitable(). > > > > > > > > > > > > Signed-off-by: Yang Shi > > > > > > --- > > > > > > include/linux/huge_mm.h | 6 ++++++ > > > > > > mm/khugepaged.c | 5 +---- > > > > > > 2 files changed, 7 insertions(+), 4 deletions(-) > > > > > > > > > > > > diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h > > > > > > index a8f61db47f2a..79d5919beb83 100644 > > > > > > --- a/include/linux/huge_mm.h > > > > > > +++ b/include/linux/huge_mm.h > > > > > > @@ -128,6 +128,12 @@ static inline bool transhuge_vma_size_ok(struct vm_area_struct *vma) > > > > > > return false; > > > > > > } > > > > > > > > > > > > +/* > > > > > > + * Do the below checks: > > > > > > + * - For non-anon vma, check if the vm_pgoff is HPAGE_PMD_NR aligned. > > > > > > + * - For all vmas, check if the haddr is in an aligned HPAGE_PMD_SIZE > > > > > > + * area. > > > > > > + */ > > > > > > > > > > AFAIK we aren't checking if vm_pgoff is HPAGE_PMD_NR aligned, but > > > > > rather that linear_page_index(vma, round_up(vma->vm_start, > > > > > HPAGE_PMD_SIZE)) is HPAGE_PMD_NR aligned within vma->vm_file. I was > > > > > > > > Yeah, you are right. > > > > > > > > > pretty confused about this (hopefully I have it right now - if not - > > > > > case and point :) ), so it might be a good opportunity to add some > > > > > extra commentary to help future travelers understand why this > > > > > constraint exists. > > > > > > > > I'm not fully sure I understand this 100%. I think this is related to > > > > how page cache is structured. I will try to add more comments. > > > > > > How's about "The underlying THP is always properly aligned in page > > > cache, but it may be across the boundary of VMA if the VMA is > > > misaligned, so the THP can't be PMD mapped for this case." 
> >
> > I could certainly still be wrong / am learning here - but I *thought*
> > the reason for this check was to make sure that the hugepage
> > to-be-collapsed is naturally aligned within the file (since, AFAIK,
> > without this constraint, different mm's might have different ideas
> > about where hugepages in the file should be).
>
> The hugepage is definitely naturally aligned within the file; this is
> guaranteed by how the page cache is organized. You can find some example
> code in the shmem fault path, for example the below snippet:
>
> hindex = round_down(index, folio_nr_pages(folio));
> error = shmem_add_to_page_cache(folio, mapping, hindex, NULL,
>                                 gfp & GFP_RECLAIM_MASK, charge_mm);
>
> The index is actually rounded down to be HPAGE_PMD_NR aligned.

Thanks for the reference here.

> The check in hugepage_vma_check() is used to guarantee there is a PMD
> aligned area in the vma exactly overlapping with a PMD range in the
> page cache. For example, if you have a vma starting at 0x1000 that maps
> to the file's page offset of 0, then even though you get a THP for the
> file, it cannot be PMD mapped into the vma. But if it maps to the
> file's page offset of 1, then starting from 0x200000 (assuming the vma
> is big enough) it can PMD map the second THP in the page cache. Does it
> make sense?
>

Yes, this makes sense - thanks for providing your insight. I think I was
basically thinking the same thing; except your description is more accurate
(namely, that it is *some* PMD-aligned range covered by the vma that maps to a
hugepage-aligned offset in the file - I mistakenly took this to be the *first*
PMD-aligned address >= vma->vm_start).

Also, with this in mind, your previously suggested comment makes sense. If I
had to take a stab at it, I would say something like:

"The hugepage is guaranteed to be hugepage-aligned within the file, but we must
check that the PMD-aligned addresses in the VMA map to PMD-aligned offsets
within the file, else the hugepage will not be PMD-mappable".

WDYT?

> > > > >
> > > > > Also I wonder while we're at it if we can rename this to
> > > > > transhuge_addr_aligned() or transhuge_addr_suitable() or something.
> > > >
> > > > I think it is still actually used to check the vma.
> > > >
> > > > > Otherwise I think the change is a nice cleanup.
> > > > > >  static inline bool transhuge_vma_suitable(struct vm_area_struct *vma,
> > > > > >  		unsigned long addr)
> > > > > >  {
> > > > > > diff --git a/mm/khugepaged.c b/mm/khugepaged.c
> > > > > > index 7a5d1c1a1833..ca1754d3a827 100644
> > > > > > --- a/mm/khugepaged.c
> > > > > > +++ b/mm/khugepaged.c
> > > > > > @@ -951,7 +951,6 @@ static int hugepage_vma_revalidate(struct mm_struct *mm, unsigned long address,
> > > > > >  		struct vm_area_struct **vmap)
> > > > > >  {
> > > > > >  	struct vm_area_struct *vma;
> > > > > > -	unsigned long hstart, hend;
> > > > > >
> > > > > >  	if (unlikely(khugepaged_test_exit(mm)))
> > > > > >  		return SCAN_ANY_PROCESS;
> > > > > > @@ -960,9 +959,7 @@ static int hugepage_vma_revalidate(struct mm_struct *mm, unsigned long address,
> > > > > >  	if (!vma)
> > > > > >  		return SCAN_VMA_NULL;
> > > > > >
> > > > > > -	hstart = (vma->vm_start + ~HPAGE_PMD_MASK) & HPAGE_PMD_MASK;
> > > > > > -	hend = vma->vm_end & HPAGE_PMD_MASK;
> > > > > > -	if (address < hstart || address + HPAGE_PMD_SIZE > hend)
> > > > > > +	if (!transhuge_vma_suitable(vma, address))
> > > > > >  		return SCAN_ADDRESS_RANGE;
> > > > > >  	if (!hugepage_vma_check(vma, vma->vm_flags))
> > > > > >  		return SCAN_VMA_CHECK;
> > > > > > --
> > > > > > 2.26.3
> > > > > >
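
As an aside, here is a small userspace sketch of the alignment arithmetic
discussed above. It is only an illustration, not kernel code: the helper name
file_vma_pmd_mappable() is made up, and it assumes 4K pages with 2M PMD huge
pages. It just checks whether a file-backed mapping has *some* PMD-aligned
address that corresponds to an HPAGE_PMD_NR-aligned page offset in the file:

#include <stdbool.h>
#include <stdio.h>

#define PAGE_SHIFT      12
#define HPAGE_PMD_SIZE  (1UL << 21)                     /* assume 2M PMDs */
#define HPAGE_PMD_NR    (HPAGE_PMD_SIZE >> PAGE_SHIFT)  /* 512 with 4K pages */

/*
 * Illustrative helper (not the kernel's implementation), modeled on the
 * pgoff check discussed above: a PMD-aligned address vaddr maps to file
 * page offset ((vaddr - vm_start) >> PAGE_SHIFT) + vm_pgoff, so some
 * PMD-aligned vaddr lands on an HPAGE_PMD_NR-aligned offset iff
 * (vm_start >> PAGE_SHIFT) - vm_pgoff is HPAGE_PMD_NR aligned.
 */
static bool file_vma_pmd_mappable(unsigned long vm_start,
                                  unsigned long vm_pgoff)
{
        return (((vm_start >> PAGE_SHIFT) - vm_pgoff) % HPAGE_PMD_NR) == 0;
}

int main(void)
{
        /* vma at 0x1000 mapping file offset 0: no PMD-mappable range. */
        printf("start=0x1000 pgoff=0 -> %d\n",
               file_vma_pmd_mappable(0x1000, 0));

        /*
         * Same start mapping file offset 1: 0x200000 lines up with the
         * second THP in the page cache, so a PMD mapping is possible.
         */
        printf("start=0x1000 pgoff=1 -> %d\n",
               file_vma_pmd_mappable(0x1000, 1));

        return 0;
}

This prints 0 for the first case and 1 for the second, matching the 0x200000
example above.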