Message-ID: <10e58e7e-a52e-751d-f693-cd4e05ac10ca@linux.dev>
Date: Mon, 22 May 2023 19:41:35 +0800
Subject: Re: [PATCH 08/31] mm/page_vma_mapped: pte_offset_map_nolock() not pte_lockptr()
To: Hugh Dickins, Andrew Morton
Cc: Mike Kravetz, Mike Rapoport, "Kirill A. Shutemov", Matthew Wilcox,
    David Hildenbrand, Suren Baghdasaryan, Qi Zheng, Yang Shi, Mel Gorman,
    Peter Xu, Peter Zijlstra, Will Deacon, Yu Zhao, Alistair Popple,
    Ralph Campbell, Ira Weiny, Steven Price, SeongJae Park,
    Naoya Horiguchi, Christophe Leroy, Zack Rusin, Jason Gunthorpe,
    Axel Rasmussen, Anshuman Khandual, Pasha Tatashin, Miaohe Lin,
    Minchan Kim, Christoph Hellwig, Song Liu, Thomas Hellstrom,
    linux-kernel@vger.kernel.org, linux-mm@kvack.org
References: <68a97fbe-5c1e-7ac6-72c-7b9c6290b370@google.com>
    <8fa3fb6e-2e39-cbea-c529-ee9e64c7d2d0@google.com>
From: Qi Zheng
In-Reply-To: <8fa3fb6e-2e39-cbea-c529-ee9e64c7d2d0@google.com>

On 2023/5/22 12:58, Hugh Dickins wrote:
> map_pte() use pte_offset_map_nolock(), to make sure of the ptl belonging
> to pte, even if pmd entry is then changed racily: page_vma_mapped_walk()
> use that instead of getting pte_lockptr() later, or restart if map_pte()
> found no page table.
> 
> Signed-off-by: Hugh Dickins
> ---
>  mm/page_vma_mapped.c | 28 ++++++++++++++++++++++------
>  1 file changed, 22 insertions(+), 6 deletions(-)
> 
> diff --git a/mm/page_vma_mapped.c b/mm/page_vma_mapped.c
> index 947dc7491815..2af734274073 100644
> --- a/mm/page_vma_mapped.c
> +++ b/mm/page_vma_mapped.c
> @@ -13,16 +13,28 @@ static inline bool not_found(struct page_vma_mapped_walk *pvmw)
>          return false;
>  }
> 
> -static bool map_pte(struct page_vma_mapped_walk *pvmw)
> +static bool map_pte(struct page_vma_mapped_walk *pvmw, spinlock_t **ptlp)
>  {
>          if (pvmw->flags & PVMW_SYNC) {
>                  /* Use the stricter lookup */
>                  pvmw->pte = pte_offset_map_lock(pvmw->vma->vm_mm, pvmw->pmd,
>                                                  pvmw->address, &pvmw->ptl);
> -                return true;
> +                *ptlp = pvmw->ptl;
> +                return !!pvmw->pte;
>          }
> 
> -        pvmw->pte = pte_offset_map(pvmw->pmd, pvmw->address);
> +        /*
> +         * It is important to return the ptl corresponding to pte,
> +         * in case *pvmw->pmd changes underneath us; so we need to
> +         * return it even when choosing not to lock, in case caller
> +         * proceeds to loop over next ptes, and finds a match later.
> +         * Though, in most cases, page lock already protects this.
> +         */
> +        pvmw->pte = pte_offset_map_nolock(pvmw->vma->vm_mm, pvmw->pmd,
> +                                          pvmw->address, ptlp);
> +        if (!pvmw->pte)
> +                return false;
> +
>          if (pvmw->flags & PVMW_MIGRATION) {
>                  if (!is_swap_pte(*pvmw->pte))
>                          return false;
> @@ -51,7 +63,7 @@ static bool map_pte(struct page_vma_mapped_walk *pvmw)
>          } else if (!pte_present(*pvmw->pte)) {
>                  return false;
>          }
> -        pvmw->ptl = pte_lockptr(pvmw->vma->vm_mm, pvmw->pmd);
> +        pvmw->ptl = *ptlp;
>          spin_lock(pvmw->ptl);
>          return true;
>  }
> @@ -156,6 +168,7 @@ bool page_vma_mapped_walk(struct page_vma_mapped_walk *pvmw)
>          struct vm_area_struct *vma = pvmw->vma;
>          struct mm_struct *mm = vma->vm_mm;
>          unsigned long end;
> +        spinlock_t *ptl;
>          pgd_t *pgd;
>          p4d_t *p4d;
>          pud_t *pud;
> @@ -257,8 +270,11 @@ bool page_vma_mapped_walk(struct page_vma_mapped_walk *pvmw)
>                          step_forward(pvmw, PMD_SIZE);
>                          continue;
>                  }
> -                if (!map_pte(pvmw))
> +                if (!map_pte(pvmw, &ptl)) {
> +                        if (!pvmw->pte)
> +                                goto restart;

Could pvmw->pmd have been changed here? If not, how about just jumping
to a retry label added below, like this:

@@ -205,6 +205,8 @@ bool page_vma_mapped_walk(struct page_vma_mapped_walk *pvmw)
         }
 
         pvmw->pmd = pmd_offset(pud, pvmw->address);
+
+retry:
         /*
          * Make sure the pmd value isn't cached in a register by the
          * compiler and used as a stale value after we've observed a
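To make sure I'm reading the proposed flow correctly, here is a
compilable userspace mock of just that control flow (everything
suffixed _mock is invented for illustration, not kernel API; a false
return with the pte left NULL stands in for pte_offset_map_nolock()
finding the page table gone, the case where we would re-read the pmd
entry rather than restart from the top):

#include <stdbool.h>
#include <stdio.h>

static int pmd_reads;   /* counts re-reads of the pmd entry */

/* Mock of map_pte(): first lookup finds no page table, second succeeds. */
static bool map_pte_mock(const int **pte)
{
        static const int mapped_pte = 42;
        static int calls;

        if (++calls == 1) {
                *pte = NULL;    /* pte_offset_map_nolock() returned NULL */
                return false;
        }
        *pte = &mapped_pte;     /* mapped successfully this time */
        return true;
}

/* Mock of the relevant loop in page_vma_mapped_walk(). */
static bool walk_mock(void)
{
        const int *pte;

retry:
        pmd_reads++;            /* models re-reading *pvmw->pmd here */
        if (!map_pte_mock(&pte)) {
                if (!pte)
                        goto retry;     /* table gone: re-read the pmd */
                return false;           /* mapped, but pte did not match */
        }
        return true;                    /* found a mapped pte */
}

int main(void)
{
        /* prints: matched=1 pmd_reads=2 */
        printf("matched=%d pmd_reads=%d\n", walk_mock(), pmd_reads);
        return 0;
}

The idea being that only the pmd entry needs to be re-read in that
case; going back to restart would also re-walk the pgd/p4d/pud levels,
which should not be necessary if pvmw->pmd itself cannot have changed.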
>                          goto next_pte;
> +                }
>  this_pte:
>                  if (check_pte(pvmw))
>                          return true;
> @@ -281,7 +297,7 @@ bool page_vma_mapped_walk(struct page_vma_mapped_walk *pvmw)
>          } while (pte_none(*pvmw->pte));
> 
>          if (!pvmw->ptl) {
> -                pvmw->ptl = pte_lockptr(mm, pvmw->pmd);
> +                pvmw->ptl = ptl;
>                  spin_lock(pvmw->ptl);
>          }
>          goto this_pte;

-- 
Thanks,
Qi