Subject: Re: [PATCH 02/23] mm: Clear vmf->pte after pte_unmap_same() returns
To: Peter Xu
CC: Kirill A. Shutemov, Jerome Glisse, Mike Kravetz, Matthew Wilcox, Andrew Morton, Axel Rasmussen, Hugh Dickins, Nadav Amit, Andrea Arcangeli, Mike Rapoport
References: <20210323004912.35132-1-peterx@redhat.com> <20210323004912.35132-3-peterx@redhat.com>
From: Miaohe Lin
Message-ID: <28c1dfdc-b72b-88a7-411c-effc078f774a@huawei.com>
Date: Tue, 23 Mar 2021 10:34:45 +0800
In-Reply-To: <20210323004912.35132-3-peterx@redhat.com>
X-Mailing-List: linux-kernel@vger.kernel.org

Hi:

On 2021/3/23 8:48, Peter Xu wrote:
> pte_unmap_same() will always unmap the pte pointer. After the unmap, vmf->pte
> will not be valid any more. We should clear it.
>
> It was safe only because no one is accessing vmf->pte after pte_unmap_same()
> returns, since the only caller of pte_unmap_same() (so far) is do_swap_page(),
> where vmf->pte will in most cases be overwritten very soon.
>
> pte_unmap_same() will be used in other places in follow up patches, so that
> vmf->pte will not always be re-written. This patch enables us to call
> functions like finish_fault() because that'll conditionally unmap the pte by
> checking vmf->pte first. Or, alloc_set_pte() will make sure to allocate a new
> pte even after calling pte_unmap_same().
>
> Since we'll need to modify vmf->pte, directly pass in vmf into pte_unmap_same()
> and then we can also avoid the long parameter list.
>
> Signed-off-by: Peter Xu

Good cleanup! Thanks.

Reviewed-by: Miaohe Lin

> ---
>  mm/memory.c | 13 +++++++------
>  1 file changed, 7 insertions(+), 6 deletions(-)
>
> diff --git a/mm/memory.c b/mm/memory.c
> index a458a595331f..d534eba85756 100644
> --- a/mm/memory.c
> +++ b/mm/memory.c
> @@ -2607,19 +2607,20 @@ EXPORT_SYMBOL_GPL(apply_to_existing_page_range);
>   * proceeding (but do_wp_page is only called after already making such a check;
>   * and do_anonymous_page can safely check later on).
>   */
> -static inline int pte_unmap_same(struct mm_struct *mm, pmd_t *pmd,
> -				pte_t *page_table, pte_t orig_pte)
> +static inline int pte_unmap_same(struct vm_fault *vmf)
>  {
>  	int same = 1;
>  #if defined(CONFIG_SMP) || defined(CONFIG_PREEMPTION)
>  	if (sizeof(pte_t) > sizeof(unsigned long)) {
> -		spinlock_t *ptl = pte_lockptr(mm, pmd);
> +		spinlock_t *ptl = pte_lockptr(vmf->vma->vm_mm, vmf->pmd);
>  		spin_lock(ptl);
> -		same = pte_same(*page_table, orig_pte);
> +		same = pte_same(*vmf->pte, vmf->orig_pte);
>  		spin_unlock(ptl);
>  	}
>  #endif
> -	pte_unmap(page_table);
> +	pte_unmap(vmf->pte);
> +	/* After unmap of pte, the pointer is invalid now - clear it. */
> +	vmf->pte = NULL;
>  	return same;
>  }
>
> @@ -3308,7 +3309,7 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
>  	vm_fault_t ret = 0;
>  	void *shadow = NULL;
>
> -	if (!pte_unmap_same(vma->vm_mm, vmf->pmd, vmf->pte, vmf->orig_pte))
> +	if (!pte_unmap_same(vmf))
>  		goto out;
>
>  	entry = pte_to_swp_entry(vmf->orig_pte);
>
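As a further illustration for readers following the series: once vmf->pte is
cleared on unmap, later cleanup paths can test the pointer instead of assuming
the pte is still mapped. Below is a minimal, hypothetical sketch of that
caller-side pattern (the helper name is made up and is not part of this patch):

	#include <linux/mm.h>
	#include <linux/pgtable.h>

	/*
	 * Hypothetical illustration only -- not from the patch. With
	 * pte_unmap_same() clearing vmf->pte after unmapping, a later
	 * cleanup path can unmap conditionally by checking the pointer
	 * first, which is the behaviour the commit message describes
	 * for finish_fault().
	 */
	static void example_unmap_if_mapped(struct vm_fault *vmf)
	{
		if (vmf->pte) {
			/* Still mapped: unmap and clear for any later check. */
			pte_unmap(vmf->pte);
			vmf->pte = NULL;
		}
		/* If vmf->pte is already NULL, there is nothing to unmap. */
	}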