From: "Huang, Ying"
To: Nadav Amit
Cc: Alistair Popple, Peter Xu, huang ying, Linux MM, Andrew Morton, LKML,
 "Sierra Guiza, Alejandro (Alex)", Felix Kuehling, Jason Gunthorpe,
 John Hubbard, David Hildenbrand, Ralph Campbell, Matthew Wilcox,
 Karol Herbst, Lyude Paul, Ben Skeggs, Logan Gunthorpe, paulus@ozlabs.org,
 linuxppc-dev@lists.ozlabs.org, stable@vger.kernel.org
Subject: Re: [PATCH v2 1/2] mm/migrate_device.c: Copy pte dirty bit to page
References: <6e77914685ede036c419fa65b6adc27f25a6c3e9.1660635033.git-series.apopple@nvidia.com>
 <871qtfvdlw.fsf@nvdebian.thelocal> <87o7wjtn2g.fsf@nvdebian.thelocal>
 <87tu6bbaq7.fsf@yhuang6-desk2.ccr.corp.intel.com>
 <1D2FB37E-831B-445E-ADDC-C1D3FF0425C1@gmail.com>
Date: Thu, 18 Aug 2022 13:59:49 +0800
In-Reply-To: <1D2FB37E-831B-445E-ADDC-C1D3FF0425C1@gmail.com> (Nadav Amit's
 message of "Wed, 17 Aug 2022 02:41:19 -0700")
Message-ID: <87ilmqay7e.fsf@yhuang6-desk2.ccr.corp.intel.com>
Nadav Amit writes:

> On Aug 17, 2022, at 12:17 AM, Huang, Ying wrote:
>
>> Alistair Popple writes:
>>
>>> Peter Xu writes:
>>>
>>>> On Wed, Aug 17, 2022 at 11:49:03AM +1000, Alistair Popple wrote:
>>>>> Peter Xu writes:
>>>>>
>>>>>> On Tue, Aug 16, 2022 at 04:10:29PM +0800, huang ying wrote:
>>>>>>>> @@ -193,11 +194,10 @@ static int migrate_vma_collect_pmd(pmd_t *pmdp,
>>>>>>>>  			bool anon_exclusive;
>>>>>>>>  			pte_t swp_pte;
>>>>>>>>
>>>>>>>> +			flush_cache_page(vma, addr, pte_pfn(*ptep));
>>>>>>>> +			pte = ptep_clear_flush(vma, addr, ptep);
>>>>>>>
>>>>>>> Although I think it's possible to batch the TLB flushing just before
>>>>>>> unlocking PTL. The current code looks correct.
>>>>>>
>>>>>> If we're with unconditionally ptep_clear_flush(), does it mean we should
>>>>>> probably drop the "unmapped" and the last flush_tlb_range() already since
>>>>>> they'll be redundant?
>>>>>
>>>>> This patch does that, unless I missed something?
>>>>
>>>> Yes it does. Somehow I didn't read into the real v2 patch, sorry!
>>>>
>>>>>> If that'll need to be dropped, it looks indeed better to still keep the
>>>>>> batch to me but just move it earlier (before unlock iiuc then it'll be
>>>>>> safe), then we can keep using ptep_get_and_clear() afaiu but keep "pte"
>>>>>> updated.
>>>>>
>>>>> I think we would also need to check should_defer_flush(). Looking at
>>>>> try_to_unmap_one() there is this comment:
>>>>>
>>>>>	if (should_defer_flush(mm, flags) && !anon_exclusive) {
>>>>>		/*
>>>>>		 * We clear the PTE but do not flush so potentially
>>>>>		 * a remote CPU could still be writing to the folio.
>>>>>		 * If the entry was previously clean then the
>>>>>		 * architecture must guarantee that a clear->dirty
>>>>>		 * transition on a cached TLB entry is written through
>>>>>		 * and traps if the PTE is unmapped.
>>>>>		 */
>>>>>
>>>>> And as I understand it we'd need the same guarantee here. Given
>>>>> try_to_migrate_one() doesn't do batched TLB flushes either I'd rather
>>>>> keep the code as consistent as possible between
>>>>> migrate_vma_collect_pmd() and try_to_migrate_one(). I could look at
>>>>> introducing TLB flushing for both in some future patch series.
>>>>
>>>> should_defer_flush() is TTU-specific code?
>>>
>>> I'm not sure, but I think we need the same guarantee here as mentioned
>>> in the comment otherwise we wouldn't see a subsequent CPU write that
>>> could dirty the PTE after we have cleared it but before the TLB flush.
>>>
>>> My assumption was should_defer_flush() would ensure we have that
>>> guarantee from the architecture, but maybe there are alternate/better
>>> ways of enforcing that?
>>>
>>>> IIUC the caller sets TTU_BATCH_FLUSH showing that tlb can be omitted since
>>>> the caller will be responsible for doing it. In migrate_vma_collect_pmd()
>>>> iiuc we don't need that hint because it'll be flushed within the same
>>>> function but just only after the loop of modifying the ptes. Also it'll be
>>>> with the pgtable lock held.
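For concreteness, if I read Peter's suggestion right, it keeps the
existing "unmapped" batching in migrate_vma_collect_pmd() but hoists the
flush above the PTL unlock. An untested sketch only, based on the
current loop structure; I haven't checked all the details:

	for (addr = start; addr < end; addr += PAGE_SIZE, ptep++) {
		...
		flush_cache_page(vma, addr, pte_pfn(*ptep));
		/* clear without a per-PTE flush; "pte" keeps the A/D bits */
		pte = ptep_get_and_clear(mm, addr, ptep);
		...
		unmapped++;
		...
	}
	arch_leave_lazy_mmu_mode();

	/* moved up from below pte_unmap_unlock(): flush under the PTL */
	if (unmapped)
		flush_tlb_range(walk->vma, start, end);

	pte_unmap_unlock(ptep - 1, ptl);

Whether hardware can still dirty an entry in the window between
ptep_get_and_clear() and the batched flush is exactly the guarantee
discussed below.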
>>> Right, but the pgtable lock doesn't protect against HW PTE changes such
>>> as setting the dirty bit, so we need to ensure the HW does the right
>>> thing here, and I don't know if all HW does.
>>
>> This sounds sensible. But I took a look at zap_pte_range(), and it
>> appears that the implementation requires the PTE dirty bit to be
>> write-through. Did I miss something?
>>
>> Hi, Nadav, can you help?
>
> Sorry for joining the discussion late. I read most of this thread and I
> hope I understand what you are asking me. So at the risk of rehashing or
> repeating what you already know - here are my 2 cents. Feel free to ask me
> again if I did not understand your questions:
>
> 1. ARCH_WANT_BATCHED_UNMAP_TLB_FLUSH is currently x86 specific. There is a
> recent patch that wants to use it for arm64 as well [1]. The assumption
> that Alistair cited from the code (regarding should_defer_flush()) might
> not be applicable to certain architectures (although most likely it is). I
> tried to encapsulate the logic on whether an unflushed RO entry can become
> dirty in an arch-specific function [2].
>
> 2. Having said all of that, using the logic of "flush if there are pending
> TLB flushes for this mm" as done by UNMAP_TLB_FLUSH makes sense IMHO
> (although I would have considered doing it at the finer granularity of
> VMA/page-table as I proposed before and got a somewhat lukewarm
> response [3]).
>
> 3. There is no question that flushing after dropping the ptl is wrong. But
> reading the thread, I think that you only focus on whether a dirty
> indication might get lost. The problem, I think, is bigger, as it might
> also cause correctness problems after concurrently removing mappings. What
> happens if we get for a clean PTE something like:
>
> CPU0				CPU1
> ----				----
>
> migrate_vma_collect_pmd()
> [ defer flush, release ptl ]
>				madvise(MADV_DONTNEED)
>				-> zap_pte_range()
>				[ PTE not present; mmu_gather
>				  not updated ]
>
> [ no flush; stale PTE in TLB ]
>
> [ page is still accessible ]
>
> [ might apply to munmap(); I usually regard MADV_DONTNEED since it does
> not call mmap_write_lock() ]

Yes. You are right. Flushing after unlocking the PTL can cause more
serious problems.

I also want to check whether the dirty bit can be lost in
zap_pte_range(), where the TLB flush is delayed if the PTE is clean.
Per my understanding, the PTE dirty bit must be write-through for this
to work correctly. And I cannot imagine what the CPU could do other
than take a page fault if

- the PTE is non-present,
- the TLB entry is clean, and
- the CPU writes the page.

So, can we assume the PTE dirty bit is always write-through on every
architecture?

> 4. Having multiple TLB flushing infrastructures makes all of these
> discussions very complicated and unmaintainable. I need to convince myself
> on every occasion (including this one) whether calls to
> flush_tlb_batched_pending() and tlb_flush_pending() are needed or not.
>
> What I would like to have [3] is a single infrastructure that gets a
> "ticket" (the generation when the batching started), the old PTE and the
> new PTE, and checks whether a TLB flush is needed based on the arch
> behavior and the current TLB generation. If needed, it would update the
> "ticket" to the new generation. Andy wanted a ring for pending TLB
> flushes, but I think it is an overkill with more overhead and complexity
> than needed.
>
> But the current situation, in which every TLB flush is a basis for long
> discussions and prone to bugs, is impossible.
>
> I hope it helps. Let me know if you want me to revive the patch-set or
> other feedback.
>
> [1] https://lore.kernel.org/all/20220711034615.482895-5-21cnbao@gmail.com/
> [2] https://lore.kernel.org/all/20220718120212.3180-13-namit@vmware.com/
> [3] https://lore.kernel.org/all/20210131001132.3368247-16-namit@vmware.com/
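Just to check that I understand the "ticket" idea, is it roughly
something like the following? This is only my reading of it; all names
here (tlb_ticket, arch_pte_needs_flush(), mm_tlb_flush_gen()) are
invented for illustration and not taken from your patch set:

	struct tlb_ticket {
		u64 gen;	/* mm's flush generation when batching started */
	};

	static bool pte_change_needs_flush(struct mm_struct *mm,
					   struct tlb_ticket *ticket,
					   pte_t oldpte, pte_t newpte)
	{
		u64 cur = mm_tlb_flush_gen(mm);		/* hypothetical */

		/* arch-specific: e.g. can a clean, unflushed entry go dirty? */
		if (!arch_pte_needs_flush(oldpte, newpte))
			return false;

		/* someone already flushed after this batch started */
		if (cur > ticket->gen) {
			ticket->gen = cur;
			return false;
		}

		/* caller must flush and advance the mm's generation */
		return true;
	}

If that is the right shape, it would indeed be much easier to reason
about than the current scattered flush_tlb_batched_pending() call sites.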
I haven't looked at the patches in depth yet. Will do that.

Best Regards,
Huang, Ying