From: "Huang, Ying"
To: Alistair Popple, Nadav Amit
Cc: Peter Xu, huang ying, "Sierra Guiza, Alejandro (Alex)", Felix Kuehling,
    Jason Gunthorpe, John Hubbard, David Hildenbrand, Ralph Campbell,
    Matthew Wilcox, Karol Herbst, Lyude Paul, Ben Skeggs, Logan Gunthorpe
Subject: Re: [PATCH v2 1/2] mm/migrate_device.c: Copy pte dirty bit to page
Date: Wed, 17 Aug 2022 15:17:04 +0800
Message-ID: <87tu6bbaq7.fsf@yhuang6-desk2.ccr.corp.intel.com>
In-Reply-To: <87o7wjtn2g.fsf@nvdebian.thelocal> (Alistair Popple's message of
    "Wed, 17 Aug 2022 15:41:16 +1000")
References: <6e77914685ede036c419fa65b6adc27f25a6c3e9.1660635033.git-series.apopple@nvidia.com>
    <871qtfvdlw.fsf@nvdebian.thelocal>
    <87o7wjtn2g.fsf@nvdebian.thelocal>
List-ID: linux-kernel@vger.kernel.org
Alistair Popple writes:

> Peter Xu writes:
>
>> On Wed, Aug 17, 2022 at 11:49:03AM +1000, Alistair Popple wrote:
>>>
>>> Peter Xu writes:
>>>
>>> > On Tue, Aug 16, 2022 at 04:10:29PM +0800, huang ying wrote:
>>> >> > @@ -193,11 +194,10 @@ static int migrate_vma_collect_pmd(pmd_t *pmdp,
>>> >> >  			bool anon_exclusive;
>>> >> >  			pte_t swp_pte;
>>> >> >
>>> >> > +			flush_cache_page(vma, addr, pte_pfn(*ptep));
>>> >> > +			pte = ptep_clear_flush(vma, addr, ptep);
>>> >>
>>> >> Although I think it's possible to batch the TLB flushing just before
>>> >> unlocking the PTL, the current code looks correct.
>>> >
>>> > If we go with unconditional ptep_clear_flush(), does it mean we should
>>> > probably drop "unmapped" and the last flush_tlb_range() already, since
>>> > they'll be redundant?
>>>
>>> This patch does that, unless I missed something?
>>
>> Yes it does. Somehow I didn't read into the real v2 patch, sorry!
>>
>>> > If that'll need to be dropped, it looks indeed better to me to still
>>> > keep the batch but just move it earlier (before unlock, iiuc, then
>>> > it'll be safe); then we can keep using ptep_get_and_clear() afaiu but
>>> > keep "pte" updated.
>>>
>>> I think we would also need to check should_defer_flush(). Looking at
>>> try_to_unmap_one() there is this comment:
>>>
>>> 	if (should_defer_flush(mm, flags) && !anon_exclusive) {
>>> 		/*
>>> 		 * We clear the PTE but do not flush so potentially
>>> 		 * a remote CPU could still be writing to the folio.
>>> 		 * If the entry was previously clean then the
>>> 		 * architecture must guarantee that a clear->dirty
>>> 		 * transition on a cached TLB entry is written through
>>> 		 * and traps if the PTE is unmapped.
>>> 		 */
>>>
>>> And as I understand it we'd need the same guarantee here. Given
>>> try_to_migrate_one() doesn't do batched TLB flushes either, I'd rather
>>> keep the code as consistent as possible between
>>> migrate_vma_collect_pmd() and try_to_migrate_one(). I could look at
>>> introducing TLB flushing for both in some future patch series.
>>
>> should_defer_flush() is TTU-specific code?
>
> I'm not sure, but I think we need the same guarantee here as mentioned
> in the comment; otherwise we wouldn't see a subsequent CPU write that
> could dirty the PTE after we have cleared it but before the TLB flush.
>
> My assumption was that should_defer_flush() would ensure we have that
> guarantee from the architecture, but maybe there are alternate/better
> ways of enforcing that?
>
>> IIUC the caller sets TTU_BATCH_FLUSH showing that the tlb flush can be
>> omitted since the caller will be responsible for doing it. In
>> migrate_vma_collect_pmd() iiuc we don't need that hint because it'll be
>> flushed within the same function, just only after the loop of modifying
>> the ptes. Also it'll be with the pgtable lock held.
>
> Right, but the pgtable lock doesn't protect against HW PTE changes such
> as setting the dirty bit, so we need to ensure the HW does the right
> thing here, and I don't know if all HW does.

This sounds sensible. But I took a look at zap_pte_range(), and it
appears that that implementation also requires the PTE dirty bit to be
written through. Am I missing something?

Hi, Nadav, can you help?

Best Regards,
Huang, Ying

[snip]
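
P.S. For reference, below is the zap_pte_range() flow I am looking at,
as a condensed sketch (simplified from mm/memory.c, not the exact
upstream code). The dirty bit is sampled from the PTE value returned by
ptep_get_and_clear_full(), while the TLB flush itself is batched via the
mmu_gather and may happen later, so this path appears to depend on the
same write-through guarantee:

	/* Condensed sketch of zap_pte_range(); simplified, not exact. */
	ptent = ptep_get_and_clear_full(mm, addr, pte, tlb->fullmm);
	tlb_remove_tlb_entry(tlb, pte, addr);	/* flush is deferred/batched */
	if (!PageAnon(page)) {
		if (pte_dirty(ptent)) {
			/*
			 * The dirty bit is read from the cleared PTE value
			 * before any TLB flush: a later HW clear->dirty
			 * transition on a stale TLB entry would be lost
			 * unless it is written through to the PTE.
			 */
			force_flush = 1;
			set_page_dirty(page);
		}
	}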
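
P.P.S. To make the two options discussed above concrete, here is a
minimal sketch of the per-PTE flush used by the v2 patch versus the
batched variant suggested in this thread (the batched version is
hypothetical, not posted code):

	/*
	 * (a) v2 patch: flush each PTE under the PTL; the final
	 * flush_tlb_range() and the "unmapped" tracking become redundant.
	 */
	flush_cache_page(vma, addr, pte_pfn(*ptep));
	pte = ptep_clear_flush(vma, addr, ptep);	/* pte keeps HW dirty bit */

	/*
	 * (b) hypothetical batched variant: clear without flushing inside
	 * the loop, then issue one flush before dropping the PTL.  This
	 * relies on the architecture writing a clear->dirty transition
	 * through to the in-memory PTE, as discussed above.
	 */
	pte = ptep_get_and_clear(mm, addr, ptep);
	/* ... handle the remaining PTEs in the loop ... */
	flush_tlb_range(walk->vma, start, end);
	pte_unmap_unlock(ptep - 1, ptl);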