Subject: Re: [PATCH] arm64: mm: drop tlb flush operation when clearing the access bit
From: Anshuman Khandual
Date: Thu, 26 Oct 2023 11:31:59 +0530
To: Barry Song <21cnbao@gmail.com>
Cc: Baolin Wang, catalin.marinas@arm.com, will@kernel.org,
    akpm@linux-foundation.org, v-songbaohua@oppo.com, yuzhao@google.com,
    linux-mm@kvack.org, linux-arm-kernel@lists.infradead.org,
    linux-kernel@vger.kernel.org
References: <2f55f62b-cae2-4eee-8572-1b662a170880@arm.com>

On 10/26/23 11:24, Barry Song wrote:
> On Thu, Oct 26, 2023 at 12:55 PM Anshuman Khandual wrote:
>>
>> On 10/24/23 18:26, Baolin Wang wrote:
>>> ptep_clear_flush_young() is now only called by folio_referenced() to
>>> check whether the folio was referenced, and it currently performs a
>>> TLB flush on the ARM64 architecture. However, the TLB flush can be
>>> expensive on ARM64 servers, especially on systems with a large
>>> number of CPUs.
>>
>> A TLB flush would be expensive on *any* platform with a large number
>> of CPUs?
>>
>>> Similar to the x86 architecture, the comments below apply equally to
>>> the ARM64 architecture, so we can drop the TLB flush operation in
>>> ptep_clear_flush_young() on ARM64 to improve performance.
>>> "
>>> /* Clearing the accessed bit without a TLB flush
>>>  * doesn't cause data corruption. [ It could cause incorrect
>>>  * page aging and the (mistaken) reclaim of hot pages, but the
>>>  * chance of that should be relatively low. ]
>>>  *
>>>  * So as a performance optimization don't flush the TLB when
>>>  * clearing the accessed bit, it will eventually be flushed by
>>>  * a context switch or a VM operation anyway. [ In the rare
>>>  * event of it not getting flushed for a long time the delay
>>>  * shouldn't really matter because there's no real memory
>>>  * pressure for swapout to react to. ]
>>>  */
>>
>> If this is always true, it sounds generic enough for all platforms, so
>> why only x86 and arm64?
>>
>>> "
>>> Running thpscale shows some obvious improvements in compaction
>>> latency with this patch:
>>>
>>>                                   base                patched
>>> Amean fault-both-1     1093.19 (  0.00%)    1084.57 *  0.79%*
>>> Amean fault-both-3     2566.22 (  0.00%)    2228.45 * 13.16%*
>>> Amean fault-both-5     3591.22 (  0.00%)    3146.73 * 12.38%*
>>> Amean fault-both-7     4157.26 (  0.00%)    4113.67 *  1.05%*
>>> Amean fault-both-12    6184.79 (  0.00%)    5218.70 * 15.62%*
>>> Amean fault-both-18    9103.70 (  0.00%)    7739.71 * 14.98%*
>>> Amean fault-both-24   12341.73 (  0.00%)   10684.23 * 13.43%*
>>> Amean fault-both-30   15519.00 (  0.00%)   13695.14 * 11.75%*
>>> Amean fault-both-32   16189.15 (  0.00%)   14365.73 * 11.26%*
>>>
>>>                         base     patched
>>> Duration User         167.78      161.03
>>> Duration System      1836.66     1673.01
>>> Duration Elapsed     2074.58     2059.75
>>
>> Could you please point to the test repo you are running?
>>
>>> Barry Song submitted a similar patch [1] before, which replaces
>>> ptep_clear_flush_young_notify() with ptep_clear_young_notify() in
>>> folio_referenced_one(). However, I'm not sure whether removing the
>>> TLB flush operation is applicable to every architecture in the
>>> kernel, so dropping the TLB flush only for ARM64 seems a sensible
>>> change.
>>
>> The reasoning provided here sounds generic when true, hence there
>> seems to be no justification to keep it limited just to arm64 and x86.
>> Also, what about pmdp_clear_flush_young_notify() when THP is enabled?
>> Should that also skip the TLB flush after clearing the access bit?
>> Although arm64 does not enable __HAVE_ARCH_PMDP_CLEAR_YOUNG_FLUSH, it
>> instead depends on the generic pmdp_clear_flush_young(), which also
>> does a TLB flush via flush_pmd_tlb_range() while clearing the access
>> bit.
>>
>>> Note: I am okay with either approach. If someone can help ensure
>>> that all architectures do not need the TLB flush when clearing the
>>> accessed bit, then I also think Barry's patch is better (I hope
>>> Barry can resend his patch).
>>
>> This paragraph belongs after the '---' below and is not part of the
>> commit message.
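[ For reference: the generic pmdp_clear_flush_young() fallback mentioned
above lives in mm/pgtable-generic.c. The sketch below is paraphrased from
memory of the kernel sources of roughly this vintage, not quoted verbatim;
it shows that the PMD path, unlike the PTE path this patch changes, still
flushes the TLB after clearing the access bit. ]

/*
 * Sketch of the generic fallback used when an architecture does not
 * define __HAVE_ARCH_PMDP_CLEAR_YOUNG_FLUSH (arm64 does not): clear
 * the access bit atomically, then flush the TLB for the huge page
 * range if the entry was young.
 */
int pmdp_clear_flush_young(struct vm_area_struct *vma,
			   unsigned long address, pmd_t *pmdp)
{
	int young;

	VM_BUG_ON(address & ~HPAGE_PMD_MASK);

	young = pmdp_test_and_clear_young(vma, address, pmdp);
	if (young)
		flush_pmd_tlb_range(vma, address, address + HPAGE_PMD_SIZE);

	return young;
}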
>>> [1] https://lore.kernel.org/lkml/20220617070555.344368-1-21cnbao@gmail.com/
>>>
>>> Signed-off-by: Baolin Wang
>>> ---
>>>  arch/arm64/include/asm/pgtable.h | 31 ++++++++++++++++---------------
>>>  1 file changed, 16 insertions(+), 15 deletions(-)
>>>
>>> diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
>>> index 0bd18de9fd97..2979d796ba9d 100644
>>> --- a/arch/arm64/include/asm/pgtable.h
>>> +++ b/arch/arm64/include/asm/pgtable.h
>>> @@ -905,21 +905,22 @@ static inline int ptep_test_and_clear_young(struct vm_area_struct *vma,
>>>  static inline int ptep_clear_flush_young(struct vm_area_struct *vma,
>>>  					 unsigned long address, pte_t *ptep)
>>>  {
>>> -	int young = ptep_test_and_clear_young(vma, address, ptep);
>>> -
>>> -	if (young) {
>>> -		/*
>>> -		 * We can elide the trailing DSB here since the worst that can
>>> -		 * happen is that a CPU continues to use the young entry in its
>>> -		 * TLB and we mistakenly reclaim the associated page. The
>>> -		 * window for such an event is bounded by the next
>>> -		 * context-switch, which provides a DSB to complete the TLB
>>> -		 * invalidation.
>>> -		 */
>>> -		flush_tlb_page_nosync(vma, address);
>>> -	}
>>> -
>>> -	return young;
>>> +	/*
>>> +	 * This comment is borrowed from x86, but applies equally to ARM64:
>>> +	 *
>>> +	 * Clearing the accessed bit without a TLB flush doesn't cause
>>> +	 * data corruption. [ It could cause incorrect page aging and
>>> +	 * the (mistaken) reclaim of hot pages, but the chance of that
>>> +	 * should be relatively low. ]
>>> +	 *
>>> +	 * So as a performance optimization don't flush the TLB when
>>> +	 * clearing the accessed bit, it will eventually be flushed by
>>> +	 * a context switch or a VM operation anyway. [ In the rare
>>> +	 * event of it not getting flushed for a long time the delay
>>> +	 * shouldn't really matter because there's no real memory
>>> +	 * pressure for swapout to react to. ]
>>> +	 */
>>> +	return ptep_test_and_clear_young(vma, address, ptep);
>>>  }
>>>
>>>  #ifdef CONFIG_TRANSPARENT_HUGEPAGE
>>
>> There are three distinct concerns here:
>>
>> 1) What are the chances of this misleading the existing hot page
>>    reclaim process?
>> 2) How do secondary MMUs such as the SMMU adapt to a change in
>>    mappings without a flush?
>> 3) Could this break the architecture rule requiring a TLB flush after
>>    clearing the access bit on a page table entry?
>
> In terms of all of the above concerns, though 2 is different, being an
> issue between the CPU and non-CPU agents, I feel the kernel has already
> dropped the TLB flush, at least for MGLRU; there is no flush in
> lru_gen_look_around():
>
> static bool folio_referenced_one(struct folio *folio,
> 		struct vm_area_struct *vma, unsigned long address, void *arg)
> {
> ...
>
> 	if (pvmw.pte) {
> 		if (lru_gen_enabled() &&
> 		    pte_young(ptep_get(pvmw.pte))) {
> 			lru_gen_look_around(&pvmw);
> 			referenced++;
> 		}
>
> 		if (ptep_clear_flush_young_notify(vma, address,
> 						  pvmw.pte))
> 			referenced++;
> 	}
>
> 	return true;
> }
>
> and the same is true of walk_pte_range() in vmscan. Linux has been
> surviving with all of the above concerns for a while, believe it or
> not :-)

Although the first two concerns could be worked around in software, the
kernel surviving after explicitly breaking architecture rules is not a
correct state to be in, IMHO.
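[ For context: the x86 ptep_clear_flush_young() that the quoted comment
originates from already elides the flush entirely. The sketch below is
paraphrased from arch/x86/mm/pgtable.c, not quoted verbatim. The patch
above brings the arm64 PTE path in line with this x86 behaviour; the open
question in the thread is whether the architecture rules permit it. ]

/*
 * Sketch of x86's ptep_clear_flush_young(): the accessed bit is
 * cleared atomically and no TLB flush is issued. The worst case is
 * that a CPU keeps using a stale "young" TLB entry, so a hot page
 * may be aged (and possibly reclaimed) by mistake; the entry is
 * eventually flushed by a context switch or a VM operation anyway.
 */
int ptep_clear_flush_young(struct vm_area_struct *vma,
			   unsigned long address, pte_t *ptep)
{
	return ptep_test_and_clear_young(vma, address, ptep);
}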