Subject: Re: [PATCH V2] arm64/mm: Intercept pfn changes in set_pte_at()
From: Muchun Song
Date: Fri, 3 Feb 2023 10:40:18 +0800
To: Catalin Marinas
Cc: Will Deacon, Robin Murphy, Anshuman Khandual, linux-arm-kernel@lists.infradead.org, Mark Rutland, Andrew Morton, linux-kernel@vger.kernel.org, Mark Brown
References: <20230109052816.405335-1-anshuman.khandual@arm.com> <20230126133321.GB29148@willie-the-truck> <20230131154950.GB2646@willie-the-truck>

> On Feb 2, 2023, at 18:45, Catalin Marinas wrote:
>
> On Thu, Feb 02, 2023 at 05:51:39PM +0800, Muchun Song wrote:
>>> On Feb 1, 2023, at 20:20, Catalin Marinas wrote:
>>>> Bah, sorry! Catalin reckons it may have been him talking about the vmemmap.
>>>
>>> Indeed. The discussion with Anshuman started from this thread:
>>>
>>> https://lore.kernel.org/all/20221025014215.3466904-1-mawupeng1@huawei.com/
>>>
>>> We already trip over the existing checks even without Anshuman's patch,
>>> though only by chance. We are not setting the software PTE_DIRTY on the
>>> new pte (we don't bother with this bit for kernel mappings).
>>>
>>> Given that the vmemmap ptes are still live when such change happens and
>>> no-one came with a solution to the break-before-make problem, I propose
>>> we revert the arm64 part of commit 47010c040dec ("mm: hugetlb_vmemmap:
>>> cleanup CONFIG_HUGETLB_PAGE_FREE_VMEMMAP*").
>>> We just need this hunk:
>>>
>>> diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
>>> index 27b2592698b0..5263454a5794 100644
>>> --- a/arch/arm64/Kconfig
>>> +++ b/arch/arm64/Kconfig
>>> @@ -100,7 +100,6 @@ config ARM64
>>>  	select ARCH_WANT_DEFAULT_TOPDOWN_MMAP_LAYOUT
>>>  	select ARCH_WANT_FRAME_POINTERS
>>>  	select ARCH_WANT_HUGE_PMD_SHARE if ARM64_4K_PAGES || (ARM64_16K_PAGES && !ARM64_VA_BITS_36)
>>> -	select ARCH_WANT_HUGETLB_PAGE_OPTIMIZE_VMEMMAP
>>
>> Maybe it is a little overkill for HVO, as it can significantly minimize the
>> overhead of vmemmap on ARM64 servers for some workloads (like qemu, DPDK).
>> So I don't think disabling it is a good approach. Indeed, HVO broke BBM,
>> but the warning does not affect anything, since the tail vmemmap pages are
>> supposed to be read-only. So I suggest skipping the warning when
>> set_pte_at() is given a vmemmap address. What do you think?
>
> IIUC, vmemmap_remap_pte() not only makes the pte read-only but also
> changes the output address. Architecturally, this needs a BBM sequence.
> We can avoid going through an invalid pte if we first make the pte
> read-only, TLBI but keeping the same pfn, followed by a change of the
> pfn while keeping the pte read-only. This also assumes that the content
> of the page pointed at by the pte is the same at both old and new pfn.

Right. I think the point of BBM is to avoid creating multiple TLB
entries for the same address, even for an extremely short period. But
accessing either the old page or the new page is fine in this case. Is
it acceptable to skip BBM for this special case?

Thanks,

Muchun.
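For reference, the read-only-first sequence Catalin describes could be sketched roughly as below, in the context of vmemmap_remap_pte() (mm/hugetlb_vmemmap.c). This is an untested pseudocode sketch, not a patch: the function name is hypothetical, and the exact helpers, locking, and barriers in the real remap path may differ.

```c
/*
 * Sketch (hypothetical, untested) of changing a live vmemmap pte's
 * output address without transitioning through an invalid pte:
 * permission change first, TLBI, then pfn change at the same
 * (read-only) permissions. Assumes the old and new pages hold
 * identical contents, as Catalin notes above.
 */
static void vmemmap_remap_pte_sketch(pte_t *ptep, unsigned long addr,
				     struct page *reuse_page)
{
	/*
	 * Step 1: make the live pte read-only while keeping the same
	 * pfn. A valid->valid permission-only change does not require
	 * break-before-make.
	 */
	set_pte_at(&init_mm, addr, ptep, pte_wrprotect(ptep_get(ptep)));

	/* Step 2: invalidate the stale TLB entry for this address. */
	flush_tlb_kernel_range(addr, addr + PAGE_SIZE);

	/*
	 * Step 3: switch the pfn to the reused page while keeping the
	 * pte read-only. Since both pages have identical contents and
	 * neither is writable, a walker hitting either TLB entry
	 * observes the same data.
	 */
	set_pte_at(&init_mm, addr, ptep,
		   pte_wrprotect(mk_pte(reuse_page, PAGE_KERNEL)));
}
```

Whether a transient window with two (read-only, same-content) TLB entries for one address is architecturally tolerable is exactly the question Muchun poses above.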