From: Aneesh Kumar K.V
To: David Hildenbrand, Ryan Roberts, Christophe Leroy, linux-kernel@vger.kernel.org
"linux-mm@kvack.org" , Andrew Morton , Matthew Wilcox , Russell King , Catalin Marinas , Will Deacon , Dinh Nguyen , Michael Ellerman , Nicholas Piggin , "Naveen N. Rao" , Paul Walmsley , Palmer Dabbelt , Albert Ou , Alexander Gordeev , Gerald Schaefer , Heiko Carstens , Vasily Gorbik , Christian Borntraeger , Sven Schnelle , "David S. Miller" , "linux-arm-kernel@lists.infradead.org" , "linuxppc-dev@lists.ozlabs.org" , "linux-riscv@lists.infradead.org" , "linux-s390@vger.kernel.org" , "sparclinux@vger.kernel.org" Subject: Re: [PATCH v1 01/11] arm/pgtable: define PFN_PTE_SHIFT on arm and arm64 In-Reply-To: <794c1dc3-520c-4030-b0fe-e24782576347@redhat.com> References: <20240122194200.381241-1-david@redhat.com> <20240122194200.381241-2-david@redhat.com> <46080ac1-7789-499b-b7f3-0231d7bd6de7@redhat.com> <6703b648-10ab-4fea-b7f1-75421319465b@arm.com> <3a970289-a72f-418e-b43c-89f67f0d5283@redhat.com> <794c1dc3-520c-4030-b0fe-e24782576347@redhat.com> Date: Wed, 24 Jan 2024 11:15:03 +0530 Message-ID: <8734unqo80.fsf@kernel.org> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Type: text/plain; charset=utf-8 Content-Transfer-Encoding: quoted-printable David Hildenbrand writes: > On 23.01.24 12:38, Ryan Roberts wrote: >> On 23/01/2024 11:31, David Hildenbrand wrote: >>>>> >>>>>> If high bits are used for >>>>>> something else, then we might produce a garbage PTE on overflow, but= that >>>>>> shouldn't really matter I concluded for folio_pte_batch() purposes, = we'd not >>>>>> detect "belongs to this folio batch" either way. >>>>> >>>>> Exactly. >>>>> >>>>>> >>>>>> Maybe it's likely cleaner to also have a custom pte_next_pfn() on pp= c, I just >>>>>> hope that we don't lose any other arbitrary PTE bits by doing the pt= e_pgprot(). >>>>> >>>>> I don't see the need for ppc to implement pte_next_pfn(). >>>> >>>> Agreed. >>> >>> So likely we should then do on top for powerpc (whitespace damage): >>> >>> diff --git a/arch/powerpc/mm/pgtable.c b/arch/powerpc/mm/pgtable.c >>> index a04ae4449a025..549a440ed7f65 100644 >>> --- a/arch/powerpc/mm/pgtable.c >>> +++ b/arch/powerpc/mm/pgtable.c >>> @@ -220,10 +220,7 @@ void set_ptes(struct mm_struct *mm, unsigned long = addr, >>> pte_t *ptep, >>> =C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2= =A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0 break; >>> =C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2= =A0=C2=A0=C2=A0=C2=A0 ptep++; >>> =C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2= =A0=C2=A0=C2=A0=C2=A0 addr +=3D PAGE_SIZE; >>> -=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2= =A0=C2=A0=C2=A0 /* >>> -=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2= =A0=C2=A0=C2=A0=C2=A0 * increment the pfn. >>> -=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2= =A0=C2=A0=C2=A0=C2=A0 */ >>> -=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2= =A0=C2=A0=C2=A0 pte =3D pfn_pte(pte_pfn(pte) + 1, pte_pgprot((pte))); >>> +=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2= =A0=C2=A0=C2=A0 pte =3D pte_next_pfn(pte); >>> =C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0 } >>> =C2=A0} >>=20 >> Looks like commit 47b8def9358c ("powerpc/mm: Avoid calling >> arch_enter/leave_lazy_mmu() in set_ptes") changed from doing the simple >> increment to this more complex approach, but the log doesn't say why. 
>
> @Aneesh, was that change on purpose?
>

Because we had a bug with the patch that introduced the change, and that line was confusing. The right thing would have been to add a pte_next_pfn() helper to make it clear. It was confusing because not all pte formats have the pfn at the PAGE_SHIFT offset (even though we did use the correct PTE_RPN_SHIFT in this specific case). To keep it simple I ended up switching that line to pte_pfn(pte) + 1.

-aneesh
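
For anyone following along, here is a rough stand-alone sketch of why the two increments behave differently on a layout where the pfn does not sit at PAGE_SHIFT. The shifts, masks and helper bodies below are made-up illustrations, not the real powerpc or arm64 definitions, and the "naive" increment is only a stand-in for a PAGE_SHIFT-based generic fallback, not the helper this series actually adds:

/*
 * Stand-alone model of the two "advance the PTE to the next page"
 * styles discussed above.  All shifts/masks are illustrative only.
 */
#include <stdio.h>
#include <stdint.h>

typedef uint64_t pte_t;

#define PAGE_SHIFT     12
/* Pretend this arch keeps the PFN at bit 16 instead of bit 12. */
#define PTE_RPN_SHIFT  16
#define PTE_RPN_MASK   (~((1ULL << PTE_RPN_SHIFT) - 1))

static pte_t pfn_pte(uint64_t pfn, uint64_t pgprot)
{
	return (pfn << PTE_RPN_SHIFT) | pgprot;
}

static uint64_t pte_pfn(pte_t pte)
{
	return pte >> PTE_RPN_SHIFT;
}

static uint64_t pte_pgprot(pte_t pte)
{
	return pte & ~PTE_RPN_MASK;
}

/* What powerpc's set_ptes() does today: rebuild the PTE from pfn + prot. */
static pte_t next_via_pfn_pte(pte_t pte)
{
	return pfn_pte(pte_pfn(pte) + 1, pte_pgprot(pte));
}

/* A naive increment that wrongly assumes the PFN sits at PAGE_SHIFT. */
static pte_t next_via_page_shift(pte_t pte)
{
	return pte + (1ULL << PAGE_SHIFT);
}

int main(void)
{
	pte_t pte = pfn_pte(0x1234, 0x3);	/* pfn 0x1234, some prot bits */

	printf("pfn_pte-based increment:    pfn 0x%llx -> 0x%llx\n",
	       (unsigned long long)pte_pfn(pte),
	       (unsigned long long)pte_pfn(next_via_pfn_pte(pte)));
	printf("PAGE_SHIFT-based increment: pfn 0x%llx -> 0x%llx (wrong here)\n",
	       (unsigned long long)pte_pfn(pte),
	       (unsigned long long)pte_pfn(next_via_page_shift(pte)));
	return 0;
}

Rebuilding the PTE via pfn_pte()/pte_pgprot(), or using a pte_next_pfn() that knows where the pfn really lives, keeps the increment correct regardless of the layout, which is presumably why this series has each architecture define PFN_PTE_SHIFT (as in the $subject patch) rather than hard-coding PAGE_SHIFT.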