Date: Mon, 29 Apr 2024 13:38:33 +0100
From: Catalin Marinas
To: Ryan Roberts
Cc: David Hildenbrand, Will Deacon, Joey Gouly, Ard Biesheuvel,
    Mark Rutland, Anshuman Khandual, Peter Xu, Mike Rapoport,
    Shivansh Vij, linux-arm-kernel@lists.infradead.org,
    linux-kernel@vger.kernel.org
Subject: Re: [PATCH v1 1/2] arm64/mm: Move PTE_PROT_NONE and PMD_PRESENT_INVALID
References: <20240424111017.3160195-1-ryan.roberts@arm.com>
            <20240424111017.3160195-2-ryan.roberts@arm.com>

On Mon, Apr 29, 2024 at 11:04:53AM +0100, Ryan Roberts wrote:
> On 26/04/2024 15:48, Catalin Marinas wrote:
> > On Thu, Apr 25, 2024 at 11:37:42AM +0100, Ryan Roberts wrote:
> >> Also, IMHO we shouldn't really need to reserve PMD_PRESENT_INVALID
> >> for swap ptes; it would be cleaner to have one bit that defines
> >> "present" when valid is clear (similar to PTE_PROT_NONE today), then
> >> another bit which is only defined when "present && !valid" which
> >> tells us if this is PTE_PROT_NONE or PMD_PRESENT_INVALID (I don't
> >> think you can ever have both at the same time?).
> >
> > I think this makes sense; maybe rename the above to
> > PTE_PRESENT_INVALID and use it for both ptes and pmds.
>
> Yep, sounds good. I've already got a patch to do this, but it's exposed
> a bug in core-mm so will now fix that before I can validate my change.
> See https://lore.kernel.org/linux-arm-kernel/ZiuyGXt0XWwRgFh9@x1n/
>
> With this in place, I'm proposing to remove PTE_PROT_NONE entirely and
> instead represent PROT_NONE as a present but invalid pte (PTE_VALID=0,
> PTE_INVALID=1) with both PTE_WRITE=0 and PTE_RDONLY=0.
>
> While the HW would interpret PTE_WRITE=0/PTE_RDONLY=0 as "RW without
> dirty bit modification", this is not a problem as the pte is invalid,
> so the HW doesn't interpret it. And SW always uses the PTE_WRITE bit to
> interpret the writability of the pte. So PTE_WRITE=0/PTE_RDONLY=0 was
> previously an unused combination that we now repurpose for PROT_NONE.

Why not just keep the bits currently in PAGE_NONE (PTE_RDONLY would be
set) and check PTE_USER|PTE_UXN == 0b01, which is a unique combination
for PAGE_NONE (bar the kernel mappings)?

For ptes, it doesn't matter: we can assume that PTE_PRESENT_INVALID
means pte_protnone(). For pmds, however, we can end up with
pmd_protnone(pmd_mkinvalid(pmd)) == true for any of the PAGE_*
permissions encoded into a valid pmd. That's where a dedicated
PTE_PROT_NONE bit helped.

Let's say a CPU starts splitting a pmd and does a pmdp_invalidate*()
first to set PTE_PRESENT_INVALID. A different CPU gets a fault and,
since the pmd is present, it goes and checks pmd_protnone(), which
returns true, ending up on the do_huge_pmd_numa_page() path. Maybe some
locks help but it looks fragile to rely on them. So I think for protnone
we need to check some other bits (like USER and UXN) in addition to
PTE_PRESENT_INVALID.

> This will subtly change behaviour in an edge case though. Imagine:
>
> 	pte_t pte;
>
> 	pte = pte_modify(pte, PAGE_NONE);
> 	pte = pte_mkwrite_novma(pte);
> 	WARN_ON(pte_protnone(pte));
>
> Should that warning fire or not? Previously, because we had a dedicated
> bit for PTE_PROT_NONE, it would fire. With my proposed change it will
> not fire. To me it's more intuitive if it doesn't fire. Regardless,
> there is no core code that ever does this. Once you have a protnone
> pte, it's terminal - nothing ever modifies it with these helpers AFAICS.

I don't think any core code should try to make a PAGE_NONE pte
writeable.

> Personally I think this is a nice tidy up that saves a SW bit in both
> present and swap ptes. What do you think? (I'll just post the series if
> it's easier to provide feedback in that context).

It would be nice to tidy this up and get rid of PTE_PROT_NONE as long as
it doesn't affect the pmd case I mentioned above.

> >> But there is a problem with this: __split_huge_pmd_locked() calls
> >> pmdp_invalidate() for a pmd before it determines that it is
> >> pmd_present(). So the PMD_PRESENT_INVALID can be set in a swap pte
> >> today. That feels wrong to me, but I was trying to avoid the whole
> >> thing unravelling so didn't pursue it.
> >
> > Maybe what's wrong is the arm64 implementation setting this bit on a
> > swap/migration pmd (though we could handle this in the core code as
> > well, it depends what the other architectures do). The only check for
> > the PMD_PRESENT_INVALID bit is in the arm64 code and it can be
> > absorbed into the pmd_present() check. I think it is currently broken
> > as pmd_present() can return true for a swap pmd after pmd_mkinvalid().
>
> I've posted a fix here:
> https://lore.kernel.org/linux-mm/20240425170704.3379492-1-ryan.roberts@arm.com/
>
> My position is that you shouldn't be calling pmd_mkinvalid() on a
> non-present pmd.

I agree, thanks.

--
Catalin
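[Editor's note] The bit-level reasoning in the thread above can be sketched as a standalone model. Everything below is illustrative only: the bit positions loosely follow arm64 conventions but are assumptions, `PTE_PRESENT_INVALID` is the software bit proposed in the thread (its position is made up here), and the helpers are simplified models rather than the kernel's implementations.

```c
/*
 * Toy model of the encodings discussed in the thread. All bit
 * positions and helper names are illustrative, not the kernel's.
 */
#include <assert.h>
#include <stdint.h>

#define PTE_VALID            (1ULL << 0)
#define PTE_USER             (1ULL << 6)   /* AP[1] */
#define PTE_RDONLY           (1ULL << 7)   /* AP[2] */
#define PTE_WRITE            (1ULL << 51)  /* DBM */
#define PTE_UXN              (1ULL << 54)
#define PTE_PRESENT_INVALID  (1ULL << 59)  /* proposed SW bit; position assumed */

typedef uint64_t pte_t;

/* Present = valid, or invalidated-but-still-present. */
static int pte_present(pte_t pte)
{
	return !!(pte & (PTE_VALID | PTE_PRESENT_INVALID));
}

/*
 * protnone per Catalin's suggestion: "present but invalid" alone is
 * not enough, because a pmd invalidated mid-split would match too.
 * Additionally require (USER, UXN) == (0, 1), which among user
 * mappings is unique to PAGE_NONE.
 */
static int pte_protnone(pte_t pte)
{
	if ((pte & (PTE_VALID | PTE_PRESENT_INVALID)) != PTE_PRESENT_INVALID)
		return 0;
	return !(pte & PTE_USER) && !!(pte & PTE_UXN);
}

/* Unconditional invalidation, modelling what pmdp_invalidate() does. */
static pte_t pte_mkinvalid(pte_t pte)
{
	return (pte | PTE_PRESENT_INVALID) & ~PTE_VALID;
}

/*
 * Applied to a swap entry (PTE_VALID clear), pte_mkinvalid() above
 * makes pte_present() wrongly return true - the breakage Catalin
 * describes. A defensive variant honouring Ryan's rule ("never
 * invalidate a non-present entry") would bail out first:
 */
static pte_t pte_mkinvalid_safe(pte_t pte)
{
	if (!(pte & PTE_VALID))
		return pte;	/* leave swap/migration entries alone */
	return (pte | PTE_PRESENT_INVALID) & ~PTE_VALID;
}
```

With this model, a PAGE_NONE-style entry (`PTE_PRESENT_INVALID | PTE_RDONLY | PTE_UXN`) is protnone, while a normal RW user entry run through `pte_mkinvalid()` is present but not protnone (its USER bit is still set), which is exactly the distinction the pmd-split race requires.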