Message-ID: <830f83d3-6f88-42e3-929e-c87597441a1c@arm.com>
Date: Mon, 29 Apr 2024 16:35:44 +0100
X-Mailing-List: linux-kernel@vger.kernel.org
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH v1] mm: Fix race between __split_huge_pmd_locked() and GUP-fast
Content-Language: en-GB
To: Zi Yan
Cc: John Hubbard, Andrew Morton, Zi Yan, "Aneesh Kumar K.V",
 linux-mm@kvack.org, linux-kernel@vger.kernel.org
References: <20240425170704.3379492-1-ryan.roberts@arm.com>
 <17956e0f-1101-42d7-9cba-87e196312484@nvidia.com>
 <2C9D897B-0E96-42F0-B3C4-9926E6DF5F4B@nvidia.com>
 <328b2b69-e4ab-4a00-9137-1f7c3dc2d195@nvidia.com>
 <23fbca83-a175-4d5a-8cf7-ed54c1830d37@arm.com>
 <64037B35-3E80-4367-BA0B-334356373A78@nvidia.com>
From: Ryan Roberts
In-Reply-To:
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

On 29/04/2024 16:29, Zi Yan wrote:
> On 29 Apr 2024, at 10:45, Zi Yan wrote:
>
>> On 29 Apr 2024, at 5:29, Ryan Roberts wrote:
>>
>>> On 27/04/2024 20:11, John Hubbard wrote:
>>>> On 4/27/24 8:14 AM, Zi Yan wrote:
>>>>> On 27 Apr 2024, at 0:41, John Hubbard wrote:
>>>>>> On 4/25/24 10:07 AM, Ryan Roberts wrote:
>>>>>>> __split_huge_pmd_locked() can be called for a present THP, devmap or
>>>>>>> (non-present) migration entry. It calls pmdp_invalidate()
>>>>>>> unconditionally on the pmdp and only determines if it is present or
>>>>>>> not based on the returned old pmd. This is a problem for the
>>>>>>> migration entry case because pmd_mkinvalid(), called by
>>>>>>> pmdp_invalidate(), must only be called for a present pmd.
>>>>>>>
>>>>>>> On arm64 at least, pmd_mkinvalid() will mark the pmd such that any
>>>>>>> future call to pmd_present() will return true. And therefore any
>>>>>>> lockless pgtable walker could see the migration entry pmd in this
>>>>>>> state and start interpreting the fields as if it were present,
>>>>>>> leading to BadThings (TM). GUP-fast appears to be one such lockless
>>>>>>> pgtable walker. I suspect the same is possible on other
>>>>>>> architectures.
>>>>>>>
>>>>>>> Fix this by only calling pmdp_invalidate() for a present pmd. And for
>>>>>>
>>>>>> Yes, this seems like a good design decision (after reading through the
>>>>>> discussion that you all had in the other threads).
>>>>>
>>>>> This will only be good for arm64 and does not prevent other arch
>>>>> developers from writing code that breaks arm64, since only arm64's
>>>>> pmd_mkinvalid() can turn a swap entry into a pmd_present() entry.
>>>>
>>>> Well, let's characterize it in a bit more detail, then:
>>>
>>> Hi All,
>>>
>>> Thanks for all the feedback! I had thought that this patch would be
>>> entirely uncontroversial - obviously I was wrong :)
>>>
>>> I've read all the emails, and am trying to summarize a way forward here...
>>>
>>>>
>>>> 1) This patch will make things better for arm64. That's important!
>>>>
>>>> 2) Equally important, this patch does not make anything worse for
>>>>    other CPU arches.
>>>>
>>>> 3) This patch represents a new design constraint on the CPU arch
>>>>    layer, and thus requires documentation and whatever enforcement
>>>>    we can provide, in order to keep future code out of trouble.
>>>
>>> I know it's only semantics, but I don't view this as a new design
>>> constraint. I see it as an existing constraint that was previously being
>>> violated, and this patch aims to fix that. The generic version of
>>> pmdp_invalidate() unconditionally does a TLB invalidation on the address
>>> range covered by the pmd. That makes no sense unless the pmd was
>>> previously present. So my conclusion is that the function only expects
>>> to be called for present pmds.
>>>
>>> Additionally, Documentation/mm/arch_pgtable_helpers.rst already says this:
>>>
>>> "
>>> | pmd_mkinvalid | Invalidates a mapped PMD [1] |
>>> "
>>>
>>> I read "mapped" to be a synonym for "present". So I think it's already
>>> documented. Happy to explicitly change "mapped" to "present" though, if
>>> it helps?
>>>
>>> Finally, [1], which is linked from Documentation/mm/arch_pgtable_helpers.rst,
>>> also implies this constraint, although it doesn't explicitly say it.
>>>
>>> [1] https://lore.kernel.org/linux-mm/20181017020930.GN30832@redhat.com/
>>>
>>>>
>>>> 3.a) See the VM_WARN_ON() hunks below.
>>>
>>> It sounds like everybody would be happy if I sprinkle these into the
>>> arches that override pmdp_invalidate[_ad]()? There are 3 arches that
>>> have their own version of pmdp_invalidate(): powerpc, s390 and sparc.
>>> And 1 that has its own version of pmdp_invalidate_ad(): x86. I'll add
>>> them in all of those.
>>>
>>> I'll use VM_WARN_ON_ONCE() as suggested by John.
>>>
>>> I'd rather not put it directly into pmd_mkinvalid() since that would set
>>> a precedent for adding them absolutely everywhere (e.g. pte_mkdirty(), ...).
>>
>> I understand your concern here. I assume you also understand the
>> potential issue with this, namely that it does not prevent one from using
>> pmd_mkinvalid() improperly and causing a bug, and the bug might only
>> appear on arm64.
>>
>>>
>>>>
>>>> 3.b) I like the new design constraint, because it is reasonable and
>>>>      clearly understandable: don't invalidate a non-present page
>>>>      table entry.
>>>>
>>>> I do wonder if there is somewhere else that this should be documented?
>>>
>>> If I change:
>>>
>>> "
>>> | pmd_mkinvalid | Invalidates a mapped PMD [1] |
>>> "
>>>
>>> To:
>>>
>>> "
>>> | pmd_mkinvalid | Invalidates a present PMD; do not call for |
>>> |               | non-present pmd [1]                        |
>>> "
>>>
>>> Is that sufficient? (I'll do the same for pud_mkinvalid() too.)
>>
>> Sounds good to me.
>>
>> Also, if you move pmdp_invalidate(), please move the big comment with it
>> to avoid confusion. Thanks.
>
> And the Fixes tag does not need to go back that far, since this only
> affects arm64, which enabled THP migration in commit 53fa117bb33c
> ("arm64/mm: Enable THP migration").

Yes, will do - good point.

>
> --
> Best Regards,
> Yan, Zi
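
For context, a minimal sketch of the shape of the fix being discussed: only
invalidate a pmd that is actually present, so pmd_mkinvalid() never touches a
migration entry. This is an illustrative simplification, not the actual
__split_huge_pmd_locked() hunk; the helper name split_pmd_sketch() and its
body are invented for the example.

#include <linux/mm.h>
#include <linux/pgtable.h>

/*
 * Illustrative sketch only: heavily simplified from the real
 * __split_huge_pmd_locked(), and not the actual patch hunk.
 */
static void split_pmd_sketch(struct vm_area_struct *vma, pmd_t *pmdp,
			     unsigned long haddr)
{
	pmd_t old_pmd = *pmdp;			/* snapshot the entry first */
	bool present = pmd_present(old_pmd);

	if (present) {
		/*
		 * Present THP/devmap: safe to invalidate. This is where
		 * pmd_mkinvalid() gets applied and the TLB range is flushed.
		 */
		old_pmd = pmdp_invalidate(vma, haddr, pmdp);
	}
	/*
	 * Non-present (migration entry): skip pmdp_invalidate() entirely,
	 * so pmd_mkinvalid() can never make the entry appear present to a
	 * lockless walker such as GUP-fast on arm64.
	 */

	/* ... the rest of the split then works from old_pmd / present ... */
}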
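
And a sketch of the enforcement idea raised in the thread: a once-only warning
at the top of each architecture's override of pmdp_invalidate() /
pmdp_invalidate_ad(). Again, this shows only the shape of the check, not the
actual per-arch hunks.

#include <linux/mm.h>
#include <linux/pgtable.h>

/*
 * Illustrative only: the check proposed for the arch overrides (powerpc,
 * s390 and sparc for pmdp_invalidate(); x86 for pmdp_invalidate_ad()).
 * The body below stands in for the existing arch-specific implementation.
 */
pmd_t pmdp_invalidate(struct vm_area_struct *vma, unsigned long address,
		      pmd_t *pmdp)
{
	/*
	 * Enforce the documented constraint: only a present pmd may be
	 * invalidated; warn once if a caller violates it.
	 */
	VM_WARN_ON_ONCE(!pmd_present(*pmdp));

	/* ... existing arch-specific invalidation and TLB flush ... */
	return *pmdp;
}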