Date: Wed, 7 Feb 2024 12:20:21 +0000
From: Catalin Marinas
To: Matthew Wilcox
Cc: Will Deacon, Nanyong Sun, mike.kravetz@oracle.com, muchun.song@linux.dev,
    akpm@linux-foundation.org, anshuman.khandual@arm.com,
    wangkefeng.wang@huawei.com, linux-arm-kernel@lists.infradead.org,
    linux-kernel@vger.kernel.org, linux-mm@kvack.org
Subject: Re: [PATCH v3 0/3] A Solution to Re-enable hugetlb vmemmap optimize
References: <20240113094436.2506396-1-sunnanyong@huawei.com>
 <20240207111252.GA22167@willie-the-truck>

On Wed, Feb 07, 2024 at 11:21:17AM +0000, Matthew Wilcox wrote:
> On Wed, Feb 07, 2024 at 11:12:52AM +0000, Will Deacon wrote:
> > On Sat, Jan 27, 2024 at 01:04:15PM +0800, Nanyong Sun wrote:
> > > On 2024/1/26 2:06, Catalin Marinas wrote:
> > > > On Sat, Jan 13, 2024 at 05:44:33PM +0800, Nanyong Sun wrote:
> > > > > HVO was previously disabled on arm64 [1] due to the lack of the
> > > > > necessary BBM (break-before-make) logic when changing page tables.
> > > > > This set of patches fixes that by adding the necessary BBM sequence
> > > > > when changing page tables, and by supporting vmemmap page fault
> > > > > handling to fix up kernel address translation faults when the
> > > > > vmemmap is accessed concurrently.
> > > > I'm not keen on this approach. I'm not even sure it's safe. In the
> > > > second patch, you take the init_mm.page_table_lock on the fault path,
> > > > but are we sure it is unlocked when the fault is taken?
> > > I think this situation is impossible. In the implementation of the
> > > second patch, during the window in which the page table is being
> > > modified (the time window when a page fault may occur),
> > > vmemmap_update_pte() already holds the init_mm.page_table_lock, and it
> > > does not unlock it until the page table update is done. Another thread
> > > therefore cannot hold the init_mm.page_table_lock and also trigger a
> > > page fault at the same time.
> > > If I have missed any points in my reasoning, please correct me. Thank you.
> > 
> > It still strikes me as incredibly fragile to handle the fault, and trying
> > to reason about all the users of 'struct page' is impossible. For example,
> > can the fault happen from IRQ context?
> 
> The pte lock cannot be taken in irq context (which I think is what
> you're asking?)

With this patchset, I think it can: an IRQ arrives -> the interrupt handler
accesses the vmemmap -> it faults -> the fault handler in patch 2 takes the
init_mm.page_table_lock to wait for the vmemmap rewriting to complete.
Maybe it works if the hugetlb code disabled IRQs but, as Will said, such a
fault in any kernel context looks fragile.

-- 
Catalin