Subject: Re: [PATCH v2 2/3] arm64: mm: HVO: support BBM of vmemmap pgtable safely
From: Nanyong Sun
To: Muchun Song
CC: Catalin Marinas, Will Deacon, Mike Kravetz, Andrew Morton,
    Anshuman Khandual, "Matthew Wilcox (Oracle)", Kefeng Wang,
    LKML, Linux-MM
Date: Wed, 20 Dec 2023 21:37:52 +0800
Message-ID: <8e3b03bc-af43-adaf-5980-82548893a7c5@huawei.com>
In-Reply-To: <08DCC8BB-631C-4F7A-BB0A-494AD2AD3465@linux.dev>
References: <20231220051855.47547-1-sunnanyong@huawei.com>
            <20231220051855.47547-3-sunnanyong@huawei.com>
            <08DCC8BB-631C-4F7A-BB0A-494AD2AD3465@linux.dev>

On 2023/12/20 14:32, Muchun Song wrote:
>
>> On Dec 20, 2023, at 13:18, Nanyong Sun wrote:
>>
>> Implement vmemmap_update_pmd and vmemmap_update_pte on arm64 to do
>> BBM (break-before-make) logic when changing the page table
>> of a vmemmap address; they will run under the init_mm.page_table_lock.
>> If a translation fault on a vmemmap address happens concurrently after
>> the pte/pmd has been cleared, the vmemmap page fault handler will acquire
>> the init_mm.page_table_lock to wait for the vmemmap update to complete,
>> by which time the virtual address is valid again, so the page fault can
>> return and the access can continue.
>> Otherwise, take the traditional kernel fault path.
>>
>> Implement vmemmap_flush_tlb_all/range on arm64 with nothing to do,
>> because the TLB is already flushed in every single BBM.
>>
>> Signed-off-by: Nanyong Sun
>> ---
>>  arch/arm64/include/asm/esr.h |  4 ++
>>  arch/arm64/include/asm/mmu.h | 20 +++++++++
>>  arch/arm64/mm/fault.c        | 78 ++++++++++++++++++++++++++++++++++--
>>  arch/arm64/mm/mmu.c          | 28 +++++++++++++
>>  4 files changed, 127 insertions(+), 3 deletions(-)
>>
>> diff --git a/arch/arm64/include/asm/esr.h b/arch/arm64/include/asm/esr.h
>> index ae35939f395b..1c63256efd25 100644
>> --- a/arch/arm64/include/asm/esr.h
>> +++ b/arch/arm64/include/asm/esr.h
>> @@ -116,6 +116,10 @@
>>  #define ESR_ELx_FSC_SERROR	(0x11)
>>  #define ESR_ELx_FSC_ACCESS	(0x08)
>>  #define ESR_ELx_FSC_FAULT	(0x04)
>> +#define ESR_ELx_FSC_FAULT_L0	(0x04)
>> +#define ESR_ELx_FSC_FAULT_L1	(0x05)
>> +#define ESR_ELx_FSC_FAULT_L2	(0x06)
>> +#define ESR_ELx_FSC_FAULT_L3	(0x07)
>>  #define ESR_ELx_FSC_PERM	(0x0C)
>>  #define ESR_ELx_FSC_SEA_TTW0	(0x14)
>>  #define ESR_ELx_FSC_SEA_TTW1	(0x15)
>> diff --git a/arch/arm64/include/asm/mmu.h b/arch/arm64/include/asm/mmu.h
>> index 2fcf51231d6e..b553bc37c925 100644
>> --- a/arch/arm64/include/asm/mmu.h
>> +++ b/arch/arm64/include/asm/mmu.h
>> @@ -76,5 +76,25 @@ extern bool kaslr_requires_kpti(void);
>>  #define INIT_MM_CONTEXT(name)	\
>>  	.pgd = init_pg_dir,
>>
>> +#ifdef CONFIG_HUGETLB_PAGE_OPTIMIZE_VMEMMAP
>> +void vmemmap_update_pmd(unsigned long addr, pmd_t *pmdp, pte_t *ptep);
>> +#define vmemmap_update_pmd vmemmap_update_pmd
>> +void vmemmap_update_pte(unsigned long addr, pte_t *ptep, pte_t pte);
>> +#define vmemmap_update_pte vmemmap_update_pte
>> +
>> +static inline void vmemmap_flush_tlb_all(void)
>> +{
>> +	/* do nothing, already flushed tlb in every single BBM */
>> +}
>> +#define vmemmap_flush_tlb_all vmemmap_flush_tlb_all
>> +
>> +static inline void vmemmap_flush_tlb_range(unsigned long start,
>> +					   unsigned long end)
>> +{
>> +	/* do nothing, already flushed tlb in every single BBM */
>> +}
>> +#define vmemmap_flush_tlb_range vmemmap_flush_tlb_range
>> +#endif
>
> I think those declarations related to TLB flushing should be moved
> to arch/arm64/include/asm/tlbflush.h since we do not include
> <asm/mmu.h> explicitly in hugetlb_vmemmap.c and their functionality
> is to flush the TLB. Luckily, <asm/tlbflush.h> is included by
> hugetlb_vmemmap.c.
>
> Additionally, the vmemmap_update_pmd/pte helpers should be moved to
> arch/arm64/include/asm/pgtable.h since they are really pgtable-related
> operations.
>
> Thanks.

Yes, I will move them in the next version. Thanks for your time.
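
To make the plan concrete, a rough sketch of where the declarations could
end up after the move (same signatures as in this patch; illustrative only,
not the actual v3 hunks):

/* arch/arm64/include/asm/tlbflush.h: no-op TLB flush helpers (sketch) */
#ifdef CONFIG_HUGETLB_PAGE_OPTIMIZE_VMEMMAP
static inline void vmemmap_flush_tlb_all(void)
{
	/* do nothing, already flushed tlb in every single BBM */
}
#define vmemmap_flush_tlb_all vmemmap_flush_tlb_all

static inline void vmemmap_flush_tlb_range(unsigned long start,
					   unsigned long end)
{
	/* do nothing, already flushed tlb in every single BBM */
}
#define vmemmap_flush_tlb_range vmemmap_flush_tlb_range
#endif

/* arch/arm64/include/asm/pgtable.h: pgtable update helpers (sketch) */
#ifdef CONFIG_HUGETLB_PAGE_OPTIMIZE_VMEMMAP
void vmemmap_update_pmd(unsigned long addr, pmd_t *pmdp, pte_t *ptep);
#define vmemmap_update_pmd vmemmap_update_pmd
void vmemmap_update_pte(unsigned long addr, pte_t *ptep, pte_t pte);
#define vmemmap_update_pte vmemmap_update_pte
#endif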
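
For context on the BBM sequence the commit message describes, a minimal
sketch of what vmemmap_update_pte could look like (an assumed implementation
for illustration; the real code is in the arch/arm64/mm/mmu.c hunk of the
full patch):

void vmemmap_update_pte(unsigned long addr, pte_t *ptep, pte_t pte)
{
	/*
	 * Do the whole break-before-make under init_mm.page_table_lock:
	 * a concurrent translation fault on a vmemmap address waits on
	 * this lock in the fault handler until the new entry is installed.
	 */
	spin_lock(&init_mm.page_table_lock);
	pte_clear(&init_mm, addr, ptep);		/* break */
	flush_tlb_kernel_range(addr, addr + PAGE_SIZE);	/* flush stale entry */
	set_pte_at(&init_mm, addr, ptep, pte);		/* make */
	spin_unlock(&init_mm.page_table_lock);
}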