Message-ID: <2e6f06d3-6c8e-4b44-b6f2-e55bd5be83d6@arm.com>
Date: Thu, 14 Dec 2023 11:53:52 +0000
Subject: Re: [PATCH v3 12/15] arm64/mm: Split __flush_tlb_range() to elide trailing DSB
From: Ryan Roberts
To: Will Deacon
Cc: Catalin Marinas, Ard Biesheuvel, Marc Zyngier, Oliver Upton, James Morse,
    Suzuki K Poulose, Zenghui Yu, Andrey Ryabinin, Alexander Potapenko,
    Andrey Konovalov, Dmitry Vyukov, Vincenzo Frascino, Andrew Morton,
    Anshuman Khandual, Matthew Wilcox, Yu Zhao, Mark Rutland,
    David Hildenbrand, Kefeng Wang, John Hubbard, Zi Yan,
    Barry Song <21cnbao@gmail.com>, Alistair Popple, Yang Shi,
    linux-arm-kernel@lists.infradead.org, linux-mm@kvack.org,
    linux-kernel@vger.kernel.org
References: <20231204105440.61448-1-ryan.roberts@arm.com>
 <20231204105440.61448-13-ryan.roberts@arm.com>
 <20231212113517.GA28857@willie-the-truck>
 <0969c413-bf40-4c46-9f1e-a92101ff2d2e@arm.com>
In-Reply-To: <0969c413-bf40-4c46-9f1e-a92101ff2d2e@arm.com>

Hi Will,

On 12/12/2023 11:47, Ryan Roberts wrote:
> On 12/12/2023 11:35, Will Deacon wrote:
>> On Mon, Dec 04, 2023 at 10:54:37AM +0000, Ryan Roberts wrote:
>>> Split __flush_tlb_range() into __flush_tlb_range_nosync() +
>>> __flush_tlb_range(), in the same way as the existing flush_tlb_page()
>>> arrangement. This allows calling __flush_tlb_range_nosync() to elide the
>>> trailing DSB. Forthcoming "contpte" code will take advantage of this
>>> when clearing the young bit from a contiguous range of ptes.
>>>
>>> Signed-off-by: Ryan Roberts
>>> ---
>>>  arch/arm64/include/asm/tlbflush.h | 13 +++++++++++--
>>>  1 file changed, 11 insertions(+), 2 deletions(-)
>>>
>>> diff --git a/arch/arm64/include/asm/tlbflush.h b/arch/arm64/include/asm/tlbflush.h
>>> index bb2c2833a987..925ef3bdf9ed 100644
>>> --- a/arch/arm64/include/asm/tlbflush.h
>>> +++ b/arch/arm64/include/asm/tlbflush.h
>>> @@ -399,7 +399,7 @@ do { \
>>>  #define __flush_s2_tlb_range_op(op, start, pages, stride, tlb_level) \
>>>  	__flush_tlb_range_op(op, start, pages, stride, 0, tlb_level, false)
>>>
>>> -static inline void __flush_tlb_range(struct vm_area_struct *vma,
>>> +static inline void __flush_tlb_range_nosync(struct vm_area_struct *vma,
>>>  				     unsigned long start, unsigned long end,
>>>  				     unsigned long stride, bool last_level,
>>>  				     int tlb_level)
>>> @@ -431,10 +431,19 @@ static inline void __flush_tlb_range(struct vm_area_struct *vma,
>>>  	else
>>>  		__flush_tlb_range_op(vae1is, start, pages, stride, asid, tlb_level, true);
>>>
>>> -	dsb(ish);
>>>  	mmu_notifier_arch_invalidate_secondary_tlbs(vma->vm_mm, start, end);
>>>  }
>>>
>>> +static inline void __flush_tlb_range(struct vm_area_struct *vma,
>>> +				     unsigned long start, unsigned long end,
>>> +				     unsigned long stride, bool last_level,
>>> +				     int tlb_level)
>>> +{
>>> +	__flush_tlb_range_nosync(vma, start, end, stride,
>>> +				 last_level, tlb_level);
>>> +	dsb(ish);
>>> +}
>>
>> Hmm, are you sure it's safe to defer the DSB until after the secondary TLB
>> invalidation? It will have a subtle effect on e.g. an SMMU participating
>> in broadcast TLB maintenance, because now the ATC will be invalidated
>> before completion of the TLB invalidation and it's not obviously safe to me.
>
> I'll be honest; I don't know that it's safe. The notifier calls turned up during
> a rebase and I stared at it for a while, before eventually concluding that I
> should just follow the existing pattern in __flush_tlb_page_nosync(): That one
> calls the mmu notifier without the dsb, then flush_tlb_page() does the dsb
> after. So I assumed it was safe.
>
> If you think it's not safe, I guess there is a bug to fix in
> __flush_tlb_page_nosync()?

Did you have an opinion on this? I'm just putting together a v4 of this series,
and I'll remove this optimization if you think it's unsound. But in that case, I
guess we have an existing bug to fix too?

Thanks,
Ryan

>
>
>
>>
>> Will
>
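For reference, a minimal sketch of the kind of caller the commit message has in
mind: issue the TLBI broadcasts for a contiguous range with the new nosync
variant and decide separately whether the trailing DSB is needed. The helper
name and the stride/level arguments below are purely illustrative, not code
from this series:

```c
/*
 * Illustrative sketch only: contpte_clear_young_range() is a hypothetical
 * caller, not part of the posted patches. It shows the intended split
 * introduced above: broadcast the invalidations with
 * __flush_tlb_range_nosync() and elide the dsb(ish) that
 * __flush_tlb_range() would otherwise issue.
 */
static void contpte_clear_young_range(struct vm_area_struct *vma,
				      unsigned long start, unsigned long end)
{
	/* Issue the TLBI broadcasts, but do not wait for completion. */
	__flush_tlb_range_nosync(vma, start, end, PAGE_SIZE, true, 3);

	/*
	 * Clearing the young/access bit can tolerate a stale TLB entry, so
	 * no dsb(ish) here. A caller that must observe completion before
	 * proceeding would use __flush_tlb_range(), which performs the
	 * nosync flush and then the dsb(ish).
	 */
}
```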