Message-ID: <0969c413-bf40-4c46-9f1e-a92101ff2d2e@arm.com>
Date: Tue, 12 Dec 2023 11:47:03 +0000
Subject: Re: [PATCH v3 12/15] arm64/mm: Split __flush_tlb_range() to elide
 trailing DSB
From: Ryan Roberts
To: Will Deacon
Cc: Catalin Marinas, Ard Biesheuvel, Marc Zyngier, Oliver Upton, James Morse,
 Suzuki K Poulose, Zenghui Yu, Andrey Ryabinin, Alexander Potapenko,
 Andrey Konovalov, Dmitry Vyukov, Vincenzo Frascino, Andrew Morton,
 Anshuman Khandual, Matthew Wilcox, Yu Zhao, Mark Rutland,
 David Hildenbrand, Kefeng Wang, John Hubbard, Zi Yan,
 Barry Song <21cnbao@gmail.com>, Alistair Popple, Yang Shi,
 linux-arm-kernel@lists.infradead.org, linux-mm@kvack.org,
 linux-kernel@vger.kernel.org
References: <20231204105440.61448-1-ryan.roberts@arm.com>
 <20231204105440.61448-13-ryan.roberts@arm.com>
 <20231212113517.GA28857@willie-the-truck>
In-Reply-To: <20231212113517.GA28857@willie-the-truck>

On 12/12/2023 11:35, Will Deacon wrote:
> On Mon, Dec 04, 2023 at 10:54:37AM +0000, Ryan Roberts wrote:
>> Split __flush_tlb_range() into __flush_tlb_range_nosync() +
>> __flush_tlb_range(), in the same way as the existing flush_tlb_page()
>> arrangement. This allows calling __flush_tlb_range_nosync() to elide the
>> trailing DSB. Forthcoming "contpte" code will take advantage of this
>> when clearing the young bit from a contiguous range of ptes.
>>
>> Signed-off-by: Ryan Roberts
>> ---
>>  arch/arm64/include/asm/tlbflush.h | 13 +++++++++++--
>>  1 file changed, 11 insertions(+), 2 deletions(-)
>>
>> diff --git a/arch/arm64/include/asm/tlbflush.h b/arch/arm64/include/asm/tlbflush.h
>> index bb2c2833a987..925ef3bdf9ed 100644
>> --- a/arch/arm64/include/asm/tlbflush.h
>> +++ b/arch/arm64/include/asm/tlbflush.h
>> @@ -399,7 +399,7 @@ do {								\
>>  #define __flush_s2_tlb_range_op(op, start, pages, stride, tlb_level) \
>>  	__flush_tlb_range_op(op, start, pages, stride, 0, tlb_level, false)
>>
>> -static inline void __flush_tlb_range(struct vm_area_struct *vma,
>> +static inline void __flush_tlb_range_nosync(struct vm_area_struct *vma,
>>  				     unsigned long start, unsigned long end,
>>  				     unsigned long stride, bool last_level,
>>  				     int tlb_level)
>> @@ -431,10 +431,19 @@ static inline void __flush_tlb_range(struct vm_area_struct *vma,
>>  	else
>>  		__flush_tlb_range_op(vae1is, start, pages, stride, asid, tlb_level, true);
>>
>> -	dsb(ish);
>>  	mmu_notifier_arch_invalidate_secondary_tlbs(vma->vm_mm, start, end);
>>  }
>>
>> +static inline void __flush_tlb_range(struct vm_area_struct *vma,
>> +				     unsigned long start, unsigned long end,
>> +				     unsigned long stride, bool last_level,
>> +				     int tlb_level)
>> +{
>> +	__flush_tlb_range_nosync(vma, start, end, stride,
>> +				 last_level, tlb_level);
>> +	dsb(ish);
>> +}
>
> Hmm, are you sure it's safe to defer the DSB until after the secondary TLB
> invalidation? It will have a subtle effect on e.g. an SMMU participating
> in broadcast TLB maintenance, because now the ATC will be invalidated
> before completion of the TLB invalidation and it's not obviously safe to me.

I'll be honest; I don't know that it's safe. The notifier calls turned up
during a rebase and I stared at them for a while, before eventually concluding
that I should just follow the existing pattern in __flush_tlb_page_nosync():
that one calls the mmu notifier without the dsb, then flush_tlb_page() does
the dsb after.
So I assumed it was safe. If you think it's not safe, I guess there is a bug
to fix in __flush_tlb_page_nosync()?

>
> Will