Date: Sat, 15 Jul 2023 00:53:59 +0000
In-Reply-To: <20230715005405.3689586-1-rananta@google.com>
Mime-Version: 1.0
References: <20230715005405.3689586-1-rananta@google.com>
X-Mailer: git-send-email 2.41.0.455.g037347b96a-goog
Message-ID: <20230715005405.3689586-6-rananta@google.com>
Subject: [PATCH v6 05/11] arm64: tlb: Refactor the core flush algorithm of __flush_tlb_range
From: Raghavendra Rao Ananta
To: Oliver Upton, Marc Zyngier, James Morse, Suzuki K Poulose
Cc: Paolo Bonzini, Sean Christopherson, Huacai Chen,
	Zenghui Yu, Anup Patel, Atish Patra, Jing Zhang, Colton Lewis,
	Raghavendra Rao Ananta, David Matlack, linux-arm-kernel@lists.infradead.org,
	kvmarm@lists.linux.dev, linux-mips@vger.kernel.org,
	kvm-riscv@lists.infradead.org, linux-riscv@lists.infradead.org,
	linux-kernel@vger.kernel.org, kvm@vger.kernel.org,
	Catalin Marinas, Gavin Shan

Currently, the core TLB flush functionality of __flush_tlb_range()
hardcodes vae1is (and variants) for the flush operation. In the
upcoming patches, the KVM code reuses this core algorithm with
ipas2e1is for range-based TLB invalidations based on the IPA. Hence,
extract the core flush functionality of __flush_tlb_range() into its
own macro that accepts an 'op' argument to pass any TLBI operation,
such that other callers (KVM) can benefit.

No functional changes intended.

Signed-off-by: Raghavendra Rao Ananta
Reviewed-by: Catalin Marinas
Reviewed-by: Gavin Shan
---
 arch/arm64/include/asm/tlbflush.h | 109 +++++++++++++++---------------
 1 file changed, 56 insertions(+), 53 deletions(-)

diff --git a/arch/arm64/include/asm/tlbflush.h b/arch/arm64/include/asm/tlbflush.h
index 412a3b9a3c25..f7fafba25add 100644
--- a/arch/arm64/include/asm/tlbflush.h
+++ b/arch/arm64/include/asm/tlbflush.h
@@ -278,14 +278,62 @@ static inline void flush_tlb_page(struct vm_area_struct *vma,
  */
 #define MAX_TLBI_OPS	PTRS_PER_PTE
 
+/* When the CPU does not support TLB range operations, flush the TLB
+ * entries one by one at the granularity of 'stride'. If the TLB
+ * range ops are supported, then:
+ *
+ * 1. If 'pages' is odd, flush the first page through non-range
+ *    operations;
+ *
+ * 2. For remaining pages: the minimum range granularity is decided
+ *    by 'scale', so multiple range TLBI operations may be required.
+ *    Start from scale = 0, flush the corresponding number of pages
+ *    ((num+1)*2^(5*scale+1) starting from 'addr'), then increase it
+ *    until no pages left.
+ *
+ * Note that certain ranges can be represented by either num = 31 and
+ * scale or num = 0 and scale + 1. The loop below favours the latter
+ * since num is limited to 30 by the __TLBI_RANGE_NUM() macro.
+ */
+#define __flush_tlb_range_op(op, start, pages, stride,			\
+				asid, tlb_level, tlbi_user)		\
+do {									\
+	int num = 0;							\
+	int scale = 0;							\
+	unsigned long addr;						\
+									\
+	while (pages > 0) {						\
+		if (!system_supports_tlb_range() ||			\
+		    pages % 2 == 1) {					\
+			addr = __TLBI_VADDR(start, asid);		\
+			__tlbi_level(op, addr, tlb_level);		\
+			if (tlbi_user)					\
+				__tlbi_user_level(op, addr, tlb_level);	\
+			start += stride;				\
+			pages -= stride >> PAGE_SHIFT;			\
+			continue;					\
+		}							\
+									\
+		num = __TLBI_RANGE_NUM(pages, scale);			\
+		if (num >= 0) {						\
+			addr = __TLBI_VADDR_RANGE(start, asid, scale,	\
+						num, tlb_level);	\
+			__tlbi(r##op, addr);				\
+			if (tlbi_user)					\
+				__tlbi_user(r##op, addr);		\
+			start += __TLBI_RANGE_PAGES(num, scale) << PAGE_SHIFT;	\
+			pages -= __TLBI_RANGE_PAGES(num, scale);	\
+		}							\
+		scale++;						\
+	}								\
+} while (0)
+
 static inline void __flush_tlb_range(struct vm_area_struct *vma,
 				     unsigned long start, unsigned long end,
 				     unsigned long stride, bool last_level,
 				     int tlb_level)
 {
-	int num = 0;
-	int scale = 0;
-	unsigned long asid, addr, pages;
+	unsigned long asid, pages;
 
 	start = round_down(start, stride);
 	end = round_up(end, stride);
@@ -307,56 +355,11 @@ static inline void __flush_tlb_range(struct vm_area_struct *vma,
 	dsb(ishst);
 	asid = ASID(vma->vm_mm);
 
-	/*
-	 * When the CPU does not support TLB range operations, flush the TLB
-	 * entries one by one at the granularity of 'stride'. If the TLB
-	 * range ops are supported, then:
-	 *
-	 * 1. If 'pages' is odd, flush the first page through non-range
-	 *    operations;
-	 *
-	 * 2. For remaining pages: the minimum range granularity is decided
-	 *    by 'scale', so multiple range TLBI operations may be required.
-	 *    Start from scale = 0, flush the corresponding number of pages
-	 *    ((num+1)*2^(5*scale+1) starting from 'addr'), then increase it
-	 *    until no pages left.
-	 *
-	 * Note that certain ranges can be represented by either num = 31 and
-	 * scale or num = 0 and scale + 1. The loop below favours the latter
-	 * since num is limited to 30 by the __TLBI_RANGE_NUM() macro.
-	 */
-	while (pages > 0) {
-		if (!system_supports_tlb_range() ||
-		    pages % 2 == 1) {
-			addr = __TLBI_VADDR(start, asid);
-			if (last_level) {
-				__tlbi_level(vale1is, addr, tlb_level);
-				__tlbi_user_level(vale1is, addr, tlb_level);
-			} else {
-				__tlbi_level(vae1is, addr, tlb_level);
-				__tlbi_user_level(vae1is, addr, tlb_level);
-			}
-			start += stride;
-			pages -= stride >> PAGE_SHIFT;
-			continue;
-		}
-
-		num = __TLBI_RANGE_NUM(pages, scale);
-		if (num >= 0) {
-			addr = __TLBI_VADDR_RANGE(start, asid, scale,
-						  num, tlb_level);
-			if (last_level) {
-				__tlbi(rvale1is, addr);
-				__tlbi_user(rvale1is, addr);
-			} else {
-				__tlbi(rvae1is, addr);
-				__tlbi_user(rvae1is, addr);
-			}
-			start += __TLBI_RANGE_PAGES(num, scale) << PAGE_SHIFT;
-			pages -= __TLBI_RANGE_PAGES(num, scale);
-		}
-		scale++;
-	}
+	if (last_level)
+		__flush_tlb_range_op(vale1is, start, pages, stride, asid, tlb_level, true);
+	else
+		__flush_tlb_range_op(vae1is, start, pages, stride, asid, tlb_level, true);
+
 	dsb(ish);
 }
-- 
2.41.0.455.g037347b96a-goog
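
[Editorial illustration, not part of the patch] For readers who want to see how the scale/num decomposition described in the comment plays out, below is a minimal stand-alone user-space sketch. RANGE_NUM()/RANGE_PAGES() mirror the kernel's __TLBI_RANGE_NUM()/__TLBI_RANGE_PAGES() encodings; demo_flush(), have_range_ops and the printf() placeholders standing in for the TLBI instructions are hypothetical scaffolding for demonstration only.

/* Sketch of the __flush_tlb_range_op() range-splitting loop in user space. */
#include <stdio.h>

#define PAGE_SHIFT		12
#define TLBI_RANGE_MASK		0x1fUL
/* Pages covered by one range op: (num + 1) * 2^(5*scale + 1). */
#define RANGE_PAGES(num, scale)	((unsigned long)((num) + 1) << (5 * (scale) + 1))
/* 'num' field for the largest range encodable at 'scale'; -1 if none fits. */
#define RANGE_NUM(pages, scale)	\
	((int)(((pages) >> (5 * (scale) + 1)) & TLBI_RANGE_MASK) - 1)

static void demo_flush(unsigned long start, unsigned long pages,
		       unsigned long stride, int have_range_ops)
{
	int scale = 0;

	while (pages > 0) {
		/* Odd page count (or no range ops): flush a single stride. */
		if (!have_range_ops || pages % 2 == 1) {
			printf("tlbi <op>,  addr=%#lx\n", start);
			start += stride;
			pages -= stride >> PAGE_SHIFT;
			continue;
		}

		/* Even remainder: emit the largest range encodable at this scale. */
		int num = RANGE_NUM(pages, scale);
		if (num >= 0) {
			printf("tlbi r<op>, addr=%#lx scale=%d num=%d (%lu pages)\n",
			       start, scale, num, RANGE_PAGES(num, scale));
			start += RANGE_PAGES(num, scale) << PAGE_SHIFT;
			pages -= RANGE_PAGES(num, scale);
		}
		scale++;
	}
}

int main(void)
{
	/* 1023 4KiB pages starting at 0x400000, with range ops available. */
	demo_flush(0x400000UL, 1023, 1UL << PAGE_SHIFT, 1);
	return 0;
}

With 1023 pages this prints one single-page flush (the odd page), then a 62-page range at scale 0 and a 960-page range at scale 1, which is exactly the splitting the comment describes.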