Date: Tue, 8 Aug 2023 23:13:23 +0000
In-Reply-To: <20230808231330.3855936-1-rananta@google.com>
Mime-Version: 1.0
References: <20230808231330.3855936-1-rananta@google.com>
X-Mailer: git-send-email 2.41.0.640.ga95def55d0-goog
Message-ID: <20230808231330.3855936-8-rananta@google.com>
Subject: [PATCH v8 07/14] arm64: tlb: Refactor the core flush algorithm of
 __flush_tlb_range
From: Raghavendra Rao Ananta
To: Oliver Upton, Marc Zyngier, James Morse, Suzuki K Poulose
Cc: Paolo Bonzini, Sean Christopherson, Huacai Chen, Zenghui Yu,
 Anup Patel, Atish Patra, Jing Zhang, Reiji Watanabe, Colton Lewis,
 Raghavendra Rao Anata, David Matlack, Fuad Tabba,
 linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev,
 linux-mips@vger.kernel.org, kvm-riscv@lists.infradead.org,
 linux-riscv@lists.infradead.org, linux-kernel@vger.kernel.org,
 kvm@vger.kernel.org, Catalin Marinas, Gavin Shan, Shaoqin Huang
Content-Type: text/plain; charset="UTF-8"

Currently, the core TLB flush functionality of __flush_tlb_range()
hardcodes vae1is (and variants) for the flush operation. In the
upcoming patches, the KVM code reuses this core algorithm with
ipas2e1is for range-based TLB invalidations based on the IPA. Hence,
extract the core flush functionality of __flush_tlb_range() into its
own macro that accepts an 'op' argument to pass any TLBI operation,
such that other callers (KVM) can benefit.

No functional changes intended.

Signed-off-by: Raghavendra Rao Ananta
Reviewed-by: Catalin Marinas
Reviewed-by: Gavin Shan
Reviewed-by: Shaoqin Huang
---
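As an illustration of the new interface (not part of this patch): a
non-VA caller, such as the KVM code mentioned above, would pass its own
TLBI op and, per the macro's kerneldoc, asid = 0 and tlbi_user = false
for IPA-based operations. The names 'start', 'pages', 'stride' and
'tlb_level' below stand in for values the hypothetical caller computes:

        /* Hypothetical caller: range-invalidate IPAs rather than VAs. */
        __flush_tlb_range_op(ipas2e1is, start, pages, stride, 0, tlb_level, false);
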
 arch/arm64/include/asm/tlbflush.h | 121 +++++++++++++++++-------------
 1 file changed, 68 insertions(+), 53 deletions(-)

diff --git a/arch/arm64/include/asm/tlbflush.h b/arch/arm64/include/asm/tlbflush.h
index 412a3b9a3c25d..b9475a852d5be 100644
--- a/arch/arm64/include/asm/tlbflush.h
+++ b/arch/arm64/include/asm/tlbflush.h
@@ -278,14 +278,74 @@ static inline void flush_tlb_page(struct vm_area_struct *vma,
  */
 #define MAX_TLBI_OPS    PTRS_PER_PTE
 
+/*
+ * __flush_tlb_range_op - Perform TLBI operation upon a range
+ *
+ * @op: TLBI instruction that operates on a range (has 'r' prefix)
+ * @start:      The start address of the range
+ * @pages:      Range as the number of pages from 'start'
+ * @stride:     Flush granularity
+ * @asid:       The ASID of the task (0 for IPA instructions)
+ * @tlb_level:  Translation Table level hint, if known
+ * @tlbi_user:  If 'true', call an additional __tlbi_user()
+ *              (typically for user ASIDs). 'false' for IPA instructions
+ *
+ * When the CPU does not support TLB range operations, flush the TLB
+ * entries one by one at the granularity of 'stride'. If the TLB
+ * range ops are supported, then:
+ *
+ * 1. If 'pages' is odd, flush the first page through non-range
+ *    operations;
+ *
+ * 2. For remaining pages: the minimum range granularity is decided
+ *    by 'scale', so multiple range TLBI operations may be required.
+ *    Start from scale = 0, flush the corresponding number of pages
+ *    ((num+1)*2^(5*scale+1) starting from 'addr'), then increase it
+ *    until no pages left.
+ *
+ * Note that certain ranges can be represented by either num = 31 and
+ * scale or num = 0 and scale + 1. The loop below favours the latter
+ * since num is limited to 30 by the __TLBI_RANGE_NUM() macro.
+ */
+#define __flush_tlb_range_op(op, start, pages, stride,                  \
+                                asid, tlb_level, tlbi_user)             \
+do {                                                                    \
+        int num = 0;                                                    \
+        int scale = 0;                                                  \
+        unsigned long addr;                                             \
+                                                                        \
+        while (pages > 0) {                                             \
+                if (!system_supports_tlb_range() ||                     \
+                    pages % 2 == 1) {                                   \
+                        addr = __TLBI_VADDR(start, asid);               \
+                        __tlbi_level(op, addr, tlb_level);              \
+                        if (tlbi_user)                                  \
+                                __tlbi_user_level(op, addr, tlb_level); \
+                        start += stride;                                \
+                        pages -= stride >> PAGE_SHIFT;                  \
+                        continue;                                       \
+                }                                                       \
+                                                                        \
+                num = __TLBI_RANGE_NUM(pages, scale);                   \
+                if (num >= 0) {                                         \
+                        addr = __TLBI_VADDR_RANGE(start, asid, scale,   \
+                                                  num, tlb_level);      \
+                        __tlbi(r##op, addr);                            \
+                        if (tlbi_user)                                  \
+                                __tlbi_user(r##op, addr);               \
+                        start += __TLBI_RANGE_PAGES(num, scale) << PAGE_SHIFT; \
+                        pages -= __TLBI_RANGE_PAGES(num, scale);        \
+                }                                                       \
+                scale++;                                                \
+        }                                                               \
+} while (0)
+
 static inline void __flush_tlb_range(struct vm_area_struct *vma,
                                      unsigned long start, unsigned long end,
                                      unsigned long stride, bool last_level,
                                      int tlb_level)
 {
-        int num = 0;
-        int scale = 0;
-        unsigned long asid, addr, pages;
+        unsigned long asid, pages;
 
         start = round_down(start, stride);
         end = round_up(end, stride);
@@ -307,56 +367,11 @@ static inline void __flush_tlb_range(struct vm_area_struct *vma,
         dsb(ishst);
         asid = ASID(vma->vm_mm);
 
-        /*
-         * When the CPU does not support TLB range operations, flush the TLB
-         * entries one by one at the granularity of 'stride'. If the TLB
-         * range ops are supported, then:
-         *
-         * 1. If 'pages' is odd, flush the first page through non-range
-         *    operations;
-         *
-         * 2. For remaining pages: the minimum range granularity is decided
-         *    by 'scale', so multiple range TLBI operations may be required.
-         *    Start from scale = 0, flush the corresponding number of pages
-         *    ((num+1)*2^(5*scale+1) starting from 'addr'), then increase it
-         *    until no pages left.
-         *
-         * Note that certain ranges can be represented by either num = 31 and
-         * scale or num = 0 and scale + 1. The loop below favours the latter
-         * since num is limited to 30 by the __TLBI_RANGE_NUM() macro.
-         */
-        while (pages > 0) {
-                if (!system_supports_tlb_range() ||
-                    pages % 2 == 1) {
-                        addr = __TLBI_VADDR(start, asid);
-                        if (last_level) {
-                                __tlbi_level(vale1is, addr, tlb_level);
-                                __tlbi_user_level(vale1is, addr, tlb_level);
-                        } else {
-                                __tlbi_level(vae1is, addr, tlb_level);
-                                __tlbi_user_level(vae1is, addr, tlb_level);
-                        }
-                        start += stride;
-                        pages -= stride >> PAGE_SHIFT;
-                        continue;
-                }
-
-                num = __TLBI_RANGE_NUM(pages, scale);
-                if (num >= 0) {
-                        addr = __TLBI_VADDR_RANGE(start, asid, scale,
-                                                  num, tlb_level);
-                        if (last_level) {
-                                __tlbi(rvale1is, addr);
-                                __tlbi_user(rvale1is, addr);
-                        } else {
-                                __tlbi(rvae1is, addr);
-                                __tlbi_user(rvae1is, addr);
-                        }
-                        start += __TLBI_RANGE_PAGES(num, scale) << PAGE_SHIFT;
-                        pages -= __TLBI_RANGE_PAGES(num, scale);
-                }
-                scale++;
-        }
+        if (last_level)
+                __flush_tlb_range_op(vale1is, start, pages, stride, asid, tlb_level, true);
+        else
+                __flush_tlb_range_op(vae1is, start, pages, stride, asid, tlb_level, true);
+
         dsb(ish);
 }
 
-- 
2.41.0.640.ga95def55d0-goog
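
The decomposition described in the kerneldoc above can be sanity-checked
outside the kernel. The standalone sketch below mirrors the loop of
__flush_tlb_range_op(): RANGE_PAGES()/RANGE_NUM() reproduce the
arithmetic of __TLBI_RANGE_PAGES()/__TLBI_RANGE_NUM() from tlbflush.h
(neither appears in this diff), and it assumes the CPU supports range
ops and that the stride is a single page:

#include <stdio.h>

/* (num + 1) * 2^(5*scale + 1) pages, as in the kerneldoc above. */
#define RANGE_PAGES(num, scale) (((unsigned long)(num) + 1) << (5 * (scale) + 1))
/* Yields -1..30; -1 means no range op fits at this scale. */
#define RANGE_NUM(pages, scale) \
        ((int)(((pages) >> (5 * (scale) + 1)) & 0x1f) - 1)

static void decompose(unsigned long pages)
{
        int scale = 0;

        printf("pages=%lu:\n", pages);
        while (pages > 0) {
                if (pages % 2 == 1) {
                        /* Odd count: one non-range TLBI for a single page. */
                        printf("  single-page TLBI\n");
                        pages -= 1;
                        continue;
                }

                int num = RANGE_NUM(pages, scale);
                if (num >= 0) {
                        /* One range TLBI covering RANGE_PAGES(num, scale) pages. */
                        printf("  range TLBI: scale=%d num=%d (%lu pages)\n",
                               scale, num, RANGE_PAGES(num, scale));
                        pages -= RANGE_PAGES(num, scale);
                }
                scale++;
        }
}

int main(void)
{
        decompose(4096); /* one range op: scale=2, num=1 */
        decompose(513);  /* one single-page op, then scale=1, num=7 */
        return 0;
}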