From: Raghavendra Rao Ananta
Date: Fri, 14 Apr 2023 17:29:22 +0000
Subject: [PATCH v3 7/7] KVM: arm64: Use TLBI range-based instructions for unmap
Message-ID: <20230414172922.812640-8-rananta@google.com>
In-Reply-To: <20230414172922.812640-1-rananta@google.com>
References: <20230414172922.812640-1-rananta@google.com>
To: Oliver Upton, Marc Zyngier, James Morse, Suzuki K Poulose
Cc: Ricardo Koller, Paolo Bonzini, Jing Zhang, Colton Lewis,
    Raghavendra Rao Ananta, linux-arm-kernel@lists.infradead.org,
    kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org, kvm@vger.kernel.org

The current implementation of the stage-2 unmap walker traverses the
given range and, as part of break-before-make, performs a TLB
invalidation with a DSB for every PTE it removes. Over a large address
range, this per-PTE invalidate-and-barrier sequence can become a
performance bottleneck. Hence, if the system supports FEAT_TLBIRANGE,
defer the TLB invalidations until the entire walk is finished, and
then use range-based instructions to invalidate the TLBs in one go.
Condition this upon S2FWB in order to avoid walking the page-table
again to perform the CMOs after issuing the TLBI.

Signed-off-by: Raghavendra Rao Ananta
Suggested-by: Oliver Upton
---
 arch/arm64/kvm/hyp/pgtable.c | 33 +++++++++++++++++++++++++++++----
 1 file changed, 29 insertions(+), 4 deletions(-)

diff --git a/arch/arm64/kvm/hyp/pgtable.c b/arch/arm64/kvm/hyp/pgtable.c
index 3f136e35feb5e..bcb748e3566c7 100644
--- a/arch/arm64/kvm/hyp/pgtable.c
+++ b/arch/arm64/kvm/hyp/pgtable.c
@@ -987,10 +987,16 @@ int kvm_pgtable_stage2_set_owner(struct kvm_pgtable *pgt, u64 addr, u64 size,
 	return ret;
 }
 
+struct stage2_unmap_data {
+	struct kvm_pgtable *pgt;
+	bool skip_pte_tlbis;
+};
+
 static int stage2_unmap_walker(const struct kvm_pgtable_visit_ctx *ctx,
 			       enum kvm_pgtable_walk_flags visit)
 {
-	struct kvm_pgtable *pgt = ctx->arg;
+	struct stage2_unmap_data *unmap_data = ctx->arg;
+	struct kvm_pgtable *pgt = unmap_data->pgt;
 	struct kvm_s2_mmu *mmu = pgt->mmu;
 	struct kvm_pgtable_mm_ops *mm_ops = ctx->mm_ops;
 	kvm_pte_t *childp = NULL;
@@ -1018,7 +1024,7 @@ static int stage2_unmap_walker(const struct kvm_pgtable_visit_ctx *ctx,
 	 * block entry and rely on the remaining portions being faulted
 	 * back lazily.
 	 */
-	stage2_put_pte(ctx, mmu, mm_ops, false);
+	stage2_put_pte(ctx, mmu, mm_ops, unmap_data->skip_pte_tlbis);
 
 	if (need_flush && mm_ops->dcache_clean_inval_poc)
 		mm_ops->dcache_clean_inval_poc(kvm_pte_follow(ctx->old, mm_ops),
@@ -1032,13 +1038,32 @@ static int stage2_unmap_walker(const struct kvm_pgtable_visit_ctx *ctx,
 
 int kvm_pgtable_stage2_unmap(struct kvm_pgtable *pgt, u64 addr, u64 size)
 {
+	int ret;
+	struct stage2_unmap_data unmap_data = {
+		.pgt = pgt,
+		/*
+		 * If FEAT_TLBIRANGE is implemented, defer the individual PTE
+		 * TLB invalidations until the entire walk is finished, and
+		 * then use the range-based TLBI instructions to do the
+		 * invalidations. Condition this upon S2FWB in order to avoid
+		 * a page-table walk again to perform the CMOs after TLBI.
+		 */
+		.skip_pte_tlbis = system_supports_tlb_range() &&
+					stage2_has_fwb(pgt),
+	};
 	struct kvm_pgtable_walker walker = {
 		.cb	= stage2_unmap_walker,
-		.arg	= pgt,
+		.arg	= &unmap_data,
 		.flags	= KVM_PGTABLE_WALK_LEAF | KVM_PGTABLE_WALK_TABLE_POST,
 	};
 
-	return kvm_pgtable_walk(pgt, addr, size, &walker);
+	ret = kvm_pgtable_walk(pgt, addr, size, &walker);
+	if (unmap_data.skip_pte_tlbis)
+		/* Perform the deferred TLB invalidations */
+		kvm_call_hyp(__kvm_tlb_flush_vmid_range, pgt->mmu,
+				addr, addr + size);
+
+	return ret;
 }
 
 struct stage2_attr_data {
-- 
2.40.0.634.g4ca3ef3211-goog
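
To make the shape of the optimization concrete outside the KVM tree, a
minimal stand-alone sketch follows. This is not kernel code:
unmap_range(), clear_pte(), tlbi_single(), tlbi_range(), and the two
feature probes are hypothetical stand-ins for the stage-2 walker,
stage2_put_pte(), the per-IPA invalidation helper, and
__kvm_tlb_flush_vmid_range(). Only the control flow carries over:
either every cleared PTE pays for its own invalidate plus barrier, or
the walk defers them all and a single range invalidation runs at the
end.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define PAGE_SIZE	4096ULL

/* Hypothetical feature probes; real detection reads ID registers. */
static bool has_tlbirange(void) { return true; }	/* FEAT_TLBIRANGE */
static bool has_s2fwb(void)     { return true; }	/* FEAT_S2FWB */

static void clear_pte(uint64_t ipa)
{
	(void)ipa;	/* stand-in for dropping the stage-2 mapping */
}

static void tlbi_single(uint64_t ipa)
{
	printf("TLBI + DSB for IPA 0x%llx\n", (unsigned long long)ipa);
}

static void tlbi_range(uint64_t start, uint64_t end)
{
	printf("one range TLBI for [0x%llx, 0x%llx)\n",
	       (unsigned long long)start, (unsigned long long)end);
}

/*
 * Without the optimization, every unmapped page pays for a TLBI and a
 * DSB inside the loop; with it, the loop only clears PTEs and a single
 * range-based invalidation runs once after the walk.
 */
static void unmap_range(uint64_t addr, uint64_t size)
{
	bool skip_pte_tlbis = has_tlbirange() && has_s2fwb();
	uint64_t ipa;

	for (ipa = addr; ipa < addr + size; ipa += PAGE_SIZE) {
		clear_pte(ipa);
		if (!skip_pte_tlbis)
			tlbi_single(ipa);	/* old per-PTE behavior */
	}

	if (skip_pte_tlbis)
		tlbi_range(addr, addr + size);	/* deferred invalidation */
}

int main(void)
{
	unmap_range(0x80000000ULL, 16 * PAGE_SIZE);
	return 0;
}

The guard mirrors the patch's reasoning: FEAT_TLBIRANGE makes one
range invalidation as good as many per-PTE ones, and FEAT_S2FWB
removes the need for cache maintenance that would otherwise force a
second page-table walk after the deferred TLBI.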