From: David Brazdil
To: Marc Zyngier, Will Deacon, Catalin Marinas, James Morse, Julien Thierry, Suzuki K Poulose
Cc: kvmarm@lists.cs.columbia.edu, linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org, android-kvm@google.com, kernel-team@android.com, David Brazdil
Subject: [PATCH v3 07/15] arm64: kvm: Split hyp/tlb.c to VHE/nVHE
Date: Thu, 18 Jun 2020 13:25:29 +0100
Message-Id: <20200618122537.9625-8-dbrazdil@google.com>
In-Reply-To: <20200618122537.9625-1-dbrazdil@google.com>
References: <20200618122537.9625-1-dbrazdil@google.com>

This patch is part of a series which builds KVM's non-VHE hyp code separately
from VHE and the rest of the kernel.

tlb.c contains code for flushing the TLB, with parts shared between VHE and
nVHE. These common routines are moved into a new header file, tlb.h;
VHE-specific code remains in tlb.c and nVHE-specific code is moved to
nvhe/tlb.c. The header file expects its users to implement the two helper
functions declared at the top of the file.

Signed-off-by: David Brazdil
---
 arch/arm64/kernel/image-vars.h   |   8 +-
 arch/arm64/kvm/hyp/nvhe/Makefile |   2 +-
 arch/arm64/kvm/hyp/nvhe/tlb.c    |  70 +++++++++++++
 arch/arm64/kvm/hyp/tlb.c         | 171 +++----------------------------
 arch/arm64/kvm/hyp/tlb.h         | 134 ++++++++++++++++++++++++
 5 files changed, 222 insertions(+), 163 deletions(-)
 create mode 100644 arch/arm64/kvm/hyp/nvhe/tlb.c
 create mode 100644 arch/arm64/kvm/hyp/tlb.h
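Not part of the patch, just an illustrative aside for readers skimming the series:
the commit message's key idea is a header that declares two static helpers and
builds shared inline flush routines on top of them, leaving each including file
to supply its own helper definitions. Below is a minimal, self-contained
user-space C sketch of that pattern. Every name in it (inv_context,
switch_to_guest, switch_to_host, flush_vmid) is invented purely for illustration
and does not exist in the kernel sources; the real interface is exactly what the
diff adds in tlb.h.

/*
 * Standalone sketch with hypothetical names, not kernel code: a shared
 * "header" section declares a context struct and two static helpers and
 * defines an inline flush routine on top of them; the "includer" section
 * supplies the helper definitions, as nvhe/tlb.c and tlb.c each do for
 * the real tlb.h.
 */
#include <stdio.h>

/* ----- what would live in the shared header ----- */

struct inv_context {
        int saved_state;        /* stands in for the saved TCR/SCTLR/flags */
};

/* Declared only; every file including the header must define these. */
static void switch_to_guest(struct inv_context *cxt);
static void switch_to_host(struct inv_context *cxt);

static inline void flush_vmid(void)
{
        struct inv_context cxt;

        switch_to_guest(&cxt);          /* binds to the includer's version */
        printf("  invalidate TLB entries for the current VMID\n");
        switch_to_host(&cxt);
}

/* ----- what would live in one includer (cf. nvhe/tlb.c) ----- */

static void switch_to_guest(struct inv_context *cxt)
{
        cxt->saved_state = 42;
        printf("switch to guest context (nVHE-style)\n");
}

static void switch_to_host(struct inv_context *cxt)
{
        printf("switch back to host, restoring state %d\n", cxt->saved_state);
}

int main(void)
{
        flush_vmid();
        return 0;
}

Compiled as an ordinary program (cc sketch.c && ./a.out), the header-style
inline routine binds to whatever helper definitions the including translation
unit provides, which is how tlb.c and nvhe/tlb.c each get their own specialised
guest/host switching around the common flush sequences in tlb.h.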
diff --git a/arch/arm64/kernel/image-vars.h b/arch/arm64/kernel/image-vars.h
index 4dc969ccda9e..e8a8aa6bc7bd 100644
--- a/arch/arm64/kernel/image-vars.h
+++ b/arch/arm64/kernel/image-vars.h
@@ -63,13 +63,10 @@ __efistub__ctype		= _ctype;
 
 __kvm_nvhe___guest_exit = __guest_exit;
 __kvm_nvhe___hyp_stub_vectors = __hyp_stub_vectors;
+__kvm_nvhe___icache_flags = __icache_flags;
 __kvm_nvhe___kvm_enable_ssbs = __kvm_enable_ssbs;
-__kvm_nvhe___kvm_flush_vm_context = __kvm_flush_vm_context;
 __kvm_nvhe___kvm_get_mdcr_el2 = __kvm_get_mdcr_el2;
 __kvm_nvhe___kvm_timer_set_cntvoff = __kvm_timer_set_cntvoff;
-__kvm_nvhe___kvm_tlb_flush_local_vmid = __kvm_tlb_flush_local_vmid;
-__kvm_nvhe___kvm_tlb_flush_vmid = __kvm_tlb_flush_vmid;
-__kvm_nvhe___kvm_tlb_flush_vmid_ipa = __kvm_tlb_flush_vmid_ipa;
 __kvm_nvhe___kvm_vcpu_run_nvhe = __kvm_vcpu_run_nvhe;
 __kvm_nvhe___vgic_v3_get_ich_vtr_el2 = __vgic_v3_get_ich_vtr_el2;
 __kvm_nvhe___vgic_v3_init_lrs = __vgic_v3_init_lrs;
@@ -79,8 +76,11 @@ __kvm_nvhe___vgic_v3_save_aprs = __vgic_v3_save_aprs;
 __kvm_nvhe___vgic_v3_write_vmcr = __vgic_v3_write_vmcr;
 __kvm_nvhe_abort_guest_exit_end = abort_guest_exit_end;
 __kvm_nvhe_abort_guest_exit_start = abort_guest_exit_start;
+__kvm_nvhe_arm64_const_caps_ready = arm64_const_caps_ready;
 __kvm_nvhe_arm64_enable_wa2_handling = arm64_enable_wa2_handling;
 __kvm_nvhe_arm64_ssbd_callback_required = arm64_ssbd_callback_required;
+__kvm_nvhe_cpu_hwcap_keys = cpu_hwcap_keys;
+__kvm_nvhe_cpu_hwcaps = cpu_hwcaps;
 __kvm_nvhe_hyp_panic = hyp_panic;
 __kvm_nvhe_idmap_t0sz = idmap_t0sz;
 __kvm_nvhe_kimage_voffset = kimage_voffset;
diff --git a/arch/arm64/kvm/hyp/nvhe/Makefile b/arch/arm64/kvm/hyp/nvhe/Makefile
index fef6f1881765..3bfc51de1679 100644
--- a/arch/arm64/kvm/hyp/nvhe/Makefile
+++ b/arch/arm64/kvm/hyp/nvhe/Makefile
@@ -7,7 +7,7 @@ asflags-y := -D__KVM_NVHE_HYPERVISOR__
 ccflags-y := -D__KVM_NVHE_HYPERVISOR__ -fno-stack-protector \
 	     -DDISABLE_BRANCH_PROFILING $(DISABLE_STACKLEAK_PLUGIN)
 
-obj-y := hyp-init.o ../hyp-entry.o
+obj-y := tlb.o hyp-init.o ../hyp-entry.o
 
 obj-y := $(patsubst %.o,%.hyp.o,$(obj-y))
 extra-y := $(patsubst %.hyp.o,%.hyp.tmp.o,$(obj-y))
diff --git a/arch/arm64/kvm/hyp/nvhe/tlb.c b/arch/arm64/kvm/hyp/nvhe/tlb.c
new file mode 100644
index 000000000000..111c4b0a23d3
--- /dev/null
+++ b/arch/arm64/kvm/hyp/nvhe/tlb.c
@@ -0,0 +1,70 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Copyright (C) 2015 - ARM Ltd
+ * Author: Marc Zyngier
+ */
+
+#include <linux/irqflags.h>
+
+#include <asm/kvm_hyp.h>
+#include <asm/kvm_mmu.h>
+#include <asm/tlbflush.h>
+
+#include "../tlb.h"
+
+static void __hyp_text __tlb_switch_to_guest(struct kvm *kvm,
+					     struct tlb_inv_context *cxt)
+{
+	if (cpus_have_final_cap(ARM64_WORKAROUND_SPECULATIVE_AT)) {
+		u64 val;
+
+		/*
+		 * For CPUs that are affected by ARM 1319367, we need to
+		 * avoid a host Stage-1 walk while we have the guest's
+		 * VMID set in the VTTBR in order to invalidate TLBs.
+		 * We're guaranteed that the S1 MMU is enabled, so we can
+		 * simply set the EPD bits to avoid any further TLB fill.
+		 */
+		val = cxt->tcr = read_sysreg_el1(SYS_TCR);
+		val |= TCR_EPD1_MASK | TCR_EPD0_MASK;
+		write_sysreg_el1(val, SYS_TCR);
+		isb();
+	}
+
+	/* __load_guest_stage2() includes an ISB for the workaround. */
+	__load_guest_stage2(kvm);
+	asm(ALTERNATIVE("isb", "nop", ARM64_WORKAROUND_SPECULATIVE_AT));
+}
+
+static void __hyp_text __tlb_switch_to_host(struct kvm *kvm,
+					    struct tlb_inv_context *cxt)
+{
+	write_sysreg(0, vttbr_el2);
+
+	if (cpus_have_final_cap(ARM64_WORKAROUND_SPECULATIVE_AT)) {
+		/* Ensure write of the host VMID */
+		isb();
+		/* Restore the host's TCR_EL1 */
+		write_sysreg_el1(cxt->tcr, SYS_TCR);
+	}
+}
+
+void __hyp_text __kvm_tlb_flush_vmid_ipa(struct kvm *kvm, phys_addr_t ipa)
+{
+	__tlb_flush_vmid_ipa(kvm, ipa);
+}
+
+void __hyp_text __kvm_tlb_flush_vmid(struct kvm *kvm)
+{
+	__tlb_flush_vmid(kvm);
+}
+
+void __hyp_text __kvm_tlb_flush_local_vmid(struct kvm_vcpu *vcpu)
+{
+	__tlb_flush_local_vmid(vcpu);
+}
+
+void __hyp_text __kvm_flush_vm_context(void)
+{
+	__tlb_flush_vm_context();
+}
diff --git a/arch/arm64/kvm/hyp/tlb.c b/arch/arm64/kvm/hyp/tlb.c
index d063a576d511..4e190f8c7e9c 100644
--- a/arch/arm64/kvm/hyp/tlb.c
+++ b/arch/arm64/kvm/hyp/tlb.c
@@ -10,14 +10,10 @@
 #include <asm/kvm_mmu.h>
 #include <asm/tlbflush.h>
 
-struct tlb_inv_context {
-	unsigned long flags;
-	u64 tcr;
-	u64 sctlr;
-};
+#include "tlb.h"
 
-static void __hyp_text __tlb_switch_to_guest_vhe(struct kvm *kvm,
-						 struct tlb_inv_context *cxt)
+static void __hyp_text __tlb_switch_to_guest(struct kvm *kvm,
+					     struct tlb_inv_context *cxt)
 {
 	u64 val;
 
@@ -60,41 +56,8 @@ static void __hyp_text __tlb_switch_to_guest_vhe(struct kvm *kvm,
 	isb();
 }
 
-static void __hyp_text __tlb_switch_to_guest_nvhe(struct kvm *kvm,
-						  struct tlb_inv_context *cxt)
-{
-	if (cpus_have_final_cap(ARM64_WORKAROUND_SPECULATIVE_AT)) {
-		u64 val;
-
-		/*
-		 * For CPUs that are affected by ARM 1319367, we need to
-		 * avoid a host Stage-1 walk while we have the guest's
-		 * VMID set in the VTTBR in order to invalidate TLBs.
-		 * We're guaranteed that the S1 MMU is enabled, so we can
-		 * simply set the EPD bits to avoid any further TLB fill.
-		 */
-		val = cxt->tcr = read_sysreg_el1(SYS_TCR);
-		val |= TCR_EPD1_MASK | TCR_EPD0_MASK;
-		write_sysreg_el1(val, SYS_TCR);
-		isb();
-	}
-
-	/* __load_guest_stage2() includes an ISB for the workaround. */
-	__load_guest_stage2(kvm);
-	asm(ALTERNATIVE("isb", "nop", ARM64_WORKAROUND_SPECULATIVE_AT));
-}
-
-static void __hyp_text __tlb_switch_to_guest(struct kvm *kvm,
-					     struct tlb_inv_context *cxt)
-{
-	if (has_vhe())
-		__tlb_switch_to_guest_vhe(kvm, cxt);
-	else
-		__tlb_switch_to_guest_nvhe(kvm, cxt);
-}
-
-static void __hyp_text __tlb_switch_to_host_vhe(struct kvm *kvm,
-						struct tlb_inv_context *cxt)
+static void __hyp_text __tlb_switch_to_host(struct kvm *kvm,
+					    struct tlb_inv_context *cxt)
 {
 	/*
 	 * We're done with the TLB operation, let's restore the host's
@@ -113,130 +76,22 @@ static void __hyp_text __tlb_switch_to_host_vhe(struct kvm *kvm,
 	local_irq_restore(cxt->flags);
 }
 
-static void __hyp_text __tlb_switch_to_host_nvhe(struct kvm *kvm,
-						 struct tlb_inv_context *cxt)
-{
-	write_sysreg(0, vttbr_el2);
-
-	if (cpus_have_final_cap(ARM64_WORKAROUND_SPECULATIVE_AT)) {
-		/* Ensure write of the host VMID */
-		isb();
-		/* Restore the host's TCR_EL1 */
-		write_sysreg_el1(cxt->tcr, SYS_TCR);
-	}
-}
-
-static void __hyp_text __tlb_switch_to_host(struct kvm *kvm,
-					    struct tlb_inv_context *cxt)
-{
-	if (has_vhe())
-		__tlb_switch_to_host_vhe(kvm, cxt);
-	else
-		__tlb_switch_to_host_nvhe(kvm, cxt);
-}
-
-void __hyp_text __kvm_tlb_flush_vmid_ipa(struct kvm *kvm, phys_addr_t ipa)
+void __kvm_tlb_flush_vmid_ipa(struct kvm *kvm, phys_addr_t ipa)
 {
-	struct tlb_inv_context cxt;
-
-	dsb(ishst);
-
-	/* Switch to requested VMID */
-	kvm = kern_hyp_va(kvm);
-	__tlb_switch_to_guest(kvm, &cxt);
-
-	/*
-	 * We could do so much better if we had the VA as well.
-	 * Instead, we invalidate Stage-2 for this IPA, and the
-	 * whole of Stage-1. Weep...
-	 */
-	ipa >>= 12;
-	__tlbi(ipas2e1is, ipa);
-
-	/*
-	 * We have to ensure completion of the invalidation at Stage-2,
-	 * since a table walk on another CPU could refill a TLB with a
-	 * complete (S1 + S2) walk based on the old Stage-2 mapping if
-	 * the Stage-1 invalidation happened first.
-	 */
-	dsb(ish);
-	__tlbi(vmalle1is);
-	dsb(ish);
-	isb();
-
-	/*
-	 * If the host is running at EL1 and we have a VPIPT I-cache,
-	 * then we must perform I-cache maintenance at EL2 in order for
-	 * it to have an effect on the guest. Since the guest cannot hit
-	 * I-cache lines allocated with a different VMID, we don't need
-	 * to worry about junk out of guest reset (we nuke the I-cache on
-	 * VMID rollover), but we do need to be careful when remapping
-	 * executable pages for the same guest. This can happen when KSM
-	 * takes a CoW fault on an executable page, copies the page into
-	 * a page that was previously mapped in the guest and then needs
-	 * to invalidate the guest view of the I-cache for that page
-	 * from EL1. To solve this, we invalidate the entire I-cache when
-	 * unmapping a page from a guest if we have a VPIPT I-cache but
-	 * the host is running at EL1. As above, we could do better if
-	 * we had the VA.
-	 *
-	 * The moral of this story is: if you have a VPIPT I-cache, then
-	 * you should be running with VHE enabled.
-	 */
-	if (!has_vhe() && icache_is_vpipt())
-		__flush_icache_all();
-
-	__tlb_switch_to_host(kvm, &cxt);
+	__tlb_flush_vmid_ipa(kvm, ipa);
 }
 
-void __hyp_text __kvm_tlb_flush_vmid(struct kvm *kvm)
+void __kvm_tlb_flush_vmid(struct kvm *kvm)
 {
-	struct tlb_inv_context cxt;
-
-	dsb(ishst);
-
-	/* Switch to requested VMID */
-	kvm = kern_hyp_va(kvm);
-	__tlb_switch_to_guest(kvm, &cxt);
-
-	__tlbi(vmalls12e1is);
-	dsb(ish);
-	isb();
-
-	__tlb_switch_to_host(kvm, &cxt);
+	__tlb_flush_vmid(kvm);
 }
 
-void __hyp_text __kvm_tlb_flush_local_vmid(struct kvm_vcpu *vcpu)
+void __kvm_tlb_flush_local_vmid(struct kvm_vcpu *vcpu)
 {
-	struct kvm *kvm = kern_hyp_va(kern_hyp_va(vcpu)->kvm);
-	struct tlb_inv_context cxt;
-
-	/* Switch to requested VMID */
-	__tlb_switch_to_guest(kvm, &cxt);
-
-	__tlbi(vmalle1);
-	dsb(nsh);
-	isb();
-
-	__tlb_switch_to_host(kvm, &cxt);
+	__tlb_flush_local_vmid(vcpu);
 }
 
-void __hyp_text __kvm_flush_vm_context(void)
+void __kvm_flush_vm_context(void)
 {
-	dsb(ishst);
-	__tlbi(alle1is);
-
-	/*
-	 * VIPT and PIPT caches are not affected by VMID, so no maintenance
-	 * is necessary across a VMID rollover.
-	 *
-	 * VPIPT caches constrain lookup and maintenance to the active VMID,
-	 * so we need to invalidate lines with a stale VMID to avoid an ABA
-	 * race after multiple rollovers.
-	 *
-	 */
-	if (icache_is_vpipt())
-		asm volatile("ic ialluis");
-
-	dsb(ish);
+	__tlb_flush_vm_context();
 }
diff --git a/arch/arm64/kvm/hyp/tlb.h b/arch/arm64/kvm/hyp/tlb.h
new file mode 100644
index 000000000000..841ef400c8ec
--- /dev/null
+++ b/arch/arm64/kvm/hyp/tlb.h
@@ -0,0 +1,134 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Copyright (C) 2015 - ARM Ltd
+ * Author: Marc Zyngier
+ */
+
+#ifndef __ARM64_KVM_HYP_TLB_H__
+#define __ARM64_KVM_HYP_TLB_H__
+
+#include <linux/irqflags.h>
+
+#include <asm/kvm_hyp.h>
+#include <asm/kvm_mmu.h>
+#include <asm/tlbflush.h>
+
+struct tlb_inv_context {
+	unsigned long flags;
+	u64 tcr;
+	u64 sctlr;
+};
+
+static void __hyp_text __tlb_switch_to_guest(struct kvm *kvm,
+					     struct tlb_inv_context *cxt);
+static void __hyp_text __tlb_switch_to_host(struct kvm *kvm,
+					    struct tlb_inv_context *cxt);
+
+static inline void __hyp_text
+__tlb_flush_vmid_ipa(struct kvm *kvm, phys_addr_t ipa)
+{
+	struct tlb_inv_context cxt;
+
+	dsb(ishst);
+
+	/* Switch to requested VMID */
+	kvm = kern_hyp_va(kvm);
+	__tlb_switch_to_guest(kvm, &cxt);
+
+	/*
+	 * We could do so much better if we had the VA as well.
+	 * Instead, we invalidate Stage-2 for this IPA, and the
+	 * whole of Stage-1. Weep...
+	 */
+	ipa >>= 12;
+	__tlbi(ipas2e1is, ipa);
+
+	/*
+	 * We have to ensure completion of the invalidation at Stage-2,
+	 * since a table walk on another CPU could refill a TLB with a
+	 * complete (S1 + S2) walk based on the old Stage-2 mapping if
+	 * the Stage-1 invalidation happened first.
+	 */
+	dsb(ish);
+	__tlbi(vmalle1is);
+	dsb(ish);
+	isb();
+
+	/*
+	 * If the host is running at EL1 and we have a VPIPT I-cache,
+	 * then we must perform I-cache maintenance at EL2 in order for
+	 * it to have an effect on the guest. Since the guest cannot hit
+	 * I-cache lines allocated with a different VMID, we don't need
+	 * to worry about junk out of guest reset (we nuke the I-cache on
+	 * VMID rollover), but we do need to be careful when remapping
+	 * executable pages for the same guest. This can happen when KSM
+	 * takes a CoW fault on an executable page, copies the page into
+	 * a page that was previously mapped in the guest and then needs
+	 * to invalidate the guest view of the I-cache for that page
+	 * from EL1. To solve this, we invalidate the entire I-cache when
+	 * unmapping a page from a guest if we have a VPIPT I-cache but
+	 * the host is running at EL1. As above, we could do better if
+	 * we had the VA.
+	 *
+	 * The moral of this story is: if you have a VPIPT I-cache, then
+	 * you should be running with VHE enabled.
+	 */
+	if (!has_vhe() && icache_is_vpipt())
+		__flush_icache_all();
+
+	__tlb_switch_to_host(kvm, &cxt);
+}
+
+static inline void __hyp_text __tlb_flush_vmid(struct kvm *kvm)
+{
+	struct tlb_inv_context cxt;
+
+	dsb(ishst);
+
+	/* Switch to requested VMID */
+	kvm = kern_hyp_va(kvm);
+	__tlb_switch_to_guest(kvm, &cxt);
+
+	__tlbi(vmalls12e1is);
+	dsb(ish);
+	isb();
+
+	__tlb_switch_to_host(kvm, &cxt);
+}
+
+static inline void __hyp_text __tlb_flush_local_vmid(struct kvm_vcpu *vcpu)
+{
+	struct kvm *kvm = kern_hyp_va(kern_hyp_va(vcpu)->kvm);
+	struct tlb_inv_context cxt;
+
+	/* Switch to requested VMID */
+	__tlb_switch_to_guest(kvm, &cxt);
+
+	__tlbi(vmalle1);
+	dsb(nsh);
+	isb();
+
+	__tlb_switch_to_host(kvm, &cxt);
+}
+
+static inline void __hyp_text __tlb_flush_vm_context(void)
+{
+	dsb(ishst);
+	__tlbi(alle1is);
+
+	/*
+	 * VIPT and PIPT caches are not affected by VMID, so no maintenance
+	 * is necessary across a VMID rollover.
+	 *
+	 * VPIPT caches constrain lookup and maintenance to the active VMID,
+	 * so we need to invalidate lines with a stale VMID to avoid an ABA
+	 * race after multiple rollovers.
+	 *
+	 */
+	if (icache_is_vpipt())
+		asm volatile("ic ialluis");
+
+	dsb(ish);
+}
+
+#endif /* __ARM64_KVM_HYP_TLB_H__ */
-- 
2.27.0