Date: Tue, 2 Mar 2021 14:59:59 +0000
In-Reply-To: <20210302150002.3685113-1-qperret@google.com>
Message-Id: <20210302150002.3685113-30-qperret@google.com>
Mime-Version: 1.0
References: <20210302150002.3685113-1-qperret@google.com>
X-Mailer: git-send-email 2.30.1.766.gb4fecdf3b7-goog
Subject: [PATCH v3 29/32] KVM: arm64: Wrap the host with a stage 2
From: Quentin Perret <qperret@google.com>
To: catalin.marinas@arm.com, will@kernel.org, maz@kernel.org,
	james.morse@arm.com, julien.thierry.kdev@gmail.com,
	suzuki.poulose@arm.com
Cc: android-kvm@google.com, linux-kernel@vger.kernel.org,
	kernel-team@android.com, kvmarm@lists.cs.columbia.edu,
	linux-arm-kernel@lists.infradead.org, tabba@google.com,
	mark.rutland@arm.com, dbrazdil@google.com, mate.toth-pal@arm.com,
	seanjc@google.com, qperret@google.com, robh+dt@kernel.org
Content-Type: text/plain; charset="UTF-8"

When KVM runs in protected nVHE mode, make use of a stage 2 page-table
to give the hypervisor some control over the host memory accesses. The
host stage 2 is created lazily using large block mappings if possible,
and will default to page mappings in the absence of a better solution.

From this point on, memory accesses from the host to protected memory
regions (e.g. marked PROT_NONE) are fatal and lead to hyp_panic().
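To make the range-based strategy above concrete, here is a minimal,
self-contained userspace model of the binary search performed by
find_mem_range() in the patch below. It is only an illustrative sketch:
the region list and the probed address are made-up stand-ins, and the
kernel types (phys_addr_t, struct memblock_region, hyp_memory[]) are
replaced with plain C equivalents.

#include <inttypes.h>
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

struct mem_region { uint64_t base, size; };

/* Sorted, non-overlapping, like hyp_memory[] in the patch (made up). */
static const struct mem_region memory[] = {
	{ 0x40000000, 0x40000000 },
	{ 0xc0000000, 0x20000000 },
};
static const int nr_regions = sizeof(memory) / sizeof(memory[0]);

/* Mirrors find_mem_range(): narrow [start, end) around addr. */
static bool find_mem_range(uint64_t addr, uint64_t *start, uint64_t *end)
{
	int left = 0, right = nr_regions;

	*start = 0;
	*end = UINT64_MAX;
	while (left < right) {
		int cur = (left + right) / 2;
		uint64_t rend = memory[cur].base + memory[cur].size;

		if (addr < memory[cur].base) {
			/* addr is left of this region: gap ends at its base. */
			right = cur;
			*end = memory[cur].base;
		} else if (addr >= rend) {
			/* addr is right of this region: gap starts at its end. */
			left = cur + 1;
			*start = rend;
		} else {
			/* Hit: return the enclosing memory region. */
			*start = memory[cur].base;
			*end = rend;
			return true;
		}
	}
	return false;
}

int main(void)
{
	uint64_t start, end;
	bool is_memory = find_mem_range(0x41230000, &start, &end);

	/* The real code idmaps [start, end) RWX for memory, RW for MMIO. */
	printf("%s range: [%#" PRIx64 ", %#" PRIx64 ")\n",
	       is_memory ? "memory" : "mmio", start, end);
	return 0;
}

The point to note is that a miss does not fail outright: it still
narrows the range to the gap between the two neighbouring memblock
regions, which is what allows non-memory (MMIO) space to be idmapped in
large chunks as well.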
Signed-off-by: Quentin Perret <qperret@google.com>
---
 arch/arm64/include/asm/kvm_asm.h              |   1 +
 arch/arm64/include/asm/kvm_cpufeature.h       |   2 +
 arch/arm64/kernel/image-vars.h                |   3 +
 arch/arm64/kvm/arm.c                          |  10 +
 arch/arm64/kvm/hyp/include/nvhe/mem_protect.h |  34 +++
 arch/arm64/kvm/hyp/nvhe/Makefile              |   2 +-
 arch/arm64/kvm/hyp/nvhe/hyp-init.S            |   1 +
 arch/arm64/kvm/hyp/nvhe/hyp-main.c            |  11 +
 arch/arm64/kvm/hyp/nvhe/mem_protect.c         | 213 ++++++++++++++++++
 arch/arm64/kvm/hyp/nvhe/setup.c               |   5 +
 arch/arm64/kvm/hyp/nvhe/switch.c              |   7 +-
 arch/arm64/kvm/hyp/nvhe/tlb.c                 |   4 +-
 12 files changed, 286 insertions(+), 7 deletions(-)
 create mode 100644 arch/arm64/kvm/hyp/include/nvhe/mem_protect.h
 create mode 100644 arch/arm64/kvm/hyp/nvhe/mem_protect.c

diff --git a/arch/arm64/include/asm/kvm_asm.h b/arch/arm64/include/asm/kvm_asm.h
index 6dce860f8bca..b127af02bd45 100644
--- a/arch/arm64/include/asm/kvm_asm.h
+++ b/arch/arm64/include/asm/kvm_asm.h
@@ -61,6 +61,7 @@
 #define __KVM_HOST_SMCCC_FUNC___pkvm_create_mappings		16
 #define __KVM_HOST_SMCCC_FUNC___pkvm_create_private_mapping	17
 #define __KVM_HOST_SMCCC_FUNC___pkvm_cpu_set_vector		18
+#define __KVM_HOST_SMCCC_FUNC___pkvm_prot_finalize		19
 
 #ifndef __ASSEMBLY__
 
diff --git a/arch/arm64/include/asm/kvm_cpufeature.h b/arch/arm64/include/asm/kvm_cpufeature.h
index d34f85cba358..74043a149322 100644
--- a/arch/arm64/include/asm/kvm_cpufeature.h
+++ b/arch/arm64/include/asm/kvm_cpufeature.h
@@ -15,3 +15,5 @@
 #endif
 
 KVM_HYP_CPU_FTR_REG(SYS_CTR_EL0, arm64_ftr_reg_ctrel0)
+KVM_HYP_CPU_FTR_REG(SYS_ID_AA64MMFR0_EL1, arm64_ftr_reg_id_aa64mmfr0_el1)
+KVM_HYP_CPU_FTR_REG(SYS_ID_AA64MMFR1_EL1, arm64_ftr_reg_id_aa64mmfr1_el1)
diff --git a/arch/arm64/kernel/image-vars.h b/arch/arm64/kernel/image-vars.h
index 940c378fa837..d5dc2b792651 100644
--- a/arch/arm64/kernel/image-vars.h
+++ b/arch/arm64/kernel/image-vars.h
@@ -131,6 +131,9 @@ KVM_NVHE_ALIAS(__hyp_bss_end);
 KVM_NVHE_ALIAS(__hyp_rodata_start);
 KVM_NVHE_ALIAS(__hyp_rodata_end);
 
+/* pKVM static key */
+KVM_NVHE_ALIAS(kvm_protected_mode_initialized);
+
 #endif /* CONFIG_KVM */
 
 #endif /* __ARM64_KERNEL_IMAGE_VARS_H */
diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index b6a818f88051..a31c56bc55b3 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -1889,12 +1889,22 @@ static int init_hyp_mode(void)
 	return err;
 }
 
+void _kvm_host_prot_finalize(void *discard)
+{
+	WARN_ON(kvm_call_hyp_nvhe(__pkvm_prot_finalize));
+}
+
 static int finalize_hyp_mode(void)
 {
 	if (!is_protected_kvm_enabled())
 		return 0;
 
+	/*
+	 * Flip the static key upfront as that may no longer be possible
+	 * once the host stage 2 is installed.
+	 */
 	static_branch_enable(&kvm_protected_mode_initialized);
+	on_each_cpu(_kvm_host_prot_finalize, NULL, 1);
 
 	return 0;
 }
diff --git a/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h b/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h
new file mode 100644
index 000000000000..d293cb328cc4
--- /dev/null
+++ b/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h
@@ -0,0 +1,34 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * Copyright (C) 2020 Google LLC
+ * Author: Quentin Perret <qperret@google.com>
+ */
+
+#ifndef __KVM_NVHE_MEM_PROTECT__
+#define __KVM_NVHE_MEM_PROTECT__
+#include <linux/kvm_host.h>
+#include <asm/kvm_hyp.h>
+#include <asm/kvm_pgtable.h>
+#include <asm/virt.h>
+#include <nvhe/spinlock.h>
+
+struct host_kvm {
+	struct kvm_arch arch;
+	struct kvm_pgtable pgt;
+	struct kvm_pgtable_mm_ops mm_ops;
+	hyp_spinlock_t lock;
+};
+extern struct host_kvm host_kvm;
+
+int __pkvm_prot_finalize(void);
+int kvm_host_prepare_stage2(void *mem_pgt_pool, void *dev_pgt_pool);
+void handle_host_mem_abort(struct kvm_cpu_context *host_ctxt);
+
+static __always_inline void __load_host_stage2(void)
+{
+	if (static_branch_likely(&kvm_protected_mode_initialized))
+		__load_stage2(&host_kvm.arch.mmu, host_kvm.arch.vtcr);
+	else
+		write_sysreg(0, vttbr_el2);
+}
+#endif /* __KVM_NVHE_MEM_PROTECT__ */
diff --git a/arch/arm64/kvm/hyp/nvhe/Makefile b/arch/arm64/kvm/hyp/nvhe/Makefile
index e204ea77ab27..ce49795324a7 100644
--- a/arch/arm64/kvm/hyp/nvhe/Makefile
+++ b/arch/arm64/kvm/hyp/nvhe/Makefile
@@ -14,7 +14,7 @@ lib-objs := $(addprefix ../../../lib/, $(lib-objs))
 
 obj-y := timer-sr.o sysreg-sr.o debug-sr.o switch.o tlb.o hyp-init.o host.o \
 	 hyp-main.o hyp-smp.o psci-relay.o early_alloc.o stub.o page_alloc.o \
-	 cache.o cpufeature.o setup.o mm.o
+	 cache.o cpufeature.o setup.o mm.o mem_protect.o
 obj-y += ../vgic-v3-sr.o ../aarch32.o ../vgic-v2-cpuif-proxy.o ../entry.o \
 	 ../fpsimd.o ../hyp-entry.o ../exception.o ../pgtable.o
 obj-y += $(lib-objs)
diff --git a/arch/arm64/kvm/hyp/nvhe/hyp-init.S b/arch/arm64/kvm/hyp/nvhe/hyp-init.S
index f312672d895e..6fa01b04954f 100644
--- a/arch/arm64/kvm/hyp/nvhe/hyp-init.S
+++ b/arch/arm64/kvm/hyp/nvhe/hyp-init.S
@@ -119,6 +119,7 @@ alternative_else_nop_endif
 
 	/* Invalidate the stale TLBs from Bootloader */
 	tlbi	alle2
+	tlbi	vmalls12e1
 	dsb	sy
 
 	/*
diff --git a/arch/arm64/kvm/hyp/nvhe/hyp-main.c b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
index ae6503c9be15..f47028d3fd0a 100644
--- a/arch/arm64/kvm/hyp/nvhe/hyp-main.c
+++ b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
@@ -13,6 +13,7 @@
 #include <asm/kvm_hyp.h>
 #include <asm/kvm_mmu.h>
 
+#include <nvhe/mem_protect.h>
 #include <nvhe/mm.h>
 #include <nvhe/trap_handler.h>
 
@@ -151,6 +152,10 @@ static void handle___pkvm_create_private_mapping(struct kvm_cpu_context *host_ct
 	cpu_reg(host_ctxt, 1) = __pkvm_create_private_mapping(phys, size, prot);
 }
 
+static void handle___pkvm_prot_finalize(struct kvm_cpu_context *host_ctxt)
+{
+	cpu_reg(host_ctxt, 1) = __pkvm_prot_finalize();
+}
 typedef void (*hcall_t)(struct kvm_cpu_context *);
 
 #define HANDLE_FUNC(x)	[__KVM_HOST_SMCCC_FUNC_##x] = (hcall_t)handle_##x
@@ -174,6 +179,7 @@ static const hcall_t host_hcall[] = {
 	HANDLE_FUNC(__pkvm_cpu_set_vector),
 	HANDLE_FUNC(__pkvm_create_mappings),
 	HANDLE_FUNC(__pkvm_create_private_mapping),
+	HANDLE_FUNC(__pkvm_prot_finalize),
 };
 
 static void handle_host_hcall(struct kvm_cpu_context *host_ctxt)
@@ -226,6 +232,11 @@ void handle_trap(struct kvm_cpu_context *host_ctxt)
 	case ESR_ELx_EC_SMC64:
 		handle_host_smc(host_ctxt);
 		break;
+	case ESR_ELx_EC_IABT_LOW:
+		fallthrough;
+	case ESR_ELx_EC_DABT_LOW:
+		handle_host_mem_abort(host_ctxt);
+		break;
 	default:
 		hyp_panic();
 	}
diff --git a/arch/arm64/kvm/hyp/nvhe/mem_protect.c b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
new file mode 100644
index 000000000000..2252ad1a8945
--- /dev/null
+++ b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
@@ -0,0 +1,213 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Copyright (C) 2020 Google LLC
+ * Author: Quentin Perret <qperret@google.com>
+ */
+
+#include <linux/kvm_host.h>
+#include <asm/kvm_emulate.h>
+#include <asm/kvm_cpufeature.h>
+#include <asm/kvm_hyp.h>
+#include <asm/kvm_mmu.h>
+#include <asm/kvm_pgtable.h>
+#include <asm/stage2_pgtable.h>
+
+#include <hyp/switch.h>
+
+#include <nvhe/gfp.h>
+#include <nvhe/memory.h>
+#include <nvhe/mem_protect.h>
+#include <nvhe/mm.h>
+
+extern unsigned long hyp_nr_cpus;
+struct host_kvm host_kvm;
+
+struct hyp_pool host_s2_mem;
+struct hyp_pool host_s2_dev;
+
+static void *host_s2_zalloc_pages_exact(size_t size)
+{
+	return hyp_alloc_pages(&host_s2_mem, get_order(size));
+}
+
+static void *host_s2_zalloc_page(void *pool)
+{
+	return hyp_alloc_pages(pool, 0);
+}
+
+static int prepare_s2_pools(void *mem_pgt_pool, void *dev_pgt_pool)
+{
+	unsigned long nr_pages, pfn;
+	int ret;
+
+	pfn = hyp_virt_to_pfn(mem_pgt_pool);
+	nr_pages = host_s2_mem_pgtable_pages();
+	ret = hyp_pool_init(&host_s2_mem, pfn, nr_pages, 0);
+	if (ret)
+		return ret;
+
+	pfn = hyp_virt_to_pfn(dev_pgt_pool);
+	nr_pages = host_s2_dev_pgtable_pages();
+	ret = hyp_pool_init(&host_s2_dev, pfn, nr_pages, 0);
+	if (ret)
+		return ret;
+
+	host_kvm.mm_ops.zalloc_pages_exact = host_s2_zalloc_pages_exact;
+	host_kvm.mm_ops.zalloc_page = host_s2_zalloc_page;
+	host_kvm.mm_ops.phys_to_virt = hyp_phys_to_virt;
+	host_kvm.mm_ops.virt_to_phys = hyp_virt_to_phys;
+	host_kvm.mm_ops.page_count = hyp_page_count;
+	host_kvm.mm_ops.get_page = hyp_get_page;
+	host_kvm.mm_ops.put_page = hyp_put_page;
+
+	return 0;
+}
+
+static void prepare_host_vtcr(void)
+{
+	u32 parange, phys_shift;
+	u64 mmfr0, mmfr1;
+
+	mmfr0 = arm64_ftr_reg_id_aa64mmfr0_el1.sys_val;
+	mmfr1 = arm64_ftr_reg_id_aa64mmfr1_el1.sys_val;
+
+	/* The host stage 2 is id-mapped, so use parange for T0SZ */
+	parange = kvm_get_parange(mmfr0);
+	phys_shift = id_aa64mmfr0_parange_to_phys_shift(parange);
+
+	host_kvm.arch.vtcr = kvm_get_vtcr(mmfr0, mmfr1, phys_shift);
+}
+
+int kvm_host_prepare_stage2(void *mem_pgt_pool, void *dev_pgt_pool)
+{
+	struct kvm_s2_mmu *mmu = &host_kvm.arch.mmu;
+	int ret;
+
+	prepare_host_vtcr();
+	hyp_spin_lock_init(&host_kvm.lock);
+
+	ret = prepare_s2_pools(mem_pgt_pool, dev_pgt_pool);
+	if (ret)
+		return ret;
+
+	ret = kvm_pgtable_stage2_init(&host_kvm.pgt, &host_kvm.arch,
+				      &host_kvm.mm_ops);
+	if (ret)
+		return ret;
+
+	mmu->pgd_phys = __hyp_pa(host_kvm.pgt.pgd);
+	mmu->arch = &host_kvm.arch;
+	mmu->pgt = &host_kvm.pgt;
+	mmu->vmid.vmid_gen = 0;
+	mmu->vmid.vmid = 0;
+
+	return 0;
+}
+
+int __pkvm_prot_finalize(void)
+{
+	struct kvm_s2_mmu *mmu = &host_kvm.arch.mmu;
+	struct kvm_nvhe_init_params *params = this_cpu_ptr(&kvm_init_params);
+
+	params->vttbr = kvm_get_vttbr(mmu);
+	params->vtcr = host_kvm.arch.vtcr;
+	params->hcr_el2 |= HCR_VM;
+	if (cpus_have_const_cap(ARM64_HAS_STAGE2_FWB))
+		params->hcr_el2 |= HCR_FWB;
+	kvm_flush_dcache_to_poc(params, sizeof(*params));
+
+	write_sysreg(params->hcr_el2, hcr_el2);
+	__load_stage2(&host_kvm.arch.mmu, host_kvm.arch.vtcr);
+
+	__tlbi(vmalls12e1is);
+	dsb(ish);
+	isb();
+
+	return 0;
+}
+
+static void host_stage2_unmap_dev_all(void)
+{
+	struct kvm_pgtable *pgt = &host_kvm.pgt;
+	struct memblock_region *reg;
+	u64 addr = 0;
+	int i;
+
+	/* Unmap all non-memory regions to recycle the pages */
+	for (i = 0; i < hyp_memblock_nr; i++, addr = reg->base + reg->size) {
+		reg = &hyp_memory[i];
+		kvm_pgtable_stage2_unmap(pgt, addr, reg->base - addr);
+	}
+	kvm_pgtable_stage2_unmap(pgt, addr, ULONG_MAX);
+}
+
+static bool find_mem_range(phys_addr_t addr, struct kvm_mem_range *range)
+{
+	int cur, left = 0, right = hyp_memblock_nr;
+	struct memblock_region *reg;
+	phys_addr_t end;
+
+	range->start = 0;
+	range->end = ULONG_MAX;
+
+	/* The list of memblock regions is sorted, binary search it */
+	while (left < right) {
+		cur = (left + right) >> 1;
+		reg = &hyp_memory[cur];
+		end = reg->base + reg->size;
+		if (addr < reg->base) {
+			right = cur;
+			range->end = reg->base;
+		} else if (addr >= end) {
+			left = cur + 1;
+			range->start = end;
+		} else {
+			range->start = reg->base;
+			range->end = end;
+			return true;
+		}
+	}
+
+	return false;
+}
+
+static int host_stage2_idmap(u64 addr)
+{
+	enum kvm_pgtable_prot prot = KVM_PGTABLE_PROT_R | KVM_PGTABLE_PROT_W;
+	struct kvm_mem_range range;
+	bool is_memory = find_mem_range(addr, &range);
+	struct hyp_pool *pool = is_memory ? &host_s2_mem : &host_s2_dev;
+	int ret;
+
+	if (is_memory)
+		prot |= KVM_PGTABLE_PROT_X;
+
+	hyp_spin_lock(&host_kvm.lock);
+	ret = kvm_pgtable_stage2_idmap_greedy(&host_kvm.pgt, addr, prot,
+					      &range, pool);
+	if (is_memory || ret != -ENOMEM)
+		goto unlock;
+	host_stage2_unmap_dev_all();
+	ret = kvm_pgtable_stage2_idmap_greedy(&host_kvm.pgt, addr, prot,
+					      &range, pool);
+unlock:
+	hyp_spin_unlock(&host_kvm.lock);
+
+	return ret;
+}
+
+void handle_host_mem_abort(struct kvm_cpu_context *host_ctxt)
+{
+	struct kvm_vcpu_fault_info fault;
+	u64 esr, addr;
+	int ret = 0;
+
+	esr = read_sysreg_el2(SYS_ESR);
+	if (!__get_fault_info(esr, &fault))
+		hyp_panic();
+
+	addr = (fault.hpfar_el2 & HPFAR_MASK) << 8;
+	ret = host_stage2_idmap(addr);
+	if (ret && ret != -EAGAIN)
+		hyp_panic();
+}
diff --git a/arch/arm64/kvm/hyp/nvhe/setup.c b/arch/arm64/kvm/hyp/nvhe/setup.c
index 7e923b25271c..94b9f14491f9 100644
--- a/arch/arm64/kvm/hyp/nvhe/setup.c
+++ b/arch/arm64/kvm/hyp/nvhe/setup.c
@@ -12,6 +12,7 @@
 #include <nvhe/early_alloc.h>
 #include <nvhe/gfp.h>
 #include <nvhe/memory.h>
+#include <nvhe/mem_protect.h>
 #include <nvhe/mm.h>
 #include <nvhe/trap_handler.h>
 
@@ -157,6 +158,10 @@ void __noreturn __pkvm_init_finalise(void)
 	if (ret)
 		goto out;
 
+	ret = kvm_host_prepare_stage2(host_s2_mem_pgt_base, host_s2_dev_pgt_base);
+	if (ret)
+		goto out;
+
 	pkvm_pgtable_mm_ops.zalloc_page = hyp_zalloc_hyp_page;
 	pkvm_pgtable_mm_ops.phys_to_virt = hyp_phys_to_virt;
 	pkvm_pgtable_mm_ops.virt_to_phys = hyp_virt_to_phys;
diff --git a/arch/arm64/kvm/hyp/nvhe/switch.c b/arch/arm64/kvm/hyp/nvhe/switch.c
index 979a76cdf9fb..31bc1a843bf8 100644
--- a/arch/arm64/kvm/hyp/nvhe/switch.c
+++ b/arch/arm64/kvm/hyp/nvhe/switch.c
@@ -28,6 +28,8 @@
 #include <asm/processor.h>
 #include <asm/thread_info.h>
 
+#include <nvhe/mem_protect.h>
+
 /* Non-VHE specific context */
 DEFINE_PER_CPU(struct kvm_host_data, kvm_host_data);
 DEFINE_PER_CPU(struct kvm_cpu_context, kvm_hyp_ctxt);
@@ -102,11 +104,6 @@ static void __deactivate_traps(struct kvm_vcpu *vcpu)
 	write_sysreg(__kvm_hyp_host_vector, vbar_el2);
 }
 
-static void __load_host_stage2(void)
-{
-	write_sysreg(0, vttbr_el2);
-}
-
 /* Save VGICv3 state on non-VHE systems */
 static void __hyp_vgic_save_state(struct kvm_vcpu *vcpu)
 {
diff --git a/arch/arm64/kvm/hyp/nvhe/tlb.c b/arch/arm64/kvm/hyp/nvhe/tlb.c
index fbde89a2c6e8..255a23a1b2db 100644
--- a/arch/arm64/kvm/hyp/nvhe/tlb.c
+++ b/arch/arm64/kvm/hyp/nvhe/tlb.c
@@ -8,6 +8,8 @@
 #include <asm/kvm_hyp.h>
 #include <asm/kvm_mmu.h>
 
+#include <nvhe/mem_protect.h>
+
 struct tlb_inv_context {
 	u64 tcr;
 };
@@ -43,7 +45,7 @@ static void __tlb_switch_to_guest(struct kvm_s2_mmu *mmu,
 
 static void __tlb_switch_to_host(struct tlb_inv_context *cxt)
 {
-	write_sysreg(0, vttbr_el2);
+	__load_host_stage2();
 
 	if (cpus_have_final_cap(ARM64_WORKAROUND_SPECULATIVE_AT)) {
 		/* Ensure write of the host VMID */
-- 
2.30.1.766.gb4fecdf3b7-goog