From: David Brazdil
To: Marc Zyngier, Will Deacon, Catalin Marinas, James Morse,
    Julien Thierry, Suzuki K Poulose
Cc: kvmarm@lists.cs.columbia.edu, linux-arm-kernel@lists.infradead.org,
    linux-kernel@vger.kernel.org, android-kvm@google.com,
    kernel-team@android.com, David Brazdil
Subject: [PATCH v3 05/15] arm64: kvm: Build hyp-entry.S separately for VHE/nVHE
Date: Thu, 18 Jun 2020 13:25:27 +0100
Message-Id: <20200618122537.9625-6-dbrazdil@google.com>
X-Mailer: git-send-email 2.27.0
In-Reply-To: <20200618122537.9625-1-dbrazdil@google.com>
References: <20200618122537.9625-1-dbrazdil@google.com>

This patch is part of a series which builds KVM's non-VHE hyp code separately
from VHE and the rest of the kernel.

hyp-entry.S contains the implementation of the KVM hyp vectors. This code is
mostly shared between VHE/nVHE, so compile it under both the VHE and nVHE
build rules. The nVHE-specific host HVC handler is hidden behind
__KVM_NVHE_HYPERVISOR__.

Adjust the code that selects which KVM hyp vectors to install so that it
chooses the correct VHE/nVHE symbol.

Signed-off-by: David Brazdil
---
 arch/arm64/include/asm/kvm_asm.h |  7 ++++++-
 arch/arm64/include/asm/kvm_mmu.h | 16 ++++++++++------
 arch/arm64/include/asm/mmu.h     |  7 -------
 arch/arm64/kernel/cpu_errata.c   |  4 +++-
 arch/arm64/kernel/image-vars.h   | 12 ++++++++++++
 arch/arm64/kvm/hyp/hyp-entry.S   |  2 ++
 arch/arm64/kvm/hyp/nvhe/Makefile |  2 +-
 arch/arm64/kvm/va_layout.c       |  2 +-
 8 files changed, 35 insertions(+), 17 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_asm.h b/arch/arm64/include/asm/kvm_asm.h
index 6a682d66a640..2baa69324cc9 100644
--- a/arch/arm64/include/asm/kvm_asm.h
+++ b/arch/arm64/include/asm/kvm_asm.h
@@ -76,7 +76,12 @@ struct kvm_vcpu;
 
 extern char __kvm_hyp_init[];
 extern char __kvm_hyp_init_end[];
-extern char __kvm_hyp_vector[];
+DECLARE_KVM_HYP_SYM(__kvm_hyp_vector);
+
+#ifdef CONFIG_KVM_INDIRECT_VECTORS
+DECLARE_KVM_HYP_SYM(__bp_harden_hyp_vecs);
+extern atomic_t arm64_el2_vector_last_slot;
+#endif
 
 extern void __kvm_flush_vm_context(void);
 extern void __kvm_tlb_flush_vmid_ipa(struct kvm *kvm, phys_addr_t ipa);
diff --git a/arch/arm64/include/asm/kvm_mmu.h b/arch/arm64/include/asm/kvm_mmu.h
index b12bfc1f051a..5bfc7ee61997 100644
--- a/arch/arm64/include/asm/kvm_mmu.h
+++ b/arch/arm64/include/asm/kvm_mmu.h
@@ -461,11 +461,15 @@ extern int __kvm_harden_el2_vector_slot;
 static inline void *kvm_get_hyp_vector(void)
 {
 	struct bp_hardening_data *data = arm64_get_bp_hardening_data();
-	void *vect = kern_hyp_va(kvm_ksym_ref(__kvm_hyp_vector));
 	int slot = -1;
+	void *vect = kern_hyp_va(has_vhe()
+		? kvm_ksym_ref(__kvm_hyp_vector)
+		: kvm_ksym_ref_nvhe(__kvm_hyp_vector));
 
 	if (cpus_have_const_cap(ARM64_HARDEN_BRANCH_PREDICTOR) && data->fn) {
-		vect = kern_hyp_va(kvm_ksym_ref(__bp_harden_hyp_vecs));
+		vect = kern_hyp_va(has_vhe()
+			? kvm_ksym_ref(__bp_harden_hyp_vecs)
+			: kvm_ksym_ref_nvhe(__bp_harden_hyp_vecs));
 		slot = data->hyp_vectors_slot;
 	}
 
@@ -494,12 +498,11 @@ static inline int kvm_map_vectors(void)
 	 *  HBP +  HEL2 -> use hardened vertors and use exec mapping
 	 */
 	if (cpus_have_const_cap(ARM64_HARDEN_BRANCH_PREDICTOR)) {
-		__kvm_bp_vect_base = kvm_ksym_ref(__bp_harden_hyp_vecs);
-		__kvm_bp_vect_base = kern_hyp_va(__kvm_bp_vect_base);
+		__kvm_bp_vect_base = kern_hyp_va(kvm_ksym_ref_nvhe(__bp_harden_hyp_vecs));
 	}
 
 	if (cpus_have_const_cap(ARM64_HARDEN_EL2_VECTORS)) {
-		phys_addr_t vect_pa = __pa_symbol(__bp_harden_hyp_vecs);
+		phys_addr_t vect_pa = __pa_symbol(kvm_nvhe_sym(__bp_harden_hyp_vecs));
 		unsigned long size = __BP_HARDEN_HYP_VECS_SZ;
 
 		/*
@@ -518,7 +521,8 @@ static inline int kvm_map_vectors(void)
 #else
 static inline void *kvm_get_hyp_vector(void)
 {
-	return kern_hyp_va(kvm_ksym_ref(__kvm_hyp_vector));
+	return kern_hyp_va(has_vhe() ? kvm_ksym_ref(__kvm_hyp_vector)
+				     : kvm_ksym_ref_nvhe(__kvm_hyp_vector));
 }
 
 static inline int kvm_map_vectors(void)
diff --git a/arch/arm64/include/asm/mmu.h b/arch/arm64/include/asm/mmu.h
index 68140fdd89d6..4d913f6dd366 100644
--- a/arch/arm64/include/asm/mmu.h
+++ b/arch/arm64/include/asm/mmu.h
@@ -42,13 +42,6 @@ struct bp_hardening_data {
 	bp_hardening_cb_t	fn;
 };
 
-#if (defined(CONFIG_HARDEN_BRANCH_PREDICTOR) ||	\
-     defined(CONFIG_HARDEN_EL2_VECTORS))
-
-extern char __bp_harden_hyp_vecs[];
-extern atomic_t arm64_el2_vector_last_slot;
-#endif	/* CONFIG_HARDEN_BRANCH_PREDICTOR || CONFIG_HARDEN_EL2_VECTORS */
-
 #ifdef CONFIG_HARDEN_BRANCH_PREDICTOR
 DECLARE_PER_CPU_READ_MOSTLY(struct bp_hardening_data, bp_hardening_data);
 
diff --git a/arch/arm64/kernel/cpu_errata.c b/arch/arm64/kernel/cpu_errata.c
index ad06d6802d2e..318b76a62c56 100644
--- a/arch/arm64/kernel/cpu_errata.c
+++ b/arch/arm64/kernel/cpu_errata.c
@@ -117,7 +117,9 @@ DEFINE_PER_CPU_READ_MOSTLY(struct bp_hardening_data, bp_hardening_data);
 static void __copy_hyp_vect_bpi(int slot, const char *hyp_vecs_start,
 				const char *hyp_vecs_end)
 {
-	void *dst = lm_alias(__bp_harden_hyp_vecs + slot * SZ_2K);
+	char *vec = has_vhe() ? __bp_harden_hyp_vecs
+			      : kvm_nvhe_sym(__bp_harden_hyp_vecs);
+	void *dst = lm_alias(vec + slot * SZ_2K);
 	int i;
 
 	for (i = 0; i < SZ_2K; i += 0x80)
diff --git a/arch/arm64/kernel/image-vars.h b/arch/arm64/kernel/image-vars.h
index 89affa38b143..dc7ee85531f5 100644
--- a/arch/arm64/kernel/image-vars.h
+++ b/arch/arm64/kernel/image-vars.h
@@ -61,9 +61,11 @@ __efistub__ctype		= _ctype;
  * memory mappings.
  */
 
+__kvm_nvhe___guest_exit = __guest_exit;
 __kvm_nvhe___kvm_enable_ssbs = __kvm_enable_ssbs;
 __kvm_nvhe___kvm_flush_vm_context = __kvm_flush_vm_context;
 __kvm_nvhe___kvm_get_mdcr_el2 = __kvm_get_mdcr_el2;
+__kvm_nvhe___kvm_handle_stub_hvc = __kvm_handle_stub_hvc;
 __kvm_nvhe___kvm_timer_set_cntvoff = __kvm_timer_set_cntvoff;
 __kvm_nvhe___kvm_tlb_flush_local_vmid = __kvm_tlb_flush_local_vmid;
 __kvm_nvhe___kvm_tlb_flush_vmid = __kvm_tlb_flush_vmid;
@@ -75,6 +77,16 @@ __kvm_nvhe___vgic_v3_read_vmcr = __vgic_v3_read_vmcr;
 __kvm_nvhe___vgic_v3_restore_aprs = __vgic_v3_restore_aprs;
 __kvm_nvhe___vgic_v3_save_aprs = __vgic_v3_save_aprs;
 __kvm_nvhe___vgic_v3_write_vmcr = __vgic_v3_write_vmcr;
+__kvm_nvhe_abort_guest_exit_end = abort_guest_exit_end;
+__kvm_nvhe_abort_guest_exit_start = abort_guest_exit_start;
+__kvm_nvhe_arm64_enable_wa2_handling = arm64_enable_wa2_handling;
+__kvm_nvhe_arm64_ssbd_callback_required = arm64_ssbd_callback_required;
+__kvm_nvhe_hyp_panic = hyp_panic;
+__kvm_nvhe_kimage_voffset = kimage_voffset;
+__kvm_nvhe_kvm_host_data = kvm_host_data;
+__kvm_nvhe_kvm_patch_vector_branch = kvm_patch_vector_branch;
+__kvm_nvhe_kvm_update_va_mask = kvm_update_va_mask;
+__kvm_nvhe_panic = panic;
 
 #endif /* CONFIG_KVM */
diff --git a/arch/arm64/kvm/hyp/hyp-entry.S b/arch/arm64/kvm/hyp/hyp-entry.S
index d362fad97cc8..7e3c72fa634f 100644
--- a/arch/arm64/kvm/hyp/hyp-entry.S
+++ b/arch/arm64/kvm/hyp/hyp-entry.S
@@ -40,6 +40,7 @@ el1_sync:				// Guest trapped into EL2
 	ccmp	x0, #ESR_ELx_EC_HVC32, #4, ne
 	b.ne	el1_trap
 
+#ifdef __KVM_NVHE_HYPERVISOR__
 	mrs	x1, vttbr_el2		// If vttbr is valid, the guest
 	cbnz	x1, el1_hvc_guest	// called HVC
 
@@ -74,6 +75,7 @@ el1_sync:				// Guest trapped into EL2
 	eret
 	sb
+#endif /* __KVM_NVHE_HYPERVISOR__ */
 
 el1_hvc_guest:
 	/*
diff --git a/arch/arm64/kvm/hyp/nvhe/Makefile b/arch/arm64/kvm/hyp/nvhe/Makefile
index 7d64235dba62..c68801e24950 100644
--- a/arch/arm64/kvm/hyp/nvhe/Makefile
+++ b/arch/arm64/kvm/hyp/nvhe/Makefile
@@ -7,7 +7,7 @@ asflags-y := -D__KVM_NVHE_HYPERVISOR__
 ccflags-y := -D__KVM_NVHE_HYPERVISOR__ -fno-stack-protector \
 	     -DDISABLE_BRANCH_PROFILING $(DISABLE_STACKLEAK_PLUGIN)
 
-obj-y :=
+obj-y := ../hyp-entry.o
 
 obj-y := $(patsubst %.o,%.hyp.o,$(obj-y))
 extra-y := $(patsubst %.hyp.o,%.hyp.tmp.o,$(obj-y))
diff --git a/arch/arm64/kvm/va_layout.c b/arch/arm64/kvm/va_layout.c
index a4f48c1ac28c..157d106235f7 100644
--- a/arch/arm64/kvm/va_layout.c
+++ b/arch/arm64/kvm/va_layout.c
@@ -150,7 +150,7 @@ void kvm_patch_vector_branch(struct alt_instr *alt,
 	/*
 	 * Compute HYP VA by using the same computation as kern_hyp_va()
 	 */
-	addr = (uintptr_t)kvm_ksym_ref(__kvm_hyp_vector);
+	addr = (uintptr_t)kvm_ksym_ref_nvhe(__kvm_hyp_vector);
 	addr &= va_mask;
 	addr |= tag_val << tag_lsb;
-- 
2.27.0
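
Background note on the symbol-selection pattern the patch relies on: objects
built under the nVHE rules get every symbol renamed with a __kvm_nvhe_ prefix
(see the image-vars.h aliases above), so one logical hyp symbol ends up with
two link-time names, and generic code picks the right copy at runtime with
has_vhe(). The snippet below is a standalone sketch of that idea, not kernel
code: the bodies of kvm_nvhe_sym()/DECLARE_KVM_HYP_SYM() are assumptions based
on how this patch uses them (their real definitions are introduced earlier in
the series), and has_vhe(), get_hyp_vector() and the vector arrays are
stand-ins for illustration only.

  /*
   * Standalone sketch of the VHE/nVHE symbol-selection pattern.
   * Compiles as plain C; everything here is a stand-in, not kernel code.
   */
  #include <stdbool.h>
  #include <stdio.h>

  /* nVHE-built objects export their symbols with this prefix. */
  #define kvm_nvhe_sym(sym)	__kvm_nvhe_##sym

  /* Declare both the VHE copy and the prefixed nVHE copy of a hyp symbol. */
  #define DECLARE_KVM_HYP_SYM(sym)		\
  	extern char sym[];			\
  	extern char kvm_nvhe_sym(sym)[]

  DECLARE_KVM_HYP_SYM(__kvm_hyp_vector);

  /* Stand-in definitions; in the kernel these come from the two hyp builds. */
  char __kvm_hyp_vector[1];
  char kvm_nvhe_sym(__kvm_hyp_vector)[1];

  /* Stand-in for the kernel's VHE capability check. */
  static bool has_vhe(void) { return false; }

  /* Pick the vector copy that matches the mode the CPU is running in. */
  static void *get_hyp_vector(void)
  {
  	return has_vhe() ? __kvm_hyp_vector : kvm_nvhe_sym(__kvm_hyp_vector);
  }

  int main(void)
  {
  	printf("hyp vectors at %p\n", get_hyp_vector());
  	return 0;
  }

In the real patch the same selection is wrapped in kern_hyp_va() and
kvm_ksym_ref()/kvm_ksym_ref_nvhe(), but the choice between the plain and the
__kvm_nvhe_-prefixed symbol is the same has_vhe() ternary shown here.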