From: Zenghui Yu
To: ,
CC: , , , , ,
Subject: [PATCH v2 stable-5.12.y backport 1/2] KVM: arm64: Commit pending PC adjustments before returning to userspace
Date: Tue, 1 Jun 2021 22:07:37 +0800
Message-ID: <20210601140738.2026-2-yuzenghui@huawei.com>
X-Mailer: git-send-email 2.23.0.windows.1
In-Reply-To: <20210601140738.2026-1-yuzenghui@huawei.com>
References: <20210601140738.2026-1-yuzenghui@huawei.com>
X-Mailing-List: linux-kernel@vger.kernel.org

From: Marc Zyngier

commit 26778aaa134a9aefdf5dbaad904054d7be9d656d upstream.

KVM currently updates PC (and the corresponding exception state) using
a two phase approach: first by setting a set of flags, then by
converting these flags into a state update when the vcpu is about to
enter the guest.

However, this creates a disconnect with userspace if the vcpu thread
returns there with any exception/PC flag set. In this case, the
exposed context is wrong, as userspace doesn't have access to these
flags (they aren't architectural). It also means that these flags are
preserved across a reset, which isn't expected.

To solve this problem, force an explicit synchronisation of the
exception state on vcpu exit to userspace. As an optimisation for nVHE
systems, only perform this when there is something pending.

Reported-by: Zenghui Yu
Reviewed-by: Alexandru Elisei
Reviewed-by: Zenghui Yu
Tested-by: Zenghui Yu
Signed-off-by: Marc Zyngier
Cc: stable@vger.kernel.org # 5.11
[yuz: stable-5.12.y backport: allocate a new number (15) for
 __KVM_HOST_SMCCC_FUNC___kvm_adjust_pc to keep the host_hcall array
 tightly packed]
Signed-off-by: Zenghui Yu
Reviewed-by: Marc Zyngier
---
 arch/arm64/include/asm/kvm_asm.h   |  1 +
 arch/arm64/kvm/arm.c               | 11 +++++++++++
 arch/arm64/kvm/hyp/exception.c     |  4 ++--
 arch/arm64/kvm/hyp/nvhe/hyp-main.c |  8 ++++++++
 4 files changed, 22 insertions(+), 2 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_asm.h b/arch/arm64/include/asm/kvm_asm.h
index a8578d650bb6..f362f72bcb50 100644
--- a/arch/arm64/include/asm/kvm_asm.h
+++ b/arch/arm64/include/asm/kvm_asm.h
@@ -57,6 +57,7 @@
 #define __KVM_HOST_SMCCC_FUNC___kvm_get_mdcr_el2		12
 #define __KVM_HOST_SMCCC_FUNC___vgic_v3_save_aprs		13
 #define __KVM_HOST_SMCCC_FUNC___vgic_v3_restore_aprs		14
+#define __KVM_HOST_SMCCC_FUNC___kvm_adjust_pc			15
 
 #ifndef __ASSEMBLY__
 
diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index 84b5f79c9eab..c18740a1e541 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -892,6 +892,17 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu)
 
 	kvm_sigset_deactivate(vcpu);
 
+	/*
+	 * In the unlikely event that we are returning to userspace
+	 * with pending exceptions or PC adjustment, commit these
+	 * adjustments in order to give userspace a consistent view of
+	 * the vcpu state. Note that this relies on __kvm_adjust_pc()
+	 * being preempt-safe on VHE.
+	 */
+	if (unlikely(vcpu->arch.flags & (KVM_ARM64_PENDING_EXCEPTION |
+					 KVM_ARM64_INCREMENT_PC)))
+		kvm_call_hyp(__kvm_adjust_pc, vcpu);
+
 	vcpu_put(vcpu);
 	return ret;
 }
diff --git a/arch/arm64/kvm/hyp/exception.c b/arch/arm64/kvm/hyp/exception.c
index 0812a496725f..11541b94b328 100644
--- a/arch/arm64/kvm/hyp/exception.c
+++ b/arch/arm64/kvm/hyp/exception.c
@@ -331,8 +331,8 @@ static void kvm_inject_exception(struct kvm_vcpu *vcpu)
 }
 
 /*
- * Adjust the guest PC on entry, depending on flags provided by EL1
- * for the purpose of emulation (MMIO, sysreg) or exception injection.
+ * Adjust the guest PC (and potentially exception state) depending on
+ * flags provided by the emulation code.
  */
 void __kvm_adjust_pc(struct kvm_vcpu *vcpu)
 {
diff --git a/arch/arm64/kvm/hyp/nvhe/hyp-main.c b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
index 936328207bde..e52582e14087 100644
--- a/arch/arm64/kvm/hyp/nvhe/hyp-main.c
+++ b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
@@ -25,6 +25,13 @@ static void handle___kvm_vcpu_run(struct kvm_cpu_context *host_ctxt)
 	cpu_reg(host_ctxt, 1) = __kvm_vcpu_run(kern_hyp_va(vcpu));
 }
 
+static void handle___kvm_adjust_pc(struct kvm_cpu_context *host_ctxt)
+{
+	DECLARE_REG(struct kvm_vcpu *, vcpu, host_ctxt, 1);
+
+	__kvm_adjust_pc(kern_hyp_va(vcpu));
+}
+
 static void handle___kvm_flush_vm_context(struct kvm_cpu_context *host_ctxt)
 {
 	__kvm_flush_vm_context();
@@ -112,6 +119,7 @@ typedef void (*hcall_t)(struct kvm_cpu_context *);
 
 static const hcall_t host_hcall[] = {
 	HANDLE_FUNC(__kvm_vcpu_run),
+	HANDLE_FUNC(__kvm_adjust_pc),
 	HANDLE_FUNC(__kvm_flush_vm_context),
 	HANDLE_FUNC(__kvm_tlb_flush_vmid_ipa),
 	HANDLE_FUNC(__kvm_tlb_flush_vmid),
-- 
2.19.1
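
For readers following the commit message, the userspace-visible symptom can be illustrated with a minimal sketch (not part of the patch): a VMM reading the guest PC through the standard KVM_GET_ONE_REG interface after KVM_RUN returns. The "vcpu_fd" and "read_guest_pc" names are illustrative; the register-ID encoding uses the existing arm64 KVM UAPI macros. Before this fix, a pending PC increment or exception injection left in vcpu->arch.flags would not yet be reflected in the value returned here.

/*
 * Illustrative only: read the guest PC of an arm64 vCPU from userspace.
 * Assumes "vcpu_fd" was obtained earlier via KVM_CREATE_VCPU.
 */
#include <stddef.h>
#include <stdint.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

#define ARM64_CORE_REG(x)						\
	(KVM_REG_ARM64 | KVM_REG_SIZE_U64 | KVM_REG_ARM_CORE |		\
	 KVM_REG_ARM_CORE_REG(x))

static int read_guest_pc(int vcpu_fd, uint64_t *pc)
{
	struct kvm_one_reg reg = {
		.id	= ARM64_CORE_REG(regs.pc),
		.addr	= (uint64_t)pc,
	};

	/* Fetch the architectural PC as currently exposed by KVM. */
	return ioctl(vcpu_fd, KVM_GET_ONE_REG, &reg);
}

With the patch applied, any pending adjustment is committed in kvm_arch_vcpu_ioctl_run() before the ioctl returns, so a read like the one above observes a consistent, architectural vcpu state.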