From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, Marc Zyngier
Subject: [PATCH 4.19 55/85] KVM: arm/arm64: Sync ICH_VMCR_EL2 back when about to block
Date: Thu, 22 Aug 2019 10:19:28 -0700
Message-Id: <20190822171733.610647908@linuxfoundation.org>
X-Mailer: git-send-email 2.23.0
In-Reply-To: <20190822171731.012687054@linuxfoundation.org>
References: <20190822171731.012687054@linuxfoundation.org>
User-Agent: quilt/0.66
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
Sender: linux-kernel-owner@vger.kernel.org
Precedence: bulk
X-Mailing-List: linux-kernel@vger.kernel.org

From: Marc Zyngier

commit 5eeaf10eec394b28fad2c58f1f5c3a5da0e87d1c upstream.

Since commit 328e56647944 ("KVM: arm/arm64: vgic: Defer touching GICH_VMCR
to vcpu_load/put"), we leave ICH_VMCR_EL2 (or its GICv2 equivalent) loaded
as long as we can, only syncing it back when we're scheduled out.

There is a small snag with that though: kvm_vgic_vcpu_pending_irq(), which
is indirectly called from kvm_vcpu_check_block(), needs to evaluate the
guest's view of ICC_PMR_EL1. At the point where we call
kvm_vcpu_check_block(), the vcpu is still loaded, and whatever changes the
guest has made to PMR are not visible in memory until we do a vcpu_put().

Things go really south if the guest does the following:

    mov x0, #0        // or any small value masking interrupts
    msr ICC_PMR_EL1, x0

    [vcpu preempted, then rescheduled, VMCR sampled]

    mov x0, #0xff     // allow all interrupts
    msr ICC_PMR_EL1, x0
    wfi               // traps to EL2, so sampling of VMCR

    [interrupt arrives just after WFI]

Here, the hypervisor's view of PMR is zero, while the guest has enabled
its interrupts. kvm_vgic_vcpu_pending_irq() will then say that no
interrupts are pending (despite an interrupt being received) and we'll
block for no reason. If the guest doesn't have a periodic interrupt
firing once it has blocked, it will stay there forever.

To avoid this unfortunate situation, let's resync VMCR from
kvm_arch_vcpu_blocking(), ensuring that a following
kvm_vcpu_check_block() will observe the latest value of PMR.

This has been found by booting an arm64 Linux guest with the pseudo-NMI
feature, and thus using interrupt priorities to mask interrupts instead
of the usual PSTATE masking.

Cc: stable@vger.kernel.org # 4.12
Fixes: 328e56647944 ("KVM: arm/arm64: vgic: Defer touching GICH_VMCR to vcpu_load/put")
Signed-off-by: Marc Zyngier
Signed-off-by: Greg Kroah-Hartman
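
To make the failure mode concrete, here is a minimal stand-alone C model
of the race (illustrative only, not kernel code; the names toy_vcpu,
toy_pending_irq, toy_vmcr_sync, hw_pmr and mem_pmr are invented for this
sketch). It shows why a pending-interrupt check that only consults the
in-memory copy of PMR, as kvm_vgic_vcpu_pending_irq() does while the vcpu
is still loaded, gives the wrong answer until the hardware value has been
synced back:

/*
 * Toy model of the race described above -- NOT kernel code.
 * "hw_pmr" stands for the PMR value live in ICH_VMCR_EL2 while the
 * vcpu is loaded; "mem_pmr" is the in-memory copy that the pending
 * check reads.  Lower GIC priority values are more urgent, and an
 * interrupt is only delivered if its priority is below the PMR.
 */
#include <stdbool.h>
#include <stdio.h>

struct toy_vcpu {
        unsigned int hw_pmr;    /* guest's current PMR, in hardware     */
        unsigned int mem_pmr;   /* copy last saved by vcpu_put()/sync   */
        unsigned int irq_prio;  /* priority of the arriving interrupt   */
        bool irq_pending;
};

/* Stand-in for kvm_vgic_vcpu_pending_irq(): only sees the memory copy. */
static bool toy_pending_irq(const struct toy_vcpu *v)
{
        return v->irq_pending && v->irq_prio < v->mem_pmr;
}

/* Stand-in for the new kvm_vgic_vmcr_sync(): hardware -> memory. */
static void toy_vmcr_sync(struct toy_vcpu *v)
{
        v->mem_pmr = v->hw_pmr;
}

int main(void)
{
        struct toy_vcpu v = { .irq_prio = 0x80, .irq_pending = true };

        v.hw_pmr = 0;           /* guest masks all interrupts...        */
        v.mem_pmr = 0;          /* ...vcpu preempted, VMCR sampled      */

        v.hw_pmr = 0xff;        /* guest unmasks, then executes WFI     */

        /* Stale mem_pmr: the check claims nothing is pending. */
        printf("pending before sync: %d\n", toy_pending_irq(&v));

        /* What kvm_arch_vcpu_blocking() now does before the check. */
        toy_vmcr_sync(&v);
        printf("pending after sync:  %d\n", toy_pending_irq(&v));
        return 0;
}

Built with any C compiler, this prints 0 before the sync and 1 after it,
mirroring the "blocks forever" vs. "sees the pending interrupt" outcomes
described above.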
---
 include/kvm/arm_vgic.h      |    1 +
 virt/kvm/arm/arm.c          |   11 +++++++++++
 virt/kvm/arm/vgic/vgic-v2.c |    9 ++++++++-
 virt/kvm/arm/vgic/vgic-v3.c |    7 ++++++-
 virt/kvm/arm/vgic/vgic.c    |   11 +++++++++++
 virt/kvm/arm/vgic/vgic.h    |    2 ++
 6 files changed, 39 insertions(+), 2 deletions(-)

--- a/include/kvm/arm_vgic.h
+++ b/include/kvm/arm_vgic.h
@@ -361,6 +361,7 @@ int kvm_vgic_vcpu_pending_irq(struct kvm
 
 void kvm_vgic_load(struct kvm_vcpu *vcpu);
 void kvm_vgic_put(struct kvm_vcpu *vcpu);
+void kvm_vgic_vmcr_sync(struct kvm_vcpu *vcpu);
 
 #define irqchip_in_kernel(k)    (!!((k)->arch.vgic.in_kernel))
 #define vgic_initialized(k)     ((k)->arch.vgic.initialized)
--- a/virt/kvm/arm/arm.c
+++ b/virt/kvm/arm/arm.c
@@ -338,6 +338,17 @@ int kvm_cpu_has_pending_timer(struct kvm
 void kvm_arch_vcpu_blocking(struct kvm_vcpu *vcpu)
 {
         kvm_timer_schedule(vcpu);
+        /*
+         * If we're about to block (most likely because we've just hit a
+         * WFI), we need to sync back the state of the GIC CPU interface
+         * so that we have the latest PMR and group enables. This ensures
+         * that kvm_arch_vcpu_runnable has up-to-date data to decide
+         * whether we have pending interrupts.
+         */
+        preempt_disable();
+        kvm_vgic_vmcr_sync(vcpu);
+        preempt_enable();
+
         kvm_vgic_v4_enable_doorbell(vcpu);
 }
 
--- a/virt/kvm/arm/vgic/vgic-v2.c
+++ b/virt/kvm/arm/vgic/vgic-v2.c
@@ -495,10 +495,17 @@ void vgic_v2_load(struct kvm_vcpu *vcpu)
                        kvm_vgic_global_state.vctrl_base + GICH_APR);
 }
 
-void vgic_v2_put(struct kvm_vcpu *vcpu)
+void vgic_v2_vmcr_sync(struct kvm_vcpu *vcpu)
 {
         struct vgic_v2_cpu_if *cpu_if = &vcpu->arch.vgic_cpu.vgic_v2;
 
         cpu_if->vgic_vmcr = readl_relaxed(kvm_vgic_global_state.vctrl_base + GICH_VMCR);
+}
+
+void vgic_v2_put(struct kvm_vcpu *vcpu)
+{
+        struct vgic_v2_cpu_if *cpu_if = &vcpu->arch.vgic_cpu.vgic_v2;
+
+        vgic_v2_vmcr_sync(vcpu);
         cpu_if->vgic_apr = readl_relaxed(kvm_vgic_global_state.vctrl_base + GICH_APR);
 }
--- a/virt/kvm/arm/vgic/vgic-v3.c
+++ b/virt/kvm/arm/vgic/vgic-v3.c
@@ -674,12 +674,17 @@ void vgic_v3_load(struct kvm_vcpu *vcpu)
                 __vgic_v3_activate_traps(vcpu);
 }
 
-void vgic_v3_put(struct kvm_vcpu *vcpu)
+void vgic_v3_vmcr_sync(struct kvm_vcpu *vcpu)
 {
         struct vgic_v3_cpu_if *cpu_if = &vcpu->arch.vgic_cpu.vgic_v3;
 
         if (likely(cpu_if->vgic_sre))
                 cpu_if->vgic_vmcr = kvm_call_hyp(__vgic_v3_read_vmcr);
+}
+
+void vgic_v3_put(struct kvm_vcpu *vcpu)
+{
+        vgic_v3_vmcr_sync(vcpu);
 
         kvm_call_hyp(__vgic_v3_save_aprs, vcpu);
 
--- a/virt/kvm/arm/vgic/vgic.c
+++ b/virt/kvm/arm/vgic/vgic.c
@@ -902,6 +902,17 @@ void kvm_vgic_put(struct kvm_vcpu *vcpu)
                 vgic_v3_put(vcpu);
 }
 
+void kvm_vgic_vmcr_sync(struct kvm_vcpu *vcpu)
+{
+        if (unlikely(!irqchip_in_kernel(vcpu->kvm)))
+                return;
+
+        if (kvm_vgic_global_state.type == VGIC_V2)
+                vgic_v2_vmcr_sync(vcpu);
+        else
+                vgic_v3_vmcr_sync(vcpu);
+}
+
 int kvm_vgic_vcpu_pending_irq(struct kvm_vcpu *vcpu)
 {
         struct vgic_cpu *vgic_cpu = &vcpu->arch.vgic_cpu;
--- a/virt/kvm/arm/vgic/vgic.h
+++ b/virt/kvm/arm/vgic/vgic.h
@@ -204,6 +204,7 @@ int vgic_register_dist_iodev(struct kvm
 void vgic_v2_init_lrs(void);
 void vgic_v2_load(struct kvm_vcpu *vcpu);
 void vgic_v2_put(struct kvm_vcpu *vcpu);
+void vgic_v2_vmcr_sync(struct kvm_vcpu *vcpu);
 
 void vgic_v2_save_state(struct kvm_vcpu *vcpu);
 void vgic_v2_restore_state(struct kvm_vcpu *vcpu);
@@ -234,6 +235,7 @@ bool vgic_v3_check_base(struct kvm *kvm)
 
 void vgic_v3_load(struct kvm_vcpu *vcpu);
 void vgic_v3_put(struct kvm_vcpu *vcpu);
+void vgic_v3_vmcr_sync(struct kvm_vcpu *vcpu);
 
 bool vgic_has_its(struct kvm *kvm);
 int kvm_vgic_register_its_device(void);
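
For context on how the pieces fit together (a simplified sketch based on
the commit message and the hunks above, not an exhaustive call graph):
kvm_vcpu_block() calls kvm_arch_vcpu_blocking(), which now syncs VMCR via
kvm_vgic_vmcr_sync() with preemption disabled; the subsequent
kvm_vcpu_check_block() -> kvm_arch_vcpu_runnable() ->
kvm_vgic_vcpu_pending_irq() check therefore sees the guest's current PMR
rather than a value sampled at an earlier vcpu_put(), so the vcpu is not
put to sleep while an interrupt is actually deliverable.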