From: Julien Thierry <julien.thierry@arm.com>
To: linux-kernel@vger.kernel.org, kvmarm@lists.cs.columbia.edu
Cc: marc.zyngier@arm.com, Christoffer.Dall@arm.com,
	linux-arm-kernel@lists.infradead.org, linux-rt-users@vger.kernel.org,
	tglx@linutronix.de, rostedt@goodmis.org, bigeasy@linutronix.de,
	Julien Thierry <julien.thierry@arm.com>,
	Christoffer Dall <Christoffer.Dall@arm.com>, stable@vger.kernel.org
Subject: [PATCH v2 1/4] KVM: arm/arm64: vgic: Do not cond_resched_lock() with IRQs disabled
Date: Mon, 26 Nov 2018 18:26:44 +0000
Message-Id: <1543256807-9768-2-git-send-email-julien.thierry@arm.com>
X-Mailer: git-send-email 1.9.1
In-Reply-To: <1543256807-9768-1-git-send-email-julien.thierry@arm.com>
References: <1543256807-9768-1-git-send-email-julien.thierry@arm.com>

To change the active state of an interrupt via MMIO, a halt is requested
for all vcpus of the affected guest before modifying the IRQ state. While
doing so, vgic_mmio_change_active() waits, by calling cond_resched_lock(),
for the vcpu that may still have the IRQ in a list register to sync its
state back. However, interrupts are disabled at this point and we cannot
reschedule a vcpu.

Solve this by waiting for all vcpus to be halted after emitting the halt
request.

Signed-off-by: Julien Thierry <julien.thierry@arm.com>
Suggested-by: Marc Zyngier <marc.zyngier@arm.com>
Cc: Christoffer Dall <Christoffer.Dall@arm.com>
Cc: Marc Zyngier <marc.zyngier@arm.com>
Cc: stable@vger.kernel.org
---
 virt/kvm/arm/vgic/vgic-mmio.c | 36 ++++++++++++++----------------------
 1 file changed, 14 insertions(+), 22 deletions(-)

diff --git a/virt/kvm/arm/vgic/vgic-mmio.c b/virt/kvm/arm/vgic/vgic-mmio.c
index f56ff1c..5c76a92 100644
--- a/virt/kvm/arm/vgic/vgic-mmio.c
+++ b/virt/kvm/arm/vgic/vgic-mmio.c
@@ -313,27 +313,6 @@ static void vgic_mmio_change_active(struct kvm_vcpu *vcpu, struct vgic_irq *irq,
 
 	spin_lock_irqsave(&irq->irq_lock, flags);
 
-	/*
-	 * If this virtual IRQ was written into a list register, we
-	 * have to make sure the CPU that runs the VCPU thread has
-	 * synced back the LR state to the struct vgic_irq.
-	 *
-	 * As long as the conditions below are true, we know the VCPU thread
-	 * may be on its way back from the guest (we kicked the VCPU thread in
-	 * vgic_change_active_prepare) and still has to sync back this IRQ,
-	 * so we release and re-acquire the spin_lock to let the other thread
-	 * sync back the IRQ.
-	 *
-	 * When accessing VGIC state from user space, requester_vcpu is
-	 * NULL, which is fine, because we guarantee that no VCPUs are running
-	 * when accessing VGIC state from user space so irq->vcpu->cpu is
-	 * always -1.
-	 */
-	while (irq->vcpu && /* IRQ may have state in an LR somewhere */
-	       irq->vcpu != requester_vcpu && /* Current thread is not the VCPU thread */
-	       irq->vcpu->cpu != -1) /* VCPU thread is running */
-		cond_resched_lock(&irq->irq_lock);
-
 	if (irq->hw) {
 		vgic_hw_irq_change_active(vcpu, irq, active, !requester_vcpu);
 	} else {
@@ -368,8 +347,21 @@ static void vgic_mmio_change_active(struct kvm_vcpu *vcpu, struct vgic_irq *irq,
  */
 static void vgic_change_active_prepare(struct kvm_vcpu *vcpu, u32 intid)
 {
-	if (intid > VGIC_NR_PRIVATE_IRQS)
+	if (intid > VGIC_NR_PRIVATE_IRQS) {
+		struct kvm_vcpu *tmp;
+		int i;
+
 		kvm_arm_halt_guest(vcpu->kvm);
+
+		/* Wait for each vcpu to be halted */
+		kvm_for_each_vcpu(i, tmp, vcpu->kvm) {
+			if (tmp == vcpu)
+				continue;
+
+			while (tmp->cpu != -1)
+				cond_resched();
+		}
+	}
 }
 
 /* See vgic_change_active_prepare */
-- 
1.9.1
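
For readers following along: the underlying problem is calling
cond_resched_lock() on a lock that was taken with spin_lock_irqsave(),
i.e. while interrupts are disabled, which is not a context we can
schedule from. A minimal sketch of the broken pattern and of the
lock-free wait pattern used by this patch; the lock, flag and function
names below (demo_lock, demo_cond, demo_wait_*) are illustrative only
and are not taken from the vgic code:

/* Illustrative sketch only -- not part of the patch. */
#include <linux/compiler.h>
#include <linux/sched.h>
#include <linux/spinlock.h>

static DEFINE_SPINLOCK(demo_lock);	/* hypothetical lock */
static bool demo_cond;			/* hypothetical wait condition */

/* Broken: cond_resched_lock() may schedule, but IRQs are disabled here. */
static void demo_wait_broken(void)
{
	unsigned long flags;

	spin_lock_irqsave(&demo_lock, flags);
	while (READ_ONCE(demo_cond))
		cond_resched_lock(&demo_lock);
	spin_unlock_irqrestore(&demo_lock, flags);
}

/* Pattern used above: wait with no lock held and interrupts enabled. */
static void demo_wait_fixed(void)
{
	while (READ_ONCE(demo_cond))
		cond_resched();
}

The patch follows the second pattern: the halt request is emitted first,
and the wait (per vcpu, on tmp->cpu != -1) happens outside the irqsave
critical section, so cond_resched() is legal there.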