Date: Tue, 20 Nov 2018 15:18:32 +0100
From: Christoffer Dall
To: Julien Thierry
Cc: linux-kernel@vger.kernel.org, kvmarm@lists.cs.columbia.edu,
	marc.zyngier@arm.com, linux-arm-kernel@lists.infradead.org,
	linux-rt-users@vger.kernel.org, tglx@linutronix.de,
	rostedt@goodmis.org, bigeasy@linutronix.de, stable@vger.kernel.org
Subject: Re: [PATCH 1/4] KVM: arm/arm64: vgic: Do not cond_resched_lock() with IRQs disabled
Message-ID: <20181120141832.GA11162@e113682-lin.lund.arm.com>
References: <1542647279-46609-1-git-send-email-julien.thierry@arm.com>
 <1542647279-46609-2-git-send-email-julien.thierry@arm.com>
In-Reply-To: <1542647279-46609-2-git-send-email-julien.thierry@arm.com>

On Mon, Nov 19, 2018 at 05:07:56PM +0000, Julien Thierry wrote:
> To change the active state of an MMIO, halt is requested for all vcpus of
> the affected guest before modifying the IRQ state. This is done by calling
> cond_resched_lock() in vgic_mmio_change_active(). However interrupts are
> disabled at this point and running a vcpu cannot get rescheduled.

"running a vcpu cannot get rescheduled" ?

> 
> Solve this by waiting for all vcpus to be halted after emitting the halt
> request.
> 
> Fixes commit 6c1b7521f4a07cc63bbe2dfe290efed47cdb780a ("KVM: arm/arm64:
> Factor out functionality to get vgic mmio requester_vcpu")
> 
> Signed-off-by: Julien Thierry
> Suggested-by: Marc Zyngier
> Cc: Christoffer Dall
> Cc: Marc Zyngier
> Cc: stable@vger.kernel.org
> ---
>  virt/kvm/arm/vgic/vgic-mmio.c | 33 +++++++++++----------------------
>  1 file changed, 11 insertions(+), 22 deletions(-)
> 
> diff --git a/virt/kvm/arm/vgic/vgic-mmio.c b/virt/kvm/arm/vgic/vgic-mmio.c
> index f56ff1c..eefd877 100644
> --- a/virt/kvm/arm/vgic/vgic-mmio.c
> +++ b/virt/kvm/arm/vgic/vgic-mmio.c
> @@ -313,27 +313,6 @@ static void vgic_mmio_change_active(struct kvm_vcpu *vcpu, struct vgic_irq *irq,
> 
>  	spin_lock_irqsave(&irq->irq_lock, flags);
> 
> -	/*
> -	 * If this virtual IRQ was written into a list register, we
> -	 * have to make sure the CPU that runs the VCPU thread has
> -	 * synced back the LR state to the struct vgic_irq.
> -	 *
> -	 * As long as the conditions below are true, we know the VCPU thread
> -	 * may be on its way back from the guest (we kicked the VCPU thread in
> -	 * vgic_change_active_prepare) and still has to sync back this IRQ,
> -	 * so we release and re-acquire the spin_lock to let the other thread
> -	 * sync back the IRQ.
> -	 *
> -	 * When accessing VGIC state from user space, requester_vcpu is
> -	 * NULL, which is fine, because we guarantee that no VCPUs are running
> -	 * when accessing VGIC state from user space so irq->vcpu->cpu is
> -	 * always -1.
> -	 */
> -	while (irq->vcpu && /* IRQ may have state in an LR somewhere */
> -	       irq->vcpu != requester_vcpu && /* Current thread is not the VCPU thread */
> -	       irq->vcpu->cpu != -1) /* VCPU thread is running */
> -		cond_resched_lock(&irq->irq_lock);
> -
>  	if (irq->hw) {
>  		vgic_hw_irq_change_active(vcpu, irq, active, !requester_vcpu);
>  	} else {
> @@ -368,8 +347,18 @@ static void vgic_mmio_change_active(struct kvm_vcpu *vcpu, struct vgic_irq *irq,
>   */
>  static void vgic_change_active_prepare(struct kvm_vcpu *vcpu, u32 intid)
>  {
> -	if (intid > VGIC_NR_PRIVATE_IRQS)
> +	if (intid > VGIC_NR_PRIVATE_IRQS) {
> +		struct kvm_vcpu *tmp;
> +		int i;
> +
>  		kvm_arm_halt_guest(vcpu->kvm);
> +
> +		/* Wait for each vcpu to be halted */
> +		kvm_for_each_vcpu(i, tmp, vcpu->kvm) {
> +			while (tmp->cpu != -1)
> +				cond_resched();

We used to have something like this, but Andre then found that it could
deadlock the system, because the VCPU making this request wouldn't have
called kvm_arch_vcpu_put, so its cpu field would still hold a valid
value.  That's why we have the vcpu && vcpu != requester check; a rough
sketch of that kind of guard follows below.

Thanks,

    Christoffer
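For illustration, here is a minimal sketch (not the fix that was eventually
applied) of how the quoted wait loop could avoid that self-deadlock by
skipping the VCPU that issued the request.  The extra "requester" parameter
is a hypothetical addition made only for this example; the function in the
quoted patch does not take one:

/*
 * Illustrative sketch only: a variant of the quoted wait loop that
 * skips the VCPU issuing the MMIO access.  That VCPU never calls
 * kvm_arch_vcpu_put() while it is handling the trap, so its ->cpu
 * field stays set and waiting for it to become -1 would spin forever.
 */
static void vgic_change_active_prepare(struct kvm_vcpu *vcpu, u32 intid,
				       struct kvm_vcpu *requester)
{
	if (intid > VGIC_NR_PRIVATE_IRQS) {
		struct kvm_vcpu *tmp;
		int i;

		kvm_arm_halt_guest(vcpu->kvm);

		/* Wait for every *other* vcpu to be out of the guest. */
		kvm_for_each_vcpu(i, tmp, vcpu->kvm) {
			if (tmp == requester)
				continue;	/* our own ->cpu never becomes -1 here */
			while (tmp->cpu != -1)
				cond_resched();
		}
	}
}

Whether the requester is skipped in a wait loop like this, or excluded by
the per-IRQ vcpu != requester_vcpu check as before, the point is the same:
the thread performing the MMIO access must never wait for its own
vcpu->cpu to become -1.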