Subject: Re: [PATCH v2 1/4] KVM: arm/arm64: vgic: Do not cond_resched_lock() with IRQs disabled
To: Christoffer Dall
Cc: linux-kernel@vger.kernel.org, kvmarm@lists.cs.columbia.edu, marc.zyngier@arm.com,
 linux-arm-kernel@lists.infradead.org, linux-rt-users@vger.kernel.org,
 tglx@linutronix.de, rostedt@goodmis.org, bigeasy@linutronix.de,
 stable@vger.kernel.org
References: <1543256807-9768-1-git-send-email-julien.thierry@arm.com>
 <1543256807-9768-2-git-send-email-julien.thierry@arm.com>
 <20181211102015.GV30263@e113682-lin.lund.arm.com>
From: Julien Thierry
Message-ID: <0573c8b2-db76-19e3-db76-5433b2e4ad0a@arm.com>
Date: Fri, 14 Dec 2018 09:36:59 +0000
In-Reply-To:
 <20181211102015.GV30263@e113682-lin.lund.arm.com>

On 11/12/2018 10:20, Christoffer Dall wrote:
> On Mon, Nov 26, 2018 at 06:26:44PM +0000, Julien Thierry wrote:
>> To change the active state of an MMIO interrupt, a halt is requested for
>> all vcpus of the affected guest before modifying the IRQ state. This is
>> done by calling cond_resched_lock() in vgic_mmio_change_active(). However,
>> interrupts are disabled at this point and we cannot reschedule a vcpu.
>>
>> Solve this by waiting for all vcpus to be halted after emitting the halt
>> request.
>>
>> Signed-off-by: Julien Thierry
>> Suggested-by: Marc Zyngier
>> Cc: Christoffer Dall
>> Cc: Marc Zyngier
>> Cc: stable@vger.kernel.org
>> ---
>>  virt/kvm/arm/vgic/vgic-mmio.c | 36 ++++++++++++++----------------------
>>  1 file changed, 14 insertions(+), 22 deletions(-)
>>
>> diff --git a/virt/kvm/arm/vgic/vgic-mmio.c b/virt/kvm/arm/vgic/vgic-mmio.c
>> index f56ff1c..5c76a92 100644
>> --- a/virt/kvm/arm/vgic/vgic-mmio.c
>> +++ b/virt/kvm/arm/vgic/vgic-mmio.c
>> @@ -313,27 +313,6 @@ static void vgic_mmio_change_active(struct kvm_vcpu *vcpu, struct vgic_irq *irq,
>>
>>  	spin_lock_irqsave(&irq->irq_lock, flags);
>>
>> -	/*
>> -	 * If this virtual IRQ was written into a list register, we
>> -	 * have to make sure the CPU that runs the VCPU thread has
>> -	 * synced back the LR state to the struct vgic_irq.
>> -	 *
>> -	 * As long as the conditions below are true, we know the VCPU thread
>> -	 * may be on its way back from the guest (we kicked the VCPU thread in
>> -	 * vgic_change_active_prepare) and still has to sync back this IRQ,
>> -	 * so we release and re-acquire the spin_lock to let the other thread
>> -	 * sync back the IRQ.
>> -	 *
>> -	 * When accessing VGIC state from user space, requester_vcpu is
>> -	 * NULL, which is fine, because we guarantee that no VCPUs are running
>> -	 * when accessing VGIC state from user space so irq->vcpu->cpu is
>> -	 * always -1.
>> -	 */
>> -	while (irq->vcpu && /* IRQ may have state in an LR somewhere */
>> -	       irq->vcpu != requester_vcpu && /* Current thread is not the VCPU thread */
>> -	       irq->vcpu->cpu != -1) /* VCPU thread is running */
>> -		cond_resched_lock(&irq->irq_lock);
>> -
>>  	if (irq->hw) {
>>  		vgic_hw_irq_change_active(vcpu, irq, active, !requester_vcpu);
>>  	} else {
>> @@ -368,8 +347,21 @@ static void vgic_mmio_change_active(struct kvm_vcpu *vcpu, struct vgic_irq *irq,
>>   */
>>  static void vgic_change_active_prepare(struct kvm_vcpu *vcpu, u32 intid)
>>  {
>> -	if (intid > VGIC_NR_PRIVATE_IRQS)
>> +	if (intid > VGIC_NR_PRIVATE_IRQS) {
>> +		struct kvm_vcpu *tmp;
>> +		int i;
>> +
>>  		kvm_arm_halt_guest(vcpu->kvm);
>> +
>> +		/* Wait for each vcpu to be halted */
>> +		kvm_for_each_vcpu(i, tmp, vcpu->kvm) {
>> +			if (tmp == vcpu)
>> +				continue;
>> +
>> +			while (tmp->cpu != -1)
>> +				cond_resched();
>> +		}
>
> I'm actually thinking we don't need this loop at all after the request
> rework which causes:
>
> 1. kvm_arm_halt_guest() to use kvm_make_all_cpus_request(kvm, KVM_REQ_SLEEP), and
> 2. KVM_REQ_SLEEP uses REQ_WAIT, and
> 3. REQ_WAIT requires the VCPU to respond to IPIs before returning, and
> 4. a VCPU thread can only respond when it enables interrupts, and
> 5. enabling interrupts when running a VCPU only happens after syncing
>    the VGIC hwstate.
>
> Does that make sense?

I'm not super familiar with what goes on with the vgic hwstate syncing, but
looking at kvm_arm_halt_guest() and kvm_arch_vcpu_ioctl_run(), I agree with
the reasoning.

> It would be good if someone can validate this, but if it holds this
> patch just becomes a nice deletion of the logic in
> vgic_mmio_change_active.
As long as running kvm_vgic_sync_hwstate() on each vcpu is all that is
needed before we can modify the active state, I think your solution is
definitely the way to go.

Thanks,

--
Julien Thierry