From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, Julien Thierry, Christoffer Dall, Marc Zyngier
Subject: [PATCH 4.20 135/145] KVM: arm/arm64: vgic: Do not cond_resched_lock() with IRQs disabled
Date: Mon, 7 Jan 2019 13:32:52 +0100
Message-Id: <20190107104454.882943697@linuxfoundation.org>
In-Reply-To: <20190107104437.308206189@linuxfoundation.org>
References: <20190107104437.308206189@linuxfoundation.org>

4.20-stable review patch.  If anyone has any objections, please let me know.

------------------

From: Julien Thierry

commit 2e2f6c3c0b08eed3fcf7de3c7684c940451bdeb1 upstream.

To change the active state of an MMIO interrupt, a halt is requested for
all vcpus of the affected guest before the IRQ state is modified. This is
done by calling cond_resched_lock() in vgic_mmio_change_active(). However,
interrupts are disabled at this point, so we cannot reschedule a vcpu.

We actually don't need any of this, as kvm_arm_halt_guest() already
ensures that all the other vcpus are out of the guest. Let's just drop
that useless code.

Signed-off-by: Julien Thierry
Suggested-by: Christoffer Dall
Cc: stable@vger.kernel.org
Signed-off-by: Marc Zyngier
Signed-off-by: Greg Kroah-Hartman

---
 virt/kvm/arm/vgic/vgic-mmio.c |   21 ---------------------
 1 file changed, 21 deletions(-)

--- a/virt/kvm/arm/vgic/vgic-mmio.c
+++ b/virt/kvm/arm/vgic/vgic-mmio.c
@@ -313,27 +313,6 @@ static void vgic_mmio_change_active(stru
 
 	spin_lock_irqsave(&irq->irq_lock, flags);
 
-	/*
-	 * If this virtual IRQ was written into a list register, we
-	 * have to make sure the CPU that runs the VCPU thread has
-	 * synced back the LR state to the struct vgic_irq.
-	 *
-	 * As long as the conditions below are true, we know the VCPU thread
-	 * may be on its way back from the guest (we kicked the VCPU thread in
-	 * vgic_change_active_prepare) and still has to sync back this IRQ,
-	 * so we release and re-acquire the spin_lock to let the other thread
-	 * sync back the IRQ.
-	 *
-	 * When accessing VGIC state from user space, requester_vcpu is
-	 * NULL, which is fine, because we guarantee that no VCPUs are running
-	 * when accessing VGIC state from user space so irq->vcpu->cpu is
-	 * always -1.
-	 */
-	while (irq->vcpu && /* IRQ may have state in an LR somewhere */
-	       irq->vcpu != requester_vcpu && /* Current thread is not the VCPU thread */
-	       irq->vcpu->cpu != -1) /* VCPU thread is running */
-		cond_resched_lock(&irq->irq_lock);
-
 	if (irq->hw) {
 		vgic_hw_irq_change_active(vcpu, irq, active, !requester_vcpu);
 	} else {
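
For context on the locking rule being enforced here: cond_resched_lock() may
drop the lock and call into the scheduler, which must not happen while
interrupts are disabled, as they are inside a spin_lock_irqsave() section.
Below is a minimal kernel-style sketch of that bug class, not code from this
patch; broken_wait() and still_in_guest() are hypothetical names used purely
for illustration:

	#include <linux/sched.h>	/* cond_resched_lock() */
	#include <linux/spinlock.h>

	extern bool still_in_guest(void);	/* hypothetical condition */

	/* Sketch of the bug class fixed above, not code from this patch. */
	static void broken_wait(spinlock_t *lock)
	{
		unsigned long flags;

		spin_lock_irqsave(lock, flags);	/* interrupts are now disabled */

		/*
		 * cond_resched_lock() may release the lock and schedule(),
		 * but scheduling with interrupts disabled is invalid, so a
		 * wait loop like this can never be safe under irqsave.
		 */
		while (still_in_guest())
			cond_resched_lock(lock);

		spin_unlock_irqrestore(lock, flags);
	}

The patch resolves this not by making the wait legal but by deleting it
entirely: by the time vgic_mmio_change_active() runs, kvm_arm_halt_guest()
has already forced the other vcpus out of the guest, so there is nothing
left to wait for.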