From: Marc Zyngier <maz@kernel.org>
To: linux-arm-kernel@lists.infradead.org, kvmarm@lists.cs.columbia.edu,
 kvm@vger.kernel.org, linux-kernel@vger.kernel.org
Cc: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>,
 Jason Cooper <jason@lakedaemon.net>,
 Robert Richter <rrichter@marvell.com>,
 Thomas Gleixner <tglx@linutronix.de>,
 Zenghui Yu <yuzenghui@huawei.com>,
 Eric Auger <eric.auger@redhat.com>,
 James Morse <james.morse@arm.com>,
 Julien Thierry <julien.thierry.kdev@gmail.com>,
 Suzuki K Poulose <suzuki.poulose@arm.com>
Subject: [PATCH v5 18/23] KVM: arm64: GICv4.1: Add direct injection capability to SGI registers
Date: Wed, 4 Mar 2020 20:33:25 +0000
Message-Id: <20200304203330.4967-19-maz@kernel.org>
In-Reply-To: <20200304203330.4967-1-maz@kernel.org>
References: <20200304203330.4967-1-maz@kernel.org>

Most of the GICv3 emulation code that deals with SGIs now has to be
aware of the v4.1 capabilities in order to benefit from them. Add such
support, keyed on the interrupt having the hw flag set and being an SGI.

Signed-off-by: Marc Zyngier <maz@kernel.org>
---
 virt/kvm/arm/vgic/vgic-mmio-v3.c | 15 +++++-
 virt/kvm/arm/vgic/vgic-mmio.c    | 88 ++++++++++++++++++++++++++++++--
 2 files changed, 96 insertions(+), 7 deletions(-)

diff --git a/virt/kvm/arm/vgic/vgic-mmio-v3.c b/virt/kvm/arm/vgic/vgic-mmio-v3.c
index ebc218840fc2..de89da76a379 100644
--- a/virt/kvm/arm/vgic/vgic-mmio-v3.c
+++ b/virt/kvm/arm/vgic/vgic-mmio-v3.c
@@ -6,6 +6,7 @@
 #include <linux/irqchip/arm-gic-v3.h>
 #include <linux/kvm.h>
 #include <linux/kvm_host.h>
+#include <linux/interrupt.h>
 #include <kvm/iodev.h>
 #include <kvm/arm_vgic.h>
@@ -942,8 +943,18 @@ void vgic_v3_dispatch_sgi(struct kvm_vcpu *vcpu, u64 reg, bool allow_group1)
		 * generate interrupts of either group.
		 */
		if (!irq->group || allow_group1) {
-			irq->pending_latch = true;
-			vgic_queue_irq_unlock(vcpu->kvm, irq, flags);
+			if (!irq->hw) {
+				irq->pending_latch = true;
+				vgic_queue_irq_unlock(vcpu->kvm, irq, flags);
+			} else {
+				/* HW SGI? Ask the GIC to inject it */
+				int err;
+				err = irq_set_irqchip_state(irq->host_irq,
+							    IRQCHIP_STATE_PENDING,
+							    true);
+				WARN_RATELIMIT(err, "IRQ %d", irq->host_irq);
+				raw_spin_unlock_irqrestore(&irq->irq_lock, flags);
+			}
		} else {
			raw_spin_unlock_irqrestore(&irq->irq_lock, flags);
		}
diff --git a/virt/kvm/arm/vgic/vgic-mmio.c b/virt/kvm/arm/vgic/vgic-mmio.c
index 97fb2a40e6ba..2199302597fa 100644
--- a/virt/kvm/arm/vgic/vgic-mmio.c
+++ b/virt/kvm/arm/vgic/vgic-mmio.c
@@ -5,6 +5,8 @@
 #include <linux/bitops.h>
 #include <linux/bsearch.h>
+#include <linux/interrupt.h>
+#include <linux/irq.h>
 #include <linux/kvm.h>
 #include <linux/kvm_host.h>
 #include <kvm/iodev.h>
@@ -59,6 +61,11 @@ unsigned long vgic_mmio_read_group(struct kvm_vcpu *vcpu,
	return value;
 }

+static void vgic_update_vsgi(struct vgic_irq *irq)
+{
+	WARN_ON(its_prop_update_vsgi(irq->host_irq, irq->priority, irq->group));
+}
+
 void vgic_mmio_write_group(struct kvm_vcpu *vcpu, gpa_t addr,
			   unsigned int len, unsigned long val)
 {
@@ -71,7 +78,12 @@ void vgic_mmio_write_group(struct kvm_vcpu *vcpu, gpa_t addr,

		raw_spin_lock_irqsave(&irq->irq_lock, flags);
		irq->group = !!(val & BIT(i));
-		vgic_queue_irq_unlock(vcpu->kvm, irq, flags);
+		if (irq->hw && vgic_irq_is_sgi(irq->intid)) {
+			vgic_update_vsgi(irq);
+			raw_spin_unlock_irqrestore(&irq->irq_lock, flags);
+		} else {
+			vgic_queue_irq_unlock(vcpu->kvm, irq, flags);
+		}

		vgic_put_irq(vcpu->kvm, irq);
	}
@@ -113,7 +125,21 @@ void vgic_mmio_write_senable(struct kvm_vcpu *vcpu,
		struct vgic_irq *irq = vgic_get_irq(vcpu->kvm, vcpu, intid + i);

		raw_spin_lock_irqsave(&irq->irq_lock, flags);
-		if (vgic_irq_is_mapped_level(irq)) {
+		if (irq->hw && vgic_irq_is_sgi(irq->intid)) {
+			if (!irq->enabled) {
+				struct irq_data *data;
+
+				irq->enabled = true;
+				data = &irq_to_desc(irq->host_irq)->irq_data;
+				while (irqd_irq_disabled(data))
+					enable_irq(irq->host_irq);
+			}
+
+			raw_spin_unlock_irqrestore(&irq->irq_lock, flags);
+			vgic_put_irq(vcpu->kvm, irq);
+
+			continue;
+		} else if (vgic_irq_is_mapped_level(irq)) {
			bool was_high = irq->line_level;

			/*
@@ -148,6 +174,8 @@ void vgic_mmio_write_cenable(struct kvm_vcpu *vcpu,
		struct vgic_irq *irq = vgic_get_irq(vcpu->kvm, vcpu, intid + i);

		raw_spin_lock_irqsave(&irq->irq_lock, flags);
+		if (irq->hw && vgic_irq_is_sgi(irq->intid) && irq->enabled)
+			disable_irq_nosync(irq->host_irq);

		irq->enabled = false;
@@ -167,10 +195,22 @@ unsigned long vgic_mmio_read_pending(struct kvm_vcpu *vcpu,
	for (i = 0; i < len * 8; i++) {
		struct vgic_irq *irq = vgic_get_irq(vcpu->kvm, vcpu, intid + i);
		unsigned long flags;
+		bool val;

		raw_spin_lock_irqsave(&irq->irq_lock, flags);
-		if (irq_is_pending(irq))
-			value |= (1U << i);
+		if (irq->hw && vgic_irq_is_sgi(irq->intid)) {
+			int err;
+
+			val = false;
+			err = irq_get_irqchip_state(irq->host_irq,
+						    IRQCHIP_STATE_PENDING,
+						    &val);
+			WARN_RATELIMIT(err, "IRQ %d", irq->host_irq);
+		} else {
+			val = irq_is_pending(irq);
+		}
+
+		value |= ((u32)val << i);

		raw_spin_unlock_irqrestore(&irq->irq_lock, flags);
		vgic_put_irq(vcpu->kvm, irq);
@@ -215,6 +255,21 @@ void vgic_mmio_write_spending(struct kvm_vcpu *vcpu,
		}

		raw_spin_lock_irqsave(&irq->irq_lock, flags);
+
+		if (irq->hw && vgic_irq_is_sgi(irq->intid)) {
+			/* HW SGI? Ask the GIC to inject it */
+			int err;
+			err = irq_set_irqchip_state(irq->host_irq,
+						    IRQCHIP_STATE_PENDING,
+						    true);
+			WARN_RATELIMIT(err, "IRQ %d", irq->host_irq);
+
+			raw_spin_unlock_irqrestore(&irq->irq_lock, flags);
+			vgic_put_irq(vcpu->kvm, irq);
+
+			continue;
+		}
+
		if (irq->hw)
			vgic_hw_irq_spending(vcpu, irq, is_uaccess);
		else
@@ -269,6 +324,20 @@ void vgic_mmio_write_cpending(struct kvm_vcpu *vcpu,

		raw_spin_lock_irqsave(&irq->irq_lock, flags);

+		if (irq->hw && vgic_irq_is_sgi(irq->intid)) {
+			/* HW SGI? Ask the GIC to clear its pending bit */
+			int err;
+			err = irq_set_irqchip_state(irq->host_irq,
+						    IRQCHIP_STATE_PENDING,
+						    false);
+			WARN_RATELIMIT(err, "IRQ %d", irq->host_irq);
+
+			raw_spin_unlock_irqrestore(&irq->irq_lock, flags);
+			vgic_put_irq(vcpu->kvm, irq);
+
+			continue;
+		}
+
		if (irq->hw)
			vgic_hw_irq_cpending(vcpu, irq, is_uaccess);
		else
@@ -318,8 +387,15 @@ static void vgic_mmio_change_active(struct kvm_vcpu *vcpu, struct vgic_irq *irq,

	raw_spin_lock_irqsave(&irq->irq_lock, flags);

-	if (irq->hw) {
+	if (irq->hw && !vgic_irq_is_sgi(irq->intid)) {
		vgic_hw_irq_change_active(vcpu, irq, active, !requester_vcpu);
+	} else if (irq->hw && vgic_irq_is_sgi(irq->intid)) {
+		/*
+		 * GICv4.1 VSGI feature doesn't track an active state,
+		 * so let's not kid ourselves, there is nothing we can
+		 * do here.
+		 */
+		irq->active = false;
	} else {
		u32 model = vcpu->kvm->arch.vgic.vgic_model;
		u8 active_source;
@@ -493,6 +569,8 @@ void vgic_mmio_write_priority(struct kvm_vcpu *vcpu,
		raw_spin_lock_irqsave(&irq->irq_lock, flags);
		/* Narrow the priority range to what we actually support */
		irq->priority = (val >> (i * 8)) & GENMASK(7, 8 - VGIC_PRI_BITS);
+		if (irq->hw && vgic_irq_is_sgi(irq->intid))
+			vgic_update_vsgi(irq);
		raw_spin_unlock_irqrestore(&irq->irq_lock, flags);

		vgic_put_irq(vcpu->kvm, irq);
-- 
2.20.1
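
For reference, the change applies one recurring pattern across the SGI
register handlers: when an SGI has the hw flag set (i.e. it is backed by a
GICv4.1 vSGI), its pending state lives in the GIC itself, so the emulation
forwards pending-bit reads and writes to the host irqchip rather than using
the emulated pending_latch. The sketch below condenses that pattern and is
illustrative only, not part of the patch: the helpers
vgic_forward_sgi_pending() and vgic_read_sgi_pending() are hypothetical,
while irq_set_irqchip_state(), irq_get_irqchip_state() and
IRQCHIP_STATE_PENDING are the existing kernel interfaces the patch relies on.

#include <linux/interrupt.h>	/* irq_{set,get}_irqchip_state() */
#include <linux/ratelimit.h>	/* WARN_RATELIMIT() */

/*
 * Hypothetical condensation of the pattern used above: make a HW-backed
 * SGI pending (or clear it) by poking the host interrupt's pending state.
 */
static void vgic_forward_sgi_pending(unsigned int host_irq, bool pending)
{
	int err;

	err = irq_set_irqchip_state(host_irq, IRQCHIP_STATE_PENDING, pending);
	WARN_RATELIMIT(err, "IRQ %d", host_irq);
}

/*
 * Read the pending state back from the host GIC instead of the emulated
 * pending_latch, as vgic_mmio_read_pending() does for HW SGIs.
 */
static bool vgic_read_sgi_pending(unsigned int host_irq)
{
	bool val = false;
	int err;

	err = irq_get_irqchip_state(host_irq, IRQCHIP_STATE_PENDING, &val);
	WARN_RATELIMIT(err, "IRQ %d", host_irq);

	return val;
}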