From: Marc Zyngier <maz@kernel.org>
To: kvmarm@lists.cs.columbia.edu, linux-kernel@vger.kernel.org
Cc: Eric Auger <eric.auger@redhat.com>,
	James Morse <james.morse@arm.com>,
	Julien Thierry <julien.thierry.kdev@gmail.com>,
	Suzuki K Poulose <suzuki.poulose@arm.com>,
	Thomas Gleixner <tglx@linutronix.de>,
	Jason Cooper <jason@lakedaemon.net>,
	Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>,
	Andrew Murray <Andrew.Murray@arm.com>,
	Zenghui Yu <yuzenghui@huawei.com>,
	Robert Richter <rrichter@marvell.com>
Subject: [PATCH v3 28/32] KVM: arm64: GICv4.1: Add direct injection capability to SGI registers
Date: Tue, 24 Dec 2019 11:10:51 +0000
Message-Id: <20191224111055.11836-29-maz@kernel.org>
In-Reply-To: <20191224111055.11836-1-maz@kernel.org>
References: <20191224111055.11836-1-maz@kernel.org>

Most of the GICv3 emulation code that deals with SGIs now has to be
aware of the v4.1 capabilities in order to benefit from them.

Add such support, keyed on the interrupt having the hw flag set and
being an SGI.

Signed-off-by: Marc Zyngier <maz@kernel.org>
---
 virt/kvm/arm/vgic/vgic-mmio-v3.c | 15 +++++-
 virt/kvm/arm/vgic/vgic-mmio.c    | 88 ++++++++++++++++++++++++++++++--
 2 files changed, 96 insertions(+), 7 deletions(-)

diff --git a/virt/kvm/arm/vgic/vgic-mmio-v3.c b/virt/kvm/arm/vgic/vgic-mmio-v3.c
index 7dfd15dbb308..d73f4ffd7d36 100644
--- a/virt/kvm/arm/vgic/vgic-mmio-v3.c
+++ b/virt/kvm/arm/vgic/vgic-mmio-v3.c
@@ -6,6 +6,7 @@
 #include <linux/irqchip/arm-gic-v3.h>
 #include <linux/kvm.h>
 #include <linux/kvm_host.h>
+#include <linux/interrupt.h>
 #include <kvm/iodev.h>
 #include <kvm/arm_vgic.h>
 
@@ -939,8 +940,18 @@ void vgic_v3_dispatch_sgi(struct kvm_vcpu *vcpu, u64 reg, bool allow_group1)
 		 * generate interrupts of either group.
 		 */
 		if (!irq->group || allow_group1) {
-			irq->pending_latch = true;
-			vgic_queue_irq_unlock(vcpu->kvm, irq, flags);
+			if (!irq->hw) {
+				irq->pending_latch = true;
+				vgic_queue_irq_unlock(vcpu->kvm, irq, flags);
+			} else {
+				/* HW SGI? Ask the GIC to inject it */
+				int err;
+				err = irq_set_irqchip_state(irq->host_irq,
+							    IRQCHIP_STATE_PENDING,
+							    true);
+				WARN_RATELIMIT(err, "IRQ %d", irq->host_irq);
+				raw_spin_unlock_irqrestore(&irq->irq_lock, flags);
+			}
 		} else {
 			raw_spin_unlock_irqrestore(&irq->irq_lock, flags);
 		}
diff --git a/virt/kvm/arm/vgic/vgic-mmio.c b/virt/kvm/arm/vgic/vgic-mmio.c
index 0d090482720d..6ebf747a7806 100644
--- a/virt/kvm/arm/vgic/vgic-mmio.c
+++ b/virt/kvm/arm/vgic/vgic-mmio.c
@@ -5,6 +5,8 @@
 
 #include <linux/bitops.h>
 #include <linux/bsearch.h>
+#include <linux/interrupt.h>
+#include <linux/irq.h>
 #include <linux/kvm.h>
 #include <linux/kvm_host.h>
 #include <kvm/iodev.h>
@@ -59,6 +61,11 @@ unsigned long vgic_mmio_read_group(struct kvm_vcpu *vcpu,
 	return value;
 }
 
+static void vgic_update_vsgi(struct vgic_irq *irq)
+{
+	WARN_ON(its_prop_update_vsgi(irq->host_irq, irq->priority, irq->group));
+}
+
 void vgic_mmio_write_group(struct kvm_vcpu *vcpu, gpa_t addr,
 			   unsigned int len, unsigned long val)
 {
@@ -71,7 +78,12 @@ void vgic_mmio_write_group(struct kvm_vcpu *vcpu, gpa_t addr,
 		raw_spin_lock_irqsave(&irq->irq_lock, flags);
 
 		irq->group = !!(val & BIT(i));
-		vgic_queue_irq_unlock(vcpu->kvm, irq, flags);
+		if (irq->hw && vgic_irq_is_sgi(irq->intid)) {
+			vgic_update_vsgi(irq);
+			raw_spin_unlock_irqrestore(&irq->irq_lock, flags);
+		} else {
+			vgic_queue_irq_unlock(vcpu->kvm, irq, flags);
+		}
 
 		vgic_put_irq(vcpu->kvm, irq);
 	}
@@ -113,7 +125,21 @@ void vgic_mmio_write_senable(struct kvm_vcpu *vcpu,
 		struct vgic_irq *irq = vgic_get_irq(vcpu->kvm, vcpu, intid + i);
 
 		raw_spin_lock_irqsave(&irq->irq_lock, flags);
-		if (vgic_irq_is_mapped_level(irq)) {
+		if (irq->hw && vgic_irq_is_sgi(irq->intid)) {
+			if (!irq->enabled) {
+				struct irq_data *data;
+
+				irq->enabled = true;
+				data = &irq_to_desc(irq->host_irq)->irq_data;
+				while (irqd_irq_disabled(data))
+					enable_irq(irq->host_irq);
+			}
+
+			raw_spin_unlock_irqrestore(&irq->irq_lock, flags);
+			vgic_put_irq(vcpu->kvm, irq);
+
+			continue;
+		} else if (vgic_irq_is_mapped_level(irq)) {
 			bool was_high = irq->line_level;
 
 			/*
@@ -148,6 +174,8 @@ void vgic_mmio_write_cenable(struct kvm_vcpu *vcpu,
 		struct vgic_irq *irq = vgic_get_irq(vcpu->kvm, vcpu, intid + i);
 
 		raw_spin_lock_irqsave(&irq->irq_lock, flags);
+		if (irq->hw && vgic_irq_is_sgi(irq->intid) && irq->enabled)
+			disable_irq_nosync(irq->host_irq);
 
 		irq->enabled = false;
 
@@ -167,10 +195,22 @@ unsigned long vgic_mmio_read_pending(struct kvm_vcpu *vcpu,
 	for (i = 0; i < len * 8; i++) {
 		struct vgic_irq *irq = vgic_get_irq(vcpu->kvm, vcpu, intid + i);
 		unsigned long flags;
+		bool val;
 
 		raw_spin_lock_irqsave(&irq->irq_lock, flags);
-		if (irq_is_pending(irq))
-			value |= (1U << i);
+		if (irq->hw && vgic_irq_is_sgi(irq->intid)) {
+			int err;
+
+			val = false;
+			err = irq_get_irqchip_state(irq->host_irq,
+						    IRQCHIP_STATE_PENDING,
+						    &val);
+			WARN_RATELIMIT(err, "IRQ %d", irq->host_irq);
+		} else {
+			val = irq_is_pending(irq);
+		}
+
+		value |= ((u32)val << i);
 
 		raw_spin_unlock_irqrestore(&irq->irq_lock, flags);
 		vgic_put_irq(vcpu->kvm, irq);
@@ -236,6 +276,21 @@ void vgic_mmio_write_spending(struct kvm_vcpu *vcpu,
 		}
 
 		raw_spin_lock_irqsave(&irq->irq_lock, flags);
+
+		if (irq->hw && vgic_irq_is_sgi(irq->intid)) {
+			/* HW SGI? Ask the GIC to inject it */
+			int err;
+			err = irq_set_irqchip_state(irq->host_irq,
+						    IRQCHIP_STATE_PENDING,
+						    true);
+			WARN_RATELIMIT(err, "IRQ %d", irq->host_irq);
+
+			raw_spin_unlock_irqrestore(&irq->irq_lock, flags);
+			vgic_put_irq(vcpu->kvm, irq);
+
+			continue;
+		}
+
 		if (irq->hw)
 			vgic_hw_irq_spending(vcpu, irq, is_uaccess);
 		else
@@ -290,6 +345,20 @@ void vgic_mmio_write_cpending(struct kvm_vcpu *vcpu,
 
 		raw_spin_lock_irqsave(&irq->irq_lock, flags);
 
+		if (irq->hw && vgic_irq_is_sgi(irq->intid)) {
+			/* HW SGI? Ask the GIC to inject it */
+			int err;
+			err = irq_set_irqchip_state(irq->host_irq,
+						    IRQCHIP_STATE_PENDING,
+						    false);
+			WARN_RATELIMIT(err, "IRQ %d", irq->host_irq);
+
+			raw_spin_unlock_irqrestore(&irq->irq_lock, flags);
+			vgic_put_irq(vcpu->kvm, irq);
+
+			continue;
+		}
+
 		if (irq->hw)
 			vgic_hw_irq_cpending(vcpu, irq, is_uaccess);
 		else
@@ -339,8 +408,15 @@ static void vgic_mmio_change_active(struct kvm_vcpu *vcpu, struct vgic_irq *irq,
 
 	raw_spin_lock_irqsave(&irq->irq_lock, flags);
 
-	if (irq->hw) {
+	if (irq->hw && !vgic_irq_is_sgi(irq->intid)) {
 		vgic_hw_irq_change_active(vcpu, irq, active, !requester_vcpu);
+	} else if (irq->hw && vgic_irq_is_sgi(irq->intid)) {
+		/*
+		 * GICv4.1 VSGI feature doesn't track an active state,
+		 * so let's not kid ourselves, there is nothing we can
+		 * do here.
+		 */
+		irq->active = false;
 	} else {
 		u32 model = vcpu->kvm->arch.vgic.vgic_model;
 		u8 active_source;
@@ -514,6 +590,8 @@ void vgic_mmio_write_priority(struct kvm_vcpu *vcpu,
 		raw_spin_lock_irqsave(&irq->irq_lock, flags);
 		/* Narrow the priority range to what we actually support */
 		irq->priority = (val >> (i * 8)) & GENMASK(7, 8 - VGIC_PRI_BITS);
+		if (irq->hw && vgic_irq_is_sgi(irq->intid))
+			vgic_update_vsgi(irq);
 		raw_spin_unlock_irqrestore(&irq->irq_lock, flags);
 
 		vgic_put_irq(vcpu->kvm, irq);
-- 
2.20.1
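
For readers who want the shape of the change without walking the hunks, below is a minimal standalone sketch of the dispatch decision the patch applies across the vgic MMIO handlers. It is not kernel code: the struct, the helpers and their bodies are simplified stand-ins invented for illustration. It only mirrors the pattern of testing irq->hw together with vgic_irq_is_sgi() and forwarding the pending state to the host irqchip instead of touching the software pending_latch.

/*
 * Standalone model (not kernel code): the types and helpers below are
 * simplified stand-ins for the real vgic/irqchip APIs, kept only to show
 * the "HW vSGI vs. software SGI" decision used throughout the patch.
 */
#include <stdbool.h>
#include <stdio.h>

#define IRQCHIP_STATE_PENDING 0		/* stand-in for the kernel enum value */

struct vgic_irq {
	unsigned int intid;		/* interrupt ID; SGIs are 0..15 */
	bool hw;			/* backed by a GICv4.1 HW vSGI? */
	bool pending_latch;		/* software pending state */
	int host_irq;			/* host Linux IRQ number when hw is set */
};

static bool vgic_irq_is_sgi(unsigned int intid)
{
	return intid < 16;
}

/* Stand-in for irq_set_irqchip_state(): pretend to poke the GIC. */
static int irq_set_irqchip_state(int host_irq, int which, bool val)
{
	(void)which;
	printf("host IRQ %d: %s pending directly in the GIC\n",
	       host_irq, val ? "set" : "clear");
	return 0;
}

/* Stand-in for vgic_queue_irq_unlock(): queue a software interrupt. */
static void vgic_queue_irq(struct vgic_irq *irq)
{
	printf("INTID %u: queued via software pending_latch\n", irq->intid);
}

/* The decision applied to SGI injection and ISPENDR writes. */
static void inject_sgi(struct vgic_irq *irq)
{
	if (irq->hw && vgic_irq_is_sgi(irq->intid)) {
		/* HW SGI: ask the GIC to make it pending, no latch involved. */
		irq_set_irqchip_state(irq->host_irq, IRQCHIP_STATE_PENDING, true);
	} else {
		irq->pending_latch = true;
		vgic_queue_irq(irq);
	}
}

int main(void)
{
	struct vgic_irq sw_sgi = { .intid = 3 };
	struct vgic_irq hw_sgi = { .intid = 4, .hw = true, .host_irq = 42 };

	inject_sgi(&sw_sgi);	/* software path */
	inject_sgi(&hw_sgi);	/* GICv4.1 direct-injection path */
	return 0;
}

In the actual patch the HW path additionally rate-limits a warning if the irqchip call fails (WARN_RATELIMIT) and drops irq->irq_lock itself before returning, since vgic_queue_irq_unlock() is only used on the software path.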