From: Zenghui Yu <[email protected]>
When setting the forwarding path of a VLPI (i.e., switching it to HW
mode), also transfer the pending state from irq->pending_latch to the
VPT. This matters in particular for migration, where the pending states
of VLPIs are first restored into KVM's vgic and must then be pushed
into the VPT. We currently make the VLPI pending by sending an
"INT+VSYNC" command sequence.
Signed-off-by: Zenghui Yu <[email protected]>
Signed-off-by: Shenming Lu <[email protected]>
---
arch/arm64/kvm/vgic/vgic-v4.c | 12 ++++++++++++
1 file changed, 12 insertions(+)
diff --git a/arch/arm64/kvm/vgic/vgic-v4.c b/arch/arm64/kvm/vgic/vgic-v4.c
index f211a7c32704..7945d6d09cdd 100644
--- a/arch/arm64/kvm/vgic/vgic-v4.c
+++ b/arch/arm64/kvm/vgic/vgic-v4.c
@@ -454,6 +454,18 @@ int kvm_vgic_v4_set_forwarding(struct kvm *kvm, int virq,
irq->host_irq = virq;
atomic_inc(&map.vpe->vlpi_count);
+ /* Transfer pending state */
+ ret = irq_set_irqchip_state(irq->host_irq,
+ IRQCHIP_STATE_PENDING,
+ irq->pending_latch);
+ WARN_RATELIMIT(ret, "IRQ %d", irq->host_irq);
+
+ /*
+ * Let it be pruned from ap_list later and don't bother
+ * the List Register.
+ */
+ irq->pending_latch = false;
+
out:
mutex_unlock(&its->its_lock);
return ret;
--
2.19.1
On 2021-01-04 08:16, Shenming Lu wrote:
> From: Zenghui Yu <[email protected]>
>
> When setting the forwarding path of a VLPI (i.e., switching it to HW
> mode), also transfer the pending state from irq->pending_latch to the
> VPT. This matters in particular for migration, where the pending states
> of VLPIs are first restored into KVM's vgic and must then be pushed
> into the VPT. We currently make the VLPI pending by sending an
> "INT+VSYNC" command sequence.
>
> Signed-off-by: Zenghui Yu <[email protected]>
> Signed-off-by: Shenming Lu <[email protected]>
> ---
> arch/arm64/kvm/vgic/vgic-v4.c | 12 ++++++++++++
> 1 file changed, 12 insertions(+)
>
> diff --git a/arch/arm64/kvm/vgic/vgic-v4.c b/arch/arm64/kvm/vgic/vgic-v4.c
> index f211a7c32704..7945d6d09cdd 100644
> --- a/arch/arm64/kvm/vgic/vgic-v4.c
> +++ b/arch/arm64/kvm/vgic/vgic-v4.c
> @@ -454,6 +454,18 @@ int kvm_vgic_v4_set_forwarding(struct kvm *kvm, int virq,
> irq->host_irq = virq;
> atomic_inc(&map.vpe->vlpi_count);
>
> + /* Transfer pending state */
> + ret = irq_set_irqchip_state(irq->host_irq,
> + IRQCHIP_STATE_PENDING,
> + irq->pending_latch);
> + WARN_RATELIMIT(ret, "IRQ %d", irq->host_irq);
Why do this if pending_latch is 0, which is likely to be
the overwhelming case?
> +
> + /*
> + * Let it be pruned from ap_list later and don't bother
> + * the List Register.
> + */
> + irq->pending_latch = false;
What guarantees the pruning? Pruning only happens on vcpu exit,
which means we may have the same interrupt via both the LR and
the stream interface, which I don't believe is legal (it is
like having two LRs holding the same interrupt).
> +
> out:
> mutex_unlock(&its->its_lock);
> return ret;
Thanks,
M.
--
Jazz is not dead. It just smells funny...
On 2021/1/5 17:25, Marc Zyngier wrote:
> On 2021-01-04 08:16, Shenming Lu wrote:
>> From: Zenghui Yu <[email protected]>
>>
>> When setting the forwarding path of a VLPI (i.e., switching it to HW
>> mode), also transfer the pending state from irq->pending_latch to the
>> VPT. This matters in particular for migration, where the pending states
>> of VLPIs are first restored into KVM's vgic and must then be pushed
>> into the VPT. We currently make the VLPI pending by sending an
>> "INT+VSYNC" command sequence.
>>
>> Signed-off-by: Zenghui Yu <[email protected]>
>> Signed-off-by: Shenming Lu <[email protected]>
>> ---
>> arch/arm64/kvm/vgic/vgic-v4.c | 12 ++++++++++++
>> 1 file changed, 12 insertions(+)
>>
>> diff --git a/arch/arm64/kvm/vgic/vgic-v4.c b/arch/arm64/kvm/vgic/vgic-v4.c
>> index f211a7c32704..7945d6d09cdd 100644
>> --- a/arch/arm64/kvm/vgic/vgic-v4.c
>> +++ b/arch/arm64/kvm/vgic/vgic-v4.c
>> @@ -454,6 +454,18 @@ int kvm_vgic_v4_set_forwarding(struct kvm *kvm, int virq,
>> irq->host_irq = virq;
>> atomic_inc(&map.vpe->vlpi_count);
>>
>> + /* Transfer pending state */
>> + ret = irq_set_irqchip_state(irq->host_irq,
>> + IRQCHIP_STATE_PENDING,
>> + irq->pending_latch);
>> + WARN_RATELIMIT(ret, "IRQ %d", irq->host_irq);
>
> Why do this if pending_latch is 0, which is likely to be
> the overwhelming case?
Yes, there is no need to bother the irqchip when pending_latch is false;
I will guard the call on it, along the lines of the sketch below.
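Something like this (untested), only touching the irqchip state when
there is actually a pending bit to transfer:

        /* Only bother the irqchip when there is a pending bit to transfer */
        if (irq->pending_latch) {
                ret = irq_set_irqchip_state(irq->host_irq,
                                            IRQCHIP_STATE_PENDING,
                                            irq->pending_latch);
                WARN_RATELIMIT(ret, "IRQ %d", irq->host_irq);

                /*
                 * Clear pending_latch so that vgic_target_oracle() returns
                 * NULL for this interrupt and it is never inserted into a
                 * List Register; it gets pruned from the ap_list on the
                 * next vcpu exit instead.
                 */
                irq->pending_latch = false;
        }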
>
>> +
>> + /*
>> + * Let it be pruned from ap_list later and don't bother
>> + * the List Register.
>> + */
>> + irq->pending_latch = false;
>
> What guarantees the pruning? Pruning only happens on vcpu exit,
> which means we may have the same interrupt via both the LR and
> the stream interface, which I don't believe is legal (it is
> like having two LRs holding the same interrupt).
Since the irq's pending_latch is set to false here, vgic_target_oracle()
will return NULL for it, so vgic_flush_lr_state() will never populate it
into a List Register; see the simplified sketch below.
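For reference, a simplified sketch of vgic_target_oracle() (trimmed from
arch/arm64/kvm/vgic/vgic.c for illustration; not the verbatim source):

        static struct kvm_vcpu *vgic_target_oracle(struct vgic_irq *irq)
        {
                /* An active interrupt must stay on the current vcpu */
                if (irq->active)
                        return irq->vcpu ? : irq->target_vcpu;

                /*
                 * Enabled and pending: route it to the target vcpu. For an
                 * edge-triggered LPI, irq_is_pending() reduces to
                 * irq->pending_latch, so clearing the latch makes this
                 * test fail.
                 */
                if (irq->enabled && irq_is_pending(irq))
                        return irq->target_vcpu;

                /* Neither active nor pending-and-enabled: queue it nowhere */
                return NULL;
        }

With the latch cleared, vgic_flush_lr_state() skips the interrupt and
vgic_prune_ap_list() drops it from the ap_list on the next exit.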
>
>> +
>> out:
>> mutex_unlock(&its->its_lock);
>> return ret;
>
> Thanks,
>
> M.