Hi,
On GICv4.1, migration is already supported for everything except
(directly-injected) VLPIs. The GICv4.1 spec explicitly provides a way
to get a VLPI's pending state (which was crucially missing in GICv4.0),
so this series makes VLPI migration possible on GICv4.1.
To support VLPI migration, we need to save and restore all the
required configuration information and the pending state of each
VLPI. The configuration information is in fact already saved (or will
be reallocated on the destination host) as part of the vgic (KVM)
migration, so only the pending state of the VLPIs needs special
handling.
Below is the relevant part of the migration workflow.
On the save path:
At migration completion:
pause all vCPUs
|
call each VM state change handler:
pause other devices (i.e., keep them from sending interrupts;
the VFIO migration protocol already implements this [1])
|
flush ITS tables into guest RAM
|
flush RDIST pending tables (also flush VLPI pending states here)
|
...
On the resume path:
load each device's state:
restore ITS tables (including pending tables) from guest RAM
|
for each other (paused) PCI device that is configured with VLPIs,
establish the forwarding paths of its VLPIs (and transfer their
pending state from KVM's vgic to the VPT here)
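To make the two "flush" steps concrete, below is a minimal
userspace-side sketch in terms of the existing KVM device UAPI
(assumptions: its_fd and vgic_fd are fds for the
KVM_DEV_TYPE_ARM_VGIC_ITS and KVM_DEV_TYPE_ARM_VGIC_V3 devices, and
error handling is elided):

    #include <linux/kvm.h>
    #include <sys/ioctl.h>

    static int flush_irq_state_to_guest_ram(int its_fd, int vgic_fd)
    {
            struct kvm_device_attr its_attr = {
                    .group = KVM_DEV_ARM_VGIC_GRP_CTRL,
                    .attr  = KVM_DEV_ARM_ITS_SAVE_TABLES,
            };
            struct kvm_device_attr vgic_attr = {
                    .group = KVM_DEV_ARM_VGIC_GRP_CTRL,
                    .attr  = KVM_DEV_ARM_VGIC_SAVE_PENDING_TABLES,
            };
            int ret;

            /* flush ITS tables into guest RAM */
            ret = ioctl(its_fd, KVM_SET_DEVICE_ATTR, &its_attr);
            if (ret)
                    return ret;

            /* flush RDIST pending tables; with this series, this
             * also saves the VLPI pending state on GICv4.1 */
            return ioctl(vgic_fd, KVM_SET_DEVICE_ATTR, &vgic_attr);
    }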
We have tested this series with VFIO migration and found some related
issues in QEMU [2].
Links:
[1] vfio: UAPI for migration interface for device state:
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=a8a24f3f6e38103b77cf399c38eb54e1219d00d6
[2] vfio: Some fixes and optimizations for VFIO migration:
https://patchwork.ozlabs.org/project/qemu-devel/cover/[email protected]/
History:
v3 -> v4:
- Nit fixes.
- Add a CPU cache invalidation right after unmapping the vPE. (Patch 1)
- Drop the setting of PTZ altogether. (Patch 2)
- Bail out if we spot !vgic_initialized(). (Patch 4)
- Communicate the state change (clear pending_latch) via
  vgic_queue_irq_unlock. (Patch 5)
Thanks a lot for the suggestions from Marc!
v2 -> v3:
- Add the vgic initialized check to ensure that the allocation and enabling
of the doorbells have already been done before unmapping the vPEs.
- Check all get_vlpi_state-related conditions in save_pending_tables in one place.
- Nit fixes.
v1 -> v2:
- Get the VLPI state from the KVM side.
- Nit fixes.
Thanks,
Shenming
Marc Zyngier (1):
irqchip/gic-v3-its: Add a cache invalidation right after vPE unmapping
Shenming Lu (4):
irqchip/gic-v3-its: Drop the setting of PTZ altogether
KVM: arm64: GICv4.1: Add function to get VLPI state
KVM: arm64: GICv4.1: Try to save VLPI state in save_pending_tables
KVM: arm64: GICv4.1: Give a chance to save VLPI state
Zenghui Yu (1):
KVM: arm64: GICv4.1: Restore VLPI pending state to physical side
.../virt/kvm/devices/arm-vgic-its.rst | 2 +-
arch/arm64/kvm/vgic/vgic-its.c | 6 +-
arch/arm64/kvm/vgic/vgic-v3.c | 66 +++++++++++++++++--
arch/arm64/kvm/vgic/vgic-v4.c | 37 +++++++++++
arch/arm64/kvm/vgic/vgic.h | 1 +
drivers/irqchip/irq-gic-v3-its.c | 21 +++++-
6 files changed, 121 insertions(+), 12 deletions(-)
--
2.19.1
With GICv4.1, once the vPE is unmapped (which implies the invalidation
of any VPT caches associated with it), we can get the VLPI state by
peeking at the VPT. Add a function for this.
Signed-off-by: Shenming Lu <[email protected]>
---
arch/arm64/kvm/vgic/vgic-v4.c | 19 +++++++++++++++++++
arch/arm64/kvm/vgic/vgic.h | 1 +
2 files changed, 20 insertions(+)
diff --git a/arch/arm64/kvm/vgic/vgic-v4.c b/arch/arm64/kvm/vgic/vgic-v4.c
index 66508b03094f..ac029ba3d337 100644
--- a/arch/arm64/kvm/vgic/vgic-v4.c
+++ b/arch/arm64/kvm/vgic/vgic-v4.c
@@ -203,6 +203,25 @@ void vgic_v4_configure_vsgis(struct kvm *kvm)
kvm_arm_resume_guest(kvm);
}
+/*
+ * Must be called with GICv4.1 and the vPE unmapped, which
+ * indicates the invalidation of any VPT caches associated
+ * with the vPE, thus we can get the VLPI state by peeking
+ * at the VPT.
+ */
+void vgic_v4_get_vlpi_state(struct vgic_irq *irq, bool *val)
+{
+ struct its_vpe *vpe = &irq->target_vcpu->arch.vgic_cpu.vgic_v3.its_vpe;
+ int mask = BIT(irq->intid % BITS_PER_BYTE);
+ void *va;
+ u8 *ptr;
+
+ va = page_address(vpe->vpt_page);
+ ptr = va + irq->intid / BITS_PER_BYTE;
+
+ *val = !!(*ptr & mask);
+}
+
/**
* vgic_v4_init - Initialize the GICv4 data structures
* @kvm: Pointer to the VM being initialized
diff --git a/arch/arm64/kvm/vgic/vgic.h b/arch/arm64/kvm/vgic/vgic.h
index 64fcd7511110..d8cfd360838c 100644
--- a/arch/arm64/kvm/vgic/vgic.h
+++ b/arch/arm64/kvm/vgic/vgic.h
@@ -317,5 +317,6 @@ bool vgic_supports_direct_msis(struct kvm *kvm);
int vgic_v4_init(struct kvm *kvm);
void vgic_v4_teardown(struct kvm *kvm);
void vgic_v4_configure_vsgis(struct kvm *kvm);
+void vgic_v4_get_vlpi_state(struct vgic_irq *irq, bool *val);
#endif
--
2.19.1
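For illustration, the lookup above is plain bitmap arithmetic over the
VPT page; here is the same computation as a standalone (userspace)
sketch:

    #include <stdbool.h>
    #include <stdint.h>

    #define BITS_PER_BYTE 8

    /*
     * vpt points at a copy of the virtual pending table; returns
     * the pending bit for the given INTID.
     */
    static bool vpt_peek(const uint8_t *vpt, uint32_t intid)
    {
            uint8_t mask = 1U << (intid % BITS_PER_BYTE);

            return vpt[intid / BITS_PER_BYTE] & mask;
    }

For example, for INTID 8195 this reads the byte at offset 1024 and
tests bit 3.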
Before GICv4.1, we didn't have direct access to the VLPI state, so we
simply failed early when encountering any VLPI during a save. With
GICv4.1 we no longer have to return -EACCES directly. Relax the
hard-coded check and give the VLPI state a chance to be saved (while
preserving the UAPI).
Signed-off-by: Shenming Lu <[email protected]>
---
Documentation/virt/kvm/devices/arm-vgic-its.rst | 2 +-
arch/arm64/kvm/vgic/vgic-its.c | 6 +++---
2 files changed, 4 insertions(+), 4 deletions(-)
diff --git a/Documentation/virt/kvm/devices/arm-vgic-its.rst b/Documentation/virt/kvm/devices/arm-vgic-its.rst
index 6c304fd2b1b4..d257eddbae29 100644
--- a/Documentation/virt/kvm/devices/arm-vgic-its.rst
+++ b/Documentation/virt/kvm/devices/arm-vgic-its.rst
@@ -80,7 +80,7 @@ KVM_DEV_ARM_VGIC_GRP_CTRL
-EFAULT Invalid guest ram access
-EBUSY One or more VCPUS are running
-EACCES The virtual ITS is backed by a physical GICv4 ITS, and the
- state is not available
+ state is not available without GICv4.1
======= ==========================================================
KVM_DEV_ARM_VGIC_GRP_ITS_REGS
diff --git a/arch/arm64/kvm/vgic/vgic-its.c b/arch/arm64/kvm/vgic/vgic-its.c
index 40cbaca81333..ec7543a9617c 100644
--- a/arch/arm64/kvm/vgic/vgic-its.c
+++ b/arch/arm64/kvm/vgic/vgic-its.c
@@ -2218,10 +2218,10 @@ static int vgic_its_save_itt(struct vgic_its *its, struct its_device *device)
/*
* If an LPI carries the HW bit, this means that this
* interrupt is controlled by GICv4, and we do not
- * have direct access to that state. Let's simply fail
- * the save operation...
+ * have direct access to that state without GICv4.1.
+ * Let's simply fail the save operation...
*/
- if (ite->irq->hw)
+ if (ite->irq->hw && !kvm_vgic_global_state.has_gicv4_1)
return -EACCES;
ret = vgic_its_save_ite(its, device, ite, gpa, ite_esz);
--
2.19.1
From: Zenghui Yu <[email protected]>
When setting the forwarding path of a VLPI (i.e., switching to HW
mode), we can also transfer the pending state from irq->pending_latch
to the VPT. This matters especially for migration, where the pending
state of a VLPI is first restored into KVM's vgic. Note that we
currently send "INT+VSYNC" to make a VLPI pending.
Signed-off-by: Zenghui Yu <[email protected]>
Signed-off-by: Shenming Lu <[email protected]>
---
arch/arm64/kvm/vgic/vgic-v4.c | 18 ++++++++++++++++++
1 file changed, 18 insertions(+)
diff --git a/arch/arm64/kvm/vgic/vgic-v4.c b/arch/arm64/kvm/vgic/vgic-v4.c
index ac029ba3d337..3b82ab80c2f3 100644
--- a/arch/arm64/kvm/vgic/vgic-v4.c
+++ b/arch/arm64/kvm/vgic/vgic-v4.c
@@ -449,6 +449,24 @@ int kvm_vgic_v4_set_forwarding(struct kvm *kvm, int virq,
irq->host_irq = virq;
atomic_inc(&map.vpe->vlpi_count);
+ /* Transfer pending state */
+ if (irq->pending_latch) {
+ unsigned long flags;
+
+ ret = irq_set_irqchip_state(irq->host_irq,
+ IRQCHIP_STATE_PENDING,
+ irq->pending_latch);
+ WARN_RATELIMIT(ret, "IRQ %d", irq->host_irq);
+
+ /*
+ * Clear pending_latch and communicate this state
+ * change via vgic_queue_irq_unlock.
+ */
+ raw_spin_lock_irqsave(&irq->irq_lock, flags);
+ irq->pending_latch = false;
+ vgic_queue_irq_unlock(kvm, irq, flags);
+ }
+
out:
mutex_unlock(&its->its_lock);
return ret;
--
2.19.1
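For context on when this runs: kvm_vgic_v4_set_forwarding() is reached
through the irqbypass framework when the VFIO producer for the
interrupt is registered, so on the destination host the pending-state
transfer added here happens as the forwarding paths are re-established.
The arm64 consumer looks roughly like this (quoted from memory, so
treat it as a sketch):

    /* arch/arm64/kvm/arm.c (abridged) */
    int kvm_arch_irq_bypass_add_producer(struct irq_bypass_consumer *cons,
                                         struct irq_bypass_producer *prod)
    {
            struct kvm_kernel_irqfd *irqfd =
                    container_of(cons, struct kvm_kernel_irqfd, consumer);

            return kvm_vgic_v4_set_forwarding(irqfd->kvm, prod->irq,
                                              &irqfd->irq_entry);
    }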
After pausing all vCPUs and all devices capable of interrupting, we
save the state of all interrupts: besides flushing the state held in
KVM's vgic, we also try to flush the VLPI state held in the virtual
pending tables into guest RAM. This requires GICv4.1, and the vPEs
must be safely unmapped first.
As for saving VSGIs, which requires the vPEs to be mapped and might
therefore conflict with saving VLPIs: since we map the vPEs back at
the end of save_pending_tables, and both save operations require
kvm->lock to be held (so they can only happen serially), this works
fine.
Signed-off-by: Shenming Lu <[email protected]>
---
arch/arm64/kvm/vgic/vgic-v3.c | 66 +++++++++++++++++++++++++++++++----
1 file changed, 60 insertions(+), 6 deletions(-)
diff --git a/arch/arm64/kvm/vgic/vgic-v3.c b/arch/arm64/kvm/vgic/vgic-v3.c
index 52915b342351..359d4dc35264 100644
--- a/arch/arm64/kvm/vgic/vgic-v3.c
+++ b/arch/arm64/kvm/vgic/vgic-v3.c
@@ -1,6 +1,8 @@
// SPDX-License-Identifier: GPL-2.0-only
#include <linux/irqchip/arm-gic-v3.h>
+#include <linux/irq.h>
+#include <linux/irqdomain.h>
#include <linux/kvm.h>
#include <linux/kvm_host.h>
#include <kvm/arm_vgic.h>
@@ -356,6 +358,32 @@ int vgic_v3_lpi_sync_pending_status(struct kvm *kvm, struct vgic_irq *irq)
return 0;
}
+/*
+ * The deactivation of the doorbell interrupt will trigger the
+ * unmapping of the associated vPE.
+ */
+static void unmap_all_vpes(struct vgic_dist *dist)
+{
+ struct irq_desc *desc;
+ int i;
+
+ for (i = 0; i < dist->its_vm.nr_vpes; i++) {
+ desc = irq_to_desc(dist->its_vm.vpes[i]->irq);
+ irq_domain_deactivate_irq(irq_desc_get_irq_data(desc));
+ }
+}
+
+static void map_all_vpes(struct vgic_dist *dist)
+{
+ struct irq_desc *desc;
+ int i;
+
+ for (i = 0; i < dist->its_vm.nr_vpes; i++) {
+ desc = irq_to_desc(dist->its_vm.vpes[i]->irq);
+ irq_domain_activate_irq(irq_desc_get_irq_data(desc), false);
+ }
+}
+
/**
* vgic_v3_save_pending_tables - Save the pending tables into guest RAM
* kvm lock and all vcpu lock must be held
@@ -365,13 +393,28 @@ int vgic_v3_save_pending_tables(struct kvm *kvm)
struct vgic_dist *dist = &kvm->arch.vgic;
struct vgic_irq *irq;
gpa_t last_ptr = ~(gpa_t)0;
- int ret;
+ bool vlpi_avail = false;
+ int ret = 0;
u8 val;
+ if (unlikely(!vgic_initialized(kvm)))
+ return -ENXIO;
+
+ /*
+ * A preparation for getting any VLPI states.
+ * The above vgic initialized check also ensures that the allocation
+ * and enabling of the doorbells have already been done.
+ */
+ if (kvm_vgic_global_state.has_gicv4_1) {
+ unmap_all_vpes(dist);
+ vlpi_avail = true;
+ }
+
list_for_each_entry(irq, &dist->lpi_list_head, lpi_list) {
int byte_offset, bit_nr;
struct kvm_vcpu *vcpu;
gpa_t pendbase, ptr;
+ bool is_pending;
bool stored;
vcpu = irq->target_vcpu;
@@ -387,24 +430,35 @@ int vgic_v3_save_pending_tables(struct kvm *kvm)
if (ptr != last_ptr) {
ret = kvm_read_guest_lock(kvm, ptr, &val, 1);
if (ret)
- return ret;
+ goto out;
last_ptr = ptr;
}
stored = val & (1U << bit_nr);
- if (stored == irq->pending_latch)
+
+ is_pending = irq->pending_latch;
+
+ if (irq->hw && vlpi_avail)
+ vgic_v4_get_vlpi_state(irq, &is_pending);
+
+ if (stored == is_pending)
continue;
- if (irq->pending_latch)
+ if (is_pending)
val |= 1 << bit_nr;
else
val &= ~(1 << bit_nr);
ret = kvm_write_guest_lock(kvm, ptr, &val, 1);
if (ret)
- return ret;
+ goto out;
}
- return 0;
+
+out:
+ if (vlpi_avail)
+ map_all_vpes(dist);
+
+ return ret;
}
/**
--
2.19.1
On 2021-03-13 08:38, Shenming Lu wrote:
> From: Zenghui Yu <[email protected]>
>
> When setting the forwarding path of a VLPI (i.e., switching to HW
> mode), we can also transfer the pending state from irq->pending_latch
> to the VPT. This matters especially for migration, where the pending
> state of a VLPI is first restored into KVM's vgic. Note that we
> currently send "INT+VSYNC" to make a VLPI pending.
>
> Signed-off-by: Zenghui Yu <[email protected]>
> Signed-off-by: Shenming Lu <[email protected]>
> ---
> arch/arm64/kvm/vgic/vgic-v4.c | 18 ++++++++++++++++++
> 1 file changed, 18 insertions(+)
>
> diff --git a/arch/arm64/kvm/vgic/vgic-v4.c b/arch/arm64/kvm/vgic/vgic-v4.c
> index ac029ba3d337..3b82ab80c2f3 100644
> --- a/arch/arm64/kvm/vgic/vgic-v4.c
> +++ b/arch/arm64/kvm/vgic/vgic-v4.c
> @@ -449,6 +449,24 @@ int kvm_vgic_v4_set_forwarding(struct kvm *kvm, int virq,
> irq->host_irq = virq;
> atomic_inc(&map.vpe->vlpi_count);
>
> + /* Transfer pending state */
> + if (irq->pending_latch) {
> + unsigned long flags;
> +
> + ret = irq_set_irqchip_state(irq->host_irq,
> + IRQCHIP_STATE_PENDING,
> + irq->pending_latch);
> + WARN_RATELIMIT(ret, "IRQ %d", irq->host_irq);
> +
> + /*
> + * Clear pending_latch and communicate this state
> + * change via vgic_queue_irq_unlock.
> + */
> + raw_spin_lock_irqsave(&irq->irq_lock, flags);
> + irq->pending_latch = false;
> + vgic_queue_irq_unlock(kvm, irq, flags);
> + }
> +
> out:
> mutex_unlock(&its->its_lock);
> return ret;
The read side of the pending state isn't locked, but the write side is.
I'd rather you lock the whole sequence for peace of mind.
Thanks,
M.
--
Jazz is not dead. It just smells funny...
On 2021/3/15 16:30, Marc Zyngier wrote:
> On 2021-03-13 08:38, Shenming Lu wrote:
>> [...]
>
> The read side of the pending state isn't locked, but the write side is.
> I'd rather you lock the whole sequence for peace of mind.
Did you mean to lock before emitting the mapping request, or just
before reading the pending state?
Thanks,
Shenming
>
> Thanks,
>
> M.
On 2021-03-15 09:11, Shenming Lu wrote:
> On 2021/3/15 16:30, Marc Zyngier wrote:
>> On 2021-03-13 08:38, Shenming Lu wrote:
>>> [...]
>>
>> The read side of the pending state isn't locked, but the write
>> side is. I'd rather you lock the whole sequence for peace of mind.
>
> Did you mean to lock before emitting the mapping request, or just
> before reading the pending state?
Just before reading the pending state, so that we can't get a
concurrent modification of that state while we make the interrupt
pending in the VPT and clear it in the emulation.
Thanks,
M.
--
Jazz is not dead. It just smells funny...
On 2021/3/15 17:20, Marc Zyngier wrote:
> On 2021-03-15 09:11, Shenming Lu wrote:
>> On 2021/3/15 16:30, Marc Zyngier wrote:
>>> On 2021-03-13 08:38, Shenming Lu wrote:
>>>> [...]
>>>
>>> The read side of the pending state isn't locked, but the write side is.
>>> I'd rather you lock the whole sequence for peace of mind.
>>
>> Did you mean to lock before emitting the mapping request, or just
>> before reading the pending state?
>
> Just before reading the pending state, so that we can't get a
> concurrent modification of that state while we make the interrupt
> pending in the VPT and clear it in the emulation.
Get it. I will correct it right now.
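Maybe something like this (an untested sketch; it assumes "unsigned
long flags" is hoisted to the top of kvm_vgic_v4_set_forwarding()):

    /* Transfer pending state */
    raw_spin_lock_irqsave(&irq->irq_lock, flags);
    if (irq->pending_latch) {
            ret = irq_set_irqchip_state(irq->host_irq,
                                        IRQCHIP_STATE_PENDING,
                                        true);
            WARN_RATELIMIT(ret, "IRQ %d", irq->host_irq);

            /*
             * Clear pending_latch and communicate this state
             * change via vgic_queue_irq_unlock.
             */
            irq->pending_latch = false;
    }
    vgic_queue_irq_unlock(kvm, irq, flags);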
Thanks,
Shenming
>
> Thanks,
>
> M.
On 2021-03-15 09:25, Shenming Lu wrote:
> On 2021/3/15 17:20, Marc Zyngier wrote:
>> On 2021-03-15 09:11, Shenming Lu wrote:
>>> On 2021/3/15 16:30, Marc Zyngier wrote:
>>>> On 2021-03-13 08:38, Shenming Lu wrote:
>>>>> [...]
>>>>
>>>> The read side of the pending state isn't locked, but the write
>>>> side is. I'd rather you lock the whole sequence for peace of mind.
>>>
>>> Did you mean to lock before emitting the mapping request, or just
>>> before reading the pending state?
>>
>> Just before reading the pending state, so that we can't get a
>> concurrent modification of that state while we make the interrupt
>> pending in the VPT and clear it in the emulation.
>
> Get it. I will correct it right now.
Please hold off sending a new version for a few days. My inbox is
exploding...
Thanks,
M.
--
Jazz is not dead. It just smells funny...